23 February, 2022

Liquidity of a bank, liquidity crisis

 The ability of a bank to meet its current obligations; the quality that makes an asset quickly and readily convertible into cash. The most liquid item is cash itself, since it does not need to be converted; money market funds can be sold immediately for cash, and time and demand deposits can also be turned into cash quickly. Every banking company shall maintain in Bangladesh, in cash, gold or unencumbered approved securities valued at the current market price, an amount which shall not at the close of business on any day be less than such percentage of its total time and demand liabilities as Bangladesh Bank determines from time to time.

Liquidity crisis

Banks cannot afford to fail to pay their customers at any time. If that happens, a bank may face the risk of liquidation; such a failure is called a liquidity crisis. It occurs when a bank runs out of the cash needed to meet its financial obligations.

Causes of a bank's liquidity crisis

A liquidity crisis is a dangerous and risky problem for a bank, because through it the bank loses its main strength. The reasons behind a liquidity crisis are:

  • The bank gives out a large portion of long-term loans compared to its cash deposits. Although long-term loans are profitable, the bank still has to pay back depositors' money whenever they call for it.
  • A significant portion of the bank's loans is at risk of being classified, and on top of this the bank keeps giving out new loans.
  • Depositors lose faith in the bank and withdraw their money, which can push the bank into a severe liquidity crisis.
  • The deposit interest rate is lower than at other banks in the country or abroad; there is then a strong possibility of a liquidity crisis, because most customers can switch to another bank and money can be transferred from one country to another.
  • Social conflicts such as war, internal unrest, inflation or economic disaster may discourage customers from saving with the bank, which causes a liquidity crisis.

22 February, 2022

What are the different types of financial institutions?

 Different types of FIs are:

  1. Depository institutions: These are FIs that accept deposits, which represent liabilities of the deposit-accepting institution. Their income is derived from two sources: a) the income generated from the loans they make and the securities they purchase, and b) fee income. The various types of depository institutions are:
     a. Commercial banks: They provide numerous services in the financial system. These services can be classified into i) individual banking, ii) institutional banking, and iii) global banking.
     b. Credit unions: They are commonly known as cooperative societies. The purpose of a credit union is to serve its members' saving and borrowing needs.
  2. Insurance companies: They provide insurance policies, which are legally binding contracts for which the policy holder pays insurance premiums and the company promises to pay the policy holder on the occurrence of future events.
  3. Mutual funds: These are portfolios of securities, mainly stocks, bonds, and money market instruments. The investment manager actively manages the portfolio, i.e. buys and sells securities.
  4. Pension funds: A pension fund is established for the eventual payment of retirement benefits, financed by contributions from the employer. A pension is a form of employee remuneration on which the employee is not taxed until the funds are withdrawn.

21 February, 2022

Number Systems

 Based on the base value and the number of allowed digits, number systems are of many types. The four common types of Number System are: 

  1. Decimal Number System
  2. Binary Number System
  3. Octal Number System
  4. Hexadecimal Number System

1. Decimal Number System

The number system with base value 10 is termed the decimal number system. It uses 10 digits, i.e. 0-9, for the creation of numbers. Each digit in a number sits at a specific place whose place value is a power of 10. From right to left, the places are called units, tens, hundreds, thousands, and so on: units has the place value 10⁰, tens 10¹, hundreds 10², thousands 10³, etc.

For example: 10285 has place values as

(1 × 10⁴) + (0 × 10³) + (2 × 10²) + (8 × 10¹) + (5 × 10⁰)

1 × 10000 + 0 × 1000 + 2 × 100 + 8 × 10 + 5 × 1

10000 + 0 + 200 + 80 + 5

10285
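The same place-value expansion can be reproduced with a few lines of Python (a small sketch; the variable names are chosen only for illustration):

```python
# Expand a decimal number into its place values, e.g. 10285.
number = 10285
digits = [int(d) for d in str(number)]          # [1, 0, 2, 8, 5]
terms = [d * 10 ** (len(digits) - 1 - i)        # each digit times its power of 10
         for i, d in enumerate(digits)]
print(terms)        # [10000, 0, 200, 80, 5]
print(sum(terms))   # 10285
```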

2. Binary Number System

Number System with base value 2 is termed as Binary number system. It uses 2 digits i.e. 0 and 1 for the creation of numbers. The numbers formed using these two digits are termed as Binary Numbers. Binary number system is very useful in electronic devices and computer systems because it can be easily performed using just two states ON and OFF i.e. 0 and 1. 

Decimal Numbers 0-9 are represented in binary as: 0, 1, 10, 11, 100, 101, 110, 111, 1000, and 1001

Examples:

14 can be written as 1110

19 can be written as 10011

50 can be written as 110010
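These examples can be checked with Python's built-in bin() and int() functions:

```python
# Convert decimal to binary and back using Python built-ins.
for n in (14, 19, 50):
    b = bin(n)[2:]            # strip the '0b' prefix, e.g. '1110'
    print(n, "->", b, "->", int(b, 2))
# 14 -> 1110 -> 14
# 19 -> 10011 -> 19
# 50 -> 110010 -> 50
```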

3. Octal Number System

Octal Number System is one in which the base value is 8. It uses 8 digits, i.e. 0-7, for the creation of octal numbers. Octal numbers can be converted to decimal values by multiplying each digit by its place value and then adding the results. For a three-digit octal number, the place values are 8⁰, 8¹, and 8². Octal numbers are sometimes used as a compact way to write byte values, for example in UTF-8 byte escapes and Unix file permissions. 

Examples: 

(135)₁₀ can be written as (207)₈

(215)₁₀ can be written as (327)₈
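Similarly, the octal examples can be checked with Python's oct() and int() built-ins:

```python
# Convert decimal to octal and back using Python built-ins.
for n in (135, 215):
    o = oct(n)[2:]            # strip the '0o' prefix, e.g. '207'
    print(n, "->", o, "->", int(o, 8))
# 135 -> 207 -> 135
# 215 -> 327 -> 215
```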

4. Hexadecimal Number System

Number System with base value 16 is termed as Hexadecimal Number System. It uses 16 digits for the creation of its numbers. Digits from 0-9 are taken like the digits in the decimal number system but the digits from 10-15 are represented as A-F i.e. 10 is represented as A, 11 as B, 12 as C, 13 as D, 14 as E, and 15 as F. Hexadecimal Numbers are useful for handling memory address locations. 

Examples: 

(255)₁₀ can be written as (FF)₁₆

(1096)₁₀ can be written as (448)₁₆

(4090)₁₀ can be written as (FFA)₁₆
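And the hexadecimal examples can be checked with hex() and int():

```python
# Convert decimal to hexadecimal and back using Python built-ins.
for n in (255, 1096, 4090):
    h = hex(n)[2:].upper()    # strip the '0x' prefix, e.g. 'FF'
    print(n, "->", h, "->", int(h, 16))
# 255 -> FF -> 255
# 1096 -> 448 -> 1096
# 4090 -> FFA -> 4090
```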

Functions of the CPU, How to test the performance of the CPU, Multi-core processor

 The Four Primary Functions of the CPU

The CPU processes the instructions it receives by decoding them and operating on data. In processing this data, the CPU performs four basic steps:

  1. Fetch: Each instruction is stored in memory and has its own address. The processor takes this address number from the program counter, which is responsible for tracking which instructions the CPU should execute next.
  2. Decode: All programs to be executed are translated into Assembly instructions. Assembly code must be decoded into binary instructions, which are understandable to your CPU. This step is called decoding.
  3. Execute: While executing instructions, the CPU can do one of three things: Do calculations with its ALU, move data from one memory location to another, or jump to a different address.
  4. Store: After executing an instruction, the CPU must provide feedback, and the output data is written back to memory (a toy sketch of the full cycle appears below).

The number of operations a CPU can perform depends on its clock speed, measured in hertz (Hz): 1 Hz means one cycle per second, and the speed of modern computers is usually quoted in gigahertz, where 1 GHz corresponds to one billion cycles per second. “Simple tasks” are the smallest steps the processor can perform, and a single assembly instruction typically takes a few clock cycles to execute. The faster the CPU, the more instructions it can execute in one second, but don’t let this number fool you: clock speed is not the only factor that affects computer performance. To obtain meaningful comparisons, many other factors must be evaluated, such as CPU architecture, cache size, and bus speed. When buying a processor, don’t simply chase the highest speed; evaluate all of these factors.
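As a toy illustration of the fetch-decode-execute-store cycle described above, here is a minimal sketch in Python. The "instruction set", opcodes and memory layout are invented purely for illustration and do not correspond to any real CPU:

```python
# Toy fetch-decode-execute-store loop. Each "instruction" is a tuple
# (opcode, operand) stored in a dictionary that stands in for memory.
memory = {0: ("LOAD", 5), 1: ("ADD", 3), 2: ("STORE", 100), 3: ("HALT", None), 100: 0}
accumulator = 0
program_counter = 0

while True:
    opcode, operand = memory[program_counter]   # fetch (and decode) the next instruction
    program_counter += 1
    if opcode == "LOAD":                        # execute: load a value
        accumulator = operand
    elif opcode == "ADD":                       # execute: do arithmetic
        accumulator += operand
    elif opcode == "STORE":                     # store: write the result back to memory
        memory[operand] = accumulator
    elif opcode == "HALT":
        break

print(memory[100])   # 8
```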
Multi-core processor
Multi-core processors are CPUs with two or more independent cores, each similar to an ordinary processor, that execute program instructions. The main advantage of a multi-core processor is that it can run multiple instructions at the same time. This greatly improves performance, and any program written with parallel computing in mind can take advantage of a multi-core processor.
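As a rough sketch of how software can spread work across several cores, Python's standard multiprocessing module can be used; the workload below is arbitrary and only meant to illustrate parallel execution, not how the hardware schedules instructions:

```python
from multiprocessing import Pool

def heavy(n):
    # A CPU-bound task: sum of squares up to n.
    return sum(i * i for i in range(n))

if __name__ == "__main__":
    # Pool() starts one worker process per available core by default,
    # so the four tasks below can run on different cores at the same time.
    with Pool() as pool:
        results = pool.map(heavy, [10**6] * 4)
    print(results)
```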
How to test the performance of the CPU?
Various benchmarks and tools can be used to test CPU performance. These tools put a heavy workload on the CPU. However, since the overall performance of a computer involves multiple components (CPU, RAM, video processor, etc.), it is important to use benchmarks that evaluate all of these components together.
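Dedicated benchmark suites exercise many components at once; as a minimal sketch, Python's standard timeit module can at least time a CPU-bound workload, which gives a crude, single-component comparison between machines (the workload below is chosen arbitrarily):

```python
import timeit

# Time a CPU-bound statement 100 times; a lower total time means faster
# execution of this particular workload on this particular machine.
cpu_bound = "sum(i * i for i in range(100_000))"
seconds = timeit.timeit(cpu_bound, number=100)
print(f"100 runs took {seconds:.2f} s")
```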

Features of Microprocessor

  • Low Cost - Due to integrated circuit technology, microprocessors are available at very low cost, which reduces the cost of a computer system.
  • High Speed - Due to the technology involved, a microprocessor can work at very high speed and can execute millions of instructions per second.
  • Small Size - A microprocessor is fabricated in a very small footprint thanks to very-large-scale and ultra-large-scale integration technology, which reduces the size of the computer system.
  • Versatile - The same chip can be used for several applications; therefore, microprocessors are versatile.
  • Low Power Consumption - Microprocessors use metal-oxide-semiconductor technology, which consumes less power.
  • Less Heat Generation - Microprocessors use semiconductor technology, which emits much less heat than vacuum-tube devices.
  • Reliable - Since microprocessors use semiconductor technology, the failure rate is very low; hence they are very reliable.
  • Portable - Due to their small size and low power consumption, microprocessors are portable.

Microprocessor, concept and architecture of a microcomputer, Block Diagram of Microcomputer

 The concept and architecture of a microcomputer

A microcomputer is a computer built around a microprocessor, i.e. a processor implemented as an integrated circuit. Since all processors are now produced in the form of integrated circuits, we can say that all computers are microcomputers. The general method for constructing a microcomputer consists of connecting additional sub-systems, such as memories and peripheral-device controllers (input/output units), to the microprocessor's buses.

The basic block diagram of a simple microcomputer is shown in the figure below. It shows a microprocessor with its three buses going out: the data bus, the address bus and the control bus. To these buses the following devices are connected: operational memory composed of RAM (Random Access Memory) and ROM (Read Only Memory), as well as input/output units to which peripheral devices are connected.



The central processing unit (CPU) is the primary component of any digital computer system, consisting of the main memory, the control unit, and the arithmetic-logic unit. It is the physical heart of the entire computer system, to which various peripheral equipment, such as input/output devices and auxiliary storage units, are connected. The CPU in modern computers is housed on an integrated circuit chip known as a microprocessor.

A microprocessor is a small electronic device that contains the arithmetic, logic, and control circuitry required to perform the functions of a digital computer’s central processing unit. In practice, this type of integrated circuit is capable of interpreting and executing program instructions in addition to performing arithmetic operations.

The central processing unit’s control unit regulates and integrates the computer’s operations. It selects and retrieves instructions from the main memory in the correct sequence and interprets them so that the other functional elements of the system can perform their respective operations at the appropriate time. All input data are transferred via main memory to the arithmetic-logic unit for processing, which includes the four basic arithmetic functions (addition, subtraction, multiplication, and division) as well as certain logic operations such as data comparison and selection of the desired problem-solving procedure or a viable alternative based on predetermined decision criteria.

The Central Processing Unit (CPU) has the following characteristics:

  • The CPU is regarded as the computer’s brain.
  • The CPU is responsible for all data processing operations.
  • It saves information such as data, intermediate results, and instructions (program).
  • It directs the operation of all computer components.

The CPU itself is made up of the three components listed below.

  • Memory or Storage Unit
  • Control Unit
  • Arithmetic Logic Unit
Memory or Storage Unit

This unit has the capability of storing instructions, data, and intermediate results. When necessary, this unit sends data to other computer units. It is also referred to as an internal storage unit, main memory, primary storage, or Random Access Memory (RAM). Its size has an impact on its speed, power, and capability. In a computer, there are two types of memories: primary memory and secondary memory. The memory unit’s functions are as follows:

  • It saves all of the data and instructions needed for processing.
  • It saves intermediate processing results.
  • It saves the final results of processing before they are sent to an output device.
  • The main memory is where all inputs and outputs are routed.

The Control Unit

This unit manages the operations of all computer components but does not perform any actual data processing. To function properly, all CPU components must be synchronized. The control unit performs this function at a rate determined by the clock speed and is in charge of directing the operations of the other units through the use of timing signals that run throughout the CPU.

This unit’s functions are as follows:

  • It is in charge of controlling the transfer of data and instructions among the various components of a computer.
  • It manages and coordinates all of the computer’s units.
  • It reads instructions from memory, interprets them, and directs the computer’s operation.
  • It communicates with Input/Output devices to transfer data.
  • It neither processes nor stores data.

Arithmetic Logic Unit

This unit is divided into two subsections: the arithmetic section and the logic section.

Arithmetic Unit
The arithmetic unit’s function is to perform arithmetic operations such as addition, subtraction, multiplication, and division. All complex operations are carried out by repeatedly performing the aforementioned operations.

Logic Unit
The logic unit’s function is to perform logic operations on data such as comparing, selecting, matching, and merging.

The arithmetic logic unit (ALU) is responsible for the computer’s arithmetic and logical functions. The input data is held in the A and B registers, and the result of the operation is received in the accumulator. The instruction register stores the instruction that the ALU will execute.

When adding two numbers, for example, one is placed in the A register and the other in the B register. The addition is performed by the ALU, and the result is stored in the accumulator. The data to be compared is placed into the input registers if the operation is logical. The comparison result, a 1 or 0, is stored in the accumulator. The accumulator content is then placed into the cache location reserved by the program for the result, whether it is a logical or arithmetic operation.

The ALU also performs address calculations. In this case the result is a memory address, which is used to determine the location from which the next instructions will be loaded; the outcome is stored in the instruction pointer register.
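As a software toy (not a hardware model), the A-register/B-register/accumulator flow described above might be sketched like this; the class and method names are invented for illustration:

```python
# Toy ALU: operands are placed in the A and B registers,
# and the result of each operation lands in the accumulator.
class ToyALU:
    def __init__(self):
        self.a = 0            # A register
        self.b = 0            # B register
        self.accumulator = 0  # result register

    def add(self):
        self.accumulator = self.a + self.b

    def compare_equal(self):
        # Logical operation: the result is 1 or 0, as described in the text.
        self.accumulator = 1 if self.a == self.b else 0

alu = ToyALU()
alu.a, alu.b = 7, 5
alu.add()
print(alu.accumulator)   # 12
alu.compare_equal()
print(alu.accumulator)   # 0
```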

Instruction register and pointer

The instruction pointer identifies the memory location in which the CPU will execute the next instruction. When the current instruction is completed, the CPU loads the next instruction into the instruction register from the memory location specified by the instruction pointer.

Cache

The CPU never has direct access to RAM. Modern CPUs have one or more cache layers. The CPU’s calculation speed is much faster than the RAM’s ability to feed data to the CPU.

Cache memory is faster than system RAM and, because it is located on the processor chip, it is closer to the CPU. The cache stores data and instructions to keep the CPU from having to wait for data to be retrieved from RAM. When the CPU requires data—and program instructions are considered data—the cache checks to see if the data is already in residence and returns it to the CPU.

If the requested data is not in the cache, it is retrieved from RAM, and predictive algorithms are used to move additional data from RAM into the cache at the same time. The cache controller analyses the requested data, attempts to predict what additional data from RAM will be needed, and loads that data into the cache. By keeping some data closer to the CPU in a cache that is faster than RAM, the CPU can stay busy instead of wasting cycles waiting for data.

Our simple CPU has three cache levels. Levels 2 and 3 are intended to predict what data and program instructions will be required next, and to move that data from RAM to a location closer to the CPU so that it is ready when needed. These cache sizes typically range from 1 MB to 32 MB, depending on the processor’s speed and intended use.
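The check-the-cache-first behaviour described above can be sketched as a software analogy (real hardware caches work at the level of cache lines and are managed by the cache controller, not by code like this):

```python
# Software analogy of a cache lookup: check the cache first,
# otherwise fetch from "RAM" and keep a copy in the cache.
ram = {addr: addr * 2 for addr in range(1024)}   # pretend main memory
cache = {}                                       # much smaller, much faster store

def read(addr):
    if addr in cache:        # cache hit: no trip to RAM needed
        return cache[addr]
    value = ram[addr]        # cache miss: fetch from RAM...
    cache[addr] = value      # ...and keep a copy close to the "CPU"
    return value

print(read(42))   # miss on the first call
print(read(42))   # hit on the second call
```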




Evolution of Microprocessors, generations of Microprocessors

 Microprocessors can be categorized by generation, which roughly corresponds to their word size (the number of bits they process at once):

First Generation (4-bit Microprocessors)

The first generation microprocessors were introduced in the year 1971-1972 by Intel Corporation. It was named Intel 4004 since it was a 4-bit processor.

It was a processor on a single chip. It could perform simple arithmetic and logical operations such as addition, subtraction, Boolean OR and Boolean AND.

It had a control unit capable of performing control functions such as fetching an instruction from storage memory, decoding it, and then generating control pulses to execute it.

Second Generation (8-bit Microprocessors)

The second generation of microprocessors was introduced in 1973, again by Intel. The Intel 8008 was the first 8-bit microprocessor, able to perform arithmetic and logic operations on 8-bit words; an improved version, the Intel 8080, followed.

Third Generation (16-bit Microprocessors)

The third generation of microprocessors, introduced from 1978 onwards, was represented by Intel's 8086 and 80286 and Zilog's Z8000, 16-bit processors with performance comparable to minicomputers.

Fourth Generation (32-bit Microprocessors)

Several different companies introduced the 32-bit microprocessors, but the most popular one is the Intel 80386.

Fifth Generation (64-bit Microprocessors)

From 1995 to the present we have been in the fifth generation. After the 80486, Intel came out with a new processor, namely the Pentium processor, followed by the Pentium Pro CPU, which allows multiple CPUs in a single system to achieve multiprocessing.

Other improved 64-bit processors include the Celeron and dual-core, quad-core and octa-core processors.

Computer Memories, Primary memory/Main memory, auxiliary memory/Secondary memory, difference between Primary memory and Secondary memory

 Computer memories store data and instructions. The memory system can be divided into four categories:

  • CPU register
  • Cache memory
  • Primary / Main memory
  • Secondary Memory / Mass Storage

They can be represented in hierarchical form, with the fastest and smallest at the top and the slowest and largest at the bottom:

  CPU registers → cache memory → primary/main memory → secondary memory/mass storage

1. Primary / Main memory:
Primary memory is the computer memory that is directly accessible by the CPU. It consists of DRAM and provides the actual working space for the processor. It holds the data and instructions that the processor is currently working on.

2. Secondary Memory / Mass Storage:
The contents of secondary memory are first transferred to primary memory and then accessed by the processor; this is because the processor does not interact with secondary memory directly.

Now, let's see the difference between primary memory and secondary memory:

  1. Primary memory is temporary; secondary memory is permanent.
  2. Primary memory is directly accessible by the processor/CPU; secondary memory is not directly accessible by the CPU.
  3. Primary memory can be volatile (RAM) or non-volatile (ROM); secondary memory is always non-volatile in nature.
  4. Primary memory devices are more expensive than secondary storage devices; secondary memory devices are less expensive.
  5. Primary memory uses semiconductor memories; secondary memory uses magnetic and optical memories.
  6. Primary memory is also known as main memory or internal memory; secondary memory is also known as external memory or auxiliary memory.
  7. Examples of primary memory: RAM, ROM, cache memory, PROM, EPROM, registers, etc. Examples of secondary memory: hard disk, floppy disk, magnetic tapes, etc.


What is a Compiler and Interpreter? Difference Between Compiler and Interpreter

 A compiler is a program that translates a source program written in some high-level programming language (such as Java) into machine code for some computer architecture (such as the Intel Pentium architecture). The generated machine code can later be executed many times, against different data each time.

An interpreter reads an executable source program written in a high-level programming language as well as data for this program, and it runs the program against the data to produce some results. One example is the Unix shell interpreter, which runs operating system commands interactively.

Note that both interpreters and compilers (like any other program) are written in some high-level programming language (which may be different from the language they accept) and are themselves translated into machine code. For example, a Java interpreter can be written entirely in C, or even in Java. The interpreter's source program is machine independent, since the interpreter does not generate machine code. (Note the difference between generating machine code and being translated into machine code.)

An interpreter is generally slower than a compiler because it processes and interprets each statement as many times as that statement is evaluated. For example, when a for-loop is interpreted, the statements inside the loop body are analyzed and evaluated on every iteration.

Some languages, such as Java and Lisp, come with both an interpreter and a compiler. Java source programs (Java classes with the .java extension) are translated by the javac compiler into byte-code files (with the .class extension). The Java interpreter, called the Java Virtual Machine (JVM), may interpret the byte codes directly or may internally compile them to machine code and then execute that code (JIT: just-in-time compilation).



What is an Interpreter?

An interpreter is a computer program that converts each high-level program statement into machine code as the program runs. This includes source code, pre-compiled code, and scripts. Both compilers and interpreters do the same job, converting a higher-level programming language into machine code, but a compiler converts the code into machine code (creating an executable) before the program runs, whereas an interpreter converts code into machine code while the program is running.
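To make the line-by-line idea concrete, here is a minimal sketch of an interpreter in Python for an invented two-statement toy language; it decodes and executes each statement only when that statement is reached at run time:

```python
# A toy interpreter: each line is decoded and executed at run time.
program = [
    "print 7",
    "add 2 3",
]

for line in program:
    parts = line.split()
    if parts[0] == "print":
        print(int(parts[1]))
    elif parts[0] == "add":
        print(int(parts[1]) + int(parts[2]))
    else:
        raise SyntaxError(f"unknown statement: {line!r}")
```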

Difference Between Compiler and Interpreter

Programming steps
  • Compiler: create the program; the compiler parses and analyses all of the language statements for correctness and throws an error if anything is incorrect; if there are no errors, it converts the source code into machine code; it links the different code files into a runnable program (known as an exe); then run the program.
  • Interpreter: create the program; there is no linking of files or machine-code generation; source statements are executed line by line during execution.

Advantage
  • Compiler: the program code is already translated into machine code, so its execution time is shorter.
  • Interpreter: interpreters are easier to use, especially for beginners.

Disadvantage
  • Compiler: you can't change the program without going back to the source code.
  • Interpreter: interpreted programs can only run on computers that have the corresponding interpreter.

Machine code
  • Compiler: stores machine language as machine code on the disk.
  • Interpreter: does not save machine code at all.

Running time
  • Compiler: compiled code runs faster.
  • Interpreter: interpreted code runs slower.

Model
  • Compiler: based on the language translation linking-loading model.
  • Interpreter: based on the interpretation method.

Program generation
  • Compiler: generates an output program (in the form of an exe) which can be run independently of the original program.
  • Interpreter: does not generate an output program, so the source program is evaluated every time it is executed.

Execution
  • Compiler: program execution is separate from compilation; it happens only after the entire output program has been compiled.
  • Interpreter: program execution is part of the interpretation process, so it is performed line by line.

Memory requirement
  • Compiler: the target program executes independently and does not require the compiler in memory.
  • Interpreter: the interpreter must be in memory during interpretation.

Best suited for
  • Compiler: programs bound to a specific target machine that do not need to be ported; C and C++ are the most popular languages that use the compilation model.
  • Interpreter: web environments, where load times are important; because of the exhaustive analysis a compiler performs, it takes relatively long to compile even small pieces of code that may not be run many times, so interpreters are better in such cases.

Code optimization
  • Compiler: sees the entire code up front, so it can perform many optimizations that make the code run faster.
  • Interpreter: sees the code line by line, so its optimizations are not as strong as a compiler's.

Dynamic typing
  • Compiler: difficult to implement, because a compiler cannot predict what happens at run time.
  • Interpreter: interpreted languages support dynamic typing.

Usage
  • Compiler: best suited for the production environment.
  • Interpreter: best suited for programming and development environments.

Error execution
  • Compiler: displays all errors and warnings at compilation time; the program cannot be run until the errors are fixed.
  • Interpreter: reads a single statement and shows its error, if any; the error must be corrected before the next line is interpreted.

Input
  • Compiler: takes an entire program.
  • Interpreter: takes a single line of code.

Output
  • Compiler: generates intermediate machine code.
  • Interpreter: never generates any intermediate machine code.

Errors
  • Compiler: displays all errors together, after compilation.
  • Interpreter: displays the errors of each line one by one.

Related programming languages
  • Compiler: C, C++, C#, Scala and Java all use compilers.
  • Interpreter: PHP, Perl and Ruby use an interpreter.

Super Computer

 A supercomputer is a computer with a high level of performance compared to a general-purpose computer. Performance of a supercomputer is measured in floating-point operations per second (FLOPS) instead of million instructions per second (MIPS). Supercomputers contain tens of thousands of processors and can perform billions and trillions of calculations or computations per second. Some supercomputers can perform up to a hundred quadrillion FLOPS. Since information moves quickly between processors in a supercomputer (compared to distributed computing systems) they are ideal for real-time applications.

Supercomputers are used for data-intensive and computation-heavy scientific and engineering purposes such as quantum mechanics, weather forecasting, oil and gas exploration, molecular modeling, physical simulations, aerodynamics, nuclear fusion research and cryptanalysis. Early operating systems were custom made for each supercomputer to increase its speed. In recent years, supercomputer architecture has moved away from proprietary, in-house operating systems to Linux. Although most supercomputers use a Linux-based operating system, each manufacturer optimizes its own Linux derivative for peak hardware performance. In 2017, half of the world’s top 50 supercomputers used SUSE Linux Enterprise Server.

The largest, most powerful supercomputers are actually multiple computers that perform parallel processing. Today, many academic and scientific research firms, engineering companies and large enterprises that require massive processing power are using cloud computing instead of supercomputers. High performance computing (HPC) via the cloud is more affordable, scalable and faster to upgrade than on-premises supercomputers. Cloud-based HPC architectures can expand, adapt and shrink as business needs demand. SUSE Linux Enterprise High Performance Computing allows organizations to leverage their existing hardware for HPC computations and data-intensive operations.

Number Conversion, binary to decimal, decimal to binary, binary to hexadecimal

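The conversions named in the title can be sketched in a few lines of Python. The helper functions below are illustrative implementations of the standard algorithms: repeated division by 2 for decimal to binary, place-value sums for binary to decimal, and grouping bits into 4-bit nibbles for binary to hexadecimal:

```python
def decimal_to_binary(n):
    """Repeatedly divide by 2 and collect the remainders."""
    bits = ""
    while n > 0:
        bits = str(n % 2) + bits
        n //= 2
    return bits or "0"

def binary_to_decimal(bits):
    """Multiply each bit by its power of 2 and add the results."""
    return sum(int(b) * 2 ** i for i, b in enumerate(reversed(bits)))

def binary_to_hexadecimal(bits):
    """Group the bits into 4-bit nibbles (from the right) and map each to a hex digit."""
    digits = "0123456789ABCDEF"
    bits = bits.zfill((len(bits) + 3) // 4 * 4)   # pad on the left to a multiple of 4
    return "".join(digits[int(bits[i:i + 4], 2)] for i in range(0, len(bits), 4))

print(decimal_to_binary(50))            # 110010
print(binary_to_decimal("110010"))      # 50
print(binary_to_hexadecimal("110010"))  # 32
```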

Types of Computer Languages with Their Advantages and Disadvantages, Machine Language, Assembly Language, High-Level Languages

 Different kinds of languages have been developed to perform different types of work on the computer. Basically, languages can be divided into two categories according to how the computer understands them.

Two Basic Types of Computer Language

  • Low-Level Languages: A language that corresponds directly to a specific machine
  • High-Level Languages: Any language that is independent of the machine

There are also other types of languages, which include

  • System languages: These are designed for low-level tasks, like memory and process management
  • Scripting languages: These tend to be high-level and very powerful
  • Domain-specific languages: These are only used in very specific contexts
  • Visual languages: Languages that are not text-based
  • Esoteric languages: Languages that are jokes or are not intended for serious use

Low-level computer languages are either machine codes or are very close to them. A computer cannot understand instructions given to it in high-level languages or in English. It can only understand and execute instructions given in the form of machine language, i.e. binary. There are two types of low-level languages:

  • Machine Language: a language that is directly understood and executed by the hardware
  • Assembly Language: a slightly more user-friendly language that directly corresponds to machine language

Machine Language

Machine language is the lowest and most elementary level of programming language and was the first type of programming language to be developed. Machine language is basically the only language that a computer can understand and it is usually written in hex.

In fact, a manufacturer designs a computer to obey just one language, its machine code, which is represented inside the computer by a string of binary digits (bits) 0 and 1. The symbol 0 stands for the absence of an electric pulse and the 1 stands for the presence of an electric pulse. Since a computer is capable of recognizing electric signals, it understands machine language.

Advantages

  • Machine language makes fast and efficient use of the computer (high speed).
  • It requires no translator to translate the code; it is directly understood by the computer (translation free).

Disadvantages

  • All operation codes have to be remembered.
  • All memory addresses have to be remembered.
  • It is hard to amend or find errors in a program written in machine language.
  • Machine dependent: a program written in machine language for one system does not execute on another system.
  • Complex language for programming: since machine language consists only of sequences of 0s and 1s, it is very difficult for a programmer to remember and write each instruction.
  • Error prone: everything must be expressed as sequences of 0s and 1s, which is a very tedious task, so errors occur frequently.
  • Time consuming: writing programs in machine language is a time-consuming process.

Assembly Language

Assembly language was developed to overcome some of the many inconveniences of machine language. This is another low-level but very important language in which operation codes and operands are given in the form of alphanumeric symbols instead of 0’s and 1’s.

These alphanumeric symbols are known as mnemonic codes and are usually combinations of up to five letters, e.g. ADD for addition, SUB for subtraction, START, LABEL, etc. Because of this feature, assembly language is also known as a 'Symbolic Programming Language'.

This language is also very difficult and needs a lot of practice to master, because it provides only limited English-like support. Assembly language is mostly used for low-level, hardware-oriented programming. The instructions of the assembly language are converted to machine code by a language translator (an assembler) and then executed by the computer.
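To make the mnemonic-to-machine-code idea concrete, here is a toy "assembler" sketch in Python; the mnemonics and opcode values are invented for illustration and do not correspond to any real instruction set:

```python
# Toy assembler: translate mnemonic instructions into numeric opcodes.
OPCODES = {"LOAD": 0x01, "ADD": 0x02, "SUB": 0x03, "STORE": 0x04}

def assemble(lines):
    machine_code = []
    for line in lines:
        mnemonic, operand = line.split()
        machine_code.append((OPCODES[mnemonic], int(operand)))
    return machine_code

program = ["LOAD 5", "ADD 3", "STORE 100"]
print(assemble(program))   # [(1, 5), (2, 3), (4, 100)]
```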

Advantages

  • Assembly language is easier to understand and use than machine language.
  • It is easy to locate and correct errors.
  • It is easily modified.

Disadvantages

  • Like machine language, it is machine dependent/specific.
  • Since it is machine dependent, the programmer also needs to understand the hardware.


High-Level Languages

High-level computer languages use formats that are similar to English. The purpose of developing high-level languages was to enable people to write programs easily, in their own native language environment (English).

High-level languages are basically symbolic languages that use English words and/or mathematical symbols rather than mnemonic codes. Each instruction in the high-level language is translated into many machine language instructions that the computer can understand.

Advantages

  • High-level languages are user-friendly.
  • They are similar to English and use English vocabulary and well-known symbols.
  • They are easier to learn.
  • They are easier to maintain.
  • They are problem-oriented rather than machine-based.
  • A program written in a high-level language can be translated into many machine languages and can run on any computer for which there exists an appropriate translator.
  • The language is independent of the machine on which it is used, i.e. programs developed in a high-level language can be run on any computer.

Disadvantages

  • A high-level language has to be translated into machine language by a translator, which takes up time.
  • The object code generated by a translator might be inefficient compared to an equivalent assembly language program.

Types of High-Level Languages

Many languages have been developed for achieving a variety of different tasks. Some are fairly specialized, and others are quite general.

These languages, categorized according to their use, are:

1) Algebraic Formula-Type Processing

These languages are oriented towards the computational procedures for solving mathematical and statistical problems.

Examples include:

  • BASIC (Beginners All Purpose Symbolic Instruction Code)
  • FORTRAN (Formula Translation)
  • PL/I (Programming Language, Version 1)
  • ALGOL (Algorithmic Language)
  • APL (A Programming Language)