
21 February, 2022

Features of Microprocessor

  • Low Cost - Thanks to integrated circuit technology, microprocessors are available at very low cost, which reduces the cost of computer systems.
  • High Speed - Due to the technology involved, a microprocessor can work at very high speed, executing millions of instructions per second.
  • Small Size - A microprocessor is fabricated in a very small footprint thanks to very-large-scale and ultra-large-scale integration technology. Because of this, the size of the computer system is reduced.
  • Versatile - The same chip can be used for many applications, so microprocessors are versatile.
  • Low Power Consumption - Microprocessors use metal-oxide-semiconductor technology, which consumes little power.
  • Less Heat Generation - Microprocessors use semiconductor technology, which emits far less heat than vacuum-tube devices.
  • Reliable - Since microprocessors use semiconductor technology, the failure rate is very low, so they are highly reliable.
  • Portable - Due to their small size and low power consumption, microprocessors are portable.

Microprocessor, concept and architecture of a microcomputer, Block Diagram of Microcomputer

The concept and architecture of a microcomputer

A microcomputer is a computer built around a microprocessor, i.e. a processor implemented as an integrated circuit. Since all processors are now produced as integrated circuits, we can say that all computers are microcomputers. The general method for constructing a microcomputer consists of connecting additional sub-systems, such as memories and peripheral device controllers (input/output units), to the microprocessor's buses.

The basic block diagram of a simple microcomputer is shown in the figure below. It shows a microprocessor with its three buses going out: the data bus, the address bus and the control bus. To these buses the following devices are connected: operational memory, composed of RAM (Random Access Memory) and ROM (Read Only Memory), as well as the input/output units to which peripheral devices are connected.



The central processing unit (CPU) is the primary component of any digital computer system, consisting of the main memory, the control unit, and the arithmetic-logic unit. It is the physical heart of the entire computer system, to which various peripheral equipment, such as input/output devices and auxiliary storage units, are connected. The CPU in modern computers is housed on an integrated circuit chip known as a microprocessor.

A microprocessor is a small electronic device that contains the arithmetic, logic, and control circuitry required to perform the functions of a digital computer’s central processing unit. In practice, this type of integrated circuit is capable of interpreting and executing program instructions in addition to performing arithmetic operations.

The central processing unit’s control unit regulates and integrates the computer’s operations. It selects and retrieves instructions from the main memory in the correct sequence and interprets them so that the other functional elements of the system can perform their respective operations at the appropriate time. All input data are transferred via main memory to the arithmetic-logic unit for processing, which includes the four basic arithmetic functions (addition, subtraction, multiplication, and division) as well as certain logic operations such as data comparison and selection of the desired problem-solving procedure or a viable alternative based on predetermined decision criteria.

The Central Processing Unit (CPU) has the following characteristics:

  • The CPU is regarded as the computer’s brain.
  • The CPU is responsible for all data processing operations.
  • It saves information such as data, intermediate results, and instructions (program).
  • It directs the operation of all computer components.

The CPU itself is made up of the three components listed below.

  • Memory or Storage Unit
  • Control Unit
  • Arithmetic Logic Unit
Memory or Storage Unit

This unit has the capability of storing instructions, data, and intermediate results. When necessary, this unit sends data to other computer units. It is also referred to as an internal storage unit, main memory, primary storage, or Random Access Memory (RAM). Its size has an impact on its speed, power, and capability. In a computer, there are two types of memories: primary memory and secondary memory. The memory unit’s functions are as follows:

  • It saves all of the data and instructions needed for processing.
  • It saves intermediate processing results.
  • It saves the final results of processing before they are sent to an output device.
  • The main memory is where all inputs and outputs are routed.

The Control Unit

This unit manages the operations of all computer components but does not perform any actual data processing. To function properly, all CPU components must be synchronized. The control unit performs this function at a rate determined by the clock speed and is in charge of directing the operations of the other units through the use of timing signals that run throughout the CPU.

This unit’s functions are as follows:

  • It is in charge of controlling the transfer of data and instructions among the various components of a computer.
  • It manages and coordinates all of the computer’s units.
  • It reads instructions from memory, interprets them, and directs the computer’s operation.
  • It communicates with Input/Output devices to transfer data.
  • It neither processes nor stores data.

Arithmetic Logic Unit

This unit is divided into two subsections: the arithmetic section and the logic section.

Arithmetic Unit
The arithmetic unit’s function is to perform arithmetic operations such as addition, subtraction, multiplication, and division. All complex operations are carried out by repeatedly performing the aforementioned operations.

Logic Unit
The logic unit’s function is to perform logic operations on data such as comparing, selecting, matching, and merging.

The arithmetic logic unit (ALU) is responsible for the computer’s arithmetic and logical functions. The input data is held in the A and B registers, and the result of the operation is received in the accumulator. The instruction register stores the instruction that the ALU will execute.

When adding two numbers, for example, one is placed in the A register and the other in the B register. The ALU performs the addition and places the result in the accumulator. If the operation is logical, the data to be compared is placed into the input registers, and the comparison result, a 1 or a 0, goes to the accumulator. Whether the operation is logical or arithmetic, the accumulator content is then copied to the memory location reserved by the program for the result.

The ALU also performs a third kind of operation: address calculation. Here the result is a memory address, used to determine a new location from which to begin loading instructions, and the outcome is stored in the instruction pointer register.

Instruction register and pointer

The instruction pointer identifies the memory location in which the CPU will execute the next instruction. When the current instruction is completed, the CPU loads the next instruction into the instruction register from the memory location specified by the instruction pointer.
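
To make the fetch-decode-execute cycle concrete, here is a minimal, purely illustrative Python sketch of a toy CPU. The instruction set, register names and program are invented for this example; a real processor decodes binary opcodes rather than tuples, but the interplay of the instruction pointer, instruction register, A and B registers and accumulator follows the description above.

```python
# A toy fetch-decode-execute loop. The instruction set here is invented
# purely for illustration; a real CPU decodes binary opcodes, not tuples.
memory = [
    ("LOAD_A", 7),     # put 7 into register A
    ("LOAD_B", 5),     # put 5 into register B
    ("ADD", None),     # ALU: accumulator = A + B
    ("STORE", 100),    # copy the accumulator into memory cell 100
    ("HALT", None),
] + [0] * 96           # the rest of memory, including cell 100

a = b = accumulator = 0
instruction_pointer = 0                  # address of the next instruction

while True:
    instruction_register = memory[instruction_pointer]   # fetch
    instruction_pointer += 1                             # advance the pointer
    opcode, operand = instruction_register               # decode
    if opcode == "LOAD_A":
        a = operand
    elif opcode == "LOAD_B":
        b = operand
    elif opcode == "ADD":                                # execute (the ALU's job)
        accumulator = a + b
    elif opcode == "STORE":
        memory[operand] = accumulator
    elif opcode == "HALT":
        break

print(memory[100])   # 12
```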

Cache

The CPU never accesses RAM directly; modern CPUs have one or more layers of cache in between, because the CPU can calculate much faster than RAM can feed it data.

Cache memory is faster than system RAM and, because it is located on the processor chip, it is closer to the CPU. The cache stores data and instructions so that the CPU does not have to wait for them to be retrieved from RAM. When the CPU requires data (and program instructions are considered data, too), the cache checks whether the data is already resident and, if so, returns it to the CPU.

If the requested data is not in the cache, it is retrieved from RAM, and the cache controller uses predictive algorithms to move additional data from RAM into the cache as well. The controller analyses the requested data, attempts to predict what further data will be required, and loads that expected data into the cache. By keeping some data closer to the CPU in a faster-than-RAM cache, the CPU can stay busy and avoid wasting cycles waiting for data.

A typical CPU has three cache levels. Levels 2 and 3 are intended to predict what data and program instructions will be required next and to move that data from RAM closer to the CPU, so that it is ready when needed. Cache sizes typically range from 1 MB to 32 MB, depending on the processor’s speed and intended use.
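
The lookup-then-prefetch behaviour described above can be sketched in a few lines of Python. The block size, capacity and least-recently-used eviction policy below are simplifications chosen for the example, not a model of any particular processor.

```python
from collections import OrderedDict

RAM = {addr: f"data@{addr}" for addr in range(1024)}  # stand-in main memory
CACHE_CAPACITY = 8
BLOCK = 4               # on a miss, also prefetch this many neighbouring words
cache = OrderedDict()   # ordered, so the least recently used entry is first

def read(addr):
    if addr in cache:                      # cache hit: no trip to RAM needed
        cache.move_to_end(addr)
        return cache[addr]
    # Cache miss: fetch the word plus its neighbours, predicting that the
    # CPU will want them next (a crude stand-in for the cache controller's
    # predictive algorithms).
    for a in range(addr, min(addr + BLOCK, len(RAM))):
        cache[a] = RAM[a]
        cache.move_to_end(a)
        while len(cache) > CACHE_CAPACITY:
            cache.popitem(last=False)      # evict the least recently used word
    return cache[addr]

read(40)                              # miss: words 40..43 are loaded
print(40 in cache, 43 in cache)       # True True - neighbours were prefetched
read(41)                              # hit: served without touching RAM
```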




Evolution of Microprocessors, generations of Microprocessors

We can categorize microprocessors by generation, which also corresponds to their word size:

First Generation (4-bit Microprocessors)

The first-generation microprocessors were introduced during 1971-1972 by Intel Corporation. The first of these was the Intel 4004, a 4-bit processor.

It was a processor on a single chip. It could perform simple arithmetic and logical operations such as addition, subtraction, Boolean OR and Boolean AND.

It had a control unit capable of performing control functions like fetching an instruction from storage memory, decoding it, and then generating control pulses to execute it.

Second Generation (8-bit Microprocessors)

The second-generation microprocessors were introduced in 1973, again by Intel. The Intel 8008 was the first 8-bit microprocessor, able to perform arithmetic and logic operations on 8-bit words; an improved version, the Intel 8080, followed.

Third Generation (16-bit Microprocessors)

The third-generation microprocessors, introduced in 1978, were represented by Intel's 8086 and 80286 and the Zilog Z8000, 16-bit processors with performance comparable to minicomputers.

Fourth Generation (32-bit Microprocessors)

Several different companies introduced 32-bit microprocessors, but the most popular one was the Intel 80386.

Fifth Generation (64-bit Microprocessors)

From 1995 to now we have been in the fifth generation. After the 80486, Intel came out with a new processor, namely the Pentium, followed by the Pentium Pro CPU, which allows multiple CPUs in a single system to achieve multiprocessing.

Other improved 64-bit processors include the Celeron and dual-core, quad-core and octa-core processors.

Computer Memories, Primary memory/Main memory, auxiliary memory/Secondary memory, difference between Primary memory and Secondary memory

Computer memories store data and instructions. The memory system can be divided into four categories:

  • CPU register
  • Cache memory
  • Primary / Main memory
  • Secondary Memory / Mass Storage

They can be represented in a hierarchical form, from the fastest and smallest (CPU registers) down to the slowest and largest (secondary storage).



1. Primary / Main memory:
Primary memory is the computer memory that is directly accessible by the CPU. It is composed of DRAM and provides the actual working space for the processor: it holds the data and instructions that the processor is currently working on.

2. Secondary Memory / Mass Storage:
The processor does not interact with secondary memory directly, so the contents of secondary memory are first transferred to primary memory and then accessed from there.
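
As a minimal illustration of that staging (the file name and contents here are invented for the example), a program first copies data from secondary storage into primary memory and only then lets the processor work on it:

```python
# Secondary storage (a file on disk) must be staged into primary memory
# (here, a Python list living in RAM) before the processor can use it.
with open("readings.txt", "w") as f:       # put some data on "disk"
    f.write("3\n1\n4\n1\n5\n")

with open("readings.txt") as f:            # transfer: disk -> RAM
    values = [int(line) for line in f]     # now resident in main memory

print(sum(values))                         # the CPU works on the RAM copy
```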

Now, let's see the differences between primary memory and secondary memory:

  1. Primary memory is temporary, whereas secondary memory is permanent.
  2. Primary memory is directly accessible by the processor/CPU; secondary memory is not directly accessible by the CPU.
  3. The nature of primary memory varies by part: RAM is volatile, ROM is non-volatile. Secondary memory is always non-volatile.
  4. Primary memory devices are more expensive than secondary storage devices.
  5. Primary memory uses semiconductor memories; secondary memory uses magnetic and optical memories.
  6. Primary memory is also known as main memory or internal memory; secondary memory is also known as external memory or auxiliary memory.
  7. Examples of primary memory: RAM, ROM, cache memory, PROM, EPROM, registers. Examples of secondary memory: hard disks, floppy disks, magnetic tapes.


What is a Compiler and Interpreter? Difference Between Compiler and Interpreter

A compiler is a program that translates a source program written in some high-level programming language (such as Java) into machine code for some computer architecture (such as the Intel Pentium architecture). The generated machine code can later be executed many times, against different data each time.

An interpreter reads an executable source program written in a high-level programming language as well as data for this program, and it runs the program against the data to produce some results. One example is the Unix shell interpreter, which runs operating system commands interactively.

Note that both interpreters and compilers (like any other program) are written in some high-level programming language (which may be different from the language they accept), and they are themselves translated into machine code. For example, a Java interpreter can be completely written in C, or even in Java. The interpreter source program is machine independent, since it does not generate machine code. (Note the difference between generating machine code and being translated into machine code.) An interpreter is generally slower than a compiler because it processes and interprets each statement of a program as many times as that statement is evaluated. For example, when a for-loop is interpreted, the statements inside the loop body are analyzed and evaluated on every loop step. Some languages, such as Java and Lisp, come with both an interpreter and a compiler. Java source programs (Java classes with the .java extension) are translated by the javac compiler into byte-code files (with the .class extension). The Java interpreter, called the Java Virtual Machine (JVM), may interpret the byte codes directly or may internally compile them to machine code and then execute that code (JIT: just-in-time compilation).
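
The cost of re-analysing a statement on every evaluation can be seen inside Python itself. In the sketch below, eval() on a source string re-parses the expression on every loop step (interpreter-like), while compile() translates it once into a code object that is then executed repeatedly (compiler-like); the expression and iteration count are arbitrary choices for the demonstration.

```python
import time

source = "x * x + 3 * x + 1"

# Interpreter-style: the source text is parsed and analysed on every step.
start = time.perf_counter()
total = 0
for x in range(100_000):
    total += eval(source, {"x": x})        # re-translated each iteration
interp_time = time.perf_counter() - start

# Compiler-style: translate once, then run the translated form many times.
code = compile(source, "<expr>", "eval")   # one-time translation to bytecode
start = time.perf_counter()
total = 0
for x in range(100_000):
    total += eval(code, {"x": x})          # runs the pre-compiled code object
compile_time = time.perf_counter() - start

print(f"re-parsed every step: {interp_time:.3f}s")
print(f"compiled once:        {compile_time:.3f}s")
```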



What is an Interpreter?

An interpreter is a computer program that converts each high-level program statement into machine code; this includes source code, pre-compiled code, and scripts. Both compilers and interpreters do the same job, converting a higher-level programming language into machine code, but a compiler converts the code before the program runs (creating an exe), while an interpreter converts the code while the program is running.

Difference Between Compiler and Interpreter

Programming steps
  Compiler:
    • Create the program.
    • The compiler parses (analyses) all of the language statements for correctness; if a statement is incorrect, it throws an error.
    • If there are no errors, the compiler converts the source code into machine code.
    • It links the different code files into a runnable program (known as an exe).
    • Run the program.
  Interpreter:
    • Create the program.
    • No linking of files or machine-code generation takes place.
    • Source statements are executed line by line during execution.

Advantage
  Compiler: The program code is already translated into machine code, so its execution time is shorter.
  Interpreter: Interpreters are easier to use, especially for beginners.

Disadvantage
  Compiler: You can't change the program without going back to the source code.
  Interpreter: Interpreted programs can only run on computers that have the corresponding interpreter.

Machine code
  Compiler: Stores machine language as machine code on the disk.
  Interpreter: Does not save machine code at all.

Running time
  Compiler: Compiled code runs faster.
  Interpreter: Interpreted code runs slower.

Model
  Compiler: Based on the language translation linking-loading model.
  Interpreter: Based on the interpretation method.

Program generation
  Compiler: Generates an output program (in the form of an exe) which can be run independently of the original program.
  Interpreter: Does not generate an output program, so the source program is evaluated at every execution.

Execution
  Compiler: Program execution is separate from compilation; it is performed only after the entire output program is compiled.
  Interpreter: Program execution is part of the interpretation process, so it is performed line by line.

Memory requirement
  Compiler: The target program executes independently and does not require the compiler in memory.
  Interpreter: The interpreter exists in memory during interpretation.

Best suited for
  Compiler: Compiled code is bound to the specific target machine and cannot be ported. C and C++ are the most popular programming languages that use the compilation model.
  Interpreter: Web environments, where load times are important. Because of all the exhaustive analysis done, compilers take relatively long to compile even small code that may not be run multiple times; in such cases, interpreters are better.

Code optimization
  Compiler: The compiler sees the entire code upfront, so it can perform many optimizations that make the code run faster.
  Interpreter: Interpreters see code line by line, so optimizations are not as thorough as a compiler's.

Dynamic typing
  Compiler: Difficult to implement, because the compiler cannot predict what happens at run time.
  Interpreter: Interpreted languages support dynamic typing.

Usage
  Compiler: Best suited for the production environment.
  Interpreter: Best suited for programming and development environments.

Error execution
  Compiler: Displays all errors and warnings at compilation time; the program cannot be run without fixing the errors.
  Interpreter: Reads a single statement and shows the error, if any; the error must be corrected before the next line is interpreted.

Input
  Compiler: Takes an entire program.
  Interpreter: Takes a single line of code.

Output
  Compiler: Generates intermediate machine code.
  Interpreter: Never generates any intermediate machine code.

Errors
  Compiler: Displays all errors together, after compilation.
  Interpreter: Displays the errors of each line one by one.

Programming languages
  Compiler: C, C++, C#, Scala and Java all use compilers.
  Interpreter: PHP, Perl and Ruby use interpreters.

Supercomputer

A supercomputer is a computer with a high level of performance compared to a general-purpose computer. Supercomputer performance is measured in floating-point operations per second (FLOPS) instead of million instructions per second (MIPS). Supercomputers contain tens of thousands of processors and can perform billions or even trillions of calculations per second; some can reach a hundred quadrillion (10^17) FLOPS. Since information moves quickly between processors in a supercomputer (compared to distributed computing systems), they are ideal for real-time applications.

Supercomputers are used for data-intensive and computation-heavy scientific and engineering purposes such as quantum mechanics, weather forecasting, oil and gas exploration, molecular modeling, physical simulations, aerodynamics, nuclear fusion research and cryptanalysis. Early operating systems were custom-made for each supercomputer to increase its speed. In recent years, supercomputer architecture has moved away from proprietary, in-house operating systems to Linux. Although most supercomputers use a Linux-based operating system, each manufacturer optimizes its own Linux derivative for peak hardware performance. In 2017, half of the world’s top 50 supercomputers used SUSE Enterprise Linux Server.

The largest, most powerful supercomputers are actually multiple computers that perform parallel processing. Today, many academic and scientific research firms, engineering companies and large enterprises that require massive processing power are using cloud computing instead of supercomputers. High performance computing (HPC) via the cloud is more affordable, scalable and faster to upgrade than on-premises supercomputers. Cloud-based HPC architectures can expand, adapt and shrink as business needs demand. SUSE Linux Enterprise High Performance Computing allows organizations to leverage their existing hardware for HPC computations and data-intensive operations.

Number Conversion, binary to decimal, decimal to binary, binary to hexadecimal
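
To convert binary to decimal, multiply each bit by its positional weight (a power of 2) and add the results: for example, 1011 in binary is 1×8 + 0×4 + 1×2 + 1×1 = 11 in decimal. To convert decimal to binary, repeatedly divide by 2 and read the remainders in reverse order: 11 gives remainders 1, 1, 0, 1, i.e. 1011. To convert binary to hexadecimal, group the bits into fours from the right and replace each group with its hex digit: 1011 1100 becomes BC. The following Python sketch implements all three conversions from first principles (the built-ins int(s, 2), bin() and hex() do the same job):

```python
# Binary to decimal: accumulate bits, doubling the running value each time.
def binary_to_decimal(bits: str) -> int:
    value = 0
    for bit in bits:
        value = value * 2 + int(bit)        # shift left, then add the new bit
    return value

# Decimal to binary: repeatedly divide by 2 and collect the remainders.
def decimal_to_binary(n: int) -> str:
    if n == 0:
        return "0"
    digits = []
    while n > 0:
        digits.append(str(n % 2))           # remainder = next bit
        n //= 2
    return "".join(reversed(digits))        # remainders arrive least significant first

# Binary to hexadecimal: pad to a multiple of 4 bits; each group of 4 is one hex digit.
def binary_to_hex(bits: str) -> str:
    bits = bits.zfill((len(bits) + 3) // 4 * 4)
    groups = [bits[i:i + 4] for i in range(0, len(bits), 4)]
    return "".join(f"{int(g, 2):X}" for g in groups)

print(binary_to_decimal("1011"))    # 11
print(decimal_to_binary(11))        # 1011
print(binary_to_hex("10111100"))    # BC
```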


Types of Computer Languages with Their Advantages and Disadvantages, Machine Language, Assembly Language, High-Level Languages

Different kinds of languages have been developed to perform different types of work on the computer. Basically, languages can be divided into two categories according to how the computer understands them.

Two Basic Types of Computer Language

  • Low-Level Languages: A language that corresponds directly to a specific machine
  • High-Level Languages: Any language that is independent of the machine

There are also other types of languages, which include

  • System languages: These are designed for low-level tasks, like memory and process management
  • Scripting languages: These tend to be high-level and very powerful
  • Domain-specific languages: These are only used in very specific contexts
  • Visual languages: Languages that are not text-based
  • Esoteric languages: Languages that are jokes or are not intended for serious use

Low-level computer languages are either machine codes or very close to them. A computer cannot understand instructions given to it in high-level languages or in English. It can only understand and execute instructions given in the form of machine language, i.e. binary. There are two types of low-level languages:

  • Machine Language: a language whose instructions are executed directly by the hardware
  • Assembly Language: a slightly more user-friendly language that directly corresponds to machine language

Machine Language

Machine language is the lowest and most elementary level of programming language and was the first type of programming language to be developed. Machine language is basically the only language that a computer can understand and it is usually written in hex.

In fact, a manufacturer designs a computer to obey just one language, its machine code, which is represented inside the computer by a string of binary digits (bits) 0 and 1. The symbol 0 stands for the absence of an electric pulse and the 1 stands for the presence of an electric pulse. Since a computer is capable of recognizing electric signals, it understands machine language.
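
Hexadecimal is simply a compact way of writing those bit strings: each hex digit stands for four bits. For instance, the classic x86 instruction MOV AL, 61h assembles to the two bytes B0 61; the short sketch below just prints those bytes in both notations to show why programmers prefer hex over raw binary.

```python
# The x86 instruction MOV AL, 61h assembles to the bytes B0 61.
machine_code = bytes([0xB0, 0x61])

for byte in machine_code:
    print(f"{byte:08b}  =  {byte:02X}h")   # raw binary vs. the terser hex form
# 10110000  =  B0h
# 01100001  =  61h
```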

Advantages

  • Machine language makes fast and efficient use of the computer (high speed).
  • It requires no translator to translate the code; it is directly understood by the computer (translation free).

Disadvantages

  • All operation codes have to be remembered.
  • All memory addresses have to be remembered.
  • It is hard to amend or find errors in a program written in machine language.
  • Machine Dependent: Programs written in machine language are machine dependent, so a program developed for one system does not execute on another.
  • Complex Language for Programming: Since machine language consists only of sequences of 0s and 1s, it is very difficult for a programmer to remember and write each instruction.
  • Error Prone: When programming in machine language everything must be expressed as sequences of 0s and 1s, which is very tedious, so errors occur frequently.
  • Time Consuming: Writing a program in machine language is a time-consuming process.

Assembly Language

Assembly language was developed to overcome some of the many inconveniences of machine language. It is another low-level but very important language, in which operation codes and operands are given in the form of alphanumeric symbols instead of 0s and 1s.

These alphanumeric symbols are known as mnemonic codes and are combinations of at most five letters, e.g. ADD for addition, SUB for subtraction, START, LABEL, etc. Because of this feature, assembly language is also known as a 'Symbolic Programming Language.'

This language is still difficult and needs a lot of practice to master, because it offers only limited English-like support. Assembly language is mostly used for work close to the hardware, such as in compilers and operating systems. The instructions of assembly language are converted into machine code by a language translator (an assembler) and then executed by the computer.
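
A toy model of that translation step looks like the sketch below. The mnemonics and opcode values here are invented for illustration and do not correspond to any real instruction set; a real assembler also resolves operands and labels.

```python
# A toy "assembler": translate mnemonic codes into numeric operation codes.
# The opcode table is invented for illustration only.
OPCODES = {"LOAD": 0x01, "ADD": 0x02, "SUB": 0x03, "STORE": 0x04, "HALT": 0xFF}

def assemble(program):
    return bytes(OPCODES[mnemonic] for mnemonic in program)

source = ["LOAD", "ADD", "STORE", "HALT"]
machine_code = assemble(source)
print(machine_code.hex(" "))   # 01 02 04 ff - what the machine actually runs
```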

Advantages

  • Assembly language is easier to understand and use than machine language.
  • It is easy to locate and correct errors.
  • It is easily modified.

Disadvantages

  • Like machine language, it is machine dependent/specific.
  • Since it is machine dependent, the programmer also needs to understand the hardware.


High-Level Languages

High-level computer languages use formats that are similar to English. The purpose of developing high-level languages was to enable people to write programs easily, in their own native language environment (English).

High-level languages are basically symbolic languages that use English words and/or mathematical symbols rather than mnemonic codes. Each instruction in the high-level language is translated into many machine language instructions that the computer can understand.
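
Python's standard dis module gives a feel for this one-to-many expansion: a single high-level statement is translated into several lower-level instructions (bytecode here, standing in for machine instructions; the exact opcode names vary between Python versions).

```python
import dis

# One high-level statement...
dis.dis(compile("c = a + b", "<example>", "exec"))
# ...expands into several lower-level instructions, roughly:
#   LOAD_NAME   a
#   LOAD_NAME   b
#   BINARY_ADD  (BINARY_OP on newer Python versions)
#   STORE_NAME  c
```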

Advantages

  • High-level languages are user-friendly.
  • They are similar to English and use English vocabulary and well-known symbols.
  • They are easier to learn.
  • They are easier to maintain.
  • They are problem-oriented rather than machine-based.
  • A program written in a high-level language can be translated into many machine languages and can run on any computer for which there exists an appropriate translator.
  • The language is independent of the machine on which it is used, i.e. programs developed in a high-level language can be run on any computer.

Disadvantages

  • A high-level language has to be translated into machine language by a translator, which takes time.
  • The object code generated by a translator might be inefficient compared to an equivalent assembly language program.

Types of High-Level Languages

Many languages have been developed for achieving a variety of different tasks. Some are fairly specialized, and others are quite general.

These languages, categorized according to their use, are:

1) Algebraic Formula-Type Processing

These languages are oriented towards the computational procedures for solving mathematical and statistical problems.

Examples include:

  • BASIC (Beginner's All-Purpose Symbolic Instruction Code)
  • FORTRAN (Formula Translation)
  • PL/I (Programming Language One)
  • ALGOL (Algorithmic Language)
  • APL (A Programming Language)