Wednesday, February 10, 2010

IS Hardware - Mainframes to Microcomputers

IS Hardware
Computer size
Computer systems can be classified into four categories according to their size and the number of users they support, namely
super computers
micro computers
mini computers
main frame computers





Micro Computer Systems
These are also known as personal computers (PCs) and are the ones most often found in both large and small offices. They are normally standalone machines, also called desktop computers. Microcomputers are small and relatively inexpensive, designed for individual use, and contain two types of memory: RAM and ROM. Manufacturers of microcomputers include HP, Dell, IBM and many more.







Mini Computer Systems
Minicomputers are mid-sized computers capable of supporting from 4 to 200 users simultaneously. They are mainly used as departmental computers for data processing in large organisations or government institutions such as hospitals.










Main Frame Computer Systems
A mainframe computer is a very large, expensive computer system capable of supporting hundreds or even thousands of users simultaneously. Most of these computers are found in large organisations such as universities, hospitals and world governing bodies like the UN, among many others.






Super Computer Systems
Supercomputers are the fastest type of computer. They are very expensive and are designed for work that requires vast numbers of mathematical calculations. Early supercomputers followed the von Neumann design and had only a single control unit. Modern supercomputers are equipped with many processors, enabling them to process complex operations, e.g. a weather forecast, which must be produced within a short period.









Computer generations

Within modern computer systems the basic element of storage is the binary digit (or bit), which can represent a 0 or a 1. The reason for this is that it is very easy to build electronic switches where an off/on condition is used to represent a 0/1 binary value. Although a single bit can only have two states, 0 or 1, a sequence of bits can be used to represent a larger range of values. Such a sequence is called a word of storage and is usually 8, 16, 32, 64 or 128 bits in length. An 8-bit word, for example, can represent an unsigned positive number in the range 0 to 11111111 binary (0 to 255 decimal) thus:

bit:     7    6    5    4    3    2    1    0
weight: 128   64   32   16    8    4    2    1

In the diagram above the least significant or rightmost bit, bit 0, represents 2^0 or 1 and the most significant or leftmost bit, bit 7, represents 2^7 or 128 decimal (the convention for identifying the bits within a word is that the rightmost or least significant bit is numbered 0). The combinations of 1s and 0s of the 8-bit word thus represent an unsigned value in the range 0 to 11111111 binary (0 to 255 decimal). The general term given to an 8-bit storage word is a byte, which is used by the majority of modern computer systems as their fundamental unit of storage. To represent values that are too large to store in 8 bits a number of bytes may be used. For example, a 16-bit number (made up from two bytes) can represent an unsigned value in the range 0 to 65535 decimal. In this way all data, i.e. numeric (integer or real), characters, user-defined types, etc., and instructions are encoded.
The reason for using the binary number system is that it is very easy to build electronic switches to represent an off/on or 0/1 binary value. It would be much more difficult, although possible, to build electronic circuits that could take ten states to represent decimal numbers. Programs convert between the external representation of information (decimal numbers, text characters, programs written in Modula-2, Pascal, Cobol, etc.) and the internal binary form. End-users, therefore, have no need to use the binary system, or even be aware of its use.
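To make these ranges concrete, here is a short sketch (in Python, chosen purely for illustration; nothing in it is specific to any machine) showing the unsigned values an 8-bit and a 16-bit word can hold, the weight of each bit, and the decimal-to-binary conversion that programs perform on the user's behalf:

# Sketch: unsigned ranges of 8-bit and 16-bit words, and the
# decimal <-> binary conversion performed internally by programs.

def unsigned_range(bits):
    """The (min, max) unsigned value an n-bit word can represent."""
    return 0, 2**bits - 1

print(unsigned_range(8))    # (0, 255)   - one byte
print(unsigned_range(16))   # (0, 65535) - two bytes

# Bit 0 (rightmost) weighs 2^0 = 1; bit 7 (leftmost) weighs 2^7 = 128.
print([2**i for i in range(8)])   # [1, 2, 4, 8, 16, 32, 64, 128]

# Decimal 255 written in binary sets every bit of the 8-bit word:
print(format(255, '08b'))   # 11111111
# ...and binary text converts back to a decimal value:
print(int('11111111', 2))   # 255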
The power of a computer system is directly related to the number of these 'electronic switches'. The electronic switches are used to build the:
Data storage. As more data storage is added the size of programs and information held will increase.
Processing circuits. As the processing circuits become more complex the power of individual instructions increases and the mechanisms of accessing and manipulating data become more powerful and flexible.
As computer systems developed over the years, the technology used to build the electronic binary 'switches' gave rise to 'generations' of systems.
First generation (1940's) The electronic switches were built using thermionic valves. Valves operated on the principle of controlling the electron flow between a heated cathode and a positively charged anode. The electron flow could be 'switched' on and off by varying a negative voltage applied to a grid positioned between the cathode and anode. Using valves as the basic circuit element of a computer system posed problems:
1 valves by their nature were large, typically 2 or 3 cm high.
2 by relying on a heated cathode to generate the electrons, their lifetime was limited (typically several thousand hours).
Even the small computer systems used in the 1940's required at least ten thousand electronic 'switches'; therefore, first generation systems were large, generated vast amounts of heat, and tended to break down every few minutes (as a valve died).
Such systems were very expensive, programmed in machine or assembly code and used in applications where even the limited computing power was essential. For example, in code breaking and physics.
Second generation (1950's) The electronic switches were built using transistors. A transistor is fabricated using a semiconductor material (e.g. silicon) where the flow of electrons (or holes) between an 'emitter' and a 'collector' is controlled by a voltage applied to a 'base'. Transistors are much smaller than valves, typically less than half a cm high, required no heater, and had lifetimes measured in tens or hundreds of thousands of hours.
By using transistors it was therefore possible to build computer systems much more powerful than the earlier valve systems, i.e. larger information storage and more powerful processing elements. In addition, the cost of the computer systems decreased to the point where medium-sized organisations could afford them, e.g. large commercial or industrial firms and universities. During this period it was realised that using machine or assembly languages to implement commercial-quality software was not practical, and machine-independent problem-oriented languages were developed, e.g. Fortran for scientific and Cobol for commercial applications.
Third generation (1960's) The electronic switches were built using small scale integrated circuits. During the 1950's and 1960's the size of the semiconductor wafers or chips used to fabricate transistors increased markedly (see below) to the point where several transistors could be built onto a single chip. It was therefore possible to fabricate a small electronic circuit onto a single chip, e.g. a basic computer 'gate'. These devices with a small circuit on a chip were called 'integrated circuits' or ICs.
Using integrated circuits, computer systems became much smaller and more powerful. Prices reduced and applications increased markedly. Sequences of jobs were organised into batches gaining the title of a 'batch processing environment'.
Fourth generation (1970's to date) The size of the silicon semiconductor chips continued to increase and therefore the complexity of the circuits which could be fabricated:
Small scale integration (typically 2 to 64 transistors per chip) used to fabricate basic circuit elements, e.g. simple gates: AND, OR, EXOR, NOT, etc.
Medium scale integration (typically 64 to 2000 transistors per chip) used to fabricate basic system elements, e.g. counters, registers, adders, etc.
Large scale integration (typically 2000 to 64000 transistors per chip) used to fabricate major system elements, e.g. ALUs, I/O interfaces, small microprocessors, etc.
Very large scale integration (typically 64000 to 2000000 transistors per chip) used to fabricate complete system components, e.g. microprocessors, DMA controllers, etc.
Ultra large scale integration (typically 2000000 to 64000000 transistors per chip) used to fabricate very complex systems, e.g. parallel processors, 1 Mbyte memory chips, etc.
As the integrated circuits became more complex, system size and costs reduced and power increased. System storage became sufficiently large for several programs to be held in memory at any time, with the operating system scheduling which job could use the processor at any instant, i.e. a multitasking or multiprocessing environment. Although batch processing continued, users could access computer systems 'on-line' from terminals, thus reducing the 'turn around' time associated with batch processing.
By the late 1970's a simple processor could be fabricated on a single integrated circuit chip; called a microprocessor. This allowed more and more power to be placed in the user terminal to the point where for simple applications, the terminal became independent and the microcomputer was created.
Fifth generation (?) The computer systems we use today are fourth generation, i.e. more powerful and faster versions of those used over the past ten to twenty years. It is envisaged that fifth generation computer systems will use the same or extended hardware technology but that the operating environment will be totally different. Systems will display 'intelligence' and be able to communicate with humans on more equal terms, e.g. using speech instead of keyboards.

[Table comparing the computer generations (technology, processor speed, memory size) - not reproduced here.]

'Faster and larger' are terms that come to mind when looking at the above table. The transition from vacuum tube to VLSI technology has produced an increase in processor speed of four orders of magnitude. This speed-up is partly due to the reduction in gate switching time and wire length, and partly due to the miniaturisation of circuits (which allowed caches and extended sets of registers close to the processor). The result is fewer primary memory accesses and better matching of processor and memory speeds. Also, with primary memory three or four orders of magnitude larger, more programs and data can reside simultaneously in main memory, thus increasing the multiprogramming level of timeshared machines. This has allowed single-user microcomputers and workstations to run sophisticated programming environments (in a memory similar in size to the mainframes of the 1960's and early 1970's).
In the context of processors, size reduction has been the most important phenomenon. Microprocessors and workstations have evolved, and in mainframes and minicomputers the numbers of registers have expanded and special ALUs can coexist (floating point, decimal). Special purpose processors called multifunction units, such as I/O controllers, graphics display, array processors for matrix manipulation, etc. can be attached. This extends the concept of 'multi functional' units first implemented on the CDC 6600 mainframe and its successors in the 1960's.
Cache memories are now 'standard' on mainframe and minicomputers and becoming common on the more powerful microprocessors. A cache is a high-speed memory which sits between the processor and primary memory. The goal is to keep copies of the most frequently used words in the cache. Program execution is mainly sequential so when the processor requests a byte from primary memory the byte is fetched plus several bytes following, which are placed in the high-speed cache (during the period when the processor is decoding or executing). When the processor requests the next byte it is likely to be in the cache (the 'hit' rate is typically 80 to 90%). This technique makes the primary memory appear faster than it really is.
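The principle can be sketched in a few lines of Python (a deliberately simplified model with an invented block size, not a description of any real machine's cache): on each miss the requested byte plus the next few bytes are copied into the cache, so a mostly sequential run of requests lands in the quoted 80 to 90% hit range.

# Simplified cache model: on a miss, fetch the requested byte plus the
# next BLOCK - 1 bytes, so sequential access mostly hits thereafter.
BLOCK = 8                      # bytes fetched per miss (illustrative size)

memory = bytes(range(256))     # stand-in for primary memory
cache = {}                     # address -> byte copies held in the cache
hits = misses = 0

def read(addr):
    global hits, misses
    if addr in cache:
        hits += 1
    else:
        misses += 1            # fetch the block from 'primary memory'
        for a in range(addr, min(addr + BLOCK, len(memory))):
            cache[a] = memory[a]
    return cache[addr]

for a in range(128):           # mostly sequential access, as in real programs
    read(a)

print(f"hit rate: {hits / (hits + misses):.0%}")   # ~88% with BLOCK = 8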
Until the third generation the control unit was hardwired (physical circuits wired into the computer). When compact ROMs appeared, these allowed practical microprogramming (the instruction decode and control circuits of the processor are in a 'microprogram' in ROM, thus allowing 'easy' modification of the instruction set). In addition some modern processors have control units that can be microprogrammed by user programs (using RAM instead of ROM), allowing optimisation for certain high-level languages.
The virtual memory techniques developed on the third generation mainframes (to allow programs larger than physical memory) are now being applied to professional single-user workstations. At any instant the vast majority of programs are using very little of the overall code and data. In a virtual memory system the program and data is broken down into 'pages' (typical size 4Kbytes) which are held on disk. Pages are then brought into primary memory as required. This technique allows program size to be much larger than the physical primary memory size (typically a modern microcomputer may have 1 to 4Mbytes of primary memory but a virtual memory size of 16Mbytes).
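A minimal sketch of the technique follows (illustrative Python; names such as resident and access are our own, and in a real system the address split is done by hardware and the page transfer by the operating system). An address is divided into a page number and an offset, and a page is copied from disk into primary memory only when it is first touched:

# Demand paging sketch: pages live 'on disk' and are copied into
# primary memory only when first referenced (a page fault).
PAGE_SIZE = 4096               # 4 Kbyte pages, as quoted in the text

disk = {}                      # page number -> page contents (backing store)
resident = {}                  # pages currently held in primary memory
faults = 0

def access(virtual_addr):
    global faults
    page, offset = divmod(virtual_addr, PAGE_SIZE)
    if page not in resident:   # page fault: bring the page in from disk
        faults += 1
        resident[page] = disk.get(page, bytearray(PAGE_SIZE))
    return resident[page][offset]

# A large virtual space touched sparsely faults in only the pages used.
for addr in (0, 100, 5000, 8 * 1024 * 1024):
    access(addr)

print(f"pages resident: {len(resident)}, page faults: {faults}")   # 3 and 3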
Although very high density disks have increased the size of secondary memory, there is still a gap of several orders of magnitude between the speed of primary and secondary memory (primary memory access time is less than 0.000001 of a second, secondary memory access time greater than 0.001 of a second). 'Electronic disks' such as bubble memory and CCD devices have not made any significant impact.





A microcomputer: a single user computer system (cost £2000 to £5000) based on an 8-bit microprocessor (Intel 8080, Zilog Z80, Motorola 6800). These were used for small industrial (e.g. small control systems), office (e.g. word-processing, spreadsheets) and program development (e.g. schools, colleges) applications.
A minicomputer: a medium sized multi-user system (cost £20000 to £200000) used within a department or a laboratory. Typically it would support 4 to 16 concurrent users depending upon its size and area of application, e.g. CAD in a design office.
A mainframe computer: a large multi-user computer system (cost £500000 upwards) used as the central computer service of a large organisation, e.g. Gas Board customer accounts. Large organisations could have several mainframe and minicomputer systems, possibly on different sites, linked by a communications network.
As technology advanced the classifications have become blurred and modern microcomputers are as powerful as the minicomputers of ten years ago or the mainframes of twenty years ago.


Initially computers (first and early second generation) were single-user stand-alone machines programmed in assembly languages. The users (mainly mathematicians, scientists and engineers) used and operated the machines themselves, feeding in data and waiting for answers. Second generation computers were mainly operated in batch mode, with users punching programs and data onto punched cards and tapes, which were fed into the computer by operators. Commercial use became extensive and the majority of applications programming was in high-level languages. The main problem was that the turn around on jobs was such that only a few runs per day could be achieved.

Although third generation computers ran batch streams, the faster processors and larger memories allowed users to access the machine directly from teletypes, running programs under a timesharing operating system (the operating system gave each user program a 'turn' with the processor). Users could type in their programs and data, compile and then execute. As user requirements grew, larger and larger programs became the norm, with expectations of a good response time (under 5 seconds).

As processor power increased and size reduced it became possible to put simple processors in the user's terminal. The advent of microprocessors allowed these 'terminals' to become stand-alone microcomputers equipped with disks and printers (they could be linked to a host mainframe over a normal terminal line). Today the microcomputer has grown into the powerful personal workstation, equipped with a fast processor, 1 to 4 Mbytes of memory, a 100 Mbyte disk and a high-quality graphics display screen. Such workstations need access to information sources on other machines and can therefore be connected to a high-speed LAN (Local Area Network). This allows users to access remote databases, share programs and data, use mail facilities, etc.












