Monday, 25 July 2016

Generations of Operating Systems

Operating systems have evolved in tandem with hardware changes. However, there are instances where operating system (OS) developers have demanded modifications to the underlying hardware so as to ensure compatibility between the OS and the hardware architecture. It is therefore evident that the evolution of hardware architecture impacts the evolution of the OS, and vice versa. A concise history of the development and evolution of the operating system is given below.
Operating Systems. Photo Credit: Quozr.com
First Generation (1940s – 1950s)
This generation was characterized by vacuum tubes, punched cards and plug boards. Punched cards provided the primary medium through which a programmer interacted with the machine. Programs were written in machine language, which meant that the programmer performed all the essential operational tasks by hand, as there were no software aids such as linkers or loaders.
Second Generation (1950s – 1960s)
This generation was defined by the development of programming languages and of software aids and tools such as compilers, loaders, linkers, assemblers, line printers, and magnetic tapes. The programmer could write the required program in a high-level language and then compile it using a compiler tape mounted in the tape drive of the computer system. The card reader read the program, the compiler produced an assembly-language output, and this output was then assembled by an assembler held on the assembler tape. The assembler produced a binary object output that was loaded into memory and executed by the CPU. This setup suffered several setbacks, which are described below.
First of all, it wasted time: if an error occurred during the run, one had to start all over again, as there was no primary disk storage to hold the processed data. Secondly, the set-up for the various tasks (loading, compiling, reading, assembling, unloading and executing) not only wasted time but also left the CPU idle, and the longer the set-up for each task, the longer the CPU idle time. Finally, this set-up caused considerable job delays, which inconvenienced the users. Computer systems were very expensive, so many users had to share a single computer, and as a result high demand on the system was punctuated by extended periods of CPU idle time and hence wasted resources.
Set-up delays were compounded by queuing jobs with different requirements together. If the first job in the queue was written in COBOL (Common Business-Oriented Language), the second in FORTRAN (Formula Translation) and the third in COBOL, one had to unload the COBOL compiler tape, load the FORTRAN compiler tape, unload that and then reload the COBOL compiler tape, and this process increased the time wasted. For this reason, jobs with the same requirements were batched together so as to minimize this wastage. Nonetheless, an operator was still required to manually prepare the next job set-up once the batched tasks were completed. This process of manually confirming that a task had completed before the next job was set up contributed to CPU idling, and there was thus a need to reduce CPU idle time by automating job sequencing.
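To make the saving concrete, the short Python sketch below (purely illustrative; the job lists and the tape_mounts helper are hypothetical) counts how many compiler-tape mounts an operator would need for an unbatched queue versus a queue in which jobs with the same language requirement are grouped together.

    from itertools import groupby

    def tape_mounts(jobs):
        # a new compiler tape must be mounted whenever the next job
        # needs a different compiler than the one currently loaded
        return sum(1 for _language, _group in groupby(jobs))

    unbatched = ["COBOL", "FORTRAN", "COBOL", "FORTRAN"]  # jobs in arrival order
    batched = sorted(unbatched)                           # same jobs, grouped by language

    print(tape_mounts(unbatched))  # 4 mounts: a tape swap between every pair of jobs
    print(tape_mounts(batched))    # 2 mounts: one per language, far less set-up time

Under this toy model, batching halves the number of tape mounts for the example queue, which is exactly the kind of set-up time that second-generation operators were trying to claw back.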
Monitor Program
The automation of job sequencing marks the development of the first operating system. The monitor program was conceived as the software that would perform this automation. Job sequencing requires jobs to be executed and completed in a pre-determined order; moreover, a single job can have several tasks that must be executed and completed in a pre-determined sequence. Hence, the control-card concept was introduced, in which the directives for a job's various tasks were encoded on control cards. Because the monitor program handled the control cards as well as the job sequencing, it had to remain permanently in memory, where a portion of memory was allocated to it, and it was thus referred to as the resident monitor.
The resident monitor read jobs sequentially from the tape or card reader, loaded the corresponding tasks into memory, and passed control to the job being executed. It regained control upon completion of the job. This means that the resident monitor could run only one job at a time, so it supported only mono-tasking.
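The control flow of a resident monitor can be pictured as a simple loop. The Python sketch below is a minimal conceptual rendering, not the code of any historical monitor: the job list stands in for the card/tape reader, and load_into_memory is a hypothetical placeholder for the loader step.

    def load_into_memory(job):
        # hypothetical 'loader': returns a callable standing in for the loaded program
        name, work = job
        def run():
            print(f"running {name}")
            work()
        return run

    def resident_monitor(job_queue):
        # read jobs sequentially from the 'tape or card reader'
        for job in job_queue:
            program = load_into_memory(job)   # load the job's tasks into memory
            program()                         # pass control to the job (mono-tasking)
            # control returns here only when the job completes;
            # only then is the next job set up, with no manual intervention

    # two hypothetical jobs described by their 'control cards'
    jobs = [("JOB1", lambda: None), ("JOB2", lambda: None)]
    resident_monitor(jobs)

The essential point the sketch captures is that exactly one job owns the CPU at any moment, and the monitor regains control only at job boundaries.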
Offline Concept
Batch processing also faced a particular problem: the electronic CPU operated at a far higher speed than the electromechanical input/output (I/O) devices. This speed mismatch was addressed by first copying inputs onto a magnetic tape, called the input tape, which could then be processed in a single pass. The same was done for output: results were written to an output tape that could be printed later. This reduced the online processing time, that is, the time taken to process inputs or outputs directly through the CPU. The input and output tapes thus introduced an offline mode of operation: the slow I/O devices no longer constrained CPU operations, because inputs and outputs were processed from the input and output tapes respectively, and the offline concept eliminated the need for the CPU to work directly with the slow devices.
Third Generation (1960s – 1980s)
The operation of batched systems was constrained by magnetic tapes, whose low storage capacity necessitated separate tapes for input and output and hence separate tapes for jobs. Additionally, because magnetic tape is a sequential-access medium, one had to repeatedly rewind the tape in order to read and write at different positions. These shortcomings were addressed by the development of disks, whose random-access nature was complemented by large storage capacity. Input and output could now be written, stored, read and processed from the disk, and data could be accessed from anywhere on the disk.
Multi-Programming
A corresponding breakthrough in CPU technology led to faster CPUs that could effectively handle the tasks in batch systems, increasing overall system performance. However, idle time also grew as CPU speeds increased. In addition, some job executions required access to I/O devices, and the mismatch between CPU speed and I/O device speed both constrained CPU operations and increased CPU idle time. To solve this problem, it was proposed that the CPU execute another task whenever it would otherwise sit idle waiting on the first one. This required partitioning memory so that separate jobs could be allocated different memory partitions, allowing multiple jobs to be in progress at once. The result was the multiprogrammed batch system, which supported overlapping the I/O of one job with the processing of a second job. When a multiprogrammed batch system ran on a computer with a disk, using the disk to buffer input and output, the operation was referred to as Simultaneous Peripheral Operation On-Line (abbreviated as SPOOL), and the process came to be called spooling. Spooling rested on the multiprogramming concept, which held that idle time could be reduced if the OS loaded another job onto an idle CPU.
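The overlap that multiprogramming buys can be illustrated with a toy scheduler. The Python sketch below is a simplification under assumed job descriptions (lists of 'cpu' and 'io' steps, which are made-up names): whenever the running job starts an I/O operation, the CPU immediately switches to another ready job instead of idling.

    def multiprogram(jobs):
        # jobs are resident in separate memory partitions
        ready = list(jobs)
        while ready:
            name, steps = ready.pop(0)        # pick a ready job
            while steps:
                step = steps.pop(0)
                if step == "cpu":
                    print(f"CPU executes {name}")
                else:  # the job blocks on I/O, so the CPU is handed to another job
                    print(f"{name} starts I/O; CPU switches to the next ready job")
                    ready.append((name, steps))   # re-queue the job for later
                    break
            # if steps ran out, the job has finished and is not re-queued

    # hypothetical jobs: A alternates computation with an I/O wait, B is CPU-only
    multiprogram([("A", ["cpu", "io", "cpu"]), ("B", ["cpu", "cpu"])])

In the printed trace, job B's CPU bursts fill the gap created by job A's I/O wait, which is precisely the idle time that multiprogrammed batch systems set out to reclaim.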
Multiprogramming led to an increase in OS size, as the OS now required components for memory management, spooling, job management, and task scheduling, among others. Adding these components produced a leap in OS development, and the multiprogramming concept thus serves as the foundation upon which the concepts of modern operating systems were built.
Interactivity
Even though multiprogramming and batched systems greatly improved the performance of computer systems, the lack of interaction between the programmer and the job was unsuitable for interactive work requiring the user's constant attention. This posed a particular challenge for debugging, since a batched system ran all the batched jobs before returning their results to the programmer. The programmer could therefore analyze a program's output only after the whole cycle had completed; in case of an error, the batch run had to be repeated and the output was considerably delayed. The absence of programmer/user interaction with batched jobs thus made debugging a time-consuming process, and this necessitated dedicated dumb terminals through which programmers and users could interact with their jobs.
Each dedicated dumb terminal was operated by a single (dedicated) user. The dumb terminals were connected to the main system, which meant that the CPU resources of the system had to be shared among several users. Nonetheless, it was desirable that each user should feel as if he or she alone were using the entire system. Users therefore submitted jobs to the system individually rather than as a batch, which preserved each user's interaction with his or her job. The submitted jobs were still placed in separate memory partitions, as in multiprogramming, but CPU time was now shared among the multiple users, and the system was therefore described as a time-sharing multi-user system.
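Time-sharing systems typically sustain this illusion by slicing CPU time into short quanta and rotating among the users' jobs. The Python sketch below is a minimal round-robin illustration with made-up user names, work amounts, and quantum size; it is not the scheduling algorithm of any particular historical system.

    from collections import deque

    def time_share(user_jobs, quantum=2):
        # queue of (user, remaining work units); each user gets a short slice in turn
        queue = deque(user_jobs)
        while queue:
            user, remaining = queue.popleft()
            used = min(quantum, remaining)
            print(f"{user} runs for {used} time unit(s)")
            remaining -= used
            if remaining:                 # unfinished jobs rejoin the back of the queue
                queue.append((user, remaining))

    # hypothetical interactive jobs submitted from three dumb terminals
    time_share([("user1", 5), ("user2", 3), ("user3", 4)])

Because the switches happen far faster than a person can notice, each terminal user perceives a responsive machine of his or her own, even though the CPU is shared.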
The Massachusetts Institute of Technology (MIT) developed a time-sharing multi-user system known as the Compatible Time-Sharing System (CTSS), whose successful operation led to the development of the MULTiplexed Information and Computing Service (MULTICS) through the joint efforts of MIT, General Electric, and Bell Labs. MULTICS was conceived to support thousands of users, unlike CTSS, which supported only a limited number. MULTICS never achieved the success envisioned for it, but the concepts behind it greatly influenced the development of modern operating systems.
Birth of UNIX and the C Programming Language
Later on, when a stripped-down, single-user version of MULTICS was written for a discarded PDP-7 minicomputer, it was found to work well. This spurred Ken Thompson, Dennis Ritchie, Brian Kernighan, and their colleagues to start a project called the UNiplexed Information and Computing Service (UNICS), later renamed UNIX. The successful operation of UNIX led to its being ported to then-existing computers, including the PDP-11/45 and PDP-11/70, which had built-in memory-protection mechanisms.
Nonetheless, porting UNIX to other computers faced a significant challenge: different machines had their own unique hardware architectures, which made it difficult to run UNIX on them. This necessitated rewriting UNIX from its original assembly language into a programming language that could be compiled for the different hardware. Dennis Ritchie designed a new high-level language to solve this problem and called it C. Ritchie and Thompson rewrote UNIX in the C programming language, and a portable version was developed. This paved the way for modern computing, which came to be dominated by the time-sharing multi-user OS model.
Fourth Generation (1980s – Present Day)
UNIX was a multi-user system that shared processing time, and this posed a challenge when it was run on a system serving a single person. There was a need to personalize processing time so that the CPU resources of a single system could be dedicated to one user. This was made possible by advances in hardware technology, especially Large Scale Integration (LSI) and later Very Large Scale Integration (VLSI), which supported the integration of thousands of transistors onto a silicon chip. The result was an exponential reduction in computer size together with an increase in CPU processing speed. The resulting architecture gave rise to the microcomputer, which came to be called the Personal Computer (PC) because it was affordable enough to be dedicated to a single user.
Microprocessors
The advent of microcomputers created the need for a compatible OS, as existing operating systems were not compatible with them. The Intel 8080 served as the microprocessor for the first microcomputers, and Gary Kildall designed a compatible OS for it called the Control Program for Microcomputers (CP/M). The success of CP/M led Kildall to form a company called Digital Research, which specialized in operating systems for microprocessors such as the Zilog Z80.
International Business Machines (IBM) entered the PC market shortly thereafter and introduced the IBM PC. Tim Paterson, who had written a disk operating system (DOS) called 86-DOS, was hired by Bill Gates's Microsoft, then working in collaboration with IBM, to produce a modified version of DOS compatible with IBM's computers, and this resulted in Microsoft DOS (MS-DOS). The success of PCs running MS-DOS revolutionized computing and spurred advances in Intel microprocessors, from the 8086 through the 80286, 80386, and 80486; meanwhile, applications such as Lotus 1-2-3, dBASE, and WordStar were developed, alongside advances in programming languages such as C, BASIC, and COBOL under DOS.
Unlike UNIX, MS-DOS originally lacked the time-sharing multi-user capability that was a key strength of UNIX as an operating system. UNIX, in turn, underwent modifications geared towards making it a competitive and versatile operating system alongside MS-DOS, while MS-DOS incorporated a hierarchical file system derived from the UNIX file system. Microsoft also acknowledged the value of UNIX's time-sharing multi-user capability and sought to offer it in its XENIX OS. Microsoft aimed to take advantage of the Intel 80286 and later 80x86 microprocessors, which were very fast and supported the execution of different jobs by multiple users. Shortly afterwards, Microsoft and IBM jointly developed a multitasking OS called OS/2 that could be installed on PCs built around the Intel 80286 and 80386 microprocessors.
Invention of Graphical User Interface
MS-DOS and earlier operating systems were command-line interface (CLI) based, meaning that one had to type the correct commands into a largely blank screen in order to perform an operation. With the number of commands growing rapidly, and with users also needing to understand the hierarchical file system to use a PC effectively, there was a need for a user-friendly and convenient OS that would eliminate the cumbersomeness of CLI-based operating systems.
Research done by Douglas Engelbart at the Stanford Research Institute led to the conception and invention of the graphical user interface (GUI), which was later refined at Xerox PARC. Steve Jobs adopted the GUI concept and used it to build the Lisa system, which failed commercially, but the subsequent Apple Macintosh was a success, largely due to its user-friendliness. The success of the Apple Macintosh drove Microsoft to adopt the GUI concept as well. Intel 80486-based systems provided a hardware architecture fast enough to drive graphical displays, and Microsoft took advantage of these developments to introduce a new GUI-based operating system called Windows. Initially, however, this OS was simply a GUI layered on top of MS-DOS; Microsoft's first fully GUI-based operating system, Windows 95, was released in 1995.
Microsoft has gone on to develop more powerful GUI-based operating systems featuring richer functionality and sleek, user-friendly interfaces. This has enabled Microsoft to dominate the PC OS market, with its latest GUI-based OS being Windows 10, released in July 2015.
Advances in Intel microprocessor technology, including the introduction of the Pentium series, Celeron, Atom, Dual Core, Core 2 Duo, Core 2 Quad, Core i3, Core i5 and Core i7 families of processors as well as the Xeon processors, have supported the ever-expanding functionality of Microsoft operating systems. Additionally, users' need to run multiple windows concurrently required the OS developer to create an OS that supports multiple tasks from a single user, a concept referred to as multi-tasking.
The success of Microsoft's GUI-based operating systems influenced UNIX developers to adopt the GUI concept. The first GUI for UNIX systems was the X Window System, which provided only basic window management; it was followed by Motif, which offered a richer graphical environment.
Network Operating System
Technological advances have supported the development of networked systems. A networked system provides basic functionality such as remote login, file transfer, remote storage, and cloud services; this requires a network interface to manage the user's interactions with the network and to support network control. This has led to the development of a low-level software layer designed as an OS, called the network operating system.
Distributed Operating System
Distributed systems have developed alongside network systems. A distributed system is essentially a large networked system in which the user does not know the location or address of the destination computers that perform a task: the user submits a complex task to the distributed system and receives the results, unaware of how the work was divided, distributed and performed by the computers in the network. The software layer that provides this functionality, including distributed scheduling, is known as a distributed operating system.
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Further Reading
Anderson, T., & Dahlin, M. (2012). Operating Systems: Principles and Practice. Recursive Books.
Arpaci-Dusseau, R. H., & Arpaci-Dusseau, A. C. (2015). Operating Systems: Three Easy Pieces. Arpaci-Dusseau.
Engler, D. R., & Kaashoek, M. F. (1995). Exokernel: An Operating System Architecture for Application-Level Resource Management (Vol. 29, No. 5, pp. 251-266). ACM.
Gude, N., Koponen, T., Pettit, J., Pfaff, B., Casado, M., McKeown, N., & Shenker, S. (2008). NOX: Towards an Operating System for Networks. ACM SIGCOMM Computer Communication Review, 38(3), 105-110.
Hansen, P. B. (Ed.). (2013). Classic Operating Systems: From Batch Processing to Distributed Systems. Springer Science & Business Media.
McHoes, A., & Flynn, I. M. (2013). Understanding Operating Systems. Cengage Learning.
Maekawa, M., Shimizu, K., Jia, X., Sinha, P., Park, K. S., Ashihara, H., & Utsunomiya, N. (2012). Operating System. Distributed Environments: Software Paradigms and Workstations, 259.
Lister, A. (2013). Fundamentals of Operating Systems. Springer Science & Business Media.
Pinard, K. T. (2013). Computer Concepts: Illustrated Essentials. Cengage Learning.
Silberschatz, A., Galvin, P. B., & Gagne, G. (2011). Operating System Concepts Essentials. John Wiley & Sons.
Tanenbaum, A. S., & Bos, H. (2014). Modern Operating Systems. Prentice Hall Press.
