The cover was printed by Phoenix Color Corporation.
This book is printed on acid-free paper. The paper in this book was manufactured by a mill whose forest-management programs include sustained-yield harvesting of its timberlands. Sustained-yield harvesting principles ensure that the number of trees cut each year does not exceed the amount of new growth.
All rights reserved. No part of this publication may be reproduced, stored in a retrieval system, or transmitted in any form or by any means (electronic, mechanical, photocopying, recording, scanning, or otherwise), except as permitted under the United States Copyright Act, without either the prior written permission of the Publisher or authorization through payment of the appropriate per-copy fee to the Copyright Clearance Center, Rosewood Drive, Danvers, MA.

Operating systems are an essential part of any computer system. Similarly, a course on operating systems is an essential part of any computer-science education.
This field is undergoing change at a breathtakingly rapid rate, as computers are now prevalent in virtually every application, from games for children through the most sophisticated planning tools for governments and multinational firms.
Yet the fundamental concepts remain fairly clear, and it is on these that we base this book. We wrote this book as a text for an introductory course in operating systems at the junior or senior undergraduate level or at the first-year graduate level. It provides a clear description of the concepts that underlie operating systems. As prerequisites, we assume that the reader is familiar with basic data structures, computer organization, and a high-level language, such as C.
The hardware topics required for an understanding of operating systems are included in Chapter 2. For code examples, we use predominantly C, as well as some Java, but the reader can still understand the algorithms without a thorough knowledge of these languages. The fundamental concepts and algorithms covered in the book are often based on those used in existing commercial operating systems. Our aim is to present these concepts and algorithms in a general setting that is not tied to one particular operating system.
Important theoretical results are covered, but formal proofs are omitted. The bibliographical notes contain pointers to research papers in which results were first presented and proved, as well as references to material for further reading.
In place of proofs, figures and examples are used to suggest why we should expect the result in question to be true.

Content of this Book

The text is organized in seven major parts:

Overview: Chapters 1 through 3 explain what operating systems are, what they do, and how they are designed and constructed.
They explain how the concept of an operating system has developed, what the common features of an operating system are, what an operating system does for the user, and what it does for the computer-system operator. The presentation is motivational, historical, and explanatory in nature.
We have avoided a discussion of how things are done internally in these chapters. Therefore, they are suitable for individuals or for students in lower-level classes who want to learn what an operating system is without getting into the details of the internal algorithms. Chapter 2 covers the hardware topics that are important to an understanding of operating systems.

Process management: Chapters 4 through 8 describe the process concept and concurrency as the heart of modern operating systems.
A process is the unit of work in a system. Such a system consists of a collection of concurrently executing processes, some of which are operating-system processes (those that execute system code) and the rest of which are user processes (those that execute user code).
These chapters cover methods for process scheduling, interprocess communication, process synchronization, and deadlock handling. Also included under this topic is a discussion of threads.

Storage management: Chapters 9 through 12 deal with a process in main memory during execution. To improve both the utilization of the CPU and the speed of its response to its users, the computer must keep several processes in memory.
There are many different memory-management schemes. These schemes reflect various approaches to memory management, and the effectiveness of the different algorithms depends on the situation. Since main memory is usually too small to accommodate all data and programs, and since it cannot store data permanently, the computer system must provide secondary storage to back up main memory.
Most modern computer systems use disks as the primary on-line storage medium for information, both programs and data. The file system provides the mechanism for on-line storage of and access to both data and programs residing on the disks. These chapters describe the classic internal algorithms and structures of storage management. They provide a firm practical understanding of the algorithms used, including their properties, advantages, and disadvantages.
I/O systems: Chapters 13 and 14 describe the devices that attach to a computer and the multiple dimensions in which they vary. In many ways, they are also the slowest major components of the computer. Because devices differ so widely, the operating system needs to provide a wide range of functionality to applications to allow them to control all aspects of the devices. Because devices are a performance bottleneck, performance issues are examined.
Matters related to secondary and tertiary storage are explained as well.
Distributed systems: Chapters 15 through 17 deal with a collection of processors that do not share memory or a clock: a distributed system. Such a system provides the user with access to the various resources that the system maintains. Access to a shared resource allows computation speedup and improved data availability and reliability.
Such a system also provides the user with a distributed file system, which is a file-service system whose users, servers, and storage devices are dispersed among the sites of a distributed system. A distributed system must provide various mechanisms for process synchronization and communication, for dealing with the deadlock problem and the variety of failures that are not encountered in a centralized system.
Protection and security: Chapters 18 and 19 explain the processes in an operating system that must be protected from one another's activities. For the purposes of protection and security, we use mechanisms that ensure that only those processes that have gained proper authorization from the operating system can operate on the files, memory segments, CPU, and other resources.
Protection is a mechanism for controlling the access of programs, processes, or users to the resources defined by a computer system. This mechanism must provide a means for specifying the controls to be imposed, as well as a means of enforcement. Security protects the information stored in the system (both data and code), as well as the physical resources of the computer system, from unauthorized access, malicious destruction or alteration, and accidental introduction of inconsistency.
Case studies: Chapters 20 through 22, in the book, and Appendices A through C, on the website, integrate the concepts described in this book by describing real operating systems. We chose Linux and FreeBSD because most of UNIX's internal algorithms were selected for simplicity, rather than for speed or sophistication. Both Linux and FreeBSD are readily available to computer-science departments, so many students have access to these systems. We chose Windows because it provides an opportunity for us to study a modern operating system that has a design and implementation drastically different from those of UNIX.
We also cover the Nachos system, which allows students to get their hands dirty: to take apart the code for an operating system, to see how it works at a low level, to build significant pieces of the operating system themselves, and to observe the effects of their work. Chapter 22 briefly describes a few other influential operating systems.

The Sixth Edition

As we wrote this Sixth Edition, we were guided by the many comments and suggestions we received from readers of our previous editions, as well as by our own observations about the rapidly changing fields of operating systems and networking.
We rewrote the material in most of the chapters, bringing older material up to date and removing material that was no longer of interest. We rewrote all the Pascal code, used in previous editions to demonstrate certain algorithms, into C, and we included a small amount of Java as well. We made substantive revisions and changes in organization in many of the chapters.
Most importantly, we added two new chapters and reorganized the distributed-systems coverage. Because networking and distributed systems have become more prevalent in operating systems, we moved some distributed-systems material (client-server computing, in particular) out of the distributed-systems chapters and integrated it into earlier chapters. Chapter 4, Processes, includes new sections describing sockets and remote procedure calls (RPCs).
Chapter 5, Threads, is a new chapter that covers multithreaded computer systems. Many modern operating systems now provide features for a process to contain multiple threads of control. Chapters 6 through 10 are the old Chapters 5 through 9, respectively. Chapter 11, File-System Interface, is the old Chapter 10. Chapters 12 and 13 are the old Chapters 11 and 12, respectively.
Chapter 14, Mass-Storage Structure, combines old Chapters 13 and 14. Chapter 15, Distributed System Structures, combines old Chapters 15 and 16. Chapter 19, Security, is the old Chapter 20. Chapter 20, The Linux System, is the old Chapter 22, updated to cover recent developments. Chapter 21, Windows 2000, is a new chapter. Chapter 22, Historical Perspective, is carried over from the previous edition. Appendix B covers the Mach operating system. Appendix C covers the Nachos system.
The three appendices are provided online.

Teaching Supplements and Web Page

The web page for this book contains the three appendices, the set of slides that accompanies the book (in PDF and PowerPoint format), the three case studies, the most recent errata list, and a link to the authors' home page. You can find your local representative at the "Find a Rep?" page of the publisher's website. We have created a mailing list consisting of users of our book. If you wish to be on the list, please send a message to avi@bell-labs.com.
We would appreciate hearing from you about any textual errors or omissions that you identify. If you would like to suggest improvements or to contribute exercises, we would also be glad to hear from you.
Acknowledgments This book is derived from the previous editions, the first three of which were coauthored by James Peterson.
We thank the following people who contributed to this edition of the book: Bruce Hillyer reviewed and helped with the rewrite of Chapters 2, 12, 13, and 14. Mike Reiter reviewed and helped with the rewrite of Chapter 18. Parts of Chapter 14 were derived from a paper by Hillyer and Silberschatz.
Parts of Chapter 17 were derived from a paper by Levy and Silberschatz. Chapter 20 was derived from an unpublished manuscript by Stephen Tweedie. Chapter 21 was derived from an unpublished manuscript by Cliff Martin. Mike Shapiro reviewed the Solaris information, and Jim Mauro answered several Solaris-related questions. We also thank the many people who reviewed this edition of the book.
They were both assisted by Susannah Barr, who managed the many details of this project smoothly. Katherine Hepburn was our Marketing Manager. The cover illustrator was Susan Cyr, and the cover designer was Madelyn Lesure.
Barbara Heaney was in charge of overseeing the copy-editing, and Katie Habib copyedited the manuscript. The freelance proofreader was Katrina Avery; the freelance indexer was Rosemary Simpson.
Marilyn Turnamian helped generate figures and update the text, Instructor's Manual, and slides.
Finally, we would like to add some personal notes. Avi would like to extend his gratitude to Krystyna Kwiecien, whose devoted care of his mother has given him the peace of mind he needed to focus on the writing of this book. Pete would like to thank Harry Kasparian and his other co-workers, who gave him the freedom to work on this project while doing his "real job". Greg would like to acknowledge two significant achievements by his children during the period he worked on this text: Tom (age 5) learned to read, and Jay (age 2) learned to talk.
Part One: Overview

An operating system is a program that acts as an intermediary between the user of a computer and the computer hardware. The purpose of an operating system is to provide an environment in which a user can execute programs in a convenient and efficient manner.
We trace the development of operating systems from the first hands-on systems to multiprogrammed and time-shared systems. Understanding the evolution of operating systems gives us an appreciation for what an operating system does and how it does it. The operating system must ensure the correct operation of the computer system. To prevent user programs from interfering with the proper operation of the system, the hardware must provide appropriate mechanisms.
We describe the basic computer architecture that makes it possible to write a correct operating system. The operating system provides certain services to programs and to the users of those programs in order to make their tasks easier. The services differ from one operating system to another, but we identify and explore some common classes of these services.

Chapter 1: Introduction

An operating system is a program that manages the computer hardware.
It also provides a basis for application programs and acts as an intermediary between a user of a computer and the computer hardware. An amazing aspect of operating systems is how varied they are in accomplishing these tasks.
Mainframe operating systems are designed primarily to optimize utilization of hardware. Personal computer (PC) operating systems support complex games, business applications, and everything in between. Handheld-computer operating systems are designed to provide an environment in which a user can easily interface with the computer to execute programs. Thus, some operating systems are designed to be convenient, others to be efficient, and others some combination of the two.
To understand what operating systems are, we must first understand how they have developed. In this chapter, we trace the development of operating systems from the first hands-on systems through multiprogrammed and time-shared systems to PCs and handheld computers. We also discuss operating-system variations, such as parallel, real-time, and embedded systems. As we move through the various stages, we see how the components of operating systems evolved as natural solutions to problems in early computer systems.
What Is an Operating System?

An operating system is an important part of almost every computer system. A computer system can be divided roughly into four components: the hardware, the operating system, the application programs, and the users. The hardware (the central processing unit (CPU), the memory, and the input/output (I/O) devices) provides the basic computing resources.
The application programs (such as word processors, spreadsheets, compilers, and web browsers) define the ways in which these resources are used to solve the computing problems of the users. The operating system controls and coordinates the use of the hardware among the various application programs for the various users.
The components of a computer system are its hardware, software, and data. The operating system provides the means for the proper use of these resources in the operation of the computer system.
An operating system is similar to a government. Like a government, it performs no useful function by itself. It simply provides an environment within which other programs can do useful work.

Operating systems can be explored from two viewpoints: that of the user and that of the system. Most computer users sit in front of a PC, consisting of a monitor, keyboard, mouse, and system unit. Such a system is designed for one user to monopolize its resources, to maximize the work (or play) that the user is performing. In this case, the operating system is designed mostly for ease of use, with some attention paid to performance and little paid to resource utilization. Some users sit at a terminal connected to a mainframe or minicomputer.
Other users are accessing the same computer through other terminals. These users share resources and may exchange information. Other users sit at workstations, connected to networks of other workstations and servers. These users have dedicated resources at their disposal, but they also share resources such as networking and servers (file, compute, and print servers). Therefore, their operating system is designed to compromise between individual usability and resource utilization.
Recently, many varieties of handheld computers have come into fashion. These devices are mostly standalone, used singly by individual users. Some are connected to networks, either directly by wire or (more often) through wireless modems. Due to power and interface limitations, they perform relatively few remote operations.
The operating systems are designed mostly for individual usability, but performance per amount of battery life is important as well.
Some computers have little or no user view. For example, embedded computers in home devices and automobiles may have a numeric keypad and may turn indicator lights on or off to show status, but mostly they and their operating systems are designed to run without user intervention.

We can view an operating system as a resource allocator. A computer system has many resources (hardware and software) that may be required to solve a problem: CPU time, memory space, file-storage space, I/O devices, and so on. The operating system acts as the manager of these resources.
Facing numerous and possibly conflicting requests for resources, the operating system must decide how to allocate them to specific programs and users so that it can operate the computer system efficiently and fairly. An operating system is a control program.
A control program manages the execution of user programs to prevent errors and improper use of the computer. In general, however, we have no completely adequate definition of an operating system. Operating systems exist because they are a reasonable way to solve the problem of creating a usable computing system.
The fundamental goal of computer systems is to execute user programs and to make solving user problems easier. Toward this goal, computer hardware is constructed. Since bare hardware alone is not particularly easy to use, application programs are developed. The common functions of controlling and allocating resources are then brought together into one piece of software: the operating system. In addition, we have no universally accepted definition of what is part of the operating system.
A simple viewpoint is that it includes everything a vendor ships when you order "the operating system." The features included, however, vary greatly across systems. Some systems take up less than 1 megabyte of space and lack even a full-screen editor, whereas others require hundreds of megabytes of space and are entirely based on graphical windowing systems. A more common definition is that the operating system is the one program running at all times on the computer (usually called the kernel), with all else being application programs.
This last definition is the one that we generally follow. The matter of what constitutes an operating system is becoming important. In 1998, the United States Department of Justice filed suit against Microsoft, in essence claiming that Microsoft included too much functionality in its operating systems and thus prevented competition from application vendors.
The primary goal of some operating systems is convenience for the user. Operating systems exist because they are supposed to make it easier to compute with them than without them. This view is particularly clear when you look at operating systems for small PCs. The primary goal of other operating systems is efficient operation of the computer system. This is the case for large, shared, multiuser systems. These systems are expensive, so it is desirable to make them as efficient as possible. These two goals (convenience and efficiency) are sometimes contradictory.
In the past, efficiency was often more important than convenience. Thus, much of operating-system theory concentrates on optimal use of computing resources. Operating systems have also evolved over time. For example, UNIX started with a keyboard and printer as its interface, limiting how convenient it could be for the user.
Over time, hardware changed, and UNIX was ported to new hardware with more user-friendly interfaces. Many graphical user interfaces (GUIs) were added as well.

The design of an operating system is a complex task. Designers face many tradeoffs in the design and implementation, and many people are involved not only in bringing an operating system to fruition but also in constantly revising and updating it. How well any given operating system meets its design goals is open to debate, and is subjective to the different users of the operating system.
To see what operating systems are and what they do, let us consider how they have developed over the past 45 years. By tracing that evolution, we can identify the common elements of operating systems, and see how and why these systems have developed as they have.
Operating systems and computer architecture have influenced each other a great deal. To facilitate the use of the hardware, researchers developed operating systems. Users of the operating systems then proposed changes in hardware design to simplify them. In this short historical review, notice how identification of operating-system problems led to the introduction of new hardware features.
In this section, we trace the growth of mainframe systems from simple batch systems, where the computer runs one (and only one) application, to time-shared systems, which allow for user interaction with the computer system.

Batch Systems

Early computers were physically enormous machines run from a console. The common input devices were card readers and tape drives. The common output devices were line printers, tape drives, and card punches. The user did not interact directly with the computer systems.
Rather, the user prepared a job (which consisted of the program, the data, and some control information about the nature of the job, called control cards) and submitted it to the computer operator.
The job was usually in the form of punch cards. At some later time (after minutes, hours, or days), the output appeared. The output consisted of the result of the program, as well as a dump of the final memory and register contents for debugging. The operating system in these early computers was fairly simple. Its major task was to transfer control automatically from one job to the next. The operating system was always resident in memory (Figure 1.2). To speed up processing, operators batched together jobs with similar needs and ran them through the computer as a group.
Thus, the programmers would leave their programs with the operator. The operator would sort programs into batches with similar requirements and, as the computer became available, would run each batch. The output from each job would be sent back to the appropriate programmer.

In this execution environment, the CPU is often idle, because the speeds of the mechanical I/O devices are intrinsically slower than those of electronic devices. Even a slow CPU works in the microsecond range, with thousands of instructions executed per second.
A fast card reader, on the other hand, might read 1,200 cards per minute (20 cards per second), so the CPU spent most of its time waiting. Over time, improvements in technology resulted in faster I/O devices. However, CPU speeds increased to an even greater extent, so the problem was not only unresolved, but exacerbated.
The introduction of disk technology allowed the operating system to keep all jobs on a disk, rather than in a serial card reader.
With direct access to several jobs, the operating system could perform job scheduling to use resources and perform tasks efficiently. We discuss a few important aspects of job and CPU scheduling here; we discuss them in detail in Chapter 6.

Multiprogrammed Systems

Multiprogramming increases CPU utilization by organizing jobs so that the CPU always has one to execute.
The idea is as follows: The operating system keeps several jobs in memory simultaneously (Figure 1.3). This set of jobs is a subset of the jobs kept in the job pool, since the number of jobs that can be kept simultaneously in memory is usually much smaller than the number of jobs that can be in the job pool.
The operating system picks and begins to execute one of the jobs in memory; eventually, that job may have to wait for some task, such as an I/O operation, to complete. In a non-multiprogrammed system, the CPU would sit idle. In a multiprogramming system, the operating system simply switches to, and executes, another job. When that job needs to wait, the CPU is switched to another job, and so on.
Eventually, the first job finishes waiting and gets the CPU back. As long as at least one job needs to execute, the CPU is never idle. This idea is common in other life situations.
A lawyer does not work for only one client at a time. While one case is waiting to go to trial or have papers typed, the lawyer can work on another case.
If she has enough clients, the lawyer will never be idle for lack of work. Idle lawyers tend to become politicians, so there is a certain social value in keeping lawyers busy. Multiprogramming is the first instance where the operating system must make decisions for the users. Multiprogrammed operating systems are there- fore fairly sophisticated. All the jobs that enter the system are kept in the job pool.
This pool consists of all processes residing on disk awaiting allocation of main memory. If several jobs are ready to be brought into memory, and if there is not enough room for all of them, then the system must choose among them. Making this decision is job scheduling, which is discussed in Chapter 6. When the operating system selects a job from the job pool, it loads that job into memory for execution.
Having several programs in memory at the same time requires some form of memory management, which is covered in Chapters 9 and 10. In addition, if several jobs are ready to run at the same time, the system must choose among them. Making this decision is CPU scheduling, which is discussed in Chapter 6. Finally, multiple jobs running concurrently require that their ability to affect one another be limited in all phases of the operating system. These considerations are discussed throughout the text.
Time sharing (or multitasking) is a logical extension of multiprogramming. The CPU executes multiple jobs by switching among them, but the switches occur so frequently that the users can interact with each program while it is running. An interactive (or hands-on) computer system provides direct communication between the user and the system.
The user gives instructions to the operating system or to a program directly, using a keyboard or a mouse, and waits for immediate results. Accordingly, the response time should be short, typically within 1 second or so. A time-shared operating system allows many users to share the computer simultaneously. Since each action or command in a time-shared system tends to be short, only a little CPU time is needed for each user.
As the system switches rapidly from one user to the next, each user is given the impression that the entire computer system is dedicated to her use, even though it is being shared among many users.
A time-shared operating system uses CPU scheduling and multiprogramming to provide each user with a small portion of a time-shared computer. Each user has at least one separate program in memory.
A program loaded into memory and executing is commonly referred to as a process. When a process executes, it typically executes for only a short time before it either finishes or needs to perform I/O. Input, for example, may be bounded by the user's typing speed; seven characters per second is fast for people, but incredibly slow for computers.
Rather than let the CPU sit idle when this interactive input takes place, the operating system will rapidly switch the CPU to the program of some other user. Time-sharing operating systems are even more complex than multiprogrammed operating systems. In both, several jobs must be kept simultaneously in memory, so the system must have memory management and protection (Chapter 9). To obtain a reasonable response time, jobs may have to be swapped in and out of main memory to the disk that now serves as a backing store for main memory.
A common method for achieving this goal is virtual memory, which is a technique that allows the execution of a job that may not be completely in memory (Chapter 10). The main advantage of the virtual-memory scheme is that it enables users to run programs that are larger than actual physical memory. Further, it abstracts main memory into a large, uniform array of storage, separating logical memory as viewed by the user from physical memory. This arrangement frees programmers from concern over memory-storage limitations.
Time-sharing systems must also provide a file system (Chapters 11 and 12). The file system resides on a collection of disks; hence, disk management must be provided (Chapter 14). Also, time-sharing systems provide a mechanism for concurrent execution, which requires sophisticated CPU-scheduling schemes (Chapter 6). To ensure orderly execution, the system must provide mechanisms for job synchronization and communication (Chapter 7), and it may ensure that jobs do not get stuck in a deadlock, forever waiting for one another (Chapter 8).
The idea of time sharing was demonstrated as early as 1960, but since time-shared systems are difficult and expensive to build, they did not become common until the early 1970s. Although some batch processing is still done, most systems today are time sharing. Accordingly, multiprogramming and time sharing are the central themes of modern operating systems, and they are the central themes of this book.

Desktop Systems

Personal computers (PCs) appeared in the 1970s.
During their first decade, the CPUs in PCs lacked the features needed to protect an operating system from user programs. PC operating systems therefore were neither multiuser nor multitasking. However, the goals of these operating systems have changed with time; instead of maximizing CPU and peripheral utilization, the systems opt for maximizing user convenience and responsiveness.
The Apple Macintosh operating system has been ported to more advanced hardware, and now includes new features, such as virtual memory and multitasking. Operating systems for these computers have benefited in several ways from the development of operating systems for mainframes.
Microcomputers were immediately able to adopt some of the technology developed for larger operating systems. On the other hand, the hardware costs for microcomputers are sufficiently low that individuals have sole use of the computer, and CPU utilization is no longer a prime concern. Thus, some of the design decisions made in operating systems for mainframes may not be appropriate for smaller systems.
For example, file protection was, at first, not necessary on a personal machine. However, these computers are now often tied into other computers over local-area networks or other Internet connections.
When other computers and other users can access the files on a PC, file protection again becomes a necessary feature of the operating system.
The lack of such protection has made it easy for malicious programs to destroy data on systems such as MS-DOS and the Macintosh operating system. These programs may be self-replicating, and may spread rapidly via worm or virus mechanisms and disrupt entire companies or even worldwide networks. Advanced time-sharing features such as protected memory and file permissions are not enough, on their own, to safeguard a system from attack. Recent security breaches have shown this time and again.
These topics are discussed in Chapters 18 and 19. Most systems to date are single-processor systems; that is, they have only one main CPU. However, multiprocessor systems (also known as parallel systems or tightly coupled systems) are growing in importance. Such systems have more than one processor in close communication, sharing the computer bus, the clock, and sometimes memory and peripheral devices.
Multiprocessor systems have three main advantages. Increased throughput. By increasing the number of processors, we hope to get more work done in less time. The speed-up ratio with N processors is not N; rather, it is less than N.
When multiple processors cooperate on a task, a certain amount of overhead is incurred in keeping all the parts working correctly. This overhead, plus contention for shared resources, lowers the expected gain from additional processors. Similarly, a group of N programmers working closely together does not result in N times the amount of work being accomplished.
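The effect of this coordination overhead on speed-up can be illustrated with a small calculation. The overhead model below (a fixed fraction of extra work per added processor) is an illustrative assumption for this sketch, not a formula from the text:

```python
def speedup(n_processors, overhead_fraction=0.05):
    """Ideal speed-up would be N; coordination overhead makes it less than N.
    Each extra processor is assumed to add a fixed fraction of overhead work
    (an illustrative model chosen for this sketch)."""
    return n_processors / (1 + overhead_fraction * (n_processors - 1))

for n in (1, 2, 4, 8):
    # With overhead, 8 processors yield well under an 8x speed-up.
    print(n, round(speedup(n), 2))
```

Under this model the gap between the ideal speed-up N and the achieved speed-up widens as processors are added, matching the intuition in the paragraph above.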
Economy of scale. Multiprocessor systems can save more money than multiple single-processor systems, because they can share peripherals, mass storage, and power supplies. If several programs operate on the same set of data, it is cheaper to store those data on one disk and to have all the processors share them than to have many computers with local disks and many copies of the data. Increased reliability.
If functions can be distributed properly among several processors, then the failure of one processor will not halt the system, only slow it down. If we have ten processors and one fails, then each of the remaining nine processors must pick up a share of the work of the failed processor.
Thus, the entire system runs only 10 percent slower, rather than failing altogether. This ability to continue providing service proportional to the level of surviving hardware is called graceful degradation. Systems designed for graceful degradation are also called fault tolerant.
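The arithmetic behind the ten-processor example can be made explicit in a couple of lines (the function name here is an illustrative choice):

```python
def remaining_capacity(total_processors, failed_processors):
    """Fraction of the original throughput left after some processors fail,
    assuming work redistributes evenly among the survivors."""
    return (total_processors - failed_processors) / total_processors

# Ten processors, one fails: the system retains 90% of its capacity,
# i.e. it runs 10 percent slower rather than failing altogether.
slowdown = 1 - remaining_capacity(10, 1)
print(f"{slowdown:.0%} slower")
```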
Continued operation in the presence of failures requires a mechanism to allow the failure to be detected, diagnosed, and, if possible, corrected. The Tandem system uses both hardware and software duplication to ensure continued operation despite faults. The system consists of two identical processors, each with its own local memory. The processors are connected by a bus. One processor is the primary and the other is the backup. Two copies are kept of each process: one on the primary machine and one on the backup. At fixed checkpoints in the execution of the system, the state information of each job, including a copy of the memory image, is copied from the primary machine to the backup.
If a failure is detected, the backup copy is activated and is restarted from the most recent checkpoint. This solution is expensive, since it involves considerable hardware duplication. The most common multiple-processor systems now use symmetric multiprocessing (SMP), in which each processor runs an identical copy of the operating system, and these copies communicate with one another as needed. Some systems use asymmetric multiprocessing, in which each processor is assigned a specific task.
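The checkpoint-and-restart scheme described for the Tandem system can be sketched in simulation. The class and field names below are illustrative assumptions; the real system duplicates hardware, not Python objects:

```python
import copy

class Process:
    """A toy process whose entire 'memory image' is one counter."""
    def __init__(self):
        self.state = {"counter": 0}

    def step(self):
        self.state["counter"] += 1

primary = Process()
backup_checkpoint = None

# At fixed checkpoints, the primary's state (its "memory image")
# is copied to the backup machine.
for step in range(1, 6):
    primary.step()
    if step % 2 == 0:                  # checkpoint every two steps
        backup_checkpoint = copy.deepcopy(primary.state)

# On failure, the backup is activated and restarts from the most
# recent checkpoint; work done since that checkpoint is lost.
backup = Process()
backup.state = copy.deepcopy(backup_checkpoint)
print(backup.state["counter"])         # resumes from step 4, not step 5
```

The gap between the primary's last step and the restored state shows why checkpoint frequency is a trade-off: frequent checkpoints lose less work on failure but cost more copying during normal operation.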
A master processor controls the system; the other processors either look to the master for instruction or have predefined tasks. This scheme defines a master-slave relationship. The master processor schedules and allocates work to the slave processors. SMP means that all processors are peers; no master-slave relationship exists between processors.
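The asymmetric (master-slave) scheme just described can be sketched with threads standing in for processors. The queue-based dispatch and the squaring task are illustrative choices, not details from the text:

```python
import queue
import threading

work = queue.Queue()
results = queue.Queue()

def slave():
    """A slave processor: it only executes what the master assigns."""
    while True:
        item = work.get()
        if item is None:              # master signals shutdown
            break
        results.put(item * item)      # predefined task: square the input

# The master allocates all work; the slaves never schedule themselves.
workers = [threading.Thread(target=slave) for _ in range(3)]
for w in workers:
    w.start()
for n in range(5):
    work.put(n)
for _ in workers:
    work.put(None)
for w in workers:
    w.join()

print(sorted(results.queue))          # [0, 1, 4, 9, 16]
```

In an SMP system, by contrast, there would be no distinguished master thread: each peer would take work for itself.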
Each processor concurrently runs a copy of the operating system, as depicted in Figure 1. Such a system can be configured to employ dozens of processors, all running copies of UNIX.
The benefit of this model is that many processes can run simultaneously (N processes can run if there are N CPUs) without causing a significant deterioration of performance. However, since the CPUs are separate, one may be sitting idle while another is overloaded, resulting in inefficiencies.