When a computer is multiprogrammed, it frequently has multiple processes competing for the
CPU at the same time. This situation occurs whenever two or more processes are simultaneously
in the ready state. If only one CPU is available, a choice has to be made of which process to run
next. The part of the operating system that makes the choice is called the scheduler, and the
algorithm it uses is called the scheduling algorithm. These topics form the subject matter of the
following sections.

2.5.1 Introduction to Scheduling
Back in the old days of batch systems with input in the form of card images on a magnetic tape,
the scheduling algorithm was simple: just run the next job on the tape. With timesharing systems,
the scheduling algorithm became more complex because there were generally multiple users
waiting for service. (A batch job may be a request to run multiple programs in succession, but
for this section we will just assume it is a request to run a single program.) Because CPU time
is a scarce
resource on these machines, a good scheduler can make a big difference in perceived
performance and user satisfaction. Consequently, a great deal of work has gone into devising
clever and efficient scheduling algorithms.
Nearly all processes alternate bursts of computing with (disk) I/O requests, as shown in Fig. 2-37.
Typically the CPU runs for a while without stopping, then a system call is made to read from
a file or write to a file. When the system call completes, the CPU computes again until it needs
more data or has to write more data, and so on. Note that some I/O activities count as computing.
For example, when the CPU copies bits to a video RAM to update the screen, it is computing,
not doing I/O, because the CPU is in use. I/O in this sense is when a process enters the blocked
state waiting for an external device to complete its work.
Figure 2-37. Bursts of CPU usage...