Bachelor of Science in Computer Science

Process Scheduling

Operating Systems

Habari! Let's Talk Computers... and Matatus!

Habari, mwanafunzi! (Hello, student!) I hope you are having a great day. Today, we are diving into one of the most fascinating topics in Operating Systems: Process Scheduling. Now, I know that sounds very technical, but stick with me. Have you ever been at a busy matatu terminus, like 'Railways' or 'Kencom' in Nairobi? Imagine hundreds of people trying to get into a few matatus. It looks like chaos, right? But there's a system. The 'makanga' (the tout) is the one deciding who gets in, which route goes next, and making sure everything moves.

In the world of computers, the CPU (Central Processing Unit) is our matatu – it does all the real work. The programs and tasks you run (like listening to music while browsing the web) are the passengers. And the Process Scheduler? That's our very efficient 'makanga'. It's the part of the Operating System that decides which process gets to use the CPU and for how long. Without it, your computer would be pure chaos!

Image Suggestion: A vibrant, stylized digital art piece in the Afrofuturism style. A central, glowing computer CPU, shaped like a futuristic Nairobi skyscraper, is managing streams of colorful data packets (the processes) that are flowing in from all directions. The scene is clean, modern, and energetic, conveying a sense of organized complexity.

The Life of a Process: From Birth to Termination

Before we can schedule a process, we need to understand its lifecycle. A process goes through several states. Think of it like a person's day: you wake up (New), you get ready to go to school (Ready), you are in class learning (Running), you wait for the teacher to finish writing on the board (Waiting), and finally, you go home (Terminated).


                  +-------- Interrupt --------+
                  v                           |
   +-------+   +--------+    Dispatch    +---------+    Exit    +------------+
   |  New  |-->| Ready  |--------------->| Running |----------->| Terminated |
   +-------+   +--------+                +---------+            +------------+
                  ^                           |
                  |     I/O or Event          |  I/O or Event
                  |     Complete              v  Wait
                  |                      +---------+
                  +----------------------| Waiting |
                                         +---------+

Note: an interrupt sends a Running process back to Ready (preemption), and a process moves to Terminated only from Running, when it exits.

  • New: The process is being created.
  • Ready: The process is waiting to be assigned to the CPU. It has everything it needs to run.
  • Running: Instructions are being executed by the CPU.
  • Waiting: The process is waiting for some event to occur (like waiting for a file to be read from the disk or for user input).
  • Terminated: The process has finished execution.
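These states and the legal moves between them can be sketched as a tiny Python model (a teaching aid, not how a real kernel represents process state; the `State` and `TRANSITIONS` names are our own):

```python
from enum import Enum, auto

class State(Enum):
    NEW = auto()
    READY = auto()
    RUNNING = auto()
    WAITING = auto()
    TERMINATED = auto()

# Legal moves, straight from the diagram above.
TRANSITIONS = {
    State.NEW:        {State.READY},        # admitted by the OS
    State.READY:      {State.RUNNING},      # scheduler dispatch
    State.RUNNING:    {State.READY,         # interrupted (preempted)
                       State.WAITING,       # I/O or event wait
                       State.TERMINATED},   # exit
    State.WAITING:    {State.READY},        # I/O or event complete
    State.TERMINATED: set(),                # no way back
}

def can_move(src: State, dst: State) -> bool:
    """True if the lifecycle diagram allows moving from src to dst."""
    return dst in TRANSITIONS[src]
```

Notice that a Waiting process can never jump straight back to Running: it must re-enter the Ready queue and be dispatched again.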

Preemptive vs. Non-Preemptive: Can You Be Interrupted?

Scheduling can happen in two main ways. To understand this, let's use a local example.

Imagine you are at a KCB bank counter. Once the teller starts serving you, they will complete your entire transaction before moving to the next person. No interruptions! This is Non-Preemptive Scheduling. Once a process is given the CPU, it keeps it until it finishes or voluntarily releases it (e.g., to wait for I/O).

Now, think of the emergency room at Aga Khan Hospital. You might be there with a small cut, and a critically injured patient arrives. The doctor will immediately stop treating you to attend to the more urgent case. This is Preemptive Scheduling. The OS can interrupt a running process and give the CPU to a more important (higher-priority) one.

The Main Event: Scheduling Algorithms!

The OS uses different rules, or algorithms, to decide which process in the 'Ready' queue gets the CPU next. Let's look at the most common ones.

1. First-Come, First-Served (FCFS)

This is the simplest algorithm. Just like a queue for M-PESA, the first process to arrive is the first one to be served. It is a non-preemptive algorithm.

  • Pros: Very simple to understand and implement.
  • Cons: The average waiting time can be very long, especially if a short process gets stuck behind a very long one. This is called the convoy effect.

2. Shortest-Job-First (SJF)

This algorithm is clever! It looks at all the processes in the ready queue and picks the one with the shortest CPU burst, i.e., the one that will finish soonest. It comes in both non-preemptive and preemptive variants.

  • Non-Preemptive SJF: Once a process starts, it runs to completion.
  • Preemptive SJF (also called Shortest-Remaining-Time-First or SRTF): If a new process arrives with a shorter completion time than what's remaining for the currently running process, the OS will preempt the current one and run the new, shorter one.
  • Pros: It is provably optimal, giving the minimum average waiting time for a given set of processes.
  • Cons: How do you know the exact length of the next CPU burst? You can't! We have to predict it (commonly with an exponential average of previous bursts), which can be inaccurate.
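To make SRTF concrete, here is a minimal millisecond-by-millisecond simulation in Python (the function name `srtf_waiting_times` and the sample process tuples are our own illustrative choices, not anything standard):

```python
def srtf_waiting_times(procs):
    """procs: list of (name, arrival_time, burst_time) tuples.
    Simulates preemptive SJF (SRTF) one millisecond at a time and
    returns {name: waiting_time}."""
    remaining = {name: b for name, _, b in procs}
    arrival = {name: arr for name, arr, _ in procs}
    burst = {name: b for name, _, b in procs}
    finish = {}
    t = 0
    while remaining:
        # Processes that have arrived and still have work left.
        ready = [n for n in remaining if arrival[n] <= t]
        if not ready:
            t += 1                       # CPU idles until the next arrival
            continue
        # The preemption check happens every tick: always pick the process
        # with the shortest REMAINING time, not the shortest total burst.
        current = min(ready, key=lambda n: remaining[n])
        remaining[current] -= 1
        t += 1
        if remaining[current] == 0:
            finish[current] = t
            del remaining[current]
    # Waiting time = turnaround - burst = (finish - arrival) - burst.
    return {n: finish[n] - arrival[n] - burst[n] for n in finish}
```

For example, with P1 (arrives at 0 ms, burst 8 ms), P2 (1, 4), P3 (2, 9) and P4 (3, 5), this gives waiting times of 9, 0, 15 and 2 ms respectively, for an average of 6.5 ms.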

3. Round Robin (RR)

This is the most common algorithm used in modern time-sharing systems. It's designed to be fair. The OS defines a small unit of time, called a time quantum (or time slice), typically 10-100 milliseconds. Each process gets the CPU for one time quantum. If it's not finished by the end, it's sent to the back of the ready queue.

Think of a teacher trying to help five students with their work. Instead of spending 30 minutes with the first student while others wait, she spends 5 minutes with each student, one by one, and then cycles back. Everyone gets attention in a fair manner. That's Round Robin!

  • Pros: Very fair and gives good response time. No process has to wait too long.
  • Cons: A lot of time can be wasted in context switching (we'll cover this later!) if the quantum is too small.
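As a sketch of the mechanics, here is a minimal Round Robin simulation in Python (all processes assumed to arrive at time 0; `rr_waiting_times` is our own helper name):

```python
from collections import deque

def rr_waiting_times(procs, quantum):
    """procs: list of (name, burst_time) pairs, all arriving at time 0.
    Returns {name: waiting_time} under Round Robin with the given quantum."""
    remaining = dict(procs)
    queue = deque(name for name, _ in procs)
    finish = {}
    t = 0
    while queue:
        name = queue.popleft()
        run = min(quantum, remaining[name])  # one time slice, or less if nearly done
        t += run
        remaining[name] -= run
        if remaining[name] == 0:
            finish[name] = t                 # finished: record completion time
        else:
            queue.append(name)               # not finished: back of the queue
    bursts = dict(procs)
    # Waiting time = turnaround - burst; turnaround = finish since arrival is 0.
    return {n: finish[n] - bursts[n] for n in finish}
```

With bursts of 24, 3 and 3 ms and a 4 ms quantum, this gives waiting times of 6, 4 and 7 ms, an average of about 5.7 ms.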

Let's Do Some Hesabu! (Calculations)

Seeing is believing! Let's calculate the performance of these algorithms. We need to know two key metrics:

  • Turnaround Time: Total time from when a process arrives until it is completely finished. (Completion Time - Arrival Time)
  • Waiting Time: Total time a process spends waiting in the ready queue. (Turnaround Time - Burst Time)

Consider these processes that arrive at time 0:


Process   | Burst Time (ms)
---------------------------
P1        | 24
P2        | 3
P3        | 3

Example 1: FCFS Calculation

If the processes arrive in the order P1, P2, P3, the schedule looks like this. We can show it using a Gantt Chart.


Gantt Chart (FCFS):
|    P1    | P2 | P3 |
0          24   27   30

Calculations:
Waiting Time:
P1 = 0
P2 = 24
P3 = 27
Average Waiting Time = (0 + 24 + 27) / 3 = 51 / 3 = 17 ms

Turnaround Time:
P1 = 24 - 0 = 24
P2 = 27 - 0 = 27
P3 = 30 - 0 = 30
Average Turnaround Time = (24 + 27 + 30) / 3 = 81 / 3 = 27 ms

That's a long wait for P2 and P3!
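If you'd like to double-check such numbers, a few throwaway lines of Python do the arithmetic (the helper name `fcfs_metrics` is ours):

```python
def fcfs_metrics(bursts):
    """bursts: list of burst times in arrival order; all arrive at time 0.
    Returns (waiting_times, turnaround_times) in the same order."""
    waiting, turnaround, t = [], [], 0
    for burst in bursts:
        waiting.append(t)          # waits for everything scheduled before it
        t += burst
        turnaround.append(t)       # finishes at the running total

    return waiting, turnaround

w, tat = fcfs_metrics([24, 3, 3])
print(sum(w) / len(w))       # 17.0 (average waiting time)
print(sum(tat) / len(tat))   # 27.0 (average turnaround time)
```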

Example 2: SJF (Non-Preemptive) Calculation

Now, let's see what happens if the OS is smart and uses SJF. It will schedule P2, then P3, then the long P1.


Gantt Chart (SJF):
| P2 | P3 |    P1    |
0    3    6          30

Calculations:
Waiting Time:
P1 = 6 (It waited for P2 and P3 to finish)
P2 = 0
P3 = 3
Average Waiting Time = (6 + 0 + 3) / 3 = 9 / 3 = 3 ms

Turnaround Time:
P1 = 30 - 0 = 30
P2 = 3 - 0 = 3
P3 = 6 - 0 = 6
Average Turnaround Time = (30 + 3 + 6) / 3 = 39 / 3 = 13 ms

Look at that! The average waiting time dropped from 17 ms to just 3 ms. That is a huge improvement in performance!
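Since non-preemptive SJF with identical arrival times is just first-come-first-served on the burst-sorted order, a small sketch can verify these numbers (again, `sjf_metrics` is our own illustrative helper):

```python
def sjf_metrics(procs):
    """procs: {name: burst_time}, all arriving at time 0 (non-preemptive SJF).
    Returns ({name: waiting_time}, {name: turnaround_time})."""
    waiting, turnaround, t = {}, {}, 0
    # With equal arrival times, SJF simply runs jobs in order of burst length.
    for name, burst in sorted(procs.items(), key=lambda kv: kv[1]):
        waiting[name] = t
        t += burst
        turnaround[name] = t
    return waiting, turnaround

w, tat = sjf_metrics({"P1": 24, "P2": 3, "P3": 3})
print(sum(w.values()) / 3)     # 3.0 (average waiting time)
print(sum(tat.values()) / 3)   # 13.0 (average turnaround time)
```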

Image Suggestion: A split-screen illustration. On the left, a long line of people (labeled P2, P3) are waiting unhappily behind one person (P1) who is taking a very long time at an M-PESA agent. The caption reads 'FCFS'. On the right, the M-PESA agent is smartly serving the two people with quick transactions (P2, P3) first, while the person with the long transaction (P1) waits. Everyone looks happier. The caption reads 'SJF'. Style: Clean, infographic-style cartoon.

What is Context Switching?

When the OS decides to stop one process (e.g., P1) and start another one (e.g., P2), it can't just switch instantly. It has to save the exact state of P1 (its progress, what it was doing) and then load the saved state of P2 to resume it. This process of saving the context of one process and loading the context of another is called a Context Switch. It's pure overhead – no useful work is done during a context switch. This is why a very small time quantum in Round Robin can be inefficient, as it causes many context switches.
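One back-of-the-envelope way to quantify this: if every quantum of length q is followed by a context switch of length s, only about q / (q + s) of the CPU's time does useful work. A quick sketch (the 1 ms switch cost is an assumed round number for illustration; real switches are usually far shorter):

```python
def useful_fraction(quantum_ms, switch_ms):
    """Fraction of CPU time spent on real work if every quantum of
    length quantum_ms is followed by a switch of length switch_ms."""
    return quantum_ms / (quantum_ms + switch_ms)

# Assumed 1 ms switch cost: the smaller the quantum, the worse the overhead.
for q in (1, 10, 100):
    print(q, round(useful_fraction(q, switch_ms=1), 3))
# 1 0.5
# 10 0.909
# 100 0.99
```

With a 1 ms quantum, half the CPU's time goes to switching; with a 100 ms quantum, overhead is about 1%. This is the trade-off behind choosing a sensible quantum for Round Robin.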

Wrapping It Up: Why This All Matters

Process scheduling is the heart of a modern multi-tasking operating system. A good scheduling algorithm ensures that:

  • The CPU is kept as busy as possible (High CPU Utilization).
  • The system feels responsive and fast to you, the user (Low Response Time).
  • All processes get a fair share of the CPU, preventing any single process from hogging it.

The next time you are playing a game, listening to music, and downloading a file all at once, say a little "Asante" (thank you) to the process scheduler in your OS. It's the master 'makanga' making sure all your digital passengers get where they need to go smoothly!


Cheza na Akili! (Challenge Yourself!)

Given the following processes, calculate the average waiting time and turnaround time for the FCFS and SJF algorithms. Assume they all arrive at time 0.


Process   | Burst Time (ms)
---------------------------
P1        | 6
P2        | 8
P3        | 7
P4        | 3

Try drawing the Gantt charts and doing the calculations yourself. It's the best way to learn! Kazi kwako! (Over to you!)

Pro Tip

Take your own short notes while going through the topics.
