
Department of Computer Science, UMass Amherst Andrew H. Fagg

Announcements/Reminders

- Class news group: rcfnews.cs.umass.edu::cmpsci.edlab.cs377

CMPSCI 377: Operating Systems Lecture 5, Page 1


Last Class: Processes

- A process is the unit of execution.
- Processes are represented as Process Control Blocks in the OS.
  - PCBs contain process state, scheduling and memory management information, etc.
- A process is either New, Ready, Waiting, Running, or Terminated.
- On a uniprocessor, there is at most one running process at a time.
- The program currently executing on the CPU is changed by performing a context switch.
- Processes communicate either with message passing or shared memory.

CMPSCI 377: Operating Systems Lecture 5, Page 2


Today: Threads

- What are threads?
- Where should we implement threads? In the kernel? In a user-level threads package?
- How should we schedule threads (or processes) onto the CPU?

CMPSCI 377: Operating Systems Lecture 5, Page 3


Processes versus Threads

- A process defines the address space, text, resources, etc.
- A thread defines a single sequential execution stream within a process (PC, stack, registers).
- Threads extract the thread of control information from the process.
- Threads are bound to a single process.
- Each process may have multiple threads of control within it.
  - The address space of a process is shared among all its threads.
  - No system calls are required for threads to cooperate.
  - Communication between threads is simpler than message passing or (process-level) shared memory.

CMPSCI 377: Operating Systems Lecture 5, Page 4


Classifying Threaded Systems

Operating systems can support one or many address spaces, and one or many threads per address space:

                      One thread      Many threads per address space
One address space     MS/DOS          Xerox Pilot, embedded systems
Many address spaces   UNIX, Ultrix    Mach, Chorus, NT, Solaris
CMPSCI 377: Operating Systems Lecture 5, Page 5


Example Threaded Program

main()
    global in, out, n, buffer[n]
    in = 0; out = 0
    fork thread (producer())
    fork thread (consumer())
end

producer:
    repeat
        nextp = produced item
        while (in+1) mod n == out do no-op    // buffer full: spin
        buffer[in] = nextp; in = (in+1) mod n

consumer:
    repeat
        while in == out do no-op              // buffer empty: spin
        nextc = buffer[out]; out = (out+1) mod n
        consume item nextc
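As a runnable sketch (not part of the original slide), the same bounded buffer can be written in Python with the `threading` module. The buffer size `N` and the produced items are illustrative; the busy-waiting relies on CPython's interpreter switching between threads, and real code would use locks or `queue.Queue` instead of spinning:

```python
import threading

N = 8                  # buffer capacity; one slot stays empty to tell "full" from "empty"
buffer = [None] * N
in_idx = 0             # next free slot (written only by the producer)
out_idx = 0            # next item to consume (written only by the consumer)
results = []

def producer(items):
    global in_idx
    for item in items:
        while (in_idx + 1) % N == out_idx:   # buffer full: busy-wait
            pass
        buffer[in_idx] = item
        in_idx = (in_idx + 1) % N

def consumer(count):
    global out_idx
    for _ in range(count):
        while in_idx == out_idx:             # buffer empty: busy-wait
            pass
        results.append(buffer[out_idx])
        out_idx = (out_idx + 1) % N

items = list(range(20))
t1 = threading.Thread(target=producer, args=(items,))
t2 = threading.Thread(target=consumer, args=(len(items),))
t1.start(); t2.start()
t1.join(); t2.join()
print(results)   # items arrive in order: [0, 1, ..., 19]
```

Because there is a single producer and a single consumer, and each index is written by only one thread, the spin loops are enough to keep the two threads in step.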

One possible memory layout: text, static data, and the heap are shared by all threads; each thread gets its own stack (with its own SP), and each thread has its own PC.

- Forking a thread can be a system call to the kernel, or a procedure call to a thread library (user code).

CMPSCI 377: Operating Systems Lecture 5, Page 6


Kernel Threads

- A kernel thread, also known as a lightweight process, is a thread that the operating system knows about.
- Switching between kernel threads of the same process requires a small context switch.
  - The values of registers, program counter, and stack pointer must be changed.
  - Memory management information does not need to be changed since the threads share an address space.
- The kernel must manage and schedule threads (as well as processes), but it can use the same process scheduling algorithms.
=> Switching between kernel threads is slightly faster than switching between processes.

CMPSCI 377: Operating Systems Lecture 5, Page 7


User-Level Threads

- A user-level thread is a thread that the OS does not know about.
- The OS only knows about the process containing the threads.
- The OS only schedules the process, not the threads within the process.
- The programmer uses a thread library to manage threads (create and delete them, synchronize them, and schedule them).
  - This is the case for Java.
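As an illustration of the idea (not how any particular thread library is implemented), a minimal user-level round-robin scheduler can be built in pure Python with generators: each `yield` acts as a voluntary context switch, and the kernel sees only a single thread of execution. The `worker` function and its names are made up for this sketch:

```python
from collections import deque

def scheduler(threads):
    """Round-robin over generator-based 'threads'; each yield returns control here."""
    ready = deque(threads)
    while ready:
        t = ready.popleft()
        try:
            next(t)              # run the thread until its next yield
            ready.append(t)      # still alive: back on the ready queue
        except StopIteration:
            pass                 # thread finished; drop it

trace = []

def worker(name, steps):
    for i in range(steps):
        trace.append(f"{name}{i}")
        yield                    # voluntarily give up the CPU

scheduler([worker("A", 3), worker("B", 2)])
print(trace)   # interleaved: ['A0', 'B0', 'A1', 'B1', 'A2']
```

Everything here (ready queue, dispatch, yield) happens as ordinary procedure calls in user code, which is why no kernel involvement, and no context switch, is needed.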

CMPSCI 377: Operating Systems Lecture 5, Page 8


User-Level Threads

[Figure: with kernel threads, the kernel keeps a thread ready queue and schedules threads directly; with user-level threads, the kernel keeps only a process ready queue, and a user-level thread scheduler inside each process selects the current thread for that process.]

CMPSCI 377: Operating Systems Lecture 5, Page 9


User-Level Threads: Advantages

- There is no context switch involved when switching threads.
- User-level thread scheduling is more flexible:
  - User-level code can define a problem-dependent thread scheduling policy.
  - Each process might use a different scheduling algorithm for its own threads.
  - A thread can voluntarily give up the processor by telling the scheduler it will yield to other threads.
- User-level threads do not require system calls to create them or context switches to move between them.
=> User-level threads are typically much faster than kernel threads.

CMPSCI 377: Operating Systems Lecture 5, Page 10



User-Level Threads: Disadvantages

- Since the OS does not know about the existence of the user-level threads, it may make poor scheduling decisions:
  - It might run a process that only has idle threads.
  - If a user-level thread is waiting for I/O, the entire process will wait.
  => Solving these problems requires communication between the kernel and the user-level thread manager.
- Since the OS just knows about the process, it schedules the process the same way as other processes, regardless of the number of user threads. (For kernel threads, the more threads a process creates, the more time slices the OS will dedicate to it.)

CMPSCI 377: Operating Systems Lecture 5, Page 12


Example: Kernel and User-Level Threads in Solaris

[Figure: in Solaris, user threads are multiplexed onto lightweight processes; each lightweight process is backed by a kernel thread, which the kernel schedules within the process's address space.]

CMPSCI 377: Operating Systems Lecture 5, Page 13


More Examples of Kernel and User-Level Threads

Ultrix: 1 thread per address space
Topaz: multiple kernel threads per address space
FastThreads: multiple user-level threads per address space

              Ultrix    Topaz    FastThreads
Fork          11,320     1208             39
Signal/Wait    1,846      229             52

Operation times in microseconds on a MIPS 3000.

=> User-level thread operations are orders of magnitude faster than the corresponding kernel thread operations.

CMPSCI 377: Operating Systems Lecture 5, Page 14


Scheduling Processes

- Multiprogramming: having more than one process at a time that is ready to run enables the OS to increase system utilization and throughput by overlapping I/O and CPU activities.
- Process execution state: [state diagram with states New, Ready, Running, Waiting, and Terminated]
- All of the processes that the OS is currently managing reside in one and only one of these state queues.

CMPSCI 377: Operating Systems Lecture 5, Page 15


Scheduling Processes

- Long-term scheduling: How does the OS determine the degree of multiprogramming, i.e., the number of jobs executing at once in primary memory? How does the OS determine on which processor to place a job?
- Short-term scheduling: How does (or should) the OS select a process from the ready queue to execute?
  - Policy goals
  - Policy options
  - Implementation considerations

CMPSCI 377: Operating Systems Lecture 5, Page 16


Short Term Scheduling

- The kernel runs the scheduler at least when:
  1. A process switches from running to waiting,
  2. A process switches from waiting to ready,
  3. An interrupt occurs, or
  4. A process is created or terminated.
- Non-preemptive system: the scheduler must wait for one of these events.
- Preemptive system: the scheduler can interrupt a running process.

CMPSCI 377: Operating Systems Lecture 5, Page 17


Criteria for Comparing Scheduling Algorithms:

CPU utilization: the percentage of time that the CPU is busy.

Throughput: the number of processes completing in a unit of time.

Turnaround time: the length of time it takes to run a process from initialization to termination, including all the waiting time.

Waiting time: the total amount of time that a process is in the ready queue.

Response time: the time between when a process is ready to run and its next I/O request.

Note: these are all quantitative metrics by which we can measure a scheduler's performance.

CMPSCI 377: Operating Systems Lecture 5, Page 18


Scheduling Policies

Ideally, we would choose a CPU scheduler that optimizes all criteria simultaneously (utilization, throughput, ...), but this is not generally possible. Instead, choose a scheduling algorithm based on its ability to satisfy a policy:

- Minimize average response time: provide output to the user as quickly as possible and process their input as soon as it is received.
- Minimize variance of response time: in interactive systems, predictability may be more important than a low average with a high variance.

CMPSCI 377: Operating Systems Lecture 5, Page 19


Scheduling Policies (cont)

- Maximize throughput, which has two components:
  1. Minimize overhead (OS overhead, context switching).
  2. Make efficient use of system resources (CPU, I/O devices).
- Minimize waiting time: give each process the same amount of time on the processor. This might actually increase average response time.
- Fairness: share the CPU in an equitable fashion.

It turns out that fairness is a tradeoff against average response time: we can get better average response time by making the system less fair.

CMPSCI 377: Operating Systems Lecture 5, Page 20


Measuring Scheduling Policy Performance:

Simplifying Assumptions

- One process per user
- One thread per process
- Processes are independent

Researchers developed these algorithms in the 1970s, when these assumptions were more realistic; it is still an open problem how to relax them.

CMPSCI 377: Operating Systems Lecture 5, Page 21


Scheduling Algorithms: A Snapshot

FCFS: First Come, First Served.
Round Robin: use a time slice and preemption to alternate jobs.
SJF: Shortest Job First.
Multilevel Feedback Queues: round robin on each priority queue.
Lottery Scheduling: jobs get tickets and the scheduler randomly picks a winning ticket.
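The lottery idea, for example, takes only a few lines to sketch; the jobs and their ticket counts below are made up for illustration:

```python
import random

def lottery_pick(tickets, rng=random):
    """Pick a job with probability proportional to its ticket count."""
    jobs = list(tickets)
    weights = [tickets[j] for j in jobs]
    return rng.choices(jobs, weights=weights, k=1)[0]

tickets = {"A": 75, "B": 20, "C": 5}    # job -> number of tickets (illustrative)
counts = {j: 0 for j in tickets}
for _ in range(10_000):                 # simulate many scheduling decisions
    counts[lottery_pick(tickets)] += 1
# A should win roughly 75% of the draws, B about 20%, C about 5%
```

The appeal of the scheme is that a job's share of the CPU is controlled directly by its share of the tickets, with no explicit priority queues.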

CMPSCI 377: Operating Systems Lecture 5, Page 22


Scheduling Policies

FCFS: First-Come, First-Served (or FIFO: First-In, First-Out)

- The scheduler executes jobs to completion in arrival order.
- In early FCFS schedulers, the job did not relinquish the CPU even when it was doing I/O.
- We will assume a FCFS scheduler that runs another process when a process blocks on I/O, but that is otherwise non-preemptive, i.e., the job keeps the CPU until it blocks (say, on an I/O device).

CMPSCI 377: Operating Systems Lecture 5, Page 23


FCFS Scheduling Policy: Example

[Gantt charts for three FCFS schedules of jobs A, B, and C, with time marks at 0, 2, 5, 10; 0, 5, 7, 10; and 0, 2, 4, 7, 10:
  Case 1: arrival order A, B, C (A requests I/O)
  Case 2: arrival order A, B, C (no I/O)
  Case 3: arrival order B, C, A (no I/O)]

- If processes arrive 1 time unit apart, what is the average wait time in these three cases? How about turnaround time?

CMPSCI 377: Operating Systems Lecture 5, Page 24


FCFS Scheduling Policy: Example

Case 1
- Turnaround time: (2 + 4 + 8)/3 = 4.67
- Wait time: (0 + 1 + 3)/3 = 1.33

Case 2
- Turnaround time: (5 + 6 + 8)/3 = 6.33
- Wait time: (0 + 4 + 5)/3 = 3

Case 3
- Turnaround time: (7 + 3 + 8)/3 = 6
- Wait time (assuming A does 2 units of I/O): (0 + 1 + 5)/3 = 2
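Case 2's numbers can be checked mechanically. Assuming (as the totals suggest) CPU bursts of 5, 2, and 3 units for A, B, and C, arriving 1 time unit apart with no I/O, a small helper reproduces the per-job turnaround and wait times:

```python
def fcfs_metrics(jobs):
    """jobs: list of (name, arrival, burst), already in arrival order.
    Returns {name: (turnaround, wait)} under non-preemptive FCFS."""
    t = 0
    out = {}
    for name, arrival, burst in jobs:
        start = max(t, arrival)        # CPU may sit idle until the job arrives
        t = start + burst              # job runs to completion
        out[name] = (t - arrival, start - arrival)
    return out

m = fcfs_metrics([("A", 0, 5), ("B", 1, 2), ("C", 2, 3)])
turnarounds = [m[j][0] for j in "ABC"]        # [5, 6, 8]
waits = [m[j][1] for j in "ABC"]              # [0, 4, 5]
print(sum(turnarounds) / 3, sum(waits) / 3)   # 6.333... 3.0
```

Note that wait time is just turnaround time minus the CPU burst, since these jobs do no I/O.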

CMPSCI 377: Operating Systems Lecture 5, Page 25


FCFS: Advantages and Disadvantages

Advantage: simple.

Disadvantages:

- Average wait time is highly variable, as short jobs may wait behind long jobs.
- May lead to poor overlap of I/O and CPU, since CPU-bound processes will force I/O-bound processes to wait for the CPU, leaving the I/O devices idle.

CMPSCI 377: Operating Systems Lecture 5, Page 26


Summary

- Thread: a single execution stream within a process.
- Switching between user-level threads is faster than between kernel threads since a context switch is not required.
- User-level threads may result in the kernel making poor scheduling decisions, resulting in slower process execution than if kernel threads were used.
- Many scheduling algorithms exist. Selecting an algorithm is a policy decision and should be based on the characteristics of the processes being run and the goals of the operating system (minimize response time, maximize throughput, ...).

CMPSCI 377: Operating Systems Lecture 5, Page 27


Next Time

- Friday: Threads in Java (so finish reading Chapter 5)
- Monday: More scheduling (Chapter 6)

CMPSCI 377: Operating Systems Lecture 5, Page 28