
Welcome to our Linux kernel hacking page.

A note to final-year project seekers:

In the past few years, we have come across many people who aspire to do the same for their final-year project. Nothing wrong with that. But some of them think that we will give away the sources and that they can finish their projects with a mouse-click. That's a bad attitude. We are ready to help you with the basics of scheduling and other things pertaining to Linux and operating systems. (Please note that we have given away the idea; we are not a selfish bunch.) But you need to pour in your own effort. That is when the results will be sweet.

A tip for final-year Linux kernel project seekers

There's a freely available utility called "cscope" which can be used to browse the sources of any C project, including the Linux sources. To throw more light: you can use it to find the global definition of a symbol, find out which functions call a specific function, list the functions called by a function, and so on. It is a MUST for kernel programmers.
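
For example, a typical way to index and then browse the kernel tree (double-check "man cscope" for the exact options your version supports):

    cd /usr/src/linux
    cscope -b -q -k -R    # build the cross-reference only (-b), with a fast
                          # inverted index (-q), in kernel mode (-k, which skips
                          # /usr/include), recursing into subdirectories (-R)
    cscope -d             # browse the database just built, without rebuilding it

Inside the curses browser you get queries like "Find this global definition" and "Find functions calling this function", which are exactly the ones mentioned above.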

Original Linux CPU scheduler, as in 2.2.14

By nature, some processes tend to consume more CPU bandwidth and others less (CPU-bound processes and I/O-bound processes). Needless to say, the entire scheduling process pivots on time slices. Each process is given a fixed time slice to execute. As a process executes, it consumes its time slice and, in due course, either voluntarily relinquishes the CPU or gets pre-empted by the timer handler once its time slice is exhausted. Thus a fair scheduling algorithm can evaluate processes based on the value of the remaining time slice: the remaining time slice is an index of starvation, and the higher it is, the more the process has starved. This is an idealized picture, though, and does not consider the sleep-wakeups happening in the system. The point is, while an I/O job sleeps, the CPU-bound jobs may have consumed many quanta of time slices. So when the I/O job wakes up, its remaining time quantum has a big phase difference compared to the CPU jobs. Hence all processes should be normalized against a common scale before their relative goodness can be evaluated. Mapping this to the data-structure level:
  • The data structure that represents a "process" in the kernel is "task_struct" (see include/linux/sched.h; cscope will take you there).
  • A lot of lists run through this data structure: the all-processes list, the runnable-process list, etc. "next_run" and "prev_run" implement the runnable-process list, while "next_task" and "prev_task" implement the all-processes list.
  • The main CPU scheduler is "schedule()". It basically runs through the list of runnable processes once, picking the most deserving one. (The function goodness() evaluates the relative goodness of processes based on the remaining time slice; see the sketch after this list.)
  • There's a #define (__SMP__) which is set if you enable symmetric multi-processing at compile time. This is required only for motherboards with multiple processors. The CPU scheduler's handling of things changes a lot between UP and SMP machines, but the broad picture remains the same. We have not studied the SMP side of Linux; it should not be difficult, though.
  • OK, let's map the bigger picture to the data-structure level.
  • (task_struct->counter) This is the remaining time slice, the index of starvation. (See the definition of task_struct in include/linux/sched.h.)
  • It is initialized to a value in the INIT_TASK macro (the value comes from a #define; check it out). The INIT_TASK macro initializes the task_struct of the FIRST task in the system, and the value is propagated to all other tasks via "fork".
  • "counter" of the current task is reduced at every timer tick. Checkout "arch/i386/kernel/entry.S". (I hope its on an i386 box. Otherwise check out the appropriate kernel directory.) This file contains ENTRY POINTS for processor exceptions, external interrupts etc.. The code will b in assembly. You need to understand it. First of All, The assembler used in linux world is "as". The syntax is totally different. Altho it is scary for the first time, its all very simple. Download the "GNU assembler" GAS documentaion OR "man as".
  • In "kernel/sched.c" there is a function called "recalculate_counters". This is what NORMALISES the I/O job's phase difference with the CPU job.
  • (task_struct->need_resched) When this field is set to 1, it means the current process should give up the CPU, e.g. because its time slice is over. It is set by the timer handler. This field is needed because the timer handler pre-empts a process that has exhausted its time slice only if it is running in user mode; if the process is in kernel mode, the handler just sets this field and returns. The SYSTEM CALL return path, interrupt return path, etc. check whether it is set, and if so they call "schedule()" (the main CPU scheduler; check kernel/sched.c).
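
To make the structure concrete, here is a much-simplified C sketch of the selection loop, in the spirit of the 2.2 schedule()/goodness() pair. Mind you, this is NOT the kernel code: locking, SMP, real-time policies and a lot of detail are stripped away, and the task structure is cut down to the few fields discussed above.

    /* A toy model of the 2.2-style scheduler core -- illustrative only:
     * no locking, no SMP, no real-time policies, simplified types. */

    struct task {
        long counter;          /* remaining time slice (index of starvation)    */
        long priority;         /* static priority, also the refill value        */
        int  need_resched;     /* set by the timer handler when a switch is due */
        struct task *next_run; /* runnable-list linkage (next_run/prev_run)     */
    };

    /* goodness(): the bigger the return value, the more deserving the task.
     * The real 2.2 function also weighs in real-time policy and the cost of
     * switching memory context; this keeps only the starvation index. */
    static long goodness(const struct task *p)
    {
        return p->counter;
    }

    /* One pass over the runnable list, keeping the max-goodness task --
     * the same shape as the loop in kernel/sched.c:schedule(). */
    struct task *pick_most_deserving(struct task *runqueue)
    {
        struct task *p, *best = 0;
        long best_val = -1;

        for (p = runqueue; p; p = p->next_run) {
            long val = goodness(p);
            if (val > best_val) {
                best_val = val;
                best = p;
            }
        }
        return best; /* the real schedule() context-switches to this task */
    }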
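
And here is the normalization step itself. In 2.2 it lives inside schedule() (the "recalculate" label) and runs when every runnable task has exhausted its counter. Crucially, it walks ALL tasks, not just the runnable ones, so a sleeping I/O job gains credit while it waits. A worked example, assuming a priority of 20 ticks:

    /* The recalculation, as in 2.2 kernel/sched.c:
     *
     *     for_each_task(p)
     *         p->counter = (p->counter >> 1) + p->priority;
     *
     * With priority = 20 ticks:
     *
     *   CPU hog that just ran out:      ( 0 >> 1) + 20 = 20
     *   sleeping I/O job, counter 20:   (20 >> 1) + 20 = 30
     *   ...after further rounds:        35, 37, 38, 39, 39, ...
     *
     * A long sleeper's counter thus converges to just under 2 * priority:
     * the woken I/O job gets a bounded head start in goodness() rather than
     * an ever-growing one -- which is exactly the normalization we need. */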

A new CPU scheduling algorithm in Linux

    This was done as part of our final-year project for the B.Tech course at Pondicherry Engineering College. The members involved are Sarnath K., Sriram V. and Sriram S.

    In the worst case this algorithm behaves like the existing one, but in the best case it is around 10% better than the original.

    SYNOPSIS

    Let us start by asking how better scheduling can be done. The first criterion is that no process should starve indefinitely. Secondly, the main resource, the CPU, should be used to the maximum possible extent. Once we are convinced of this, let us proceed further.

    Assume we have only a bunch of CPU-bound jobs. In whatever manner we schedule them, the total turnaround time will be constant. Agreed? The real problem of scheduling arises when we have I/O jobs: I/O-bound jobs relinquish the CPU sooner than CPU-bound jobs. So our scheduling algorithm should be intelligent enough to find a substitute job whenever an I/O job relinquishes the CPU. Agreed?

    This is precisely the idea we have tried to implement. We extract the I/O jobs from the queue, give them more preference and finish them off at the earliest, so that whenever an I/O job relinquishes the CPU, you always have a CPU-bound job ready to use it.

    Beyond this, our algorithm maintains the no-starve property and is flexible in how much preference is given to I/O jobs versus CPU jobs. The crowning point is that this algorithm has a lower computing complexity than the original scheduling algorithm and is far more efficient. (A toy sketch of the preference idea follows.)
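
    The project sources are not reproduced on this page, but the following toy C sketch illustrates the preference idea described above. It is a hypothetical illustration, not our kernel patch: a goodness()-style evaluator that boosts jobs which have recently slept on I/O, with a tunable weight (the names job, io_credit and IO_WEIGHT are made up for the sketch).

        /* Toy illustration of I/O-preference scheduling -- NOT the project code.
         * Jobs that sleep on I/O accumulate an "io credit"; the evaluator adds
         * a tunable multiple of it, so they are picked first when runnable. */

        struct job {
            long counter;     /* remaining time slice, as in the stock scheduler */
            long io_credit;   /* how much the job has recently slept on I/O      */
            struct job *next_run;
        };

        /* Tunable: how strongly I/O jobs are preferred over CPU jobs. */
        #define IO_WEIGHT 2

        static long preference(const struct job *p)
        {
            return p->counter + IO_WEIGHT * p->io_credit;
        }

        /* Same single-pass selection as the stock scheduler, but with the
         * boosted metric.  When no job has I/O credit, the pick degenerates
         * to the plain counter-based choice -- which is why the worst case
         * matches the original scheduler. */
        struct job *pick_next(struct job *runqueue)
        {
            struct job *p, *best = 0;
            long best_val = -1;

            for (p = runqueue; p; p = p->next_run) {
                long val = preference(p);
                if (val > best_val) {
                    best_val = val;
                    best = p;
                }
            }
            return best;
        }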

    Here's the algorithm.

    PERFORMANCE

    When the system was loaded heavily, that is, when we tested it running a large number of processes, our scheduler performed either as well as the original or better, to the extent of saving 10-15 seconds when it could. The improvement comes mainly from the I/O jobs finishing earlier than the CPU jobs under our scheduler, which is not the case with the original scheduler.

    TESTING

    The testing mechanism was solid. We built a kernel that can log information about a process when it exits. The logging is enabled by a variable in the kernel: once the logging variable is set to true, whenever a process exits, its details, such as CPU usage time and wait time, are written out to a new procfs entry we created.

    Our main testing program, a user-level 'C' program, reads a list of jobs from a text file and executes them together. Before doing that, it enables the logging of kernel info, so as and when a process exits, its info lands in the procfs entry. When all the specified processes have exited, the main testing routine reads the procfs entry and puts the info into a test file. Isn't that comprehensive? (A sketch of such a harness follows.)
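
    For the curious, here is a minimal user-level sketch of such a harness. It is hypothetical: the procfs path (/proc/schedlog) and the way logging gets switched on are stand-ins for whatever the patched kernel actually exposes.

        /* Minimal test-harness sketch -- illustrative, not the project code.
         * Assumes a patched kernel exposing a hypothetical /proc/schedlog. */
        #include <stdio.h>
        #include <string.h>
        #include <unistd.h>
        #include <sys/wait.h>

        #define MAX_JOBS 64

        int main(int argc, char **argv)
        {
            char line[256];
            int njobs = 0;
            FILE *jobs = fopen(argc > 1 ? argv[1] : "jobs.txt", "r");
            if (!jobs) { perror("jobs file"); return 1; }

            /* 0. The real harness first switches the kernel's logging
             *    variable on (patch-specific, so not shown here). */

            /* 1. Launch every command listed in the file, one per line. */
            while (njobs < MAX_JOBS && fgets(line, sizeof line, jobs)) {
                line[strcspn(line, "\n")] = '\0';
                if (!line[0])
                    continue;
                if (fork() == 0) {
                    execl("/bin/sh", "sh", "-c", line, (char *)0);
                    _exit(127);        /* exec failed */
                }
                njobs++;
            }
            fclose(jobs);

            /* 2. Wait until all jobs exit (the kernel logs each exit). */
            while (njobs-- > 0)
                wait(NULL);

            /* 3. Copy the kernel's per-process log to a result file. */
            FILE *log = fopen("/proc/schedlog", "r"); /* hypothetical */
            FILE *out = fopen("results.txt", "w");
            if (log && out)
                while (fgets(line, sizeof line, log))
                    fputs(line, out);
            if (log) fclose(log);
            if (out) fclose(out);
            return 0;
        }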

    Watch out for new links here explaining how these were done, or if you are impatient, mail to: "paramvi@usa.net"

                                                            - SRIRAM.V
                                                              SARNATH.K