University of Pittsburgh
A Round Robin Scheduler Program Project
In this lab, we will explore the idea of an execution engine with a scheduler attached to it. The scheduler pulls tasks from a set of queues, each of which implements a specific policy.
The main units that you will implement are the following:
• Execution Engine:
o In our case, it is a simulation of a CPU that executes a task. The task is assigned to the CPU by the scheduler.
o To implement the CPU simulation:
▪ The CPU is simulated using a thread.
▪ In the thread, you will receive a pointer to a function of the form "void *Func(void *)" that is stored in a struct node.
▪ The CPU thread executes this function and, once it is completed, looks for the next assignment from the scheduler.
• A Scheduler:
o The scheduler is a thread that implements a specific scheduling algorithm.
o For example, a Round-Robin scheduler goes through all the queues one at a time, selecting and executing the first item in each queue, whereas a priority-based scheduler selects the queue with the highest priority first.
• Queue with a Policy:
o This represents a list of items or nodes.
o Each item (node) contains a pointer to a function, i.e. the function to be executed on the CPU unit.
o The function should be of type "void *func(void *);".
o The order of the items depends on the queue policy.
o The policy for each queue can be either FIFO or LIFO.
o These policies are applied inside your "Enqueue" (add to the queue) function. A sketch of one possible node and queue layout is given right after this list.
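The exact data layout is up to you; the following is a minimal sketch, assuming hypothetical names node_t, queue_t, and policy_t, with an enqueue that applies the FIFO/LIFO policy. It is not the provided queue code, just an illustration of the idea.

#include <stdlib.h>

// Hypothetical task function type, matching "void *func(void *)".
typedef void *(*task_func_t)(void *);

// One item in a queue: holds the function the CPU thread will run.
typedef struct node {
    task_func_t func;      // task to execute on the CPU thread
    void *arg;             // optional argument passed to the task
    struct node *next;
} node_t;

typedef enum { POLICY_FIFO, POLICY_LIFO } policy_t;

// A queue with an attached policy; the policy is applied in enqueue.
typedef struct {
    node_t *head;
    policy_t policy;
} queue_t;

// Enqueue applies the policy: FIFO appends at the tail, LIFO pushes at the head.
void enqueue(queue_t *q, task_func_t func, void *arg) {
    node_t *n = malloc(sizeof(node_t));
    n->func = func;
    n->arg = arg;
    n->next = NULL;
    if (q->policy == POLICY_LIFO || q->head == NULL) {
        // LIFO (or an empty queue): the new node becomes the head.
        n->next = q->head;
        q->head = n;
    } else {
        // FIFO: walk to the tail and append.
        node_t *cur = q->head;
        while (cur->next != NULL) cur = cur->next;
        cur->next = n;
    }
}

// Dequeue always removes from the head; the policy already decided the order.
node_t *dequeue(queue_t *q) {
    node_t *n = q->head;
    if (n != NULL) q->head = n->next;
    return n;
}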
Look at the following figures to see the overall structure of the lab:
[Figure: Overall structure of the lab program]
The figure shows the overall structure of your program. The Scheduler is a thread that contains the logic for selecting the next item from the queues. In this lab, we will focus on one policy: Round-Robin.
The CPU thread, on the other hand, simulates an execution unit in your application. In this example, the thread will simply call the function stored in each item (node) sent by the scheduler.
The following figure shows a flow-chart of the operations you have to implement:
[Figure: Lab units flow chart]
The following describes the steps shown:
• The main function initiates all threads and blocks until all threads complete and join.
• The scheduler selects the next item from the queue list depending on the policy. In this lab, we are using only Round Robin.
• The Scheduler selects the item from the queues.
• It sends the item to the CPU thread.
• The CPU thread executes the function saved in the item.
• Repeat these steps until all items in all queues are completed. Finally, join with the main thread and end the program.
A minimal sketch of how the scheduler and CPU threads could hand items to each other is shown after this list.
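The following is a hedged sketch of how the scheduler and CPU thread loops could coordinate through two semaphores; it is not the required implementation. The names queues, NUM_QUEUES, current_item, item_ready, and cpu_done are hypothetical, and the code assumes the node_t/queue_t types and dequeue() from the earlier sketch. A main() would sem_init both semaphores to 0, fill the queues, create both threads, and pthread_join them.

#include <pthread.h>
#include <semaphore.h>

#define NUM_QUEUES 2
queue_t queues[NUM_QUEUES];          // hypothetical global queues, filled before the threads start

static node_t *current_item = NULL;  // slot used to pass one item to the CPU thread
static int all_done = 0;

static sem_t item_ready;             // posted by the scheduler when current_item is set
static sem_t cpu_done;               // posted by the CPU thread when it finished the item

// Round-Robin scheduler: visit the queues one at a time and take the first item.
void *scheduler_thread(void *arg) {
    int q = 0;
    int empty_in_a_row = 0;
    while (empty_in_a_row < NUM_QUEUES) {
        node_t *item = dequeue(&queues[q]);  // only this thread touches the queues
        q = (q + 1) % NUM_QUEUES;            // round robin: next queue next time
        if (item == NULL) {
            empty_in_a_row++;
            continue;
        }
        empty_in_a_row = 0;
        current_item = item;                 // hand the item to the CPU thread
        sem_post(&item_ready);
        sem_wait(&cpu_done);                 // wait until the CPU has finished it
    }
    all_done = 1;                            // no items left in any queue
    sem_post(&item_ready);                   // wake the CPU thread so it can exit
    return NULL;
}

// CPU simulation: run whatever function the scheduler handed over.
void *cpu_thread(void *arg) {
    while (1) {
        sem_wait(&item_ready);               // block until the scheduler posts an item
        if (all_done) break;
        current_item->func(current_item->arg);  // "execute" the task
        sem_post(&cpu_done);                 // tell the scheduler we are ready for more
    }
    return NULL;
}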
Some Notes:
• You will need to have the queues ready with their items before initializing the threads, since the Scheduler will immediately start looking into the queues.
• The queues can be global variables in your program, to ease the development of the lab.
• You will need to use semaphores to synchronize the operation between the Scheduler and the CPU thread (if you need help, a tutorial is attached below).
• All functions listed in the nodes should be a simple print statement to simulate the execution of the task. For example:
void *T1(void *arg)
{
    printf("task T1: completed\n");
    return NULL;
}
The Queue (LIFO) numbers in the second figure above are 5, 6, 7, and 8, so the printed order will be 1, 5, 2, 6, 3, 7, 4, 8.
• Queue code is provided for your use if you need it.
Synchronization Using Mutexes and Semaphores
To review, the main functions of the pthreads library are:
int pthread_create(pthread_t *thread, pthread_attr_t *attr, void *(*start_routine)(void *), void *arg);
int pthread_join(pthread_t thread, void **status);
void pthread_exit(void *status);
All of these functions require that you include <pthread.h> and link with -lpthread in the gcc command:
gcc -D_REENTRANT Cfile.c -o Program -lpthread
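As a refresher, here is a minimal sketch of a program that creates one thread and joins it; the names worker and tid are just for illustration.

#include <stdio.h>
#include <stdlib.h>
#include <pthread.h>

// Simple start routine matching the required void *(*)(void *) signature.
void *worker(void *arg) {
    printf("hello from the worker thread\n");
    pthread_exit(NULL);              // equivalent to returning NULL here
}

int main(void) {
    pthread_t tid;
    void *status;
    // pthread_create returns 0 on success, an error value otherwise.
    if (pthread_create(&tid, NULL, worker, NULL) != 0) {
        perror("Thread creation failed");
        exit(EXIT_FAILURE);
    }
    // Block until the worker thread terminates.
    pthread_join(tid, &status);
    printf("worker joined\n");
    return 0;
}

Compile it with the same command shown above, e.g. gcc -D_REENTRANT example.c -o example -lpthread.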
To provide sequencing, we used either a run_turn variable with a specific value for each thread, or the pthread_join function to wait for threads to complete their tasks. This worked with no issues, since one thread did not need to work until the other thread had completed its task. But what if you want to write a program where one thread continuously exchanges information with another thread through a shared resource or memory?
The producer and consumer problem is an example of two execution entities where one produces something and another entity wants to use or consume it. This is the type of synchronization problem where semaphores and mutexes are used. Luckily, there are libraries that provide mutex and semaphore primitives (operations) that we can use in our code. Mutual exclusion is used when you want to allow only one thread or process to perform something on a shared resource. For example, if you have a shared printer with no buffering queue, you want only one print job to use the printer at a time; otherwise you will end up with pages from different documents shuffled together. The same idea applies to shared memory and buffers: you only want one thread or process to write or read at a time, so you do not end up with mixed and corrupt data.
Part I: Mutex
Mutexes are so common that they are provided as part of the pthreads POSIX library. As with other pthreads functions, you will need to include <pthread.h>. All functions return 0 for success and other values for errors, just like the pthreads functions.
The following is a list of some of the mutex functions provided in pthreads:
• pthread_mutex_init(mutex mx, attr x)
o Initialize a mutex mx with attributes x associated with it; pass NULL for default attributes.
• pthread_mutex_lock(mutex mx)
o Lock a mutex. Get a lock on mx, or block until it becomes available.
• pthread_mutex_unlock(mutex mx)
o Unlock mutex mx and allow other threads waiting on it to run.
• pthread_mutex_destroy(mutex mx)
o End and free the mutex mx.
C declarations for these functions:
• int pthread_mutex_init(pthread_mutex_t *restrict mutex, const pthread_mutexattr_t *restrict attr);
• int pthread_mutex_lock(pthread_mutex_t *mutex);
• int pthread_mutex_unlock(pthread_mutex_t *mutex);
• int pthread_mutex_destroy(pthread_mutex_t *mutex);
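Put together, the typical lifecycle is: initialize the mutex once, lock before touching the shared data, unlock afterwards, and destroy the mutex when all threads are done. A small sketch with a hypothetical shared counter:

#include <pthread.h>

pthread_mutex_t counter_mutex;   // protects shared_counter
int shared_counter = 0;          // hypothetical shared resource

void *increment(void *arg) {
    pthread_mutex_lock(&counter_mutex);    // enter the critical region
    shared_counter++;                      // only one thread is in here at a time
    pthread_mutex_unlock(&counter_mutex);  // leave the critical region
    return NULL;
}

// Elsewhere: call pthread_mutex_init(&counter_mutex, NULL) once before the threads start,
// and pthread_mutex_destroy(&counter_mutex) after they have all joined.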
In the following example, a program with two threads is written to coordinate access to a shared buffer called work_area. work_area is just an array of characters used to store some values. The program is divided into two threads: the main thread, where input is requested from the user and stored in the work_area buffer, and a second thread that counts the number of characters the user has input. Try the following program, follow the operations of the different threads, and notice where the mutex is used.
#include <stdio.h>
#include <unistd.h>
#include <stdlib.h>
#include <string.h>
#include <pthread.h>

void *thread_function(void *arg);

pthread_mutex_t work_mutex; /* protects both work_area and time_to_exit */
#define WORK_SIZE 1024
char work_area[WORK_SIZE];
int time_to_exit = 0;

// Main thread and function
int main() {
    int res;
    pthread_t a_thread;
    void *thread_result;

    // Create and initialize a new mutex called work_mutex
    res = pthread_mutex_init(&work_mutex, NULL);
    if (res != 0) {
        perror("Mutex initialization failed");
        exit(EXIT_FAILURE);
    }
    // Create a new thread that will execute the thread_function function.
    res = pthread_create(&a_thread, NULL, thread_function, NULL);
    if (res != 0) {
        perror("Thread creation failed");
        exit(EXIT_FAILURE);
    }
    // Lock work_mutex before using the work_area buffer.
    pthread_mutex_lock(&work_mutex);
    printf("Input some text. Enter 'end' to finish\n");
    // Continue looping until time_to_exit is true; this flag is set in the other thread.
    // That is why it needs to be protected using the mutex.
    while (!time_to_exit) {
        scanf("%s", work_area);
        // Done saving data into the buffer, release the lock.
        pthread_mutex_unlock(&work_mutex);
        // Check whether the other thread is done working by checking if the
        // first character in the buffer is NUL '\0'.
        // If it is not, the other thread is not done yet, so wait.
        while (1) {
            pthread_mutex_lock(&work_mutex);
            if (work_area[0] != '\0') {
                pthread_mutex_unlock(&work_mutex);
                sleep(1);
            } else {
                break;
            }
        }
    }
    // Unlock work_mutex.
    pthread_mutex_unlock(&work_mutex);
    printf("\nWaiting for thread to finish...\n");
    res = pthread_join(a_thread, &thread_result);
    if (res != 0) {
        perror("Thread join failed");
        exit(EXIT_FAILURE);
    }
    printf("Thread joined\n");
    pthread_mutex_destroy(&work_mutex);
    exit(EXIT_SUCCESS);
}

void *thread_function(void *arg) {
    sleep(1);
    // Get the lock since we want to access the shared resource.
    pthread_mutex_lock(&work_mutex);
    // The loop continues until the first letters in the buffer are "end".
    while (strncmp("end", work_area, 3) != 0) {
        printf("You input %d characters\n", (int)strlen(work_area));
        // Indicate that counting is done by placing '\0' at the beginning of the buffer.
        work_area[0] = '\0';
        pthread_mutex_unlock(&work_mutex);
        sleep(1);
        // The following code checks if there is any new input from the user.
        // If the value at the beginning of the buffer is still '\0', sleep and try again later.
        pthread_mutex_lock(&work_mutex);
        while (work_area[0] == '\0') {
            pthread_mutex_unlock(&work_mutex);
            sleep(1);
            // Re-lock the mutex since the while condition checks the shared resource.
            pthread_mutex_lock(&work_mutex);
        }
    }
    // User typed "end"; tell the other thread to stop.
    time_to_exit = 1;
    work_area[0] = '\0';
    pthread_mutex_unlock(&work_mutex);
    pthread_exit(0);
}
The program uses different techniques to coordinate access between threads. For example, the main function needs to know when the second thread has finished counting the characters, and this is not possible with the mutex alone: if the main thread is much faster than the second thread and reacquires the lock before the second thread can, it could overwrite the old data with new data from the user. To solve this problem, the main thread continuously checks for a NUL '\0' at the beginning of the buffer and only asks the user for more input after it finds one there; the counting thread places one there once it is done counting.
Another flag is used to indicate that the user entered the word "end"; this is done in the second thread. The second thread sets the time_to_exit variable to 1 if it finds that the first 3 characters in the buffer equal "end". The main thread uses this as the condition to exit the loop in which it asks the user for input.
Notice how every check or edit of the work_area buffer is preceded by taking the lock. This is also true in all of the loops: a lock operation is always performed at the end of each iteration to ensure that the next iteration only occurs if the thread holds the lock. The sequence of getting the lock, then checking or editing, then releasing the lock is the core of all mutex operations. Always make sure that you release a lock once your code is done editing or checking the shared resource.
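Distilled from the example above (and using its work_mutex and work_area globals), the poll-with-lock shape that every loop in the program follows looks roughly like this:

// Wait until the other thread has put something new in work_area.
// Returns with work_mutex held so the caller can safely read the buffer;
// the caller must pthread_mutex_unlock(&work_mutex) when it is done.
void wait_for_new_input(void) {
    pthread_mutex_lock(&work_mutex);
    while (work_area[0] == '\0') {
        pthread_mutex_unlock(&work_mutex);   // release so the other thread can run
        sleep(1);
        pthread_mutex_lock(&work_mutex);     // re-lock before re-checking the condition
    }
    // work_area now holds new data; the mutex is still held.
}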
Part II: Semaphores
The difference between mutual exclusion and semaphores is in their purpose and function. Mutexes are used to protect a shared resource, or critical regions of your code, from being accessed by multiple threads or processes at once. Only the thread/process that holds the mutex lock can access the resource, and only it can release the lock; no other thread/process can release it. Any other thread trying to lock this mutex will block and wait until the original thread that locked the resource decides to release it.
On the other hand, semaphores are a more generic synchronization technique that can coordinate access by multiple different threads and processes to a shared resource. The function of semaphores is to coordinate activities between threads; thus multiple threads can access the shared resource at the same time, but not perform the same operations. Semaphores can have values ranging from 0 to n, and any thread is able to release a semaphore's waiters by performing a wakeup operation, which is very useful for solving problems like producer and consumer, where one thread is consuming data while another is producing it. Mutexes and semaphores can be used together to ensure both mutual exclusion and synchronization.
In general, semaphores act like an inventory system that keeps track of how many units of a resource are available. Any update to this resource count is done atomically, to ensure that no conflicts or race conditions occur and the data stays valid ("consistent") across multiple threads.
There are two types of semaphores: binary and counting semaphores. Counting semaphores are used to ensure synchronization and a valid number of data items across all threads or processes. Conceptually, a counting semaphore is a nonnegative integer count. Semaphores are typically used to coordinate access to resources, with the semaphore count initialized to the number of free resources. Threads then atomically increment the count when resources are added and atomically decrement the count when resources are removed. When the semaphore count becomes zero, indicating that no more resources are present, threads trying to decrement the semaphore block until the count becomes greater than zero.
Binary semaphores are a more specialized form of semaphore. Instead of counting, binary semaphores only accept two values: 0 and 1. They are useful for implementing locks for resources that need to indicate that they are available or unavailable, one step at a time.
The binary semaphore has a similar function to the mutex but addresses a different problem. A mutex ensures that no other thread accesses a critical region of code where you perform operations that would conflict with other threads; only one thread gets to access or edit the shared state at a time. A binary semaphore, on the other hand, is a way to synchronize the operations of two threads, where one thread waits for an event that might be triggered by another thread or process. When the event happens, the other thread uses the semaphore "Up" or "Post" operation to announce to any waiting thread that it can proceed. Although you can use binary semaphores as mutexes, a semaphore has the disadvantage that any thread can unlock it. A mutex can be implemented using a binary semaphore, but with some additional restrictions to ensure proper mutual exclusion.
The following is a list of semaphore operations; you will need to include <semaphore.h> to access these functions:
• int sem_init(sem_t *sem, int pshared, unsigned int value);
The sem_init() function is used to initialize the semaphore referred to by sem. The value of the initialized semaphore is value.
• int sem_wait(sem_t *sem);
The sem_wait() function locks the semaphore referenced by sem by performing a semaphore lock operation on that semaphore. If the semaphore value is currently zero, the calling thread will not return from the call to sem_wait() until a post operation unlocks the semaphore. This function represents the Down method we used in our lectures.
• int sem_post(sem_t *sem);
The sem_post() function unlocks the semaphore referenced by sem by performing a semaphore unlock operation on that semaphore. If the semaphore value resulting from this operation is positive, then no threads were blocked waiting for the semaphore to become unlocked; the semaphore value is simply incremented. This function represents the Up method we used in our lectures.
• int sem_destroy(sem_t *sem);
The sem_destroy() function is used to destroy the semaphore indicated by sem. Only a semaphore that was created using sem_init() may be destroyed using sem_destroy().
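As an illustration of the counting case described earlier, here is a hedged sketch of a single-producer, single-consumer bounded buffer using two counting semaphores; the names buffer, items, spaces, producer, and consumer are hypothetical.

#include <stdio.h>
#include <pthread.h>
#include <semaphore.h>

#define BUF_SIZE 8
int buffer[BUF_SIZE];      // hypothetical shared bounded buffer
int in = 0, out = 0;

sem_t items;               // counts filled slots, starts at 0
sem_t spaces;              // counts free slots, starts at BUF_SIZE

void *producer(void *arg) {
    for (int i = 0; i < 32; i++) {
        sem_wait(&spaces);             // block if the buffer is full
        buffer[in] = i;
        in = (in + 1) % BUF_SIZE;
        sem_post(&items);              // announce one more filled slot
    }
    return NULL;
}

void *consumer(void *arg) {
    for (int i = 0; i < 32; i++) {
        sem_wait(&items);              // block if the buffer is empty
        printf("consumed %d\n", buffer[out]);
        out = (out + 1) % BUF_SIZE;
        sem_post(&spaces);             // announce one more free slot
    }
    return NULL;
}

int main(void) {
    pthread_t p, c;
    sem_init(&items, 0, 0);
    sem_init(&spaces, 0, BUF_SIZE);
    pthread_create(&p, NULL, producer, NULL);
    pthread_create(&c, NULL, consumer, NULL);
    pthread_join(p, NULL);
    pthread_join(c, NULL);
    sem_destroy(&items);
    sem_destroy(&spaces);
    return 0;
}

With one producer and one consumer, in and out are each modified by only one thread, so no mutex is needed here; with several producers or consumers you would combine these semaphores with a mutex, as mentioned above.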
The following example shows how we could have used a binary semaphore instead of a mutex to synchronize two threads exchanging information. In the example, a program with two threads is written to coordinate access to a shared buffer called work_area. work_area is just an array of characters used to store some values. The program is divided into two threads: the main thread, where input is requested from the user and stored in the work_area buffer, and a second thread that counts the number of characters the user has input. Try this code and follow along.
_____________________________________________________________________________________
#include <stdio.h>
#include <unistd.h>
#include <stdlib.h>
#include <string.h>
#include <pthread.h>
#include <semaphore.h>

void *thread_function(void *arg);

// A binary semaphore used to protect the work_area buffer
sem_t bin_sem;
#define WORK_SIZE 1024
char work_area[WORK_SIZE];

int main() {
    int res;
    pthread_t a_thread;
    void *thread_result;

    // Create and initialize a new semaphore called bin_sem.
    // The first parameter is a pointer to the semaphore,
    // the second is the sharing parameter, which in Linux is always 0,
    // and the third is the initial value for the semaphore, in our case 0,
    // since we are using a binary semaphore (0 or 1).
    res = sem_init(&bin_sem, 0, 0);
    if (res != 0) {
        perror("Semaphore initialization failed");
        exit(EXIT_FAILURE);
    }
    // Create a thread that will count the number of characters.
    res = pthread_create(&a_thread, NULL, thread_function, NULL);
    if (res != 0) {
        perror("Thread creation failed");
        exit(EXIT_FAILURE);
    }
    // Prompt the user to enter text.
    printf("Input some text. Enter 'end' to finish\n");
    // Compare to "end"; if the result is not zero, continue accepting text.
    while (strncmp("end", work_area, 3) != 0) {
        // Get input from the user.
        scanf("%s", work_area);
        // Tell any thread waiting on this semaphore that
        // new data has been posted to the work_area buffer.
        sem_post(&bin_sem);
    }
    printf("\nWaiting for thread to finish...\n");
    res = pthread_join(a_thread, &thread_result);
    if (res != 0) {
        perror("Thread join failed");
        exit(EXIT_FAILURE);
    }
    printf("Thread joined\n");
    sem_destroy(&bin_sem);
    exit(EXIT_SUCCESS);
}

void *thread_function(void *arg) {
    // Wait for bin_sem to have a non-zero value before continuing.
    // Since the initial value for the semaphore is 0 (see the initialization line),
    // this will block until the main thread performs the sem_post operation.
    // In other words, it waits for the arrival of new data.
    sem_wait(&bin_sem);
    // Compare the text input in work_area to "end".
    while (strncmp("end", work_area, 3) != 0) {
        // Count the characters.
        printf("You input %d characters\n", (int)strlen(work_area));
        // Wait on the semaphore to have a non-zero value before continuing.
        // If the semaphore is zero, this thread will block until
        // a post operation is performed.
        sem_wait(&bin_sem);
    }
    pthread_exit(NULL);
}
In the main thread, every time a new input is entered by the user, the thread performs a post operation, which increments the semaphore to 1. If you look at the initialization code, the semaphore is a binary semaphore and is initialized to 0. Assuming the second, "counting" thread was faster than the main thread and tried to count the number of characters in the buffer before the main thread had a chance to fill it, the counting thread would block, since there is a sem_wait operation at the beginning of the thread function and the semaphore is initialized to 0. The counting thread only proceeds to the counting step once the main thread performs a sem_post operation, announcing that there is new data available for the counting thread to consume. The counting thread always waits for the data-arrival event ("user input new data"), while the main thread always announces the arrival of new data. Look again at the example and try to follow along.