January 16, 2017

Title: Final
Author: Sachin Patidar


INTRODUCTION
Linux is a Unix-like and mostly POSIX-compliant computer operating system assembled under the model of free and open source software development and distribution. The defining component of Linux is the Linux kernel, an operating system kernel first released on 5 October 1991 by Linus Torvalds.
Linux was originally developed as a free operating system for Intel x86-based personal computers. It has since been ported to more computer hardware platforms than any other operating system. It is a leading operating system on servers and other big iron systems such as mainframe computers and supercomputers. Linux also runs on embedded systems, including mobile phones, tablet computers, network routers, facility automation controls, televisions, and video game consoles. Android, which is a widely used operating system for mobile devices, is built on top of the Linux kernel.
System features:
Linux supports features found in other implementations of UNIX, and many which aren't found elsewhere. In this section, we'll take a nickel tour of the features of the Linux kernel. Linux is a complete multitasking, multiuser operating system, as are all other versions of UNIX. This means that many users can log into and run programs on the same machine simultaneously.
Other specific internal features of Linux include POSIX job control (used by shells like csh and bash), pseudo terminals ( pty devices), and support for dynamically loadable national or customized keyboard drivers. Linux supports virtual consoles that let you switch between login sessions on the same system console. Users of the screen program will find the Linux virtual console implementation familiar. The kernel can emulate 387-FPU instructions, and systems without a math coprocessor can run programs that require floating-point math capability.
Software features:
Virtually every utility one would expect of a standard UNIX implementation has been ported to Linux, including basic commands like ls, awk, tr, sed, bc, and more. The familiar working environment of other UNIX systems is duplicated on Linux. All standard commands and utilities are included. (Novice UNIX or Linux users should see Chapter 3 for an introduction to basic UNIX commands). Many text editors are available, including vi, ex, pico, jove, and GNU emacs, and variants like Lucid emacs, which incorporates extensions of the X Window System, and joe. The text editor you're accustomed to using has more than likely been ported to Linux.
Most of the basic Linux utilities are GNU software. GNU utilities support advanced features that are not found in the standard versions of BSD and UNIX System V programs. For example, the GNU vi clone, elvis, includes a structured macro language that differs from the original implementation. However, GNU utilities are intended to remain compatible with their BSD and System V counterparts. Many people consider the GNU versions to be superior to the originals. A shell is a program which reads and executes commands from the user. In addition, many shells provide features like job control, managing several processes at once, input and output redirection, and a command language for writing shell scripts. A shell script is a program in the shell's command language and is analogous to an MS-DOS batch file.
Differences between Linux and other operating systems:
It is important to understand the differences between Linux and other operating systems, like MS-DOS, OS/2, and the other implementations of UNIX for personal computers. First of all, Linux coexists happily with other operating systems on the same machine: you can run MS-DOS and OS/2 along with Linux on the same system without problems.
Why use Linux instead of a well-known, well-tested, and well-documented commercial operating system? We could give you a thousand reasons. One of the most important, however, is that Linux is an excellent choice for personal UNIX computing. If you're a UNIX software developer, why use MS-DOS at home? Linux allows you to develop and test UNIX software on your PC, including database and X Window System applications. If you're a student, chances are that your university computing systems run UNIX. You can run your own UNIX system and tailor it to your needs. Installing and running Linux is also an excellent way to learn UNIX if you don't have access to other UNIX machines.
But let's not lose sight of the bigger picture: Linux isn't only for personal UNIX users. It is robust and complete enough to handle large tasks, as well as distributed computing needs. Many businesses--especially small ones--have moved their systems to Linux in lieu of other UNIX-based workstation environments. Universities have found that Linux is perfect for teaching courses in operating systems design. Large, commercial software vendors have started to realize the opportunities which a free operating system can provide.
Linux vs. MS-DOS:
It's not uncommon to run both Linux and MS-DOS on the same system. Many Linux users rely on MS-DOS for applications like word processing. Linux provides its own analogs for these applications, but you might have a good reason to run MS-DOS as well as Linux. If your dissertation is written using WordPerfect for MS-DOS, you may not be able to convert it easily to TeX or some other format. Many commercial applications for MS-DOS aren't available for Linux yet, but there's no reason that you can't use both.
MS-DOS does not fully utilize the functionality of 80386 and 80486 processors. On the other hand, Linux runs completely in the processor's protected mode, and utilizes all of its features. You can directly access all of your available memory (and beyond, with virtual RAM). Linux provides a complete UNIX interface which is not available under MS-DOS. You can easily develop and port UNIX applications to Linux, but under MS-DOS you are limited to a subset of UNIX functionality.
Linux and MS-DOS are different entities. MS-DOS is inexpensive compared to other commercial operating systems and has a strong foothold in the personal computer world. No other operating system for the personal computer has reached the level of popularity of MS-DOS, because justifying spending $1,000 for other operating systems alone is unrealistic for many users. Linux, however, is free, and you may finally have the chance to decide for yourself.
Linux vs. other implementations of UNIX:
Several other implementations of UNIX exist for 80386 or better personal computers. The 80386 architecture lends itself to UNIX, and vendors have taken advantage of this. Other implementations of UNIX for the personal computer are similar to Linux. Almost all commercial versions of UNIX support roughly the same software, programming environment, and networking features. However, there are differences between Linux and commercial versions of UNIX.
Linux supports a different range of hardware than commercial implementations. In general, Linux supports most well-known hardware devices, but support is still limited to hardware which the developers own. Commercial UNIX vendors tend to support more hardware at the outset, but the list of hardware devices which Linux supports is expanding continuously. We'll cover the hardware requirements for Linux in Section
System Architecture
The Linux kernel is useless in isolation; it participates as one part in a larger system that, as a whole, is useful. As such, it makes sense to discuss the kernel in the context of the entire system. Figure 1 shows a decomposition of the entire Linux operating system:

Figure 1. Decomposition of the Linux System into Major Subsystems
The Linux operating system is composed of four major subsystems:
User Applications -- the set of applications in use on a particular Linux system will be different depending on what the computer system is used for, but typical examples include a word-processing application and a web-browser.
O/S Services -- these are services that are typically considered part of the operating system (a windowing system, command shell, etc.); also, the programming interface to the kernel (compiler tool and library) is included in this subsystem.
Linux Kernel -- this is the main area of interest in this paper; the kernel abstracts and mediates access to the hardware resources, including the CPU.
Hardware Controllers -- this subsystem is comprised of all the possible physical devices in a Linux installation; for example, the CPU, memory hardware, hard disks, and network hardware are all members of this subsystem
Purpose of the Kernel
The Linux kernel presents a virtual machine interface to user processes. Processes are written without needing any knowledge of what physical hardware is installed on a computer -- the Linux kernel abstracts all hardware into a consistent virtual interface. In addition, Linux supports multi-tasking in a manner that is transparent to user processes: each process can act as though it is the only process on the computer, with exclusive use of main memory and other hardware resources. The kernel actually runs several processes concurrently, and is responsible for mediating access to hardware resources so that each process has fair access while inter-process security is maintained.
Overview of the Kernel Structure
The Linux kernel is composed of five main subsystems:
The Process Scheduler (SCHED) is responsible for controlling process access to the CPU. The scheduler enforces a policy that ensures that processes will have fair access to the CPU, while ensuring that necessary hardware actions are performed by the kernel on time.
The Memory Manager (MM) permits multiple process to securely share the machine's main memory system. In addition, the memory manager supports virtual memory that allows Linux to support processes that use more memory than is available in the system. Unused memory is swapped out to persistent storage using the file system then swapped back in when it is needed.
The Virtual File System (VFS) abstracts the details of the variety of hardware devices by presenting a common file interface to all devices. In addition, the VFS supports several file system formats that are compatible with other operating systems.
The Network Interface (NET) provides access to several networking standards and a variety of network hardware.
The Inter-Process Communication (IPC) subsystem supports several mechanisms for process-to-process communication on a single Linux system.
This diagram emphasizes that the most central subsystem is the process scheduler: all other subsystems depend on the process scheduler since all subsystems need to suspend and resume processes. Usually a subsystem will suspend a process that is waiting for a hardware operation to complete, and resume the process when the operation is finished. For example, when a process attempts to send a message across the network, the network interface may need to suspend the process until the hardware has completed sending the message successfully. After the message has been sent (or the hardware returns a failure), the network interface then resumes the process with a return code indicating the success or failure of the operation.

Figure 2. Kernel Subsystem Overview
General overview of the Linux file system
Regular files: most files are just files; they contain normal data, for example text files, executable files or programs, input for or output from a program, and so on.
Directories: files that are lists of other files.
Special files: the mechanism used for input and output. Most special files are in /dev, we will discuss them later.
Links: a system to make a file or directory visible in multiple parts of the system's file tree. We will talk about links in detail.
(Domain) sockets: a special file type, similar to TCP/IP sockets, providing inter-process networking protected by the file system's access control.
Named pipes: act more or less like sockets and form a way for processes to communicate with each other, without using network socket semantics.
POSIX
POSIX, an acronym for "Portable Operating System Interface", is a family of standards specified by the IEEE for maintaining compatibility between operating systems. POSIX defines the application programming interface (API), along with command-line shells and utility interfaces, for software compatibility with variants of Unix and other operating systems. There are actually several different POSIX releases, but the most important are POSIX.1 and POSIX.2, which define system calls and the command-line interface, respectively. The POSIX specifications describe an operating system that is similar to, but not necessarily the same as, Unix. Though POSIX is heavily based on the BSD and System V releases, non-Unix systems such as Microsoft's Windows NT and IBM's Open Edition MVS are POSIX compliant.
POSIX oriented operating systems
Depending upon the degree of compliance with the standards, one can classify operating systems as fully or partly POSIX compatible. Certified products can be found at the IEEE's website. The following, while not officially certified as POSIX compatible, comply in large part:
BeOS (and subsequently Haiku)
FreeBSD
Contiki
Darwin (core of OS X and iOS)
illumos
Linux (most distributions — see Linux Standard Base)
MINIX (now MINIX 3)
NetBSD
Nucleus RTOS
OpenBSD
OpenSolaris
Sanos
SkyOS
Syllable
VSTa
VxWorks
LITERATURE SURVEY
What makes Linux so different is that it is a free implementation of UNIX. It was and still is developed cooperatively by a group of volunteers, primarily on the Internet, who exchange code, report bugs, and fix problems in an open-ended environment. Anyone is welcome to join the Linux development effort. All it takes is interest in hacking a free UNIX clone, and some programming knowledge.
UNIX is one of the most popular operating systems worldwide because of its large support base and distribution. It was originally developed at AT&T as a multitasking system for minicomputers and mainframes in the 1970's, but has since grown to become one of the most widely-used operating systems anywhere, despite its sometimes confusing interface and lack of central standardization.
Many hackers feel that UNIX is the Right Thing--the One True Operating System. Hence, the development of Linux by an expanding group of UNIX hackers who want to get their hands dirty with their own system. Versions of UNIX exist for many systems, from personal computers to supercomputers like the Cray Y-MP. Most versions of UNIX for personal computers are expensive and cumbersome. 
Linux is a free version of UNIX developed primarily by Linus Torvalds at the University of Helsinki in Finland, with the help of many UNIX programmers and wizards across the Internet. Anyone with enough know-how and gumption can develop and change the system. The Linux kernel uses no code from AT&T or any other proprietary source, and much of the software available for Linux was developed by the GNU project of the Free Software Foundation in Cambridge, Massachusetts, U.S.A. However, programmers from all over the world have contributed to the growing pool of Linux software.
Linux was originally developed as a hobby project by Linus Torvalds. It was inspired by Minix, a small UNIX system developed by Andy Tanenbaum. On October 5, 1991, Linus announced the first ``official'' version of Linux, which was version 0.02. At that point, Linus was able to run bash (the GNU Bourne Again Shell) and gcc (the GNU C compiler), but not much else. Again, this was intended as a hacker's system. The primary focus was kernel development--user support, documentation, and distribution had not yet been addressed. Today, the Linux community still seems to treat these issues as secondary to ``real programming''--kernel development.
After version 0.03, Linus bumped up the version number to 0.10, as more people started to work on the system. After several further revisions, Linus increased the version number to 0.95 in March, 1992, to reflect his expectation that the system was ready for an ``official'' release soon. Almost a year and a half later, in late December of 1993, the Linux kernel was still at version 0.99.pl14--asymptotically approaching 1.0.
The Linux system is mostly compatible with several UNIX standards (inasmuch as UNIX has standards) at the source level, including IEEE POSIX.1, UNIX System V, and Berkeley Software Distribution (BSD) UNIX. Linux was developed with source code portability in mind, and it's easy to find commonly used features that are shared by more than one platform. Much of the free UNIX software available on the Internet and elsewhere compiles under Linux ``right out of the box.'' In addition, all of the source code for the Linux system, including the kernel, device drivers, libraries, user programs, and development tools, is freely distributable.
As of 2009, POSIX documentation is divided into two parts:
POSIX.1-2008: POSIX Base Definitions, System Interfaces, and Commands and Utilities (which include POSIX.1, extensions for POSIX.1, Real-time Services, Threads Interface, Real-time Extensions, Security Interface, Network File Access and Network Process-to-Process Communications, User Portability Extensions, Corrections and Extensions, Protection and Control Utilities and Batch System Utilities)
POSIX Conformance Testing: A test suite for POSIX accompanies the standard: PCTS or the POSIX Conformance Test Suite
POSIX.1
POSIX.1, Core Services (incorporates Standard ANSI C) (IEEE Std 1003.1-1988)
Process Creation and Control
Signals
Floating Point Exceptions
Segmentation / Memory Violations
Illegal Instructions
Bus Errors
Timers
File and Directory Operations
Pipes
C Library (Standard C)
I/O Port Interface and Control
POSIX.1b
POSIX.1b, Real-time extensions (IEEE Std 1003.1b-1993)
Priority Scheduling
Real-Time Signals
Clocks and Timers
Semaphores
Message Passing
Shared Memory
Asynchronous and Synchronous I/O
Memory Locking Interface
POSIX.1c
POSIX.1c, Threads extensions (IEEE Std 1003.1c-1995)
Thread Creation, Control, and Cleanup
Thread Scheduling
Thread Synchronization
Signal Handling
POSIX.2
POSIX.2, Shell and Utilities (IEEE Std 1003.2-1992)
Command Interpreter
Utility Programs
Versions after 1997
After 1997, the Austin Group developed the POSIX revisions. The specifications are known as the Single UNIX Specification before they become a POSIX standard upon formal approval by the ISO.
The papers and books referred to are: "Beginning Linux Programming", 4th edition, by Neil Matthew and Richard Stones [1]; "POSIX Threads and the Linux Kernel" by Dave McCracken, IBM Linux Technology Center Austin, Ottawa Linux Symposium 2002 [2]; the POSIX.1 conformance document [3]; and "A user-level checkpointing library for POSIX threads programs" by W. R. Dieter, Department of Electrical Engineering, IEEE, 1999.
PROCESS MANAGEMENT
A process is an address space with one or more threads executing within it (the memory locations that those threads can access). Process details are kept in a process control block (PCB); each process has its own PCB. The Linux process table is a data structure describing all processes that currently exist. Each PCB contains the following information: the process ID (which can be obtained with the getpid() call), the priority, the parent and child processes, the address of the next PCB to run, details of the program memory blocks allocated in physical and virtual memory, details of the allocated heap memory, and the allocated process stack addresses.
Process primitives:
Process creation and execution.
Process Termination.
Signals.
Process creation:
The operating system manages processes using their PIDs.
The pid_t typedef is used to refer to a process ID.
getpid() returns the process ID of the calling process.
getppid() returns the process ID of the parent process.
fork() call:
fork() is used to create a duplicate of the calling process, called the child process. In the parent, the return value of fork() is the child's process ID; in the child, the return value is zero. The parent and child process IDs can be obtained by using getppid() and getpid().
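The following is a minimal sketch of how fork(), getpid(), and getppid() fit together; the printed messages are only illustrative:
#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <unistd.h>

int main(void)
{
    pid_t pid = fork();               /* create a duplicate of this process */

    if (pid == -1) {                  /* fork failed: no child was created */
        perror("fork");
        exit(1);
    } else if (pid == 0) {            /* zero return value: we are in the child */
        printf("child: pid=%d, parent pid=%d\n", (int)getpid(), (int)getppid());
    } else {                          /* positive return value: we are in the parent,
                                         and pid holds the child's process ID */
        printf("parent: pid=%d, child pid=%d\n", (int)getpid(), (int)pid);
    }
    return 0;
}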
Process Termination:
Wait for process termination: the wait() and waitpid() functions may report the status of any traced child that is in the stopped state. A child process is traced if it has called ptrace() with a first argument of PTRACE_TRACEME, or if its parent process has called ptrace() with a first argument of PTRACE_ATTACH and a second argument equal to the child process ID.
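A short sketch of a parent collecting its child's exit status with waitpid(); the exit code 3 and the message are arbitrary examples:
#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    pid_t pid = fork();
    if (pid == 0) {                    /* child: exit with an arbitrary status */
        exit(3);
    } else if (pid > 0) {              /* parent: wait for that specific child */
        int status;
        if (waitpid(pid, &status, 0) == pid && WIFEXITED(status))
            printf("child %d exited with status %d\n", (int)pid, WEXITSTATUS(status));
    }
    return 0;
}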
Before moving on to threads, we need to understand the difference between the fork system call and the creation of new threads. When a process executes a fork call, a new copy of the process is created with its own variables and its own PID. The new process is scheduled independently and (in general) executes almost independently of the process that created it. When we create a new thread in a process, in contrast, the new thread of execution gets its own stack (and hence local variables) but shares global variables, file descriptors, signal handlers, and its current directory state with the process that created it.
Thread:
A thread is a path of execution within a process, and it is the smallest sequence of programmed instructions that can be managed independently by an operating system scheduler. Thread execution is tracked through the process control block. One thread does not interfere with another. The <pthread.h> header file declares the thread functionality.
A thread lives within a process and shares the process's instructions, address space and data, file descriptors, and signal handlers. Threads have their own library (pthread), which must be linked in when we use threads in our applications. Each thread maintains its own thread ID, stack pointer, program counter, and set of registers.
Threads fall into two categories:
User level threads.
Kernel level threads.
Figure 3. User level thread vs kernel level thread
Multithreading:
On a single processor, multithreading generally occurs by time-division multiplexing (as in multitasking): the processor switches between the different threads. On a multiprocessor or multi-core system, threads can be truly concurrent, with each processor or core executing a separate thread simultaneously. Many modern operating systems directly support both time-sliced and multiprocessor threading with a process scheduler. Two common thread libraries are POSIX threads (pthread.h) and Java threads.
Thread creation:
# include <pthread.h>
int pthread_create(pthread_t *thread, const pthread_attr_t *attr,
void *(*start_routine)(void *), void *arg);
thread is a pointer to a memory location used to store the ID of the new thread.
attr specifies special attributes for the thread; it is usually NULL, for the defaults.
start_routine is the function that the thread will execute.
arg is the argument passed to start_routine; it may be NULL or a pointer to data for the thread.
Thread join:
pthread_join - wait for thread termination
int pthread_join(pthread_t thread, void **retval);
pthread_join() function waits for the thread specified by thread to terminate. If that thread has already terminated, then pthread_join() returns immediately.
If retval is not NULL, then pthread_join() copies the exit status of the target thread into the location pointed to by retval.
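A minimal sketch combining pthread_create and pthread_join; the function name thread_function, the string argument, and the return value 42 are illustrative only:
#include <stdio.h>
#include <pthread.h>

void *thread_function(void *arg)       /* start routine: receives arg from pthread_create */
{
    printf("hello from thread: %s\n", (char *)arg);
    return (void *)42;                 /* value collected by pthread_join */
}

int main(void)
{
    pthread_t a_thread;
    void *thread_result;

    if (pthread_create(&a_thread, NULL, thread_function, "some data") != 0)
        return 1;
    if (pthread_join(a_thread, &thread_result) != 0)   /* wait for termination */
        return 1;
    printf("thread returned %ld\n", (long)thread_result);
    return 0;
}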
Thread attributes:
Thread attributes are thread characteristics that affect the behavior of the thread.
int pthread_attr_setdetachstate(pthread_attr_t *attr, int detachstate);
int pthread_attr_setschedpolicy(pthread_attr_t *attr, int policy);
int pthread_attr_setschedparam(pthread_attr_t *attr, const struct
sched_param *param);
int pthread_attr_setinheritsched(pthread_attr_t *attr, int inherit);
int pthread_attr_setscope(pthread_attr_t *attr, int scope);
int pthread_attr_setstacksize(pthread_attr_t *attr, size_t stacksize);
detachedstate:
This attribute allows you to avoid the need for threads to rejoin. As with most of these _set functions, it takes a pointer to the attribute and a flag to determine the state required. The two possible flag values for pthread_attr_setdetachstate are PTHREAD_CREATE_JOINABLE and PTHREAD_CREATE_DETACHED. By default, the attribute has the value PTHREAD_CREATE_JOINABLE.
schedpolicy:
This controls how threads are scheduled. The options are SCHED_OTHER, SCHED_RR, and SCHED_FIFO. By default, the attribute is SCHED_OTHER. The other two types of scheduling are available only to processes running with superuser permissions, because they are both real-time policies with slightly different behavior: SCHED_RR uses a round-robin scheduling scheme, and SCHED_FIFO uses a "first in, first out" policy.
schedparam:
This is a partner to schedpolicy and allows control over the scheduling of threads running with schedule policy SCHED_OTHER.
inheritsched:
This attribute takes two possible values: PTHREAD_EXPLICIT_SCHED and PTHREAD_INHERIT_SCHED. By default, the value is PTHREAD_EXPLICIT_SCHED.
scope:
This attribute controls how the scheduling of a thread is calculated. Linux currently supports only the value PTHREAD_SCOPE_SYSTEM.
stacksize:
This attribute controls the thread creation stack size, set in bytes. This is part of the "optional" section of the specification and is supported only on implementations where _POSIX_THREAD_ATTR_STACKSIZE is defined. Linux implements threads with a large amount of stack by default, so the feature is generally redundant on Linux.
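As an illustrative sketch of using a thread attribute object, the following creates a detached thread so that no pthread_join is needed; the sleep at the end is only a crude way to let the detached thread run before the program exits:
#include <stdio.h>
#include <unistd.h>
#include <pthread.h>

void *background_work(void *arg)
{
    (void)arg;
    printf("detached thread running\n");
    return NULL;
}

int main(void)
{
    pthread_t tid;
    pthread_attr_t attr;

    pthread_attr_init(&attr);                                     /* start from default attributes */
    pthread_attr_setdetachstate(&attr, PTHREAD_CREATE_DETACHED);  /* no pthread_join needed */

    if (pthread_create(&tid, &attr, background_work, NULL) != 0)
        return 1;
    pthread_attr_destroy(&attr);                                  /* attribute object no longer needed */

    sleep(1);           /* crude wait so the detached thread gets a chance to run */
    return 0;
}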
Advantages of Threads:
Creating a new thread has some distinct advantages over creating a new process in certain circumstances.
Switching between threads requires the operating system to do much less work than switching between processes.
Multiple threads inside a process enable a single process to better utilize the hardware resources available.
Disadvantages:
Writing multithreaded programs requires very careful design.
Debugging a multithreaded program is much, much harder than debugging a single-threaded one, because the interactions between the threads are very hard to control.
Table 1. Process vs Thread
Process: difficult to create. Thread: easier to create, since it does not require a separate address space.
Process: no sharing with other processes. Thread: threads share data structures, so careful programming is needed to ensure that a shared structure is modified by only one thread at a time.
Process: not lightweight. Thread: lightweight, uses fewer resources than a process.
Process: processes are independent of each other. Thread: threads are interdependent and share the same address space.
SIGNALS:
A signal is the software analog of an interrupt; a signal sent to a task indicates that some asynchronous event has occurred. A task can attach a signal handler to take appropriate action when the signal is received. Upon completion of signal handling, normal task execution is resumed (unless the signal corresponds to an exception). A small example of attaching a handler with sigaction follows the signal table below.
The following additional signals beyond those required by POSIX.1 occur in LynxOS:
Signal Name Description
SIGTRAP Trace trap debugger trap
SIGCORE Kill with core dump (sent by the user)
SIGSYS Bad system call number
SIGURG Urgent condition (data) on socket
SIGIO I/O possible on descriptor (sent when I/O is possible (data has arrived) on a file descriptor on which an fcntl(..., FASYNC) was performed.)
SIGVTALRM Virtual time alarm. This signal is sent when a virtual timer (set by setitimer(ITIMER_VIRTUAL)) expires.
SIGPROF Profiling alarm. This signal is sent when a profiling timer (set by setitimer(ITIMER_PROF)) expires.
SIGWINCH Window size change
SIGPRIO Sent to a process when its priority or process group is changed
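The signals in the table above are LynxOS additions; attaching a handler itself uses the standard POSIX sigaction call. A minimal sketch using SIGINT, with an illustrative handler and messages:
#include <stdio.h>
#include <signal.h>
#include <unistd.h>

static volatile sig_atomic_t got_signal = 0;

static void handler(int sig)           /* signal handler: keep it minimal */
{
    (void)sig;
    got_signal = 1;
}

int main(void)
{
    struct sigaction sa;
    sa.sa_handler = handler;
    sigemptyset(&sa.sa_mask);          /* block no extra signals inside the handler */
    sa.sa_flags = 0;
    sigaction(SIGINT, &sa, NULL);      /* attach handler to SIGINT (Ctrl-C) */

    printf("waiting for SIGINT...\n");
    while (!got_signal)
        pause();                       /* sleep until a signal arrives */
    printf("got SIGINT, resuming normal execution\n");
    return 0;
}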
SYNCHRONIZATION
Previously we have seen that both threads execute together, but the method of switching between them was difficult and very inefficient. Fortunately, there are specifically designed functions to provide better ways to control the execution of threads and access to critical sections of code. The two basic methods are: semaphores, which act as gatekeepers around a piece of code, and mutexes, which act as a mutual exclusion device to protect sections of code.
Synchronization with Semaphores
There are two sets of interface functions for semaphores: one is taken from the POSIX Realtime Extensions and is used for threads, and the other is known as System V semaphores, which are commonly used for process synchronization. A semaphore is a special type of variable that can be incremented or decremented, but crucially, access to the variable is guaranteed to be atomic, even in a multithreaded program. The simplest type is the binary semaphore, which takes only the values 0 and 1; the more general counting semaphore can take a wider range of values.
The four basic semaphore functions used in threads are all quite simple. A semaphore is created with the sem_init function, which is declared as follows:
#include <semaphore.h>
int sem_init (sem_t *sem, int pshared, unsigned int value);
Ex:
sem_t bin_sem;
sem_init(&bin_sem, 0, 0);
This function initializes the semaphore object pointed to by sem and gives it an initial integer value. The pshared parameter controls the type of semaphore. If the value of pshared is 0, the semaphore is local to the current process; otherwise, the semaphore may be shared between processes. Here we use only semaphores that are not shared between processes. At the time of writing, Linux doesn't support this sharing, and passing a nonzero value for pshared will cause the call to fail. The next pair of functions controls the value of the semaphore and is declared as follows:
int sem_wait (sem_t * sem);
int sem_post (sem_t * sem);
Ex:
sem_wait (&bin_sem);
sem_post (&bin_sem);
These both take a pointer to the semaphore object initialized by a call to sem_init. The sem_post function atomically increases the value of the semaphore by 1. Atomically here means that if two threads simultaneously try to increase the value of a single semaphore by 1, they do not interfere with each other, as might happen if two programs read, increment, and write a value to a file at the same time. If both programs try to increase the value by 1, the semaphore will always be correctly increased in value by 2. The sem_wait function atomically decreases the value of the semaphore by one, but always waits until the semaphore has a nonzero count first. Thus, if you call sem_wait on a semaphore with a value of 2, the thread will continue executing but the semaphore will be decreased to 1. If sem_wait is called on a semaphore with a value of 0, the function will wait until some other thread has incremented the value so that it is no longer 0. If two threads are both waiting in sem_wait for the same semaphore to become nonzero and it is incremented once by a third process, only one of the two waiting processes will get to decrement the semaphore and continue; the other will remain waiting.
The last semaphore function is sem_destroy. This function tidies up the semaphore when you have finished with it. It is declared as follows:
int sem_destroy (sem_t * sem);
Ex:
sem_destroy (&bin_sem);
Again, this function takes a pointer to a semaphore and tidies up any resources that it may have. If you attempt to destroy a semaphore for which some thread is waiting, you will get an error. Like most Linux functions, these functions all return 0 on success.
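Putting the four calls together, a minimal sketch in which the main thread releases a worker thread by posting the semaphore; the thread and variable names are illustrative:
#include <stdio.h>
#include <pthread.h>
#include <semaphore.h>

static sem_t bin_sem;                  /* the binary semaphore from the text */

void *worker(void *arg)
{
    (void)arg;
    sem_wait(&bin_sem);                /* blocks until main posts the semaphore */
    printf("worker: semaphore obtained, doing work\n");
    return NULL;
}

int main(void)
{
    pthread_t tid;

    sem_init(&bin_sem, 0, 0);          /* not shared between processes, initial value 0 */
    pthread_create(&tid, NULL, worker, NULL);

    printf("main: releasing worker\n");
    sem_post(&bin_sem);                /* increment: wakes the waiting worker */

    pthread_join(tid, NULL);
    sem_destroy(&bin_sem);             /* tidy up once no thread is waiting */
    return 0;
}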
Synchronization with Mutexes
The other way of synchronizing access in multithreaded programs is with mutexes which act by allowing the programmer to "lock" an object so that only one thread can access it. To control access to a critical section of code you lock a mutex before entering the code section and then unlock it when you have finished. The basic functions required to use mutexes are very similar to those needed for semaphores. They are declared as follows:
#include <pthread.h>
int pthread_mutex_init(pthread_mutex_t *mutex, const
pthread_mutexattr_t *mutexattr);
int pthread_mutex_lock(pthread_mutex_t *mutex);
int pthread_mutex_unlock(pthread_mutex_t *mutex);
int pthread_mutex_destroy(pthread_mutex_t *mutex);
Ex:
#include<pthread.h>
pthread_mutex_t work_mutex;
pthread_mutex_init (&work_mutex, NULL);
pthread_mutex_lock (&work_mutex);
-------------------task---------------------
pthread_mutex_unlock (&work_mutex);
As usual, 0 is returned for success, and on failure an error code is returned, but errno is not set; you must use the return code. As with semaphores, these functions all take a pointer to a previously declared object, in this case a pthread_mutex_t. The extra attribute parameter of pthread_mutex_init allows you to provide attributes for the mutex, which control its behavior. The attribute type by default is "fast." This has the slight drawback that, if your program tries to call pthread_mutex_lock on a mutex that it has already locked, the program will block. Because the thread that holds the lock is the one that is now blocked, the mutex can never be unlocked and the program is deadlocked. It is possible to alter the attributes of the mutex so that it either checks for this and returns an error or acts recursively and allows multiple locks by the same thread if there are the same number of unlocks afterward.
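A minimal sketch of a mutex protecting a shared counter between two threads; the counter variable and the loop count of 100000 are illustrative:
#include <stdio.h>
#include <pthread.h>

static pthread_mutex_t work_mutex = PTHREAD_MUTEX_INITIALIZER;  /* static initializer, default "fast" type */
static long counter = 0;               /* shared data protected by the mutex */

void *increment(void *arg)
{
    (void)arg;
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&work_mutex);    /* enter critical section */
        counter++;
        pthread_mutex_unlock(&work_mutex);  /* leave critical section */
    }
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    pthread_create(&t1, NULL, increment, NULL);
    pthread_create(&t2, NULL, increment, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %ld\n", counter);     /* always 200000 with the mutex in place */
    return 0;
}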
INTER PROCESS COMMUNICATION (IPC)
Inter-process communication is a set of methods for the exchange of data among multiple threads in one or more processes. The processes may be running on one or more computers connected by a network. IPC methods are divided into methods for message passing, synchronization, shared memory, and remote procedure calls (RPC). The method of IPC used may vary based on the bandwidth and latency of communication between the threads, and the type of data being communicated. The following are some of the basic IPC methods discussed in this paper:
Pipes
Shared Memory
Message Queues
Semaphores
PIPES:
The term pipe means connecting a data flow from one process to another. For example, in the shell pipeline cmd1 | cmd2, the shell arranges the standard input and output of the two commands so that:
The standard input to cmd1 comes from the terminal keyboard.
The standard output from cmd1 is fed to cmd2 as its standard input.
The standard output from cmd2 is connected to the terminal screen.
Process pipes:
The simplest way of passing data between two programs is with the popen and pclose functions. These have the following prototypes:
#include <stdio.h>
FILE *popen (const char *command, const char *open_mode);
int pclose (FILE *stream_to_close);
Ex:
#include <stdio.h>
FILE *r_f;
r_f = popen ("uname -a", "r");
---------- checks the details of the OS used-----------------
pclose (r_f);
popen:
The popen function allows a program to invoke another program as a new process and either pass data to it or receive data from it. The command string is the name of the program to run, together with any parameters (open_mode must be either "r" or "w").
If the open_mode is "r", output from the invoked program is made available to the invoking program and can be read from the file stream FILE * returned by popen, using the usual stdio.h library functions for reading (for example, fread). However, if open_mode is "w", the program can send data to the invoked command with calls to fwrite. The invoked program can then read the data on its standard input. Normally, the program being invoked won't be aware that it's reading data from another process; it simply reads its standard input stream and acts on it.
pclose:
When the process started with popen has finished, you can close the file stream associated with it using pclose. The pclose call will return only when the process started with popen finishes. If it's still running when pclose is called, the pclose call will wait for the process to finish. The pclose call normally returns the exit code of the process whose file stream it is closing. If the invoking process has already executed a wait statement before calling pclose, the exit status will be lost because the invoked process has finished and pclose will return –1, with errno set to ECHILD.
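A slightly fuller sketch of popen and pclose that reads the command's output line by line; the command uname -a follows the earlier example, and the messages are illustrative:
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    char buffer[256];
    FILE *fp = popen("uname -a", "r");     /* run the command and read its output */
    if (fp == NULL) {
        perror("popen");
        exit(1);
    }
    while (fgets(buffer, sizeof(buffer), fp) != NULL)
        printf("output: %s", buffer);      /* echo each line of the command's output */

    int status = pclose(fp);               /* wait for the command and collect its exit code */
    printf("command exit status: %d\n", status);
    return 0;
}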
SEMAPHORES:
All the Linux semaphore functions operate on arrays of general semaphores rather than a single binary semaphore. At first sight, this just seems to make things more complicated, but in complex cases where a process needs to lock multiple resources, the ability to operate on an array of semaphores is a big advantage. The semaphore function definitions are
#include <sys/sem.h>
int semget(key_t key, int num_sems, int sem_flags);
int semctl(int sem_id, int sem_num, int command, ...);
int semop(int sem_id, struct sembuf *sem_ops, size_t num_sem_ops);
The header file sys/sem.h usually relies on two other header files, sys/types.h and sys/ipc.h. Normally they are automatically included by sys/sem.h and you do not need to explicitly add a #include for them. The functions were designed to work for arrays of semaphore values, which makes their operation significantly more complex than would have been required for a single semaphore.
semget:
The semget function creates a new semaphore or obtains the semaphore key of an existing semaphore:
int semget(key_t key, int num_sems, int sem_flags);
The first parameter, key, is an integral value used to allow unrelated processes to access the same semaphore. All semaphores are accessed indirectly by the program supplying a key, for which the system then generates a semaphore identifier. The semaphore key is used only with semget. All other semaphore functions use the semaphore identifier returned from semget.
There is a special semaphore key value, IPC_PRIVATE, that is intended to create a semaphore that only the creating process could access, but this rarely has any useful purpose. You should provide a unique, nonzero integer value for key when you want to create a new semaphore. The num_sems parameter is the number of semaphores required. This is almost always 1.
The sem_flags parameter is a set of flags, very much like the flags to the open function. The lower nine bits are the permissions for the semaphore, which behave like file permissions. In addition, these can be bitwise ORed with the value IPC_CREAT to create a new semaphore. It's not an error to have the IPC_CREAT flag set and give the key of an existing semaphore. The IPC_CREAT flag is silently ignored if it is not required. We can use IPC_CREAT and IPC_EXCL together to ensure that you obtain a new, unique semaphore. It will return an error if the semaphore already exists. The semget function returns a positive (nonzero) value on success; this is the semaphore identifier used in the other semaphore functions. On error, it returns –1.
semop:
The function semop is used for changing the value of the semaphore:
int semop(int sem_id, struct sembuf *sem_ops, size_t num_sem_ops);
The first parameter, sem_id, is the semaphore identifier, as returned from semget. The second parameter, sem_ops, is a pointer to an array of structures, each of which will have at least the following members:
struct sembuf
{
short sem_num;
short sem_op;
short sem_flg;
};
The first member, sem_num, is the semaphore number, usually 0 unless you're working with an array of semaphores. The sem_op member is the value by which the semaphore should be changed. In general, only two values are used: -1, the operation to wait for the semaphore to become available, and +1, the operation to signal that the semaphore is now available.
The final member, sem_flg, is usually set to SEM_UNDO. This causes the operating system to track the changes made to the semaphore by the current process and, if the process terminates without releasing the semaphore, allows the operating system to automatically release the semaphore if it was held by this process. It's good practice to set sem_flg to SEM_UNDO, unless you specifically require different behavior. If you do decide you need a value other than SEM_UNDO, it's important to be consistent, or you can get very confused as to whether the kernel is attempting to "tidy up" your semaphores when your process exits. All actions called for by semop are taken together to avoid a race condition implied by the use of multiple semaphores. You can find full details of the processing of semop in the manual pages.
semctl:
The semctl function allows direct control of semaphore information:
int semctl(int sem_id, int sem_num, int command, ...);
The first parameter, sem_id, is a semaphore identifier, obtained from semget. The sem_num parameter is the semaphore number. The two common values of command are:
SETVAL: Used for initializing a semaphore to a known value. The value required is passed as the value member of the union semun. This is required to set the semaphore up before it's used for the first time.
IPC_RMID: Used for deleting a semaphore identifier when it's no longer required.
The semctl function returns different values depending on the command parameter. For SETVAL and IPC_RMID it returns 0 for success and –1 on error
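Putting semget, semctl, and semop together, a minimal single-process sketch of a binary semaphore guarding a critical section; the key value 1234 is an arbitrary example, and on Linux the program must define union semun itself:
#include <sys/types.h>
#include <sys/ipc.h>
#include <sys/sem.h>
#include <stdio.h>
#include <stdlib.h>

union semun {                 /* the caller defines this union on Linux */
    int val;
    struct semid_ds *buf;
    unsigned short *array;
};

static int sem_id;

static void semaphore_p(void)                     /* wait (decrement) */
{
    struct sembuf op = {0, -1, SEM_UNDO};
    if (semop(sem_id, &op, 1) == -1) { perror("semop P"); exit(1); }
}

static void semaphore_v(void)                     /* signal (increment) */
{
    struct sembuf op = {0, +1, SEM_UNDO};
    if (semop(sem_id, &op, 1) == -1) { perror("semop V"); exit(1); }
}

int main(void)
{
    union semun arg;

    sem_id = semget((key_t)1234, 1, 0666 | IPC_CREAT);   /* create or obtain one semaphore */
    if (sem_id == -1) { perror("semget"); exit(1); }

    arg.val = 1;                                          /* initialize to 1 (binary semaphore) */
    if (semctl(sem_id, 0, SETVAL, arg) == -1) { perror("semctl SETVAL"); exit(1); }

    semaphore_p();                                        /* enter critical section */
    printf("in critical section\n");
    semaphore_v();                                        /* leave critical section */

    if (semctl(sem_id, 0, IPC_RMID) == -1)                /* remove the semaphore */
        perror("semctl IPC_RMID");
    return 0;
}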
SHARED MEMORY:
Shared memory is the second of the three IPC facilities. It allows two unrelated processes to access the same logical memory. Shared memory is a very efficient way of transferring data between two running processes. Although the X/Open standard doesn't require it, it's probable that most implementations of shared memory arrange for the memory being shared between different processes to be the same physical memory.

Figure 4. Working of shared memory
Shared memory is a special range of addresses that is created by IPC for one process and appears in the address space of that process. Other processes can then "attach" the same shared memory segment into their own address space. All processes can access the memory locations just as if the memory had been allocated by malloc. If one process writes to the shared memory, the changes immediately become visible to any other process that has access to the same shared memory. Shared memory provides an efficient way of sharing and passing data between multiple processes. There are no automatic facilities to prevent a second process from starting to read the shared memory before the first process has finished writing to it. It's the responsibility of the programmer to synchronize access. Figure 4 shows an illustration of how shared memory works.
The functions for shared memory resemble those for semaphores:
#include <sys/shm.h>
void *shmat (int shm_id, const void *shm_addr, int shmflg);
int shmctl (int shm_id, int cmd, struct shmid_ds *buf);
int shmdt (const void *shm_addr);
int shmget (key_t key, size_t size, int shmflg);
As with semaphores, the include files sys/types.h and sys/ipc.h are normally included automatically by sys/shm.h.
shmget:
By using the following function we can create the shared memory:
int shmget(key_t key, size_t size, int shmflg);
As with semaphores, the program provides key, which effectively names the shared memory segment, and the shmget function returns a shared memory identifier that is used in subsequent shared memory functions. There's a special key value, IPC_PRIVATE, that creates shared memory private to the process. The second parameter, size, specifies the amount of memory required in bytes. The third parameter, shmflg, consists of nine permission flags that are used in the same way as the mode flags for creating files. A special bit defined by IPC_CREAT must be bitwise ORed with the permissions to create a new shared memory segment. It's not an error to have the IPC_CREAT flag set and pass the key of an existing shared memory segment. The IPC_CREAT flag is silently ignored if it is not required. The permission flags are very useful with shared memory because they allow a process to create shared memory that can be written by processes owned by the creator of the shared memory, but only read by processes that other users have created
shmat:
After creating shared memory, the process needs to access it so shmat function is necessary:
void *shmat(int shm_id, const void *shm_addr, int shmflg);
The first parameter, shm_id, is the shared memory identifier returned from shmget. The second parameter, shm_addr, is the address at which the shared memory is to be attached to the current process. The third parameter, shmflg, is a set of bitwise flags. The two possible values are SHM_RND, which, in conjunction with shm_addr, controls the address at which the shared memory is attached, and SHM_RDONLY, which makes the attached memory read-only. If the shmat call is successful, it returns a pointer to the first byte of shared memory. On failure, (void *)-1 is returned.
shmdt:
The shmdt function detaches the shared memory from the current process. It takes a pointer to the address returned by shmat. On success, it returns 0, on error –1. Note that detaching the shared memory doesn't delete it; it just makes that memory unavailable to the current process.
shmctl:
The control functions for shared memory are simpler than the more complex ones for semaphores:
int shmctl(int shm_id, int command, struct shmid_ds *buf);
The shmid_ds structure has at least the following members:
struct shmid_ds
{
uid_t shm_perm.uid;
uid_t shm_perm.gid;
mode_t shm_perm.mode;
};
The first parameter, shm_id, is the identifier returned from shmget. The second parameter, command, is the action to take. It can take three values, shown in Table 2 below (a complete example follows the table). The third parameter, buf, is a pointer to the structure containing the modes and permissions for the shared memory.
Table 2. Description of the commands for shared memory
Command Description
IPC_STAT Sets the data in the shmid_ds structure to reflect the values associated with the shared memory
IPC_SET Sets the values associated with the shared memory to those provided in the shmid_ds structure, if the process has permission to do so.
IPC_RMID Deletes the shared memory segment
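Putting the four shared memory calls together, a minimal single-process sketch that creates, attaches, uses, detaches, and removes a segment; the key 5678 and the text written are arbitrary examples, and in a real application a second process would attach with the same key and access would need to be synchronized:
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/types.h>
#include <sys/ipc.h>
#include <sys/shm.h>

int main(void)
{
    /* Create (or obtain) a 4 KB segment; 5678 is an arbitrary example key. */
    int shm_id = shmget((key_t)5678, 4096, 0666 | IPC_CREAT);
    if (shm_id == -1) { perror("shmget"); exit(1); }

    void *shared = shmat(shm_id, NULL, 0);       /* attach at an address chosen by the kernel */
    if (shared == (void *)-1) { perror("shmat"); exit(1); }

    strcpy((char *)shared, "hello via shared memory");   /* write into the segment */
    printf("read back: %s\n", (char *)shared);           /* read it back from the same mapping */

    if (shmdt(shared) == -1)                     /* detach from this process */
        perror("shmdt");
    if (shmctl(shm_id, IPC_RMID, NULL) == -1)    /* delete the segment */
        perror("shmctl IPC_RMID");
    return 0;
}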
MESSAGE QUEUES:
Message queues provide a reasonably easy and efficient way of passing data between two unrelated processes. They have the advantage over named pipes that the message queue exists independently of both the sending and receiving processes, which removes some of the difficulties that occur in synchronizing the opening and closing of named pipes.
The message queue function definitions are:
#include <sys/msg.h>
int msgctl(int msqid, int cmd, struct msqid_ds *buf);
int msgget(key_t key, int msgflg);
int msgrcv(int msqid, void *msg_ptr, size_t msg_sz, long int msgtype, int msgflg);
int msgsnd(int msqid, const void *msg_ptr, size_t msg_sz, int msgflg);
As with semaphores and shared memory, the include files sys/types.h and sys/ipc.h are normally included automatically by sys/msg.h.
msgget:
To access a message queue using the msgget function:
int msgget(key_t key, int msgflg);
The program must provide a key value that, as with other IPC facilities, names a particular message queue. The special value IPC_PRIVATE creates a private queue, which in theory is accessible only by the current process. As with semaphores and messages, on some Linux systems the message queue may not actually be private. Because a private queue has very little purpose, that's not a significant problem. As before, the second parameter, msgflg, consists of nine permission flags. A special bit defined by IPC_CREAT must be bitwise ORed with the permissions to create a new message queue. It's not an error to set the IPC_CREAT flag and give the key of an existing message queue. The IPC_CREAT flag is silently ignored if the message queue already exists. The msgget function returns a positive number, the queue identifier, on success or –1 on failure.
msgsnd:
The msgsnd function allows you to add a message to a message queue:
int msgsnd(int msqid, const void *msg_ptr, size_t msg_sz, int msgflg);
The structure of the message is constrained in two ways. First, it must be smaller than the system limit, and second, it must start with a long int, which will be used as a message type in the receive function. When you're using messages, it's best to define your message structure something like this:
struct my_message
{
long int message_type;
/* The data you wish to transfer */
};
Because the message_type is used in message reception, you can't simply ignore it. You must declare your data structure to include it, and it's also wise to initialize it so that it contains a known value. The first parameter, msqid, is the message queue identifier returned from a msgget function. The second parameter, msg_ptr, is a pointer to the message to be sent, which must start with a long int type as described previously. The third parameter, msg_sz, is the size of the message pointed to by msg_ptr. This size must not include the long int message type. The fourth parameter, msgflg, controls what happens if either the current message queue is full or the system wide limit on queued messages has been reached. If msgflg has the IPC_NOWAIT flag set, the function will return immediately without sending the message and the return value will be –1.
If the msgflg has the IPC_NOWAIT flag clear, the sending process will be suspended, waiting for space to become available in the queue. On success, the function returns 0, on failure –1. If the call is successful, a copy of the message data has been taken and placed on the message queue.
msgrcv:
The msgrcv function retrieves messages from a message queue:
int msgrcv(int msqid, void *msg_ptr, size_t msg_sz, long int msgtype, int msgflg);
The first parameter, msqid, is the message queue identifier returned from a msgget function. The second parameter, msg_ptr, is a pointer to the message to be received, which must start with a long int type as described previously in the msgsnd function. The third parameter, msg_sz, is the size of the message pointed to by msg_ptr, not including the long int message type. The fourth parameter, msgtype, is a long int, which allows a simple form of reception priority to be implemented. If msgtype has the value 0, the first available message in the queue is retrieved. If it's greater than zero, the first message with the same message type is retrieved. If it's less than zero, the first message that has a type the same as or less than the absolute value of msgtype is retrieved. This sounds more complicated than it actually is in practice. If you simply want to retrieve messages in the order in which they were sent, set msgtype to 0. If you want to retrieve only messages with a specific message type, set msgtype equal to that value. If you want to receive messages with a type of n or smaller, set msgtype to -n.
The fifth parameter, msgflg, controls what happens when no message of the appropriate type is waiting to be received. If the IPC_NOWAIT flag in msgflg is set, the call will return immediately with a return value of –1. If the IPC_NOWAIT flag of msgflg is clear, the process will be suspended, waiting for an appropriate type of message to arrive. On success, msgrcv returns the number of bytes placed in the receive buffer, the message is copied into the user-allocated buffer pointed to by msg_ptr, and the data is deleted from the message queue. It returns –1 on error.
msgctl:
The final message queue function is msgctl, which is very similar to that of the control function for shared memory:
int msgctl(int msqid, int command, struct msqid_ds *buf);
The msqid_ds structure has at least the following members:
struct msqid_ds
{
uid_t msg_perm.uid;
uid_t msg_perm.gid;
mode_t msg_perm.mode;
};
The first parameter, msqid, is the identifier returned from msgget. The second parameter, command, is the action to take. It can take three values, described in Table 3 below (a short example follows the table):
Table 3. Description of the commands for message queues
Command Description
IPC_STAT Sets the data in the msqid_ds structure to reflect the values associated with the message queue.
IPC_SET Sets the values associated with the message queue to those provided in the msqid_ds structure, if the process has permission to do so.
IPC_RMID Deletes the message queue.
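Putting the message queue calls together, a minimal single-process sketch that creates a queue, sends one message, receives it, and removes the queue; the key 4321, the message text, and the 64-byte payload are arbitrary examples:
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/types.h>
#include <sys/ipc.h>
#include <sys/msg.h>

struct my_message {
    long int message_type;    /* must come first; used for selection by msgrcv */
    char text[64];            /* the data we wish to transfer */
};

int main(void)
{
    /* Create (or obtain) a queue; 4321 is an arbitrary example key. */
    int msgid = msgget((key_t)4321, 0666 | IPC_CREAT);
    if (msgid == -1) { perror("msgget"); exit(1); }

    struct my_message out;
    out.message_type = 1;
    strcpy(out.text, "hello queue");
    /* The size excludes the long int message type. */
    if (msgsnd(msgid, &out, sizeof(out.text), 0) == -1) { perror("msgsnd"); exit(1); }

    struct my_message in;
    /* msgtype 0: retrieve the first available message, whatever its type. */
    if (msgrcv(msgid, &in, sizeof(in.text), 0, 0) == -1) { perror("msgrcv"); exit(1); }
    printf("received: %s\n", in.text);

    if (msgctl(msgid, IPC_RMID, NULL) == -1)     /* delete the queue */
        perror("msgctl IPC_RMID");
    return 0;
}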
INTER PROCESS COMMUNICATION PROBLEMS
Inter Process communication (IPC) requires the use of resources, such as memory, which are shared between processes or threads. If special care is not taken to correctly coordinate or synchronize access to shared resources, a number of problems can potentially arise such as overwrite and overflow problems in memory.
Let us briefly describe these IPC problems:
Starvation
Deadlock
Data consistency
Shared buffer
Priority inversion
Starvation:
A Starvation condition can occur when multiple processes or threads compete for access to a shared resource. One process may monopolize the resource while others are denied access.
Example:

Figure 5. Starvation example
Frequency of occurrence: P1: 2 sec; P2: 3 sec; P3: 4 sec.
Suppose process P3 is currently accessing the critical section. Now suppose process P1 arrives with a higher priority (shown in brackets), so P1 will access the critical section next. Since P1 recurs every 2 seconds, it never allows the other processes to execute; P1 monopolizes the resource and the others are denied access.
Deadlock:
A Deadlock condition can occur when two processes need multiple shared resources at the same time in order to continue.
Example: Thread A is waiting to receive data from thread B. Thread B is waiting to receive data from thread A. The two threads are in deadlock because they are both waiting for the other and not continuing to execute.

Figure 6. Deadlock problem
Data Consistency:
When shared resources are modified at the same time by multiple processes, data errors or inconsistencies may occur.
Sections of a program that might cause these problems are called critical sections.
Failure to coordinate access to a critical section is called a race condition because success or failure depends on the ability of one process to exit the critical section before another process enters the critical section.
Example: consider two processes, a bus and a car, both about to cross a junction. If they cross the junction without any signal, there is a danger of collision, so signals are used to coordinate access.

Figure 7. Data Consistency Problem
Shared Buffer Problem:
In computing, the producer-consumer problem (also known as the shared-buffer problem) is a classic example of a multi-process synchronization problem. It can be solved by using semaphores.
Example:
Producer-consumer problem:
The producer's job is to generate a piece of data, put it into the buffer and start again. At the same time, the consumer is consuming the data (i.e., removing it from the buffer) one piece at a time. The problem is to make sure that the producer won't try to add data into the buffer if it's full and that the consumer won't try to remove data from an empty buffer.
The solution for the producer is to either go to sleep or discard data if the buffer is full. The next time the consumer removes an item from the buffer, it notifies the producer, who starts to fill the buffer again. In the same way, the consumer can go to sleep if it finds the buffer to be empty. The next time the producer puts data into the buffer, it wakes up the sleeping consumer. The solution can be reached by means of inter-process communication, typically using semaphores.
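A sketch of this solution using POSIX semaphores and a mutex, along the lines of the earlier synchronization section; the buffer size, item count, and names are illustrative:
#include <stdio.h>
#include <pthread.h>
#include <semaphore.h>

#define BUFFER_SIZE 4
#define ITEMS 10

static int buffer[BUFFER_SIZE];
static int in = 0, out = 0;                 /* next slot to write / read */
static sem_t empty_slots, full_slots;       /* counting semaphores */
static pthread_mutex_t buf_mutex = PTHREAD_MUTEX_INITIALIZER;

void *producer(void *arg)
{
    (void)arg;
    for (int i = 0; i < ITEMS; i++) {
        sem_wait(&empty_slots);             /* wait until there is room in the buffer */
        pthread_mutex_lock(&buf_mutex);
        buffer[in] = i;
        in = (in + 1) % BUFFER_SIZE;
        pthread_mutex_unlock(&buf_mutex);
        sem_post(&full_slots);              /* signal: one more item available */
    }
    return NULL;
}

void *consumer(void *arg)
{
    (void)arg;
    for (int i = 0; i < ITEMS; i++) {
        sem_wait(&full_slots);              /* wait until at least one item is available */
        pthread_mutex_lock(&buf_mutex);
        int item = buffer[out];
        out = (out + 1) % BUFFER_SIZE;
        pthread_mutex_unlock(&buf_mutex);
        sem_post(&empty_slots);             /* signal: one more free slot */
        printf("consumed %d\n", item);
    }
    return NULL;
}

int main(void)
{
    pthread_t prod, cons;
    sem_init(&empty_slots, 0, BUFFER_SIZE); /* all slots start empty */
    sem_init(&full_slots, 0, 0);            /* no items to consume yet */
    pthread_create(&prod, NULL, producer, NULL);
    pthread_create(&cons, NULL, consumer, NULL);
    pthread_join(prod, NULL);
    pthread_join(cons, NULL);
    sem_destroy(&empty_slots);
    sem_destroy(&full_slots);
    return 0;
}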
Priority Inversion:
In computer science, priority inversion is a problematic scheduling scenario in which a high-priority task is indirectly preempted by a medium-priority task, effectively "inverting" the relative priorities of the two tasks: the high-priority task is blocked waiting for a resource held by a low-priority task, which in turn cannot run because the medium-priority task keeps the CPU.
There are three ways to solve priority inversion:
Disable all interrupts while the lower-priority task is accessing the resource. This can be a problem in hard real-time systems.
Temporarily raise the lower-priority task to the higher priority (priority inheritance), so that nothing can preempt it while it holds the resource.
Give the critical section its own priority number; whichever task is currently in the critical section runs at that priority (priority ceiling).
Conclusion
This case study shows that Linux is available for many different systems. Its adaptability is such that enterprising souls have persuaded it to run, in one form or another, on just about anything with a processor in it.
REFERENCES:
"Beginning Linux Programming", 4th edition, by Neil Matthew and Richard Stones.
POSIX.1 conformance document, 1987-2002, LynuxWorks.
www.lynuxworks.com_products_posix_0414-00-posix_conf
www.wikipedia.com/inter process communication

