Tuesday 21 October 2014

Concepts of Memory Management

OS - Memory Management
Memory management is the functionality of an operating system which handles or manages primary memory. Memory management keeps track of each and every memory location, whether it is allocated to some process or free. It decides how much memory is to be allocated to each process and which process will get memory at what time. It tracks whenever memory is freed or unallocated and updates the status accordingly.
Memory management provides protection by using two registers, a base register and a limit register. The base register holds the smallest legal physical memory address and the limit register specifies the size of the range. For example, if the base register holds 300000 and the limit register holds 120900, then the program can legally access all addresses from 300000 through 420899.
Binding of instructions and data to memory addresses can be done in the following ways:
·         Compile time -- When it is known at compile time where the process will reside, compile-time binding is used to generate absolute code.
·         Load time -- When it is not known at compile time where the process will reside in memory, the compiler generates relocatable code and final binding is delayed until load time.
·         Execution time -- If the process can be moved during its execution from one memory segment to another, then binding must be delayed until run time.
Dynamic Loading
In dynamic loading, a routine of a program is not loaded until it is called by the program. All routines are kept on disk in a relocatable load format. The main program is loaded into memory and executed; other routines, methods, or modules are loaded on request. Dynamic loading makes better use of memory space, and unused routines are never loaded.
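The same idea can be seen at user level on a POSIX system with the dlopen()/dlsym() interface. This is only a hedged illustration: the library name libdemo.so and the routine name compute are made-up placeholders, not part of any real library.

```c
/* Minimal sketch of dynamic loading with the POSIX dlopen() API.
 * The library "libdemo.so" and the symbol "compute" are hypothetical.
 * Build with: cc main.c -ldl
 */
#include <stdio.h>
#include <dlfcn.h>

int main(void)
{
    /* The routine is not loaded until we explicitly ask for it. */
    void *handle = dlopen("./libdemo.so", RTLD_LAZY);
    if (handle == NULL) {
        fprintf(stderr, "dlopen failed: %s\n", dlerror());
        return 1;
    }

    /* Look up the routine by name inside the loaded module. */
    int (*compute)(int) = (int (*)(int)) dlsym(handle, "compute");
    if (compute == NULL) {
        fprintf(stderr, "dlsym failed: %s\n", dlerror());
        dlclose(handle);
        return 1;
    }

    printf("compute(21) = %d\n", compute(21));
    dlclose(handle);   /* unload when no longer needed */
    return 0;
}
```

Until dlopen() is called, nothing from the library occupies memory, which is exactly the space saving that dynamic loading aims for.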
Dynamic Linking
Linking is the process of collecting and combining various modules of code and data into an executable file that can be loaded into memory and executed. The operating system can link system-level libraries into a program. When the libraries are combined at load time, the linking is called static linking; when the linking is done at the time of execution, it is called dynamic linking.
In static linking, libraries are linked at compile time, so the program code size becomes bigger, whereas in dynamic linking libraries are linked at execution time, so the program code size remains smaller.
Logical versus Physical Address Space
An address generated by the CPU is a logical address, whereas an address actually seen by the memory unit is a physical address. A logical address is also known as a virtual address.
Virtual and physical addresses are the same in compile-time and load-time address-binding schemes; they differ in the execution-time address-binding scheme.
The set of all logical addresses generated by a program is referred to as a logical address space. The set of all physical addresses corresponding to these logical addresses is referred to as a physical address space.
The run-time mapping from virtual to physical addresses is done by the memory management unit (MMU), which is a hardware device. The MMU uses the following mechanism to convert a virtual address to a physical address (a small sketch follows this list).
·         The value in the base register is added to every address generated by a user process, which is treated as an offset at the time it is sent to memory. For example, if the base register value is 10000, then an attempt by the user to use address location 100 will be dynamically relocated to location 10100.
·         The user program deals with virtual addresses; it never sees the real physical addresses.
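As a rough illustration of this mechanism, the check-and-add step can be sketched in C. The constants BASE and LIMIT below stand in for the relocation (base) and limit registers and are arbitrary values; in reality this translation is performed by the MMU hardware on every memory reference, not by user code.

```c
/* Illustrative sketch of MMU-style dynamic relocation.
 * BASE and LIMIT stand in for the relocation (base) and limit registers. */
#include <stdio.h>
#include <stdlib.h>

#define BASE  10000UL   /* relocation (base) register */
#define LIMIT 4000UL    /* size of the process's legal address range */

unsigned long translate(unsigned long logical)
{
    if (logical >= LIMIT) {           /* protection check against the limit register */
        fprintf(stderr, "trap: addressing error (logical %lu)\n", logical);
        exit(EXIT_FAILURE);
    }
    return BASE + logical;            /* dynamic relocation */
}

int main(void)
{
    printf("logical 100 -> physical %lu\n", translate(100));   /* prints 10100 */
    return 0;
}
```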
Swapping
Swapping is a mechanism in which a process can be swapped temporarily out of main memory to a backing store, and then brought back into memory for continued execution.
The backing store is usually a hard disk drive or other secondary storage that is fast in access and large enough to accommodate copies of all memory images for all users. It must be capable of providing direct access to these memory images.
The major time-consuming part of swapping is transfer time. Total transfer time is directly proportional to the amount of memory swapped. Let us assume that the user process is of size 100 KB and the backing store is a standard hard disk with a transfer rate of 1 MB (1000 KB) per second. The actual transfer of the 100 KB process to or from memory will take
100 KB / 1000 KB per second
= 1/10 second
= 100 milliseconds

Memory Allocation
Main memory usually has two partitions:
·         Low memory -- The operating system resides in this memory.
·         High memory -- User processes are held in high memory.
The operating system uses the following memory allocation mechanisms.
1. Single-partition allocation -- In this type of allocation, the relocation-register scheme is used to protect user processes from each other, and from changing operating-system code and data. The relocation register contains the value of the smallest physical address, whereas the limit register contains the range of logical addresses. Each logical address must be less than the limit register.
2. Multiple-partition allocation -- In this type of allocation, main memory is divided into a number of fixed-sized partitions, where each partition should contain only one process. When a partition is free, a process is selected from the input queue and loaded into the free partition. When the process terminates, the partition becomes available for another process. A small sketch of this scheme follows.
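The following is a hedged sketch of the multiple-partition idea. The partition table, its sizes, and the helper find_partition() are invented for illustration; a real operating system would also manage the input queue and protection registers.

```c
/* Sketch of multiple-partition (fixed partition) allocation.
 * Partition sizes and the helper below are illustrative only. */
#include <stdio.h>

#define NPART 4

struct partition {
    unsigned base;      /* starting physical address (KB) */
    unsigned size;      /* partition size in KB */
    int      pid;       /* -1 means the partition is free */
};

static struct partition table[NPART] = {
    { 100, 200, -1 }, { 300, 300, -1 }, { 600, 100, -1 }, { 700, 400, -1 }
};

/* Pick the first free partition large enough for the process. */
int find_partition(unsigned size_kb, int pid)
{
    for (int i = 0; i < NPART; i++) {
        if (table[i].pid == -1 && table[i].size >= size_kb) {
            table[i].pid = pid;
            return i;
        }
    }
    return -1;   /* no fit: the process must wait in the input queue */
}

int main(void)
{
    int p = find_partition(250, 7);
    if (p >= 0)
        printf("process 7 loaded at base %u KB\n", table[p].base);
    return 0;
}
```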
Fragmentation
As processes are loaded into and removed from memory, the free memory space is broken into little pieces. After some time, processes cannot be allocated to memory blocks because the available blocks are too small, and these memory blocks remain unused. This problem is known as fragmentation.
Fragmentation is of two types:
1. External fragmentation -- Total memory space is enough to satisfy a request or to accommodate a process, but it is not contiguous, so it cannot be used.
2. Internal fragmentation -- The memory block assigned to a process is bigger than requested. Some portion of the memory is left unused, as it cannot be used by another process.
External fragmentation can be reduced by compaction: shuffling the memory contents to place all free memory together in one large block. To make compaction feasible, relocation should be dynamic.
Paging
External fragmentation is avoided by using the paging technique. Paging is a technique in which logical memory is broken into blocks of the same size called pages and physical memory is broken into fixed-sized blocks called frames (the block size is a power of 2, typically between 512 bytes and 8192 bytes). When a process is to be executed, its pages are loaded into any available memory frames.
The logical address space of a process can be non-contiguous, and the process is allocated physical memory wherever free memory frames are available. The operating system keeps track of all free frames and needs n free frames to run a program of size n pages.
An address generated by the CPU is divided into (see the sketch after this list):
·         Page number (p) -- the page number is used as an index into a page table, which contains the base address of each page in physical memory.
·         Page offset (d) -- the page offset is combined with the base address to define the physical memory address.
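The split into page number and offset can be illustrated with a short C sketch. The page size (4 KB), the number of pages, and the page-table contents below are assumptions chosen for the example, not values from any particular system.

```c
/* Sketch of paged address translation: split the logical address into
 * page number (p) and offset (d), then look up the frame in a page table.
 * Page size and page-table contents are illustrative. */
#include <stdio.h>

#define PAGE_SIZE   4096u                    /* 2^12 bytes */
#define OFFSET_BITS 12
#define NPAGES      8

/* page_table[p] holds the frame number that page p is loaded into. */
static unsigned page_table[NPAGES] = { 5, 2, 7, 0, 3, 1, 6, 4 };

unsigned translate(unsigned logical)
{
    unsigned p = logical >> OFFSET_BITS;          /* page number */
    unsigned d = logical & (PAGE_SIZE - 1);       /* page offset */
    return (page_table[p] << OFFSET_BITS) | d;    /* frame base + offset */
}

int main(void)
{
    unsigned logical = 2 * PAGE_SIZE + 100;       /* page 2, offset 100 */
    printf("logical %u -> physical %u\n", logical, translate(logical));
    return 0;
}
```

Because the page size is a power of 2, the split is just a shift and a mask, which is what makes the scheme cheap to do in hardware.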

Segmentation
Segmentation is a technique to break memory into logical pieces, where each piece represents a group of related information: for example, a code segment and a data segment for each process, a data segment for the operating system, and so on. Segmentation can be implemented with or without paging.
Unlike pages, segments have varying sizes, which eliminates internal fragmentation. External fragmentation still exists, but to a lesser extent.

An address generated by the CPU is divided into (see the sketch after this list):
·         Segment number (s) -- the segment number is used as an index into a segment table, which contains the base address and limit of each segment in physical memory.
·         Segment offset (o) -- the segment offset is first checked against the limit and then combined with the base address to define the physical memory address.
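A comparable sketch for segmentation is shown below: the segment number indexes a table of (base, limit) pairs, and the offset is checked against the limit before the base is added. The table values are illustrative only.

```c
/* Sketch of segmented address translation: index the segment table by
 * segment number (s), check the offset (o) against that segment's limit,
 * then add the segment base. Table values are illustrative. */
#include <stdio.h>
#include <stdlib.h>

struct segment { unsigned base; unsigned limit; };

static struct segment seg_table[] = {
    { 1400, 1000 },   /* segment 0 */
    { 6300,  400 },   /* segment 1 */
    { 4300, 1100 },   /* segment 2 */
};

unsigned translate(unsigned s, unsigned o)
{
    if (o >= seg_table[s].limit) {               /* limit check */
        fprintf(stderr, "trap: offset %u outside segment %u\n", o, s);
        exit(EXIT_FAILURE);
    }
    return seg_table[s].base + o;                /* base + offset */
}

int main(void)
{
    printf("(segment 2, offset 53) -> physical %u\n", translate(2, 53));  /* 4353 */
    return 0;
}
```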



Operating System - Virtual Memory
Virtual memory is a technique that allows the execution of processes that are not completely in memory. The main visible advantage of this scheme is that programs can be larger than physical memory. Virtual memory is the separation of user logical memory from physical memory.
This separation allows an extremely large virtual memory to be provided for programmers when only a smaller physical memory is available. The following are situations in which the entire program is not required to be loaded fully in main memory:
·         User-written error handling routines are used only when an error occurs in the data or computation.
·         Certain options and features of a program may be used rarely.
·         Many tables are assigned a fixed amount of address space even though only a small amount of the table is actually used.
The ability to execute a program that is only partially in memory would confer many benefits:
·         Fewer I/O operations would be needed to load or swap each user program into memory.
·         A program would no longer be constrained by the amount of physical memory that is available.
·         Each user program could take less physical memory, so more programs could be run at the same time, with a corresponding increase in CPU utilization and throughput.
Virtual memory is commonly implemented by demand paging. It can also be implemented in a segmentation system, and demand segmentation can likewise be used to provide virtual memory.
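To make the demand-paging idea concrete, here is a simplified, self-contained sketch: a page is brought in from the backing store only when an access finds its valid bit clear. The structures and helpers (find_free_frame(), read_from_backing_store()) are placeholders, not a real kernel or MMU interface, and no page replacement policy is modeled.

```c
/* Simplified sketch of demand paging: a page is brought into memory only
 * when a reference to it faults. All structures and helpers here are
 * placeholders, not a real kernel/MMU interface. */
#include <stdio.h>
#include <stdbool.h>

#define NPAGES  8
#define NFRAMES 4

struct pte { unsigned frame; bool valid; };
static struct pte page_table[NPAGES];           /* all pages invalid initially */
static unsigned next_free = 0;

static unsigned find_free_frame(void)           /* placeholder: no replacement */
{
    return next_free++ % NFRAMES;
}

static void read_from_backing_store(unsigned page, unsigned frame)
{
    printf("I/O: loading page %u into frame %u\n", page, frame);
}

/* What the page-fault handler conceptually does. */
static unsigned access_page(unsigned page)
{
    if (!page_table[page].valid) {                       /* page fault */
        unsigned frame = find_free_frame();              /* 1. find a free frame */
        read_from_backing_store(page, frame);            /* 2. load the page */
        page_table[page].frame = frame;                  /* 3. update the page table */
        page_table[page].valid = true;                   /* 4. restart the access */
    }
    return page_table[page].frame;
}

int main(void)
{
    access_page(3);      /* faults and loads page 3 */
    access_page(3);      /* hits: no I/O this time */
    return 0;
}
```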


Concepts of File System

1.File
A file is a named collection of related information that is recorded on secondary storage such as magnetic disks, magnetic tapes, and optical disks. In general, a file is a sequence of bits, bytes, lines, or records whose meaning is defined by the file's creator and user.
2.File Structure
A file structure is the layout of a file according to a required format that the operating system can understand.
·         A file has a certain defined structure according to its type.
·         A text file is a sequence of characters organized into lines.
·         A source file is a sequence of procedures and functions.
·         An object file is a sequence of bytes organized into blocks that are understandable by the machine.
·         When an operating system defines different file structures, it also contains the code to support these file structures. UNIX and MS-DOS support a minimal number of file structures.
3. File Type
File type refers to the ability of the operating system to distinguish different types of files, such as text files, source files, and binary files. Many operating systems support many types of files. Operating systems like MS-DOS and UNIX have the following types of files:
ORDINARY FILES
·         These are the files that contain user information.
·         These may have text, databases, or executable programs.
·         The user can apply various operations to such files: add, modify, delete, or even remove the entire file.
DIRECTORY FILES
·         These files contain lists of file names and other information related to those files.
Files are identified by their extensions.
A filename has two parts, a name and an extension, separated by a period character.
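As a tiny illustration of the name/extension split, the C standard library's strrchr() can locate the last period in a filename; the filename used here is just an example.

```c
/* Tiny illustration of splitting a filename into name and extension
 * at the last '.' character. The example filename is arbitrary. */
#include <stdio.h>
#include <string.h>

int main(void)
{
    const char *filename = "report.txt";
    const char *dot = strrchr(filename, '.');   /* last '.' in the name */

    if (dot != NULL && dot != filename)
        printf("name: %.*s, extension: %s\n", (int)(dot - filename), filename, dot + 1);
    else
        printf("%s has no extension\n", filename);
    return 0;
}
```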

4. File Access Mechanisms
File access mechanism refers to the manner in which the records of a file may be accessed. There are several ways to access files:
·         Sequential access
·         Direct/Random access
·         Indexed sequential access

SEQUENTIAL ACCESS
Sequential access is that in which the records are accessed in some sequence, i.e., the information in the file is processed in order, one record after the other. This access method is the most primitive one. Example: compilers usually access files in this fashion.
DIRECT/RANDOM ACCESS
·         Random access file organization provides direct access to the records.
·         Each record has its own address on the file, with the help of which it can be directly accessed for reading or writing.
·         The records need not be in any sequence within the file, and they need not be in adjacent locations on the storage medium (see the sketch after this list).
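A hedged sketch of direct access using the C standard library follows: the byte offset of record n is computed as n times the record size, so records can be read in any order. The record layout and the file name employees.dat are made up for the example.

```c
/* Sketch of direct/random access: jump straight to record n of a file of
 * fixed-size records. The record layout and file name are illustrative. */
#include <stdio.h>
#include <stdlib.h>

struct record { int id; char name[28]; };   /* fixed-size record */

int main(void)
{
    FILE *fp = fopen("employees.dat", "rb");
    if (fp == NULL) { perror("fopen"); return EXIT_FAILURE; }

    long n = 5;                                     /* read the 6th record */
    struct record r;

    /* Seek directly to record n: no need to read records 0..n-1 first. */
    if (fseek(fp, n * (long)sizeof r, SEEK_SET) == 0 &&
        fread(&r, sizeof r, 1, fp) == 1)
        printf("record %ld: id=%d name=%s\n", n, r.id, r.name);

    fclose(fp);
    return 0;
}
```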
INDEXED SEQUENTIAL ACCESS
·         This mechanism is built on top of sequential access.
·         An index is created for each file which contains pointers to various blocks.
·         The index is searched sequentially, and its pointer is used to access the file directly.
5. Space Allocation
Files are allocated disk space by the operating system. Operating systems deploy the following three main ways to allocate disk space to files:
·         Contiguous Allocation
·         Linked Allocation
·         Indexed Allocation
CONTIGUOUS ALLOCATION
·         Each file occupies a contiguous address space on disk.
·         Assigned disk addresses are in linear order.
·         Easy to implement.
·         External fragmentation is a major issue with this type of allocation technique.
LINKED ALLOCATION
·         Each file carries a list of links to disk blocks.
·         The directory contains a link / pointer to the first block of a file.
·         No external fragmentation.
·         Effective for sequential access files.
·         Inefficient in the case of direct access files.
INDEXED ALLOCATION
·         Provides solutions to the problems of contiguous and linked allocation.
·         An index block is created holding all the pointers to a file's blocks.
·         Each file has its own index block, which stores the addresses of the disk space occupied by the file.
·         The directory contains the addresses of the index blocks of files (a small sketch follows).
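A small sketch of the indexed-allocation idea follows: each file's index block is simply an array of disk-block numbers, and logical block i of the file is found by reading entry i. The block counts and numbers are illustrative.

```c
/* Sketch of indexed allocation: the directory entry points to an index
 * block, and the index block lists every disk block the file occupies.
 * Block counts and values are illustrative. */
#include <stdio.h>

#define BLOCKS_PER_INDEX 16

struct index_block {
    unsigned block[BLOCKS_PER_INDEX];  /* disk-block number of each file block */
    unsigned count;                    /* how many blocks the file uses */
};

/* Map logical block i of the file to its disk block, or return 0 on error. */
unsigned logical_to_disk(const struct index_block *ib, unsigned i)
{
    return (i < ib->count) ? ib->block[i] : 0;
}

int main(void)
{
    struct index_block ib = { { 19, 28, 4, 53, 77 }, 5 };
    printf("logical block 3 is stored in disk block %u\n", logical_to_disk(&ib, 3));
    return 0;
}
```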

Unitwise Questions for Operating System

Questions

UNIT-1

  1. Define operating system. Explore the operating system from different viewpoints.
  2. Explain the dual-mode operation of the OS.
  3. Explain the concepts of multiprogramming and multitasking.
  4. Explain the functions of the OS w.r.t. the following:
     i) Process management  ii) Memory management
  5. Explain the different computing environments.
  6. What are tightly coupled systems? What are their advantages?
  7. Explain the different special-purpose systems.
  8. Explain the different OS operations.
  9. Explain the different sets of OS services.
  10. Define the following:
     i) Context switch  ii) Dispatcher  iii) Job scheduling
  11. What is a system program? Explain its different categories.
  12. Explain the layered approach of structuring an OS.
  13. Define a virtual machine with a figure. Explain its working.
  14. What are the benefits of a virtual machine?

UNIT-2

1.      What is a thread? Explain the different threading models.
2.      What is a process? Explain the PCB.
3.      Explain the various scheduling criteria.
4.      Explain the different states of a process.
5.      List the different types of schedulers and differentiate between them.
6.      Define IPC. Explain message-passing systems.
7.      Explain the benefits of using threads.
8.      Consider the following set of processes:

Process    Arrival time    Burst time
P1         0               8
P2         1               4
P3         2               9
P4         3               5
i)   Draw the Gantt chart illustrating the execution of these processes using preemptive SJF.
ii)  Compute the average waiting time and average turnaround time.

9.      Consider the following snapshot:

Process    Burst time    Priority
P1         10            3
P2         1             1
P3         2             3
P4         1             4
P5         5             2
i)   Draw the Gantt charts using:
     a) FCFS  b) SJF  c) non-preemptive priority (priority 1 is the highest)
     d) RR algorithm with time quantum = 1 ms.
ii)  Calculate the average waiting time and average turnaround time.
iii) Which of the scheduling algorithms results in the minimal average waiting time?
10.  Write a program using the fork() system call to create a process, where the parent waits for the child to complete its execution.
11.  Explain the concept of thread cancellation.

UNIT-3

  1. What is the critical section problem? Give its general structure. Explain the requirements that must be satisfied by a solution to the critical section problem.
  2. Give an algorithm for the critical section problem involving at least two processes, satisfying all necessary and sufficient conditions.
  3. What is a semaphore? How can it be used to solve the mutual exclusion problem? Give a solution to the bounded buffer problem using semaphores.
  4. With the necessary syntax, describe the term monitor.
  5. Describe the critical section problem.
  6. Describe the 'TestAndSet()' and 'Swap()' instructions of a binary semaphore variable.
  7. What are semaphores? Explain the two primitive semaphore operations. What are the advantages of semaphores?
  8. Explain the signal and wait primitive structures of a binary semaphore variable.
  9. What are monitors? Explain.

UNIT-4

  1. Explain how a resource allocation graph is used to determine deadlocks.
  2. What are the different methods for handling deadlocks? Explain the Banker's algorithm.
  3. “A safe state is not a deadlock state but a deadlock state is an unsafe state”. Explain.
  4. What is a deadlock? Explain the four necessary conditions for a deadlock to occur.
  5. Explain the different options to recover from a deadlock.
  6. Define the terms safe state and safe sequence. Give an algorithm to find whether or not the system is in a safe state.
  7. Explain how deadlock can be prevented.
  8. What is a wait-for graph? How is it used for detecting deadlocks?
  9. Draw the Resource Allocation Graph for the following situation and determine if the system is in deadlock, given the sets P, R and E:
P = {P1, P2, P3}
R = {R1, R2, R3, R4}
E = {P1->R1, P2->R3, R1->P2, R2->P2, R2->P1, R3->P3}
  10. Consider the following snapshot of the system:

Process    Allocation       Max              Available
           R1  R2  R3  R4   R1  R2  R3  R4   R1  R2  R3  R4
P1         0   0   1   2    0   0   1   2    2   1   0   0
P2         2   0   0   0    2   7   5   0
P3         0   0   3   4    6   6   5   6
P4         2   3   5   4    4   3   5   6
P5         0   3   3   2    0   6   5   2

i)   Compute the NEED matrix.
ii)  Is the system in a safe state? Justify.
iii) Can a request (0, 1, 0, 0) from P3 be safely granted?
  11. Give deadlock detection algorithms for single and multiple instances of resources.

UNIT-5

  1. What are paging and swapping?
  2. With a neat diagram, discuss the steps involved in handling a page fault.
  3. Consider the following page reference string:
7  0  1  2  0  3  0  4  2  3  0   3  2  1  2  0  1  7  0  1
For a memory with three frames, how many page faults would occur for the LRU, FIFO and optimal page replacement algorithms? Which is the most efficient among them?
  4. What do you mean by address binding? Explain, with the necessary steps, the binding of instructions and data to memory addresses.
  5. What do you mean by copy-on-write? Where is it used? Explain in brief.
  6. Consider the following page reference string:
1  0  7  1  0  3  1  3  2  0  3  2  4  0  3  2  1  0  7
For a memory with three frames, how many page faults would occur for the LRU, FIFO and optimal page replacement algorithms? Which is the most efficient among them?
  7. Consider the following page reference string:
0  9  0  1  8  1  8  7  8  7  1  2  8  2  7  8  2  3  8  3
For a memory with three frames, how many page faults would occur for the LRU, FIFO and optimal page replacement algorithms? Which is the most efficient among them?
  8. Explain segmentation memory management. Describe the hardware support required for its implementation.
  9. What do you mean by the dynamic storage allocation problem? Explain the possible solutions to this problem.
  10. Explain the concept of a forward-mapped page table.
  11. Write a note on thrashing.
  12. On a system using demand-paged memory, it takes 0.12 µs to satisfy a memory request if the page is in memory. If the page is not in memory, the request takes 5000 µs. What would the page fault rate need to be to achieve an effective access time of 1000 µs? Assume the system is running only a single process and the CPU is idle during page swaps.

UNIT-6

  1. Explain the following:
     i) File types  ii) File operations  iii) File attributes
  2. Explain the methods for implementing directories.
  3. Explain any two file allocation methods with their merits and demerits.
  4. What do you mean by a free space list? With suitable examples, explain any two methods of implementing a free space list.
  5. What are the major methods used for allocating disk space? Explain each, with suitable examples.
  6. With the help of a neat diagram, describe:
     i) Tree-structured directory
     ii) Acyclic-graph directory
  7. Explain the Virtual File System (VFS).
  8. Discuss directory implementation using:
     i) Linear list  ii) Hash table
  9. What is meant by 'Consistency Semantics'? Explain the consistency semantics as implemented in a modern OS.

UNIT- 7

  1. Write a note on the following:
     i) SCAN and C-SCAN disk scheduling
     ii) Disk attachment
  2. Describe the access matrix model used for protection purposes in a computer system.
  3. Explain the following disk scheduling algorithms:
     i) SSTF  ii) SCAN  iii) LOOK
  4. Explain the following:
     i) Disk management  ii) Swap-space management
  5. What are the goals of protection?
  6. Explain the principles of protection.
  7. Suppose the disk head is currently at cylinder 53. Sketch the graphical representation for the queue of pending requests, in the order 98, 183, 37, 122, 14, 124, 65, 67, for the FCFS, SSTF and LOOK scheduling schemes. Give your comments on this scenario for the above schemes.

UNIT-8

  1. What are the components that the kernel module supports under Linux? Explain in detail.
  2. What is Linux file system? Explain conflict resolution mechanism of Linux.
  3. Explain design principles of Linux system.
4.      What are the interprocess communication facilities in Linux?
5.      Explain the I/O system in Linux.
  6. Write short notes on:

i) System calls  ii) Processes and Threads  iii) Virtual memory