Operating Systems
1. Explain the concept of Reentrancy.
Reentrancy is a useful, memory-saving technique for multiprogrammed timesharing systems. A reentrant procedure is one in which multiple users can share a single copy of a program during the same period. Reentrancy has two key aspects: (i) the program code cannot modify itself, and (ii) the local data for each user process must be stored separately. Thus the permanent part is the code, and the temporary part is the pointer back to the calling program together with the local variables used by that program. Each execution instance is called an activation; it executes the code in the permanent part but has its own copy of local variables and parameters. The temporary part associated with each activation is the activation record, which is generally kept on the stack.
Note: A reentrant procedure can be interrupted and called by an interrupting program, and still execute correctly on returning to the procedure.
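For illustration, here is a minimal C sketch (not from the original text) contrasting a non-reentrant routine, which keeps its result in a shared static buffer, with a reentrant one that works only on its parameters and stack locals:

    #include <stdio.h>

    /* Non-reentrant: the static buffer is shared state that a second
     * concurrent activation would overwrite. */
    char *int_to_string_bad(int n) {
        static char buf[32];
        snprintf(buf, sizeof(buf), "%d", n);
        return buf;
    }

    /* Reentrant: all state lives in the caller-supplied buffer and in
     * locals on the stack (the activation record), so any number of
     * activations can run at the same time. */
    void int_to_string_good(int n, char *buf, size_t len) {
        snprintf(buf, len, "%d", n);
    }

    int main(void) {
        char a[32], b[32];
        int_to_string_good(10, a, sizeof(a));
        int_to_string_good(20, b, sizeof(b));
        printf("%s %s\n", a, b);               /* 10 20 */
        printf("%s\n", int_to_string_bad(30)); /* 30    */
        return 0;
    }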
2. Explain Belady's Anomaly.
Also called the FIFO anomaly. Usually, on increasing the number of frames allocated to a process's virtual memory, the process executes faster because fewer page faults occur. Sometimes the reverse happens: the number of page faults (and hence the execution time) increases even when more frames are allocated to the process. This is Belady's Anomaly, and it occurs only for certain page reference patterns.
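A small simulation makes the anomaly concrete. The sketch below (an illustration, not part of the original answer) counts FIFO page faults for the classic reference string 1,2,3,4,1,2,5,1,2,3,4,5: with 3 frames it reports 9 faults, with 4 frames it reports 10.

    #include <stdio.h>

    /* Simulate FIFO page replacement and count page faults. */
    static int fifo_faults(const int *refs, int n, int nframes) {
        int frames[16];                     /* assumes nframes <= 16 */
        int count = 0, oldest = 0, faults = 0;
        for (int i = 0; i < n; i++) {
            int hit = 0;
            for (int j = 0; j < count; j++)
                if (frames[j] == refs[i]) { hit = 1; break; }
            if (!hit) {
                faults++;
                if (count < nframes) {
                    frames[count++] = refs[i];
                } else {
                    frames[oldest] = refs[i];        /* evict the oldest page */
                    oldest = (oldest + 1) % nframes;
                }
            }
        }
        return faults;
    }

    int main(void) {
        int refs[] = {1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5};
        int n = sizeof(refs) / sizeof(refs[0]);
        printf("3 frames: %d faults\n", fifo_faults(refs, n, 3));  /* 9  */
        printf("4 frames: %d faults\n", fifo_faults(refs, n, 4));  /* 10 */
        return 0;
    }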
3. What is a binary semaphore? What is its use?
A binary semaphore is one that takes only the values 0 and 1. Binary semaphores are used to implement mutual exclusion and to synchronize concurrent processes.
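As a hedged sketch using POSIX semaphores (one possible realization, not the only one), a semaphore initialized to 1 can serve as a mutex around a shared counter; compile with -pthread.

    #include <pthread.h>
    #include <semaphore.h>
    #include <stdio.h>

    static sem_t mutex;            /* binary semaphore, initial value 1 */
    static long counter = 0;

    static void *worker(void *arg) {
        (void)arg;
        for (int i = 0; i < 100000; i++) {
            sem_wait(&mutex);      /* P: take the semaphore (1 -> 0)    */
            counter++;             /* critical section                  */
            sem_post(&mutex);      /* V: release the semaphore (0 -> 1) */
        }
        return NULL;
    }

    int main(void) {
        pthread_t t1, t2;
        sem_init(&mutex, 0, 1);    /* shared between threads, value 1 */
        pthread_create(&t1, NULL, worker, NULL);
        pthread_create(&t2, NULL, worker, NULL);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        printf("counter = %ld\n", counter);   /* 200000 */
        sem_destroy(&mutex);
        return 0;
    }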
4. What is thrashing?
Thrashing is a phenomenon in virtual memory schemes in which the processor spends most of its time swapping pages rather than executing instructions. It is caused by an inordinate number of page faults.
5. List the Coffman conditions that lead to a deadlock.
a) Mutual Exclusion: Only one process may use a critical resource at a time.
b) Hold & Wait: A process may be allocated some resources while waiting for others.
c) No Preemption: No resource can be forcibly removed from a process holding it.
d) Circular Wait: A closed chain of processes exists such that each process holds at least one resource needed by another process in the chain.
6. What are short-, long- and medium-term scheduling?
The long-term scheduler determines which programs are admitted to the system for processing. It controls the degree of multiprogramming. Once admitted, a job becomes a process.
Medium-term scheduling is part of the swapping function. It relates to processes that are in a blocked or suspended state; they are swapped out of main memory until they are ready to execute. The swapping-in decision is based on memory-management criteria.
The short-term scheduler, also known as the dispatcher, executes most frequently and makes the finest-grained decision of which process should execute next. It is invoked whenever an event occurs that may lead to the interruption of the running process by preemption.
7. What are turnaround time and response time?
Turnaround time is the interval between the submission of a job and its completion. Response time is the interval between the submission of a request and the first response to that request.
8. What are the typical elements of a process image?
a) User data: the modifiable part of user space; may include program data, a user stack area, and programs that may be modified.
b) User program: the instructions to be executed.
c) System stack: each process has one or more LIFO stacks associated with it, used to store parameters and calling addresses for procedure and system calls.
d) Process Control Block (PCB): information needed by the OS to control the process.
9. What is the Translation Lookaside Buffer (TLB)?
In a cached system, the base addresses of the last few referenced pages are maintained in a set of fast registers called the TLB, which aids faster lookup. The TLB contains those page-table entries that have been most recently used. Normally, each virtual memory reference causes two physical memory accesses: one to fetch the appropriate page-table entry and one to fetch the desired data. With a TLB in between, this is reduced to a single physical memory access in the case of a TLB hit.
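A rough worked example (the timing figures are assumptions for illustration): with a 20 ns TLB lookup, a 100 ns memory access and a 90% hit ratio, the effective access time is 0.9*(20+100) + 0.1*(20+200) = 130 ns, as the snippet below computes.

    #include <stdio.h>

    int main(void) {
        double tlb = 20.0;    /* ns, assumed TLB lookup time        */
        double mem = 100.0;   /* ns, assumed physical memory access */
        double hit = 0.90;    /* assumed TLB hit ratio              */

        /* Hit: one memory access; miss: page-table fetch + data fetch. */
        double eat = hit * (tlb + mem) + (1.0 - hit) * (tlb + 2.0 * mem);
        printf("effective access time = %.1f ns\n", eat);   /* 130.0 */
        return 0;
    }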
10. What is the resident set and working set of a process?
The resident set is that portion of the process image that is actually in main memory at a particular instant. The working set is that subset of the resident set which is actually needed for execution. (Relate this to the variable-window-size method for swapping techniques.)
11. When is a system in a safe state?
The set of dispatchable processes is in a safe state if there exists at least one temporal order in which all processes can be run to completion without resulting in a deadlock.
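One way such a check can be made is the safety test of the Banker's algorithm. The sketch below (the matrix sizes and values are illustrative assumptions) searches for an order in which every process can obtain its remaining need and finish.

    #include <stdio.h>
    #include <string.h>

    #define NPROC 3
    #define NRES  2

    /* Safety check from the Banker's algorithm: returns 1 if some order
     * exists in which all processes can run to completion. */
    static int is_safe(int avail[NRES],
                       int alloc[NPROC][NRES],
                       int need[NPROC][NRES]) {
        int work[NRES], finished[NPROC] = {0}, done = 0;
        memcpy(work, avail, sizeof(work));

        while (done < NPROC) {
            int progressed = 0;
            for (int p = 0; p < NPROC; p++) {
                if (finished[p]) continue;
                int ok = 1;
                for (int r = 0; r < NRES; r++)
                    if (need[p][r] > work[r]) { ok = 0; break; }
                if (ok) {                 /* p can finish and release its resources */
                    for (int r = 0; r < NRES; r++)
                        work[r] += alloc[p][r];
                    finished[p] = 1;
                    done++;
                    progressed = 1;
                }
            }
            if (!progressed) return 0;    /* nobody can proceed: unsafe */
        }
        return 1;
    }

    int main(void) {
        int avail[NRES]        = {1, 1};
        int alloc[NPROC][NRES] = {{1, 0}, {0, 1}, {1, 1}};
        int need[NPROC][NRES]  = {{1, 1}, {1, 0}, {0, 1}};
        printf("safe: %s\n", is_safe(avail, alloc, need) ? "yes" : "no");
        return 0;
    }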
12. What is cycle stealing?
We encounter cycle stealing in the context of Direct Memory Access (DMA). Either the DMA controller can use the data bus when the CPU does not need it, or it may force the CPU to temporarily suspend operation. The latter technique is called cycle stealing. Note that cycle stealing can be done only at specific break points in an instruction cycle.
13. What is meant by arm-stickiness?
If one or a few processes have a high access rate to data on one track of a storage disk, they may monopolize the device by repeated requests to that track. This generally happens with the most common device-scheduling algorithms (LIFO, SSTF, C-SCAN, etc.). High-density multi-surface disks are more likely to be affected by this than low-density ones.
14. What are the stipulations of C2-level security?
C2-level security provides for:
1. Discretionary Access Control
2. Identification and Authentication
3. Auditing
4. Resource Reuse
15. What is busy waiting?
The repeated execution of a loop of code while waiting for an event to occur is called busy waiting. The CPU is not engaged in any real productive activity during this period, and the process does not progress toward completion.
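A minimal sketch of busy waiting in C (the one-second producer delay is just for illustration): the main thread spins on a flag instead of blocking. Compile with -pthread.

    #include <pthread.h>
    #include <stdatomic.h>
    #include <stdio.h>
    #include <unistd.h>

    static atomic_int data_ready = 0;

    static void *producer(void *arg) {
        (void)arg;
        sleep(1);                       /* simulate the awaited event */
        atomic_store(&data_ready, 1);
        return NULL;
    }

    int main(void) {
        pthread_t t;
        pthread_create(&t, NULL, producer, NULL);

        /* Busy waiting: the CPU spins, repeatedly testing the flag,
         * doing no useful work until the event occurs. */
        while (!atomic_load(&data_ready))
            ;                           /* spin */

        printf("event observed\n");
        pthread_join(t, NULL);
        return 0;
    }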
16. Explain the popular multiprocessor thread-scheduling strategies.
Load sharing: Processes are not assigned to a particular processor. A global queue of threads is maintained, and each processor, when idle, selects a thread from this queue. (Note that load balancing refers to a scheme where work is allocated to processors on a more permanent basis.)
Gang scheduling: A set of related threads is scheduled to run on a set of processors at the same time, on a one-to-one basis. Closely related threads/processes may be scheduled this way to reduce synchronization blocking and minimize process switching. Group scheduling predated this strategy.
Dedicated processor assignment: Provides implicit scheduling defined by the assignment of threads to processors. For the duration of program execution, each program is allocated a set of processors equal in number to the number of threads in the program. Processors are chosen from the available pool.
Dynamic scheduling: The number of threads in a program can be altered during the course of execution.
17. When does the condition 'rendezvous' arise?
In message passing, it is the condition in which both the sender and the receiver are blocked until the message is delivered.
18. What is a trap and trapdoor?
A trapdoor is a secret, undocumented entry point into a program, used to grant access without the normal methods of access authentication. A trap is a software interrupt, usually the result of an error condition.
19. What are local and global page replacements?
Local replacement means that an incoming page is brought in only at the expense of a page in the same process's address space. A global replacement policy allows any page frame from any process to be replaced. The latter is applicable to the variable-partitions model only.
20. Define latency, transfer and seek time with respect to disk I/O.
Seek time is the time required to move the disk arm to the required track. Rotational delay, or latency, is the time required for the required sector to rotate under the disk head. The sum of the seek time (if any) and the latency is the access time for reaching a particular sector on a particular track. The time taken to actually transfer a span of data is the transfer time.
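A rough worked example with assumed figures (8 ms average seek, 7200 rpm so roughly half a rotation of average latency, 100 MB/s sustained transfer rate, one 4 KB block):

    #include <stdio.h>

    int main(void) {
        /* Assumed figures for illustration only. */
        double seek_ms     = 8.0;                   /* average seek time       */
        double rpm         = 7200.0;                /* rotational speed        */
        double latency_ms  = 0.5 * 60000.0 / rpm;   /* average: half a turn    */
        double rate_mb_s   = 100.0;                 /* sustained transfer rate */
        double size_kb     = 4.0;                   /* one 4 KB block          */
        double transfer_ms = size_kb / 1024.0 / rate_mb_s * 1000.0;

        /* access time = seek + rotational latency + transfer */
        printf("access time = %.2f ms\n", seek_ms + latency_ms + transfer_ms);
        return 0;
    }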
21. Describe the Buddy system of memory allocation.
Free memory is maintained in linked lists, each of equal-sized blocks. Every such block is of size 2^k. When some memory is required by a process, the block of the next higher order is chosen and broken into two. Note that the two pieces differ in address only in their kth bit; such pieces are called buddies. When any used block is freed, the OS checks whether its buddy is also free. If so, the two are rejoined and put back into the free-block linked list of the original size.
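Because buddies differ only in the kth bit of their offset, a block's buddy can be found with a single XOR against the block size. A small sketch (the offsets are illustrative):

    #include <stdio.h>

    /* In a buddy allocator every block has size 2^k and starts at an offset
     * that is a multiple of its size; the buddy is found by flipping bit k. */
    static unsigned long buddy_of(unsigned long offset, unsigned long size) {
        return offset ^ size;        /* size == 2^k, so XOR flips bit k */
    }

    int main(void) {
        unsigned long size = 1024, offset = 0x2400;   /* a 1 KB block */
        printf("block at 0x%lx, buddy at 0x%lx\n",
               offset, buddy_of(offset, size));       /* buddy at 0x2000 */
        /* Splitting the 2 KB block at 0x2000 yields the 1 KB buddies at
         * 0x2000 and 0x2400; when both are free they are coalesced back. */
        return 0;
    }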
22. What is time stamping?
It is a technique proposed by Lamport, used to order events in a distributed system without the use of physical clocks. The scheme is intended to order events consisting of the transmission of messages. Each system i in the network maintains a counter Ci. Every time a system transmits a message, it increments its counter by 1 and attaches the time stamp Ti to the message. When a message is received, the receiving system j sets its counter Cj to 1 more than the maximum of its current value and the incoming time stamp Ti. At each site, the ordering of messages is determined by the following rule: for a message x from site i and a message y from site j, x precedes y if Ti < Tj, or if Ti = Tj and i < j.
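A compact C sketch of the counter rules described above (the names on_send/on_receive are illustrative, not a standard API):

    #include <stdio.h>

    typedef struct {
        int id;       /* site number i    */
        int clock;    /* local counter Ci */
    } site_t;

    /* Sending: increment the local counter and use it as the time stamp Ti. */
    static int on_send(site_t *s) {
        s->clock += 1;
        return s->clock;
    }

    /* Receiving a message stamped ts: Cj = max(Cj, ts) + 1. */
    static void on_receive(site_t *s, int ts) {
        s->clock = (s->clock > ts ? s->clock : ts) + 1;
    }

    /* Ordering rule: x precedes y if Ti < Tj, or Ti == Tj and i < j. */
    static int precedes(int ti, int i, int tj, int j) {
        return ti < tj || (ti == tj && i < j);
    }

    int main(void) {
        site_t a = {1, 0}, b = {2, 0};
        int ts = on_send(&a);               /* Ca = 1, message stamped 1 */
        on_receive(&b, ts);                 /* Cb = max(0, 1) + 1 = 2    */
        printf("Ca=%d Cb=%d precedes=%d\n",
               a.clock, b.clock, precedes(ts, a.id, b.clock, b.id));
        return 0;
    }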
23. How are the wait/signal operations for a monitor different from those for semaphores?
If a process in the monitor signals and no task is waiting on the condition variable, the signal is lost; this allows easier program design. With semaphores, every operation affects the value of the semaphore, so the wait and signal operations must be perfectly balanced in the program.
24. In the context of memory management, what are placement and replacement algorithms?
Placement algorithms determine where in available main memory to load an incoming process; common methods are first-fit, next-fit, and best-fit. Replacement algorithms are used when memory is full and one process (or part of a process) needs to be swapped out to accommodate a new incoming process. The replacement algorithm determines which partitions (memory portions occupied by processes) are to be swapped out.
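A minimal first-fit placement sketch (the hole sizes and request sizes are illustrative assumptions): scan the free list and take the first hole large enough for the request.

    #include <stdio.h>

    #define NHOLES 4

    /* First-fit: return the index of the first hole that can satisfy the
     * request, carving the request out of it, or -1 if none fits. */
    static int first_fit(int holes[NHOLES], int request) {
        for (int i = 0; i < NHOLES; i++)
            if (holes[i] >= request) {
                holes[i] -= request;
                return i;
            }
        return -1;
    }

    int main(void) {
        int holes[NHOLES] = {100, 500, 200, 300};   /* free-partition sizes (KB) */
        printf("212 KB placed in hole %d\n", first_fit(holes, 212));  /* hole 1   */
        printf("417 KB placed in hole %d\n", first_fit(holes, 417));  /* -1: none */
        return 0;
    }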
25. In loading processes into memory, what is the difference between load-time dynamic linking and run-time dynamic linking?
With load-time dynamic linking, the load module to be loaded is read into memory, and any reference to a target external module causes that module to be loaded as well; the references are updated to addresses relative to the start (base) address of the application module.
With run-time dynamic linking, some of the linking is postponed until a reference is actually made during execution; at that point the correct module is loaded and linked.
26. What are demand- and pre-paging?
With demand paging, a page is brought into main memory only when a location on that page is actually referenced during execution. With prepaging, pages other than the one demanded by a page fault are brought in as well; the selection of such pages is based on common access patterns, especially for secondary memory devices.
27. What is mounting?
Mounting is the mechanism by which two different file systems can be combined together. It is one of the services provided by the operating system, and it allows the user to work with two different file systems and some of the secondary devices.
28. What do you mean by dispatch latency?
The time taken by the dispatcher to stop one process and start running another is known as dispatch latency.
29. What is multi-processing?
The ability of an operating system to use more than one CPU in a single computer system. Symmetric multiprocessing refers to the OS's ability to assign tasks dynamically to the next available processor, whereas asymmetric multiprocessing requires that the original program designer choose the processor to use for a given task at the time of writing the program.
30. What is multitasking?
Multitasking is a logical extension of multiprogramming. It refers to the apparently simultaneous execution of more than one program in a single computer system, achieved by switching between them.
31. Define multithreading.
Multithreading is the concurrent processing of several tasks or threads inside the same program or process. Because several tasks can be processed in parallel, no task has to wait for another to finish its execution.
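As a minimal POSIX-threads sketch (illustrative only; compile with -pthread), two threads of one process run concurrently and share the process's address space:

    #include <pthread.h>
    #include <stdio.h>

    /* Both threads update the same global variable directly: shared
     * memory, no system calls or IPC needed to communicate. */
    static int shared = 0;
    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

    static void *task(void *arg) {
        const char *name = arg;
        pthread_mutex_lock(&lock);
        shared++;
        printf("thread %s sees shared = %d\n", name, shared);
        pthread_mutex_unlock(&lock);
        return NULL;
    }

    int main(void) {
        pthread_t t1, t2;
        pthread_create(&t1, NULL, task, "A");
        pthread_create(&t2, NULL, task, "B");
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        return 0;
    }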
32. Define compaction.
Compaction refers to the mechanism of shuffling the occupied portions of memory so that all the free portions can be merged together into a single large block. The OS performs this mechanism frequently to overcome the problem of external fragmentation. Compaction is possible only if relocation is dynamic and done at run time; if relocation is static and done at assembly or load time, compaction is not possible.
33. What do you mean by FAT (File Allocation Table)?
A table that indicates the physical location on secondary storage of the space allocated to a file. The FAT chains together the clusters (groups of sectors) that define the contents of a file, and it is used to allocate clusters to files.
34. What is a Kernel?
The kernel is the nucleus or core of the operating system. It represents the small, most intensively used part of the code, and is often loosely thought of as being the entire operating system. Generally, the kernel is maintained permanently in main memory, while other portions of the OS are moved to and from secondary storage (usually a hard disk).
35. What is memory-mapped I/O?
Memory-mapped I/O means that communication between the I/O devices and the processor is done through physical memory locations in the address space. Each I/O device occupies some locations in the address space, i.e., it responds when those addresses are placed on the bus. The processor can write to those locations to send commands and information to the I/O device, and read from those locations to get information and status from it. Memory-mapped I/O makes it easy to write device drivers in a high-level language, as long as the language can load and store from arbitrary addresses.
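The sketch below illustrates the idea. The "device" registers here are simulated by an ordinary array so the program actually runs; on real hardware the base would be the fixed physical address assigned to the device, and the register layout shown is hypothetical.

    #include <stdint.h>
    #include <stdio.h>

    /* Simulated register file standing in for a memory-mapped device. */
    static uint8_t fake_device[8];
    #define DEVICE_BASE  ((volatile uint8_t *)fake_device)

    #define UART_STATUS  (*(DEVICE_BASE + 0))   /* status register (hypothetical) */
    #define UART_DATA    (*(DEVICE_BASE + 4))   /* data register (hypothetical)   */
    #define TX_READY     0x01u

    /* A driver sends a byte with ordinary loads and stores: poll the
     * status register, then store the byte into the data register. */
    static void uart_putc(char c) {
        while (!(UART_STATUS & TX_READY))
            ;                                   /* wait for the device */
        UART_DATA = (uint8_t)c;
    }

    int main(void) {
        UART_STATUS = TX_READY;                 /* pretend the device is ready */
        uart_putc('A');
        printf("data register now holds: %c\n", UART_DATA);
        return 0;
    }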
36. What are the advantages of threads?
- Threads provide parallel processing like processes, but they have one important advantage over processes: they are much more efficient.
- Threads are cheaper to create and destroy because they do not require allocation and de-allocation of a new address space or other process resources.
- It is faster to switch between threads, since the memory mapping does not have to be set up and the memory and address-translation caches do not have to be invalidated.
- Threads are efficient because they share memory; they do not have to use system calls (which are slower because of context switches) to communicate.
37. What are kernel threads?
Threads that execute in kernel mode and are managed directly by the kernel are called kernel threads.
38. What are the necessary conditions for deadlock to exist?
- Processes claim exclusive control of the resources allocated to them (mutual exclusion condition).
- Resources cannot be de-allocated until the process holding them has completed its execution (no preemption condition).
- A process can hold some resources while waiting for other resources to be allocated (hold-and-wait condition).
- A circular chain of processes exists, each waiting for a resource held by the next (circular wait condition).
39. What are the strategies for dealing with deadlock?
- Prevention: place restrictions on resource requests so that deadlock cannot occur.
- Avoidance: plan ahead so that the system never gets into a situation where deadlock is inevitable.
- Recovery: when deadlock is identified in the system, recover from it by removing some of the causes of the deadlock.
- Detection: determine whether a deadlock actually exists and identify the processes and resources that are involved in it.
40. Paging is a memory-management function, while multiprogramming is a processor-management function. Are the two interdependent?
Yes.
41. What is page cannibalizing?
Page swapping or page replacement is called page cannibalizing.
42. What has triggered the need for multitasking in PCs?
- Increased speed and memory capacity of microprocessors, together with support for virtual memory
- Growth of client/server computing
43. What are the four layers that Windows NT has in order to achieve independence?
- Hardware abstraction layer
- Kernel
- Subsystems
- System services
44. What is SMP?
To achieve maximum efficiency and reliability, a mode of operation known as symmetric multiprocessing (SMP) is used. In essence, with SMP any process or thread can be assigned to any processor.
45. What are the key object-oriented concepts used by Windows NT?
- Encapsulation
- Object class and instance
46.
Is Windows NT a full blown object oriented operating
system? Give reasons.
No
Windows NT is not so, because its not implemented in object oriented language
and the data structures reside within one executive component and are not
represented as objects and it does not support object oriented capabilities .
47. What is a drawback of MVT?
It lacks features such as:
- the ability to support multiple processors
- virtual storage
- source-level debugging
48. What is process spawning?
When the OS creates a process at the explicit request of another process, the action is called process spawning.
49. How many jobs can be run concurrently on MVT?
15 jobs.
50. List out some reasons for process termination.
- Normal completion
- Time limit exceeded
- Memory unavailable
- Bounds violation
- Protection error
- Arithmetic error
- Time overrun
- I/O failure
- Invalid instruction
- Privileged instruction
- Data misuse
- Operator or OS intervention
- Parent termination
51. What are the reasons for process suspension?
- Swapping
- Interactive user request
- Timing
- Parent process request
52. What is process migration?
It is the transfer of a sufficient amount of the state of a process from one machine to the target machine so that the process can continue execution there.
53. What is a mutant?
In Windows NT, a mutant provides kernel-mode or user-mode mutual exclusion with the notion of ownership.
54. What is an idle thread?
The special thread that the dispatcher executes when no ready thread is found.
55. What is FtDisk?
It is the fault-tolerant disk driver for Windows NT.
56. What are the possible states a thread can have?
- Ready
- Standby
- Running
- Waiting
- Transition
- Terminated
57. What are rings in Windows NT?
Windows NT uses a protection mechanism called rings, provided by the processor, to implement separation between user mode and kernel mode.
58. What is the Executive in Windows NT?
In Windows NT, the executive refers to the operating-system code that runs in kernel mode.
59. What are the sub-components of the I/O manager in Windows NT?
- Network redirector/server
- Cache manager
- File systems
- Network driver
- Device driver
60. What are DDKs? Name an operating system that includes this feature.
DDKs are Device Driver Kits, which are the equivalent of SDKs for writing device drivers. Windows NT includes DDKs.
61. What level of security does Windows NT meet?
C2-level security.
Glossary of terms:
Address Space, API, Base Address, Batch Processing, Binary semaphore, Block, Busy waiting, Cache Memory, Chained List, Client, Cluster, Compaction, Concurrent, Consumable Resource, Critical Section, Cryptography, Deadlock, Deadlock avoidance, Deadlock detection, Deadlock prevention, Decryption, Demand Paging, Device Driver, Direct Access, Disabled interrupt, Disk Cache, Dispatch, DMA, DOS, Dynamic Relocation, Enabled interrupt, Encryption, External Fragmentation, Field, FIFO, File, File Allocation Table, File Management System (FMS), File Organization, Frame, Gang Scheduling, Indexed File, Internal Fragmentation, Interrupt, Kernel, LIFO, Lightweight Process, Logical Address, Mailbox, Main Memory, Message, Microkernel, Monitor, Monolithic OS, Multiprocessing, Multiprogramming, Multitasking, Mutual Exclusion, Network OS, Operating System, Page, Page Frame, Paging, PCB, Physical Address, Pipe, Preemption, Privileged Instruction, Process, Race Condition, Record, Relative Address, Round Robin, RPC, Secondary Memory, Segment, Segmentation, Semaphore, Sequential Access, Server, Spooling, Starvation, Swapping, Symmetric Multiprocessing, Task, Thrashing, Thread, Time Sharing, Virtual Address, Virtual Storage, Virus, Worm.