Study notes on Operating Systems from BCA solved exam papers. Explore OS concepts, understand memory allocation and process management, and build your knowledge for exam success.
Section A: Operating System Very Short Question Answers
Q1. Define different file systems.
Ans. The different file systems are explained below:
- (a) Disk File Systems: A disk file system takes advantage of the ability of disk storage media to randomly address data in a short amount of time.
- (b) Flash File Systems: A flash file system considers the special abilities, performance and restrictions of flash memory devices.
- (c) Tape File Systems: A tape file system is a file system and tape format designed to store files on tape.
- (d) File-Based Data Systems: Systems that are used to organise and maintain data files are known as file-based data systems. These systems handle one or more files and are not very efficient.
- (e) Network File Systems: A network file system is a file system that acts as a client for a remote file access protocol, providing access to files on a server.
Q2. What is the purpose of system calls?
Ans. System calls act as a conduit between a process and the operating system. They enable user-level programmes to request services from the operating system that the process is not permitted to perform itself. To handle the trap, the operating system enters kernel mode, where it has access to privileged instructions and can perform the necessary service on behalf of the user-level process. Because of the privileged nature of these operations, the operating system performs them every time they are required. For example, for I/O, a process makes a system call instructing the operating system to read or write a specific location, and the operating system fulfils this request.
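The request-and-fulfil pattern above can be seen from Python, whose `os.open`, `os.write` and `os.read` functions are thin wrappers over the corresponding system calls; each call traps into the kernel, which performs the privileged I/O on the process's behalf. A minimal sketch (the file path is created just for this demonstration):

```python
import os
import tempfile

# Each os.* call below traps into the kernel, which performs the
# privileged I/O on behalf of this user-level process.
path = os.path.join(tempfile.mkdtemp(), "demo.txt")

fd = os.open(path, os.O_CREAT | os.O_WRONLY)  # open system call
os.write(fd, b"hello, kernel")                # write system call
os.close(fd)                                  # close system call

fd = os.open(path, os.O_RDONLY)
data = os.read(fd, 64)                        # read system call
os.close(fd)
print(data.decode())  # → hello, kernel
```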
Q3. What is the principal advantage of multiprogramming?
Ans.
- 1. A multiprogramming system has numerous programmes ready to run in memory at the same time.
- 2. There is only one CPU, and only one programme can be performed on it at any given moment.
- 3. Once a programme is allocated the CPU, it continues to run until some I/O activity is required; the CPU is then assigned to another programme.
- 4. CPU utilisation improves since the CPU is always active executing some or all of the programme rather than waiting for slow I/O to complete when a programme is awaiting I/O.
- 5. This operating system discriminates against programmes that demand a large amount of I/O, because these would be constantly interrupted for the I/O and would then have to wait until they were scheduled to run again.
- 6. Multiprogramming makes efficient use of the CPU by overlapping the demands from many users on the CPU and its I/O devices. It tries to maximise CPU utilisation by always having something for the CPU to do.
Q4. What is hard and soft semaphore?
Ans. A semaphore is an integer variable used for synchronisation. There are two types of semaphores:
- (a) Binary semaphores.
- (b) Counting semaphores.
Binary semaphores are associated with two operations (up/down, also called lock/unlock). They can take only two values (0/1) and are employed for acquiring locks: when a resource is available, the process in charge sets the semaphore to 1; otherwise it is set to 0. Counting semaphores, whose values can be larger than one, are commonly used to assign resources from a pool of identical resources.
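Both kinds can be sketched with Python's `threading.Semaphore`: a semaphore initialised to 1 behaves as a binary semaphore (a lock), while one initialised to the pool size limits how many threads hold a resource at once. The `peak` counter below is just for this illustration:

```python
import threading

pool = threading.Semaphore(3)   # counting semaphore: pool of 3 resources
lock = threading.Semaphore(1)   # binary semaphore: acts as a lock
in_use, peak = 0, 0

def worker():
    global in_use, peak
    with pool:                  # down / P operation on the counting semaphore
        with lock:              # binary semaphore protects the shared counters
            in_use += 1
            peak = max(peak, in_use)
        with lock:
            in_use -= 1
    # up / V happens automatically when the with-block exits

threads = [threading.Thread(target=worker) for _ in range(10)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(peak <= 3)  # → True: never more than 3 workers inside the pool
```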
Q5. How is reliability measured for a hardware disk?
Ans. The major measures of hardware disc quality are capacity, access time, data transfer rate, and reliability. Access time is the period between when a read or write request is issued and when data transfer begins.
Section B: Operating System Short Question Answers
Q6. On a system using paging and segmentation, the virtual address space consists of up to 8 segments, where each segment can be up to 2²⁹ bytes long. The hardware pages each segment into 256-byte pages. How many bits in the virtual address specify the:
(i) Segment number.
(ii) Page number.
(iii) offset within page.
(iv) Entire virtual address.
Ans. (i) Segment Number: The virtual address space consists of up to 8 segments.
So, 8 = 2³
∴ 3 bits are needed to specify the segment number.
(ii) Page Number: The hardware pages each segment into 256-byte pages.
So, page size = 256 = 2⁸ bytes
Segment size = 2²⁹ bytes
∴ Pages per segment = 2²⁹ / 2⁸ = 2²⁹⁻⁸ = 2²¹ pages
∴ 21 bits are required to specify the page number.
(iii) Offset within Page: For a 2⁸-byte (256-byte) page, 8 bits are needed.
(iv) Entire Virtual Address:
= Segment number bits + Page number bits + Offset bits
= 3 + 21 + 8
= 32 bits
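The bit-count arithmetic above can be checked with a few lines of Python (the variable names are chosen for this sketch):

```python
import math

segments = 8            # up to 8 segments
segment_size = 2 ** 29  # each segment up to 2^29 bytes
page_size = 256         # 256-byte pages

segment_bits = int(math.log2(segments))                # 8 = 2^3 → 3 bits
page_bits = int(math.log2(segment_size // page_size))  # 2^29 / 2^8 = 2^21 → 21 bits
offset_bits = int(math.log2(page_size))                # 256 = 2^8 → 8 bits
total_bits = segment_bits + page_bits + offset_bits

print(segment_bits, page_bits, offset_bits, total_bits)  # → 3 21 8 32
```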
Q7. Discuss the characteristics of devices which affect the supervisor call interface.
Ans. The characteristics of operating systems are as follows:
- (a) Protected and supervisor mode.
- (b) Allows disk access, file systems, device drivers, networking and security.
- (c) Programme execution.
- (d) Memory management, virtual memory and multi-tasking.
- (e) Handling I/O operations.
- (f) Manipulation of the file system.
- (g) Error detection and handling.
- (h) Resource allocation.
- (i) Information and resource protection.
Q8. What methods determine how a file’s records are allocated into blocks?
Ans. File’s Allocation Methods: The allocation methods define how the files are stored in the disk blocks. There are three main disk space or file allocation methods:
(a) Contiguous Allocation: In this scheme, each file occupies a contiguous set of blocks on the disk. For example, if a file requires n blocks and is given a block b as the starting location, then the blocks assigned to the file will be: b, b + 1, b + 2, …, b + n − 1. This means that given the starting block address and the length of the file (in terms of blocks required), we can determine the blocks occupied by the file. The directory entry for a file with contiguous allocation contains: the address of the starting block and the length of the allocated portion.
Advantages: The advantages of contiguous allocation are as follow:
- (i) This allows for both sequential and direct access. The address of the kth block of the file, which begins at block b, can be easily determined for direct access as (b + k).
- (ii) This is exceptionally quick because the number of seeks is kept to a minimum due to the contiguous allocation of file blocks.
Disadvantages: The disadvantages of contiguous allocation are as follows:
- (i) Internal and external fragmentation plague this strategy. As a result, it is inefficient in terms of memory use.
- (ii) Extending file size is difficult since it is dependent on the availability of contiguous memory at any one time.
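The direct-access arithmetic for contiguous allocation can be sketched in a few lines (the block numbers below are made up for illustration):

```python
# Directory entry for a contiguously allocated file: (start block b, length n).
def contiguous_blocks(start, length):
    """All disk blocks occupied by the file: b, b+1, ..., b+n-1."""
    return list(range(start, start + length))

def kth_block(start, k):
    """Direct access: the address of the k-th block is simply b + k."""
    return start + k

blocks = contiguous_blocks(5, 4)
print(blocks)           # → [5, 6, 7, 8]
print(kth_block(5, 2))  # → 7
```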
(b) Linked List Allocation: In this scheme, each file is a linked list of disk blocks which need not be contiguous. The disk blocks can be scattered anywhere on the disk.
The directory entry contains a pointer to the starting and the ending file block. Each block contains a pointer to the next block occupied by the file.
Advantages: The advantages of linked list allocation are as follows:
- (i) In terms of file size, this is extremely adaptable. Because the system does not have to hunt for a contiguous portion of memory, file size can be simply increased.
- (ii) There is no external fragmentation using this procedure. As a result, it is relatively superior in terms of memory consumption.
Disadvantages: The disadvantages of linked list allocation are as follows:
- (i) It does not support direct or random access. We cannot directly access a file’s blocks: the file’s k-th block can be obtained only by traversing k blocks sequentially from the file’s initial block, using the block pointers.
- (ii) The use of pointers in the linked allocation incurs some additional cost.
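The traversal cost of linked allocation is easy to see in a sketch: each block stores a pointer to the next, so reaching the k-th block takes k pointer hops. The `disk` dictionary below is a hypothetical layout mapping block numbers to (data, next-block) pairs:

```python
# Hypothetical linked allocation: block number -> (data, next_block);
# next_block is None at the end of the file.
disk = {
    9:  ("part1", 16),
    16: ("part2", 1),
    1:  ("part3", None),
}

def read_kth_block(disk, start, k):
    """Random access costs k pointer hops: follow the chain from the start."""
    block = start
    for _ in range(k):
        block = disk[block][1]   # follow the next-block pointer
    return disk[block][0]

print(read_kth_block(disk, 9, 2))  # → part3
```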
(c) Indexed Allocation: In this scheme, a special block known as the index block contains the pointers to all the blocks occupied by a file. Each file has its own index block. The ith entry in the index block contains the disk address of the ith file block.
Advantages: The advantages of index allocation are as follows:
- (i) This supports direct access to the blocks occupied by the file and therefore provides fast access to the file blocks.
- (ii) It overcomes the problem of external fragmentation.
Disadvantages: The disadvantages of indexed allocation are as follows:
- (i) The pointer overhead for indexed allocation is greater than linked allocation.
- (ii) For very small files, say files that span only 2-3 blocks, indexed allocation would keep one entire block (the index block) for the pointers, which is inefficient in terms of memory utilisation. In linked allocation, by contrast, we lose the space of only 1 pointer per block.
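Indexed allocation trades the traversal cost of linked allocation for one extra lookup: the i-th entry of the index block gives the disk address of the i-th file block directly. A minimal sketch with a hypothetical disk layout:

```python
# Hypothetical indexed allocation: the index block holds one pointer
# per file block; entry i gives the disk address of file block i.
disk = {19: "blockA", 1: "blockB", 10: "blockC"}
index_block = [19, 1, 10]

def read_ith_block(disk, index_block, i):
    """Direct access: one lookup in the index block, then one disk read."""
    return disk[index_block[i]]

print(read_ith_block(disk, index_block, 1))  # → blockB
```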
Section C: Operating System Detailed Question Answer
Q9. Define multiprocessor system. What are difference between symmetric and asymmetric multiprocessing? Explain, advantages and disadvantage of multiprocessor systems.
Ans. Multiprocessor System
A multiprocessor system is defined as a system with more than one processor. More precisely, a multiprocessor is a computer system with two or more Central Processing Units (CPUs), each sharing the common main memory as well as the peripherals. This enables simultaneous processing of programmes.
Difference between Symmetric and Asymmetric Multiprocessing
| S. No. | Symmetric multiprocessing | Asymmetric multiprocessing |
| --- | --- | --- |
| 1. | All the processors are treated equally. | The processors are not treated equally. |
| 2. | Tasks of the operating system are done by the individual processors. | Tasks of the operating system are done by a master processor. |
| 3. | Symmetric multiprocessing systems are costlier. | Asymmetric multiprocessing systems are cheaper. |
| 4. | Symmetric multiprocessing systems are complex to design. | Asymmetric multiprocessing systems are easier to design. |
| 5. | The architecture of each processor is the same. | The processors can have different architectures. |
Advantages of Multiprocessor Systems: The advantages of multiprocessor systems are as follows:
- (a) High Throughput: Throughput is the number of processes executed by the CPU in a given time, so this type of system has higher throughput.
- (b) High Reliability: As multiple processors share their work with one another, work is completed in collaboration. This makes these systems reliable.
- (c) Economic: As more work is completed by the CPUs, these systems are economical as well.
Disadvantages of Multiprocessor Systems: The disadvantages of multiprocessor systems are as follows:
- (a) Communication: As multiple processors are communicating with each other so the operating system implementation is complex to handle.
- (b) More Memory Required: As there are multiprocessors working with each other so each processor needs memory space.
- (c) Performance: If any processor fails, its work is divided among the other processors. The side effect is that the work takes more time to complete and the performance of the system is affected.
Q10. Write short notes on following:
(i) Deadlock prevention
Ans. Deadlock prevention approaches manage resources in such a way that at least one of the four necessary conditions for deadlock is not met; they reject at least one of the conditions required for a deadlock to develop. As a drawback, deadlock prevention strategies can lead to inefficient use of resources.
(ii) Disk swap-space management
Ans. In swap-space management, virtual memory uses disc space as an extension of main memory. Because disc access is substantially slower than RAM access, heavy use of swap space degrades system performance. The primary purpose of swap-space design and implementation is to provide the highest possible throughput for the virtual memory system.
(iii) Real time system
Ans. A Real Time Operating System (RTOS) is an OS designed for real-time applications. These operating systems respond to application requests in near real time. Programmers have more control over process priorities with a real-time operating system; the priority level of an application process may be higher than that of a system process. Real-time operating systems minimise the critical sections of system code, so that a running programme can almost always be interrupted when needed.
The level of consistency in the length of time it takes to accept and execute an application’s task is a crucial feature of a real time OS; this variability is known as jitter. Jitter is lower in a hard real time operating system than in a soft real time operating system. The primary design goal is to guarantee a soft or hard performance category rather than high throughput. A soft real time operating system can typically or generally meet a deadline, while a hard real time operating system meets deadlines deterministically.
A real-time operating system has a sophisticated scheduling algorithm. Although scheduler flexibility allows for broader computer-system orchestration of process priorities, a real-time operating system is typically dedicated to a small group of applications. Minimal interrupt latency and thread switching delay are important features in a real time OS, although a real time OS is valued more for how consistently it can respond than for the quantity of work it can complete in a given period of time.
A Real Time Operating System (RTOS) is a computing environment that responds to input in real time. A real-time deadline can be so short that the system response appears to be immediate. However, the phrase “real time computing” has also been used to describe “slow real time” output with a longer but definite time limit.
To understand the difference between real time and regular operating systems, imagine yourself in a computer game. Each action you take in the game is analogous to a programme running in that environment. A game with a real-time operating system as its environment can feel like an extension of your body, because you can count on a precise “lag time”: the time between your action request and the computer’s visible execution of that request. A normal operating system, on the other hand, may feel disjointed due to inconsistent lag time. Real-time programmes and their operating system environments must prioritise meeting deadlines over all else in order to achieve time reliability. In the gaming example, this might result in dropped frames or lower visual quality when reaction time and visual effects conflict.
(iv) Page replacement
Ans. Page Replacement : A page fault happens when a user programme is executed. The operating system analyses its internal table to ensure that this is a page fault and not an unauthorised memory access. The operating system detects that the necessary page resides on the backup store, but there are no free frames listed, and all memory is in use, necessitating page replacement.
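When page replacement is needed, the OS must pick a victim frame. One classic policy is FIFO (evict the page that has been resident longest); the sketch below simulates it and counts faults for a well-known reference string (the function and variable names are chosen for this illustration):

```python
from collections import deque

def fifo_page_faults(reference_string, frame_count):
    """Count page faults under FIFO replacement with frame_count frames."""
    frames = deque()          # oldest resident page at the left
    faults = 0
    for page in reference_string:
        if page not in frames:
            faults += 1       # page fault: page not resident
            if len(frames) == frame_count:
                frames.popleft()   # evict the oldest page
            frames.append(page)
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(fifo_page_faults(refs, 3))  # → 9
print(fifo_page_faults(refs, 4))  # → 10
```

Note that this reference string is the classic demonstration of Belady's anomaly: under FIFO, adding a fourth frame actually increases the fault count from 9 to 10.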
Q11. (i) What is a computer bus? List the different types of buses.
Ans. Computer Bus
A computer bus is a subsystem that connects and transfers data between computer components. An internal bus, for example, links computer internals to the motherboard. A ‘bus topology’ or design can also be used to describe digital links in different ways.
Different Types of Computer Bus
The various types of computer bus are explained below:
- (a) System Bus: A parallel bus that simultaneously transfers data in 8-, 16-, or 32-bit channels and is the primary pathway between the CPU and memory.
- (b) Internal Bus: Connects a local device, like internal CPU memory.
- (c) External Bus: Connects peripheral devices to the motherboard, such as scanners or disk drives.
- (d) Expansion Bus: Allows expansion boards to access the CPU and RAM.
- (e) Frontside Bus: Main computer bus that determines data transfer rate speed and is the primary data transfer path between the CPU, RAM and other motherboard devices.
- (f) Backside Bus: Transfers secondary cache (L2 cache) data at faster speeds, allowing more efficient CPU operations.
(ii) What is architecture of peripheral component interconnect bus?
Ans. The CPU and extension boards such as modem cards, network cards, and sound cards are connected through a Peripheral Component Interconnect Bus (PCI bus). These extension boards are typically inserted into motherboard expansion slots.
The PCI local bus has supplanted the Video Electronics Standards Association (VESA) local bus and the Industry Standard Architecture (ISA) bus as the common standard for a PC expansion bus. USB has essentially supplanted PCI.
This term is also known as conventional PCI or simply PCI.
PCI requirements include:
- (a) Bus timing.
- (b) Physical size (determined by the wiring and spacing of the circuit board).
- (c) Electrical features.
- (d) Protocols.
The Peripheral Component Interconnect Special Interest Group standardises PCI requirements. Most PCs today do not have expansion cards, but rather components built into the motherboard. Specific cards continue to use the PCI bus. However, for practical purposes, the PCI expansion card has been superseded by USB.
The operating system scans all PCI buses during system startup to obtain information about the resources required by each device. The operating system (OS) communicates with each device and allocates system resources such as memory, interrupt requests, and input/output (I/O) address space.
Q12. What is demand paging? How page fault occurs? What are factors that affect the determination of the page size?
Ans. Demand Paging : The most prevalent virtual memory scheme is demand paging. Demand paging works in the same way as a paging system with swapping. The backing store is a swapping device where programmes are stored. When we need to run a programme, we load it into memory. Instead of swapping the complete programme into memory, we employ a lazy swapper: the lazy swapper never swaps a page into memory unless it is absolutely necessary. There are numerous benefits to using a lazy swapper. It reduces swap time and the amount of physical memory required, allowing for more multiprogramming.
A page fault happens when a page is accessed that has not been brought into main memory. If the memory access is invalid, the operating system aborts the programme. If it is valid, a free frame is found and I/O is requested to read the required page into it.
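The lazy-swapper idea can be sketched in a few lines: pages are loaded from the backing store only on first access, so only the pages a programme actually touches occupy memory. The dictionaries below are a toy model, not any real OS structure:

```python
# Toy demand-paging model: pages are loaded from "backing store"
# only when first accessed (lazy swapping).
backing_store = {0: "code", 1: "data", 2: "stack", 3: "unused-page"}
memory = {}       # resident pages
faults = 0

def access(page):
    global faults
    if page not in memory:                    # page fault: trap to the "OS"
        faults += 1
        memory[page] = backing_store[page]    # lazy load on demand
    return memory[page]

for p in [0, 1, 0, 2, 1]:                     # programme never touches page 3
    access(p)

print(faults, sorted(memory))  # → 3 [0, 1, 2]
```

Page 3 is never brought into memory at all, which is exactly the saving demand paging provides.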
The main factors affecting page size are explained below:
(a) About the Task: The size of a data page is determined by the buffer pool in which you define the table space. For example, a table space that is defined in a 4 kB buffer pool has 4 kB page sizes, and one that is defined in an 8 kB buffer pool has 8 kB page sizes.
Data in table spaces is stored and allocated in record segments. Any record segment can be 4 kB in size, or the size determined by the buffer pool (4 kB, 8 kB, 16 kB or 32 kB). In a table space with 4 kB record segments, an 8 kB page size requires two 4 kB records, and a 32 kB page size requires eight 4 kB records.
(b) Procedure: To choose data page sizes, use the following approaches:
Use the default of 4 kB page sizes as a starting point when access to the data is random and only a few rows per page are needed.
If row sizes are very small, using the 4 kB page size is recommended.
Use larger page sizes in the following situations:
When the size of individual rows is greater than 4 kB.
When you can achieve higher density on disk by choosing a large page size.
Q13. What is thread? Discuss different approaches for implementation of process threads.
Ans. Processes and Threads : We can think of a thread as basically a lightweight process. In order to understand this let us consider the two main characteristics of a process :
Unit of Resource Ownership : A process is allocated :
- (i) A virtual address space to hold the process image.
- (ii) Control of some resources (files, I/O devices..).
Unit of Dispatching : A process is an execution path through one or more programs :
- (i) Execution may be interleaved with other processes.
- (ii) The process has an execution state and a dispatching priority.
If we treat these two characteristics as independent (as modern OS theory does), then:
The unit of resource ownership is usually referred to as a process or task. These Processes have:
- (i) A virtual address space which holds the process image.
- (ii) Protected access to processors, other processes, files and I/O resources.
The unit of dispatching is usually referred to as a thread or a lightweight process. Thus, a thread :
- (i) Has an execution state (running, ready etc.).
- (ii) Saves thread context when not running.
- (iii) Has an execution stack and some per-thread static storage for local variables.
- (iv) Has access to the memory address space and resources of its process.
- (v) All threads of a process share these resources: when one thread alters a (non-private) memory item, all other threads of the process see the change, and a file opened by one thread is available to the others.
Benefits of Threads vs Processes : If implemented correctly, threads have some advantages over (multiple) processes. They take :
- (i) Less time to create a new thread than a process, because the newly created thread uses the current process address space.
- (ii) Less time to terminate a thread than a process.
- (iii) Less time to switch between two threads within the same process, partly because the newly created thread uses the current process address space.
- (iv) Less communication overhead: communicating between the threads of one process is simple because the threads share everything, the address space in particular. So, data produced by one thread is immediately available to all the other threads.
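The shared-address-space point above can be demonstrated with Python's `threading` module: several threads update one dictionary, and the result is immediately visible to all of them (a lock protects the shared counter, echoing the semaphore discussion earlier; the names here are chosen for this sketch):

```python
import threading

# All threads of a process share the same address space, so data written
# by one thread is immediately visible to the others.
shared = {"total": 0}
lock = threading.Lock()

def add(n):
    for _ in range(n):
        with lock:                 # protect the shared (non-private) data
            shared["total"] += 1

# Creating a thread is cheap: it reuses the current process's address space.
threads = [threading.Thread(target=add, args=(1000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(shared["total"])  # → 4000
```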