OS
| Question | Answer |
|---|---|
| A program that acts as an interface between the user and the computer hardware | Operating System |
| Software that controls and coordinates the use of the hardware among the various application programs for the various users | Operating System |
| Execute user programs, make computer system convenient, and use computer hardware efficiently are the goals of OS | True |
| Allows user to enter and receive information | User Interface |
| Three types of user interface | Command Line, Batch, and Graphical User Interface |
| Capability of OS to load a program into memory and execute it | Program Execution |
| An OS service that allows programs to read and write files | File system manipulation |
| The OS is responsible for reading and/or writing data from I/O devices such as disks, tapes, printers, keyboards, etc. | True |
| A service of the OS that allows a process to exchange information with another process | Communication |
| The OS manages resources and allocates them to different programs and users | Resource Allocation |
| Ability to detect errors within the computer system and take action | Error Detection |
| Service of OS that keeps track of time and resources used by various tasks and users | Job Accounting |
| mechanism for controlling access of processes or users to resources defined by the OS | Protection |
| defense of the system against internal and external attacks | Security |
| A central part of an OS which manages system resources and resides in memory | Kernel |
| First program that loads after bootloader | Kernel |
| a program that loads and starts the boot time tasks and processes of an OS | Bootloader |
| The user never directly interacts with the computer | Batch Operating System |
| A logical extension of multiprogramming in which the CPU switches jobs so frequently that users can interact with each job while it is running | Multitasking/Time-Sharing |
| In time-sharing systems, the response time should be < 1 second | True |
| Unix, Linux, Multics and Windows are under Time-sharing OS | True |
| Uses multiple central processors to serve multiple real-time applications and multiple users. Also known as loosely coupled system | Distributed Operating System |
| WWW and Cloud Computing are types of Distributed OS | True |
| A system that runs on a server and provides the server the capability to manage data | Network Operating System |
| System that allows shared file and printer access among multiple computers in a network | Network Operating System |
| Systems used when time requirements are very strict, as in missile systems, air traffic control systems, and robots | Real-time systems |
| Also known as Mobile OS, built exclusively for a mobile device | Handheld Operating System |
| One or more CPUs, device controllers connect through common ______ providing access to shared memory | bus |
| I/O devices and the CPU cannot execute concurrently | False |
| Each device controller has a local buffer | True |
| It is a signal emitted by hardware or software when a process or an event needs immediate attention | Interrupt |
| Is a signal created and sent to the CPU that is caused by some action taken by a hardware device | Hardware Interrupt |
| Arises due to illegal and erroneous use of an instruction or data. It often occurs when an application software terminates | Software Interrupt |
| The operating system sends a signal to each device asking if it has a request | Polling Interrupt |
| The requesting device sends an interrupt to the operating system | Vectored Interrupt System |
| An operation that allows OS to protect itself and other system components | Dual-mode |
| a way for programs to interact with the OS | System Call |
| The mode bit is 0 for user code | False |
| The mode bit is 0 for kernel code | True |
| Instructions that are only executable in kernel mode | Privileged |
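
The system-call rows above can be made concrete with a small sketch. This is illustrative Python: `os.open`, `os.write`, and `os.read` are thin wrappers over the corresponding POSIX system calls, so each call traps from user mode into kernel mode and back.

```python
import os

# os.open/os.write/os.read are thin wrappers over POSIX system calls: each
# call traps from user mode into kernel mode, the kernel does the privileged
# I/O work, then control returns to user mode.
fd = os.open("demo.txt", os.O_WRONLY | os.O_CREAT | os.O_TRUNC)
os.write(fd, b"hello via a system call\n")
os.close(fd)

fd = os.open("demo.txt", os.O_RDONLY)
data = os.read(fd, 100)          # read back through the read() system call
os.close(fd)
os.remove("demo.txt")            # unlink() system call
print(data.decode(), end="")     # hello via a system call
```

Python hides the mode switch, but every one of these calls crosses the user/kernel boundary that dual-mode operation protects.

| Question | Answer |
|---|---|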
| One main CPU capable of executing a general-purpose instruction set | Single-Processor System |
| Also known as parallel-system/multicore/tightly-coupled systems | Multiprocessor System |
| The advantages of multiprocessor systems are increased throughput, economy of scale, and increased reliability | True |
| Each processor is assigned a specific task | Asymmetric multiprocessing |
| The most common scheme, in which each processor performs all tasks within the operating system | Symmetric multiprocessing |
| In symmetric multiprocessing, all processors are peers and there is no boss-worker relationship | True |
| A trend in CPU design to have multiple computing cores on a single chip | Multicore |
| Provides a high-availability service which survives failures and shares storage via storage-area network | Clustered System |
| A type of computing whose traditional boundaries are blurring over time | Traditional Computer |
| Refers to computing on handheld smartphones | Mobile Computing |
| A collection of physically separate computer systems to provide users an access to various resources that the system maintains | Distributed system |
| provides an interface to client to REQUEST services (i.e. database) | Compute-server |
| provides interface for clients to STORE & RETRIEVE files | File-server |
| technology that allows operating systems to run as applications within other operating systems | Virtualization |
| used when the source CPU type is different from the target CPU type | Emulation |
| type of computing that delivers computing, storage and even applications as a service across a network | Cloud Computing |
| cloud available via the Internet | Public Cloud |
| cloud run by a company for that company’s own use | Private Cloud |
| cloud that includes both public and private | Hybrid Cloud |
| one or more applications available via the Internet | Software as a Service (SaaS) |
| software stack ready for application use via the Internet | Platform as a Service (PaaS) |
| servers or storage available over the Internet | Infrastructure as a Service (IaaS) |
| A system that allows others to study, change as well as distribute the software to other people | Open Source operating systems |
| an operating system that provides services across the network | Network operating system |
| It is a program in execution | Process |
| A program by itself is not a process | True |
| A program is an active entity | False |
| A process is an active entity | True |
| Its execution is a cycle between two states: CPU execution (CPU burst) and I/O wait (I/O burst) | Process |
| Process execution begins with a CPU burst that is followed by an I/O burst | True |
| When a program is loaded into memory, it becomes a process | True |
| Four sections of a process | stack, heap, text, data |
| Contains temporary data like function parameters and local variables | Stack |
| Dynamically allocated memory to a process during its run time | Heap |
| Contains the program code; the current activity is represented by the value of the program counter and the contents of the processor's registers | Text |
| Contains global and static variables | Data |
| The process is being created | New |
| The CPU is executing its instructions | Running |
| The process is waiting for some event to occur | Waiting |
| The process is waiting for the OS to assign a processor to it | Ready |
| The process has finished execution | Terminated |
| It represents each process in OS | process control block/task control block |
| data block containing information associated with a specific process | process control block (PCB) |
| The state may be new, ready, running, waiting, or halted | Process state |
| It indicates the address of the next instruction to be executed | Program Counter |
| It include accumulators, index registers, stack pointers, and general-purpose registers | CPU Registers |
| This information includes the process priority, pointers to scheduling queues, and any other scheduling parameters | CPU Scheduling Information |
| This information includes limit registers or page tables | Memory Management Information |
| This information includes the amount of CPU and real time used, time limits, process numbers | Accounting Information |
| This information includes outstanding I/O requests | I/O Status Information |
| serves as the repository for any information that may vary from process to process | process control block (PCB) |
| A CPU being switched from one process to another | Context Switch Diagram |
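
The PCB fields and the context switch above can be sketched as a toy data structure; the field names and the `cpu` dictionary here are illustrative, not taken from any real kernel.

```python
from dataclasses import dataclass, field

# A toy PCB holding a few of the fields listed above (state, program
# counter, CPU registers). Names are illustrative only.
@dataclass
class PCB:
    pid: int
    state: str = "new"                 # new/ready/running/waiting/terminated
    program_counter: int = 0
    registers: dict = field(default_factory=dict)

def context_switch(cpu, old, new):
    """Save the running process's context into its PCB, then load the next."""
    old.program_counter = cpu["pc"]    # save state of the old process
    old.registers = dict(cpu["regs"])
    old.state = "ready"
    cpu["pc"] = new.program_counter    # reload saved state of the new process
    cpu["regs"] = dict(new.registers)
    new.state = "running"

cpu = {"pc": 120, "regs": {"acc": 7}}
p1 = PCB(pid=1, state="running")
p2 = PCB(pid=2, state="ready", program_counter=400, registers={"acc": 0})
context_switch(cpu, p1, p2)
print(cpu["pc"], p1.state, p2.state)   # 400 ready running
```

| Question | Answer |
|---|---|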
| A process may create several new processes, via a create-process system call | True |
| Known as creating process | parent process |
| Known as new processes | children of parent process |
| A process terminates when it finishes its last statement and asks the operating system to delete it using the exit system call | True |
| A parent may terminate the execution if the task assigned to the child is no longer required | True |
| A parent may terminate the execution if the child has exceeded its usage of some of the resources it has been allocated | True |
| A parent may terminate the execution if the parent itself is exiting and the OS does not allow a child to continue without its parent | True |
| A phenomenon wherein if a process terminates, then all its children must also be terminated by the operating system | cascading termination |
| A process that cannot be affected by other processes | Independent |
| A characteristic of a process wherein the result of the execution depends solely on the input state | Its execution is deterministic |
| The result of the execution will always be the same for the same input | Its execution is reproducible |
| The processes execution can be stopped and restarted without causing ill effects | True |
| A process that can be affected by other processes | Cooperating |
| The objective of multiprogramming is to have some process running at all times, to maximize CPU utilization | True |
| This queue consists of all processes in the system | job queue |
| The processes that are residing in main memory and are ready and waiting to execute | ready queue |
| Scheduler that selects processes from the secondary storage and loads them into memory for execution | Long-term scheduler |
| Scheduler that selects process from among the processes that are ready to execute, and allocates the CPU to one of them | Short-term scheduler |
| This scheduler removes (swaps out) certain processes from memory to lessen the degree of multiprogramming | Medium-term scheduler |
| A scheme wherein the process can be reintroduced into memory and its execution can be continued where it left off | Swapping |
| A task that switches the CPU to another process by saving the state of the old process and loading the saved state of the new process | context switch |
| CPU scheduling decisions may take place when a process switches from the running state to the waiting state | True |
| CPU scheduling decisions may take place when a process switches from the running state to the ready state | True |
| CPU scheduling decisions may take place when a process switches from the waiting state to the ready state | True |
| CPU scheduling decisions may take place when a process terminates | True |
| Scheduling scheme that takes place only under decisions 1 and 4 | non-preemptive |
| Scheduling scheme that can take place under decisions 2 and 3 | preemptive |
| No process is interrupted until it is completed | Non-preemptive scheduling |
| It works by dividing CPU time into slots given to a process and is used when the process switches from the running to the ready state | Preemptive scheduling |
| It measures how busy the CPU is. It ranges from 40% to 90% in real systems | CPU Utilization |
| The amount of work completed in a unit of time. The goal is to maximize the number of jobs processed per unit time | Throughput |
| Measures how long it takes to execute a process: the interval from the time of submission to the time of completion | Turnaround Time |
| The time a job waits for resource allocation; the total amount of time a process spends waiting in the ready queue | Waiting Time |
| The time from the submission of a request until the system makes the first response; the amount of time it takes to start responding | Response Time |
| A good CPU scheduling algorithm maximizes CPU utilization and throughput and minimizes turnaround time, waiting time and response time | True |
| The simplest and non-preemptive CPU-scheduling algorithm. Also used to break ties for other scheduling algorithms | FCFS |
| When the CPU is available, it is assigned to the process that has the smallest next CPU burst | SJF |
| One of the most common scheduling algorithms in batch systems | Priority scheduling (NP) |
| A new process arriving may have a shorter next CPU burst than what is left of the currently executing process | Preemptive SJF/SRTF |
| It will preempt the CPU if the priority of the newly arrived process is higher than the currently running process | Priority scheduling (P) |
| This algorithm is specifically for time-sharing systems and has time quantum(small unit of time) | Round Robin |
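
The FCFS rows above can be sketched as a short simulation; the three jobs below are the classic 24/3/3 burst example, used only for illustration.

```python
# FCFS: processes run in arrival order; each process waits for everything
# ahead of it to finish before it gets the CPU (non-preemptive).
def fcfs(jobs):  # jobs: list of (name, arrival, burst), sorted by arrival
    clock, waits, turnarounds = 0, {}, {}
    for name, arrival, burst in jobs:
        clock = max(clock, arrival)      # CPU may sit idle until arrival
        waits[name] = clock - arrival
        clock += burst                   # run the whole burst to completion
        turnarounds[name] = clock - arrival
    return waits, turnarounds

w, t = fcfs([("P1", 0, 24), ("P2", 0, 3), ("P3", 0, 3)])
print(w)  # {'P1': 0, 'P2': 24, 'P3': 27}
print(t)  # {'P1': 24, 'P2': 27, 'P3': 30}
```

With this arrival order the average waiting time is (0 + 24 + 27) / 3 = 17 time units; scheduling the short jobs first would cut it sharply, which is why FCFS is sensitive to arrival order.

| Question | Answer |
|---|---|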
| A functionality of an operating system which handles or manages primary memory and moves processes back and forth between main memory and disk during execution | Memory management |
| the process of mapping from one address space to another address space | Address binding |
| a program resides on a disk as a binary executable file | True |
| The program must then be brought into main memory before the CPU can execute it | True |
| If it is not known at compile time where the process will reside, relocatable addresses are generated | Load time |
| The instructions are in memory and are being processed by the CPU | Execution time |
| An address generated by the CPU | logical address |
| An address seen by the memory unit | physical address |
| The run-time mapping from logical to physical addresses is done by what hardware device | memory management unit |
| The base register is now the relocation register | True |
| process of reserving a partial or complete portion of computer memory for the execution of programs and processes | Memory allocation |
| Main memory has two partitions: Low and High Memory | True |
| Operating system resides in this memory | Low Memory |
| User processes are held in high memory | High Memory |
| It sets aside some memory for the OS, gives the rest to the user program, and does not support multiprogramming | Single Partition Allocation |
| The oldest and simplest technique used to put more than one process in main memory | Fixed Partitions |
| The degree of multiprogramming is bounded by the number of partitions | True |
| The first job claims the first available memory with space more than or equal to its size | FIRST-FIT allocation |
| keeps the free/busy list in order by size – smallest to largest | BEST-FIT allocation |
| As processes are loaded and removed from memory, the free memory space is broken into little pieces | Fragmentation |
| Two types of fragmentation | Internal fragmentation & External fragmentation |
| occurs when a partition is too big for a process; its amount is the difference between the partition size and the process size | Internal fragmentation |
| occurs when a partition is available, but is too small for any waiting job | External fragmentation |
| this is a memory allocation technique where it is possible to have a variable number of tasks in memory simultaneously | Multiple Variable Partition Technique (MVT) |
| in MVT, initially the OS views memory as one large block of available memory called a ____ | hole |
| a memory allocation technique where if a hole in memory exists, the OS allocates only as much as is needed, keeping the rest available | Multiple Variable Partition Technique (MVT) |
| Allocates the first hole that is large enough. It is generally faster and the spaces goes to higher memory | First Fit |
| Allocates the smallest hole that is large enough, it produces the smallest leftover hole | Best Fit |
| Allocates the largest hole and produces the largest leftover hole | Worst Fit |
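
The three placement strategies above can be sketched as hole-selection functions; the hole sizes below are made up for illustration.

```python
# Given a list of free-hole sizes, pick a hole index for a request under
# each strategy. Returns None when no hole is big enough.
def first_fit(holes, size):
    return next((i for i, h in enumerate(holes) if h >= size), None)

def best_fit(holes, size):
    fits = [(h, i) for i, h in enumerate(holes) if h >= size]
    return min(fits)[1] if fits else None    # smallest hole that still fits

def worst_fit(holes, size):
    fits = [(h, i) for i, h in enumerate(holes) if h >= size]
    return max(fits)[1] if fits else None    # largest hole

holes = [100, 500, 200, 300, 600]
print(first_fit(holes, 212))  # 1 (500 is the first hole >= 212)
print(best_fit(holes, 212))   # 3 (300 leaves the smallest leftover hole)
print(worst_fit(holes, 212))  # 4 (600 leaves the largest leftover hole)
```

| Question | Answer |
|---|---|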
| If the new hole is adjacent to other holes, the system merges these adjacent holes to form one larger hole | coalescing |
| Internal fragmentation does not exist in MVT | True |
| The goal is to shuffle the memory contents to place all free memory together in one large block | Compaction |
| it is possible only if relocation is dynamic, and is done at execution time | Compaction |
| this is a memory allocation technique where compaction is possible | multiple relocatable variable partition technique (MRVT) |
| this is a memory allocation technique where the OS can move processes around in memory | multiple relocatable variable partition technique (MRVT) |
| This technique can minimize external fragmentation | Paging |
| It permits a program’s memory to be non-contiguous | Paging |
| technique for controlling how a computer or virtual machine's (VM's) memory resources are shared | Memory paging |
| A non-physical memory that is a section of a hard disk that's set up to emulate the computer's RAM | Virtual Memory |
| The portion of the hard disk that acts as physical memory | Page File |
| In paging, what do you call the fixed-sized blocks into which main memory is divided | frames |
| The system also breaks a process into blocks called _______ | pages |
| The size of a memory frame is EQUAL to the size of a process page | True |
| What do you use in order to translate a logical address into a physical address | page table |
| It indicates the page in which the word resides | page number |
| It selects the word within the page | page offset |
| It is used as an index into the page table | page number |
| It contains the base address of each page in physical memory | Page table |
| The page size (like the frame size) is defined by the hardware and the size of a page is typically a power of 2 | True |
| There is no external fragmentation in paging since the operating system can allocate any free frame to a process that needs it | True |
| What happens if the memory requirements of a process do not happen to fall on page boundaries | Internal fragmentation |
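
The page-number/page-offset split above can be sketched in a few lines; the 1 KiB page size and the page-table contents are assumed values for illustration.

```python
# With a page size that is a power of 2, a logical address splits into a
# page number (high bits, used to index the page table) and an offset
# (low bits, kept as-is).
PAGE_SIZE = 1024  # assumed 1 KiB pages

def translate(logical, page_table):
    page, offset = divmod(logical, PAGE_SIZE)   # page number / page offset
    frame = page_table[page]                    # base frame from the page table
    return frame * PAGE_SIZE + offset           # physical address

page_table = {0: 5, 1: 2, 2: 7}   # page -> frame (illustrative mapping)
print(translate(2100, page_table))  # page 2, offset 52 -> 7*1024 + 52 = 7220
```

| Question | Answer |
|---|---|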
| A memory management technique in which each job is divided into several segments of different sizes | Segmentation |
| Each segment is actually a different logical address space of the program | True |
| Difference between paging and segmentation is that segments are of variable-length where as in paging pages are of fixed size | True |
| It contains the program's main function and other utilities | Program segment |
| It is a collection of segments | logical address space |
| in segmentation, the OS maintains a _______ for every process | segment map table |
| for each segment, the table stores the starting address of the segment called ____ and the length of the segment called ____ | base; limit |
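
Segment-table translation with base and limit can be sketched as follows; the table values mirror a common textbook example and are illustrative only.

```python
# Each segment-table entry stores a base (starting address) and a limit
# (segment length); an offset past the limit is an illegal access that
# would trap to the OS.
segment_table = [(1400, 1000), (6300, 400), (4300, 1100)]  # (base, limit)

def translate(segment, offset):
    base, limit = segment_table[segment]
    if offset >= limit:
        raise MemoryError("trap: offset beyond segment limit")
    return base + offset

print(translate(2, 53))   # 4300 + 53 = 4353
print(translate(0, 999))  # 1400 + 999 = 2399
```

| Question | Answer |
|---|---|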
| A technique that allows the execution of processes that may not be completely in memory | Virtual Memory |
| It abstracts main memory into an extremely large, uniform array of storage, separating logical memory as viewed by the user from physical memory | Virtual Memory |
| This technique frees programmers from the concerns of memory- storage limitations | Virtual Memory |
| It is the separation of user logical memory from physical memory | Virtual Memory |
| The OS (in particular, the pager) swaps only the necessary pages into memory (lazy swapping) | demand-paging system |
| The bit is valid if the page is in memory | True |
| The bit is valid if the page is in secondary storage | False (the bit is set to invalid) |
| It occurs when a process tries to use a page that is not in physical memory. This also causes a trap to the OS | page-fault |
| A paging wherein you never bring a page into memory until it is required | pure demand paging |
| This ensures that programs do not access a new page of memory with each instruction execution | principle of locality of reference |
| The effectiveness of the demand paging is based on a property of computer programs called the locality of reference | True |
| Most programs execution time is spent on routines in which many instructions are executed repeatedly | True |
| It is important to keep the page-fault rate low in a demand- paging system | True |
| A problem that occurs when a page must be transferred from disk to memory but no memory space is available | memory is over-allocated |
| A scheme wherein the OS removes/replaces one of the existing pages in memory to give way for the incoming page | page replacement |
| The page replacement algorithm is necessary to select which among the pages currently residing in memory will be replaced | True |
| If no frame is free, the system finds one that is currently being used and frees it | True |
| Freeing a frame means transferring its contents to the disk and changing the page table to show that the page is not in the memory | True |
| The page fault service routine has 7 steps | True |
| The steps of the page fault service routine | 1. Find the location of the desired page on the disk. 2. Find a free frame. 3. If there is a free frame, use it. 4. Else, use a page-replacement algorithm to select a victim frame. 5. Write the victim page to the disk; change the page and frame tables. 6. Read the desired page into the free frame; change the page and frame tables. 7. Restart the user process |
| Shutdown the user process is the last step in page fault service routine | False |
| Modify or dirty bit is a must for each page/frame to reduce overhead | True |
| It is necessary to swap out pages whose modify bit is 0 | False (no longer necessary) |
| A problem on how many frames will the operating system allocate to a process | Frame allocation |
| A problem on how will the operating system select pages that are to be removed from memory to give way for incoming pages | Page replacement |
| These are techniques that decides which memory pages to swap out/write to disk when page of memory needs allocation | Page replacement algorithms |
| It is the string of memory references | reference string |
| An algorithm is evaluated by running it on a particular string of memory references and computing the number of page faults | True |
| There are three page-replacement algorithms | True (First-In First-Out (FIFO), Optimal, and Least Recently Used (LRU)) |
| This is the simplest page-replacement algorithm, the oldest page is replaced | First-In First-Out Algorithm |
| An algorithm wherein, in general, more frames available in physical memory result in a lower page-fault rate | First-In First-Out Algorithm |
| The page-fault rate may increase as the number of physical memory frames increases | Belady’s Anomaly |
| This algorithm has the lowest page-fault rate of all algorithms; the page that will not be used for the longest time is replaced | Optimal Algorithm |
| This algorithm is difficult to implement, since it requires future knowledge of the reference string | Optimal Algorithm |
| This algorithm uses the recent past to approximate the near future. Replaces the page that has not been used the longest | Least Recently Used Algorithm |
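
FIFO and LRU replacement can be compared on a reference string; the string below is the standard textbook example, and the fault counts show LRU's advantage with 3 frames.

```python
from collections import OrderedDict

# Count page faults for a reference string with a fixed number of frames.
def fifo_faults(refs, frames):
    mem, faults = [], 0
    for p in refs:
        if p not in mem:
            faults += 1
            if len(mem) == frames:
                mem.pop(0)               # evict the oldest resident page
            mem.append(p)
    return faults

def lru_faults(refs, frames):
    mem, faults = OrderedDict(), 0
    for p in refs:
        if p in mem:
            mem.move_to_end(p)           # mark as most recently used
        else:
            faults += 1
            if len(mem) == frames:
                mem.popitem(last=False)  # evict the least recently used
            mem[p] = True
    return faults

refs = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1]
print(fifo_faults(refs, 3))  # 15
print(lru_faults(refs, 3))   # 12
```

| Question | Answer |
|---|---|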
| Secondary storage must be able to store a large amount of data temporarily | False (permanently) |
| It is one of the earliest secondary storage media | magnetic tape |
| This provide the bulk of secondary storage for modern computer systems | Magnetic disks |
| A magnetic disk system has several disk platters | True |
| Each disk platter has a flat circular shape, like a phonograph record | True |
| Information is recorded on the surfaces | True |
| Disks are rigid metal or glass platters covered with magnetic recording material | True |
| It has a separate read-write head for each track and allows the computer to switch from track to track quickly | fixed-head system |
| It has only one read-write head per surface and the system moves the head to access a particular track | movable-head system |
| These are the tracks on one drive that can be accessed without moving the heads | cylinder |
| The disks are coated with a hard surface, so the read-write head can rest directly on the disk surface without destroying the data | Floppy disks |
| There are two ways of reading and writing data on disks | True (Constant Linear Velocity (CLV) and Constant Angular Velocity (CAV)) |
| This method is used in CD-ROM and DVD-ROM drives | Constant Linear Velocity (CLV) |
| This method is used in hard disks and floppy disks | Constant Angular Velocity (CAV) |
| In disks using it, the density of bits (bits/unit length) per track is uniform | Constant Linear Velocity (CLV) |
| In disks using it, the number of bits per track is uniform | Constant Angular Velocity (CAV) |
| the time it takes to move the read-write head to the correct track | Seek time |
| the time it takes for the sector to rotate under the head | Latency time |
| the time it takes to actually transfer data between disk and main memory | Transfer time |
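
Seek time, latency time, and transfer time add up to the access time for one request; the drive parameters below are assumed round numbers, not a real drive's specification.

```python
# Access time = seek time + rotational latency + transfer time.
seek_ms = 9.0                          # assumed average seek time
rpm = 7200                             # assumed spindle speed
latency_ms = (60_000 / rpm) / 2        # half a rotation on average ~ 4.17 ms
transfer_rate_mb_s = 100               # assumed sustained transfer rate
block_kb = 4                           # one 4 KiB block
transfer_ms = block_kb / 1024 / transfer_rate_mb_s * 1000  # ~ 0.04 ms

access_ms = seek_ms + latency_ms + transfer_ms
print(round(access_ms, 2))  # ~ 13.21 ms, dominated by seek + latency
```

| Question | Answer |
|---|---|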
| capability of a system to fulfill its mission in the presence of attacks | System Survivability |