CS0051
PARALLEL - MT
| Question | Answer |
|---|---|
| A key advantage of distributed memory architectures is that they are more responsive than shared memory systems. | FALSE |
| A key advantage of distributed memory architectures is that they are more scalable than shared memory systems. | TRUE |
| A key advantage of distributed memory architectures is that they are less complex than shared memory systems. | FALSE |
| If public data is used by a single-processor, then shared data is used by a multi-processor. | FALSE |
| If private data is used by a single-processor, then shared data is used by a multi-processor. | TRUE |
| Parallelism naturally leads to complexity. | FALSE |
| Parallelism naturally leads to dependency | FALSE |
| Parallelism naturally leads to concurrency. | TRUE |
| Shared data is used by a multi-processor. | TRUE |
| In most modern multi-core CPUs, cache coherency is usually handled by the user. | FALSE |
| In most modern multi-core CPUs, cache coherency is usually handled by the application software. | FALSE |
| In most modern multi-core CPUs, cache coherency is usually handled by the _____. | PROCESSOR HARDWARE |
| In most modern multi-core CPUs, cache coherency is usually handled by the processor hardware. | TRUE |
| In most modern multi-core CPUs, cache coherency is usually handled by the operating system. | FALSE |
| A Symmetric Multi-Processing (SMP) system has two or more _____ processors connected to a single _____ main memory. | IDENTICAL; SHARED |
| A Symmetric Multi-Processing (SMP) system has two or more identical processors connected to a single shared main memory. | TRUE |
| An SMP system has two or more identical processors which are connected to a single shared memory often through a system bus. | TRUE |
| An SMP system has a single processor connected to a single shared memory, often through a system bus. | FALSE |
| An SMP system has a single processor connected to a single shared memory, often through a serial circuit. | FALSE |
| The four classifications of Flynn's Taxonomy are based on the number of concurrent program streams and data streams available in the architecture. | FALSE |
| The four classifications of Flynn's Taxonomy are based on the number of concurrent input streams and output streams available in the architecture. | FALSE |
| The four classifications of Flynn's Taxonomy are based on the number of concurrent instruction streams and I/O streams available in the architecture. | FALSE |
| Uniform Memory Access is often made by physically connecting multiple SMP systems together | FALSE |
| Nonuniform Memory Access is often made by logically connecting multiple SMP systems together | TRUE |
| Cache coherency is not an issue handled by the hardware in multicore processors | FALSE |
| Cache coherency is one of the issues handled by the hardware in multicore processors. | TRUE |
| Parallel computing can increase the number of tasks a program executes in a set time. | TRUE |
| Data transfer over a bus is much slower. | FALSE |
| Data transfer over a bus is much faster. | FALSE |
| Shared memory doesn't always scale well. | TRUE |
| Shared memory always scales well. | FALSE |
| Modern multi-core PCs fall into the MISD classification of Flynn's Taxonomy. | FALSE |
| Modern multi-core PCs fall into the SISD classification of Flynn's Taxonomy. | FALSE |
| Modern multi-core PCs fall into the MIMD classification of Flynn's Taxonomy. | TRUE |
| Parallel processing has single multiple flow. | FALSE |
| Parallel processing has single execution flow. | FALSE |
| In parallel processing, several instructions are executed simultaneously. | TRUE |
| Concurrency is the term used for simultaneous access to a resource, physical or logical. | TRUE |
| A single application independently running is typically called Multithreading. | FALSE |
| Multiple applications dependently running are typically called Multithreading. | FALSE |
| In a distributed memory architecture, each processor operates dependently, and if it makes changes to its local memory, that change is not automatically reflected in the memory of other processors. | FALSE |
| In a concurrent memory architecture, each processor operates independently, and cannot make changes to its local memory. | FALSE |
| In a concurrent memory architecture, each processor operates independently, and if it makes changes to its local memory, that change is not automatically reflected in the memory of other processors. | FALSE |
| In a distributed memory architecture, each processor operates independently, and if it makes changes to its local memory, that change is not automatically reflected in the memory of other processors. | TRUE |
| A Symmetric Multi-Processing (SMP) system has two or more identical processors connected to a single distributed main memory. | FALSE |
| A Symmetric Multi-Processing (SMP) system has two or more dissimilar processors connected to a single shared main memory. | FALSE |
| The tightly coupled set of threads' execution working on multiple tasks is called Parallel Processing. | FALSE |
| Each core of a modern processor has a separate cache that stores frequently accessed data. | TRUE |
| In a parallel memory architecture, each processor operates independently, and if it makes changes to its local memory, that change is not automatically reflected in the memory of other processors. | FALSE |
| It is necessary in shared memory that the data exists on the same physical device, hence it could not be spread across a cluster of systems. | FALSE |
| Computer memory usually operates at the same speed as processors do. | FALSE |
| Multiple applications independently running are typically called Multithreading. | TRUE |
| A key advantage of distributed memory architectures is that they are _____ than shared memory systems. | more scalable |
| Parallel computing can increase the scale of problems a program can tackle. | TRUE |
| In a shared memory architecture, only one processor at a time sees everything that happens in the shared memory space. | FALSE |
| In a distributed memory architecture, each processor operates independently, and if it makes changes to its local memory, that change is not automatically reflected in the memory of other processors. | TRUE |
| Distributed memory scales better than Shared memory. | TRUE |
| In most modern multi-core CPUs, cache coherency is usually handled by the _____. | processor hardware |
| The tightly coupled set of threads' execution working on a single task is called Distributed Processing. | FALSE |
| Distributed memory can be easily scaled. | TRUE |
| An SMP system has a single processor connected to a single distributed memory, often through a system bus. | FALSE |
| Each core of a modern processor has its own cache that stores infrequently accessed data. | FALSE |
| Distributed memory cannot be easily scaled. | FALSE |
| Distributed processing is the term used for simultaneous access to a resource, physical or logical. | FALSE |
| Shared memory doesn't necessarily mean all of the data exists on the same physical device, hence it could be spread across a cluster of systems. | TRUE |
| Each core of a modern processor has its own cache that stores frequently accessed data. | TRUE |
| Computer memory usually operates at a much slower speed than processors do. | TRUE |
| UMA stands for Universal Memory Access. | FALSE |
| Execution of several activities at the same time is referred to as parallel processing. | TRUE |
| The four classifications of Flynn's Taxonomy are based on the number of concurrent memory streams and I/O streams available in the architecture. | FALSE |
| Shared memory scales better than Distributed memory. | FALSE |
| The four classifications of Flynn's Taxonomy are based on the number of concurrent _____ streams and _____ streams available in the architecture. | instruction; data |
| A thread that calls the join method on another thread will enter the terminated state until the other thread finishes executing. | FALSE |
| The operating system assigns each process a unique process name. | FALSE |
| In most operating systems, the processor hardware determines when each of the threads and processes gets scheduled to execute. | FALSE |
| A thread contains one or more processes. | FALSE |
| If you run multiple Java applications at the same time, they will execute in equivalent. | FALSE |
| The operating system assigns each process a unique CPU core. | FALSE |
| A process contains one or more threads. | TRUE |
| The operating system assigns each process a unique _____. | process ID number |
| A math library for processing large matrices would benefit much from parallel execution. | TRUE |
| Which of these applications would benefit the most from parallel execution? | math library for processing large matrices |
| In most operating systems, the _________ determines when each of the threads and processes gets scheduled to execute. | operating system |
| If Thread A calls the lock() method on a Lock that is already possessed by Thread B, Thread A will immediately take possession of the Lock from Thread B. | FALSE |
| Every thread is independent and has its own separate address space in memory | FALSE |
| The time required to create a new thread in an existing process is greater than the time required to create a new process | FALSE |
| A _________ contains one or more ___________. | process; threads |
| If Thread A calls the lock() method on a Lock that is already possessed by Thread B, Thread A and Thread B will both possess the Lock. | FALSE |
| A process contains one or more other processes. | FALSE |
| Processes are considered more "lightweight" than threads. | FALSE |
| If Thread A calls the lock() method on a Lock that is already possessed by Thread B, Thread B will block and wait for Thread A to execute the critical section. | FALSE |
| Why would ThreadA call the ThreadB.join() method? | ThreadA needs to wait until after ThreadB has terminated to continue. |
| You can safely expect threads to execute in the same relative order that you create them. | FALSE |
| It is possible for two tasks to execute in parallel using a single-core processor | FALSE |
| Processes are faster to switch between than threads. | FALSE |
| Graphical user interface (GUI) for an accounting application would benefit much from parallel execution. | FALSE |
| Processes require more overhead to create than threads. | TRUE |
| In most operating systems, the operating system determines when each of the threads and processes gets scheduled to execute. | TRUE |
| System logging application that frequently writes to a database would benefit much from parallel execution. | FALSE |
| It is possible for two tasks to execute concurrently using a single-core processor. | TRUE |
| The operating system assigns each process a unique process ID number. | TRUE |
| A thread that calls the join method on another thread will enter the blocked state until the other thread finishes executing. | TRUE |
| Processes are simpler to communicate between than threads. | FALSE |
| In most operating systems, the user determines when each of the threads and processes gets scheduled to execute. | FALSE |
| A process can be terminated due to normal exit and fatal error. | TRUE |
| A process can be both single threaded and multithreaded. | TRUE |
| What happens if Thread A calls the lock() method on a Lock that is already possessed by Thread B? | Thread A will block and wait until Thread B calls the unlock() method. |
| There is no limit on the number of threads that can possess a Lock at the same time. | FALSE |
| Protecting a critical section of code with mutual exclusion means only allowing authorized threads to execute the critical section. | FALSE |
| Protecting a critical section of code with mutual exclusion means implementing proper error handling techniques to catch any unexpected problems. | FALSE |
| Using the ++ operator to increment a variable in Java executes as multiple instructions at the lowest level. | TRUE |
| Using the ++ operator to increment a variable in Java executes as a single instruction at the lowest level. | FALSE |
| Data races can be hard to identify because the problems that data races cause have an insignificant impact on the program's performance. | FALSE |
| Data races can be hard to identify because the data race may not always occur during execution to cause a problem. | TRUE |
| Unlike during a deadlock, the threads in a livelock scenario are still making progress towards their goal. | FALSE |
| What is the difference between the tryLock() and the regular lock() method in Java? | tryLock() is a non-blocking version of the lock() method |
| Which statement describes the relationship between Lock and ReentrantLock in Java? | ReentrantLock is a class that implements the Lock interface |
| In a Java program, a data race only occurs when each of the threads increments a shared variable a large number of times, because the large number of write operations on the shared variable provides more opportunities for the data race to occur. | TRUE |
| Data races can be hard to identify because it is impossible to identify the potential for a data race. | FALSE |
| Two threads that are both reading the same shared variable have no potential for a data race. | TRUE |
| The best use case for using a ReadWriteLock is when lots of threads need to modify the value of a shared variable. | FALSE |
| What does it mean to protect a critical section of code with mutual exclusion? | Prevent multiple threads from concurrently executing in the critical section |
| In the Java program to demonstrate a data race, why did the data race only occur when each of the threads was incrementing a shared variable a large number of times? | The large number of write operations on the shared variable provided more opportunities for the data race to occur |
| How many threads can possess the ReadLock while another thread has a lock on the WriteLock? | 0 |
| Two threads that are both reading and writing the same shared variable has the potential for a data race. | TRUE |
| When a thread calls Java's tryLock() method on a Lock that is NOT currently locked by another thread the method immediately returns true. | TRUE |
| Lock and ReentrantLock are two names for the same class. | FALSE |
| A ReentrantLock instantiates a new internal Lock object every time its lock() method is called. | FALSE |
| A ReentrantLock can be locked _____. (once by multiple threads at the same time / multiple times by different threads / none of the mentioned / multiple times by the same thread) | multiple times by the same thread |
| Starvation occurs when a thread is unable to gain access to a necessary resource, and is therefore unable to make progress. | TRUE |
| A maximum of 2 threads can possess a Lock at the same time. | FALSE |
| The number of threads that can possess a Lock at the same time depends on the operating system. | FALSE |
| In a Java program, a data race only occurs when each of the threads increments a shared variable a large number of times, because the JVM's automatic data race prevention system can only protect against a small number of operations. | FALSE |
| There is no limit on the number of threads that can possess the ReadLock while another thread has a lock on the WriteLock. | FALSE |
| When a thread calls Java's tryLock() method on a Lock that is NOT currently locked by another thread the method will block until the Lock is available and then return true. | FALSE |
| The number of times a ReentrantLock can be locked by the same thread depends on the operating system. | FALSE |
| When the threads in the program are not making progress, you can determine whether it is due to a deadlock or a livelock by waiting to see if the problem eventually resolves itself. | FALSE |
| Dining Philosophers Problem is a classic example that's used to illustrate synchronization issues when multiple threads are competing for multiple locks. | TRUE |
| Having too many concurrent threads can lead to starvation. | TRUE |
| Protecting a critical section of code with mutual exclusion means that whenever a thread enters the critical section, it pauses all other threads in the program. | FALSE |
| The best use case for using a ReadWriteLock is when lots of threads need to modify the value of a shared variable, but only a few threads need to read its value. | FALSE |
| Which of these scenarios describes the best use case for using a ReadWriteLock? | Lots of threads need to read the value of a shared variable, but only a few threads need to modify its value |
| When a thread calls Java's tryLock() method on a Lock that is NOT currently locked by another thread the method immediately returns false. | FALSE |
| tryLock() is a non-blocking version of the lock() method. | TRUE |
| A thread does not need to unlock a ReentrantLock before another thread can acquire it because multiple threads can lock a ReentrantLock at the same time. | FALSE |
| Unlike during a deadlock, the threads in a livelock scenario are _____. (still making progress towards their goal / actively executing without making useful progress / stuck in a blocked state waiting on other threads) | actively executing without making useful progress |
| Deadlock occurs when each member of a group is waiting for some other member to take action, and as a result, neither member is able to make progress. | TRUE |
| The processor decides when each thread gets scheduled to execute. | FALSE |
| Two threads that are both writing to the same shared variable have no potential for a data race. | FALSE |
| A maximum of 1 thread can possess the WriteLock of a ReentrantReadWriteLock at a time. | TRUE |
| A maximum of 2 threads can possess the WriteLock of a ReentrantReadWriteLock at the same time. | FALSE |
| To lock a mutex multiple times, using a reentrant mutex may seem like an easy way to avoid a deadlock. | TRUE |
| When a thread calls Java's tryLock() method on a Lock that is NOT currently locked by another thread the method will block until the Lock is available and then return false. | FALSE |
| The tryLock() method is useful because if multiple threads try to acquire a lock simultaneously, the tryLock() method will randomly pick one to succeed. | FALSE |
| The lock() method can be called recursively on a ReentrantLock object, but not on a regular lock object. | TRUE |
| A thread must unlock a ReentrantLock as many times as that thread locked it before another thread can acquire it. | TRUE |
| A thread must unlock a ReentrantLock once before another thread can acquire it. | FALSE |
| A possible strategy to resolve a livelock between multiple threads is through randomly terminating one of the threads involved in the livelock. | FALSE |
| Which of these is a possible strategy to resolve a livelock between multiple threads? (Implement a randomized mechanism to determine which thread goes first. / Randomly terminate one of the threads involved in the livelock.) | Implement a randomized mechanism to determine which thread goes first |
| Using the ++ operator to increment a variable in Java executes as an atomic instruction at the lowest level. | FALSE |
| A maximum of 2 threads can possess the ReadLock while another thread has a lock on the WriteLock. | FALSE |
| What is the maximum number of threads that can possess the ReadLock of a ReentrantReadWriteLock at the same time? | no limit |
| Only 1 thread can possess a Lock at the same time. | TRUE |
| The maximum number of threads that can possess the WriteLock of a ReentrantReadWriteLock at the same time depends on the operating system. | FALSE |
| A lock (mutex) protects a critical section of the code to defend against data races, which can occur when multiple threads are concurrently accessing the same location in memory and at least one of those threads is writing to that location. | TRUE |
| A possible strategy to resolve a livelock between multiple threads is through implementing a randomized mechanism to determine which thread goes first. | TRUE |
| Having too many concurrent threads may still not lead to starvation. | FALSE |
| No thread can possess the ReadLock while another thread has a lock on the WriteLock. | TRUE |
| The reader-writer lock is useful especially when there are lots of threads that only need to read. | TRUE |
| The tryLock() method is useful because it enforces fairness among multiple threads competing for ownership of the same lock. | FALSE |
| A ReentrantLock can be locked multiple times by the same thread. | TRUE |
| The threads in your program are clearly not making progress. How might you determine if it is due to a deadlock or a livelock? | Use the Resource Monitor to investigate the program's CPU usage to see if it is actively executing |
| To avoid livelock, ensure that only one process takes action, chosen by priority or some other mechanism, like random selection. | TRUE |
| Using the ++ operator to increment a variable in Java executes as _____ at the lowest level. | multiple instructions |
| ReentrantLock is a class that implements the Lock interface. | TRUE |
| How many times must a thread unlock a ReentrantLock before another thread can acquire it? | as many times as that thread locked it |
| The best use case for using a ReadWriteLock is when lots of threads need to read the value of a shared variable, but only a few threads need to modify its value. | TRUE |
| Read-write locks can improve a program's performance compared to using a standard mutex. | TRUE |
| A possible strategy to resolve a livelock between multiple threads is through patience, because if you wait long enough all livelocks will eventually resolve themselves. | FALSE |
| Data race occurs when a thread is unable to gain access to a necessary resource, and is therefore unable to make progress. | FALSE |
| A ReentrantLock can be locked once by multiple threads at the same time. | FALSE |
| Unlike during a deadlock, the threads in a livelock scenario are stuck in a blocked state waiting on other threads. | FALSE |
| What is the maximum number of threads that can possess the WriteLock of a ReentrantReadWriteLock at the same time? | 1 |
| tryLock() includes built-in error handling so you do not need a separate try/catch statement. | FALSE |
| Why is the tryLock() method useful? | It enables a thread to execute alternate operations if the lock it needs to acquire is already taken |
| Which of these scenarios does NOT have the potential for a data race? (Two threads are both reading and writing the same shared variable. / Two threads are both reading the same shared variable.) | Two threads are both reading the same shared variable |
| Calling the semaphore's release() method blocks all other threads waiting on the semaphore. | FALSE |
| The semaphore's release() method decrements its value if the counter is positive. | FALSE |
| The semaphore's release() method always increments the counter's value. | TRUE |
| The semaphore's acquire() method decrements its value if the counter is positive. | TRUE |
| The binary semaphore can be acquired and released by different threads. | TRUE |
| If the producer puts elements into a fixed-length queue faster than the consumer removes them, the queue will continuously expand to hold the extra items. | FALSE |
| Pipeline architecture consists of a chained-together series of producer-consumer pairs. | TRUE |
| Which of these is a possible strategy to prevent deadlocks when multiple threads will need to acquire multiple locks? | Prioritize the locks so that all threads will acquire them in the same relative order |
| What is the difference between a binary semaphore and a mutex? | The binary semaphore can be acquired and released by different threads |
| When should a thread typically signal a condition variable? | after doing something to change the state associated with the condition variable but before unlocking the associated mutex |
| Condition variables work together with a thread serving as a monitor. | FALSE |
| When implementing a recursive divide-and-conquer algorithm in Java, the ForkJoinPool automatically subdivides the problem for you. | FALSE |
| When implementing a recursive divide-and-conquer algorithm in Java, threads cannot recursively spawn other threads. | FALSE |
| The Callable interface's call() method returns a result object but the Runnable interface's run() method does not. | TRUE |
| A future allows a program to change how it will function the next time it is run. | FALSE |
| A future serves as the counterpart to a programming past. | FALSE |
| Condition variables work together with which other mechanism serving as a monitor? | a mutex |
| A race condition is a flaw in the timing or ordering of a program's execution that causes incorrect behavior. | TRUE |
| Heisenbug is a software bug that seems to disappear or alter its behavior when you try to study it. | TRUE |
| A deadlock avoidance algorithm dynamically examines the _____ to ensure that a circular wait condition can never exist. | resource allocation state |
| Creating an extra thread to release the locks at random intervals to break up a deadlock is a possible strategy to prevent deadlocks when multiple threads will need to acquire multiple locks. | FALSE |
| Tracking the availability of a limited resource is a common use case for a counting semaphore. | TRUE |
| Tracking how long a program has been running is a common use case for a counting semaphore. | FALSE |
| Calling the semaphore's release() method signals another thread waiting to acquire the semaphore. | TRUE |
| The semaphore's release() method increments its value if the counter is positive. | FALSE |
| The binary semaphore will have a value of 0, 1, 2, 3, etc. | FALSE |
| Distributed architecture consists of a chained-together series of producer-consumer pairs. | FALSE |
| The consumption rate should be less than or equal to the production rate in a producer-consumer architecture. | FALSE |
| A Semaphore is different from a Mutex in such a way that both can be released by different threads. | FALSE |
| Thread pools reuse threads to reduce the overhead that would be required to create a new, separate thread for every concurrent task. | TRUE |
| A deadlock avoidance algorithm dynamically examines the system storage state to ensure that a circular wait condition can never exist. | FALSE |
| A deadlock avoidance algorithm dynamically examines the resources to ensure that a circular wait condition can never exist. | FALSE |
| Pipe is a synchronization tool? | FALSE |
| What does a divide-and-conquer algorithm do when it reaches the base case? | Stop subdividing the current problem and solve it |
| What is the difference between Java's Callable and Runnable interfaces? | The Callable interface's call() method returns a result object but the Runnable interface's run() method does not |
| When several processes access the same data concurrently and the outcome of the execution depends on the particular order in which the access takes place, it is called a _____. | race condition |
| Tracking how many threads the program has created is a common use case for a counting semaphore. | FALSE |
| The semaphore's release() method always decrements the counter's value. | FALSE |
| The binary semaphore can only be acquired and released by the same thread. | FALSE |
| The average rates of production and consumption have no relation in a producer-consumer architecture. | FALSE |
| In addition to modifying the counter value, what else does calling the semaphore's release() method do? | Signal another thread waiting to acquire the semaphore |
| The Runnable interface's run() method can have an optional return value, but the Callable interface's call() method is required to always return an object. | FALSE |
| A future is a task that can be assigned to a thread pool for execution. | FALSE |
| When using a thread pool in Java, the programmer assigns submitted tasks to specific threads within the available pool to execute. | FALSE |
| A deadlock avoidance algorithm dynamically examines the resource allocation state to ensure that a circular wait condition can never exist. | TRUE |
| Socket is a synchronization tool? | FALSE |
| A barrier for a group of threads or processes in the source code means any thread/process must stop at this point and cannot proceed until all other threads/processes reach this barrier. | TRUE |
| Which one of the following is a synchronization tool? | semaphore |
| Prioritizing the locks so that all threads will acquire them in the same relative order is a possible strategy to prevent deadlocks when multiple threads will need to acquire multiple locks. | TRUE |
| The semaphore's acquire() method increments its value if the counter is positive. | FALSE |
| What happens if the producer puts elements into a fixed-length queue faster than the consumer removes them? | The producer will be blocked until space becomes available in the queue |
| Which architecture consists of a chained-together series of producer-consumer pairs? | pipeline |
| Condition variables serve as a _____ for threads to _____. | holding place; wait for a certain condition before continuing execution |
| When it reaches the base case, a divide-and-conquer algorithm recursively solves a set of smaller subproblems. | FALSE |
| Threads provide a convenient way to group and organize a collection of related threads. | FALSE |
| A deadlock avoidance algorithm dynamically examines the operating system to ensure that a circular wait condition can never exist. | FALSE |
| Semaphore is a synchronization tool? | TRUE |
| Data races can occur when two or more threads concurrently access the same memory location. | TRUE |
| The semaphore's acquire() method always decrements the counter's value. | FALSE |
| FIFO architecture consists of a chained-together series of producer-consumer pairs. | FALSE |
| Why would you use the condition variable's signal() method instead of signalAll()? | You only need to wake up one waiting thread and it does not matter which one |
| Condition variables work together with a mutex serving as a monitor. | TRUE |
| Condition variables work together with a process serving as a monitor. | FALSE |
| When implementing a recursive divide-and-conquer algorithm in Java, the ForkJoinPool manages a thread pool to execute its ForkJoinTasks, which reduces the overhead of thread creation. | TRUE |
| When it reaches the base case, a divide-and-conquer algorithm divides the problem into two smaller subproblems. | FALSE |
| A Runnable object cannot be used to create a Future. | FALSE |
| When using a thread pool in Java, the host operating system assigns submitted tasks to specific threads within the available pool to execute. | FALSE |
| What is the purpose of a future? | It serves as a placeholder to access a result that may not have been computed yet |
| The consumption and production rates must be exactly the same in a producer-consumer architecture. | FALSE |
| When it reaches the base case, a divide-and-conquer algorithm solves all of the subproblems that have been created. | FALSE |
| When using a thread pool in Java, the compiler assigns submitted tasks to specific threads within the available pool to execute. | FALSE |
| When implementing a recursive divide-and-conquer algorithm in Java, why should you use a ForkJoinPool instead of simply creating new threads to handle each subproblem? | The ForkJoinPool manages a thread pool to execute its ForkJoinTasks, which reduces the overhead of thread creation |
| Calling the semaphore's release() method blocks and waits until the semaphore is available. | FALSE |
| If the producer puts elements into a fixed-length queue faster than the consumer removes them, the queue will fill up and cause an error. | FALSE |
| The producer-consumer pattern follows the FIFO (first-in, first-out) method. | TRUE |
| Client-server architecture consists of a chained-together series of producer-consumer pairs. | FALSE |
| What does the semaphore's release() method do to the counter value? | Increment its value |
| Condition variables enable threads to signal each other when the state of the queue changes. | TRUE |
| When it reaches the base case, a divide-and-conquer algorithm stops subdividing the current problem and solves it. | TRUE |
| It's not possible to have data races without a race condition but possible to have race conditions without a data race. | FALSE |
| There is no limit on the number of threads that can possess the ReadLock while another thread has a lock on the WriteLock. | FALSE |
| Protecting a critical section of code with mutual exclusion means preventing multiple threads from concurrently executing in the critical section. | TRUE |
| What is the maximum number of threads that can possess the WriteLock of a ReadWriteLock at the same time? | 1 |
| Try-lock (or try-enter) is a blocking version of the lock or acquire method. | FALSE |
| Data races can be hard to identify because data races are caused by hardware errors and cannot be debugged in software. | FALSE |
| Why can potential data races be hard to identify? | The data race may not always occur during execution to cause a problem. |
| There is no limit on the number of threads that can possess the WriteLock of a ReadWriteLock at the same time. | FALSE |
| A maximum of 2 threads can possess the WriteLock of a ReadWriteLock at the same time. | FALSE |
| When the threads in the program are not making progress, you can determine if it is due to a deadlock or a livelock by randomly guessing between deadlock and livelock. | FALSE |
| Unlike during a deadlock, the threads in a livelock scenario are actively executing without making useful progress. | TRUE |
| When the threads in the program are not making progress, you can determine if it is due to a deadlock or a livelock by using the Resource Monitor to investigate the program's memory usage to see if it continues to grow. | FALSE |
| Only 1 thread can possess the ReadLock while another thread has a lock on the WriteLock. | FALSE |
| Protecting a critical section of code with mutual exclusion means that whenever a thread enters the critical section, it pauses all other threads in the program. | FALSE |
| Which of these scenarios describes the best use case for using a ReadWriteLock? | Lots of threads need to read the value of a shared variable, but only a few threads need to modify its value. |
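The semaphore cards above can be sketched with `java.util.concurrent.Semaphore`. This is a minimal illustration, not code from the course; the class name and permit counts are chosen for the example. It shows why acquire() does not *always* decrement immediately (it blocks when the counter is zero) while release() increments the counter.

```java
import java.util.concurrent.Semaphore;

public class SemaphoreDemo {
    public static void main(String[] args) throws InterruptedException {
        // A counting semaphore initialized with 2 permits.
        Semaphore sem = new Semaphore(2);

        sem.acquire();                                // counter: 2 -> 1
        sem.acquire();                                // counter: 1 -> 0
        System.out.println(sem.availablePermits());   // prints 0

        // acquire() would now block until a permit is released;
        // tryAcquire() returns false instead of blocking.
        System.out.println(sem.tryAcquire());         // prints false

        sem.release();                                // increments: 0 -> 1
        System.out.println(sem.availablePermits());   // prints 1
    }
}
```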
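The condition-variable and producer-consumer cards fit together in a bounded queue. The sketch below (illustrative names, not the course's code) uses a `ReentrantLock` as the monitor with two condition variables as holding places where threads wait for the queue's state to change; `signal()` wakes one waiting thread when it does not matter which one continues.

```java
import java.util.ArrayDeque;
import java.util.Queue;
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.ReentrantLock;

// Fixed-length FIFO queue: the lock serves as the monitor around the
// shared data, and the condition variables let producer and consumer
// threads signal each other when the queue's state changes.
class BoundedQueue {
    private final Queue<Integer> items = new ArrayDeque<>();
    private final int capacity;
    private final ReentrantLock lock = new ReentrantLock();
    private final Condition notFull  = lock.newCondition();
    private final Condition notEmpty = lock.newCondition();

    BoundedQueue(int capacity) { this.capacity = capacity; }

    void put(int x) throws InterruptedException {
        lock.lock();
        try {
            while (items.size() == capacity)
                notFull.await();          // wait until there is room
            items.add(x);
            notEmpty.signal();            // wake one waiting consumer
        } finally {
            lock.unlock();
        }
    }

    int take() throws InterruptedException {
        lock.lock();
        try {
            while (items.isEmpty())
                notEmpty.await();         // wait until there is data
            int x = items.remove();
            notFull.signal();             // wake one waiting producer
            return x;
        } finally {
            lock.unlock();
        }
    }
}

public class BoundedQueueDemo {
    public static void main(String[] args) throws InterruptedException {
        BoundedQueue q = new BoundedQueue(2);
        // The producer blocks in put() when the queue is full, so the
        // rates need not match exactly and nothing is lost.
        Thread producer = new Thread(() -> {
            for (int i = 0; i < 5; i++) {
                try { q.put(i); } catch (InterruptedException e) { return; }
            }
        });
        producer.start();
        for (int i = 0; i < 5; i++)
            System.out.println(q.take()); // prints 0..4 in FIFO order
        producer.join();
    }
}
```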
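The divide-and-conquer cards can be illustrated with a recursive sum over `ForkJoinPool`. This is a generic sketch (the task, range, and threshold are invented for the example): the task subdivides until the base case, then stops subdividing and solves the small problem directly, and the pool's reused worker threads avoid the overhead of creating a thread per subproblem.

```java
import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.RecursiveTask;

// Sums the integers in [lo, hi) by recursive divide-and-conquer.
class SumTask extends RecursiveTask<Long> {
    private final long lo, hi;
    SumTask(long lo, long hi) { this.lo = lo; this.hi = hi; }

    @Override
    protected Long compute() {
        if (hi - lo <= 1_000) {           // base case: solve directly
            long sum = 0;
            for (long i = lo; i < hi; i++) sum += i;
            return sum;
        }
        long mid = (lo + hi) / 2;         // divide into two subproblems
        SumTask left = new SumTask(lo, mid);
        left.fork();                      // run the left half asynchronously
        long right = new SumTask(mid, hi).compute();
        return left.join() + right;       // combine the partial results
    }
}

public class ForkJoinDemo {
    public static void main(String[] args) {
        long total = ForkJoinPool.commonPool().invoke(new SumTask(0, 1_000_000));
        System.out.println(total);        // prints 499999500000
    }
}
```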
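The future and thread-pool cards can be sketched with an `ExecutorService`. Names here are illustrative: a `Callable` returns a value, so submitting one yields a `Future`, a placeholder for a result that may not have been computed yet, and it is the pool itself (not the operating system or the compiler) that assigns the task to one of its worker threads.

```java
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class FutureDemo {
    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(2);

        Callable<Integer> slowSquare = () -> {
            Thread.sleep(100);            // simulate a long computation
            return 6 * 6;
        };
        Future<Integer> future = pool.submit(slowSquare);

        System.out.println(future.isDone()); // likely false: still computing
        System.out.println(future.get());    // blocks for the result: 36
        pool.shutdown();
    }
}
```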
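The ReadWriteLock cards can be checked directly against `ReentrantReadWriteLock` in a short, single-threaded sketch (variable names are illustrative): any number of readers may share the read lock, but the write lock is exclusive, with at most 1 holder and no concurrent readers. It also shows the non-blocking `tryLock()`, which returns false instead of waiting.

```java
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class ReadWriteDemo {
    public static void main(String[] args) {
        ReentrantReadWriteLock rw = new ReentrantReadWriteLock();

        // Take the read lock; many threads could do this concurrently.
        rw.readLock().lock();
        System.out.println(rw.getReadLockCount());    // prints 1

        // The write lock is exclusive of all readers, so tryLock()
        // fails while a read lock is held (even by this thread).
        System.out.println(rw.writeLock().tryLock()); // prints false
        rw.readLock().unlock();

        // With no readers left, one thread may take the write lock.
        System.out.println(rw.writeLock().tryLock()); // prints true
        rw.writeLock().unlock();
    }
}
```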