Parallel
| Question | Answer |
|---|---|
| A key advantage of distributed memory architectures is that they are more responsive than shared memory systems. | FALSE |
| A key advantage of distributed memory architectures is that they are more scalable than shared memory systems. | TRUE |
| A key advantage of distributed memory architectures is that they are less complex than shared memory systems. | FALSE |
| If public data is used by a single processor, then shared data is used by a multi-processor. | FALSE |
| If private data is used by a single processor, then shared data is used by a multi-processor. | TRUE |
| Parallelism naturally leads to complexity. | FALSE |
| Parallelism naturally leads to dependency. | FALSE |
| Parallelism naturally leads to concurrency. | TRUE |
| Shared data is used by a multi-processor. | TRUE |
| In most modern multi-core CPUs, cache coherency is usually handled by the user. | FALSE |
| In most modern multi-core CPUs, cache coherency is usually handled by the application software. | FALSE |
| In most modern multi-core CPUs, cache coherency is usually handled by the _____. | PROCESSOR HARDWARE |
| In most modern multi-core CPUs, cache coherency is usually handled by the processor hardware. | TRUE |
| In most modern multi-core CPUs, cache coherency is usually handled by the operating system. | FALSE |
| A Symmetric Multi-Processing (SMP) system has two or more _____ processors connected to a single _____ main memory. | IDENTICAL; SHARED |
| A Symmetric Multi-Processing (SMP) system has two or more identical processors connected to a single shared main memory. | TRUE |
| An SMP system has two or more identical processors which are connected to a single shared memory often through a system bus. | TRUE |
| An SMP system has a single processor connected to a single shared memory often through a system bus. | FALSE |
| An SMP system has a single processor connected to a single shared memory often through a serial circuit. | FALSE |
| The four classifications of Flynn's Taxonomy are based on the number of concurrent program streams and data streams available in the architecture. | FALSE |
| The four classifications of Flynn's Taxonomy are based on the number of concurrent input streams and output streams available in the architecture. | FALSE |
| The four classifications of Flynn's Taxonomy are based on the number of concurrent instruction streams and I/O streams available in the architecture. | FALSE |
| Uniform Memory Access is often made by physically connecting multiple SMP systems together. | FALSE |
| Nonuniform Memory Access is often made by logically connecting multiple SMP systems together. | TRUE |
| Cache coherency is not an issue handled by the hardware in multicore processors | FALSE |
| Cache coherency is one of the issues handled by the hardware in multicore processors. | TRUE |
| Parallel computing can increase the number of tasks a program executes in a set time. | TRUE |
| Data transfer over a bus is much slower. | FALSE |
| Data transfer over a bus is much faster. | FALSE* |
| Shared memory doesn't always scale well. | TRUE |
| Shared memory always scales well. | FALSE |
| Modern multi-core PCs fall into the MISD classification of Flynn's Taxonomy. | FALSE |
| Modern multi-core PCs fall into the SISD classification of Flynn's Taxonomy. | FALSE |
| Modern multi-core PCs fall into the MIMD classification of Flynn's Taxonomy. | TRUE |
| Parallel processing has single multiple flow. | FALSE |
| Parallel processing has a single execution flow. | FALSE |
| In parallel processing, several instructions are executed simultaneously. | TRUE |
| Concurrency is the term used for simultaneous access to a resource, physical or logical. | TRUE |
| A single application independently running is typically called Multithreading. | FALSE |
| Multiple applications dependently running are typically called Multithreading. | FALSE |
| In a distributed memory architecture, each processor operates dependently, and if it makes changes to its local memory, that change is not automatically reflected in the memory of other processors. | FALSE |
| In a concurrent memory architecture, each processor operates independently, and cannot make changes to its local memory. | FALSE |
| In a concurrent memory architecture, each processor operates independently, and if it makes changes to its local memory, that change is not automatically reflected in the memory of other processors. | FALSE |
| A Symmetric Multi-Processing (SMP) system has two or more identical processors connected to a single distributed main memory. | FALSE |
| A Symmetric Multi-Processing (SMP) system has two or more dissimilar processors connected to a single shared main memory. | FALSE |
| The tightly coupled execution of a set of threads working on multiple tasks is called Parallel Processing. | FALSE |
| Each core of a modern processor has a separate cache that stores frequently accessed data. | TRUE |
| In a parallel memory architecture, each processor operates independently, and if it makes changes to its local memory, that change is not automatically reflected in the memory of other processors. | FALSE |
| It is necessary in shared memory that the data exists on the same physical device, hence it could not be spread across a cluster of systems. | FALSE |
| Computer memory usually operates at the same speed as processors do. | FALSE |
| Multiple applications independently running are typically called Multithreading. | TRUE |
| A key advantage of distributed memory architectures is that they are _____ than shared memory systems. | more scalable |
| Parallel computing can increase the scale of problems a program can tackle. | TRUE |
| In a shared memory architecture, only one processor at a time sees everything that happens in the shared memory space. | FALSE |
| In a distributed memory architecture, each processor operates independently, and if it makes changes to its local memory, that change is not automatically reflected in the memory of other processors. | TRUE |
| Distributed memory scales better than Shared memory. | TRUE |
| In most modern multi-core CPUs, cache coherency is usually handled by the _____. | processor hardware |
| The tightly coupled execution of a set of threads working on a single task is called Distributed Processing. | FALSE |
| Distributed memory can be easily scaled. | TRUE |
| An SMP system has a single processor connected to a single distributed memory often through a system bus. | FALSE |
| Each core of a modern processor has its own cache that stores infrequently accessed data. | FALSE |
| Distributed memory cannot be easily scaled. | FALSE |
| Distributed processing is the term used for simultaneous access to a resource, physical or logical. | FALSE |
| Shared memory doesn't necessarily mean all of the data exists on the same physical device, hence it could be spread across a cluster of systems. | TRUE |
| Each core of a modern processor has its own cache that stores frequently accessed data. | TRUE |
| Computer memory usually operates at a much slower speed than processors do. | TRUE |
| UMA stands for Universal Memory Access. | FALSE |
| Execution of several activities at the same time is referred to as parallel processing. | TRUE |
| The four classifications of Flynn's Taxonomy are based on the number of concurrent memory streams and I/O streams available in the architecture. | FALSE |
| Shared memory scales better than Distributed memory. | FALSE |
| The four classifications of Flynn's Taxonomy are based on the number of concurrent _____ streams and _____ streams available in the architecture. | instruction; data |
| A thread that calls the join method on another thread will enter the terminated state until the other thread finishes executing. | FALSE |
| The operating system assigns each process a unique process name. | FALSE |
| In most operating systems, the processor hardware determines when each of the threads and processes gets scheduled to execute. | FALSE |
| A thread contains one or more processes. | FALSE |
| If you run multiple Java applications at the same time, they will execute in equivalent. | FALSE |
| The operating system assigns each process a unique CPU core. | FALSE |
| A process contains one or more threads. | TRUE |
| The operating system assigns each process a unique _____. | process ID number (see the process/thread sketch after the table) |
| A math library for processing large matrices would benefit greatly from parallel execution. | TRUE |
| Which of these applications would benefit the most from parallel execution? | math library for processing large matrices (see the matrix sketch after the table) |
| In most operating systems, the _________ determines when each of the threads and processes gets scheduled to execute. | operating system |
| If Thread A calls the lock() method on a Lock that is already possessed by Thread B, Thread A will immediately take possession of the Lock from Thread B. | FALSE |
| Every thread is independent and has its own separate address space in memory. | FALSE |
| The time required to create a new thread in an existing process is greater than the time required to create a new process. | FALSE |
| A _________ contains one or more ___________. | process; threads |
| If Thread A calls the lock() method on a Lock that is already possessed by Thread B, Thread A and Thread B will both possess the Lock. | FALSE |
| A process contains one or more other processes. | FALSE |
| Processes are considered more "lightweight" than threads. | FALSE |
| If Thread A calls the lock() method on a Lock that is already possessed by Thread B, Thread B will block and wait for Thread A to execute the critical section. | FALSE |
| Why would ThreadA call the ThreadB.join() method? | ThreadA needs to wait until after ThreadB has terminated to continue. (See the join sketch after the table.) |
| You can safely expect threads to execute in the same relative order that you create them. | FALSE (see the scheduling sketch after the table) |
| It is possible for two tasks to execute in parallel using a single-core processor. | FALSE |
| Processes are faster to switch between than threads. | FALSE |
| A graphical user interface (GUI) for an accounting application would benefit greatly from parallel execution. | FALSE |
| Processes require more overhead to create than threads. | TRUE |
| In most operating systems, the operating system determines when each of the threads and processes gets scheduled to execute. | TRUE |
| A system logging application that frequently writes to a database would benefit greatly from parallel execution. | FALSE |
| It is possible for two tasks to execute concurrently using a single-core processor. | TRUE |
| The operating system assigns each process a unique process ID number. | TRUE |
| A thread that calls the join method on another thread will enter the blocked state until the other thread finishes executing. | TRUE |
| Processes are simpler to communicate between than threads. | FALSE |
| In most operating systems, the user determines when each of the threads and processes gets scheduled to execute. | FALSE |
| A process can be terminated due to normal exit and fatal error. | TRUE |
| A process can be both single threaded and multithreaded. | TRUE |
| What happens if Thread A calls the lock() method on a Lock that is already possessed by Thread B? | Thread A will block and wait until Thread B calls the unlock() method. (See the lock sketch after the table.) |
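
A few rows above describe Lock blocking behavior: if Thread A calls lock() on a Lock that Thread B already possesses, Thread A blocks until Thread B calls unlock(). The lock sketch below is a minimal illustration of that behavior, assuming Java with java.util.concurrent.locks.ReentrantLock as the Lock implementation; the class name and timings are illustrative, not from the source.

```java
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

public class LockDemo {
    private static final Lock lock = new ReentrantLock();

    public static void main(String[] args) throws InterruptedException {
        Thread threadB = new Thread(() -> {
            lock.lock();                      // Thread B acquires the lock first
            try {
                System.out.println("Thread B holds the lock");
                TimeUnit.SECONDS.sleep(2);    // hold it for a while
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            } finally {
                lock.unlock();                // only now can Thread A proceed
            }
        }, "ThreadB");

        Thread threadA = new Thread(() -> {
            System.out.println("Thread A waiting for the lock...");
            lock.lock();                      // blocks until Thread B calls unlock()
            try {
                System.out.println("Thread A acquired the lock");
            } finally {
                lock.unlock();
            }
        }, "ThreadA");

        threadB.start();
        TimeUnit.MILLISECONDS.sleep(100);     // give Thread B a head start
        threadA.start();

        threadA.join();
        threadB.join();
    }
}
```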
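
The join questions say that a thread calling join() on another thread waits, in a blocked or waiting state rather than a terminated one, until that thread finishes. The join sketch below assumes the main thread plays the role of ThreadA; names are illustrative.

```java
public class JoinDemo {
    public static void main(String[] args) throws InterruptedException {
        Thread threadB = new Thread(() -> {
            for (int i = 0; i < 3; i++) {
                System.out.println("ThreadB working... " + i);
            }
        }, "ThreadB");

        threadB.start();

        // The calling thread (acting as ThreadA) waits here until ThreadB
        // has terminated, then continues.
        threadB.join();

        System.out.println("ThreadB finished; ThreadA continues");
    }
}
```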
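
Several rows cover the process/thread relationship: a process contains one or more threads, and the operating system assigns each process a unique process ID number rather than a dedicated CPU core. The process/thread sketch below assumes Java 9+ for ProcessHandle; the class and thread names are illustrative.

```java
public class ProcessThreadDemo {
    public static void main(String[] args) throws InterruptedException {
        // The operating system gives this JVM process a unique process ID number.
        System.out.println("Process ID: " + ProcessHandle.current().pid());

        // One process can contain several threads, and they all share the
        // process's address space rather than each getting a separate one.
        Runnable task = () -> System.out.println(
                Thread.currentThread().getName() + " runs inside the same process");
        Thread t1 = new Thread(task, "worker-1");
        Thread t2 = new Thread(task, "worker-2");
        t1.start();
        t2.start();
        t1.join();
        t2.join();
    }
}
```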
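
The scheduling questions state that the operating system, not the user or the application, decides when threads and processes run, so threads cannot be expected to execute in the order they were created. The scheduling sketch below (illustrative names) may print its messages in a different order from run to run.

```java
public class ThreadOrderDemo {
    public static void main(String[] args) throws InterruptedException {
        Thread[] threads = new Thread[5];
        for (int i = 0; i < threads.length; i++) {
            final int id = i;
            threads[i] = new Thread(() ->
                    System.out.println("thread " + id + " ran"));
        }
        for (Thread t : threads) {
            t.start();   // started in creation order...
        }
        for (Thread t : threads) {
            t.join();    // ...but the OS scheduler decides the actual run order
        }
    }
}
```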
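
On why a math library for large matrices benefits most from parallel execution: each output row of a matrix multiplication can be computed independently, so the work divides cleanly across cores. The matrix sketch below uses Java parallel streams; the matrix size and class name are illustrative, not from the source.

```java
import java.util.stream.IntStream;

public class ParallelMatrixDemo {
    public static void main(String[] args) {
        int n = 1_000;                      // illustrative size
        double[][] a = new double[n][n];
        double[][] b = new double[n][n];
        double[][] c = new double[n][n];
        // (a and b would normally be filled with real data here)

        // Each row of the result is independent of the others, so the rows
        // can be spread across all available cores.
        IntStream.range(0, n).parallel().forEach(i -> {
            for (int j = 0; j < n; j++) {
                double sum = 0.0;
                for (int k = 0; k < n; k++) {
                    sum += a[i][k] * b[k][j];
                }
                c[i][j] = sum;
            }
        });

        System.out.println("c[0][0] = " + c[0][0]);
    }
}
```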