Frequently Asked OS Multithreading Interview Questions
Q – 1 What is simultaneous multi-threading?
Ans- Simultaneous multithreading (SMT) is the most advanced type of multithreading and applies to superscalar processors. A conventional superscalar processor issues multiple instructions from a single thread every CPU cycle; with SMT, the superscalar processor can issue instructions from multiple threads in the same cycle.
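Whether SMT is enabled is normally exposed by the operating system. Below is a minimal C sketch, assuming a Linux kernel that provides the sysfs file /sys/devices/system/cpu/smt/active (not every system exposes it):

/* Check whether SMT is enabled by reading the Linux sysfs flag.
 * Assumes /sys/devices/system/cpu/smt/active exists; on systems
 * without it, the program simply reports that the flag is missing. */
#include <stdio.h>

int main(void) {
    FILE *f = fopen("/sys/devices/system/cpu/smt/active", "r");
    if (!f) {
        perror("SMT status not available via sysfs");
        return 1;
    }
    int active = 0;
    if (fscanf(f, "%d", &active) == 1)
        printf("SMT is %s\n", active ? "enabled" : "disabled");
    fclose(f);
    return 0;
}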
Q – 2 What is interleaved multi-threading?
Ans- The purpose of interleaved multithreading is to remove all data-dependency stalls from the execution pipeline. The processor switches to a different thread on each CPU cycle, so consecutive instructions in the pipeline come from different threads. Since one thread is relatively independent of the others, there is less chance of an instruction in one pipeline stage needing an output from an older instruction still in the pipeline.
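As an illustration only, the toy C simulation below mimics that cycle-by-cycle interleaving in software; the thread count, cycle count, and round-robin issue order are simplifying assumptions, not how real hardware is programmed:

/* Toy model of interleaved (fine-grained) multithreading: on each
 * simulated cycle the core issues the next instruction from a
 * different thread in round-robin order, so back-to-back
 * instructions in the pipeline rarely belong to the same thread. */
#include <stdio.h>

#define NUM_THREADS 4
#define CYCLES      8

int main(void) {
    int pc[NUM_THREADS] = {0};          /* per-thread program counter */

    for (int cycle = 0; cycle < CYCLES; cycle++) {
        int t = cycle % NUM_THREADS;    /* a different thread each cycle */
        printf("cycle %d: issue instruction %d of thread %d\n",
               cycle, pc[t]++, t);
    }
    return 0;
}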
Q – 3 What is block multi-threading?
Ans- Block (coarse-grained) multithreading is the simplest type: one thread runs until it is blocked by an event that would normally create a long-latency stall. Such a stall might be a cache miss that has to access off-chip memory, which can take hundreds of CPU cycles before the data returns.
Instead of waiting for the stall to resolve, a multithreaded processor switches execution to another thread that is ready to run. Only when the data for the stalled thread has arrived is that thread placed back on the list of ready-to-run threads.
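The switch-on-stall behaviour can be sketched with a toy C simulation; the miss pattern, thread count, and cycle count below are invented purely for illustration:

/* Toy model of block (coarse-grained) multithreading: the core keeps
 * running one thread until it hits a long-latency event (a simulated
 * cache miss here), then switches to the next ready thread. */
#include <stdbool.h>
#include <stdio.h>

#define NUM_THREADS 3

static bool causes_miss(int step) {
    return step == 2;       /* pretend every thread misses on its 3rd step */
}

int main(void) {
    int current = 0;
    int step[NUM_THREADS] = {0};

    for (int cycle = 0; cycle < 12; cycle++) {
        printf("cycle %2d: thread %d executes step %d\n",
               cycle, current, step[current]);
        if (causes_miss(step[current]++)) {
            printf("          cache miss -> switch away from thread %d\n",
                   current);
            current = (current + 1) % NUM_THREADS;  /* next ready thread */
        }
    }
    return 0;
}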
Q – 4 Explain some disadvantages of multithreading.
Ans- Some criticisms of multithreading include:
Multiple threads can interfere with each other when sharing hardware resources such as caches or translation lookaside buffers (TLBs); the sketch after this list shows one software-level form of such interference.
Execution times of a single thread are not improved but can be degraded, even when only one thread is executing. This is due to slower frequencies and/or additional pipeline stages that are necessary to accommodate thread-switching hardware.
Hardware support for multithreading is more visible to software, thus requiring more changes to both application programs and operating systems than multiprocessing.
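One concrete, software-level form of cache interference between threads is false sharing, sketched in C below; the struct layout, iteration count, and the assumption that both fields land in the same cache line are illustrative, and the interference shows up as a slowdown rather than a wrong result:

/* Hypothetical false-sharing demo: two counters that likely sit in the
 * same cache line are updated by different threads, so the line
 * bounces between cores and both threads slow each other down. */
#include <pthread.h>
#include <stdio.h>

static struct { long a; long b; } shared;   /* fields likely share a cache line */

static void *bump_a(void *arg) {
    (void)arg;
    for (long i = 0; i < 100000000L; i++) shared.a++;
    return NULL;
}

static void *bump_b(void *arg) {
    (void)arg;
    for (long i = 0; i < 100000000L; i++) shared.b++;
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, bump_a, NULL);
    pthread_create(&t2, NULL, bump_b, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("a=%ld b=%ld\n", shared.a, shared.b);
    return 0;
}

Padding or aligning the two counters so they fall in different cache lines typically removes the slowdown, which is a common way to confirm the effect.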
Q – 5 Explain the advantages of multithreading.
Ans- Some advantages include:
If a thread gets a lot of cache misses, the other thread(s) can continue, taking advantage of the unused computing resources, which can thus lead to faster overall execution, as these resources would have been idle if only a single thread were executed.
If a thread cannot use all the computing resources of the CPU (because instructions depend on each other’s result), running another thread can avoid leaving these idle.
If several threads work on the same set of data, they can share the cache, leading to better cache utilization and cheaper synchronization on those values.
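For example, the minimal pthreads sketch below has several threads scan disjoint slices of one shared array, so they reuse data that is already resident in a shared cache; the array size, thread count, and slicing scheme are arbitrary choices made for illustration (compile with -pthread):

/* Several threads sum disjoint slices of one shared array. Because the
 * threads read the same data set, a cache shared between them can
 * serve many of the accesses. */
#include <pthread.h>
#include <stdio.h>

#define N        (1 << 20)              /* illustrative array size */
#define NTHREADS 4                      /* illustrative thread count */

static int data[N];                     /* shared data set */
static long partial[NTHREADS];          /* one result slot per thread */

static void *sum_slice(void *arg) {
    long id = (long)arg;
    long lo = id * (N / NTHREADS), hi = lo + N / NTHREADS;
    long sum = 0;
    for (long i = lo; i < hi; i++)
        sum += data[i];
    partial[id] = sum;
    return NULL;
}

int main(void) {
    for (int i = 0; i < N; i++) data[i] = 1;

    pthread_t tid[NTHREADS];
    for (long t = 0; t < NTHREADS; t++)
        pthread_create(&tid[t], NULL, sum_slice, (void *)t);

    long total = 0;
    for (long t = 0; t < NTHREADS; t++) {
        pthread_join(tid[t], NULL);
        total += partial[t];
    }
    printf("total = %ld (expected %d)\n", total, N);
    return 0;
}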
Q – 6 What are the types of memory?
Ans-
RAM –
Random Access Memory is a volatile form of memory: whatever is written to RAM is lost when the power is turned off.
ROM –
Read-Only Memory is non-volatile: its contents, once written, persist even when the power is turned off. Non-volatile secondary storage such as hard disks and tapes likewise retains data until it is erased by the user.