Concurrency is one of the most challenging and sophisticated subjects you will face in Java technical interviews. This page offers answers to some of the interview questions you might come across.
1. The distinction between a process and a thread
Both processes and threads are units of concurrent execution, but they differ in a fundamental way: threads of the same process share a common memory space, whereas processes do not.
A process is an autonomous piece of running software to which the operating system assigns its own virtual memory space. Any multitasking operating system (which means practically any modern operating system) must separate processes in memory so that one malfunctioning process cannot drag all the others down by scrambling common memory.
Processes are therefore typically isolated from one another, and they cooperate through a mechanism the operating system provides known as inter-process communication (IPC), a kind of intermediary API.
A thread, on the other hand, is a component of an application that shares a common memory with the other threads of the same application. Using common memory allows you to shave off a great deal of overhead, design the threads to cooperate, and exchange data between them much faster.
2. In Java Concurrency, Describe CountDownLatch.
Java's CountDownLatch is a synchronizer that lets one thread wait for one or more other threads to finish processing. It is significant in server-side Java applications.
Using a CountDownLatch, we can put the currently running thread into the waiting state until the events it depends on have been completed by other threads.
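As a minimal sketch of this pattern (the class name LatchDemo and the worker count are illustrative choices, not from the article), the main thread blocks on await() until three workers have each called countDown():

```java
import java.util.concurrent.CountDownLatch;

public class LatchDemo {
    public static void main(String[] args) throws InterruptedException {
        int workers = 3;
        CountDownLatch latch = new CountDownLatch(workers);
        for (int i = 0; i < workers; i++) {
            final int id = i;
            new Thread(() -> {
                System.out.println("Worker " + id + " finished");
                latch.countDown(); // decrement the latch once this worker is done
            }).start();
        }
        latch.await(); // main thread blocks here until the count reaches zero
        System.out.println("All workers done");
    }
}
```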
3. What Sets the Runnable Interface Apart from the Callable Interface? How Do They Get Used?
The Runnable interface has a single run method. It represents a unit of computation that must be run in its own thread of execution. The run method is not allowed to return a value or to throw checked exceptions.
The Callable interface, with its single call method, represents a task that has a value. That is why the call method returns a value; it may also throw exceptions. Callable is typically used in ExecutorService instances to start an asynchronous task and then retrieve its value via the returned Future instance.
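A short sketch of the Callable-plus-Future flow described above (the class name CallableDemo is illustrative):

```java
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class CallableDemo {
    public static void main(String[] args) throws Exception {
        ExecutorService executor = Executors.newSingleThreadExecutor();
        // Callable returns a value and may throw checked exceptions
        Callable<Integer> task = () -> 2 + 2;
        Future<Integer> future = executor.submit(task);
        System.out.println(future.get()); // get() blocks until the result is ready: 4
        executor.shutdown();
    }
}
```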
4. Describe Java Concurrency's CyclicBarrier.
Like the CountDownLatch, the CyclicBarrier is used to put threads into a waiting state. The CyclicBarrier keeps the threads waiting for one another until they all reach a common barrier point. When each thread reaches the barrier, it must invoke the await() method on the CyclicBarrier object.
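A minimal sketch of the barrier behavior (the class name BarrierDemo and the barrier action are illustrative): three threads wait at await(), and the barrier action runs once all parties have arrived.

```java
import java.util.concurrent.CyclicBarrier;

public class BarrierDemo {
    public static void main(String[] args) {
        // the barrier action runs once per generation, after all parties arrive
        CyclicBarrier barrier =
                new CyclicBarrier(3, () -> System.out.println("All parties arrived"));
        for (int i = 0; i < 3; i++) {
            new Thread(() -> {
                try {
                    barrier.await(); // each thread waits here for the others
                } catch (Exception e) {
                    Thread.currentThread().interrupt();
                }
            }).start();
        }
    }
}
```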
5. What Is a Daemon Thread in Java? Should You Perform I/O in One?
A daemon thread is a background thread that does not prevent the JVM from exiting: once all non-daemon threads have finished, the JVM terminates and abandons any daemon threads that are still running.
Curiously, if a daemon thread started from the main() method is supposed to print a message, the message may never appear. This can happen when the main() thread exits before the daemon has finished writing it. Since abandoned daemon threads do not even get to complete their finally blocks and close their resources, you should generally avoid performing any I/O in them.
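A small sketch of the scenario above (the class name DaemonDemo and the sleep duration are illustrative): because the thread is marked as a daemon, the JVM may exit before its message is printed.

```java
public class DaemonDemo {
    public static void main(String[] args) {
        Thread daemon = new Thread(() -> {
            try {
                Thread.sleep(1000); // simulate slow work
                System.out.println("daemon done"); // may never print
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        daemon.setDaemon(true); // must be set before start()
        daemon.start();
        // main() returns immediately; the JVM exits without waiting for daemons
    }
}
```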
6. Consider a CountDownLatch Initialized with the Value 3. Do Three Threads Have to Perform the Countdown?
No, three threads are not necessary; the countdown does not require the same number of threads as the count. A single thread can call countDown() three times, for example.
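A tiny sketch of one thread performing the whole countdown (the class name SingleThreadCountdown is illustrative):

```java
import java.util.concurrent.CountDownLatch;

public class SingleThreadCountdown {
    public static void main(String[] args) throws InterruptedException {
        CountDownLatch latch = new CountDownLatch(3);
        // one worker thread counts down all three times
        new Thread(() -> {
            for (int i = 0; i < 3; i++) {
                latch.countDown();
            }
        }).start();
        latch.await();
        System.out.println("Latch released by a single thread");
    }
}
```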
7. What Does the Interrupt Flag on the Thread Mean? How Can You Check It and Set It? What Connection Does It Have to the InterruptedException?
The interrupt flag, also known as the interrupt status, is an internal Thread flag that is set when the thread is interrupted. To set it, call the interrupt() method on the thread object.
If the thread is currently inside one of the methods that throw InterruptedException (wait, join, sleep, etc.), that method immediately throws InterruptedException. The thread is free to handle this exception however it sees fit.
If interrupt() is called while the thread is not inside such a method, nothing special happens. The thread is responsible for periodically checking the interrupt status via the static Thread.interrupted() or the instance isInterrupted() method. The difference between the two is that the static Thread.interrupted() clears the interrupt flag, whereas isInterrupted() does not.
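A minimal sketch of cooperative interruption (the class name InterruptDemo is illustrative): the worker polls its interrupt status with isInterrupted(), which does not clear the flag.

```java
public class InterruptDemo {
    public static void main(String[] args) throws InterruptedException {
        Thread worker = new Thread(() -> {
            // isInterrupted() checks the flag without clearing it
            while (!Thread.currentThread().isInterrupted()) {
                // busy work until interrupted
            }
            System.out.println("Interrupt observed, exiting");
        });
        worker.start();
        worker.interrupt(); // sets the worker's interrupt flag
        worker.join();
    }
}
```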
8. In Java Concurrency, describe Phaser.
A Phaser is useful for keeping threads synchronized across one or more phases of activity. When synchronizing a single phase, a Phaser behaves like a CyclicBarrier, but it is more flexible and reusable.
9. What Are the Terms Executor and ExecutorService? What Distinctions Exist Between These Interfaces?
The Executor and ExecutorService interfaces belong to the java.util.concurrent framework. The Executor interface is extremely simple, with a single execute method that accepts Runnable instances. In most cases, your task-executing code should depend on this interface.
ExecutorService extends the Executor interface with methods for handling and monitoring the lifecycle of a concurrent task execution service (such as methods for terminating tasks on shutdown), as well as methods for more advanced asynchronous tasks, such as Futures.
A Guide to Java ExecutorService is a good resource for more information on using Executor and ExecutorService.
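A brief sketch contrasting the two interfaces (the class name ExecutorLifecycle is illustrative): execute() comes from Executor, while shutdown() and awaitTermination() are lifecycle methods that ExecutorService adds.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class ExecutorLifecycle {
    public static void main(String[] args) throws InterruptedException {
        ExecutorService pool = Executors.newFixedThreadPool(2);
        pool.execute(() -> System.out.println("task 1")); // Executor-style, fire and forget
        pool.execute(() -> System.out.println("task 2"));
        pool.shutdown(); // ExecutorService adds lifecycle control
        boolean finished = pool.awaitTermination(5, TimeUnit.SECONDS);
        System.out.println("terminated: " + finished);
    }
}
```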
10. In Java Concurrency, describe Exchanger.
As the name implies, it has to do with exchanging something. An Exchanger is crucial when exchanging data between two threads: it streamlines data sharing by providing a synchronization point at which the two threads pair up and swap elements.
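A minimal sketch of the pairing behavior (the class name ExchangerDemo and the string payloads are illustrative): each thread blocks in exchange() until its partner arrives, then each receives the other's value.

```java
import java.util.concurrent.Exchanger;

public class ExchangerDemo {
    public static void main(String[] args) {
        Exchanger<String> exchanger = new Exchanger<>();
        new Thread(() -> {
            try {
                // blocks until the other thread also calls exchange()
                String received = exchanger.exchange("from-A");
                System.out.println("A received: " + received);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }).start();
        new Thread(() -> {
            try {
                String received = exchanger.exchange("from-B");
                System.out.println("B received: " + received);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }).start();
    }
}
```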
11. Which ExecutorService Implementations Are Available in the Standard Library?
Three standard implementations of the ExecutorService interface exist:
The ThreadPoolExecutor executes tasks using a pool of threads. After a thread completes a task, it returns to the pool. If every thread in the pool is occupied, the task waits for its turn.
The ScheduledThreadPoolExecutor allows scheduling task execution instead of running a task immediately when a thread becomes available. It can also schedule tasks at a fixed rate or with a fixed delay.
ForkJoinPool is a special ExecutorService designed to handle jobs based on recursive algorithms. If you used a standard ThreadPoolExecutor for a recursive algorithm, you would rapidly discover that all of your threads are occupied waiting for the lower levels of the recursion to complete.
The ForkJoinPool implements the so-called work-stealing algorithm, which enables it to employ the available threads more effectively.
12. Explain Java Concurrency's Use of Semaphore.
Semaphore is a class in the java.util.concurrent package. In essence, a semaphore maintains a set of permits.
A thread calls the acquire() method to obtain a permit before accessing the shared resource. If the semaphore's count is not zero, the count is decremented by one and the permit is granted; otherwise the thread blocks until a permit becomes available. When it has finished using the shared resource, the thread calls the release() method to return the permit.
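A small sketch of the acquire/release cycle (the class name SemaphoreDemo and the permit count are illustrative):

```java
import java.util.concurrent.Semaphore;

public class SemaphoreDemo {
    public static void main(String[] args) throws InterruptedException {
        Semaphore semaphore = new Semaphore(2); // two permits available
        semaphore.acquire();                    // take one permit
        System.out.println("available: " + semaphore.availablePermits()); // 1
        semaphore.acquire();                    // take the second permit
        // a third acquire() would block here until release() is called
        semaphore.release();                    // return one permit
        System.out.println("available: " + semaphore.availablePermits()); // 1
    }
}
```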
13. Java Memory Model (JMM): What Is It? Describe Its Goal and Fundamental Principles.
The Java Memory Model is described in section 17.4 of the Java Language Specification. It specifies how multiple threads in a concurrent Java application access shared memory, and how data changes made by one thread are made visible to other threads. The JMM is brief and to the point, but it can be challenging to understand without a solid mathematical background.
A memory model is required because the way your Java code accesses data is not what actually happens at the lower levels. Memory writes and reads may be reordered or optimized by the Java compiler, the JIT compiler, and even the CPU, as long as the observable outcome of these reads and writes stays the same.
Because the majority of these optimizations assume a single thread of execution, they can produce unexpected outcomes when your application scales to multiple threads (cross-thread optimizers are still extremely hard to implement). Another major issue is that memory in modern systems is tiered: different processor cores may hold some non-flushed data in their caches or read/write buffers, which affects how memory is perceived by other cores.
To make matters worse, the existence of different memory access architectures would violate Java's promise of "write once, run anywhere." Happily for programmers, the JMM provides a few guarantees you can rely on when creating multithreaded programs. Adhering to these guarantees lets programmers write multithreaded code that is reliable and portable across different architectures.
14. Explain the Java Concurrency ReentrantLock.
In its simplest form, the ReentrantLock is just a class that implements the Lock interface. It is used to synchronize methods that access shared resources. Because the lock is reentrant, a thread that already holds the ReentrantLock on a resource may acquire it again without blocking.
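A minimal sketch of the reentrancy property (the class name ReentrantDemo and the outer/inner method names are illustrative): the same thread acquires the lock twice without deadlocking, and the hold count reflects the nesting.

```java
import java.util.concurrent.locks.ReentrantLock;

public class ReentrantDemo {
    private static final ReentrantLock lock = new ReentrantLock();

    static void outer() {
        lock.lock();
        try {
            inner(); // re-acquires the same lock without deadlocking
        } finally {
            lock.unlock();
        }
    }

    static void inner() {
        lock.lock(); // hold count goes to 2 for the owning thread
        try {
            System.out.println("hold count: " + lock.getHoldCount());
        } finally {
            lock.unlock();
        }
    }

    public static void main(String[] args) {
        outer();
    }
}
```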
15. What Exactly Is a Volatile Field, and What Constraints Does the JMM Hold for One?
According to the Java Memory Model, a volatile field has special properties. Reads and writes of a volatile variable are synchronization actions, meaning they have a total ordering (all threads observe a consistent order of these actions). A read of a volatile variable is guaranteed to observe the last write to that variable in this order.
If a field is accessed by multiple threads, with at least one thread writing to it, there is little assurance about what a given thread will read from it, so you should consider making it volatile.
Another guarantee of volatile is atomicity of reads and writes of 64-bit values (long and double). Without a volatile modifier, a read of such a field could observe a value partially written by another thread.
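A common sketch of the visibility guarantee (the class name VolatileFlag is illustrative): the worker spins on a volatile flag, so the main thread's write is guaranteed to become visible and the loop terminates.

```java
public class VolatileFlag {
    // without volatile, the worker thread might never see the update to running
    private static volatile boolean running = true;

    public static void main(String[] args) throws InterruptedException {
        Thread worker = new Thread(() -> {
            while (running) {
                // spin until another thread clears the flag
            }
            System.out.println("worker stopped");
        });
        worker.start();
        Thread.sleep(100);
        running = false; // this write is guaranteed to become visible to the worker
        worker.join();
    }
}
```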
16. Describe ReadWriteLock in Java Concurrency.
Java multithreaded programs rely heavily on the ReadWriteLock. In a multithreaded application, multiple read and write operations on a shared resource can take place concurrently. Problems arise when two writes, or a read and a write, happen simultaneously: there is then a possibility of writing or reading an incorrect value. To increase performance, ReadWriteLock locks read and write operations separately, allowing many readers to proceed at once while a writer gets exclusive access.
17. What Particular Guarantees Does the JMM Hold for the Final Fields of a Class?
In essence, the JMM assures that the final fields of a class are initialized before any thread gets hold of the object. Without this guarantee, reordering or other optimizations could publish a reference to an object, making it visible to another thread, before all of its fields have been populated. Rogue access to these fields could result from this.
For this reason, when building an immutable object, its fields should always be made final, even if they are not accessible via getter methods.
18. ReentrantReadWriteLock in Java Concurrency Explained.
It is a class that implements the ReadWriteLock interface, giving us a read-write lock pair. The readWrite.readLock().lock() and readWrite.writeLock().lock() methods are used to obtain the read lock and the write lock, respectively, where readWrite is a ReentrantReadWriteLock instance. ReentrantReadWriteLock further permits downgrading a write lock to a read lock.
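A small sketch of the lock pair in use (the class name ReadWriteDemo and the value field are illustrative): the write lock is exclusive, while the read lock may be held by many readers at once.

```java
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class ReadWriteDemo {
    private static final ReentrantReadWriteLock rwLock = new ReentrantReadWriteLock();
    private static int value = 0;

    static void write(int v) {
        rwLock.writeLock().lock(); // exclusive: blocks readers and other writers
        try {
            value = v;
        } finally {
            rwLock.writeLock().unlock();
        }
    }

    static int read() {
        rwLock.readLock().lock(); // shared: many readers may hold it at once
        try {
            return value;
        } finally {
            rwLock.readLock().unlock();
        }
    }

    public static void main(String[] args) {
        write(42);
        System.out.println(read()); // 42
    }
}
```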
19. Could One of These Threads Block If Two Threads Call a Synchronized Method on Different Object Instances at the Same Time? What Happens If the Method Is Static?
When a method is an instance method, the instance serves as the method's monitor. Two threads invoking the method on different instances acquire different monitors, so neither of them blocks.
If the method is static, the monitor is the class object. Since the monitor is the same for both threads, one of them will block and wait for the other to exit the synchronized method.
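A sketch of the two monitor choices (the class name MonitorDemo and the empty method bodies are illustrative):

```java
public class MonitorDemo {
    // instance method: the monitor is `this`, so calls on different
    // instances use different monitors and never block each other
    public synchronized void instanceMethod() { }

    // static method: the monitor is MonitorDemo.class, shared by all
    // callers, so concurrent calls serialize on the same monitor
    public static synchronized void staticMethod() { }

    public static void main(String[] args) {
        MonitorDemo a = new MonitorDemo();
        MonitorDemo b = new MonitorDemo();
        new Thread(a::instanceMethod).start(); // these two never block each other
        new Thread(b::instanceMethod).start();
    }
}
```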
20. ConcurrentHashMap in Java Concurrency Explained.
ConcurrentHashMap is similar to HashMap. The locking mechanism is the only distinction between HashMap and ConcurrentHashMap: instead of synchronizing every method on a single lock, ConcurrentHashMap uses finer-grained locking, so multiple threads can safely operate on different parts of the map at the same time.
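A small sketch of thread-safe usage (the class name ChmDemo and the "requests" key are illustrative): merge() updates a key atomically with no external synchronization.

```java
import java.util.concurrent.ConcurrentHashMap;

public class ChmDemo {
    public static void main(String[] args) {
        ConcurrentHashMap<String, Integer> counts = new ConcurrentHashMap<>();
        // atomic per-key update; no external synchronization needed
        counts.merge("requests", 1, Integer::sum);
        counts.merge("requests", 1, Integer::sum);
        System.out.println(counts.get("requests")); // 2
    }
}
```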
21. Describe the conditions of starvation, deadlock, and livelock. Give an explanation of the potential causes of these conditions.
Deadlock is a situation in which a group of threads is unable to advance because each thread must obtain a resource that has already been acquired by another thread in the group. The simplest scenario is when two threads each need to lock both of two resources to proceed, but one thread has locked the first resource and the other has locked the second. These threads will never acquire a lock on both resources, so they will never advance.
Livelock occurs when multiple threads keep reacting to conditions or events that they generate for one another. A thread processes an event and, in doing so, produces a new event that must be handled by another thread, which in turn produces an event for the first thread, and so on. Such threads are alive and not blocked, but they swamp each other with useless work, which prevents them from moving forward.
Starvation is a situation in which a thread is unable to obtain the resources it needs because other threads monopolize them or have higher priority. The thread cannot advance and thus cannot carry out useful work.
22. In Java Concurrency, describe Lock Striping.
Lock striping is the idea of splitting the locks for a section of a data structure so that each lock guards its own independent group of objects, allowing threads to operate on different parts of the structure concurrently.
23. Give an explanation of the Fork/Join Framework's goals and use cases.
The fork/join framework makes it possible to parallelize recursive algorithms. The biggest issue with parallelizing recursion using something like a ThreadPoolExecutor is that you might quickly run out of threads, because each recursive step would need its own thread while the threads up the stack sit idle and wait.
The ForkJoinPool class, an ExecutorService implementation, serves as the entry point for the fork/join framework. It implements the work-stealing algorithm, in which idle threads "steal" queued work from busy threads. This distributes the calculations among several threads and makes progress possible with fewer threads than would otherwise be necessary.
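A minimal RecursiveTask sketch (the class name SumTask and the 1,000-element threshold are illustrative choices, not from the article): the task splits a range sum in half, forks one half, and computes the other directly.

```java
import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.RecursiveTask;

public class SumTask extends RecursiveTask<Long> {
    private final long from, to;

    SumTask(long from, long to) { this.from = from; this.to = to; }

    @Override
    protected Long compute() {
        if (to - from <= 1000) {                 // small enough: compute directly
            long sum = 0;
            for (long i = from; i <= to; i++) sum += i;
            return sum;
        }
        long mid = (from + to) / 2;
        SumTask left = new SumTask(from, mid);
        SumTask right = new SumTask(mid + 1, to);
        left.fork();                             // run the left half asynchronously
        return right.compute() + left.join();    // compute right here, then join left
    }

    public static void main(String[] args) {
        long result = ForkJoinPool.commonPool().invoke(new SumTask(1, 100_000));
        System.out.println(result); // 5000050000
    }
}
```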
24. How Does CopyOnWriteArrayList Work in Java Concurrency?
CopyOnWriteArrayList is a class that implements the List interface. The fundamental difference between it and an ordinary list implementation such as ArrayList is that CopyOnWriteArrayList is thread-safe. It performs best when the list is iterated far more often than it is mutated.
25. Describe the Distinctions Between an ArrayList and a CopyOnWriteArrayList in Java Concurrency.
The main distinction between the two is that ArrayList is not thread-safe and is not recommended for use in a multithreaded environment, whereas CopyOnWriteArrayList is thread-safe and can be used in one. In addition, ArrayList returns fail-fast iterators, while CopyOnWriteArrayList returns fail-safe (snapshot) iterators.
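A small sketch of the snapshot-iterator behavior (the class name CowDemo and the element values are illustrative): the iterator sees the list as it was when the iterator was created, even after a later mutation.

```java
import java.util.Iterator;
import java.util.concurrent.CopyOnWriteArrayList;

public class CowDemo {
    public static void main(String[] args) {
        CopyOnWriteArrayList<String> list = new CopyOnWriteArrayList<>();
        list.add("a");
        list.add("b");
        Iterator<String> it = list.iterator(); // snapshot of the list at this point
        list.add("c"); // mutation does not affect the existing iterator
        int seen = 0;
        while (it.hasNext()) {
            it.next();
            seen++;
        }
        System.out.println(seen + " vs " + list.size()); // 2 vs 3
    }
}
```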
26. What is ConcurrentLinkedQueue in Java concurrency?
ConcurrentLinkedQueue is an unbounded, thread-safe queue. Its elements are stored as linked nodes, and it adheres to the FIFO (First In, First Out) principle.
27. Explain Java Concurrency's ConcurrentLinkedDeque.
Like ConcurrentLinkedQueue, ConcurrentLinkedDeque is an unbounded, thread-safe deque. It implements the Deque interface, which makes insertion and removal possible at either end.
28. What do Java Concurrency blocking methods do?
Blocking methods are methods that do not return control to the caller until their task is complete; the calling thread waits (blocks) in the meantime. Examples include Thread.join(), BlockingQueue.take(), and CountDownLatch.await().
29. Concurrency in Java: LinkedBlockingQueue explanation.
The LinkedBlockingQueue implements the BlockingQueue interface and uses linked nodes internally for storage. Unlike the ArrayBlockingQueue, the LinkedBlockingQueue is optionally bounded.
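A small sketch of bounded blocking behavior (the class name QueueDemo and the capacity of 2 are illustrative):

```java
import java.util.concurrent.LinkedBlockingQueue;

public class QueueDemo {
    public static void main(String[] args) throws InterruptedException {
        // the capacity argument is optional; omitting it makes the queue unbounded
        LinkedBlockingQueue<Integer> queue = new LinkedBlockingQueue<>(2);
        queue.put(1); // put() blocks if the queue is full
        queue.put(2);
        System.out.println(queue.take()); // take() blocks if the queue is empty: 1
    }
}
```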
30. Give a Java Concurrency PriorityBlockingQueue explanation.
The PriorityBlockingQueue implements the BlockingQueue interface and stores its elements according to their natural ordering, or according to a Comparator supplied at queue construction time.
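A small sketch of the ordering behavior (the class name PriorityDemo and the integer elements are illustrative): elements come out in natural order, not insertion order.

```java
import java.util.concurrent.PriorityBlockingQueue;

public class PriorityDemo {
    public static void main(String[] args) throws InterruptedException {
        PriorityBlockingQueue<Integer> queue = new PriorityBlockingQueue<>();
        queue.put(3);
        queue.put(1);
        queue.put(2);
        // take() returns the smallest element first (natural ordering)
        System.out.println(queue.take()); // 1
        System.out.println(queue.take()); // 2
    }
}
```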