Issues with Multithreading design:
Before Java 1.5, multithreaded applications were created using raw threads, thread groups, thread pools or custom thread pools.
Here the entire thread management was the responsibility of the programmer, which included:
Thread synchronization
Thread waiting
Thread joining
Thread locking
Thread notification
Handling deadlock
Thread behavior depends on the environment where the application is deployed and running. So the same application
might behave differently on different deployment environments depending on the processor speed, the RAM size, the bandwidth
etc. All of these have a direct impact on a multithreaded application.
What is the Executor Framework:
The Executor framework (java.util.concurrent.Executor) is used for running Runnable tasks without creating new threads
every time, mostly by re-using already created threads. It gives multithreaded applications an easy abstraction layer:
the executor layer hides the critical parts of concurrent execution and the programmer concentrates only on the
business logic implementation.
In the Java Executor framework all parallel work is modeled as tasks instead of raw threads. The application deals
with instances of Runnable or Callable (basically collections of tasks, or units of parallel work) and passes them to an Executor to process.
The ExecutorService interface extends the simple Executor interface. ExecutorService represents an asynchronous
execution mechanism which is capable of executing tasks in the background, and is thus very similar to a thread pool.
Example:
ExecutorService executorService = Executors.newFixedThreadPool(10);
executorService.execute(new Runnable() {
    public void run() {
        System.out.println("Asynchronous task");
    }
});
executorService.shutdown();
ExecutorService executorService1 = Executors.newSingleThreadExecutor(); //Single thread to execute commands
ExecutorService executorService2 = Executors.newFixedThreadPool(10);
ScheduledExecutorService executorService3 = Executors.newScheduledThreadPool(20);
ExecutorService executorService4 = Executors.newCachedThreadPool();
The newFixedThreadPool(int): returns a ThreadPoolExecutor instance with an unbounded work queue and a
fixed number of threads. No extra thread beyond the configured value is ever created during execution, so if
no free thread is available the task waits in the queue and executes once a thread becomes free.
The newCachedThreadPool(): returns a ThreadPoolExecutor instance with a direct hand-off queue (SynchronousQueue) and an unbounded
number of threads. Existing idle threads are reused if available, but if no free thread is available a new one
is created and added to the pool to run the new task. Threads that have been idle for longer than the timeout period
(60 seconds by default) are removed automatically from the pool.
Different methods to delegate tasks for execution to an ExecutorService (a usage sketch follows this list):
execute(Runnable)
submit(Runnable)
submit(Callable)
invokeAny(...)
invokeAll(...)
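For illustration, here is a minimal, hypothetical sketch contrasting these delegation methods (the class name, pool size and task bodies below are made up for this example, not taken from the post):
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
public class DelegationDemo {
    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(3);
        pool.execute(new Runnable() {                              // fire-and-forget, no result
            public void run() { System.out.println("execute(Runnable)"); }
        });
        Future<?> f1 = pool.submit(new Runnable() {                // Future completes with null
            public void run() { System.out.println("submit(Runnable)"); }
        });
        Future<String> f2 = pool.submit(new Callable<String>() {   // Future carries a result
            public String call() { return "submit(Callable)"; }
        });
        List<Callable<String>> tasks = new ArrayList<Callable<String>>();
        tasks.add(new Callable<String>() { public String call() { return "task-1"; } });
        tasks.add(new Callable<String>() { public String call() { return "task-2"; } });
        String any = pool.invokeAny(tasks);                        // result of whichever task finishes first
        List<Future<String>> all = pool.invokeAll(tasks);          // blocks until every task is done
        System.out.println(f1.get() + " / " + f2.get() + " / " + any + " / " + all.get(0).get());
        pool.shutdown();
    }
}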
Q. Difference between "Executors.newSingleThreadExecutor().execute(command)" and "new Thread(command).start()":
A. Once you have an Executor instance, you can submit multiple tasks to it, and have them executed one after another.
You can't do that simply with a raw Thread.
The Executor framework creates tasks from instances of Runnable or Callable. A Runnable's run() method does not
return a value and cannot throw a checked exception. Callable is the more capable version in that area: it defines a call()
method that returns a value (a generic type) which can be used in further processing, and it can also throw a checked exception if necessary.
The FutureTask class is another important component, used to obtain information about the processing in the future. An
instance of this class can wrap either a Callable or a Runnable. You get a Future as the return value of the
submit() method of an ExecutorService, or you can manually wrap your task in a FutureTask before calling the execute() method.
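As a rough sketch of that idea (the class name and the computation below are illustrative, not from the original post), a FutureTask can be handed to execute() and still be queried for its result, because it is both a Runnable and a Future:
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.FutureTask;
public class FutureTaskDemo {
    public static void main(String[] args) throws Exception {
        FutureTask<Integer> task = new FutureTask<Integer>(new Callable<Integer>() {
            public Integer call() {
                return 6 * 7;   // some computation producing a result
            }
        });
        ExecutorService pool = Executors.newSingleThreadExecutor();
        pool.execute(task);                            // FutureTask is a Runnable, so execute() accepts it
        System.out.println("Result: " + task.get());   // ...and also a Future, so we can block for the result
        pool.shutdown();
    }
}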
Apart from the above Executors factory methods, here are the functional steps when using the Java ThreadPoolExecutor directly:
A pool of multiple threads is created.
A queue is created holding the tasks that have not yet been assigned to threads from the pool.
A rejection handler handles the situation where one or more tasks cannot be placed in the queue (see the sketch below).
Per the default rejection policy, the executor simply throws a RejectedExecutionException, a runtime exception, which the
application can catch or ignore.
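To illustrate the default policy, here is a hedged sketch (the pool sizes, queue capacity and sleeping task are arbitrary choices for this example) that deliberately overflows a tiny ThreadPoolExecutor; a ThreadPoolExecutor.CallerRunsPolicy or a custom RejectedExecutionHandler could be passed instead of the default AbortPolicy:
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.RejectedExecutionException;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;
public class RejectionDemo {
    public static void main(String[] args) {
        // 1 core thread, 1 max thread, and a queue of capacity 1: the third task below cannot be accepted.
        ThreadPoolExecutor pool = new ThreadPoolExecutor(1, 1, 0L, TimeUnit.MILLISECONDS,
                new ArrayBlockingQueue<Runnable>(1),
                new ThreadPoolExecutor.AbortPolicy());   // default policy: throw RejectedExecutionException
        Runnable slow = new Runnable() {
            public void run() {
                try { Thread.sleep(1000); } catch (InterruptedException e) { }
            }
        };
        try {
            pool.execute(slow);   // runs on the single worker thread
            pool.execute(slow);   // waits in the queue
            pool.execute(slow);   // rejected: pool is busy and the queue is full
        } catch (RejectedExecutionException e) {
            System.out.println("Task rejected: " + e);
        }
        pool.shutdown();
    }
}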
Creating Executors:
Executor is an interface having only "public abstract void execute(java.lang.Runnable)" method. Used to submit a new task.
ExecutorService is a sub-interface of Executor. It has other methods like "shutdown(), shutdownNow(), isTerminated(),
Future submit(Callable), Future submit(Runnable, Object), Future submit(Runnable)" etc.
----------------------------
Runnable myCommand1 = ...;
Callable<String> myCommand2 = ...;
ExecutorService executorService = ... // Build an executorService
executorService.submit(myCommand1);
//submit() also accepts a Callable
Future<String> resultFromMyCommand2 = executorService.submit(myCommand2);
//shutdown() lets myCommand1 and myCommand2 run to completion but accepts no new tasks
executorService.shutdown();
Runnable myCommand3 = ...;
//Will throw a RejectedExecutionException because no new task can be submitted
executorService.submit(myCommand3);
----------------------------
ScheduledExecutorService is a sub-interface of ExecutorService and has "schedule(), scheduleAtFixedRate(),
scheduleWithFixedDelay()" methods. Used to execute commands periodically or after a given delay.
----------------------------
ScheduledExecutorService executor = ...;
Runnable command1 = ...;
Runnable command2 = ...;
Runnable command3 = ...;
//Will start command1 after 50 seconds
executor.schedule(command1, 50L, TimeUnit.SECONDS);
//Will start command 2 after 20 seconds, 25 seconds, 30 seconds ...
executor.scheduleAtFixedRate(command2, 20L, 5L, TimeUnit.SECONDS);
//Will start command 3 after 10 seconds and if command3 takes 2 seconds to be executed also after 17, 24, 31, 38 seconds...
executor.scheduleWithFixedDelay(command3, 10L, 5L, TimeUnit.SECONDS);
----------------------------
Executors is a class with a number of static factory methods to create ExecutorService and
ScheduledExecutorService objects, depending on the requirements of the application.
----------------------------
ExecutorService ex3 = Executors.newSingleThreadExecutor();
Future<String> future = ex3.submit(new Callable<String>() {
    public String call() {
        for (int i = 20; i <= 23; i++)
            System.out.println("Asynchronous Callable: " + i);
        return "My Result";
    }
});
try {
    System.out.println("Callable: " + future.get());
} catch (Exception e) {
    e.printStackTrace();
}
ex3.shutdown();
System.out.println("All Executors are Shutdown...");
if (!ex3.isTerminated()) //If tasks are still running after shutdown(), force them to stop.
    ex3.shutdownNow();
----------------------------
ThreadPoolExecutor:
private static final Executor executor = new ThreadPoolExecutor(6, 12, 5000L, TimeUnit.MILLISECONDS,
        new LinkedBlockingQueue<Runnable>(250));
The parameter values depend on the application's needs. Here the core pool has 6 threads which can run concurrently,
and the maximum pool size is 12. The queue can hold 250 tasks. One point to remember is that the queue capacity should be
kept high enough to accommodate the expected tasks. The idle-thread keep-alive time is 5000 ms, i.e. 5 seconds.
Submit the task to the Executor: After creating the ExecutorService and the tasks, we submit the tasks to the
executor using either the submit() or the execute() method. As per our configuration, the tasks are then picked up from the queue
and run concurrently. For example, if you have configured 5 concurrent executions, then up to 5 tasks will be picked up from the queue
and run in parallel. This process continues until all the tasks in the queue are finished.
Execute the task: The actual execution of the tasks is managed by the framework. The Executor is responsible for
managing the tasks' execution, the thread pool, synchronization and the queue. If the pool has fewer than its configured number of
core threads, new threads are created as required to handle incoming tasks until that limit is reached. Once the core size
is reached, the pool does not start more threads; instead, the task is queued until a thread is freed up to process it
(only if the queue itself fills up are additional threads created, up to the configured maximum pool size).
And Finally Shutdown the Executor: The termination is executed by invoking its shutdown() method or shutdownNow().
You can choose to terminate it gracefully, or abruptly.
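A commonly used shutdown idiom combines both, roughly as sketched below (the 30-second timeout and the class name are arbitrary example values, not from the original post):
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
public class ShutdownDemo {
    public static void main(String[] args) {
        ExecutorService pool = Executors.newFixedThreadPool(4);
        pool.execute(new Runnable() {
            public void run() { System.out.println("working..."); }
        });
        pool.shutdown();                                        // graceful: stop accepting new tasks
        try {
            if (!pool.awaitTermination(30, TimeUnit.SECONDS)) { // wait for running tasks to finish
                pool.shutdownNow();                             // abrupt: interrupt whatever is still running
            }
        } catch (InterruptedException e) {
            pool.shutdownNow();
            Thread.currentThread().interrupt();
        }
    }
}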
Some Theories taken from : http://mrbool.com/working-with-java-executor-framework-in-multithreaded-application/27560
--------------------------------------------------------------------
Working Example:
----------------
package concurrency;
public class MyRunnable implements Runnable {
    public void run() {
        System.out.println("Run method");
    }
}
----------------
//Sample ThreadPool Class to understand Executor Framework
package concurrency;
public class ThreadPool {
    public static void main(String ar[]) {
        Thread worker[] = new Thread[3];
        Runnable r = new MyRunnable();
        System.out.println("Running ThreadPool Task..");
        for (int i = 0; i < worker.length; i++) {
            worker[i] = new Thread(r);
            worker[i].start();
        }
        for (int i = 0; i < worker.length; i++) {
            try {
                worker[i].join();
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
            worker[i] = null;
        }
    }
}
----------------
//The above ThreadPool code is equivalent to the Executor framework version below
package concurrency;
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;
public class ExecutorTask {
    public static void main(String[] args) {
        int poolSize = 3;
        int jobCount = 3;
        Runnable r = new MyRunnable();
        System.out.println("Running Executor Task..");
        ExecutorService ex = Executors.newFixedThreadPool(poolSize);
        for (int i = 0; i < jobCount; i++) {
            ex.execute(r);
        }
        ex.shutdown();
        System.out.println("Running Executor Task for Callable..");
        List<Future> list = new ArrayList<Future>();
        //ExecutorService executor = Executors.newFixedThreadPool(poolSize);
        ExecutorService executor = new ThreadPoolExecutor(3, 3, 0L, TimeUnit.MILLISECONDS, new ArrayBlockingQueue<Runnable>(15));
        //corePoolSize, maxPoolSize, keep-alive time for idle threads above the core size (0 here), time unit, work queue
        for (int i = 0; i < jobCount; i++) {
            list.add(executor.submit(new MyCallable()));
        }
        try {
            for (Future f : list)
                System.out.println("Future Returned get: " + f.get());
        } catch (InterruptedException e) {
            e.printStackTrace();
        } catch (ExecutionException e) {
            e.printStackTrace();
        }
        executor.shutdown();
    }
}
----------------
package concurrency;
import java.util.concurrent.Callable;
public class MyCallable implements Callable<String> {
    public String call() {
        System.out.println("Call method..");
        return "Server msg is Hi";
    }
}
-----------------------------END---------------------------------------
ConcurrentHashMap in Java
ConcurrentHashMap performs better than Hashtable or a synchronized Map because it locks only a portion of the Map
instead of the whole Map, which is the case with Hashtable and a synchronized Map (HashMap itself is not thread-safe).
ConcurrentHashMap allows multiple readers to read concurrently without any blocking, and at the same time maintains
integrity by synchronizing write operations.
ConcurrentHashMap was introduced as an alternative to Hashtable and provides all the functionality of Hashtable
with an additional feature called "concurrency level", which allows ConcurrentHashMap to partition the Map. It partitions
the Map into segments based on the concurrency level and locks only the relevant segment during updates.
The default concurrency level is 16, so the Map is divided into 16 parts and each part is governed by a different
lock. This means up to 16 threads can operate on the Map simultaneously, as long as they operate on different parts of it.
This gives ConcurrentHashMap high performance while keeping thread-safety intact. However, it comes with a limitation:
because reads are not blocked while update operations like put(), remove(), putAll() or clear() are in progress,
a concurrent retrieval may not reflect the most recent change to the Map.
Iterators returned by the keySet of ConcurrentHashMap are weakly consistent: they only reflect the state of the ConcurrentHashMap
at some point and may not reflect recent changes. Iterators of ConcurrentHashMap's keySet are also fail-safe and don't throw
ConcurrentModificationException.
The default concurrency level is 16 and can be changed by providing a number that makes sense for your workload when creating the
ConcurrentHashMap. The concurrency level is used for internal sizing and indicates the number of concurrent updates expected without
contention; hence, if you only have a few writer threads updating the Map, keeping it low is much better. ConcurrentHashMap also
uses ReentrantLock internally to lock its segments.
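A minimal sketch of tuning the concurrency level (the class name, capacity, load factor and level below are illustrative values; note that from Java 8 onward the concurrencyLevel argument is treated only as a sizing hint):
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
public class ConcurrencyLevelDemo {
    public static void main(String[] args) {
        // initialCapacity=64, loadFactor=0.75, concurrencyLevel=4 (few writer threads expected)
        ConcurrentMap<String, Integer> scores = new ConcurrentHashMap<String, Integer>(64, 0.75f, 4);
        scores.put("alice", 10);
        scores.put("bob", 20);
        System.out.println(scores.get("alice"));
    }
}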
ConcurrentHashMap examples are similar to Hashtable examples; however, there is one more important method, putIfAbsent(). Many times
we need to insert an entry into a Map only if it is not present already, and we usually write the following kind of code:
synchronized(map){
    if (map.get(key) == null){
        return map.put(key, value);
    } else {
        return map.get(key);
    }
}
Though this code will work fine with HashMap and Hashtable, it won't work as expected with ConcurrentHashMap if you rely on the map's
own thread-safety and drop the external synchronization: during a put operation the whole map is not locked, so while one thread is
putting a value, another thread's get() call can still return null, which results in one thread overwriting the value inserted by
the other thread. Of course, you can wrap the whole code in a synchronized block and make it thread-safe, but that effectively makes
your code single-threaded. ConcurrentHashMap provides putIfAbsent(key, value), which does the same thing atomically and thus
eliminates the above race condition.
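A small sketch of putIfAbsent() in action (the class name, key and value strings are made up for illustration):
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
public class PutIfAbsentDemo {
    public static void main(String[] args) {
        ConcurrentMap<String, String> cache = new ConcurrentHashMap<String, String>();
        // Atomic check-then-put: returns the existing value, or null if this call inserted the mapping.
        String previous = cache.putIfAbsent("config", "v1");
        System.out.println("first call returned: " + previous);   // null -> this call inserted the value
        previous = cache.putIfAbsent("config", "v2");
        System.out.println("second call returned: " + previous);  // "v1" -> existing value was kept
        System.out.println("value in map: " + cache.get("config"));
    }
}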
ConcurrentHashMap is best suited when you have multiple readers and few writers. If writers outnumber readers, or writers equal
readers in number, the performance of ConcurrentHashMap effectively drops toward that of a synchronized Map or Hashtable, because
each writer locks a portion of the Map and every reader of that portion has to wait for the writer operating on it.
ConcurrentHashMap is a good choice for caches, which can be initialized during application start-up and later accessed
by many request-processing threads. It is also a good replacement for Hashtable and should be used whenever possible.
Summary:
1. ConcurrentHashMap allows concurrent read and thread-safe update operation. Use this when you have more readers than writers.
2. During the update operation, ConcurrentHashMap only locks a portion of Map instead of whole Map.
3. Concurrent updates are achieved by internally dividing the Map into small portions (segments), the number of which is defined by the concurrency level.
4. Choose the concurrency level carefully: a significantly higher number wastes space, while a lower number
may introduce thread contention.
5. All operations of ConcurrentHashMap are thread-safe.
6. Since the ConcurrentHashMap implementation doesn't lock the whole Map, a read may overlap with update operations
like put() and remove(). In that case, the result returned by get() reflects the most recently completed operation.
7. Iterators returned by ConcurrentHashMap are weakly consistent, fail-safe and never throw ConcurrentModificationException.
8. ConcurrentHashMap doesn't allow null as key or value.
9. During putAll() and clear() operations, the concurrent read may only reflect insertion or deletion of some entries.
Taken from : http://javarevisited.blogspot.com/2013/02/concurrenthashmap-in-java-example-tutorial-working.html#ixzz4YdshMH8E
Thursday, February 9, 2017
Synchronization and Inter-Thread communication in Java
Synchronization in Java is the capability to control access by multiple threads to a shared resource.
Synchronization is mainly used to avoid race conditions and to achieve thread safety in an application.
Some issues caused by synchronization are:
Deadlock, if locking is not handled properly. Check this: http://deepakmodi2006.blogspot.in/2017/02/multithreading-issues-and-solutions-in.html
The application becomes slower.
Types of synchronization:
1) Process Synchronization (done at OS level)
2) Thread Synchronization
--Mutual Exclusive
--Synchronized method
--Synchronized block
--static synchronization
--Inter Thread Communication
--wait and notify
We are going to discuss "Thread Synchronization".
Mutually exclusive: means keeping threads from interfering with one another while sharing data.
Synchronization is built around an internal entity known as the lock or monitor.
Every object has a lock associated with it. Hence a thread which needs consistent access to an object's fields has to acquire
the object's lock before accessing them, and then release the lock when it's done with them.
1) Synchronized method is used to lock an object for any shared resource. When a thread invokes a synchronized method, it
automatically acquires the lock for that object and releases it when the thread completes its task.
----------------------------------
package threads;
class BookMyShowTicketHelper {
    static int MAX_TICKET = 10;
    synchronized boolean bookTicket(int n) {
        if (MAX_TICKET - n >= 0) {
            MAX_TICKET = MAX_TICKET - n;
            System.out.println("MAX_TICKET: " + MAX_TICKET + ", Ticket booked: " + n);
            return true;
        }
        return false;
    }
    public static void main(String ar[]) {
        final BookMyShowTicketHelper obj = new BookMyShowTicketHelper();
        Thread t1 = new Thread() { public void run() { obj.bookTicket(3); } }; //Access synchronized method using an anonymous class
        Thread t2 = new Thread() { public void run() { obj.bookTicket(2); } }; //Access synchronized method using an anonymous class
        t1.start();
        t2.start();
    }
}
----------------------------------
2) Synchronized block is used to lock an object for any shared resource. The scope of a synchronized block is smaller than that of a synchronized method.
boolean bookTicket(int n) {
    synchronized (this) { // lock only the critical section instead of the whole method
        if (MAX_TICKET - n >= 0) {
            MAX_TICKET = MAX_TICKET - n;
            System.out.println("MAX_TICKET: " + MAX_TICKET + ", Ticket booked: " + n);
            return true;
        }
        return false;
    }
}
3) Static synchronization: If you make a static method synchronized, the lock is on the class, not on an object instance.
Suppose there are 2 objects (obj1 and obj2) of the BookMyShowTicketHelper class.
obj1 is accessed by threads t1 and t2.
obj2 is accessed by threads t3 and t4.
With a synchronized instance method or block, interference between t1 and t2, or between t3 and t4, is prevented, but
interference between t1 and t3, or between t2 and t4, can still happen: each object has its own lock, so the t1/t2 pair and
the t3/t4 pair hold different locks. Static synchronization works at the class level and is the solution here.
synchronized static boolean bookTicket(int n){ return true/false; }
OR
static boolean bookTicket(int n) {
    synchronized (BookMyShowTicketHelper.class) { // synchronized block on the class
        //... booking logic as above
        return false; // placeholder return so the method compiles
    }
}
Inter-thread communication: A mechanism where a running thread pauses itself in its critical section and allows another thread
to enter (and lock) the same critical section and be executed. It is implemented using the following methods of the Object class:
wait()
notify()
notifyAll()
wait(): Causes the current thread to release the lock and wait until either another thread invokes notify() or notifyAll()
on this object, or a specified amount of time has elapsed (wait(long timeout) waits at most the specified amount of time). The
current thread must own this object's monitor (only then can it release the lock), hence wait() must be called from a
synchronized method or block, otherwise it throws IllegalMonitorStateException. Syntax: public final void wait() throws InterruptedException.
notify(): Wakes up a single thread that is waiting on this object's monitor. If many threads are waiting to get lock of this object, any one
of them is chosen to be awakened. The choice is arbitrary and occurs at the discretion of the implementation or at OS level.
Syntax: public final void notify()
notifyAll(): Wakes up all threads that are waiting on this object's monitor. Syntax: public final void notifyAll()
Note: wait(), notify() and notifyAll() methods are defined in Object class because they are related to lock and object has a lock.
Difference between the wait() and sleep() methods:
wait() moves the thread from running to the waiting state until it is notified.
sleep() moves the thread from running to a timed-waiting state; it becomes runnable again once the time elapses.
wait() releases the lock, but sleep() keeps holding it.
wait(long) waits at most the given timeout; wait() with no argument waits until notified. Both throw InterruptedException if the waiting thread is interrupted.
wait() must be awakened by notify() or notifyAll() (or time out), whereas sleep() simply completes after the specified time.
wait() is in the Object class and is non-static.
sleep() is in the Thread class and is static.
----------------------
synchronized void withdraw(int amount) {
    while (this.amount < amount) { // use while, not if, to re-check the condition after waking up (guards against spurious wakeups)
        System.out.println("Less balance; waiting for deposit...");
        try { wait(); } catch (Exception e) { }
    }
    this.amount = this.amount - amount;
    System.out.println("withdraw completed...");
}
synchronized void deposit(int amount) {
    this.amount = this.amount + amount;
    System.out.println("deposit completed... ");
    notify();
}
Complete wait() and notify()
http://deepakmodi2006.blogspot.in/2011/01/wait-and-notify-in-java-wait-and-notify.html
----------------------
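To see the withdraw/deposit methods above in action, here is a minimal, hypothetical driver (the Account class name, starting balance and amounts are invented for this sketch; it uses the while-loop guard discussed above):
class Account {
    private int amount = 0;                             // assumed starting balance for this sketch
    synchronized void withdraw(int amount) {
        while (this.amount < amount) {                  // wait until a deposit makes enough balance available
            System.out.println("Less balance; waiting for deposit...");
            try { wait(); } catch (InterruptedException e) { }
        }
        this.amount = this.amount - amount;
        System.out.println("withdraw completed...");
    }
    synchronized void deposit(int amount) {
        this.amount = this.amount + amount;
        System.out.println("deposit completed...");
        notify();                                       // wake up the waiting withdrawer
    }
    public static void main(String[] args) {
        final Account account = new Account();
        new Thread() { public void run() { account.withdraw(100); } }.start();
        new Thread() { public void run() { account.deposit(150); } }.start();
    }
}
----------------------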
NOTE:
The Java interpreter has a thread "scheduler" that manages all the threads of a program and decides which ones are to be run.
OS maintains a table containing list of Processes and List of Threads belonging to each process. New Processes/Threads gets added to the list.
Some helps taken from here: http://www.javatpoint.com/synchronization-in-java
Monday, February 6, 2017
JOIN in ORACLE
JOIN operations are often the most confusing queries between two tables. However, some join results can also be achieved
using an explicit equality test in a WHERE clause, such as "WHERE t1.col1 = t2.col2".
1. The purpose of a join is to combine data across tables.
2. A join is actually performed by the WHERE clause (or the ON/USING clause) which combines the specified rows of the tables.
3. If a join involves more than two tables, Oracle joins the first two tables based on the join condition, then compares the
result with the next table, and so on.
Type of JOIN Operations:
1 Equi-join or Using clause or On clause
2 Non-Equi join
3 Self join
4 Natural join
5 Cross join
6 Inner join
7 Outer join
Left outer
Right outer
Full outer
---------DB Tables and Data Populations--------------
CREATE TABLE DEPT (
DEPTNO NUMBER(3) NOT NULL,
DNAME VARCHAR2(50) NOT NULL,
LOC VARCHAR2(50) NULL,
CONSTRAINT DEPT_PK PRIMARY KEY (DEPTNO)
);
CREATE TABLE EMP(
EMPNO NUMBER(3) NOT NULL,
ENAME VARCHAR2(50) NOT NULL,
JOB VARCHAR2(30) NOT NULL,
MGR NUMBER(3) NOT NULL,
DEPTNO NUMBER(3),
CONSTRAINT EMP_PK PRIMARY KEY (EMPNO),
CONSTRAINT EMP_FK FOREIGN KEY (DEPTNO) REFERENCES DEPT(DEPTNO)
);
INSERT INTO DEPT VALUES('10','INVENTORY','HYBD');
INSERT INTO DEPT VALUES('20','FINANCE','BGLR');
INSERT INTO DEPT VALUES('30','HR','MUMBAI');
INSERT INTO DEPT VALUES('40','IT','DELHI');
INSERT INTO DEPT VALUES('50','ENGINEERING','BGLR');
INSERT INTO DEPT VALUES('60','PROD_SUPPORT','BGLR');
INSERT INTO EMP VALUES('111','Deepak','Engineer','114','50');
INSERT INTO EMP VALUES('112','Rajesh','Manager','113','20');
INSERT INTO EMP VALUES('113','Chandan','Systems','112','40');
INSERT INTO EMP VALUES('114','Devansh','Analyst','111','10');
INSERT INTO EMP VALUES('115','Saransh','HR_Team','115','30');
INSERT INTO EMP VALUES('116','Mummy','HR_Team','115','30');
---------DB Tables and Data Populations--------------
1. EQUI JOIN, the USING clause and the ON clause give the same result.
EQUI join: a join whose condition uses the equality operator '='.
USING clause: the same join expressed with JOIN ... USING (column).
ON clause: the same join expressed with JOIN ... ON (condition).
Ex: SELECT EMPNO,ENAME,JOB,DNAME,LOC,E.DEPTNO FROM EMP E, DEPT D WHERE E.DEPTNO=D.DEPTNO;      --DEPTNO=60 missing from the result.
Ex: SELECT EMPNO,ENAME,JOB,DNAME,LOC,DEPTNO FROM EMP E JOIN DEPT D USING (DEPTNO);             --DEPTNO=60 missing from the result.
Ex: SELECT EMPNO,ENAME,JOB,DNAME,LOC,E.DEPTNO FROM EMP E JOIN DEPT D ON (E.DEPTNO=D.DEPTNO);   --DEPTNO=60 missing from the result.
2. NON-EQUI JOIN: A join whose condition contains an operator other than '=', such as < or >.
However, you should be careful while using such conditions on a common column.
Ex: SQL> SELECT EMPNO,ENAME,JOB,DNAME,LOC,E.DEPTNO FROM EMP E,DEPT D WHERE E.DEPTNO > D.DEPTNO;
3. SELF JOIN: Joining a table with itself is called a self join.
Ex: SQL> SELECT E1.EMPNO,E2.ENAME,E1.JOB,E2.DEPTNO FROM EMP E1,EMP E2 WHERE E1.EMPNO=E2.MGR;
4. NATURAL JOIN: A natural join compares all the common columns between the two tables (DEPTNO here) and returns only the matching rows, the same as the EQUI join above; the output order is not guaranteed unless an ORDER BY clause is used.
Ex: SQL> SELECT EMPNO,ENAME,JOB,DNAME,LOC,DEPTNO FROM EMP NATURAL JOIN DEPT;   --Same as EQUI JOIN
5. CROSS JOIN: This gives the Cartesian product.
Ex: SQL> SELECT EMPNO,ENAME,JOB,DNAME,LOC FROM EMP CROSS JOIN DEPT;   --Total m*n records: m from EMP, n from DEPT.
6. INNER JOIN: This will display all the records that have matched. Same as EQUI Join.
Ex: SQL> SELECT EMPNO,ENAME,JOB,DNAME,LOC FROM EMP INNER JOIN DEPT USING (DEPTNO);
7. OUTER JOIN: Outer join gives the non-matching records along with matching records.
LEFT OUTER JOIN: A join between two tables with an explicit join clause, returning all matched rows plus the unmatched rows from the first (left) table, with NULLs for the other table's columns.
RIGHT OUTER JOIN: A join between two tables with an explicit join clause, returning all matched rows plus the unmatched rows from the second (right) table, with NULLs for the other table's columns.
FULL OUTER JOIN: A join between two tables with an explicit join clause, returning all matched rows plus the unmatched rows from both tables, with NULLs for the missing side.
LEFT OUTER JOIN:
SELECT EMPNO,ENAME,JOB,DNAME,LOC,E.DEPTNO,D.DEPTNO FROM EMP E LEFT OUTER JOIN DEPT D ON (E.DEPTNO=D.DEPTNO);   --All EMP rows; DEPT columns are NULL where no DEPT row matches.
SELECT EMPNO,ENAME,JOB,DNAME,LOC,E.DEPTNO,D.DEPTNO FROM EMP E, DEPT D WHERE E.DEPTNO=D.DEPTNO(+);              --Same result using Oracle's (+) outer-join syntax.
RIGHT OUTER JOIN:
SELECT EMPNO,ENAME,JOB,DNAME,LOC,E.DEPTNO,D.DEPTNO FROM EMP E RIGHT OUTER JOIN DEPT D ON (E.DEPTNO=D.DEPTNO);  --All DEPT rows (including DEPTNO=60); EMP columns are NULL where no EMP row matches.
SELECT EMPNO,ENAME,JOB,DNAME,LOC,E.DEPTNO,D.DEPTNO FROM EMP E, DEPT D WHERE E.DEPTNO(+)=D.DEPTNO;              --Same result using Oracle's (+) outer-join syntax.
FULL OUTER JOIN: SELECT EMPNO,ENAME,JOB,DNAME,LOC,E.DEPTNO,D.DEPTNO FROM EMP E FULL OUTER JOIN DEPT D ON (E.DEPTNO=D.DEPTNO);
For more details, please refer: http://www.javatpoint.com/oracle-create-table
--------------------------------END------------------------------------
Wednesday, February 1, 2017
BlockingQueue and Producer Consumer Problem
BlockingQueue and sharing data between two threads (the Producer-Consumer problem):
Sharing of data between threads should be minimized, if not prevented completely, as it opens the door to thread-safety bugs.
However, if required, it can be done using a shared object or a shared data structure like a queue. One good API provided by Java
is the concurrent collection BlockingQueue. With it we can easily share data without having to handle thread safety and inter-thread
communication ourselves. BlockingQueue does not allow null elements; put(null) throws NullPointerException.
There are many implementations of this Interface:
ArrayBlockingQueue (Most Used)
DelayQueue
LinkedBlockingQueue (Most Used)
PriorityBlockingQueue (Most Used)
SynchronousQueue
BlockingQueue is a unique collection type which not only stores elements but also supports flow control by
blocking when the queue is full or empty: the take() method blocks if the queue is empty and
the put() method blocks if the queue is full. This property makes BlockingQueue an ideal choice for implementing the
Producer-Consumer design pattern, where one thread inserts elements into the BlockingQueue and another thread consumes them.
All queuing methods use concurrency control and internal locks to perform their operations atomically. Since BlockingQueue also extends
Collection, note that bulk Collection operations like addAll() and containsAll() are not performed atomically unless a BlockingQueue
implementation specifically supports it, so a call to addAll() may fail after inserting only some of the elements. A BlockingQueue can
be bounded or unbounded. A bounded BlockingQueue is initialized with an initial capacity, and a call to put() blocks
if the queue is full, i.e. its size equals its capacity. This bounding nature makes it ideal as a shared queue between
multiple threads, as in most common Producer-Consumer solutions in Java. An unbounded queue is one initialized without a
capacity; by default it is effectively bounded only by Integer.MAX_VALUE.
The most common usage of BlockingQueue is a bounded queue, as shown below:
BlockingQueue<String> bQueue = new ArrayBlockingQueue<String>(2); //Size is 2
bQueue.put("Java");
System.out.println("Item 1 inserted into BlockingQueue");
bQueue.put("JDK");
System.out.println("Item 2 inserted into BlockingQueue");
bQueue.put("J2SE"); //This insertion never completes: put() blocks here because the queue capacity is only 2.
System.out.println("Done");
Output:
Item 1 inserted into BlockingQueue
Item 2 inserted into BlockingQueue
ArrayBlockingQueue and LinkedBlockingQueue are the most common implementations of the BlockingQueue interface.
ArrayBlockingQueue is backed by an array and imposes FIFO ordering: the head of the queue is the oldest element in terms of time and
the tail of the queue is the youngest. ArrayBlockingQueue is a fixed-size bounded buffer, whereas LinkedBlockingQueue is
an optionally bounded queue built on top of linked nodes. In terms of throughput, LinkedBlockingQueue generally provides higher throughput than
ArrayBlockingQueue in Java.
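A tiny sketch of the two construction styles for LinkedBlockingQueue (the class name, capacity and element values are illustrative):
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
public class LinkedQueueDemo {
    public static void main(String[] args) throws InterruptedException {
        BlockingQueue<String> bounded = new LinkedBlockingQueue<String>(100);  // put() blocks once 100 elements are queued
        BlockingQueue<String> unbounded = new LinkedBlockingQueue<String>();   // capacity defaults to Integer.MAX_VALUE
        bounded.put("event-1");
        unbounded.put("event-2");
        System.out.println(bounded.take() + " / " + unbounded.take());
    }
}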
-------------------------------------------------------------
Complete Example:
//BlockingQueueExample.java
package concurrency;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
public class BlockingQueueExample {
    public static void main(String[] args) throws Exception {
        BlockingQueue queue = new ArrayBlockingQueue(2); //Size is 2.
        ProducerNew producer = new ProducerNew(queue);
        ConsumerNew consumer = new ConsumerNew(queue);
        new Thread(producer).start();
        new Thread(consumer).start();
        Thread.sleep(2000);
    }
}
//ProducerNew.java
package concurrency;
import java.util.concurrent.BlockingQueue;
public class ProducerNew implements Runnable {
    protected BlockingQueue queue = null; //Shared queue handed in from BlockingQueueExample
    public ProducerNew(BlockingQueue queue) {
        this.queue = queue;
    }
    public void run() {
        try {
            System.out.println("Inserted: 1");
            queue.put("1");
            //queue.put(null); //Would throw NullPointerException
            Thread.sleep(500);
            System.out.println("Inserted: 2");
            queue.put("2");
            Thread.sleep(500);
            System.out.println("Inserted: 3");
            queue.put("3");
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
    }
}
//ConsumerNew.java
package concurrency;
import java.util.concurrent.BlockingQueue;
public class ConsumerNew implements Runnable {
    protected BlockingQueue queue = null; //Shared queue handed in from BlockingQueueExample
    public ConsumerNew(BlockingQueue queue) {
        this.queue = queue;
    }
    public void run() {
        try {
            System.out.println("Consumed: " + queue.take());
            System.out.println("Consumed: " + queue.take());
            System.out.println("Consumed: " + queue.take());
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
    }
}
----------------------------------------------------------------------------------
Producer Consumer Using Executor Service and BlockingQueue:
//ProducerConsumer_ExecutorService.java
package concurrency;
public class ProducerConsumer_ExecutorService {
    ExecutorServiceThreadPool ex; //This class is defined below.
    public static void main(String[] args) {
        ProducerConsumer_ExecutorService prodconsumer = new ProducerConsumer_ExecutorService();
        prodconsumer.init();
    }
    private void init() {
        ex = new ExecutorServiceThreadPool();
        for (int i = 0; i < 10; i++) {
            ex.addThread(new Producer(i));
            //ex.addThread(new Producer(i)); //Adding more Producers: once the queue is full, put() blocks further insertions.
            ex.addThread(new Consumer());
        }
        ex.finish();
    }
    private class Producer implements Runnable {
        int data;
        public Producer(int datatoput) {
            data = datatoput;
        }
        @Override
        public void run() {
            System.out.println("Inserting Element " + data);
            try {
                ex.queue.put(data);
                Thread.sleep(100);
            } catch (Exception e) {
                e.printStackTrace();
            }
        }
    }
    private class Consumer implements Runnable {
        int datatake;
        @Override
        public void run() {
            try {
                datatake = ex.queue.take();
                System.out.println("Fetching Element " + datatake);
                Thread.sleep(100);
            } catch (Exception e) {
                e.printStackTrace();
            }
        }
    }
}
//ExecutorServiceThreadPool.java
package concurrency;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import java.util.logging.Level;
import java.util.logging.Logger;
public class ExecutorServiceThreadPool {
    final BlockingQueue<Integer> queue = new ArrayBlockingQueue<Integer>(10); //Typed queue so take() unboxes cleanly to int in Consumer
    ExecutorService executor = Executors.newFixedThreadPool(10);
    public void addThread(Runnable r) {
        Future f = executor.submit(r);
        try {
            System.out.println("Status: " + f.get()); //null means the task completed successfully.
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
    public void finish() {
        try {
            executor.shutdown();
            executor.awaitTermination(50, TimeUnit.SECONDS);
        } catch (InterruptedException ex) {
            Logger.getLogger(ExecutorServiceThreadPool.class.getName()).log(Level.SEVERE, null, ex);
        }
        System.out.println("Finished all threads");
    }
}
------------------------------------------------------------------
Multithreading Issues and Solutions in Java:
Race-Condition
Thread Safety
DeadLock
------------------------------------------------------------------------
Race condition: In Java this is a type of concurrency bug introduced into your program by the parallel execution
of your code by multiple threads at the same time. Since Java is a multi-threaded programming language, the risk of race
conditions is higher in Java, which requires a clear understanding of what causes a race condition and how to avoid it.
Race conditions come in 2 types: "check and act" and "read modify write".
The solution is: synchronization.
Check-and-act race condition:
If the getInstance() method below is called from two threads simultaneously, then it is possible that while one thread is initializing the
singleton after the null check, another thread still sees the value of the _instance reference as null (especially if the object takes a long
time to initialize), enters the critical section as well, and getInstance() ends up returning two separate instances of the Singleton.
public Singleton getInstance(){
    if(_instance == null){ //race condition if two threads see _instance == null
        _instance = new Singleton();
    }
    return _instance;
}
Solution: Use Synchronized in getInstance()
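The simplest fix is to declare getInstance() as synchronized. A common refinement is double-checked locking with a volatile field, sketched below using the same _instance name as the snippet above (this is a sketch of the general idiom, not code from the original post):
class Singleton {
    private static volatile Singleton _instance;   // volatile so a half-constructed instance is never observed
    private Singleton() { }
    public static Singleton getInstance() {
        if (_instance == null) {                   // first check without locking (fast path)
            synchronized (Singleton.class) {
                if (_instance == null) {           // second check while holding the lock
                    _instance = new Singleton();
                }
            }
        }
        return _instance;
    }
}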
Read-modify-write race condition:
This comes from improper synchronization of non-atomic operations, or from a combination of two individually atomic operations that is not atomic
when combined. Below, both contains() and put() are atomic, but the code can still produce a race condition since the two operations together are
not atomic. Suppose thread T1 checks the condition and goes inside the if block; now the CPU switches from T1 to thread T2, which also checks the
condition and goes inside the if block. Now we have two threads inside the if block, which results in either T1 overwriting T2's value or vice-versa,
depending on which thread gets the CPU. To fix this race condition in Java, you need to wrap this code inside a synchronized
block, which makes the two operations atomic together, because no other thread can enter the synchronized block while one is already inside.
if(!hashtable.contains(key)){
    hashtable.put(key, value);
}
Solution: Move these two lines of code inside a synchronized block.
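A minimal sketch of that fix (the wrapper class and method names are hypothetical; containsKey() is used for the key check):
class SafeCache {
    private final java.util.Hashtable<String, String> hashtable = new java.util.Hashtable<String, String>();
    // Both operations execute under the same lock, so no thread can interleave between the check and the put.
    public void putIfMissing(String key, String value) {
        synchronized (hashtable) {
            if (!hashtable.containsKey(key)) {
                hashtable.put(key, value);
            }
        }
    }
}
With ConcurrentHashMap, putIfAbsent() gives the same guarantee without an explicit lock, as discussed in the ConcurrentHashMap section above.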
------------------------------------------------------------------------
Thread safety: Thread-safe code in Java refers to code which can safely be used or shared in a concurrent or
multi-threaded environment and still behave as expected. Any code, class or object which can behave differently from its
contract in a concurrent environment is not thread-safe.
Ex: //Non thread-safe class
public class Counter {
    private int count;
    //This method is not thread-safe because ++ is not an atomic operation
    public int getCount(){
        return count++;
    }
}
The above example is not thread-safe because ++ (the increment operator) is not an atomic operation: it breaks down into
read, update and write operations. If multiple threads call getCount() at approximately the same time, these three operations
may interleave; for example, while thread 1 is updating the value, thread 2 reads and still gets the old value,
which eventually lets thread 2 overwrite thread 1's increment, and one count is lost.
How to make the above code thread-safe in Java:
1) Use the synchronized keyword and lock the getCount() method so that only one thread can execute it at a time, which
removes the possibility of interleaving.
2) Use AtomicInteger, which makes the ++ operation atomic; since atomic operations are thread-safe, this saves the cost of external synchronization.
//Complete Code, Thread-Safe Example
public class Counter {
    private int count;
    //This method is thread-safe now because of locking and synchronization
    public synchronized int getCount(){
        return count++;
    }
}
OR
import java.util.concurrent.atomic.AtomicInteger;
public class Counter {
    AtomicInteger atomicCount = new AtomicInteger(0);
    //This method is thread-safe because count is incremented atomically
    public int getCountAtomically(){
        return atomicCount.incrementAndGet();
    }
}
//Java source code of the AtomicInteger.incrementAndGet() method
public final int incrementAndGet() {
    for (;;) {
        int current = get();
        int next = current + 1;
        if (compareAndSet(current, next)) //Atomic compare-and-set operation.
            return next;
    }
}
NOTE about Thread-Safety:
1) Immutable objects are by default thread-safe because their states can't be modified once created. String is immutable, hence inherently thread-safe.
2) Read only or final variables in Java are also thread-safe.
3) Locking is one way of achieving thread-safety in Java.
4) Static variables, if not synchronized properly, become a major cause of thread-safety issues.
5) Examples of thread-safe classes in Java: Vector, Hashtable, ConcurrentHashMap, String etc.
6) Atomic operations in Java are thread-safe, e.g. reading a 32-bit int from memory: because it's an atomic operation, it can't interleave with another thread's write.
7) Local variables are also thread-safe because each thread has its own copy, so using local variables is a good way to write thread-safe code in Java.
8) In order to avoid thread-safety issues, minimize the sharing of objects between multiple threads.
9) The volatile keyword in Java can be used to instruct threads not to cache a variable's value and to always read it from main memory.
10) Sometimes the JVM plays spoiler, since it can reorder code for optimization; code which looks sequential and runs fine in a development
environment is not guaranteed to behave the same in production, because a server-class JVM may apply more aggressive optimization and reordering,
which can cause thread-safety issues. Declaring the affected variables volatile is the solution in such cases.
Remember the JVM tuning flag: -server
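A classic illustration of point 9 is a stop flag shared between threads; a rough sketch (the class and method names are invented for this example):
class Worker implements Runnable {
    private volatile boolean running = true;   // volatile: the write made by stop() is visible to the worker thread
    public void run() {
        while (running) {
            // do a unit of work
        }
        System.out.println("Worker stopped");
    }
    public void stop() {
        running = false;                       // without volatile, the loop above might never see this write
    }
    public static void main(String[] args) throws InterruptedException {
        Worker w = new Worker();
        Thread t = new Thread(w);
        t.start();
        Thread.sleep(100);
        w.stop();                              // the worker thread sees the flag change and exits its loop
        t.join();
    }
}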
------------------------------------------------------------------------
Deadlock: When two or more threads are waiting for each other to release a lock and get stuck forever, the situation is called deadlock.
It can only happen in a multi-threaded program.
How to check if code has the possibility of deadlock: look for nested synchronized blocks, one synchronized method calling another
synchronized method, or attempts to acquire locks on different objects in different orders.
How to detect a deadlock at runtime: when the application appears stuck, take a thread dump; on Linux you can do this with the "kill -3" command.
This displays the status of all threads, and you can see which thread is locked on which object. Another way is to use jconsole, which also
shows exactly which threads are blocked and on which object.
Example of Deadlock:
package threads;
//Java program to create a deadlock by imposing circular wait.
public class DeadLockDemo {
    //This method requests two locks: first String.class and then Integer.class
    public void method1() {
        synchronized (String.class) {
            System.out.println("Acquired lock on String.class object");
            synchronized (Integer.class) { //nested synchronized block
                System.out.println("Acquired lock on Integer.class object");
            }
        }
    }
    //This method requests the same two locks but in exactly the opposite order, i.e. first Integer and then String.
    //This creates a potential deadlock: if one thread holds the String lock and another holds the Integer lock, they wait
    //for each other forever.
    public void method2() {
        synchronized (Integer.class) {
            System.out.println("Acquired lock on Integer.class object");
            synchronized (String.class) {
                System.out.println("Acquired lock on String.class object");
            }
        }
    }
}
/*
If method1() and method2() both will be called by two or many threads, there is a good chance of deadlock because if
Thread-1 acquires lock on String object while executing method1() and Thread-2 acquires lock on Integer object while
executing method2() both will be waiting for each other to release lock on Integer and String to proceed further which will never happen.
*/
Solution: Make the order of locks in the nested synchronized blocks the same in both methods. Then, if thread A acquires the lock on the
String object first, thread B cannot acquire the Integer lock before it also acquires the String lock, so thread B simply waits for thread A
to finish; no circular wait can form because both threads always request the locks in the same order.
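For example, method2() of the DeadLockDemo class above could be rewritten to take the locks in the same order as method1(), which removes the circular wait (a sketch of the fix, replacing only that method):
public void method2() {
    synchronized (String.class) {              // same outer lock as method1()
        System.out.println("Acquired lock on String.class object");
        synchronized (Integer.class) {         // same inner lock as method1()
            System.out.println("Acquired lock on Integer.class object");
        }
    }
}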
------------------------------------------------------------------------
//References from java67 and javarevisited.