Sunday 24 September 2017

Deadlock in Multithreading in Java


Deadlock is a situation where one thread (t1) is waiting for the release of a lock (say, on str1) acquired by another thread (t2), while t2 is in turn waiting for the release of a lock (say, on str2) acquired by t1.



This creates a circular wait in which no thread can ever acquire the lock it needs, and the program hangs in this state.

Deadlock is a tricky situation: it is often hard to reproduce, but it must be handled with care.

I will demonstrate a couple of scenarios: one where the code looks capable of deadlocking but works fine, and another where a deadlock occurs for sure and the application remains hung.

Demo 1 : A situation that looks like a deadlock but works well and doesn't hang the application.

public class MultiThreading {
    String s1 = "John";
    String s2 = "Cena";
    // Note: synchronizing on interned String literals is risky in real code,
    // since they are shared JVM-wide; it is done here only for demonstration.
    Thread t1 = new Thread(() -> {
        synchronized (s1) {
            synchronized (s2) {
                System.out.println(s1 + " " + s2);
            }
        }
    });
    Thread t2 = new Thread(() -> {
        synchronized (s2) {
            synchronized (s1) {
                System.out.println(s2 + " " + s1);
            }
        }
    });

    public static void main(String[] args) {
        MultiThreading mt = new MultiThreading();
        mt.t1.start();
        mt.t2.start();
    }
}

Output : 

John Cena
Cena John


Demo 2 : A deadlock occurs and the application hangs once a while loop is added to the code above.

public class MultiThreading {
    String s1 = "John";
    String s2 = "Cena";
    Thread t1 = new Thread(() -> {
        while (true) {  // note the addition of this loop
            synchronized (s1) {
                synchronized (s2) {
                    System.out.println(s1 + " " + s2);
                }
            }
        }
    });
    Thread t2 = new Thread(() -> {
        while (true) {  // note the addition of this loop
            synchronized (s2) {
                synchronized (s1) {
                    System.out.println(s2 + " " + s1);
                }
            }
        }
    });

    public static void main(String[] args) {
        MultiThreading mt = new MultiThreading();
        mt.t1.start();
        mt.t2.start();
    }
}

Output :  [prints the following, then the application hangs]

John Cena
John Cena
John Cena
John Cena
John Cena
John Cena


Deadlock is most likely to occur where multiple threads repeatedly acquire the same shared locks inside a loop, in different orders.

How to avoid deadlock ?

Answer : A mutex is the saviour in this kind of situation. A mutex is a semaphore with a permit count of 1: it allows only one thread at a time into the critical section, and any other thread must acquire the permit before using the shared resources. Because both nested lock acquisitions now happen under a single permit, the circular wait can no longer form, and updates to the shared state stay consistent and thread safe.

If we run the same deadlocking code above with such a mutex, it runs without hanging the application.

import java.util.concurrent.Semaphore;

public class MultiThreading {
    String s1 = "John";
    String s2 = "Cena";
    Semaphore semaphore = new Semaphore(1);
    Thread t1 = new Thread(() -> {
        while (true) {
            try {
                semaphore.acquire(); // only one thread may enter the critical section
                try {
                    synchronized (s1) {
                        synchronized (s2) {
                            System.out.println(s1 + " " + s2);
                        }
                    }
                } finally {
                    semaphore.release(); // released only after a successful acquire
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                return;
            }
        }
    });
    Thread t2 = new Thread(() -> {
        while (true) {
            try {
                semaphore.acquire();
                try {
                    synchronized (s2) {
                        synchronized (s1) {
                            System.out.println(s2 + " " + s1);
                        }
                    }
                } finally {
                    semaphore.release();
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                return;
            }
        }
    });

    public static void main(String[] args) {
        MultiThreading mt = new MultiThreading();
        mt.t1.start();
        mt.t2.start();
    }
}

Output : [without hanging :) and without deadlock :) ]

Cena John
Cena John
Cena John
Cena John
Cena John
Cena John
Cena John
Cena John
Cena John 
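A classic alternative fix, not used above, is consistent lock ordering: if every thread acquires the locks in the same order, the circular wait can never form. A minimal sketch (class and lock names are illustrative):

```java
public class LockOrdering {
    static final Object lockA = new Object();
    static final Object lockB = new Object();
    static final StringBuilder log = new StringBuilder();

    static void task(String name) {
        // Both threads take lockA first, then lockB: no circular wait is possible.
        synchronized (lockA) {
            synchronized (lockB) {
                synchronized (log) { log.append(name); }
            }
        }
    }

    public static void main(String[] args) throws InterruptedException {
        Thread t1 = new Thread(() -> task("1"));
        Thread t2 = new Thread(() -> task("2"));
        t1.start(); t2.start();
        t1.join(); t2.join();
        System.out.println("done: " + log); // always terminates; never deadlocks
    }
}
```

Lock ordering needs no extra object like a semaphore, but it requires discipline: every code path that takes both locks must agree on the order.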

Enjoy the coding and multithreading!!!!

Wednesday 2 August 2017

LRU caching implementation in Java


Designing a Least Recently Used (LRU) cache is a common software-design problem: when the cache reaches its capacity, the least recently used element is evicted.

Caching helps with faster retrieval and ideally gives O(1), i.e. constant-time, lookups.

A Map is the natural structure for achieving O(1) lookups; to track recency of use, it can be combined with an ordering structure such as a doubly linked list, or a queue or stack with constant-time add/remove operations.

To provide a solution, I have used a LinkedHashMap, whose iterator returns keys in insertion order: as long as every access re-inserts its key, the first element is always the LRU element in the map.

I will demonstrate the same using the following code snippet :

import java.util.Iterator;
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.Map.Entry;

/**
 * @author sachin.srivastava
 * tek9g.blogspot.com
 */
public class LRUCaching<T> {
    private int capacity;
    private LinkedHashMap<String, T> map;

    public LRUCaching(int capacity) {
        this.capacity = capacity;
        map = new LinkedHashMap<>();
    }

    public T get(String key) {
        T t = this.map.get(key);
        if (t == null) {
            return null;
        } else {
            this.set(key, t); // re-insert so the key moves to the most-recent position
            return t;
        }
    }

    public void set(String key, T value) {
        if (this.map.containsKey(key)) {
            this.map.remove(key);
        } else if (this.map.size() == this.capacity) {
            // the first key in insertion order is the least recently used
            Iterator<String> it = this.map.keySet().iterator();
            it.next();
            it.remove();
        }
        this.map.put(key, value);
    }

    public static <T> void printMap(Map<String, T> map) {
        System.out.println("\n--printing the map elements--\n");
        for (Entry<String, T> entry : map.entrySet()) {
            System.out.println("key=" + entry.getKey() + " and value=" + entry.getValue());
        }
    }

    public static void main(String[] args) {
        LRUCaching<Integer> lru = new LRUCaching<>(5); // capacity of 5 elements
        lru.set("a", 1);
        lru.set("b", 2);
        lru.set("c", 3);
        lru.set("d", 4);
        lru.set("e", 5);
        printMap(lru.map);
        System.out.println("\nget key=a and value=" + lru.get("a")); // moves key a to the most-recent end; b becomes LRU
        printMap(lru.map);
        // setting an element beyond the capacity
        lru.set("f", 6); // removes the least recently used key b
        printMap(lru.map);
    }
}


Output : 


--printing the map elements--

key=a and value=1
key=b and value=2
key=c and value=3
key=d and value=4
key=e and value=5

get key=a and value=1

--printing the map elements--

key=b and value=2
key=c and value=3
key=d and value=4
key=e and value=5
key=a and value=1

--printing the map elements--

key=c and value=3
key=d and value=4
key=e and value=5
key=a and value=1
key=f and value=6

--------------------------------------

In the above example the map never contains more elements than its capacity; as it reaches the limit, it removes the Least Recently Used element. Whenever an element is read via get, it is re-inserted and so ranks as the most recently used: after getting key 'a', key 'b' becomes the LRU element, so when the new key 'f' is added at full capacity, 'b' is removed from the map to make room for it.
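As an aside, LinkedHashMap can also do this bookkeeping itself: its three-argument constructor accepts an accessOrder flag, and overriding removeEldestEntry enables automatic eviction. A minimal sketch (the class name BuiltInLru is illustrative):

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class BuiltInLru<K, V> extends LinkedHashMap<K, V> {
    private final int capacity;

    public BuiltInLru(int capacity) {
        super(16, 0.75f, true); // accessOrder = true: get() moves the entry to the end
        this.capacity = capacity;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        return size() > capacity; // evict the least recently used entry automatically
    }

    public static void main(String[] args) {
        BuiltInLru<String, Integer> lru = new BuiltInLru<>(3);
        lru.put("a", 1); lru.put("b", 2); lru.put("c", 3);
        lru.get("a");    // "a" becomes most recently used
        lru.put("d", 4); // evicts "b", the LRU entry
        System.out.println(lru.keySet()); // prints [c, a, d]
    }
}
```

With access order enabled there is no need to remove and re-insert keys by hand; the map reorders entries on every get.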

Enjoy the coding!!!!

Tuesday 1 August 2017

Producer and Consumer problem in Java


To solve the Producer and Consumer problem in a multithreaded environment, I have used a Queue: the Producer produces a counter value and the Consumer consumes it:


import java.util.LinkedList;
import java.util.Queue;

/**
 * @author sachin.srivastava
 * tek9g.blogspot.com
 */
public class ProducerConsumer {

    static Queue<Integer> queue = new LinkedList<>();
    private static final int capacity = 10;
    private static int counter = 1;

    public static void main(String[] args) {
        // starting the producer and consumer
        new Producer().start();
        new Consumer().start();
    }

    static class Producer extends Thread {
        public void run() {
            for (;;) {
                try {
                    Thread.sleep(2000);
                    synchronized (queue) {
                        if (queue.size() < capacity) {
                            queue.add(counter);
                            System.out.println("queue added counter produced=" + counter);
                            counter++;
                        } else {
                            System.out.println("queue is full so can't produce new element");
                        }
                    }
                } catch (Exception e) {
                    e.printStackTrace();
                }
            }
        }
    }

    static class Consumer extends Thread {
        public void run() {
            for (;;) {
                try {
                    Thread.sleep(2000);
                    synchronized (queue) {
                        if (!queue.isEmpty()) {
                            Integer head = queue.remove();
                            System.out.println("queue pop counter consumer=" + head);
                        } else {
                            System.out.println("queue is empty so nothing to pop for consumers");
                        }
                    }
                } catch (Exception e) {
                    e.printStackTrace();
                }
            }
        }
    }
}



Output : 


queue added counter produced=1
queue pop counter consumer=1
queue added counter produced=2
queue pop counter consumer=2
queue added counter produced=3
queue pop counter consumer=3
queue added counter produced=4
queue pop counter consumer=4
queue added counter produced=5
queue pop counter consumer=5
queue added counter produced=6
queue pop counter consumer=6
queue added counter produced=7
queue pop counter consumer=7
queue added counter produced=8
queue pop counter consumer=8
queue added counter produced=9
queue pop counter consumer=9
queue added counter produced=10
queue pop counter consumer=10
queue added counter produced=11
queue pop counter consumer=11
queue added counter produced=12
queue pop counter consumer=12
queue added counter produced=13
queue pop counter consumer=13
queue added counter produced=14
queue pop counter consumer=14
queue added counter produced=15
queue pop counter consumer=15
queue added counter produced=16
queue pop counter consumer=16
queue added counter produced=17
queue pop counter consumer=17
queue added counter produced=18
queue pop counter consumer=18
queue added counter produced=19
queue pop counter consumer=19
queue added counter produced=20
queue pop counter consumer=20
queue added counter produced=21
queue pop counter consumer=21
queue added counter produced=22
queue pop counter consumer=22
queue added counter produced=23
queue pop counter consumer=23
queue added counter produced=24
queue pop counter consumer=24
queue added counter produced=25
queue pop counter consumer=25
queue added counter produced=26
queue pop counter consumer=26
queue added counter produced=27
queue pop counter consumer=27
queue added counter produced=28
queue pop counter consumer=28
queue added counter produced=29
queue pop counter consumer=29
--------


As the output shows, the producer and consumer alternate: producer 1, then consumer 1, then producer 2, then consumer 2, and so on, so the queue always remains in a consistent, thread-safe state.
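For comparison, java.util.concurrent.BlockingQueue performs this hand-off out of the box: put() blocks while the queue is full and take() blocks while it is empty, so no explicit synchronized block or sleep is needed. A minimal sketch with a bounded queue (class name is illustrative):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class BlockingQueueDemo {
    static final List<Integer> consumed = new ArrayList<>();

    public static void main(String[] args) throws InterruptedException {
        BlockingQueue<Integer> queue = new ArrayBlockingQueue<>(10); // bounded capacity
        Thread producer = new Thread(() -> {
            try {
                for (int i = 1; i <= 5; i++) {
                    queue.put(i); // blocks if the queue is full
                    System.out.println("produced=" + i);
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        Thread consumer = new Thread(() -> {
            try {
                for (int i = 0; i < 5; i++) {
                    int head = queue.take(); // blocks if the queue is empty
                    consumed.add(head);
                    System.out.println("consumed=" + head);
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        producer.start();
        consumer.start();
        producer.join();
        consumer.join();
    }
}
```

The blocking calls replace the manual size checks and the "queue is full/empty" branches of the hand-written version.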

Enjoy the coding  !!!!  

Monday 24 July 2017

Graphs - BFS and DFS


BFS  -  Breadth First Search

DFS  - Depth First Search


import java.util.Arrays;
import java.util.LinkedList;
import java.util.Queue;
import java.util.Stack;

/**
 * @author sachin.srivastava
 * @tek9g.blogspot.com
 * @github.com/sks1130/
 */
public class Graph_BFS_DFS {

    private static int numberOfNodes;
    private static int[][] adjMatrix;
    private static boolean[] visited;

    public static void main(String[] args) {
        numberOfNodes = 9;
        visited = new boolean[numberOfNodes];
        adjMatrix = new int[][] { { 0, 1, 0, 1, 0, 0, 0, 0, 1 },
                                  { 1, 0, 0, 0, 0, 0, 0, 1, 0 },
                                  { 0, 0, 0, 1, 0, 1, 0, 1, 0 },
                                  { 1, 0, 1, 0, 1, 0, 0, 0, 0 },
                                  { 0, 0, 0, 1, 0, 0, 0, 0, 1 },
                                  { 0, 0, 1, 0, 0, 0, 1, 0, 0 },
                                  { 0, 0, 0, 0, 0, 1, 0, 0, 0 },
                                  { 0, 1, 1, 0, 0, 0, 0, 0, 0 },
                                  { 1, 0, 0, 0, 1, 0, 0, 0, 0 } };

        for (int i = 0; i < numberOfNodes; i++) {
            System.out.println(Arrays.toString(adjMatrix[i]));
        }
        BFS(0);
        DFS(0);
    }

    public static void BFS(int root) {
        if (root >= numberOfNodes) {
            System.out.println("input node is beyond range");
            return;
        }
        System.out.println("---BFS--------");
        Queue<Integer> queue = new LinkedList<>();
        queue.add(root);
        visited[root] = true;
        System.out.println("visited node-->" + root + " ");
        while (queue.size() != 0) {
            int head = queue.remove();
            int child = -1;
            while ((child = getUnvisitedNode(head)) != -1) {
                queue.add(child);
                visited[child] = true;
                System.out.println("visited node-->" + child + " ");
            }
        }
        clearAllVisitedNodes();
    }

    public static void DFS(int root) {
        if (root >= numberOfNodes) {
            System.out.println("input node is beyond range");
            return;
        }
        System.out.println("---DFS--------");
        Stack<Integer> stack = new Stack<>();
        stack.push(root);
        visited[root] = true;
        System.out.println("visited node-->" + root + " ");
        while (!stack.isEmpty()) {
            int head = stack.peek();
            int child = -1;
            if ((child = getUnvisitedNode(head)) != -1) {
                stack.push(child);
                visited[child] = true;
                System.out.println("visited node-->" + child + " ");
            } else {
                stack.pop();
            }
        }
        clearAllVisitedNodes();
    }

    public static void clearAllVisitedNodes() {
        for (int i = 0; i < visited.length; i++) {
            visited[i] = false;
        }
    }

    public static int getUnvisitedNode(int n) {
        for (int i = 0; i < adjMatrix[n].length; i++) {
            if (adjMatrix[n][i] == 1 && !visited[i]) {
                return i;
            }
        }
        return -1;
    }
}



Output

[0, 1, 0, 1, 0, 0, 0, 0, 1]
[1, 0, 0, 0, 0, 0, 0, 1, 0]
[0, 0, 0, 1, 0, 1, 0, 1, 0]
[1, 0, 1, 0, 1, 0, 0, 0, 0]
[0, 0, 0, 1, 0, 0, 0, 0, 1]
[0, 0, 1, 0, 0, 0, 1, 0, 0]
[0, 0, 0, 0, 0, 1, 0, 0, 0]
[0, 1, 1, 0, 0, 0, 0, 0, 0]
[1, 0, 0, 0, 1, 0, 0, 0, 0]

---BFS--------
visited node-->0 
visited node-->1 
visited node-->3 
visited node-->8 
visited node-->7 
visited node-->2 
visited node-->4 
visited node-->5 
visited node-->6 


---DFS--------
visited node-->0 
visited node-->1 
visited node-->7 
visited node-->2 
visited node-->3 
visited node-->4 
visited node-->8 
visited node-->5 
visited node-->6 
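DFS can also be written recursively, which is often the shorter form. A minimal sketch on a small 4-node graph (a hypothetical matrix, not the one above):

```java
import java.util.ArrayList;
import java.util.List;

public class RecursiveDfs {
    static int[][] adj = {
        { 0, 1, 1, 0 },
        { 1, 0, 0, 1 },
        { 1, 0, 0, 0 },
        { 0, 1, 0, 0 }
    };
    static boolean[] visited = new boolean[adj.length];
    static List<Integer> order = new ArrayList<>();

    static void dfs(int node) {
        visited[node] = true;
        order.add(node);
        for (int i = 0; i < adj[node].length; i++) {
            if (adj[node][i] == 1 && !visited[i]) {
                dfs(i); // recurse into each unvisited neighbour in turn
            }
        }
    }

    public static void main(String[] args) {
        dfs(0);
        System.out.println(order); // prints [0, 1, 3, 2]
    }
}
```

The call stack plays the role of the explicit Stack in the iterative version; for very deep graphs the iterative form avoids the risk of a StackOverflowError.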


Enjoy the coding!!!!

Sunday 25 June 2017

Checking whether a number is a product of two prime numbers

If m and n are positive integers and n is a divisor (factor) of m, then there exists a non-zero integer p such that:

n * p = m, where n, p, m > 0 and n | m (the usual notation for "n divides m")

Case : We need to check whether a number is a product of two prime numbers, i.e. find n and p such that n * p = m with both n and p prime.

E.g.,

Input = 35 then output = true [35 = 7 * 5, both prime]

Input = 21 then output = true [21 = 7 * 3]

Input = 20 then output = false [20 = 2 * 10 or 4 * 5, not a product of two primes]

Code snippet : 

public static boolean checkMultiplierOfPrime(int n) {
    if (n < 6) {
        return false; // the smallest number satisfying the condition is 6 = 2 * 3
    }
    int k = (int) Math.sqrt(n); // important: only divisors up to sqrt(n) need checking
    for (int i = k; i >= 2; i--) {
        if (n % i == 0) {
            int j = n / i;
            if (isPrimeNumber(i) && isPrimeNumber(j)) {
                return true;
            }
        }
    }
    return false;
}

public static boolean isPrimeNumber(int number) {
    if (number <= 1) {
        return false;
    }
    for (int i = 2; i <= number / 2; i++) {
        if (number % i == 0) {
            return false;
        }
    }
    return true;
}


Time Complexity : the cost is dominated by isPrimeNumber, which iterates from 2 to number/2, i.e. O(n/2), meaning O(n) per check; since a number has only a few divisors, the overall complexity is roughly O(n).
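As a side note, the primality test itself can be tightened: trial division only needs to go up to √number, which reduces isPrimeNumber from O(n) to O(√n). A minimal sketch (class name is illustrative):

```java
public class FastPrime {
    // Trial division up to sqrt(number): O(sqrt(n)) instead of O(n).
    static boolean isPrimeNumber(int number) {
        if (number <= 1) {
            return false;
        }
        for (int i = 2; (long) i * i <= number; i++) {
            if (number % i == 0) {
                return false; // found a divisor, so not prime
            }
        }
        return true;
    }

    public static void main(String[] args) {
        System.out.println(isPrimeNumber(7));  // true
        System.out.println(isPrimeNumber(35)); // false
    }
}
```

The bound works because any composite number has at least one divisor no greater than its square root.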

Enjoy the coding !!!!!!
       

Sunday 4 June 2017

Aggregation Framework - finding & removing the duplicate elements in MongoDB

Given a collection named product having the following documents:


{ "_id" : NumberInt("1"),"account" : "abc","vendor" : "amazon","category" : "mobile"},
{"_id" : NumberInt("2"), "account" : "abc","vendor" : "overstock","category" : "mobile"},
{"_id" : NumberInt("3"),"account" : "xyz","vendor" : "fashionStreet","category" : "fashion"},
{"_id" : NumberInt("4"),"account" : "pqr","vendor" : "foodTruck","category" : "food"},
{"_id" : NumberInt("5"),"account" : "abc","vendor" : "amazon", "category" : "mobile"}

Notice that duplicate documents exist in the collection under the composite key (account + vendor + category). So how do we identify the duplicates in a collection, and how do we remove them in MongoDB?

This problem is easily solved using the Aggregation pipeline framework, which provides the $group operator to group on a compound (composite) key, along with the $match operator to filter on a criterion.

db.product.aggregate([
    { $group: {
        _id: { account: "$account", vendor: "$vendor", category: "$category" },
        count: { $sum: 1 }
    }},
    { $match: {
        count: { $gt: 1 }
    }},
    { $sort: {
        count: -1
    }}
]);

The query above finds all duplicate documents whose account, vendor and category values are the same; its output, in descending order of count, is as follows:

Output

{
"_id" : {
"account" : "abc",
"vendor" : "amazon",
"category" : "mobile"
},
"count" : 2
}

Note that _id is made a composite of the 3 fields account, vendor and category; in general,

_id :  { key1 : "$key1",  key2 : "$key2", key3 : "$key3", key4 : "$key4", key5 : "$key5" }

and count gives the number of duplicates of each document in the collection.

With the above example we have identified the duplicate documents in the collection, so the next question arises: how do we remove the duplicates from a collection in MongoDB?

var duplicates = [];
db.product.aggregate([
  { $group: { 
    _id: { account: "$account" , vendor : "$vendor", category : "$category"}, // can be grouped on multiple properties/keys/columns 
    dups: { "$addToSet": "$_id" }, 
    count: { "$sum": 1 } 
  }}, 
  { $match: { 
    count: { "$gt": 1 }    // Duplicates considered as count greater than one
  }}
])             
.forEach(function(doc) {
    doc.dups.shift();      // First element skipped for deleting and important line here
    doc.dups.forEach( function(dupId){ 
        duplicates.push(dupId);   // Getting all duplicate ids
        }
    )    
});

// If you want to Check all "_id" which you are deleting
printjson(duplicates); 

Output : 

[ NumberInt("1") ]



Now we have the array of duplicate "_id" values, as shown in the output of the above query.
To remove all the duplicates from the collection, execute the following:

// Remove all duplicates in one go    
db.product.remove({_id:{$in:duplicates}});

db.product.find({});

Output

{"_id" : NumberInt("2"), "account" : "abc","vendor" : "overstock","category" : "mobile"},
{"_id" : NumberInt("3"),"account" : "xyz","vendor" : "fashionStreet","category" : "fashion"},
{"_id" : NumberInt("4"),"account" : "pqr","vendor" : "foodTruck","category" : "food"},
{"_id" : NumberInt("5"),"account" : "abc","vendor" : "amazon", "category" : "mobile"}


Now all the records in the collection are unique.

Be careful when applying an index on the composite fields with {unique: true, dropDups: true}.

Suppose in the above scenario we had applied these options; then it would throw an error such as:


{
    "createdCollectionAutomatically" : false,
    "numIndexesBefore" : 1,
    "errmsg" : "exception: E11000 duplicate key error index",
    "code" : 11000,
    "ok" : 0
}
Note that dropDups has been removed in the MongoDB 3.x release; it had the severe disadvantage of removing an arbitrary one of the duplicate documents.

Even if you apply ensureIndex() or createIndex() while the collection still has duplicate keys, it will fail with "exception: E11000 duplicate key error index". So it is important to first remove the duplicate documents with the aggregation example above, and then apply {unique: true} to the composite index created on the fields {account, vendor, category}.
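For reference, once the duplicates are removed, the unique compound index on the example fields would be created roughly as follows (field names as in the example above):

```
db.product.createIndex(
    { account: 1, vendor: 1, category: 1 },
    { unique: true }
);
```

After this, any insert that repeats an existing (account, vendor, category) combination is rejected with the E11000 duplicate key error.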


Enjoy the MongoDB  and the brilliant  Aggregation Framework !!!!

Enjoy  the coding with tek9g!!!