Reduce lock granularity – Concurrency optimization

Performance is very important in high-load, multithreaded applications, and developers must be aware of concurrency issues in order to achieve it. When we need concurrency we usually have a resource that must be shared by two or more threads. In such cases access to the resource must be synchronized: only one thread at a time will acquire the lock on the resource, and all the other threads that want the lock will block (this is lock contention). This synchronization does not come for free; both the JVM and the OS consume resources in order to provide you with a valid concurrency model. The three most fundamental factors that make a concurrency implementation resource intensive are:

  • Context switching
  • Memory synchronization
  • Blocking

In order to write well-optimized synchronization code you have to be aware of these three factors and how to decrease them. There are many things you must watch out for when writing such code. In this article I will show you a technique to decrease these factors by reducing lock granularity.

Starting with the basic rule: Do not hold the lock longer than necessary.

Do whatever you need to do before acquiring the lock, use the lock only to act on the synchronized resource, and release it immediately. See a simple example:

import java.util.HashMap;
import java.util.Map;

public class HelloSync {
	private Map<String, String> dictionary = new HashMap<>();
	public synchronized void borringDeveloper(String key, String value) {
		long startTime = (new java.util.Date()).getTime();
		value = value + "_" + startTime;
		dictionary.put(key, value);
		System.out.println("I did this in " + ((new java.util.Date()).getTime() - startTime) + " milliseconds");
	}
}

In this example we violate the basic rule: we create two Date objects, call System.out.println(), and perform several String concatenations, all while holding the lock. The only action that actually needs synchronization is dictionary.put(key, value). Let's alter the code and move the synchronization from method scope to that single line. A slightly better version is this:

import java.util.HashMap;
import java.util.Map;

public class HelloSync {
	private Map<String, String> dictionary = new HashMap<>();
	public void borringDeveloper(String key, String value) {
		long startTime = (new java.util.Date()).getTime();
		value = value + "_" + startTime;
		// Only the access to the shared dictionary needs to be guarded.
		synchronized (dictionary) {
			dictionary.put(key, value);
		}
		System.out.println("I did this in " + ((new java.util.Date()).getTime() - startTime) + " milliseconds");
	}
}

The above code can be written even better, but I just want to give you the idea. If you are wondering how, check java.util.concurrent.ConcurrentHashMap.
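
For example, a minimal sketch of the same class on top of ConcurrentHashMap might look like the code below (the class name HelloConcurrent is only an illustrative name, not part of the original example):

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class HelloConcurrent {
	// ConcurrentHashMap manages its own fine-grained internal locking,
	// so no explicit synchronized block is needed for a simple put.
	private final Map<String, String> dictionary = new ConcurrentHashMap<>();
	public void borringDeveloper(String key, String value) {
		long startTime = (new java.util.Date()).getTime();
		value = value + "_" + startTime;
		dictionary.put(key, value);
		System.out.println("I did this in " + ((new java.util.Date()).getTime() - startTime) + " milliseconds");
	}
}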

So, how can we reduce lock granularity? The short answer: by asking for locks as rarely as possible. The basic idea is to use separate locks to guard multiple independent state variables of a class, instead of having only one lock for the whole class. Check this simple example, which I have seen in many applications.

import java.util.ArrayList;
import java.util.List;

public class Grocery {
	private final List<String> fruits = new ArrayList<>();
	private final List<String> vegetables = new ArrayList<>();
	public synchronized void addFruit(int index, String fruit) {
		fruits.add(index, fruit);
	}
	public synchronized void removeFruit(int index) {
		fruits.remove(index);
	}
	public synchronized void addVegetable(int index, String vegetable) {
		vegetables.add(index, vegetable);
	}
	public synchronized void removeVegetable(int index) {
		vegetables.remove(index);
	}
}

The grocery owner can add and remove fruits and vegetables to and from his grocery shop. This implementation of Grocery guards both fruits and vegetables using the same intrinsic Grocery lock, as the synchronization is done at method scope. Instead of this single fat lock, we can use two separate guards, one for each resource (fruits and vegetables). Check the improved code below.

import java.util.ArrayList;
import java.util.List;

public class Grocery {
	private final List<String> fruits = new ArrayList<>();
	private final List<String> vegetables = new ArrayList<>();
	public void addFruit(int index, String fruit) {
		synchronized (fruits) {
			fruits.add(index, fruit);
		}
	}
	public void removeFruit(int index) {
		synchronized (fruits) {
			fruits.remove(index);
		}
	}
	public void addVegetable(int index, String vegetable) {
		synchronized (vegetables) {
			vegetables.add(index, vegetable);
		}
	}
	public void removeVegetable(int index) {
		synchronized (vegetables) {
			vegetables.remove(index);
		}
	}
}

After splitting the lock into two guards, we will see less locking traffic than the original fat lock would produce. This technique works best when applied to locks with moderate contention. Applied to locks with light contention, the gain is small but still positive. Applied to locks with heavy contention, the result is not always better, and you must be aware of this.

Please use this technique judiciously. If you suspect that a lock is heavily contended, then follow these steps:

  1. Confirm the traffic of your production requirements and multiply it by 3 or 5 (or even 10 if you want to be prepared).
  2. Run the appropriate tests on your testbed, based on the new traffic (a rough sketch of such a test is shown after this list).
  3. Compare both solutions and only then choose the most appropriate.
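
As an illustration only, a rough sketch of such a testbed comparison might look like the code below; the GroceryTest class name, the thread count, and the operation count are assumptions chosen for demonstration, so tune them to your own traffic estimates:

import java.util.concurrent.TimeUnit;

public class GroceryTest {
	private static final int THREADS = 8;           // assumed load, tune to your traffic estimate
	private static final int OPS_PER_THREAD = 100_000;

	public static void main(String[] args) throws InterruptedException {
		Grocery grocery = new Grocery();            // swap in either Grocery implementation here
		Thread[] workers = new Thread[THREADS];
		long start = System.nanoTime();
		for (int t = 0; t < THREADS; t++) {
			final boolean fruitWorker = (t % 2 == 0);   // half the threads touch fruits, half vegetables
			workers[t] = new Thread(() -> {
				for (int i = 0; i < OPS_PER_THREAD; i++) {
					if (fruitWorker) {
						grocery.addFruit(0, "apple");
						grocery.removeFruit(0);
					} else {
						grocery.addVegetable(0, "carrot");
						grocery.removeVegetable(0);
					}
				}
			});
			workers[t].start();
		}
		for (Thread w : workers) {
			w.join();
		}
		System.out.println("Total time: " + TimeUnit.NANOSECONDS.toMillis(System.nanoTime() - start) + " ms");
	}
}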

There are more techniques that can improve synchronization performance, but for all of them the basic rule is the same: do not hold the lock longer than necessary.
This basic rule can be translated to "ask for locks as rarely as possible", as I have already explained, or into other solutions, which I will try to describe in future articles.

Two more important pieces of advice:

  • Be aware of the classes in the java.util.concurrent package (and its subpackages), as they contain very clever and useful implementations (a small sketch follows this list).
  • Concurrency code can most of the time be minimized by using good design patterns. Always keep Enterprise Integration Patterns in mind; they can save your nights.
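
For instance, here is a minimal sketch of the Grocery example built on a java.util.concurrent collection; the ConcurrentGrocery name is only illustrative, and CopyOnWriteArrayList is just one possible choice (it copies its backing array on every write, so it suits read-heavy workloads), so treat this as an illustration rather than a drop-in replacement:

import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;

public class ConcurrentGrocery {
	// Each list handles its own thread safety internally,
	// so no explicit synchronized blocks are needed.
	private final List<String> fruits = new CopyOnWriteArrayList<>();
	private final List<String> vegetables = new CopyOnWriteArrayList<>();
	public void addFruit(int index, String fruit) {
		fruits.add(index, fruit);
	}
	public void removeFruit(int index) {
		fruits.remove(index);
	}
	public void addVegetable(int index, String vegetable) {
		vegetables.add(index, vegetable);
	}
	public void removeVegetable(int index) {
		vegetables.remove(index);
	}
}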

Regards,
Adrianos Dadis.

Democracy requires Free Software


About Adrianos Dadis

Building Big Data & Stream processing solutions in many business domains. Interested in distributed systems and enterprise integration.

4 Responses to Reduce lock granularity – Concurrency optimization

  1. Hey, I approached this problem in a different way. Check out my post at javaconcepts.in; I would like to know your thoughts about it. By the way, I saw this exact post on javacodegeeks.com, is it by any chance your website?

    • Adrianos Dadis says:

      Hi Aman,

      first of all thanks for your comment and very sorry for the late response.

      I read your article and it is really helpful. I like the way you approach the concurrency issue. You are giving an example that many developers assume is correct, but you show them that they are wrong and how they can refactor their code. And yes, ConcurrentHashMap is the best option when you need a concurrent Map/Dictionary.

      In my article I analyse the issue of “lock granularity” and how to reduce lock granularity in order to gain performance. Inside the article, I have mentioned ConcurrentHashMap (and the java.util.concurrent package in general), which for most cases (but not always) is the “correct” solution, but I do not want to give readers a “ready” solution. I want to help them understand the concept and the problems of concurrency. I want readers to understand how to avoid common mistakes and also give them a few options for how they can achieve better performance. I believe both articles (mine and yours) are on the right track (helping readers with Java concurrency) 🙂

      My article is also on javacodegeeks.com, as I am a javacodegeeks partner.

      Best regards,
      Adrianos Dadis.

  2. Pingback: JavaPins

  3. Jackson Cunha says:

    Congratulations!
    Your post was my starting point on studies about concurrency today.
    I’m grateful, thanks.
