Composable concurrency in Java or any other programming language

While I was reading a research paper on concurrency named Software and the Concurrency Revolution (html version), I came across the following lines:

Unfortunately, although locks work, they pose serious problems for
modern software development. A fundamental problem with locks is that
they are not composable. You can’t take two correct lock-based pieces
of code, combine them, and know that the result is still correct.
Modern software development relies on the ability to compose libraries
into larger programs, and so it is a serious difficulty that we cannot
build on lock-based components without examining their implementations.

  1. I was wondering: how does Java guarantee composable concurrency, and is there even a way to produce this scenario?

  2. And how can we synchronize data across one or more libraries? Can a programmer do it from his own program, or is it up to the library to synchronize things?

  3. If not Java, is there any other language that uses lock-based concurrency and still guarantees composable concurrency?

The following is also taken from the same paper:

There are at least three major problems with synchronized methods.
First, they are not appropriate for types whose methods call virtual
functions on other objects (e.g., Java’s Vector and .NET’s
SyncHashTable), because calling into third-party code while holding a
lock opens the possibility of deadlock. Second, synchronized methods
can perform too much locking, by acquiring and releasing locks on all
object instances, even those never shared across threads (typically
the majority). Third, synchronized methods can also perform too
little locking, by not preserving atomicity when a program calls
multiple methods on an object or on different objects. As a simple
example of the latter, consider a banking transfer:
account1.Credit(amount); account2.Debit(amount)…

Note: the paper was published in September 2005.
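The quoted transfer can be sketched in Java. The `Account` class and its method names below are hypothetical, not from the paper; the point is that each method being individually synchronized does not make the two-call sequence atomic:

```java
// Hypothetical Account class; each method is thread-safe on its own.
class Account {
    private long balance;
    Account(long initial) { balance = initial; }
    synchronized void credit(long amount) { balance += amount; }
    synchronized void debit(long amount)  { balance -= amount; }
    synchronized long balance()           { return balance; }
}

class TransferDemo {
    static void transfer(Account from, Account to, long amount) {
        // Each call locks correctly, but another thread can observe the
        // state between the two calls: money has left `from` but has not
        // yet arrived at `to`. This is the "too little locking" problem.
        from.debit(amount);
        to.credit(amount);
    }
}
```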


It’s not the Java language. It’s the nature of locks (mutexes).

There are better ways to get improved concurrency while still guaranteeing correctness, ways that are language independent:

  1. Using immutable objects, so that you don’t need locks.
  2. Using promises and continuation-passing style.
  3. Using lock-free data structures.
  4. Using Software Transactional Memory to share state safely.

All of these techniques allow improved concurrency without using locks. None of them depend on the Java language specifically.
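As an example of the first technique, here is a minimal immutable value type in Java (the `Point` class is illustrative): all fields are final and "mutation" returns a new instance, so instances can be shared freely across threads without locks:

```java
// Immutable value type: final fields, no setters, safe to share.
final class Point {
    private final int x, y;
    Point(int x, int y) { this.x = x; this.y = y; }
    int x() { return x; }
    int y() { return y; }
    // "Mutation" produces a new instance instead of changing this one,
    // so concurrent readers never see a partially updated object.
    Point translate(int dx, int dy) { return new Point(x + dx, y + dy); }
}
```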

I was wondering: how does Java guarantee composable concurrency, and is there even a way to produce this scenario?

As the article says, it’s not possible to guarantee composability when using locks along with virtual methods (or other similar mechanisms, like passing functions as parameters). If a piece of code has access to virtual methods that come from another piece of code and both potentially use locks, then to safely (i.e. without the risk of a deadlock) compose the two pieces of code, you need to inspect the source code of both.
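A minimal Java sketch of this hazard, with hypothetical names: each component is correct on its own, yet composing them through a callback can deadlock, and nothing in either component's source alone reveals it:

```java
// Two independently correct lock-based components. Each takes its own
// lock and then invokes a caller-supplied callback -- the "virtual
// method" case the paper warns about.
class Component {
    private final Object lock = new Object();
    void doWork(Runnable callback) {
        synchronized (lock) {
            callback.run();  // third-party code runs while `lock` is held
        }
    }
}

class ComposeDemo {
    // If thread T1 runs a.doWork(() -> b.doWork(...)) while thread T2
    // runs b.doWork(() -> a.doWork(...)), each thread holds one lock and
    // waits for the other: a classic deadlock that is invisible without
    // inspecting both components' implementations.
}
```

Single-threaded, the nested call is harmless; the danger only appears when two threads compose the components in opposite orders.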

And how can we synchronize data across one or more libraries? Can a programmer do it from his own program, or is it up to the library to synchronize things?

Generally, it’s up to the programmer using the libraries to do the synchronization. That way, the programmer knows where all the locks are and they can make sure they do not deadlock.
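A common Java example of caller-side synchronization is client-side locking on a `Vector`: the caller locks the same monitor the library's synchronized methods use, so a check-then-act sequence becomes atomic without introducing a second lock (the helper class name is hypothetical):

```java
import java.util.Vector;

class LastElement {
    // Caller-managed synchronization: Vector's own synchronized methods
    // lock the Vector's monitor, so locking it here makes the size()
    // check and the get() a single atomic step with no extra locks.
    static <T> T getLast(Vector<T> v) {
        synchronized (v) {
            return v.get(v.size() - 1);
        }
    }
}
```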

If not Java, is there any other language that uses lock-based concurrency and still guarantees composable concurrency?

Again, the point of the article is that this is not possible.


Low-level locking mechanisms are inherently not composable. This is mostly because locks reach beneath the language to affect the machine executing the instructions.

Subsequent Java libraries have added higher and higher level mechanisms for ensuring correct multithreaded operation. They do this by constraining the use of lock() and volatile to certain well-known, and controllable circumstances. For instance, a concurrent queue implementation has very localized behavior and allows reasoning about before and after states. Using higher level mechanisms means you need to read less of the spec or code to get it right. But, and this is a big but, you still need to understand the locking model of any subsystems and how they interact with each other. Also, the changes in Java for concurrency after Java 5 are almost exclusively related to the libraries and not the language.
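As a sketch of such a localized, higher-level mechanism, `ConcurrentLinkedQueue` from `java.util.concurrent` hides its lock-free internals behind simple before/after semantics (the wrapper method below is illustrative only):

```java
import java.util.concurrent.ConcurrentLinkedQueue;

class QueueDemo {
    // The queue's synchronization is localized inside offer()/poll(),
    // so a caller reasons only about before/after states -- FIFO order
    // in, FIFO order out -- without knowing any locking details.
    static String firstOfTwo(String first, String second) {
        ConcurrentLinkedQueue<String> q = new ConcurrentLinkedQueue<>();
        q.offer(first);
        q.offer(second);
        return q.poll();  // FIFO: the first element offered
    }
}
```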

The major issue with any locking mechanism is that it affects state and operates in the time domain. Neither humans nor computers reason well about either state or time. It’s the ability to reason about value and structure that allowed computer scientists to come up with monads, the first thing that comes to my mind with regard to composability in a language.

The closest we have come is Communicating Sequential Processes. This still requires a high-level mechanism, such as mailboxes and message passing. In my humble opinion, CSP still does not deal adequately with either large systems (the ultimate target of composable software) or time-based reasoning.
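Lacking native CSP, Java can approximate an unbuffered channel with `SynchronousQueue`, where `put()` and `take()` rendezvous like two CSP processes meeting on a channel; a rough sketch (the method name is an assumption):

```java
import java.util.concurrent.SynchronousQueue;

class Rendezvous {
    // SynchronousQueue has no capacity: put() blocks until another
    // thread take()s, giving the rendezvous semantics of an unbuffered
    // CSP channel.
    static int exchange() throws InterruptedException {
        SynchronousQueue<Integer> channel = new SynchronousQueue<>();
        Thread sender = new Thread(() -> {
            try {
                channel.put(42);  // blocks until the receiver arrives
            } catch (InterruptedException ignored) { }
        });
        sender.start();
        int received = channel.take();  // rendezvous with the sender
        sender.join();
        return received;
    }
}
```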


First of all, I thank all the members who answered this question, especially Robert Harvey, whose answer comes very close to mine.

I have been researching concurrency concepts for two years, and according to my findings, no language provides a guarantee that its concurrency constructs are composable. Perfectly good running code using immutable data structures and STM can also produce unexpected results, because under the hood STM uses locks. STM is very good for atomic operations, but if we talk about composability of concurrency constructs between modules, there is a (very slight) chance that STM may not work as expected.

But we can still minimize uncertainty by using the following techniques:

  1. Avoiding locks and synchronization
  2. Using STM
  3. Using persistent (immutable) data structures
  4. Avoiding sharing state
  5. Pure functions
  6. Asynchronous programming style
  7. Avoiding frequent context switching
  8. Remembering that the multi-threading paradigm and the nature of its environment play a major role

Perhaps the most fundamental objection […] is that lock-based
programs do not compose: correct fragments may fail when combined. — Tim Harris et al., “Composable Memory Transactions”, Section 2: Background, p. 2
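Techniques 5 and 6 in the list above can be combined in plain Java with `CompletableFuture`: pure functions composed asynchronously share no mutable state, so no locks are needed (the names here are illustrative):

```java
import java.util.concurrent.CompletableFuture;

class AsyncDemo {
    // Pure function: its result depends only on its argument, with no
    // side effects, so it is safe to run on any thread.
    static int square(int x) { return x * x; }

    // An asynchronous pipeline of pure steps: no shared mutable state,
    // hence nothing to lock.
    static int pipeline(int input) {
        return CompletableFuture.supplyAsync(() -> square(input))
                                .thenApply(n -> n + 1)
                                .join();  // waits for the result
    }
}
```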


Thanks to Jules, I stand corrected: STM has various implementations, and most of them are lock-free. I still believe that STM is a good solution here, though not a perfect one, and it has drawbacks:

Each read (or write) of a memory location from inside a transaction
is performed by a call to an STM routine for reading (or writing)
data. With sequential code, these accesses are performed by a single
CPU instruction. STM read and write routines are significantly more
expensive than corresponding CPU instructions as they, typically, have
to maintain bookkeeping data about every access. Most STMs check for
conflicts with other concurrent transactions, log the access, and in
case of a write, log the current (or old) value of the data, in
addition to reading or writing the accessed memory location. Some of
these operations use expensive synchronization instructions
and access
shared metadata, which further increases their costs. All of this
reduces single-threaded performance when compared to sequential code. – Why STM can be more than a research toy – page 1
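The read-log-validate-commit cycle described above can be illustrated, in miniature, with a compare-and-set retry loop on a single word. This is not an STM, just a sketch of the optimistic pattern and its retry overhead (the class name is hypothetical):

```java
import java.util.concurrent.atomic.AtomicLong;

class Counter {
    private final AtomicLong value = new AtomicLong();

    // Optimistic "transaction" on one word: read the old value, compute
    // tentatively, then validate-and-commit with compareAndSet. On a
    // conflict with another thread the whole cycle retries -- the same
    // bookkeeping-and-retry cost the quoted paper describes, reduced to
    // a single memory location.
    long addAndGetOptimistically(long delta) {
        while (true) {
            long current = value.get();         // read + remember old value
            long next = current + delta;        // compute tentatively
            if (value.compareAndSet(current, next))
                return next;                    // commit succeeded
            // conflict detected: retry the whole cycle
        }
    }
}
```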

Also see these papers:


These papers are a few years old. Things may have changed or improved since, though not everything.


I’ve heard it said by well-respected researchers that any useful synchronization mechanism can be used to construct a deadlock. Transactional Memory (whether hardware or software) is no different. For example, consider this approach to writing a thread barrier:

transaction {
    counter++;
}
while (true) {
    transaction {
        if (counter == num_threads)
            break;
    }
}
(Note: the example is taken from a paper by Yannis Smaragdakis at PACT 2009)

If we ignore the fact that this is not a good way to synchronize a large number of threads, it seems to be correct. But it’s not composable. The decision to put the logic into two transactions is essential: if we were to call this from another transaction, such that everything flattened into one transaction, then the increment would never become visible to the other threads before the wait, and we’d probably never complete.

The same is true of message passing channels: communication cycles can cause deadlocks. Ad-hoc synchronization with C++ atomics can lead to deadlocks. RCU, sequence locks, readers/writer locks, condition variables, and semaphores can all be used to create deadlocks.

That’s not to say that transactions or channels (or locks or RCU) are bad. Rather, it’s to say that some things just don’t seem possible. Scalable, composable, pathology-free concurrency control mechanisms are probably not possible.

The best way to avoid trouble is not to look for a silver-bullet mechanism, but to rigorously use good patterns. In the world of parallel computing, a good starting point is Structured Parallel Programming: Patterns for Efficient Computation, by Arch Robison, James Reinders, and Michael McCool. For concurrent programming, there are some good patterns (see @gardenhead’s comment), but C++ and Java programmers are not likely to use them. One pattern that more people could start using right away is to replace ad-hoc synchronization in programs with a multi-producer multi-consumer queue.

And TM is definitely better than locks, in that it raises the level of abstraction, so that programmers focus on what needs to be atomic, not on how to implement a clever lock protocol to ensure atomicity. Hopefully, as hardware TM improves and languages add more TM support, we’ll reach a point where TM supplants locks in the common case.
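The suggested queue pattern might look like this in Java, using `ArrayBlockingQueue` as a multi-producer multi-consumer channel (the demo method is illustrative):

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

class PipelineDemo {
    // Ad-hoc wait/notify replaced by a blocking MPMC queue: producer and
    // consumer share only the queue, whose blocking semantics handle all
    // the waiting and signalling for them.
    static int run() throws InterruptedException {
        BlockingQueue<Integer> queue = new ArrayBlockingQueue<>(16);
        Thread producer = new Thread(() -> {
            try {
                for (int i = 0; i < 5; i++) queue.put(i);  // blocks if full
            } catch (InterruptedException ignored) { }
        });
        producer.start();
        int sum = 0;
        for (int i = 0; i < 5; i++) sum += queue.take();   // blocks if empty
        producer.join();
        return sum;  // 0 + 1 + 2 + 3 + 4
    }
}
```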

