Are “lock-free” structures necessarily faster than the traditional approach to synchronization, which relies on locks? How does the number of threads impact performance? It’s time for a little speed test.
An AccountDate class (a simple wrapper around a Date) is updated 1,000,000 times in two scenarios:
– scenario 1: 20 threads, each updating an accountDate instance 50,000 times.
– scenario 2: 2 threads, each updating an accountDate instance 500,000 times.
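The article does not show its benchmark harness, but the setup above (N threads, each performing a fixed number of updates, timed end to end) can be sketched as follows; the class and method names here are assumptions, not the article’s actual code:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical harness: starts `threads` workers, each calling update()
// `updatesPerThread` times, and returns the elapsed wall-clock time in ms.
// Total work is threads * updatesPerThread (20 x 50,000 or 2 x 500,000 above).
public class UpdateBenchmark {
    interface Updater {
        void update();
    }

    static long run(Updater updater, int threads, int updatesPerThread)
            throws InterruptedException {
        List<Thread> workers = new ArrayList<>();
        long start = System.currentTimeMillis();
        for (int i = 0; i < threads; i++) {
            Thread t = new Thread(() -> {
                for (int j = 0; j < updatesPerThread; j++) {
                    updater.update();
                }
            });
            workers.add(t);
            t.start();
        }
        for (Thread t : workers) {
            t.join();          // wait for all workers before stopping the clock
        }
        return System.currentTimeMillis() - start;
    }
}
```

Measuring wall-clock time around thread start and join is crude but matches the kind of millisecond-scale figures reported below.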
Within each scenario, three different objects are used to execute the update.
– The SimpleAccountDate updater acts as a baseline. It uses no synchronization at all, which will almost certainly lead to a race condition.
– AccountDateSynchronized uses the synchronized keyword to serialize access by the threads to the content of the AccountDate.
– AccountDateAtomicRef uses a lock free AtomicReference and CAS to update the underlying AccountDate.
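The three strategies above might look like this; since the article’s actual classes are not shown here, the method names and the update semantics (advancing the date only when the new value is later) are assumptions for illustration:

```java
import java.util.Date;
import java.util.concurrent.atomic.AtomicReference;

// Baseline: an unsynchronized read-modify-write; concurrent callers can
// interleave between the check and the assignment, losing updates.
class SimpleAccountDate {
    private Date date = new Date(0);
    void advanceTo(Date d) { if (d.after(date)) date = d; }
    Date get() { return date; }
}

// Lock-based: synchronized serializes every access to the date,
// so the check-then-set is atomic.
class AccountDateSynchronized {
    private Date date = new Date(0);
    synchronized void advanceTo(Date d) { if (d.after(date)) date = d; }
    synchronized Date get() { return date; }
}

// Lock-free: a classic CAS loop. Read the current value, and attempt to
// swap in the new one; if another thread won the race, retry.
class AccountDateAtomicRef {
    private final AtomicReference<Date> date =
            new AtomicReference<>(new Date(0));
    void advanceTo(Date d) {
        Date current;
        do {
            current = date.get();
            if (!d.after(current)) return;   // already up to date, nothing to do
        } while (!date.compareAndSet(current, d));
    }
    Date get() { return date.get(); }
}
```

The CAS loop never blocks, but under contention losing threads spin and retry, which is exactly where the cost shows up in the results below.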
Results with 20 threads:
– SimpleAccountDate: 14 ms (race condition)
– AccountDateSynchronized: 68 ms (OK)
– AccountDateAtomicRef: 119 ms (OK)
Results with 2 threads:
– SimpleAccountDate: 13 ms (race condition)
– AccountDateSynchronized: 63 ms (OK)
– AccountDateAtomicRef: 60 ms (OK)
Unsurprisingly, SimpleAccountDate is very fast (but wrong). Performance under synchronization degrades only slightly as the number of threads grows, while the CAS strategy does not scale nearly as well. The higher thread count causes heavier contention on the single accountDate object: many threads attempt the compare-and-set simultaneously, but only one at a time succeeds, and the losers must retry.
The source code