For decades, the C and C++ standards treated multi-threading and concurrency as something existing outside the standards' sphere, in that "target-dependent" world of shades which the "abstract machine" targeted by the standards doesn't cover. The immediate, cold-blooded replies of "C++ doesn't know what a thread is" in mountains of mailing-list and newsgroup questions dealing with parallelism will forever serve as a reminder of this past.
Merge sort is a wonderful, widely used sorting algorithm with consistent, data-independent performance. When merge sorting out of place, that is, when the source and destination arrays are different, performance is O(n lg n). When sorting in place, so that the source and destination are the same array, performance is slightly slower: O(n lg² n). Because the not-in-place algorithm is faster, implementations frequently allocate memory for a destination array, sort into it, copy the sorted result back into the source array, and then deallocate the destination array. The STL uses this kind of strategy to construct a fast in-place sort from a faster not-in-place sort whenever memory is available. Of course, when memory is not available, a truly in-place sort is necessary, and the STL provides such an implementation as well.
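As a rough illustration of that strategy (a minimal sketch under assumptions, not the actual STL implementation; the name merge_sort is made up here), the merge step can go through a temporary buffer when one can be allocated and fall back to std::inplace_merge when it cannot:

#include <algorithm>
#include <iterator>
#include <new>
#include <vector>

// Sketch of the strategy described above: recursive merge sort whose merge
// step uses a temporary "destination" buffer when allocation succeeds, and
// falls back to the slower, truly in-place std::inplace_merge when it fails.
void merge_sort(std::vector<int>& a, std::size_t lo, std::size_t hi)
{
    if (hi - lo < 2) return;
    const std::size_t mid = lo + (hi - lo) / 2;
    merge_sort(a, lo, mid);
    merge_sort(a, mid, hi);

    try {
        std::vector<int> buf;                          // destination array
        buf.reserve(hi - lo);
        std::merge(a.begin() + lo, a.begin() + mid,    // O(n) out-of-place merge
                   a.begin() + mid, a.begin() + hi,
                   std::back_inserter(buf));
        std::copy(buf.begin(), buf.end(), a.begin() + lo);  // copy back
    } catch (const std::bad_alloc&) {
        // No extra memory: merge in place instead.
        std::inplace_merge(a.begin() + lo, a.begin() + mid, a.begin() + hi);
    }
}   // buf is deallocated here when it was used

Call it as merge_sort(v, 0, v.size()). In the real standard library, std::stable_sort and std::inplace_merge follow essentially this fallback: O(n lg n) when a temporary buffer is available, O(n lg² n) otherwise.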
It doesn't make any sense to talk about lock-free data structures without covering topics such as atomic operations, the memory model of programming languages, safe memory reclamation, compilers and the optimizations they perform, and modern CPU design; all of these topics will be covered to some extent in this series.
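To give a taste of how those pieces fit together, here is a minimal, hypothetical lock-free stack push in C++11 built on an atomic compare-and-swap (a Treiber-style stack); a safe pop is deliberately omitted because it needs one of the memory-reclamation schemes mentioned above:

#include <atomic>
#include <utility>

// Minimal sketch of a lock-free stack push using compare-and-swap.
// Popping safely requires a memory-reclamation scheme (hazard pointers,
// epoch-based reclamation, ...), which is exactly why those topics matter.
template <typename T>
class LockFreeStack {
    struct Node {
        T value;
        Node* next;
    };
    std::atomic<Node*> head_{nullptr};

public:
    void push(T value) {
        Node* node = new Node{std::move(value), head_.load(std::memory_order_relaxed)};
        // Retry until head_ is swung from the snapshot in node->next to node.
        while (!head_.compare_exchange_weak(node->next, node,
                                            std::memory_order_release,
                                            std::memory_order_relaxed)) {
            // On failure, compare_exchange_weak reloads the current head into
            // node->next, so the loop simply retries with fresh data.
        }
    }
};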
Parallel programming is essential for writing performant applications on modern hardware. You've probably noticed that, in recent years, CPU clock speeds have barely increased. At the same time, dual-core and quad-core computers have become common.
Programming for multiple threads is not fundamentally different from writing an event-oriented GUI application or even a straight-up sequential application. The important lessons of encapsulation, separation of concerns, loose coupling, and so on all apply. But developers get into trouble with multiple threads when they don't apply those lessons; instead they try to apply the mostly irrelevant bits of information they learned about threads and synchronization primitives from introductory multithreading texts.
lock-free, wait-free, and obstruction-free synchronization algorithms and data structures; scalability-oriented architecture; multicore/multiprocessor design