In concurrent computing, multiple calculations are made within overlapping time frames. It relies on the fact that multiple threads or processes can make progress on a task without waiting for the others to complete. This general approach to writing and executing computer programs is called concurrency.

Concurrent computing is different from sequential (synchronous) computing, where calculations are made one after the other, each waiting for the previous one to complete. Nor is it the same as parallel computing, where calculations run at the same instant on separate processors; concurrent tasks need only overlap in time, and may simply be interleaved on a single processor.
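To make the distinction concrete, here is a minimal sketch (written in Python, an assumed choice since the article names no language) in which three one-second tasks take about three seconds when run sequentially but about one second when they run concurrently in overlapping time frames:

```python
import threading
import time

def task(name, delay):
    # Simulate I/O-bound work (for example, a network request) with a sleep.
    time.sleep(delay)
    print(f"{name} finished after {delay}s")

start = time.perf_counter()
# Sequential: each calculation waits for the previous one to complete.
for i in range(3):
    task(f"sequential-{i}", 1)
print(f"Sequential total: {time.perf_counter() - start:.1f}s")   # about 3 seconds

start = time.perf_counter()
# Concurrent: the three tasks make progress within overlapping time frames.
threads = [threading.Thread(target=task, args=(f"concurrent-{i}", 1))
           for i in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(f"Concurrent total: {time.perf_counter() - start:.1f}s")   # about 1 second
```

Because the work here is simulated I/O (sleeping), the threads spend most of their time waiting, so the concurrent version is faster even on a single processor.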

The three main types of concurrent computing are threading, asynchrony, and preemptive multitasking. Each requires its own precautions to prevent race conditions, in which multiple threads or processes access the same shared data in memory in an unintended order and produce incorrect results.
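As an illustration of the threading case (again a Python sketch with hypothetical helper names, not a prescribed method), the code below increments a shared counter from four threads. Without synchronization, updates can be lost to a race condition; holding a lock serializes each read-modify-write so the final count is correct:

```python
import threading

counter = 0
lock = threading.Lock()

def unsafe_increment(n):
    # Unsynchronized read-modify-write on shared data: two threads can read
    # the same value, and one update is then lost (a race condition).
    global counter
    for _ in range(n):
        counter += 1

def safe_increment(n):
    # Holding the lock makes each read-modify-write atomic with respect to
    # the other threads, so no updates are lost.
    global counter
    for _ in range(n):
        with lock:
            counter += 1

def run(worker):
    # Hypothetical helper: reset the counter and run four worker threads.
    global counter
    counter = 0
    threads = [threading.Thread(target=worker, args=(100_000,)) for _ in range(4)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return counter

print(run(unsafe_increment))  # may print less than 400000, depending on scheduling
print(run(safe_increment))    # always prints 400000
```

A lock is only one precaution; atomic operations, message passing, and confining mutable state to a single thread or task are common alternatives.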

History

The origins of concurrent computing date back to the 1800s, when railroad operators needed to manage the paths of multiple trains on a single railroad. The field developed further in the early 1900s, when telegraph operators needed to manage multiple signals on a single telegraph line.

Related terms: CPU terms, Multitasking, Network, Processes

For in-depth information about concurrent programming, see the Apple developer guide, Concurrency Programming: An Introduction.