Swift’s new concurrency system has been on all Swift developers’ lips since its introduction at WWDC 21. The new model brings (among other things) the well-known async/await syntax pattern into Swift, making it possible to write asynchronous code in a linear fashion. It also improves how the scheduler prioritizes work items, reducing overhead costs associated with the previous concurrency system. But how does it differ from Grand Central Dispatch (GCD), and what problems does it solve?

Concurrency in GCD

Both Swift and Objective-C developers know GCD, which organizes work using First-In-First-Out (FIFO) queues. These queues can be either serial or concurrent. A serial queue executes one work item at a time, waiting for it to finish before dequeuing the next in line, while a concurrent queue keeps dequeuing work items as long as the queue is not empty and threads are available. This design of concurrent queues leads us to one of the significant drawbacks of GCD, which we will explore with the help of a theoretical scenario.
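The two queue flavors can be sketched in a few lines of GCD code. This is a minimal illustration, and the queue labels are made up for the example:

```swift
import Dispatch

// A serial queue: one work item at a time, strictly in FIFO order.
let serialQueue = DispatchQueue(label: "com.example.serial")

// A concurrent queue: items leave in FIFO order but may run in parallel.
let concurrentQueue = DispatchQueue(label: "com.example.concurrent",
                                    attributes: .concurrent)

let group = DispatchGroup()
var completed = 0

for _ in 1...3 {
    concurrentQueue.async(group: group) {
        // Funnel shared state through the serial queue to avoid races
        // between the concurrently executing items.
        serialQueue.sync { completed += 1 }
    }
}

// Block the current thread until every enqueued item has finished.
group.wait()
print("Completed \(completed) work items") // Completed 3 work items
```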

Handling of Threads

Imagine a concurrent queue running on a two-core system with three enqueued tasks. It dequeues two items and assigns each to a thread, leaving one waiting. Now suppose one of the running items needs to wait for a resource, such as a lock. It gets suspended while waiting, and one core becomes available to perform other work. At this point, GCD spins up another thread to service the remaining item on the queue.

In a contrived example such as this, the number of threads will not become an issue. However, modern applications might schedule hundreds of concurrent tasks, possibly leading to a phenomenon called “thread explosion.” Whenever a core starts servicing a new thread, it needs to perform a context switch, replacing stacks and kernel data structures. These switches can incur noticeable performance costs when many threads time-share the available hardware.
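As a sketch of the pattern that triggers this behavior (the queue labels are illustrative), every blocked item below ties up its thread, so GCD may keep creating new threads to keep the cores busy:

```swift
import Dispatch
import Foundation

let queue = DispatchQueue(label: "com.example.explosion",
                          attributes: .concurrent)
let stateQueue = DispatchQueue(label: "com.example.state")
let group = DispatchGroup()
var current = 0
var peak = 0

for _ in 1...100 {
    queue.async(group: group) {
        stateQueue.sync { current += 1; peak = max(peak, current) }
        // Each item blocks its thread while "waiting for a resource".
        // GCD may spin up a new thread for every blocked item,
        // oversubscribing the available cores.
        Thread.sleep(forTimeInterval: 0.1)
        stateQueue.sync { current -= 1 }
    }
}

group.wait()
// On most machines, the peak far exceeds the core count.
print("Peak concurrent items: \(peak)")
```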

Priority Inversion

Apart from the above, GCD also suffers from a more subtle performance drawback. Working with FIFO queues means that GCD prioritizes work in a suboptimal way. Say that four low-priority tasks are sitting in a queue when a high-priority item is added. Rather than letting the new task cut in line, GCD raises the priority of every item in front of it. Figure 1 visualizes this principle at work, which causes the system to service less critical tasks as if they were vital.

Figure 1. Priority inversion as it occurs in a Grand Central Dispatch queue.
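In GCD terms, the scenario looks roughly like this sketch (the queue label is illustrative). The `.userInitiated` block lands behind the `.utility` ones, and because the queue is FIFO, GCD boosts the items ahead of it instead of reordering:

```swift
import Dispatch

// A serial queue running at a low quality-of-service class.
let queue = DispatchQueue(label: "com.example.inversion", qos: .utility)
let group = DispatchGroup()
var order: [String] = []

for index in 1...4 {
    queue.async(group: group) {
        // The queue is serial, so appending here is race-free.
        order.append("low-\(index)")
    }
}

// The high-priority item cannot jump the queue. Instead, GCD raises
// the effective QoS of the four items ahead of it so they finish sooner.
queue.async(group: group, qos: .userInitiated) {
    order.append("high")
}

group.wait()
print(order) // ["low-1", "low-2", "low-3", "low-4", "high"]
```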

Swift Concurrency

Now that we know the most common pitfalls of GCD, the real question is whether Swift Concurrency does anything differently. Does it solve the problems related to GCD? If so, do other challenges arise? Let us have a look!

Cooperative Thread Pool

Swift Concurrency works with a cooperative thread pool, meaning that the system actively tries to match the number of threads to the number of available cores. This approach eliminates the thread explosion issue, but it requires a different way of dispatching work items.

Instead of the work items that GCD uses, Swift Concurrency represents asynchronous code with a construct called a “continuation.” It stores all the information needed to suspend and resume an individual task, pushing the suspension point down into the task itself. This structure means that when a particular method needs to wait for a resource, the thread can pick up another task instead.
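Swift even exposes this concept directly through APIs such as `withCheckedContinuation`, which wraps a callback-based function into an async one. In this sketch, `legacyFetch` is a hypothetical stand-in for pre-async/await code:

```swift
import Dispatch

// A hypothetical callback-based API, standing in for legacy code.
func legacyFetch(completion: @escaping (Int) -> Void) {
    DispatchQueue.global().async { completion(42) }
}

// Wrapping the callback in a continuation lets callers use await.
func fetchValue() async -> Int {
    await withCheckedContinuation { continuation in
        legacyFetch { value in
            // Resuming hands the suspended task back to the thread pool.
            continuation.resume(returning: value)
        }
    }
}

// At the await, the task suspends without blocking its thread.
let value = await fetchValue()
print(value) // 42
```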

Scheduler Optimizations

Swift Concurrency works with task trees instead of queues, giving the scheduler a better chance of prioritizing work. If several tasks are waiting for processing time, the scheduler can choose the most critical ones when threads become available. This property mitigates the priority inversion issues that GCD exhibits and makes execution more efficient.
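A task tree can be spelled out with structured concurrency. Each `addTask` call creates a child in the current task's tree, children inherit the parent's priority, and the scheduler is free to run whichever children are most urgent. The function and values here are illustrative:

```swift
// Doubles every value in parallel child tasks and sums the results.
func doubleAll(_ values: [Int]) async -> Int {
    await withTaskGroup(of: Int.self) { group in
        for value in values {
            group.addTask {
                // Child tasks inherit the parent task's priority.
                value * 2
            }
        }
        // Results arrive in completion order, not submission order.
        var sum = 0
        for await result in group {
            sum += result
        }
        return sum
    }
}

let total = await doubleAll([1, 2, 3])
print(total) // 12
```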

Pitfalls

Even though this post highlights what Swift Concurrency does well, there are also things we need to take extra care with. Since GCD suspends an entire thread, holding a lock across a suspension point might not do any damage: unless there is a deadlock, the thread will eventually continue executing and unlock the resource. The new concurrency system, however, does not cope well with locks held across suspension points. Since an arbitrary thread can resume a task, the thread that tries to release the lock may not be the one that acquired it. Therefore, it is crucial to think about how we use instances of, for example, NSLock and os_unfair_lock.
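Where a lock would be risky, the model's own actors offer a safer alternative: actor isolation serializes access to mutable state, and no lock is ever held across an await. The `Counter` type here is an illustrative sketch:

```swift
// An actor protects its mutable state without an explicit lock.
actor Counter {
    private var value = 0

    func increment() -> Int {
        value += 1
        return value
    }
}

let counter = Counter()

// Each call hops onto the actor; the awaits suspend the task
// without tying a lock to any particular thread.
let first = await counter.increment()
let second = await counter.increment()
print(first, second) // 1 2
```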

Wrapping Up

Swift’s new concurrency system brings a lot of new and very welcome features to the language. Given how it handles scheduling and execution of work items, it could give many multithreaded applications a performance boost.

Author – Jimmy M Andersson

A software engineer with a flair for the Swift language. He develops and maintains the open-source library StatKit – a collection of statistical analysis tools for Swift developers. He is also the author of “Statistical Analysis with Swift,” published by Apress Media.

LinkedIn: https://www.linkedin.com/in/jmandersson/