Swift Concurrency Resources:
DevForums tags: Concurrency
The Swift Programming Language > Concurrency documentation
Migrating to Swift 6 documentation
WWDC 2022 Session 110351 Eliminate data races using Swift Concurrency — This ‘sailing on the sea of concurrency’ talk is a great introduction to the fundamentals.
WWDC 2021 Session 10134 Explore structured concurrency in Swift — The table that starts rolling out at around 25:45 is really helpful.
Swift Async Algorithms package
Swift Concurrency Proposal Index DevForum post
Why is flow control important? DevForums post
Matt Massicotte’s blog
Dispatch Resources:
DevForums tags: Dispatch
Dispatch documentation — Note that the Swift API and C API, while generally aligned, are different in many details. Make sure you select the right language at the top of the page.
Dispatch man pages — While the standard Dispatch documentation is good, you can still find some great tidbits in the man pages. See Reading UNIX Manual Pages. Start by reading dispatch in section 3.
WWDC 2015 Session 718 Building Responsive and Efficient Apps with GCD [1]
WWDC 2017 Session 706 Modernizing Grand Central Dispatch Usage [1]
Avoid Dispatch Global Concurrent Queues DevForums post
Share and Enjoy
—
Quinn “The Eskimo!” @ Developer Technical Support @ Apple
let myEmail = "eskimo" + "1" + "@" + "apple.com"
[1] These videos may or may not be available from Apple. If not, the URL should help you locate other sources of this info.
Dispatch
Execute code concurrently on multicore hardware by submitting work to dispatch queues managed by the system using Dispatch.
Posts under Dispatch tag (29 posts)
Hi Team,
We intend to create a custom serial dispatch queue targeting a global queue.
let serialQueue = DispatchQueue(label: "corecomm.tallyworld.serial", target: DispatchQueue.global(qos: .default))
The documentation for the DispatchQueue initializer does not show any minimum OS versions, but the DispatchSerialQueue initializer shows iOS 17.0+, iPadOS 17.0+, Mac Catalyst, macOS 14.0+, tvOS 17.0+, visionOS, and watchOS 10.0+.
Does that mean I will not be able to create a custom serial dispatch queue below iOS 17?
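
For reference, a minimal sketch of the long-standing DispatchQueue initializer, which (to the best of my knowledge) has been available well before iOS 17; DispatchSerialQueue appears to be the newer subclass added alongside Swift's custom actor executors, while plain DispatchQueue continues to work on earlier systems. The label is reused from the question; treat the availability note as an assumption to verify against the SDK headers.

import Dispatch

// DispatchQueue(label:target:) long predates iOS 17; queues created this way are
// serial unless the .concurrent attribute is passed.
let serialQueue = DispatchQueue(
    label: "corecomm.tallyworld.serial",
    target: DispatchQueue.global(qos: .default)
)

serialQueue.async {
    // Work items submitted here execute one at a time.
}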
In my app I have a background task performed on a custom DispatchQueue. When it has completed, I update the UI in DispatchQueue.main.async. In a particular case, the app then needs to show a modal window that contains a table view, but I have noticed that when scrolling through the table view, it responds only very slowly.
It appears that this happens when the table view in the modal window is presented from DispatchQueue.main.async. Presenting it in perform(_:with:afterDelay:) or in a Timer.scheduledTimer(withTimeInterval:repeats:block:), on the other hand, works. Why? This seems like an ugly workaround.
I created FB7448414 in November 2019 but got no response.
import Cocoa

@NSApplicationMain
class AppDelegate: NSObject, NSApplicationDelegate {

    func applicationDidFinishLaunching(_ aNotification: Notification) {
        let windowController = NSWindowController(window: NSWindow(contentViewController: ViewController()))

        // 1. works
        // runModal(for: windowController)

        // 2. works
        // perform(#selector(runModal), with: windowController, afterDelay: 0)

        // 3. works
        // Timer.scheduledTimer(withTimeInterval: 0, repeats: false) { [self] _ in
        //     self.runModal(for: windowController)
        // }

        // 4. doesn't work
        DispatchQueue.main.async {
            self.runModal(for: windowController)
        }
    }

    @objc func runModal(for windowController: NSWindowController) {
        NSApp.runModal(for: windowController.window!)
    }
}
class ViewController: NSViewController, NSTableViewDataSource, NSTableViewDelegate {

    override func loadView() {
        let tableView = NSTableView()
        tableView.dataSource = self
        tableView.delegate = self
        tableView.addTableColumn(NSTableColumn())

        let scrollView = NSScrollView(frame: CGRect(x: 0, y: 0, width: 400, height: 400))
        scrollView.documentView = tableView
        scrollView.hasVerticalScroller = true
        scrollView.translatesAutoresizingMaskIntoConstraints = false
        view = scrollView
    }

    func numberOfRows(in tableView: NSTableView) -> Int {
        return 100
    }

    func tableView(_ tableView: NSTableView, viewFor tableColumn: NSTableColumn?, row: Int) -> NSView? {
        let view = NSTableCellView()
        let textField = NSTextField(labelWithString: "\(row)")
        textField.translatesAutoresizingMaskIntoConstraints = false
        view.addSubview(textField)
        NSLayoutConstraint.activate([
            textField.leadingAnchor.constraint(equalTo: view.leadingAnchor),
            textField.trailingAnchor.constraint(equalTo: view.trailingAnchor),
            textField.topAnchor.constraint(equalTo: view.topAnchor),
            textField.bottomAnchor.constraint(equalTo: view.bottomAnchor)
        ])
        return view
    }
}
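
One hypothesis, offered only as a sketch: a block submitted with DispatchQueue.main.async runs while the GCD main queue is being drained, and NSApp.runModal then spins a nested event loop inside that drain, so further main-queue blocks are held up until the modal session ends. Scheduling the call from the run loop itself, rather than from the main-queue drain, might behave like options 2 and 3 above (untested assumption):

// Sketch only: RunLoop.main.perform enqueues the block on the main run loop
// rather than on the GCD main queue, avoiding a nested main-queue drain.
RunLoop.main.perform {
    self.runModal(for: windowController)
}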
We are currently developing a VoIP application that uses a Local Push extension.
I would like to ask for your advice on how the extension behaves when the iPhone goes into sleep mode.
Our app uses GCD (Grand Central Dispatch) to perform periodic processing within the extension, set up as a repeating cycle.
[sample of our source]
import Combine
import NetworkExtension

class LocalPushProvider: NEAppPushProvider {

    let activeQueue: DispatchQueue = DispatchQueue(label: "com.myapp.LocalPushProvider.ActiveQueue", autoreleaseFrequency: .workItem)
    var activeSchedule: Cancellable?

    override func start(completionHandler: @escaping (Error?) -> Void) {
        // ...
        self.activeSchedule = self.activeQueue.schedule(
            after: .init(.now() + .seconds(10)), // start schedule after 10 sec
            interval: .seconds(10)               // 10 sec interval
        ) {
            self.activeTimerProc()
        }
        completionHandler(nil)
    }
}
However, in this app we are seeing that when the iPhone goes into sleep mode, self.activeTimerProc() is not called at 10-second intervals but is significantly delayed (approximately 30 to 180 seconds).
What factors could cause the timer processing using GCD not to execute at the specified interval when the iPhone is in sleep mode?
Also, please let us know if there are any implementation errors or points to note.
I apologize for bothering you during your busy schedule, but I would appreciate your response.
Topic: App & System Services > Processes & Concurrency
Tags: PushKit, CallKit, Network Extension, Dispatch
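
For illustration only: GCD also offers DispatchSourceTimer, where the .strict flag and an explicit leeway ask the system not to coalesce the timer, although the kernel may still defer timers to save power while the device is asleep. The queue label below is made up.

let timerQueue = DispatchQueue(label: "com.example.timer")   // hypothetical label
let timer = DispatchSource.makeTimerSource(flags: .strict, queue: timerQueue)

timer.schedule(deadline: .now() + 10, repeating: 10, leeway: .milliseconds(100))
timer.setEventHandler {
    // periodic work here
}
timer.activate()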
I am trying to build my code and have run into this error:
"Trailing closure passed to parameter of type 'DispatchWorkItem' that does not accept a closure"
I have been trying to figure it out for a long time, and even AI can't figure it out. Is this a bug, or am I missing some obvious way to fix it?
func loadUser(uid: String, completion: (() -> Void)? = nil) {
    db.collection("users").document(uid).getDocument { [weak self] snapshot, error in
        guard let data = snapshot?.data(), error == nil else { completion?(); return }
        DispatchQueue.main.async {
            self?.currentUser = User(
                username: data["username"] as? String ?? "Learner",
                email: data["email"] as? String ?? "",
                profileImageName: "person.circle.fill",
                totalXP: data["totalXP"] as? Int ?? 0,
                currentStreak: data["currentStreak"] as? Int ?? 0,
                longestStreak: data["longestStreak"] as? Int ?? 0,
                level: data["level"] as? Int ?? 1,
                levelProgress: data["levelProgress"] as? Double ?? 0.0,
                xpToNextLevel: data["xpToNextLevel"] as? Int ?? 100,
                completedLessons: data["completedLessons"] as? [String] ?? []
            )
            self?.saveUser()
            completion?()
        }
    }
}
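
For context, DispatchQueue.async accepts either a trailing closure (via async(group:qos:flags:execute:)) or an explicit DispatchWorkItem (via async(execute:)); the diagnostic above usually means the compiler resolved the call to the DispatchWorkItem overload, often because something inside the closure doesn't type-check as () -> Void. A minimal sketch of both forms, independent of the Firebase code above:

import Dispatch

// Trailing-closure form: the closure becomes the `execute` parameter of
// async(group:qos:flags:execute:).
DispatchQueue.main.async {
    print("closure form")
}

// Explicit DispatchWorkItem form: async(execute:) takes a prebuilt work item.
let item = DispatchWorkItem {
    print("work item form")
}
DispatchQueue.main.async(execute: item)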
I’m working with Apple dispatch queues in C with the following design: multiple dispatch queues enqueue tasks into a shared context, and a dedicated dispatch queue (let’s call it dispatch queue A) processes these tasks. However, this design seems to have a memory visibility issue.
Here’s a simplified version of my setup:
I have a shared_context struct that holds:
task_list: a list that stores tasks to be prioritized and run — this list is only modified/processed by dispatch queue A (a serial dispatch queue), so I don't lock around it.
cross_thread_tasks: a list that other queues push tasks into, protected by a lock.
Other dispatch queues call a function schedule_task that:
locks and appends a new task to cross_thread_tasks
calls dispatch_after_f() to schedule process_task() on dispatch queue A
process_task() processes the task_list and is repeatedly scheduled on dispatch queue A:
Swaps cross_thread_tasks into a local list (with locking).
Pushes the tasks into task_list.
Runs tasks from task_list.
Reschedules itself via dispatch_after_f().
Problem:
Sometimes the tasks pushed from other threads don’t seem to show up in task_list when process_task() runs. The task_list appears to be missing them, as if the cross-thread tasks aren’t visible. However, if the process_task() is dispatched from the same thread the tasks originate, everything works fine.
It seems to be a memory visibility or synchronization issue. Since I only lock around cross_thread_tasks, could it be that changes to task_list (even though modified on dispatch queue A only) are not being properly synchronized or visible across threads?
My questions
What’s the best practice to ensure a shared context is consistently visible across threads when using dispatch queues? Is it mandatory to lock around all tasks? I would love to minimize or avoid locks if possible.
Any guidance, debugging tips, or architectural suggestions would be appreciated!
===============================
And here is pseudocode of my setup if it helps:
struct shared_data {
    struct linked_list* task_list;
};

struct shared_context {
    struct shared_data *data;
    struct linked_list* cross_thread_tasks;
    struct thread_mutex* lock; // lock is used to protect cross_thread_tasks
};

static void s_process_task(void* shared_context) {
    struct linked_list* local_tasks;

    // Lock and swap the cross_thread_tasks into a local linked list.
    lock(shared_context->lock)
    swap(shared_context->cross_thread_tasks, local_tasks)
    unlock(shared_context->lock)

    // I didn't use a lock to protect `shared_context->data` as it is only touched
    // within dispatch queue A in this function.
    for (task : local_tasks) {
        linked_list_push(shared_context->data->task_list)
    }

    // If the `process_task()` block is dispatched from `schedule_task()` where the task
    // is created, the `shared_context` will be able to access the task properly; otherwise not.
    for (task : shared_context->data->task_list) {
        run_task_if_timestamp_is_now(task)
    }

    timestamp = get_next_timestamp(shared_context->data->task_list)
    dispatch_after_f(timestamp, dispatch_queueA, shared_context, process_task);
}

// On dispatch queue B
static void schedule_task(struct task* task, void* shared_context) {
    lock(shared_context->lock)
    push(shared_context->cross_thread_tasks, task)
    unlock(shared_context->lock)

    timestamp = get_timestamp(task)

    // We only dispatch the task if the timestamp < 1 second. We did this to avoid the
    // dispatch queue scheduling the task too far ahead and preventing the shutdown process.
    // Therefore, not all tasks will be dispatched from the thread that created them.
    if (timestamp < 1 second)
        dispatch_after_f(timestamp, dispatch_queueA, shared_context, process_task);
}
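
Not a verdict on the pseudocode above, but a minimal Swift sketch of a common alternative pattern, where every mutation of the shared state is funnelled through the single serial queue with async, so the queue itself provides both mutual exclusion and memory visibility. All names are illustrative.

import Dispatch

final class TaskScheduler {
    // Serial queue A: the only place `taskList` is ever touched.
    private let queueA = DispatchQueue(label: "com.example.scheduler")   // hypothetical label
    private var taskList: [() -> Void] = []

    // Called from any thread or queue: hop onto queue A instead of locking.
    func schedule(_ task: @escaping () -> Void) {
        queueA.async {
            self.taskList.append(task)
            self.drain()
        }
    }

    // Runs on queue A only.
    private func drain() {
        while !taskList.isEmpty {
            let task = taskList.removeFirst()
            task()
        }
    }
}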
Hi,
I’m using a Local Push Connectivity Extension and encountering an issue with DispatchSourceTimer.
In my extension, I create a DispatchSourceTimer that is supposed to fire every 1 second. It works as expected at first. However, when the app is in the foreground and the device is locked, the timer eventually stops firing after 1–3 hours.
The extension process is still alive, and no errors are thrown
Has anyone experienced this behavior?
Is this a known limitation for timers inside NEAppPushProvider, or is the extension being deprioritized silently by the system?
Any insights or suggestions would be greatly appreciated.
Thanks!
I'm using the low-level C XPC API <xpc/xpc.h> and I get this error when I run it: "Underlying connection interrupted". I know this error stems from the call to xpc_session_send_message_with_reply_sync(session, message, &reply_err). I have no previous experience with XPC or Dispatch, and I find the XPC docs very limited; I also found next to no code examples online. Can somebody take a look at my code and tell me what I did wrong and how to fix it? Thank you in advance.
Main code:
#include <stdio.h>
#include <xpc/xpc.h>
#include <dispatch/dispatch.h>

// the context passed to mainf()
struct context {
    char* text;
    xpc_session_t sess;
};

// This is for later implementation and the name is also rudimentary
void mainf(void* c) {
    //char * text = ((struct context*)c)->text;
    xpc_session_t session = ((struct context*)c)->sess;

    dispatch_queue_t messageq = dispatch_queue_create("y.ddd.main", DISPATCH_QUEUE_SERIAL);

    xpc_object_t message = xpc_dictionary_create(NULL, NULL, 0);
    xpc_dictionary_set_string(message, "test", "eeeee");

    if (session == NULL) {
        printf("Session is NULL\n");
        exit(1);
    }

    __block xpc_rich_error_t reply_err = NULL;
    __block xpc_object_t reply;

    dispatch_sync(messageq, ^{
        reply = xpc_session_send_message_with_reply_sync(session, message, &reply_err);
        if (reply_err != NULL) printf("Reply Error: %s\n", xpc_rich_error_copy_description(reply_err));
    });

    if (reply != NULL)
        printf("Reply: %s\n", xpc_dictionary_get_string(reply, "test"));
    else printf("Reply is NULL\n");
}
int main(int argc, char* argv[]) {
    // Create a separate queue for mainf()
    dispatch_queue_t mainq = dispatch_queue_create("y.ddd.main", DISPATCH_QUEUE_CONCURRENT);
    dispatch_queue_t xpcq = dispatch_queue_create("y.ddd.xpc", NULL);

    // Create the context being sent to mainf
    struct context* c = malloc(sizeof(struct context));
    c->text = malloc(sizeof("Hello"));
    strcpy(c->text, "Hello");

    xpc_rich_error_t sess_err = NULL;
    xpc_session_t session = xpc_session_create_xpc_service("y.getFilec",
                                                           xpcq,
                                                           XPC_SESSION_CREATE_INACTIVE,
                                                           &sess_err);
    if (sess_err != NULL) {
        printf("Session Create Error: %s\n", xpc_rich_error_copy_description(sess_err));
        xpc_release(sess_err);
        exit(1);
    }
    xpc_release(sess_err);

    xpc_session_set_incoming_message_handler(session, ^(xpc_object_t message) {
        printf("message recieved\n");
    });

    c->sess = session;

    xpc_rich_error_t sess_ac_err = NULL;
    xpc_session_activate(session, &sess_ac_err);
    if (sess_err != NULL) {
        printf("Session Activate Error: %s\n", xpc_rich_error_copy_description(sess_ac_err));
        xpc_release(sess_ac_err);
        exit(1);
    }
    xpc_release(sess_ac_err);

    xpc_retain(session);
    dispatch_async_f(mainq, (void*)c, mainf);
    xpc_release(session);

    dispatch_main();
}
XPC Service code:
#include <stdio.h>
#include <xpc/xpc.h>
#include <dispatch/dispatch.h>

int main(void) {
    xpc_rich_error_t lis_err = NULL;
    xpc_listener_t listener = xpc_listener_create("y.getFilec",
                                                  NULL,
                                                  XPC_LISTENER_CREATE_INACTIVE,
                                                  ^(xpc_session_t sess) {
        printf("Incoming Session: %s\n", xpc_session_copy_description(sess));

        xpc_session_set_incoming_message_handler(sess, ^(xpc_object_t mess) {
            xpc_object_t repl = xpc_dictionary_create_empty();
            xpc_dictionary_set_string(repl, "test", "test");

            xpc_rich_error_t send_repl_err = xpc_session_send_message(sess, repl);
            if (send_repl_err != NULL) printf("Send Reply Error: %s\n",
                                              xpc_rich_error_copy_description(send_repl_err));
        });

        xpc_rich_error_t sess_ac_err = NULL;
        xpc_session_activate(sess, &sess_ac_err);
        if (sess_ac_err != NULL) printf("Session Activate: %s\n",
                                        xpc_rich_error_copy_description(sess_ac_err));
    },
    &lis_err);

    if (lis_err != NULL) {
        printf("Listener Error: %s\n", xpc_rich_error_copy_description(lis_err));
        xpc_release(lis_err);
    }

    xpc_rich_error_t lis_ac_err = NULL;
    xpc_listener_activate(listener, &lis_ac_err);
    if (lis_ac_err != NULL) {
        printf("Listener Activate Error: %s\n", xpc_rich_error_copy_description(lis_ac_err));
        xpc_release(lis_ac_err);
    }

    dispatch_main();
}
I’m trying to create a property wrapper that can manage shared state across any context and that gets notified if changes happen from somewhere else.
I'm using a mutex, and getting and setting values works great. However, I can't find a way to create an observer pattern that the property wrappers can use.
The problem is that I can’t trigger a notification from a different thread/context and have that notification get called on the correct thread of the parent object that the property wrapper is used within.
I would like the property wrapper to work from anywhere: a SwiftUI view, an actor, or a class that is created in the background. The notification should preferably be called synchronously if triggered from the same thread or actor, and asynchronously otherwise. I don’t have to worry about race conditions from the notification because the state only needs to reach eventual consistency.
Here's the simplified pseudo code of what I'm trying to accomplish:
// A single source of truth storage container.
final class MemoryShared<Value>: Sendable {
    let state = Mutex<Value>(0)

    func withLock(_ action: (inout Value) -> Void) {
        state.withLock(action)
        notifyObservers()
    }

    func get() -> Value
    func notifyObservers()
    func addObserver()
}

// Some shared state used across the app
static let globalCount = MemoryShared<Int>(0)

// A property wrapper to access the shared state and receive changes
@propertyWrapper
struct SharedState<Value> {
    public var wrappedValue: Value {
        get { state.get() }
        nonmutating set { /* Can't set directly */ }
    }

    var publisher: Publisher {}

    init(state: MemoryShared) {
        // ...
    }
}

// I'd like to use it in multiple places:
@Observable
class MyObservable {
    @SharedState(globalCount)
    var count: Int
}

actor MyBackgroundActor {
    @SharedState(globalCount)
    var count: Int
}

@MainActor
struct MyView: View {
    @SharedState(globalCount)
    var count: Int
}
What I’ve Tried
All of the examples below are using the property wrapper within a @MainActor class. However the same issue happens no matter what context I use the wrapper in: The notification callback is never called on the context the property wrapper was created with.
I’ve tried using @isolated(any) to capture the context of the wrapper and store the callback for later use inside the state (marked as unchecked Sendable), which doesn’t work:
final class MemoryShared<Value: Sendable>: Sendable {
    // Stores the callback for later.
    public func subscribe(callback: @escaping @isolated(any) (Value) -> Void) -> Subscription
}

@propertyWrapper
struct SharedState<Value> {
    init(state: MemoryShared<Value>) {
        MainActor.assertIsolated() // Works!
        state.subscribe {
            MainActor.assertIsolated() // Fails
            self.publisher.send()
        }
    }
}
I’ve tried capturing the isolation within a task with AsyncStream. This actually compiles with no sendable issues, but still fails:
@propertyWrapper
struct SharedState<Value> {
    init(isolation: isolated (any Actor)? = #isolation, state: MemoryShared<Value>) {
        let (taskStream, continuation) = AsyncStream<Value>.makeStream()

        // The shared state sends new values to the continuation.
        subscription = state.subscribe(continuation: continuation)

        MainActor.assertIsolated() // Works!

        let task = Task {
            _ = isolation
            for await value in taskStream {
                _ = isolation
                MainActor.assertIsolated() // Fails
            }
        }
    }
}
I’ve tried using multiple combine subjects and publishers:
final class MemoryShared<Value: Sendable>: Sendable {
    let subject: PassthroughSubject<T, Never> // ...
    var publisher: Publisher {} // ...
}

@propertyWrapper
final class SharedState<Value> {
    var localSubject: Subject

    init(state: MemoryShared<Value>) {
        MainActor.assertIsolated() // Works!
        handle = localSubject.sink {
            MainActor.assertIsolated() // Fails
        }
        stateHandle = state.publisher.subscribe(localSubject)
    }
}
I’ve also tried:
Using NotificationCenter
Making the property wrapper a class
Using NSKeyValueObserving
Using a box class that is stored within the wrapper.
Using @_inheritActorContext.
None of these work, because the callback is never invoked on the thread or context the property wrapper was created in.
Is it possible at all to create an observation system that notifies the observer from the same context as where the observer was created?
Any help would be greatly appreciated!
Hi!
We are seeing somewhat surprising behavior from dispatch_main on macOS, where it seems to spawn a different thread instead of preserving the one it was called from.
We managed to reproduce it in a completely empty command-line tool project in Xcode:
#include <dispatch/dispatch.h>

int main(int argc, const char * argv[]) {
    @autoreleasepool {
        dispatch_main();
        return 0;
    }
}
I put a breakpoint on the line with dispatch_main and see that I am on Thread 1 and inside main function. That makes sense.
I resume execution and pause again. Looking at Thread output in Xcode, I can only see Thread 2.
Thread 1 is gone and the executable keeps on running.
So dispatch_main did what was expected (it prevented the process from terminating), but it throws away the thread it was called from and creates a new one? Is that behavior expected, or am I missing something?
Just a brain teaser at this point. But we could not make sense out of it. :)
Based on crash reports for our app in production, we're seeing these SwiftUI crashes but couldn't figure out why they happen. The crashes are pretty frequent (more than 20 per day).
Would really appreciate it if anyone has any insight into why this happens. Based on the stack trace, I can't really find anything that links back to our app (replaced with MyApp in the stack trace).
Thank you in advance!
Crashed: com.apple.main-thread
0 libdispatch.dylib 0x39dcc _dispatch_semaphore_dispose.cold.1 + 40
1 libdispatch.dylib 0x4c1c _dispatch_semaphore_******_slow + 82
2 libdispatch.dylib 0x2d30 _dispatch_dispose + 208
3 SwiftUICore 0x77f788 destroy for StoredLocationBase.Data + 64
4 libswiftCore.dylib 0x3b56fc swift_arrayDestroy + 196
5 libswiftCore.dylib 0x13a60 UnsafeMutablePointer.deinitialize(count:) + 40
6 SwiftUICore 0x95f374 AtomicBuffer.deinit + 124
7 SwiftUICore 0x95f39c AtomicBuffer.__deallocating_deinit + 16
8 libswiftCore.dylib 0x3d783c _swift_release_dealloc + 56
9 libswiftCore.dylib 0x3d8950 bool swift::RefCounts<swift::RefCountBitsT<(swift::RefCountInlinedness)1>>::doDecrementSlow<(swift::PerformDeinit)1>(swift::RefCountBitsT<(swift::RefCountInlinedness)1>, unsigned int) + 160
10 SwiftUICore 0x77e53c StoredLocation.deinit + 32
11 SwiftUICore 0x77e564 StoredLocation.__deallocating_deinit + 16
12 libswiftCore.dylib 0x3d783c _swift_release_dealloc + 56
13 libswiftCore.dylib 0x3d8950 bool swift::RefCounts<swift::RefCountBitsT<(swift::RefCountInlinedness)1>>::doDecrementSlow<(swift::PerformDeinit)1>(swift::RefCountBitsT<(swift::RefCountInlinedness)1>, unsigned int) + 160
14 MyApp 0x1673338 objectdestroyTm + 6922196
15 libswiftCore.dylib 0x3d783c _swift_release_dealloc + 56
16 libswiftCore.dylib 0x3d8950 bool swift::RefCounts<swift::RefCountBitsT<(swift::RefCountInlinedness)1>>::doDecrementSlow<(swift::PerformDeinit)1>(swift::RefCountBitsT<(swift::RefCountInlinedness)1>, unsigned int) + 160
17 SwiftUICore 0x650290 _AppearanceActionModifier.MergedBox.__deallocating_deinit + 32
18 libswiftCore.dylib 0x3d783c _swift_release_dealloc + 56
19 libswiftCore.dylib 0x3d8950 bool swift::RefCounts<swift::RefCountBitsT<(swift::RefCountInlinedness)1>>::doDecrementSlow<(swift::PerformDeinit)1>(swift::RefCountBitsT<(swift::RefCountInlinedness)1>, unsigned int) + 160
20 SwiftUICore 0x651b44 closure #1 in _AppearanceActionModifier.MergedBox.update()partial apply + 28
21 libswiftCore.dylib 0x3d783c _swift_release_dealloc + 56
22 libswiftCore.dylib 0x3d8950 bool swift::RefCounts<swift::RefCountBitsT<(swift::RefCountInlinedness)1>>::doDecrementSlow<(swift::PerformDeinit)1>(swift::RefCountBitsT<(swift::RefCountInlinedness)1>, unsigned int) + 160
23 libswiftCore.dylib 0x3b56fc swift_arrayDestroy + 196
24 libswiftCore.dylib 0xa2a54 _ContiguousArrayStorage.__deallocating_deinit + 96
25 libswiftCore.dylib 0x3d783c _swift_release_dealloc + 56
26 libswiftCore.dylib 0x3d8950 bool swift::RefCounts<swift::RefCountBitsT<(swift::RefCountInlinedness)1>>::doDecrementSlow<(swift::PerformDeinit)1>(swift::RefCountBitsT<(swift::RefCountInlinedness)1>, unsigned int) + 160
27 SwiftUICore 0x4a6c4c type metadata accessor for _ContiguousArrayStorage<CVarArg> + 120
28 libswiftCore.dylib 0x3d783c _swift_release_dealloc + 56
29 libswiftCore.dylib 0x3d8950 bool swift::RefCounts<swift::RefCountBitsT<(swift::RefCountInlinedness)1>>::doDecrementSlow<(swift::PerformDeinit)1>(swift::RefCountBitsT<(swift::RefCountInlinedness)1>, unsigned int) + 160
30 SwiftUICore 0x4a5d88 static Update.dispatchActions() + 1332
31 SwiftUICore 0xa0db28 closure #2 in closure #1 in ViewRendererHost.render(interval:updateDisplayList:targetTimestamp:) + 132
32 SwiftUICore 0xa0d928 closure #1 in ViewRendererHost.render(interval:updateDisplayList:targetTimestamp:) + 708
33 SwiftUICore 0xa0b0d4 ViewRendererHost.render(interval:updateDisplayList:targetTimestamp:) + 556
34 SwiftUI 0x8f1634 UIHostingViewBase.renderForPreferences(updateDisplayList:) + 168
35 SwiftUI 0x8f495c closure #1 in UIHostingViewBase.requestImmediateUpdate() + 72
36 SwiftUI 0xcc700 thunk for @escaping @callee_guaranteed () -> () + 36
37 libdispatch.dylib 0x2370 _dispatch_call_block_and_release + 32
38 libdispatch.dylib 0x40d0 _dispatch_client_callout + 20
39 libdispatch.dylib 0x129e0 _dispatch_main_queue_drain + 980
40 libdispatch.dylib 0x125fc _dispatch_main_queue_callback_4CF + 44
41 CoreFoundation 0x56204 __CFRUNLOOP_IS_SERVICING_THE_MAIN_DISPATCH_QUEUE__ + 16
42 CoreFoundation 0x53440 __CFRunLoopRun + 1996
43 CoreFoundation 0x52830 CFRunLoopRunSpecific + 588
44 GraphicsServices 0x11c4 GSEventRunModal + 164
45 UIKitCore 0x3d2eb0 -[UIApplication _run] + 816
46 UIKitCore 0x4815b4 UIApplicationMain + 340
47 SwiftUI 0x101f98 closure #1 in KitRendererCommon(_:) + 168
48 SwiftUI 0xe2664 runApp<A>(_:) + 100
49 SwiftUI 0xe5490 static App.main() + 180
50 MyApp 0x8a7828 main + 4340250664 (MyApp.swift:4340250664)
51 ??? 0x1ba496ec8 (Missing)
We will be creating N NWListener objects and M NWConnection objects in our process's communication subsystem: server sockets, accepted client sockets on the server, and client sockets on the clients.
Both NWConnection and NWListener rely on DispatchQueue to deliver state changes, incoming connections, send/recv completions etc.
What DispatchQueues should I use and why?
A global concurrent dispatch queue (and which QoS?) for all NWConnection and NWListener objects?
One custom concurrent queue (which QoS?) for all NWConnection and NWListener objects? (Does that get targeted to one of the global queues anyway?)
One custom concurrent queue per NWConnection and NWListener, all targeted to a global concurrent dispatch queue (and which QoS?)?
One custom concurrent queue per NWConnection and NWListener, all targeted to a single custom concurrent target queue?
For each option above, how am I impacted in terms of parallelism, concurrency, throughput, and latency, and how is the overall system impacted (with other processes also running)?
Separate questions (sorry for the digression):
Are global concurrent queues specific to a process or shared across all processes on a device?
Can I safely use setSpecific on global dispatch queues in our app?
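
For illustration only, and not a recommendation on which of the options above is best: the Network framework start APIs simply take whatever DispatchQueue you pass, so a per-connection serial queue looks like the sketch below. The labels and port are placeholders.

import Network

let listenerQueue = DispatchQueue(label: "com.example.listener")       // placeholder label

let listener = try! NWListener(using: .tcp, on: 4567)                  // placeholder port; sketch only
listener.newConnectionHandler = { connection in
    // A serial queue per connection keeps that connection's callbacks ordered.
    let connectionQueue = DispatchQueue(label: "com.example.connection")
    connection.start(queue: connectionQueue)
}
listener.start(queue: listenerQueue)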
I understand that GCD and its underlying implementations have evolved over time, and many things have not been shared explicitly in Apple documentation.
The main concepts are DispatchQueue (serial and concurrent queues), DispatchQoS, target queues, and the system-provided queues: main and globals.
I have some doubts and questions to clarify:
[Main Dispatch Queue] [Link] "Because the main queue doesn't behave entirely like a regular serial queue, it may have unwanted side-effects when used in processes that are not UI apps (daemons). For such processes, the main queue should be avoided." What does that mean? Can you elaborate?
[Global Concurrent Dispatch Queues] Are they global to a process or shared across processes on a device? I believe it is the first case but just wanted to be sure.
[Global Concurrent Dispatch Queues] Does the system create 4 (one for each QoS) * 2 (overcommitting and non-overcommitting queues) = 8 queues in all? When does which type of queue come into play?
[Custom Queue][Target Queue concept] [swift-corelibs-libdispatch/man/dispatch_queue_create.3] QUOTE The default target queue of all dispatch objects created by the application is the default priority global concurrent queue. UNQUOTE Is this still true?
We could not find a mention of this in any recent official Apple documentation (though some old forum threads (one more) and GitHub code documentation indicate the same).
The official documentation only says:
[dispatch_set_target_queue] QUOTE If you want the system to provide a queue that is appropriate for the current object UNQUOTE
[dispatch_queue_create_with_target] QUOTE Specify DISPATCH_TARGET_QUEUE_DEFAULT to set the target queue to the default type for the current dispatch queue. UNQUOTE
[Dispatch > DispatchQueue > init] QUOTE Specify DISPATCH_TARGET_QUEUE_DEFAULT if you want the system to provide a queue that is appropriate for the current object. UNQUOTE
What is the difference between passing the target queue as nil vs DISPATCH_TARGET_QUEUE_DEFAULT to the DispatchQueue initializer?
[Custom Queue][Target Queue concept] [dispatch_set_target_queue] QUOTE The system doesn't allocate threads to the dispatch queue if it has a target queue, unless that target queue is a global concurrent queue. UNQUOTE
But the system does allocate threads to custom dispatch queues whose default target is a global concurrent queue.
What does that mean? What does targeting a global concurrent queue imply in that case?
[System / GCD Thread Pool] The pool that executes work items from dispatch queues: is it per queue, per process, or shared across all processes on the device?
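
As a small illustration of the target-queue concept discussed above (labels are made up): a custom queue can target another custom serial queue, in which case work from both queues is serialized through the bottom of the hierarchy, which in turn drains on the global pool's threads.

import Dispatch

// Bottom of the hierarchy: a serial "funnel" queue.
let funnel = DispatchQueue(label: "com.example.funnel")                     // made-up label

// Two queues that target the funnel; their items are serialized with respect
// to each other because they share the same serial target.
let network = DispatchQueue(label: "com.example.network", target: funnel)
let database = DispatchQueue(label: "com.example.database", target: funnel)

network.async { print("network work") }
database.async { print("database work") }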
I’m currently developing an iOS metronome app using DispatchSourceTimer as the timer. The interval is set very small, around 50 milliseconds, and I’m using CFAbsoluteTimeGetCurrent to calculate the elapsed time to ensure the beat is played within a ±0.003-second margin.
The problem is that once the app goes to the background, the timing becomes unstable—it slows down noticeably, then recovers after 1–2 seconds.
When coming back to the foreground, it suddenly speeds up, and again, it takes 1–2 seconds to return to normal. It feels like the app is randomly “powering off” and then “overclocking.” It’s super frustrating.
I’ve noticed that some metronome apps in the App Store have similar issues, but there’s one called “Professional Metronome” that’s rock solid with no such problems. What kind of magic are they using? Any experts out there who can help? Thanks in advance!
P.S. I’ve already enabled background audio permissions.
The professional metronome that has no issues: https://link.zhihu.com/?target=https%3A//apps.apple.com/cn/app/pro-metronome-%25E4%25B8%2593%25E4%25B8%259A%25E8%258A%2582%25E6%258B%258D%25E5%2599%25A8/id477960671
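
Not to speak for that particular app, but one common way metronome apps avoid timer jitter is to stop firing beats from a CPU timer entirely and instead schedule the click on the audio clock, which keeps running steadily in the background once background audio is enabled. A rough Swift sketch under that assumption; the buffer-building helper and tempo are placeholders:

import AVFoundation

// Hypothetical helper: builds a one-beat buffer (click followed by silence) at the given tempo.
func makeBeatBuffer(bpm: Double) -> AVAudioPCMBuffer {
    // ... construct and return the buffer; omitted in this sketch ...
    fatalError("placeholder")
}

let engine = AVAudioEngine()
let player = AVAudioPlayerNode()
let beatBuffer = makeBeatBuffer(bpm: 120)   // 120 BPM is a placeholder tempo

engine.attach(player)
engine.connect(player, to: engine.mainMixerNode, format: beatBuffer.format)
try? engine.start()   // sketch only; handle errors properly in real code

// Looping a one-beat buffer repeats it with sample accuracy, so the beat spacing
// comes from the audio clock rather than from a CPU timer.
player.scheduleBuffer(beatBuffer, at: nil, options: .loops, completionHandler: nil)
player.play()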
I’ve been experimenting with Dispatch, and workloops in particular. I gather that they’re similar to serial queues, except that they reorder work items by QoS. I suspect there’s more to workloops than meets the eye, though; calling dispatch_set_target_queue on them has no effect, in spite of <dispatch/workloop.h> saying that workloops “can be passed to all APIs accepting a dispatch queue, except for functions from the dispatch_sync() family”.
Workloops keep showing up in odd places like Metal and Network.framework backtraces, and <dispatch/workloop.h> includes functionality for tying workloops to os_workgroups (?!).
What exactly is a workloop beyond just a serial queue with priority ordering, and why can’t I set the target queue of one?
I have the following TaskExecutor code in Swift 6 and am getting the following error:
//Error
Passing closure as a sending parameter risks causing data races between main actor-isolated code and concurrent execution of the closure.
May I know what is the best way to approach this?
This is the default code generated by Xcode when creating a Vision Pro App using Metal as the Immersive Renderer.
Renderer
@MainActor
static func startRenderLoop(_ layerRenderer: LayerRenderer, appModel: AppModel) {
    Task(executorPreference: RendererTaskExecutor.shared) { //Error
        let renderer = Renderer(layerRenderer, appModel: appModel)
        await renderer.startARSession()
        await renderer.renderLoop()
    }
}

final class RendererTaskExecutor: TaskExecutor {
    private let queue = DispatchQueue(label: "RenderThreadQueue", qos: .userInteractive)

    func enqueue(_ job: UnownedJob) {
        queue.async {
            job.runSynchronously(on: self.asUnownedSerialExecutor())
        }
    }

    func asUnownedSerialExecutor() -> UnownedTaskExecutor {
        return UnownedTaskExecutor(ordinary: self)
    }

    static let shared: RendererTaskExecutor = RendererTaskExecutor()
}
Crash occurs in @MainActor class or function on iOS 14
Apps built and distributed with Xcode 16 and Swift 6 crash on iOS 14 devices.
We create a static library and put it in our app's library.
The crash occurs in every class or function of the static library that is marked @MainActor.
It does not occur on iOS / iPadOS 15 and later.
If we set the minimum supported version of the static library to iOS 11, the crash occurs; if we set it to iOS 14, it does not.
Is there a way to keep the minimum version of the static library at iOS 11 and prevent the crashes?
I create a DispatchIO object (in Swift) from a socketpair, set the low/high water marks to 1, and then call read on it. Elsewhere (multi-threaded, of course), I get data from somewhere, and write to the other side of it. Then when my data is done, I call dio?.close()
The cleanup handler never gets called.
What am I missing? (ETA: OK, I can get it to work by calling dio?.close(flags: .stop), so that may be what I was missing.)
(Also, I really wish it would deliver all the data available at once for the read, rather than one at a time.)
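
For reference, a minimal sketch of the DispatchIO pattern being described, including the low water mark and the .stop close mentioned above; the socketpair setup, labels, and prints are placeholders, not the poster's actual code:

import Dispatch
import Foundation

// Hypothetical socketpair; fds[0] is read here, fds[1] is written elsewhere.
var fds: [Int32] = [0, 0]
socketpair(AF_UNIX, SOCK_STREAM, 0, &fds)

let ioQueue = DispatchQueue(label: "com.example.dispatchio")   // placeholder label

let dio = DispatchIO(type: .stream, fileDescriptor: fds[0], queue: ioQueue) { error in
    // Cleanup handler: runs once the channel is closed and all I/O has finished.
    print("channel closed, error code \(error)")
}
dio.setLimit(lowWater: 1)

dio.read(offset: 0, length: Int.max, queue: ioQueue) { done, data, error in
    if let data, !data.isEmpty {
        print("read \(data.count) byte(s)")
    }
    if done {
        print("read complete, error code \(error)")
    }
}

// Later, to tear down even if a read is still pending:
dio.close(flags: .stop)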
Let's say I queue some tasks on DispatchQueue.global() and then switch to another app or lock the screen for a while. The app was not terminated but stayed in the background.
Is there a chance that tasks that were queued but have not yet started could be discarded, even if the app hasn't been terminated, after switching to another app or locking the screen for a while?
I'm calling the following function in a SwiftUI View modifier in Xcode 16.1:
nonisolated func f() -> CGFloat {
    let semaphore = DispatchSemaphore(value: 0)
    var a: CGFloat = 0
    DispatchQueue.main.async {
        a = ...
        semaphore.signal()
    }
    semaphore.wait()
    return a
}
The app freezes, and code in the main queue is never executed.
We are getting a crash in _dispatch_assert_queue_fail when the cancellationHandler on NSProgress is called.
We do not see this with iOS 17.x, only on iOS 18. We are building in Swift 6 language mode and do not have any compiler warnings.
We have a type whose init looks something like this:
init(
    request: URLRequest,
    destinationURL: URL,
    session: URLSession
) {
    progress = Progress()
    progress.kind = .file
    progress.fileOperationKind = .downloading
    progress.fileURL = destinationURL

    progress.pausingHandler = { [weak self] in
        self?.setIsPaused(true)
    }
    progress.resumingHandler = { [weak self] in
        self?.setIsPaused(false)
    }
    progress.cancellationHandler = { [weak self] in
        self?.cancel()
    }
When the progress is cancelled, and the cancellation handler is invoked. We get the crash. The crash is not reproducible 100% of the time, but it happens significantly often. Especially after cleaning and rebuilding and running our tests.
* thread #4, queue = 'com.apple.root.default-qos', stop reason = EXC_BREAKPOINT (code=1, subcode=0x18017b0e8)
* frame #0: 0x000000018017b0e8 libdispatch.dylib`_dispatch_assert_queue_fail + 116
frame #1: 0x000000018017b074 libdispatch.dylib`dispatch_assert_queue + 188
frame #2: 0x00000002444c63e0 libswift_Concurrency.dylib`swift_task_isCurrentExecutorImpl(swift::SerialExecutorRef) + 284
frame #3: 0x000000010b80bd84 MyTests`closure #3 in MyController.init() at MyController.swift:0
frame #4: 0x000000010b80bb04 MyTests`thunk for @escaping @callee_guaranteed @Sendable () -> () at <compiler-generated>:0
frame #5: 0x00000001810276b0 Foundation`__20-[NSProgress cancel]_block_invoke_3 + 28
frame #6: 0x00000001801774ec libdispatch.dylib`_dispatch_call_block_and_release + 24
frame #7: 0x0000000180178de0 libdispatch.dylib`_dispatch_client_callout + 16
frame #8: 0x000000018018b7dc libdispatch.dylib`_dispatch_root_queue_drain + 1072
frame #9: 0x000000018018bf60 libdispatch.dylib`_dispatch_worker_thread2 + 232
frame #10: 0x00000001012a77d8 libsystem_pthread.dylib`_pthread_wqthread + 224
Any thoughts on why this is crashing and what we can do to work around it? I have not been able to extract our code into a simple reproducible case yet, and I mostly see it when running our code in a testing environment (XCTest), although I have been able to reproduce it running an app a few times; it's just less common.
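
A hedged guess based on the backtrace (swift_task_isCurrentExecutorImpl firing on a root-queue thread): if the surrounding type is actor-isolated, the handler closures may be isolation-checked when NSProgress invokes them on an arbitrary queue. One experiment, assuming the real type is @MainActor-isolated (adjust to whatever isolation it actually uses), is to make the hop explicit inside the handler:

progress.cancellationHandler = { [weak self] in
    // NSProgress may call this on any queue, so hop to the owner's isolation
    // explicitly instead of calling the isolated method directly.
    Task { @MainActor [weak self] in
        self?.cancel()
    }
}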