I’d love to see Apple implement a Bionic Reading feature as a system-wide accessibility option. This type of reading aid highlights the first part of each word in bold to help guide the eyes and improve comprehension.
It’s been shown to be especially helpful for people with ADHD, dyslexia, and other neurodivergent needs. Having a toggle in Settings > Accessibility would be life-changing.
Ideally, it could be:
• Available system-wide or per-app
• Customizable in how much of each word is bolded
• Supported in Safari, Messages, Books, News, and other apps
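For illustration, here is a rough sketch of the effect in SwiftUI (my own approximation using AttributedString, not an existing system API; the halfway split point is arbitrary):

import SwiftUI

// Rough approximation of the requested effect: bold roughly the first half of each word.
func bionicStyled(_ text: String) -> AttributedString {
    var result = AttributedString()
    for word in text.split(separator: " ") {
        var styledWord = AttributedString(String(word))
        let boldCount = max(1, word.count / 2)   // arbitrary split point
        if let range = styledWord.range(of: String(word.prefix(boldCount))) {
            styledWord[range].font = .body.bold()
        }
        result.append(styledWord)
        result.append(AttributedString(" "))
    }
    return result
}

// Usage: Text(bionicStyled("Reading aids can guide the eyes."))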
[macOS 15.4] Game Controller Background Input Capture Broken - Accessibility App No Longer Functions
Our application (https://apps.apple.com/us/app/gamecontroller-mapper/id6737088417), which maps game controller inputs to keyboard/mouse events system-wide, has stopped functioning properly after the macOS 15.4 update. Specifically, the app can no longer capture game controller inputs when running in the background, severely impacting its core functionality.
Environment
macOS version: 15.4
Previous working versions: All versions prior to 15.4
App type: Background utility with accessibility permissions
Hardware: All game controller brands compatible with macOS
Detailed Description
Before macOS 15.4
Our application correctly captured game controller inputs from any brand connected to Mac and successfully translated them to keyboard/mouse events system-wide. Users could control any application (e.g., scrolling through documents in Preview using controller buttons) while our app ran in the background with the accessibility permissions granted.
After macOS 15.4
The application only works when it has active focus (is in the foreground). When any other application gains focus, our app completely stops receiving or detecting any input events from the game controller while running in the background. For instance, pressing the 'down' button on the controller while another app is active results in no event being registered within our application.
We've tried updating the app to work in accessory mode (in the menu bar), but the issue persists.
Steps to Reproduce
Install our application on macOS 15.3 or earlier
Grant accessibility permissions when prompted
Connect a compatible game controller (e.g., Xbox or other controller)
Open another application (e.g., Preview with a PDF document)
Press buttons on the controller to navigate the document without touching the keyboard
Expected result on 15.3: Controller inputs are translated to keyboard events, even when our app is in the background
Upgrade to macOS 15.4
Repeat steps 2-5
Actual result on 15.4: Controller inputs are only translated to keyboard events when our application has focus
Technical Implementation
Our app uses the following; a minimal sketch combining them appears after this list:
CGEvent.tapCreate() to create a global event tap
CGEvent for simulating keyboard and mouse events
GCController.extendedGamepad?.valueChangedHandler for detecting controller inputs
Proper NSAccessibilityUsageDescription and appropriate entitlements
GCController.shouldMonitorBackgroundEvents = true to ensure controller events continue when the app is inactive
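Putting those pieces together, a minimal sketch of the background-capture path looks like this (the class name and the D-pad-to-arrow-key mapping are illustrative, not our exact implementation):

import GameController
import CoreGraphics

final class ControllerBridge {
    func start() {
        // Keep receiving controller events while the app is inactive.
        GCController.shouldMonitorBackgroundEvents = true

        NotificationCenter.default.addObserver(
            forName: .GCControllerDidConnect, object: nil, queue: .main
        ) { [weak self] note in
            guard let controller = note.object as? GCController,
                  let gamepad = controller.extendedGamepad else { return }
            gamepad.valueChangedHandler = { pad, element in
                // Example mapping: D-pad "down" -> down-arrow key (key code 125).
                if element == pad.dpad, pad.dpad.down.isPressed {
                    self?.postKey(125)
                }
            }
        }
    }

    private func postKey(_ code: CGKeyCode) {
        // Synthesize a key-down/key-up pair and post it to the HID event stream.
        let source = CGEventSource(stateID: .hidSystemState)
        CGEvent(keyboardEventSource: source, virtualKey: code, keyDown: true)?
            .post(tap: .cghidEventTap)
        CGEvent(keyboardEventSource: source, virtualKey: code, keyDown: false)?
            .post(tap: .cghidEventTap)
    }
}

On 15.3 and earlier, the valueChangedHandler above fires even when another app has focus; on 15.4 it only fires while our app is frontmost.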
Possible Relation to Recent Changes
We noticed in the macOS 15.4 Release Notes:
Game Controller - Resolved Issues:
Fixed: Game controllers might stop responding when accessibility features, such as Voice Over, are enabled. (141497799)
We suspect this fix might have introduced a regression or intentional limitation affecting applications like ours that rely on background event simulation with game controller input.
Impact
This change severely impacts:
Applications designed to use game controllers as assistive input devices for users who may have difficulty using traditional keyboard and mouse inputs
Applications for media control, presentation navigation, and other similar use cases
Users who rely on our application for accessibility purposes
Questions
Is this an intentional security change or an unintended side effect of the controller fix mentioned in the release notes?
Are there any new APIs or alternative approaches we should implement to restore functionality?
If this is a system bug, when can we expect a fix?
We would greatly appreciate any guidance on how to restore our application's functionality. Thank you for your assistance.
Hello!
I ran into unexpected behavior with hardware keyboard focus in UITests.
A clear description of the problem
When running UITests on the iOS Simulator with both "Full Keyboard Access" and "Connect Hardware Keyboard" options enabled, there is a noticeable delay between keyboard actions for focus managing (like pressing Tab or arrow keys). The delay seems to increase with repeated input and suggests that events are being queued instead of processed immediately.
I'll explain below why I make this assumption.
A step-by-step set of instructions to reproduce the problem
Launch the iOS Simulator.
Enable both "Full Keyboard Access" and "Connect Hardware Keyboard" in the Simulator settings.
Run a UITest on a target application (ideally an endless or long-running test; a minimal sketch of such a test follows these steps).
Once the app is launched, press the Tab key several times.
Observe the delay in focus movement.
Optionally, press the Tab or arrow keys rapidly, then stop the UITest.
After stopping, you’ll see a burst of rapid focus changes.
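For reference, the long-running UITest from step 3 can be as simple as this sketch (the class and expectation names are placeholders):

import XCTest

final class KeyboardFocusDelayUITests: XCTestCase {
    func testObserveHardwareKeyboardFocusDelay() {
        let app = XCUIApplication()
        app.launch()

        // Keep the test running so Tab / arrow-key presses on the hardware keyboard
        // can be observed interactively; the unfulfilled expectation only fails the
        // test when the timeout elapses, which is fine for this manual observation.
        let keepAlive = expectation(description: "keep the test running")
        wait(for: [keepAlive], timeout: 600)
    }
}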
What results you expected
We expected keyboard actions (like Tab) to be handled immediately and the UI focus to update smoothly during UITests.
What results you saw
There was a delay of 4–10 seconds (and sometimes more) between pressing a key and seeing a response. All of the queued keyboard events (used for managing focus) are then replayed at once after the UITest stops.
The version of Xcode you are using
Xcode: Version 16.3 (16E140)
Simulator: iPhone 16 Pro (iOS 18.4 and 18.1)
Simulator: iPad Pro 11-inch (M4) (iPadOS 17.5)
I need to understand the different layers that are there in the iPhone X and later OLED screens as I am designing a hardware attachment. They seem to be projecting letters and images from a different layer than the subpixel layer. Is this proprietary information, or is there a resource that explores them?
When I am doing a file search, in TextEdit, and on certain websites, the space bar stops working as soon as I start typing. If I hold down the Option key, the space bar works as normal. I have checked every setting I can think of and nothing has helped.
I’m developing an ARKit application where I aim to attach procedurally generated audio to detected planes in the environment. While using a static audio file with SCNAudioSource and SCNAudioPlayer works as expected, integrating procedural audio via AVAudioSourceNode does not produce any sound, nor does it generate any error messages: Stack Overflow Post
Working Implementation with Static Audio File:
let audioPlayer = SCNAudioPlayer(source: audioSource)
node.addAudioPlayer(audioPlayer)
Attempted Implementation with Procedural Audio:
let audioNode = AVAudioSourceNode { _, _, frameCount, audioBufferList in
    // Audio generation code (fill the requested frames here)
    return noErr
}
let audioPlayer = SCNAudioPlayer(avAudioNode: audioNode)
node.addAudioPlayer(audioPlayer)
In this setup, the AVAudioSourceNode successfully generates audio when connected directly to an AVAudioEngine. However, when used with SCNAudioPlayer and attached to an SCNNode, it fails to produce sound. What doesn't work is driving the SCNAudioPlayer with procedural audio from an AVAudioNode, as documented here:
Apple docs
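For completeness, the direct-to-engine setup that does produce sound looks roughly like this (a sketch assuming the audioNode defined above):

import AVFoundation

// Connecting the AVAudioSourceNode straight to an AVAudioEngine produces audio.
let engine = AVAudioEngine()
engine.attach(audioNode)
engine.connect(audioNode, to: engine.mainMixerNode, format: nil)
try? engine.start()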
Additionally, I explored the WWDC18 AR game project, SwiftShot, which utilizes SCNAudioPlayer(avAudioNode:). After updating it for the latest Xcode, the graphics function correctly, but the audio does not play. I also noted that the Apple documentation mentions an audioPlayerWithAVAudioNode: method, stating:
Using this initializer is typically not necessary. Instead, call the audioPlayerWithAVAudioNode: method, which returns a cached audio player object if one for the specified AVAudioNode object has already been created and is available for use.
However, this method does not appear to be available in Swift. Any insights or guidance on this matter would be greatly appreciated.
I have been working on a feature that uses a SwiftUI List with both previous- and next-page loading: the user can scroll up or down to load the previous or next page of data.
Recently, I ran into an accessibility issue while testing with VoiceOver: when the user lands on the listing screen and swipes through the navigation elements, focus eventually reaches the list and should highlight the first visible item.
But when the user swipes back:
Should it load the previous page and announce the previous item, or should focus return to the navigation items?
If it loads the previous item, what happens when the user wants to move back to the navigation to perform other actions, and vice versa?
Has anyone come across this kind of issue? What is the expected standard behavior when a list supports both previous- and next-page scrolling?
I tried different gestures (https://support.apple.com/en-in/guide/iphone/iph3e2e2281/ios), but it isn't working.
Hi everyone,
My team and I are developing an accessibility-focused VisionOS app (MindTap) as part of a university project, aiming to support individuals with Locked-In Syndrome using Brain-Computer Interface (BCI) signals to trigger interactions (e.g., tapping) within the Apple Vision Pro environment.
Problem 1: Simulating Eye Tracking in Simulator
We are testing onHover with Send pointer to the device under I/O > Input in the simulator, and while it mostly works (a bit laggy), we found that onHover won't function on the actual Vision Pro hardware. From what I understand, we should be using FocusState for proper gaze interaction, but testing this requires the physical device. Is there any workaround or official Apple-recommended way to simulate Focus-based gaze detection without a real Vision Pro?
Problem 2: WebSocket-triggered "Click" doesn't work outside the app
We successfully use WebSocket to send a custom signal (a "1" from the brain signal device) to trigger an action inside our app. However, when the user opens a third-party app like Apple News, the WebSocket-triggered "click" no longer works.
We suspect this is due to sandbox restrictions or lack of system-level permissions.
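For context, the in-app trigger path that does work looks roughly like this (a sketch; the device URL and handler are illustrative):

import Foundation

final class BCISignalReceiver {
    private var task: URLSessionWebSocketTask?
    var onTapSignal: (() -> Void)?

    func connect() {
        // Hypothetical address of the brain-signal device on the local network.
        let url = URL(string: "ws://192.168.0.10:8080")!
        task = URLSession.shared.webSocketTask(with: url)
        task?.resume()
        receive()
    }

    private func receive() {
        task?.receive { [weak self] result in
            if case .success(.string(let text)) = result, text == "1" {
                // Trigger the in-app "tap" action on the main thread.
                DispatchQueue.main.async { self?.onTapSignal?() }
            }
            self?.receive()   // keep listening for the next signal
        }
    }
}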
Is it possible in any way to:
Trigger interaction events outside the app using custom input (like BCI via WebSocket)?
Access system-wide click/tap simulation APIs from within visionOS apps?
Integrate this with accessibility services (like Voice Control or AssistiveTouch)?
We'd appreciate any official guidance or tips from others building similar accessibility apps with alternative input methods in VisionOS.
Thanks in advance for any insight you can provide!
Please update Accessibility OS Settings for VoiceOver in iPhone iOS and iPadOS to include frames on the Rotor, and to make web navigation and component gestures easier to find and assign. Please add content to the iPhone and iPad Apple User Guide to use VoiceOver in web navigation with touch gestures.
Specifically... iframes.
There is no clear guidance in Apple documentation for VoiceOver users in iPhone or iPadOS to access iframes with touch gestures. A common belief as written on AppleVis, other blogs, and internet searches is that iframes in Safari or a webView in an app are only available with explore by touch.
If explore by touch is the only option for some interactions, that needs to be included in Apple User Guides. If not, details on equivalent touch gestures for VO that have keyboard interactions in Mac need to be clear for users.
VoiceOver for Mac includes a default keyboard interaction, VO-Command-F, in its extensive User Guide (https://support.apple.com/guide/voiceover/by-images-or-frames-mchlp2740/mac). A user can also add a rotor option for navigating web content by frames.
VoiceOver for iPhone and iPad does not include a default swipe gesture assigned to frames. An option is not available for the Rotor.
While the iPhone User Guide explains that gestures can be customized (https://support.apple.com/guide/iphone/customize-gestures-and-keyboard-shortcuts-iph59a8e6fd2/18.0/ios/18.0), it is not made clear that the command to assign, "Move to the next frame," is tucked into the advanced navigation commands in the VoiceOver Accessibility Settings. At least on my phone, the word "frame" was not searchable, despite the All Commands screen having a search bar.
I have some doubts about how VoiceOver handles focus when the screen updates.
When a new UIViewController is pushed onto a UINavigationController or presented modally, how does VoiceOver decide which element to focus on? Is there a way to control or customize this behavior?
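For context, this is the kind of control I have in mind, sketched with the one mechanism I'm aware of (posting a screenChanged notification with a target element; the view controller and label are placeholders):

import UIKit

final class DetailViewController: UIViewController {
    let headingLabel = UILabel()

    override func viewDidAppear(_ animated: Bool) {
        super.viewDidAppear(animated)
        // Ask VoiceOver to move focus to the heading instead of its default choice.
        UIAccessibility.post(notification: .screenChanged, argument: headingLabel)
    }
}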
In a UISplitViewController, when an item is selected in the primary view controller, the focus should shift to the relevant content in the secondary view controller. How can we ensure that VoiceOver correctly moves focus to the right element in the secondary panel?
I am developing a visionOS app for controlling an underwater ROV. I have ornaments with telemetry and buttons around a central video feed view. I have custom button mappings, such as "A" for locking the depth of the drone. However, when I look at buttons or certain ornaments, my custom gamepad logic is prevented from running: when a SwiftUI Button gains focus on visionOS, pressing the controller's A button triggers the system's default "click" on that Button rather than my custom buttonA handler. Essentially, the system's focus handling intercepts my A-press events and prevents my custom gamepad logic from running.
Is there a way to disable the built in gamepad interaction and only allow my custom gamepad mappings?
Hi everybody,
I'm trying to build a QR code scanner and generator app for iOS.
Whenever I try to implement the camera, the app crashes with this message:
This app has crashed because it attempted to access privacy-sensitive data without a usage description. The app's Info.plist must contain an NSCameraUsageDescription key with a string value explaining to the user how the app uses this data.
I tried reducing the app to the bare minimum (nothing but the camera), with the same result.
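For reference, the key the message asks for goes into the target's Info.plist; I assume the entry would look something like this (the usage string is only an example):

<key>NSCameraUsageDescription</key>
<string>This app uses the camera to scan QR codes.</string>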
Any ideas?
Thank you and
best regards,
Horst Schippers
Hi,
I'm trying to fix a tvOS view for the VoiceOver accessibility feature:
TabView { // 5 tabs
    Text(title)
    Button(play)
    ScrollView { // Live
        LazyHStack { 200 items }
    }
    ScrollView { // Continue watching
        LazyHStack { 500 items }
    }
}
When the view shows up VoiceOver reads:
"Home tab 1 of 5, Item 2" - not sure why it reads Item 2 of the first cell in scroll view, maybe beacause it just got loaded by LazyHStack.
VocieOver should only read "Home tab 1 of 5"
When moving focus to scroll view it reads:
"Live, Item 1" and after slight delay "Item 1, Item 2, Item 3, Item 4"
When moving focus to second item it reads:
"Item 2" and after slight delay "Item 1, Item 2, Item 3, Item 4"
When moving focus to third item it reads:
"Item 3" and after slight delay "Item 1, Item 2, Item 3, Item 4"
It should just read what is focused, ideally just
"Live, Item 1, 1 of 200"
then after moving focus on item 2
"Item 2, 2 of 200"
this time without the word "Live" because we are on the same scroll view (the same horizontal list)
Currently the app is unusable: we have visually impaired testers, and this reading of everything on the screen is totally confusing, because users don't know where they are or what is actually focused.
This is a video streaming app, and we are streaming all the time, even on the home page in the background: binge play moves from one item to the next, there is usually a never-ending live stream playing, and the user can switch TV channels while we continue to play. VoiceOver should only read what is focused, after user interaction.
The original Apple TV app does not do that, so it cannot be caused by some verbose accessibility setting; it correctly reads only the focused item in scrolling lists.
How do I disable reading content that is not focused?
I tried:
.accessibilityLabel(isFocused ? title : "")
.accessibilityHidden(!isFocused)
.accessibilityHidden(true) - tried on various levels in the view hierarchy
.accessibilityElement(children: .ignore) - even the focused item is not read back by VoiceOver
.accessibilityElement(children: .contain) - tried on various levels in the view hierarchy
.accessibilityElement(children: .combine) - tried on various levels in the view hierarchy
.accessibilityAddTraits(.isHeader) - tried on various levels in the view hierarchy
.accessibilityRemoveTraits(.isHeader) - tried on various levels in the view hierarchy
// the last 2 were basically an attempt to hack it
.accessibilityRotor("", textRanges: []) - another hack that I tried on the ScrollView, the LazyHStack, and the top-level view.
50+ other attempts at configuring accessibility modifiers attached to views.
I have watched all the accessibility videos and tried all the sample code projects. I haven't found a solution anywhere; internet searches turned up nothing, and AI didn't help, as it can only provide code that someone else has written before.
Any idea how to fix this?
Thanks.
I have a view dynamically overlaid on a UITableView with proper padding (added when certain conditions are met). When VoiceOver focuses on a cell beneath this overlay, the focused element does not scroll into view. I’ve noticed similar behavior in Apple’s first-party Podcasts app.
Please find the attached image for reference. How can I resolve this issue and ensure VoiceOver scrolls the focused cell into view?
Environment: Xcode 16.2
WidgetKit:
Image(uiImage: UIImage(named: "jp_jump")!)
    .resizable()
    .scaledToFit()
    .frame(width: 58, height: 16)
    .padding(EdgeInsets(top: 0, leading: 16, bottom: 0, trailing: 0))
Loading the local color image "jp_jump" crashes the widget.
Crash info:
Thread 4: EXC_RESOURCE (RESOURCE_TYPE_MEMORY: high watermark memory limit exceeded) (limit=30 MB)
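A rough sketch of one possible mitigation (downsampling the asset to its displayed size before handing it to the widget; I have not confirmed this keeps the widget under the limit):

import UIKit

// Redraw the large asset at the small size the widget actually shows,
// so the decoded bitmap handed to SwiftUI stays small.
func downsampledImage(named name: String, to size: CGSize) -> UIImage? {
    guard let original = UIImage(named: name) else { return nil }
    let renderer = UIGraphicsImageRenderer(size: size)
    return renderer.image { _ in
        original.draw(in: CGRect(origin: .zero, size: size))
    }
}

// Usage: Image(uiImage: downsampledImage(named: "jp_jump", to: CGSize(width: 58, height: 16)) ?? UIImage())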
I have a UIImageView as the background of a custom UIView subclass. The image itself does not contain any text. On top of this image view, I have added two UILabels.
To improve accessibility, I converted the entire view into a single accessibility element and set a proper accessibilityLabel. Additionally, I disabled accessibility for the UIImageView and the labels by setting isAccessibilityElement = false.
However, when VoiceOver Recognition's Text Recognition feature is enabled, VoiceOver still detects the text inside the UILabels and announces it at the end, after reading my custom accessibility label. This text should not be announced.
It seems that VoiceOver treats the UILabel content as part of the UIImageView. Additionally, when using the Explore Image rotor action, the entire subview is recognized as a single image.
Is this the expected behavior? If so, is there a way to disable VoiceOver’s text recognition for this view while keeping custom accessibility intact?
class BackgroundLabelView: UIView {
    private let backgroundImageView = UIImageView()
    private let backgroundImageView2 = UIImageView()
    private let titleLabel = UILabel()
    private let subtitleLabel = UILabel()

    override init(frame: CGRect) {
        super.init(frame: frame)
        setupView()
        configureAccessibility()
    }

    required init?(coder: NSCoder) {
        super.init(coder: coder)
        setupView()
        configureAccessibility()
    }

    private func configureAccessibility() {
        backgroundImageView.isAccessibilityElement = false
        backgroundImageView2.isAccessibilityElement = false
        titleLabel.isAccessibilityElement = false
        subtitleLabel.isAccessibilityElement = false
        isAccessibilityElement = true
        accessibilityTraits = .button
    }

    func configure(backgroundImage: UIImage?, title: String, subtitle: String) {
        backgroundImageView.image = backgroundImage
        titleLabel.text = title
        subtitleLabel.text = subtitle
        accessibilityLabel = "Holiday Offer, " + title + ", " + subtitle
    }

    private func setupView() {
        backgroundImageView2.contentMode = .scaleAspectFill
        backgroundImageView2.clipsToBounds = true
        backgroundImageView2.translatesAutoresizingMaskIntoConstraints = false
        backgroundImageView2.image = UIImage(resource: .bannerfestival)
        addSubview(backgroundImageView2)

        backgroundImageView.contentMode = .scaleAspectFit
        backgroundImageView.clipsToBounds = true
        backgroundImageView.translatesAutoresizingMaskIntoConstraints = false
        addSubview(backgroundImageView)

        titleLabel.font = UIFont.systemFont(ofSize: 18, weight: .bold)
        titleLabel.textColor = .white
        titleLabel.translatesAutoresizingMaskIntoConstraints = false
        titleLabel.numberOfLines = 0
        addSubview(titleLabel)

        subtitleLabel.font = UIFont.systemFont(ofSize: 14, weight: .regular)
        subtitleLabel.textColor = .white.withAlphaComponent(0.8)
        subtitleLabel.translatesAutoresizingMaskIntoConstraints = false
        subtitleLabel.numberOfLines = 0
        addSubview(subtitleLabel)

        NSLayoutConstraint.activate([
            backgroundImageView2.leadingAnchor.constraint(equalTo: leadingAnchor),
            backgroundImageView2.trailingAnchor.constraint(equalTo: trailingAnchor),
            backgroundImageView2.heightAnchor.constraint(equalToConstant: 200),
            backgroundImageView.centerYAnchor.constraint(equalTo: centerYAnchor),
            backgroundImageView.topAnchor.constraint(equalTo: topAnchor),
            backgroundImageView.leadingAnchor.constraint(greaterThanOrEqualTo: leadingAnchor),
            backgroundImageView.trailingAnchor.constraint(equalTo: trailingAnchor),
            backgroundImageView.bottomAnchor.constraint(equalTo: bottomAnchor),
            titleLabel.leadingAnchor.constraint(equalTo: leadingAnchor, constant: 16),
            titleLabel.trailingAnchor.constraint(lessThanOrEqualTo: centerXAnchor),
            titleLabel.bottomAnchor.constraint(equalTo: centerYAnchor, constant: -4),
            subtitleLabel.leadingAnchor.constraint(equalTo: leadingAnchor, constant: 16),
            subtitleLabel.trailingAnchor.constraint(lessThanOrEqualTo: centerXAnchor),
            subtitleLabel.topAnchor.constraint(equalTo: centerYAnchor, constant: 4)
        ])
    }

    override func layoutSubviews() {
        super.layoutSubviews()
        backgroundImageView.layer.cornerRadius = layer.cornerRadius
    }
}
Could you please make Visual Intelligence available via the Action button on the iPhone 16e? It could be limited to iPhones with the A18 and future-generation Apple chips.
I’m trying to set the accessibilityActivationPoint directly on a UITableViewCell so that VoiceOver activate on a specific button inside the cell. However, this approach doesn’t seem to work.
Instead, when I override the accessibilityActivationPoint property inside the UITableViewCell subclass and return the desired point, it works as expected.
Why doesn’t setting accessibilityActivationPoint directly on the cell work, but overriding it inside the cell does? Is there a recommended approach for handling this scenario?
The following approach works:
override var accessibilityActivationPoint: CGPoint {
    get {
        return convert(toggleSwitch.center, to: nil)
    }
    set {
        super.accessibilityActivationPoint = newValue
    }
}
but setting the activation point directly does not work:
private func configureAccessibility() {
    isAccessibilityElement = true
    accessibilityLabel = titleLabel.text
    accessibilityTraits = .toggleButton
    accessibilityActivationPoint = self.convert(toggleSwitch.center, to: self)
    accessibilityValue = toggleSwitch.accessibilityValue
}
After enabling Developer Mode on my iPhone and restarting it, the device asks me to press the Home button to confirm. Unfortunately, my Home button is broken, so I can’t access Developer Mode. The iPhone itself still works, but I can’t enable the mode. Is there any way to bypass this without the Home button?
I have a parent view containing 10 subviews. To control the VoiceOver navigation order, I set only a few elements in accessibilityElements. However, the remaining elements are not being focused or are completely inaccessible.
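For context, the setup looks roughly like this (the view and subview names are placeholders):

import UIKit

final class ProfileHeaderView: UIView {
    let avatarView = UIImageView()
    let nameLabel = UILabel()
    let followButton = UIButton(type: .system)
    // ...seven more subviews...

    func configureAccessibilityOrder() {
        // Only three of the ten subviews are listed; the question is what
        // happens to the remaining elements for VoiceOver navigation.
        accessibilityElements = [nameLabel, followButton, avatarView]
    }
}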
Is this the expected behavior? If I only specify a subset of elements in accessibilityElements, does it exclude the rest? What’s the best way to ensure all elements remain accessible while customising the order?