Streaming is available in most browsers and in the Developer app.
-
Explore new advances in App Intents
Explore all the new enhancements to the App Intents framework in this year's release. You'll find features that improve developer convenience, such as deferred properties; new capabilities, such as interactive App Intents snippets; entity view annotations; and ways to integrate with Visual Intelligence. App Intents is now more expressive and easier and smoother to adopt than ever. We'll also introduce exciting new clients of App Intents this year, including Spotlight and Visual Intelligence, and show how to write App Intents that work well in those contexts.
Chapters
- 0:00 - Introduction
- 0:55 - Interactive snippets
- 8:15 - New system integrations
- 15:01 - User experience refinements
- 21:02 - Convenience APIs
Resources
- Accelerating app interactions with App Intents
- Adopting App Intents to support system experiences
- App intent domains
- App Intents
- App Shortcuts
- Building a workout app for iPhone and iPad
- Creating your first app intent
- Integrating actions with Siri and Apple Intelligence
- Making actions and content discoverable and widely available
- PurchaseIntent
Related Videos
WWDC25
WWDC24
-
Hi, my name is Jeff, an engineer on the App Intents team. Today, we’ll be exploring new advances in App Intents.
App Intents gives your app the ability to integrate its features throughout people’s devices, including in places like Shortcuts, Spotlight, and Visual Intelligence.
If you're not familiar with the framework, I suggest checking out the “Get to know App Intents” session first. I'll be glad to have you back anytime. We have lots to go over today, including new experiences you can build with interactive snippets, additional ways your app can leverage App Intents throughout the system, more features you can use to refine your app’s user experiences, and various convenience APIs we've added to enhance your developer experience. Let’s get started with interactive snippets.
Snippets let your app display tailored views with your App Intents, either to ask for confirmation or display a result.
You can now bring those snippets to life with interactivity.
For example, you can build a gardening snippet that suggests turning on the sprinklers when the soil is too dry. Or configure a food order before sending it to the restaurant. Ooh, how about integrating with other system features like Live Activities to immediately start following the sports game after checking the score?
Let me give you a quick demo of an interactive snippet with our sample app. Then we'll explore its implementation.
The TravelTracking app contains lots of landmarks across the world.
I’ll tap on the control that runs an App Intent to find the closest landmark.
After locating it, the intent will show a snippet displaying the landmark with a heart button next to the title. I’m from Toronto. Niagara Falls is practically in my backyard, so I’ll tap the heart button to add it to my favorites.
The snippet will immediately update to show its new status.
This is built by adopting the new Snippet Intent protocol. These intents render views according to their parameters and the state of the app. You can use them to show results after an action or request confirmation. Let’s first explore how result snippets work. Any intent in your app can return a snippet intent filled with parameter values as part of its result. This is used by the system every time it needs to refresh the snippet.
When that happens, the system will populate the parameters on the snippet intent with values you specified. If any of them are app entities, they’ll be fetched from their queries.
Then the system runs the Snippet Intent’s perform method. There, the intent can access its parameters and the app state to render the view, then return that as part of the result. The view can associate buttons or toggles with any App Intents from your app.
In the example, the heart button runs the Update Favorites Intent, and the find tickets button runs the Find Tickets Intent. You don’t have to create these intents just for snippets. You can actually reuse existing ones without modifying them.
When the heart button is tapped, the system runs the corresponding intent and waits for it to complete. This ensures the favorite status is up to date. Once that finishes, the system will use the parameterized snippet intent that you returned earlier to trigger another update.
Same as before, you’ll populate the parameters with values that you specified. Any app entities will also be fetched from their queries again. Then the system will run the perform method, which gives the snippet intent a chance to render an updated view.
In this case, the heart icon is now filled in.
Any changes in the view will be animated according to the contentTransition APIs from SwiftUI.
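For instance, here's a minimal sketch of how a snippet view might opt its favorite icon into a symbol transition; the view and property names are illustrative, not from the sample app:

import SwiftUI

struct FavoriteBadge: View {
    let isFavorite: Bool

    var body: some View {
        Image(systemName: isFavorite ? "heart.fill" : "heart")
            // Animates between the outlined and filled symbol when the
            // snippet re-renders with a new favorite state.
            .contentTransition(.symbolEffect(.replace))
    }
}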
The cycle continues until the snippet is dismissed.
Let’s implement returning a snippet intent.
First, I take the existing intent and add ShowsSnippetIntent to its return type.
Then provide the snippet intent using the new parameter on the result method. This tells the system to populate the parameter with the given landmark every time the snippet intent is used.
Now let’s implement the snippet intent itself.
First of all, it needs to conform to the SnippetIntent protocol. Then I’ll add the variables it needs to render the view correctly. They must be marked as parameters, otherwise they won’t be populated by the system. And since it’s an App Intent, it also has access to AppDependencies. In the perform method’s return type, add ShowsSnippetView.
And in the body, use methods to fetch the state needed to produce the view, then return it using the view parameter of the result method.
Similar to SwiftUI views, your snippet intent will be created and run many times during its lifecycle. Therefore, ensure it doesn’t mutate the app state.
The system might even run it additional times to react to device changes, such as going to dark mode.
Also, make sure to render your views quickly so they don't feel unresponsive.
And since you only specify parameter values once, you should only use them for App Entities, which are queried every time, and primitive values that will never change. All other values should be fetched in the perform method.
With the snippet built, let’s now explore how the SwiftUI view triggers intents.
In the view body, LandmarkView uses the Button initializer to associate each button with its corresponding App Intent. Similar APIs exist for toggles as well.
These APIs were introduced with interactive widgets alongside the contentTransition modifier to customize animations.
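As a rough sketch of the toggle variant, reusing the sample's UpdateFavoritesIntent (the surrounding view is illustrative):

import AppIntents
import SwiftUI

struct FavoriteToggle: View {
    let landmark: LandmarkEntity
    let isFavorite: Bool

    var body: some View {
        // The toggle reflects the current value; flipping it runs the
        // intent, and the snippet then re-renders with fresh state.
        Toggle(isOn: isFavorite,
               intent: UpdateFavoritesIntent(landmark: landmark, isFavorite: !isFavorite)) {
            Text("Favorite")
        }
    }
}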
If you would like to learn more, check out Luca's talk from 2023. Now, let’s implement the find tickets feature using a confirmation snippet.
If I tap on the Find Tickets button, it’ll ask me how many tickets I want to search for. I'll increase it to four because I have friends coming along, then start the search.
Eventually, it’ll show the cheapest price it can find using a result snippet.
Let's go over it step by step.
Starting with the result snippet: if a button or toggle triggers an intent that presents its own snippet, it'll replace the original one. Keep in mind, this only works if the original was showing a result.
The follow-up intent can present either kind of snippet, but here we're using request confirmation to present the configuration view.
To do so, simply call the requestConfirmation method.
You can customize the action name and provide the snippet intent that drives your confirmation experience.
This method will throw an error if the snippet is canceled at any time. Don’t try to catch it, just let it terminate your perform method.
Then every interaction with the view will go through the same update cycle as the result snippet.
In the example, the snippet intent is always showing the updated number of tickets. How does it do that? Well, because I’ve modeled the search request as an AppEntity, the system will always fetch the newest value from the query when populating this parameter. So I can just pass it to my view and automatically get the latest values.
The system won’t terminate your app as long as your snippet is visible, so feel free to retain state in memory. There’s no need to store it in a database.
After all the interactions, when the search action is finally tapped, the original App Intent’s execution resumes.
Sometimes you might want your snippet to update in the middle of a task because there’s new state.
In these scenarios, simply call the static reload method on the snippet intent that should be updated.
Check out this session to learn more about designing the best interactive snippets.
Snippets were just the beginning. Let’s now explore even more ways App Intents can help you integrate your app throughout the system. The first is image search. This feature allows people to perform searches directly from a camera capture or a screenshot. New in iOS 26, your app’s search results can show up too.
Here, I have a screenshot of a landmark. I’ll highlight it to perform an image search, and the system will show a search panel.
I can select the TravelTracking app to see its results. Then when I tap on a result, it’ll open the app to the corresponding landmark page.
To support image search, implement a query that conforms to the IntentValueQuery protocol. It will accept the SemanticContentDescriptor type as input, and return an array of App Entities. Image Search will display your AppEntities using their display representations.
When a result is tapped, the corresponding AppEntity will be sent to its OpenIntent for your app to handle it.
This OpenIntent must exist, otherwise your app won’t show up.
To implement the query, start with a struct that conforms to the IntentValueQuery protocol. The values method must have SemanticContentDescriptor as its input. It contains the pixels of the selected area.
You can use APIs from Video Toolbox or CoreImage to convert it into a familiar type, such as CGImage.
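For example, here's a minimal Core Image sketch, assuming you've already obtained a plain CVPixelBuffer from the descriptor (the exact bridging from CVReadOnlyPixelBuffer isn't shown here):

import CoreImage
import CoreVideo

func makeCGImage(from pixelBuffer: CVPixelBuffer) -> CGImage? {
    let ciImage = CIImage(cvPixelBuffer: pixelBuffer)
    let context = CIContext()
    // Render the full extent of the capture into a CGImage
    // that your image-matching code can consume.
    return context.createCGImage(ciImage, from: ciImage.extent)
}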
After you perform the search, return an array of the matched entities.
Next, implement an intent that conforms to the OpenIntent protocol to support tapping on the result.
The target parameter must have the same entity type as the results.
OpenIntents aren’t just for supporting Image Search. They can also be called in other places like Spotlight, giving your users an easy way to navigate to an entity in your app.
People will have the best experience if they can find what they’re looking for quickly. So return a few pages of results to increase the likelihood of the perfect match showing up in the system UI.
You can use your servers to query a larger dataset. However, don't search for too long; otherwise, your app will feel unresponsive.
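One way to keep a server-backed search bounded is to race it against a deadline; this is just a sketch, and searchServer(for:) is a hypothetical helper:

import CoreGraphics

func boundedSearch(matching image: CGImage) async throws -> [LandmarkEntity] {
    try await withThrowingTaskGroup(of: [LandmarkEntity].self) { group in
        // Child task 1: the real remote search.
        group.addTask { try await searchServer(for: image) }
        // Child task 2: a deadline that yields no remote results.
        group.addTask {
            try await Task.sleep(for: .seconds(2))
            return []
        }
        // Take whichever finishes first and cancel the other.
        let first = try await group.next() ?? []
        group.cancelAll()
        return first
    }
}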
Finally, allow them to continue the search experience in your app if they don’t find what they’re looking for. Let’s add that to TravelTracking.
When I don’t see the result I want in the list, I can tap the more results button, which opens my app and navigates to the search view.
To implement this, use the new AppIntent macro to specify the semanticContentSearch schema. This is the new API that replaces the AssistantIntent macro, because schemas have been expanded to features outside the assistant, like visualIntelligence. Add the semantic content property required by the schema. The macro will automatically mark it as an intent parameter for you.
In the perform method, process the search metadata, then navigate to the search view.
As an additional feature, I want TravelTracking to show collections that contain the matched landmarks. But how can I have one query that returns a mixture of LandmarkEntity and CollectionEntity? The answer is UnionValues.
First, declare a UnionValue with each case representing a type of entity that the query might return. Then, change the result type to be an array of them. Now it’s possible for me to return a mixture of both. Of course, don’t forget to implement an OpenIntent for each type of entity. To learn more about UnionValues, check out Kenny’s App Intents session from 2024.
It’s great that your app’s content is searchable outside of the app, but there’s more you can do with Apple Intelligence when your app is active. Let’s talk about onscreen entities. By using NSUserActivities, your app has the ability to associate entities with onscreen content. This allows people to ask ChatGPT about things currently visible in your app.
In the TravelTracking app, while looking at Niagara Falls, I can ask Siri if this place is near the ocean. Siri will figure out I’m referring to the view on the screen and will offer to send a screenshot to ChatGPT. But LandmarkEntity supports PDF, so I’ll choose the full content option instead. I can give it a quick preview, then send it over.
The response will be displayed, and I've learned that while big, the Great Lakes are NOT an ocean.
To associate an entity with a view, we’ll start by adding the userActivity modifier to the LandmarkDetailView. Then in the update closure, associate an entity identifier with the activity.
Next, we need to support converting LandmarkEntity into a data type that ChatGPT can understand, like a PDF.
We do this by first conforming it to the Transferable protocol, then providing the PDF data representation.
Other supported types include plain text and rich text.
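The session elides how the PDF data itself is produced; here is one hypothetical way, using UIGraphicsPDFRenderer and the entity's existing name and continent properties:

import UIKit

func makePDFData(for landmark: LandmarkEntity) -> Data {
    let pageRect = CGRect(x: 0, y: 0, width: 612, height: 792) // US Letter
    let renderer = UIGraphicsPDFRenderer(bounds: pageRect)
    return renderer.pdfData { context in
        context.beginPage()
        // Draw the landmark's details into the page.
        let text = "\(landmark.name)\n\(landmark.continent)"
        (text as NSString).draw(
            in: pageRect.insetBy(dx: 36, dy: 36),
            withAttributes: [.font: UIFont.systemFont(ofSize: 18)]
        )
    }
}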
Check out the App Intents documentation to learn more about onscreen entities.
Speaking of surfacing your app’s content to the system, I want to briefly talk about Spotlight. Now that we support running actions directly from Spotlight on Mac, there are a few things you can do with App Intents to provide the best experience.
First, make your app entities conform to IndexedEntity and donate them to Spotlight. This allows Spotlight search to drive the filtering experience for your parameters.
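A minimal donation sketch, assuming a hypothetical modelData.allLandmarks() accessor for the entities:

import CoreSpotlight

func donateLandmarks(from modelData: ModelData) async throws {
    let entities = await modelData.allLandmarks()
    // Donate the IndexedEntity values so Spotlight can search
    // and filter on their indexed properties.
    try await CSSearchableIndex.default().indexAppEntities(entities)
}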
To associate entity properties with the Spotlight keys, you can now use the new indexingKey parameter on the property attribute. In this example, I gave continent a customIndexingKey. This will allow people to search for Asian landmarks by simply typing "Asia" in the text field.
As a bonus for adopting the API, this allows the Shortcuts app to automatically generate find actions for your entities.
Second, annotate your onscreen content with entities, so these entities will be prioritized in suggestions when their views are visible. Third, implement PredictableIntent so the system can learn from the intents and parameters in previous user behavior and suggest your intents accordingly. You can even provide tailored descriptions depending on their values.
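As a rough sketch of adopting PredictableIntent on the sample's crowd-status intent (the prediction configuration here is an assumption, not code from the session):

import AppIntents

struct GetCrowdStatusIntent: AppIntent, PredictableIntent {
    static let title: LocalizedStringResource = "Get Crowd Status"

    @Parameter(title: "Landmark")
    var landmark: LandmarkEntity

    // Describes a predicted invocation, tailored to the parameter
    // values the system learned from previous runs.
    static var predictionConfiguration: some IntentPredictionConfiguration {
        IntentPrediction(parameters: (\.$landmark)) { landmark in
            DisplayRepresentation(title: "Get crowd status for \(landmark.name)")
        }
    }

    func perform() async throws -> some IntentResult {
        // Retrieve status...
        return .result()
    }
}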
If you would like to learn more about this feature, check out this session on Shortcuts and Spotlight.
Now that you have all these features in your app, let me show you some new ways to refine your app’s user experiences. Let’s start with Undo. People are more likely to try different things if they know they can change their mind. That's why it's important they can reverse actions performed with your app. With the new UndoableIntent protocol, you can easily allow people to undo your App Intents with gestures they already know.
Here are some of my collections in the TravelTracking app.
I’ll use Type to Siri to run one of my custom shortcuts to delete the Sweet Deserts collection. After confirming the deletion, it’s removed from the app. If I change my mind now, I can swipe left with three fingers to trigger undo and restore the collection.
DeleteCollectionIntent participates in the undo stack by first conforming to the UndoableIntent protocol, which provides an optional undoManager property that you can register the undo actions with. Other operations like setActionName are available as well.
The system gives your intent the most relevant undo manager through this property, even when those intents are run in your extensions. This ensures your app’s undo actions across UI and App Intents stay in sync, so they can be undone in the correct order. Let’s polish this intent even more by providing alternatives to deletion.
We can use the new multiple-choice API to present several options for people to choose from. I’ll run the intent that deletes the collection again.
This time, instead of just asking me to confirm deletion, it provides an additional option to archive it.
Hmm, this collection brings back memories. I think I'll archive it instead.
In the perform method, present the multiple choice snippet by calling the requestChoice method and provide an array of options using the between parameter. Options can be created with custom titles. You can also specify a style which tells the system how to render it.
Then, you can customize the snippet with a dialog and a custom SwiftUI view.
When an option is picked, it’s returned from the requestChoice method, except for cancellation which throws an error that terminates the perform method. You shouldn’t catch this error. If a request is canceled, the intent should stop immediately.
Then as follow up, use a switch statement to branch on the chosen option. You can use the original options that you’ve created as the expected values.
Let’s talk about one more area of polish. Supported Modes gives your intents greater control over foregrounding the app. This allows them to behave differently depending on how the user is interacting with the device. For example, if I were driving, I'd want intents to give me information via voice only. But if I were looking at the device, it should take me directly to the app because it can show me more information there. With this feature, I can implement one App Intent that does both of these things. Let me show you.
Here’s the intent to get the crowd status for Niagara Falls.
I’ll turn off the Open When Run option. This resembles scenarios where foregrounding is not possible, like using Siri while wearing headphones. When I run it, it’ll only show a dialog, which can also be read by Siri.
But if I run it by turning off the dialog and enabling Open When Run, it’ll take me directly to the landmark page that shows the occupancy as well as the current weather.
Let's add this feature to the intent.
First, I’ll add the supported modes with the static variable. If I set it to background only, it tells the system the app will never be foregrounded by the intent. However, I also want the intent to take me to the app if possible. So I’ll add the foreground mode, which asks the system to launch the app before running the intent.
Then I can use the new currentMode property to check if we’re in the foreground. If we are, we can navigate accordingly.
Hmm, wait a minute. Why is it saying there’s no one at the landmark? Oh, it's closed. Let’s modify the intent so it doesn’t open the app in this scenario.
In the perform method, I can check to see if the landmark is open and exit early if it's closed. In this case, I don't want the app to be foregrounded by the system before running the intent. So I’ll modify the foreground mode to be dynamic. The available supported modes include background and three foreground modes. There’s immediate, which tells the system to foreground the app before running the intent. Dynamic, which allows the intent to decide whether or not to launch the app. And deferred, which indicates the intent will eventually foreground the app, just not immediately.
GetCrowdStatus might not launch the app, so dynamic is the perfect choice.
For these two modes, the intent can use the continueInForeground method to control exactly when to bring the app forward.
After we are sure the landmark is open, we can use the new property on systemContext to check if we can foreground the app.
If we can, we’ll call the continueInForeground method to bring it forward.
By setting alwaysConfirm to false, the system will avoid prompting if there was activity in the last few seconds.
If launching the app was successful, we can finally navigate to the crowd status view.
However, if the launch request is denied, either by the system or the user, the method will throw an error. You can catch it and handle it accordingly. Before we wrap up, I want to go over some ways we’ve made developing with App Intents easier for you.
If your App Intents want to perform UI navigation, they need to have access to all the app states driving the views. This can only be done with globally shared objects like AppDependencies or singletons. But with the new view control APIs, you can remove UI code from App Intents and let the views handle it themselves. Let’s use it to refactor our open landmark intent. First of all, we have to conform the intent to the TargetContentProvidingIntent protocol. In the SwiftUI view, I have a path property that’s used to programmatically modify the NavigationStack. Since it’s marked as a state, it can only be accessed from the view body. This is where the onAppIntentExecution view modifier comes in. It takes the type of App Intent you want to handle, and an action closure with the intent passed in. Inside, I can reference the intent’s parameters and modify the UI accordingly.
With this code in place, we can remove UI code from the intent itself, or remove the perform method entirely, as well as the dependency we no longer need. How lovely is that? The system runs your action closure shortly before foregrounding the app. Therefore, you can only read the parameter values from the intent. All other operations like requesting values are not supported.
And if multiple views have the same modifier, all of them will be run, so each view can respond accordingly. If your app supports several scenes or windows, you might want to control which one runs a particular intent. For example, when I’m editing a collection in the TravelTracking app, I wouldn’t want it to navigate to a landmark in that scene. It'd be very disruptive. Thankfully, I can control this by using the handlesExternalEvents APIs. A target content providing intent has the contentIdentifier property, which defaults to its persistentIdentifier, and that's usually the intent's struct name.
You can always customize this value to be even more specific.
We can use this with the handlesExternalEvents modifier on scenes to set the activation condition, which tells the system to use that scene to execute the App Intent. It’ll create one if it doesn’t exist already.
Make sure the identifier in the array matches the content identifier property of the intent you want to handle. But if the activation conditions are dynamic, you can use the same modifier on views instead. Here, I’m only allowing the scene to handle OpenLandmarkIntent if we are not editing a collection.
You can learn more about SwiftUI scenes and activation conditions from these two sessions.
For UIKit, you can conform intents to the UISceneAppIntent protocol, which gives them the ability to access UIScene with these members. Or your scene delegate can respond to an intent execution after conforming to AppIntentSceneDelegate.
Finally, you can also use activation conditions to decide which scene will handle an App Intent. The next improvement is the new ComputedProperty macro that lets you avoid storing values on AppEntities. Here’s my SettingsEntity that copies the defaultPlace from UserDefaults. I want to avoid storing a duplicate value on the struct like this. Instead, it should be derived directly from the source of truth.
With the new ComputedProperty macro, I can achieve this by accessing UserDefaults directly from the getter.
We’ve also added a new DeferredProperty macro to lower the cost of instantiating an AppEntity. In LandmarkEntity, there’s a crowdStatus property whose value comes from a network server, so fetching it is relatively expensive. I want to fetch it only if the system explicitly requests it.
By marking this property as DeferredProperty, I can provide an asynchronous getter.
Inside, I can call methods that make the network calls.
This async getter will only be called if a system feature like Shortcuts requests it, not when LandmarkEntity is created and passed around.
Here are some key differences between the three property types. Overall, choose ComputedProperty over DeferredProperty due to its lower system overhead. Only fall back to DeferredProperty if the property is too expensive to compute.
Last but not least, you can now put your App Intents in Swift packages.
Previously, we’ve enabled packaging your AppIntents code with frameworks and dynamic libraries. Now you can put them in Swift Packages and static libraries. To learn more about how to use the AppIntentsPackage protocol, check out the “Get to know App Intents” session.
I hope you enjoyed learning about these features with me. As next steps, try out interactive snippets to see what they can do for your intents. Associate entities with onscreen content so the system can automatically suggest them to the user.
Provide more options for people to choose from using the multiple choice API. Support multiple modes to ensure your intents provide the best experience no matter how they run.
Finally, to dig into the code further, check out our sample app on the developer website.
I’m so excited to finally share all this with you. I can’t wait to see all the creative ideas you’ll come up with.
That's it for now. Thanks for watching.
-
4:08 - Returning a Snippet Intent
import AppIntents
import SwiftUI

struct ClosestLandmarkIntent: AppIntent {
    static let title: LocalizedStringResource = "Find Closest Landmark"

    @Dependency var modelData: ModelData

    func perform() async throws -> some ReturnsValue<LandmarkEntity> & ShowsSnippetIntent & ProvidesDialog {
        let landmark = await self.findClosestLandmark()
        return .result(
            value: landmark,
            dialog: IntentDialog(
                full: "The closest landmark is \(landmark.name).",
                supporting: "\(landmark.name) is located in \(landmark.continent)."
            ),
            snippetIntent: LandmarkSnippetIntent(landmark: landmark)
        )
    }
}
-
4:31 - Building a SnippetIntent
struct LandmarkSnippetIntent: SnippetIntent {
    static let title: LocalizedStringResource = "Landmark Snippet"

    @Parameter var landmark: LandmarkEntity
    @Dependency var modelData: ModelData

    func perform() async throws -> some IntentResult & ShowsSnippetView {
        let isFavorite = await modelData.isFavorite(landmark)
        return .result(
            view: LandmarkView(landmark: landmark, isFavorite: isFavorite)
        )
    }
}
-
5:45 - Associate intents with buttons
struct LandmarkView: View {
    let landmark: LandmarkEntity
    let isFavorite: Bool

    var body: some View {
        // ...
        Button(intent: UpdateFavoritesIntent(landmark: landmark, isFavorite: !isFavorite)) { /* ... */ }
        Button(intent: FindTicketsIntent(landmark: landmark)) { /* ... */ }
        // ...
    }
}
-
6:53 - Request confirmation snippet
struct FindTicketsIntent: AppIntent {
    func perform() async throws -> some IntentResult & ShowsSnippetIntent {
        let searchRequest = await searchEngine.createRequest(landmarkEntity: landmark)

        // Present a snippet that allows people to change
        // the number of tickets.
        try await requestConfirmation(
            actionName: .search,
            snippetIntent: TicketRequestSnippetIntent(searchRequest: searchRequest)
        )

        // Resume searching...
    }
}
-
7:24 - Using Entities as parameters
struct TicketRequestSnippetIntent: SnippetIntent {
    static let title: LocalizedStringResource = "Ticket Request Snippet"

    @Parameter var searchRequest: SearchRequestEntity

    func perform() async throws -> some IntentResult & ShowsSnippetView {
        let view = TicketRequestView(searchRequest: searchRequest)
        return .result(view: view)
    }
}
-
8:01 - Updating a snippet
func performRequest(request: SearchRequestEntity) async throws {
    // Set to pending status...
    TicketResultSnippetIntent.reload()

    // Kick off search...
    TicketResultSnippetIntent.reload()
}
-
9:24 - Responding to Image Search
struct LandmarkIntentValueQuery: IntentValueQuery {
    @Dependency var modelData: ModelData

    func values(for input: SemanticContentDescriptor) async throws -> [LandmarkEntity] {
        guard let pixelBuffer: CVReadOnlyPixelBuffer = input.pixelBuffer else {
            return []
        }
        let landmarks = try await modelData.searchLandmarks(matching: pixelBuffer)
        return landmarks
    }
}
-
9:51 - Support opening an entity
struct OpenLandmarkIntent: OpenIntent {
    static var title: LocalizedStringResource = "Open Landmark"

    @Parameter(title: "Landmark")
    var target: LandmarkEntity

    func perform() async throws -> some IntentResult {
        /// ...
    }
}
-
10:53 - Show search results in app
@AppIntent(schema: .visualIntelligence.semanticContentSearch)
struct ShowSearchResultsIntent {
    var semanticContent: SemanticContentDescriptor

    @Dependency var navigator: Navigator

    func perform() async throws -> some IntentResult {
        await navigator.showImageSearch(semanticContent.pixelBuffer)
        return .result()
    }

    // ...
}
-
11:40 - Returning multiple entity types
@UnionValue
enum VisualSearchResult {
    case landmark(LandmarkEntity)
    case collection(CollectionEntity)
}

struct LandmarkIntentValueQuery: IntentValueQuery {
    func values(for input: SemanticContentDescriptor) async throws -> [VisualSearchResult] {
        // ...
    }
}

struct OpenLandmarkIntent: OpenIntent { /* ... */ }

struct OpenCollectionIntent: OpenIntent { /* ... */ }
-
13:00 - Associating a view with an AppEntity
struct LandmarkDetailView: View {
    let landmark: LandmarkEntity

    var body: some View {
        Group { /* ... */ }
            .userActivity("com.landmarks.ViewingLandmark") { activity in
                activity.title = "Viewing \(landmark.name)"
                activity.appEntityIdentifier = EntityIdentifier(for: landmark)
            }
    }
}
-
13:21 - Converting AppEntity to PDF
import CoreTransferable
import PDFKit

extension LandmarkEntity: Transferable {
    static var transferRepresentation: some TransferRepresentation {
        DataRepresentation(exportedContentType: .pdf) { landmark in
            // Create PDF data...
            return data
        }
    }
}
-
14:05 - Associating properties with Spotlight keys
struct LandmarkEntity: IndexedEntity {
    // ...

    @Property(indexingKey: \.displayName)
    var name: String

    @Property(customIndexingKey: /* ... */)
    var continent: String

    // ...
}
-
15:49 - Making intents undoable
struct DeleteCollectionIntent: UndoableIntent {
    // ...

    func perform() async throws -> some IntentResult {
        // Confirm deletion...

        await undoManager?.registerUndo(withTarget: modelData) { modelData in
            // Restore collection...
        }
        await undoManager?.setActionName("Delete \(collection.name)")

        // Delete collection...
    }
}
-
16:52 - Multiple choice
struct DeleteCollectionIntent: UndoableIntent {
    func perform() async throws -> some IntentResult & ReturnsValue<CollectionEntity?> {
        let archive = Option(title: "Archive", style: .default)
        let delete = Option(title: "Delete", style: .destructive)

        let resultChoice = try await requestChoice(
            between: [.cancel, archive, delete],
            dialog: "Do you want to archive or delete \(collection.name)?",
            view: collectionSnippetView(collection)
        )

        switch resultChoice {
        case archive:
            // Archive collection...
        case delete:
            // Delete collection...
        default:
            // Do nothing...
        }
    }

    // ...
}
-
18:47 - Supported modes
struct GetCrowdStatusIntent: AppIntent {
    static let supportedModes: IntentModes = [.background, .foreground]

    func perform() async throws -> some ReturnsValue<Int> & ProvidesDialog {
        if systemContext.currentMode == .foreground {
            await navigator.navigateToCrowdStatus(landmark)
        }

        // Retrieve status and return dialog...
    }
}
-
19:30 - Supported modes
struct GetCrowdStatusIntent: AppIntent {
    static let supportedModes: IntentModes = [.background, .foreground(.dynamic)]

    func perform() async throws -> some ReturnsValue<Int> & ProvidesDialog {
        guard await modelData.isOpen(landmark) else { /* Exit early... */ }

        if systemContext.currentMode.canContinueInForeground {
            do {
                try await continueInForeground(alwaysConfirm: false)
                await navigator.navigateToCrowdStatus(landmark)
            } catch {
                // Open app denied.
            }
        }

        // Retrieve status and return dialog...
    }
}
-
21:30 - View Control
extension OpenLandmarkIntent: TargetContentProvidingIntent {}

struct LandmarksNavigationStack: View {
    @State var path: [Landmark] = []

    var body: some View {
        NavigationStack(path: $path) { /* ... */ }
            .onAppIntentExecution(OpenLandmarkIntent.self) { intent in
                self.path.append(intent.landmark)
            }
    }
}
-
23:13 - Scene activation condition
@main
struct AppIntentsTravelTrackerApp: App {
    var body: some Scene {
        WindowGroup { /* ... */ }

        WindowGroup { /* ... */ }
            .handlesExternalEvents(matching: [
                OpenLandmarkIntent.persistentIdentifier
            ])
    }
}
-
23:33 - View activation condition
struct LandmarksNavigationStack: View {
    var body: some View {
        NavigationStack(path: $path) { /* ... */ }
            .handlesExternalEvents(
                preferring: [],
                allowing: !isEditing ? [OpenLandmarkIntent.persistentIdentifier] : []
            )
    }
}
-
24:23 - Computed property
struct SettingsEntity: UniqueAppEntity {
    @ComputedProperty
    var defaultPlace: PlaceDescriptor {
        UserDefaults.standard.defaultPlace
    }

    init() { }
}
-
24:48 - Deferred property
struct LandmarkEntity: IndexedEntity {
    // ...

    @DeferredProperty
    var crowdStatus: Int {
        get async throws {
            await modelData.getCrowdStatus(self)
        }
    }

    // ...
}
-
25:50 - AppIntentsPackage
// Framework or dynamic library
public struct LandmarksKitPackage: AppIntentsPackage { }

// App target
struct LandmarksPackage: AppIntentsPackage {
    static var includedPackages: [any AppIntentsPackage.Type] {
        [LandmarksKitPackage.self]
    }
}
-
- 0:00 - Introduction
App Intents is a framework that enables apps to integrate features across devices, such as Shortcuts, Spotlight, and Visual Intelligence. Learn about new interactive snippets, system-wide app integration, refined user experiences, and convenience APIs for developers.
- 0:55 - Interactive snippets
With the latest updates, you can now enhance your apps with interactive snippets. These snippets are dynamic views that display tailored information based on App Intents. For instance, a gardening app can suggest turning on the sprinklers, or a food-ordering app can allow people to configure their orders before placing them. The TravelTracking sample app serves as an excellent example. When a user searches for the closest landmark, an interactive snippet appears. This snippet includes the landmark's name and a heart button. By tapping the heart button, a person can favorite the landmark, and the snippet instantly updates to reflect the new status, all without leaving the current view. This interactivity is achieved through the new 'SnippetIntent' protocol. You can create result snippets that display information after an action, and these snippets can include buttons or toggles that trigger other App Intents. For example, in the TravelTracking app, the heart button runs the Update Favorites intent, and you can add a Find Tickets button to launch another intent. When someone interacts with these buttons, the system runs the corresponding intent, and the snippet updates accordingly. The system ensures that the data is always fresh and up to date by fetching app entities from queries whenever the snippet refreshes. This process is seamless and animated, providing a smooth user experience. You can also create confirmation snippets that ask people to provide additional information before proceeding. For instance, in the TravelTracking app, when someone taps Find Tickets, a confirmation snippet appears, asking how many tickets they want to search for. The person can then interact with this snippet, and the system updates the view in real time based on their inputs.
- 8:15 - New system integrations
App Intents in iOS 26 enhances system integration, enabling image search directly from camera captures or screenshots. Apps can now display their search results in the system search panel. To support this capability, implement queries that conform to the 'IntentValueQuery' protocol, processing 'SemanticContentDescriptor' data to return arrays of app entities. When someone taps a result, the corresponding 'OpenIntent' is triggered, opening the app to the relevant page. 'OpenIntents' aren't limited to image search, and you can also use them in Spotlight. Consider optimizing search performance, returning multiple pages of results, and allowing people to continue the search within the app. Beyond image search, App Intents enable on-screen entities, allowing people to interact with Siri and ChatGPT about content visible in the app. You can associate entities with views, conform them to the 'Transferable' protocol, and support various data types, like PDF, plain text, and rich text. To enhance Spotlight search, make app entities conform to 'IndexedEntity', donate them to Spotlight, and annotate on-screen content with entities. Implementing 'PredictableIntent' allows the system to learn from user behavior and provide personalized suggestions.
- 15:01 - User experience refinements
New App Intents features refine user experiences through improved undo functionality, multiple-choice options, and supported modes. The 'UndoableIntent' protocol allows people to reverse actions performed with App Intents using familiar gestures, providing a safety net for experimentation. You can implement this by conforming your intents to the protocol and registering undo actions with the undo manager. With the multiple-choice API, you can present people with several options for an intent, rather than just a binary confirmation. You can also customize the prompt with a dialog and a custom SwiftUI view. Supported Modes gives intents greater control over foregrounding the app. This capability allows intents to behave differently depending on how a person is interacting with the device. For example, an intent can provide voice-only information when the person is driving but take them directly to the app when they're looking at the device. You can specify the supported modes for your intents and use the 'currentMode' property to check which mode is active.
- 21:02 - Convenience APIs
With the new view control APIs in SwiftUI, you can refactor App Intents by removing UI code and letting views handle navigation directly. Use the 'onAppIntentExecution' view modifier, which enables views to respond to specific App Intents, and modify the UI accordingly. The system runs the action closure shortly before foregrounding the app, and multiple views can respond to the same intent. You can control which scene handles an intent using 'handlesExternalEvents' APIs, ensuring contextually appropriate navigation. Additionally, new macros like 'ComputedProperty' and 'DeferredProperty' optimize 'AppEntities', reducing storage and instantiation costs. App Intents can also now be packaged in Swift Packages, providing greater flexibility and reusability.