After watching the "What's new in App Intents" session, I'm attempting to create an intent that conforms to URLRepresentableIntent. The video states that, as long as my AppEntity conforms to URLRepresentableEntity, I should not have to provide a perform method: my application will be launched automatically and passed the appropriate URL.
This mostly works, in that my application is launched and is passed a URL, but the URL arrives in the form FeatureEntity/{id} rather than the URL I specified.
Am I missing something, or is there a trick that makes it pass along the URL specified in the AppEntity itself?
import AppIntents

struct MyExampleIntent: OpenIntent, URLRepresentableIntent {
    static let title: LocalizedStringResource = "Open Feature"

    static var parameterSummary: some ParameterSummary {
        Summary("Open \(\.$target)")
    }

    @Parameter(title: "My feature", description: "The feature to open.")
    var target: FeatureEntity
}

struct FeatureEntity: AppEntity {
    // ...
}

extension FeatureEntity: URLRepresentableEntity {
    static var urlRepresentation: URLRepresentation {
        "https://myurl.com/\(.id)"
    }
}
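In the meantime, a minimal fallback sketch, assuming iOS 17+ for OpenURLIntent: implement perform() explicitly and return an OpenURLIntent built from the entity's own URL, mirroring the urlRepresentation above. This is an assumption about a workaround, not the documented URLRepresentableIntent behavior.

import AppIntents
import Foundation

// Hypothetical fallback, not the documented URLRepresentableIntent behavior:
// perform() hands control back to the system with an OpenURLIntent built from
// the entity's own URL ("https://myurl.com/..." mirrors the example above).
struct MyExampleFallbackIntent: OpenIntent {
    static let title: LocalizedStringResource = "Open Feature"

    @Parameter(title: "My feature", description: "The feature to open.")
    var target: FeatureEntity

    func perform() async throws -> some IntentResult & OpensIntent {
        let url = URL(string: "https://myurl.com/\(target.id)")!
        return .result(opensIntent: OpenURLIntent(url))
    }
}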
Explore the power of machine learning within apps. Discuss integrating machine learning features, share best practices, and explore the possibilities for your app.
Does the new Image Playground API allow programmatically generating images? Can the app generate and use them without the API's UI, or would that require using another generative image model?
Topic: Machine Learning & AI
SubTopic: General
Hi everyone!
I'm getting random crashes when I use the Speech Recognizer functionality in my app.
This is an old bug (reported on the Apple Forums for 8 years), and I would really appreciate it if anyone from Apple could find a fix for these crashes.
Can anyone also help me understand what I could do to keep the Speech Recognizer functionality available in my app while avoiding these crashes (for example, another native framework or a CocoaPods library)?
Here is my code and also the crash log for it.
Code:
func startRecording() {
    startStopRecordBtn.setImage(UIImage(#imageLiteral(resourceName: "microphone_off")), for: .normal)
    if UserDefaults.standard.bool(forKey: Constants.darkTheme) {
        commentTextView.textColor = .white
    } else {
        commentTextView.textColor = .black
    }
    commentTextView.isUserInteractionEnabled = false
    recordingLabel.text = Constants.recording

    if recognitionTask != nil {
        recognitionTask?.cancel()
        recognitionTask = nil
    }

    let audioSession = AVAudioSession.sharedInstance()
    do {
        try audioSession.setCategory(AVAudioSession.Category.record)
        try audioSession.setMode(AVAudioSession.Mode.measurement)
        try audioSession.setActive(true, options: .notifyOthersOnDeactivation)
    } catch {
        showAlertWithTitle(message: Constants.error)
    }

    recognitionRequest = SFSpeechAudioBufferRecognitionRequest()
    let inputNode = audioEngine.inputNode
    guard let recognitionRequest = recognitionRequest else {
        fatalError(Constants.error)
    }
    recognitionRequest.shouldReportPartialResults = true

    recognitionTask = speechRecognizer?.recognitionTask(with: recognitionRequest, resultHandler: { (result, error) in
        var isFinal = false
        if let result = result {
            self.commentTextView.text = result.bestTranscription.formattedString
            isFinal = result.isFinal
        }
        if error != nil || isFinal {
            self.audioEngine.stop()
            inputNode.removeTap(onBus: 0)
            self.recognitionRequest = nil
            self.recognitionTask = nil
            self.startStopRecordBtn.isEnabled = true
        }
    })

    let recordingFormat = inputNode.outputFormat(forBus: 0)
    inputNode.installTap(onBus: 0, bufferSize: 1024, format: recordingFormat) { [weak self] (buffer: AVAudioPCMBuffer, when: AVAudioTime) in // CRASH HERE
        self?.recognitionRequest?.append(buffer)
    }

    audioEngine.prepare()
    do {
        try audioEngine.start()
    } catch {
        showAlertWithTitle(message: Constants.error)
    }
}
Here is the crash log:
Thanks very much for reading this!
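For reference, one commonly suggested mitigation for crashes at installTap(onBus:) is to remove any existing tap and validate the hardware format before installing a new one. The sketch below is an assumption based on that general advice, not a confirmed fix for this specific crash; it reuses the inputNode, recognitionRequest, and Constants names from the code above.

// Hedged mitigation sketch (assumption, not a confirmed fix): installTap(onBus:)
// raises an exception if a tap is already installed on the bus or if the format
// is invalid (e.g. a sample rate of 0 after an audio session change).
inputNode.removeTap(onBus: 0)
let recordingFormat = inputNode.outputFormat(forBus: 0)
guard recordingFormat.sampleRate > 0, recordingFormat.channelCount > 0 else {
    showAlertWithTitle(message: Constants.error)
    return
}
inputNode.installTap(onBus: 0, bufferSize: 1024, format: recordingFormat) { [weak self] buffer, _ in
    self?.recognitionRequest?.append(buffer)
}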
I'm trying to create an NLModel within a MessageFilterExtension handler.
The code works fine in the main app, but when I try to use it in the extension it fails to initialize. Even the single line below fails with the error shown.
SMS_Classifier is the class Xcode generated for my model; this line works fine in the main app.
let mlModel = try SMS_Classifier(configuration: MLModelConfiguration()).model
Error
Unable to locate Asset for contextual word embedding model for local en.
MLModelAsset: load failed with error Error Domain=com.apple.CoreML Code=0 "initialization of text classifier model with model data failed" UserInfo={NSLocalizedDescription=initialization of text classifier model with model data failed}
Any ideas?
Xcode 15.3 AppIntentsSSUTraining warning: missing the definition of locale # variables.1.definitions
Hello!
I've noticed that adding localizations for AppShortcuts triggers the following warnings in Xcode 15.3:
warning: missing the definition of zh-Hans # variables.1.definitions
warning: missing the definition of zh-Hans # variables.2.definitions
This occurs with both legacy strings files and String Catalogs.
Example project: https://github.com/gongzhang/AppShortcutsLocalizationWarningExample
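For context on where the "variables" in the warning come from, here is a minimal, hypothetical AppShortcuts definition of the kind that appears to trigger it once the phrases are localized; the intent, titles, and image name are placeholders, not taken from the example project, and this assumes the iOS 17 AppShortcut initializer.

import AppIntents

// Hypothetical example: a phrase containing a variable such as \(.applicationName)
// appears to be what the "variables.N.definitions" entries refer to.
struct ExampleShortcuts: AppShortcutsProvider {
    static var appShortcuts: [AppShortcut] {
        AppShortcut(
            intent: MyExampleIntent(),
            phrases: ["Open feature in \(.applicationName)"],
            shortTitle: "Open Feature",
            systemImageName: "star"
        )
    }
}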
When I add AppEntity to my model, I receive this error, which is repeated for each attribute in the model. The models are already marked for the Widget Extension in Target Membership. I have already cleaned and restarted; nothing works. Does anyone know what I'm doing wrong?
Unable to find matching source file for path "@_swiftmacro_21HabitWidgetsExtension0A05ModelfMm.swift"
import SwiftData
import AppIntents

enum FrecuenciaCumplimiento: String, Codable {
    case diario
    case semanal
    case mensual
}

@Model
final class Habit: AppEntity {
    @Attribute(.unique) var id: UUID
    var nombre: String
    var descripcion: String
    var icono: String
    var color: String
    var esHabitoPositivo: Bool
    var valorObjetivo: Double
    var unidadObjetivo: String
    var frecuenciaCumplimiento: FrecuenciaCumplimiento

    static var typeDisplayRepresentation: TypeDisplayRepresentation = "Hábito"
    static var defaultQuery = HabitQuery()

    var displayRepresentation: DisplayRepresentation {
        DisplayRepresentation(title: "\(nombre)")
    }

    static var allHabits: [Habit] = [
        Habit(id: UUID(), nombre: "uno", descripcion: "", icono: "circle", color: "#BF0000", esHabitoPositivo: true, valorObjetivo: 1.0, unidadObjetivo: "", frecuenciaCumplimiento: .mensual),
        Habit(id: UUID(), nombre: "dos", descripcion: "", icono: "circle", color: "#BF0000", esHabitoPositivo: true, valorObjetivo: 1.0, unidadObjetivo: "", frecuenciaCumplimiento: .mensual)
    ]

    /*
    static func loadAllHabits() async throws {
        do {
            let modelContainer = try ModelContainer(for: Habit.self)
            let descriptor = FetchDescriptor<Habit>()
            allHabits = try await modelContainer.mainContext.fetch(descriptor)
        } catch {
            // Error handling if needed
            print("Error al cargar hábitos: \(error)")
            throw error
        }
    }
    */

    init(id: UUID = UUID(), nombre: String, descripcion: String, icono: String, color: String, esHabitoPositivo: Bool, valorObjetivo: Double, unidadObjetivo: String, frecuenciaCumplimiento: FrecuenciaCumplimiento) {
        self.id = id
        self.nombre = nombre
        self.descripcion = descripcion
        self.icono = icono
        self.color = color
        self.esHabitoPositivo = esHabitoPositivo
        self.valorObjetivo = valorObjetivo
        self.unidadObjetivo = unidadObjetivo
        self.frecuenciaCumplimiento = frecuenciaCumplimiento
    }

    @Relationship(deleteRule: .cascade)
    var habitRecords: [HabitRecord] = []
}

struct HabitQuery: EntityQuery {
    func entities(for identifiers: [Habit.ID]) async throws -> [Habit] {
        //try await Habit.loadAllHabits()
        return Habit.allHabits.filter { identifiers.contains($0.id) }
    }

    func suggestedEntities() async throws -> [Habit] {
        //try await Habit.loadAllHabits()
        return Habit.allHabits // .filter { $0.isAvailable }
    }

    func defaultResult() async -> Habit? {
        try? await suggestedEntities().first
    }
}
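Not an answer to the macro error itself, but a workaround I've seen suggested (purely an assumption here, not a confirmed fix) is to keep the SwiftData @Model and the AppEntity as separate types, so the widget extension only ever sees a plain struct. A minimal sketch, reusing the names above:

// Hypothetical wrapper entity: the @Model class stays out of the AppIntents
// surface, and the widget/intent code works with this plain struct instead.
struct HabitEntity: AppEntity {
    static var typeDisplayRepresentation: TypeDisplayRepresentation = "Hábito"
    static var defaultQuery = HabitEntityQuery()

    var id: UUID
    var nombre: String

    var displayRepresentation: DisplayRepresentation {
        DisplayRepresentation(title: "\(nombre)")
    }

    init(habit: Habit) {
        self.id = habit.id
        self.nombre = habit.nombre
    }
}

struct HabitEntityQuery: EntityQuery {
    func entities(for identifiers: [HabitEntity.ID]) async throws -> [HabitEntity] {
        Habit.allHabits.filter { identifiers.contains($0.id) }.map(HabitEntity.init)
    }

    func suggestedEntities() async throws -> [HabitEntity] {
        Habit.allHabits.map(HabitEntity.init)
    }
}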
Hello,
I posted an issue on the coremltools GitHub about my Core ML models not performing as well on iOS 17 vs iOS 16, but I'm posting it here just in case.
TL;DR
The same model on the same device/chip performs far slower (doesn't use the Neural Engine) on iOS 17 compared to iOS 16.
Longer description
The following screenshots show the performance of the same model (a PyTorch computer vision model) on an iPhone SE 3rd gen and iPhone 13 Pro (both use the A15 Bionic).
iOS 16 - iPhone SE 3rd Gen (A15 Bionic)
iOS 16 uses the ANE and results in fast prediction, load and compilation times.
iOS 17 - iPhone 13 Pro (A15 Bionic)
iOS 17 doesn't seem to use the ANE, thus the prediction, load and compilation times are all slower.
Code To Reproduce
The following is the code I'm using to export my PyTorch vision model (using coremltools).
I've used the same code for the past few months with sensational results on iOS 16.
# Convert to Core ML using the Unified Conversion API
coreml_model = ct.convert(
    model=traced_model,
    inputs=[image_input],
    outputs=[ct.TensorType(name="output")],
    classifier_config=ct.ClassifierConfig(class_names),
    convert_to="neuralnetwork",
    # compute_precision=ct.precision.FLOAT16,
    compute_units=ct.ComputeUnit.ALL
)
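On the device side, a small diagnostic I'd try (a sketch under assumptions, not a known fix) is forcing the compute units when loading the model in the app, to see whether iOS 17 behaves differently when the Neural Engine is requested explicitly. MyVisionModel here is a placeholder for the Xcode-generated model class.

import CoreML

// Hypothetical diagnostic: compare .all vs .cpuAndNeuralEngine load and
// prediction behavior on iOS 16 and iOS 17 with the same compiled model.
let config = MLModelConfiguration()
config.computeUnits = .cpuAndNeuralEngine  // also try .all and .cpuAndGPU
let model = try MyVisionModel(configuration: config)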
System environment:
Xcode version: 15.0
coremltools version: 7.0.0
OS (e.g. MacOS version or Linux type): Linux Ubuntu 20.04 (for exporting), macOS 13.6 (for testing on Xcode)
Any other relevant version information (e.g. PyTorch or TensorFlow version): PyTorch 2.0
Additional context
This happens with both "neuralnetwork" and "mlprogram" model types; neither uses the ANE on iOS 17, but both use it on iOS 16.
If anyone has a similar experience, I'd love to hear more.
Otherwise, if I'm doing something wrong for the exporting of models for iOS 17+, please let me know.
Thank you!
Hi everyone, I might need some help with on-device recognition. It seems that the speech recognition task will discard whatever it has transcribed once a new sentence starts (or once it believes a new sentence has started) during a single audio session, when requiresOnDeviceRecognition is set to true.
This doesn't happen with requiresOnDeviceRecognition set to false.
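In case it's useful, the workaround I'd sketch (purely an assumption, using placeholder speechRecognizer and request values) is to accumulate the finalized text myself whenever the on-device recognizer closes a segment, instead of relying on one continuous transcription:

// Hedged workaround sketch: keep finalized segments ourselves so text is not
// lost when on-device recognition starts a new sentence.
var finalizedText = ""
let task = speechRecognizer.recognitionTask(with: request) { result, error in
    guard let result = result else { return }
    if result.isFinal {
        finalizedText += result.bestTranscription.formattedString + " "
    }
    let displayText = finalizedText + (result.isFinal ? "" : result.bestTranscription.formattedString)
    print(displayText)
}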
System environment: macOS 14 with Xcode 15, deploying to iOS 17
Thank you all!
I am working on the neural network classifier provided on coremltools.readme.io, in the updatable neural network section (https://coremltools.readme.io/docs/updatable-neural-network-classifier-on-mnist-dataset).
I am using the same code, but I get an error saying that coremltools.converters.keras.convert does not exist. I know this can be a coremltools version issue; right now I am using coremltools version 6.2. I converted the model to an mlmodel with .convert only, and it converted successfully.
But I then hit an error in the make_updatable function saying the loss layer input must be a softmax output. From the coremltools package API reference, I found this is because the layer type is softmaxND when it should be softmax.
The problem is that when I convert the model from a Keras Sequential model to a Core ML model, the layer name and type change, and the softmax becomes softmaxND.
Has anyone faced this issue?
If I execute builder.inspect_layers(last=4), I get this output:
[Id: 32], Name: sequential/dense_1/Softmax (Type: softmaxND)
Updatable: False
Input blobs: ['sequential/dense_1/MatMul']
Output blobs: ['Identity']
[Id: 31], Name: sequential/dense_1/MatMul (Type: batchedMatmul)
Updatable: False
Input blobs: ['sequential/dense/Relu']
Output blobs: ['sequential/dense_1/MatMul']
[Id: 30], Name: sequential/dense/Relu (Type: activation)
Updatable: False
Input blobs: ['sequential/dense/MatMul']
Output blobs: ['sequential/dense/Relu']
In the make_updatable function, when I execute
builder.set_categorical_cross_entropy_loss(name='lossLayer', input='Identity')
I get this error:
ValueError: Categorical Cross Entropy loss layer input (Identity) must be a softmax layer output.
In iOS 15, when calling stopSpeaking on AVSpeechSynthesizer, the didFinish delegate method is called instead of didCancel. This works as expected on iOS 14 and earlier.
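For anyone reproducing this, a minimal delegate sketch to observe which callback fires; SpeechManager is a placeholder NSObject subclass that owns the synthesizer and is set as its delegate.

import AVFoundation

// Minimal observation sketch; the expectation is didCancel after stopSpeaking(at:),
// but the report above is that didFinish fires instead on iOS 15.
extension SpeechManager: AVSpeechSynthesizerDelegate {
    func speechSynthesizer(_ synthesizer: AVSpeechSynthesizer, didCancel utterance: AVSpeechUtterance) {
        print("didCancel")
    }

    func speechSynthesizer(_ synthesizer: AVSpeechSynthesizer, didFinish utterance: AVSpeechUtterance) {
        print("didFinish")
    }
}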
Did something change on face detection / Vision Framework on iOS 15?
Using VNDetectFaceLandmarksRequest and reading the VNFaceLandmarkRegion2D to detect eyes does not work on iOS 15 as it did before. I am running the exact same code on an iOS 14 and an iOS 15 device, and the coordinates are different, as seen in the screenshot.
Any Ideas?
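One thing worth checking (a hedged suggestion, not a confirmed cause) is whether the two OS versions default to different request revisions; logging the supported revisions and pinning one explicitly makes the comparison apples-to-apples.

import Vision

// Diagnostic sketch: log the supported revisions on each device and pin one
// explicitly (revision 3 here is an assumption; use one both devices support).
let request = VNDetectFaceLandmarksRequest()
print(VNDetectFaceLandmarksRequest.supportedRevisions)
request.revision = VNDetectFaceLandmarksRequestRevision3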
Can access to SoundAnalysis (the sound classifier built into the next versions of macOS, iOS, and watchOS) be provided to my app running in the background on iPhone or Apple Watch?
I want to monitor local sounds from Apple Watch and iPhone and take remote action on out-of-band data (i.e., send an alert to a caregiver if the coughing rate is too high, or if someone has been knocking on the door for more than a minute, etc.).
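For concreteness, this is the API in question in a minimal sketch (assuming iOS 15+/watchOS 8+; resultsObserver is a placeholder SNResultsObserving implementation fed from a microphone tap). The open question is whether this analysis can keep running while the app is backgrounded.

import SoundAnalysis
import AVFAudio

// Built-in classifier request plus a stream analyzer; resultsObserver is a
// placeholder object conforming to SNResultsObserving.
let request = try SNClassifySoundRequest(classifierIdentifier: .version1)
let format = AVAudioFormat(standardFormatWithSampleRate: 16_000, channels: 1)!
let analyzer = SNAudioStreamAnalyzer(format: format)
try analyzer.add(request, withObserver: resultsObserver)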