
Machine Learning


Create intelligent features and enable new experiences for your apps by leveraging powerful on-device machine learning.

Posts under Machine Learning tag

50 Posts

CreateML Hand Pose Classifier preview not showing the prediction result
I have created and trained a Hand Pose classifier model and am trying to test it. In the WWDC2021 session "Classify hand poses and actions with Create ML", the preview window shows a prediction result based on the live preview or on imported images. Mine does not. When I try to import pictures or run the live test, there is no result; it's just the wireframe view with nothing underneath it. How do I fix this, please? Thanks.
1 reply · 0 boosts · 726 views · Jul ’24

Use iPad M1 processor as GPU
Hello, I'm currently working on TinyML / ML at the edge using the Google Colab platform. Having exhausted the free usage of my compute units, I'm being prompted to pay. I've been considering leveraging the GPU capabilities of my M1 iPad and my Intel-based Mac instead. Both devices have Thunderbolt ports capable of connections of up to 30 GB/s. Since I'm primarily using a classification model, extensive GPU usage isn't necessary. I'm looking for assistance or guidance on using the iPad's processor as an eGPU for my Mac, possibly through an API or an Apple technology. Any help would be greatly appreciated!
2 replies · 0 boosts · 1.1k views · Sep ’24

On-device training of a text classifier model
I have made a text classifier model, but I also want to train it on device: when a text is classified incorrectly, the user should be able to update the model on the device. Code:

//
//  SpamClassifierHelper.swift
//  LearningML
//
//  Created by Himan Dhawan on 7/1/24.
//

import Foundation
import CreateMLComponents
import CoreML
import NaturalLanguage

enum TextClassifier: String {
    case spam = "spam"
    case notASpam = "ham"
}

class SpamClassifierModel {

    // MARK: - Private Type Properties

    /// The updated Spam Classifier model.
    private static var updatedSpamClassifier: SpamClassifier?

    /// The default Spam Classifier model.
    private static var defaultSpamClassifier: SpamClassifier {
        do {
            return try SpamClassifier(configuration: .init())
        } catch {
            fatalError("Couldn't load SpamClassifier due to: \(error.localizedDescription)")
        }
    }

    /// The Spam Classifier model currently in use.
    static var liveModel: SpamClassifier {
        updatedSpamClassifier ?? defaultSpamClassifier
    }

    /// The location of the app's Application Support directory for the user.
    private static let appDirectory = FileManager.default.urls(for: .applicationSupportDirectory,
                                                               in: .userDomainMask).first!

    /// The compiled model bundled with the app.
    class var urlOfModelInThisBundle: URL {
        let bundle = Bundle(for: self)
        return bundle.url(forResource: "SpamClassifier", withExtension: "mlmodelc")!
    }

    /// The default Spam Classifier model's file URL.
    private static let defaultModelURL = urlOfModelInThisBundle

    /// The permanent location of the updated Spam Classifier model.
    private static var updatedModelURL = appDirectory.appendingPathComponent("personalized.mlmodelc")

    /// The temporary location of the updated Spam Classifier model.
    private static var tempUpdatedModelURL = appDirectory.appendingPathComponent("personalized_tmp.mlmodelc")

    // MARK: - Public Type Methods

    static func predictLabelFor(_ value: String) throws -> (prediction: String?, confidence: String) {
        let spam = try NLModel(mlModel: liveModel.model)
        let result = spam.predictedLabel(for: value)
        let confidence = spam.predictedLabelHypotheses(for: value, maximumCount: 1).first?.value ?? 0
        return (result, String(format: "%.2f", confidence * 100))
    }

    static func updateModel(newEntryText: String, spam: TextClassifier) throws {
        guard let modelURL = Bundle.main.url(forResource: "SpamClassifier", withExtension: "mlmodelc") else {
            fatalError("Could not find model in bundle")
        }

        // Create a feature provider for the new training example.
        // (Verify these keys against the model's training inputs; typically "text"
        // carries the message and "label" carries the class name.)
        let featureProvider = try MLDictionaryFeatureProvider(dictionary: [
            "label": MLFeatureValue(string: newEntryText),
            "text": MLFeatureValue(string: spam.rawValue)
        ])
        let batchProvider = MLArrayBatchProvider(array: [featureProvider])

        let updateTask = try MLUpdateTask(forModelAt: modelURL,
                                          trainingData: batchProvider,
                                          configuration: nil,
                                          completionHandler: { context in
            let updatedModel = context.model
            let fileManager = FileManager.default
            do {
                // Create a directory for the updated model.
                try fileManager.createDirectory(at: tempUpdatedModelURL,
                                                withIntermediateDirectories: true,
                                                attributes: nil)

                // Save the updated model to a temporary location.
                try updatedModel.write(to: tempUpdatedModelURL)

                // Replace any previously updated model with this one.
                _ = try fileManager.replaceItemAt(updatedModelURL, withItemAt: tempUpdatedModelURL)

                loadUpdatedModel()
                print("Updated model saved to:\n\t\(updatedModelURL)")
            } catch let error {
                print("Could not save updated model to the file system: \(error)")
                return
            }
        })
        updateTask.resume()
    }

    /// Loads the updated Spam Classifier, if available.
    /// - Tag: LoadUpdatedModel
    private static func loadUpdatedModel() {
        guard FileManager.default.fileExists(atPath: updatedModelURL.path) else {
            // The updated model is not present at its designated path.
            return
        }

        // Create an instance of the updated model.
        guard let model = try? SpamClassifier(contentsOf: updatedModelURL) else {
            return
        }

        // Use this updated model to make predictions in the future.
        updatedSpamClassifier = model
    }
}
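
For reference, a minimal sketch of how the helper above might be called; the sample string and the corrected label are made up for illustration:

// Hypothetical call site, assuming the helper above is compiled into the app target
// and the bundled model is named "SpamClassifier" as in the post.
do {
    let sample = "You have won a free prize, click here"
    let result = try SpamClassifierModel.predictLabelFor(sample)
    print("Predicted: \(result.prediction ?? "none") (\(result.confidence)%)")

    // If the user marks this prediction as wrong, feed the corrected label back:
    try SpamClassifierModel.updateModel(newEntryText: sample, spam: .spam)
} catch {
    print("Classifier error: \(error)")
}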
1 reply · 0 boosts · 707 views · Jul ’24

Can you match a new photo with existing images?
I'm looking for a solution to take a picture, or point the camera at a piece of clothing, and match that image against images the user has stored in my app. I'm storing the data in a Core Data database as a Binary Data attribute. Since the user also takes the pictures that are stored in the database, I think I cannot use pre-trained Core ML models. I would like the matching to be done on device if possible, rather than going to an external service; such a service would probably describe the item based on what the AI sees, but then I still couldn't match the item against the stored images in the app. Does anyone know if this is possible with frameworks such as Vision or VisionKit?
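
One possible on-device approach (not the only one) is Vision's image feature prints, which produce an embedding that can be compared with a distance metric. A minimal sketch follows; the function names are illustrative, and the stored image would come from the Core Data binary field:

import Vision
import UIKit

// Compute a perceptual feature print for an image.
func featurePrint(for cgImage: CGImage) throws -> VNFeaturePrintObservation? {
    let request = VNGenerateImageFeaturePrintRequest()
    let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
    try handler.perform([request])
    return request.results?.first as? VNFeaturePrintObservation
}

// Smaller distance means the two images are more similar.
func distance(between a: CGImage, and b: CGImage) throws -> Float {
    guard let printA = try featurePrint(for: a),
          let printB = try featurePrint(for: b) else { return .greatestFiniteMagnitude }
    var distance: Float = 0
    try printA.computeDistance(&distance, to: printB)
    return distance
}

You would compute a feature print for the newly captured photo, compare it against the prints of the stored images, and treat the smallest distance (below some threshold you tune) as a match.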
2 replies · 0 boosts · 1.1k views · Jul ’24

CoreML Text Classifier in Message Filter
Hi all, I'm trying to build scam detection into a Message Filter extension powered by Core ML. I find the ML predictions reliable, and a solution for text fraud and scams is sorely needed. I was able to create a trained MLModel and deploy it in the app. It works in my container app, but when I try to use and initialize the model in the Message Filter extension, I get an error: initialization of text classifier model with model data failed. I have tried putting the model in the container app, in the extension, and even in a shared framework for both, but to no avail. Every time I invoke the code to initialize my model from the extension, I get the same error. Here's my code for initializing the model:

do {
    let model = try Ace_v24_6(configuration: .init())
    let output = try model.prediction(text: text)
    guard !output.label.isEmpty else { return nil }
    return MessagePrediction(rawValue: output.label)
} catch {
    return nil
}

My question is: is it impossible to use Core ML in Message Filter extensions? Cheers
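
For comparison, a hedged sketch of an alternative way to initialize a text classifier inside the extension: loading the compiled model from the extension's own bundle via NLModel rather than the generated class. "Ace_v24_6" is the poster's model name, and bundling the model with the extension target is an assumption:

import Foundation
import NaturalLanguage

// Assumption: Ace_v24_6.mlmodelc is bundled with the Message Filter extension
// target itself, so Bundle.main here resolves to the extension bundle.
func makeClassifier() -> NLModel? {
    guard let url = Bundle.main.url(forResource: "Ace_v24_6", withExtension: "mlmodelc") else {
        return nil
    }
    return try? NLModel(contentsOf: url)
}

// Usage: let label = makeClassifier()?.predictedLabel(for: text)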
1 reply · 0 boosts · 791 views · Jun ’24

Code Completion in Xcode 16 beta can't be enabled again
Hey, I have been trying out the Xcode 16 beta's code completion for the last couple of days. I disabled it through the settings, in the Components section, because I was following a tutorial and didn't want it helping me. But now that I want it back, I can't find a way to enable it again. I tried reinstalling both the beta and the regular Xcode, but it didn't show up again. Does anyone know how to get this back? Thanks!
0 replies · 0 boosts · 1k views · Jun ’24

Custom Model Not Working Correctly in the Application #56
I created a model that classifies certain objects using YOLOv8. I noticed that the model is not working properly in my application: while it works fine in the Xcode preview, in the application it either returns the same result with 99% accuracy for every classification or does not provide any result at all. In the Xcode preview it looks like this: [Predictions screenshot]

extension CameraVC: AVCapturePhotoCaptureDelegate {

    func photoOutput(_ output: AVCapturePhotoOutput, didFinishProcessingPhoto photo: AVCapturePhoto, error: (any Error)?) {
        guard let data = photo.fileDataRepresentation() else { return }
        guard let image = UIImage(data: data) else { return }
        guard let cgImage = image.cgImage else { fatalError("Unable to create CGImage") }

        let handler = VNImageRequestHandler(cgImage: cgImage,
                                            orientation: CGImagePropertyOrientation(image.imageOrientation))
        DispatchQueue.global(qos: .userInitiated).async {
            do {
                try handler.perform([self.viewModel.detectionRequest])
            } catch {
                fatalError("Failed to perform detection: \(error)")
            }
        }
    }
}

// Referenced above as viewModel.detectionRequest:
lazy var detectionRequest: VNCoreMLRequest = {
    do {
        let model = try VNCoreMLModel(for: bestv720().model)
        let request = VNCoreMLRequest(model: model) { [weak self] request, error in
            self?.processDetections(for: request, error: error)
        }
        request.imageCropAndScaleOption = .centerCrop
        return request
    } catch {
        fatalError("Failed to load Vision ML model: \(error)")
    }
}()

This is where I print the recognized objects:

func processDetections(for request: VNRequest, error: Error?) {
    DispatchQueue.main.async {
        guard let results = request.results as? [VNRecognizedObjectObservation] else { return }

        var label = ""
        var all_results: [String] = []
        var all_confidence: [VNConfidence] = []
        var true_results: [String] = []
        var true_confidence: [VNConfidence] = []

        for result in results {
            // Record every candidate label for this observation and keep the
            // ones whose confidence is above 0.7.
            for candidate in result.labels {
                all_results.append(candidate.identifier)
                all_confidence.append(candidate.confidence)
                if candidate.confidence > 0.7 {
                    true_results.append(candidate.identifier)
                    true_confidence.append(candidate.confidence)
                }
            }
            label = result.labels[0].identifier
        }

        print("True Results ", true_results)
        print("True Confidence ", true_confidence)
        self.output?.updateView(label: label)
    }
}

I converted the model like this:

from ultralytics import YOLO

model = YOLO(model_path)
model.export(format='coreml', nms=True, imgsz=[720, 1280])
2 replies · 1 boost · 915 views · Jun ’24

PyTorch to CoreML Model inaccuracy
I am currently working on a 2D pose estimator. I developed a PyTorch vision-transformer-based model with 17 joints in COCO format and then converted it to Core ML using coremltools version 6.2. The model was trained on a custom dataset. However, upon running the converted model on iOS, I observed a significant drop in accuracy. You can see it in this video (https://youtu.be/EfGFrOZQGtU), which demonstrates the outputs of the PyTorch model (on the left) and the Core ML model (on the right). Could you please confirm whether this drop in accuracy is expected, and suggest possible solutions to address the issue? Please note that all preprocessing and post-processing remain consistent between the two models.

P.S. While converting I also got the following warning:

TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
if x.numel() == 0 and obsolete_torch_version(TORCH_VERSION, (1, 4)):

P.P.S. When we initialize the Core ML model on iOS 17.0, we get this error:

Validation failure: Invalid Pool kernel width (13), must be [1-8] or 20.
Validation failure: Invalid Pool kernel width (9), must be [1-8] or 20.
Validation failure: Invalid Pool kernel width (13), must be [1-8] or 20.
Validation failure: Invalid Pool kernel width (9), must be [1-8] or 20.
Validation failure: Invalid Pool kernel width (13), must be [1-8] or 20.
This neural network model does not have a parameter for requested key 'precisionRecallCurves'. Note: only updatable neural network models can provide parameter values and these values are only accessible in the context of an MLUpdateTask completion or progress handler.
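
As a hedged diagnostic (not a fix), comparing the converted model's output on a CPU-only, full-precision path against the default compute units can help indicate whether the discrepancy comes from reduced precision on the GPU/ANE. "PoseModel" below is a placeholder for the generated Core ML class:

import CoreML

let cpuConfig = MLModelConfiguration()
cpuConfig.computeUnits = .cpuOnly          // full-precision CPU execution

let defaultConfig = MLModelConfiguration()
defaultConfig.computeUnits = .all          // lets Core ML pick GPU/ANE

do {
    // Placeholder class name; substitute the generated class for the converted model.
    let cpuModel = try PoseModel(configuration: cpuConfig)
    let fastModel = try PoseModel(configuration: defaultConfig)
    // Run the same preprocessed input through both and compare the joint coordinates.
} catch {
    print("Failed to load model: \(error)")
}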
2 replies · 0 boosts · 1.9k views · Jul ’24

Loading CoreML model increases app size?
Hi, I have been noticing some strange behaviour when using Core ML models in my app. I am using the whisper.cpp implementation, which has a Core ML option; this speeds up transcription compared to Metal. However, every time I use it, the app size shown in iPhone Settings > General > Storage increases, specifically the "Documents & Data" part, while the bundle size stays consistent. The app's size seems to grow by roughly the size of the Core ML model, and after a few reloads it can reach over 3 to 4 GB! I thought the Core ML model (which is in the bundle) might be being saved to a file, but I can't see where. I have tried Instruments and Xcode, plus lots of printing out of the caches and temp directories, deleting the caches, and so on, but with no effect. I downloaded the app container from Xcode and inspected it: there are some files stored in the caches, but only a few KB, and even though Settings > Storage shows a few GB, the container is only a few MB. Can someone help or give me some guidance on how to figure out why "Documents & Data" keeps increasing? Where could this storage be that is not in the container downloaded from Xcode? This is the repo I am using: https://github.com/ggerganov/whisper.cpp. The SwiftUI app and the Objective-C app both behave the same way when using Core ML. Thanks in advance for any help; I am totally baffled by this behaviour.
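
Since the container downloaded from Xcode doesn't show the growth, a generic sketch (not specific to whisper.cpp; the function name is illustrative) for walking the sandbox at runtime and printing the largest files may help locate where the space is actually going:

import Foundation

// Walk a directory in the app sandbox and print the largest regular files.
func reportLargestFiles(in directory: URL, top: Int = 20) {
    let keys: [URLResourceKey] = [.fileSizeKey, .isRegularFileKey]
    guard let enumerator = FileManager.default.enumerator(at: directory,
                                                          includingPropertiesForKeys: keys) else { return }
    var sizes: [(URL, Int)] = []
    for case let url as URL in enumerator {
        guard let values = try? url.resourceValues(forKeys: Set(keys)),
              values.isRegularFile == true,
              let size = values.fileSize else { continue }
        sizes.append((url, size))
    }
    for (url, size) in sizes.sorted(by: { $0.1 > $1.1 }).prefix(top) {
        print("\(size / 1_048_576) MB  \(url.path)")
    }
}

// Example: inspect the Library directory (app data that lives outside the bundle).
reportLargestFiles(in: FileManager.default.urls(for: .libraryDirectory, in: .userDomainMask)[0])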
6 replies · 3 boosts · 1.7k views · Aug ’24

Core ML Model performance far lower on iOS 17 vs iOS 16 (iOS 17 not using Neural Engine)
Hello, I posted an issue on the coremltools GitHub about my Core ML models not performing as well on iOS 17 vs iOS 16, but I'm posting it here just in case.

TL;DR: The same model on the same device/chip performs far slower (doesn't use the Neural Engine) on iOS 17 compared to iOS 16.

Longer description: The following screenshots show the performance of the same model (a PyTorch computer vision model) on an iPhone SE 3rd gen and an iPhone 13 Pro (both use the A15 Bionic).

iOS 16 - iPhone SE 3rd gen (A15 Bionic): iOS 16 uses the ANE, resulting in fast prediction, load and compilation times.
iOS 17 - iPhone 13 Pro (A15 Bionic): iOS 17 doesn't seem to use the ANE, so the prediction, load and compilation times are all slower.

Code to reproduce: The following is the code I'm using to export my PyTorch vision model (using coremltools). I've used the same code for the past few months with sensational results on iOS 16.

# Convert to Core ML using the Unified Conversion API
coreml_model = ct.convert(
    model=traced_model,
    inputs=[image_input],
    outputs=[ct.TensorType(name="output")],
    classifier_config=ct.ClassifierConfig(class_names),
    convert_to="neuralnetwork",
    # compute_precision=ct.precision.FLOAT16,
    compute_units=ct.ComputeUnit.ALL
)

System environment:
Xcode version: 15.0
coremltools version: 7.0.0
OS: Linux Ubuntu 20.04 (for exporting), macOS 13.6 (for testing in Xcode)
Other relevant versions: PyTorch 2.0

Additional context: This happens with both "neuralnetwork" and "mlprogram" model types; neither uses the ANE on iOS 17, but both use the ANE on iOS 16. If anyone has a similar experience, I'd love to hear more. Otherwise, if I'm doing something wrong when exporting models for iOS 17+, please let me know. Thank you!
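
A hedged sanity check (not a fix): explicitly requesting the Neural Engine at load time and timing a prediction can at least confirm whether iOS 17 honours the request. "VisionModel" stands in for the generated model class, and the model input is omitted:

import Foundation
import CoreML

let config = MLModelConfiguration()
config.computeUnits = .cpuAndNeuralEngine   // available since iOS 16

do {
    let model = try VisionModel(configuration: config)   // placeholder class name
    let start = CFAbsoluteTimeGetCurrent()
    // let output = try model.prediction(input: ...)      // supply the model's real input here
    print("Prediction took \((CFAbsoluteTimeGetCurrent() - start) * 1000) ms")
} catch {
    print("Failed to load model: \(error)")
}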
1 reply · 1 boost · 1.7k views · Mar ’25