I'm developing a macOS application using the FoundationModels framework (LanguageModelSession) and encountering issues with the content sanitizer blocking legitimate text input.
**Issue Description:** The content sanitizer is flagging text strings that contain certain substrings, even when they represent legitimate technical content. For example:
- F_SEEL_SEX1S.wav (sE Electronics SEX1S microphone model)
- Technical product identifiers
- Serial numbers and version codes
**Broader Concern:** The content sanitizer applies restrictions that seem inappropriate for user-owned content. Even if a filename were something like "human sex.wav", users should have the right to process their own legitimate files on their own devices without content filtering interference.
**Error Messages:**
- SensitiveContentSettings: Sanitizer model found unsafe content in value
- FoundationModels.LanguageModelSession.GenerationError error 2
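For context, here is roughly where the error surfaces for me (a minimal sketch; the function name and prompt wording are illustrative, not from my actual project):

```swift
import FoundationModels

// Minimal sketch: asking the on-device model to label an audio file.
func label(forAudioFile filename: String) async -> String? {
    let session = LanguageModelSession()
    do {
        let response = try await session.respond(
            to: "Suggest a short display label for the audio file \(filename)."
        )
        return response.content
    } catch let error as LanguageModelSession.GenerationError {
        // Passing e.g. "F_SEEL_SEX1S.wav" lands here with "error 2"
        // (the guardrail/sanitizer rejection), even though the name
        // is a legitimate product identifier.
        print("Generation failed: \(error)")
        return nil
    } catch {
        print("Unexpected error: \(error)")
        return nil
    }
}
```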
**Questions:**
1. Is there a way to disable content sanitization for processing user-owned content?
2. What's the recommended approach for applications that need to handle arbitrary user text? (The best interim workaround I've found is sketched after this list.)
3. Are there APIs to process personal content without filtering restrictions?
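On question 2, my interim workaround is to keep the raw filenames out of the prompt entirely: replace each one with a neutral placeholder token before calling the model, then substitute the originals back into the response. A minimal sketch (the helper name and token format are my own, not framework API):

```swift
import Foundation
import FoundationModels

// Sketch: shield raw filenames from the sanitizer by swapping them for
// neutral tokens before prompting, then restoring them in the response.
func respondShieldingFilenames(
    prompt: String,
    filenames: [String],
    session: LanguageModelSession
) async throws -> String {
    var masked = prompt
    var mapping: [String: String] = [:]
    for (index, name) in filenames.enumerated() {
        let token = "FILE_\(index)"   // assumed not to occur naturally in the text
        mapping[token] = name
        masked = masked.replacingOccurrences(of: name, with: token)
    }
    let response = try await session.respond(to: masked)
    var restored = response.content
    for (token, name) in mapping {
        restored = restored.replacingOccurrences(of: token, with: name)
    }
    return restored
}
```

This keeps the sanitizer from ever seeing the user's strings, but it obviously breaks down when the model needs to reason about the filename's content, so official guidance would still be welcome.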
**Environment:**
- macOS 26.0
- FoundationModels framework
- LanguageModelSession
Any guidance would be appreciated.
Thanks for sharing the information. I'd suggest that you file a feedback report with your use case, and share your report ID here.
Best,
——
Ziqiao Chen
Worldwide Developer Relations.