Auricus Voice is a sovereign voice-AI appliance built for regulated workloads. Audio and transcripts stay inside your network; nothing leaves the appliance on the inference path.
The detailed legal text that follows is kept in English to preserve exact citations (CFR, OJ, articles). A full French translation is planned after review.
Auricus Voice tracks the proposed simplification package — COM(2025) 837 final and COM(2025) 836 final — and supplies the engineering primitives deployers need to take advantage of it once adopted.
| Omnibus lever (proposed) | Auricus Voice mechanism |
|---|---|
| High-risk-only breach notification, 96 h deadline, ENISA single-entry-point | Structured audit log + Prometheus SLO alerts; export-ready evidence pack |
| AI-system operation exemption for special-category data, conditional on T&O measures | Per-job purge of audio in working storage; transcripts removed after delivery; no model retraining on customer audio |
| Harmonised DPIA lists + common template | Documented data-flow diagram and processing-purpose statement available to support customer DPIAs; no profiling, no Art 22 decisioning |
| Single-entry-point for incident reporting, “report once, share many” | Request-ID-keyed log schema, structured for SEP ingestion |
| Pseudonymisation / “no means reasonably likely to identify” clarification | Optional speaker-anonymisation post-processing on roadmap; appliance never holds an identifier-to-voice mapping by default |
| Trade-secret refusal vs. third-country data sharing | On-prem only; zero outbound data path on the inference hot path — third-country exposure is structurally impossible |
| Bias detection / correction with safeguards (Art 4a AI Act) | Per-language WER tracking, per-language model routing, and language-analytics dashboards already segment quality by language; customers can run bias monitoring on top of these signals with the Art 4a discipline |
| Transcription, not generative — Art 50(2) marking N/A | Verbatim transcripts of human-authored audio; not a generative AI system, so the synthetic-content marking obligation does not apply |
| AI literacy obligation shifts to Member States / Commission encouragement (Art 4 AI Act) | Vendor documentation, Grafana dashboards, and runbooks support customer AI-literacy programmes |
| Standards-linked entry into application of Chapter III (Art 113 AI Act) | Chapter III high-risk obligations apply only after a Commission decision confirms standards availability — extra runway for high-risk customer deployments built on Auricus Voice |
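Several rows above lean on a request-ID-keyed, structured audit log ("report once, share many"). A minimal sketch of what one such log line could look like, in Python; the field names and event vocabulary are illustrative assumptions, not the appliance's actual schema:

```python
import json
import uuid
from datetime import datetime, timezone

def make_audit_record(job_id: str, event: str, detail: dict) -> str:
    """Build one structured audit-log line, keyed by a request ID so the
    same evidence can be exported once and shared with several
    recipients via a single entry point."""
    record = {
        "request_id": str(uuid.uuid4()),   # stable key for SEP ingestion
        "job_id": job_id,                  # ties the event to one transcription job
        "event": event,                    # e.g. "audio_purged", "transcript_delivered"
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "detail": detail,
    }
    return json.dumps(record, sort_keys=True)

# Example: record that working-storage audio was purged for one job
line = make_audit_record("job-0001", "audio_purged", {"store": "working"})
```

One JSON object per line keeps the log greppable by `request_id` and trivially ingestible by downstream collectors.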
Auricus Voice is built to drop into the patchwork of U.S. federal and state regulation that touches voice data — and to give deployers in regulated sectors the technical primitives they need to meet their specific obligations.
| Framework | What it covers | Auricus Voice mechanism |
|---|---|---|
| HIPAA Security Rule (45 CFR §§ 164.302–318) and Privacy Rule (45 CFR Part 164, Subpart E) | Protected Health Information (PHI) handled by covered entities and business associates | On-prem only — PHI in audio and transcripts never leaves the deployer’s network. Audit logging with request IDs, TLS in transit, configurable retention (per-job purge), and bearer-token access support the §§ 164.308 (administrative safeguards), 164.312 (technical safeguards), and 164.316 (documentation) controls. Auricus is willing to execute a Business Associate Agreement (BAA) for healthcare deployments. |
| GLBA Safeguards Rule (16 CFR Part 314) | Customer financial information at financial institutions | Same on-prem posture eliminates third-party processor exposure. Structured audit logs and SLO metrics support § 314.4(d)(2) (continuous monitoring / log review) and § 314.4(h) (incident-response programme). |
| PCI DSS 4.0.1 (in force since 31 Mar 2025) | Cardholder data captured in call recordings (where pause-and-resume is not feasible end-to-end) | On-prem inference keeps audio and transcripts inside the deployer’s cardholder data environment (CDE); no external API egress. Audit logs and access controls support PCI DSS Requirements 7 (least privilege), 10 (logging & monitoring), and 12 (information-security policy) from the appliance side. Customers retain responsibility for SAD/CHD handling in the audio stream itself. |
| CJIS Security Policy (FBI, current edition) | Criminal Justice Information (CJI) handled by law-enforcement agencies and contractors | On-prem deployment, U.S.-located inference, no third-country exposure on the inference path; structured audit logging and access controls support § 5.4 (auditing & accountability) and § 5.5 (access control) primitives. |
| TCPA (47 U.S.C. § 227) and state two-party / all-party recording-consent laws (CA, CT, FL, IL, MD, MA, MT, NH, PA, WA) | Consent for call recording and outbound calling | Auricus Voice does not record audio — the deployer’s existing telephony or contact-center platform does. The appliance receives audio for transcription only and supports configurable retention so deployers can align with their notice-and-consent posture. |
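The per-job purge and configurable-retention mechanisms cited above reduce to a simple lifecycle: deliver the transcript, then remove the job's artefacts from appliance-side working storage. A hedged sketch under the assumption of a file-per-job layout (function names and paths are invented for illustration):

```python
import os
import tempfile

def deliver_and_purge(audio_path: str, transcript: str, deliver) -> None:
    """Deliver the transcript, then purge the job's working-storage
    artefacts: the transcript is handed off (not retained) and the
    source audio is removed per job."""
    deliver(transcript)    # hand the transcript to the deployer's sink
    os.remove(audio_path)  # per-job purge of audio in working storage

# Usage: a throwaway temp file stands in for one job's audio
received = []
with tempfile.NamedTemporaryFile(suffix=".wav", delete=False) as f:
    audio = f.name
deliver_and_purge(audio, "verbatim transcript", received.append)
```

Purging only after successful delivery is the conservative ordering: a failed delivery leaves the audio in place for retry rather than losing the job.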
| Framework | Scope and Auricus Voice mechanism |
|---|---|
| CCPA / CPRA (Cal. Civ. Code § 1798.100 et seq.) plus the CPPA’s Automated Decisionmaking Technology (ADMT) regulations | Voice recordings can be both “personal information” and “sensitive personal information” (biometric identifiers) under § 1798.140(ae). Auricus Voice keeps that data on-prem; transcription is verbatim, with no automated individual decision-making in the product. |
| Illinois BIPA (740 ILCS 14/), as amended by P.A. 103-0769 (Aug 2024) limiting per-violation accumulation | Voiceprints qualify as biometric identifiers under § 10. Auricus Voice does not derive or store a voiceprint by default; speaker-anonymisation is on the roadmap. The appliance never holds an identifier-to-voice mapping by default. |
| Texas CUBI (Tex. Bus. & Com. Code § 503.001) and Washington H.B. 1493 | Same posture — no voiceprint extraction by default; per-job audio purge in working storage; transcripts removed from appliance-side stores after delivery. |
| Comprehensive state privacy laws — VCDPA (VA), CPA (CO), CTDPA (CT), UCPA (UT), TIPA (TN), and analogues | On-prem retention controls and per-request audit trails support consumer-rights handling (access, deletion, opt-out of “sale”/“share”) at the deployer’s data-controller layer. |
| Colorado AI Act (SB 24-205, effective 30 Jun 2026 following the 2025 delay) | Where a deployer integrates Auricus Voice into a “high-risk artificial intelligence system,” per-language WER tracking, language-analytics dashboards, and structured audit logs supply the algorithmic-discrimination monitoring and impact-assessment evidence the deployer needs under Colo. Rev. Stat. §§ 6-1-1701 to 6-1-1707. Compliance remains the deployer’s responsibility. |
| Tennessee ELVIS Act (Tenn. Code Ann. § 47-25-1101 et seq., effective 1 Jul 2024) and analogous voice-cloning prohibitions | Auricus Voice produces verbatim transcripts of human-authored audio; it does not synthesize, clone, or impersonate voices. The ELVIS Act voice-cloning prohibitions do not apply to the product. |
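Per-language WER tracking appears in both the Colorado AI Act row here and the bias-monitoring row in the EU table. WER is the word-level edit distance between a reference and a hypothesis transcript, divided by the reference word count; a stdlib-only sketch (the per-language bucketing mirrors how the dashboards are described as segmenting quality, and is an assumption about the pipeline):

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level edit distance / reference word count."""
    ref, hyp = reference.split(), hypothesis.split()
    # Classic one-row dynamic-programming edit distance over words
    d = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, 1):
        prev, d[0] = d[0], i
        for j, h in enumerate(hyp, 1):
            prev, d[j] = d[j], min(d[j] + 1,         # deletion
                                   d[j - 1] + 1,     # insertion
                                   prev + (r != h))  # substitution
    return d[-1] / max(len(ref), 1)

# Per-language buckets feed the language-analytics dashboards
by_language = {"en": wer("the quick brown fox", "the quick brown fox"),
               "fr": wer("le chat dort", "le chien dort")}
# by_language["fr"] is 1/3: one substitution across three reference words
```

Segmenting this metric by language is what turns a single quality number into a bias-monitoring signal: a systematically higher WER in one language bucket is exactly the disparity the Art 4a / Colorado rows ask deployers to watch for.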
| Framework | Auricus Voice alignment |
|---|---|
| NIST Cybersecurity Framework (CSF) 2.0 (Feb 2024) | Govern / Identify / Protect / Detect / Respond / Recover primitives: structured audit logs, Prometheus SLOs, on-prem isolation, configurable retention, dead-letter handling for failed deliveries. |
| NIST AI Risk Management Framework (AI RMF) 1.0 (Jan 2023) and the Generative AI Profile (NIST AI 600-1, Jul 2024) | Auricus Voice is not generative; it produces verbatim transcripts. Per-language WER tracking, model lifecycle observability, and quality SLOs map to the Measure and Manage functions. The Govern / Map functions remain deployer-owned. |
| NIST SP 800-53 Rev. 5 | Vendor-side primitives for the AC (access control), AU (audit & accountability), CM (configuration management), IR (incident response), and SC (system & communications protection) families — exposed through bearer-token auth, structured audit logs with request IDs, configurable retention, and TLS at ingress. |
| NIST SP 800-171 Rev. 3 + DFARS 252.204-7012 / 7021 + CMMC 2.0 (32 CFR Part 170, final rule Oct 2024) | For Controlled Unclassified Information (CUI) in voice data, Auricus Voice’s on-prem posture, audit logging, and absence of inference-hot-path egress support the 800-171 control families relevant to voice handling. Deployers retain responsibility for the encompassing CMMC Level 1 / 2 / 3 assessment. |
| Executive Order 14179 (“Removing Barriers to American Leadership in AI”, 23 Jan 2025) and OMB Memos M-25-21 / M-25-22 (3 Apr 2025) on federal AI use and acquisition | The current federal AI-procurement framework prioritises American-built, transparent, and reliable AI systems with documented performance characteristics. Auricus Voice supplies a public spec sheet, exportable Prometheus metrics, per-language WER dashboards, and a documented integration surface — the evidence federal contracting officers and Chief AI Officers need under M-25-22’s acquisition guidance. |
| FedRAMP | Auricus Voice is an on-prem appliance, not a SaaS — FedRAMP authorisation is not directly applicable. For federal customers operating Auricus Voice inside an authorised boundary (cloud or on-prem), the appliance inherits the boundary’s authorisation; we supply the security documentation needed for the customer’s System Security Plan (SSP) package. |
| Section 889 (FY2019 NDAA, 41 U.S.C. § 3901 note) and supply-chain risk management | Auricus Voice’s bill of materials is documented and assessable for federal supply-chain reviews; please contact us for the current SBOM and component-origin disclosures. |
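The bearer-token access control cited in the SP 800-53 row is, at minimum, a constant-time credential comparison. A sketch of that one primitive using Python's standard library; the function name and token values are illustrative, not the appliance's API:

```python
import hmac

def authorized(presented: str, expected: str) -> bool:
    """Constant-time bearer-token check (an AC-family primitive).
    hmac.compare_digest avoids the timing side channel that a plain
    `==` string comparison can leak to a remote caller."""
    return hmac.compare_digest(presented.encode(), expected.encode())

# Reject anything that is not an exact match for the configured token
ok = authorized("s3cret-token", "s3cret-token")
bad = authorized("s3cret-tokem", "s3cret-token")
```

Combined with TLS at ingress and per-request audit lines, this is the shape of the AC / AU evidence an assessor would sample.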
Auricus Voice is suitable for deployments in healthcare, financial services, the public sector, law enforcement, and defense, and for any deployment on either side of the Atlantic where cross-border, third-party, or non-deterministic voice-data processing is a non-starter.
Compliance posture aligns with Regulation (EU) 2016/679 (GDPR), OJ L 119, 4.5.2016, p. 1–88, and Regulation (EU) 2024/1689 (AI Act), OJ L, 2024/1689, 12.7.2024, as the in-force baseline. Engineering and documentation also track the proposed simplification package: COM(2025) 837 final (Digital Omnibus, 19 Nov 2025) and COM(2025) 836 final (Digital Omnibus on AI, 19 Nov 2025), currently in trilogue (Council mandate 13 Mar 2026; Parliament mandate 26 Mar 2026), with final adoption targeted later in 2026. High-risk obligations under Chapter III, Sections 1–3 (Articles 9–17, Article 26) of the AI Act become enforceable on 2 August 2026.
United States posture references the Health Insurance Portability and Accountability Act, 42 U.S.C. § 1320d et seq. and 45 CFR Parts 160 and 164; the Gramm–Leach–Bliley Act and FTC Safeguards Rule, 16 CFR Part 314; the PCI Data Security Standard v4.0.1 (in force since 31 March 2025); the FBI CJIS Security Policy (current edition); the TCPA, 47 U.S.C. § 227; the California Consumer Privacy Act / Privacy Rights Act, Cal. Civ. Code § 1798.100 et seq.; the Illinois Biometric Information Privacy Act, 740 ILCS 14/, as amended by P.A. 103-0769 (Aug 2024); the Colorado AI Act, SB 24-205 (effective 30 Jun 2026 following the 2025 delay); the Tennessee ELVIS Act, Tenn. Code Ann. § 47-25-1101 et seq. (effective 1 Jul 2024); NIST CSF 2.0 (Feb 2024); NIST AI RMF 1.0 (Jan 2023) plus Generative AI Profile NIST AI 600-1 (Jul 2024); NIST SP 800-53 Rev. 5; NIST SP 800-171 Rev. 3 + DFARS 252.204-7012/7021 + CMMC 2.0 (32 CFR Part 170, final rule Oct 2024); Executive Order 14179 (23 Jan 2025); and OMB Memos M-25-21 and M-25-22 (3 Apr 2025).
Auricus Voice supplies vendor-side evidence primitives where the deployer’s use case is regulated; compliance remains the deployer’s responsibility under their specific use case, jurisdiction, and contractual obligations. Mappings will be re-validated as the regulatory landscape evolves on either side of the Atlantic.