Implementation and audit guidance for managing legal risks and third-party relationships.
AIJET Principles: A = Awareness, I = Integrity, J = Judgment, E = Ethics, T = Transparency
ID | Requirement | Guidance to implement | Guidance to audit | AI Threats and Mitigation | Principles | KPI |
---|---|---|---|---|---|---|
LEG-01 | Include mandatory confidentiality terms in contracts with third parties | Incorporate explicit confidentiality clauses into all third-party contracts and monitor compliance. | Signed contracts and NDA records. | Confidentiality clauses must explicitly prohibit the use of client data in unauthorized AI model training or dataset construction. | A, T | X% of third-party contracts include mandatory confidentiality terms prohibiting AI model training with client data. |
LEG-02 | Include mandatory security requirements for third parties, matching the company's security requirements | Integrate security clauses into supplier agreements and conduct periodic audits. | Contract terms and security audit reports. | Security requirements must mandate disclosure of all AI tools and models used by vendors that interact with client data. | I, T | X% of third-party contracts include security clauses that mandate disclosure of AI tools used with client data. |
LEG-03 | Require third parties to impose the same set of requirements on their own third parties (4th parties) | Require all third-party vendors to disclose subcontractors handling company data. Mandate contractual flow-down of AI data protection, logging, and disclosure requirements identical to the primary vendor's. Include audit and termination clauses for non-compliance. | Review vendor contracts for 4th-party obligations. Sample vendors annually to validate subcontractor disclosures and verify matching clauses. | Contracts must require that third-party subcontractors apply identical AI data protection standards (AI usage flow-down). | J, T | X% of third-party contracts ensure that subcontractors apply identical AI data protection standards. |
LEG-04 | Require annual evidence of training & awareness provided to employees of third parties | Include in contracts a requirement for annual AI security training, covering data leakage via LLMs, responsible use, and bias awareness. Vendors must submit proof of completion and content summaries. | Request anonymized completion data and a training content outline. Verify training content includes AI-specific elements (e.g., prompt injection, misuse, shadow training). | Annual training evidence must include AI-related data handling practices, bias mitigation, and responsible AI tool usage awareness. | A, E, J, T | X% of third-party vendors provide annual evidence of AI-related training and awareness for their employees. |
LEG-05 | Require AI risk disclosure and risk mitigation policies from all vendors using LLMs in their services | Add AI-specific disclosure clauses in supplier contracts and audit third-party AI tool usage annually. | Maintain copies of contracts, audit reports, and vendor AI risk certifications. | Addresses OWASP LLM03:2025 by reducing the risk of supply chain compromise via untrusted AI tools. | I, T | X% of vendor contracts disclose AI risks and risk mitigation policies for AI tools used in services. |
LEG-06 | Embed security & update clauses in all contracts for third-party AI services and pre-trained models (patch deadlines, vulnerability disclosure, provenance guarantees) | Use a standard AI-security rider with minimum SLA and provenance requirements. Mandate timely patching and disclosure of model vulnerabilities. Require right-to-audit and termination rights on security grounds. | Inspect vendor contracts for presence of the AI-security rider and patch SLA. Review vendor security attestations and third-party audit reports annually. Track patch SLA compliance metrics in the vendor-management system. | Limits exposure to unpatched or malicious third-party models/services and provides legal leverage for rapid remediation. | I, J, T | X% of third-party AI service contracts include AI security clauses, patch deadlines, vulnerability disclosure, and provenance guarantees. |
LEG-07 | Require vendors of high-risk AI to deliver a Privacy Impact Assessment | Add a PIA requirement to the RFP checklist; block onboarding if the PIA is absent. | Audit the vendor's PIA against the risk matrix during the annual review. | Exposes hidden privacy threats in third-party models. | J, T | X% of high-risk AI vendors provide a Privacy Impact Assessment (PIA) before onboarding. |
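The X% KPIs above are coverage ratios over the vendor-contract register. The sketch below is a minimal illustration of how such ratios could be computed, assuming a register in which each contract is tagged with a boolean flag per LEG requirement; the field names and the `kpi_percentage` helper are hypothetical and not part of this framework. Note that LEG-07 is scoped to high-risk AI vendors only, so its denominator differs from the other KPIs.

```python
from dataclasses import dataclass
from typing import Callable, List, Optional


@dataclass
class VendorContract:
    """One row in a hypothetical vendor-contract register (illustrative fields)."""
    vendor: str
    has_confidentiality_ai_clause: bool    # LEG-01
    has_ai_tool_disclosure_clause: bool    # LEG-02
    has_fourth_party_flowdown: bool        # LEG-03
    has_annual_ai_training_evidence: bool  # LEG-04
    has_ai_risk_disclosure: bool           # LEG-05
    has_ai_security_rider: bool            # LEG-06
    is_high_risk_ai_vendor: bool           # scoping flag for LEG-07
    has_pia_on_file: bool                  # LEG-07


def kpi_percentage(
    contracts: List[VendorContract],
    predicate: Callable[[VendorContract], bool],
    scope: Optional[Callable[[VendorContract], bool]] = None,
) -> float:
    """Share of in-scope contracts satisfying the predicate, as a percentage."""
    in_scope = [c for c in contracts if scope is None or scope(c)]
    if not in_scope:
        return 0.0
    return 100.0 * sum(predicate(c) for c in in_scope) / len(in_scope)


if __name__ == "__main__":
    # Example register with made-up vendors, for demonstration only.
    register = [
        VendorContract("Acme Analytics", True, True, False, True, True, True, False, False),
        VendorContract("ModelWorks", True, False, True, False, True, True, True, True),
    ]

    # LEG-01: confidentiality clauses prohibiting AI training on client data (all vendors).
    leg01 = kpi_percentage(register, lambda c: c.has_confidentiality_ai_clause)

    # LEG-07: PIA on file, counted only over high-risk AI vendors.
    leg07 = kpi_percentage(
        register,
        lambda c: c.has_pia_on_file,
        scope=lambda c: c.is_high_risk_ai_vendor,
    )

    print(f"LEG-01 coverage: {leg01:.0f}%")
    print(f"LEG-07 coverage: {leg07:.0f}%")
```

The same pattern extends to LEG-02 through LEG-06 by swapping in the corresponding flag as the predicate; the register itself could equally be exported from a vendor-management system rather than maintained in code.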