Implementation and audit guidance for incident management and business continuity.
AIJET Principles: A = Awareness, I = Integrity, J = Judgment, E = Ethics, T = Transparency
ID | Requirement | Guidance to implement | Guidance to audit | AI Threats and Mitigation | Principles | KPI |
---|---|---|---|---|---|---|
INC-01 | Implement security incident reporting channels | Develop multiple, anonymous incident reporting channels with prompt response mechanisms. | Incident logs and response time records. | Incident reporting channels must allow flagging AI-driven attacks (e.g., deepfake threats, AI phishing, impersonation). | I, T | X% of employees have access to incident reporting channels that allow flagging AI-driven threats. |
INC-02 | Implement security incident protocols for remote work environments | Develop a remote incident reporting system that integrates with overall incident management. | Incident reporting logs and system usage records. | Remote incident reporting systems must support detection of unauthorized use of AI bots or automated scam attempts. | I, T | X% of remote workers have access to a system for reporting AI-driven incidents. |
INC-03 | Involve critical employees in the continuity plan | Identify critical roles and incorporate them into continuity planning exercises with regular simulations. | Continuity plan documents and simulation records. | Continuity plans must prepare for disruptions caused by AI-generated misinformation, deepfake-driven crises, or AI model outages. | I, T | X% of critical employees are included in continuity plans, with regular simulations to ensure preparedness. |
INC-04 | Ensure employees are aware of the crisis communication protocol | Conduct regular crisis communication training and simulation drills. Update protocols based on feedback. | Training records, drill logs, and updated continuity plan. | Crisis communication protocols must train employees to recognize and react to AI-generated fake messages or emergency instructions. | A, I, T | X% of employees are aware of the crisis communication protocol and receive training to recognize AI-driven misinformation. |
INC-05 | Develop and regularly test emergency plans | Draft detailed evacuation plans, conduct annual drills, and update procedures based on lessons learned. | Drill reports, continuity plan, corrective action records. | Emergency drills should simulate scenarios involving deepfake leadership commands or AI-based mass manipulation events. | I, T | X% of employees participate in emergency drills that simulate AI-driven crises like deepfake leadership commands. |
INC-06 | Simulate incident response to AI-specific attacks, including model evasion and LLM misuse | Include deepfake, hallucination, or evasion scenarios in regular incident simulations and playbooks. | Check inclusion of AI threats in tabletop exercises and red-team reports. | Addresses MITRE ATLAS T0020: Model Evasion by training staff on detecting and responding to adversarial LLM usage. | A, I, T | X% of incident response exercises simulate AI-specific attacks, including LLM misuse and evasion. |
INC-07 | Maintain and rehearse incident-response playbooks for AI-specific attacks (prompt injection, model inversion, data poisoning, model hijack) | Extend the existing IR plan with AI attack scenarios. Conduct at least one AI tabletop exercise per year involving SOC, Legal, and Comms. Pre-establish contacts for emergency model rollback or isolation. | Review the latest tabletop/exercise after-action reports. Verify AI-specific runbooks are stored in the IR knowledge base. Check that responsible roles are trained and contact lists are kept current. | Ensures rapid containment of emerging LLM and model attacks, minimising downtime and reputational damage. | I, J, T | X% of incident-response playbooks include AI-specific attacks and are rehearsed at least once a year. |
INC-08 | Maintain playbooks for rapid rollback, privacy notice, and legal compliance when AI misconfigurations expose biometric or personal data | Keep version-controlled configs and model snapshots. Pre-stage clean backups for quick restore. Define a privacy-notification workflow with Legal and Comms. | Run misconfiguration drills twice a year. Verify rollback meets the RTO target. Review evidence of timely regulator notification. | Enables rapid response to AI misconfiguration incidents that expose biometric or personal data. | I, J, T | X% of AI misconfigurations have pre-defined playbooks for rapid rollback, legal compliance, and privacy notifications. |
INC-09 | Implement regular AI-specific training to detect and respond to Data Poisoning | Provide role-specific training: (a) general awareness for business staff to report suspicious data anomalies (e.g., sudden input shifts); (b) technical training for data scientists on poisoning vectors, defenses, and audit trails. | Verify technical staff participation in advanced poisoning-mitigation training (e.g., using clean-label attack examples). Confirm alert thresholds in data ingestion pipelines and review incident simulations. | LLM03: Data Poisoning | A | X% of employees successfully detect and report at least Y% of suspicious data patterns within Z months of training. |
INC-10 | Implement response measures to Model Extraction attempts | Deploy automated telemetry analysis on model endpoints. Monitor for suspicious API call patterns (e.g., bursty token usage, repeated sampling). Train AI teams to review entropy metrics and embedding-space probes. | Check for detection rule sets on model-serving platforms. Review sample flagged cases for false positives/negatives and verify that a SOC review and escalation flow exists. | LLM04: Model Extraction | A | X% of model access requests are reviewed and flagged for suspicious activity by trained employees. |
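
The telemetry analysis described in INC-10 can be sketched as a simple window-based detector. This is a minimal illustration, not a production rule set: the function name, the threshold constants, and the log format are all hypothetical, and real deployments would tune thresholds per endpoint and feed flagged clients into the SOC escalation flow the audit guidance calls for.

```python
from collections import defaultdict, Counter

# Hypothetical thresholds -- real values would be tuned per deployment.
MAX_CALLS_PER_WINDOW = 100   # bursty call volume per client per time window
MAX_REPEAT_RATIO = 0.5       # fraction of identical prompts suggesting systematic sampling

def flag_suspicious_clients(call_log):
    """Scan one time window of model-endpoint calls and flag clients whose
    usage resembles model-extraction probing: bursty volume or highly
    repetitive prompt sampling.

    call_log: iterable of (client_id, prompt) tuples.
    Returns a dict mapping flagged client_id -> list of reasons.
    """
    calls = defaultdict(list)
    for client_id, prompt in call_log:
        calls[client_id].append(prompt)

    flagged = {}
    for client_id, prompts in calls.items():
        reasons = []
        if len(prompts) > MAX_CALLS_PER_WINDOW:
            reasons.append("bursty volume: %d calls in window" % len(prompts))
        # Repeated sampling: a large share of calls reuse the same prompt.
        most_common_count = Counter(prompts).most_common(1)[0][1]
        if len(prompts) >= 10 and most_common_count / len(prompts) > MAX_REPEAT_RATIO:
            reasons.append("repeated sampling of identical prompt")
        if reasons:
            flagged[client_id] = reasons
    return flagged

# Example window: one normal client, one client probing the endpoint.
window = [("alice", "summarise this report")] * 5
window += [("eve", "probe prompt")] * 150
print(flag_suspicious_clients(window))  # flags "eve" on both rules, not "alice"
```

A detector like this covers only the call-pattern half of INC-10; the entropy-metric and embedding-space probes mentioned in the implementation guidance would require access to model outputs and are out of scope for this sketch.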