Incident & Continuity
Implementation and audit guidance for incident management and business continuity.
Guidance to Implement
Develop multiple, anonymous incident reporting channels with prompt response mechanisms.
Guidance to Audit
Incident logs and response time records.
Key Performance Indicator
X% of employees have access to incident reporting channels that allow flagging AI-driven threats.
Guidance to Implement
Develop a remote incident reporting system that integrates with overall incident management.
Guidance to Audit
Incident reporting logs and system usage records.
Key Performance Indicator
X% of remote workers have access to a system for reporting AI-driven incidents.
Guidance to Implement
Identify critical roles and incorporate them into continuity planning exercises with regular simulations.
Guidance to Audit
Continuity plan documents.
Key Performance Indicator
X% of critical employees are included in continuity plans, with regular simulations to ensure preparedness.
Guidance to Implement
Conduct regular crisis communication training and simulation drills. Update protocols based on feedback.
Guidance to Audit
Training records, drill logs, and updated continuity plan.
Key Performance Indicator
X% of employees are aware of the crisis communication protocol and receive training to recognize AI-driven misinformation.
Guidance to Implement
Draft detailed evacuation plans, conduct annual drills, and update procedures based on lessons learned.
Guidance to Audit
Drill reports, continuity plan, corrective action records.
Key Performance Indicator
X% of employees participate in emergency drills that simulate AI-driven crises like deepfake leadership commands.
Guidance to Implement
Include deepfake, hallucination, or evasion scenarios in regular incident simulations and playbooks.
Guidance to Audit
Check inclusion of AI threats in tabletop exercises and red-team reports.
Key Performance Indicator
X% of incident response exercises simulate AI-specific attacks, including LLM misuse and evasion.
Guidance to Implement
Extend the existing IR plan with AI attack scenarios. Conduct at least one AI tabletop exercise per year involving SOC, Legal, and Comms. Pre-establish contacts for emergency model rollback or isolation.
Guidance to Audit
Review latest tabletop/exercise after-action reports. Verify AI-specific runbooks are stored in IR knowledge base. Check that responsible roles are trained and contact lists kept current.
Key Performance Indicator
X% of incident-response playbooks include AI-specific attacks and are rehearsed at least once a year.
Guidance to Implement
Keep version-controlled configurations and model snapshots. Pre-stage clean backups for quick restore. Define a privacy-notification workflow with Legal and Comms.
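The snapshot-and-restore step above can be sketched as a small content-addressed store; `SNAPSHOT_DIR`, `snapshot`, and `rollback` are hypothetical names for illustration, and a real deployment would more likely use a model registry or tooling such as Git/DVC rather than ad-hoc file copies.

```python
import hashlib
import json
import shutil
from pathlib import Path

SNAPSHOT_DIR = Path("snapshots")  # hypothetical storage location

def snapshot(model_path: str, config: dict) -> str:
    """Store a copy of the model file plus its config, keyed by content hash."""
    data = Path(model_path).read_bytes()
    digest = hashlib.sha256(data).hexdigest()[:12]
    dest = SNAPSHOT_DIR / digest
    dest.mkdir(parents=True, exist_ok=True)
    shutil.copy2(model_path, dest / Path(model_path).name)
    (dest / "config.json").write_text(json.dumps(config, indent=2))
    return digest

def rollback(digest: str, target_path: str) -> dict:
    """Restore the snapshot identified by `digest`; return its stored config."""
    dest = SNAPSHOT_DIR / digest
    model_file = next(p for p in dest.iterdir() if p.name != "config.json")
    shutil.copy2(model_file, target_path)
    return json.loads((dest / "config.json").read_text())
```

Keying snapshots by content hash makes each restore verifiable against the rollback RTO drills described in the audit guidance.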
Guidance to Audit
Run misconfiguration drills twice per year. Verify rollback meets the RTO target. Review evidence of timely regulator notification.
Key Performance Indicator
X% of AI misconfigurations have pre-defined playbooks for rapid rollback, legal compliance, and privacy notifications.
Guidance to Implement
Provide role-specific training: (a) general awareness for business staff to report suspicious data anomalies (e.g., sudden input shifts); (b) technical training for data scientists on poisoning vectors, defenses, and audit trails.
Guidance to Audit
Verify technical staff participation in advanced poisoning mitigation training (e.g., using clean-label attack examples). Confirm alert thresholds in data ingestion pipelines and review incident simulations.
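An alert threshold in a data ingestion pipeline can be as simple as a statistical check on each incoming batch; the `drift_alert` helper and its z-score threshold below are illustrative assumptions, not a prescribed poisoning defense.

```python
import statistics

def drift_alert(baseline: list[float], batch: list[float],
                z_threshold: float = 3.0) -> bool:
    """Flag a batch whose mean deviates from the baseline mean by more than
    z_threshold standard errors -- a crude signal of a sudden input shift."""
    mu = statistics.mean(baseline)
    sigma = statistics.stdev(baseline)
    if sigma == 0:
        return statistics.mean(batch) != mu
    standard_error = sigma / (len(batch) ** 0.5)
    z = abs(statistics.mean(batch) - mu) / standard_error
    return z > z_threshold
```

Flagged batches would feed the incident simulations mentioned above; production pipelines typically layer richer checks (per-feature distributions, label audits) on top of a mean-shift test like this.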
Key Performance Indicator
X% of employees detect and report at least Y% of suspicious data patterns within Z months of training.
Guidance to Implement
Deploy automated telemetry analysis on model endpoints. Monitor for suspicious API call patterns (e.g., bursty token usage, repeated sampling). Train AI teams to review entropy metrics and embedding-space probes.
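A minimal sketch of flagging bursty API usage is a per-client sliding-window rate check; the `BurstDetector` class and its default thresholds are assumptions for illustration, and production telemetry would typically run in the serving platform or SIEM rather than in application code.

```python
from collections import defaultdict, deque

class BurstDetector:
    """Flag clients whose request rate within a sliding time window exceeds a
    threshold -- a common signal of repeated sampling or model extraction."""

    def __init__(self, window_seconds: float = 60.0, max_requests: int = 100):
        self.window = window_seconds
        self.max_requests = max_requests
        self.history = defaultdict(deque)  # client_id -> recent timestamps

    def record(self, client_id: str, timestamp: float) -> bool:
        """Record one API call; return True if the client is now over the limit."""
        q = self.history[client_id]
        q.append(timestamp)
        # Evict timestamps that have fallen out of the sliding window.
        while q and timestamp - q[0] > self.window:
            q.popleft()
        return len(q) > self.max_requests
```

Flags from a detector like this would feed the SOC review and escalation flow described in the audit guidance, alongside entropy and embedding-space signals.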
Guidance to Audit
Check for detection rule sets on model-serving platforms. Review sample flagged cases for false positives/negatives and verify SOC review and escalation flow exists.
Key Performance Indicator
X% of model access requests are reviewed, and suspicious activity is flagged, by trained employees.