Incident & Continuity

Implementation and audit guidance for incident management and business continuity.


AIJET Principles: A = Awareness, I = Integrity, J = Judgment, E = Ethics, T = Transparency

Integrity Transparency
AI Threats: Incident reporting channels must allow flagging AI-driven attacks (e.g., deepfake threats, AI phishing, impersonation).

Guidance to Implement

Develop multiple, anonymous incident reporting channels with prompt response mechanisms.

Guidance to Audit

Incident logs and response time records.

Key Performance Indicator

X% of employees have access to incident reporting channels that allow flagging AI-driven threats.

Integrity Transparency
AI Threats: Remote incident reporting systems must support detection of unauthorized use of AI bots or automated scam attempts.

Guidance to Implement

Develop a remote incident reporting system that integrates with overall incident management.

Guidance to Audit

Incident reporting logs and system usage records.

Key Performance Indicator

X% of remote workers have access to a system for reporting AI-driven incidents.

Integrity Transparency
AI Threats: Continuity plans must prepare for disruptions caused by AI-generated misinformation, deepfake-driven crises, or AI model outages.

Guidance to Implement

Identify critical roles and incorporate them into continuity planning exercises with regular simulations.

Guidance to Audit

Continuity plan documents.

Key Performance Indicator

X% of critical employees are included in continuity plans, with regular simulations to ensure preparedness.

Awareness Integrity Transparency
AI Threats: Crisis communication protocols must train employees to recognize and react to AI-generated fake messages or emergency instructions.

Guidance to Implement

Conduct regular crisis communication training and simulation drills. Update protocols based on feedback.

Guidance to Audit

Training records, drill logs, and updated continuity plan.

Key Performance Indicator

X% of employees are aware of the crisis communication protocol and receive training to recognize AI-driven misinformation.

Integrity Transparency
AI Threats: Emergency drills should simulate scenarios involving deepfake leadership commands or AI-based mass manipulation events.

Guidance to Implement

Draft detailed evacuation plans, conduct annual drills, and update procedures based on lessons learned.

Guidance to Audit

Drill reports, continuity plan, corrective action records.

Key Performance Indicator

X% of employees participate in emergency drills that simulate AI-driven crises like deepfake leadership commands.

Awareness Integrity Transparency
AI Threats: Addresses MITRE ATLAS T0020: Model Evasion by training staff on detecting and responding to adversarial LLM usage.

Guidance to Implement

Include deepfake, hallucination, or evasion scenarios in regular incident simulations and playbooks.

Guidance to Audit

Check inclusion of AI threats in tabletop exercises and red-team reports.

Key Performance Indicator

X% of incident response exercises simulate AI-specific attacks, including LLM misuse and evasion.

Integrity Judgment Transparency
AI Threats: Ensures rapid containment of emerging LLM and model attacks, minimising downtime and reputational damage.

Guidance to Implement

Extend the existing IR plan with AI attack scenarios. Conduct at least one AI tabletop exercise per year involving SOC, Legal, and Comms. Pre-establish contacts for emergency model rollback or isolation.

Guidance to Audit

Review latest tabletop/exercise after-action reports. Verify AI-specific runbooks are stored in IR knowledge base. Check that responsible roles are trained and contact lists kept current.

Key Performance Indicator

X% of incident-response playbooks include AI-specific attacks and are rehearsed at least once a year.

Integrity Judgment Transparency
AI Threats: Enables rapid response to AI misconfiguration incidents exposing biometric or personal data.

Guidance to Implement

Keep version-controlled configs & model snapshots. Pre-stage clean backups for quick restore. Define privacy-notification workflow with Legal & Comms.
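The snapshot-and-restore step above can be sketched in a few lines. This is a minimal illustration, not the control's prescribed tooling: the function names (`snapshot`, `rollback`) and the hash-named file layout are assumptions, and a production setup would sit on top of real version control and artifact storage.

```python
import hashlib
from pathlib import Path

# Hypothetical sketch: store a config file under its SHA-256 so an incident
# drill can verify integrity before rolling back to a known-clean copy.

def snapshot(config_path, store_dir):
    """Copy the config into store_dir, named by its SHA-256, and return the hash."""
    data = Path(config_path).read_bytes()
    digest = hashlib.sha256(data).hexdigest()
    (Path(store_dir) / f"{digest}.cfg").write_bytes(data)
    return digest

def rollback(digest, store_dir, config_path):
    """Restore the snapshot matching digest; refuse if the stored copy is corrupt."""
    data = (Path(store_dir) / f"{digest}.cfg").read_bytes()
    if hashlib.sha256(data).hexdigest() != digest:
        raise ValueError("snapshot corrupted; do not restore")
    Path(config_path).write_bytes(data)
```

The integrity check before restore matters in drills: rolling back to a tampered snapshot would fail the RTO verification this control audits for.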

Guidance to Audit

Run misconfiguration drills twice a year. Verify rollback meets the RTO target. Review evidence of timely regulator notification.

Key Performance Indicator

X% of AI misconfigurations have pre-defined playbooks for rapid rollback, legal compliance, and privacy notifications.

Awareness
AI Threats: LLM03: Data Poisoning

Guidance to Implement

Provide role-specific training: (a) general awareness for business staff to report suspicious data anomalies (e.g., sudden input shifts); (b) technical training for data scientists on poisoning vectors, defenses, and audit trails.

Guidance to Audit

Verify technical staff participation in advanced poisoning mitigation training (e.g., using clean-label attack examples). Confirm alert thresholds in data ingestion pipelines and review incident simulations.
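An "alert threshold in a data ingestion pipeline" can be as simple as a statistical check on each incoming batch. The sketch below is an illustrative assumption, not a prescribed defense: it flags a sudden mean shift in one numeric feature against a trusted baseline, and the function name and 3-sigma threshold are choices made here for the example.

```python
import statistics

# Hypothetical sketch: crude "sudden input shift" detector for an
# ingestion pipeline. Real poisoning defenses would also check label
# distributions, per-source provenance, and multivariate drift.
SHIFT_THRESHOLD_SIGMAS = 3.0

def batch_shift_alert(baseline, batch):
    """Return True if the batch mean deviates suspiciously from the baseline."""
    mu = statistics.mean(baseline)
    sigma = statistics.pstdev(baseline)
    if sigma == 0:
        return statistics.mean(batch) != mu
    shift = abs(statistics.mean(batch) - mu) / sigma
    return shift > SHIFT_THRESHOLD_SIGMAS
```

A drill for this control might feed a deliberately shifted batch through the pipeline and confirm the alert fires and is reported.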

Key Performance Indicator

X% of employees should successfully detect and report at least Y% of suspicious data patterns within Z months of training.

Awareness
AI Threats: LLM04: Model Extraction

Guidance to Implement

Deploy automated telemetry analysis on model endpoints. Monitor for suspicious API call patterns (e.g., bursty token usage, repeated sampling). Train AI teams to review entropy metrics and embedding-space probes.
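A "bursty usage" detector on a model endpoint can start as a sliding-window rate check per client. The following is a minimal sketch under assumptions made here (function name, window size, and threshold are illustrative); real extraction monitoring would add token counts, entropy metrics, and SOC escalation.

```python
import time
from collections import defaultdict, deque

# Hypothetical sketch: flag clients whose request rate over a sliding
# window exceeds a threshold -- a simple proxy for bursty extraction probes.
WINDOW_SECONDS = 60
MAX_REQUESTS_PER_WINDOW = 100

_request_log = defaultdict(deque)  # client_id -> timestamps of recent calls

def record_request(client_id, now=None):
    """Record one API call; return True if the client now looks suspicious."""
    now = time.time() if now is None else now
    calls = _request_log[client_id]
    calls.append(now)
    # Drop timestamps that have fallen out of the sliding window.
    while calls and now - calls[0] > WINDOW_SECONDS:
        calls.popleft()
    return len(calls) > MAX_REQUESTS_PER_WINDOW
```

Flagged clients would then feed the SOC review and escalation flow the audit guidance calls for, with sampled cases reviewed for false positives and negatives.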

Guidance to Audit

Check for detection rule sets on model-serving platforms. Review sample flagged cases for false positives/negatives and verify SOC review and escalation flow exists.

Key Performance Indicator

X% of model access requests should be reviewed and flagged for suspicious activity by trained employees.