Training & Awareness

Implementation and audit guidance for cybersecurity-related awareness and training activities.


AIJET Principles: A = Awareness, I = Integrity, J = Judgment, E = Ethics, T = Transparency

Awareness
AI Threats: Integrate examples of AI-generated phishing and deepfake recognition into regular security awareness materials.

Guidance to Implement

Implement a recurring security awareness communications schedule that is refreshed regularly; incorporate current threat scenarios into the content.

Guidance to Audit

Awareness materials and the communications schedule; verify that current AI threat scenarios (e.g., AI-generated phishing, deepfakes) are included.

Key Performance Indicator

X% of employees recognize AI-generated threats.
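
A minimal sketch of how this KPI might be measured from quiz or phishing-simulation results; the record fields and values below are illustrative assumptions, not part of this guidance.

```python
# Hypothetical quiz/simulation records: one dict per employee attempt.
# Field names ("employee_id", "identified_ai_threat") are assumptions.
results = [
    {"employee_id": "e001", "identified_ai_threat": True},
    {"employee_id": "e002", "identified_ai_threat": False},
    {"employee_id": "e003", "identified_ai_threat": True},
]

def recognition_rate(records):
    """Return the percentage of employees who recognized the AI-generated threat."""
    if not records:
        return 0.0
    recognized = sum(1 for r in records if r["identified_ai_threat"])
    return 100.0 * recognized / len(records)

print(f"AI-threat recognition rate: {recognition_rate(results):.1f}%")
```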

Awareness Integrity Judgment
AI Threats: Mitigates LLM02 (data leakage) and LLM05 (hallucination), while curbing long-term erosion of human analytical skills.

Guidance to Implement

1. Pause & Frame: employees articulate the problem themselves before querying an AI. 2. Strip & Test: remove sensitive data and run a low-risk test prompt. 3. Cross-Check: compare AI output with at least one human-curated source before acting.
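
A minimal sketch of the "Strip & Test" step (step 2), assuming a simple regex-based redaction pass; the patterns shown are illustrative only and would need to match the organization's own data classification.

```python
import re

# Illustrative patterns only; a real deployment would align these with the
# organization's data classification (names, customer IDs, internal hostnames, ...).
REDACTION_PATTERNS = {
    "email":       re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "ip_address":  re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b"),
}

def strip_sensitive(prompt: str) -> tuple[str, list[str]]:
    """Redact known sensitive patterns and report which ones were found."""
    findings = []
    for label, pattern in REDACTION_PATTERNS.items():
        if pattern.search(prompt):
            findings.append(label)
            prompt = pattern.sub(f"[{label.upper()} REDACTED]", prompt)
    return prompt, findings

# Run the low-risk, redacted prompt first; review anything that was flagged.
cleaned, found = strip_sensitive("Summarise the complaint from jane.doe@example.com at 10.0.0.12")
print(cleaned)  # sensitive fields replaced before the prompt leaves the organization
print(found)    # ['email', 'ip_address']
```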

Guidance to Audit

Use structured micro-quizzes or decision-case assessments where employees must demonstrate the three-step protocol. Optionally, gather anonymized prompt summaries tagged by users in internal AI tools to assess adherence patterns.

Key Performance Indicator

Achieve an X% reduction in data leakage incidents and in hallucinations in AI outputs.

Awareness Ethics
AI Threats: Mitigates emotional manipulation and stress-induced errors from AI-generated coercion content.

Guidance to Implement

Integrate interactive modules and tabletop drills that include crisis-counselling protocols.

Guidance to Audit

Review completion logs and sample staff feedback; verify deepfake scenarios are included in drills.

Key Performance Indicator

Reduce stress-related errors from AI threats by X%.

Awareness Judgment Transparency
AI Threats: The knowledge base should be updated regularly with real examples of AI threats and misuse cases.

Guidance to Implement

Create and maintain an up-to-date, searchable internal knowledge base with regular content reviews and updates.
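
A minimal sketch of a staleness check supporting the regular-review requirement; the entry fields and the 90-day review window are assumptions.

```python
from datetime import date, timedelta

REVIEW_WINDOW = timedelta(days=90)  # assumed quarterly review cadence

# Hypothetical knowledge-base entries; field names are illustrative.
entries = [
    {"title": "Deepfake voice fraud case",    "last_reviewed": date(2024, 1, 15)},
    {"title": "Prompt-injection walkthrough", "last_reviewed": date(2024, 6, 2)},
]

def stale_entries(kb, today):
    """Return entries whose last review is older than the review window."""
    return [e for e in kb if today - e["last_reviewed"] > REVIEW_WINDOW]

for entry in stale_entries(entries, today=date(2024, 7, 1)):
    print(f"Needs review: {entry['title']} (last reviewed {entry['last_reviewed']})")
```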

Guidance to Audit

Knowledge base usage logs and periodic update records.

Key Performance Indicator

Update the knowledge base quarterly, incorporating X% of new AI-related incidents.

Awareness Integrity Transparency
AI Threats: Use AI-threat simulations and deepfake tests during training evaluations to assess AI risk readiness.

Guidance to Implement

Conduct quarterly evaluations (via surveys and testing) to refine training content based on feedback and incident trends.

Guidance to Audit

Evaluation reports and documented improvement plans.

Key Performance Indicator

X% of employees pass the AI risk readiness test.

Awareness Integrity Transparency
AI Threats: Provide scenarios that include AI-driven insider threats, such as misuse of generative AI for data leaks or sabotage.

Guidance to Implement

Integrate insider threat scenarios into training modules and use simulations to reinforce learning.

Guidance to Audit

Simulation reports and incident reporting logs.

Key Performance Indicator

X% of employees successfully identify AI-driven insider threats.

Awareness Integrity Transparency
AI Threats: Include awareness of AI-driven physical threats such as facial recognition spoofing and AI-enhanced tailgating.

Guidance to Implement

Offer annual training sessions focused on physical security measures and emergency response procedures.

Guidance to Audit

Training attendance records and post-training assessments.

Key Performance Indicator

X% of employees recognize AI-enhanced physical threats.

Awareness Judgment Transparency
AI Threats: Third-party security training must address risks associated with AI misuse, including unauthorized use of generative AI.

Guidance to Implement

Extend training requirements to third parties and verify training completion before system access is granted.
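
A minimal sketch of the access-gating check, assuming training-completion records are available as a lookup; the record structure and contractor IDs are assumptions.

```python
from datetime import date

# Hypothetical completion records keyed by contractor ID; structure is illustrative.
training_records = {
    "vendor-042": {"ai_misuse_module": date(2024, 5, 10)},
    "vendor-077": {},  # no completion on file
}

def may_grant_access(contractor_id: str, required_module: str = "ai_misuse_module") -> bool:
    """Grant system access only if the required training module is on record."""
    completed = training_records.get(contractor_id, {})
    return required_module in completed

for contractor in ("vendor-042", "vendor-077"):
    status = "grant" if may_grant_access(contractor) else "deny until training is complete"
    print(f"{contractor}: {status}")
```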

Guidance to Audit

Third-party training certificates and compliance audit logs.

Key Performance Indicator

X% of third-party contractors complete AI misuse training.

Awareness Integrity Transparency
AI Threats: Executives should receive training on high-level AI risks like executive impersonation via deepfake audio/video.

Guidance to Implement

Schedule tailored security briefings for the executive board focusing on strategic risks and incident impacts.

Guidance to Audit

Executive meeting minutes, presentation slides, and attendance records.

Key Performance Indicator

X% of executives receive training on deepfake risks annually.

Awareness Transparency
AI Threats: Ensure accessibility training covers inclusive design considerations for AI-based security tools.

Guidance to Implement

Provide training materials in multiple accessible formats (video, text, interactive) and ensure compliance with accessibility standards.

Guidance to Audit

Accessibility compliance reports and user feedback surveys.

Key Performance Indicator

X% compliance with accessibility standards in AI tools.

Awareness Transparency
AI Threats: Certification for security professionals must include advanced understanding of AI threat vectors and defense strategies.

Guidance to Implement

Mandate annual certification for security professionals; offer study support and monitor status.
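
A minimal sketch of status monitoring for the annual certification requirement; the record layout and the 60-day warning window are assumptions.

```python
from datetime import date, timedelta

WARNING_WINDOW = timedelta(days=60)  # assumed lead time for renewal reminders

# Hypothetical certification records; names and fields are illustrative.
certifications = [
    {"name": "A. Analyst",  "cert": "AI Threat Defense", "expires": date(2024, 9, 1)},
    {"name": "B. Engineer", "cert": "AI Threat Defense", "expires": date(2025, 3, 15)},
]

def expiring_soon(records, today):
    """Return certifications that have expired or expire within the warning window."""
    return [r for r in records if r["expires"] - today <= WARNING_WINDOW]

for rec in expiring_soon(certifications, today=date(2024, 8, 1)):
    print(f"Renewal due: {rec['name']} - {rec['cert']} (expires {rec['expires']})")
```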

Guidance to Audit

Certification receipts and HR training records.

Key Performance Indicator

X% of security professionals certified in AI threat defense strategies.

Awareness Judgment Transparency
AI Threats: Incorporate AI-focused threat intelligence, such as detection of AI-driven malware or deepfake phishing trends.

Guidance to Implement

Subscribe to reputable threat intelligence sources and review the information regularly during team meetings.
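
A minimal sketch of a freshness check on AI-tagged intelligence items, supporting the review cadence in the KPI below; the export format, tag values, and two-week threshold are assumptions.

```python
from datetime import date, datetime, timedelta

MAX_AGE = timedelta(weeks=2)  # matches "every X weeks" in the KPI; the value is an assumption

# Hypothetical export of the threat-intelligence feed; fields and tags are illustrative.
items = [
    {"title": "Deepfake phishing kit observed", "tags": ["ai", "phishing"], "published": "2024-07-28"},
    {"title": "Generic botnet update",           "tags": ["botnet"],         "published": "2024-08-01"},
]

def ai_intel_is_fresh(feed, today):
    """Return True if the newest AI-tagged item is within the allowed age."""
    ai_items = [i for i in feed if "ai" in (t.lower() for t in i.get("tags", []))]
    if not ai_items:
        return False
    newest = max(datetime.strptime(i["published"], "%Y-%m-%d").date() for i in ai_items)
    return today - newest <= MAX_AGE

print("AI intel fresh:", ai_intel_is_fresh(items, today=date(2024, 8, 5)))
```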

Guidance to Audit

Subscription records and meeting minutes discussing threat intelligence.

Key Performance Indicator

Update threat intelligence with AI-specific data every X weeks.

Awareness Transparency
AI Threats: Encourage attendance at conference sessions on AI security challenges and countermeasures.

Guidance to Implement

Plan and budget for attendance at a major security conference and require post-event knowledge sharing sessions.

Guidance to Audit

Conference attendance records and post-event reports.

Key Performance Indicator

X% attendance rate at AI-focused security sessions.

Awareness Transparency
AI Threats: Promote participation in professional groups focused on AI safety and security issues.

Guidance to Implement

Encourage security team members to join professional cybersecurity associations and track their involvement.

Guidance to Audit

Membership certificates and activity logs.

Key Performance Indicator

X% participation in AI security professional groups.

Awareness Ethics
AI Threats: Specialized training must cover AI's impact on data privacy, including synthetic data risks and automated profiling.

Guidance to Implement

Develop specialized training modules tailored to data privacy laws and relevant regulatory requirements.

Guidance to Audit

Training completion certificates and assessment results.

Key Performance Indicator

X% of data privacy training modules include AI-specific privacy issues.

Awareness Ethics Transparency
AI Threats: Training must address AI-related regulatory issues, such as algorithmic transparency, bias mitigation, and data governance.

Guidance to Implement

Map employee roles to applicable regulations using a maintained regulatory matrix. Integrate AI-specific requirements (e.g., transparency, explainability, fairness) into training modules and update them as laws evolve. Collaborate with legal counsel to ensure coverage of high-risk areas such as automated profiling, synthetic data, and algorithmic accountability.
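
A minimal sketch of a role-to-regulation matrix and the lookup it supports; the roles, regulations, and module names are assumptions, and the real matrix would be maintained with legal counsel.

```python
# Hypothetical regulatory matrix; roles, regulations, and module names are illustrative.
REGULATORY_MATRIX = {
    "data_scientist":    ["GDPR", "EU AI Act"],
    "hr_analyst":        ["GDPR"],
    "security_engineer": ["EU AI Act", "NIS2"],
}

TRAINING_MODULES = {
    "GDPR":      ["automated-profiling", "synthetic-data-risks"],
    "EU AI Act": ["algorithmic-transparency", "bias-mitigation"],
    "NIS2":      ["incident-reporting"],
}

def required_modules(role: str) -> list[str]:
    """Resolve the training modules an employee needs from their role's regulations."""
    modules = []
    for regulation in REGULATORY_MATRIX.get(role, []):
        for module in TRAINING_MODULES.get(regulation, []):
            if module not in modules:
                modules.append(module)
    return modules

print(required_modules("data_scientist"))
```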

Guidance to Audit

Verify the presence of regulatory role-mapping, and check the AI-related content version history in the LMS or training platform.

Key Performance Indicator

X% completion rate for AI regulatory compliance training.

Awareness Integrity Transparency
AI Threats: Recognize employees who report AI-related incidents, such as spotting suspicious AI chatbot interactions.

Guidance to Implement

Establish a recognition program for employees who report potential security issues; share success stories internally.

Guidance to Audit

Recognition program records and internal communication examples.

Key Performance Indicator

X% of employees report AI-related security incidents.

Awareness
AI Threats: Celebrate quick identification of AI-enabled attacks (e.g., deepfake phishing attempts) as good reflexes.

Guidance to Implement

Highlight positive security behaviors in internal newsletters, using anonymized case studies for learning.

Guidance to Audit

Internal newsletter editions and employee feedback surveys.

Key Performance Indicator

Highlight X% of identified AI-threat cases per quarter.

Awareness Transparency
AI Threats: Use survey insights to enhance AI-specific training and identify gaps in AI threat awareness.

Guidance to Implement

Deploy quarterly anonymous surveys to gauge security sentiment and adjust training accordingly.
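
A minimal sketch of the participation and trend figures that feed the KPI below; the survey record format and the tallies are illustrative assumptions.

```python
# Hypothetical quarterly survey tallies; field names are illustrative.
quarters = [
    {"quarter": "2024-Q1", "invited": 400, "responded": 252, "avg_awareness_score": 3.1},
    {"quarter": "2024-Q2", "invited": 410, "responded": 301, "avg_awareness_score": 3.4},
]

def participation_rate(q):
    """Survey participation as a percentage of invited employees."""
    return 100.0 * q["responded"] / q["invited"] if q["invited"] else 0.0

# Track participation and sentiment trend quarter over quarter to steer training updates.
for q in quarters:
    print(f"{q['quarter']}: {participation_rate(q):.0f}% participation, "
          f"awareness score {q['avg_awareness_score']}")
```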

Guidance to Audit

Survey reports and trend analysis documents.

Key Performance Indicator

Conduct surveys every X months with Y% participation and actionable insights.