Implementation and audit guidance for cybersecurity-related awareness and training activities.
AIJET Principles: A = Awareness, I = Integrity, J = Judgment, E = Ethics, T = Transparency
ID | Requirement | Guidance to implement | Guidance to audit | AI Threats and Mitigation | Principles | KPI |
---|---|---|---|---|---|---|
AWA-01 | Deliver recurring security awareness to all employees | Implement a recurring, updated security awareness communications schedule; incorporate current threat scenarios into content. | Review awareness materials, the communications schedule, and distribution records. | Integrate examples of AI-generated phishing and deepfake recognition into regular security awareness materials. | A | X% of employees recognize AI-generated threats. |
AWA-02 | Adopt the “Think First, Verify Always” protocol to prevent cognitive off-loading and risky prompting. | 1- Pause & Frame: employees articulate the problem themselves before querying an AI. 2- Strip & Test: remove sensitive data and run a low-risk test prompt (see the illustrative prompt-sanitization sketch after this table). 3- Cross-Check: compare AI output with at least one human-curated source before acting. | Use structured micro-quizzes or decision-case assessments where employees must demonstrate the three-step protocol. Optionally, gather anonymized prompt summaries tagged by users in internal AI tools to assess adherence patterns. | Mitigates LLM02 (data leakage) and LLM05 (hallucination), while curbing long-term erosion of human analytical skills. | A, I, J | Achieve X% reduction in data leakage incidents and hallucinations in AI outputs. |
AWA-03 | Provide micro-training on psychological safety and mental-health responses to AI-based social engineering (deepfakes, voice clones, extortion). | Integrate interactive modules and tabletop drills that include crisis-counselling protocols. | Review completion logs and sample staff feedback; verify deepfake scenarios are included in drills. | Mitigates emotional manipulation and stress-induced errors from AI-generated coercion content. | A, E | Reduce stress-related errors from AI threats by X%. |
AWA-04 | Develop an internal knowledge base of security best practices, accessible via intranet, chat channels, etc. | Create and maintain an up-to-date, searchable internal knowledge base with regular content reviews and updates. | Knowledge base usage logs and periodic update records. | Knowledge base should be updated regularly with real examples of AI threats and misuse cases. | A, J, T | Update knowledge base quarterly with X% of new AI-related incidents. |
AWA-05 | Implement regular evaluations to refine training programs | Conduct quarterly evaluations (via surveys and testing) to refine training content based on feedback and incident trends. | Evaluation reports and documented improvement plans. | Use AI-threat simulations and deepfake tests during training evaluations to assess AI risk readiness. | A, I, T | X% of employees pass the AI risk readiness test. |
AWA-06 | Train employees to recognize and report insider threats | Integrate insider threat scenarios into training modules and use simulations to reinforce learning. | Simulation reports and incident reporting logs. | Provide scenarios that include AI-driven insider threats, such as misuse of generative AI for data leaks or sabotage. | A, I, T | X% of employees successfully identify AI-driven insider threats. |
AWA-07 | Conduct regular physical security training for employees | Offer annual training sessions focused on physical security measures and emergency response procedures. | Training attendance records and post-training assessments. | Include awareness of AI-driven physical threats such as facial recognition spoofing and AI-enhanced tailgating. | A, I, T | X% of employees recognize AI-enhanced physical threats. |
AWA-08 | Mandate security training for third parties with access to company systems (contractors, etc.) | Extend training requirements to third parties and verify training completion before system access is granted (a minimal access-gating sketch follows the table). | Third-party training certificates and compliance audit logs. | Third-party security training must address risks associated with AI misuse, including unauthorized use of generative AI. | A, J, T | X% of third-party contractors complete AI misuse training. |
AWA-09 | Executive board receives at least one security-focused session per year | Schedule tailored security briefings for the executive board focusing on strategic risks and incident impacts. | Executive meeting minutes, presentation slides, and attendance records. | Executives should receive training on high-level AI risks like executive impersonation via deepfake audio/video. | A, I, T | X% of executives receive training on deepfake risks annually. |
AWA-10 | Ensure accessibility of security training | Provide training materials in multiple accessible formats (video, text, interactive) and ensure compliance with accessibility standards. | Accessibility compliance reports and user feedback surveys. | Ensure accessibility training covers inclusive design considerations for AI-based security tools. | A, T | X% compliance with accessibility standards in AI tools. |
AWA-11 | Require security professionals to obtain at least one industry-recognized certification per year | Mandate annual certification for security professionals; offer study support and monitor status. | Certification receipts and HR training records. | Certification for security professionals must include advanced understanding of AI threat vectors and defense strategies. | A, T | X% of security professionals certified in AI threat defense strategies. |
AWA-12 | Ensure the security team receives curated threat intelligence feeds or publications | Subscribe to reputable threat intelligence sources and review the information regularly during team meetings. | Subscription records and meeting minutes discussing threat intelligence. | Incorporate AI-focused threat intelligence, such as detection of AI-driven malware or deepfake phishing trends. | A, J, T | Update threat intelligence with AI-specific data every X weeks. |
AWA-13 | Ensure participation in at least one major security conference per year | Plan and budget for attendance at a major security conference and require post-event knowledge sharing sessions. | Conference attendance records and post-event reports. | Encourage attending sessions on AI security challenges and countermeasures during conferences. | A, T | X% attendance rate at AI-focused security sessions. |
AWA-14 | Ensure membership in at least one professional security association | Encourage security team members to join professional cybersecurity associations and track their involvement. | Membership certificates and activity logs. | Promote participation in professional groups focused on AI safety and security issues. | A, T | X% participation in AI security professional groups. |
AWA-15 | Provide specialized training on data privacy and personal data handling | Develop specialized training modules tailored to data privacy laws and relevant regulatory requirements. | Training completion certificates and assessment results. | Specialized training must cover AI’s impact on data privacy, including synthetic data risks and automated profiling. | A, E | X% of data privacy training modules include AI-specific privacy issues. |
AWA-16 | Provide specialized training on applicable regulations in line with the employee role (PCI DSS, FedRAMP, etc.) | Map employee roles to applicable regulations using a maintained regulatory matrix (an illustrative role-to-regulation mapping sketch follows the table). Integrate AI-specific requirements (e.g., transparency, explainability, fairness) into training modules and update them as laws evolve. Collaborate with legal counsel to ensure coverage of high-risk areas like automated profiling, synthetic data, and algorithmic accountability. | Verify the presence of regulatory role mapping and check AI-related content version history in the LMS or training platform. | Training must address AI-related regulatory issues, such as algorithmic transparency, bias mitigation, and data governance. | A, E, T | X% completion rate for AI regulatory compliance training. |
AWA-17 | Promote a culture where raising a security alert is recognized | Establish a recognition program for employees who report potential security issues; share success stories internally. | Recognition program records and internal communication examples. | Recognize employees who report AI-related incidents, such as spotting suspicious AI chatbot interactions. | A, I, T | X% of employees report AI-related security incidents. |
AWA-18 | Celebrate security “good reflexes” in internal communications | Highlight positive security behaviors in internal newsletters, using anonymized case studies for learning. | Internal newsletter editions and employee feedback surveys. | Celebrate quick identification of AI-enabled attacks (e.g., deepfake phishing attempts) as good reflexes. | A | Highlight X cases per quarter where AI threats were identified. |
AWA-19 | Conduct regular anonymous surveys to assess employees’ security comfort and confidence levels | Deploy quarterly anonymous surveys to gauge security sentiment and adjust training accordingly. | Survey reports and trend analysis documents. | Use survey insights to enhance AI-specific training and identify gaps in AI threat awareness. | A, T | Conduct surveys every X months with Y% participation and actionable insights. |
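
A minimal sketch of the “Strip & Test” step in AWA-02, assuming Python and simple regular-expression redaction. The `sanitize_prompt` helper and the patterns it uses are illustrative assumptions, not a mandated implementation; each organization should define its own sensitive-data patterns as part of its AWA-02 guidance.

```python
import re

# Illustrative patterns for common sensitive tokens (assumption: the
# organization maintains its own list as part of AWA-02 guidance).
SENSITIVE_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "card_number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "api_key": re.compile(r"\b(?:sk|key|token)[-_][A-Za-z0-9]{16,}\b"),
}

def sanitize_prompt(prompt: str) -> str:
    """Replace likely-sensitive substrings with placeholders before the
    prompt is sent to any external AI tool (Strip & Test, step 2)."""
    for label, pattern in SENSITIVE_PATTERNS.items():
        prompt = pattern.sub(f"<{label.upper()}_REMOVED>", prompt)
    return prompt

if __name__ == "__main__":
    raw = "Summarize the report sent by jane.doe@example.com, card 4111 1111 1111 1111."
    print(sanitize_prompt(raw))
    # -> "Summarize the report sent by <EMAIL_REMOVED>, card <CARD_NUMBER_REMOVED>."
```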
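
For AWA-08, a minimal access-gating sketch, assuming a Python check against a training-completion record keyed by contractor e-mail. The `training_records` structure, the 365-day validity window, and `can_grant_access` are hypothetical; in practice this check would be backed by the LMS or identity provider.

```python
from datetime import date, timedelta

# Hypothetical completion records, e.g. exported from the LMS (assumption).
training_records = {
    "contractor@example.com": date(2024, 11, 3),
}

TRAINING_VALIDITY = timedelta(days=365)  # assumption: annual renewal cycle

def can_grant_access(email: str, today: date | None = None) -> bool:
    """Return True only if the third party completed security training
    (including AI-misuse content) within the validity window (AWA-08)."""
    today = today or date.today()
    completed_on = training_records.get(email)
    return completed_on is not None and today - completed_on <= TRAINING_VALIDITY

print(can_grant_access("contractor@example.com", today=date(2025, 1, 15)))  # True
print(can_grant_access("unknown@example.com", today=date(2025, 1, 15)))     # False
```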
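
For AWA-16, an illustrative role-to-regulation mapping sketch, assuming the “regulatory matrix” is kept as a simple lookup structure. The role names and regulation lists below are examples only, not the framework's mandated set, and would be maintained together with legal counsel.

```python
# Illustrative regulatory matrix: maps roles to the regulations whose
# training modules they must complete (assumption: example entries only).
REGULATORY_MATRIX = {
    "payments_engineer": ["PCI DSS", "EU AI Act (transparency)"],
    "cloud_ops_us_gov": ["FedRAMP", "NIST AI RMF"],
    "data_analyst": ["GDPR (automated profiling)", "EU AI Act (bias mitigation)"],
}

def required_training(role: str) -> list[str]:
    """Return the training modules mapped to a role; an empty list means the
    role is unmapped, which should itself be flagged during the AWA-16 audit."""
    return REGULATORY_MATRIX.get(role, [])

print(required_training("payments_engineer"))  # ['PCI DSS', 'EU AI Act (transparency)']
```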