Implementation and audit guidance for appropriate use of IT systems and resources.
AIJET Principles: A = Awareness, I = Integrity, J = Judgment, E = Ethics, T = Transparency
ID | Requirement | Guidance to implement | Guidance to audit | AI Threats and Mitigation | Principles | KPI |
---|---|---|---|---|---|---|
ITU-01 | Users must define and maintain strong passwords | Provide training on what constitutes a strong password (length, complexity, uniqueness). Enforce via technical controls (e.g., policy-based rules, password vaults); see the strength-check sketch after the table | Password policy document, screenshots from IAM platform, LMS records of password training | Strong passwords are critical to defend against AI-assisted password cracking and brute-force attack automation. | A, T | X% of users define and maintain strong passwords, with regular policy enforcement. |
ITU-02 | MFA is mandatory for accessing systems (corporate or third-party SaaS solutions) | Implement MFA for all accounts using a centralized IAM solution and continuously monitor compliance. Train employees to apply MFA on non-SSO solutions | MFA enrollment logs and compliance reports. Training records | MFA provides resilience against AI-augmented credential stuffing attacks and automated system compromise attempts. | A, I, T | X% of accounts use MFA, and compliance is continuously monitored. |
ITU-03 | Use of a corporate password manager | Deploy a corporate password manager and deliver mandatory training on its use. | Password manager usage statistics and training records. | Password managers help prevent exposure to AI-phishing campaigns targeting weak or reused passwords. | A, T | X% of employees use the corporate password manager with regular training. |
ITU-04 | Users avoid password reuse | Implement technical controls to enforce unique passwords and provide regular password hygiene training. | Password policy enforcement logs and training attendance records. | Enforce password uniqueness to reduce risks from AI analyzing breached datasets for reuse patterns. | A, T | X% compliance with unique password policies, with no reused passwords. |
ITU-05 | Prohibit password sharing and require regular password hygiene training | Enforce restrictions on password sharing through process controls and schedule periodic training. | Training records | Policies must prohibit sharing credentials via AI-assisted messaging apps or tools. | A, T | X% of users comply with the password-sharing prohibition and complete regular password hygiene training. |
ITU-06 | Implement isolation of system prompts and access controls for administrative LLM configurations. | Store system prompts in encrypted config stores or embed them in version-controlled infrastructure-as-code. Restrict edit access; see the prompt-isolation sketch after the table. | Verify permissions to the prompt management system. Audit any prompt template changes via commit logs, environment config diffs, or alerting on unauthorized file edits. | Addresses OWASP LLM07:2025 by protecting critical system instructions from unauthorized extraction or tampering. | J, T | X% of system prompts are isolated and access is restricted to authorized administrators. |
ITU-07 | Users lock their screen when leaving the computer | Provide user education on best practices. Enforce automatic screen lock settings via MDM. | Training records. MDM compliance reports and screenshots of lock settings. | Workstations must prevent background AI-assisted surveillance or unapproved AI process installation. | A, T | X% of users lock their screens when leaving their workstation, with enforced automatic lock. |
ITU-08 | Users are notified in advance before an update is applied to their computer | Provide advance update notifications through IT portals and allow users to schedule update times. | Notification logs and user feedback surveys. | Advance update notifications should highlight any updates related to AI threat detection modules. | A, T | X% of users are notified in advance of updates, with scheduled notifications. |
ITU-09 | Restrict installations to pre-approved software | Deploy application whitelisting solutions, maintain an updated approved software list, and train users on the exception process. | Whitelisting configuration records and change logs. | Strictly restrict installation of unauthorized AI tools (e.g., rogue LLM-based productivity bots, AI keyloggers). | A, T | X% of installations are pre-approved with an application whitelisting solution. |
ITU-10 | A process is in place to let users request installation of new software or plugins | Establish a formal software request process with tracking and approval via an ITSM tool. | Software request and approval logs from IT service support. | Formal request processes must evaluate AI privacy risks before approving new software installations. | E, J, T | X% of software installation requests go through a formal approval process with AI risk evaluation. |
ITU-11 | Enforce user training on prompt injection risks when interacting with AI systems. | Provide awareness sessions on prompt manipulation techniques and enforce double-confirmation workflows for sensitive instructions involving LLM outputs. | Collect training logs and random audits of user interactions with corporate-approved LLM tools. | Addresses OWASP LLM01:2025 by training employees to detect malicious prompt patterns and avoid trust in unverified LLM outputs. | A, T | X% of employees are trained on prompt injection risks and double-confirmation workflows for AI. |
ITU-12 | Implement training on Adversarial Input risks for employees working with AI tools | Provide mandatory awareness training on the dangers of adversarial inputs and how to flag or avoid them. Employees should also be trained to report any suspicious model behavior. | Verify employee participation in training and conduct periodic checks on the effectiveness of adversarial input detection. | Addresses LLM08 (Adversarial Inputs) through employee awareness, detection, and reporting. | A, I | X% of employees correctly identify adversarial inputs during regular testing scenarios. |
ITU-13 | Only allow work on company-managed devices | Enforce MDM policies to restrict work to company-managed devices and provide training on mobile threats. | MDM enrollment logs and mobile security training records. | Mobile device policies should include controls against AI apps that could capture company data or monitor activity. | A, T | X% of employees use company-managed devices for work, with enforced mobile device management (MDM). |
ITU-14 | Allow users to report suspicious messages | Provide a reporting mechanism, such as a button in the email client or an ITSM workflow, that lets employees report suspicious emails | Notification logs | Enable quick reporting of AI-generated spear-phishing or deepfake scam messages. | A, T | X% of users can report suspicious messages via an easy-to-use reporting system. |
ITU-15 | Ensure employees have breaks between virtual meetings to avoid IT fatigue and errors | Encourage regular breaks and adopt scheduling practices that prevent back-to-back long meetings. | Employee survey results and meeting schedule reviews. | Prevent AI-induced decision fatigue by promoting mental breaks to reduce errors during prolonged screen use. | A, J, I | X% of employees are given breaks between virtual meetings to prevent IT fatigue and reduce errors. |
ITU-16 | Implement filtering of risky websites | Use dynamic AI-focused threat intel feeds. Block known LLM sandbox sites, unregulated AI registries, and prompt-injection communities. Monitor attempts to bypass filtering. | Review access logs for blocked AI-related domains. Check for excessive proxy use or unmonitored DNS requests from corporate devices. | Filtering tools should block access to known AI abuse sites or prompt users about AI content risk. | I, T | X% of risky websites are blocked using filtering tools with clear notifications to users. |
ITU-17 | Restrict recording of sensitive meetings or screens. Raise awareness on recording ethics and consent | Provide training on consent. Implement policies restricting recording of sensitive content and monitor via IT tools. | Recording logs and compliance audit reports. | Restrict recordings to avoid inadvertent capture of sensitive data for AI training or deepfake creation. | A, E, T | X% of sensitive meetings or screens are restricted from recording, with clear consent policies. |
ITU-18 | Restrict exposure of model APIs to prevent model extraction. | Enforce authentication, query rate-limiting, and output obfuscation for any exposed AI/LLM APIs; see the endpoint-monitoring sketch after the table. | Review API gateway logs and access permissions to identify abnormal usage patterns. | Addresses MITRE ATLAS T0018: Model Extraction by limiting external access and reducing response fidelity. | I, J, T | X% of exposed model APIs are secured with rate-limiting, authentication, and output obfuscation. |
ITU-19 | Continuously monitor prompts and responses of LLM-based systems to detect misuse, jailbreak attempts, policy violations, or sensitive-data leakage. | Deploy an API or proxy layer that logs all prompts/completions. Integrate DLP and abuse-detection rules (e.g., regex for PII, jailbreak fingerprints). Generate real-time alerts to the SOC on high-risk patterns; see the DLP middleware sketch after the table. | Examine log samples for completeness (prompt, user, timestamp). Verify DLP hits and SOC tickets for a 30-day window. Confirm escalation within defined SLA for any jailbreak or leakage alert. | Detects and contains prompt-injection, indirect exfiltration, and other LLM misuse before data loss or brand damage occurs. | I, J, T | X% of LLM prompts and responses are continuously monitored for misuse, policy violations, or data leakage. |
ITU-20 | Enforce data-loss-prevention (DLP) scanning and per-user token quotas on all enterprise LLM chat sessions; block or redact sensitive output. | Deploy proxy/middleware that logs prompts & completions. Apply DLP regex + contextual rules. Rate-limit tokens per user/session; see the DLP middleware sketch after the table. | Sample logs for redacted strings. Test DLP triggers with “canary” PII. Review SOC tickets for LLM exfiltration alerts. | Detects LLM conversation exfiltration, prompt-injection data leakage, and ChatGPT/Google Bard conversation leaks. | I, T | X% of LLM sessions are monitored with DLP scanning and per-user token quotas. |
ITU-21 | Conduct scheduled adversarial-evasion tests of AI detectors (malware, phishing, face-ID) to discover and patch bypass techniques. | Use red-team toolkits (TextAttack, AutoAttack). Retrain or patch models after findings. Retest before production redeploy; see the evasion smoke-test sketch after the table. | Review red-team reports & remediation tickets. Confirm retested models resist the same evasion. | Addresses adversarial evasion of AI malware/phishing detectors (e.g., Confusing Antimalware Neural Networks, ProofPoint evasion). | I, J, T | X% of adversarial evasion tests are conducted regularly to patch vulnerabilities in AI detectors. |
ITU-22 | Monitor production model endpoints for abnormal queries that signal model extraction, hijack, or crypto-jacking | Enable API throttling & query-variance anomaly rules. Alert SOC on extraction-like patterns & GPU overutilization. Auto-isolate suspect tenants; see the endpoint-monitoring sketch after the table. | Inspect monitoring rules & dashboards. Check incident tickets tied to extraction alerts. Validate that automatic tenant isolation works. | Mitigates ShadowRay-style model hijack, model-extraction abuse, GPU cryptojacking, and abnormal query flooding. | I, J, T | X% of model endpoints are monitored for abnormal queries that may signal extraction attempts or hijacking. |
ITU-23 | Design AI systems with effective human oversight capabilities | Implement confidence scores, override mechanisms, and clear escalation paths; train human reviewers; see the oversight-routing sketch after the table. | Test override functionality; review escalation metrics; interview oversight staff. | Protects against harmful autonomous decisions; maintains human control. | A, J, E | X% of AI systems include effective human oversight capabilities with confidence scores and override mechanisms. |
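The sketches below illustrate how selected controls could be implemented. All identifiers, thresholds, and patterns in them are illustrative assumptions, not prescribed values. For ITU-01, a minimal password-strength check; `MIN_LENGTH` and the seed list of common passwords are placeholders to be replaced by the organization's published policy and a real breach corpus:

```python
import re

# Illustrative thresholds; align with the organization's published password policy.
MIN_LENGTH = 14
COMMON_PASSWORDS = {"password", "123456", "qwerty", "letmein"}  # seed from a real breach corpus

def is_strong_password(candidate: str) -> bool:
    """Check length, common-password reuse, and character-class complexity."""
    if len(candidate) < MIN_LENGTH or candidate.lower() in COMMON_PASSWORDS:
        return False
    # Require at least three of four character classes.
    classes = (r"[a-z]", r"[A-Z]", r"\d", r"[^A-Za-z0-9]")
    return sum(bool(re.search(c, candidate)) for c in classes) >= 3
```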
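For ITU-06, a sketch of tamper-evident system-prompt loading, assuming the prompt is provisioned by infrastructure-as-code into an environment variable (the name `LLM_SYSTEM_PROMPT` is hypothetical) and its SHA-256 digest is pinned in version control so drift is detectable:

```python
import hashlib
import os

EXPECTED_SHA256 = "<digest pinned in version control at deploy time>"

def load_system_prompt() -> str:
    """Load the system prompt from the deployment environment and verify its digest."""
    prompt = os.environ.get("LLM_SYSTEM_PROMPT")  # hypothetical variable name
    if prompt is None:
        raise RuntimeError("System prompt not provisioned; refusing to start.")
    if hashlib.sha256(prompt.encode("utf-8")).hexdigest() != EXPECTED_SHA256:
        # Digest drift means the prompt no longer matches the reviewed template.
        raise RuntimeError("System prompt digest mismatch; possible tampering.")
    return prompt
```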
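For ITU-18 and ITU-22, an endpoint-monitoring sketch combining a sliding-window rate limit with a simple query-variance heuristic (near-duplicate floods can indicate automated probing). The thresholds and the `isolate_tenant` hook are assumptions standing in for a real API gateway and SOC integration:

```python
import time
from collections import defaultdict, deque

WINDOW_SECONDS = 60
MAX_QUERIES_PER_WINDOW = 100  # illustrative rate limit
MIN_DISTINCT_RATIO = 0.2      # below this, the window is a low-variance flood

_history: dict[str, deque] = defaultdict(deque)  # tenant -> (timestamp, query) pairs

def record_query(tenant: str, query: str) -> None:
    """Track per-tenant query rate and variance inside a sliding window."""
    window = _history[tenant]
    now = time.monotonic()
    window.append((now, query))
    while window and now - window[0][0] > WINDOW_SECONDS:
        window.popleft()
    if len(window) > MAX_QUERIES_PER_WINDOW:
        isolate_tenant(tenant, "query rate exceeded")
    elif len(window) >= 20:
        distinct = len({text for _, text in window})
        if distinct / len(window) < MIN_DISTINCT_RATIO:
            isolate_tenant(tenant, "low-variance query flood")

def isolate_tenant(tenant: str, reason: str) -> None:
    # Placeholder for auto-isolation (revoke API key, open a SOC ticket).
    print(f"ISOLATE tenant={tenant} reason={reason}")
```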
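For ITU-19 and ITU-20, a DLP middleware sketch for the proxy layer: PII redaction, jailbreak-fingerprint alerting, and a per-user token quota. The regexes, fingerprints, quota, token heuristic, and `alert_soc` hook are illustrative assumptions, not a production DLP rule set:

```python
import re
from collections import defaultdict

# Illustrative rules; a real deployment would use the DLP engine's managed rule set.
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}
JAILBREAK_FINGERPRINTS = ("ignore previous instructions", "act as dan")
DAILY_TOKEN_QUOTA = 50_000  # illustrative per-user quota

_tokens_used: dict[str, int] = defaultdict(int)

def alert_soc(user: str, reason: str, sample: str) -> None:
    # Placeholder for SIEM/SOC escalation (webhook, syslog event, ticket).
    print(f"ALERT [{reason}] user={user} sample={sample[:80]!r}")

def screen_prompt(user: str, prompt: str) -> str:
    """Enforce a token quota, flag jailbreak fingerprints, and redact PII."""
    estimated_tokens = len(prompt) // 4  # rough heuristic, not a real tokenizer
    if _tokens_used[user] + estimated_tokens > DAILY_TOKEN_QUOTA:
        raise PermissionError(f"{user} exceeded the daily token quota")
    _tokens_used[user] += estimated_tokens

    if any(fp in prompt.lower() for fp in JAILBREAK_FINGERPRINTS):
        alert_soc(user, "jailbreak fingerprint", prompt)

    for label, pattern in PII_PATTERNS.items():
        prompt = pattern.sub(f"[REDACTED-{label.upper()}]", prompt)
    return prompt
```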
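For ITU-21, a minimal evasion smoke test using simple character-swap perturbations as a crude stand-in for full red-team toolkits such as TextAttack; `detector` is any hypothetical callable that returns True when a sample is flagged:

```python
import random

def perturb(text: str, n_swaps: int = 3, seed: int = 0) -> str:
    """Apply adjacent character swaps as a crude adversarial perturbation."""
    if len(text) < 2:
        return text
    rng = random.Random(seed)
    chars = list(text)
    for _ in range(n_swaps):
        i = rng.randrange(len(chars) - 1)
        chars[i], chars[i + 1] = chars[i + 1], chars[i]
    return "".join(chars)

def evasion_smoke_test(detector, known_bad: list[str]) -> float:
    """Fraction of known-bad samples still flagged after perturbation (higher is better)."""
    flagged = sum(1 for s in known_bad if detector(perturb(s)))
    return flagged / len(known_bad)
```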
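For ITU-23, an oversight-routing sketch that escalates low-confidence AI decisions to a human reviewer; the `CONFIDENCE_FLOOR` value is an assumption to be calibrated per model and use case:

```python
from dataclasses import dataclass

CONFIDENCE_FLOOR = 0.85  # illustrative; calibrate per model and use case

@dataclass
class Decision:
    label: str
    confidence: float

def route_decision(decision: Decision) -> str:
    """Send low-confidence decisions to a human; auto-apply the rest, still logged."""
    if decision.confidence < CONFIDENCE_FLOOR:
        return "escalate_to_human"  # hand off with full context and override rights
    return "auto_apply"
```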