IT Usage
Implementation and audit guidance for appropriate use of IT systems and resources.
Guidance to Implement
Provide training on what constitutes a strong password (length, complexity, uniqueness). Enforce via technical controls (e.g., policy-based rules, password vaults); see the sketch below.
Guidance to Audit
Password policy document, screenshots from the IAM platform, and LMS records of password training.
Key Performance Indicator
X% of users define and maintain strong passwords, with regular policy enforcement.
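As an illustration of the policy-based rules named in this control, here is a minimal Python sketch of a password check. The length threshold, character-class count, and known-password set are assumptions; real enforcement would normally be configured in the IAM platform or directory policy rather than in application code.

```python
import re

# Illustrative policy thresholds -- actual values should come from the
# organization's password policy document, not this sketch.
MIN_LENGTH = 14
CHARACTER_CLASSES = [r"[a-z]", r"[A-Z]", r"[0-9]", r"[^A-Za-z0-9]"]

def meets_policy(password: str, known_passwords: set[str]) -> list[str]:
    """Return a list of policy violations (an empty list means compliant)."""
    violations = []
    if len(password) < MIN_LENGTH:
        violations.append(f"shorter than {MIN_LENGTH} characters")
    if sum(bool(re.search(p, password)) for p in CHARACTER_CLASSES) < 3:
        violations.append("uses fewer than 3 character classes")
    if password.lower() in known_passwords:
        violations.append("matches a known or previously used password")
    return violations

# Example: passes length and class checks, not in the known-password set.
print(meets_policy("correct horse battery staple 42!", {"password123"}))
```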
Guidance to Implement
Implement MFA for all accounts using a centralized IAM solution and continuously monitor compliance. Train employees to enable MFA on non-SSO applications.
Guidance to Audit
MFA enrollment logs, compliance reports, and training records.
Key Performance Indicator
X% of accounts use MFA, and compliance is continuously monitored.
Guidance to Implement
Deploy a corporate password manager and deliver mandatory training on its use.
Guidance to Audit
Password manager usage statistics and training records.
Key Performance Indicator
X% of employees use the corporate password manager with regular training.
Guidance to Implement
Implement technical controls to enforce unique passwords and provide regular password hygiene training.
Guidance to Audit
Password policy enforcement logs and training attendance records.
Key Performance Indicator
X% compliance with unique password policies, with no reused passwords.
Guidance to Implement
Enforce restrictions on password sharing through process controls and schedule periodic training.
Guidance to Audit
Training records
Key Performance Indicator
X% of employees complete password hygiene training with a focus on AI-assisted threats.
Guidance to Implement
Store system prompts in encrypted config stores or embed them in version-controlled infrastructure-as-code. Restrict edit access (see the sketch below).
Guidance to Audit
Verify permissions to the prompt management system. Audit any prompt template changes via commit logs, environment config diffs, or alerting on unauthorized file edits.
Key Performance Indicator
X% of system prompts are isolated and access is restricted to authorized administrators.
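A minimal sketch of prompt isolation for this control, assuming templates live in an access-restricted, version-controlled directory; the PROMPT_TEMPLATE_DIR variable and path layout are hypothetical, and an encrypted secret store would follow the same load-by-name pattern.

```python
import os
from pathlib import Path

# Hypothetical layout: prompt templates live in a version-controlled,
# access-restricted directory (or an encrypted config/secret store),
# never inline in application code or user-editable locations.
PROMPT_DIR = Path(os.environ.get("PROMPT_TEMPLATE_DIR", "/etc/app/prompts"))

def load_system_prompt(name: str) -> str:
    """Read a system prompt template by name; fails if it is missing."""
    path = (PROMPT_DIR / f"{name}.txt").resolve()
    # Defensive check: refuse to read outside the approved template directory.
    if PROMPT_DIR.resolve() not in path.parents:
        raise ValueError(f"prompt {name!r} resolves outside {PROMPT_DIR}")
    return path.read_text(encoding="utf-8")
```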
Guidance to Implement
Provide user education on best practices. Enforce automatic screen lock settings via MDM.
Guidance to Audit
Training records, MDM compliance reports, and screenshots of lock settings.
Key Performance Indicator
X% of users lock their screens when leaving their workstations, with automatic lock enforced.
Guidance to Implement
Provide advance update notifications through IT portals and allow users to schedule update times.
Guidance to Audit
Notification logs and user feedback surveys.
Key Performance Indicator
X% of users are notified in advance of updates, with scheduled notifications.
Guidance to Implement
Deploy application whitelisting solutions, maintain an updated approved software list, and train users on the exception process.
Guidance to Audit
Whitelisting configuration records and change logs.
Key Performance Indicator
X% of installations are pre-approved with an application whitelisting solution.
Guidance to Implement
Establish a formal software request process with tracking and approval via an ITSM tool.
Guidance to Audit
Software request and approval logs from the IT service desk.
Key Performance Indicator
X% of software installation requests go through a formal approval process with AI risk evaluation.
Guidance to Implement
Provide awareness sessions on prompt manipulation techniques and enforce double-confirmation workflows for sensitive instructions involving LLM outputs (see the sketch below).
Guidance to Audit
Collect training logs and conduct random audits of user interactions with corporate-approved LLM tools.
Key Performance Indicator
X% of employees trained on prompt injection risks with verified understanding.
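A minimal sketch of the double-confirmation workflow referenced in this control; the sensitive-action list and the confirm callback are illustrative assumptions, not a specific product's API.

```python
# Double-confirmation gate for sensitive actions that originate from LLM
# output. Anything on the sensitive list requires explicit human approval.
SENSITIVE_ACTIONS = {"wire_transfer", "delete_records", "change_permissions"}

def execute_llm_suggested_action(action: str, params: dict, confirm) -> bool:
    """Run an LLM-suggested action, requiring human confirmation for
    anything on the sensitive list. `confirm` is a callable that shows the
    summary to a person and returns True only on explicit approval."""
    if action in SENSITIVE_ACTIONS:
        summary = f"LLM requested '{action}' with {params}. Approve?"
        if not confirm(summary):      # human-in-the-loop checkpoint
            return False              # refused or no response: do nothing
    # ...dispatch to the real handler here...
    return True
```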
Guidance to Implement
Provide mandatory awareness training on the dangers of adversarial inputs and how to flag or avoid them. Employees should also be trained to report any suspicious model behavior.
Guidance to Audit
Verify employee participation in training and conduct periodic checks on the effectiveness of adversarial input detection.
Key Performance Indicator
X% of employees correctly identify adversarial inputs during regular testing scenarios.
Guidance to Implement
Enforce MDM policies to restrict work to company-managed devices and provide training on mobile threats.
Guidance to Audit
MDM enrollment logs and mobile security training records.
Key Performance Indicator
X% of employees use company-managed devices for work, with enforced mobile device management (MDM).
Guidance to Implement
Provide a mechanism for employees to report suspicious emails, such as a report button in the email client or a request via an ITSM tool.
Guidance to Audit
Reporting logs from the email client or ITSM tool.
Key Performance Indicator
X% of users can report suspicious messages via an easy-to-use reporting system.
Guidance to Implement
Encourage regular breaks and adopt scheduling practices that prevent back-to-back long meetings.
Guidance to Audit
Employee survey results and meeting schedule reviews.
Key Performance Indicator
X% of employees are given breaks between virtual meetings to prevent IT fatigue and reduce errors.
Guidance to Implement
Use dynamic AI-focused threat intel feeds. Block known LLM sandbox sites, unregulated AI registries, and prompt-injection communities. Monitor attempts to bypass filtering (see the sketch below).
Guidance to Audit
Review access logs for blocked AI-related domains. Check for excessive proxy use or unmonitored DNS requests from corporate devices.
Key Performance Indicator
X% of risky websites are blocked using filtering tools with clear notifications to users.
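A minimal sketch of consuming a threat-intel blocklist and matching hostnames against it; the feed URL and CSV format are hypothetical, and production filtering would normally be enforced by the web proxy or DNS filter itself.

```python
import csv
import urllib.request

# Hypothetical threat-intel feed of risky AI-related domains (CSV, one
# domain per row). Real deployments would use a vetted feed and the proxy
# or DNS filter's own blocklist mechanism.
FEED_URL = "https://intel.example.com/ai-risky-domains.csv"

def load_blocklist(url: str = FEED_URL) -> set[str]:
    """Download the feed and return a set of lowercased domains."""
    with urllib.request.urlopen(url, timeout=10) as resp:
        rows = csv.reader(resp.read().decode("utf-8").splitlines())
        return {row[0].strip().lower() for row in rows if row}

def is_blocked(hostname: str, blocklist: set[str]) -> bool:
    """Match the hostname and each parent domain against the blocklist."""
    parts = hostname.lower().split(".")
    return any(".".join(parts[i:]) in blocklist for i in range(len(parts) - 1))
```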
Guidance to Implement
Provide training on recording consent. Implement policies restricting the recording of sensitive content and monitor compliance via IT tools.
Guidance to Audit
Recording logs and compliance audit reports.
Key Performance Indicator
X% of sensitive meetings or screens are restricted from recording, with clear consent policies.
Guidance to Implement
Enforce authentication, query rate-limiting, and output obfuscation for any exposed AI/LLM APIs (see the sketch below).
Guidance to Audit
Review API gateway logs and access permissions to identify abnormal usage patterns.
Key Performance Indicator
X% of exposed model APIs are secured with rate-limiting, authentication, and output obfuscation.
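A minimal Flask sketch combining the three measures named in this control. The API keys, the per-minute limit, and the run_model function are illustrative placeholders; a real deployment would enforce authentication and throttling at the API gateway.

```python
import time
from collections import defaultdict, deque

from flask import Flask, abort, jsonify, request

app = Flask(__name__)

# Illustrative assumptions: keys and limits would come from the gateway or
# secret store, not a hard-coded dictionary.
API_KEYS = {"demo-key-123": "team-a"}
MAX_REQUESTS_PER_MINUTE = 30
_request_history = defaultdict(deque)   # client -> timestamps of recent calls

def _rate_limited(client: str) -> bool:
    """Sliding one-minute window rate limiter kept in memory."""
    now = time.time()
    window = _request_history[client]
    while window and now - window[0] > 60:
        window.popleft()
    window.append(now)
    return len(window) > MAX_REQUESTS_PER_MINUTE

@app.route("/v1/generate", methods=["POST"])
def generate():
    client = API_KEYS.get(request.headers.get("X-API-Key", ""))
    if client is None:
        abort(401)                       # authentication
    if _rate_limited(client):
        abort(429)                       # query rate-limiting
    prompt = (request.get_json(silent=True) or {}).get("prompt", "")
    raw = run_model(prompt)              # hypothetical model call
    # Output obfuscation: return only the text, never logits or confidence
    # scores that would aid model-extraction attacks.
    return jsonify({"completion": raw["text"]})

def run_model(prompt: str) -> dict:
    # Placeholder standing in for the real model backend.
    return {"text": f"[model output for {len(prompt)} chars]", "logits": []}
```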
Guidance to Implement
Deploy an API or proxy layer that logs all prompts/completions. Integrate DLP and abuse-detection rules (e.g., regex for PII, jailbreak fingerprints). Generate real-time alerts to the SOC on high-risk patterns (see the sketch below).
Guidance to Audit
Examine log samples for completeness (prompt, user, timestamp). Verify DLP hits and SOC tickets for a 30-day window. Confirm escalation within the defined SLA for any jailbreak or leakage alert.
Key Performance Indicator
X% of LLM prompts and responses are continuously monitored for misuse, policy violations, or data leakage.
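A minimal sketch of regex-based DLP scanning over prompt/completion pairs with a placeholder SOC alert. The patterns shown are simplistic examples, not a production rule set, and alert_soc stands in for the real SIEM or ticketing integration.

```python
import re

# Illustrative DLP and abuse-detection patterns only.
DLP_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "jailbreak_marker": re.compile(r"ignore (all )?previous instructions", re.I),
}

def scan_llm_traffic(user: str, prompt: str, completion: str) -> list[str]:
    """Return the DLP rules triggered by one prompt/completion pair."""
    hits = [name for name, rx in DLP_PATTERNS.items()
            if rx.search(prompt) or rx.search(completion)]
    if hits:
        alert_soc(user=user, rules=hits, sample=prompt[:200])
    return hits

def alert_soc(**event) -> None:
    # In practice this would raise a SIEM event or open a SOC ticket.
    print("SOC ALERT:", event)
```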
Guidance to Implement
Deploy a proxy or middleware layer that logs prompts and completions. Apply DLP regex and contextual rules. Rate-limit tokens per user and session (see the sketch below).
Guidance to Audit
Sample logs for redacted strings. Test DLP triggers with "canary" PII. Review SOC tickets for LLM exfiltration alerts.
Key Performance Indicator
X% of LLM sessions are monitored with DLP scanning and per-user token quotas.
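A minimal sketch of the per-user token quota part of this control; the daily quota and the rough token estimate (characters divided by four) are assumptions for illustration only.

```python
from collections import defaultdict
from datetime import date

# Simple per-user, per-day token quota tracker kept in memory.
DAILY_TOKEN_QUOTA = 50_000
_usage = defaultdict(int)   # (user, date) -> tokens consumed today

def charge_tokens(user: str, prompt: str, completion: str) -> bool:
    """Record usage and return False once the user exceeds today's quota."""
    tokens = (len(prompt) + len(completion)) // 4   # crude token estimate
    key = (user, date.today())
    _usage[key] += tokens
    return _usage[key] <= DAILY_TOKEN_QUOTA
```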
Guidance to Implement
Use red-team toolkits (e.g., TextAttack, AutoAttack); a simplified sketch follows below. Retrain or patch models after findings. Retest before redeploying to production.
Guidance to Audit
Review red-team reports and remediation tickets. Confirm retested models resist the same evasion techniques.
Key Performance Indicator
X% of adversarial evasion tests are conducted regularly to patch vulnerabilities in AI detectors.
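A hand-rolled stand-in for toolkits such as TextAttack, assuming a detector callable that returns True when content is flagged. It only swaps adjacent characters, which real red-team suites go far beyond, but it illustrates measuring how often simple perturbations evade a detector.

```python
import random

def perturb(text: str, n_swaps: int = 3, seed: int = 0) -> str:
    """Apply a few adjacent-character swaps as a toy evasion attempt."""
    rng = random.Random(seed)
    chars = list(text)
    if len(chars) < 2:
        return text
    for _ in range(n_swaps):
        i = rng.randrange(len(chars) - 1)
        chars[i], chars[i + 1] = chars[i + 1], chars[i]
    return "".join(chars)

def evasion_rate(detector, flagged_samples: list[str], trials: int = 20) -> float:
    """Fraction of flagged samples where some perturbation evades detection."""
    evaded = 0
    for sample in flagged_samples:
        if any(not detector(perturb(sample, seed=s)) for s in range(trials)):
            evaded += 1
    return evaded / max(len(flagged_samples), 1)
```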
Guidance to Implement
Enable API throttling and query-variance anomaly rules. Alert the SOC on extraction-like patterns and GPU over-utilization. Automatically isolate suspect tenants (see the sketch below).
Guidance to Audit
Inspect monitoring rules and dashboards. Check incident tickets tied to extraction alerts. Validate that automatic tenant isolation works.
Key Performance Indicator
X% of model endpoints are monitored for abnormal queries that may signal extraction attempts or hijacking.
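A heuristic sketch of a query-variance rule for spotting extraction-like traffic: one client sending a large volume of scripted-looking queries (near-duplicates, or prompts with almost identical lengths). The window size, thresholds, and choice of signals are assumptions; production rules would live in the SIEM or API gateway.

```python
import statistics
from collections import defaultdict, deque

WINDOW = 500        # queries retained per client
MAX_QUERIES = 400   # volume needed before the rule fires at all
_history = defaultdict(lambda: deque(maxlen=WINDOW))

def record_query(client: str, prompt: str) -> bool:
    """Return True when the client's recent traffic looks extraction-like."""
    window = _history[client]
    window.append(prompt.strip().lower())
    if len(window) < MAX_QUERIES:
        return False
    distinct_ratio = len(set(window)) / len(window)          # near-duplicates
    length_stdev = statistics.pstdev(len(p) for p in window)  # templated sweeps
    return distinct_ratio < 0.2 or length_stdev < 2.0
```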
Guidance to Implement
Implement confidence scores, override mechanisms, and clear escalation paths (see the sketch below); train human reviewers.
Guidance to Audit
Test override functionality, review escalation metrics, and interview oversight staff.
Key Performance Indicator
X% of AI systems include effective human oversight capabilities with confidence scores and override mechanisms.
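A minimal human-in-the-loop sketch for this control, combining a confidence threshold, escalation, and human override; the threshold value and the escalate callback are illustrative assumptions.

```python
from dataclasses import dataclass

# Below this confidence, the AI decision is escalated to a human reviewer.
CONFIDENCE_THRESHOLD = 0.85

@dataclass
class Decision:
    label: str
    confidence: float
    decided_by: str   # "model" or "human"

def decide(model_label: str, confidence: float, escalate) -> Decision:
    """Accept the model's output only above the threshold; otherwise escalate.

    `escalate` is a callable that presents the case to a trained reviewer
    and returns the human's label, which always overrides the model."""
    if confidence >= CONFIDENCE_THRESHOLD:
        return Decision(model_label, confidence, "model")
    human_label = escalate(model_label, confidence)   # override mechanism
    return Decision(human_label, confidence, "human")
```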