IT Usage

Implementation and audit guidance for appropriate use of IT systems and resources.


AIJET Principles: A = Awareness I = Integrity J = Judgment E = Ethics T = Transparency
Filter by principle:

Awareness Transparency
AI Threats: Strong passwords are critical to defend against AI-assisted password cracking and brute-force attack automation.

Guidance to Implement

Provide training on what constitutes a strong password (length, complexity, uniqueness). Enforce via technical controls (e.g., policy-based rules, password vaults)

Guidance to Audit

Password policy document, screenshots from IAM platform, LMS records of password training

Key Performance Indicator

X% of users define and maintain strong passwords; with regular policy enforcement.

Awareness Integrity Transparency
AI Threats: MFA provides resilience against AI-augmented credential stuffing attacks and automated system compromise attempts.

Guidance to Implement

Implement MFA for all accounts using a centralized IAM solution and continuously monitor compliance. Train employees to apply MFA on non SSO solutions

Guidance to Audit

MFA enrollment logs and compliance reports. Training records

Key Performance Indicator

X% of accounts use MFA; and compliance is continuously monitored.

Awareness Transparency
AI Threats: Password managers help prevent exposure to AI-phishing campaigns targeting weak or reused passwords.

Guidance to Implement

Deploy a corporate password manager and deliver mandatory training on its use.

Guidance to Audit

Password manager usage statistics and training records.

Key Performance Indicator

X% of employees use the corporate password manager with regular training.

Awareness Transparency
AI Threats: Enforce password uniqueness to reduce risks from AI analyzing breached datasets for reuse patterns.

Guidance to Implement

Implement technical controls to enforce unique passwords and provide regular password hygiene training.

Guidance to Audit

Password policy enforcement logs and training attendance records.

Key Performance Indicator

X% compliance with unique password policies; with no reused passwords.

Awareness Transparency
AI Threats: Policies must prohibit sharing credentials via AI-assisted messaging apps or tools.

Guidance to Implement

Enforce restrictions on password sharing through processes controls and schedule periodic training.

Guidance to Audit

Training records

Key Performance Indicator

X% of employees complete password hygiene training with focus on AI-assisted threats.

Judgment Transparency
AI Threats: Addresses OWASP LLM07:2025 by protecting critical system instructions from unauthorized extraction or tampering.

Guidance to Implement

Store system prompts in encrypted config stores or embed them in version-controlled infrastructure-as-code. Restrict edit access.

Guidance to Audit

Verify permissions to the prompt management system. Audit any prompt template changes via commit logs; environment config diffs; or alerting on unauthorized file edits.

Key Performance Indicator

X% of system prompts are isolated and access is restricted to authorized administrators.

Awareness Transparency
AI Threats: Workstations must prevent background AI-assisted surveillance or unapproved AI process installation.

Guidance to Implement

Provide user education on best practices. Enforce automatic screen lock settings via MDM.

Guidance to Audit

Training records. MDM compliance reports and screenshots of lock settings.

Key Performance Indicator

X% of users lock their screens when leaving their workstation; with enforced automatic lock.

Awareness Transparency
AI Threats: Advance update notifications should highlight any updates related to AI threat detection modules.

Guidance to Implement

Provide advance update notifications through IT portals and allow users to schedule update times.

Guidance to Audit

Notification logs and user feedback surveys.

Key Performance Indicator

X% of users are notified in advance of updates; with scheduled notifications.

Awareness Transparency
AI Threats: Strictly restrict installation of unauthorized AI tools (e.g.; rogue LLM-based productivity bots; AI keyloggers).

Guidance to Implement

Deploy application whitelisting solutions, maintain an updated approved software list, and train users on the exception process.

Guidance to Audit

Whitelisting configuration records and change logs.

Key Performance Indicator

X% of installations are pre-approved with an application whitelisting solution.

Ethics Judgment Transparency
AI Threats: Formal request processes must evaluate AI privacy risks before approving new software installations.

Guidance to Implement

Establish a formal software request process with tracking and approval via an ITSM tool.

Guidance to Audit

Software request and approval logs from IT service support.

Key Performance Indicator

X% of software installation requests go through a formal approval process with AI risk evaluation.

Awareness Transparency
AI Threats: Addresses OWASP LLM01:2025 by training employees to detect malicious prompt patterns and avoid trust in unverified LLM outputs.

Guidance to Implement

Provide awareness sessions on prompt manipulation techniques and enforce double-confirmation workflows for sensitive instructions involving LLM outputs.

Guidance to Audit

Collect training logs and random audits of user interactions with corporate-approved LLM tools.

Key Performance Indicator

X% of employees trained on prompt injection risks with verified understanding.

Awareness Integrity
AI Threats: Addresses OWASP LLM08: Adversarial Inputs by training employees to identify and avoid adversarial patterns.

Guidance to Implement

Provide mandatory awareness training on the dangers of adversarial inputs and how to flag or avoid them. Employees should also be trained to report any suspicious model behavior.

Guidance to Audit

Verify employee participation in training and conduct periodic checks on the effectiveness of adversarial input detection.

Key Performance Indicator

X% of employees should correctly identify adversarial inputs during regular testing scenarios.

Awareness Transparency
AI Threats: Mobile device policies should include controls against AI apps that could capture company data or monitor activity.

Guidance to Implement

Enforce MDM policies to restrict work to company-managed devices and provide training on mobile threats.

Guidance to Audit

MDM enrollment logs and mobile security training records.

Key Performance Indicator

X% of employees use company-managed devices for work; with enforced mobile device management (MDM).

Awareness Transparency
AI Threats: Enable quick reporting of AI-generated spear-phishing or deepfake scam messages.

Guidance to Implement

A process such as a button on the email software or via an ITSM tool should allow employees to report suspicious emails

Guidance to Audit

Notification logs

Key Performance Indicator

X% of users can report suspicious messages via an easy-to-use reporting system.

Awareness Judgment Integrity
AI Threats: Prevent AI-induced decision fatigue by promoting mental breaks to reduce errors during prolonged screen use.

Guidance to Implement

Encourage regular breaks and adopt scheduling practices that prevent back-to-back long meetings.

Guidance to Audit

Employee survey results and meeting schedule reviews.

Key Performance Indicator

X% of employees are given breaks between virtual meetings to prevent IT fatigue and reduce errors.

Integrity Transparency
AI Threats: Use dynamic AI-focused threat intel feeds. Block known LLM sandbox sites; unregulated AI registries; and prompt-injection communities. Monitor attempts to bypass filtering.

Guidance to Implement

Use dynamic AI-focused threat intel feeds. Block known LLM sandbox sites; unregulated AI registries; and prompt-injection communities. Monitor attempts to bypass filtering.

Guidance to Audit

Review access logs for blocked AI-related domains. Check for excessive proxy use or unmonitored DNS requests from corporate devices.

Key Performance Indicator

X% of risky websites are blocked using filtering tools with clear notifications to users.

Awareness Ethics Transparency
AI Threats: Restrict recordings to avoid inadvertent capture of sensitive data for AI training or deepfake creation.

Guidance to Implement

Provide training on consent. Implement policies restricting recording of sensitive content and monitor via IT tools.

Guidance to Audit

Recording logs and compliance audit reports.

Key Performance Indicator

X% of sensitive meetings or screens are restricted from recording; with clear consent policies.

Integrity Judgment Transparency
AI Threats: Addresses MITRE ATLAS T0018: Model Extraction by limiting external access and reducing response fidelity.

Guidance to Implement

Enforce authentication; query rate-limiting; and output obfuscation for any exposed AI/LLM APIs.

Guidance to Audit

Review API gateway logs and access permissions to identify abnormal usage patterns.

Key Performance Indicator

X% of exposed model APIs are secured with rate-limiting; authentication; and output obfuscation.

Integrity Judgment Transparency
AI Threats: Detects and contains prompt-injection; indirect exfiltration; and other LLM misuse before data loss or brand damage occurs.

Guidance to Implement

Deploy an API or proxy layer that logs all prompts/completions. Integrate DLP and abuse-detection rules (e.g.; regex for PII; jailbreak fingerprints). Generate real-time alerts to the SOC on high-risk patterns.

Guidance to Audit

Examine log samples for completeness (prompt; user; timestamp). Verify DLP hits and SOC tickets for a 30-day window. Confirm escalation within defined SLA for any jailbreak or leakage alert.

Key Performance Indicator

X% of LLM prompts and responses are continuously monitored for misuse; policy violations; or data leakage.

Integrity Transparency
AI Threats: Detects LLM conversation exfiltration; prompt-injection data leakage; ChatGPT / Google Bard conversation leaks.

Guidance to Implement

Deploy proxy/middleware that logs prompts & completions. Apply DLP regex + contextual rules. Rate-limit tokens per user/session.

Guidance to Audit

Sample logs for redacted strings. Test DLP triggers with "canary" PII. Review SOC tickets for LLM exfil alerts.

Key Performance Indicator

X% of LLM sessions are monitored with DLP scanning and per-user token quotas.

Integrity Judgment Transparency
AI Threats: Addresses adversarial evasion of AI malware/phishing detectors. e.g.; Confusing Antimalware NN; ProofPoint evasion.

Guidance to Implement

Use red-team toolkits (TextAttack; AutoAttack). Retrain or patch models after findings. Retest before production redeploy.

Guidance to Audit

Review red-team reports & remediation tickets. Confirm retested models resist same evasion.

Key Performance Indicator

X% of adversarial evasion tests are conducted regularly to patch vulnerabilities in AI detectors.

Integrity Judgment Transparency
AI Threats: Mitigates ShadowRay-style model hijack; model-extraction abuse; GPU cryptojacking; abnormal query flooding.

Guidance to Implement

Enable API throttling & query-variance anomaly rules. Alert SOC on extraction-like patterns & GPU over-utilisation. Auto-isolate suspect tenants.

Guidance to Audit

Inspect monitoring rules & dashboards. Check incident tickets tied to extraction alerts. Validate automatic tenant isolation works.

Key Performance Indicator

X% of model endpoints are monitored for abnormal queries that may signal extraction attempts or hijacking.

Awareness Judgment Ethics
AI Threats: Protects against harmful autonomous decisions; maintains human control.

Guidance to Implement

Implement confidence scores; override mechanisms; and clear escalation paths; train human reviewers.

Guidance to Audit

Test override functionality; review escalation metrics; interview oversight staff.

Key Performance Indicator

X% of AI systems include effective human oversight capabilities with confidence scores and override mechanisms.