Compliance & Ethics

Implementation and audit guidance for cybersecurity-related compliance requirements and ethical AI usage.


AIJET Principles: A = Awareness, I = Integrity, J = Judgment, E = Ethics, T = Transparency

Ethics Transparency
AI Threats: Security policies must include specific guidelines for safe and ethical use of AI systems and generative AI tools.

Guidance to Implement

Publish security policies on easily accessible platforms such as the intranet and mobile apps, and notify employees of updates.

Guidance to Audit

Access logs, acknowledgment records, version-controlled policy documents.

Key Performance Indicator

X% of employees have access to security policies and are notified of updates.

Ethics Judgment Transparency
AI Threats: Policy exception requests must include an AI risk impact assessment if the request involves AI tools, datasets, or automated workflows. Include automated risk scoring or expert review where feasible.

Guidance to Implement

Establish a formal, documented process for policy exception requests.
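
Where automated risk scoring is used to triage exception requests, a simple additive screen can route high-scoring requests to expert review. The sketch below (Python) is illustrative only: the factor names, weights, and the expert-review threshold are assumptions to calibrate against your own risk taxonomy, not prescribed values.

    # Illustrative triage scoring for AI-related policy exception requests.
    # Factor names and weights are hypothetical; calibrate to your risk taxonomy.
    RISK_WEIGHTS = {
        "uses_generative_ai": 3,      # request involves LLMs or generative tools
        "touches_personal_data": 4,
        "automated_workflow": 2,      # output feeds an automated decision
        "external_data_sharing": 4,
    }

    def score_exception_request(request: dict) -> tuple[int, str]:
        """Return a numeric risk score and a routing decision."""
        score = sum(w for factor, w in RISK_WEIGHTS.items() if request.get(factor))
        routing = "expert_review" if score >= 5 else "standard_approval"
        return score, routing

    req = {"uses_generative_ai": True, "touches_personal_data": True}
    print(score_exception_request(req))  # (7, 'expert_review')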

Guidance to Audit

Sample exception logs for AI-related requests. Confirm inclusion of AI impact fields and validate approval by designated AI risk officers or committees.

Key Performance Indicator

X% of policy exceptions undergo a risk assessment considering AI misuse.

Ethics Judgment Transparency
AI Threats: Internal audits must verify compliance with AI data protection policies and responsible AI tool usage.

Guidance to Implement

Schedule periodic internal audits using a standardized framework such as HSOF.

Guidance to Audit

Audit reports and remediation logs.

Key Performance Indicator

X% of internal audits verify compliance with AI data protection and ethical use.

Awareness Ethics Transparency
AI Threats: Ethical guidelines must address employee behavior when interacting with AI outputs (bias, transparency, accountability).

Guidance to Implement

Develop ethical guidelines and integrate them into regular employee training.

Guidance to Audit

Policy documents and training attendance records.

Key Performance Indicator

X% of employees are trained on ethical behavior regarding AI tools.

Ethics Judgment Transparency
AI Threats: Monitoring policies should disclose any AI-based surveillance or predictive behavior analysis to employees.

Guidance to Implement

Document security monitoring practices with justification and involve stakeholder reviews for transparency.

Guidance to Audit

Policy documents and stakeholder meeting minutes.

Key Performance Indicator

X% of security monitoring practices disclose AI-based surveillance and predictive analysis.

Ethics Judgment Transparency
AI Threats: Annual ethical reviews must assess risks related to internal AI monitoring, decision automation, and profiling.

Guidance to Implement

Conduct annual reviews of internal monitoring tools with legal and HR teams. Document and address ethical concerns.

Guidance to Audit

Ethics review reports and meeting minutes.

Key Performance Indicator

X% of internal monitoring tools undergo ethical reviews yearly.

Ethics Judgment Transparency
AI Threats: Ensures that human dignity, fairness, autonomy, and societal values are preserved in AI system adoption.

Guidance to Implement

Before deploying any AI system affecting employees, customers, or operations, conduct an Ethical Impact Assessment evaluating risks of bias, discrimination, privacy violations, and societal harm.

Guidance to Audit

Retain Ethical Impact Assessment reports and documented approval records for all AI tool deployments.

Key Performance Indicator

X% of new AI systems undergo Ethical Impact Assessments before deployment.

Ethics Judgment Transparency
AI Threats: Addresses OWASP LLM06:2025 by preserving human accountability and preventing uncontrolled AI autonomy.

Guidance to Implement

Ensure all LLM-based systems are classified as 'assistive-only' and require human approval for critical actions.
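
One way to enforce the 'assistive-only' classification in code is a hard gate that refuses critical actions unless a human approver is recorded. This is a minimal sketch; the action names, the ApprovalRequired exception, and the approval record shape are assumptions:

    # Minimal human-in-the-loop gate: critical actions proposed by an LLM are
    # blocked until a named human approver is recorded. Names are illustrative.
    CRITICAL_ACTIONS = {"delete_records", "send_external_email", "change_access_rights"}

    class ApprovalRequired(Exception):
        pass

    def execute_ai_action(action: str, payload: dict, human_approver: str | None = None):
        if action in CRITICAL_ACTIONS and human_approver is None:
            raise ApprovalRequired(f"'{action}' needs documented human sign-off")
        # Record who approved what, so audits can verify human intervention.
        print(f"executing {action} (approved_by={human_approver or 'n/a'}): {payload}")

    execute_ai_action("summarize_ticket", {"id": 42})                        # assistive, allowed
    execute_ai_action("delete_records", {"id": 42}, human_approver="j.doe")  # gated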

Guidance to Audit

Verify decision records show human intervention for AI-assisted actions.

Key Performance Indicator

X% of LLM-based systems require human oversight for critical actions.

Integrity Judgment Transparency
AI Threats: Aligns with MITRE ATLAS T0006: Unclear Model Goals by improving oversight and explainability.

Guidance to Implement

Track model lineage, usage context, and decision logs to support internal audits and regulatory reviews.
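
Model lineage can be tracked as one structured record per model version, appended whenever the model or its usage context changes. The fields below are one plausible minimum, not a mandated schema; the dataset path and report reference are hypothetical:

    # Illustrative model-lineage record; field names are assumptions.
    from dataclasses import dataclass, asdict
    import json, time

    @dataclass
    class ModelLineageRecord:
        model_id: str
        version: str
        training_data_ref: str   # pointer to a dataset snapshot, not the data itself
        usage_context: str       # where, and for which decisions, the model is used
        fairness_test_ref: str   # link to the latest fairness-testing report
        recorded_at: float

    record = ModelLineageRecord(
        model_id="hr-screening", version="2.3.1",
        training_data_ref="s3://datasets/hr/2024-q4",        # hypothetical path
        usage_context="CV pre-screening, advisory only",
        fairness_test_ref="reports/fairness/hr-screening-2.3.1.pdf",
        recorded_at=time.time(),
    )
    with open("model_lineage.jsonl", "a") as f:
        f.write(json.dumps(asdict(record)) + "\n")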

Guidance to Audit

Review documentation of model workflows, input-output traceability, and fairness testing.

Key Performance Indicator

X% of internal AI systems comply with model transparency and auditability standards.

Integrity Judgment Transparency
AI Threats: Mitigates bias and opaque decision risk; supports regulator inquiries (OWASP LLM06).

Guidance to Implement

Log prompt, model version, temperature, and human approver ID for each critical AI decision; store for ≥ 2 years.
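
A per-decision trace can be captured as an append-only JSON line whose fields mirror this control (prompt, model version, temperature, approver). The file path is an assumption, and the ≥ 2-year retention must be enforced by the storage layer (e.g., WORM/object-lock), which this sketch does not cover:

    # Append-only decision trace. Retention (>= 2 years) and tamper-evidence
    # belong to the storage layer and are not shown here.
    import json, datetime

    def log_ai_decision(prompt: str, model_version: str, temperature: float,
                        approver_id: str, outcome: str,
                        path: str = "ai_decisions.jsonl") -> None:
        entry = {
            "ts": datetime.datetime.now(datetime.timezone.utc).isoformat(),
            "prompt": prompt,  # redact personal data first if policy requires
            "model_version": model_version,
            "temperature": temperature,
            "approver_id": approver_id,
            "outcome": outcome,
        }
        with open(path, "a") as f:
            f.write(json.dumps(entry) + "\n")

    log_ai_decision("Assess credit limit for ...", "model-v4", 0.2, "a.chen", "approved")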

Guidance to Audit

Randomly sample 20 decisions; confirm full trace exists and matches policy.
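
The sampling step is mechanical and can be scripted. The sketch below assumes traces are stored one JSON object per line, as in the implementation sketch above:

    # Audit helper: sample 20 logged decisions and flag incomplete traces.
    import json, random

    REQUIRED_FIELDS = {"ts", "prompt", "model_version", "temperature", "approver_id"}

    def audit_sample(path: str = "ai_decisions.jsonl", n: int = 20) -> list[dict]:
        with open(path) as f:
            entries = [json.loads(line) for line in f if line.strip()]
        sample = random.sample(entries, min(n, len(entries)))
        return [e for e in sample if not REQUIRED_FIELDS.issubset(e)]  # violations

    violations = audit_sample()
    print(f"{len(violations)} sampled decisions are missing required trace fields")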

Key Performance Indicator

X% of AI-generated decisions influencing high-stakes processes are traceable.

Ethics Judgment Transparency
AI Threats: Mitigates reputation risk from AI misuse of personal data (MITRE ATLAS T0006).

Guidance to Implement

Use a standard PIA template covering data flows, lawful basis, and cross-border transfers; submit to the DPO for sign-off.
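
Keeping the PIA template as structured data makes completeness machine-checkable before DPO sign-off. The section names follow this guidance; everything else is illustrative:

    # Illustrative PIA template as structured data, with a completeness check
    # to run before the DPO review. Section names follow the guidance above.
    PIA_TEMPLATE = {
        "project": None,
        "data_flows": None,              # systems and parties the data moves between
        "lawful_basis": None,            # e.g. consent, contract, legitimate interest
        "cross_border_transfers": None,  # destinations and safeguards, or "none"
        "mitigations": None,
        "dpo_signoff": None,             # name and date, filled in last
    }

    def missing_sections(pia: dict) -> list[str]:
        return [k for k, v in pia.items() if v is None and k != "dpo_signoff"]

    draft = dict(PIA_TEMPLATE, project="chatbot-pilot", lawful_basis="consent")
    print("incomplete sections:", missing_sections(draft))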

Guidance to Audit

Sample 3 recent AI projects; confirm the PIA was completed and mitigations are tracked.

Key Performance Indicator

X% of new AI use cases processing personal data undergo Privacy Impact Assessments.

Ethics Judgment Transparency
AI Threats: Addresses the risk of discriminatory outcomes through AI systems (MITRE ATLAS T0006).

Guidance to Implement

Use a multi-stakeholder review board; test outputs for disparate impact across protected attributes; document remediation steps.
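
Disparate impact is commonly screened with the four-fifths rule: each group's selection rate should be at least 80% of the most-favored group's rate. A minimal check with made-up group labels and counts:

    # Four-fifths rule screen for disparate impact; labels and counts are made up.
    def four_fifths_violations(outcomes: dict[str, tuple[int, int]],
                               threshold: float = 0.8) -> dict[str, float]:
        """outcomes maps group -> (selected, total); returns groups below threshold."""
        rates = {g: s / t for g, (s, t) in outcomes.items()}
        best = max(rates.values())
        return {g: r / best for g, r in rates.items() if r / best < threshold}

    outcomes = {"group_a": (48, 100), "group_b": (30, 100)}
    print(four_fifths_violations(outcomes))  # {'group_b': 0.625} -> remediate and document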

Guidance to Audit

Check audit reports, remediation tickets, and board sign-off minutes for each audit cycle.

Key Performance Indicator

X% of internal or vendor-supplied AI outputs influencing HR, credit, or customer service are audited for fairness.

Awareness Judgment Transparency
AI Threats: Prevents deployment of unsafe or non-compliant AI systems; reduces liability.

Guidance to Implement

Implement NIST AI RMF processes; categorize all systems by risk tier; require pre-deployment sign-off.
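
Risk tiering can be represented as a small registry with a pre-deployment gate. The tier names and review requirements below are placeholders to align with your NIST AI RMF profile, not values the framework prescribes:

    # Illustrative risk registry with a pre-deployment gate. Tier rules are
    # placeholders; align them with your NIST AI RMF profile.
    REVIEWS_REQUIRED = {
        "high":   {"ethical_impact", "privacy_impact", "security_review"},
        "medium": {"security_review"},
        "low":    set(),
    }

    def may_deploy(system: dict) -> bool:
        required = REVIEWS_REQUIRED[system["risk_tier"]]
        completed = set(system.get("completed_reviews", []))
        return required <= completed and system.get("signoff") is not None

    system = {"name": "resume-screener", "risk_tier": "high",
              "completed_reviews": ["ethical_impact", "privacy_impact", "security_review"],
              "signoff": "cio-2025-03-01"}
    print(may_deploy(system))  # True only with all reviews done and a sign-off recorded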

Guidance to Audit

Review risk registry; verify high-risk systems have completed all required reviews.

Key Performance Indicator

X% of AI systems are categorized by risk level and receive pre-deployment sign-offs.

Awareness Ethics Transparency
AI Threats: Counters misinformation and unrealistic expectations; supports informed consent.

Guidance to Implement

Create standardized AI fact sheets; implement disclosure tags for AI content; maintain version history.
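
Disclosure tags can be attached programmatically wherever AI-generated content is rendered. The tag format, fields, and fact-sheet URL below are one possible convention, not a standard:

    # Illustrative disclosure tag for AI-generated content; the format and the
    # fact-sheet URL are assumptions, not a standard.
    import datetime

    def tag_ai_content(text: str, model: str, fact_sheet_url: str) -> str:
        stamp = datetime.date.today().isoformat()
        notice = (f"[AI-generated | model: {model} | {stamp} | "
                  f"capabilities & limitations: {fact_sheet_url}]")
        return f"{text}\n\n{notice}"

    print(tag_ai_content("Draft summary of Q3 incidents ...",
                         "summarizer-v2",
                         "https://intranet.example/ai-fact-sheets/summarizer-v2"))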

Guidance to Audit

Test user interfaces for AI disclosure notices; verify documentation completeness.

Key Performance Indicator

X% of AI capabilities, limitations, and use cases are documented and disclosed.

Awareness Transparency
AI Threats: LLM06: Malicious Prompting - Ensures new AI systems are evaluated for security risks.

Guidance to Implement

Integrate AI-specific guidelines into ethical reviews, ensuring staff understand risks related to model evasion and how to safeguard systems.

Guidance to Audit

Review and validate that ethical impact assessments are completed for AI tools, with a focus on evasion scenarios.

Key Performance Indicator

X% of new AI tools undergo an Ethical Impact Assessment before deployment, including consideration of evasion risks.

Awareness Integrity
AI Threats: LLM09: Resource Exhaustion - Protects against service disruption.

Guidance to Implement

Train IT/security teams to recognize alert patterns and escalate them. General employees should be encouraged to report app slowdowns or anomalies. Deploy automated monitoring of query patterns, GPU load, and queue anomalies across AI systems.
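
Automated monitoring can start with a simple per-client sliding-window rate screen; the window length and threshold below are placeholders to tune against your observed baseline:

    # Naive sliding-window rate screen for AI query floods; thresholds are
    # placeholders to tune against the observed baseline.
    from collections import defaultdict, deque
    import time

    WINDOW_SECONDS = 60
    MAX_QUERIES_PER_WINDOW = 120

    _recent: dict[str, deque] = defaultdict(deque)

    def record_query(client_id: str, now: float | None = None) -> bool:
        """Record one query; return True when the client exceeds the threshold."""
        now = now or time.time()
        q = _recent[client_id]
        q.append(now)
        while q and now - q[0] > WINDOW_SECONDS:
            q.popleft()
        return len(q) > MAX_QUERIES_PER_WINDOW

    if record_query("svc-batch-7"):
        print("alert: possible resource-exhaustion pattern; escalate per SOC workflow")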

Guidance to Audit

Confirm escalation workflows are triggered. Review SOC alert logs tied to AI infrastructure.

Key Performance Indicator

X% of unusual processing behavior related to AI resource exhaustion is reported within 24 hours.

Awareness Transparency
AI Threats: LLM10: Model Theft - Regular reviews help identify potential security gaps.

Guidance to Implement

Design a review framework that includes bias testing, transparency evaluation, and ethical considerations, with clear human accountability for each step.

Guidance to Audit

Review bias and transparency reports from audits, ensuring the responsible employee has documented any corrective actions.

Key Performance Indicator

Transparency and bias audits are conducted for all high-risk AI models at least X times per year.