Implementation and audit guidance for cybersecurity-related compliance requirements and ethical AI usage.
AIJET Principles: A = Awareness, I = Integrity, J = Judgment, E = Ethics, T = Transparency
| ID | Requirement | Guidance to implement | Guidance to audit | AI Threats and Mitigation | Principles | KPI |
|----|-------------|-----------------------|-------------------|---------------------------|------------|-----|
| ETH-01 | Security policies are accessible to employees | Publish security policies on easily accessible platforms such as the intranet or mobile apps, and notify employees of updates. | Access logs, acknowledgment records, version-controlled policy documents. | Security policies must include specific guidelines for the safe and ethical use of AI systems and generative AI tools. | E, T | X% of employees have access to security policies and are notified of updates. |
| ETH-02 | Users have a proper way to request exceptions to policies | Policy exception requests must include an AI risk impact assessment if the request involves AI tools, datasets, or automated workflows. Include automated risk scoring or expert review where feasible (a minimal scoring sketch follows the table). | Sample exception logs for AI-related requests. Confirm inclusion of AI impact fields and validate approval by designated AI risk officers or committees. | Exception processes must evaluate risks of AI misuse when approving software, data access, or new workflows. | E, J, T | X% of policy exceptions undergo a risk assessment considering AI misuse. |
| ETH-03 | Perform regular internal audits of policy adherence | Schedule periodic internal audits using a standardized framework such as HSOF. | Audit reports and remediation logs. | Internal audits must verify compliance with AI data protection policies and responsible AI tool usage. | E, J, T | X% of internal audits verify compliance with AI data protection and ethical use. |
| ETH-04 | Define guidelines for ethical behavior and external communications | Develop ethical guidelines and integrate them into regular employee training. | Policy documents and training attendance records. | Ethical guidelines must address employee behavior when interacting with AI outputs (bias, transparency, accountability). | A, E, T | X% of employees are trained on ethical behavior regarding AI tools. |
| ETH-05 | Security monitoring policies are transparent and justified | Document security monitoring practices with justification and involve stakeholder reviews for transparency. | Policy documents and stakeholder meeting minutes. | Monitoring policies should disclose any AI-based surveillance or predictive behavior analysis to employees. | E, J, T | X% of security monitoring practices disclose AI-based surveillance and predictive analysis. |
| ETH-06 | Assess the ethical impact of internal monitoring tools regularly, at least annually | Conduct annual reviews of internal monitoring tools with legal and HR teams. Document and address ethical concerns. | Ethics review reports and meeting minutes. | Annual ethical reviews must assess risks related to internal AI monitoring, decision automation, and profiling. | E, J, T | X% of internal monitoring tools undergo ethical reviews yearly. |
| ETH-07 | Implement an Ethical Impact Assessment for new AI tool deployments | Before deploying any AI system affecting employees, customers, or operations, conduct an Ethical Impact Assessment evaluating risks of bias, discrimination, privacy violations, and societal harm. | Retain Ethical Impact Assessment reports and documented approval records for all AI tool deployments. | Ensures that human dignity, fairness, autonomy, and societal values are preserved in AI system adoption. | E, J, T | X% of new AI systems undergo Ethical Impact Assessments before deployment. |
| ETH-08 | Classify all LLM or AI systems influencing sensitive or high-impact decisions (e.g., HR, finance, health) as ‘assistive-only’; require documented human validation before action and flag any violations for automatic audit | Ensure all LLM-based systems are classified as ‘assistive-only’ and require human approval for critical actions (a minimal gating sketch follows the table). | Verify decision records show human intervention for AI-assisted actions. | Addresses OWASP LLM06:2025 by preserving human accountability and preventing uncontrolled AI autonomy. | E, J, T | X% of LLM-based systems require human oversight for critical actions. |
| ETH-09 | Ensure internal AI usage complies with model transparency and auditability standards | Track model lineage, usage context, and decision logs to support internal audits and regulatory reviews. | Review documentation of model workflows, input-output traceability, and fairness testing. | Aligns with MITRE ATLAS T0006: Unclear Model Goals by improving oversight and explainability. | I, J, T | X% of internal AI systems comply with model transparency and auditability standards. |
| ETH-10 | Enable decision traceability for AI outputs influencing high-stakes processes (HR, credit, health) | Log the prompt, model version, temperature, and human approver ID for each critical AI decision; store records for ≥ 2 years (a logging sketch follows the table). | Randomly sample 20 decisions; confirm a full trace exists and matches policy. | Mitigates bias and opaque-decision risk; supports regulator inquiries (LLM06). | I, J, T | X% of AI-generated decisions influencing high-stakes processes are traceable. |
| ETH-11 | Conduct Privacy Impact Assessments (PIAs) on new AI use cases processing personal data | Use a standard PIA template covering data flows, lawful basis, and cross-border transfers; submit to the DPO for sign-off. | Sample 3 recent AI projects; confirm PIAs were completed and mitigations tracked. | Mitigates reputational risk (T0006). | E, J, T | X% of new AI use cases processing personal data undergo Privacy Impact Assessments. |
| ETH-12 | Conduct semi-annual bias and fairness audits of internal or vendor-supplied AI outputs influencing HR, credit, or customer service decisions | Use a multi-stakeholder review board; test outputs for disparate impact across protected attributes (a disparate-impact sketch follows the table); document remediation steps. | Check audit reports, remediation tickets, and board sign-off minutes for each audit cycle. | Addresses the risk of discriminatory outcomes (T0006). | E, J, T | X% of internal or vendor-supplied AI outputs influencing HR, credit, or customer service are audited for fairness. |
| ETH-13 | Establish a comprehensive risk-management system for AI systems according to risk level | Implement NIST AI RMF processes; categorize all systems by risk tier; require pre-deployment sign-off. | Review the risk registry; verify high-risk systems have completed all required reviews. | Prevents deployment of unsafe or non-compliant AI systems; reduces liability. | A, J, T | X% of AI systems are categorized by risk level and receive pre-deployment sign-offs. |
| ETH-14 | Document AI capabilities, limitations, and intended use cases; disclose AI-generated content | Create standardized AI fact sheets; implement disclosure tags for AI content; maintain version history. | Test user interfaces for AI disclosure notices; verify documentation completeness. | Counters misinformation and unrealistic expectations; supports informed consent. | A, E, T | X% of AI capabilities, limitations, and use cases are documented and disclosed. |
| ETH-15 | Conduct Ethical Impact Assessments for new AI tool deployments, including model-evasion risks | Integrate AI-specific guidelines into ethical reviews, ensuring staff understand risks related to model evasion and ways to safeguard systems. | Review and validate that Ethical Impact Assessments are completed for AI tools, with a focus on evasion scenarios. | LLM06: Malicious Prompting | A, T | X% of new AI tools undergo an Ethical Impact Assessment before deployment, including consideration of evasion risks. |
| ETH-16 | Ensure employees can identify AI-related resource exhaustion or query-flooding attempts | Train IT/security teams to recognize alert patterns and escalate them; encourage general employees to report app slowdowns or anomalies. Deploy automated monitoring of query patterns, GPU load, and queue anomalies across AI systems (a query-flood detection sketch follows the table). | Confirm escalation workflows are triggered. Review SOC alert logs tied to AI infrastructure. | LLM09: Resource Exhaustion | A, I | X% of unusual processing behavior related to AI resource exhaustion is reported within the first 24 hours. |
| ETH-17 | Implement periodic reviews of internal AI systems and algorithms for transparency and bias risks | Design a review framework that includes bias testing, transparency evaluation, and ethical considerations, with clear human accountability for each step. | Review bias and transparency reports from audits, ensuring the responsible employee has documented any corrective actions. | LLM10: Model Theft | A, T | Transparency and bias audits are conducted for all high-risk AI models at least X times per year. |
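
The sketches below illustrate a few of the controls above in code. Every identifier, threshold, field name, and file name in them is an assumption chosen for illustration, not a prescribed implementation. For ETH-02, a minimal sketch of automated risk scoring for policy exception requests: a small additive score routes AI-related requests to expert review above a threshold.

```python
from dataclasses import dataclass

# Hypothetical exception-request record; field names are illustrative.
@dataclass
class ExceptionRequest:
    involves_ai_tool: bool    # request touches an AI tool or model
    dataset_sensitivity: int  # 0 = public ... 3 = regulated personal data
    automated_workflow: bool  # exception enables unattended automation

def ai_risk_score(req: ExceptionRequest) -> int:
    """Toy additive score; a real deployment would calibrate the weights."""
    score = 0
    if req.involves_ai_tool:
        score += 3
    score += 2 * req.dataset_sensitivity
    if req.automated_workflow:
        score += 2
    return score

def route(req: ExceptionRequest, review_threshold: int = 5) -> str:
    # High scores escalate to the designated AI risk officer or committee.
    return "expert-review" if ai_risk_score(req) >= review_threshold else "auto-approve-eligible"

if __name__ == "__main__":
    req = ExceptionRequest(involves_ai_tool=True, dataset_sensitivity=2, automated_workflow=False)
    print(ai_risk_score(req), route(req))  # 7 expert-review
```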
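
For ETH-08, a minimal sketch of an ‘assistive-only’ gate, assuming critical actions are plain Python callables: the wrapper blocks any invocation that lacks a documented approver ID and flags the violation for audit.

```python
import logging
from typing import Callable, Optional

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("ai-audit")

class HumanApprovalRequired(Exception):
    """Raised when an assistive-only AI action lacks human validation."""

def assistive_only(action: Callable[..., object]) -> Callable[..., object]:
    """Wrap a critical action so it only runs with a documented approver."""
    def guarded(*args, approver_id: Optional[str] = None, **kwargs):
        if not approver_id:
            # Flag the violation for automatic audit, then block the action.
            audit_log.warning("BLOCKED: %s invoked without human approval", action.__name__)
            raise HumanApprovalRequired(action.__name__)
        audit_log.info("%s approved by %s", action.__name__, approver_id)
        return action(*args, **kwargs)
    return guarded

@assistive_only
def adjust_credit_limit(customer_id: str, new_limit: float) -> str:
    return f"{customer_id}: limit set to {new_limit}"

if __name__ == "__main__":
    print(adjust_credit_limit("C-1001", 5000.0, approver_id="hr-officer-42"))
```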
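
For ETH-10, a minimal decision-trace logger, assuming an append-only JSONL store (file name and record fields are illustrative): each record captures the prompt, model version, temperature, output, and human approver ID that the control requires, and would be retained for at least two years per policy.

```python
import hashlib
import json
import time
from pathlib import Path

TRACE_FILE = Path("ai_decision_traces.jsonl")  # illustrative store; retain >= 2 years per ETH-10

def log_ai_decision(prompt: str, model_version: str, temperature: float,
                    output: str, approver_id: str) -> str:
    """Append one traceable record per critical AI decision; return its trace ID."""
    record = {
        "trace_id": hashlib.sha256(f"{time.time_ns()}{prompt}".encode()).hexdigest()[:16],
        "timestamp_utc": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
        "prompt": prompt,
        "model_version": model_version,
        "temperature": temperature,
        "output": output,
        "human_approver_id": approver_id,  # documented human validation (ETH-08/ETH-10)
    }
    with TRACE_FILE.open("a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
    return record["trace_id"]

if __name__ == "__main__":
    tid = log_ai_decision("Summarize candidate profile", "model-x-2024-06", 0.2,
                          "Summary...", approver_id="hr-officer-42")
    print("logged", tid)
```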
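
For ETH-12, one common disparate-impact heuristic is the four-fifths rule: the selection rate of the least-favored group divided by that of the most-favored group should not fall below 0.8. A minimal sketch over (group, selected) pairs:

```python
from collections import Counter

def selection_rates(outcomes):
    """outcomes: iterable of (group, selected: bool) pairs -> per-group selection rate."""
    totals, selected = Counter(), Counter()
    for group, ok in outcomes:
        totals[group] += 1
        if ok:
            selected[group] += 1
    return {g: selected[g] / totals[g] for g in totals}

def disparate_impact_ratio(outcomes):
    """Min rate over max rate; below 0.8 fails the four-fifths rule of thumb."""
    rates = selection_rates(outcomes)
    return min(rates.values()) / max(rates.values()), rates

if __name__ == "__main__":
    sample = ([("A", True)] * 40 + [("A", False)] * 60 +
              [("B", True)] * 25 + [("B", False)] * 75)
    ratio, rates = disparate_impact_ratio(sample)
    print(rates, round(ratio, 2))  # {'A': 0.4, 'B': 0.25} 0.62 -> flag for remediation
```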
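
For ETH-16, a minimal sliding-window query-flood detector, assuming per-client query timestamps are available; the window and threshold are illustrative and would be tuned per workload before wiring the positive result into SOC escalation.

```python
import time
from collections import defaultdict, deque
from typing import Deque, Dict, Optional

WINDOW_SECONDS = 60.0
MAX_QUERIES_PER_WINDOW = 120  # illustrative threshold; tune per workload

_recent: Dict[str, Deque[float]] = defaultdict(deque)  # client_id -> recent query times

def record_query(client_id: str, now: Optional[float] = None) -> bool:
    """Record one query; return True if the client's rate looks like flooding."""
    now = time.time() if now is None else now
    window = _recent[client_id]
    window.append(now)
    # Drop timestamps that have fallen out of the sliding window.
    while window and now - window[0] > WINDOW_SECONDS:
        window.popleft()
    return len(window) > MAX_QUERIES_PER_WINDOW

if __name__ == "__main__":
    t0 = 1_000_000.0
    flagged = any(record_query("tenant-7", t0 + 0.1 * i) for i in range(200))
    print("flood suspected:", flagged)  # True -- 200 queries in ~20 s
```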