Compliance & Ethics
Implementation and audit guidance for cybersecurity-related compliance requirements and ethical AI usage.
Guidance to Implement
Publish security policies on easily accessible platforms, such as the intranet and mobile apps, and notify employees of updates.
Guidance to Audit
Access logs, acknowledgment records, version-controlled policy documents.
Key Performance Indicator
X% of employees have access to security policies and are notified of updates.
Guidance to Implement
Establish a formal, documented process for policy exception requests (a record sketch follows this item).
Guidance to Audit
Sample exception logs for AI-related requests. Confirm inclusion of AI impact fields and validate approval by designated AI risk officers or committees.
Key Performance Indicator
X% of policy exceptions undergo a risk assessment considering AI misuse.
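To make the AI-impact fields concrete, here is a minimal Python sketch of one exception record; every field name is an assumption for illustration, not a mandated schema.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class PolicyExceptionRequest:
    """One policy-exception record; all field names are illustrative."""
    request_id: str
    requester: str
    policy_ref: str       # policy the exception deviates from
    justification: str
    involves_ai: bool     # AI-impact field required by the process
    ai_misuse_risk: str   # e.g. "low" / "medium" / "high"
    approver: str         # designated AI risk officer or committee
    expires: date         # exceptions should be time-boxed

def requires_ai_review(req: PolicyExceptionRequest) -> bool:
    # Route any AI-impacting exception to the AI risk review queue.
    return req.involves_ai
```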
Guidance to Implement
Schedule periodic internal audits using a standardized framework such as HSOF.
Guidance to Audit
Audit reports and remediation logs.
Key Performance Indicator
X% of internal audits verify compliance with AI data protection and ethical use.
Guidance to Implement
Develop ethical guidelines and integrate them into regular employee training.
Guidance to Audit
Policy documents and training attendance records.
Key Performance Indicator
X% of employees are trained on ethical behavior regarding AI tools.
Guidance to Implement
Document security monitoring practices with justification and involve stakeholder reviews for transparency.
Guidance to Audit
Policy documents and stakeholder meeting minutes.
Key Performance Indicator
X% of security monitoring practices disclose AI-based surveillance and predictive analysis.
Guidance to Implement
Conduct annual reviews of internal monitoring tools with legal and HR teams. Document and address ethical concerns.
Guidance to Audit
Ethics review reports and meeting minutes.
Key Performance Indicator
X% of internal monitoring tools undergo ethical reviews yearly.
Guidance to Implement
Before deploying any AI system affecting employees, customers, or operations, conduct an Ethical Impact Assessment evaluating risks of bias, discrimination, privacy violations, and societal harm.
Guidance to Audit
Retain Ethical Impact Assessment reports and documented approval records for all AI tool deployments.
Key Performance Indicator
X% of new AI systems undergo Ethical Impact Assessments before deployment.
Guidance to Implement
Ensure all LLM-based systems are classified as 'assistive-only' and require human approval for critical actions (see the gate sketch below).
Guidance to Audit
Verify decision records show human intervention for AI-assisted actions.
Key Performance Indicator
X% of LLM-based systems require human oversight for critical actions.
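A minimal sketch of such an approval gate, assuming hypothetical action names; it shows only the pattern of refusing critical actions without a named human approver.

```python
# Assumed examples of actions considered critical; define these per policy.
CRITICAL_ACTIONS = {"delete_account", "issue_refund", "change_firewall_rule"}

def execute_ai_action(action: str, payload: dict,
                      human_approver: str | None = None) -> dict:
    """Run an LLM-proposed action only after a human approves critical ones."""
    if action in CRITICAL_ACTIONS and human_approver is None:
        raise PermissionError(f"'{action}' is assistive-only: human approval required")
    # ... perform the action here; keep the approver ID for the audit trail.
    return {"action": action, "approved_by": human_approver, "payload": payload}
```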
Guidance to Implement
Track model lineage, usage context, and decision logs to support internal audits and regulatory reviews (a lineage record sketch follows below).
Guidance to Audit
Review documentation of model workflows, input-output traceability, and fairness testing.
Key Performance Indicator
X% of internal AI systems comply with model transparency and auditability standards.
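One way to capture lineage and usage context is a small provenance record per deployed model; the fields below are assumptions chosen to match the audit guidance, not a standard.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ModelLineage:
    """Provenance metadata retained per deployed model; fields are illustrative."""
    model_id: str
    base_model: str          # upstream foundation model name and version
    training_data_ref: str   # pointer to the dataset snapshot used
    fine_tune_commit: str    # code revision that produced the model
    usage_context: str       # approved business use, e.g. "support triage"
    fairness_report_id: str  # latest fairness test run, for traceability reviews
```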
Guidance to Implement
Log the prompt, model version, temperature, and human approver ID for each critical AI decision, and retain records for ≥ 2 years (see the logging sketch below).
Guidance to Audit
Randomly sample 20 decisions; confirm full trace exists and matches policy.
Key Performance Indicator
X% of AI-generated decisions influencing high-stakes processes are traceable.
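A minimal sketch of the trace log and the 20-decision audit sample, assuming JSON files under a hypothetical `ai_decision_traces/` directory; a production system would use an append-only store with enforced retention.

```python
import json
import random
import time
from pathlib import Path

TRACE_DIR = Path("ai_decision_traces")  # assumed location; retain for >= 2 years

def log_decision(decision_id: str, prompt: str, model_version: str,
                 temperature: float, approver_id: str) -> None:
    """Write one fully traceable record per critical AI decision."""
    record = {
        "decision_id": decision_id,
        "timestamp": time.time(),
        "prompt": prompt,
        "model_version": model_version,
        "temperature": temperature,
        "approver_id": approver_id,
    }
    TRACE_DIR.mkdir(exist_ok=True)
    (TRACE_DIR / f"{decision_id}.json").write_text(json.dumps(record))

def sample_for_audit(n: int = 20) -> list[Path]:
    """Pick n random traces to check against policy, per the audit guidance."""
    traces = list(TRACE_DIR.glob("*.json"))
    return random.sample(traces, min(n, len(traces)))
```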
Guidance to Implement
Use a standard PIA template covering data flows, lawful basis, and cross-border transfers; submit to the DPO for sign-off (a completeness-check sketch follows below).
Guidance to Audit
Sample 3 recent AI projects; confirm PIA completed and mitigation tracked.
Key Performance Indicator
X% of new AI use-cases processing personal data undergo Privacy-Impact Assessments.
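A completeness check over an assumed section list keeps PIAs from being signed off half-finished; the section names simply mirror the implementation guidance and are not a formal template.

```python
# Sections every PIA must fill in before DPO sign-off (names are illustrative).
PIA_SECTIONS = [
    "data_flows",
    "lawful_basis",
    "cross_border_transfers",
    "mitigations",
    "dpo_signoff",
]

def pia_is_complete(pia: dict) -> bool:
    """A PIA counts as complete only when every required section is non-empty."""
    return all(pia.get(section) for section in PIA_SECTIONS)
```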
Guidance to Implement
Use a multi-stakeholder review board; test outputs for disparate impact across protected attributes; document remediation steps (see the ratio sketch below).
Guidance to Audit
Check audit reports, remediation tickets, and board sign-off minutes for each audit cycle.
Key Performance Indicator
X% of internal or vendor-supplied AI outputs influencing HR, credit, or customer service are audited for fairness.
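One common disparate-impact test is the four-fifths rule: compare favorable-outcome rates between a protected group and everyone else. The sketch below assumes labeled outcome pairs and non-empty groups.

```python
def disparate_impact_ratio(outcomes: list[tuple[str, bool]],
                           protected_group: str) -> float:
    """Selection rate of the protected group divided by everyone else's rate.

    `outcomes` pairs a group label with whether the AI output was favorable.
    A ratio below 0.8 (the four-fifths rule) flags potential disparate impact.
    Assumes both groups are non-empty and the comparison rate is non-zero.
    """
    in_group = [fav for grp, fav in outcomes if grp == protected_group]
    out_group = [fav for grp, fav in outcomes if grp != protected_group]
    return (sum(in_group) / len(in_group)) / (sum(out_group) / len(out_group))
```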
Guidance to Implement
Implement NIST AI RMF processes; categorize all systems by risk tier; require pre-deployment sign-off (see the gating sketch below).
Guidance to Audit
Review risk registry; verify high-risk systems have completed all required reviews.
Key Performance Indicator
X% of AI systems are categorized by risk level and receive pre-deployment sign-offs.
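NIST AI RMF does not prescribe tier names or review lists, so both mappings below are assumptions; the point is only that deployment is blocked until every review required for the system's tier is complete.

```python
from enum import Enum

class RiskTier(Enum):
    LOW = 1
    MEDIUM = 2
    HIGH = 3

# Assumed review requirements per tier; tune to your own RMF profile.
REQUIRED_REVIEWS = {
    RiskTier.LOW: {"security_review"},
    RiskTier.MEDIUM: {"security_review", "privacy_review"},
    RiskTier.HIGH: {"security_review", "privacy_review", "ethics_review", "signoff"},
}

def may_deploy(tier: RiskTier, completed_reviews: set[str]) -> bool:
    """Permit deployment only when all required reviews for the tier are done."""
    return REQUIRED_REVIEWS[tier] <= completed_reviews
```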
Guidance to Implement
Create standardized AI fact sheets; implement disclosure tags for AI content; maintain version history (see the sketch below).
Guidance to Audit
Test user interfaces for AI disclosure notices; verify documentation completeness.
Key Performance Indicator
X% of AI capabilities, limitations, and use cases are documented and disclosed.
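A sketch of a fact-sheet record and a disclosure tag; the keys and tag format are assumptions, since the guidance leaves the concrete schema to the organization.

```python
def make_fact_sheet(name: str, version: str, capabilities: list[str],
                    limitations: list[str], use_cases: list[str]) -> dict:
    """Standardized fact-sheet record; keys are illustrative, not a formal schema."""
    return {
        "name": name,
        "version": version,  # keep one record per version for history
        "capabilities": capabilities,
        "limitations": limitations,
        "intended_use_cases": use_cases,
    }

def tag_ai_content(text: str, model_name: str) -> str:
    """Prepend a user-visible disclosure tag to AI-generated content."""
    return f"[AI-generated by {model_name}] {text}"
```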
Guidance to Implement
Integrate AI-specific guidelines into ethical reviews, ensuring staff understand risks related to model evasion and ways to safeguard systems.
Guidance to Audit
Review and validate that Ethical Impact Assessments are completed for AI tools, with a focus on evasion scenarios.
Key Performance Indicator
X% of new AI tools undergo an Ethical Impact Assessment before deployment, including consideration of evasion risks.
Guidance to Implement
Deploy automated monitoring of query patterns, GPU load, and queue anomalies across AI systems. Train IT/security teams to recognize alert patterns and escalate them, and encourage general employees to report app slowdowns or anomalies (an anomaly-check sketch follows below).
Guidance to Audit
Review SOC alert logs tied to AI infrastructure and confirm escalation workflows are triggered.
Key Performance Indicator
X% of unusual processing behavior related to AI resource exhaustion is reported within the first 24 hours.
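A minimal anomaly check, using a plain z-score against recent history; real deployments would rely on the detection built into the monitoring stack, and the metric values below are invented.

```python
import statistics

def is_anomalous(history: list[float], latest: float,
                 threshold: float = 3.0) -> bool:
    """Flag a reading (query rate, GPU load, queue depth) far from its baseline."""
    mean = statistics.fmean(history)
    stdev = statistics.stdev(history)  # needs at least two history points
    if stdev == 0:
        return latest != mean
    return abs(latest - mean) / stdev > threshold

# Example: a sudden spike in queries per minute triggers SOC escalation.
if is_anomalous([120, 130, 125, 118, 122], latest=950):
    print("Escalate: possible AI resource-exhaustion attack")
```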
Guidance to Implement
Design a review framework that includes bias testing, transparency evaluation, and ethical considerations, with clear human accountability for each step.
Guidance to Audit
Review bias and transparency reports from audits, ensuring the responsible employee has documented any corrective actions.
Key Performance Indicator
Transparency and bias audits are conducted for all high-risk AI models at least X times per year.