Implementation and audit guidance for cybersecurity-related practices during the HR lifecycle.
AIJET Principles: A = Awareness, I = Integrity, J = Judgment, E = Ethics, T = Transparency
ID | Requirement | Guidance to implement | Guidance to audit | AI threats and mitigation | Principles | KPI |
---|---|---|---|---|---|---|
HRL-01 | Mention data protection skills explicitly in job descriptions | Revise job descriptions to include mandatory cybersecurity skills and benchmark these against industry standards. | Verify job description templates and recruitment process guidelines. | Explicitly include AI data ethics skills in job descriptions, especially for roles involved with AI data handling. | A, E | % of job descriptions including AI data ethics skills |
HRL-02 | Assess cybersecurity awareness during hiring | Embed a cybersecurity awareness assessment into the recruitment process via structured interviews and practical tests; incorporate results into hiring decisions. | Candidate assessment scorecards and interview evaluation reports. | Assess AI-related cybersecurity risks during hiring, ensuring candidates understand AI misuse scenarios. | A, J | % of candidates assessed for AI misuse scenarios |
HRL-03 | Data owners are aware of their responsibilities | Ensure that candidates for roles with data ownership responsibilities receive clear information about these duties, and verify acknowledgment via documented forms. | Signed acknowledgment forms and training records. | Clarify responsibilities for managing data used in AI model training and validation to prevent biased or unethical model behaviors. | A, I, E, T | % of candidates for data owner roles who sign acknowledgment forms for AI data responsibilities |
HRL-04 | Tailor background check depth based on role sensitivity | Develop a risk-based matrix to determine the depth of background checks based on role sensitivity, and document the criteria. | Role risk matrix and corresponding background check records. | Adjust background checks to detect exposure to AI misuse (e.g., unethical data use, data leakage). | E, J, T | % of high-sensitivity roles with enhanced background checks |
HRL-05 | Include confidentiality clauses in contract | Revise employment contracts to explicitly incorporate confidentiality clauses. | Signed contracts with embedded policy clauses. | Highlight confidentiality obligations around AI systems and AI-generated sensitive information. | A, I | % of employment contracts with AI confidentiality clauses |
HRL-06 | Include IT acceptable usage policy in contract | Revise employment contracts to explicitly incorporate the organization’s IT Acceptable Use Policy, referencing a separate policy document for clarity and ensuring it includes AI tool usage restrictions. | Check signed employment contracts for reference to the Acceptable Use Policy; verify employee acknowledgment of the latest version via policy acceptance logs or training platforms. | Communicate acceptable usage rules around AI tools, including restrictions on uploading sensitive corporate data to external AI platforms. | A, I, T | % of employees acknowledging IT acceptable usage policy including AI usage rules |
HRL-07 | Require annual renewal of confidentiality agreement | Set up automated reminders for annual confidentiality agreement renewals and maintain version-controlled agreements. | Renewal logs with updated, signed agreements and timestamps. | Annual confidentiality renewals should explicitly address handling of AI-generated content and corporate AI systems. | I, T | % of employees with renewed confidentiality agreements addressing AI |
HRL-08 | Integrate security KPIs into annual performance reviews | Incorporate clearly defined security KPIs into performance review templates and link them to training outcomes. | Performance review documents, KPI dashboards, and training completion records. | Include KPIs for secure and ethical use of AI technologies, including detection of AI misuse. | A, E, J, T | % of employees with performance reviews including AI security KPIs |
HRL-09 | Upon internal mobility, access rights must be reviewed and adjusted | Establish a documented process for managers and HR to trigger access reviews when an employee changes role. | HR-IT access review log, updated access matrix, and evidence of approval workflows and revocation timestamps. | Review access rights to AI systems and datasets during internal mobility to prevent unauthorized use of sensitive AI resources. | I, J, T | % of internal mobility instances where AI access rights are reviewed |
HRL-10 | Establish a sanction process for security violations | Create a graduated response policy. Ensure the process is transparent, consistent, and known by employees. | Sanction policy and training record acknowledgments. | Develop sanction processes specifically addressing misuse of AI, such as unauthorized model training or sharing of outputs. | A, I, E, T | % of security violations related to AI misuse with sanctions applied |
HRL-11 | Immediately deactivate digital and physical access upon termination | Automate access revocation workflows immediately upon termination. Verify deactivation through system audit logs. | Access revocation logs and system audit reports. | Ensure prompt revocation of access to AI development environments and sensitive AI assets upon offboarding. | I, J, T | % of offboarding cases where AI access is revoked immediately |
HRL-12 | Ensure return of corporate data | Implement a structured asset return process with checklists and digital tracking for digital assets. | Asset return checklists and IT inventory reconciliation reports. | Recover AI-related data and assets (datasets, models, prompts) during the offboarding process to prevent data leaks. | I, T | % of offboarding instances with AI-related data and assets returned |
HRL-13 | Ensure return of corporate equipment | Implement a structured asset return process with checklists and digital tracking for physical assets. | Asset return checklists and IT inventory reconciliation reports. | Ensure return of physical equipment potentially storing AI models, datasets, or AI-generated intellectual property. | I, T | % of offboarding cases with AI-related equipment returned |
HRL-14 | Maintain a security responsibility matrix for high-privilege roles | Develop and regularly update a responsibility matrix for roles with elevated privileges; review and sign off annually. | Documented responsibility matrix with review dates and stakeholder approvals. | Maintain oversight of privileged roles accessing AI systems, ensuring responsibilities cover AI-specific risks (e.g., model manipulation, bias injection). | A, E, J, T | % of high-privilege roles with an updated security responsibility matrix |
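The risk-based matrix in HRL-04 can be sketched as a simple lookup from role sensitivity tier to required background-check scope. This is a minimal illustration only: the tier names (`low`/`medium`/`high`) and check categories below are assumptions, not values prescribed by this guidance, and would be replaced by the organization's documented criteria.

```python
# Hypothetical role-sensitivity matrix mapping tiers to background-check scope.
# Tier names and check categories are illustrative assumptions (HRL-04).
CHECK_MATRIX = {
    "low":    ["identity", "employment_history"],
    "medium": ["identity", "employment_history", "criminal_record"],
    "high":   ["identity", "employment_history", "criminal_record",
               "financial", "ai_misuse_exposure"],
}


def checks_for_role(sensitivity: str) -> list[str]:
    """Return the background checks required for a role's sensitivity tier."""
    if sensitivity not in CHECK_MATRIX:
        raise ValueError(f"Undocumented sensitivity tier: {sensitivity!r}")
    return CHECK_MATRIX[sensitivity]


# A high-sensitivity role triggers the enhanced check set, including
# screening for prior exposure to AI misuse (e.g., unethical data use).
print(checks_for_role("high"))
```

Keeping the matrix as explicit data rather than branching logic also makes the audit step easier: the documented criteria and the implemented criteria can be diffed directly.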