HR Lifecycle

Implementation and audit guidance for cybersecurity-related practices during the HR lifecycle.


AIJET Principles: A = Awareness, I = Integrity, J = Judgment, E = Ethics, T = Transparency

Awareness, Ethics
AI Threats: Explicitly include AI data ethics skills in job descriptions, especially for roles involved in AI data handling.

Guidance to Implement

Revise job descriptions to include mandatory cybersecurity and AI data ethics skills, and benchmark these against industry standards.

Guidance to Audit

Verify job description templates and recruitment process guidelines.

Key Performance Indicator

% of job descriptions including AI data ethics skills
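A minimal sketch of how this KPI (and the similar percentages below) could be computed from an export of current job descriptions; the record layout and field names ("role", "required_skills") are illustrative assumptions, not a specific HR system schema.

```python
# Illustrative KPI computation: percentage of job descriptions that list
# AI data ethics skills. Field names are hypothetical.

job_descriptions = [
    {"role": "Data Engineer", "required_skills": ["Python", "AI data ethics"]},
    {"role": "ML Engineer", "required_skills": ["Model training"]},
    {"role": "HR Analyst", "required_skills": ["Reporting"]},
]

def ai_data_ethics_kpi(descriptions):
    """Return the % of job descriptions listing an AI data ethics skill."""
    if not descriptions:
        return 0.0
    matching = sum(
        1 for jd in descriptions
        if any("ai data ethics" in skill.lower() for skill in jd["required_skills"])
    )
    return 100.0 * matching / len(descriptions)

print(f"AI data ethics KPI: {ai_data_ethics_kpi(job_descriptions):.1f}%")  # 33.3%
```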

Awareness, Judgment
AI Threats: Assess AI-related cybersecurity risks during hiring, ensuring candidates understand AI misuse scenarios.

Guidance to Implement

Embed a cybersecurity awareness assessment into the recruitment process via structured interviews and practical tests; incorporate results into hiring decisions.

Guidance to Audit

Candidate assessment scorecards and interview evaluation reports.

Key Performance Indicator

% of candidates assessed for AI misuse scenarios

Awareness, Integrity, Ethics, Transparency
AI Threats: Clarify responsibilities about managing data used in AI model training and validation to prevent biased or unethical model behaviors.

Guidance to Implement

Ensure that candidates for roles with data ownership responsibilities receive clear information about these duties, and verify acknowledgment via documented forms.

Guidance to Audit

Signed acknowledgment forms and training records.

Key Performance Indicator

% of candidates for data owner roles who sign acknowledgment forms for AI data responsibilities

Ethics, Judgment, Transparency
AI Threats: Adjust background checks to detect exposure to AI misuse (e.g., unethical data use, data leakage).

Guidance to Implement

Develop a risk-based matrix to determine the depth of background checks based on role sensitivity, and document the criteria.
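A minimal sketch of what such a risk-based matrix could look like in practice, assuming three illustrative sensitivity tiers; the check types are examples only, not prescribed requirements.

```python
# Illustrative role-sensitivity matrix driving background check depth.

BACKGROUND_CHECK_MATRIX = {
    "low":    ["identity verification"],
    "medium": ["identity verification", "employment history"],
    "high":   ["identity verification", "employment history",
               "criminal record check", "AI misuse / data ethics screening"],
}

def required_checks(role_sensitivity: str) -> list[str]:
    """Return the background checks required for a given sensitivity tier."""
    if role_sensitivity not in BACKGROUND_CHECK_MATRIX:
        raise ValueError(f"Unknown sensitivity tier: {role_sensitivity!r}")
    return BACKGROUND_CHECK_MATRIX[role_sensitivity]

print(required_checks("high"))
```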

Guidance to Audit

Role risk matrix and corresponding background check records.

Key Performance Indicator

% of high-sensitivity roles with enhanced background checks

Awareness, Integrity
AI Threats: Highlight confidentiality obligations around AI systems and AI-generated sensitive information.

Guidance to Implement

Revise employment contracts to explicitly incorporate confidentiality clauses covering AI systems and AI-generated sensitive information.

Guidance to Audit

Signed contracts with embedded AI confidentiality clauses.

Key Performance Indicator

% of employment contracts with AI confidentiality clauses

Awareness, Integrity, Transparency
AI Threats: Communicate acceptable usage rules around AI tools, including restrictions on uploading sensitive corporate data to external AI platforms.

Guidance to Implement

Revise employment contracts to explicitly incorporate the organization's IT Acceptable Use Policy, referencing a separate policy document for clarity and ensuring it includes AI tool usage restrictions.

Guidance to Audit

Check signed employment contracts for reference to the Acceptable Use Policy. Verify employee acknowledgment of the latest version via policy acceptance logs or training platforms.
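A minimal sketch of the acknowledgment cross-check, assuming a simple export of employees and a policy acceptance log keyed by acknowledged version; the names, version numbers, and data shapes are hypothetical.

```python
# Illustrative audit cross-check: which employees have not acknowledged the
# latest Acceptable Use Policy version?

LATEST_AUP_VERSION = "3.2"

employees = ["alice", "bob", "carol"]
acceptance_log = {  # employee -> last acknowledged policy version
    "alice": "3.2",
    "bob": "2.9",
}

missing = [e for e in employees if acceptance_log.get(e) != LATEST_AUP_VERSION]
print("Missing latest AUP acknowledgment:", missing)  # ['bob', 'carol']
```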

Key Performance Indicator

% of employees acknowledging IT acceptable usage policy including AI usage rules

Integrity, Transparency
AI Threats: Ensure annual confidentiality renewals explicitly address handling of AI-generated content and corporate AI systems.

Guidance to Implement

Set up automated reminders for annual confidentiality agreement renewals and maintain version-controlled agreements.
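A minimal sketch of the reminder logic, assuming agreement records with a last-signed date; the one-year cycle and 30-day warning window are illustrative parameters.

```python
# Illustrative renewal-reminder check: flag confidentiality agreements whose
# annual renewal is due within the next 30 days (or overdue).

from datetime import date, timedelta

agreements = [
    {"employee": "alice", "last_signed": date(2024, 7, 1), "version": "2.1"},
    {"employee": "bob", "last_signed": date(2023, 9, 15), "version": "1.4"},
]

def due_for_renewal(records, today=None, window_days=30):
    """Return (employee, renewal_date) pairs that are due soon or overdue."""
    today = today or date.today()
    due = []
    for rec in records:
        renewal_date = rec["last_signed"] + timedelta(days=365)
        if renewal_date <= today + timedelta(days=window_days):
            due.append((rec["employee"], renewal_date))
    return due

for employee, when in due_for_renewal(agreements):
    print(f"Reminder: confidentiality renewal for {employee} due {when}")
```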

Guidance to Audit

Renewal logs with updated, signed agreements and timestamps.

Key Performance Indicator

% of employees with renewed confidentiality agreements addressing AI

Awareness, Ethics, Judgment, Transparency
AI Threats: Include KPIs for secure and ethical use of AI technologies, including detection of AI misuse.

Guidance to Implement

Incorporate clearly defined security KPIs into performance review templates and link them to training outcomes.

Guidance to Audit

Performance review documents, KPI dashboards, and training completion records.

Key Performance Indicator

% of employees with performance reviews including AI security KPIs

Integrity, Judgment, Transparency
AI Threats: Review access rights to AI systems and datasets during internal mobility to prevent unauthorized use of sensitive AI resources.

Guidance to Implement

Establish a documented process for managers and HR to trigger access reviews when an employee changes role.
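A minimal sketch of the access review trigger, comparing an employee's current AI-related entitlements against an assumed role-to-access profile; role names and entitlements are hypothetical.

```python
# Illustrative role-change trigger: flag entitlements that are not justified
# by the new role so they can be reviewed and revoked if appropriate.

ROLE_ACCESS_PROFILES = {
    "data_scientist": {"training_data_lake", "model_registry"},
    "hr_analyst": {"hr_reporting"},
}

def access_to_review(current_access: set[str], new_role: str) -> set[str]:
    """Return entitlements not covered by the new role (candidates for revocation)."""
    allowed = ROLE_ACCESS_PROFILES.get(new_role, set())
    return current_access - allowed

current = {"training_data_lake", "model_registry", "hr_reporting"}
print(access_to_review(current, "hr_analyst"))
# {'training_data_lake', 'model_registry'} -> flagged for manager/HR approval
```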

Guidance to Audit

HR-IT access review log, updated access matrix, and evidence of approval workflows and revocation timestamps.

Key Performance Indicator

% of internal mobility instances where AI access rights are reviewed

Awareness, Integrity, Ethics, Transparency
AI Threats: Develop sanction processes specifically addressing misuse of AI, such as unauthorized model training or sharing of outputs.

Guidance to Implement

Create a graduated response policy. Ensure the process is transparent, consistent, and known to employees.
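A minimal sketch of what a graduated response table could look like; the tiers and sanctions are placeholders and would need to follow the organization's HR and legal framework.

```python
# Illustrative graduated response table for AI misuse incidents.

GRADUATED_RESPONSE = {
    1: "documented warning + mandatory AI acceptable-use refresher",
    2: "written warning + temporary restriction of AI tool access",
    3: "formal disciplinary procedure per HR policy",
}

def sanction_for(offense_count: int) -> str:
    """Return the sanction tier for a repeated violation (capped at the top tier)."""
    return GRADUATED_RESPONSE[min(max(offense_count, 1), max(GRADUATED_RESPONSE))]

print(sanction_for(1))
print(sanction_for(5))  # capped at tier 3
```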

Guidance to Audit

Sanction policy and training acknowledgment records.

Key Performance Indicator

% of security violations related to AI misuse with sanctions applied

Integrity, Judgment, Transparency
AI Threats: Ensure prompt revocation of access to AI development environments and sensitive AI assets upon offboarding.

Guidance to Implement

Automate access revocation workflows immediately upon termination. Verify deactivation through system audit logs.
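A minimal sketch of the revocation-and-verification step; the identity provider call is a placeholder (a hypothetical function), since the real API depends on the IAM platform in use.

```python
# Illustrative offboarding step: revoke AI environment access and verify the
# revocation against an audit trail.

from datetime import datetime, timezone

audit_log = []  # stands in for the IAM system's audit trail

def revoke_ai_access(user_id: str) -> None:
    """Placeholder for the IAM call that disables AI environment access."""
    audit_log.append({"user": user_id, "action": "ai_access_revoked",
                      "timestamp": datetime.now(timezone.utc).isoformat()})

def verify_revocation(user_id: str) -> bool:
    """Confirm through the audit log that the revocation was recorded."""
    return any(entry["user"] == user_id and entry["action"] == "ai_access_revoked"
               for entry in audit_log)

revoke_ai_access("departing_employee_42")
assert verify_revocation("departing_employee_42"), "Revocation not found in audit log"
print("AI access revoked and verified")
```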

Guidance to Audit

Access revocation logs and system audit reports.

Key Performance Indicator

% of offboarding cases where AI access is revoked immediately

Integrity, Transparency
AI Threats: Recover AI-related data and assets (datasets, models, prompts) during the offboarding process to prevent data leaks.

Guidance to Implement

Implement a structured asset return process with checklists and digital tracking for AI data assets.
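A minimal sketch of such a digital checklist, tracking each AI-related item until it is marked returned; the item names are examples only.

```python
# Illustrative offboarding checklist for AI-related data assets.

offboarding_checklist = {
    "training datasets (shared drive hand-over)": False,
    "model artifacts (registry ownership transfer)": False,
    "prompt libraries / notebooks": False,
}

def mark_returned(checklist: dict, item: str) -> None:
    checklist[item] = True

def outstanding(checklist: dict) -> list[str]:
    """Items still pending return; offboarding closes only when this is empty."""
    return [item for item, done in checklist.items() if not done]

mark_returned(offboarding_checklist, "prompt libraries / notebooks")
print("Outstanding:", outstanding(offboarding_checklist))
```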

Guidance to Audit

Asset return checklists and IT inventory reconciliation reports.

Key Performance Indicator

% of offboarding instances with AI-related data and assets returned

Integrity, Transparency
AI Threats: Ensure return of digital equipment potentially storing AI models; datasets; or AI-generated intellectual property.

Guidance to Implement

Implement a structured asset return process with checklists and digital tracking for physical equipment.

Guidance to Audit

Asset return checklists and IT inventory reconciliation reports.

Key Performance Indicator

% of offboarding cases with AI-related equipment returned

Awareness, Ethics, Judgment, Transparency
AI Threats: Maintain oversight of privileged roles accessing AI systems, ensuring responsibilities cover AI-specific risks (e.g., model manipulation, bias injection).

Guidance to Implement

Develop and regularly update a responsibility matrix for roles with elevated privileges; review and sign off annually.
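A minimal sketch of a responsibility matrix with last review dates, used to flag entries overdue for the annual sign-off; the roles, responsibilities, and one-year threshold are illustrative assumptions.

```python
# Illustrative responsibility matrix for privileged AI roles.

from datetime import date, timedelta

responsibility_matrix = [
    {"role": "ML platform admin",
     "responsibilities": ["model registry integrity", "pipeline access control"],
     "last_review": date(2024, 3, 1)},
    {"role": "Data lake owner",
     "responsibilities": ["dataset access approvals", "bias and quality reviews"],
     "last_review": date(2023, 1, 15)},
]

def overdue_reviews(matrix, today=None, max_age_days=365):
    """Return roles whose entry has not been reviewed within the allowed period."""
    today = today or date.today()
    return [row["role"] for row in matrix
            if today - row["last_review"] > timedelta(days=max_age_days)]

print("Overdue for annual sign-off:", overdue_reviews(responsibility_matrix))
```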

Guidance to Audit

Documented responsibility matrix with review dates and stakeholder approvals.

Key Performance Indicator

% of high-privilege roles with an updated security responsibility matrix