Remote Work

Implementation and audit guidance for securing remote work environments.


AIJET Principles: A = Awareness, I = Integrity, J = Judgment, E = Ethics, T = Transparency

Awareness Transparency
AI Threats: Remote work guidelines must cover safe AI tool usage, prohibiting the sharing of sensitive data with external LLMs (e.g., ChatGPT).

Guidance to Implement

Develop and distribute detailed guidelines for securing home networks. Offer remote support resources.

Guidance to Audit

Guideline documents and employee acknowledgment receipts.

Key Performance Indicator

X% of employees acknowledge and follow remote work security guidelines.
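The acknowledgment KPI above can be computed mechanically. A minimal sketch, assuming a hypothetical HR roster and a set of signed receipts (real data would come from HR or LMS exports):

```python
# Sketch: guideline-acknowledgment KPI from a hypothetical roster and
# receipt set; names and data are illustrative only.

def acknowledgment_rate(roster, receipts):
    """Percentage of employees in `roster` with a recorded receipt."""
    if not roster:
        return 0.0
    acknowledged = sum(1 for emp in roster if emp in receipts)
    return 100.0 * acknowledged / len(roster)

roster = ["alice", "bob", "carol", "dave"]
receipts = {"alice", "carol", "dave"}
print(f"{acknowledgment_rate(roster, receipts):.0f}% acknowledged")  # prints "75% acknowledged"
```

Employees in the roster but absent from the receipt set are the follow-up list for the awareness program.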

Integrity Transparency
AI Threats: Enforce VPN usage to prevent interception by AI-enhanced traffic analysis and unauthorized model scraping.

Guidance to Implement

Enforce VPN usage through network policies and continuously monitor remote connections.

Guidance to Audit

VPN usage logs and network access control reports.

Key Performance Indicator

X% of remote work connections use a secure VPN.
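Continuous monitoring of VPN coverage reduces to a simple audit over connection records. A hedged sketch, assuming a hypothetical record layout (real input would be parsed from VPN concentrator or firewall logs):

```python
# Sketch: VPN-coverage KPI and offender list from connection records.
# The fields (user, via_vpn) are hypothetical.

def vpn_coverage(connections):
    """Percentage of remote connections established through the VPN."""
    if not connections:
        return 0.0
    return 100.0 * sum(1 for c in connections if c["via_vpn"]) / len(connections)

def offenders(connections):
    """Users with at least one non-VPN connection, for follow-up."""
    return sorted({c["user"] for c in connections if not c["via_vpn"]})

log = [
    {"user": "alice", "via_vpn": True},
    {"user": "bob", "via_vpn": False},
    {"user": "carol", "via_vpn": True},
]
print(f"{vpn_coverage(log):.1f}% via VPN; offenders: {offenders(log)}")
```

The offender list maps directly onto the "network access control reports" auditors review.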

Integrity Judgment Transparency
AI Threats: Restrict use of unmanaged AI apps on remote devices to prevent data leakage and AI model poisoning.

Guidance to Implement

Enforce conditional access based on device compliance. Integrate MDM/UEM solutions to restrict access only to enrolled, compliant devices.

Guidance to Audit

Review conditional access logs, device compliance reports, and platform access attempts from unauthorized devices.

Key Performance Indicator

X% of remote devices comply with company-approved device policies.
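The conditional-access rule above can be expressed as a small decision function. A sketch under assumed field names (enrolled, checks) that mirror typical MDM/UEM compliance reports but are hypothetical:

```python
# Sketch: conditional-access decision from hypothetical MDM/UEM
# compliance data; only enrolled devices with all checks passing
# are allowed through.

def access_decision(device):
    """Allow only enrolled devices whose compliance checks all pass."""
    if not device.get("enrolled"):
        return "deny: device not enrolled"
    failed = sorted(name for name, ok in device.get("checks", {}).items() if not ok)
    if failed:
        return "deny: non-compliant ({})".format(", ".join(failed))
    return "allow"

laptop = {"enrolled": True,
          "checks": {"disk_encrypted": True, "os_patched": False}}
print(access_decision(laptop))  # prints "deny: non-compliant (os_patched)"
```

Returning the failed check names makes the deny events self-explanatory in the conditional access logs auditors review.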

Integrity Judgment Transparency
AI Threats: Compliance checks should review unauthorized AI tool usage and employee adherence to AI-specific guidelines.

Guidance to Implement

Deploy automated compliance scans for remote devices and remediate non-compliant cases promptly.

Guidance to Audit

Compliance scan reports and remediation records.

Key Performance Indicator

X% of remote devices pass compliance scans and are remediated within 24 hours.
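The 24-hour remediation KPI combines scan results with remediation timestamps. A minimal sketch, assuming a hypothetical record layout for scan findings:

```python
# Sketch: compliance-scan KPI. A device counts as "green" if its last
# scan passed, or if its finding was remediated within the 24-hour SLA.
# The record layout is hypothetical.
from datetime import datetime, timedelta

SLA = timedelta(hours=24)

def device_green(record):
    if record["passed"]:
        return True
    fixed = record.get("remediated_at")
    return fixed is not None and fixed - record["found_at"] <= SLA

def kpi(records):
    if not records:
        return 0.0
    return 100.0 * sum(device_green(r) for r in records) / len(records)

t0 = datetime(2025, 1, 6, 9, 0)
scans = [
    {"passed": True},
    {"passed": False, "found_at": t0, "remediated_at": t0 + timedelta(hours=6)},
    {"passed": False, "found_at": t0, "remediated_at": None},
]
print(f"{kpi(scans):.0f}% green")  # prints "67% green"
```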

Integrity Transparency
AI Threats: Jump server environments must block uncontrolled interactions with generative AI services.

Guidance to Implement

Deploy advanced access solutions (like jump servers) for critical systems and log all sessions.

Guidance to Audit

Session logs and advanced access configuration records.

Key Performance Indicator

X% of critical remote systems use jump servers and log all access.
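Auditing this control amounts to cross-checking access records against jump-server session logs. A hedged sketch with hypothetical system names and record fields:

```python
# Sketch: flag accesses to critical systems that did not go through a
# jump server with a logged session. Names and layout are hypothetical.

CRITICAL = {"db-prod", "erp-core"}

def violations(accesses):
    """Accesses to critical systems lacking a logged jump-server session."""
    return [a for a in accesses
            if a["target"] in CRITICAL
            and not (a.get("via_jump") and a.get("session_logged"))]

accesses = [
    {"target": "db-prod", "via_jump": True, "session_logged": True},
    {"target": "erp-core", "via_jump": False, "session_logged": False},
    {"target": "wiki", "via_jump": False, "session_logged": False},
]
for v in violations(accesses):
    print("violation:", v["target"])  # prints "violation: erp-core"
```

Non-critical systems (the wiki above) are ignored; only critical targets bypassing the jump server surface as violations.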

Integrity Judgment
AI Threats: Addresses OWASP LLM05:2025 by reducing risks from acting on hallucinated, fabricated, or misleading LLM outputs.

Guidance to Implement

Define categories of decisions (e.g., financial transactions, legal decisions, customer escalations) that require secondary human validation when influenced by LLM outputs.

Guidance to Audit

Sample decisions influenced by AI tools and verify whether documented human validation or source triangulation is present. Cross-check logs with team leaders for spot compliance reviews.

Key Performance Indicator

X% of AI-generated recommendations are reviewed before action.
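The secondary-validation rule can be enforced as a gate in the decision workflow. A sketch using the example categories from the guidance; the field names are hypothetical:

```python
# Sketch: hold LLM-influenced decisions in sensitive categories until a
# second human signs off (per the OWASP LLM05 guidance above).

REQUIRES_HUMAN_REVIEW = {"financial_transaction", "legal_decision",
                         "customer_escalation"}

def may_proceed(decision):
    """Block unreviewed, LLM-influenced decisions in sensitive categories."""
    if not decision.get("llm_assisted"):
        return True  # human-originated decisions pass unchanged
    if decision["category"] not in REQUIRES_HUMAN_REVIEW:
        return True
    return decision.get("reviewed_by") is not None

wire = {"category": "financial_transaction", "llm_assisted": True}
print(may_proceed(wire))  # prints "False"
wire["reviewed_by"] = "treasury-lead"
print(may_proceed(wire))  # prints "True"
```

Recording `reviewed_by` also produces the documented-validation trail that the audit guidance samples against.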

Awareness Transparency
AI Threats: Alert traveling employees about AI-powered impersonation scams (deepfake voices pretending to be security or executives).

Guidance to Implement

Establish a pre-travel notification workflow that includes a briefing on AI-powered impersonation scams.

Guidance to Audit

Pre-travel notification forms and briefing records.

Key Performance Indicator

X% of employees notify security before business travel and are educated on AI scams.

Awareness Integrity Transparency
AI Threats: Incident reporting during travel should include categories for AI-generated threats or manipulations.

Guidance to Implement

Set up a dedicated reporting channel (e.g., hotline or mobile app) for travel-related incidents and train employees on its use.

Guidance to Audit

Incident reports and hotline call logs.

Key Performance Indicator

X% of travel-related incidents are reported, including AI-generated threats.

Integrity Transparency
AI Threats: Recommend travelers avoid connecting sensitive devices to unsecured networks that may host AI eavesdropping tools.

Guidance to Implement

Include hotel safe usage guidelines in travel protocols and encourage their use.

Guidance to Audit

Travel policy documents and employee acknowledgment records.

Key Performance Indicator

X% of employees use hotel safes and avoid risky networks during travel.

Integrity Transparency
AI Threats: Minimize the quantity of sensitive data carried during travel to reduce risks of AI-augmented physical theft or spying.

Guidance to Implement

Advise employees on data minimization and enforce encryption for any data carried during travel.

Guidance to Audit

Travel checklists and data minimization policy documents.

Key Performance Indicator

X% of sensitive data carried during business trips is encrypted and minimized.

Awareness Integrity Transparency
AI Threats: Protect IT equipment rigorously, preventing capture by AI-based hardware surveillance technologies.

Guidance to Implement

Incorporate clear guidelines for asset supervision during travel and emphasize vigilance in training.

Guidance to Audit

Travel supervision logs and incident reports.

Key Performance Indicator

X% of employees follow guidelines for asset supervision during business trips.

Awareness Integrity Transparency
AI Threats: Encourage use of auto-lock features to protect devices from AI-enabled opportunistic attacks.

Guidance to Implement

Educate employees to lock their devices as soon as they are not in use. Reinforce via policy reminders.

Guidance to Audit

Policy documents and training attendance records.

Key Performance Indicator

X% of employees lock their devices when not in use during business trips.

Awareness Integrity Transparency
AI Threats: Exercise discretion during external interactions to mitigate risks of AI-assisted social engineering attempts.

Guidance to Implement

Provide guidelines on maintaining discretion during external interactions and include role-playing scenarios in training.

Guidance to Audit

Travel policy documents.

Key Performance Indicator

X% of employees practice discretion during external interactions to mitigate AI-related risks.