Generative AI Poses New Challenges to Corporate Insider Risk Management

Learn how to make small adjustments to your insider risk program to protect your organization against evolving AI-based threats

Here’s something that sounds like the plot of a movie. And if I hadn’t spent my career in security, I’d have a hard time believing how it worked. According to a Wall Street Journal report, a cadre of North Korean spies infiltrated more than 300 U.S. companies by posing as remote IT workers with the help of generative AI. Using phony LinkedIn profiles, AI-generated photographs, and an apparent fluency in technical languages, they stole intellectual property and launched cyberattacks.

Occasionally, they even provided genuine IT assistance. 

This introduces a new dimension to insider risk, adding to the stack of threats your team must already mitigate. And it emphasizes the need for a unified and forward-looking security approach.

How to protect your organization against generative AI threats

So, how can you protect your organization? By building a robust insider risk program that facilitates risk management across three domains: people, processes, and technology.

People

Mitigating any type of insider risk — generative AI-based attacks included — is challenging to tackle alone. The critical information needed to address these risks is often scattered across teams. For instance, HR might receive reports of unusual behavior from employees, while IT could detect suspicious attempts to access data. Staying ahead of potential threats is nearly impossible without strong collaboration among relevant departments.

However, collaboration is often easier said than done. Teams tend to operate within their own silos, following their established processes to manage daily responsibilities. There’s rarely a natural incentive to collaborate across departments. That’s where you, as a security professional, come in. You excel at bringing teams together, whether you’re managing workplace violence cases or investigating threats against executives. Your training and expertise allow you to see the bigger picture and address insider risks effectively.

Fostering collaboration requires proactive effort. You need to align with your strategic partners in HR, legal, cybersecurity, and other departments on common goals. Building mutual trust, defining shared processes, and creating a culture of teamwork are essential. Investing time and energy now to establish strong relationships and a foundation of trust will pay off in fostering long-term collaboration and a safer environment.

Processes

Let’s walk through the nuts and bolts of how we actually run a strong insider risk operation — from helping employees understand the role AI plays in security threats to setting ground rules for sharing sensitive information. Here are some practical steps you can take to keep your organization protected, efficient, and ahead of the curve.

  • Assess your risk: Risk assessments are critical to understanding which roles and assets in the organization are most attractive to AI-based attackers or carry the most risk. 
  • Implement prevention and education strategies: Incorporate insider risk discussions into the onboarding and offboarding processes. These conversations should be updated to include generative AI’s role in phishing and other attacks (e.g., generative AI can make phishing attempts feel believable and authentic) and clear reminders about the employee’s responsibilities to the company. 
  • Make it easy for employees to report suspicious activity: Employees should have access to user-friendly digital channels for reporting suspicious activity when they see it.
  • Monitor for unauthorized or unusual behavior: AI-driven threats require more than traditional monitoring. Use user and entity behavior analytics (UEBA) to detect abnormal user activity, anomaly detection systems for real-time network irregularities, and AI-powered threat detection to identify deepfakes, phishing, and other emerging risks. SIEM platforms with AI analytics can further enhance threat visibility and response.
  • Establish information-sharing policies with relevant stakeholders: Corporate investigations operate within complex regulatory and legal frameworks. For example, HR may have concerns about sharing employee data. Clearly define what information is essential for investigations, and consider using an investigations and case management platform that allows controlled data access by department.
  • Implement repeatable and standardized processes: This will help you respond consistently to threats, reduce the chances of mistakes, and minimize the impact of any incidents. Standardized processes also improve coordination across relevant teams and ensure better compliance with regulatory requirements.
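To make the monitoring step above concrete, here is a minimal sketch of the kind of baseline-deviation check that underlies UEBA tooling. It is an illustration only, not a real product's logic: it assumes we have a per-user history of daily file-access counts and flags days that deviate sharply from that user's own baseline.

```python
# Minimal UEBA-style anomaly sketch: flag days where a user's activity
# deviates from their own historical baseline by more than `threshold`
# standard deviations. Real platforms model many more signals.
from statistics import mean, stdev

def flag_anomalies(daily_counts, threshold=2.0):
    """Return indices of days whose count is an outlier vs. the baseline."""
    if len(daily_counts) < 2:
        return []  # not enough history to establish a baseline
    mu = mean(daily_counts)
    sigma = stdev(daily_counts)
    if sigma == 0:
        # Perfectly flat history: any different value is anomalous.
        return [i for i, c in enumerate(daily_counts) if c != mu]
    return [i for i, c in enumerate(daily_counts)
            if abs(c - mu) / sigma > threshold]

# Example: a user who normally touches ~20 files a day suddenly hits 400.
history = [18, 22, 19, 21, 20, 17, 23, 400]
print(flag_anomalies(history))  # [7] — the spike on the last day is flagged
```

In practice the baseline would be computed per user and per signal (logins, data volume, access times), and a flagged day would open an alert for analyst review rather than trigger automatic action.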

Technology

Technology can help tie together all aspects of the insider risk mitigation program. Modern solutions can offer new capabilities designed to keep up with the evolution of AI-based threats.

Choose security platforms that integrate and centralize information across the organization. Best-in-class systems will integrate with cyber-specific tools that detect anomalies and then combine that information with signals from across the organization (like HR and access control) and third-party data from public records. Connecting data sources provides your team with a comprehensive view of risk and enhances your ability to protect your organization from generative AI-based threats by enabling anomaly detection across multiple domains.

Additionally, mature security systems can help mitigate both technical and non-technical risks while allowing you to configure investigation workflows by type. For example, when an IP theft incident is reported, an investigation workflow is automatically triggered and assigned to the appropriate team. This ensures all investigations are consistent and standardized, reducing the risk of mistakes and strengthening your security posture.
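The type-based routing described above can be sketched in a few lines. The incident types and team names here are hypothetical; real case management platforms expose this as configurable workflow rules rather than code.

```python
# Hypothetical sketch of type-based investigation routing: each incident
# type maps to the team that owns its workflow, with a fallback owner.
WORKFLOW_ROUTING = {
    "ip_theft": "legal_investigations",
    "policy_violation": "hr",
    "data_exfiltration": "cybersecurity",
}

def open_investigation(incident_type, default_team="security_operations"):
    """Create a new case assigned to the team that owns this incident type."""
    team = WORKFLOW_ROUTING.get(incident_type, default_team)
    return {"type": incident_type, "assigned_to": team, "status": "open"}

case = open_investigation("ip_theft")
print(case["assigned_to"])  # legal_investigations
```

The value of centralizing this mapping is consistency: every IP theft report follows the same path to the same team, instead of depending on whoever happened to receive it.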

A not-so-new world

Infiltration has long been a cornerstone of espionage, and threat actors are now using emerging digital technologies to refine their tactics. AI-powered phishing campaigns are becoming more sophisticated, enabling criminals to automate highly personalized and convincing attacks that are harder to detect and more likely to succeed. As the FBI warns, AI-driven cyber threats are on the rise, with adversaries using machine learning to enhance deception and breach security defenses.

And these tactics don’t always require high levels of technical expertise. The North Korean operatives in question aren’t relying on advanced hacking to breach firewalls; instead, they leverage generative AI to exploit human and procedural vulnerabilities and simply “walk through the front door.”

Traditional insider risk programs provide a solid defense against these infiltration strategies. However, many companies lack such programs, making them vulnerable targets for nation-state threat actors.

This needs to change. If your company doesn’t have an insider risk program, conduct a risk assessment to determine how the evolution of generative AI might necessitate one. If you already have one, consider updating it to address the evolving threat landscape. With the rise of generative AI, threats will only continue to evolve. Staying ahead of potential risks is crucial for protecting your company’s people and sensitive information and maintaining a strong security posture. 

Proactively adapting your strategies today will help ensure you’re prepared for the challenges of tomorrow.


Chuck Randolph