SPEAK WITH AN EXPERT

Insider Threats, AI and Social Engineering: The Triad of Modern Cybersecurity Threats 

By: Gouri Santhosh

Insider threats are emerging as one of the most sophisticated cyberattacks in modern times as they primarily target critical data crucial to an organization. Whether intentional or accidental, organizations will experience significant losses if individuals inside the company access sensitive data with malicious intent or are tricked by cyber criminals into causing data breaches. Insider threats may not be easily identified or detected because, in most cases, the initial breach starts with employees “insiders” with access to specific types of company data. 

These “insiders” can be categorized into: 

  • Negligent Employees: The negligent nature of these employees leads to unintentional or accidental data leaks. 
  • Vulnerable Employees: These employees may be vulnerable to psychological manipulation that leads to data breaches. 
  • Risk-taking Employees: These employees may overlook their organization’s security controls and perform actions that can lead to compromised systems. 
  • Malicious Employees: These employees may perform malicious actions with the intent to cause harm to their organization. 
  • Ideology-driven Employees: These employees can leak confidential information for their ideology. 

Essentially, the primary drivers of modern insider threats can be those who deliberately seek to inflict damage to their company or those who, whether due to negligence or susceptibility, are either targeted by external actors seeking to launch cyberattacks or cause accidental data leaks. 

Insiders who are targeted or influenced by external adversaries to commit data theft may not be addressed by traditional security solutions because attackers might use a combination of manipulation techniques with tactics to get access to the confidential data of an organization.  This can be seen in the case of Insider Threats carried out by Famous Chollima, a cyber-criminal group that targeted organizations through the employees, that were working for the criminal group. This criminal group collected individuals, falsified their identities, and helped them secure employment with the organization. Once inside, the group got access to sensitive information through the employees they helped get into the organization. 

Another technique that might have enhanced the effectiveness of this is the use of complex social engineering techniques involving pretexting and impersonation that helped convince the company that the employee was legitimate. When insider threats involve social engineering, adversaries can target the organization with privileged access to confidential information and cause massive data breaches. 

Additionally, attackers use AI to advance social engineering techniques and cause internal data breaches as well. This means that even those who do not have legitimate access to sensitive data will get their hands on it. This is one of the new challenges that needs to be tackled with both behavioral analysis and security solutions. 

The Perfectly Engineered Insider 

Despite social engineering being a common cyber-attack and security awareness, social engineering still works because human error is the weakest link to bypass security controls. Insider threats pose dangerous risks because of its complex methodology that eludes security technologies. It is especially true in cases where the insiders are tricked or targeted by cyber criminals to reveal company information. 

Both modern insider threats, where individuals maliciously exfiltrate or disclose company information for personal gain, and accidental breaches resulting from employee negligence are concerning. Additionally, sophisticated targeted insider threats exist, wherein external actors use employees to unintentionally facilitate security breaches through complex schemes. 

So, what makes targeted insider threats successful? The initial phase involves reconnaissance to gather detailed information about potential insider targets. Social engineering attacks are the best approach for cybercriminals to gather personal information about the insiders that can be used against them to get sensitive data from inside the organization.  

It is also true for malicious insiders who might conduct insider threat reconnaissance with a combination of engineering and attack techniques to get the information like reaching out to someone having access to the desired information. 

Some examples of activities malicious insiders may perform apart from accessing unauthorized systems include: 

  • Asking for access to confidential data 
  • Intentionally make security errors or ignore security protocols 
  • Performing tasks usually done by other departments 
  • Transferring/Copying files to an external USB 

Under the umbrella of threat reconnaissance comes the type of reconnaissance when an attacker carries out extensive research on an employee of the organization. Attackers conduct deep reconnaissance on workers to extract information from them using a variety of social engineering techniques that include phishing, pharming, spear phishing, or whaling. 

Leveraging AI platforms, threat actors looking to target employees benefit from rapid results, bypassing the need for time-consuming efforts typically associated with traditional persistent attacks. The prevalence of social media amplifies the risk, as data mining and scraping techniques can harvest insider information. External actors use this data to devise complex schemes with potentially severe repercussions for the organization. Adversaries can easily utilize this gathered intelligence to either groom a company insider to get involved in an information exchange deal or manipulate employees into inadvertently disclosing sensitive information. 

Use of AI in Insider Threat Attacks 

Threat actors can use LLM (Language Learning Models) and Gen AI to expedite and amplify social engineering attacks, achieving speed and scale that precedes manual efforts. 

Recently, there have been reports of cyber actors using AI/gen AI to create visual content that attempts to trick their targets into establishing means of gaining access to company data. For example, a fraudster can pose as an IT worker, trick the company with deepfake videos in interviews, submit false resumes, and enter the organization. Once they enter, they can act as an insider, deploy malware, and cause data breaches. Moreover, threat actors can automate manual steps and increase attack efficiency with Gen AI and machine learning algorithms. For example, to facilitate the success of a phishing attack on a targeted insider, an attacker can manipulate AI technologies like ChatGPT to generate email content that sounds original to the insider, tricking them into opening the mail.  

Other cases that are soaring with AI: are accidental insider threats, when employees share sensitive data with LLM models, like ChatGPT, these models could be hacked, leading to a potential data breach. 

In industry sectors such as banking and technology, where customer and proprietary data are highly valuable, gaining internal access to this information without launching an overt cyber-attack results in huge repercussions. All it takes is for one insider to use AI automation to gather confidential company data and leak it on the dark web or sell it to external cyber actors. Combined with other cyberattacks, it could lead to a full-blown breach. 

Since AI can mimic user behavior, it is hard for security teams to detect the difference between normal activity and AI-generated activity. AI can also be used by insiders to assist in their plans, such as like an insider could use AI or train AI models to analyze user activity and pinpoint the window of least activity to deploy malware onto a critical system at an optimal time and disguise this activity under a legitimate action, to avoid detection with monitoring solutions. 

Another risk of using Gen AI is the potential information leak from the AI model itself. Some AI models may use past recorded conversations with their users for training, which leads to accidental data leaks if employees reveal sensitive data in their messages with the AI models. Companies like Samsung and JPMorgan have restricted the use of gen AI due to the risk of accidental data breaches and the possibility of data being used to train AI models.  

Vulnerabilities in the AI model could also result in data breaches. For example, a poorly configured app with vulnerable AI models could unintentionally expose sensitive data like passwords or credentials through prompt injection techniques that override the original behavior of the AI. There are real-world examples of accidental data leaks through AI chatbots, AI models being manipulated into delivering inaccurate results, as well as using AI to expose user information. For example, customer-facing AI bots can be manipulated to obtain refunds greater than the original amount. 

Insiders act covertly, so detecting them might be challenging even for skilled security experts. With AI, insiders have better ways to cover their tracks and avoid detections. One “deceptive AI model” that pursues information while hiding its true nature will do the trick. In time, attackers using AI as potential “insiders” would no longer be fiction.  

Engineered Insiders and External Threat Actors

With AI, many threat actors leverage social engineering to achieve goals rather than just using malware. Threat actors can target employees inside a company, use them as “insiders” to execute their malicious intentions and use them to commit data breaches. The human element often serves as a vulnerability in most data breach cases, so external actors can use this weakness through social engineering and AI to find ways to crumble an organization’s data security. In the modern threat landscape, this leads to a complex pattern where internal and external threats become one with a mix of tactics and attack vectors. 

Attackers may employ various tactics to compromise employees inside a company, utilizing psychological techniques like blackmail, coercion, and emotional manipulation, and social engineering techniques like phishing, pharming, quid pro quo, and baiting. They may identify employees who display dissatisfaction with their employer and manipulate them into becoming the attacker’s help from the inside.  

Generally, there are two ways an attacker can target employees: one is to use them as “unintentional insiders” for compromised access and sensitive data leaks or turn them over time to commit breaches intentionally. The former involves a mix of social engineering techniques and attack tactics by attackers, whereas the latter involves psychological techniques like motivation to covertly influence the employee to leak information, as seen in cyber espionage cases. 

Last year, the Chief Financial Officer of a company allegedly contacted a worker from the finance department of a multinational company in Hong Kong via email for a transaction. This worker was initially skeptical of this email and thought it was a phishing scam. The worker was then contacted for a video call with the officer and other employees of his company. It dismissed his suspicions, and the transactions were done. The later investigations after the worker discovered that his employees did not contact him revealed that this incident was an intricate scam incorporating two of the most prevalent social engineering tactics: phishing and whaling. Except this time, attackers used AI integrated with these techniques. The attackers initially crafted a meticulous email and sent it under a legitimate address to convince the employee it had come from the CFO. Later, they drew this worker to a call with multiple employees of his company with advanced deepfake technology that digitally recreated his employees in an artificial virtual environment. It clearly shows how threat actors craft elaborate and sophisticated attacks with social engineering and AI to target workers of an organization. Given the meticulous approach in this case, the attackers would have conducted thorough reconnaissance on the company’s personnel, and it helped them execute a structured operational crime. 

However, there are cases when insiders agree to sell information due to the influence of adversaries. These cases may differ from malicious insiders; for example, disgruntled employees actively looking to cause consequential losses to their organizations. The distinction between these cases lies in their motivation: one category gets compromised by external parties while the other actively intends to harm their organizations. 

For example, there are reports that ransomware gangs actively seek to recruit insiders to help them breach their organization’s cyber defenses. While most insiders will not be compromised, with a targeted approach, employees may become influenced to violate their organization’s security protocols. While this may sound farfetched, we cannot entirely rule this out from a technical standpoint. On the other hand, there are reports of threat actors who trick companies into hiring them and then infiltrate their organizations to commit espionage or other cybercrimes. It is a clear example of how malicious insiders use their access to breach the security of their companies. 

Conclusion

It is paramount for security experts to understand the insider threats that they may deal with, whether they are intentional or accidental. Insiders who are compromised or influenced cannot be addressed with security solutions that do not consider human elements; it may require a comprehensive approach that incorporates behavioral analysis. Therefore, insider risk management must incorporate AI and security tools to cover technical issues and human factors within the organization.  

Employees are the strongest defense of an organization, and they are crucial to maintaining the security culture of an organization. With strong cyber defenses, organizations must foster a security culture that educates employees to stay protected against social engineering and external influence. 

To combat the sophisticated and challenging landscape of modern insider threats, where malicious and even unintentionally compromised employees can leverage AI and social engineering, organizations need advanced solutions for comprehensive visibility. By leveraging expertise and integrated technologies for monitoring and analysis, CyberProof can help organizations gain critical visibility into user behavior and potential anomalies, which is essential for detecting both malicious and accidental insider threats to better thwart attacks and safeguard sensitive data.