Social engineering attacks continue to plague organizations of all kinds. Malicious hackers use these attacks to target one of the weakest elements in any information system: its users. These attacks share the same goal: to tempt users into performing actions they might otherwise never take. The rapid growth of AI and machine learning has only raised the threat potential, with a report from VIPRE Security Group finding that 40% of business email scams investigated in Q2 2024 were created using generative AI.
What Is Social Engineering?
Social engineering is an attack method that relies on human weakness to bypass security, gain unauthorized access, and commit criminal activities. These attacks range from amateur to highly sophisticated, from simple non-personalized emails asking users to click suspicious links, to elaborate schemes where attackers spend months gathering data to impersonate executives.
Common types include phishing (fraudulent emails/websites to steal information), smishing (phishing via text messages), and vishing (voice phishing via phone calls). So-called spear phishing targets specific users with well-researched scams, while business email compromise (BEC) typically involves impersonating executives to trick employees. Other methods include baiting (offering appealing but malicious items), pretexting (creating scenarios to gain trust), quid pro quo (exchanging services for information), watering hole (compromising commonly visited websites), tailgating (following authorized personnel into secure areas), and dumpster diving (searching trash for valuable information).
How to Avoid Social Engineering Attacks: Best Practices for Users
Employees must remain vigilant against potential social engineering scams. They should be suspicious of unsolicited communications and enable multi-factor authentication whenever possible. Before providing sensitive information, users should verify that URLs begin with HTTPS and never interact with emails from unfamiliar senders. When a communication seems suspicious, it is best to verify its authenticity through a separate channel rather than responding directly. Users should be alert for red flags like misspellings or grammatical errors and must resist pressure tactics that create urgency. Basic security practices are essential: creating strong, unique passwords, employing spam filters and antimalware, locking computers when away, preventing tailgating, and properly disposing of sensitive documents.
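Some of these URL checks can even be automated. The sketch below is a toy illustration, assuming a handful of common phishing red flags (non-HTTPS schemes, raw IP hosts, '@' tricks, deep subdomain chains); real mail gateways use far richer signals, and the function name and thresholds here are hypothetical.

```python
import re
from urllib.parse import urlparse

def url_red_flags(url: str) -> list:
    """Return a list of simple heuristic warnings for a URL (illustrative only)."""
    flags = []
    parsed = urlparse(url)
    host = parsed.hostname or ""
    if parsed.scheme != "https":
        flags.append("not HTTPS")
    # A raw IP address instead of a domain name is a classic phishing sign.
    if re.fullmatch(r"\d{1,3}(\.\d{1,3}){3}", host):
        flags.append("raw IP address host")
    # An '@' in a URL makes everything before it a decoy username.
    if "@" in url:
        flags.append("'@' symbol hides real destination")
    # Long subdomain chains often bury a trusted brand inside a hostile domain.
    if host.count(".") > 3:
        flags.append("unusually deep subdomain chain")
    return flags

print(url_red_flags("http://paypal.secure-login.accounts.example.com/verify"))
```

A clean result (an empty list) does not mean a link is safe; it only means none of these particular heuristics fired, which is why out-of-band verification still matters.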
How to Prevent Social Engineering Attacks: Best Practices for Organizations
Organizations must implement comprehensive defenses against cyber-related social engineering, such as understanding different attack types, examining security controls, regularly patching systems, upgrading security applications when needed, and monitoring security logs for suspicious activity. Regular reviews of firewall and intrusion detection/prevention rules are crucial, along with penetration testing that includes social engineering tactics. Email and website gateways should be routinely scanned for suspicious code, and cybersecurity systems and incident response procedures should be regularly tested.
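Monitoring security logs for suspicious activity can start very simply. The following is a minimal sketch, assuming a made-up log format and an arbitrary failure threshold; production monitoring would use a SIEM with correlation rules rather than a script like this.

```python
from collections import Counter

FAILED_THRESHOLD = 3  # illustrative cutoff, not a recommended value

def suspicious_sources(log_lines):
    """Return source IPs whose failed-login count meets the threshold."""
    failures = Counter()
    for line in log_lines:
        if "FAILED LOGIN" in line:
            # Assumed log format: "... FAILED LOGIN user=<name> from <ip>"
            ip = line.rsplit("from ", 1)[-1].strip()
            failures[ip] += 1
    return [ip for ip, n in failures.items() if n >= FAILED_THRESHOLD]

sample = [
    "09:01 FAILED LOGIN user=alice from 203.0.113.7",
    "09:02 FAILED LOGIN user=alice from 203.0.113.7",
    "09:02 LOGIN OK user=bob from 198.51.100.2",
    "09:03 FAILED LOGIN user=root from 203.0.113.7",
]
print(suspicious_sources(sample))
```

Even this crude tally surfaces the kind of brute-force pattern that often precedes or accompanies a social engineering campaign.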
For physical security, organizations should examine building-access arrangements, review physical security with property managers, ensure surveillance systems remain operational, maintain access records, and conduct regular physical penetration tests. Security policies should address both cyber and physical aspects by requiring frequent training, device security measures including MFA, password hygiene, mobile device management, regular backups, and encryption. Physical security policies should mandate workstation locking, clean desk practices, security monitoring, and proper access control procedures.
Impact of AI and Machine Learning
AI and machine learning (ML) raise the bar on how threat actors can successfully strike targets with content that can be quickly tailored for various attacks. For example, AI-generated emails can draw on data from various online resources, producing more convincing messaging than human-generated communications. Attackers can also train AI-enabled malware to recognize specific security characteristics and patterns, using that embedded expertise to bypass security provisions.
The emergence of AI means security professionals must be prepared to respond to these more sophisticated attacks. In addition to using security systems with embedded AI capabilities, setting a thief to catch a thief may be wise. In other words, enterprise AI and ML systems must be trained to identify, capture, analyze, and quarantine suspicious code, emails, websites, and other information with questionable origins. This goes beyond the rules typically embedded within cybersecurity and ransomware software systems, firewalls, IDSes, and IPSes.
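To make the idea of training a system to flag suspicious email concrete, here is a toy naive Bayes classifier over word counts. The training samples, labels, and smoothing are illustrative assumptions only; real email-security products train on vast labeled corpora with far more sophisticated features.

```python
import math
from collections import Counter

def train(samples):
    """samples: list of (text, label) pairs with label 'phish' or 'ham'."""
    counts = {"phish": Counter(), "ham": Counter()}
    totals = Counter()
    for text, label in samples:
        counts[label].update(text.lower().split())
        totals[label] += 1
    return counts, totals

def classify(counts, totals, text):
    """Pick the label with the higher log-probability (add-one smoothing)."""
    vocab = set(counts["phish"]) | set(counts["ham"])
    scores = {}
    for label in ("phish", "ham"):
        score = math.log(totals[label] / sum(totals.values()))  # log prior
        denom = sum(counts[label].values()) + len(vocab)
        for w in text.lower().split():
            score += math.log((counts[label][w] + 1) / denom)
        scores[label] = score
    return max(scores, key=scores.get)

training = [
    ("urgent verify your account password now", "phish"),
    ("click this link to claim your prize", "phish"),
    ("meeting notes attached for tomorrow", "ham"),
    ("lunch plans for the team offsite", "ham"),
]
counts, totals = train(training)
print(classify(counts, totals, "urgent click to verify your password"))
```

The retraining loop the article describes maps directly onto this sketch: each confirmed attack becomes a new labeled sample, so the model's word statistics keep pace with the attacker's evolving phrasing.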
Cybersecurity and ransomware apps increasingly incorporate AI and ML technology, so working closely with vendors to learn how to maximize those capabilities is good practice. Just as traditional cybersecurity management is a cat-and-mouse game between security teams and attackers, regularly reviewing AI and ML resources and retraining them based on evidence from prior attacks can stack the deck against attackers. The goal is to use trained AI-based security to identify suspicious AI-generated code and keep ahead of social engineering attacks.