
Beyond Phishing: How AI is Amplifying Social Engineering Attacks

Adarsh Nair

The article examines the evolving landscape of social engineering, propelled by AI's capabilities, and highlights the need for robust governance frameworks to mitigate the risks. AI enables hyper-personalized attacks, automation, and evolving tactics that challenge traditional cybersecurity measures. AI's malleable toolkit includes deepfakes, voice phishing, and social media bots. Despite the challenges, maintaining scepticism, verifying communications, and leveraging AI for defence remain key strategies for combating these sophisticated attacks.

Social engineering is the practice of manipulating people into divulging sensitive information or performing certain actions. While this deceptive ploy has been a weapon in the cybercriminal's arsenal for decades, traditional attacks were relatively easy to identify because they relied on generic tactics like phishing emails with bad grammar and promises of unlikely riches. However, the landscape is rapidly changing. Artificial intelligence (AI) is now being used to create a new generation of social engineering attacks that are more sophisticated, personalized, and difficult to detect than ever before.

“As AI accelerates, social engineering evolves, calling for a multi-faceted defence strategy that integrates AI-driven solutions with human-centric approaches.”

Adarsh Nair, Director & Global Head – Information Security, UST

The Rise of AI and the Need for Governance

The rapid development of AI, and particularly its growing potential for malicious use, has spurred discussions about the need for effective governance frameworks. The EU AI Act and the American Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence are two of the most prominent recent measures designed to address these risks. These frameworks aim to establish ethical guidelines for the development and deployment of AI, to mitigate potential harms, and to ensure transparency and accountability in its use.

Several challenges stand in the way of effective AI governance strategies. One is striking the right balance between innovation and security: while overly restrictive regulations could stifle innovation in the field, without proper safeguards AI could be misused to manipulate people at scale, as seen in the rise of AI-powered social engineering.

Furthermore, international collaboration is crucial for effective AI governance. Countries are at different stages of developing their own frameworks, and a patchwork of regulations could create confusion and loopholes for malicious actors to exploit.

The field of AI governance is evolving rapidly, and as AI continues to develop, establishing effective governance frameworks will be critical to ensuring its safe and beneficial use.

How AI is Amplifying Social Engineering

AI is having a profound impact on social engineering in several ways:

Hyper-Personalization: AI can analyze vast amounts of data from social media profiles, online browsing history, and even public records to create highly personalized attack vectors. Hackers can use this information to craft emails, messages, or even deepfake videos that resonate deeply with the target, making them seem more believable and trustworthy.

Automation and Efficiency: Repetitive tasks like email phishing campaigns can be automated using AI, allowing attackers to launch large-scale attacks with minimal effort. AI can also be used to identify potential victims and tailor attacks to their specific vulnerabilities, as the short sketch after this list illustrates.

Evolving Tactics: AI can be used to analyze the success rates of different social engineering techniques and identify patterns. This allows attackers to continuously refine their methods and develop new tactics that bypass traditional security measures.
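To make the mechanics above concrete, here is a minimal sketch of the mail-merge mechanic that underpins personalization at scale. The profile fields and message template are invented for illustration; real attackers harvest such details from social media, data breaches, and public records.

```python
# Minimal illustration of personalization at scale (all data invented).
# The same mail-merge mechanic behind legitimate marketing lets an
# attacker tailor thousands of lures with a single loop.

profiles = [
    {"name": "Priya", "employer": "Acme Corp", "interest": "marathon running"},
    {"name": "Tom", "employer": "Globex", "interest": "vintage cameras"},
]

TEMPLATE = (
    "Hi {name}, fellow {interest} fan here! I work with {employer}'s "
    "benefits team -- please confirm your details at the link below."
)

# One loop is all the "effort" large-scale personalization requires.
for profile in profiles:
    print(TEMPLATE.format(**profile))
```

Generative models remove even the need for a fixed template, producing fluent, individually worded variants of the same lure, which is why bad grammar and repetition are no longer reliable warning signs.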

These advancements in AI pose a significant challenge to those tasked with building robust cybersecurity frameworks.

AI’s Malleable Toolkit

One of the most concerning aspects of AI-powered social engineering is the versatility of the tools attackers can leverage. This “malleable toolkit” includes:

Deepfakes: AI can be leveraged to create realistic forged videos or audio recordings that can be used to impersonate someone in a social engineering attack. For instance, a deepfake video could be used to make it appear as if a CEO is endorsing a fraudulent investment scheme.

Voice Phishing: AI can be used to create software that mimics a real person’s voice, making phone scams more believable. Imagine receiving a call that appears to be from your bank, with a synthesized voice urging you to disclose your account information due to suspicious activity.

Social Media Bots: AI-powered bots can be used to infiltrate social media circles, build trust with potential victims, and spread misinformation or malicious links. A crude defensive heuristic for spotting such bots is sketched after this list.

These are just a few examples, and as AI technology continues to evolve, we can expect attackers to develop even more sophisticated tools and techniques.
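As a defensive counterpoint, the following sketch scores how bot-like an account looks from a few behavioural signals. The field names and thresholds are assumptions chosen purely for illustration; real platforms rely on far richer behavioural and network features.

```python
# A deliberately simple heuristic sketch for flagging possible social media
# bots. Field names and thresholds are illustrative assumptions only.

from dataclasses import dataclass

@dataclass
class Account:
    age_days: int         # days since the account was created
    posts_per_day: float  # average posting rate
    followers: int
    following: int

def bot_suspicion_score(acct: Account) -> float:
    """Return a 0.0-1.0 suspicion score; higher means more bot-like."""
    score = 0.0
    if acct.age_days < 30:           # very new accounts are riskier
        score += 0.3
    if acct.posts_per_day > 50:      # inhumanly high posting rate
        score += 0.4
    if acct.following > 10 * max(acct.followers, 1):  # mass-follow pattern
        score += 0.3
    return min(score, 1.0)

print(bot_suspicion_score(Account(age_days=5, posts_per_day=120,
                                  followers=10, following=2000)))  # -> 1.0
```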

Defending Yourself in the Age of AI

While AI presents new challenges, there are steps you can take to protect yourself from social engineering attacks:

Maintain a Healthy Scepticism: Don’t be afraid to question the legitimacy of any communication, even if it appears personalized or urgent. As always, if something seems too good to be true – it probably is.

Verification is Key: Don't click on links or attachments in emails or messages from unknown senders. Verify information directly with the supposed source through a trusted channel, like a phone call you initiated. One simple technical check, inspecting where a link actually points, is sketched below.
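One concrete, low-tech form of verification is to examine a link's true destination before trusting it. The minimal sketch below extracts the real hostname from a URL and checks it against an allowlist; the domain names are invented examples, not a recommendation.

```python
# Minimal sketch: check a link's true destination against domains you trust.
# The allowlist entries here are invented examples.

from urllib.parse import urlparse

TRUSTED_DOMAINS = {"example-bank.com", "example.com"}  # illustrative only

def is_trusted_link(url: str) -> bool:
    """True only if the URL's hostname is a trusted domain or subdomain."""
    host = (urlparse(url).hostname or "").lower()
    return any(host == d or host.endswith("." + d) for d in TRUSTED_DOMAINS)

# A lookalike domain fails even though it "contains" the bank's name.
print(is_trusted_link("https://login.example-bank.com/reset"))    # True
print(is_trusted_link("https://example-bank.com.evil.io/reset"))  # False
```

Note how the lookalike domain fails the check even though it contains the bank's name; that visual trick is exactly what many phishing links rely on.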

The rise of AI-powered social engineering attacks presents a significant challenge, but we’re still not defenceless. By staying informed about the latest tactics, maintaining a critical eye, and employing common-sense security measures, we can significantly reduce the risk of falling victim to this new threat.

However, complete eradication might not be achievable. Researchers suggest that AI-based defence mechanisms might even be the most promising solution in the long run, and AI can also be leveraged to deliver effective awareness training to users. As AI continues to evolve on both sides of this digital arms race, staying vigilant and adaptable will be crucial in the ongoing battle against social engineering attacks.
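As a concrete, if deliberately simplified, illustration of what AI-driven defence can look like, here is a minimal sketch of a text classifier that learns to separate phishing-style lures from benign mail. The six training messages are invented toy data; a real deployment would need large, regularly refreshed labelled corpora.

```python
# Minimal sketch of an AI-assisted phishing filter using scikit-learn.
# The six training messages are invented toy data; a real system needs
# large, regularly refreshed labelled corpora to keep up with new lures.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

messages = [
    "Urgent: verify your account now or it will be suspended",
    "Your invoice is attached, click here to confirm payment details",
    "You have won a prize, send your bank information to claim it",
    "Team lunch moved to 1pm on Thursday",
    "Here are the meeting notes from this morning",
    "Quarterly report draft attached for your review",
]
labels = [1, 1, 1, 0, 0, 0]  # 1 = phishing-style, 0 = benign

# TF-IDF features plus logistic regression: a classic baseline classifier.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(messages, labels)

print(model.predict(["Confirm your password now to avoid suspension"]))
```

The same pipeline shape extends to modern transformer embeddings; the key principle, learning the statistical fingerprints of lures from labelled examples, stays the same.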

About the Author

With a strong foundation in Information Security, Adarsh has cultivated expertise spanning governance, risk management, ethical hacking, and compliance. Currently serving as the Director & Global Head of Information Security & Business Continuity at UST, he is dedicated to ensuring organizational resilience. Beyond his corporate role, he actively contributes to the cybersecurity community, holding honorary positions at OWASP, IAPP, EC-Council, Digital University of Kerala, and Kerala Police Cyberdome. His efforts have been recognized with prestigious honours, including an excellence medal from the Chief Minister of Kerala, the Emerging CISO award, and induction into the Google Hall of Fame. Passionate about knowledge sharing, he engages through publications, media appearances, and speaking engagements worldwide, championing cybersecurity awareness and best practices.
