How AI Will Impact Cybersecurity and Cybercrime in 2025
Explore predictions on AI’s impact on cybersecurity in 2025, from AI-powered scams to deepfakes and complex cyberthreats shaping the future of digital security.
Explore predictions on AI’s impact on cybersecurity in 2025, from AI-powered scams to deepfakes and complex cyberthreats shaping the future of digital security.
2024 has been a year for artificial intelligence (AI), and while it has dominated headlines in mainstream media, cybercrime has shown this, too. New sections dedicated to AI have also appeared on hacking forums and networks that are pushing cybercriminals to discover how it can empower different attack vectors.
It is present across nearly all cybercrime platforms. However, AI’s part in cybercrime is still in its infancy and has quite a lot of expansion anticipated. In this article, we make AI security predictions for the years 2024 through 2025.
Deepfakes are one of the most notable applications of AI where cybercrime is concerned. In 2024, the technology of deepfakes reached a stage that enabled cybercriminals to impersonate individuals and carry out incidents of fraud, impersonation, and misinformation. This is projected to grow a lot in 2025.
Although deepfakes are already being used in voice phishing (vishing) and video impersonation scams, the sophistication of these attacks will only get better. With AI models becoming capable of creating more realistic audio and video forgeries, we forecast a rise in the number of deepfake attacks, particularly in business email compromise (BEC), social engineering attacks, and executive impersonation scams.
We’re already in the early stages of deepfake abuse, but as we progress into 2025, the quality and scale of such attacks will increase. These deepfakes will become the cybercriminals' tool of choice, to the extent that it’s going to affect all industries, from finance to politics.
AI jailbreaking is when someone breaks an AI, or specifically a large language model (LLM), in a manner that circumvents its ethical guardrails and makes it do something it wasn’t meant to do. Jailbreak prompts are all the rage right now on cybercrime forums where criminals trade prompts and methods for breaking AI. This form of exploitation will most certainly remain a major issue in 2025.
Although jailbreaking techniques are already effective, allowing attackers to co-opt legitimate AI models for malicious purposes, there isn’t much need for further evolution in this area. It works well enough.
In 2025, AI jailbreaking will continue to allow cybercriminals to exploit LLMs for tasks such as malware development, fraud, and bypassing restrictions. This will remain a common method for turning otherwise secure AI systems into tools for cybercrime.
There has been growing interest in how AI can be applied to penetration testing and vulnerability discovery. While some envision a future where AI can autonomously scan and exploit vulnerabilities, the reality is more incremental. In 2025, we will see significant improvements in AI-augmented penetration testing tools, but the dream of fully autonomous AI hackers is still distant.
The integration of AI into security tools that already exist is what’s happening now. An example of such a use case is BurpGPT — an AI-powered plugin for the extremely popular Burp Suite, which improves web application security scanning with AI-supported insights. You will see more AI augmentations and plugins hitting the cybersecurity landscape in the coming year.
These tools won’t find zero-days or pull off complex attacks on their own, but they’ll speed up security assessment processes and help make them more accurate. By 2025, we can see entire vulnerability scanning tools based on AI models that would allow penetration testing to be faster and more effective.
2024 has been a rampant year for social media scams, and things are only going to get worse in 2025. This rise in scam activity will be fueled by AI. Today, crypto drainers, NFT scams, fake accounts, and bots are already filling up today’s social media platforms. As AI bots get even more advanced and are able to have more believable conversations with victims, these scams will progress to match them.
AI-powered bots will be able to engage with users in a way that feels authentic, tricking people into clicking malicious links, sharing personal information, or investing in fraudulent schemes.
Social engineering will become more effective as AI bots mimic human interaction with unprecedented realism, making it harder for individuals to detect scams. In 2025, social media platforms will face significant challenges in combating this growing threat as AI helps scammers adapt faster than security teams can respond.
We’re seeing big advancements in LLMs and large investments in AI, especially in cybersecurity, and we’re really just scratching the surface of what this technology can do.
Blink is an ROI force multiplier for security teams and business leaders who need to quickly and easily secure a broad variety of use cases such as SOC and incident response, vulnerability management, cloud security, identity and access management, and governance, risk, and compliance.
Blink Ops has thousands of automations in the Blink library and the ability to customize your workflows to fit your exact use case, making it a great tool to improve your security operations. Get started with Blink Ops.
Blink is secure, decentralized, and cloud-native. Get modern cloud and security operations today.