6 Real-World Applications of Artificial Intelligence (AI) in Cybersecurity

Explore 6 real-world applications of AI in cybersecurity, including BurpGPT, EPSS, Blink Copilot, VirusTotal's Code Insight, HackerOne's Hai, and Tenable's EscalateGPT.

Blink Team
Jun 27, 2024 · 6 min read

Artificial intelligence (AI) has received a great deal of attention across industries, particularly in cybersecurity. Gartner predicts that 34% of organizations will implement generative AI within the next year. Yet many articles discussing AI lack practical, real-world examples of its capabilities. To close this gap, we highlight six organizations that have successfully integrated AI to improve specific cybersecurity practices, providing concrete evidence of its potential.

1. Web Application Testing With BurpGPT

BurpGPT is a Burp Suite extension that integrates OpenAI's GPT model to perform additional passive scans to discover highly customized vulnerabilities. It allows users to run traffic-based analysis of any type, enabling them to uncover vulnerabilities that traditional scanners may miss.

BurpGPT leverages natural language processing (NLP) to analyze HTTP requests and responses, identify potential security issues, and provide context-aware recommendations for further investigation. 

By utilizing AI techniques like pattern matching and semantic analysis, BurpGPT streamlines the security assessment process and provides a higher-level overview of the scanned application or endpoint. This empowers security teams to more easily identify potential security issues and prioritize their analysis.
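To make the idea concrete, here is an illustrative sketch (not BurpGPT's actual code) of a traffic-based passive check: it flags missing security headers in an HTTP response, then assembles the context a GPT model could receive for deeper, semantics-aware analysis. The header list, function names, and prompt format are all assumptions for illustration.

```python
# Illustrative passive check in the spirit of BurpGPT: flag missing
# security headers, then build the prompt an LLM would analyze further.

EXPECTED_HEADERS = {
    "content-security-policy": "mitigates XSS by restricting resource loads",
    "strict-transport-security": "enforces HTTPS on returning visitors",
    "x-content-type-options": "prevents MIME-type sniffing",
}

def passive_header_scan(response_headers: dict) -> list[str]:
    """Return a human-readable finding for each missing security header."""
    present = {name.lower() for name in response_headers}
    return [
        f"Missing {name}: {reason}"
        for name, reason in EXPECTED_HEADERS.items()
        if name not in present
    ]

def build_llm_prompt(request_line: str, findings: list[str]) -> str:
    """Assemble the context a GPT model would receive for deeper analysis."""
    return (
        "Analyze this HTTP exchange for vulnerabilities.\n"
        f"Request: {request_line}\n"
        "Passive findings:\n" + "\n".join(f"- {f}" for f in findings)
    )

findings = passive_header_scan({"Content-Type": "text/html",
                                "X-Content-Type-Options": "nosniff"})
prompt = build_llm_prompt("GET /login HTTP/1.1", findings)
```

The pattern-matching step acts as a cheap pre-filter, so the (slower, costlier) language model is only asked about exchanges that already look suspicious.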

2. Vulnerability Prioritization With EPSS

Moving on to vulnerability prioritization, the Exploit Prediction Scoring System (EPSS) is an AI-powered tool that predicts the likelihood of a publicly disclosed vulnerability being actively exploited. Developed through collaboration between researchers and organizations, EPSS utilizes machine learning algorithms to generate scores indicating the probability of exploitation.

The system leverages various data sources, including vulnerability databases, exploit repositories, and real-world attack data, to train its predictive models. By incorporating features such as vulnerability characteristics, exploit availability, and attacker motivations, EPSS can accurately assess risk. 

By incorporating EPSS into security strategies, organizations can enhance their ability to prevent successful cyber attacks, manage risks more effectively, and allocate resources more wisely. EPSS helps prioritize patching and allows you to focus on the most serious threats first.
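FIRST publishes EPSS scores through a public JSON API (`https://api.first.org/data/v1/epss?cve=...`). A minimal sketch of score-driven patch prioritization, with a hard-coded sample standing in for the live API call (the CVE IDs and scores below are made up; the field names follow the EPSS API's response format):

```python
# Sketch: prioritize CVEs by EPSS probability. A sample response stands
# in for a live call to https://api.first.org/data/v1/epss?cve=CVE-...

sample_response = {
    "data": [
        {"cve": "CVE-2023-0001", "epss": "0.02", "percentile": "0.61"},
        {"cve": "CVE-2023-0002", "epss": "0.93", "percentile": "0.99"},
        {"cve": "CVE-2023-0003", "epss": "0.27", "percentile": "0.88"},
    ]
}

def prioritize(response: dict, threshold: float = 0.1) -> list[str]:
    """Return CVE IDs above the EPSS threshold, highest probability first."""
    scored = [(item["cve"], float(item["epss"])) for item in response["data"]]
    return [cve for cve, score in sorted(scored, key=lambda x: -x[1])
            if score >= threshold]

urgent = prioritize(sample_response)
```

A patching queue driven by this ordering addresses the vulnerabilities most likely to be exploited first, rather than working through CVEs by severity score alone.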

3. Workflow Automation With Blink

Most organizations struggle to automate common tasks or alert responses because automation requires working with rigid SOAR tools and having the technical skill to code each step. With AI, generating new automated workflows is as simple as typing in a prompt.

Blink Copilot is an example of a tool that leverages Generative AI to simplify workflow building. You no longer need to know the deep details of each integration to build powerful automations. By lowering the barrier to automation, Blink enables teams to broaden what types of tasks they think of automating, like compliance checks, syncing IAM roles, and device management.


You can try entering your own Blink Copilot prompts here to see it generate a new workflow.

4. Security Ops With VirusTotal Code Insight

Another area where AI is making a significant impact is security operations with VirusTotal's Code Insight. VirusTotal, a Google subsidiary, introduced Code Insight, a new AI-powered tool for analyzing suspicious files and URLs.

The tool uses Google's Cloud Security AI Workbench to understand code semantics and behavior. It can identify malicious patterns, obfuscation techniques, and potential indicators of compromise. 

Code Insight also generates human-readable explanations of code functionality, making it easier for analysts to understand risks. With Code Insight, security operations teams can quickly understand functionality and risks to help identify threats and provide recommendations, enhancing efficiency and effectiveness.
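Code Insight summaries appear in VirusTotal file reports, which are retrievable by hash through the VirusTotal v3 API (`GET /api/v3/files/{sha256}` with an `x-apikey` header). A small sketch of the lookup step an analyst's tooling would perform; the function name is an assumption, and the actual request (omitted here) requires an API key:

```python
# Sketch: locate a VirusTotal file report by SHA-256. The live request
# would add an "x-apikey" header and fetch this URL.
import hashlib

def vt_file_lookup_url(file_bytes: bytes) -> str:
    """Hash a file and build the VirusTotal v3 report URL for it."""
    digest = hashlib.sha256(file_bytes).hexdigest()
    return f"https://www.virustotal.com/api/v3/files/{digest}"

url = vt_file_lookup_url(b"hello")
```

Looking up by hash first avoids re-uploading (and re-analyzing) files the platform has already seen.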

5. Augmentation With HackerOne's Hai Co-Pilot

Next, let's explore how AI is augmenting security analysts with HackerOne's Hai co-pilot. HackerOne, a leading bug bounty and vulnerability coordination platform, introduced Hai, an AI co-pilot that assists security analysts. Hai provides easy-to-understand explanations of vulnerabilities, offers remediation advice, and helps generate clear communications.

Hai leverages AI to analyze reports, extract relevant information, and provide context. It can identify technical details, assess severity and impact, and suggest appropriate remediation. 

By augmenting analysts, Hai streamlines processes, improves collaboration, and accelerates remediation. It enables faster decision-making and helps bridge gaps between teams and hackers.

6. IAM Security Testing With Tenable's EscalateGPT

Finally, let's discuss how AI is being used for identity and access management (IAM) security testing with Tenable's EscalateGPT. Tenable, a cybersecurity company, developed EscalateGPT, an AI-powered tool for discovering privilege escalation opportunities in AWS IAM configurations.

EscalateGPT is a Python-based tool that retrieves all IAM policies associated with users or groups in an AWS account or Azure AD tenant. It then passes these policies to the OpenAI API for comprehensive analysis. 

The AI models, such as GPT-4, can identify complex privilege escalation scenarios based on non-trivial IAM configurations, providing valuable insights to security teams.
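For a sense of what such analysis surfaces, here is a hand-rolled static sketch (not EscalateGPT's code, which delegates the reasoning to an OpenAI model): it scans an IAM policy document for actions commonly associated with privilege escalation, such as `iam:CreatePolicyVersion` and `iam:PassRole`. The sample policy and function name are illustrative.

```python
# Static sketch in the spirit of EscalateGPT: flag IAM actions commonly
# linked to privilege escalation paths in AWS.

ESCALATION_ACTIONS = {
    "iam:CreatePolicyVersion",  # rewrite an attached policy to allow anything
    "iam:AttachUserPolicy",     # attach an admin policy to oneself
    "iam:PutUserPolicy",        # add an inline admin policy
    "iam:PassRole",             # hand a privileged role to a service
}

def find_escalation_actions(policy: dict) -> list[str]:
    """Return escalation-relevant actions granted by an IAM policy document."""
    flagged = []
    for stmt in policy.get("Statement", []):
        if stmt.get("Effect") != "Allow":
            continue
        actions = stmt.get("Action", [])
        if isinstance(actions, str):  # "Action" may be a string or a list
            actions = [actions]
        for action in actions:
            if action == "*" or action in ESCALATION_ACTIONS:
                flagged.append(action)
    return flagged

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {"Effect": "Allow",
         "Action": ["s3:GetObject", "iam:CreatePolicyVersion"],
         "Resource": "*"},
    ],
}
risks = find_escalation_actions(policy)
```

A fixed action list like this catches the obvious cases; the value of an LLM-backed tool is reasoning about non-trivial combinations of permissions, resources, and trust relationships that no static list anticipates.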

Looking At AI's Overall Impact On Cybersecurity

The examples above illustrate how AI can enhance cybersecurity processes and improve overall threat detection. However, as emphasized in Gartner's report, realizing the full potential of generative AI requires strategic adaptation across multiple areas.

From a technical perspective, it's important to pair AI adoption with new security practices that minimize potential attack vectors. Gartner identifies these practices as continuous monitoring of AI prompts, implementation of privacy-enhancing techniques, and securing training pipelines. 

Gartner also predicts a significant increase in spending on application and data security, with an estimated rise of over 15% by 2025 due to the adoption of generative AI.

Addressing talent implications is also essential, as AI is driving significant changes in required cybersecurity skills, necessitating extensive reskilling. By 2026, 50% of large enterprises are expected to prioritize agile learning methods for reskilling. 

Effective reskilling requires a human-centered approach, with Gartner predicting that 40% of cybersecurity programs will integrate behavioral science principles by 2025 to foster secure behaviors and risk awareness as part of the organizational culture.

Take Your Next Steps With BlinkOps Today

Generative AI is a technology that is best used to improve and augment humans as they work in various areas of cybersecurity. If you want to dig deeper into your AI and development journey, The Dark Reading Report on "The State of Generative AI in the Enterprise" is the next step. It's packed with information and trends about how security teams use generative AI. Click here to download a copy of the report.
