My Latest in Cybersecurity Research

Welcome to my research blog, from the view of a cyber professional, a source for cutting-edge cybersecurity research. Explore the latest in digital defense and the ongoing battles in cyber warfare. Stay informed and stay secure with me.

My Research

AI and Machine Learning in Cybersecurity


Brock Lerdall | September 11, 2023 


The advancement of artificial intelligence (AI) has exploded in just the last couple of years, and it shows no signs of slowing down. OpenAI's ChatGPT, a pre-trained large language model chatbot, can write code, develop stories, answer questions, and even take (and pass) difficult exams; Adobe's generative AI can expand, add to, subtract from, and completely transform photos. While these advancements are exciting and can transform our work and personal lives, they can also be a little scary, especially since we don't yet know the direction or the potential complexity of these AIs. In a 2023 Pew Research Center study, 52% of Americans said they are more concerned than excited about the increased use of artificial intelligence in daily life; just 10% said they are more excited than concerned (Tyson & Kikuchi, 2023). As a cybersecurity professional, how do these advancements and the expanded threat surface affect you?


The Role of AI in Cyber Threat Detection and Prevention

Although there are growing concerns, there is a bright spot in how we can leverage these AI tools in cybersecurity. AI and machine learning can be used to prevent all kinds of cyber threats, including malware, phishing attempts, and insider threats, to name just a few; the possibilities are limited only by imagination.


Malware Detection

Machine learning has been in use for a while in malware detection, where it plays a critical role in identifying and mitigating threats. A behavioral analysis model can detect suspicious or malicious behavior in software and network traffic. These learned patterns can detect and alert on anomalies in the file system, installed rootkits, user or group modifications, connections to malicious sites, modified boot records, or any action that would be considered abnormal use. Companies like Norton and TotalAV use machine learning behavioral analysis (alongside other malware detection methods) to detect and mitigate threats; Norton boasts a 100% detection and mitigation rate, offered in real time. Because the model continuously refines its understanding of behavior, in theory its malware detection only gets more accurate with time.

Signature-based detection is another way machine learning is leveraged to find malware on our systems. It queries a database of known malware and compares incoming code or files against it, looking for strings inside code or applications that are known to be malicious. This lookup allows fast identification of known threats but may fall short against unknown or brand-new malware.

Heuristic analysis is yet another approach to malware detection. Like signature-based detection, it looks for anomalies in code and software, but instead of matching malicious strings, it finds patterns of behavior common in malware. Heuristic analysis uses rules to search for commands that are not typical in legitimate code and may be used maliciously. As AI advances, malware detection methods will advance with it.
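To make the signature-based approach concrete, here is a minimal sketch in Python. The hash database and sample contents are hypothetical stand-ins; real products use far larger signature feeds and often fuzzy hashes rather than exact SHA-256 matches.

```python
import hashlib

def sha256_of(data: bytes) -> str:
    """Compute a file's SHA-256 digest, the 'signature' we match on."""
    return hashlib.sha256(data).hexdigest()

# Hypothetical signature database: digests of samples already known bad.
signature_db = {
    sha256_of(b"malicious-sample-1"),
    sha256_of(b"malicious-sample-2"),
}

def is_known_malware(file_bytes: bytes, db: set) -> bool:
    """Signature-based check: exact match against the known-malware database."""
    return sha256_of(file_bytes) in db
```

Note how the weakness described above falls out of the design: `is_known_malware` returns False for any sample not already in the database, which is exactly why novel malware slips past pure signature matching.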


Phishing Detection

As any cybersecurity professional will tell you, the greatest threat to your environment is the end user. According to an AAG study, 45.56% of all emails sent in 2021 were spam (Griffiths, 2023). End users are arguably the most targeted assets inside your organization. How can AI be leveraged to detect and prevent social engineering attacks such as phishing? Machine learning content analysis models can detect unusual language, suspicious websites or links, and risky attachments inside email and website content. Another tool is user behavior analysis: this model sets a baseline for "normal" user behavior, spots anomalies such as a user clicking a suspicious link, and issues a warning to the user. URL and domain analysis uses machine learning to test the legitimacy of websites and links, comparing them against a blacklist of known suspicious or malicious sites or identifying content with known suspicious patterns. End user training and awareness remains the best defense against phishing, but with AI tools like these at our disposal, hopefully we can drive the number of successful phishing attacks toward zero.
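A rough sketch of the URL and domain analysis idea, assuming a hypothetical blacklist and a few hand-written heuristics; a production system would feed features like these into a trained classifier rather than fixed rules.

```python
import re
from urllib.parse import urlparse

# Hypothetical blacklist of known-bad hostnames.
BLACKLIST = {"evil-login.example", "free-prizes.example"}

# Simple red-flag heuristics often seen in phishing links.
SUSPICIOUS_PATTERNS = [
    re.compile(r"\d{1,3}(\.\d{1,3}){3}"),          # raw IP instead of a domain
    re.compile(r"[-.]secure[-.]|[-.]verify[-.]"),  # lure words in the hostname
    re.compile(r"@"),                              # '@' can disguise the real host
]

def url_risk_score(url: str) -> int:
    """Return 100 for blacklist hits; otherwise count heuristic red flags."""
    host = urlparse(url).hostname or ""
    if host in BLACKLIST:
        return 100
    return sum(1 for pattern in SUSPICIOUS_PATTERNS if pattern.search(url))
```

A mail gateway could quarantine anything scoring 100 and warn the user on lower nonzero scores, mirroring the warning-based user behavior analysis described above.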


Insider Threat Detection

The most dangerous threat actor is the one already inside your organization. Malicious actors with legitimate access and credentials can misuse that access to compromise systems, networks, or data. While you can't necessarily stop an insider with malice, machine learning tools like user behavior analytics, data loss prevention (DLP), and sentiment analysis can lessen the blow of an insider threat. User behavior analytics monitors typical user behavior and flags unusual activity, such as attempts to access sensitive files or file locations, unusual work and access hours, or unauthorized access locations. If your typical user works from 9 to 5 and only touches file locations within their privilege, someone attempting access above their privilege at 3 a.m. would be flagged as unusual. DLP is another commonly used machine learning tool, aimed at preventing data exfiltration and unauthorized data access. Classifications can be applied to sensitive data so the DLP tool can recognize it, flag it, and automatically block a transfer before it becomes a breach. Sentiment analysis is a machine learning model that monitors digital chat messages, emails, and similar communications to gauge whether the language is typical of a disgruntled, happy, or neutral employee, flagging language that may indicate an insider threat or an unhappy employee who could harm the organization. So be careful sending that "I hate this place, I want to fork bomb the network" message to your co-worker.
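The 9-to-5 example above can be sketched as a baseline comparison. The per-user baseline here is hard-coded for illustration; a real user behavior analytics system would learn it from historical activity.

```python
from dataclasses import dataclass

@dataclass
class AccessEvent:
    user: str
    hour: int            # 0-23, local time of the access
    resource_level: int  # sensitivity level of the resource touched

# Hypothetical baseline learned from each user's historical activity.
BASELINES = {
    "alice": {"work_hours": range(9, 17), "max_level": 2},
}

def flag_anomaly(event: AccessEvent) -> list:
    """Compare an access event against the user's learned baseline."""
    baseline = BASELINES.get(event.user)
    if baseline is None:
        return ["unknown user"]
    flags = []
    if event.hour not in baseline["work_hours"]:
        flags.append("access outside normal hours")
    if event.resource_level > baseline["max_level"]:
        flags.append("access above normal privilege")
    return flags
```

A 3 a.m. attempt on a level-5 resource by a 9-to-5, level-2 user trips both flags, while in-baseline activity produces none.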


In all of these cases, the strength of machine learning lies in its ability to continuously learn and improve over time. Machine learning models can learn from new data, allowing them to stay up to date with ever-changing cyber threats. However, it's important to note that while machine learning is a powerful tool in cybersecurity, it is not an end-all solution. As with any cyber tool, it should be accompanied by other security tools and policies to provide comprehensive protection against cyber threats.


Adversarial Machine Learning in Cybersecurity

Adversarial machine learning (AML) focuses on manipulating AI and ML models by tampering with their training, exploiting vulnerabilities in the models themselves, and interfering with their processes. Adversaries can craft input data specifically designed to mislead AI and ML tools; these inputs are called adversarial examples. Adversarial examples can evade AI/ML security tools by appearing legitimate or benign to the detection system in place. If an adversarial example can fool one AI model, it may be able to fool others as well, which makes such examples far more dangerous: a single crafted input can exploit multiple models.
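A toy illustration of how an adversarial example is crafted, assuming a deliberately simple linear "malware classifier" with made-up weights. For a linear model the gradient of the score with respect to the input is just the weight vector, so a fast-gradient-sign-style attacker nudges each feature against the sign of its weight to push the score toward "benign."

```python
def score(x, w, b):
    """Linear classifier: a positive score means 'malicious'."""
    return sum(wi * xi for wi, xi in zip(w, x)) + b

def fgsm_evade(x, w, b, epsilon=0.5):
    """Perturb each feature opposite the sign of its weight (FGSM-style),
    lowering the malicious score while keeping the change per feature small."""
    sign = lambda v: (v > 0) - (v < 0)
    return [xi - epsilon * sign(wi) for xi, wi in zip(x, w)]

w, b = [0.9, -0.4, 0.7], -0.5
sample = [1.0, 0.2, 0.8]            # scores malicious as-is
evasive = fgsm_evade(sample, w, b)  # small shift per feature, now scores benign
```

The same small perturbation often transfers to other models trained on similar data, which is the transferability risk noted above.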

When it comes to securing your AI and ML models, robustness is necessary. You can use adversarial training to teach a model to recognize adversarial examples, making it less likely to be exploited. Combining multiple models and tools, increasing redundancy, and eliminating single points of failure are always good ways to build a robust and resilient system.


While cybersecurity professionals have the aforementioned tools at their disposal, our adversaries have the same access to AI and machine learning. While we (the good guys) have to be right all the time, they (the bad guys) only have to be right once. It is important to employ a robust suite of cyber tools and policies to help us be right ALL OF THE TIME. If you deploy any of the AI and ML tools mentioned above, be aware that their models and training can be maliciously manipulated.









References:

Tyson & Kikuchi (2023). Growing public concern about the role of artificial intelligence in daily life. Pew Research Center. https://www.pewresearch.org/short-reads/2023/08/28/growing-public-concern-about-the-role-of-artificial-intelligence-in-daily-life/

Griffiths (2023). The latest phishing statistics. AAG. https://aag-it.com/the-latest-phishing-statistics/





Call 719-329-4593

Email: brocklerdall@gmail.com

My website proudly features studies and work with prominent companies, although it's important to clarify that I do not possess ownership rights to their logos. These logos are displayed here solely to represent my studies and work and to showcase the brands I've had the privilege to work alongside. I respect the intellectual property of these organizations and maintain the highest ethical standards in my collaborations.