Blog
Strange Behavior: The Case for Machine Learning in Cybersecurity
Detect anomalies to stop threats
Chase Snyder
September 1, 2020
Many people are skeptical about machine learning claims and rightfully so. You can't just sprinkle machine learning pixie dust on a product and make it better. You need to first understand the use case. The strongest case for machine learning in cybersecurity is detecting unusual behavior that represents attack activity.
Once attackers have breached your defenses, there are many ways for them to evade detection by traditional security tools—not to mention that no one has the time to set up complex SIEM alerts for every possibility. Attackers will mask themselves with legitimate credentials, delete or modify logs, encrypt or obfuscate their communications, or use sanctioned IT administration tools to move laterally while escaping notice. But one thing they won't be able to do is escape an always-on machine learning system that's looking for suspicious behavior on the network.
Sophisticated anomaly detection is the real benefit of machine learning for cybersecurity. Yes, ML can help in other ways, such as speeding up investigation workflows, but the strongest case for ML in cybersecurity is detecting unknown threats that traditional approaches will miss. When you apply machine learning to network traffic, you have a mechanism for anomaly detection that automatically adjusts to your dynamic environment and never sleeps.
Reveal(x) applies machine learning to network traffic to detect unusual behavior that other tools will miss.
Consider a malicious insider—the ultimate unknown threat—who decides to gather up as much proprietary data as possible onto their laptop before leaving the company. Machine learning can detect anomalies such as this person's credentials being used in an unusual way, or their laptop starting to behave suspiciously on the network.
The danger of insider threats was highlighted in January 2020, when an automotive manufacturer charged a former employee with creating false usernames to make direct changes to the manufacturing automation source code. The malicious insider also exported gigabytes of data to unknown third parties. Fortunately, the company detected the sabotage, but the incident illustrates the threat posed by malicious insiders.
The video below illustrates how Reveal(x) applies machine learning to network traffic to expand the observational boundary and protect you against unknown threats. In this scenario, we show how a VoIP phone can start deviating from the behavior of its peers and trigger an unusual communication detection. If you'd like to try Reveal(x)'s anomaly detection capabilities for yourself, check out our interactive online demo.
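To make the peer-deviation idea concrete, here is a minimal sketch in plain Python. It flags any device whose traffic volume deviates sharply from its peer group's baseline using a simple z-score. All device names and byte counts are invented for illustration, and the z-score is a deliberately simplified stand-in for the richer behavioral models a product like Reveal(x) would use.

```python
from statistics import mean, stdev

# Hypothetical per-device byte counts for a peer group of VoIP phones.
# (Illustrative numbers only — not real Reveal(x) data or telemetry.)
peer_bytes = {
    "voip-01": 1_200,
    "voip-02": 1_150,
    "voip-03": 1_300,
    "voip-04": 1_250,
    "voip-05": 48_000,  # unusually chatty device
}

def peer_anomalies(observations, threshold=1.5):
    """Return devices whose traffic deviates from the peer baseline.

    Computes a z-score for each device against the group's mean and
    standard deviation and flags those beyond the threshold.
    """
    values = list(observations.values())
    mu, sigma = mean(values), stdev(values)
    return [
        device for device, value in observations.items()
        if sigma > 0 and abs(value - mu) / sigma > threshold
    ]

print(peer_anomalies(peer_bytes))  # the outlier phone stands out
```

Note that including the device under test in its own baseline inflates the standard deviation, which is why the threshold here is low; a real system would maintain rolling baselines per peer group and per time of day.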
Discover more