I have spent a lot of time recently thinking about where information security research still needs to evolve in order to get ahead of the pace of threats and technological disruption. What’s driving me to rethink the way we approach the problem is both the pace of change and the steady decrease in efficacy. Security has always been described in terms of balances: efficacy, productivity, scale, performance, and functionality. And on the research side, there has been a balance of science and art. This post takes a look at that balance and what I am calling the “Security Venn of Research”.
In the early days of security research, reverse engineers spent their time receiving samples from customers (sometimes even via snail mail on floppy disks). The researcher would then analyze the code to determine its intent and behavior, and then update the protection system with some sort of signature: a hash, a CRC, or a byte pattern. Although there was certainly *some* science applied here, the majority of this manual work was done by a person who formed an opinion based on their own research. It was heavily art, with very little science.
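To make the signature idea concrete, here is a minimal sketch of hash- and byte-pattern matching. The sample bytes, digests, and patterns are entirely hypothetical; real engines shipped large, curated databases of signatures rather than a hard-coded handful.

```python
import hashlib

# Hypothetical signature database. In practice an analyst would add the
# hash of a confirmed-malicious sample after manual analysis.
KNOWN_BAD_SAMPLE = b"example malicious payload"
SHA256_SIGNATURES = {hashlib.sha256(KNOWN_BAD_SAMPLE).hexdigest()}

# A hypothetical byte pattern an analyst extracted from a malware family.
BYTE_PATTERNS = [b"\xde\xad\xbe\xef"]

def matches_signature(sample: bytes) -> bool:
    """Return True if the sample matches a known hash or byte pattern."""
    digest = hashlib.sha256(sample).hexdigest()
    if digest in SHA256_SIGNATURES:
        return True
    return any(pattern in sample for pattern in BYTE_PATTERNS)
```

The weakness that drove the field forward is visible here: a one-byte change to the sample defeats the hash check, and byte patterns only catch variants the analyst anticipated.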
Years later the Internet arrived, and malicious, self-replicating code (worms) became more pervasive. The volume of malware increased dramatically, and protection updates could now be pushed over the Internet. In addition, the research community started using behavior-based systems. This gave researchers the luxury of not having to reverse engineer and analyze every sample they collected, and automation was put in place to update protections much more frequently. This overlap created a Security Venn of a small part science and part art.
Next came the rise of the cyber-criminal, which made a dramatic impact on the volume and sophistication of attacks. At the same time, researchers were building bigger automation systems, creating more collection mechanisms, and building large clusters of systems to automate analysis. Although this balanced the scale of science and art, there was still a lot of tuning, development, and manual classification being created and maintained, based on short-lived problems and driven primarily by attack samples.
Big Data meets Security
As threats continue to evolve in sophistication and increase in number, the reliance on attack samples has caused a continued decrease in efficacy. That said, there is a lot of room for continued innovation. We are using technologies from the big data/data mining movement in combination with machine learning and other scientific classification methods based on DNS data, traffic, and hundreds of features we collect in real time. This allows for predictive classification with very little human involvement and post-classification pushes, and it tips the scale towards more science and less art. While reverse engineering and manual analysis still have a role in advanced malware research and forensics, they are certainly no longer equal to scientific research.
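A toy sketch of what feature-based, predictive classification can look like: scoring a domain name on a few features (entropy, length, digit ratio) that tend to separate algorithmically generated domains from ordinary ones. The features, weights, and threshold here are hypothetical stand-ins; a production system would learn them from large volumes of labeled DNS and traffic data rather than hand-pick them.

```python
import math
from collections import Counter

def shannon_entropy(s: str) -> float:
    """Bits of entropy per character in the string."""
    counts = Counter(s)
    n = len(s)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

# Hypothetical weights and threshold; a real system would learn these
# from labeled data rather than hard-code them.
WEIGHTS = {"entropy": 0.6, "length": 0.02, "digit_ratio": 1.5}
THRESHOLD = 3.0

def suspicion_score(domain: str) -> float:
    """Weighted sum of simple lexical features of the leftmost label."""
    name = domain.split(".")[0]
    features = {
        "entropy": shannon_entropy(name),
        "length": len(name),
        "digit_ratio": sum(ch.isdigit() for ch in name) / len(name),
    }
    return sum(WEIGHTS[k] * v for k, v in features.items())

def is_suspicious(domain: str) -> bool:
    return suspicion_score(domain) >= THRESHOLD
```

The point of the sketch is the shift in workflow: once features and weights are in place, classification happens automatically at collection time, with humans involved only in building and validating the model.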
Design will increase in importance
Just like other technology disciplines, information security can benefit greatly from the data visualization movement in combination with big data. The security version of data viz is sometimes referred to as “Secure-viz” or “sec-viz”. This brings the art piece back into security, as the human element may be needed to analyze the data through visualization. The human brain can still process graphics in more complex ways than today’s machines.
Although it may depend on the class of attacks you are researching, it’s clear that the research community is moving towards a balance of science, art, and design as the next evolution of the Security Venn. To learn more about how we do what we do, take a look at our Infographic on how Umbrella Security Labs leverages Big Data for predictive threat research.