In 2002, Donald Rumsfeld, then the United States Secretary of Defense, said “Reports that say that something hasn’t happened are always interesting to me, because as we know, there are known knowns; there are things we know we know.”
However, he added “there are also unknown unknowns – the ones we don’t know we don’t know. And if one looks throughout the history of our country and other free countries, it is the latter category that tends to be the difficult ones.”
The same lack of knowledge is true in the cybersecurity world. Let’s take a look.
Also see: Secure Access Service Edge: Big Benefits, Big Challenges
Cyber Threats: A Short History
The very first type of cyber threat was the notorious virus. It was an innovative computer program that could create copies of itself and “jump” from one computer to the next. Back then, attackers used floppy disks, but as time went on, they began to use computer networks and the Internet.
The first anti-virus software created databases of signatures: snippets of binary code drawn from many different viruses. Each file on the system was compared against that database. Back then, that strategy was enough because developing applications was complex and, more importantly, time-consuming, particularly in languages such as C, C++, and Assembly.
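The classic approach can be sketched in a few lines. This is a toy illustration only; the signature names and byte patterns below are made up, not drawn from any real malware database.

```python
# Toy sketch of classic signature-based scanning: each "signature" is a
# byte pattern extracted from known malware, and a file is flagged if any
# pattern appears in its contents. All patterns here are hypothetical.
SIGNATURES = {
    "hypothetical-worm-a": b"X5O!P%@AP[4\\PZX54(P^)7CC)7}$",
    "hypothetical-dropper-v1": b"\xde\xad\xbe\xef\x13\x37",
}

def scan_bytes(data: bytes) -> list[str]:
    """Return the names of all signatures found in the given bytes."""
    return [name for name, pattern in SIGNATURES.items() if pattern in data]

# A buffer containing one of the known patterns is detected:
sample = b"...header..." + b"\xde\xad\xbe\xef\x13\x37" + b"...payload..."
print(scan_bytes(sample))  # ['hypothetical-dropper-v1']
```

The weakness is already visible here: the defender must know the exact byte pattern in advance, so any variation the attacker introduces defeats the match.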
Since then, dynamic languages such as Python have appeared that facilitate the software development process by allowing users to more easily collaborate and reuse code. This, together with the rise of cryptocurrency, increased attackers' motivation and made maintaining such signature databases infeasible. The number of known signatures and the rate at which the list is growing are simply staggering.
Many security solutions, even today, are still trying to use various types of signatures to detect malicious files, network traffic, human behaviors, and other indicators. Even tools that try to harness the immense power of large user communities to create vast databases of tens of thousands of different signatures are only effective to some extent. Overall, this strategy is not enough to provide good protection even against known threats.
And even if it were effective in protecting against known threats, it still wouldn't provide any protection against the known unknowns, and certainly not against the unknown unknowns. These attack types are being conceived and developed by attackers somewhere right now.
Also see: Tech Predictions for 2022: Cloud, Data, Cybersecurity, AI and More
The Struggle to Block Unknown Threats
As mentioned, the signature-based defense tactic, in which we create vast databases of IOCs (Indicators of Compromise, e.g., IPs, domain names, users), is becoming less and less effective today with the rise of new programming languages.
Another trend that makes this approach less effective is the use of DGAs, Domain Generation Algorithms. Using this technique, the attacker embeds a piece of code inside the malware that uses variables known to both the attacker and the malware (for example, the current date) to periodically and dynamically generate a domain name and attempt to access it.
If the domain resolves, the malware uses it; if not, it falls back to the most recently generated domain that did work. Such techniques render signatures based on domain names and IPs essentially useless, as the attacker can change them at any time.
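A DGA can be surprisingly small. The sketch below is a minimal illustration of the idea, not code from any real malware family; the seed string and TLD are assumptions chosen for the example.

```python
import hashlib
from datetime import date, timedelta

# Minimal sketch of a Domain Generation Algorithm (DGA): both the malware
# and the attacker derive the same domain from a shared seed plus the
# current date, so no fixed domain name ever appears in the binary.
def generate_domain(seed: str, day: date, tld: str = ".com") -> str:
    digest = hashlib.sha256(f"{seed}-{day.isoformat()}".encode()).hexdigest()
    return digest[:16] + tld

# The malware tries today's domain, then falls back to earlier ones:
start = date(2022, 1, 1)
candidates = [generate_domain("example-seed", start + timedelta(days=i))
              for i in range(3)]
print(candidates)
```

Because both sides recompute the domain independently each day, blocklisting yesterday's domain accomplishes nothing: tomorrow's domain is already different, and only someone who knows the seed can predict it.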
Another technique we see includes several types of network segmentation and segregation. This method splits the network into several sub-networks via various kinds of firewalls. Then, access to the Internet is not achieved directly but rather via a proxy or some sort of a terminal server, which receives keystrokes and mouse movements and returns images (i.e., screenshots).
This can effectively protect against certain types of threats but does not guard the organization against all types of attacks. Furthermore, such methods don't give the organization any tools to detect attacks that use evasion mechanisms against these devices or services, and they don't improve the organization's observability overall.
Another technique that is very effective but rarely used for practical reasons is whitelisting. Using this method, the organization will create and maintain databases of signatures of allowed executables, websites, certificate issuers, file hashes, etc. The downside is apparent: the amount of data the organization needs to collect and the manual maintenance required renders such a solution infeasible for most organizations.
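The core of a whitelisting (allowlisting) scheme is small, even though populating and maintaining it is the hard part. The sketch below illustrates the hash-based variant; the allowlist contents are, of course, hypothetical.

```python
import hashlib

# Sketch of hash-based allowlisting: only executables whose SHA-256
# digest appears in a maintained allowlist are permitted to run.
# In practice this set would be built from the organization's approved
# software inventory; the single entry here is illustrative.
ALLOWED_SHA256 = {
    hashlib.sha256(b"trusted-binary-contents").hexdigest(),
}

def is_allowed(file_bytes: bytes) -> bool:
    """Return True only if the file's hash is on the allowlist."""
    return hashlib.sha256(file_bytes).hexdigest() in ALLOWED_SHA256

print(is_allowed(b"trusted-binary-contents"))  # True
print(is_allowed(b"unknown-binary-contents"))  # False
```

Note how the logic is the inverse of signature scanning: instead of enumerating the (unbounded) set of bad files, we enumerate the (finite but large) set of good ones, which is exactly why the maintenance burden becomes the bottleneck.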
Combining Technologies to Block Threats
As in many other cases in technology, when confronted with the need to choose between two options, the best path forward is usually to combine the two, capturing the pros of both while minimizing the cons.
The same holds here. It's recommended that you maintain, and continue improving, your current network segregation and firewalls, but add to them a passive network monitoring solution that continuously monitors the traffic between computers on the organization's local network and the traffic between the organization and the outside world. Also, using what are known as honey tokens can help tremendously in improving the detection capabilities of the passive monitoring solution.
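A honey token is simply a decoy value (a fake account, API key, or file) that no legitimate user or process should ever touch, so any use of it is a high-confidence alert. The sketch below shows the idea for a decoy account name; the token value and field names are assumptions for illustration.

```python
# Sketch of a honey-token check: a decoy credential is planted where no
# legitimate process should ever use it, so any authentication attempt
# with it is almost certainly malicious. The decoy name is made up.
HONEY_TOKENS = {"svc-backup-legacy"}  # hypothetical decoy account

def check_login_event(username: str, source_ip: str):
    """Return an alert string if a honey token was used, else None."""
    if username in HONEY_TOKENS:
        return f"ALERT: honey token '{username}' used from {source_ip}"
    return None

print(check_login_event("svc-backup-legacy", "10.0.0.42"))
print(check_login_event("alice", "10.0.0.17"))  # None
```

Because the token has no legitimate use, this check has an essentially zero false-positive rate, which is what makes honey tokens such a cheap, powerful complement to passive monitoring.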
Then, using the data collected from all these solutions, you can create alerts that fire whenever a new value appears in selected fields of selected events, which is a good indication of malicious activity.
These fields will include:
- Destination and source countries
- Processes launched by the root/Administrator user
- External DNS servers (other than the internal DNS server) accessed by internal machines
- RDP/VNC users
- SSH server applications
- Application types
- Internally hosted services consumed by external hosts
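The alerting logic described above can be sketched as a simple "first-seen value" detector: for each selected field, remember every value observed so far and raise an alert the first time a new one appears. The field names below are taken from the list above; the mechanics are an assumption about how one might implement this, not a specific product's API.

```python
from collections import defaultdict

# Sketch of "first-seen value" alerting: per field, keep the set of
# values seen so far; a value not yet in the set triggers an alert.
# In practice you would first run a baselining period so that normal
# values are learned before alerting is switched on.
seen: dict[str, set[str]] = defaultdict(set)

def observe(field: str, value: str):
    """Return an alert string the first time a value appears for a field."""
    if value not in seen[field]:
        seen[field].add(value)
        return f"ALERT: new value '{value}' for field '{field}'"
    return None

observe("dst_country", "US")         # first sighting, learned as baseline
print(observe("dst_country", "US"))  # None, already seen
print(observe("dst_country", "KP"))  # new value triggers an alert
```

The same detector works unchanged for any of the fields listed above, from SSH server applications to processes launched by root, because it never needs to know what "bad" looks like, only what has not been seen before.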
We still may not be able to block all the "unknown unknowns," but we would at least be combining the best technologies we have right now.
Also see: The Successful CISO: How to Build Stakeholder Trust
About the Author:
Yuval Khalifa, Security Solutions Architect, Coralogix