We are frequently contacted by organizations after they have experienced a data breach. All too often the incident comes as a complete shock, and the only reason they find out is that they are contacted by a third party. We have compiled our top 5 reasons why organizations don’t detect a cyber breach.
Organizations assume it won’t happen to them. Consequently, they don’t have a plan
Far too many people assume that they will never experience a cyber-attack. They believe they aren’t interesting enough, their data isn’t valuable enough, there are bigger and better targets, or, worst of all, that the security solutions they have in place make them bulletproof. Many organizations pay lip service to incident response planning: they download a vanilla incident response plan from the internet, file it in a folder, and feel good that they are compliant with the latest and greatest cyber security framework.
This has to be one of the biggest failures we see within organizations. Assuming that cyber breaches won’t happen to you is not a plan. Organized crime units target businesses to gain access to their systems, their data and their intellectual property. In today’s digital world, data has value, and the bad guys know this. Disorganized threats such as WannaCry and Petya/NotPetya have proven that any type of system or organization can be a target. WannaCry was a ransomware-based attack that was relayed through mail systems, delivered across insecure network segments and infected any system that was unpatched and powered on. WannaCry is a great example of how attackers can now monetize attacks by targeting any type of organization, whether large or small. If you wanted your data back, your only choices were to restore from backup or pay the ransom.
Organizations have to plan to be compromised. With borderless networks encompassing cloud, social and mobile platforms, it is almost impossible to keep data secure at all times. Organizations should think about the different types of threats they may face and build incident response plans to address them. Instead of simply putting these plans in a folder and forgetting about them, organizations need to test them, conducting tabletop exercises and other assurance activities to ensure that the plans are effective and can be executed in a time of need.
Organizations don’t fully know where their data is
Many organizations think that they know where their data is. However, one of the biggest differences between data and physical records is that data can be in more than one place at the same time. It is common for an organization to focus on where they think the data is, protecting it with security controls and processes, while remaining unaware of where that data has leaked to within the network. Today’s systems are often virtualized, and their files are frequently snapshotted or replicated across drive arrays and sites. Additionally, it is not uncommon for systems administrators to take backups of key system data when they perform upgrades or install new applications. These backups and snapshots often find their way onto file shares, users’ machines and even into cloud infrastructure delivered by some of the internet giants.
Organizations that don’t know where all of their data is located will struggle to defend it. They will struggle to detect attacks that target it, and they will be unable to respond when incidents occur. To detect and respond to a cyber attack, an organization must have a full and complete picture, in advance, of where its sensitive data resides.
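One practical way to find stray copies of sensitive files is to hash known sensitive documents and sweep file shares for matching content. The sketch below is purely illustrative (the function names and directory layout are our own invention, not a Nettitude tool) and assumes read access to the shares being swept:

```python
import hashlib
import os

def file_sha256(path, chunk_size=65536):
    """Return the SHA-256 hex digest of a file, read in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as fh:
        for chunk in iter(lambda: fh.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def find_copies(known_hashes, search_roots):
    """Walk the given directories and report any file whose content
    matches a known sensitive file -- i.e. a stray copy, whatever
    it has been renamed to."""
    copies = []
    for root in search_roots:
        for dirpath, _dirnames, filenames in os.walk(root):
            for name in filenames:
                path = os.path.join(dirpath, name)
                try:
                    if file_sha256(path) in known_hashes:
                        copies.append(path)
                except OSError:
                    continue  # unreadable file; skip it
    return copies
```

Because the comparison is by content hash rather than filename, a renamed `payroll_backup.csv` on a departmental share is flagged just as readily as the original.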
Organizations expect that technology alone will detect a breach
Many organizations have bought IDS, IPS and SIEM technology in the hope that technology alone will detect threats within their network. Although technology can certainly assist, it needs to be configured correctly and placed in the right locations to have any chance of spotting threats as they target the organization. Many security vendors build their technology to detect the noisiest network traffic, and it is completely unable to identify malicious traffic disguised as normal internal traffic. SIEM appliances are frequently over-tuned to eliminate false positives, and in doing so they also miss the essential indicators that could point to a threat actor being present within your network.
Technology alone isn’t enough to detect and respond. It requires human intervention and some form of process to act on alerts and take counter-action when faced with a possible security breach. All too often, alerts are ignored: they are auto-archived or filed away for a future day, in the hope that a systems administrator will one day have more time, more focus and more resources to fully assess them and determine the next steps.
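The over-tuning problem is easy to see in miniature. A minimal sketch (the alert fields and severity threshold are illustrative assumptions, not any particular SIEM’s schema): a rule that suppresses everything below a severity cut-off keeps the queue quiet, but the quiet, low-severity events are often exactly the indicators of an intrusion in progress.

```python
from dataclasses import dataclass

@dataclass
class Alert:
    source: str
    severity: int   # 1 (informational) .. 10 (critical)
    message: str

def overtuned_filter(alerts, min_severity=7):
    """A naive tuning rule: suppress everything below a severity
    threshold to reduce false positives. Quiet, but blind."""
    return [a for a in alerts if a.severity >= min_severity]

alerts = [
    Alert("ids", 3, "new admin account created out of hours"),
    Alert("av", 8, "known malware signature blocked"),
    Alert("proxy", 2, "workstation beaconing to a rare domain"),
]

surfaced = overtuned_filter(alerts)
# Only the noisy, already-blocked malware alert survives; the two
# low-severity events -- the actual intrusion indicators -- never
# reach an analyst.
```

The fix is not a cleverer threshold but a process: a human who reviews, correlates and acts on what the technology surfaces.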
For organizations to be able to detect cyber attacks, relying on technology alone is no use. They need a combination of people, process and technology that is well placed, well trained and aligned to the current threat landscape.
Organizations have never done any assurance on their detection
Many years ago, when firewalls were first deployed, organizations trusted that they were secure and assumed that they wouldn’t be breached. Over months and years, mature organizations recognized that they should test the security of their firewalls to ensure that they were filtering in the way expected. Through firewall audits and external penetration tests, organizations gained assurance that their firewalls were delivering the value they expected.
In the world of cyber detection, this assurance activity is frequently lacking. Organizations assume that they will be able to detect attacks because they have bought technology and deployed it at strategic vantage points across their networks. All too often, this detection technology, and the surrounding people and processes involved in detection, undergo no form of assurance activity at all. We think this is wrong, and we are actively evangelizing the need for change.
Organizations need to understand the threat landscape and conduct threat modelling to identify the likely attack paths and the tools, techniques and procedures that threat actors have been seen to use. Detection and response assessments should then be conducted to simulate these threats and determine an organization’s detection capability. And of course, this shouldn’t just focus on internal systems. Organizations need confidence that they have the right tools and processes in place to detect attacks on the cloud services that they consume.
Organizations assume that the threat will look like it comes from an external source
Many organizations monitor their external infrastructure effectively but have poor detection capability internally. The expectation is that the threat is external and will look like malicious traffic.
The reality today is that many threat actors target people: your employees and your colleagues. They compromise their machines through phishing techniques and then move laterally across the network, abusing the inherent trust you have built into your systems, your services and your processes. More and more attacks don’t look like external threats. They look like internal users accessing systems and services in an abnormal manner. If you are not monitoring your internal networks, and you have no ability to distinguish normal from abnormal user behavior, it will be very hard to detect many of the more common current threats.
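Distinguishing normal from abnormal behavior starts with a baseline. A minimal sketch of the idea (the record shapes and user names are hypothetical, and real user behavior analytics is far richer than this): learn which hosts each user normally logs in to, then flag logins to hosts that user has never touched, a crude stand-in for lateral-movement detection.

```python
from collections import defaultdict

def build_baseline(history):
    """history: iterable of (user, host) login records observed
    during a known-good period. Returns, per user, the set of
    hosts that user normally logs in to."""
    baseline = defaultdict(set)
    for user, host in history:
        baseline[user].add(host)
    return baseline

def flag_abnormal(baseline, events):
    """Flag logins to hosts a user has never been seen on before.
    A first-seen host isn't proof of compromise, but it is the
    kind of deviation worth putting in front of an analyst."""
    flagged = []
    for user, host in events:
        if host not in baseline.get(user, set()):
            flagged.append((user, host))
    return flagged

history = [("alice", "ws1"), ("alice", "ws2"), ("bob", "ws3")]
events = [("alice", "ws1"), ("alice", "db-server"), ("bob", "ws3")]
suspicious = flag_abnormal(build_baseline(history), events)
# alice touching db-server for the first time is surfaced;
# her usual ws1 login and bob's ws3 login are not.
```

The point is not the algorithm but the prerequisite: without internal telemetry and a learned picture of "normal", there is nothing to compare an attacker’s behavior against.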
Contact Nettitude today
Nettitude can help you protect your business by identifying the threats and ensuring you look at people, process and technology.