By Joel Snape | Cybersecurity Researcher at Nettitude
The NIST Cybersecurity Framework is voluntary guidance, produced by the National Institute of Standards and Technology (NIST), based on existing standards, guidelines, and practices to help organizations better manage and reduce cybersecurity risk. Beyond risk management itself, it was designed to foster communication about risk and cybersecurity management among both internal and external organizational stakeholders.
At Nettitude, we often base our guidance and solutions around this framework, and there are some key considerations to make when adopting cloud technology. Continue reading to discover why the NIST Cybersecurity Framework is used and how it relates to cloud adoption.
Why should organisations use the NIST framework?
The NIST Framework helps an organization to better understand, manage, and reduce its cybersecurity risks. In addition, it will assist in determining which activities are most important to assure critical operations and service delivery. In turn, that will help to prioritize investments and maximize the impact of each dollar spent on cybersecurity.
By providing a common language to address cybersecurity risk management, it is especially helpful in communicating inside and outside the organization. That includes improving communications, awareness, and understanding between and among IT, planning, and operating units, as well as senior executives. Organizations can also readily use the Framework to communicate current or desired cybersecurity posture with a buyer or supplier.
NIST and cloud integration risk
The following section looks at the five core functions around which the NIST framework is structured: Identify, Protect, Detect, Respond and Recover.
Understanding your assets
The cloud can be a dynamic and flexible environment, and existing methods of maintaining an inventory can struggle to keep pace with it. Traditional asset models built around IP addresses, servers and applications are not flexible enough to capture the complexity of cloud-based services built on the consumption of APIs and services.
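As a minimal illustrative sketch (not from the article, with hypothetical asset names), the difference can be shown by modelling assets as consumed services and APIs, tagged with owner and data sensitivity, rather than as fixed hosts with IP addresses:

```python
from dataclasses import dataclass, field

@dataclass
class CloudAsset:
    name: str          # a managed bucket, function, etc. (hypothetical names below)
    provider: str      # provider label, e.g. "aws"
    service: str       # the consumed service/API, e.g. "s3", "lambda"
    owner: str         # accountable team
    data_class: str    # e.g. "public", "internal", "confidential"
    tags: dict = field(default_factory=dict)

inventory = [
    CloudAsset("customer-uploads", "aws", "s3", "platform", "confidential"),
    CloudAsset("billing-worker", "aws", "lambda", "finance", "internal"),
    CloudAsset("public-site-cdn", "aws", "cloudfront", "web", "public"),
]

# Questions an IP-centric inventory cannot easily answer:
confidential = [a.name for a in inventory if a.data_class == "confidential"]
services_consumed = sorted({a.service for a in inventory})

print(confidential)        # which assets hold sensitive data
print(services_consumed)   # which services the estate actually consumes
```

The point of the sketch is that the inventory key is the service relationship, not the host, so assets that have no stable IP address (serverless functions, managed storage) are still captured.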
Managing identity and access
Applications and services deployed in the cloud can consume resources from multiple vendors or platforms. It’s important to ensure that there is a common identity across your platforms, and that the identity lifecycle is managed as users join, move around and leave your organisation. Cloud services commonly allow for fine-grained role-based access, and this should be leveraged to ensure users only have access to the resources required to do their job.
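To make "fine-grained" concrete, the sketch below shows an illustrative least-privilege policy in AWS IAM's JSON syntax (the bucket name is hypothetical, not from the article): rather than broad `s3:*` access, the role may only read one specific bucket.

```python
import json

# Illustrative least-privilege policy (hypothetical bucket name):
# read-only access to a single bucket instead of blanket s3:* permissions.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "ReadOnlyReportsBucket",
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:ListBucket"],
            "Resource": [
                "arn:aws:s3:::example-reports-bucket",
                "arn:aws:s3:::example-reports-bucket/*",
            ],
        }
    ],
}

print(json.dumps(policy, indent=2))
```

Scoping each role this narrowly means a compromised credential exposes one bucket's contents rather than the whole estate.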
Continuous awareness and training
Cloud environments evolve constantly, with new features and products introduced on a regular basis. This can make it difficult to keep up with current best practices and make the most of the security features available.
Understanding data security requirements
When using a non-private cloud, your data is by definition no longer fully under your control. Understanding the requirements for the data you hold is key to ensuring that you can adequately protect it and meet your ethical, legal and regulatory obligations. Cloud services can be deployed in different geographic regions, and protections depend on the effectiveness of your vendor’s implementation. It is critical that you fully understand how responsibilities are shared between you and your service provider – different providers use different terms, but for AWS, for example, this means mapping Amazon’s ‘Shared Responsibility Model’ to your situation.
Planning for changing network requirements
Moving applications to the cloud will change the traffic flows across your network, and traditional network deployments may not have been designed to accommodate this. Additionally, public-cloud hosted applications are more likely to be exposed to a range of internet-borne attacks, particularly DDoS attacks. As well as denying access, DDoS attacks can cost money through the additional resources consumed, and it’s important to remember that they can have an impact on more than just their intended target (for example, an attack against one customer’s infrastructure may affect others, depending on the provider’s level of mitigation and resource separation).
Maintaining visibility
As services are deployed across platforms and disparate locations, maintaining situational awareness of what is happening becomes more challenging. This is true for operational as well as security monitoring. Ensuring that you have visibility of events within your cloud environments is a key first step to being able to effectively detect compromise.
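A minimal sketch of what that visibility enables, using sample records in the style of a cloud audit log (the events, user names and high-risk action set are hypothetical, not a real provider API): once events are collected centrally, even simple filtering can surface high-risk identity changes.

```python
# Hypothetical audit-log records in a CloudTrail-like shape; in practice
# these would be pulled from your provider's audit/logging service.
HIGH_RISK_ACTIONS = {"CreateAccessKey", "AttachUserPolicy", "DeleteTrail"}

events = [
    {"time": "2023-01-05T10:12:00Z", "action": "ListBuckets", "user": "app-svc"},
    {"time": "2023-01-05T10:13:40Z", "action": "CreateAccessKey", "user": "contractor"},
    {"time": "2023-01-05T10:14:02Z", "action": "DeleteTrail", "user": "contractor"},
]

def flag_high_risk(events):
    """Return the subset of events whose action is in the high-risk set."""
    return [e for e in events if e["action"] in HIGH_RISK_ACTIONS]

alerts = flag_high_risk(events)
for e in alerts:
    print(f"{e['time']} {e['user']} performed {e['action']}")
```

Real detection pipelines are far richer, but none of that richness is possible without the underlying event visibility the paragraph above describes.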
Efficient incident response
The scale, flexibility and dynamic nature of the cloud can make responding when something goes wrong difficult – especially given the challenges around effective monitoring. If response is not considered up-front as part of a system’s design or migration into the cloud, it is likely to be much slower and costlier.
Restoration of systems or services
Although many cloud features can make it easier to restore affected services at pace, consuming services from other providers (particularly SaaS) may mean that your organisation does not control all aspects of incident recovery. For example, you may be dependent on a SaaS or PaaS provider to restore access before you can start using an application again. Thorough modelling of business impact can be difficult, but it is essential to ensure you understand how you will continue operating if outages occur.
For more information on cloud migration and cloud technology, please don’t hesitate to get in touch with the team. Additionally, keep your eyes peeled for the next blog post in the series.
View the full research report here.