Preventing possible data breaches often entails deploying new data loss prevention tools or infrastructure vulnerability management solutions, but there is still something missing from this response. The reason breaches are so catastrophic is that the data being leaked is valuable. It isn't the “network” that leaks, so data-centric security shouldn't start from the network.

It is more attainable to secure specific data stores and files than it is to throw up defenses “around” the whole infrastructure. The truth is that most data stores do not contain sensitive information. So, if we can keep sensitive data in a small number of secured data stores, enterprises will be much more secure. Focusing on the data is a better way to prepare for a compromised environment.

What does it take to make this a reality? Organizations need a way to find, classify, and remediate all data vulnerabilities. Here are the five steps to adopting a data-centric security approach:

Discover shadow data and build a data asset inventory

You can’t protect what you don’t know you have. This is true of all organizations, but especially cloud-first organizations. Cloud architectures make it easy to replicate or move data from one environment to another. It could be something as simple as a developer moving a data table to a staging environment, or a data analyst copying a file to use elsewhere. Regardless of how the shadow data is created, finding it needs to be priority number one.
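As a minimal sketch of what automated discovery might look like, the snippet below walks a directory tree, fingerprints data files by content hash, and flags identical files that appear in more than one location as potential shadow copies. The extension list is an illustrative assumption; a real cloud discovery tool would enumerate buckets, databases, and data stores through provider APIs rather than a local filesystem.

```python
import hashlib
import os
from collections import defaultdict

# Hypothetical set of extensions treated as "data files" for this sketch.
DATA_EXTENSIONS = {".csv", ".json", ".sql", ".parquet", ".db"}

def build_inventory(root):
    """Walk a directory tree and index data files by content hash.

    Returns (inventory, shadow): inventory maps each content hash to the
    paths holding that content; shadow keeps only hashes that appear in
    more than one location, i.e. likely shadow copies.
    """
    by_hash = defaultdict(list)
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            if os.path.splitext(name)[1].lower() not in DATA_EXTENSIONS:
                continue
            path = os.path.join(dirpath, name)
            with open(path, "rb") as f:
                digest = hashlib.sha256(f.read()).hexdigest()
            by_hash[digest].append(path)
    inventory = dict(by_hash)
    shadow = {h: paths for h, paths in inventory.items() if len(paths) > 1}
    return inventory, shadow
```

Hashing content rather than comparing filenames is what catches the staging-environment and copied-file cases described above: the copy may be renamed, but its bytes give it away.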

Classify the most sensitive and critical data

Many organizations already use data tagging to classify their data. While this often works well for structured data like credit card numbers, it’s important to remember that ‘sensitive data’ includes unstructured data as well. This includes company secrets like source code and intellectual property, which can cause as much damage as customer data in the event of a breach.
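A toy illustration of how a classifier might cover both kinds of data: regex detectors for structured identifiers, plus simple keyword hints for unstructured assets like source code. The patterns below are deliberately simplified assumptions; a production classifier would use many more detectors and validate matches (for example, a Luhn check on candidate card numbers).

```python
import re

# Hypothetical, simplified detectors for structured sensitive data.
PATTERNS = {
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

# Crude hints that a blob is source code (unstructured IP counts too).
SOURCE_CODE_HINTS = ("def ", "class ", "#include", "function(")

def classify(text):
    """Return the set of sensitivity tags found in a blob of text."""
    tags = set()
    for tag, pattern in PATTERNS.items():
        if pattern.search(text):
            tags.add(tag)
    if any(hint in text for hint in SOURCE_CODE_HINTS):
        tags.add("source_code")
    return tags
```

The point of the `source_code` tag is exactly the one made above: a classifier that only looks for well-formed identifiers like card numbers will walk right past leaked intellectual property.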

Prioritize data security according to business impact

We’re investing time in finding and classifying all this data for a simple reason: some types of data matter more than others. We can’t afford to be data agnostic; we should remediate vulnerabilities based on the sensitivity of the data at risk, not the technical severity of the alert. Differentiating between the signal and the noise is critical for data security. Ignore the severity rating of an infrastructure vulnerability if there’s no sensitive data at risk.
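This triage rule can be sketched in a few lines. The sensitivity weights and alert fields below are illustrative assumptions, not a real scoring standard: alerts touching no sensitive data are dropped outright, and the rest are ordered by data sensitivity first and technical severity second.

```python
# Hypothetical sensitivity weights per data classification.
SENSITIVITY_WEIGHT = {"none": 0, "internal": 1, "confidential": 3, "restricted": 5}

def prioritize(alerts):
    """Order alerts by the sensitivity of the data at risk, breaking ties
    with technical severity. Alerts on stores with no sensitive data are
    filtered out entirely, per the rule above.
    """
    actionable = [a for a in alerts if SENSITIVITY_WEIGHT[a["data_class"]] > 0]
    return sorted(
        actionable,
        key=lambda a: (SENSITIVITY_WEIGHT[a["data_class"]], a["severity"]),
        reverse=True,
    )
```

Note the deliberate inversion of the usual queue: a low-severity finding on a restricted data store outranks a critical-severity finding on a store holding nothing sensitive.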

Continuously monitor data access and user activity

Data is extremely valuable company property. When you give employees physical company property, like a laptop or even a car, they know they’re responsible for it. But when it comes to data, too many employees see themselves as mere users of it. This attitude needs to change. Make all employees accountable for their data, and continuously monitor data access and user activity so that accountability can actually be enforced. This should not be the security team’s problem or responsibility alone.

Shrink the data attack surface by reducing the organization’s data sprawl

Beyond remediating according to business impact, organizations should reduce the number of sensitive data stores by removing unnecessary sensitive data within them. This can be done via redaction, anonymization, encryption, and similar techniques. By limiting the number of sensitive data stores, security teams effectively shrink the attack surface by reducing the number of assets worth attacking in the first place.

In conclusion, the most important thing to understand is that data travels, so its security posture must travel with it. If a sensitive data asset has a strict security posture in one location in the public cloud, it must maintain that posture wherever it goes. A Social Security number is always valuable to a threat actor; it doesn’t matter whether it leaks from a secured production environment or from a forgotten data store that no one has accessed in two years. Only by appreciating this context will organizations be able to ensure that their sensitive data is always secured properly.

Written by Yoav Regev, co-founder and CEO of Sentra

Credit: Yifat Golan