Imagine that you have a large monorepo code base running as a monolith application. This application is backed by a database. Some of the data in the database is sensitive, so you want to restrict access to it to reduce the risk of developers mishandling it. Essentially, you only want to allow specific, approved chunks of code to access this data. What would be a good way to do that?

A couple ideas I had are:

  1. A lint rule that prevents unrestricted imports of the data access objects (see the first sketch after this list). The downside is that it does nothing at runtime.
  2. Vending the sensitive data via a method that requires an API key (second sketch below). This could be used in conjunction with the import lint rule. The downside is that in a monolith/monorepo it’s probably pretty easy to get at any auth material, so developers might end up sharing API keys to gain access.
  3. Having developers register their callsites and then using an inspection library to verify that the callsite requesting sensitive data is registered (third sketch below). This one seems very fragile and tightly coupled.
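For idea 1, a concrete starting point in a TypeScript monorepo would be ESLint’s built-in `no-restricted-imports` rule, with an override per approved module. A minimal sketch; the paths below are hypothetical placeholders:

```js
// .eslintrc.js — a sketch assuming an ESLint-based TypeScript monorepo.
module.exports = {
  rules: {
    // Block direct imports of the sensitive data access objects everywhere...
    'no-restricted-imports': ['error', {
      patterns: [{
        group: ['**/dao/sensitive/**'],
        message: 'Access sensitive data only through an approved module.',
      }],
    }],
  },
  overrides: [{
    // ...then exempt the call sites that have been approved.
    files: ['src/billing/**'],
    rules: { 'no-restricted-imports': 'off' },
  }],
};
```

The override list then doubles as the approval record, so it’s worth requiring a second reviewer on any change to this file.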
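For idea 2, the vending method might look like the sketch below; every name in it is made up. Storing only key hashes at least keeps the raw keys out of the code base, though, as noted, anyone who can run the monolith can probably capture a key anyway:

```ts
// sensitiveVault.ts — a hypothetical sketch of idea 2.
import { createHash } from 'node:crypto';

// SHA-256 hashes of issued keys, so the keys themselves never live in code.
const APPROVED_KEY_HASHES = new Set([
  '9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08', // placeholder
]);

export function vendSensitiveData(apiKey: string, recordId: string): string {
  const hash = createHash('sha256').update(apiKey).digest('hex');
  if (!APPROVED_KEY_HASHES.has(hash)) {
    throw new Error(`Unapproved caller requested sensitive record ${recordId}`);
  }
  return lookupSensitiveRecord(recordId);
}

function lookupSensitiveRecord(recordId: string): string {
  return `record:${recordId}`; // stand-in for the real data access object
}
```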
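For idea 3, a deliberately naive sketch makes the fragility visible: parsing `new Error().stack` ties the check to exact file paths and frame depths, which renames, bundlers, source maps, and async boundaries all break. The registry and frame index below are assumptions:

```ts
// callsiteRegistry.ts — a naive sketch of idea 3, not a recommendation.
const REGISTERED_CALLSITES = [/src\/billing\/invoices\.ts/]; // hypothetical

export function assertRegisteredCallsite(): void {
  // frames[0] is "Error", frames[1] is this function, frames[2] is the
  // vending function that called us, frames[3] is the call site to vet.
  const frames = (new Error().stack ?? '').split('\n');
  const caller = frames[3] ?? '';
  if (!REGISTERED_CALLSITES.some((pattern) => pattern.test(caller))) {
    throw new Error(`Unregistered callsite requested sensitive data: ${caller}`);
  }
}
```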

Any other ideas, or potential enhancements to these?

There are always going to be workarounds that I don’t think can be prevented, like developers who already have access sharing the data for other use cases. I guess you’d have to rely on some developer trust there.

Lastly, this all assumes that developers can’t easily access the database directly.

Thanks everyone.


Ewan’s answer is a good start, but I’d like to elaborate on it far more than a comment would allow.

First, developing and testing an application shouldn’t require access to this sensitive data in the first place. If your application needs data to function, look at generating test data that is representative of your sensitive production data but doesn’t contain the actual sensitive content. This lets you give source-code access to people you trust as developers while keeping tighter control over the data.
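A sketch of what “representative but not sensitive” can look like, assuming a hypothetical `Customer` shape; the seeded generator keeps fixtures reproducible across test runs, and the 000 area number guarantees the SSNs are never real:

```ts
// testData.ts — a minimal test data generator; the Customer shape is hypothetical.
interface Customer {
  id: number;
  name: string;
  ssn: string; // production format, guaranteed-fake content
}

// Tiny seeded PRNG (mulberry32) so generated fixtures are deterministic.
function mulberry32(seed: number): () => number {
  return () => {
    seed = (seed + 0x6d2b79f5) | 0;
    let t = Math.imul(seed ^ (seed >>> 15), 1 | seed);
    t = (t + Math.imul(t ^ (t >>> 7), 61 | t)) ^ t;
    return ((t ^ (t >>> 14)) >>> 0) / 4294967296;
  };
}

export function makeCustomers(count: number, seed = 42): Customer[] {
  const rand = mulberry32(seed);
  return Array.from({ length: count }, (_, i) => ({
    id: i + 1,
    name: `Customer ${i + 1}`,
    // Area number 000 is never issued, so these can't be real SSNs.
    ssn: `000-${10 + Math.floor(rand() * 90)}-${1000 + Math.floor(rand() * 9000)}`,
  }));
}
```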

Second, you want to manage how code gets committed to your repository. You don’t want a single developer to be able to commit malicious code that extracts sensitive data. There are several techniques to scan for malicious code, let humans review it, and block it from being integrated and deployed to your production environment.
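One concrete mechanism on GitHub or GitLab is a CODEOWNERS file combined with a required-review branch protection rule, so nothing touching the sensitive module merges without sign-off from a designated team. The paths and team names below are placeholders:

```
# .github/CODEOWNERS — sketch; paths and teams are hypothetical.
# Any pull request touching the sensitive data access layer requires
# approval from the security team before it can merge.
/src/dao/sensitive/  @acme/security-team
/.eslintrc.js        @acme/security-team  # guards the lint exemption list too
```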

Third, you want to secure your production configuration. Use vaults or secret managers to deploy secrets such as the database usernames and passwords your application uses to connect to production databases. Minimize who has access to production configuration and infrastructure, review that access regularly, and keep good logs of access and actions taken.
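As an illustration, with AWS Secrets Manager (SDK v3) the application resolves its credentials at startup, so nothing sensitive is ever committed; the secret name below is hypothetical:

```ts
// dbCredentials.ts — a sketch using AWS Secrets Manager; the secret
// name "prod/monolith/db" is a hypothetical placeholder.
import {
  SecretsManagerClient,
  GetSecretValueCommand,
} from '@aws-sdk/client-secrets-manager';

interface DbCredentials {
  username: string;
  password: string;
}

export async function loadDbCredentials(): Promise<DbCredentials> {
  const client = new SecretsManagerClient({});
  const result = await client.send(
    new GetSecretValueCommand({ SecretId: 'prod/monolith/db' }),
  );
  if (!result.SecretString) {
    throw new Error('Database secret is missing or binary');
  }
  return JSON.parse(result.SecretString) as DbCredentials;
}
```

Who can read that secret then becomes an IAM policy question, which is exactly the kind of access you can review and log.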

Fourth, you want to secure your infrastructure. You can reduce the risk of malicious changes or malware with strong firewall rules that prevent your application from initiating outgoing connections that could exfiltrate data, and that make it harder for bad actors to connect to your systems.

Fifth, you want application-level security that monitors for suspicious usage patterns. You also want to reduce, and ideally eliminate, developer accounts in your production infrastructure. Limit the production environment holding your sensitive data to the end users who own and control that data.
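At the application level, forcing every sensitive read through a single audited choke point gives you the usage trail to monitor. A sketch with hypothetical names throughout:

```ts
// sensitiveAccess.ts — sketch of an audited choke point; all names hypothetical.
interface AuditEvent {
  caller: string;   // approved module identifier
  purpose: string;  // human-readable justification, reviewed in audits
  recordId: string;
  at: string;
}

const auditLog: AuditEvent[] = []; // in practice, ship to your log pipeline/SIEM

export function readSensitiveRecord(caller: string, purpose: string, recordId: string): string {
  auditLog.push({ caller, purpose, recordId, at: new Date().toISOString() });
  return fetchFromDatabase(recordId);
}

function fetchFromDatabase(recordId: string): string {
  return `record:${recordId}`; // stand-in for the real query
}
```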


It’s standard practice NOT to allow developers access to live data of any kind.

I’m not saying it’s always strictly enforced, but it’s the “best practice” guideline in many certifications/regulatory regimes.

This is achieved by storing the live database connection string/API keys/passwords in the deployment system rather than in the code base.
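In practice that means the code base only ever references an environment variable, and the deployment system supplies the real value. A minimal sketch (DATABASE_URL is a common convention, not something from the question):

```ts
// db.ts — sketch: the connection string exists only in the deploy environment.
export function getConnectionString(): string {
  const url = process.env.DATABASE_URL; // injected by the deployment system
  if (!url) {
    throw new Error('DATABASE_URL is not set; refusing to start');
  }
  return url;
}
```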

Now, if you mean that you want only certain parts of the application to be able to show certain data, that’s a different problem, but it should be fully covered by your testing and compliance sign-off?

By having code reviews, testing, audits, and various compliance sign-offs performed by different people, you protect yourself and your company against internal fraud of various kinds.

It seems to me that if you have “PublicApp” accessing “PrivateDatabase”, it will need a user such as “PublicAppUser”. That user will show up in your biannual security audit as having access to “PrivateDatabase”, the audit will fail, and you will change the code.