This blog is part of a series describing how to properly assess your data security needs. This installment addresses the mindset needed to design useful protections and how to look at the extent to which various protection types mitigate various risks. Despite already breaking this topic into three separate blogs (the first two being Data Security: Critical Knowledge and Data Security: Mechanisms for Malicious Loss), this final topic is still so robust that we're breaking it down even further to cover three areas: Designing Protection, Design Approaches and Implementations.
Although I’ll mention a few features or technologies that are useful in implementing security, I have no illusions that this blog will make you an expert. As previously mentioned, the subject is just too vast. What I can do is make you feel more confident that your plans are broad and deep enough for your needs or give you arguments for seeking an expert and enough information to tell an expert from most charlatans.
Designing protection is a bit like erecting a shelter. The range of possibilities is as diverse as the difference between putting up an umbrella and building a nuclear bunker, and the design must take into consideration whether you are at the South Pole or the equator, in a desert or on the ocean. In most cases you will build something far from any of these extremes, but the range of considerations is clearly just as diverse.
Part one talked about types of loss and the importance of assigning a value to your data. Part two talked about the various ways your data can be misused, altered or destroyed. To design the most cost-effective protection you must merge these two lists, prioritize the result and design protections that focus on the needs that your list makes clear.
No answers fit all situations
A server that hosts a free blog-creation service is not at risk of being hacked for sensitive information. In fact, if identity management is handled by a third party, such a system may hold no confidential information whatsoever. But this doesn't mean such a system would not need to be secure. The system owners might consider it critical that no one other than the owner of a given blog space be able to add, alter or delete the content within that space. It may be so important to their site model that even their system administrators should not have the ability to make such changes. Clearly security is still highly important, but in a completely different way from that of a payroll company, where even authorized users have access to perhaps only a tiny slice of their own company's data and it is critical that such users can't even tell which other companies use the service, much less see another company's data. Meanwhile, Customer Service may need efficient access to all the data belonging to any company, and may even need to update that data on a company's behalf (with a proper audit trail, of course!).
Although there are no one-size-fits-all answers, there are a few practices that come close to being universal: multiple barriers, segmentation and surface area reduction are among the biggest. The ideas are simple: try to ensure that important data has more than one barrier to loss, and keep your areas of vulnerability as small as possible.
Multiple barriers
A mainstay of most serious protection schemes is multiple barriers, though not every environment warrants such protection. Given that any specific protection you implement will have a weakness, multiple barriers ensure your data remains safe even if one of your protections fails.
In some cases multiple barriers are necessary to provide a decent level of protection. In other cases the drawbacks require a more careful weighing of the pros and cons.
The main drawbacks to most multi-barrier systems are that they can make access more cumbersome, slow processing and operations in general, and add cost, sometimes excessive, to maintaining data. They can also introduce additional points of failure to a system.
Segmentation
Whether your need is to keep confidential data confidential or to protect data from unauthorized modification, segmentation can be an important tool in your arsenal. A sufficiently determined attack WILL eventually succeed unless the attacker is caught before achieving that success. Moreover, some environments simply can't implement strong enough protections, or can't trust the implementation of the protections that exist. In either case the goal of segmentation is to organize or otherwise partition your data such that if protections are breached, only a small amount of data is vulnerable. The idea is to ensure that a breach in one area can't be parlayed into additional access.
As a really basic example, if you discover the password that lets you edit my Facebook data, that doesn't help you in the slightest in modifying my boss's data, even if our data sits in the very same tables on the server. The partitioning concept extends to encryption, where groups of data are encrypted under different keys so that discovering one key doesn't give you access to any other data.
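To make the idea concrete, here is a minimal sketch of per-group key segmentation, assuming the Python 'cryptography' package; the tenant names and data are invented for illustration. Each group's records are encrypted under that group's own key, so a stolen key exposes only that group's data.

```python
# Minimal sketch of segmentation by key, assuming the 'cryptography' package.
from cryptography.fernet import Fernet, InvalidToken

# Each tenant gets its own encryption key (hypothetical tenant names).
tenant_keys = {name: Fernet.generate_key() for name in ("acme", "globex")}

def encrypt_row(tenant: str, row: bytes) -> bytes:
    return Fernet(tenant_keys[tenant]).encrypt(row)

def decrypt_row(tenant: str, token: bytes) -> bytes:
    return Fernet(tenant_keys[tenant]).decrypt(token)

acme_token = encrypt_row("acme", b"salary=50000")

# A stolen 'globex' key is useless against acme's rows:
try:
    Fernet(tenant_keys["globex"]).decrypt(acme_token)
except InvalidToken:
    print("globex key cannot read acme data")
```

Even if both tenants' rows live in the same table, compromising one key leaves every other tenant's data opaque.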
Segmentation can be effective in both directions. Imagine three people in company A where each has one-third of an encryption key to be used between their company and company B. They each deliver their piece of the key to three different people in company B. Both companies have encryption software that accepts keys in multiple parts. The link is established with no two people having the ability to compromise the data connection.
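Here is a minimal sketch of one way to split such a key into three shares, using XOR-based secret sharing rather than the literal 'one-third of the bytes each' split described above; the XOR variant is stronger because any two shares alone reveal nothing about the key. The helper names are illustrative.

```python
# Minimal sketch of splitting a key into three shares, all of which
# are required to reconstruct it (XOR secret sharing).
import secrets

def split_key(key: bytes) -> tuple[bytes, bytes, bytes]:
    share1 = secrets.token_bytes(len(key))     # random share
    share2 = secrets.token_bytes(len(key))     # random share
    # The third share is chosen so that XOR of all three recovers the key.
    share3 = bytes(k ^ a ^ b for k, a, b in zip(key, share1, share2))
    return share1, share2, share3

def combine(share1: bytes, share2: bytes, share3: bytes) -> bytes:
    return bytes(a ^ b ^ c for a, b, c in zip(share1, share2, share3))

key = secrets.token_bytes(32)
parts = split_key(key)            # hand one part to each of the three couriers
assert combine(*parts) == key     # company B reassembles the key
```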
Often such a scheme is combined with the multiple-barriers concept. The secret keys that were exchanged are only ever used to initialize communications with a working key. This working key is random and can change as often as every minute, or even with every message. A protocol that includes a field saying 'use this key for the next message' virtually eliminates the possibility of the data-stream key being usefully decrypted, since every key is used only once. If the connection is periodically re-synched, the chain is periodically broken, so even if a working key is guessed, the stretch of data that could then be decrypted lasts only until the next re-sync.
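The sketch below, again assuming the 'cryptography' package, illustrates the 'use this key for the next message' idea: each message is encrypted under the current working key and carries a fresh key for the next message. The framing (a '|' separator, which cannot appear in a base64-encoded Fernet key) and the function names are made up for illustration.

```python
# Minimal sketch of per-message working-key rotation.
from cryptography.fernet import Fernet

def send(messages: list[bytes], initial_key: bytes) -> list[bytes]:
    key = initial_key
    wire = []
    for msg in messages:
        next_key = Fernet.generate_key()        # fresh key for the NEXT message
        payload = next_key + b"|" + msg         # 'use this key next' field
        wire.append(Fernet(key).encrypt(payload))
        key = next_key                          # rotate; old key is never reused
    return wire

def receive(wire: list[bytes], initial_key: bytes) -> list[bytes]:
    key = initial_key
    out = []
    for token in wire:
        payload = Fernet(key).decrypt(token)
        key, _, msg = payload.partition(b"|")   # pick up the next key
        out.append(msg)
    return out

k0 = Fernet.generate_key()                      # stands in for the exchanged secret
stream = send([b"first", b"second", b"third"], k0)
assert receive(stream, k0) == [b"first", b"second", b"third"]
```

Guessing any single working key here exposes at most the remainder of one chain, and a periodic re-sync from the exchanged secret cuts even that short.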
Surface area reduction
This concept can apply to every level of your business, though any given environment may conclude that applying it at a particular level is not cost effective. It is, however, very rare that the concept is not useful somewhere in a data protection scheme.
The most talked-about form of surface area reduction is keeping the amount of data accessible to a given credential at the minimum necessary. This doesn't just go for IDs that have direct access to data; it must extend to any account that can be leveraged. Any time an account can read, update or otherwise manipulate a part of your system that it has no business need to access, that credential exposes a larger surface area than it needs to.
This problem can be viewed from both directions. For example, if during creation of your SQL Server instance you select one of the presented account defaults, 'Local System', you have actually given the instance local administrator permissions when it really only needs read/write access to the directories that hold its data. Now consider what might happen if an ID that had very limited access to one database in the instance found a way (perhaps as blatant as xp_cmdshell being available) to cause SQL Server to perform an arbitrary operation. The authority available to such a person would immediately be elevated to that of a system administrator. Such authority could be leveraged to gain owner-level access to all the databases in the instance. In addition to reading or modifying data, it might be used to compromise audit trails, encryption keys, etc.
Now, from the other direction, consider the same problem in a test system where the data at risk is minimal. Because other services, perhaps even another database instance, may run under the Local System account, an otherwise authorized database user might leverage the instance's authority to manipulate those services. It might even be parlayed into access to another system.
Keep the number of interfaces into your system and to your data at the minimum necessary. This means disabling interfaces, ports or protocols that you have no business need to keep active. It also means avoiding the use of interfaces and protocols that are overly vulnerable when providing access to sensitive data or systems that can be leveraged to access sensitive data.
For example, using Wi-Fi without an encrypted tunnel dramatically increases your exposure to network traffic sniffing compared to a hard-wired network, which for practical purposes requires physical access to be sniffed. While some Wi-Fi encryption schemes are fairly secure, the security is only as good as the key that protects it, and the bigger the network, the more opportunities there are for the key to be discovered or disclosed.
A more blatant example is using a wireless keyboard: every time you type a password, you quite literally transmit your keystrokes to anyone in range who cares to record them.
Like interfaces, systems can be restricted. Most firewalls can limit traffic to and from a given system. This mechanism is rarely used as a primary protection, but it makes a very effective complementary one: if I limit data access to specific machines, then even if credentials are compromised, access will be refused unless the request appears to originate from an authorized source.
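As a toy illustration of the idea at the application level (a real deployment would normally enforce this at the firewall), the following sketch refuses connections from unlisted machines before any credential check happens; the addresses and port are invented.

```python
# Minimal sketch of restricting access by source machine.
import socket

ALLOWED_SOURCES = {"10.0.0.5", "10.0.0.6"}     # hypothetical trusted hosts

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("0.0.0.0", 9000))
server.listen()

while True:
    conn, (source_ip, _port) = server.accept()
    if source_ip not in ALLOWED_SOURCES:
        conn.close()        # refused even if the caller holds valid credentials
        continue
    # ...normal credential checks and request handling would go here...
    conn.sendall(b"hello, trusted host\n")
    conn.close()
```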
Like systems and interfaces, people should not have more authority than they need. This is not synonymous with limiting the access a person's credential grants, because a person typically has access to many credentials and interfaces. These include not only the means to read or update company data but also the means to copy or otherwise transmit that data off premises. Copy or transmission mechanisms can include anything from direct upload of electronic data to the web or a thumb drive, to carrying away hardcopy reports or photos of screens. Just from a 'read' standpoint, when your business operations consist of multiple disparate systems, it can become difficult to keep track of the effective surface area that each person represents.
From a surface area perspective, as the protections you can automate become tighter, people automatically become the weakest link. Moreover, there are cases where accidental and deliberately malicious modifications can be difficult to tell apart, yet the results can be similar.
One of the most effective protection mechanisms against human failure is 'Separation of Duties', or dual control. The process usually takes the form of an approval process or a matching-operation process. Either or both of these mechanisms can require more than two people to be involved in effecting a change, but usually the minimum is two. These types of protections can be as effective against accidental changes as against malicious ones.
Separation by approval
To be effective, an approval process must be automated in such a way that the duties are truly separated. For example, preventing a programmer from installing changes to production is only half of the separation. Requiring that such a person go through a system administrator only protects you from a malicious programmer; the system administrator must not have the authority to install an arbitrary update either. True separation requires that the administrator's authority be limited to installing changes created by another person.
On some modern systems, support for this is built into the OS or DBMS: the operating system or database management system implements a feature whereby it will only execute digitally signed programs or stored procedures. In this arrangement the database or system administrator has no access to the signing certificate or its keys, and those who can sign have no authority to install.
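A minimal sketch of the signed-artifact pattern, assuming the Python 'cryptography' package: the developer role holds the signing key, while the installer role holds only the verification key and refuses anything unsigned or altered. The artifact contents and function names are illustrative.

```python
# Minimal sketch of separation of duties via digital signatures.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# Developer side: holds the private key and signs the release artifact.
signing_key = Ed25519PrivateKey.generate()
artifact = b"-- contents of the stored procedure or program --"
signature = signing_key.sign(artifact)

# Administrator side: holds only the public key; can install but not sign.
verify_key = signing_key.public_key()

def install(artifact: bytes, signature: bytes) -> None:
    try:
        verify_key.verify(signature, artifact)
    except InvalidSignature:
        raise PermissionError("refusing to install unsigned or altered artifact")
    print("signature valid; proceeding with install")

install(artifact, signature)
```

Neither role alone can introduce an arbitrary change: the signer cannot install, and the installer cannot forge a signature.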
Less sophisticated but substantially effective mechanisms can be constructed by removing or otherwise disabling the local ports that would allow mounting a drive on a server, and by limiting the authority of a server administrator or DBA to 'see' public locations on the network file system. Only authorized persons could place scripts or software in those visible locations, and the administrator would not have that authority. The administrator would, however, be the only one who could execute or install items so placed.
Separation by matching
When it comes to protecting against errant or malicious updates, an approval process can work; however, it is often harder to ensure data is valid with this approach. Accidental inaccuracies are often more likely than malicious ones, and approval mechanisms are less effective at preventing accidental errors than they are at preventing malicious ones.
To this end, redundant entry can be used. This mechanism appears most often when setting a new password: the same person enters the data twice so that an accidental mistype can't set the password to an unknown value. When multiple people are required to enter data before it is considered valid, you not only prevent accidental errors but also make deliberately false data entry more difficult.
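A minimal sketch of the multi-person variant: an update commits only when two different users independently enter matching values. The field names, users and helper functions are invented for illustration.

```python
# Minimal sketch of separation by matching (dual entry).
import hmac

pending: dict[str, tuple[str, str]] = {}       # field -> (entered_by, value)

def enter(field: str, value: str, user: str) -> bool:
    """Record an entry; apply it only when a second, different user matches it."""
    if field not in pending:
        pending[field] = (user, value)
        return False                            # waiting for the second entry
    first_user, first_value = pending[field]
    if user == first_user:
        raise PermissionError("second entry must come from a different user")
    del pending[field]
    # compare_digest avoids leaking where the two values differ
    if hmac.compare_digest(first_value, value):
        apply_update(field, value)
        return True
    raise ValueError("entries do not match; update rejected")

def apply_update(field: str, value: str) -> None:
    print(f"committing {field} = {value!r}")

enter("payee_account", "GB29NWBK60161331926819", user="alice")
enter("payee_account", "GB29NWBK60161331926819", user="bob")   # matches; commits
```

The same skeleton, with the different-user check removed, covers the single-person double-entry case such as password confirmation.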