When it comes to your data security, no matter how sound your plans are, everything still comes down to the implementation. As part of a series of blog posts on how to properly assess your data security needs, this installment specifically addresses implementations. In previous posts we covered Designing Protection and Design Approaches.
Ultimately, data security is the identification and study of the weakest links in a protection scheme. Most people can implement a tool or procedure, but the other half of the equation is understanding each implementation's weaknesses, usually in the hope of compensating for them with other protections.
Authentication systems are nearly ubiquitous, but they are not all created equal; each has its own strengths and weaknesses. For example, NTLM does not allow delegation but works even when the client is not part of the domain, while Kerberos allows delegation (and is generally considered more secure) but requires that authentication happen within the domain.
Regardless of the authentication type used, the ability to perform a given task may not be controlled by that authentication. For example, row- or even table-level access controls in a database can, without other protections, be circumvented by reading the underlying file system, reading a backup, sniffing the network for traffic containing the desired data, scraping the server's memory for copies of that data, and so on.
When it comes to authenticating, passwords and biometric tokens such as fingerprints are common mechanisms. As it turns out, most biometric data is fairly easy to gather and replicate, and once a breach is detected this form of authentication can't really be changed. The main problem with traditional passwords is making them hard to guess yet easy to remember. If users are forced to change passwords frequently, maintain several passwords, or use overly complex passwords, it is inevitable that some users will write them down, making discovery much easier. Once a password is known, it is often easy to guess the pattern being used when it must be changed, so unless the breach is detected and the pattern changed, an enforced periodic password change may be nearly useless.
Multiple authentication barriers
Multiple levels of authentication have their own pros and cons. If, for example, to get into your system from the web you must first get past a firewall sign-in and only then authenticate onto the network, the security implications depend on how the firewall sign-in is implemented.
If the firewall essentially grants access to a domain authentication prompt, this is a far less secure implementation than one where a given firewall credential grants access to only a small class of IDs. The more people who can get past a given barrier, the more likely an attacker will find a weak link, and it doesn't matter who made the breach possible: once the barrier is compromised, any domain ID is reachable. With a smaller surface area behind each access point, the potential for damage shrinks. But the pattern doesn't last: if each user must sign in separately to both authentication systems, the chance that the same password is used in both goes up, and the extra protection can disappear.
While some algorithms appear to be stronger than others, there are places for weakness beyond the algorithm itself. Many algorithms show vulnerabilities when keys that meet certain criteria are used, because certain classes of keys make the resulting cryptogram open to attack. A large, well-known example involves DES: it has a small set of published weak (and semi-weak) keys whose round subkeys are all identical, making the cipher vulnerable to attacks that don't require brute force. Given that you use a strong algorithm and software that screens out weak key classes, the biggest remaining vulnerability becomes protection of the decryption key.
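As an illustration, screening for the four published DES weak keys takes only a few lines. The `is_weak_des_key` helper below is hypothetical, written for this post; mature crypto libraries perform this check (and reject semi-weak keys) on their own.

```python
# The four published DES weak keys: each expands to sixteen identical
# round subkeys, so encrypting twice with the same key returns the
# plaintext, which enables attacks that don't require brute force.
DES_WEAK_KEYS = {
    bytes.fromhex("0101010101010101"),
    bytes.fromhex("fefefefefefefefe"),
    bytes.fromhex("e0e0e0e0f1f1f1f1"),
    bytes.fromhex("1f1f1f1f0e0e0e0e"),
}

def is_weak_des_key(key: bytes) -> bool:
    """Return True if the 8-byte key is one of the four DES weak keys."""
    if len(key) != 8:
        raise ValueError("DES keys are 8 bytes")
    return key in DES_WEAK_KEYS

print(is_weak_des_key(bytes.fromhex("0101010101010101")))  # True
print(is_weak_des_key(bytes.fromhex("0123456789abcdef")))  # False
```

The same pattern applies to any cipher with known bad key classes: generate a candidate key, test it against the published list, and regenerate on a match.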
In many instances key protection is handled by hardware or software that insulates users from ever needing direct access to encryption keys. A simple approach is to have software generate a key no one ever sees and place it where others have no authority to retrieve it, storing it only after encrypting it with a password that only authorized users know. Now knowledge of any two of the three pieces (the encrypted data, the stored key, and the password) does not help you gain the third, and without all three the data cannot be decrypted. In addition, as a prior example showed, the key or the password or both can be further subdivided to increase the number of people or access mechanisms required to recover the key that decrypts critical data.
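A toy sketch of that scheme, using only the Python standard library: a random data key is generated, then wrapped with a pad derived from the password via PBKDF2 before being stored. The function names and the XOR wrap are illustrative assumptions for this post; a production system would use an authenticated key-wrap construction such as AES-KW, not XOR.

```python
import hashlib
import secrets

def wrap_key(data_key: bytes, password: str, salt: bytes) -> bytes:
    # Derive a key-encryption key from the password (PBKDF2-HMAC-SHA256).
    kek = hashlib.pbkdf2_hmac("sha256", password.encode(), salt,
                              200_000, dklen=len(data_key))
    # Toy wrap: XOR the data key with the derived pad. Illustrative only;
    # real systems use an authenticated wrap such as AES-KW.
    return bytes(a ^ b for a, b in zip(data_key, kek))

unwrap_key = wrap_key  # XOR is its own inverse

# The data key is generated randomly and never shown to anyone.
salt = secrets.token_bytes(16)
data_key = secrets.token_bytes(32)
stored = wrap_key(data_key, "correct horse battery staple", salt)

# Only the stored blob, the salt, and the password together recover the key.
assert unwrap_key(stored, "correct horse battery staple", salt) == data_key
assert unwrap_key(stored, "wrong password", salt) != data_key
```

Note the separation of knowledge: whoever holds the stored blob cannot recover the data key without the password, and the password alone is useless without the blob.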
A highly secure environment can be one where the weakest link is quite strong or where any link that isn’t quite strong unlocks a very limited amount of data that can’t be leveraged to get more data.
One example might use end-to-end encryption with a strong algorithm, where the keys are unknown to anyone and the hardware is protected, both physically and electronically, against attempts to retrieve the key or to switch to a known or weak key.
Another example might use hundreds or even thousands of separate keys to encrypt groups of data, coupled with some assurance that any breach could compromise only one key, thus limiting exposure if a breach occurs.
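One way to sketch the many-keys idea is to derive a distinct key per data group from a single master secret with HMAC, so that leaking any one derived key exposes only that group's data and reveals nothing about the master or the other groups. The helper name and the derivation choice are assumptions for illustration, not a prescribed design.

```python
import hashlib
import hmac
import secrets

def group_key(master: bytes, group_id: str) -> bytes:
    # Derive a distinct 32-byte key per data group. HMAC-SHA256 is
    # one-way: a leaked group key cannot be worked back to the master
    # or forward to any other group's key.
    return hmac.new(master, group_id.encode(), hashlib.sha256).digest()

master = secrets.token_bytes(32)
k_hr = group_key(master, "hr-records")
k_fin = group_key(master, "finance-records")
assert k_hr != k_fin  # each group's data is encrypted under its own key
```

Derivation avoids storing thousands of independent keys, at the cost of making the master secret the crown jewel; storing truly independent random keys per group is the alternative trade-off.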
Notice that a multiple-key approach, by increasing the points of attack, makes a breach more likely, but the scope of each breach is limited, so the overall effect may be greater security than a monolithic method would provide. It's rather like comparing parlor bingo to a national lottery: in the first case you may average $25 in winnings for every four hours of play (not worth the effort); in the latter you may win the entire $100,000,000, but more likely you will get nothing.