Defence in depth and how it applies to web applications
Information security generally refers to defending information from unauthorized
access, use, disclosure, disruption, modification or deletion. Organizations constantly
face threats that originate both externally and internally — from nation states, political
activists and corporate competitors to disgruntled employees.
Defending an organization from these threats is hard because it requires a significant
amount of effort, insight and investment. It is also difficult for non-technical users to
appreciate its importance until a security breach cripples or even destroys the most
carefully constructed organization. This is why it is important to understand the
concept of defence in depth when tasked with defending an organization from threats.
It is critical to understand that security is always “best effort”. No system can ever be
100% secure because factors outside of the designers’ control might introduce
vulnerabilities. An example of this is the use of software that contains 0-day bugs —
undisclosed and uncorrected application vulnerabilities that could be exploited by an
attacker.
Defence in depth is the principle of adding security in layers in order to strengthen the
security posture of a system as a whole. In other words, if an attack causes one security
mechanism to fail, the other measures in place remain to deter, detect and even prevent
the attack.
Comprehensive strategies for applying the defence in depth principle extend well
beyond technology and into the physical and organizational realm. They can take the
form of appropriate policies and procedures, training and awareness, physical and
personnel security, as well as risk assessments and procedures to detect and respond to
attacks in time. Crucial though these measures might be, they are organizational and
physical answers to what is ultimately an information security problem. This article, by
contrast, focuses on how defence in depth principles apply to web applications and the
network infrastructure they operate within, and offers a number of pointers (by no
means an exhaustive list) that can be used to improve the security of web applications.
Key components of defence in depth
1. Perimeter Security: This is the outermost layer of defense, typically involving
firewalls, intrusion detection systems (IDS), and intrusion prevention systems
(IPS) to monitor and control traffic entering and leaving the network.
2. Network Security: Within the network, additional security measures such as
network segmentation, VLANs (Virtual Local Area Networks), and access
control lists (ACLs) can be implemented to limit the spread of threats and control
access to sensitive resources.
3. Endpoint Security: Protecting individual devices (endpoints) such as computers,
smartphones, and servers with antivirus software, host-based firewalls, and
endpoint detection and response (EDR) solutions.
4. Application Security: Ensuring that applications are developed and deployed
securely, with measures such as code reviews, secure coding practices, and web
application firewalls (WAFs) to defend against common vulnerabilities like SQL
injection and cross-site scripting (XSS).
5. Data Security: Encrypting sensitive data at rest and in transit, and implementing
access controls and data loss prevention (DLP) solutions to prevent unauthorized
access to or leakage of data.
6. User Education and Awareness: Educating users about security best practices,
such as avoiding suspicious links and attachments, using strong passwords, and
being cautious with sharing sensitive information.
7. Incident Response: Having a well-defined plan and processes in place to detect,
respond to, and recover from security incidents effectively. This includes logging
and monitoring systems, as well as regular testing and drills of the incident
response plan.
Defence in depth strategies
1. The KISS principle
KISS is an acronym for “Keep it simple, stupid”. Since it is impossible to ever achieve a
system that is 100% secure (because it is impossible to build bug-free software),
simplifying the way software works is an effective strategy to reduce the number and
severity of security flaws in applications.
Keeping an application's design, and the infrastructure it runs on, free of needless
complexity makes the implementation easier and allows security mechanisms to be
inspected more easily.
2. Fail-safe defaults
Software is bound to fail. Try as we might to create perfect, failure-resistant software,
bugs will always exist that can cause it to fail. What matters is that such failure does not
expose an application to a security risk.
An application should feature secure defaults: denying access to resources by default,
checking returned values for failure, and making sure that conditional code or filters
handle failure properly.
Critically, even if part of the application is unavailable or behaving unexpectedly, it
should not be possible for an attacker to compromise the confidentiality or integrity of
the application.
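As a concrete illustration, the following Python sketch shows a deny-by-default permission check; the role table and the check_permission() helper are hypothetical and only serve to show the pattern. Any unknown role, unknown action or unexpected error falls through to a denial, so the code fails closed rather than open.

    # A minimal sketch of a fail-safe, deny-by-default authorization check.
    # The roles, actions and helper name are illustrative assumptions.

    ROLE_PERMISSIONS = {
        "admin": {"read", "write", "delete"},
        "editor": {"read", "write"},
        "viewer": {"read"},
    }

    def check_permission(role, action):
        """Return True only if the role explicitly grants the action."""
        try:
            allowed = ROLE_PERMISSIONS.get(role, set())
            return action in allowed
        except Exception:
            # If anything unexpected goes wrong, deny rather than allow.
            return False

    # An unrecognised role is simply denied by default.
    assert check_permission("editor", "write")
    assert not check_permission("guest", "read")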
3. Security before obscurity
Security through obscurity (STO) refers to the use of obfuscation or randomization of a
design or implementation to provide security. With this in mind, it becomes obvious
that the security of a system relying solely on obscurity, rather than on sound security
controls, is destined for failure.
Take, for instance, an SSH daemon configured to listen on a port other than the
standard port 22. While that may deter a script-kiddie, this obscurity offers little
protection against a financially motivated attacker who would not only discover the
SSH daemon on the unconventional port, but also notice any known, exploitable
high-severity vulnerabilities in that daemon.
While obscurity can be used as a defence in depth measure (since it increases the effort
an attacker needs to break into a system), it should never replace real security controls.
As such, when obscurity is implemented, it should only be used to increase the cost of
an attack, and it should always be assumed that a savvy attacker can identify the
obscurity and overcome it.
STO relies primarily on hiding important information and enforcing secrecy as the
main security technique. Those who rely on it assume that keeping details hidden will
minimize the risk of being targeted by an attack.
Examples of security by obscurity
To explain how security by obscurity works, it helps to look at real-life examples. The
first that comes to mind is this:
Hiding the key to your front door under a nearby rock or the welcome mat. The
principle is simple: your house will be “secure” until a thief discovers the key in its
hiding place. That’s when your house becomes vulnerable.
The same goes for building your house in the middle of the forest. Being surrounded by
trees and shrubs, it’s “secure” within that forest. However, as soon as someone walks in
and discovers your house, it’s vulnerable.
In the cybersecurity world, there are other real-life scenarios where security by
obscurity is seen every day:
1. Hiding user passwords inside binary code, or mixing them into script code or
comments. This popular technique rests on the assumption that the attacker will
never read the code, and it offers no real protection once they do.
2. Changing the name of your application folder, for example from ‘admin’ to
‘_admin’. It may take an attacker longer to find, but if they discover you are using
‘_admin’ and there is no additional authentication or IP-based whitelist, they will
be able to jump right into your administrative area.
3. Running a daemon on a non-standard port is a popular technique for reducing the
number of brute-force attacks against well-known ports, such as SSH’s port 22. It
does reduce noise; however, once a dedicated attacker discovers that your SSH
daemon is listening on a port other than 22, they will simply start targeting that
port instead. A proper solution is to disable password authentication and to limit
logins by IP with mechanisms such as TCP Wrappers or a firewall (an allow-list
check along those lines is sketched after this list).
4. Hiding software versions: although there are many ways to perform a
banner-grabbing attack, one of the most popular STO techniques is to hide your
software version from the public. In the case of web servers, the Nginx or Apache
version can be suppressed, which can keep attackers from easily knowing whether
you are running a vulnerable, outdated version.
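As a sketch of the kind of real control mentioned in the third example above, the following Python snippet checks a client address against an IP allow-list using the standard ipaddress module; the networks listed are made-up examples.

    # A hypothetical IP allow-list check: a real security control to pair
    # with (never replace) obscurity measures. The networks are examples.
    import ipaddress

    ALLOWED_NETWORKS = [
        ipaddress.ip_network("203.0.113.0/24"),    # e.g. an office network
        ipaddress.ip_network("198.51.100.10/32"),  # e.g. a bastion host
    ]

    def is_allowed(client_ip):
        """Permit a connection only if the client falls inside an allowed network."""
        address = ipaddress.ip_address(client_ip)
        return any(address in network for network in ALLOWED_NETWORKS)

    assert is_allowed("203.0.113.45")
    assert not is_allowed("192.0.2.77")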
4. The Least Privilege Principle
An application does not need to connect to the database as root (MySQL), sa (Microsoft
SQL Server), postgres (PostgreSQL) or SYSDBA (Oracle Database). Likewise, it’s a bad
idea to run daemons or services as root (Linux) or Administrator (Microsoft Windows),
unless there is a specific, justifiable, and carefully considered reason to do so. An
application should always be given the fewest privileges that allow it to work properly,
and any additional privileges should be disabled.
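For daemons, one common pattern is to perform any setup that genuinely requires root, such as binding a privileged port, and then drop to an unprivileged account before handling untrusted input. The Python sketch below assumes a Unix-like system and an existing unprivileged "www-data" user; both are assumptions made purely for illustration.

    # A hypothetical sketch of a daemon dropping root privileges after setup.
    # Assumes a Unix-like system and an existing unprivileged "www-data" user.
    import os
    import pwd
    import socket

    def drop_privileges(username="www-data"):
        """Switch from root to an unprivileged user once setup is complete."""
        if os.getuid() != 0:
            return  # already unprivileged, nothing to do
        user = pwd.getpwnam(username)
        os.setgroups([])        # drop supplementary groups
        os.setgid(user.pw_gid)  # drop group first, then user
        os.setuid(user.pw_uid)

    # Bind the privileged port while still root...
    listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    listener.bind(("0.0.0.0", 80))
    listener.listen()

    # ...then give up root before processing any untrusted input.
    drop_privileges()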
If an application is connecting to a database with a privileged user account, in the event
of an SQL injection vulnerability, an attacker would be able to run SQL queries as a
database administrator, and on some database servers, also execute operating system
commands. By executing operating system commands, an attacker could have the
ability to carry out a reconnaissance exercise on the internal network behind the firewall
and escalate an attack further.
Running anything with administrative privileges defeats a tried-and-tested security
model that’s been in place for years, since it allows an attacker or a rogue application to
cause more damage in the event of a security breach. Applications and database
connections should be run with restricted, non-administrative privileges, elevating
privileges temporarily to modify the underlying system only on a per-need basis.
Examples
1. The Ghosted Device
The best security practice is to ensure local admin group membership is used
appropriately. By limiting a user’s access to the bare necessities, you start from a
zero-trust posture and implement least-privilege access from day one. A lifecycle
approach to managing privileges also includes regular audits and the culling of
unneeded or elevated administrator rights.
Even administrators should have different accounts for different use cases.
Instead of “one account to rule them all,” they should have specific privileged
accounts for particular tasks, such as applying updates, accessing databases or
running backups, and a standard account for general use.
2. Over-Privileged Third-Party Contractors
a.k.a. How an air conditioner repairman unwittingly compromised personal data
of 100 million Target shoppers
3. Helpdesk Staff with Superuser Super Powers
5. Log everything, revisit often
Several defence in depth strategies help prevent breaches in the first place; however, a
crucial aspect of any defence strategy is knowing when an attack is underway and what
happened once an attack has occurred. Mitigating the effects of a security breach is only
possible if attention is paid to early warning signs.
Logs are a crucial part of systems and applications. Through logs one can monitor
performance, uptime, resource usage and other such data. Logs are also indispensable
tools for monitoring security and detecting attacks. Logging is the closest thing to a time
machine, and having comprehensive, detailed records of what happened and when can
spell the difference between noticing a breach early and letting an attacker pull off a
heist.
The obvious deduction here is not to ignore early warning signs, while the less obvious
conclusion is the need to log absolutely everything and revisit those logs often.
Naturally, this can present technical challenges, particularly for larger organizations,
but it is far from impossible given the ever-decreasing cost of storage and the many log
management tools that help filter signal from noise.
In order to respond in time to the early warnings of a security breach, an organization
first needs to know when it’s under attack and what has happened (or is happening)
during the attack. One of the more effective ways to do that is to log everything and
take action on what information logs reveal.
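As a minimal sketch of the idea, the snippet below uses Python's standard logging module to record every authentication attempt, successful or not; the log file name and event fields are illustrative assumptions.

    # A minimal sketch of logging security-relevant events; the log file name
    # and event fields are assumptions for illustration.
    import logging

    logging.basicConfig(
        filename="app-security.log",
        level=logging.INFO,
        format="%(asctime)s %(levelname)s %(message)s",
    )
    log = logging.getLogger("security")

    def record_login(username, source_ip, success):
        """Log every authentication attempt, successful or not."""
        if success:
            log.info("login succeeded user=%s ip=%s", username, source_ip)
        else:
            # Repeated failures are often the earliest sign of a brute-force attack.
            log.warning("login failed user=%s ip=%s", username, source_ip)

    record_login("alice", "198.51.100.7", success=False)

Shipping such records to a central log management system, rather than leaving them only on a host that may itself be compromised, makes them far more useful during an investigation.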
6. Trust no one, validate everything
Unfortunately, most vulnerabilities at the application layer can’t simply be patched by
applying an update. In order to fix web application vulnerabilities, software engineers
often need to correct mistakes within the application code. It is therefore essential that
software engineers understand the security risks associated with user input. At the end
of the day, all user input should be considered unsafe.
By never trusting the user, and validating every input, an application can be built to be
more secure and more robust. This applies to any injection vulnerabilities such as SQL
injection and cross-site scripting, but it also applies to vulnerabilities that would allow
an attacker to bypass authentication, or request a file they should never be allowed to
see.
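For example, the case of a user requesting a file they should never be allowed to see can be handled by resolving the supplied name against an allowed base directory rather than trusting it. The Python sketch below assumes Python 3.9 or later and a hypothetical /var/www/uploads directory.

    # A hypothetical sketch of validating a user-supplied file name so it
    # cannot escape an allowed directory (requires Python 3.9+ for
    # Path.is_relative_to). The base directory is an assumption.
    from pathlib import Path

    ALLOWED_DIR = Path("/var/www/uploads").resolve()

    def safe_open(user_supplied_name):
        """Resolve the requested path and refuse anything outside ALLOWED_DIR."""
        candidate = (ALLOWED_DIR / user_supplied_name).resolve()
        if not candidate.is_relative_to(ALLOWED_DIR):
            # Rejects traversal attempts such as "../../etc/passwd".
            raise PermissionError("requested file is outside the allowed directory")
        return candidate.open("rb")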
7. Parameterize SQL queries
While encrypting database tables and restricting access to a database server are valid
security measures, building an application to withstand SQL injection attacks is a
crucial web application defence strategy.
SQL injection is one of the most widespread and most damaging web application
vulnerabilities. Fortunately, both programming languages and the RDBMSs themselves
have evolved to provide web application developers with a way to safely query the
database — parameterized SQL queries.
Parameterized queries are simple to write and understand, and they force a developer
to define the entire SQL statement beforehand, using placeholders for the actual
variables within that statement. Each parameter is then passed to the query after the
SQL statement is defined, allowing the database to distinguish between the SQL
command and the data supplied by a user. If an attacker supplies SQL commands as
input, the parameterized query treats that input as a string rather than as part of the
SQL command.
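The sketch below illustrates this with Python's built-in sqlite3 module; the table, column names and injection string are purely illustrative, and the same placeholder pattern exists in virtually every database driver.

    # A minimal parameterized-query sketch using Python's built-in sqlite3
    # module; table, columns and data are illustrative.
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (username TEXT, email TEXT)")
    conn.execute("INSERT INTO users VALUES ('alice', 'alice@example.com')")

    user_input = "alice' OR '1'='1"  # a typical injection attempt

    # The ? placeholder keeps the statement and the data separate, so the
    # database treats the input purely as a string value, never as SQL.
    rows = conn.execute(
        "SELECT email FROM users WHERE username = ?", (user_input,)
    ).fetchall()
    print(rows)  # [] -- the injection attempt matches no rows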
Application developers should avoid sanitizing input by escaping or removing special
characters (there are several encoding tricks an attacker could leverage to bypass such
protections) and stick to parameterized queries in order to avoid SQL injection
vulnerabilities.
8. Outbound, context-dependent input handling
HTML encoding data before it is inserted into a database (inbound input handling) in
order to prevent cross-site scripting (XSS) is considered bad practice because it limits
how that data can be used. If data is HTML encoded, it can only be used inside HTML
pages, yet the same data may also need to be consumed by a web service rather than
rendered into an HTML page.
More importantly, if input data is handled inbound, for example by HTML encoding it
when it is inserted into a database, there is no guarantee that this will prevent XSS.
Preventing XSS is highly dependent on the context in which user input is used. If user
input is used inside an HTML page, it needs to be HTML encoded; if it is used inside a
<script> tag or inside a JSON object, HTML encoding might not stop an attacker from
delivering an XSS payload to a user.
It is therefore important for user input to be treated differently based on where it is
being used (context-dependent outbound input handling). Context-dependent
outbound input handling, if used correctly, is a very effective technique for preventing
XSS. Furthermore, it has the advantage of keeping the data in the database unmodified,
while retaining the ability to encode user input appropriately for each context in which
it is used.
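A minimal Python sketch of the idea, using only the standard library, might look like the following; the stored comment and the two output contexts are illustrative. Note that data destined for an inline <script> block typically needs additional escaping beyond plain JSON serialization, which real templating engines handle for you.

    # A minimal sketch of context-dependent outbound encoding; the stored
    # value and output contexts are illustrative.
    import html
    import json

    user_input = '<script>alert("xss")</script>'  # stored unmodified in the database

    # HTML page context: HTML-encode at the moment of rendering.
    print("<p>Comment: {}</p>".format(html.escape(user_input)))

    # JSON API context: serialize as JSON; HTML entities here would corrupt the data.
    print(json.dumps({"comment": user_input}))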
9. Prefer whitelists to blacklists
Since user input can never be trusted, validating user input is a core concept of
application security which, if done right, not only increases the overall security of an
application, but also makes an application more robust.
Data validation strategies generally fall into two camps: whitelisting (accepting known
goods), and blacklisting (rejecting known bads). Both have their use cases, however, in
general, unless there is a specific, justifiable, and carefully considered reason to use a
blacklist, a whitelist tends to provide much stronger resistance to attacks.
A simple example of this is the validation of a US ZIP code. It is far easier and safer to
create a whitelist that accepts five digits from 0 to 9 (or, sometimes, five digits followed
by a hyphen and a further four digits) than to try to come up with a blacklist of all
possible combinations that should be rejected. A blacklist in such a context is not only
impractical; more importantly, it is bound to miss something, allowing a user to submit
input that should have been rejected.
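A whitelist of that kind is often just a short regular expression, as in the Python sketch below; the pattern covers the two ZIP formats described above.

    # A minimal whitelist validation of a US ZIP code: accept only the
    # known-good formats and reject everything else.
    import re

    def is_valid_zip(value):
        """Accept five digits, optionally followed by a hyphen and four digits."""
        return re.fullmatch(r"\d{5}(-\d{4})?", value) is not None

    assert is_valid_zip("90210")
    assert is_valid_zip("90210-1234")
    assert not is_valid_zip("90210; DROP TABLE users")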
10. Update software and components
Whether it’s a server’s operating system, a web server, a database server or even a
client-side JavaScript library, an application should not be running software with
known vulnerabilities.
Updating, removing or replacing software or components with known vulnerabilities
sounds obvious, but it’s a significant problem that thousands of organizations struggle
to manage.
Patching a handful of servers and applications may not sound like a mammoth task.
However, when scaled out to thousands of applications with different application
stacks, and different development and infrastructure teams spread across
geographically distributed organizations, it is easier to understand why patching
software with known vulnerabilities tends to become a challenge.
11. Isolate services
Since the software we create can never be bug free, a common defence in depth
approach, one that goes back to the early days of UNIX (user accounts and separate
process address spaces are examples of it), is to base some elements of a platform’s
security on isolation. The idea is to separate a system into smaller parts in order to limit
the damage a compromised or malfunctioning part can cause.
While this is not always easy to achieve, and in some cases may conflict with the
principle of keeping things simple, isolation techniques such as limiting what resources
can communicate with each other on a network, and forbidding everything else by
default (therefore adopting a whitelist approach), could limit the damage of an attack.
Does a web server really need to be able to communicate with a domain controller or a
printer on the same network? Should these devices even be on the same network? That
depends; the answer might be “yes”, in which case, that communication should be
allowed as long as it is carefully considered and secure.
12. Never roll your own (or weak) crypto
In 1998, world-renowned cryptographer Bruce Schneier wrote the following:
“Anyone, from the most clueless amateur to the best cryptographer, can create an
algorithm that he himself can’t break.”
Bruce Schneier is not alone in holding this view; countless other experts in the field
agree. Just because you can’t break the crypto algorithm you created does not mean it is
secure.
The same argument can be made for weak cryptography because it has the same effect
— it doesn’t serve its purpose.
Cryptography is one aspect of security that should never, under any circumstance, be
“homemade”. Instead, it is not only wiser but also far easier to rely on proven, heavily
scrutinized algorithms such as AES (Rijndael) or Twofish for encryption, SHA-3 or
SHA-2 for general-purpose hashing, and bcrypt or PBKDF2 for password hashing.
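For password hashing in particular, a minimal sketch using PBKDF2 from Python's standard library might look like this; the iteration count shown is an illustrative value that should be tuned to current guidance.

    # A minimal password-hashing sketch using PBKDF2 from the standard
    # library; the iteration count is illustrative.
    import hashlib
    import hmac
    import os

    def hash_password(password, salt=None):
        """Derive a key from the password with a unique random salt."""
        salt = salt or os.urandom(16)
        digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
        return salt, digest

    def verify_password(password, salt, expected):
        """Re-derive the key and compare in constant time."""
        _, digest = hash_password(password, salt)
        return hmac.compare_digest(digest, expected)

    salt, stored = hash_password("correct horse battery staple")
    assert verify_password("correct horse battery staple", salt, stored)
    assert not verify_password("wrong password", salt, stored)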
To conclude
While every system is different in its own right, the points above, though by no means
an exhaustive list, should serve as a general guideline for most situations. Adopting a
layered approach to application security makes applications not only more secure, but
also more robust and better prepared for failure — hopefully enough to keep out even
the most determined of attackers.