
Chapter-4:

Security Systems and Models

Objective

• The objective of this chapter is to introduce students to security system concepts and to discuss existing information security models.
Outline
• Introduction to security systems
• Information Security Models
• Bell-LaPadula Model
• Biba Model
• Clark Wilson Model
• Brewer and Nash Model
• Harrison Ruzzo Ullman Model
Security systems - Intro

 Security system:
 • Refers to the practices and measures taken to protect objects and information from harm, theft, and unauthorized use.
 • The process of preventing and detecting unauthorized use of your computer system, objects (house, car, …), and information.
 • It encompasses the range of activities designed to ensure the CIA triad (confidentiality, integrity, and availability).

 Why do we need security systems?
 • The Internet has transformed our lives in many good ways.
 • Unfortunately, this vast network and its associated technologies have also brought in their wake an increasing number of security threats.
Security systems - Intro

• Examples of security threats:
 Computer security threats: viruses, computer worms, phishing, botnets, rootkits, keyloggers.

• Computer security system: can be defined as the controls that are put in place to provide confidentiality, integrity, and availability for all components of computer systems.
Computer security threats

 Virus:
 • A computer virus is a malicious program that is loaded onto the user’s computer without the user’s knowledge.
 • It replicates itself and infects the files and programs on the user’s PC.
 • The ultimate goal of a virus is to ensure that the victim’s computer will never be able to operate properly, or even at all.
 A computer worm:
 • A computer worm is a software program that can copy itself from one computer to another without human interaction.
 • The potential risk is that it will use up your computer’s hard disk space, because a worm can replicate itself in great volume and with great speed.
Computer security threats
 Phishing:
 • Disguising themselves as a trustworthy person or business, phishers attempt to steal sensitive financial or personal information through fraudulent email or instant messages.
 • Phishing is unfortunately very easy to execute.
 • You are deceived into thinking it is a legitimate mail and may enter your personal information.
 Botnet:
 • A botnet is a group of computers connected to the internet that have been compromised by a hacker using a computer virus.
 • An individual compromised computer is called a ‘zombie computer’.
 • The result of this threat is that the victim’s computer (the bot) will be used for malicious activities and for larger-scale attacks such as DDoS.
Computer security threats
 Rootkit:
 • A rootkit is a computer program designed to provide continued privileged access to a computer while actively hiding its presence.
 • Once a rootkit has been installed, the controller of the rootkit will be able to remotely execute files and change system configurations on the host machine.
 Keylogger:
 • Also known as a keystroke logger, keyloggers can track the real-time activity of a user on his computer.
 • It keeps a record of all keystrokes made on the user’s keyboard.
 • A keylogger is also a very powerful threat for stealing people’s login credentials, such as usernames and passwords.
Security models

 Fundamental problems that need to be addressed in the area of security:
 • There are bugs/flaws in all complex software systems.
 • It is difficult to build computer SW/HW that is not vulnerable to attack.
 This led to the development of formal models of computer security that can be used to verify security designs and implementations.
 A security model specifically defines essential aspects of security and their relationship with operating system performance.
 Security models are required in order to put a security system in place over organizational or personal computers.
Security models
• The key focus of security model implementation is confidentiality, achieved through access controls, and information integrity.
• Security models are therefore the main components that should be given attention when developing information security policies and systems.
• These models describe the access rules required to instantiate the defined policy and highlight the objects that are governed by the company’s policy.
Security models

 Five most popular and valuable security models:


• Bell-LaPadula Model
• Biba Model
• Clark Wilson Model
• Brewer and Nash Model
• Harrison Ruzzo Ullman Model
Bell-LaPadula Model

 The Bell-LaPadula model was originally developed for the US Department of Defense (DoD).
 It ensures that data flows only in ways that do not violate the system policy; it is confidentiality-focused.
 It focuses on preventing unauthorized access to classified information.
 Each subject and object is assigned a security class/level.
 E.g. the U.S. military classification scheme:
top secret > secret > confidential > restricted > unclassified
 A subject is said to have a security clearance of a given level; an object is said to have a security classification of a given level.
Bell-LaPadula Model

 Bell-LaPadula rules and properties:
• Simple Security Property: “no read up”; a subject at a particular clearance level cannot read an object at a higher classification level.
 • Prevents lower-level users from viewing more sensitive data.
• Star (*) Security Property: “no write down”; a subject at a higher clearance level cannot write to an object at a lower classification level.
 • Ensures that data is not leaked from higher to lower security levels.
• Strong Star Property: “no read or write up or down”; the most restrictive; a subject at a particular clearance level can only read or write an object at the same classification level.
Bell-LaPadula Model
 Bell-LaPadula rules and properties …

• Problems with this model: higher-level users cannot communicate with lower-level users, and it focuses only on confidentiality (it does not ensure integrity).
Bell-LaPadula Model

 Goal: prevent the unauthorized disclosure of information


• Deals with information flow
• Integrity incidental
• Security levels arranged in linear ordering
• Top Secret: highest
• Secret
• Confidential
• Unclassified: lowest
• Subjects have a security clearance L(s)
• Objects have a security classification L(o)
Bell-LaPadula Model

Example

Security level   Subject   Object
Top Secret       Tamara    Personnel Files
Secret           Samuel    E-Mail Files
Confidential     Claire    Activity Logs
Unclassified     James     Telephone Lists

• Tamara can read all files
• Claire cannot read Personnel or E-Mail Files
• James can only read Telephone Lists

Exercise: apply the same classification scheme for BDU
Bell-LaPadula Model

Notation
• L(S) = ls : security clearance of subject S
• L(O) = lo : security classification of object O
• Classifications are linearly ordered: for all i = 0, …, k−1, li < li+1
Bell-LaPadula Model

Reading Information
• “Reads up” disallowed, “reads down” allowed
• Simple Security Condition
• Subject s can read object o iff L(o) ≤ L(s) and s has permission to
read o
• Note: combines mandatory control (MAC) and discretionary control (DAC)
• Sometimes called “no reads up” rule
Bell-LaPadula Model

Writing Information
• Information flows up, not down
• “Writes up” allowed, “writes down” disallowed
• *-Property
• Subject s can write object o iff L(s) ≤ L(o) and s has permission to write o
• Combines mandatory control (relationship of security levels) and discretionary control
(the required permission)
• Sometimes called “no writes down” rule
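
To make the two mandatory checks concrete, here is a minimal Python sketch (not part of the original slides; the level names and function names are illustrative assumptions, and the discretionary "has permission" part of each rule is omitted):

# Minimal sketch of the Bell-LaPadula mandatory checks (illustrative names;
# the discretionary permission check from the full rules is omitted here).
LEVELS = {"Unclassified": 0, "Confidential": 1, "Secret": 2, "Top Secret": 3}

def can_read(subject_clearance, object_classification):
    # Simple Security Condition ("no read up"): allowed iff L(o) <= L(s).
    return LEVELS[object_classification] <= LEVELS[subject_clearance]

def can_write(subject_clearance, object_classification):
    # *-Property ("no write down"): allowed iff L(s) <= L(o).
    return LEVELS[subject_clearance] <= LEVELS[object_classification]

# Using the earlier example: Claire is cleared at Confidential.
print(can_read("Confidential", "Top Secret"))    # False - cannot read Personnel Files
print(can_read("Confidential", "Unclassified"))  # True  - can read Telephone Lists
print(can_write("Confidential", "Top Secret"))   # True  - writing up is permitted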
Clark Wilson Model

• Aimed at commercial rather than military applications and closely models


real commercial operations.
• David Clark and David Wilson (1987) argued that commercial security has
its own unique concerns and merits a model crafted for that domain.
• The overriding concern is consistency among the various components of
the system state.
Example: In a bank, the funds at the beginning of the day plus the funds
deposited minus the funds withdrawn should equal funds on hand at the
end of the day.
• Goal: to prevent unauthorized modifications and ensure that data is
manipulated correctly.
Clark Wilson Model

• Clark and Wilson claimed that the following are four fundamental concerns
of any reasonable commercial integrity model:
1. Authentication: identity of all users must be properly authenticated.
2. Audit: modifications should be logged to record every program executed and by
whom, in a way that cannot be subverted.
3. Well-formed transactions: users manipulate data only in constrained ways. Only
legitimate accesses are allowed.
4. Separation of duty: the system associates with each user a valid set of programs
they can run and prevents unauthorized modifications, thus preserving integrity and
consistency with the real world.
Clark Wilson Model… Key Concepts

 The model imposes integrity controls on data and the transactions that
manipulate the data.
 The principal components of the model are:
• Constrained Data Items: CDIs are the objects whose integrity is protected
• Unconstrained Data Items: UDIs are objects not covered by the integrity policy
• Transformation Procedures: TPs are the only procedures allowed to modify
CDIs, or take arbitrary user input and create new CDIs. Designed to take the
system from one valid/consistent state to another.
• Integrity Verification Procedures: IVPs are procedures meant to verify that the integrity of CDIs is maintained; they assure that the CDIs conform to the integrity model.
Policy Rules

• The model enforces integrity by means of Certification (C) and Enforcement (E)
rules on TPs.
• Certification rules are security policy restrictions on the behavior of IVPs and TPs. Enforcement
rules are built-in system security mechanisms that achieve the objectives of the certification
rules.
C1: All IVPs must ensure that CDIs (constrained data items) are in a valid state when the IVP is run.
C2: All TPs (transformation procedures) must be certified as integrity-preserving.
C3: Assignment of TPs to users must satisfy separation of duty.
C4: The operation of TPs must be logged.
C5: TPs executing on UDIs (unconstrained data items) must result in valid CDIs.
E1: Only certified TPs can manipulate CDIs.
E2: Users must only access CDIs by means of TPs for which they are authorized.
E3: The identity of each user attempting to execute a TP must be authenticated.
Policy Rules

• Permissions are encoded as a set of triples of the form:


(userID, TP, {CDI set})
• where the user is authorized to perform transformation procedure TP on the given set of constrained data items (CDIs).
• Each triple in the policy must comply with all applicable certification and enforcement rules.
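
As an illustration only (the users, TPs, and CDIs below are hypothetical, not from the slides), a minimal Python sketch of how rule E2 can be enforced with such triples might look like this:

# Minimal sketch: enforcing Clark-Wilson rule E2 with (userID, TP, {CDI set}) triples.
# The users, transformation procedures, and CDIs are hypothetical examples.
ACCESS_TRIPLES = [
    ("alice", "post_deposit", {"account_ledger", "daily_totals"}),
    ("bob",   "audit_report", {"account_ledger"}),
]

def may_execute(user, tp, cdis):
    # E2: a user may run a TP on a set of CDIs only if some triple authorizes it.
    return any(u == user and t == tp and cdis <= allowed
               for (u, t, allowed) in ACCESS_TRIPLES)

print(may_execute("alice", "post_deposit", {"account_ledger"}))  # True
print(may_execute("bob",   "post_deposit", {"account_ledger"}))  # False: not authorized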
Biba Model

• The Biba Integrity Model is a hierarchical security model designed to protect system assets (or objects) from unauthorized modification; that is, it is designed to protect system integrity.
• In this model, subjects and objects are associated with ordinal integrity levels, and subjects can modify objects only at a level equal to or below their own integrity level.
• The Biba model is intended to deal with the case in which there is data that must be visible to users at multiple or all security levels but should only be modified in controlled ways by authorized agents.
• The basic elements have the same structure as in the BLP model (each subject and object is assigned an integrity level – I(S) and I(O)).
Biba Model

• Integrity rules:
 • SIMPLE INTEGRITY RULE: a subject can only read objects at the same or a higher integrity level, but not at a lower integrity level; this is why the rule is called NO READ DOWN.
 • STAR (*) INTEGRITY RULE: a subject can only write to objects at the same or a lower integrity level, but not at a higher integrity level; this is why the rule is called NO WRITE UP.
 • STRONG STAR INTEGRITY RULE: a subject can interact only with objects at the same integrity level.
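
A minimal Python sketch of these checks (the integrity level names are illustrative assumptions, not from the slides) makes the symmetry with Bell-LaPadula visible:

# Minimal sketch of the Biba integrity checks (illustrative levels; note that the
# comparisons are the reverse of Bell-LaPadula).
INTEGRITY = {"Low": 0, "Medium": 1, "High": 2}

def can_read(subject_level, object_level):
    # Simple Integrity Rule ("no read down"): allowed iff I(o) >= I(s).
    return INTEGRITY[object_level] >= INTEGRITY[subject_level]

def can_write(subject_level, object_level):
    # Star Integrity Rule ("no write up"): allowed iff I(o) <= I(s).
    return INTEGRITY[object_level] <= INTEGRITY[subject_level]

print(can_read("Medium", "Low"))   # False - reading less trustworthy data is blocked
print(can_write("Medium", "Low"))  # True  - writing down is allowed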
Brewer and Nash Model

• The policies so far have been general.


• Let’s consider a policy for a very specific commercial concern: the potential for
conflicts of interest and inadvertent disclosure of information by a consultant or
contractor.
Example: A lawyer specializes in product liability and consults for American
Airlines. It could be a breach of confidentiality for her to consult also for United
Airlines. Why? A simultaneous contract with McDonalds would not be a conflict.
• Brewer and Nash (1989) proposed a policy called the Chinese Wall Policy that
addresses such conflicts of interest.
Strictly speaking, this is not an integrity policy, but an access control
confidentiality policy.
Brewer and Nash Model… Chinese Wall Policy

• The security policy builds on three levels of abstraction.


• Objects such as files. Objects contain information about only one company.
• Company groups collect all objects concerning a particular company.
• Conflict classes cluster the groups of objects for competing companies.
• For example, consider the following conflict classes:
{ Ford, Chrysler, GM }
{ Bank of America, Wells Fargo, Citicorp }
{ Microsoft }
• Simple access control policy: A subject may access information from any company as
long as that subject has never accessed information from a different company in the
same conflict class.
For example, if you access a file from GM, you subsequently will be blocked from
accessing any files from Ford or Chrysler.
• You are free to access files from companies in any other conflict class.
Notice that permissions change dynamically: the access rights that any subject enjoys depend on the history of past accesses, as the sketch below illustrates.
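
Here is a minimal Python sketch of the simple access rule (illustrative only; the history set and function names are assumptions) showing how the decision depends on past accesses:

# Minimal sketch of the Chinese Wall simple access rule: access is denied once the
# subject's history contains a *different* company in the same conflict class.
CONFLICT_CLASSES = [
    {"Ford", "Chrysler", "GM"},
    {"Bank of America", "Wells Fargo", "Citicorp"},
    {"Microsoft"},
]

def may_access(history, company):
    for conflict_class in CONFLICT_CLASSES:
        if company in conflict_class:
            return not any(c != company and c in conflict_class for c in history)
    return True  # company not in any known conflict class

history = set()
print(may_access(history, "GM"))        # True
history.add("GM")
print(may_access(history, "Ford"))      # False: Ford conflicts with GM
print(may_access(history, "Citicorp"))  # True:  different conflict class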
Chinese Wall policy…

• Formally, the policy restricts access according to the following two


properties:
(Chinese Wall) Simple Security Rule: A subject S can be granted access to an object O only if the object is in the same company dataset as the objects already accessed by S, that is, “within the Wall,” or belongs to an entirely different conflict-of-interest class.
(Chinese Wall) *-Property: Write access to an object is only permitted if access is permitted by the simple security rule, and no object can be read which is in a different company dataset than the one for which write access is requested and contains unsanitized [uncensored] information.
Harrison Ruzzo Ullman Model
• The HRU security model (Harrison, Ruzzo, Ullman model) is an operating system
level computer security model which deals with the integrity of access rights in the
system.
• The HRU model defines a protection system consisting of a set of generic rights R
and a set of commands C.
• An instantaneous description of the system is called a configuration and is defined as a tuple (S, O, P) of current subjects S, current objects O, and an access matrix P.
• Since the subjects are required to be part of the objects, the access matrix contains one row for each subject and one column for each object (including the subjects). The entry for subject s and object o is a subset of the generic rights R.
Harrison Ruzzo Ullman Model
• The commands are composed of primitive operations and can additionally have a list of pre-conditions that require certain rights to be present for a pair (s, o) of subjects and objects.
• The primitive requests can modify the access matrix by adding or removing access
rights for a pair of subjects and objects and by adding or removing subjects or
objects.
• Creation of a subject or object requires the subject or object not to exist in the
current configuration, while deletion of a subject or object requires it to have
existed prior to deletion.
• In a complex command, a sequence of operations is executed only as a whole. A failing operation in a sequence makes the whole sequence fail, similar to a database transaction.
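
To illustrate these ideas (the rights, names, and the example command below are hypothetical, not defined by the model itself), a configuration and a guarded command can be sketched in Python as follows:

# Minimal sketch of an HRU-style configuration (S, O, P) with primitive
# operations and one guarded command. Rights and names are illustrative.
subjects, objects = set(), set()
P = {}  # access matrix: P[(s, o)] = subset of the generic rights R

def create_subject(s):
    subjects.add(s); objects.add(s)  # every subject is also an object

def create_object(o):
    objects.add(o)

def enter_right(r, s, o):
    P.setdefault((s, o), set()).add(r)

def delete_right(r, s, o):
    P.get((s, o), set()).discard(r)

def command_grant_read(owner, friend, f):
    # Pre-condition: 'owner' must hold the 'own' right for f; only then is the
    # primitive operation executed (the command body is all-or-nothing).
    if "own" in P.get((owner, f), set()):
        enter_right("read", friend, f)

create_subject("alice"); create_subject("bob"); create_object("file1")
enter_right("own", "alice", "file1")
command_grant_read("alice", "bob", "file1")
print(P[("bob", "file1")])  # {'read'}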
Summary

• Confidentiality models restrict flow of information


• Bell-LaPadula models multilevel security (known as NO READ UP, NO WRITE DOWN)
• Cornerstone of much work in computer security
• Biba: This works the exact reverse of the Bell-LaPadula Model (NO READ DOWN , NO
WRITE-UP)
• Clark and Wilson identified a set of integrity concerns claimed to be of particular relevance within commercial environments: consistency, authentication, audit, etc. They proposed a set of mechanisms explicitly designed to address those specific concerns. Their policy is quite abstract and must be instantiated with specific data sets (constrained and unconstrained), transformation procedures, verification procedures, etc.
• Brewer and Nash’s Chinese Wall Policy is designed to address a very specific concern: conflicts of interest by a consultant or contractor.
• The Harrison Ruzzo Ullman security model is an operating-system-level model that deals with the configuration and integrity of access rights.
Thank You

Assignment-1 and Project
