Chapter 5 – Designing Trusted Operating Systems

 

This chapter discusses the design of a trusted operating system and differentiates this concept from that of a secure operating system.  This chapter includes a discussion of security policy and models of security, upon which a trusted design can be based.

 

Definitions: Trust vs. Security

Here we discuss the term “trusted operating system” and explain why we prefer it to a term such as “secure operating system”.  Security is not a quality that can be quantified easily: either a system is secure or it is not.  If a system is called secure, it should be able to resist all attacks.  The claim of security must be taken as is; either one accepts the claim or one does not.

 

Trust, on the other hand, is something that can be quantified.  A system is called trusted if it meets the intended security requirements; thus one can assign a level of trust to a system depending on the degree to which it meets a specific set of requirements.

 

The evaluation of the trust to be accorded a system is undertaken by the user of the system and depends on a number of factors, all of which can be assessed:
      1)   the system’s enforcement of its security policy, and
      2)   the sufficiency of its security measures and mechanisms.

 

 

Security Policies

A security policy is a statement of the security we expect a given system to enforce.  A system can be characterized as trusted only to the extent that it satisfies a security policy.

 

All organizations require policy statements.  Developing policy is perhaps a fairly dull job, but it is a necessary one.  Policy sets the context for the rules that are implemented by an organization.  For this course, we focus on information security policy, used to give a context for the rules and practices of information security.  Policy sets the strategy – it is the “big picture”, while rules are often seen as the “little picture”.  The text states that “Policy sets rules”.  The author of these notes would state that “Policy sets the context for rules”.

 

Another way to look at policy is that rules and procedures say what to do while the policy specifies why it is done.

 

Sections of a Policy

Each policy must have four sections.

      Purpose                 Why has the policy been created, and how does the company benefit?

      Scope                    What part of the company is affected by this policy?

      Responsibility      Who is held accountable for the proper implementation of the policy?

      Authority              A statement of who issued the policy and how that person has the
                                    authority to define and enforce the policy.


Types of Policy

Information security policy must cover a number of topics.  The major types of policy that are important to an organization are the following.

      Information Policy
      Security Policy
      Computer Use Policy
      Internet Use Policy
      E-Mail Use Policy
      User Management Procedures
      System Management Procedures
      Incident Response Procedures
      Configuration Management Policy

 

Information Policy

Companies process and use information of various levels of sensitivity.  Much of the information may be freely distributed to the public, but some may not.  Within the category of information not freely releasable to the public, there are usually at least two levels of sensitivity.  Some information, such as the company telephone book, would cause only a nuisance if released publicly.  Other information, such as details of competitive bids, would cause the company substantial financial loss if made public prematurely.

 

One should note that most information becomes less sensitive with age – travel plans of company officials after the travel has been completed, details of competitive bids after the bid has been let, etc.

 

Military Information Security Policy

The information security policy of the U.S. Department of Defense, U.S. Department of Energy and similar agencies is based on classification of information by the amount of harm its unauthorized release would cause to the national security.  The security policy of each agency is precisely spelled out in appropriate documentation; those who are cleared for access to classified information should study those manuals carefully.

 

Department of Defense (DOD) policy is not a proper course of study for this civilian course, but it provides an excellent model for the types of security we are studying.  The first thing to note may not be applicable to commercial concerns: the degree of classification of any document or other information is determined only by the damage its unauthorized release would cause; possibility of embarrassment or discovery of incompetent or illegal actions is not sufficient reason to classify anything.

 

There are four levels of classification commonly used: Unclassified, Confidential, Secret, and Top Secret.  There is a subset of Unclassified Data called For Official Use Only, with the obvious implications.  Each classification has requirements for storage, accountability, and destruction of the information.  For unclassified information, the only requirement is that the user dispose of the information neatly.  For FOUO (For Official Use Only) information, the requirement is not to leave it on top of a desk and to shred it when discarding it.

 

For Secret and Top Secret information, requirements include GSA (General Services Administration) approved storage containers (with more stringent requirements for Top Secret), hand receipts upon transfer to establish accountability, and complete destruction (with witnessed destruction certificates) upon discard.

 

As a word of caution to everyone, the DOD anti-espionage experts give each level of classification a “life” – the average amount of time before it is known to the enemy.  When this author worked for the U.S. Air Force, the numbers were three years for Secret and seven years for Top Secret information.  Nothing stays secure forever.

 

The U.S. Government uses security clearances as a method to establish the trustworthiness of an individual or company to access and protect classified data.  The clearances are named identically to the levels of information sensitivity (except that there are no clearances for Unclassified or FOUO) and indicate the highest level of sensitivity a person is authorized to access.  For example, a person with a Secret clearance is authorized for Secret and Confidential material, but not for Top Secret.

 

The granting of a security clearance is based on some determination that the person is trustworthy.  At one time Confidential clearances (rarely issued) could be granted based on the person presenting a birth certificate.  Secret clearances are commonly based on a check of police records to ensure that the person has no criminal history.  Top Secret clearances always require a complete background check, involving interviews of a person’s family and friends by an agent of the U. S. Government.

 

Each security clearance must be granted by the branch of the U. S. Government that owns the classified data to which access is being granted.  It is usual for one branch of the government to accept clearances issued by another branch, but this author knows people granted Top Secret clearances by the U. S. Air Force who transferred to the U. S. Navy and had their clearances downgraded to Secret pending the completion of another background check.

 

Information access is restricted by need-to-know.  Formally, this phrase means that one must require access to the information in order to complete his or her assigned duties.  Depending on the level of sensitivity, the criteria for need-to-know differ.  Conventionally, the criterion for access to Secret and lower level data is a credible request from a person known to be cleared and working in an area related to the data.  The criteria for Top Secret information usually are more formal, such as a specific authorization by a senior person who already has access to the data.

 

For some areas, the need-to-know is formalized by creating compartments.  All information related to nuclear weapons is classified as Restricted Data, with additional security controls.

Another well-known area is “Crypto”, that area related to the ciphers and codes used to transmit classified data.  Other sensitive information specific to a given project is assigned to what is called a compartment, and is called Sensitive Compartmented Information or SCI.  Contrary to popular belief, not all SCI information is classified as Top Secret.

 


As an example of a well-known project that must have had SCI controls, consider the famous spy plane commonly called the Blackbird.  It was the follow-on to the U-2, another project that must have had SCI controls.  Suppose that the project was called Blackbird (quite unlikely, but we need a name).  Some information, such as the detailed design of this advanced plane, would have been classified Top Secret and labeled as “Handle via Blackbird channels only”.  Administrative information on the project might be classified as Secret with the same distribution restrictions.  All of the information to be handled within the Blackbird channels would be releasable only to those people who had been specifically cleared for access to Blackbird information.

 

This author’s opinion of the classification process is that the U.S. Government had to pay money to develop the data; others should not get it for free.

 

Company Security Policy

This section covers how a company might adapt the DOD security policy to handle its own proprietary data; companies that handle U. S. Government sensitive data must follow the appropriate policies as specified in the contract allowing the company access to such data.

 

The first idea is that of multi-level security.  It should be obvious that some company data are more sensitive than other data and require more protection.  For many companies a three-level policy might suffice.  Suggested levels of classification include:

 

      3-level        Public Release, Internal Use Only, and Proprietary
      4-level        Public Release, Internal Use Only, Proprietary, and Company Confidential.

 

It cannot be overemphasized that the terms “Secret” and “Top Secret” should not be used to classify company-sensitive data, as this can lead to serious embarrassment when an auditor visits from a U. S. Government agency and asks what contract allows the company to possess data considered by the U. S. Government to be classified.

 

While companies normally do not have a formal clearance system, there are certain aspects of the U. S. Government system that should be applied.  Every company should do some sort of background investigation on all of its employees (did you just hire a known felon?) and delegate to specific managers the authority to grant access to each sensitive project as needed to further the interests of the company.  The need-to-know policy should be enforced in that a person should have access to company sensitive information only when a project manager determines that such access is in the best interest of the company.

 

The idea of compartmented information comes naturally to companies; how many of us working on sensitive projects need access to payroll data and other personnel files?  Again, one is strongly cautioned not to use the term SCI to refer to any information other than that information so labeled by the U. S. Government.

 

Company policy must include instructions for storing, transferring, and destroying sensitive information.  Again, the DOD policy provides a good starting point.


Marking Sensitive Information

The U.S. Department of Defense has developed a standard approach to marking sensitive information and protecting it with cover sheets that are themselves not sensitive.  These suggestions follow the DOD practice.  The student should remember to avoid the terms “Secret” and “Top Secret” in companies that have any dealings with the government, as considerable embarrassment might arise were company data confused with data classified under some U.S. Government regulation.

 

The suggested practice for paper documents involves cover sheets, page markings, and paragraph markings.  Each paragraph, table, and figure should be labeled according to the sensitivity of the information contained.  Each page should be marked with the sensitivity of the most sensitive information on the page, and each document should be labeled with the sensitivity of the most sensitive information in the document and given an appropriate cover sheet.  When stored in electronic form, the document should contain the markings as if it were to be printed or displayed on a monitor.  These precautions should exist in addition to any precautions to label the disk drive itself to show it contains sensitive information.
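
To make the marking rule concrete, here is a minimal Python sketch of the “most sensitive content wins” computation, using the three-level company scheme suggested earlier; the level names and numeric ranks are illustrative assumptions, not part of any standard.

    # A page is marked with the most sensitive marking among its
    # paragraphs; a document with the most sensitive among its pages.
    LEVELS = {"PUBLIC RELEASE": 0, "INTERNAL USE ONLY": 1, "PROPRIETARY": 2}

    def marking(parts):
        """Return the most sensitive marking among the given parts."""
        return max(parts, key=lambda level: LEVELS[level])

    page1 = ["PUBLIC RELEASE", "PROPRIETARY", "INTERNAL USE ONLY"]
    page2 = ["PUBLIC RELEASE", "PUBLIC RELEASE"]

    page_marks = [marking(page1), marking(page2)]
    print(page_marks)            # ['PROPRIETARY', 'PUBLIC RELEASE']
    print(marking(page_marks))   # PROPRIETARY -- goes on the cover sheet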

 

When designing cover sheets for physical documents and labels for electronic storage media, one should note that the goal is to indicate that the attached document is sensitive without actually revealing any sensitive information; the cover sheet itself should be publicly releasable.  The DOD practice is to have blue cover sheets for Confidential documents and red cover sheets for Secret documents.  The figure below illustrates two cover sheets.

 

 

The policy for company sensitive information should list the precautions required for storage and transmission of the various levels of sensitivity.


Processing Sensitive Information

Again, we mention a policy that is derived from DOD practice.  When a computer is being used to process or store sensitive information, access to that computer should be restricted to those employees explicitly authorized to work with that information.  This practice is a result of early DOD experiments with “multi-level security” on time-sharing computers, in which some users were able to gain access to information for which they were not authorized.

 

Destruction of Sensitive Information

Sensitive information on paper should be destroyed either by shredding with a cross-cut shredder or by burning.  Destruction of information on disk drives should be performed by professionals; the delete command of the operating system does not remove any data. 
A common practice is to destroy the disk physically, possibly by melting it.

 

Security Policy

There are a number of topics that should be addressed.  Identification and authentication are two major topics – how are the users of the system identified and authenticated?  User IDs and passwords are the most common mechanisms, but others are possible.

 

The audit policy should specify what events are to be logged for later analysis.  One of the more commonly logged classes of events covers failed logins, which can identify attempts to penetrate the system.  One should remember, however, that event logs can be useful only if there is a method for scanning them systematically for significant events.  Manual log reading is feasible only when an event has been identified by other means – people are not good at reading long lists of events.
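
Because people are poor at scanning long event logs, the review should be automated.  Below is a minimal Python sketch of such a scan; the log format (“FAILED LOGIN user=...”) is a made-up example, as every system defines its own.

    # Scan a log for repeated failed logins, the classic sign of a
    # penetration attempt; flag users at or above a threshold.
    import re
    from collections import Counter

    FAILED = re.compile(r"FAILED LOGIN user=(\S+)")

    def suspicious_users(lines, threshold=3):
        counts = Counter(m.group(1) for line in lines
                         if (m := FAILED.search(line)))
        return {user: n for user, n in counts.items() if n >= threshold}

    log = [
        "09:01 FAILED LOGIN user=mallory from=10.0.0.7",
        "09:01 FAILED LOGIN user=mallory from=10.0.0.7",
        "09:02 FAILED LOGIN user=mallory from=10.0.0.7",
        "09:05 FAILED LOGIN user=alice from=10.0.0.9",
    ]
    print(suspicious_users(log))   # {'mallory': 3}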

 

Any policy must include a provision for waivers; that is, what to do when the provisions of the policy conflict with a pressing business need.  When a project manager requests a waiver of the company security policy, it must be documented formally.  Items to include are

      the system in question,
      the section of the security policy that will not be met,
      how the non-compliance will increase the risk to the company,
      the steps being taken to manage that risk, and
      the plans for bringing the system into compliance with the policy.

 

Computer Use Policy

The policy should state clearly that an employee enters into an implicit agreement with the company when using a computer issued by the company.  Some important items are:

      1)   that all computers and network resources are owned by the company,
      2)   the acceptable use (if any) of non-company-owned computers within
            the company business environment,
      3)   that, with the exception of customer data (which are owned by the customer), all
            information stored on or used by the company computers is owned by the company,
      4)   that the employee is expected to use company-owned computers only for purposes
            that are related to work, and
      5)   that an employee has no expectation of privacy for information stored on company
            computers or network assets.

System Administration Policies

These should specify how software patches and upgrades are to be distributed in the company and who is responsible for making these upgrades.  There should also be policies for identifying and correcting vulnerabilities in computer systems.

 

There should also be a policy for responding to security incidents, commonly called an IRP or Incident Response Policy.  There are a number of topics to be covered:
      1)   how to identify the incident,
      2)   how to escalate the response as necessary until it is adequate to the incident, and
      3)   who should contact the public press or law-enforcement authorities.

 

 

Creating and Deploying Policy

The most important issue with policy is gaining user acceptance – it should not be grudging.  The first step in creating a policy is the identification of stakeholders – those who are affected by the policy.  These stakeholders must be included in the process of developing the policy.

 

Another important concept is “buy-in”, which means that people affected by the policy must agree that the policy is important and agree to abide by it.  This goal is often achieved best by a well-designed user education policy.  Face it – if security is viewed only as a nuisance imposed by some bureaucratic “bean counter”, it will be ignored and subverted.

 

Here I must recall a supposedly true story about a company that bought a building from another company that had been a defense contractor.  The company purchasing the building was not a defense contractor and had no access to information classified by the U. S. Government.  Imagine the company’s surprise when, as a part of their renovation, they removed the false ceiling and were showered with documents marked SECRET and labeled as the property of the U. S. Department of Defense.

 

It turned out that the security officer of the previous company was particularly zealous.  It was required that every classified document be properly locked up in the company’s safe at the end of the working day.  Accessing the safe was a nuisance, so the engineers placed the documents above the false ceiling to avoid the security officer discovering them on one of his frequent inspections.  Here we have an obvious case of lack of buy-in.

 

 

Models of Security

It is common practice, when we want to understand a subject, to build a logical model and study that logical model.  Of course, the logical model is useful only to the extent that it corresponds to the real system, but we can try to get better models.  Models of security are used for a number of purposes.

 

      1)   To test the policy for consistency and adequate coverage.
            Note that I do not say “completeness” – one can only show a policy to be incomplete.

      2)   To document the policy.
      3)   To validate the policy; i.e. to determine that the policy meets its requirements.


There are many useful models of security, most of which focus on multi-level security.  We shall discuss some of these, despite this author’s documented skepticism that multi-level security systems are feasible with today’s hardware running today’s operating systems.

 

Multi-Level Security

The idea of multi-level security is that some data are more sensitive than others.  When we try to formalize a model of multi-level security using the most obvious model, we arrive at a slight problem.  Consider the four traditional security classifications and their implied order.

 

Unclassified ≤ Confidential ≤ Secret ≤ Top Secret

 

This is an example of what mathematicians call a total ordering.  A total ordering is a special case of an ordering on a set.  We first define partial ordering.

 

A partial order (or partial ordering) is defined for a set S as follows.
      1)   There is an equality operator, =, and by implication an inequality operator, ≠.
            Any two elements of the set, a ∈ S and b ∈ S, can be compared:
            either a = b or a ≠ b.  All sets share this property.

      2)   There is an ordering operator ≤, and by implication the operator ≥.
            If a ≤ b, then b ≥ a.  Note that the operator could be indicated by another symbol.

      3)   The operator is transitive.
            For any a ∈ S, b ∈ S, c ∈ S, if a ≤ b and b ≤ c, then a ≤ c.

      4)   The operator is antisymmetric.
            For any a ∈ S, b ∈ S, if a ≤ b and b ≤ a, then a = b.

If, in addition to the above requirements for a partial ordering, it is the case that for any two elements a ∈ S, b ∈ S, either a ≤ b or b ≤ a, then the relation is a total ordering.  We are fairly familiar with sets that support a total ordering; consider the set of positive integers.
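
As a small illustration, the four classification levels above form a totally ordered set.  Here is a minimal Python sketch; the numeric ranks are an assumption chosen only to induce the ordering.

    # The four classification levels as a totally ordered set.
    # IntEnum gives every pair of levels a comparison.
    from enum import IntEnum

    class Level(IntEnum):
        UNCLASSIFIED = 0
        CONFIDENTIAL = 1
        SECRET = 2
        TOP_SECRET = 3

    # Totality: for any two levels a and b, a <= b or b <= a.
    assert all(a <= b or b <= a for a in Level for b in Level)
    print(Level.CONFIDENTIAL <= Level.SECRET)   # True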

 

In models of the security world, it is often the case that two items cannot be compared by an ordering operator.  It has been discovered that the mathematical object called a lattice provides a better model of security.

 

A lattice is a set S that supports a partial order, with the following additional requirements.

      1)   Every pair of elements a ∈ S, b ∈ S possesses a common upper bound; i.e.,
            there is an element u ∈ S such that a ≤ u and b ≤ u.

      2)   Every pair of elements a ∈ S, b ∈ S possesses a common lower bound; i.e.,
            there is an element l ∈ S such that l ≤ a and l ≤ b.

 

Obviously a total ordering is a special case of a lattice.  If S is a set with a total ordering, then for any two elements a ∈ S, b ∈ S, let l = min(a, b) and u = max(a, b) to satisfy the lattice property.

 


The most common example of a lattice is the relationship of divisibility in the set of positive integers.  Note that addition of zero to the set ruins the divisibility property.

 

The divisibility operator is denoted by the symbol “|”; we say a | b if the integer a divides the integer b, equivalently that the integer b is an integer multiple of the integer a.  Let’s verify that this operator on the set of positive integers satisfies the requirements of a partial order.

 

      1)   Both equality and inequality are defined for the set of integers.

      2)   We are given the ordering operator “|”.

 

      3)   The operator is transitive.
            For any a ∈ S, b ∈ S, c ∈ S, if a | b and b | c, then a | c.  The proof is easy.

            If b | c, then there exists an integer q such that c = q·b.

            If a | b, then there exists an integer p such that b = p·a.
            Thus c = q·b = q·(p·a) = (q·p)·a, and a | c.

 

      4)   The operator is antisymmetric.
            For any a ∈ S, b ∈ S, if a | b and b | a, then a = b.

 

If the divisibility operator imposed a total order on the set of integers, then it would be the case that for any two integers a and b, either a | b or b | a.  It is easy to falsify this claim by picking two prime numbers; say a = 5 and b = 7.  Admittedly, there are many pairs of integers that are not prime and still falsify the claim (27 = 3³ and 25 = 5²), but one pair is enough.  We now ask if the set of positive integers under the divisibility operator forms a lattice.

 

It turns out that the set does form a lattice, as it is quite easy to find lower and upper bounds for any two integers.  Let a ∈ S and b ∈ S, where S is the set of positive integers.

A lower bound that always works is l = 1, and an upper bound that always works is u = a·b.  Admittedly, these are not necessarily the greatest lower bound and least upper bound, but they show that such bounds do exist.  To illustrate the last statement, consider this example.

 

      a = 4 and b = 6, with a·b = 24.
            The greatest lower bound is l = 2, because 2 | 4 and 2 | 6, and the number 2
            is the largest integer with that property.

            The least upper bound is u = 12, because 4 | 12 and 6 | 12, and the number 12
            is the smallest integer with that property.
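
In fact, for the divisibility lattice the greatest lower bound of two positive integers is their greatest common divisor, and the least upper bound is their least common multiple.  A short Python sketch verifies the example above:

    # gcd is the greatest lower bound and lcm the least upper bound
    # in the divisibility lattice on the positive integers.
    from math import gcd

    def lcm(a, b):
        return a * b // gcd(a, b)

    a, b = 4, 6
    print(gcd(a, b))   # 2  -- greatest lower bound: 2 | 4 and 2 | 6
    print(lcm(a, b))   # 12 -- least upper bound:    4 | 12 and 6 | 12

    # The trivial bounds from the text (l = 1, u = a*b) also work,
    # just not tightly:
    assert a % 1 == 0 and b % 1 == 0
    assert (a * b) % a == 0 and (a * b) % b == 0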

 

The lattice model has been widely accepted as a model for security systems because it incorporates two of the basic requirements.

      1)   There is a sense of the idea that some data are more sensitive than other data.
      2)   It is not always possible to rank the sensitivity of two distinct sets of data.

 


The figure below, adapted from figure 5-6 on page 241 of the textbook, shows a lattice model based on the factors of the number 60 = 2²·3·5.

 

This figure is a directed acyclic graph (DAG) although the arrows are not shown on the edges as drawn.  Depending on the relation being modeled, the arrows all point up or the arrows all point down.  Note that this makes a good model of security, in that some elements may in a sense be “more sensitive” than others without being directly comparable.  In the above DAG, we see that 12 is larger than 5 in the sense of traditional comparison, but that the two numbers cannot be compared within the rules of the lattice.
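
A quick check of that incomparability in Python:

    # 12 and 5 are incomparable under divisibility.
    a, b = 12, 5
    print(a % b == 0 or b % a == 0)   # False -- neither divides the other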

 

Before proceeding with security models that allow for multi-level security, we should first mention that there are two problems associated with multi-level security.  We mention the less severe problem first and then proceed with the one discussed in the text.

 

By definition, a multi-level security system allows for programs with different levels of security to execute at the same time.  Suppose that your program is processing Top Secret data and producing Top Secret results (implying that you are cleared for Top Secret), while my program is processing SECRET data and producing SECRET results.  A leak of data from your program into my program space is less severe if I also am cleared for Top Secret, but just happen to be running a SECRET program.  If I am not cleared for access to Top Secret data, then we have a real security violation.

 

For the duration of this discussion, we shall assume the latter case – that a number of users are processing data, with no user being authorized to see the other users’ data.

 


The Bell-LaPadula Confidentiality Model

The goal of this model is to identify allowable flows of information in a secure system.  While we are applying this to a computer system running multiple processes (say a server with a number of clients checking databases over the Internet), I shall illustrate the model with a paper-oriented example of collaborative writing of a document to be printed.  In this example, I am assuming that I have a SECRET clearance.

 

This model is concerned with subjects and objects, as are other models.  Each subject and object in the model has a fixed security class, defined as follows.
      C(S)           for subject S this is the person’s clearance
      C(O)          for objects (data and programs) this is the classification.

 

The first property is practically a definition of the meaning of a security clearance.

 

      Simple Security Property A subject S may have read access to an object O
                                                            only if C(S) ≥ C(O).

 

      In my example, this implies that I may show my SECRET parts of the report only to those who are cleared for SECRET-level or higher information.  Specifically, I cannot show the information to someone cleared only for access to Confidential information.

 

      *-Property            A subject S who has read access to an object O (thus C(S) ≥ C(O))
                                    may have write access to an object P only if C(O) ≤ C(P).

 

      This property seems a bit strange until one thinks about it.  Notice first what this does not say – that the subject has read access to the object P.  In our example, this states that if you are cleared for access to Top Secret information and are writing a report classified Top Secret, then I (having only a SECRET clearance) may submit a chapter classified SECRET for inclusion in your report.  You accept the chapter and include it.  I never get to see the entire report, as my clearance level is not sufficient.

 

The strict interpretation of the *-Property places a severe constraint on information flow from one program to a program of less sensitivity.  In actual practice, such flows are common with a person taking responsibility for removing sensitive data.  The problem here is that it is quite difficult for a computer program to scan a document and detect the sensitivity of data.  For example, suppose I have a document classified as SECRET.  A computer program scanning this document can easily pick out the classification marks, but cannot make any judgments about what it is that causes the document to be so classified.  Thus, the strict rule is that if you are not cleared for the entire document, you cannot see any part of it.
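
The two Bell-LaPadula properties are easy to state in code.  The sketch below reuses the totally ordered classification levels from the earlier sketch; the function names are illustrative, not part of the model.

    # A minimal sketch of the Bell-LaPadula rules.
    from enum import IntEnum

    class Level(IntEnum):
        UNCLASSIFIED = 0
        CONFIDENTIAL = 1
        SECRET = 2
        TOP_SECRET = 3

    def may_read(subject: Level, obj: Level) -> bool:
        """Simple Security Property: read only if C(S) >= C(O)."""
        return subject >= obj

    def may_write(read_level: Level, target: Level) -> bool:
        """*-Property: having read at level C(O), write an object P
        only if C(O) <= C(P) -- information may not flow down."""
        return read_level <= target

    me = Level.SECRET                            # my clearance, as in the example
    print(may_read(me, Level.CONFIDENTIAL))      # True  -- reading down is allowed
    print(may_read(me, Level.TOP_SECRET))        # False -- no reading up
    chapter = Level.SECRET                       # the chapter I wrote
    print(may_write(chapter, Level.TOP_SECRET))  # True  -- it may flow up into
                                                 #          your Top Secret report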

 

The author of these notes will share a true story dating from his days working for Air Force intelligence.  As would be expected, much of the information handled by the intelligence organization was classified Top Secret, with most of that associated with sensitive intelligence projects.  People were hired based on a SECRET security clearance and were assigned low-level projects until their Top Secret clearance was obtained.

 

Information is the life blood of an intelligence organization.  The basic model is that the people who collect the intelligence pass it to the analysts who then determine its significance.  Most of what arrives at such an organization is quickly destroyed, but this is the preferable mode as it does not require those who collect the information to assess it.

 

There were many sensitive projects that worked with both SECRET and Top Secret data.  As the volume of documents to be destroyed was quite large, it was the practice for the data that was classified only SECRET to be packaged up, sent out of the restricted area, and given to the secretaries waiting on their Top Secret clearance to handle for destruction.  Thus we had a data flow from an area handling Top Secret to an area authorized to handle data classified no higher than SECRET.  This author was present when the expected leak happened.

 

This author walked by the desk of a secretary engaged in the destruction of a large pile of SECRET documents.  At the time, both she and I had SECRET security clearances and would soon be granted Top Secret clearances (each of us got the clearance within a few months).  In among the pile of documents properly delivered was a document clearly marked Top Secret, with a code word indicating that it was associated with some very sensitive project.  The secretary asked this author what to do with the obviously misplaced document.  This author could not think of anything better than to report it to his supervisor, whom he knew to have the appropriate clearance.  Result – MAJOR FREAKOUT, and a change in policy.

 

The problem at this point was a large flow of data from a more sensitive area to a less sensitive area.  Here is the question: this was only one document out of tens of thousands.  How important is it to avoid such a freak accident?

 

If one silly story will not do the job, let’s try for two with another story from this author’s time in Dayton, Ohio.  At the time an adult movie-house (porn theater) was attempting to reach a wider audience, so it started showing children’s movies during the day.  This author attended the first showing.  While the movie was G rated, unfortunately nobody told the projectionist that the previews of coming attractions could not be X rated.  The result was a lot of surprised parents and amazed children.  There was no second showing for children.

 

 

The Biba Integrity Model

The Biba integrity model is similar to the Bell-LaPadula model, except that it is designed to address issues of integrity of data.  Confidentiality addresses the prevention of unauthorized disclosure of data; integrity addresses the prevention of unauthorized modification of data.  The student should note the similarities of the two models.
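
The text does not spell out the Biba rules, but in the model’s usual statement they are the duals of the Bell-LaPadula properties: a subject may not read objects of lower integrity and may not write objects of higher integrity.  A minimal sketch, with illustrative level names:

    # A sketch of the Biba rules as the dual of Bell-LaPadula.
    from enum import IntEnum

    class Integrity(IntEnum):
        LOW = 0       # e.g., data from an untrusted source
        MEDIUM = 1
        HIGH = 2      # e.g., system configuration data

    def may_read(subject: Integrity, obj: Integrity) -> bool:
        """Simple Integrity Property: read only if I(O) >= I(S);
        a subject must not be contaminated by lower-integrity data."""
        return obj >= subject

    def may_write(subject: Integrity, obj: Integrity) -> bool:
        """*-Integrity Property: write only if I(S) >= I(O);
        a low-integrity subject must not corrupt high-integrity data."""
        return subject >= obj

    print(may_write(Integrity.LOW, Integrity.HIGH))   # False -- no writing up
    print(may_read(Integrity.HIGH, Integrity.LOW))    # False -- no reading down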

 

Design of a Trusted Operating System

Here we face the immediate problem of software quality.  It is almost impossible to create a complete and consistent set of requirements for any large software system, and even more difficult to ensure that the software system adheres to that set of requirements and no other.  Now we are asked to make an operating system adhere to a set of requirements specifying security – perhaps both the Bell-LaPadula model and the Biba integrity model.  This is quite a chore.  The difficulty of the chore does not excuse us from trying it.

 

The main difficulty in ensuring the security of an operating system is the fact that the operating system is interrupt-driven.  Imagine an ordinary user program, perhaps one written for a class project.  One can think of this as a deterministic system (although it might not be) in that the program does only what the instructions say to do.  Admittedly, what the instructions say to do may be different from what the author of the program thinks they say to do, but that is always a problem.

 

The main job of an operating system is to initialize the execution environment of the computer and then enter an idle state, just waiting for interrupts.  Its job is to respond to each of the interrupts according to a fixed priority policy and to execute the program associated with the interrupt.  The association of programs with interrupts is established when the execution environment is set up; for further study consult a book on computer architecture.

 

When an interrupt causes the operating system to suspend the execution of one program and initiate the execution of another program, the operating system performs a context switch, basically loading the new program and establishing its execution environment.  It is this context switch that introduces some indeterminacy into the operating system.  Another concern is that the time and resources taken by the context switch itself are part of the overhead of the operating system – cost to the executing program that does not directly benefit the executing program.  Thus, there is pressure to make each context switch as efficient as possible.  Introducing security code into the context switch slows it down.

 

There are three main services of operating systems that interact with security.

      User Interface                  authenticates a user, allows him access to the system,
                                                and handles all interaction with the user.

      Service Management      this allows a user access to many of the low-level services
                                                of the operating system.

      Resource Allocation        this allocates resources, such as memory, I/O devices, time
                                                on the CPU, etc.

 

In a trusted operating system, designed from the beginning with security in mind, each of these main services is written as a distinct object with its own security controls, especially user authentication, least privilege (don’t let a user do more than is necessary), and complete mediation (verifying that the input is of the expected form and adheres to the “edit” rules).  Here the UNIX operating system shows its major flaw – users are either not trusted or, being super-users, given access to every resource.

Consider figure 5-11 on page 255 of the textbook.  This shows the above strategy taken to its logical and preferable conclusion.  We have postulated that the resource allocator have a security front-end to increase its security.  Each of the resources allocated by this feature should be viewed also as an object – a data structure with software to manage its access.

 


The bottom line here is that computers are fast and memory is cheap.  A recent check
(10/31/2003) of the Gateway web site found a server configured with a 3.08 GHz processor, 512 KB of cache memory, and 4GB of main memory.  We might as well spend a few of these inexpensive resources to do the job correctly.

 

Some of the features of a security-oriented operating system are obvious, while other features require a bit of explanation.  We discuss those features that are not obvious.

 

Mandatory access control (MAC) refers to the granting of access by a central authority, not by individual users.  If I have SECRET data to show you and you do not have a SECRET clearance, I cannot of my own volition grant you a SECRET clearance (although I have actually seen it done – I wonder what the Defense Department would think of that).  MAC should exist along with discretionary access control (DAC) in that objects not managed by the central authority can be managed by the individual user owning them.

 

Object reuse protection refers to the complete removal of an object’s contents before it is returned to the object pool for reuse.  The simplest example of this is the protection of files.  What happens when a file is deleted?  In many operating systems, the file allocation table is modified to no longer reference the file and to place its data sectors on the free list as available for reuse.  Note that the data sectors are not overwritten, so the original data remain.  In theory, I could declare a large file and, without writing anything to it, just read what is already there, left over from when its sectors were used by a number of other files, now deleted.

 

Object reuse protection also has a place in large object-oriented systems.  In these systems, the creation of some objects is often very computationally intense.  This leads to the practice of pooling the discarded objects rather than actually destroying the object and releasing the memory when the object is no longer in use.  A program attempting to create a new object of the type in the pool will get an object already created if one exists in the pool.  This leads to more efficient operation, but also introduces a security hole.
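
One remedy is to scrub each object as it is returned to the pool.  The following Python sketch illustrates the idea; the class names and sizes are invented for the example.

    # An object pool with object reuse protection: contents are
    # overwritten when an object is released, so the next borrower
    # cannot read the previous user's data.
    class PooledBuffer:
        def __init__(self, size):
            self.data = bytearray(size)   # assume creation is expensive

    class BufferPool:
        def __init__(self):
            self._free = []

        def acquire(self, size=1024):
            # Hand out a pooled object if one exists (size is ignored
            # in this sketch), otherwise create a new one.
            return self._free.pop() if self._free else PooledBuffer(size)

        def release(self, buf):
            buf.data[:] = bytes(len(buf.data))   # scrub before pooling
            self._free.append(buf)

    pool = BufferPool()
    b = pool.acquire()
    b.data[:6] = b"secret"
    pool.release(b)
    c = pool.acquire()            # the same object, reused
    print(bytes(c.data[:6]))      # b'\x00\x00\x00\x00\x00\x00'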

 

Audit log management refers to the practice of logging all events with potential security impact, protecting that log from unauthorized access and modification, and creation of procedures and software to examine the log periodically and analyze it for irregularities.  A security log is of no use if nobody looks at it.

 

Intrusion detection refers to the creation and use of system software that scans all activity looking for unusual events.  Such software is hard to write, but one should try.  For example, this author has a 128 MB flash drive that he occasionally attaches to his computer at work via the USB port.  The intrusion detection software always reports that the number of hard drives on the system has changed and says to call the administrator if this was not an intentional act.

 


Kernelized Design

A kernel is the part of an operating system that performs low-level functions.  This is distinct from the high-level services part of the operating system that does things such as handle shared printers, provides for e-mail and Internet access, etc.  The kernel of an operating system is often called the nucleus, and rarely the core.  In an operating system designed with security in mind there are two kernels: the security kernel and the operating system kernel, which includes the security kernel.

 

The security kernel is responsible for enforcing the security mechanisms of the operating system, including the handling of most of the functions normally allocated to the operating system kernel itself, as most of these low-level facilities have impact on security.

 

The reference monitor is one of the most important parts of the security kernel.  This is the process that controls access to all objects, including devices, files, memory, interprocess communication, and other objects.  Naturally, the reference monitor must monitor access to itself and include protection against its being modified in an unauthorized way.

 

 

The Trusted Computing Base (TCB)

The trusted computing base is the name given to the part of the operating system used to enforce security policy.  Naturally, this must include the security kernel.  Functions of the TCB include the following:
      1)   hardware management, including processors, memory, registers, and I/O devices,
      2)   process management, including process scheduling,
      3)   interrupt handling, including management of the clocks and timing functions, and
      4)   management of primitive low-level I/O operations.

 

Virtualization is one of the more important tools of a trusted operating system.  By this term we mean that the operating system emulates a collection of the computer system’s sensitive resources.  Obviously virtualized objects must be supported by real objects, but the idea is that these real objects can be managed via the virtual objects.

 

As an example of a virtualized object, consider a shared printer.  The printer is a real object to which it is possible to print directly.  Simultaneous execution of several programs, each with direct access to the printer, would yield an output with the results of each program intermixed – a big mess.  In fact, the printer is virtualized and replaced by the print spooler, which is the only process allowed to print directly to the printer.  Each process accessing the virtualized printer is really accessing the print spooler, which writes the data to a disk file associated with the process.  When the process is finished with the printer, the spooler closes the file and queues it up to be printed on the real printer.
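
The spooler pattern can be sketched in a few lines of Python; all names here are illustrative, and simple lists stand in for the per-process spool files.

    # A minimal sketch of printer virtualization via a print spooler.
    import queue

    class Spooler:
        def __init__(self, printer):
            self.printer = printer    # the one real device
            self.jobs = queue.Queue()

        def open_job(self):
            return []                 # stand-in for a per-process spool file

        def close_job(self, job):
            self.jobs.put(job)        # queue the finished file for printing

        def run(self):
            # Only the spooler writes to the real printer, one job at a time.
            while not self.jobs.empty():
                for line in self.jobs.get():
                    self.printer(line)

    spool = Spooler(print)
    job_a, job_b = spool.open_job(), spool.open_job()
    job_a.append("process A, page 1")   # writes arrive interleaved...
    job_b.append("process B, page 1")
    job_a.append("process A, page 2")
    spool.close_job(job_a)
    spool.close_job(job_b)
    spool.run()                         # ...but the output is not intermixed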

 

A virtual machine is a collection of hardware facilities, each of which could be real or simulated in software.  One common feature is virtual memory, in which each process appears to have access to all of the memory of the computer, with the possible exception of memory allocated to the operating system.

 

Assurance in Trusted Operating Systems

For an operating system designed to be secure, assurance is the mechanism for convincing others that the security model is correct, as are the design and implementation of the OS.  How does one gain confidence that an operating system should be trusted?  One way is by gaining confidence that a number of the more obvious security vulnerabilities have been addressed in the design of the system.

 

Input/Output processing represents one of the larger vulnerabilities in operating systems.  There are a number of reasons for the vulnerability of this processing, including
      1)   the fact that I/O processing is interrupt-driven,
      2)   the fact that I/O processing is often performed by independent hardware systems,
      3)   the complexity of the I/O code itself, and
      4)   the desire to have I/O processing bypass the security monitors for reasons of efficiency.

 

Methods for gaining assurance include testing by the creator of the software, formal testing by a unit that is independent of the software development process, formal verification (when possible – it is very difficult), and formal validation by an outside vendor.  The author of these notes had been part of a software V&V (verification and validation) team, assigned to be sure that the code was written correctly and that it adhered to the requirements.

 

Formal Evaluation

We now turn to formal evaluation of an operating system against a published set of criteria.  One of the earliest attempts at formal evaluation was called the Trusted Computer System Evaluation Criteria (TCSEC), known more informally as the “Orange Book” because of the color of its cover.  It was published in 1983 by the U. S. Department of Defense.  The TCSEC defined a number of levels of assurance.


      D         – basically, no protection.  Any system can get this level.
      C1       – discretionary access control
      C2       – controlled access protection (a finer-grained discretionary access control)
      B1        – labeled security protection
                     Each object is assigned a security level and mandatory access controls are used.
      B2        – structured protection.  This is level B1 with formal testing of a verified design.
      B3        – security domains.  The security kernel must be small and testable.
      A1       – verified design.  A formal design exists and has been thoroughly examined.

 

The TCSEC was a good document for its day, but it was overtaken by the arrival of the Internet and connectivity to the Internet.  Several operating systems were rated as C1 or better, provided that the system was running without connection to the Internet.

 

More recently, the U. S. Government published the Combined Federal Criteria, followed in 1998 by the Common Criteria.  The Common Criteria define seven Evaluation Assurance Levels, with higher levels providing greater assurance; the top level has been characterized as “ridiculously secure”.  The book has a discussion of these criteria, but few details.