EROS (microkernel)

From Wikipedia, the free encyclopedia
EROS
Developer: University of Pennsylvania, Johns Hopkins University, The EROS Group, LLC
Written in: C
OS family: Capability-based
Working state: Discontinued
Initial release: 1991
Latest release: Final / 2005
Marketing target: Research
Available in: English
Update method: Compile from source code
Platforms: IA-32
Kernel type: Real-time microkernel
Default user interface: Command-line interface
Preceded by: KeyKOS
Succeeded by: CapROS

Extremely Reliable Operating System (EROS) is an operating system developed beginning in 1991 at the University of Pennsylvania, and later at Johns Hopkins University and The EROS Group, LLC. Features include automatic data and process persistence, some preliminary real-time support, and capability-based security. EROS was purely a research operating system and was never deployed in real-world use. As of 2005, development stopped in favor of a successor system, CapROS.

Key concepts

The overriding goal of the EROS system (and its relatives) is to provide strong support at the operating system level for the efficient restructuring of critical applications into small communicating components. Each component can communicate with the others only through protected interfaces, and is isolated from the rest of the system. A protected interface, in this context, is one that is enforced by the lowest-level part of the operating system, the kernel. The kernel is the only part of the system that can move information from one process to another; it also has complete control of the machine and (if properly constructed) cannot be bypassed. In EROS, the kernel-provided mechanism by which one component names and invokes the services of another is the capability, exercised through inter-process communication (IPC). By enforcing capability-protected interfaces, the kernel ensures that all communications to a process arrive via an intentionally exported interface. It also ensures that no invocation is possible unless the invoking component holds a valid capability to the invoked component. Protection in capability systems is achieved by restricting the propagation of capabilities from one component to another, often through a security policy termed confinement.
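
The shape of such an invocation can be illustrated with a short sketch in C (the language EROS is written in). The names used here (cap_t, message, cap_invoke, OP_READ_BLOCK) are hypothetical and do not reproduce the actual EROS system-call interface; the sketch only shows that a client names a service purely by a capability it already holds, and that the kernel refuses the request if that capability is not valid.

```c
/*
 * Minimal sketch of a capability invocation from the client's side.
 * NOT the real EROS API: cap_t, message, and cap_invoke are hypothetical
 * stand-ins used only to illustrate the idea.
 */
#include <stdint.h>
#include <string.h>

typedef uint32_t cap_t;            /* opaque handle, meaningful only to the kernel */

typedef struct {
    uint32_t opcode;               /* which operation of the exported interface    */
    uint8_t  data[64];             /* small inline payload                         */
} message;

/* Hypothetical kernel entry point: deliver 'req' to whatever object 'target'
 * designates and wait for 'reply'.  If the caller does not hold a valid
 * capability, the kernel returns an error instead of performing the call.    */
extern int cap_invoke(cap_t target, const message *req, message *reply);

enum { OP_READ_BLOCK = 1 };        /* operation code defined by the (hypothetical) server */

int read_block(cap_t disk_cap, uint64_t block_no, uint8_t out[64])
{
    message req = { .opcode = OP_READ_BLOCK };
    message rsp;

    memcpy(req.data, &block_no, sizeof block_no);   /* encode the request */

    if (cap_invoke(disk_cap, &req, &rsp) != 0)
        return -1;                 /* invalid capability, or the server refused */

    memcpy(out, rsp.data, sizeof rsp.data);         /* copy the returned block */
    return 0;
}
```

Note that read_block has no way to reach the disk service except through disk_cap; a component that was never given that capability cannot even name the service, which is the property the kernel enforces.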

Capability systems naturally promote component-based software structure. This organizational approach is similar to the programming language concept of object-oriented programming, but occurs at larger granularity and does not include the concept of inheritance. When software is restructured in this way, several benefits emerge:

  • The individual components are most naturally structured as event loops, an organization sketched in code after this list. Examples of systems that are commonly structured this way include aircraft flight control systems (see also DO-178B, Software Considerations in Airborne Systems and Equipment Certification) and telephone switching systems (see 5ESS switch). Event-driven programming is chosen for these systems mainly because of simplicity and robustness, which are essential attributes in life-critical and mission-critical systems.
  • Components become smaller and individually testable, which helps to more readily isolate and identify flaws and bugs.
  • The isolation of each component from the others limits the scope of any damage that may occur when something goes wrong or the software misbehaves.
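
The event-loop organization mentioned in the first point above can be sketched as follows. This is not EROS code; server_recv and server_reply are hypothetical stand-ins for the kernel's IPC primitives, and only the overall shape of the component is the point: block for the next request, dispatch on the operation, reply, repeat.

```c
/*
 * Sketch of an event-loop component in a capability system.
 * server_recv()/server_reply() are hypothetical IPC primitives.
 */
#include <stdint.h>
#include <string.h>

typedef uint32_t cap_t;

typedef struct {
    uint32_t opcode;
    uint8_t  data[64];
} message;

extern int  server_recv(message *req, cap_t *reply_cap);   /* block for the next request */
extern void server_reply(cap_t reply_cap, const message *rsp);

enum { OP_GET = 1, OP_SET = 2 };

int main(void)
{
    uint8_t stored[64] = { 0 };    /* the component's entire private state */
    message req, rsp;
    cap_t   reply_cap;

    for (;;) {                     /* the event loop: handle one request at a time */
        if (server_recv(&req, &reply_cap) != 0)
            continue;

        switch (req.opcode) {
        case OP_GET:
            rsp.opcode = 0;                            /* success */
            memcpy(rsp.data, stored, sizeof stored);
            break;
        case OP_SET:
            rsp.opcode = 0;
            memcpy(stored, req.data, sizeof stored);
            break;
        default:
            rsp.opcode = (uint32_t)-1;                 /* unknown operation */
            break;
        }
        server_reply(reply_cap, &rsp);
    }
}
```

Because such a component holds no capabilities beyond those it was explicitly given, everything it can affect is visible in this one loop, which is what makes components of this kind small enough to test and audit individually.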

Collectively, these benefits lead to measurably more robust and secure systems. The Plessey System 250 was a system originally designed for use in telephony switches, whose capability-based design was chosen specifically for reasons of robustness.

In contrast to many earlier systems, capabilities are the only mechanism for naming and using resources in EROS, making it what is sometimes referred to as a pure capability system. By comparison, IBM i is an example of a commercially successful capability system, but it is not a pure capability system.

Pure capability architectures are supported by well-tested and mature mathematical security models. These have been used to formally demonstrate that capability-based systems can be made secure if implemented correctly. The so-called "safety property" has been shown to be decidable for pure capability systems (see Lipton). Confinement, which is the fundamental building block of isolation, has been formally verified to be enforceable by pure capability systems,[1] and was reduced to practical implementation by the EROS constructor and the KeyKOS factory. No comparable verification exists for any other primitive protection mechanism. There is a fundamental result in the literature showing that safety is mathematically undecidable in the general case (see HRU, but note that it is provable for an unbounded set of restricted cases[2]). Of greater practical importance, safety has been shown to be false for all of the primitive protection mechanisms shipping in current commodity operating systems. Safety is a necessary precondition to successful enforcement of any security policy. In practical terms, this result means that it is not possible in principle to secure current commodity systems, but it is potentially possible to secure capability-based systems provided they are implemented with sufficient care. Neither EROS nor KeyKOS has ever been successfully penetrated, and their isolation mechanisms have never been successfully defeated by any inside attacker, but it is not known whether the two implementations were careful enough. One goal of the Coyotos project was to demonstrate, by applying software verification techniques, that component isolation and security had definitively been achieved.
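
The "safety property" discussed above can be stated concretely. The following is a paraphrase of the access-matrix formulation of Harrison, Ruzzo, and Ullman (not a quotation from the cited papers): a protection system is a set of commands that add or remove rights in an access matrix P, where P[s,o] is the set of rights subject s holds over object o, and an initial configuration Q0 is unsafe for a right r if some reachable configuration leaks r into a cell that did not previously contain it.

```latex
% Safety question, paraphrased from the HRU access-matrix model.
% Q \vdash_c Q' means command c takes configuration Q to Q';
% \vdash^{*} is any finite sequence of commands.
\[
  \mathrm{unsafe}(Q_0, r) \;\iff\;
  \exists\, Q, Q', c :\quad
  Q_0 \vdash^{*} Q,\quad Q \vdash_{c} Q',\quad
  \exists\, (s,o) :\; r \in P_{Q'}[s,o] \;\wedge\; r \notin P_{Q}[s,o].
\]
```

HRU showed this question undecidable for the general model, while Lipton and Snyder showed it decidable in linear time for the take-grant model, a restricted system in which rights propagate only along take and grant edges; that is the sense in which safety is decidable for pure capability systems.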

The L4.sec system, which is a successor to the L4 microkernel family, is a capability-based system, and has been significantly influenced by the results of the EROS project. The influence is mutual, since the EROS work on high-performance invocation was motivated strongly by Jochen Liedtke's successes with the L4 microkernel family.

History

The primary developer of EROS was Jonathan S. Shapiro. He was also the driving force behind Coyotos, which was an "evolutionary step"[3] beyond the EROS operating system.[4]

The EROS project started in 1991 as a clean-room reconstruction of an earlier operating system, KeyKOS. KeyKOS was developed by Key Logic, Inc., and was a direct continuation of work on the earlier Great New Operating System In the Sky (GNOSIS) system created by Tymshare, Inc. The circumstances surrounding Key Logic's demise in 1991 made licensing KeyKOS impractical. Since KeyKOS did not run on popular commodity processors in any case, the decision was made to reconstruct it from the publicly available documentation.

By late 1992, it had become clear that processor architecture had changed significantly since the introduction of the capability idea, and it was no longer obvious that component-structured systems were practical. Microkernel-based systems, which similarly favor large numbers of processes and IPC, were facing severe performance challenges, and it was uncertain whether these could be successfully resolved. The x86 architecture was clearly emerging as the dominant architecture, but the expensive user/supervisor transition latency on the 386 and 486 presented serious challenges for process-based isolation. The EROS project was turning into a research effort, and moved to the University of Pennsylvania to become the focus of Shapiro's dissertation research. By 1999, a high-performance implementation for the Pentium processor had been demonstrated that was directly performance-competitive with the L4 microkernel family, which is known for its exceptional speed in IPC. The EROS confinement mechanism had been formally verified, in the process creating a general formal model for secure capability systems.

In 2000, Shapiro joined the faculty of Computer Science at Johns Hopkins University. At Hopkins, the goal was to show how to use the facilities provided by the EROS kernel to construct secure and defensible servers at application level. Funded by the Defense Advanced Research Projects Agency and the Air Force Research Laboratory, EROS was used as the basis for a trusted window system,[5] a high-performance, defensible network stack,[6] and the beginnings of a secure web browser. It was also used to explore the effectiveness of lightweight static checking.[7] In 2003, some very challenging security issues were discovered[8] that are intrinsic to any system architecture based on synchronous IPC primitives (notably including EROS and L4). Work on EROS halted in favor of Coyotos, which resolved these issues.[citation needed]

As of 2006, EROS and its successors are the only widely available capability systems that run on commodity hardware.

Status

Work on EROS and Coyotos by the original group has halted, but a successor system exists.[4] CapROS (Capability Based Reliable Operating System), a continuation of EROS, is an open-source, commercially oriented operating system.[9]


References

  1. ^ Shapiro, Jonathan S.; Weber, Samuel (October 29, 1999). Verifying the EROS Confinement Mechanism (PDF). 2000 IEEE Symposium on Security and Privacy. Berkeley, CA, USA. doi:10.1109/SECPRI.2000.848454.
  2. ^ Lee, Peter. "Proof-Carrying Code". Archived from the original on September 22, 2006.
  3. ^ Shapiro, Jonathan (April 2, 2006). "Differences Between Coyotos and EROS: A Quick Summary". Archived from the original on 2012-07-31.
  4. ^ a b Shapiro, Jonathan S. (April 7, 2009). "Status of Coyotos". coyotos-dev (Mailing list). Archived from the original on July 24, 2014. Retrieved 16 March 2022. Active work on Coyotos stopped several months ago, and is unlikely to resume.
  5. ^ Shapiro, Jonathan S.; Vanderburgh, John; Northup, Eric; Chizmadia, David (2004). Design of the EROS Trusted Window System (PDF). 13th USENIX Security Symposium. San Diego, CA, USA.
  6. ^ Sinha, Anshumal; Sarat, Sandeep; Shapiro, Jonathan S. (2004). Network Subsystems Reloaded: A High-Performance, Defensible Network Subsystem (PDF). 2004 USENIX Annual Technical Conference. Boston, MA, USA.
  7. ^ Chen, Hao; Shapiro, Jonathan S. "Using Build-Integrated Static Checking to Preserve Correctness Invariants" (PDF). Archived from the original (PDF) on March 3, 2016.
  8. ^ Shapiro, Jonathan S. (2003). Vulnerabilities in Synchronous IPC Designs (PDF). 2003 Symposium on Security and Privacy. Berkeley, CA, USA. doi:10.1109/SECPRI.2003.1199341.
  9. ^ Chakraborty, Pinaki (2010). "Research purpose operating systems – a wide survey". GESJ: Computer Science and Telecommunications. 3 (26). ISSN 1512-1232.

Journals

  1. Lipton, R. J.; Snyder, L. (July 1977). "A Linear Time Algorithm for Deciding Subject Security". Journal of the ACM. 24 (3): 455–464. doi:10.1145/322017.322025. S2CID 291367.
  2. Harrison, Michael A.; Ruzzo, W. L.; Ullman, Jeffrey D. (August 1976). "Protection in Operating Systems". Communications of the ACM. 19 (8): 461–471. doi:10.1145/360303.360333. S2CID 5900205.