WO2000039677A1 - Method and apparatus for providing operating system scheduling operations - Google Patents
Method and apparatus for providing operating system scheduling operations
- Publication number
- WO2000039677A1
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- thread
- threads
- list
- traversal
- computer
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/48—Program initiating; Program switching, e.g. by interrupt
- G06F9/4806—Task transfer initiation or dispatching
- G06F9/4843—Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
- G06F9/4881—Scheduling strategies for dispatcher, e.g. round robin, multi-level priority queues
Definitions
- The present invention relates generally to television set-top boxes and more particularly to computer-implemented kernel operations in a set-top box television environment.
- The set-top box television environment is a real-time environment which requires operations to be performed quickly.
- Computer-implemented threads execute upon a central processing unit (CPU) in order to perform many functions for the set-top box, such as, but not limited to, providing video and audio to a user.
- Threads should be scheduled so as to optimize the time required for their functions to be achieved.
- The present invention is directed to this need and other needs of a set-top box environment.
- A computer-implemented method and apparatus are provided for scheduling threads contained in a thread list. At least two of the threads have a priority indicative of scheduled executions for the two threads.
- The present invention performs the following steps during a traversal through the thread list: modifying the scheduled execution of at least two threads which have equivalent priorities; performing deadline processing for at least one of the threads; and checking for a predetermined error condition of at least one of the threads.
- Figure 1 is a perspective view of a set-top box system;
- Figure 2 is a block diagram depicting the various exemplary programs operating within the set-top box system of Figure 1;
- Figure 3 is a block diagram depicting a novel three-tiered scheduling system of the present invention;
- Figure 4 is a block diagram depicting processing of a thread list in a single pass; and
- Figure 5 is a flow chart depicting the operational steps for making a single pass through a thread list.
- Figure 1 shows an exemplary set-top box 20 connected to television 24 via cable 28.
- Cable 32 provides set-top box 20 with broadcast analog, broadcast digital, and interactive digital transmissions.
- Set-top box 20 contains application and operating system software in order to provide an advanced interactive television environment.
- The operating system is utilized by set-top box 20 to, for example, provide interfaces between the application software and the various devices used by the set-top box 20.
- Figure 2 depicts the software components of a set-top box.
- The multitasking operating system of the set-top box addresses the high-performance demands of media-centric, real-time applications being delivered through a set-top box.
- The operating system provides an open, scalable platform for developing and delivering multimedia content to consumers across broadcast and client/server networks.
- The software architecture for the operating system includes layers of interconnected modules designed to minimize redundancy and optimize multimedia processing in an interactive, network setting.
- The layers of the architecture include a hardware abstraction layer 40, a core layer 42, an application support layer 44 and an application layer 46.
- Each hardware device associated with the multimedia client is abstracted into a software device module that resides in the hardware abstraction layer 40.
- Each of these device modules is responsible for media delivery and manipulation between a hardware device and the remainder of the operating system.
- Application program interfaces (APIs) supported by each device module separate the hardware dependencies of each device from the portable operating system facilities, and thereby mask the idiosyncrasies of different hardware implementations.
- A kernel 48 and memory manager 50 residing in the core layer 42 provide the base functionality needed to support an application.
- a fully preemptive, multithreaded, multitasking kernel is designed to optimize both set-top memory footprint and processing speed. Since the operating system resides on consumer units, it has been designed to exist in a ROM-based system with a very small footprint (e.g., 1 megabyte). In addition, the kernel has also been created to take advantage of 32-bit Reduced Instruction Set Computer (RISC) processors which enable high-speed transmission, manipulation and display of complex media types.
- A process is an entity which owns system resources assigned to the process.
- A thread is a unit of work or string of instructions to be executed.
- A process can have more than one thread and in such a case is considered "multithreaded."
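- As a minimal illustration of this relationship (not taken from the patent), the sketch below models a process that owns system resources and contains one or more threads; the structure layout and field names are assumptions made purely for exposition.

```c
#define MAX_THREADS 8

/* A thread: a unit of work with its own scheduling priority. */
struct os_thread {
    int priority;            /* scheduling priority of this unit of work */
    void (*entry)(void *);   /* the string of instructions to execute    */
};

/* A process: owns the resources; may contain several threads. */
struct os_process {
    void            *resources;             /* memory, handles, etc. owned by the process */
    int              thread_count;          /* more than one means "multithreaded"        */
    struct os_thread threads[MAX_THREADS];  /* units of work belonging to the process     */
};
```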
- Memory manager 50 provides an efficient memory allocation scheme, enabling optimal performance from limited set-top memory resources.
- The core layer 42 provides an integrated event system 52 and a standard set of ANSI C utility functions.
- Built on top of the core layer 42 is an application support layer 44. This set of support modules provides higher-level processing functions and application services.
- At the highest layer, the application layer 46, at least one application, referred to as a resident application, is usually executing at all times on the set-top box.
- The application layer 46 also provides the necessary capabilities for authoring applications and for managing resources (e.g., the tuner) between the resident application and other background applications residing on the set-top box.
- Figure 3 depicts the three-tiered scheduling system which the operating system scheduler 100 utilizes.
- A round robin scheduler 104 is utilized for threads that share the same priority level. If, for example, there are three threads at priority five and all threads are ready to run, then each thread in turn is given a short time slice.
- This "soft real time" method has the advantage that it allows separately authored multi-threaded applications to execute on the same system with no knowledge of each other.
- Middle tier 106 is a preemptive scheduler 108. In a preemptive preference scheduler, a priority level is not allowed to be violated. For example, if a thread of priority three and a thread of priority four are both ready to execute, the thread of priority three will execute.
- This scheduling approach has the benefit that it favors the deadline of the most critical ready thread. Preemptive preference scheduling is commonly used in "hard" real time systems where there may be very serious costs involved with missing critical deadlines.
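- A minimal sketch of this preference rule is shown below, assuming a ready list kept sorted by priority and that a lower number means a more critical thread; the structure and function names are illustrative and are not taken from the patent.

```c
#include <stddef.h>

struct thread {
    int            priority;   /* 1 = most critical, 31 = least critical  */
    int            ready;      /* nonzero when the thread is ready to run */
    struct thread *next;
};

/* With the list kept sorted by priority, the first ready thread encountered
 * is the most critical one, so priority three is always chosen over four. */
struct thread *pick_next(struct thread *list)
{
    for (struct thread *t = list; t != NULL; t = t->next) {
        if (t->ready)
            return t;
    }
    return NULL;   /* no ready thread; an idle thread normally prevents this */
}
```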
- Top tier 110 permits applications and drivers to implement custom scheduling algorithms 112 by using calls that allow a thread to change its own or another's priority level.
- The ability to change priorities supports approaches including, but not limited to, priority inheritance and priority inversion.
- In priority inheritance, a thread inherits the priority of a more critical thread that may be waiting for a resource held by a less critical thread.
- Another type of priority inheritance allows a monitor thread to provide services to other threads while inheriting the client threads' priority on a per request basis.
- In priority inversion, the priority levels are briefly inverted (that is, high becomes low priority) to allow a less critical thread to run long enough to release a resource requested by a more critical thread. This is typically triggered when a high priority thread detects that a resource it needs is being held by a low priority thread that is not being provided any run time because of a third thread with a medium priority.
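- The sketch below illustrates basic priority inheritance under the assumption that a lower number means a more critical thread; the types and helper names (pi_thread, block_on, release_resource) are invented for illustration and are not the patent's API.

```c
struct pi_thread {
    int priority;        /* current (effective) priority; 1 = most critical */
    int base_priority;   /* priority assigned before any inheritance        */
};

struct resource {
    struct pi_thread *holder;   /* 0 when the resource is free */
};

/* A more critical waiter donates its priority to the current holder. */
void block_on(struct resource *r, struct pi_thread *waiter)
{
    if (r->holder && waiter->priority < r->holder->priority)
        r->holder->priority = waiter->priority;
    /* ... the waiter would then be suspended until the resource is freed ... */
}

/* Releasing the resource restores the holder's original priority. */
void release_resource(struct resource *r)
{
    if (r->holder) {
        r->holder->priority = r->holder->base_priority;
        r->holder = 0;
    }
}
```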
- The present invention includes other custom scheduling methods that may be used, for example, rate-monotonic scheduling, least slack-time scheduling and earliest deadline scheduling.
- Figure 4 depicts the components and the operations associated with traversing a thread list 130 within one pass 134.
- The present invention, preferably via a thread list traverser 138, traverses thread list 130 within one pass 134 in order to perform multiple scheduling operations in that single pass, such as round robin scheduling 104, deadline processing 138, and error checking 142.
- Within preemptive kernel 146, threads of different priority as well as threads of equal priority each attempt to obtain a time slice of processing.
- The present invention utilizes a fully preemptive kernel, which indicates, for example but not limited to, that a user can press a key at any time and, no matter what is being executed on the set-top box, a thread to service the key-press event will interrupt whatever thread is currently executing so that it may run.
- Round robin scheduling 104 is efficiently performed within one pass 134 by rotating the priorities of threads which have equal priorities.
- Priority rotating 150 preferably utilizes bubble sorting 154 in order to rotate threads of equal priority within one pass 134.
- One pass preferably signifies that the top of the list is not revisited and that traversals back to prior threads within the list are minimized.
- Priority rotating 150 internally reestablishes the threads of equal priority so that they are rotated. For example, but not limited to, four threads may all operate at priority level nine.
- Priority rotating approach 150 internally establishes the threads' priority levels at 9.1, 9.2, 9.3, and 9.4. However, for each pass, the priority levels are rotated such that priority levels 9.1, 9.2, 9.3, and 9.4 are switched around via bubble sorting 154. The rotation is effected such that the thread which is on the bottom (that is, priority level 9.4) becomes priority level 9.1. The thread which was at priority level 9.1 becomes priority level 9.2, and so on for the two remaining threads in this example.
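- The rotation of one such equal-priority group can be sketched as follows, assuming each thread carries a major priority level and a minor sub-level (e.g., 9.1 through 9.4) and that the bottom thread is promoted to the top sub-level on every pass; the structure and function names are illustrative only.

```c
#include <stdio.h>

#define GROUP 4

struct rr_thread {
    int major;   /* e.g. 9                                      */
    int minor;   /* e.g. 1..4 within the equal-priority group   */
};

/* Rotate one group of equal-priority threads: 9.4 -> 9.1, 9.1 -> 9.2, ... */
void rotate_group(struct rr_thread t[], int n)
{
    for (int i = 0; i < n; i++)
        t[i].minor = (t[i].minor % n) + 1;
}

int main(void)
{
    struct rr_thread g[GROUP] = { {9, 1}, {9, 2}, {9, 3}, {9, 4} };
    rotate_group(g, GROUP);
    for (int i = 0; i < GROUP; i++)
        printf("thread %d now at %d.%d\n", i, g[i].major, g[i].minor);
    return 0;
}
```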
- Deadline processing 138 performs time-out checking 158 to check for threads 162 whose time-out 166 has expired.
- Threads 162 include in their data structure a time-out value 166 which indicates how long a thread should wait in thread list 130 for an event.
- Thread list traverser 138 checks within one pass 134 whether a thread's time-out value has been reached. If the time-out value 166 has been reached, then thread list traverser 138 "wakes up" the thread so that the thread can do the proper processing for its time-out.
- The present invention allows events to be scheduled to occur at a specified future time.
- This future time may be specified down to a very fine unit of time, such as, but not limited to, on the order of a few hundred billionths of a second. This typically is used for time critical applications.
- The present invention allows each thread to have either a coarse time-out 180 or a fine time-out 184.
- An advantage of a coarse time-out is that it is implemented very efficiently and typically uses fewer CPU (central processing unit) cycles to support.
- This efficient implementation takes advantage of the scheduler, which is committed to making a single pass 134 through thread list 130 preferably at every scheduler interrupt, which preferably occurs every 10 milliseconds. While thread list 130 is being traversed, time-out checking 158 can be performed in order to check for an expired time-out 166 with a single compare operation.
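- A minimal sketch of that single-compare check is given below, assuming each thread stores an absolute expiry tick, that a thread with no pending time-out stores the largest possible tick (the "forever" value discussed later), and that the scheduler interrupt supplies the current tick; wake_thread is a stub standing in for the kernel's wake-up service, and all names are illustrative.

```c
#include <stdint.h>

struct to_thread {
    uint32_t          timeout_tick;  /* absolute tick at which the wait expires;
                                        UINT32_MAX ("forever") when none is pending */
    struct to_thread *next;
};

/* Placeholder for the kernel service that makes the thread runnable again. */
static void wake_thread(struct to_thread *t) { (void)t; }

/* Called while the list is traversed at each scheduler interrupt (roughly
 * every 10 ms): one compare per thread decides whether its time-out expired. */
void check_timeouts(struct to_thread *list, uint32_t now_tick)
{
    for (struct to_thread *t = list; t != 0; t = t->next) {
        if (t->timeout_tick <= now_tick)
            wake_thread(t);
    }
}
```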
- Scheduler 100 preferably makes a single pass 134 through thread list 130 once every 10 milliseconds. While this pass through the thread list is being performed, scheduler 100 also performs error checking 142 for such error conditions as, but not limited to, stack overflow and stack underflow.
- Threads 162 each have a priority level 190. Threads 162 may preferably assume a priority level in the range of 1-31. There may be an arbitrary number of threads, and multiple threads can share the same priority level. In the preferred embodiment, the most critical priority level is 1, while the least critical priority level is 31.
- The present invention preferably includes a certain number of predetermined threads 194, where one or more threads always remain at the top of thread list 130 while one or more threads always remain at the bottom of thread list 130.
- The present invention utilizes a system thread 198 which is at the highest priority and an idle thread 202 which is at the lowest priority.
- System thread 198 uses priority level 0 while idle thread 202 uses priority level 32; these priority levels are outside of the range allowed for drivers and applications, which preferably remain within the inclusive range from priority level 1 to priority level 31. It is to be understood that the present invention is not limited to only a priority scheme of these disclosed levels but includes any priority scheme which is suitable for the situation at hand.
- Scheduler 100, which manages thread list 130, is never concerned about the special case of adding a new thread to the front or end of the list. Moreover, scheduler 100 preferably never has to check for an empty list.
- The present invention also includes a scheduler thread 206 having the second highest priority, that is, second only to the priority level of system thread 198. Due to the use of predetermined threads, the present invention does not have to check for a final null pointer at the end of the list, since a check for the idle thread is the only indication needed of when to stop checking the list.
- The present invention also schedules an event to happen at time "forever", which is the most distant time that can be expressed in the preferred embodiment. This event can never occur and will always be present at the end of the time queue (which is kept sorted by event time). This means the present invention is never concerned about dealing with an empty timer queue or adding an event to the end of the list.
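- The effect of these predetermined threads can be sketched as follows, assuming a singly linked list permanently headed by the system thread (priority 0) and terminated by the idle thread (priority 32); because the sentinels bound the allowed range of 1-31, insertion never needs to special-case the front, the end, or an empty list, and a traversal stops at the idle thread rather than testing for a null pointer. The names used here are illustrative only.

```c
#include <stddef.h>

struct s_thread {
    int              priority;   /* 0 = system, 1..31 = normal, 32 = idle */
    struct s_thread *next;
};

static struct s_thread idle_thread   = { 32, NULL };          /* always last  */
static struct s_thread system_thread = { 0,  &idle_thread };  /* always first */

/* Insert a thread (priority 1..31) in priority order; the sentinels guarantee
 * that a predecessor and a successor always exist. */
void insert_thread(struct s_thread *t)
{
    struct s_thread *prev = &system_thread;
    while (prev->next != &idle_thread && prev->next->priority <= t->priority)
        prev = prev->next;
    t->next = prev->next;
    prev->next = t;
}

/* A traversal ends when the idle thread is reached; no null check is needed. */
int count_threads(void)
{
    int n = 0;
    for (struct s_thread *t = system_thread.next; t != &idle_thread; t = t->next)
        n++;
    return n;
}
```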
- Figure 5 is a flow chart depicting exemplary operational steps performed by the scheduler in making a pass through the thread list.
- Start indication block 250 indicates that iteration block 254 is executed first.
- At iteration block 254, a single pass is made through the thread list.
- Process block 262 compares the priority level of the currently selected thread with a preceding thread's priority. If the threads do not have the same priority as determined by decision block 266, then process block 274 is executed. However, if the threads do have the same priority, then the round robin scheduling approach of the present invention is utilized at process block 270 wherein the threads are preferably bubble sorted so that the execution order is inverted for threads with the same priority. Processing continues at process block 274.
- At process block 274, a check is performed to determine whether the thread which is currently being analyzed by the scheduler has timed out. If it has, then the thread is woken up so that the thread can perform its time-out operation.
- Process block 278 is executed wherein error checking is performed.
- Decision block 282 inquires whether the idle thread, which has the lowest priority, has been encountered. If it has, then process block 264 is executed, wherein a predetermined time is allowed to expire before iteration block 254 is executed again. If decision block 282 does not find that the idle thread has been encountered, then iteration block 258 is executed.
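- Under the assumptions already stated (a lower number means a more critical thread, sentinel system and idle threads bound the list, and time-outs are absolute expiry ticks), one possible shape of this single pass is sketched below; swap_order, wake_thread and report_stack_error are stubs standing in for the corresponding kernel operations, and the structure is illustrative rather than the patent's actual data layout.

```c
#include <stdint.h>

struct sched_thread {
    int                  priority;      /* 0 = system, 1..31 = normal, 32 = idle   */
    uint32_t             timeout_tick;  /* absolute expiry tick; "forever" if none */
    uint32_t            *stack_ptr;     /* current stack pointer                   */
    uint32_t            *stack_base;    /* lowest valid stack address              */
    uint32_t            *stack_top;     /* highest valid stack address             */
    struct sched_thread *next;
};

/* Stubs standing in for kernel services so the sketch is self-contained. */
static void swap_order(struct sched_thread *a, struct sched_thread *b) { (void)a; (void)b; }
static void wake_thread(struct sched_thread *t)                        { (void)t; }
static void report_stack_error(struct sched_thread *t)                 { (void)t; }

/* One pass through the thread list, performed at every scheduler interrupt. */
void scheduler_pass(struct sched_thread *system_thread,
                    struct sched_thread *idle_thread,
                    uint32_t now_tick)
{
    struct sched_thread *prev = system_thread;
    for (struct sched_thread *t = system_thread->next; ; prev = t, t = t->next) {
        /* Round robin: invert the order of adjacent threads of equal priority. */
        if (t->priority == prev->priority)
            swap_order(prev, t);

        /* Deadline processing: wake the thread if its time-out has expired. */
        if (t->timeout_tick <= now_tick)
            wake_thread(t);

        /* Error checking: detect stack overflow and underflow. */
        if (t->stack_ptr < t->stack_base || t->stack_ptr > t->stack_top)
            report_stack_error(t);

        /* The idle thread ends the pass; no null-pointer check is required. */
        if (t == idle_thread)
            break;
    }
}
```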
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
KR1020017007977A KR20010103719A (en) | 1998-12-23 | 1999-12-17 | Method and apparatus for providing operating system scheduling operations |
EP99966433A EP1141827A1 (en) | 1998-12-23 | 1999-12-17 | Method and apparatus for providing operating system scheduling operations |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US21982598A | 1998-12-23 | 1998-12-23 | |
US09/219,825 | 1998-12-23 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2000039677A1 true WO2000039677A1 (en) | 2000-07-06 |
Family
ID=22820936
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/US1999/030247 WO2000039677A1 (en) | 1998-12-23 | 1999-12-17 | Method and apparatus for providing operating system scheduling operations |
Country Status (3)
Country | Link |
---|---|
EP (1) | EP1141827A1 (en) |
KR (1) | KR20010103719A (en) |
WO (1) | WO2000039677A1 (en) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR101686082B1 (en) | 2010-07-22 | 2016-12-28 | 삼성전자주식회사 | Apparatus and method for thread scheduling and lock acquisition order control based on deterministic progress index |
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP0352050A2 (en) * | 1988-07-18 | 1990-01-24 | Digital Equipment Corporation | Single-keyed indexed file for TP queue repository |
GB2304211A (en) * | 1995-08-11 | 1997-03-12 | Fujitsu Ltd | User-level process-scheduler |
US5812844A (en) * | 1995-12-07 | 1998-09-22 | Microsoft Corporation | Method and system for scheduling the execution of threads using optional time-specific scheduling constraints |
Non-Patent Citations (1)
Title |
---|
FISKE S., DALLY W. J.: "THREAD PRIORITIZATION: A THREAD SCHEDULING MECHANISM FOR MULTIPLE-CONTEXT PARALLEL PROCESSORS", FUTURE GENERATIONS COMPUTER SYSTEMS, NL, ELSEVIER SCIENCE PUBLISHERS. AMSTERDAM, vol. 11, no. 6, 1 October 1995 (1995-10-01), pages 503 - 518, XP000536144, ISSN: 0167-739X * |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9354926B2 (en) | 2011-03-22 | 2016-05-31 | International Business Machines Corporation | Processor management via thread status |
US9886077B2 (en) | 2011-03-22 | 2018-02-06 | International Business Machines Corporation | Processor management via thread status |
US11157061B2 (en) | 2011-03-22 | 2021-10-26 | International Business Machines Corporation | Processor management via thread status |
CN107533479A (en) * | 2015-04-01 | 2018-01-02 | 微软技术许可有限责任公司 | Power knows scheduling and power manager |
Also Published As
Publication number | Publication date |
---|---|
KR20010103719A (en) | 2001-11-23 |
EP1141827A1 (en) | 2001-10-10 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| AK | Designated states | Kind code of ref document: A1. Designated state(s): KR |
| AL | Designated countries for regional patents | Kind code of ref document: A1. Designated state(s): AT BE CH CY DE DK ES FI FR GB GR IE IT LU MC NL PT SE |
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application | |
| DFPE | Request for preliminary examination filed prior to expiration of 19th month from priority date (pct application filed before 20040101) | |
| WWE | Wipo information: entry into national phase | Ref document number: 1999966433. Country of ref document: EP |
| WWE | Wipo information: entry into national phase | Ref document number: 1020017007977. Country of ref document: KR |
| WWP | Wipo information: published in national office | Ref document number: 1999966433. Country of ref document: EP |
| WWP | Wipo information: published in national office | Ref document number: 1020017007977. Country of ref document: KR |
| WWW | Wipo information: withdrawn in national office | Ref document number: 1999966433. Country of ref document: EP |
| WWW | Wipo information: withdrawn in national office | Ref document number: 1020017007977. Country of ref document: KR |