
WO2007085978A2 - A method of controlling a page cache memory in real time stream and best effort applications - Google Patents


Info

Publication number
WO2007085978A2
WO2007085978A2 (application PCT/IB2007/050115)
Authority
WO
WIPO (PCT)
Prior art keywords
applications
page cache
memory
stream
best effort
Prior art date
Application number
PCT/IB2007/050115
Other languages
French (fr)
Other versions
WO2007085978A3 (en)
Inventor
Ozcan Mesut
Jozef P. Van Gassel
Gilein De Nijs
Steven B. Luitjens
Siarhei Yermalayeu
Artur Burchard
Original Assignee
Koninklijke Philips Electronics N.V.
Priority date
Filing date
Publication date
Application filed by Koninklijke Philips Electronics N.V.
Publication of WO2007085978A2
Publication of WO2007085978A3


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F12/00: Accessing, addressing or allocating within memory systems or architectures
    • G06F12/02: Addressing or allocation; Relocation
    • G06F12/08: Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
    • G06F12/0802: Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches
    • G06F12/0866: Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches for peripheral storage systems, e.g. disk cache
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F12/00: Accessing, addressing or allocating within memory systems or architectures
    • G06F12/02: Addressing or allocation; Relocation
    • G06F12/08: Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F12/00: Accessing, addressing or allocating within memory systems or architectures
    • G06F12/02: Addressing or allocation; Relocation
    • G06F12/08: Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
    • G06F12/12: Replacement control
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00: Energy efficient computing, e.g. low power processors, power management or thermal management

Definitions

  • the page cache manager 102 comprises a page cache monitor (P M) 121 and a resource manager (R M) 120 interconnected to the page cache monitor (P M) 121.
  • the page cache monitor monitors the page cache activity in the memory 104 associated to the BE applications (B E) 116, e.g. the swap-out frequency and/or the flushing of dirty pages 105 frequency of the memory 104.
  • For every file that is attached to the buffer adaptor (B A) 103, the resource manager (R M) 120 is informed about the resource requirements. The resource manager can subsequently allow or deny the request, or change the parameters as it sees fit. This way, it is ensured that the bandwidth needed by the real time stream applications (R_T_S_A) 115 and the guarantees the system 100 gives accordingly are feasible.
  • the buffer adaptor (B A) 103 adapts the buffer size 108, 109 dynamically to the real time stream applications (R T S A) 115 by increasing or decreasing the size of the buffers 108, 109.
  • the buffer size will be decreased when at least one activity parameter indicates the following:
  • the amount of cache misses of the best effort applications (B E) 116 is above a pre-defined threshold value.
  • the threshold values are operating system dependent and/or storage means dependent.
  • the threshold values depend both on the operating system and the HDD 119. As an example, the threshold values may either be determined automatically, semi-automatically or manually.
  • the optimal values of these threshold variables are depending on the bit rate of the real time streams, the available memory and the power usage, seek times and transfer rates of the HDD 119.
  • the bit rate of the streams and the available memory can be automatically determined dynamically.
  • the characteristic values of the HDD 119 in the system can also be determined automatically. These values are static so this needs only to be done once for every type of HDD used. When the system is used e.g. in an embedded device, these values can be determined and statically programmed by the manufacturer.
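As an illustration of how such characteristic HDD values translate into a minimum useful buffer size, consider the standard spin-down break-even argument: spinning down only pays off if the disk stays down long enough to recover the spin-up energy, and the buffer must feed the stream for at least that long plus the spin-up latency. A rough sketch; all power, energy and bit-rate figures below are hypothetical, not values from the patent:

```python
def breakeven_idle_time(spinup_energy_j: float, idle_power_w: float,
                        standby_power_w: float) -> float:
    """Time (s) the disk must stay spun down before the spin-up
    energy cost is recovered by the lower standby power."""
    return spinup_energy_j / (idle_power_w - standby_power_w)

def min_stream_buffer_bytes(bitrate_bps: float, spinup_energy_j: float,
                            idle_power_w: float, standby_power_w: float,
                            spinup_latency_s: float) -> int:
    """Smallest buffer that keeps the stream alive over one
    break-even spin-down period plus the spin-up latency."""
    t = breakeven_idle_time(spinup_energy_j, idle_power_w, standby_power_w)
    return int(bitrate_bps / 8 * (t + spinup_latency_s))

# Hypothetical small-form-factor HDD figures and a 1 Mbit/s A/V stream:
buf = min_stream_buffer_bytes(bitrate_bps=1_000_000, spinup_energy_j=4.0,
                              idle_power_w=0.9, standby_power_w=0.2,
                              spinup_latency_s=2.0)
```

For these invented numbers the break-even idle time is about 5.7 s and the minimum buffer comes out at roughly 0.96 MB; real thresholds would be derived from the measured HDD characteristics described above.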
  • the amount of free page frames 106 in the memory 104 associated to the BE applications (B E) 116 is low.
  • The following actions then preferably take place to free pages in the memory 104:
    1. Unused pages 122 in the memory 104 are reclaimed. These are pages that have not been accessed recently and are not locked, not dirty and not protected.
    2. Dirty pages 105 in the cache 101 are flushed (written) to the HDD 119 when:
       a. the memory 104 in the page cache 101 gets too full and more pages are needed, or
       b. the number of dirty pages 105 becomes too large, or
       c. too much time has elapsed since a page became dirty, or
       d. a process requests all pages of block devices or of particular files to be flushed.
    3. A swap-out is performed:
       a. whenever the number of free page frames 106 falls below a pre-defined threshold, or
       b. when a request from memory 104 cannot be satisfied because the number of free page frames 106 would fall below a pre-defined value.
  • the total size of the stream buffers 108, 109 is increased when the number of page frames 106 in the cache memory 101 is above a predefined threshold value.
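The adaptation rule described above, shrinking the stream buffers when BE activity indicators exceed their thresholds and growing them when free page frames are plentiful, amounts to a small decision function. A minimal sketch; the parameter names, step size and threshold values are illustrative assumptions, not values from the patent:

```python
def adapt_buffer_size(current_size: int, free_frames: int, cache_misses: int,
                      *, step: int, min_size: int, max_size: int,
                      free_frames_high: int, miss_threshold: int) -> int:
    """Return the new total stream-buffer size.

    Shrinks the buffers when BE cache misses exceed their threshold
    (BE applications are starved of page cache); grows them when free
    page frames are abundant (idle memory can extend disk spin-down).
    """
    if cache_misses > miss_threshold:
        return max(min_size, current_size - step)
    if free_frames > free_frames_high:
        return min(max_size, current_size + step)
    return current_size

# 900 misses against a threshold of 500: the buffers shrink by one step.
new_size = adapt_buffer_size(8_000_000, free_frames=50_000, cache_misses=900,
                             step=1_000_000, min_size=2_000_000,
                             max_size=16_000_000, free_frames_high=20_000,
                             miss_threshold=500)
```

In a full implementation each activity parameter listed earlier (swap-out frequency, flush frequency, dirty-page count, cache misses) would have its own threshold, as the patent describes.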
  • the embodiment shown in Fig. 1 further includes an elevator (E V) 118 (or an I/O scheduler) for reordering all requests for the HDD 119 access.
  • the elevator 118 increases the HDD efficiency by collecting all requests of the page cache 101 for HDD access (those generated by the real time stream applications 115 via the stream buffers 108, 109 and those by the BE applications 116 via the general page cache mechanism 104) and reorders them to minimize seeking. Furthermore, it combines similar adjacent requests to one request to the HDD 119.
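The elevator's two tasks, sorting pending requests to minimise seeking and combining adjacent requests into one, can be sketched as follows; the (start_sector, length) request format is an assumption for illustration, not the patent's interface:

```python
def elevator(requests: list[tuple[int, int]]) -> list[tuple[int, int]]:
    """Reorder (start_sector, length) disk requests into ascending
    order and coalesce requests that touch or overlap, mimicking what
    an I/O scheduler does before dispatching to the HDD."""
    merged: list[tuple[int, int]] = []
    for start, length in sorted(requests):
        if merged and merged[-1][0] + merged[-1][1] >= start:
            # The previous request reaches this one: extend it.
            prev_start, prev_len = merged[-1]
            end = max(prev_start + prev_len, start + length)
            merged[-1] = (prev_start, end - prev_start)
        else:
            merged.append((start, length))
    return merged
```

For example, requests at sectors 20, 0 and 4 are reordered so that 0 and 4 (which are adjacent) become a single request, leaving only one seek to sector 20.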
  • When the real time stream applications (R_T_S_A) 115 comprise e.g. A/V applications, the buffers 108 and 109 can be recording and playback buffers for the A/V applications.
  • Adapting the buffer size dynamically based on the activity parameters as described previously results in an optimal power consumption of the HDD 119 shown here: the HDD 119 accesses by the real time stream applications (R_T_S_A) 115 will be as clustered as possible (thereby allowing the HDD to spin down for prolonged periods of time) and the accesses by the BE applications (B E) 116 will be minimized by having a suitable amount of memory 104 available in the page cache 101.
  • Figure 2 shows an embodiment of a method performed in a device 117, according to the present invention of controlling a page cache memory in real time stream applications and best effort applications, wherein the page cache memory comprises at least one stream buffer associated to the real time stream applications and a memory associated to the best effort applications.
  • The page cache activity is monitored (S1) 201 by the page cache monitor.
  • This activity can e.g. relate to the swap-out frequency of the memory associated with the best effort applications, the flushing of dirty pages frequency of the memory associated with the best effort applications, and the amount of dirty pages in the memory associated with the best effort applications.
  • One or more activity parameters relating to the monitoring are then established (S2) 202 and compared to one or more pre-defined threshold values (C) 203.
  • The definition of these threshold values may be determined based on the type of the operating system in the device and/or the type of the storage means, e.g. HDD, the device uses. These threshold values may be selected manually, or automatically, e.g. the first time the controlling system 100 in Fig. 1 is run on the device.
  • The threshold values may indicate whether the swap-out frequency of the memory associated with the best effort applications has exceeded a certain swap-out frequency limit that is evaluated, either automatically or manually, as being critical for the BE applications.
  • The threshold values may also indicate whether the flushing of dirty pages frequency of the memory associated with the best effort applications has exceeded a certain flushing frequency limit that is evaluated, either automatically or manually, as being critical for the BE applications.
  • The threshold values may also indicate whether the amount of dirty pages in the memory associated with the best effort applications has exceeded a certain limit that is evaluated, either automatically or manually, as being critical for the BE applications.
  • The threshold values may also indicate whether the amount of cache misses of the BE applications has exceeded a certain limit that is evaluated, either automatically or manually, as being critical for the BE applications.
  • If one or more of the activity parameters is above the associated threshold value(s), the stream buffer size will be decreased (S5) 205. However, if one or more of the activity parameters is below the threshold value(s), the buffer size will be increased (S6) 204.
  • the above steps 201-203 and 204 or 205 are then repeated, 206, 207.
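The S1-S7 cycle above amounts to a simple control loop. A schematic sketch in which the sampling and resizing callbacks are illustrative stand-ins for the page cache monitor and buffer adaptor of Fig. 1:

```python
def run_control_loop(sample_activity, thresholds, resize, cycles):
    """One pass per cycle: monitor (S1), establish activity parameters
    (S2), compare against thresholds (C), then decrease (S5) or
    increase (S6) the stream buffers, and repeat (206, 207)."""
    for _ in range(cycles):
        params = sample_activity()                      # S1 + S2
        if any(params[k] > thresholds[k] for k in thresholds):
            resize(-1)                                  # S5: shrink buffers
        elif all(params[k] < thresholds[k] for k in thresholds):
            resize(+1)                                  # S6: grow buffers

# Invented sampler: swap-out rate persistently over its threshold.
log = []
run_control_loop(lambda: {"swap_outs": 9, "dirty_flushes": 1},
                 {"swap_outs": 5, "dirty_flushes": 5},
                 log.append, cycles=3)
# log is now [-1, -1, -1]: the buffers are shrunk on every cycle.
```

A real implementation would resize by concrete page counts and sample the kernel's counters rather than a stub, but the loop structure is the same.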
  • the invention can be implemented in any suitable form including hardware, software, firmware or any combination of these.
  • the invention or some features of the invention can be implemented as computer software running on one or more data processors and/or digital signal processors.
  • the elements and components of an embodiment of the invention may be physically, functionally and logically implemented in any suitable way. Indeed, the functionality may be implemented in a single unit, in a plurality of units or as part of other functional units. As such, the invention may be implemented in a single unit, or may be physically and functionally distributed between different units and processors.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Memory System Of A Hierarchy Structure (AREA)

Abstract

The present invention relates to controlling a page cache memory in devices which are implemented to run both best effort (BE) applications and, simultaneously, real time stream applications, where the controlling is based on adapting the buffer size dynamically to these applications. The aim of the present invention is therefore to keep the power consumption associated with storage means, especially a hard disk drive (HDD), optimal for such real time stream and best effort applications.

Description

A method of controlling a page cache memory in real time stream and best effort applications
FIELD OF THE INVENTION
The invention relates to a method performed in a device of controlling a page cache memory. The present invention further relates to a page cache controlling system in the device for controlling the page cache memory.
BACKGROUND OF THE INVENTION
The hard disk is a major energy consumer in a battery-powered system. Its energy consumption can be reduced by spinning it down when idle. When streaming A/V, disk idle time can be maximized by using relatively large stream buffers and serving the stream from the buffer while the disk is spun down.
PC type, best-effort (BE) applications access the disk in a less predictable way. Usually, Operating Systems are optimized for such applications and (try to) minimise the disk accesses by pre-fetching and caching of data in memory (page cache), by reading large parts of a file into memory and keeping used data in memory for possible future use. When streaming via relatively large stream buffers and spinning down the disk during the long idle periods, best-effort traffic can be handled in two ways: spin up the disk immediately and serve the BE requests, or postpone the BE requests until the next disk spin-up when the stream buffers get refilled.
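The second strategy, postponing BE requests until the next spin-up, behaves like a deferral queue. A minimal sketch; the class and method names are invented for illustration:

```python
class DeferredBEQueue:
    """Queue BE disk requests while the disk is spun down and release
    them in one batch when the disk spins up to refill stream buffers."""

    def __init__(self):
        self._pending = []

    def submit(self, request, disk_spinning: bool):
        """Return the requests to serve now (possibly none)."""
        if disk_spinning:
            return [request]           # disk is up: serve immediately
        self._pending.append(request)  # postpone until next spin-up
        return []

    def on_spinup(self):
        """Release everything deferred while the disk was down."""
        batch, self._pending = self._pending, []
        return batch
```

Batching the deferred requests with the stream-buffer refill clusters the disk accesses, which is exactly what keeps the spin-down periods long.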
The increased size of the stream buffers increases the time the disk is spun down, which leads to a reduced energy consumption of the disk while streaming. However, when streaming AV, best-effort (BE) applications can also be active. Therefore, increasing the stream buffer size means a reduction of the free pages in the page cache for the BE applications. Such a reduction will lead to more frequent flushing (writing) of dirty pages to disk and swapping-out of memory to disk and therefore to an increase in power consumption. Also, the operating system will be able to cache less file data (either read-ahead or previously read data) which will lead to more cache misses and thus more HDD access.
Decreasing the stream buffer size will however lead to an increase of the free pages in the page cache and therefore reduce the frequency of flushes, cache misses and swap-outs. However, if the stream buffer size is chosen too small, the disk will spin-up more frequently to refill the stream buffers which will also lead to an increase in power consumption.
There is therefore a power trade-off between the size of the stream buffers and the available memory for the BE applications, where the size of the stream buffers depends, among other things, on the memory requirements of the BE applications.
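This trade-off can be made concrete with a toy power model in which the amortised refill cost falls as the buffer grows, while BE-induced disk activity grows as the free page cache shrinks. All numbers and both model formulas are illustrative assumptions, not measurements or formulas from the patent:

```python
def stream_refill_power(buffer_bytes, bitrate_bps, spinup_energy_j):
    """Spin-up energy amortised over one buffer drain period (W)."""
    drain_s = buffer_bytes / (bitrate_bps / 8)
    return spinup_energy_j / drain_s

def be_miss_power(buffer_bytes, total_cache_bytes, k):
    """Toy model: BE disk activity grows as free page cache shrinks."""
    free = total_cache_bytes - buffer_bytes
    return k / free

def total_power(buffer_bytes, *, bitrate_bps=1_000_000, spinup_energy_j=4.0,
                total_cache_bytes=32_000_000, k=5_000_000):
    return (stream_refill_power(buffer_bytes, bitrate_bps, spinup_energy_j)
            + be_miss_power(buffer_bytes, total_cache_bytes, k))

# Sweep buffer sizes to locate the minimum of the trade-off curve:
best = min(range(4_000_000, 28_000_001, 1_000_000), key=total_power)
```

For these invented parameters the curve bottoms out at an 8 MB stream buffer: larger buffers starve the BE side, smaller ones force too-frequent spin-ups. The invention's contribution is finding this operating point dynamically rather than from a fixed model.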
SUMMARY OF THE INVENTION
It is preferred to minimize the power consumption of memories, especially the hard disk drive (HDD), in stream applications. Therefore, the present invention provides a way of optimizing the power trade-off between the size of the stream buffers and the available memory for the BE applications.
According to one aspect, the present invention relates to a method performed in a device of controlling a page cache memory in stream applications and best effort applications, the page cache memory comprising at least one stream buffer associated to the stream applications and a memory associated to the best effort applications, comprising:
- establishing at least one activity parameter indicating the current activity in the page cache memory, and based thereon
- adapting the buffer size dynamically to the stream applications by increasing or decreasing the size of the at least one stream buffer. Thereby, balancing the stream buffers against the memory associated to the best effort applications optimizes the access to the storage means in the device, e.g. the hard disk drive, and therefore minimizes the energy consumption associated to such disk drives. This is of particular advantage for devices which are battery driven, since the reduced power consumption of the storage means (e.g. the HDD) results in more efficient battery use and enhances the lifetime of the battery charge.
In an embodiment, the at least one activity parameter is selected from a group consisting of:
- the swap-out frequency of the memory associated with the best effort applications,
- the flushing of dirty pages frequency of the memory associated with the best effort applications,
- the amount of dirty pages in the memory associated with the best effort applications, and
- the amount of cache misses due to the reading of non-cached data by the best effort applications,
wherein the stream buffer size is decreased or increased when one or more of the activity parameters is above or below, respectively, a pre-defined threshold value associated to each of said activity parameters.
By the term non-cached data is meant data that is not read-ahead or cached from previous use. Using these activity parameters as control parameters provides a very effective way to dynamically control the memory cache. Memory swap-outs and dirty page flushing are indications of disk accesses due to a shortage of free memory. Increasing the amount of free pages in memory leads to a reduction in memory swap-outs and dirty page flushing. Similarly, the amount of dirty pages in the memory is an indication of a coming dirty page flush. Such a flush can be postponed, and the number of dirty page flushes reduced, by increasing the amount of free memory pages.
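On a Linux-like system, the swap-out and dirty-page figures would typically come from kernel counters such as those exported in `/proc/vmstat` (`pswpout`, `nr_dirty`); a small parser sketch, with the choice of fields being an assumption for illustration:

```python
def read_activity_params(vmstat_text: str) -> dict[str, int]:
    """Pick the counters used as activity parameters out of the
    'name value' lines of a /proc/vmstat-style dump."""
    wanted = {"pswpout", "nr_dirty"}  # swap-outs and dirty pages
    params = {}
    for line in vmstat_text.splitlines():
        name, _, value = line.partition(" ")
        if name in wanted:
            params[name] = int(value)
    return params

# A /proc/vmstat-style snippet (values invented for illustration):
sample = "nr_free_pages 102400\nnr_dirty 37\npswpout 1289\n"
```

Sampling such cumulative counters periodically and differencing successive readings yields the swap-out and flush frequencies used as control inputs.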
In an embodiment, the stream applications comprise Audio, or Video, or Audio/Video applications.
According to another aspect, the present invention relates to a computer readable media for storing instructions for a processing unit to execute one or more of the above method steps.
According to still another aspect, the present invention relates to a page cache controlling system for controlling a page cache memory in stream applications and best effort applications, the page cache memory comprising at least one stream buffer associated to the stream applications and a memory associated to the best effort applications, comprising:
- a page cache manager for establishing at least one activity parameter indicating the current activity in the page cache memory, and
- a buffer adaptor for adapting the buffer size dynamically to the stream applications by increasing or decreasing the size of the at least one stream buffer.
In an embodiment, the page cache manager comprises a page cache monitor adapted to monitor the activity in the page cache memory and to determine the at least one activity parameter, and a resource manager interconnected to the page cache monitor that uses the at least one parameter for keeping track of the activity in the page cache memory.
According to yet another aspect, the present invention relates to a device adapted to perform stream applications and best effort applications comprising said page cache controlling system. The aspects of the present invention may each be combined with any of the other aspects. These and other aspects of the invention will be apparent from and elucidated with reference to the embodiments described hereinafter.
BRIEF DESCRIPTION OF THE DRAWINGS
Embodiments of the invention will be described, by way of example only, with reference to the drawings, in which
Figure 1 shows an embodiment of a page cache controlling system according to the present invention for controlling a page cache memory in real time stream applications and BE applications, and
Figure 2 shows an embodiment of a method performed in a device according to the present invention of controlling a page cache memory in real time stream applications and best effort applications.
DETAILED DESCRIPTION OF AN EMBODIMENT
The present invention relates to controlling a page cache memory in devices which are implemented to run both best effort (BE) applications and, simultaneously, real time stream applications. The aim of the present invention is to keep the power consumption associated with storage means, especially a hard disk drive (HDD), optimal for such real time stream and best effort applications. This is especially important for devices which are battery powered, since such optimal power consumption is clearly reflected in more efficient battery use and enhances the lifetime of the battery charge. Examples of devices that are suitable to run such BE and real time stream applications are PCs, Personal Digital Assistant (PDA) devices, mobile phones and other electronic devices that are implemented to run the applications simultaneously. The BE applications could e.g. include typical PC processing, e.g. word processing, Internet searching etc., and the real time stream applications could e.g. comprise Audio (e.g. an MP3 player that is integrated in a PDA or a mobile phone), Video and Audio/Video (A/V) applications.
It is clear that spinning storage means such as an HDD up and down is a very energy-consuming process. When running a real time stream application, the energy consumption associated to the real time stream application can be reduced by spinning the HDD down when idle. The idle time can be increased by increasing the buffer size, and therefore the energy consumption of the system can be minimized. Such an increase in the buffer size will, however, be reflected in a reduction in the remaining page cache memory associated to the BE applications that are still running, since an increase in the buffer size means a reduction of the free pages in the page cache for the BE applications, which in turn leads to more cache misses for BE applications, more flushing of dirty pages and more memory swap requests. Therefore, to serve the BE application the memory means (e.g. HDD) is spun up immediately, or the BE request is delayed or postponed until the next spin-up of e.g. the HDD takes place to serve the real time stream application. In the same way, it is not preferred to reduce the buffer size of the real time stream applications in the page cache to enhance the number of free pages in the page cache, since such a reduction would mean a more frequent spin-up of the memory means for refilling the stream buffers. Such a process can be very energy demanding. It is therefore important to be able to control the buffer size dynamically based on the activity in the page cache, to find the optimal equilibrium between the stream buffers and the available memory for the BE applications.
FIG. 1 shows an embodiment of a page cache controlling system 100 according to the present invention for controlling a page cache memory 101 in real time stream applications (R_T_S_A) 115 and BE applications (B E) 116 in a device 117, wherein the system comprises a page cache manager 102 and a buffer adaptor (B A) 103. The device can e.g. be a PC, a Personal Digital Assistant (PDA) device, a mobile phone or another electronic device. The borderline 110 separates the user space (U S) 111 from the kernel space (K S) 112, which is the space within the device 117.
As shown in the figure, the page cache memory 101 comprises a stream buffer 107 associated with the real time stream applications (R_T_S_A) 115, which as shown here comprises buffers (B) 108, 109. For each stream opened (for reading or writing), one buffer is added. These buffers are filled with read-ahead file data for reading, or store new data for writing, in such a way that the storage means, which in the following will be assumed to be HDD 119, does not need to be accessed while data is streamed from the buffers, e.g. 108, 109. The HDD 119 needs to be accessed again when the buffers are empty and need to be refilled for reading, or when the buffers are full and data needs to be written to disk for writing. The buffers 108, 109 are also used to provide guarantees about the bandwidth to the real time stream applications, by having buffered data available to overcome system and HDD latencies.
The page cache memory 101 further comprises memory 104 associated with the BE applications (B E) 116, comprising buffered file data associated with the BE applications (B E). This buffered file data consists of individual cached pages which contain data that was just requested by a BE application and is cached in case it is accessed again, of pages which will be needed in the future by a BE application as predicted by the operating system (read-ahead), and of pages 105 which store new file data before it is written to the HDD, i.e. dirty pages. The page cache manager 102 is adapted to establish at least one activity parameter indicating the current activity in the page cache memory 101. In a preferred embodiment the activity parameter comprises one or more of the following parameters:
1. the swap-out frequency of the memory 104 associated with the best effort applications (B E) 116,
2. the frequency of flushing dirty pages 105 from the memory 104,
3. the amount of dirty pages 105 in the memory 101, and
4. the amount of cache misses of the best effort applications (B E) 116.
In a preferred embodiment, the page cache manager 102 comprises a page cache monitor (P M) 121 and a resource manager (R M) 120 interconnected with the page cache monitor (P M) 121. The page cache monitor monitors the page cache activity in the memory 104 associated with the BE applications (B E) 116, e.g. the swap-out frequency and/or the frequency of flushing dirty pages 105 from the memory 104. This gives the resource manager (R M) 120 the possibility of keeping track of the system resources that are available and being used, e.g. the total buffer space and the available HDD bandwidth. For every file that is attached to the buffer adaptor (B A) 103, the resource manager (R M) 120 is informed about the resource requirements. The resource manager can subsequently allow or deny the request, or change the parameters as it sees fit. This way, it is ensured that the bandwidth needed by the real time stream applications (R_T_S_A) 115, and the guarantees the system 100 gives accordingly, are feasible.
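The resource manager's allow-or-deny decision can be sketched as a simple admission check. This is a hypothetical interface; the class and field names are illustrative and not part of the described system, which only specifies that resource requirements are reported and requests may be allowed, denied or adjusted.

```python
class ResourceManager:
    """Tracks free buffer space and HDD bandwidth; admits or rejects a
    stream that wants to attach a buffer via the buffer adaptor."""

    def __init__(self, total_buffer_bytes, total_bandwidth_bps):
        self.free_buffer = total_buffer_bytes
        self.free_bandwidth = total_bandwidth_bps

    def attach(self, buffer_bytes, bitrate_bps):
        # Admit only if both resources can still be guaranteed.
        if buffer_bytes <= self.free_buffer and bitrate_bps <= self.free_bandwidth:
            self.free_buffer -= buffer_bytes
            self.free_bandwidth -= bitrate_bps
            return True
        return False
```

Denying (or shrinking) a request up front is what lets the system keep its bandwidth guarantees to the streams already admitted.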
Accordingly, based on the at least one activity parameter the buffer adaptor (B A) 103 adapts the buffer size 108, 109 dynamically to the real time stream applications (R T S A) 115 by increasing or decreasing the size of the buffers 108, 109. In one preferred embodiment, the buffer size will be decreased when at least one activity parameter indicates the following:
1. the frequency of the memory swap-outs is above a pre-defined threshold value,
2. the frequency of flushing of dirty pages 105 is above a pre-defined threshold value,
3. the frequency of memory swap-outs and cache flushes combined is above a pre-defined threshold value,
4. the amount of dirty pages 105 is above a pre-defined threshold value,
5. the amount of dirty pages 105 (e.g. in percentage) is above a pre-defined threshold value, and
6. the amount of cache misses of the best effort applications (B E) 116 is above a pre-defined threshold value.
Typically, the threshold values are operating system dependent and/or storage means dependent. In this embodiment, the threshold values depend both on the operating system and the HDD 119. As an example, the threshold values may be determined automatically, semi-automatically or manually.
Preferably, the optimal values of these threshold variables depend on the bit rate of the real time streams, the available memory, and the power usage, seek times and transfer rates of the HDD 119. The bit rate of the streams and the available memory can be determined dynamically and automatically. The characteristic values of the HDD 119 in the system can also be determined automatically. These values are static, so this needs to be done only once for every type of HDD used. When the system is used e.g. in an embedded device, these values can be determined and statically programmed by the manufacturer.
Under circumstances as described in the previous embodiment, where the buffer size is to be decreased, the amount of free page frames 106 in the memory 104 associated with the BE applications (B E) 116 is low. The following actions then preferably take place to free pages in the memory 104:
1. Unused pages 122 in the memory 104 are reclaimed. These are pages that have not been accessed recently, or that are not locked, not dirty and not protected.
2. Dirty pages 105 in the cache 101 are flushed (written) to the HDD 119 when:
a. the memory 104 in the page cache 101 gets too full and more pages are needed, or
b. the number of dirty pages 105 becomes too large, or
c. too much time has elapsed since a page became dirty, or
d. a process requests all pages of block devices or of particular files to be flushed.
3. A swap-out is performed:
a. whenever the number of free page frames 106 falls below a pre-defined threshold, or
b. when a request for memory 104 cannot be satisfied because the number of free page frames 106 would fall below a pre-defined value.
These actions and the decision when and which of these actions to take are typically implemented in the operating system and are available in modern operating systems as part of the memory management.
In another preferred embodiment, the total size of the stream buffers 108, 109 is increased when the number of free page frames 106 in the cache memory 101 is above a pre-defined threshold value.
Accordingly, by dynamically increasing or decreasing the buffer size 107 in this way, access to the storage means, e.g. the hard disk drive (HDD) 119, is optimized, and the energy consumption associated with such disk drives is therefore minimized for such applications.
The embodiment shown in Fig. 1 further includes an elevator (E V) 118 (or I/O scheduler) for reordering all requests for HDD 119 access. Since the HDD 119 is orders of magnitude slower than the processing speed of the system 100, the elevator 118 increases the HDD efficiency by collecting all requests of the page cache 101 for HDD access (those generated by the real time stream applications 115 via the stream buffers 108, 109 and those generated by the BE applications 116 via the general page cache mechanism 104) and reordering them to minimize seeking. Furthermore, it combines similar adjacent requests into one request to the HDD 119. If the real time stream applications (R_T_S_A) 115 comprise e.g. A/V applications, the buffers 108 and 109 can be recording and playback buffers for the A/V applications.
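The elevator's reorder-and-merge behaviour can be illustrated with a simplified sketch, with requests modelled as (start_block, n_blocks) tuples. A real I/O scheduler is considerably more involved (it must also bound latency and handle writes vs. reads); this only shows the seek-minimising idea.

```python
def elevator_order(requests):
    """Sort pending disk requests by start block and merge adjacent or
    overlapping ones, minimising seeks and combining similar requests."""
    merged = []
    for start, length in sorted(requests):
        if merged and merged[-1][0] + merged[-1][1] >= start:
            # This request touches or overlaps the previous one: merge.
            prev_start, prev_len = merged[-1]
            end = max(prev_start + prev_len, start + length)
            merged[-1] = (prev_start, end - prev_start)
        else:
            merged.append((start, length))
    return merged

# Requests issued out of order by stream and BE applications:
print(elevator_order([(10, 5), (0, 4), (14, 3), (30, 2)]))
# [(0, 4), (10, 7), (30, 2)]
```

Clustering the requests this way shortens the total time the HDD must be spun up to serve one batch.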
Therefore, adapting the buffer size dynamically based on the activity parameters as described above results in an optimal power consumption of the HDD 119 shown here, since the HDD 119 accesses by the real time stream applications (R_T_S_A) 115 are clustered as much as possible (thereby allowing the HDD to spin down for prolonged periods of time) and the accesses by the BE applications (B E) 116 are minimized by having a suitable amount of memory 104 available in the page cache 101.
Figure 2 shows an embodiment of a method performed in a device 117, according to the present invention of controlling a page cache memory in real time stream applications and best effort applications, wherein the page cache memory comprises at least one stream buffer associated to the real time stream applications and a memory associated to the best effort applications.
The page cache activity is monitored (S1) 201 by the page cache monitor. This activity can e.g. relate to the swap-out frequency of the memory associated with the best effort applications, the frequency of flushing dirty pages from that memory, and the amount of dirty pages in that memory. One or more activity parameters relating to the monitoring are then established (S2) 202 and compared to one or more pre-defined threshold values (C) 203. These threshold values may be determined based on the type of operating system in the device and/or the type of storage means, e.g. HDD, the device uses. The threshold values may be selected manually, or selected automatically, e.g. the first time the controlling system 100 in Fig. 1 is run on the device. Accordingly, the threshold values may indicate whether the swap-out frequency of the memory associated with the best effort applications has exceeded a swap-out frequency limit that has been evaluated, either automatically or manually, as being critical for the BE applications. The threshold values may also indicate whether the frequency of flushing dirty pages from the memory associated with the best effort applications has exceeded a flushing frequency limit evaluated, either automatically or manually, as being critical for the BE applications. The threshold values may also indicate whether the amount of dirty pages in the memory associated with the best effort applications has exceeded a limit evaluated, either automatically or manually, as being critical for the BE applications. The threshold values may also indicate whether the amount of cache misses of the BE applications has exceeded a limit evaluated, either automatically or manually, as being critical for the BE applications.
If one or more of the activity parameters exceed at least one of the threshold value(s), the stream buffer size is decreased (S5) 205. However, if the activity parameters are below the threshold value(s), the buffer size is increased (S6) 204. The above steps 201-203 and 204 or 205 are then repeated, 206, 207. The invention can be implemented in any suitable form, including hardware, software, firmware or any combination of these. The invention or some features of the invention can be implemented as computer software running on one or more data processors and/or digital signal processors. The elements and components of an embodiment of the invention may be physically, functionally and logically implemented in any suitable way. Indeed, the functionality may be implemented in a single unit, in a plurality of units or as part of other functional units. As such, the invention may be implemented in a single unit, or may be physically and functionally distributed between different units and processors.
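One iteration of the loop in Fig. 2 can be sketched as follows. The step size and the minimum/maximum buffer bounds are illustrative assumptions; the invention specifies only that the buffer size is decreased or increased based on the threshold comparison.

```python
def adapt_step(params, thresholds, buf_size, step=64 * 1024,
               min_size=256 * 1024, max_size=8 * 1024 * 1024):
    """S2/C/S5/S6: compare the established activity parameters against
    their thresholds and shrink or grow the total stream buffer size."""
    if any(params[name] > limit for name, limit in thresholds.items()):
        return max(min_size, buf_size - step)   # S5: free pages for BE apps
    return min(max_size, buf_size + step)       # S6: lengthen HDD idle time
```

Calling this function periodically (steps 206, 207) lets the buffer size settle at the equilibrium between stream buffering and BE page cache space.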
Although the present invention has been described in connection with preferred embodiments, it is not intended to be limited to the specific form set forth herein. Rather, the scope of the present invention is limited only by the accompanying claims.
Certain specific details of the disclosed embodiment are set forth for purposes of explanation rather than limitation, so as to provide a clear and thorough understanding of the present invention. However, it should be understood by those skilled in this art, that the present invention might be practised in other embodiments that do not conform exactly to the details set forth herein, without departing significantly from the spirit and scope of this disclosure. Further, in this context, and for the purposes of brevity and clarity, detailed descriptions of well-known apparatuses, circuits and methodologies have been omitted so as to avoid unnecessary detail and possible confusion. Reference signs are included in the claims, however the inclusion of the reference signs is only for clarity reasons and should not be construed as limiting the scope of the claims.
A person skilled in the art will readily appreciate that various parameters disclosed in the description may be modified and that various embodiments disclosed and/or claimed may be combined without departing from the scope of the invention.

CLAIMS:
1. A method to be performed in a device (117) of controlling a page cache memory (101) in stream applications (115) and best effort applications (116), the page cache memory (101) comprising at least one stream buffer (107) associated to the stream applications (115) and a memory (104) associated to the best effort applications (116), comprising:
- establishing at least one activity parameter (202) indicating the current activity in the page cache memory (101), and based thereon
- adapting the buffer size (204, 205) dynamically to the stream applications (115) by increasing or decreasing the size of the at least one stream buffer (107).
2. A method according to claim 1, wherein the at least one activity parameter is selected from a group consisting of:
- the swap-out frequency of the memory (104) associated with the best effort applications (116), - the flushing of dirty pages frequency of the memory (104) associated with the best effort applications (116),
- the amount of dirty pages in the memory (104) associated with the best effort applications (116), and
- the amount of cache misses due to the reading of non-cached data by the best effort applications (116), wherein the stream buffer size is decreased (205) or increased (204) when one or more of the activity parameter is above or below, respectively, a pre-defined threshold value associated to each of said activity parameter.
3. A method according to claim 2, wherein the threshold values are defined based on one or more operating system characteristics of the device (117), or the hard disk drive (HDD) (119) characteristics of the device (117), or the combination thereof.
4. A method according to claim 1, wherein the stream applications (115) comprise Audio, or Video, or Audio/Video applications.
5. A computer-readable medium storing instructions for a processing unit to execute the method of claim 1.
6. A page cache controlling system (100) for controlling a page cache memory
(101) in real time stream applications (115) and best effort applications (116), the page cache memory (101) comprising at least one stream buffer (107) associated to the real time stream applications (115) and a memory (104) associated to the best effort applications (116), comprising:
- a page cache manager (102) for establishing at least one activity parameter indicating the current activity in the page cache memory (101), and
- a buffer adaptor (103) for adapting the buffer size dynamically to the real time stream applications (115) by increasing or decreasing the size of the at least one stream buffer (107).
7. A controlling system according to claim 6, where the page cache manager
(102) comprises a page cache monitor (121) adapted to monitor the activity in the page cache memory (101) and to determine the at least one activity parameter, and a resource manager
(120) interconnected to the page cache monitor (121) that uses the at least one parameter for keeping track of the activity in the page cache memory (101).
8. A device (117) adapted to perform real time stream applications (115) and best effort applications (116) comprising the page cache controlling system (100) according to claim 6.
PCT/IB2007/050115 2006-01-26 2007-01-15 A method of controlling a page cache memory in real time stream and best effort applications WO2007085978A2 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
EP06300071.5 2006-01-26
EP06300071 2006-01-26

Publications (2)

Publication Number Publication Date
WO2007085978A2 true WO2007085978A2 (en) 2007-08-02
WO2007085978A3 WO2007085978A3 (en) 2007-10-18

Family

ID=38091197

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/IB2007/050115 WO2007085978A2 (en) 2006-01-26 2007-01-15 A method of controlling a page cache memory in real time stream and best effort applications

Country Status (1)

Country Link
WO (1) WO2007085978A2 (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103763371A (en) * 2014-01-21 2014-04-30 深圳市脉山龙信息技术股份有限公司 Method for dynamically controlling mobile end application cache
US9143381B2 (en) 2009-04-16 2015-09-22 Microsoft Technology Licenising, LLC Sequenced transmission of digital content items

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5581736A (en) * 1994-07-18 1996-12-03 Microsoft Corporation Method and system for dynamically sharing RAM between virtual memory and disk cache
US6122708A (en) * 1997-08-15 2000-09-19 Hewlett-Packard Company Data cache for use with streaming data
US6438668B1 (en) * 1999-09-30 2002-08-20 Apple Computer, Inc. Method and apparatus for reducing power consumption in a digital processing system
US20030074524A1 (en) * 2001-10-16 2003-04-17 Intel Corporation Mass storage caching processes for power reduction

Also Published As

Publication number Publication date
WO2007085978A3 (en) 2007-10-18

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application
NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 07700586

Country of ref document: EP

Kind code of ref document: A2