FAST MEMORY ACCESS TO DIGITAL DATA
TECHNICAL FIELD
The present invention relates generally to data on demand, and more particularly to fast memory access to digital data.
BACKGROUND ART
Data-on-demand (DOD) facilitates the delivery of data, such as digital data, to clients on a demand basis. Often, there is a significant delay in the delivery of this data. FIG. 1 illustrates a conventional architecture for delivering data on demand. Data may be transmitted utilizing a satellite dish, a cable modem, etc. The data is transmitted, using the particular transmission means 12, over a transmission medium 14, such as analog television (TV), digital TV, Internet information, etc. Once the data is transmitted, it is digitally encoded via an input section 16. For example, the data is encoded into a digital signal such as a Motion Pictures Experts Group (MPEG) format, etc. The data is then buffered in a fast buffer memory 18. Following this buffering process, the data is stored in a mass storage device 20 for future access. The data again is buffered to the fast buffer memory 22. The fast buffer memory may be the same as the original fast buffer memory 18, or a different fast buffer memory 22. The data is then displayed to a client via a display 24 and an audio output device 26.
The architecture illustrated in FIG. 1 provides clients with data on demand. The data is stored in a mass storage device for retrieval upon the client request. The mass storage device data is then accessed and buffered through the fast buffer memory prior to being delivered to the client. Although this architecture provides data on demand, clients experience a significant delay in the receipt of the data requested. FIG. 2 illustrates an architecture for providing data for multimedia services. A multimedia time warping system is utilized to allow real time capture, storage, and display of television broadcast signals, as disclosed in U.S. Pat. No. 6,233,389. As indicated, the input module 28 takes TV input streams in a multitude of forms and produces MPEG streams. The MPEG streams are parsed and separated into audio and video components, the components being stored in temporary buffers. The MPEG
streams are delivered to a media switch 30, which mediates between a microprocessor CPU 38, a hard disk or storage device 36, and memory 34. The media switch 30 buffers the MPEG stream into memory. Thereafter, when the user is watching real time TV, the media switch sends the stream, separated into audio and video components, to the output section 32, which recombines the video and audio; the stream is simultaneously written to the hard disk or storage device 36.
This approach provides the user with the ability to reverse, fast forward, play, pause, index, fast/slow reverse play, and fast/slow play the particular program. However, users of the system experience a delay in receiving requested data. For example, a user may have to wait 3-4 seconds to receive the data. In a day where the speed with which information is relayed is paramount, such a delay may be unacceptable to many users. A system for providing data on demand rapidly is needed. Users should be able to quickly access requested data and experience, at most, an imperceptible delay in receiving this data.
SUMMARY OF THE INVENTION
Accordingly, it is an object of the present invention to provide fast memory access to digital data.
It is another object of the invention to provide users with a reliable data on demand system.
Briefly, a preferred embodiment of the present invention is a method and apparatus for providing fast memory access to digital data. Digital data is received from a data source. The data is then stored in a first memory and a second memory. The first memory stores and delivers the data at a faster rate than the second memory. A request for the data is received from a client and the first memory is accessed to deliver the data to the client. Where the data is not available from the first memory, the data is retrieved from the second memory and written to the first memory for retrieval therefrom. The data is then delivered to the client in accordance with the client request.
An advantage of the present invention is that it reduces the delay in receiving digital data from a data on demand service.
The foregoing and other objects, features and advantages of the invention will be apparent from the following detailed description of the preferred embodiment(s) which make(s) reference to (the several figures of) the drawing.
BRIEF DESCRIPTION OF THE DRAWINGS
FIG. 1 is a schematic diagram showing conventional architecture for providing data on demand.
FIG. 2 is a schematic diagram showing architecture for providing a multimedia time warping system.
FIG. 3 is a schematic diagram showing fast memory access to digital data in accordance with the present invention.
FIG. 4 is a flowchart illustrating a process for providing data to a client in accordance with a client request in accordance with the present invention.
FIG. 5A is a flowchart illustrating a process for meeting the data segment requirements of a client in accordance with the present invention.
FIG. 5B is a flowchart illustrating the function of a delay module in accordance with the present invention.
FIG. 6 is a flowchart illustrating a process for releasing data stored in fast memory in accordance with the present invention.
FIG. 7 is a flowchart illustrating a process for providing fast memory access to digital data in accordance with the present invention.
DETAILED DESCRIPTION OF THE INVENTION
The present invention is a method and apparatus for providing fast memory access to digital data. Data on demand services can provide access to digital data in response to a client request therefor. The present invention provides data on demand services at an expedited rate in order to diminish the seemingly significant delay experienced by users of past systems.
Turning now to FIG. 3, a schematic diagram shows the system architecture for fast memory access to digital data in accordance with the present invention. An input section 40 forwards data, via an MPEG program stream, to shared fast memory 42. The fast memory 42 writes the data to mass storage 44, where it can be retrieved in response to a client request. The data retrieved from the mass storage 44 or fast memory 42 is forwarded to the decoder 46 for conversion to TV output signals, which are then delivered to a TV.
The CPU 45A locates data in the fast memory 42 or mass storage 44 and passes the address to the decoder 46. Where the data is not located in fast memory 42, the CPU 45A locates the data in mass storage 44 and reads the data to fast memory 42. The CPU 45A then passes the address to the decoder 46, which in turn reads the data from fast memory 42.
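The lookup path described above can be sketched as follows. This is an illustrative model only, assuming a simple key-value view of the two memory tiers; class and method names are not taken from the specification.

```python
# Sketch of the CPU's lookup path (FIG. 3): check fast memory first, and on
# a miss copy the segment in from mass storage before handing its
# fast-memory address to the decoder.

class MemoryController:
    def __init__(self, mass_storage):
        self.fast_memory = {}             # segment id -> data (fast tier)
        self.mass_storage = mass_storage  # segment id -> data (slow tier)

    def locate(self, segment_id):
        """Return the fast-memory 'address' (here, the key) for a segment."""
        if segment_id not in self.fast_memory:
            # Miss: read the segment from mass storage into fast memory.
            self.fast_memory[segment_id] = self.mass_storage[segment_id]
        return segment_id  # the decoder then reads fast_memory[segment_id]

ctrl = MemoryController(mass_storage={"seg1": b"mpeg-data"})
addr = ctrl.locate("seg1")   # promotes seg1 from mass storage to fast memory
```

After the call, the decoder needs only the returned address to read the data from the fast tier, which is the property that makes delivery rapid.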
The input section 40 acts as an encoder, receiving data from a data source. The data may be in various formats, such as analog TV, digital TV, Internet information, etc. The input section 40 converts the data into a uniform format, such as a Moving Pictures Experts Group (MPEG) format. The MPEG stream, for example, is then written to fast memory 42. The fast memory 42 is capable of storing and delivering data at a much more rapid rate than the mass storage 44. The mass storage may be a hard drive, etc. The fast memory 42 writes the data into the mass storage 44. The fast memory 42 shares this data with the mass storage 44 and can access the mass storage 44 in order to retrieve data it can no longer provide. For example, the fast memory 42 may no longer be able to provide data because it has released the data to free space for other incoming data.
Data retrieved from the fast memory 42 or the mass storage 44 is forwarded to the decoder 46 when a request for the data is received from the decoder. The decoder 46 retrieves the data and converts it into a format usable by the broadcast medium. In the current diagram, the broadcast medium is a TV. Accordingly, the decoder here converts the MPEG stream, which is a program stream, into TV output signals for delivery to the TV receiver. The broadcast medium may be any medium suitable for use with the present invention. For example, the broadcast medium may be a computing device, such as a personal digital assistant (PDA), a laptop, a smart card, etc., a TV, as previously discussed, a gaming device, etc.
The decoder may include a buffer. A dual buffer is utilized in the present invention to prevent delay. A play buffer and a reserve buffer are provided. The reserve buffer stores data that is anticipated. An anticipator window may exist and may include the data segments playing, such as blocks 1, 2, and 5.
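One way to picture the dual-buffer scheme is sketched below: a play buffer holds the segment being decoded while a reserve buffer pre-fetches the anticipated segments, so playback never waits on a storage fetch. This is a hypothetical sketch; the specification does not define these names.

```python
# Dual-buffer sketch: the reserve buffer holds the anticipator window, and
# playback advances by swapping segments out of it.

class DualBuffer:
    def __init__(self):
        self.play = None    # segment currently being played
        self.reserve = []   # anticipated segments (the anticipator window)

    def prefetch(self, segment):
        self.reserve.append(segment)

    def advance(self):
        # Swap the next anticipated segment into the play buffer.
        self.play = self.reserve.pop(0) if self.reserve else None
        return self.play

buf = DualBuffer()
for seg in ("block1", "block2", "block5"):   # an anticipator window
    buf.prefetch(seg)
first = buf.advance()   # playback starts without touching mass storage
```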
FIG. 4 is a flowchart illustrating a process for providing data to a client in accordance with a client request in accordance with the present invention. The data arrives 48 and the system determines whether a client is requesting that particular data 50. If the client is not requesting the data, the data is not needed immediately and it can be stored in a mass storage device or the data can remain in fast memory until more data arrives that requires the fast memory space and simultaneously be written to the mass storage device for permanent storage. If a client is requesting data, the client is notified of the availability 52 of that data. After notifying the client of the availability of the data 52, and delivering the data to the client, the system determines whether there are any more clients 54. If there are more clients, the system determines whether the remaining clients are requesting the data 50 and notifies the client of the availability 52 and delivers the data, as previously discussed. This process repeats until all clients requesting the data have been notified of its availability.
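The FIG. 4 loop can be outlined as follows: when data arrives, walk the client list, notify and serve each client that requested it, and otherwise leave the data to be stored for later requests. The data structures and names here are illustrative assumptions, not taken from the specification.

```python
# Sketch of the FIG. 4 arrival loop: notify every client requesting the
# newly arrived data segment; data no client requests is simply stored.

def on_data_arrival(segment_id, clients):
    """Notify every client requesting segment_id; return the names served."""
    served = []
    for client in clients:
        if segment_id in client["requested"]:
            client["notified"].append(segment_id)  # notify of availability
            served.append(client["name"])
    return served  # unclaimed data remains for mass storage / later requests

clients = [
    {"name": "decoder", "requested": {"seg1"}, "notified": []},
    {"name": "storage", "requested": set(), "notified": []},
]
served = on_data_arrival("seg1", clients)
```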
The client may be any type of client suitable for use with the present invention. The client may be mass storage, the decoder, or any other device capable of requesting data. For example, the client may be hardware, software, or firmware coupled to the fast memory, etc.
FIG. 5A is a flowchart illustrating a process for meeting the data segment requirements of a client in accordance with the present invention. An interrupt may be received from the decoder for new data 56. The previous fast memory may be freed and marked as not in use 58. The next data segment requirement is then calculated 60 and fast memory is checked 62 for the next data segment requirement. If the next data segment is in fast memory, the address is forwarded to the decoder and the data is marked as in use 66. However, if the data segment is not in fast memory, the mass storage is accessed 64 to locate the data segment. A delay 65, as discussed below in FIG. 5B, is also present to enable the receipt of data. If the data segment is in mass storage, it is written to the fast memory 68 and the decoder is notified of the address and the data is marked as in use 66.
If the next data segment is not in mass storage, the system returns to the calculate the next data segment requirement step 60. The data may not be in mass storage due to an error in the request or a temporary void in the mass storage of that particular segment, such as a void caused by data loss, packet drop, etc. By returning to the calculate the next data segment requirement step 60, the system should be able to retrieve the data segment since the error is unlikely to occur twice. In the event the data is again not found in mass storage, the system will continue to return to the calculation step 60 until a next data segment requirement is located. The requirement may have changed by the time the system re-circulates, thereby allowing the system to break an unending cycle with no result.
The address forwarded to the decoder marks the location of the particular data segment in the fast memory. The address acts as a pointer so that the decoder can locate the data segment and forward it to the client after it converts it into an output signal. The output signal may be any signal suitable for use with the present invention, such as a TV output signal, a gaming output signal, a wireless device output signal, etc.
Similarly, the data segment may be located in the mass storage by utilizing an address corresponding to the particular data segment. The CPU locates data in mass storage, reads the data to fast memory, and then passes the address to the decoder, which in turn reads the data from fast memory. Thus, the data segment is located in mass storage and
written to fast memory. Accordingly, the decoder utilizes the address of the data segment in fast memory and is able to rapidly deliver the data segment to the client.
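The FIG. 5A servicing loop, taken together with the retry behavior above, can be sketched as follows. All names are hypothetical, and the successive requirement calculations (step 60) are modeled as an iterator for brevity.

```python
# Sketch of the FIG. 5A loop: on a decoder interrupt, repeatedly calculate
# the next segment requirement until a segment is located, checking fast
# memory (step 62) before mass storage (step 64).

def service_interrupt(fast, mass, requirements):
    """requirements yields successive 'next data segment' ids (step 60)."""
    for seg in requirements:
        if seg in fast:            # step 62: already in fast memory
            return seg             # address forwarded to the decoder
        if seg in mass:            # step 64: found in mass storage
            fast[seg] = mass[seg]  # step 68: write to fast memory
            return seg
        # Not found anywhere: fall through and recalculate the requirement.
    return None

fast, mass = {}, {"seg2": b"data"}
# "seg1" is temporarily missing (e.g. packet drop); the loop moves on.
found = service_interrupt(fast, mass, iter(["seg1", "seg2"]))
```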
FIG. 5B is a flowchart illustrating the function of a delay module in accordance with the present invention. A delay, in milliseconds, is scheduled. If the delay is greater than one frame, the next data segment can be calculated. However, if the delay is not greater than one frame, the fast memory will be checked again for the data. If the data is not in fast memory and the delay remains less than one frame, the system will continue to cycle through, checking fast memory, until the data is located in fast memory or the delay exceeds one frame and the next data segment is calculated. If the data is located in fast memory, the decoder is notified of the address so the data can be retrieved and marked as in use.
The delay of one frame or less enables the receipt of the data. During the delay period, the fast memory is periodically checked to determine whether the data has arrived.
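A minimal reading of this delay module is sketched below: poll fast memory during a bounded delay, and give up once the delay exceeds one frame time so the next segment requirement can be calculated. The frame duration and polling interval are assumptions for illustration, not values from the specification.

```python
# FIG. 5B delay module sketch: periodically re-check fast memory during a
# delay of at most one frame, then fall back to recalculating the segment.

import time

def wait_for_segment(fast, seg, frame_ms=33, poll_ms=5):
    deadline = time.monotonic() + frame_ms / 1000.0
    while time.monotonic() < deadline:   # delay not yet greater than a frame
        if seg in fast:                  # data arrived in fast memory
            return seg                   # notify the decoder of the address
        time.sleep(poll_ms / 1000.0)     # wait, then check again
    return None                          # give up: calculate next segment

fast = {"seg1": b"data"}
hit = wait_for_segment(fast, "seg1")          # found immediately
miss = wait_for_segment(fast, "absent", frame_ms=10)  # times out
```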
FIG. 6 is a flowchart illustrating a process for releasing data stored in fast memory in accordance with the present invention. A list of clients with data in use status for each buffer is maintained 70. Each client can use multiple buffers. Notification is received from a client that the data is no longer required 72. The client is removed from the list of clients with data in use status 74. The system determines whether all the clients have been removed from the status in use list 76. If all the clients requesting data, with data in use status in other words, have been removed from the status in use list, the previous fast memory buffer can be freed and marked as not in use 58. However, if all of the clients have not been removed from the list, certain clients still require the data and the cycle will repeat until notification has been received from every client on the list that the client no longer requires the data. In other words, the system returns to the receive notification step 72 until notifications from every client on the list have been received and the data can be released from the fast memory.
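The FIG. 6 release logic amounts to per-buffer reference counting: track which clients hold each buffer in use, and free the buffer only when the last holder signals it is done. The sketch below uses illustrative names under that reading.

```python
# Sketch of FIG. 6: maintain the in-use client list per buffer (step 70),
# remove clients as notifications arrive (steps 72-74), and free the buffer
# once the list is empty (steps 76 and 58).

class BufferTracker:
    def __init__(self):
        self.in_use = {}   # buffer id -> set of client names still using it

    def acquire(self, buf, client):
        self.in_use.setdefault(buf, set()).add(client)

    def release(self, buf, client):
        """Remove the client; free the buffer when no clients remain."""
        holders = self.in_use.get(buf, set())
        holders.discard(client)
        if not holders:             # all clients removed from the list
            self.in_use.pop(buf, None)
            return True             # buffer freed, marked not in use
        return False                # other clients still need the data

t = BufferTracker()
t.acquire("buf0", "decoderA")
t.acquire("buf0", "decoderB")
freed_early = t.release("buf0", "decoderA")  # decoderB still holds it
freed_last = t.release("buf0", "decoderB")   # last holder: buffer freed
```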
As previously discussed, clients requesting data have a data in use status. A list may be maintained of these clients in order to prevent the release of data from fast memory that continues to be required by certain clients. Any other method suitable for use with the present invention may be utilized to track clients still in need of the data and release data
no longer required by one or more clients. For example, client activity regarding the data requested can be logged and the data released following a specified period of time during which no activity is sensed, etc.
The notification from the client that the particular data is no longer required may be simple. The notification may be an actual message, such as free buffer, conveyed to the system that the client has finished with the data. The notification may be derived from the fact that the client is now requesting another data segment, etc. Any type of notification, implied or explicit, may be utilized that is suitable for use with the present invention.
Once the fast memory releases the data no longer required by the clients, space is freed for other data. The newer data then takes the place of the released data. In this manner, the fast memory can constantly free space to make room for incoming data that may be requested by a client.
FIG. 7 is a flowchart illustrating a process for providing fast memory access to digital data in accordance with the present invention. Data is received from a data source 78 and that data is then encoded 80. For example, the data may be encoded (i.e. converted) into an MPEG stream. The data is then stored in a first memory and a second memory. The first memory stores and delivers the data at a faster rate than the second memory 82. A request from the client for the data is received 84 and the first memory is accessed to deliver the data to the client. Where the data is not available from the first memory, it is retrieved from the second memory and written to the first memory 86. The request for the data from the client may be an interrupt from a decoder. The data is then delivered to the client 88. The data may be delivered to the client via a computing device, a gaming device, or a television.
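The end-to-end FIG. 7 flow can be sketched as follows: encode incoming data, store it in both memory tiers, and serve client requests from the first (fast) tier, falling back to the second (slow) tier on a miss. This is a simplified model; the encoding stand-in and all names are assumptions.

```python
# Sketch of the FIG. 7 pipeline: ingest (steps 78-82) and serve (steps
# 84-88) against a fast first memory and a slow second memory.

def ingest(raw, fast, slow, encode=lambda d: b"MPEG:" + d):
    data = encode(raw)        # step 80: encode (e.g. into an MPEG stream)
    key = hash(data)
    fast[key] = data          # step 82: store in the first (fast) memory
    slow[key] = data          # ...and in the second (slow) memory
    return key

def serve(key, fast, slow):
    if key not in fast:       # step 86: not available in the first memory
        fast[key] = slow[key] # retrieve from the second, write to the first
    return fast[key]          # step 88: deliver the data to the client

fast, slow = {}, {}
key = ingest(b"frame", fast, slow)
fast.clear()                  # fast memory released the data meanwhile
data = serve(key, fast, slow) # re-promoted from the second memory
```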
As previously discussed, a list of clients requesting particular data may be maintained. In this scenario, a particular client is removed from the list upon receipt of notification that the client no longer requires the particular data. Further, the particular data is released from the first memory following the removal of every client from the list.
By accessing the fast memory (i.e. first memory), the system is able to rapidly deliver requested data to a client. Previous systems offer data with at least a 3 to 4 second delay. The method of the present invention, however, can deliver data with a delay of only 0.3 seconds or less. This delay is far shorter than that of other delivery methods for digital data in a data on demand environment. Moreover, a delay of this magnitude is not perceivable to users, whereas a 3 to 4 second delay is very obvious to users.
While the invention has been particularly shown and described with reference to (a) certain preferred embodiment(s), it will be understood by those skilled in the art that various alterations and modifications in form and detail may be made therein. Accordingly, it is intended that the following claims cover all such alterations and modifications as fall within the true spirit and scope of the invention.