Disclosure of Invention
In view of the above, the present invention provides a depth image compression method to solve the problems of large data size and high storage resource consumption of depth information storage.
In a first aspect, the present invention provides a depth image compression method, the method comprising:
acquiring at least one depth image, wherein the at least one depth image comprises a first image;
reading a depth characteristic of the first image, and dividing the first image into N image blocks according to the depth characteristic using a sliding window technique, wherein each image block comprises a plurality of pixel points, and N is a positive integer greater than or equal to 2;
calculating a reference value of each of the N image blocks according to the plurality of pixel points, calculating pixel differences between at least one pixel value in each image block and the corresponding reference value, and generating N pixel difference sets corresponding to the N image blocks, wherein each pixel difference set comprises at least one pixel difference; and
transmitting the N pixel difference sets and the N reference values to a server.
With reference to the first aspect, in a possible implementation manner, the reading of the depth characteristic of the first image and the dividing of the first image into N image blocks according to the depth characteristic using the sliding window technique include:
sliding a predefined sliding window over the first image according to the depth information and distribution characteristics read from the first image, and determining the size of each image block one by one to obtain the N image blocks.
With reference to the first aspect, in another possible implementation manner, the N image blocks include a first image block and a second image block, and the first image block and the second image block are adjacent.
The sliding of the predefined sliding window over the first image according to the depth information and distribution characteristics read from the first image, and the determining of the size of each image block one by one to obtain the N image blocks, include:
sliding the predefined sliding window over the first image block and the second image block according to the depth information and distribution characteristics of the first image block and the second image block, and determining a third image block.
The sliding of the predefined sliding window over the first and second image blocks according to their depth information and distribution characteristics to determine the third image block includes:
calculating average pixel values of all pixel points in the first image block as a first average pixel value, and calculating average pixel values of all pixel points in the second image block as a second average pixel value;
calculating a difference between the first average pixel value and the second average pixel value;
determining whether the difference is larger than a preset value;
if the difference is larger than the preset value, dividing a third image block whose size is a fraction of the predefined sliding window size;
and if the difference is smaller than or equal to the preset value, dividing a third image block whose size is larger than that of the first image block or the second image block.
With reference to the first aspect, in a further possible implementation manner, when the difference is greater than the preset value, the size of the divided third image block is half of the predefined sliding window size; when the difference is less than or equal to the preset value, the size of the divided third image block is the sum of the areas of the first image block and the second image block.
With reference to the first aspect, in a further possible implementation manner, the calculating of the reference value of each of the N image blocks according to the plurality of pixel points includes:
calculating the difference between each pixel value in the current image block and the corresponding average pixel value, and taking the pixel value with the smallest difference as the reference value of the current image block.
With reference to the first aspect, in a further possible implementation manner, the calculating of the pixel differences between at least one pixel value in each image block and the corresponding reference value, and the generating of the N pixel difference sets corresponding to the N image blocks, include:
calculating the pixel difference between the pixel value of each pixel point in the current image block and the reference value of the current image block to obtain a pixel difference set comprising at least one pixel difference, and collecting the N pixel difference sets of the N image blocks after the pixel differences are calculated.
With reference to the first aspect, in a further possible implementation manner, after the sending of the N pixel difference sets and the N reference values to the server, the method further includes:
reading information of the N image blocks into which a target depth image is divided, wherein the target depth image is one of the at least one depth image;
reading the reference distance, the relative distance, and the flag bit of each image block according to the N pieces of image block information, and parsing the raw pixel data of each image block; and
integrating and calculating the N reference values and at least part of the pixel differences in the N pixel difference sets to generate the target depth image.
In a second aspect, the present invention provides a depth image compression apparatus, the apparatus comprising:
the device comprises a receiving module, a processing module, a computing module, and a sending module, wherein the receiving module is configured to acquire at least one depth image, and the at least one depth image comprises a first image;
the processing module is configured to read a depth characteristic of the first image and divide the first image into N image blocks according to the depth characteristic using a sliding window technique, wherein N is a positive integer greater than or equal to 2;
the computing module is configured to compute a reference value of each of the N image blocks, compute a difference between at least one pixel value in each image block and the corresponding reference value, and collect N pixel difference sets corresponding to the N image blocks, wherein each pixel difference set comprises a plurality of difference values;
and the sending module is configured to send the N pixel difference sets and the N reference values to a server.
In a third aspect, the present invention provides a computer device comprising a memory and a processor communicatively connected to each other, wherein the memory stores computer instructions and the processor executes the computer instructions to perform the depth image compression method of the first aspect or any of its corresponding implementation manners.
In a fourth aspect, the present invention provides a computer readable storage medium having stored thereon computer instructions for causing a computer to perform the depth image compression method of the first aspect or any one of its corresponding embodiments.
Furthermore, the present invention provides a computer program product comprising computer instructions for causing a computer to perform the depth image compression method of the first aspect or any of its corresponding embodiments.
According to the depth image compression method provided by the embodiments, image blocks are divided one by one using a sliding window technique, the average of all pixel values in each block is calculated, and the pixel value closest to the average is selected as the reference value. During storage, the difference between each pixel value and the reference value is recorded and stored; because the method stores the pixel difference sets and the reference values instead of the original value of every pixel, it effectively reduces the amount of data to be stored and lowers storage cost and storage resource consumption.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the embodiments of the present invention more apparent, the technical solutions of the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present invention, and it is apparent that the described embodiments are some embodiments of the present invention, but not all embodiments of the present invention. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
In the description of the present invention, unless explicitly stated or limited otherwise, the terms "mounted," "connected," and "coupled" are to be construed broadly, and may denote, for example, a fixed connection, a detachable connection, or an integral connection; a mechanical or electrical connection; a direct connection, an indirect connection via an intervening medium, or communication between two elements. The specific meaning of the above terms in the present invention can be understood by those of ordinary skill in the art according to the specific circumstances.
Furthermore, the terms "first," "second," and the like, are used for descriptive purposes only and are not to be construed as indicating or implying relative importance.
The technical features of the different embodiments of the invention described below may be combined with one another as long as they do not conflict with one another.
First, the technical scenarios and related technical terms involved in the technical solution of the present invention are introduced.
The invention relates to the technical field of image processing, and in particular to a storage and compression method for depth image data, aiming at improving depth image storage efficiency, reducing storage space occupation, and improving depth image transmission efficiency.
With the development of SLAM (Simultaneous Localization and Mapping) and intelligent driving technologies, the information carried by the two-dimensional planar color image obtained with an ordinary camera can no longer meet technical requirements. Specifically, such an image is limited to a two-dimensional plane: it cannot convey accurate longitudinal depth information between objects in the image, or between those objects and the camera, and therefore cannot provide strong data support for technologies such as accurate real-time map construction and path planning.
Therefore, devices such as depth cameras and TOF cameras are used to capture the depth of field (i.e., depth information) of the shooting space, overcoming this lack of precision. However, directly storing the depth image data (such as depth values) captured by a depth camera requires an extremely large amount of memory, which limits the feasibility of depth image data in practical applications. The technical solution of the present invention therefore seeks an efficient image compression and processing scheme, so as to reduce the burden on storage facilities and improve data transmission efficiency.
Referring to fig. 1, a schematic view of a scene photographed by a depth camera according to an embodiment of the present invention is shown. The application scene comprises a depth camera, at least one terminal device such as a PC, and at least one server. The scene may also include other devices, such as switches, which is not limited in this embodiment.
The depth camera obtains depth information of each pixel point in the image through the depth sensor, so that not only a traditional two-dimensional (2D) image but also three-dimensional (3D) point cloud information can be obtained. Depth cameras include, but are not limited to, binocular cameras, TOF cameras, and the like.
Terminal devices include, but are not limited to, various personal computers (PCs), notebook computers, smart phones, tablet computers, portable wearable devices, and the like.
The server may be implemented as a stand-alone server or as a server cluster composed of a plurality of servers, and may provide cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communications, middleware services, domain name services, content delivery networks (Content Delivery Network, CDN), and basic cloud computing services such as big data and artificial intelligence platforms.
The terminal device, e.g. the PC, and the server may be connected via a wired or wireless network.
The present invention provides an embodiment of a depth image compression method. It should be noted that the steps illustrated in the flowcharts of the figures may be performed in a computer system, such as one executing a set of computer-executable instructions, and that, although a logical order is shown in the flowcharts, in some cases the steps illustrated or described may be performed in an order different from that shown herein.
In this embodiment, a depth image compression method is provided, which may be used in the above terminal device, such as a PC, and fig. 2 is a flowchart of the depth image compression method according to an embodiment of the present invention, as shown in fig. 2, where the flowchart includes:
Step S101, at least one depth image is acquired, wherein the at least one depth image comprises a first image.
Specifically, a depth camera or TOF camera captures a set of depth images and transmits them to a terminal device such as a PC. The PC receives at least one depth image from the depth camera or TOF camera.
Wherein the first image includes color information and depth information of a photographed scene.
This embodiment takes as an example the case where the depth camera or TOF camera captures at least one depth image including the first image, and the receiving PC receives and processes the first image.
Step S102, reading the depth characteristic of the first image, and dividing the first image into N image blocks according to the depth characteristic using a sliding window technique.
Each image block comprises a plurality of pixel points, and N is a positive integer greater than or equal to 2. The first image is composed of hundreds or thousands of pixel points, each of which corresponds to one pixel value. The pixel value (PX) may be used to represent depth information.
The depth characteristic, also referred to as color depth (Color Depth), refers to the number of bits used to store each pixel. This bit count determines the maximum number of colors that can appear in a color image, or the maximum number of gray levels in a grayscale image. The image depth is the color detail of a single pixel point, e.g., 16 bits (65,536 colors) or 24 bits (16,777,216 colors).
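The bit-depth figures above follow directly from the number of representable levels, 2 to the power of the bit count; a trivial sketch to verify them (not part of the claimed method):

```python
# Number of representable levels for a given color depth (bits per pixel).
def levels(bits: int) -> int:
    return 2 ** bits

# 16-bit -> 65536 levels; 24-bit -> 16777216 levels, matching the text above.
```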
The sliding window technique requires the size (width and height) and the step size (horizontal and vertical movement distance) of the sliding window to be predefined; once defined, the sliding window is slid over the first image according to the characteristics of the depth image, obtaining the divided image blocks one by one. Dynamically adjusting the block size through the sliding-block approach allows the optimal block size to be selected adaptively when processing regions with different depth values, improving storage efficiency.
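As a minimal sketch of the predefined-window mechanics just described (fixed size and step, before any adaptive resizing; the function name and parameters are illustrative assumptions, not part of the claimed method):

```python
def slide_windows(height, width, win_h, win_w, step_h, step_w):
    """Yield the top-left corner of each position of a predefined
    window of size (win_h, win_w) slid over a (height, width) image."""
    for y in range(0, height - win_h + 1, step_h):
        for x in range(0, width - win_w + 1, step_w):
            yield y, x

# A 4x4 image covered by a 2x2 window with step 2 yields four positions.
positions = list(slide_windows(4, 4, 2, 2, 2, 2))
```

The adaptive variant described below would adjust `win_h`/`win_w` between iterations instead of keeping them fixed.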
Step S103, calculating a reference value of each image block in the N image blocks according to the plurality of pixel points, and calculating pixel differences between at least one pixel value in each image block and the corresponding reference value to generate N pixel difference sets corresponding to the N image blocks.
Each set of pixel differences includes at least one pixel difference. For example, one set of pixel differences contains 1000 pixel differences.
A specific embodiment of the step S103 includes:
Step S103-1, for each image block, calculating average pixel values of all pixel points in each image block to obtain N average pixel values.
In this embodiment, the pixel points of each of the N image blocks are averaged to obtain N average pixel values. For example, for a first image block in the first image, the average of all pixel points in the first image block is calculated to obtain a first average pixel value.
Step S103-2, calculating the difference between each pixel value in the current image block and the corresponding average pixel value, and taking the pixel value with the smallest difference as the reference value of the current image block.
The first average pixel value is compared with each pixel value in the first image block, and the pixel value closest to or equal to the first average pixel value is taken as the first reference value of the first image block. Optionally, if two or more pixel values are equally close to the first average pixel value, one of them is selected at random as the first reference value, or one is selected according to coordinate distance.
Similarly, the reference values of the other image blocks are calculated and confirmed in the same way, yielding N reference values. Each image block corresponds to one reference value, which serves as the reference for all of its pixels.
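A minimal NumPy sketch of this reference-value selection (the function name is hypothetical; ties are broken by first occurrence, one of the options mentioned above):

```python
import numpy as np

def block_reference(block: np.ndarray) -> int:
    """Return the pixel value in `block` closest to the block mean."""
    flat = block.ravel()
    idx = np.abs(flat - flat.mean()).argmin()  # smallest |pixel - mean|
    return int(flat[idx])

# For a block with values 10, 12, 14, 40 the mean is 19, so 14 is chosen.
ref = block_reference(np.array([[10, 12], [14, 40]]))
```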
The process further includes, for each image block, calculating the pixel difference (ΔPX) between the pixel value of each pixel point in the current image block and the reference value of the current image block to obtain a pixel difference set comprising at least one pixel difference; after the pixel differences are calculated, the N pixel difference sets of the N image blocks are collected.
For example, a first image block contains 1000 pixel points corresponding to 1000 pixel values; the difference (ΔPX) between each of these pixel values and the first reference value of the block is calculated, yielding 1000 pixel differences (ΔPX), which form a first pixel difference set. Similarly, the pixel differences (ΔPX) of the other image blocks are collected into their corresponding pixel difference sets.
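The per-block difference set can be sketched as follows (a hedged illustration; the name is hypothetical). Signed integers are used because a pixel may be smaller than the reference:

```python
import numpy as np

def block_differences(block: np.ndarray, reference: int) -> np.ndarray:
    """Compute the pixel difference set (ΔPX): each pixel minus the reference."""
    return block.ravel().astype(np.int64) - reference

# With reference 14, the block [10, 12, 14, 40] yields ΔPX = [-4, -2, 0, 26].
deltas = block_differences(np.array([[10, 12], [14, 40]]), 14)
```

Because neighboring depth values within a block tend to be close to the reference, the differences typically span a far smaller numeric range than the raw pixel values, which is the source of the storage saving.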
Step S104, N pixel difference sets and N reference values are sent to the server.
In one embodiment, the terminal PC transmits the N pixel difference sets and the N reference values to the server through a wireless network, so that the server stores the N pixel difference sets and the N reference values in the cloud.
According to the method provided by this embodiment, image blocks are divided one by one using a sliding window technique, the average of all pixel values in each block is calculated, and the pixel value closest to the average is selected as the reference value. During storage, the difference between each pixel value and the reference value is recorded and stored; because the method stores the pixel difference sets and the reference values instead of the original value of every pixel, it effectively reduces the amount of data to be stored and lowers storage cost and storage resource consumption.
Optionally, in this embodiment, step S102 of reading the depth characteristic of the first image and dividing the first image into N image blocks according to the depth characteristic using the sliding window technique specifically includes:
sliding the predefined sliding window over the first image according to the depth information and distribution characteristics read from the first image, and determining the size of each image block one by one to obtain the N image blocks.
The length and width of the predefined sliding window are determined according to the characteristics of the depth image. Specifically, a dynamic block size configuration is implemented according to the characteristics of the depth image. Different parts of a depth image often have different depth distribution characteristics, and a uniform fixed block size often cannot achieve effective compression; a sliding window is therefore predefined and used for sliding processing over the first image.
One embodiment of the sliding window configuration provided in this embodiment includes dynamically selecting the size of the next image block in the depth image through the sliding window technique. The dividing principle is that larger image blocks may be divided for regions where depth changes gently, while smaller image blocks are used for regions where depth changes sharply, so that each image block better matches its depth characteristics and the integrity and accuracy of the information are preserved.
In one possible implementation manner of this embodiment, the N image blocks include a first image block and a second image block, and the first image block and the second image block are two image blocks in adjacent positions.
In step S102, the sliding of the predefined sliding window over the first image according to the depth information and distribution characteristics read from the first image, and the determining of the size of each image block one by one to obtain the N image blocks, include sliding the predefined sliding window over the first image block and the second image block according to their depth information and distribution characteristics, and determining a third image block.
Further, as shown in fig. 3, an embodiment of sliding the predefined sliding window over the first image block and the second image block and determining the third image block includes:
Step S201, calculating an average pixel value of all pixels in the first image block as a first average pixel value, and calculating an average pixel value of all pixels in the second image block as a second average pixel value.
Step S202, calculating the difference between the first average pixel value and the second average pixel value.
Step S203, determining whether the difference is larger than a preset value. If yes, step S204 is performed; if no, step S205 is performed.
Step S204, if the difference is larger than the preset value, dividing a third image block whose size is a fraction of the predefined sliding window size.
Step S205, if the difference is smaller than or equal to the preset value, dividing a third image block whose size is larger than that of the first image block or the second image block.
For example, a sliding window is predefined with length L1 and width W1, so that its area is L1×W1, where L1=2L, W1=2W, L is the smallest unit length that evenly divides the total length of the first image, and W is the smallest unit width that evenly divides the total width of the first image. First, a first image block and a second image block are divided in the first image, both the same size as the predefined sliding window, i.e., L1×W1.
The first average pixel value of the first image block is calculated as a1 and the second average pixel value of the second image block as a2, and a1 and a2 are compared against a preset value a0.
If |a1−a2| ≤ a0, step S205 is performed: the third image block is divided with a size larger than the areas of the first and second image blocks, for example equal to the sum of their areas, i.e., third image block = first image block + second image block; in this example, the third image block is twice the predefined sliding window (i.e., 2×L1×W1).
If |a1−a2| > a0, step S204 is performed: the third image block is divided with a size that is a fraction of the predefined sliding window, e.g., half of the first or second image block, i.e., third image block = 1/2 × (predefined sliding window) = 1/2 × (L1×W1).
This process continues until the entire first image has been partitioned.
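The adjacent-block comparison above can be sketched as a small decision function (an illustrative assumption of how the rule might be coded; `a0` denotes the preset value):

```python
import numpy as np

def next_block_area(block_a, block_b, a0, window_area):
    """Choose the area of the third block from two adjacent blocks:
    a large jump in mean depth -> half the window area (finer blocks);
    otherwise -> the combined area of both blocks (coarser blocks)."""
    a1, a2 = float(np.mean(block_a)), float(np.mean(block_b))
    if abs(a1 - a2) > a0:
        return window_area // 2
    return block_a.size + block_b.size
```

So a region with a sharp depth transition is partitioned more finely, while a smooth region is merged into larger blocks, matching the dividing principle stated earlier.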
In this embodiment, the current image is divided into N image blocks using the sliding window technique. During the division, the depth distribution of the currently processed region is monitored in real time, and the block size is adjusted automatically according to the depth statistics and distribution characteristics of the image. In implementation, the algorithm selects an appropriate block size for efficient storage and processing according to the depth value range and distribution characteristics of the pixels in the current block.
In addition, the present embodiment further includes a process of reading image data stored in the server, specifically, as shown in fig. 4, after the N pixel difference sets and the N reference values are sent to the server in step S104, the method further includes:
Step S105, reading information of the N image blocks into which a target depth image is divided, wherein the target depth image is one of the at least one depth image.
The target depth image may be an image that the user desires to read on a client or terminal device. The target depth image may be the first image described above, or other images in at least one image, which is not limited in this embodiment.
Step S106, reading the reference distance, the relative distance, and the flag bit of each image block according to the N pieces of image block information, and parsing the raw pixel data of each image block.
Step S107, performing an integration calculation on the N reference values and at least part of the pixel differences in the N pixel difference sets to generate the target depth image.
When reading depth image data, the block size information into which the current depth image was divided is read first; then the reference distance, the relative distance, and the flag-bit reference pixel position of each block are read and reverse-calculated to recover the raw data of the current block. Each block is parsed in sequence to complete the reading of the entire depth image data.
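Per block, the reverse calculation reduces to adding the reference value back onto each stored difference; a minimal sketch (the function name is hypothetical, and flag-bit handling is omitted):

```python
import numpy as np

def decompress_block(reference: int, differences, shape) -> np.ndarray:
    """Rebuild a block's raw depth values from its reference and difference set."""
    return (np.asarray(differences, dtype=np.int64) + reference).reshape(shape)

# Reference 14 with ΔPX = [-4, -2, 0, 26] restores the block [[10, 12], [14, 40]].
block = decompress_block(14, [-4, -2, 0, 26], (2, 2))
```

Because the differences are exact, this reconstruction is lossless: the restored block is identical to the original.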
The depth image compression and storage method based on the sliding-block mean approach provided by this embodiment offers significant advantages in storage efficiency, data compression, and processing performance through dynamic block sizing, mean-based reference value selection, difference storage, an adaptive update mechanism, and an optimized reading process. It is significant not only for improving the efficiency of depth image processing but also as a reference for research and applications in related fields.
In a specific embodiment, a depth image compression method based on the sliding-block mean approach is provided, achieving these advantages through the same mechanisms of dynamic block sizing, mean-based reference value selection, difference storage, adaptive updating, and an optimized reading process. As shown in fig. 5, the method mainly comprises the following steps:
step one, dividing the image block size.
First, a dynamic block size configuration is implemented according to the characteristics of the depth image. Different portions of the depth image tend to have different depth distribution characteristics; therefore, a uniform image block size often cannot achieve effective compression. The image block size division is implemented as follows:
the size of the image block is dynamically selected within the depth image by the sliding window technique. Larger blocks may be used for regions where depth changes gently, and smaller blocks for regions where depth changes sharply, so that each image block better matches its depth characteristics and the integrity and accuracy of the information are preserved. Adaptive adjustment: the depth distribution of the currently processed region is monitored in real time, and the image block size is adjusted automatically according to the depth statistics and distribution characteristics of the image. In implementation, the algorithm selects an appropriate block size for efficient storage and processing according to the depth value range and distribution characteristics of the pixels in the current block.
And step two, selecting a mean value and a reference value.
After the sliding-window block size configuration is completed, the mean and the reference value are selected. The purpose of this step is to calculate a mean (i.e., an average pixel value) for each image block and use it to determine the reference value, in order to reduce redundant storage of subsequent data.
One embodiment includes, for each image block, calculating the mean of all pixel values within the block. The mean reflects the depth information of the region and facilitates the subsequent recording of differences. After the mean is obtained, the pixel value closest to the mean among all pixel values in the block is selected as the reference value. This reference value effectively reduces the computational complexity of the subsequent differences and plays a key role in the subsequent storage and decompression processes.
And step three, storing the difference value.
Difference storage aims to significantly reduce the amount of raw data to be stored by recording the difference between each pixel value and the reference value. One possible implementation computes, for each pixel value in each image block, its difference from the reference value. Because the differences between pixel values are generally small, this greatly reduces the amount of stored data. During storage, the pixel data is saved in the form of differences, together with flag bits indicating the validity and specific identification of the data. Each image block records its reference value and all difference information, thereby achieving efficient compressed storage.
And step four, reading the depth image.
After the depth image has been stored, the compressed difference sets and reference values on the server can be read, and the stored data is parsed to reconstruct the original image. One embodiment includes, in the reading stage, first reading the block size information of the current depth image partition to learn the basic structure of each block; then reading the reference distance, the relative distance, and the flag bit of each block in turn, and parsing the raw pixel data of each block. Reconstruction of the entire depth image is completed by combining the reference values with the relative differences, ensuring accurate and complete reading.
According to the method provided by this embodiment, the difference between each pixel value and the mean-based reference value is recorded, and the N reference values of the N divided image blocks together with the corresponding pixel difference sets are stored. Compared with storing the raw image data, the total amount of data required for storage is significantly reduced, which lowers storage cost and improves the overall efficiency of the storage system.
In addition, unnecessary information redundancy is effectively reduced while the image quality is maintained, satisfying the dual requirements of data security and storage economy. Because the method dynamically determines the size of the next image block according to the sizes of the two adjacent image blocks, the image blocks adapt to the depth characteristics, which improves the compression efficiency of the depth image during transmission, reduces data transmission cost, and provides feasible technical support for real-time depth image applications.
In this embodiment, a depth image compression device is further provided. The device is used to implement the foregoing embodiments and preferred implementations, and descriptions already given are not repeated. As used below, the term "module" may be a combination of software and/or hardware that implements a predetermined function. Although the means described in the following embodiments are preferably implemented in software, implementation in hardware, or in a combination of software and hardware, is also possible and contemplated.
The present embodiment provides a depth image compression apparatus, as shown in fig. 6, which includes a receiving module 610, a processing module 620, a calculating module 630, and a transmitting module 640. In addition, the apparatus may further include other more or fewer modules, such as a storage module, which is not limited in this embodiment.
The receiving module 610 is configured to obtain at least one depth image, where the at least one depth image includes a first image.
The processing module 620 is configured to read a depth characteristic of the first image, and divide the first image into N image blocks according to the depth characteristic by using a sliding window technique, where N is greater than or equal to 2 and is a positive integer.
The calculating module 630 is configured to calculate a reference value of each of the N image blocks, calculate the difference between at least one pixel value in each image block and the corresponding reference value, and generate N pixel difference sets corresponding to the N image blocks, where each pixel difference set includes at least one pixel difference.
And a transmitting module 640, configured to transmit the N pixel difference sets and the N reference values to the server.
In a possible implementation manner, the processing module 620 is specifically configured to obtain a predefined sliding window, slide the predefined sliding window over the first image according to the read depth information and distribution characteristics in the first image, and determine the size of each image block one by one, so as to obtain the N image blocks.
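The embodiment does not fix a concrete partitioning criterion, so the following one-dimensional sketch is purely illustrative: it grows each window while the depth values inside remain homogeneous (low variance), one plausible way to make block sizes follow the depth distribution. The function name, thresholds, and variance criterion are all assumptions:

```python
import numpy as np

def partition_row(depth_row, min_w=2, max_w=8, var_thresh=4.0):
    """Greedily split one row of depth values into variable-width blocks.

    A block starts at `min_w` pixels and widens (up to `max_w`) while the
    variance of the covered depth values stays below `var_thresh`, so
    smooth regions yield large blocks and depth edges yield small ones.
    """
    row = np.asarray(depth_row, dtype=np.float64)
    blocks, start = [], 0
    while start < row.size:
        end = min(start + min_w, row.size)
        # Widen the window while the covered depth values stay homogeneous.
        while end < min(start + max_w, row.size) and row[start:end + 1].var() <= var_thresh:
            end += 1
        blocks.append((start, end))
        start = end
    return blocks
```

On a row containing a sharp depth step, such a rule places a block boundary at the step, which keeps the per-block differences small for the subsequent difference-storage step.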
In addition, in another possible implementation manner, the compression device provided in this embodiment further includes a reading module, which is not shown in fig. 6.
And the reading module is used for reading N image block information divided by a target depth image, wherein the target depth image is one of at least one depth image.
The processing module 620 is further configured to read the reference distance, the relative distance, and the flag bit of each image block according to the N pieces of image block information, parse the original pixel data of each image block, and perform a combination calculation on the N reference values and at least part of the pixel differences in the N pixel difference sets, so as to generate the target depth image.
Further functional descriptions of the above respective modules and units are the same as those of the above corresponding embodiments, and are not repeated here.
The depth image compression apparatus of the present embodiment is presented in the form of functional units, where a unit refers to an ASIC (Application Specific Integrated Circuit), a processor and memory executing one or more software or firmware programs, and/or other devices that can provide the above-described functions.
The embodiment of the invention also provides a computer device equipped with the depth image compression apparatus shown in fig. 6.
Referring to FIG. 7, an alternative embodiment of the present invention provides a computer device that includes one or more processors 10, a memory 20, and interfaces for connecting the components, including a high-speed interface and a low-speed interface. The various components are communicatively coupled to each other using different buses and may be mounted on a common motherboard or in other manners as desired. The processor may process instructions executing within the computer device, including instructions stored in or on memory to display graphical information of the GUI on an external input/output device, such as a display device coupled to the interface.
In some alternative embodiments, multiple processors and/or multiple buses may be used, if desired, along with multiple memories. Also, multiple computer devices may be connected, each providing a portion of the necessary operations (e.g., as a server array, a set of blade servers, or a multiprocessor system). A single processor 10 is illustrated in fig. 7.
The processor 10 may be a central processor, a network processor, or a combination thereof. The processor 10 may further include a hardware chip, among others. The hardware chip may be an application specific integrated circuit, a programmable logic device, or a combination thereof. The programmable logic device may be a complex programmable logic device, a field programmable gate array, a general-purpose array logic, or any combination thereof.
The memory 20 stores instructions executable by the at least one processor 10 to cause the at least one processor 10 to perform the depth image compression method shown in the above embodiments.
The memory 20 may include a storage program area that may store an operating system, application programs required for at least one function, and a storage data area that may store data created according to the use of the computer device, etc. In addition, the memory 20 may include high-speed random access memory, and may also include non-transitory memory, such as at least one magnetic disk storage device, flash memory device, or other non-transitory solid-state storage device. In some alternative embodiments, memory 20 may comprise memory located remotely from processor 10, which may be connected to the computer device via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The memory 20 may comprise volatile memory, such as random access memory, or nonvolatile memory, such as flash memory, hard disk or solid state disk, or the memory 20 may comprise a combination of the above types of memory.
The computer device further comprises input means 30 and output means 40. The processor 10, memory 20, input device 30, and output device 40 may be connected by a bus or in other manners; connection by a bus is exemplified in fig. 7.
The input device 30 may receive input numeric or character information and generate signal inputs related to user settings and function control of the computer apparatus, such as a touch screen, a keypad, a mouse, a track pad, a touch pad, a pointer stick, one or more mouse buttons, a track ball, a joystick, and the like. The output means 40 may include a display device, auxiliary lighting means (e.g., LEDs), tactile feedback means (e.g., vibration motors), and the like. Such display devices include, but are not limited to, liquid crystal displays, light emitting diode displays, and plasma displays. In some alternative implementations, the display device may be a touch screen.
Optionally, the computer device further comprises at least one communication interface for the computer device to communicate with other devices or communication networks.
The embodiments of the present invention also provide a computer-readable storage medium. The method according to the embodiments of the present invention described above may be implemented in hardware or firmware, or as computer code that may be recorded on a storage medium, or as computer code originally stored on a remote storage medium or a non-transitory machine-readable storage medium, downloaded through a network, and stored on a local storage medium, so that the method described herein may be processed by software stored on a storage medium using a general-purpose computer, a special-purpose processor, or programmable or special-purpose hardware.
The storage medium may be a magnetic disk, an optical disk, a read-only memory, a random-access memory, a flash memory, a hard disk, a solid state disk, or the like, and further, the storage medium may further include a combination of the above types of memories. It will be appreciated that a computer, processor, microprocessor controller or programmable hardware includes a memory component that can store or receive software or computer code that, when accessed and executed by the computer, processor or hardware, implements the depth image compression method illustrated by the above embodiments.
Embodiments of the present application may also provide a computer program product comprising computer program instructions which, when executed by a processor, cause the processor to perform the steps of the above-described method. The program code of the computer program product may be written in any combination of one or more programming languages, including object-oriented programming languages such as Java or C++ and conventional procedural programming languages such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computing device, partly on the user's device as a stand-alone software package, partly on the user's computing device and partly on a remote computing device, or entirely on the remote computing device or server.
The foregoing embodiments are merely intended to illustrate the technical solutions of the embodiments of the present invention, not to limit them. Although the embodiments of the present invention have been described in detail with reference to the foregoing embodiments, it will be understood by those skilled in the art that the technical solutions described in the foregoing embodiments may still be modified, or some of the technical features thereof may be replaced by equivalents; such modifications or substitutions do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of the present invention.