CN119497991A - Point cloud encoding and decoding method, device, equipment and storage medium - Google Patents
- Publication number
- CN119497991A (application number CN202280098080.1A)
- Authority
- CN
- China
- Prior art keywords
- current
- motion vector
- point cloud
- parameter
- information
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/134—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or criterion affecting or controlling the adaptive coding
- H04N19/157—Assigned coding mode, i.e. the coding mode being predefined or preselected to be further used for selection of another element or parameter
- H04N19/159—Prediction type, e.g. intra-frame, inter-frame or bidirectional frame prediction
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Compression Or Coding Systems Of Tv Signals (AREA)
- Compression, Expansion, Code Conversion, And Decoders (AREA)
Abstract
The present application provides a point cloud decoding and encoding method and apparatus. The method includes: decoding a point cloud code stream and determining at least one of classification information and motion vector information of a current decoding unit, where the classification information is determined based on a first parameter, the motion vector information is determined based on a second parameter, the first parameter indicates a calculation period of the classification information, and the second parameter indicates a calculation period of the motion vector information; and decoding the current decoding unit according to the at least one of the classification information and the motion vector information. Because the classification information and the motion vector information are calculated periodically rather than once for every decoding unit, the number of calculations is greatly reduced, the encoding and decoding processing time is shortened, and the codec efficiency is improved.
Description
The present application relates to the field of point cloud technologies, and in particular, to a method, an apparatus, a device, and a storage medium for encoding and decoding a point cloud.
Point cloud data is formed by capturing the surface of an object with acquisition equipment; a point cloud may comprise hundreds of thousands of points or more. In the production process, point cloud data is transmitted between a point cloud encoding device and a point cloud decoding device in the form of point cloud media files. Such a huge number of points, however, presents a challenge for transmission, so the point cloud encoding device needs to compress the point cloud data before transmitting it.
In point cloud coding using inter prediction, it is currently necessary to calculate classification information and motion vector information once for each coding unit. This increases the processing time of the codec and reduces the codec efficiency.
Disclosure of Invention
The embodiment of the application provides a point cloud encoding and decoding method, a device, equipment and a storage medium, which are used for reducing encoding and decoding processing time and improving encoding and decoding efficiency.
In a first aspect, an embodiment of the present application provides a point cloud decoding method, including:
Decoding a point cloud code stream, and determining at least one of classification information and motion vector information of a current decoding unit, wherein the classification information is determined based on a first parameter, the motion vector information is determined based on a second parameter, the first parameter is used for indicating a calculation period of the classification information, and the second parameter is used for indicating a calculation period of the motion vector information;
and decoding the current decoding unit according to at least one of the classification information and the motion vector information of the current decoding unit.
In a second aspect, the present application provides a point cloud encoding method, including:
Determining at least one of a first parameter for indicating a calculation period of the classification information and a second parameter for indicating a calculation period of the motion vector information;
determining at least one of classification information and motion vector information of a current coding unit according to at least one of the first parameter and the second parameter;
And encoding the current coding unit according to at least one of the classification information and the motion vector information of the current coding unit.
In a third aspect, the present application provides a point cloud decoding apparatus for performing the method in the first aspect or each implementation manner thereof. In particular, the apparatus comprises a functional unit for performing the method of the first aspect described above or in various implementations thereof.
In a fourth aspect, the present application provides a point cloud encoding apparatus for performing the method in the second aspect or each implementation manner thereof. In particular, the apparatus comprises functional units for performing the method of the second aspect described above or in various implementations thereof.
In a fifth aspect, a point cloud decoder is provided that includes a processor and a memory. The memory is for storing a computer program and the processor is for calling and running the computer program stored in the memory for performing the method of the first aspect or implementations thereof.
In a sixth aspect, a point cloud encoder is provided that includes a processor and a memory. The memory is for storing a computer program and the processor is for invoking and running the computer program stored in the memory to perform the method of the second aspect or implementations thereof described above.
In a seventh aspect, a point cloud codec system is provided, including a point cloud encoder and a point cloud decoder. The point cloud decoder is configured to perform the method of the first aspect or its respective implementation forms, and the point cloud encoder is configured to perform the method of the second aspect or its respective implementation forms.
In an eighth aspect, a chip is provided for implementing the method of any one of the first to second aspects or implementations thereof. In particular, the chip includes a processor for calling and running a computer program from a memory, so that a device on which the chip is installed performs the method of any one of the above-mentioned first to second aspects or implementations thereof.
In a ninth aspect, a computer-readable storage medium is provided for storing a computer program for causing a computer to perform the method of any one of the above first to second aspects or implementations thereof.
In a tenth aspect, there is provided a computer program product comprising computer program instructions for causing a computer to perform the method of any one of the first to second aspects or implementations thereof.
In an eleventh aspect, there is provided a computer program which, when run on a computer, causes the computer to perform the method of any one of the above-described first to second aspects or implementations thereof.
In a twelfth aspect, there is provided a code stream generated by the method of the second aspect; optionally, the code stream includes at least one of the first parameter and the second parameter.
Based on the above technical solutions, at least one of the classification information and the motion vector information of a current decoding unit is determined by decoding a point cloud code stream, where the classification information is determined based on a first parameter, the motion vector information is determined based on a second parameter, the first parameter indicates a calculation period of the classification information, and the second parameter indicates a calculation period of the motion vector information. The current decoding unit is then decoded according to the at least one of the classification information and the motion vector information. In other words, in the embodiments of the present application the classification information and the motion vector information are calculated periodically; compared with calculating them once for each decoding unit, this greatly reduces the number of calculations, reduces the encoding and decoding processing time, and improves the codec efficiency.
FIG. 1 is a schematic block diagram of a point cloud codec system according to an embodiment of the present application;
FIG. 2 is a schematic block diagram of a point cloud encoder provided by an embodiment of the present application;
FIG. 3 is a schematic block diagram of a point cloud decoder provided by an embodiment of the present application;
FIG. 4 is a schematic flow chart of a point cloud decoding method according to an embodiment of the present application;
FIG. 5 is a point cloud histogram according to an embodiment of the present application;
FIG. 6 is a schematic flow chart of a point cloud encoding method according to an embodiment of the present application;
FIG. 7 is a schematic block diagram of a point cloud decoding apparatus provided by an embodiment of the present application;
FIG. 8 is a schematic block diagram of a point cloud encoding apparatus provided by an embodiment of the present application;
FIG. 9 is a schematic block diagram of an electronic device provided by an embodiment of the present application;
FIG. 10 is a schematic block diagram of a point cloud codec system provided by an embodiment of the present application.
The present application can be applied to the technical field of point clouds, for example, the technical field of point cloud compression.
In order to facilitate understanding of the embodiments of the present application, the following brief description will be first given of related concepts related to the embodiments of the present application:
Point Cloud (Point Cloud) refers to a set of irregularly distributed discrete points in space that represent the spatial structure and surface properties of a three-dimensional object or scene.
Point Cloud Data is the concrete recorded form of a point cloud. A point in the point cloud may include position information and attribute information. For example, the position information may be the three-dimensional coordinate information of the point; the position information of a point is also referred to as its geometric information. The attribute information may include color information, reflectance information, normal vector information, and the like. The color information may be expressed in any color space; for example, it may be RGB, or luminance-chrominance (YCbCr, YUV) information, where Y denotes luminance (Luma), Cb (U) denotes the blue color difference, Cr (V) denotes the red color difference, and U and V together represent chrominance (Chroma), describing color-difference information. For example, in a point cloud obtained according to the laser measurement principle, a point may include its three-dimensional coordinate information and its laser reflection intensity (reflectance). In a point cloud obtained according to the photogrammetry principle, a point may include its three-dimensional coordinate information and its color information. In a point cloud obtained by combining laser measurement and photogrammetry, a point may include its three-dimensional coordinate information, its laser reflection intensity (reflectance), and its color information.
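As an illustrative aid (not part of the patent text), the following Python sketch models the data layout just described: a point carrying geometric information plus optional attribute fields. All class and field names are assumptions made for illustration.

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass
class PointRecord:
    # Geometric (position) information: three-dimensional coordinates.
    x: float
    y: float
    z: float
    # Attribute information: a color triple (RGB or YCbCr),
    # laser reflectance, and a normal vector.
    color: Tuple[int, int, int] = (0, 0, 0)
    reflectance: float = 0.0
    normal: Tuple[float, float, float] = (0.0, 0.0, 1.0)

# A point cloud is a set of such records.
cloud = [PointRecord(1.0, 2.0, 3.0, color=(128, 64, 32), reflectance=0.7)]
print(cloud[0])
```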
The acquisition paths of point cloud data may include, but are not limited to, at least one of the following. (1) Computer generation: a computer device can generate point cloud data of virtual three-dimensional objects and virtual three-dimensional scenes. (2) 3D (3-Dimension) laser scanning: point cloud data of static real-world three-dimensional objects or scenes can be acquired by 3D laser scanning, at a rate of up to millions of points per second. (3) 3D photogrammetry: a real-world visual scene is captured by 3D photographing equipment (i.e., a group of cameras, or a camera device with multiple lenses and sensors) to obtain point cloud data of the scene; point cloud data of dynamic real-world three-dimensional objects or scenes can be obtained in this way. (4) Medical equipment: in the medical field, point cloud data of biological tissues and organs can be acquired by medical equipment such as magnetic resonance imaging (Magnetic Resonance Imaging, MRI), computed tomography (Computed Tomography, CT), and electromagnetic localization information.
According to the acquisition approach, point clouds can be divided into dense point clouds and sparse point clouds.
The point cloud is divided into the following types according to the time sequence of the data:
The first type, static point cloud: the object is stationary, and the device acquiring the point cloud is also stationary;
a second type, dynamic point cloud: the object is moving, but the device acquiring the point cloud is stationary;
a third type, dynamically acquired point cloud: the device acquiring the point cloud is in motion.
The applications of point clouds fall into two main types:
first, machine-perception point clouds, which can be used in scenarios such as autonomous navigation systems, real-time inspection systems, geographic information systems, visual sorting robots, and emergency rescue robots;
second, human-perception point clouds, which can be used in application scenarios such as digital cultural heritage, free-viewpoint broadcasting, three-dimensional immersive communication, and three-dimensional immersive interaction.
With the development of three-dimensional reconstruction and three-dimensional imaging technologies, point clouds are widely used in virtual reality, immersive telepresence, 3D printing, and other fields. However, a three-dimensional point cloud usually has a huge number of points that are irregularly distributed in space, and each point often carries rich attribute information, so a single point cloud has an enormous data volume, which poses great challenges to its storage and transmission. Therefore, point cloud compression coding is one of the key technologies for point cloud processing and application.
The following describes the relevant knowledge of the point cloud codec.
Fig. 1 is a schematic block diagram of a point cloud codec system according to an embodiment of the present application. It should be noted that fig. 1 is only an example, and the point cloud codec system according to the embodiment of the present application includes but is not limited to the one shown in fig. 1. As shown in fig. 1, the point cloud codec system 100 includes an encoding device 110 and a decoding device 120. Wherein the encoding device is configured to encode (which may be understood as compressing) the point cloud data to generate a code stream, and to transmit the code stream to the decoding device. The decoding device decodes the code stream generated by the encoding device to obtain decoded point cloud data.
The encoding device 110 of the embodiments of the present application may be understood as a device having a point cloud encoding function, and the decoding device 120 as a device having a point cloud decoding function. That is, the encoding device 110 and the decoding device 120 cover a wide range of apparatuses, such as smartphones, desktop computers, mobile computing devices, notebook (e.g., laptop) computers, tablet computers, set-top boxes, televisions, cameras, display apparatuses, digital media players, game consoles, vehicle-mounted computers, and the like.
In some embodiments, the encoding device 110 may transmit the encoded point cloud data (e.g., a code stream) to the decoding device 120 via the channel 130. The channel 130 may include one or more media and/or devices capable of transmitting encoded point cloud data from the encoding device 110 to the decoding device 120.
In one example, channel 130 includes one or more communication media that enable the encoding device 110 to transmit the encoded point cloud data directly to the decoding device 120 in real time. In this example, the encoding device 110 may modulate the encoded point cloud data according to a communication standard and transmit the modulated data to the decoding device 120. The communication media include wireless communication media, such as the radio-frequency spectrum, and may optionally also include wired communication media, such as one or more physical transmission lines.
In another example, channel 130 includes a storage medium that may store point cloud data encoded by encoding device 110. Storage media include a variety of locally accessed data storage media such as compact discs, DVDs, flash memory, and the like. In this example, the decoding device 120 may obtain encoded point cloud data from the storage medium.
In another example, the channel 130 may comprise a storage server that stores the point cloud data encoded by the encoding device 110. In this example, the decoding device 120 may download the stored encoded point cloud data from the storage server. Optionally, the storage server may both store the encoded point cloud data and transmit it to the decoding device 120; it may be, for example, a web server (e.g., for a website) or a File Transfer Protocol (FTP) server.
In some embodiments, the encoding apparatus 110 includes a point cloud encoder 112 and an output interface 113. Wherein the output interface 113 may comprise a modulator/demodulator (modem) and/or a transmitter.
In some embodiments, the encoding device 110 may include a point cloud source 111 in addition to the point cloud encoder 112 and the output interface 113.
The point cloud source 111 may include at least one of a point cloud acquisition device (e.g., scanner), a point cloud archive, a point cloud input interface for receiving point cloud data from a point cloud content provider, a computer graphics system for generating point cloud data.
The point cloud encoder 112 encodes point cloud data from the point cloud source 111 to generate a code stream. The point cloud encoder 112 directly transmits the encoded point cloud data to the decoding device 120 via the output interface 113. The encoded point cloud data may also be stored on a storage medium or storage server for subsequent reading by the decoding device 120.
In some embodiments, decoding device 120 includes an input interface 121 and a point cloud decoder 122.
In some embodiments, the decoding apparatus 120 may further include a display device 123 in addition to the input interface 121 and the point cloud decoder 122.
Wherein the input interface 121 comprises a receiver and/or a modem. The input interface 121 may receive the encoded point cloud data through the channel 130.
The point cloud decoder 122 is configured to decode the encoded point cloud data to obtain decoded point cloud data, and transmit the decoded point cloud data to the display device 123.
The display device 123 displays the decoded point cloud data. The display device 123 may be integral with the decoding apparatus 120 or external to the decoding apparatus 120. The display device 123 may include a variety of display devices, such as a Liquid Crystal Display (LCD), a plasma display, an Organic Light Emitting Diode (OLED) display, or other types of display devices.
In addition, fig. 1 is only an example, and the technical solution of the embodiment of the present application is not limited to fig. 1, for example, the technology of the present application may also be applied to single-sided point cloud encoding or single-sided point cloud decoding.
Current point cloud encoders may adopt the two point cloud compression coding technical routes proposed by the international standards organization Moving Picture Experts Group (MPEG): video-based point cloud compression (Video-based Point Cloud Compression, VPCC) and geometry-based point cloud compression (Geometry-based Point Cloud Compression, GPCC). VPCC projects the three-dimensional point cloud onto two dimensions and encodes the projected two-dimensional images with existing two-dimensional coding tools; GPCC uses a hierarchical structure to progressively divide the point cloud into multiple units and encodes the whole point cloud by encoding a record of the division process.
A point cloud encoder and a point cloud decoder to which the embodiments of the present application are applicable are described below by taking GPCC codec frames as an example.
Fig. 2 is a schematic block diagram of a point cloud encoder provided by an embodiment of the present application.
As described above, points in a point cloud may include position information and attribute information; therefore, the coding of points in a point cloud mainly includes position coding and attribute coding. In some examples, the position information of points is also referred to as geometric information, and accordingly the position coding of points may also be called geometric coding.
In GPCC coding framework, the geometric information and corresponding attribute information of the point cloud are separately coded.
The process of position coding first constructs the smallest cube surrounding all points of the point cloud, called the minimum bounding box. The minimum bounding box is divided by octree: it is split into 8 sub-cubes, and each non-empty sub-cube (one containing points of the point cloud) is split again, recursively, until the leaf nodes obtained by division are 1×1×1 unit cubes. In this process, the occupancy of the 8 sub-cubes generated by each division is encoded with an 8-bit binary number, producing a binary geometric bit stream, i.e., the geometric code stream. Before geometric coding, the points in the point cloud are preprocessed, e.g., by coordinate transformation, quantization, and duplicate point removal; geometric coding is then performed on the preprocessed point cloud, e.g., by constructing an octree and coding on the basis of the constructed octree to form the geometric code stream. At the same time, the position information of each point in the point cloud data is reconstructed based on the position information output by the constructed octree, yielding a reconstructed value of the position information of each point.
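To make the occupancy coding concrete, here is a minimal Python sketch of the recursive octree split described above, emitting one 8-bit occupancy value per division; entropy coding of the occupancy bytes is omitted, and all names are illustrative.

```python
def encode_octree(points, origin, size):
    """Recursively split a cube and emit one occupancy byte per split.

    points: list of (x, y, z) integer coordinates inside the cube
    origin: (x, y, z) of the cube's minimum corner
    size:   current cube side length (a power of two)
    Returns the occupancy bytes (the 'binary geometric bit stream'
    before entropy coding).
    """
    if size == 1 or not points:
        return []
    half = size // 2
    buckets = [[] for _ in range(8)]
    for p in points:
        idx = (((p[0] - origin[0]) >= half) << 2 |
               ((p[1] - origin[1]) >= half) << 1 |
               ((p[2] - origin[2]) >= half))
        buckets[idx].append(p)
    occupancy = 0
    for i, b in enumerate(buckets):
        if b:
            occupancy |= 1 << (7 - i)   # bucket 0 maps to the MSB
    stream = [occupancy]
    for i, b in enumerate(buckets):
        child_origin = (origin[0] + half * ((i >> 2) & 1),
                        origin[1] + half * ((i >> 1) & 1),
                        origin[2] + half * (i & 1))
        stream += encode_octree(b, child_origin, half)
    return stream

print(encode_octree([(0, 0, 0), (3, 3, 3)], (0, 0, 0), 4))  # e.g. [129, 128, 1]
```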
The attribute coding process includes: given the reconstructed position information of the input point cloud and the original values of the attribute information, selecting one of three prediction modes for point cloud prediction, quantizing the prediction results, and performing arithmetic coding to form the attribute code stream.
As shown in fig. 2, the position coding may be implemented by:
A coordinate conversion (Transform coordinates) unit 201, a voxelization (Voxelize) unit 202, an octree partitioning (Analyze octree) unit 203, a geometric reconstruction (Reconstruct geometry) unit 204, a first arithmetic coding (Arithmetic encode) unit 205, and a surface fitting (Analyze surface approximation) unit 206.
The coordinate conversion unit 201 may be used to convert the world coordinates of points in the point cloud into relative coordinates. For example, the minimum values along the x, y, and z coordinate axes are subtracted from the geometric coordinates of each point, which is equivalent to a DC-removal operation, transforming the coordinates of the points from world coordinates to relative coordinates.
The voxelization (Voxelize) unit 202, also referred to as the quantization and duplicate point removal (Quantize and remove points) unit, may reduce the number of coordinates by quantization. After quantization, originally different points may be assigned the same coordinates, based on which duplicate points may be removed by a deduplication operation; for example, multiple points with the same quantized position but different attribute information may be merged into one point through attribute conversion. In some embodiments of the present application, the voxelization unit 202 is an optional unit module.
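A minimal sketch of the quantization-and-deduplication idea, assuming a uniform quantization step and attribute averaging as a stand-in for the attribute-conversion step mentioned above; all names are illustrative.

```python
from collections import defaultdict

def voxelize(points, scale):
    """Quantize coordinates and merge duplicate points.

    points: list of ((x, y, z), attribute) pairs with float coordinates
    scale:  quantization step; larger steps merge more points
    Points that land on the same quantized position are merged, and
    their attributes are averaged (a simple illustrative choice).
    """
    merged = defaultdict(list)
    for (x, y, z), attr in points:
        q = (round(x / scale), round(y / scale), round(z / scale))
        merged[q].append(attr)
    return [(q, sum(attrs) / len(attrs)) for q, attrs in merged.items()]

# Two nearby points collapse into one voxel with an averaged attribute.
print(voxelize([((0.9, 0.1, 0.1), 10.0), ((1.1, 0.0, 0.2), 20.0)], scale=1.0))
```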
The octree partitioning unit 203 may encode the position information of the quantized points using an octree (octree) encoding scheme. For example, the point cloud is divided in the form of an octree, so that the positions of points correspond one-to-one to positions in the octree; geometric encoding is then performed by collecting the occupied positions in the octree and marking them with flag value 1.
In some embodiments, when encoding geometric information based on triangle soup (trisoup), the point cloud is likewise octree-partitioned by the octree partitioning unit 203. Unlike octree-based geometric coding, however, trisoup does not divide the point cloud step by step down to unit cubes with side length 1×1×1; instead, the division stops once blocks (sub-blocks) with side length W are reached. Based on the surface formed by the distribution of the point cloud within each block, at most twelve vertices (intersection points) between that surface and the twelve edges of the block are obtained; the intersection points are surface-fitted by the surface fitting unit 206, and geometric coding is performed on the fitted intersection points.
The geometric reconstruction unit 204 may perform position reconstruction based on the position information output by the octree dividing unit 203 or the intersection point fitted by the surface fitting unit 206, to obtain a reconstructed value of the position information of each point in the point cloud data.
The arithmetic coding unit 205 may perform entropy coding, e.g., arithmetic coding, on the position information output by the octree partitioning unit 203 or on the intersection points fitted by the surface fitting unit 206 to generate the geometric code stream; the geometric code stream may also be called a geometry bitstream.
Attribute encoding may be achieved by:
A color conversion (Transform colors) unit 210, a re-coloring (Transfer attributes) unit 211, a region-adaptive hierarchical transform (Region Adaptive Hierarchical Transform, RAHT) unit 212, a level-of-detail generation (Generate LOD) unit 213, a lifting transform unit 214, a coefficient quantization (Quantize coefficients) unit 215, and an arithmetic coding unit 216.
It should be noted that the point cloud encoder 200 may include more, fewer, or different functional components than those of fig. 2.
The color conversion unit 210 may be used to convert the color information of points in the point cloud from the RGB color space to YCbCr format or other formats.
The re-coloring unit 211 re-colors the color information using the reconstructed geometric information such that the uncoded attribute information corresponds to the reconstructed geometric information.
After the original values of the attribute information of the points have been processed by the re-coloring unit 211, either transformation unit may be selected to transform the points in the point cloud. The transformation units include the RAHT transform unit 212 and the lifting transform unit 214, where the lifting transform depends on the generated level of detail (LOD).
Any one of RAHT transformation and lifting transformation can be understood as predicting attribute information of a point in a point cloud to obtain a predicted value of the attribute information of the point, and further obtaining a residual value of the attribute information of the point based on the predicted value of the attribute information of the point. For example, the residual value of the attribute information of the point may be the original value of the attribute information of the point minus the predicted value of the attribute information of the point.
In one embodiment of the application, the LOD generation unit generates LODs by computing the Euclidean distances between points according to their position information and partitioning the points into different detail expression layers according to those distances. In one embodiment, the Euclidean distances can be sorted, and different distance ranges are assigned to different detail expression layers. For example, a point can be picked at random as the first detail expression layer; the Euclidean distances between the remaining points and this point are then computed, and points whose distance meets a first threshold are classified into the second detail expression layer. The centroid of the points in the second detail expression layer is obtained, the Euclidean distances between the centroid and the points outside the first and second detail expression layers are computed, and points whose distance meets a second threshold are classified into the third detail expression layer; and so on, until all points are assigned to detail expression layers. By adjusting the Euclidean distance thresholds, the number of points per LOD layer can be made to increase layer by layer. It should be understood that LODs may also be partitioned in other ways, and the application is not limited in this regard.
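The following Python sketch illustrates one simple variant of the threshold-based LOD partitioning described above (a point joins the first layer whose existing points are all at least a threshold distance away). The actual GPCC procedure is more elaborate (e.g., centroid-based refinement), so this is an assumption-laden illustration only.

```python
import math

def generate_lods(points, thresholds):
    """Partition points into detail layers by Euclidean distance.

    points:     list of (x, y, z) tuples
    thresholds: decreasing distance thresholds, one per layer; points
                that fit no layer fall into a final catch-all layer
    """
    layers = [[points[0]]] + [[] for _ in thresholds]  # seed first layer
    for p in points[1:]:
        for layer, t in zip(layers, thresholds):
            if all(math.dist(p, q) >= t for q in layer):
                layer.append(p)
                break
        else:
            layers[-1].append(p)   # too close to every layer's points
    return layers

pts = [(0, 0, 0), (10, 0, 0), (1, 0, 0), (5, 0, 0)]
print(generate_lods(pts, thresholds=[8.0, 3.0]))
```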
It should be noted that, the point cloud may be directly divided into one or more detail expression layers, or the point cloud may be divided into a plurality of point cloud slices (slices) first, and then each point cloud slice may be divided into one or more LOD layers.
For example, the point cloud may be divided into multiple point cloud slices, and the number of points in each slice may be between 550,000 and 1,100,000. Each point cloud slice can be viewed as a separate point cloud. Each slice may in turn be divided into multiple detail expression layers, each containing several points. In one embodiment, the detail expression layers are partitioned according to the Euclidean distances between points.
The quantization unit 215 may be used to quantize residual values of attribute information of points. For example, if quantization unit 215 and RAHT transform unit 212 are connected, quantization unit 215 may be used to quantize RAHT the residual value of the attribute information of the point output by transform unit 212.
The arithmetic coding unit 216 may entropy-encode the residual value of the point attribute information using zero-run coding (Zero run length coding) to obtain an attribute code stream. The attribute code stream may be bit stream information.
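A minimal sketch of zero-run-length coding of residuals, with the zero_cnt convention referenced later in the decoder description; the pair-based serialization is an illustrative choice, not the normative bitstream format.

```python
def zero_run_encode(residuals):
    """Encode a residual sequence as (zero_cnt, value) pairs.

    Each pair records how many zeros precede the next non-zero value;
    a trailing run of zeros is emitted with value None.
    """
    pairs, zeros = [], 0
    for r in residuals:
        if r == 0:
            zeros += 1
        else:
            pairs.append((zeros, r))
            zeros = 0
    if zeros:
        pairs.append((zeros, None))
    return pairs

def zero_run_decode(pairs):
    out = []
    for zeros, value in pairs:
        out.extend([0] * zeros)
        if value is not None:
            out.append(value)
    return out

res = [0, 0, 5, 0, -3, 0, 0, 0]
assert zero_run_decode(zero_run_encode(res)) == res
print(zero_run_encode(res))  # [(2, 5), (1, -3), (3, None)]
```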
Fig. 3 is a schematic block diagram of a point cloud decoder provided by an embodiment of the present application.
As shown in fig. 3, the decoder 300 may obtain the point cloud code stream from the encoding device and obtain the position information and attribute information of points in the point cloud by parsing the code stream. Decoding of a point cloud includes position decoding and attribute decoding.
The position decoding process includes: performing arithmetic decoding on the geometric code stream; merging after constructing an octree and reconstructing the position information of the points to obtain reconstruction information of the position information; and performing coordinate transformation on that reconstruction information to obtain the position information of the points. The position information of points may also be called the geometric information of points.
The attribute decoding process includes: parsing the attribute code stream to obtain quantized residual values of the attribute information of points in the point cloud; performing inverse quantization to obtain the dequantized residual values; selecting, based on the reconstruction information of the position information obtained in the position decoding process, one of the RAHT inverse transform and the lifting inverse transform for point cloud prediction to obtain predicted values; adding the predicted values and the residual values to obtain reconstructed values of the attribute information; and applying inverse color space conversion to the reconstructed attribute values to obtain the decoded point cloud.
As shown in fig. 3, the position decoding may be achieved by:
An arithmetic decoding unit 301, an octree synthesis (Synthesize octree) unit 302, a surface fitting (Synthesize surface approximation) unit 303, a geometric reconstruction (Reconstruct geometry) unit 304, and an inverse coordinate transformation (Inverse transform coordinates) unit 305.
Attribute decoding may be achieved by:
An arithmetic decoding unit 310, an inverse quantization (Inverse quantize) unit 311, a RAHT inverse transform unit 312, a level-of-detail generation (Generate LOD) unit 313, an inverse lifting transform (Inverse lifting) unit 314, and an inverse color conversion (Inverse transform colors) unit 315.
It should be noted that decompression is the inverse of compression; similarly, the functions of the units in the decoder 300 correspond to those of the respective units in the encoder 200. In addition, the point cloud decoder 300 may include more, fewer, or different functional components than those in fig. 3.
For example, the decoder 300 may divide the point cloud into multiple LODs according to the Euclidean distances between points in the point cloud and then decode the attribute information of the points in each LOD in turn; for example, it computes the number of zeros (zero_cnt) in the zero-run-length coding technique so as to decode the residual based on zero_cnt. The decoder 300 then performs inverse quantization on the decoded residual value and adds the dequantized residual value to the predicted value of the current point to obtain the reconstructed value of the point, until the whole point cloud has been decoded. The current point will serve as the nearest neighbor for subsequent points in the LOD, and the reconstructed value of the current point will be used to predict the attribute information of the subsequent points.
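Putting the pieces together, here is a simplified sketch of the per-point attribute reconstruction at the decoder (zero-run expansion, inverse quantization, then adding the prediction); uniform inverse quantization and the function names are assumptions for illustration.

```python
def reconstruct_attribute(zero_cnt_pairs, predictions, qstep):
    """Rebuild attribute values from zero-run residuals and predictions.

    zero_cnt_pairs: (zero_cnt, value) pairs of quantized residuals
    predictions:    per-point predicted attribute values (known from
                    previously reconstructed neighbors)
    qstep:          inverse-quantization step
    """
    residuals = []
    for zeros, value in zero_cnt_pairs:
        residuals.extend([0] * zeros)
        if value is not None:
            residuals.append(value)
    # Inverse quantization, then add the prediction for each point.
    return [p + r * qstep for p, r in zip(predictions, residuals)]

print(reconstruct_attribute([(1, 4), (0, -2)],
                            predictions=[10, 10, 10], qstep=2))  # [10, 18, 6]
```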
The foregoing is a basic flow of a point cloud codec under GPCC codec framework, and with the development of technology, some modules or steps of the framework or flow may be optimized, and the present application is applicable to the basic flow of a point cloud codec under GPCC codec framework, but is not limited to the framework and flow.
Because adjacent frames in a continuously acquired point cloud sequence are highly correlated, in some embodiments inter-frame prediction may be introduced to improve point cloud coding efficiency. Inter prediction mainly includes motion estimation, motion compensation, and the like. In the motion estimation step, the spatial motion offset vector between two adjacent frames is calculated and written into the code stream. In the motion compensation step, the calculated motion vector is used to compute the spatial offset of the point cloud, and the offset point cloud frame is used as a reference to further improve the coding efficiency of the current frame. Considering that a radar point cloud has a large spatial span and that different parts have different motion vectors, in some embodiments the radar point cloud is divided into two parts, road and non-road, and only the non-road part is used to estimate the global motion vector.
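As an illustration of the motion compensation step, the sketch below shifts a reference frame by a global motion vector before it is used as the inter-prediction reference. Only the translational part is shown; a full codec may also apply a rotation matrix, and the names here are illustrative.

```python
def motion_compensate(reference_points, motion_vector):
    """Shift a reference point cloud frame by a global motion vector.

    reference_points: list of (x, y, z) tuples from the previous frame
    motion_vector:    (dx, dy, dz) estimated between the two frames
    The shifted frame then serves as the inter-prediction reference
    for the current frame.
    """
    dx, dy, dz = motion_vector
    return [(x + dx, y + dy, z + dz) for (x, y, z) in reference_points]

print(motion_compensate([(0, 0, 0), (1, 1, 0)], (0.5, 0.0, 0.0)))
```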
As can be seen from the above, in the point cloud codec using inter-frame prediction, it is required to calculate the classification information and the motion vector information once for each coding unit, which increases the processing time of the codec and reduces the codec efficiency of the point cloud.
To solve the above technical problem, based on the similarity of the content of consecutive point cloud frames, the embodiments of the present application calculate the classification information and the motion vector information once every several coding units instead of once for each coding unit, thereby reducing the number of calculations of the classification information and the motion vector information, reducing the encoding and decoding processing time, and improving the codec efficiency.
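The core idea can be sketched as follows: the expensive computations run only at period boundaries given by the first and second parameters, and all intermediate coding units reuse the cached results. compute_classification, compute_motion, and the unit representation are illustrative stand-ins, not the patent's normative procedure.

```python
def compute_classification(unit):
    # Stand-in for road / non-road point classification.
    return f"cls({unit})"

def compute_motion(unit):
    # Stand-in for global motion vector estimation.
    return f"mv({unit})"

def encode_sequence(units, classification_period, motion_period):
    cls_info = mv_info = None
    encoded = []
    for i, unit in enumerate(units):
        if i % classification_period == 0:   # first-parameter boundary
            cls_info = compute_classification(unit)
        if i % motion_period == 0:           # second-parameter boundary
            mv_info = compute_motion(unit)
        encoded.append((unit, cls_info, mv_info))  # others reuse cached info
    return encoded

for row in encode_sequence(["f0", "f1", "f2", "f3"], 2, 4):
    print(row)
```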
The following describes a point cloud encoding and decoding method according to an embodiment of the present application with reference to a specific embodiment.
Firstly, taking a decoding end as an example, the point cloud decoding method provided by the embodiment of the application is introduced.
Fig. 4 is a schematic flow chart of a point cloud decoding method according to an embodiment of the present application. The point cloud decoding method of the embodiment of the application can be completed by the point cloud decoding device shown in the above fig. 1 or fig. 3.
As shown in fig. 4, the point cloud decoding method according to the embodiment of the present application includes:
s101, decoding the point cloud code stream, and determining at least one of classification information and motion vector information of a current decoding unit.
Wherein the classification information is determined based on a first parameter, the motion vector information is determined based on a second parameter, the first parameter is used for indicating a calculation period of the classification information, and the second parameter is used for indicating a calculation period of the motion vector information.
As can be seen from the above description, the adjacent frames in the continuously acquired point cloud sequence have higher correlation, so that inter-frame prediction can be introduced to improve the point cloud encoding and decoding efficiency.
Inter prediction mainly comprises the steps of motion estimation, motion compensation and the like. In some embodiments, in the motion estimation step, spatial motion offset vectors of two adjacent frames are calculated and written into the code stream. In the motion compensation step, the calculated motion vector is further used for calculating the spatial offset of the point cloud, and the offset point cloud frame is used as a reference to further improve the coding efficiency of the current frame.
The embodiment of the application does not limit the specific content of the motion vector information of the current decoding unit, and can be motion information related to the steps of motion estimation, motion compensation and the like.
For example, the motion vector information may be a spatial motion offset vector of two adjacent frames in motion estimation, i.e., a motion vector.
For another example, the motion vector information may also be the motion estimate (ME, Motion Estimation) between two adjacent frames used in motion compensation.
In a real scenario, different objects may move differently. Consider, for example, point cloud data captured by lidar sensors on a moving vehicle, where roads and objects typically have different motion. Since the distance between the road and the radar sensor is relatively constant and the road changes little between successive vehicle positions, the motion of the points representing the road relative to the sensor position is small. In contrast, objects such as buildings, road signs, vegetation, or other vehicles exhibit larger motion. Because road points and object points move differently, dividing the point cloud data into road and object points improves the accuracy of global motion estimation and compensation, and thus the compression efficiency. That is, for point cloud data using inter prediction, in order to improve the accuracy of inter prediction and the compression efficiency, the point cloud in a decoding unit needs to be classified, for example, into road points and non-road points.
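The patent does not mandate a particular classification algorithm; as one hedged illustration, a common lidar heuristic separates road from object points by a height histogram (cf. the point cloud histogram of FIG. 5), assuming the most populated height bin corresponds to the ground. All names and thresholds below are assumptions.

```python
from collections import Counter

def classify_road(points, bin_size=0.5, tolerance=1):
    """Split points into road and object sets by a z-height histogram.

    points: list of (x, y, z) tuples from a lidar sweep
    The most populated height bin is taken as the ground level; points
    within `tolerance` bins of it are labeled road.  This is an
    illustrative heuristic, not the patent's mandated method.
    """
    bins = Counter(int(p[2] // bin_size) for p in points)
    ground_bin, _ = bins.most_common(1)[0]
    road, objects = [], []
    for p in points:
        b = int(p[2] // bin_size)
        (road if abs(b - ground_bin) <= tolerance else objects).append(p)
    return road, objects

pts = [(0, 0, 0.1), (1, 2, 0.2), (3, 1, 0.0), (2, 2, 4.5)]
road, objects = classify_road(pts)
print(len(road), len(objects))  # 3 1
```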
In some embodiments, the classification of the point cloud in the decoding unit is indicated by classification information, wherein classification information may be understood as information required to divide the point cloud into several categories.
In the embodiment of the present application, the classification information of the current decoding unit may be understood as classification information of point clouds in the current decoding unit, that is, information required for classifying the point clouds in the current decoding unit into several categories.
In the point cloud decoding process, the point cloud data may be divided into at least one decoding unit, the decoding process of each decoding unit is independent, and the decoding process of each decoding unit is substantially identical. For convenience of description, the embodiment of the present application will be described by taking a decoding unit currently being decoded, i.e., a current decoding unit as an example.
The embodiment of the application does not limit the specific size of the current decoding unit, and can be determined according to actual needs.
In some embodiments, the current decoding unit is a current point cloud frame, i.e. one point cloud frame may be decoded as one decoding unit.
In some embodiments, the current decoding unit is a partial area of the current point cloud frame, for example, the current point cloud frame is divided into a plurality of areas, and one area is used as one decoding unit to perform independent decoding.
The embodiment of the application does not limit the specific mode of dividing the current point cloud frame into a plurality of areas.
In one example, the current point cloud frame is divided into multiple point cloud slices; the sizes of the slices may or may not be the same, and one point cloud slice is independently decoded as one decoding unit.
In another example, the current point cloud frame is divided into multiple point cloud blocks; the sizes of the blocks may or may not be the same, and one point cloud block is independently decoded as one decoding unit.
In the embodiment of the present application, in order to avoid calculating the classification information and the motion vector information once for every decoding unit, at least one of the first parameter and the second parameter is set, where the first parameter indicates the calculation period of the classification information and the second parameter indicates the calculation period of the motion vector information. In this way, the encoding end can periodically calculate the classification information according to the calculation period indicated by the first parameter, and/or periodically calculate the motion vector information according to the calculation period indicated by the second parameter, thereby reducing the number of calculations of the classification information and/or the motion vector information and improving the codec efficiency.
In some embodiments, the calculation cycle of the above classification information may be understood as calculating the classification information once every at least one decoding unit, or calculating the classification information once every at least one point cloud frame.
In some embodiments, the above-mentioned calculation cycle of the motion vector information may be understood as calculating the motion vector information once every at least one decoding unit, or calculating the motion vector information once every at least one point cloud frame.
In the embodiment of the present application, the specific implementation manner of determining at least one of the classification information and the motion vector information of the current decoding unit by decoding the point cloud code stream at the decoding end in S101 includes, but is not limited to, the following several ways:
In one mode, the decoding end decodes at least one of classification information and motion vector information of a current decoding unit from the point cloud code stream.
In this manner, the encoding end may determine the classification information of each decoding unit and/or determine the motion vector information of each decoding unit according to the first parameter and/or the second parameter. Then, the encoding end writes at least one of the classification information and the motion vector information of each decoding unit into the point cloud code stream. Thus, the decoding end can obtain the classification information of each decoding unit and/or the motion vector information of each decoding unit by directly decoding the code stream.
In one possible implementation of mode one, the encoding end may skip writing the first parameter and/or the second parameter into the point cloud code stream; that is, the encoding end does not write the first parameter and/or the second parameter into the point cloud code stream, but directly writes the classification information and/or the motion vector information of each decoding unit into it. The decoding end can then directly decode the classification information and/or motion vector information of each decoding unit from the code stream with an existing decoding method, so decoding complexity is not increased while the coding efficiency is improved.
In some embodiments, the decoding end may further determine at least one of classification information and motion vector information of the current decoding unit according to the following manner.
In a second mode, the decoding end determines at least one of the classification information and the motion vector information through the following steps S101-a and S101-B:
S101-A, decoding at least one of a first parameter and a second parameter from a point cloud code stream;
S101-B, determining classification information of the current decoding unit according to the first parameter, and/or determining motion vector information of the current decoding unit according to the second parameter.
It should be noted that, the first parameter and the second parameter may be used separately, in one example, the encoding end writes the first parameter into the point cloud code stream, but does not write the second parameter into the point cloud code stream, so that the decoding end may determine the classification information of the current decoding unit according to the first parameter, and obtain the motion vector information of the current decoding unit by decoding the point cloud code stream. In another example, the encoding end writes the second parameter into the point cloud code stream, but does not write the first parameter into the point cloud code stream, so that the decoding end can determine the motion vector information of the current decoding unit according to the second parameter, and obtain the classification information of the current decoding unit by decoding the point cloud code stream. In yet another example, the encoding end writes both the first parameter and the second parameter into the point cloud code stream, so that the decoding end can determine the classification information of the current decoding unit according to the first parameter, and determine the motion vector information of the current decoding unit according to the second parameter.
In mode two, if the encoding end writes the first parameter into the point cloud code stream, it skips writing the classification information of each decoding unit into the code stream; correspondingly, the decoding end determines the classification information of a decoding unit according to the first parameter instead of decoding the classification information of each decoding unit one by one. And/or, if the encoding end writes the second parameter into the point cloud code stream, it skips writing the motion vector information of each decoding unit into the code stream; correspondingly, the decoding end determines the motion vector information of a decoding unit according to the second parameter instead of decoding it one by one. Thus, the encoding end writes the first parameter and/or the second parameter into the code stream while skipping the per-unit classification information and/or motion vector information, which reduces the decoding processing time and the code stream overhead of encoding the classification information and/or motion vector information of every decoding unit.
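A decoder-side sketch of mode two: classification and motion vector information are parsed only for units at a period boundary and cached for the units in between. The parse_* and decode_unit helpers are hypothetical stand-ins for real bitstream parsing.

```python
def decode_sequence(bitstream_units, classification_period, motion_period):
    cls_info = mv_info = None
    decoded = []
    for i, unit in enumerate(bitstream_units):
        if i % classification_period == 0:
            cls_info = parse_classification(unit)  # present only here
        if i % motion_period == 0:
            mv_info = parse_motion(unit)           # present only here
        decoded.append(decode_unit(unit, cls_info, mv_info))
    return decoded

def parse_classification(unit):
    return f"cls({unit})"

def parse_motion(unit):
    return f"mv({unit})"

def decode_unit(unit, cls_info, mv_info):
    return (unit, cls_info, mv_info)

print(decode_sequence(["u0", "u1", "u2"], classification_period=3, motion_period=1))
```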
Alternatively, at least one of the first parameter and the second parameter may be stored in the form of an unsigned integer, denoted u(v), meaning that v bits are used to describe the parameter.
Optionally, at least one of the first parameter and the second parameter may be stored in the form of an unsigned exponential-Golomb code, denoted ue(v): the value of the parameter is first converted by exponential-Golomb coding into a v-bit sequence of 0s and 1s, which is then written into the code stream.
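For reference, a minimal sketch of unsigned exponential-Golomb (ue(v)) encoding and decoding of a parameter value, matching the 0/1-bit-sequence description above; the string-based representation is an illustrative simplification.

```python
def ue_encode(value):
    """Unsigned exponential-Golomb code for ue(v) syntax elements.

    value -> (leading zeros)(1)(binary remainder), e.g. 0 -> '1',
    1 -> '010', 2 -> '011', 3 -> '00100'.
    """
    code = value + 1
    return "0" * (code.bit_length() - 1) + format(code, "b")

def ue_decode(bitstring, pos=0):
    """Return (value, next position) decoded from a 0/1 string."""
    zeros = 0
    while bitstring[pos] == "0":
        zeros += 1
        pos += 1
    code = int(bitstring[pos:pos + zeros + 1], 2)
    return code - 1, pos + zeros + 1

for v in range(5):
    bits = ue_encode(v)
    assert ue_decode(bits)[0] == v
    print(v, bits)
```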
In some embodiments, the encoding end writes at least one of the first parameter and the second parameter into the sequence header parameter set, and at this time, the decoding end obtains at least one of the first parameter and the second parameter by decoding the sequence header parameter set.
In one example, the first parameter is used to indicate that classification information is calculated once every several point cloud frames, and/or the second parameter is used to indicate that motion vector information is calculated once every several point cloud frames.
Illustratively, the first parameter and the second parameter are stored in the sequence header parameter set as shown in table 1:
TABLE 1
In table 1, classification_period represents a first parameter, motion_period represents a second parameter, classification_info represents classification information, and motion_info represents motion vector information.
In some embodiments, the encoding end writes at least one of the first parameter and the second parameter into the point cloud header information, and at this time, the decoding end obtains at least one of the first parameter and the second parameter by decoding the point cloud header information.
In one example, the first parameter is used to indicate that, for the i-th point cloud slice in the point cloud frames, classification information is calculated once every several point cloud frames, where i is a positive integer; and/or the second parameter is used to indicate that, for the i-th point cloud slice in the point cloud frames, motion vector information is calculated once every several point cloud frames.
In this example, the storage manner of the first parameter and the second parameter in the point cloud header information is as shown in table 2:
TABLE 2
In table 2, classification_frame_period represents a first parameter, motion_frame_period represents a second parameter, classification_info represents classification information, and motion_info represents motion vector information.
In one example, the first parameter is used to indicate that classification information is calculated once every several point cloud slices within a point cloud frame, and/or the second parameter is used to indicate that motion vector information is calculated once every several point cloud slices within a point cloud frame.
In this example, the storage manner of the first parameter and the second parameter in the point cloud header information is as shown in table 3:
TABLE 3
In table 3, classification_slice_period represents a first parameter, motion_slice_period represents a second parameter, classification_info represents classification information, and motion_info represents motion vector information.
In some embodiments, before decoding the first parameter and the second parameter, the decoding end first needs to decode the point cloud code stream to obtain a first identifier inter_prediction_flag, which is used to indicate whether to perform inter-frame prediction decoding; if the first identifier inter_prediction_flag indicates that inter-frame prediction encoding is performed, the decoding end decodes the point cloud code stream to obtain at least one of the first parameter and the second parameter.
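A minimal parsing sketch of the order just described, assuming a hypothetical bit-reader with u(n) and ue() methods (the reader interface and function names are assumptions, not the standard's API):

```python
def parse_inter_prediction_params(reader):
    """First decode inter_prediction_flag; only when it indicates
    inter-frame prediction are the two period parameters decoded."""
    params = {'inter_prediction_flag': reader.u(1)}
    if params['inter_prediction_flag']:
        params['classification_period'] = reader.ue()  # first parameter
        params['motion_period'] = reader.ue()          # second parameter
    return params
```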
The following describes the specific process of determining, by the decoding end, the classification information of the current decoding unit according to the first parameter in S101-B.
In the above S101-B, the specific implementation manner of determining the classification information of the current decoding unit by the decoding end according to the first parameter includes, but is not limited to, the following:
In mode 1, the step S101-B includes the steps of S101-B-11 and S101-B-12 as follows:
S101-B-11, determining a classification information calculation period corresponding to the current decoding unit according to the first parameter;
S101-B-12, determining the classification information of the current decoding unit according to the classification information calculation period.
In the embodiment of the application, the calculation periods of the classification information corresponding to different decoding units in the point cloud sequence can be the same or different, and the embodiment of the application is not limited to the above.
In some embodiments, if the calculation periods of the classification information corresponding to different decoding units in the point cloud sequence are the same, a first parameter may be written into the code stream, and the calculation period of the classification information of each decoding unit in the point cloud sequence is indicated by the first parameter. For example, the first parameter indicates that the classification information is calculated once every K decoding units.
In some embodiments, if the calculation periods of the classification information corresponding to different decoding units in the point cloud sequence are not identical, a plurality of first parameters may be written into the code stream, and the calculation periods of the classification information of the decoding units in the point cloud sequence are indicated by the plurality of first parameters. For example, 3 first parameters are written into the code stream, where the first of them indicates that classification information is calculated once every K1 decoding units, the second indicates that classification information is calculated once every K2 decoding units, and the third indicates that classification information is calculated once every K3 decoding units.
From the above, no matter in which form the first parameter indicates the calculation period of the classification information, the classification information calculation period corresponding to the current decoding unit can be determined according to the first parameter decoded from the code stream. For example, suppose the first parameter indicates that classification information is calculated once at intervals of 4 point cloud frames and the current decoding unit is the 6th point cloud frame in the decoding order; classification information is then calculated once at the 0th point cloud frame in the decoding order, once at the 5th point cloud frame, and once at the 10th point cloud frame. The 0th to 4th point cloud frames can be understood as the first calculation period of the classification information, and the 5th to 9th point cloud frames as the second calculation period. Since the current decoding unit is located in the second calculation period, the second calculation period is determined as the classification information calculation period corresponding to the current decoding unit.
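Under the interval semantics of this example (a new calculation at frames 0, 5, 10, ... when the interval is 4), the period bookkeeping can be sketched as follows; this is an illustration of the example above, not normative decoder logic:

```python
def period_index(frame_no: int, interval: int) -> int:
    """Index of the calculation period a frame belongs to when the
    information is recomputed every (interval + 1) frames."""
    return frame_no // (interval + 1)

def is_period_start(frame_no: int, interval: int) -> bool:
    """True if this frame is the first decoding unit of its period."""
    return frame_no % (interval + 1) == 0

assert period_index(6, 4) == 1       # 6th frame lies in the second period
assert is_period_start(5, 4)         # 5th frame starts the second period
```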
After determining the classification information calculation period corresponding to the current decoding unit according to the above steps, the decoding end determines the classification information of the current decoding unit according to that calculation period.
The embodiment of the application does not limit the specific mode of determining the classification information of the current decoding unit according to the calculation period of the classification information corresponding to the current decoding unit by the decoding end.
In some embodiments, the encoding end and the decoding end agree that the classification information of a decoding unit within the classification information calculation period is a default value of 1, and the decoding end then determines the default value of 1 as the classification information of the current decoding unit.
In some embodiments, the encoding end and the decoding end agree that the classification information of a decoding unit within the classification information calculation period is calculated by a preset calculation method. For example, if the current decoding unit is an area of the current point cloud frame, the classification information of the current decoding unit can be determined according to the classification information of the decoded point clouds around the current decoding unit in the current point cloud frame.
In some embodiments, the encoding side writes the classification information of the first decoding unit in one classification information calculation period to the code stream, while the classification information of the other decoding units in the classification information calculation period is not written to the code stream. Thus, the decoding end can determine the classification information of the current decoding unit according to the position of the current decoding unit in the classification information calculation period corresponding to the current decoding unit.
In example 1, if the current decoding unit is the first decoding unit in the classification information calculation period, the point cloud code stream is decoded to obtain the classification information of the current decoding unit.
In example 2, if the current decoding unit is the non-first decoding unit in the classification information calculation period, the classification information of the current decoding unit is determined according to the decoded information or the default value.
In this embodiment, the encoding end writes the first parameter and the classification information of the first decoding unit in each classification information calculation period into the point cloud code stream, and does not write the classification information of the other decoding units in the classification information calculation period into the point cloud code stream. Thus, after determining the calculation period of the classification information corresponding to the current decoding unit, the decoding end can determine the classification information of the current decoding unit according to whether the current decoding unit is the first decoding unit in the calculation period of the classification information.
With continued reference to the above example, assuming that the classification information calculation period corresponding to the current decoding unit is from the 5th point cloud frame to the 9th point cloud frame, if the current decoding unit is the 5th point cloud frame in the decoding order, the decoding end directly decodes the classification information of the current decoding unit from the code stream. If the current decoding unit is not the 5th point cloud frame, for example, the 6th point cloud frame, the decoding end determines the default value as the classification information of the current decoding unit, or determines the classification information of the current decoding unit according to the decoded information.
The embodiment of the present application does not limit the specific implementation manner of determining the classification information of the current decoding unit according to the decoded information in the above example 2.
In one possible implementation, the classification information of the current decoding unit is determined according to the classification information of the first decoding unit in the calculation period of the classification information corresponding to the current decoding unit. For example, the classification information of the first decoding unit in the classification information calculation period is determined as the classification information of the current decoding unit, or the classification information of the first decoding unit in the classification information calculation period is processed to obtain the classification information of the current decoding unit.
In one possible implementation, the classification information of the current decoding unit is determined according to the following step 11:
And 11, determining classification information of the current decoding unit according to the classification information of M decoding units, wherein the M decoding units are M decoded decoding units positioned before the current decoding unit in the decoding sequence, and M is a positive integer.
The embodiment of the application does not limit the specific selection modes of the M decoding units.
In some embodiments, the M decoding units are sequentially adjacent in decoding order without an interval therebetween.
In some embodiments, the M decoding units may be any M decoding units located before the current decoding unit in the decoding order, that is, the M decoding units may or may not be completely adjacent to each other, which is not limited in the embodiment of the present application.
Due to the correlation of content between adjacent point cloud frames, in this implementation the decoding end obtains, from the decoded information, M decoding units located before the current decoding unit in the decoding order, and determines the classification information of the current decoding unit according to the classification information of the M decoding units.
In step 11, according to the classification information of the M decoding units, the implementation manner of determining the classification information of the current decoding unit at least includes the following examples:
In the first example, if M is equal to 1, the classification information of one decoding unit located before the current decoding unit in the decoding order is determined as the classification information of the current decoding unit. For example, if the current decoding unit is the 6th point cloud frame in the decoding order, the classification information of the 5th point cloud frame in the decoding order is determined as the classification information of the current decoding unit.
In a second example, if M is greater than 1, the classification information of the M decoding units is subjected to preset processing, and the processing result is determined as the classification information of the current decoding unit.
For example, an average value of the classification information of the M decoding units is determined as the classification information of the current decoding unit.
For another example, a weighted average of the classification information of the M decoding units is determined as the classification information of the current decoding unit. Optionally, the closer to the current decoding unit in the decoding order, the greater the weight, and the farther from the current decoding unit, the lesser the weight, among the M decoding units.
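A sketch of the weighted-average option, assuming scalar classification values and simple linear weights that grow toward the most recent of the M decoded units (the weighting scheme is an assumption; the embodiment does not prescribe one):

```python
def weighted_classification(prev_values):
    """Weighted average of the classification values of the M decoded
    units, ordered oldest to newest; later units get larger weights."""
    weights = range(1, len(prev_values) + 1)   # oldest -> 1, newest -> M
    return (sum(w * v for w, v in zip(weights, prev_values))
            / sum(weights))

# Example: three previous units, the most recent counts the most.
print(weighted_classification([0.8, 1.0, 1.2]))   # -> 1.0666...
```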
The decoding end may determine the classification information of the current decoding unit according to the following manner 2 in addition to determining the classification information of the current decoding unit in the foregoing manner 1.
In mode 2, if the first parameter indicates that the classification information is calculated once for every K decoding units, the implementation of S101-B at least includes the following two examples:
In example 1, if the current decoding unit is the NK-th decoding unit in the decoding order, the point cloud code stream is decoded to obtain the classification information of the current decoding unit, where both K and N are positive integers.
In example 2, if the current decoding unit is not the NK-th decoding unit in the decoding order, the classification information of the current decoding unit is determined according to the decoded information or a default value.
In this mode 2, the classification information calculation period of each decoding unit in the point cloud sequence is the same; for example, the classification information is calculated once every K decoding units. In this way, the encoding end writes into the code stream the classification information of the decoding units whose decoding-order numbers are 0 or an integer multiple of K (i.e., the NK-th decoding units), and does not write the classification information of the decoding units whose numbers are not an integer multiple of K (i.e., the non-NK-th decoding units), thereby reducing the code stream load. Correspondingly, when decoding the current decoding unit, the decoding end judges whether the current decoding unit is the NK-th decoding unit in the decoding order, that is, whether the number of the current decoding unit in the decoding order is an integer multiple of K. If the decoding end determines that the current decoding unit is the NK-th decoding unit in the decoding order, the classification information of the current decoding unit is decoded from the code stream. If the current decoding unit is not the NK-th decoding unit in the decoding order, the default value is determined as the classification information of the current decoding unit, or the classification information of the current decoding unit is determined according to the decoded information.
In this mode 2, if the first parameter indicates that the classification information is calculated once every K decoding units, the decoding end decodes the classification information from the code stream once every K decoding units, so that the decoding frequency of the decoding end can be reduced. For example, if the point cloud sequence includes 1000 point cloud frames and one point cloud frame is used as a decoding unit, the decoding frequency of the decoding end is 1000/K instead of 1000; the decoding frequency is greatly reduced, the decoding load of the decoding end is reduced, and the decoding efficiency is improved.
In this mode 2, if the current decoding unit is not the NK-th decoding unit in the decoding order, the specific process of determining the classification information of the current decoding unit according to the decoded information may refer to the description of step 11 above, and will not be repeated here.
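A sketch of the NK-th decision in this mode 2, with an assumed decode call and the agreed default value of 1 from the embodiments above:

```python
DEFAULT_CLASSIFICATION = 1    # default value agreed by both ends (assumed)

def classification_for_unit(seq_no, k, reader, history):
    """Decode classification information only at the NK-th units
    (decoding-order number a multiple of K); otherwise reuse decoded
    information or fall back to the default value."""
    if seq_no % k == 0:                         # the NK-th decoding unit
        info = reader.decode_classification()   # assumed decoder call
        history.append(info)
        return info
    return history[-1] if history else DEFAULT_CLASSIFICATION
```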
In the embodiment of the application, according to the mode, the classification information of the current decoding unit can be determined.
The classification information may be understood as information required to divide the point cloud into different categories. The embodiment of the application does not limit the concrete expression form of the classification information.
In some embodiments, the classification information includes at least one of a first height threshold and a second height threshold, the first and second height thresholds being used for classification of point clouds in the current decoding unit.
Optionally, at least one of the first height threshold and the second height threshold is a preset value.
Optionally, at least one of the first height threshold and the second height threshold is a statistic. For example, as shown in fig. 5, the height values of the points in the point cloud are counted using a histogram, where the horizontal axis is the height value and the vertical axis is the number of points at that height value. Fig. 5 takes a radar point cloud as an example, where the height of the radar is the zero point, so the heights of most points are negative. Then, the height value corresponding to the peak of the histogram is obtained and the standard deviation of the height values is calculated; taking the height corresponding to the peak as the center, a threshold a times the standard deviation (for example, 1.5 times) above the center is denoted as the first height threshold Top_thr, and a threshold b times the standard deviation (for example, 1.5 times) below the center is denoted as the second height threshold Bottom_thr.
The first and second height thresholds divide the point cloud into different categories. For example, points whose height value lies between the first height threshold and the second height threshold are denoted as the first class of point cloud, and points whose height value is greater than the first height threshold or less than the second height threshold are denoted as the second class of point cloud.
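The histogram-based statistic can be sketched as follows; treating the standard deviation as that of all height values is an assumption, since the text does not pin down the exact statistic:

```python
import numpy as np

def height_thresholds(heights, a=1.5, b=1.5, bins=256):
    """Find the histogram peak of the point heights, then place Top_thr
    a standard deviations above it and Bottom_thr b standard deviations
    below it, as in the description above."""
    counts, edges = np.histogram(heights, bins=bins)
    i = int(np.argmax(counts))
    peak = 0.5 * (edges[i] + edges[i + 1])   # center of the peak bin
    sigma = float(np.std(heights))
    return peak + a * sigma, peak - b * sigma   # (Top_thr, Bottom_thr)
```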
In some embodiments, if the classification information includes at least one of the first height threshold and the second height threshold, the corresponding first parameter classification_period may include at least one of a first sub-parameter top_threshold_period and a second sub-parameter bottom_threshold_period;
The first sub-parameter top_threshold_period is used for indicating a calculation period of the first height threshold, and the second sub-parameter bottom_threshold_period is used for indicating a calculation period of the second height threshold.
The first sub-parameter and the second sub-parameter may be independently assigned.
Alternatively, the calculation period of the first height threshold may be the same as or different from the calculation period of the second height threshold, which is not limited in the embodiment of the present application.
The foregoing describes the specific process of determining the classification information of the current decoding unit according to the first parameter in S101-B; the following describes the specific implementation process of determining the motion vector information of the current decoding unit according to the second parameter in S101-B.
In S101-B, the specific implementation manner of determining the motion vector information of the current decoding unit by the decoding end according to the second parameter includes, but is not limited to, the following:
In mode 1, the step S101-B includes the steps of S101-B-21 and S101-B-22 as follows:
S101-B-21, determining a motion vector information calculation period corresponding to the current decoding unit according to the second parameter;
S101-B-22, determining the motion vector information of the current decoding unit according to the motion vector information calculation period.
In the embodiment of the present application, the motion vector information calculation periods corresponding to different decoding units in the point cloud sequence may be the same or different, which is not limited in the embodiment of the present application.
In some embodiments, if the calculation periods of the motion vector information corresponding to different decoding units in the point cloud sequence are the same, a second parameter may be written into the code stream, and the calculation period of the motion vector information of each decoding unit in the point cloud sequence is indicated by the second parameter. For example, the second parameter indicates that motion vector information is calculated once every R decoding units.
In some embodiments, if the calculation periods of the motion vector information corresponding to different decoding units in the point cloud sequence are not identical, a plurality of second parameters may be written into the code stream, and the calculation periods of the motion vector information of the decoding units in the point cloud sequence are indicated by the plurality of second parameters. For example, 3 second parameters are written into the code stream, where the first of them indicates that motion vector information is calculated once every R1 decoding units, the second indicates that motion vector information is calculated once every R2 decoding units, and the third indicates that motion vector information is calculated once every R3 decoding units.
From the above, no matter in which form the second parameter indicates the calculation period of the motion vector information, the motion vector information calculation period corresponding to the current decoding unit can be determined according to the second parameter decoded from the code stream. For example, suppose the second parameter indicates that motion vector information is calculated once at intervals of 4 point cloud frames and the current decoding unit is the 6th point cloud frame in the decoding order; motion vector information is then calculated once at the 0th point cloud frame in the decoding order, once at the 5th point cloud frame, and once at the 10th point cloud frame. The 0th to 4th point cloud frames can be understood as the first calculation period of the motion vector information, and the 5th to 9th point cloud frames as the second calculation period. Since the current decoding unit is located in the second calculation period, the second calculation period is determined as the motion vector information calculation period corresponding to the current decoding unit.
After determining the motion vector information calculation period corresponding to the current decoding unit according to the above steps, the decoding end determines the motion vector information of the current decoding unit according to that calculation period.
The embodiment of the application does not limit the specific mode of determining the motion vector information of the current decoding unit according to the motion vector information calculation period corresponding to the current decoding unit by the decoding end.
In some embodiments, the two ends of the codec agree that the motion vector information of the decoding unit in the motion vector information calculation period is a default value of 1, and then the decoding end determines the default value of 1 as the motion vector information of the current decoding unit.
In some embodiments, the two ends of the codec agree to calculate the motion vector information of the decoding unit in the motion vector information calculation period by using a preset calculation method. For example, the current decoding unit is an area of the current point cloud frame, and the motion vector information of the current decoding unit can be determined according to the motion vector information of the decoded area around the current decoding unit in the current point cloud frame.
In some embodiments, the encoding side writes the motion vector information of the first decoding unit in one motion vector information calculation period to the code stream, while the motion vector information of the other decoding units in the motion vector information calculation period is not written to the code stream. Thus, the decoding end can determine the motion vector information of the current decoding unit according to the position of the current decoding unit in the motion vector information calculation period corresponding to the current decoding unit.
In example 1, if the current decoding unit is the first decoding unit in the motion vector information calculation period, the point cloud code stream is decoded to obtain the motion vector information of the current decoding unit.
In example 2, if the current decoding unit is the non-first decoding unit in the motion vector information calculation period, the motion vector information of the current decoding unit is determined according to the decoded information or the default value.
In this embodiment, the encoding end writes the second parameter, and the motion vector information of the first decoding unit in each motion vector information calculation period, into the point cloud code stream, while not writing the motion vector information of the other decoding units in the motion vector information calculation period into the point cloud code stream. Thus, after determining the motion vector information calculation period corresponding to the current decoding unit, the decoding end can determine the motion vector information of the current decoding unit according to whether the current decoding unit is the first decoding unit in the motion vector information calculation period.
With continued reference to the above example, assuming that the motion vector information calculation period corresponding to the current decoding unit is from the 5th point cloud frame to the 9th point cloud frame, if the current decoding unit is the 5th point cloud frame in the decoding order, the decoding end directly decodes the motion vector information of the current decoding unit from the code stream. If the current decoding unit is not the 5th point cloud frame, for example, the 6th point cloud frame, the decoding end determines the default value as the motion vector information of the current decoding unit, or determines the motion vector information of the current decoding unit according to the decoded information.
The embodiment of the present application is not limited to the specific implementation manner of determining the motion vector information of the current decoding unit according to the decoded information in the above example 2.
In one possible implementation, the motion vector information of the current decoding unit is determined according to the motion vector information of the first decoding unit in the motion vector information calculation period corresponding to the current decoding unit. For example, the motion vector information of the first decoding unit in the motion vector information calculation period is determined as the motion vector information of the current decoding unit, or the motion vector information of the first decoding unit in the motion vector information calculation period is processed to obtain the motion vector information of the current decoding unit.
In one possible implementation, the motion vector information of the current decoding unit is determined according to the following step 21:
And step 21, determining motion vector information of a current decoding unit according to the motion vector information of S decoding units, wherein the S decoding units are S decoded decoding units positioned before the current decoding unit in a decoding sequence, and S is a positive integer.
The embodiment of the application does not limit the specific selection modes of the S decoding units.
In some embodiments, the S decoding units are sequentially adjacent in decoding order without an interval therebetween.
In some embodiments, the S decoding units may be any S decoding units located before the current decoding unit in the decoding order, that is, the S decoding units may be adjacent or not completely adjacent, which is not limited in the embodiment of the present application.
Due to the correlation of content between adjacent point cloud frames, in this implementation the decoding end obtains, from the decoded information, S decoding units located before the current decoding unit in the decoding order, and determines the motion vector information of the current decoding unit according to the motion vector information of the S decoding units.
In step 21, according to the motion vector information of the S decoding units, the implementation manner of determining the motion vector information of the current decoding unit at least includes the following examples:
In the first example, if S is equal to 1, the motion vector information of one decoding unit located before the current decoding unit in the decoding order is determined as the motion vector information of the current decoding unit. For example, if the current decoding unit is the 6th point cloud frame in the decoding order, the motion vector information of the 5th point cloud frame in the decoding order is determined as the motion vector information of the current decoding unit.
In a second example, if S is greater than 1, the motion vector information of S decoding units is subjected to preset processing, and the processing result is determined as the motion vector information of the current decoding unit.
For example, an average value of motion vector information of S decoding units is determined as motion vector information of the current decoding unit.
For another example, a weighted average of the motion vector information of the S decoding units is determined as the motion vector information of the current decoding unit. Optionally, the closer to the current decoding unit in the decoding order, the greater the weight, and the farther from the current decoding unit, the lesser the weight, among the S decoding units.
The decoding end may determine the motion vector information of the current decoding unit according to the following manner 2, in addition to determining the motion vector information of the current decoding unit in the foregoing manner 1.
In mode 2, if the second parameter indicates that the motion vector information is calculated once for each R decoding units, the implementation of S101-B at least includes the following two examples:
In example 1, if the current decoding unit is the NR-th decoding unit in the decoding order, the point cloud code stream is decoded to obtain the motion vector information of the current decoding unit, where both R and N are positive integers.
In example 2, if the current decoding unit is not the NR-th decoding unit in the decoding order, the motion vector information of the current decoding unit is determined according to the decoded information or a default value.
In this mode 2, the motion vector information calculation period of each decoding unit in the point cloud sequence is the same; for example, the motion vector information is calculated once every R decoding units. In this way, the encoding end writes into the code stream the motion vector information of the decoding units whose decoding-order numbers are 0 or an integer multiple of R (i.e., the NR-th decoding units), and does not write the motion vector information of the decoding units whose numbers are not an integer multiple of R (i.e., the non-NR-th decoding units), thereby reducing the code stream load. Correspondingly, when decoding the current decoding unit, the decoding end judges whether the current decoding unit is the NR-th decoding unit in the decoding order, that is, whether the number of the current decoding unit in the decoding order is an integer multiple of R. If the decoding end determines that the current decoding unit is the NR-th decoding unit in the decoding order, the motion vector information of the current decoding unit is decoded from the code stream. If the current decoding unit is not the NR-th decoding unit in the decoding order, the default value is determined as the motion vector information of the current decoding unit, or the motion vector information of the current decoding unit is determined according to the decoded information.
In this mode 2, if the second parameter indicates that the motion vector information is calculated once every R decoding units, the decoding end decodes the motion vector information from the code stream once every R decoding units, so that the decoding frequency of the decoding end can be reduced. For example, if the point cloud sequence includes 1000 point cloud frames and one point cloud frame is used as a decoding unit, the decoding frequency of the decoding end is 1000/R instead of 1000; the decoding frequency is greatly reduced, the decoding load of the decoding end is reduced, and the decoding efficiency is improved.
In this mode 2, if the current decoding unit is not the NR-th decoding unit in the decoding order, the specific process of determining the motion vector information of the current decoding unit according to the decoded information may refer to the description of step 21 above, and will not be repeated here.
The decoding end may determine the motion vector information of the current decoding unit according to the following manner 3, in addition to determining the motion vector information of the current decoding unit according to the methods described in the foregoing manners 1 and 2.
In mode 3, the decoding end determines the motion vector information according to the degree of change of the classification information of the decoding units. That is, the decoding end determines the motion vector information of the current decoding unit according to the following steps 1 and 2:
step 1, determining the change degree of classification information according to a first parameter;
And 2, determining the motion vector information of the current decoding unit according to the change degree.
In the embodiment of the present application, if the classification information of different decoding units does not change greatly, it indicates that the motion vector information of different decoding units may also not change greatly. Conversely, if the classification information of different decoding units changes greatly, it indicates that the motion vector information of different decoding units may also change greatly. Accordingly, the motion vector information of the current decoding unit can be determined according to the degree of change of the classification information of different decoding units.
The embodiment of the application does not limit the specific implementation manner of determining the degree of variation of the classification information of the point cloud according to the first parameter in the step 1.
In some embodiments, classification information for the plurality of decoding units is determined based on the first parameter, and a degree of change in the classification information is determined based on the classification information for the plurality of decoding units. For example, when the difference between the classification information of the plurality of decoding units is large, the degree of change of the classification information is large, and when the difference between the classification information of the plurality of decoding units is small, the degree of change of the classification information is small.
In some embodiments, the degree of variation of the classification information is determined from the classification information of the current decoding unit and the classification information of the reference decoding unit of the current decoding unit.
For example, according to the first parameter, the classification information of the current decoding unit is determined, and the specific process may refer to the description of the foregoing embodiment, which is not repeated herein. Next, the degree of change between the classification information of the current decoding unit and the classification information of the reference decoding unit of the current decoding unit is determined, for example, the absolute value of the difference between the classification information of the current decoding unit and the classification information of the reference decoding unit of the current decoding unit is determined as the degree of change of the classification information.
According to the method, after the change degree of the classification information is determined, the motion vector information of the current decoding unit is determined according to the change degree of the classification information.
For example, if the degree of change of the classification information is less than or equal to the first preset value, the default value or the motion vector information of the previous decoding unit of the current decoding unit in the decoding order is determined as the motion vector information of the current decoding unit.
For another example, if the degree of change is greater than the first preset value, the point cloud code stream is decoded, and the motion vector information of the current decoding unit is obtained.
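Mode 3 reduces to a simple branch; here the change degree is taken as the absolute difference between the current and reference units' classification values, and the decode call and threshold name are assumptions:

```python
def motion_vector_mode3(cur_cls, ref_cls, first_preset, prev_mv, reader):
    """Reuse the previous unit's motion vector (or a default) when the
    classification information changes little; otherwise decode a fresh
    motion vector from the point cloud code stream."""
    change_degree = abs(cur_cls - ref_cls)
    if change_degree <= first_preset:
        return prev_mv                      # or an agreed default value
    return reader.decode_motion_vector()    # assumed decoder call
```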
In the embodiment of the application, according to the mode, the motion vector information of the current decoding unit can be determined.
The motion vector information is understood as motion information required for inter prediction by the decoding side. The embodiment of the application does not limit the concrete expression form of the motion vector information.
In some embodiments, the motion vector information includes at least one of a rotation matrix and an offset vector. The rotation matrix describes three-dimensional rotation of the decoding unit and the reference decoding unit, and the offset vector describes offset of the coordinate origin of the decoding unit and the reference decoding unit in three directions.
In one example, when the rotation matrix is the 3×3 identity matrix (ones on the main diagonal and zeros elsewhere), it indicates that the current decoding unit is not rotated compared to the reference decoding unit.
In one example, when the offset vector is the zero vector (0, 0, 0), it means that the coordinate origin of the current decoding unit is not offset from that of the reference decoding unit.
If the current decoding unit is neither rotated nor shifted compared to the reference decoding unit, the motion vector between the two decoding units is noted as a zero motion vector.
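The rotation matrix and offset vector act on the reference unit's points as a rigid transform; a small numpy sketch (the row-vector convention is an assumption) shows that the identity rotation plus zero offset, i.e., the zero motion vector case, leaves the points unchanged:

```python
import numpy as np

def motion_compensate(points, rotation, offset):
    """Apply motion vector information (3x3 rotation + 3-vector offset)
    to an (N, 3) array of reference points."""
    return points @ rotation.T + offset

pts = np.array([[1.0, 2.0, 0.5]])
# Zero motion vector: identity rotation and zero offset change nothing.
assert np.allclose(motion_compensate(pts, np.eye(3), np.zeros(3)), pts)
```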
In some embodiments, if the motion vector information includes at least one of a rotation matrix and an offset vector, the corresponding second parameter motion_period includes at least one of a third sub-parameter rotation_matrix_period and a fourth sub-parameter transform_vector_period.
The third sub-parameter rotation_matrix_period is used for indicating the calculation period of the rotation matrix, and the fourth sub-parameter transform_vector_period is used for indicating the calculation period of the offset vector.
The third sub-parameter and the fourth sub-parameter may be independently assigned.
Alternatively, the calculation period of the rotation matrix and the calculation period of the offset vector may be the same or different, which is not limited in the embodiment of the present application.
In some embodiments, if the current decoding unit is the first decoding unit in the decoding order, i.e. the decoding sequence number is 0, the encoding end writes at least one of the classification information and the motion vector information of the current decoding unit into the point cloud code stream. In this way, the decoding end can decode the point cloud code stream to directly obtain at least one of the classification information and the motion vector information of the current decoding unit.
In the embodiment of the present application, the decoding end determines at least one of the classification information and the motion vector information of the current decoding unit according to the above steps, and then executes the following step S102.
S102, decoding the current decoding unit according to at least one of the classification information and the motion vector information of the current decoding unit.
Because the motions of different objects are different, in order to improve the decoding accuracy, the categories of the point clouds in the current decoding unit are determined according to the classification information of the current decoding unit, and inter-frame prediction is performed on point clouds of different categories with different motion vector information. Taking point cloud data scanned by a vehicle radar as an example, the point cloud may be divided into road points and object points, whose motion vector information is different.
In the step S102, the embodiment of the present application does not limit the specific process of decoding the current decoding unit according to at least one of the classification information and the motion vector information of the current decoding unit.
In some embodiments, if the classification information of the current decoding unit is determined according to the above method, but the motion vector information of the current decoding unit is not determined, the point cloud in the current decoding unit may be classified into a plurality of categories according to the classification information. Different motion vector information is assigned to each category, wherein the motion vector information assigned to different categories may be preset values corresponding to different categories, or values calculated according to the categories, which is not limited in the embodiment of the present application.
In some embodiments, if the motion vector information of the current decoding unit is determined according to the above method but the classification information of the current decoding unit is not determined, the decoding end may determine the classification information of the current decoding unit by itself, for example, according to the decoded information of the decoded units around the current decoding unit. Then, the point cloud in the current decoding unit is divided into a plurality of categories according to the classification information, and the motion vector information of each category of point cloud in the current decoding unit is determined according to the motion vector information of the current decoding unit. For example, if the current decoding unit includes a first class of point cloud and a second class of point cloud, the determined motion vector information may be determined as the motion vector information of the first class of point cloud, and the motion vector information of the second class of point cloud is determined as a preset value, for example, a zero vector.
In some embodiments, if the classification information of the current decoding unit and the motion vector information of the current decoding unit are determined according to the above steps, the step S102 includes the following steps:
S102-A, dividing the point cloud in the current decoding unit into P types of point clouds according to the classification information of the current decoding unit, wherein P is a positive integer greater than 1.
In this embodiment, the decoding end divides the point cloud in the current decoding unit into P-type point clouds according to the classification information of the current decoding unit, determines the motion vector information corresponding to the P-type point clouds according to the motion vector information of the current decoding unit, and further decodes the current decoding unit according to the motion vector information corresponding to the P-type point clouds. In other words, according to the embodiment of the application, different motion vector information is adopted for decoding aiming at different types of point clouds in the current decoding unit, so that the decoding accuracy is improved.
The embodiment of the application does not limit the specific type of dividing the point cloud in the current decoding unit into the P-type point cloud according to the classification information of the current decoding unit in S102-A.
In some embodiments, the classification information of the current decoding unit may be a class identifier, for example, each point in the current decoding unit includes a class identifier, so that the point cloud in the current decoding unit may be classified into a P-class point cloud according to the class identifier.
In some embodiments, the classification information of the current decoding unit includes a first height threshold and a second height threshold, and the first height threshold is greater than the second height threshold, where S102-a includes the following steps:
S102-A1, dividing the point cloud in the current decoding unit into P-type point clouds according to the first height threshold and the second height threshold.
For example, point clouds with height values greater than a first height threshold in the current decoding unit are classified into one type of point clouds, point clouds with height values between the first height threshold and a second height threshold in the current decoding unit are classified into one type of point clouds, and point clouds with height values smaller than the second height threshold in the current decoding unit are classified into one type of point clouds.
For another example, the point clouds with the height value smaller than or equal to the first height threshold and larger than or equal to the second height threshold in the current decoding unit are divided into the first type point clouds, and the point clouds with the height value larger than the first height threshold or smaller than the second height threshold in the current decoding unit are divided into the second type point clouds.
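The two-class split from the second example can be sketched directly; taking the z coordinate as the height is an assumption:

```python
def split_by_height(points, top_thr, bottom_thr):
    """First class: height within [bottom_thr, top_thr] (e.g. road-like
    points); second class: height above top_thr or below bottom_thr."""
    first = [p for p in points if bottom_thr <= p[2] <= top_thr]
    second = [p for p in points if not (bottom_thr <= p[2] <= top_thr)]
    return first, second
```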
S102-B, determining motion vector information corresponding to the P-type point cloud according to the motion vector information of the current decoding unit.
According to the above steps, after the point cloud in the current decoding unit is divided into the P-type point cloud, the motion vector information corresponding to the P-type point cloud is determined according to the motion vector information of the current decoding unit. For example, the motion vector information of the current decoding unit is determined as the motion vector information of one type of point clouds in the P type of point clouds, and the motion vector information of other types of point clouds in the P type of point clouds can be a preset value.
In one example, the P-type point cloud includes the first-type point cloud and the second-type point cloud, and then the motion vector information of the current decoding unit may be determined as the motion vector information of the second-type point cloud, where the motion vector information of the first-type point cloud is a preset value, for example, a zero vector.
Taking the vehicle-mounted point cloud as an example, the first class of point cloud can be understood as road points and the second class as non-road points; since roads change little, the non-road points are the research focus. Therefore, the motion vector information of the current decoding unit is determined as the motion vector information of the non-road points, and the road points are predicted to be static, i.e., zero motion, i.e., the motion vector information of the road points is a zero vector.
S102-C, decoding the current decoding unit according to the motion vector information corresponding to the P-type point cloud.
According to the method, after the motion vector information corresponding to the P-type point cloud in the current decoding unit is determined, the current decoding unit is decoded according to the motion vector information corresponding to the P-type point cloud.
The embodiment of the application does not limit the specific implementation mode of decoding the current decoding unit according to the motion vector information corresponding to the P-type point cloud.
In some embodiments, the decoding end can determine a reference decoding unit of the current decoding unit, perform motion compensation on the reference decoding unit according to the motion vector information of the P-type point cloud to obtain prediction information of the current decoding unit, and decode at least one of geometric information and attribute information of the current decoding unit according to the prediction information.
In one example, the geometric information of the current decoding unit is decoded according to prediction information, which may be understood as a prediction unit of the current decoding unit. Therefore, the space occupation condition of the current decoding unit can be predicted according to the space occupation condition of the prediction unit, and then the geometric code stream of the current decoding unit is decoded according to the predicted space occupation condition of the current decoding unit, so that the geometric information of the current decoding unit is obtained.
In another example, the attribute information of the current decoding unit is decoded according to prediction information, which may be understood as a prediction unit of the current decoding unit. Thus, for each point in the current decoding unit, at least one reference point of the point is acquired in the prediction unit, and the attribute information of the point is predicted according to the attribute information of the at least one reference point, so that the attribute predicted value of the point is obtained. And then, decoding the attribute code stream to obtain an attribute residual value of the point, and determining an attribute reconstruction value of the point according to the attribute predicted value and the attribute residual value of the point. By referring to the method, the attribute reconstruction value of each point in the current decoding unit can be determined, and then the attribute reconstruction value of the current decoding unit is obtained.
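A rough sketch of the per-point attribute reconstruction just described, assuming a single nearest reference point in the prediction unit serves as the predictor (the reference-point selection rule is an assumption):

```python
def reconstruct_attributes(points, prediction_unit, residuals):
    """For each point: predict its attribute from the nearest point of
    the prediction unit, then add the decoded attribute residual."""
    recon = []
    for pos, res in zip(points, residuals):
        ref_pos, ref_attr = min(
            prediction_unit,   # list of (position, attribute) pairs
            key=lambda q: sum((a - b) ** 2 for a, b in zip(q[0], pos)))
        recon.append(ref_attr + res)   # attribute prediction + residual
    return recon
```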
It should be noted that, the decoding end may also adopt a method for decoding the current decoding unit according to at least one of the classification information and the motion vector information of the current decoding unit, which is not limited in the embodiment of the present application.
According to the point cloud decoding method provided by the embodiment of the application, a decoding end decodes a point cloud code stream, and at least one of classification information and motion vector information of a current decoding unit is determined, wherein the classification information is determined based on a first parameter, the motion vector information is determined based on a second parameter, the first parameter is used for indicating a calculation period of the classification information, and the second parameter is used for indicating a calculation period of the motion vector information. Then, the current decoding unit is decoded according to at least one of the classification information and the motion vector information of the current decoding unit. In other words, in the embodiment of the application, the classification information and the motion vector information are calculated periodically, and compared with the case that the classification information and the motion vector information are calculated once for each decoding unit, the embodiment of the application greatly reduces the calculation times of the classification information and the motion vector information, reduces the processing time of encoding and decoding, and improves the encoding and decoding efficiency.
The decoding end is taken as an example above to describe the point cloud decoding method provided by the embodiment of the present application in detail, and the encoding end is taken as an example below to describe the point cloud encoding method provided by the embodiment of the present application.
Fig. 6 is a schematic flow chart of a point cloud encoding method according to an embodiment of the application. The point cloud encoding method of the embodiment of the application can be completed by the point cloud encoding device shown in the above fig. 1 or fig. 2.
As shown in fig. 6, the point cloud encoding method according to the embodiment of the present application includes:
S201, at least one of a first parameter and a second parameter is determined.
Wherein the first parameter is used for indicating the calculation period of the classification information, and the second parameter is used for indicating the calculation period of the motion vector information.
As can be seen from the above description, the adjacent frames in the continuously acquired point cloud sequence have higher correlation, so that inter-frame prediction can be introduced to improve the point cloud coding efficiency.
Inter prediction mainly comprises the steps of motion estimation, motion compensation and the like. In some embodiments, in the motion estimation step, spatial motion offset vectors of two adjacent frames are calculated and written into the code stream. In the motion compensation step, the calculated motion vector is further used for calculating the spatial offset of the point cloud, and the offset point cloud frame is used as a reference to further improve the coding efficiency of the current frame.
The embodiment of the application does not limit the specific content of the motion vector information of the current coding unit, and can be motion information related to the steps of motion estimation, motion compensation and the like.
For example, the motion vector information may be a spatial motion offset vector of two adjacent frames in motion estimation, i.e., a motion vector.
For another example, the motion vector information may also be the motion estimation (ME, Motion Estimation) between two adjacent frames used in motion compensation.
In a real scenario, the movements of different objects may be different, for example, point cloud data captured by lidar sensors on a moving vehicle, where roads and objects typically have different movements. Since the distance between the road and the radar sensor is relatively constant and the road varies slightly from one vehicle position to the next, the movement of the point representing the road relative to the radar sensor position is small. In contrast, objects such as buildings, road signs, vegetation, or other vehicles have a large movement. Since the road and object points have different motions, dividing the point cloud data into the road and object will improve the accuracy of global motion estimation and compensation, thereby improving compression efficiency. That is, for point cloud data using inter prediction, in order to improve accuracy of inter prediction and compression efficiency, for one encoding unit, it is necessary to classify point clouds in the encoding unit, for example, to divide the point clouds in the encoding unit into road point clouds and non-road point clouds.
In the embodiment of the application, at least one of the first parameter and the second parameter is set in order to avoid calculating the classification information and the motion vector information once for every coding unit. The first parameter indicates the calculation period of the classification information, and the second parameter indicates the calculation period of the motion vector information. The encoding end can therefore calculate the classification information periodically according to the calculation period indicated by the first parameter, and/or calculate the motion vector information periodically according to the calculation period indicated by the second parameter, which reduces the number of times the classification information and/or the motion vector information are calculated and improves the encoding efficiency.
In some embodiments, the calculation period of the above classification information may be understood as calculating the classification information once every at least one coding unit, or calculating the classification information once every at least one point cloud frame.
In some embodiments, the above-mentioned calculation cycle of the motion vector information may be understood as calculating the motion vector information once every at least one coding unit, or calculating the motion vector information once every at least one point cloud frame.
Alternatively, the first parameter may be denoted classification_period.
Alternatively, the second parameter may be denoted motion_period.
Alternatively, the first parameter and the second parameter may have other representations; for example, they may be set to classification_period_log2 and motion_period_log2, which respectively represent the base-2 logarithm of the calculation period of the classification information and the base-2 logarithm of the calculation period of the motion vector information. Such variants still fall within the scope of the present application.
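To make the log2 representation concrete, the following is a minimal Python sketch, assuming the parameter has already been parsed from the code stream; the function name is illustrative and not part of the draft:

```python
def expand_period(period_log2: int) -> int:
    """Recover the calculation period from its base-2 logarithm."""
    return 1 << period_log2  # 2 ** period_log2

# classification_period_log2 = 3 would mean the classification
# information is recalculated once every 8 coding units.
assert expand_period(3) == 8
```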
In some embodiments, at least one of the first parameter and the second parameter is a preset value.
In some embodiments, at least one of the first parameter and the second parameter is a parameter entered by a user.
S202, determining at least one of classification information and motion vector information of the current coding unit according to at least one of the first parameter and the second parameter.
In the point cloud encoding process, the point cloud data may be divided into at least one coding unit; each coding unit is encoded independently, and the encoding process of each coding unit is substantially the same. For convenience of description, the embodiment of the present application takes the coding unit currently being encoded, i.e., the current coding unit, as an example.
The embodiment of the application does not limit the specific size of the current coding unit, and can be determined according to actual needs.
In some embodiments, the current encoding unit is a current point cloud frame, i.e., one point cloud frame may be encoded as one encoding unit.
In some embodiments, the current coding unit is a partial area of the current point cloud frame, for example, the current point cloud frame is divided into a plurality of areas, and one area is used as one coding unit to perform separate coding.
The embodiment of the application does not limit the specific mode of dividing the current point cloud frame into a plurality of areas.
In one example, the current point cloud frame is divided into a plurality of point cloud slices; the sizes of the slices may or may not be the same, and each slice is encoded independently as one coding unit.
In another example, the current point cloud frame is divided into a plurality of point cloud blocks; the sizes of the blocks may or may not be the same, and each block is encoded independently as one coding unit.
In the embodiment of the present application, the specific implementation manner of determining at least one of the classification information and the motion vector information of the current coding unit according to at least one of the first parameter and the second parameter in S202 includes, but is not limited to, the following:
In one mode, the encoding end determines at least one of classification information and motion vector information by the following step S202-a:
S202-A, determining classification information of the current coding unit according to the first parameter, and/or determining motion vector information of the current coding unit according to the second parameter.
It should be noted that the first parameter and the second parameter may be used separately, and in one example, the encoding end may determine the classification information of the current encoding unit according to the first parameter, and obtain the motion vector information of the current encoding unit according to the existing method. In another example, the encoding end may determine the motion vector information of the current encoding unit according to the second parameter, and obtain the classification information of the current encoding unit according to the existing method. In yet another example, the encoding end may determine classification information of the current encoding unit according to the first parameter and determine motion vector information of the current encoding unit according to the second parameter.
In the first mode, the encoding end determines the classification information of the encoding units according to the first parameter, rather than determining the classification information of the point cloud in each encoding unit one by one. And/or, the coding end determines the motion vector information of the coding units according to the second parameter instead of determining the motion vector information of each coding unit one by one.
The following first describes the specific process in S202-A by which the encoding end determines the classification information of the current coding unit according to the first parameter.
In the above S202-a, the specific implementation manner of determining the classification information of the current coding unit by the coding end according to the first parameter includes, but is not limited to, the following:
In mode 1, the step S202-A includes the steps of S202-A-11 and S202-A-12 as follows:
S202-A-11, determining a classification information calculation period corresponding to the current coding unit according to the first parameter;
S202-A-12, determining the classification information of the current coding unit according to the classification information calculation period.
In the embodiment of the application, the calculation periods of the classification information corresponding to different coding units in the point cloud sequence can be the same or different, and the embodiment of the application is not limited to the above.
In some embodiments, the first parameter indicates a calculation period of classification information for each coding unit in the point cloud sequence. For example, the first parameter indicates that the classification information is calculated once every K coding units.
In some embodiments, the calculation periods of the classification information corresponding to different coding units in the point cloud sequence are not identical; in this case, the calculation period of the classification information of each coding unit in the point cloud sequence may be indicated by a plurality of first parameters. For example, 3 first parameters are determined, where the first of them indicates that the classification information is calculated once every K1 coding units, the second indicates that it is calculated once every K2 coding units, and the third indicates that it is calculated once every K3 coding units.
As can be seen from the above, no matter in which form the first parameter indicates the calculation period of the classification information, the calculation period of the classification information corresponding to the current coding unit can be determined from the first parameter. For example, suppose the current coding unit is a point cloud frame and the first parameter indicates an interval of 4 point cloud frames between calculations; then the classification information is calculated at the 0th, 5th and 10th point cloud frames in coding order. The 0th to 4th point cloud frames can be understood as the first calculation period of the classification information, and the 5th to 9th point cloud frames as the second calculation period. If the current coding unit is the 6th point cloud frame in coding order, it is located in the second calculation period, so the second calculation period is determined as the classification information calculation period corresponding to the current coding unit.
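As an illustration of this period lookup, the following Python sketch assumes the interval convention of the example above (an interval of 4 frames means a calculation every 5th frame); all names are hypothetical:

```python
def locate_period(unit_index: int, interval: int) -> tuple[int, bool]:
    """Return (period index, whether the unit is first in its period)."""
    stride = interval + 1  # e.g. interval 4 -> calculations at frames 0, 5, 10, ...
    return unit_index // stride, unit_index % stride == 0

# The 6th frame falls in the second period (index 1) and is not first;
# the 5th frame opens that period, matching the example above.
assert locate_period(6, 4) == (1, False)
assert locate_period(5, 4) == (1, True)
```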
The coding end determines the classification information calculation period corresponding to the current coding unit according to the steps, and then determines the classification information of the current coding unit according to the classification information calculation period.
The embodiment of the application does not limit the specific mode of determining the classification information of the current coding unit according to the calculation period of the classification information corresponding to the current coding unit by the coding end.
In some embodiments, the encoding end and the decoding end agree that the classification information of the coding units within a classification information calculation period is the default value 1; the encoding end then determines the default value 1 as the classification information of the current coding unit.
In some embodiments, the encoding end and the decoding end agree that the classification information of the coding units within a classification information calculation period is calculated by a preset calculation method. For example, if the current coding unit is an area of the current point cloud frame, the classification information of the current coding unit may be determined according to the classification information of the coded point clouds around the current coding unit in the current point cloud frame.
In some embodiments, the encoding end may determine the classification information of the current encoding unit according to the position of the current encoding unit in the classification information calculation period corresponding to the current encoding unit.
In example 1, if the current coding unit is the first coding unit in the classification information calculation period, the classes of the point cloud in the current coding unit are identified, and the classification information of the current coding unit is obtained.
Example 2, if the current coding unit is the non-first coding unit in the classification information calculation period, the classification information of the current coding unit is determined according to the encoded information or the default value.
In this embodiment, after determining the classification information calculation period corresponding to the current coding unit, the coding end may determine the classification information of the current coding unit according to whether the current coding unit is the first coding unit in the classification information calculation period.
Continuing with the above example, assume that the classification information calculation period corresponding to the current coding unit runs from the 5th point cloud frame to the 9th point cloud frame. If the current coding unit is the 5th point cloud frame in coding order, the classes of the point cloud in the current coding unit are identified and the classification information of the current coding unit is obtained. If the current coding unit is not the 5th point cloud frame, for example the 6th point cloud frame, the encoding end determines the default value as the classification information of the current coding unit, or determines the classification information of the current coding unit according to the coded information.
The embodiment of the present application is not limited to the specific implementation manner of determining the classification information of the current coding unit according to the encoded information in the above example 2.
In one possible implementation, the classification information of the current coding unit is determined according to the classification information of the first coding unit in the calculation period of the classification information corresponding to the current coding unit. For example, the classification information of the first coding unit in the classification information calculation period is determined as the classification information of the current coding unit, or the classification information of the first coding unit in the classification information calculation period is processed to obtain the classification information of the current coding unit.
In one possible implementation, the classification information of the current coding unit is determined according to the following step 11:
Step 11: determine the classification information of the current coding unit according to the classification information of M coding units, where the M coding units are M coded coding units located before the current coding unit in coding order, and M is a positive integer.
The embodiment of the application does not limit the specific selection modes of the M coding units.
In some embodiments, the M coding units are sequentially adjacent in coding order without an interval therebetween.
In some embodiments, the M coding units may be any M coding units located before the current coding unit in the coding sequence, that is, the M coding units may or may not be completely adjacent to each other, which is not limited in the embodiment of the present application.
Because the content of adjacent point cloud frames is correlated to a certain extent, in this implementation the encoding end obtains, from the coded information, M coding units located before the current coding unit in coding order, and determines the classification information of the current coding unit according to the classification information of those M coding units.
In the above step 11, according to the classification information of the M coding units, the implementation of determining the classification information of the current coding unit at least includes the following examples:
In the first example, if M is equal to 1, the classification information of one coding unit located before the current coding unit in the coding sequence is determined as the classification information of the current coding unit. For example, if the current coding unit is the 6 th point cloud frame in the coding sequence, determining the classification information of the 5 th point cloud frame in the coding sequence as the classification information of the current coding unit.
In a second example, if M is greater than 1, the classification information of the M coding units is subjected to preset processing, and the processing result is determined as the classification information of the current coding unit.
For example, an average value of the classification information of the M coding units is determined as the classification information of the current coding unit.
For another example, a weighted average of the classification information of the M coding units is determined as the classification information of the current coding unit. Optionally, the closer to the current coding unit in the coding sequence, the greater the weight, and the farther from the current coding unit, the lesser the weight, among the M coding units.
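A possible realization of the weighted average in this second example, assuming the classification information can be treated as a scalar (e.g., a height threshold) and choosing inverse-distance weights, is sketched below; neither choice is mandated by the text:

```python
def fuse_classification(history: list[float]) -> float:
    """history[0] is the closest coded unit in coding order, history[-1] the farthest."""
    weights = [1.0 / (i + 1) for i in range(len(history))]  # closer -> heavier weight
    return sum(w * c for w, c in zip(weights, history)) / sum(weights)

# Classification info (here a scalar threshold) of the M = 3 previous units:
print(fuse_classification([1.2, 1.0, 0.9]))
```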
In some embodiments of the foregoing mode 1, the encoding end writes the first parameter into the point cloud code stream, and if the current encoding unit is the first encoding unit in the classification information calculation period, writes the classification information of the current encoding unit into the point cloud code stream.
In some embodiments of the foregoing mode 1, the first parameter is written into the point cloud code stream, and if the current coding unit is the non-first coding unit in the classification information calculation period, writing the classification information of the current coding unit into the point cloud code stream is skipped.
That is, in this mode 1, the encoding end writes the first parameter into the code stream and writes the classification information of the first coding unit in each classification information calculation period into the code stream, while the classification information of the coding units that are not first in their calculation period is not written into the code stream, which reduces the load of the code stream.
In some embodiments, in the mode 1, the encoding end writes the classification information of the current encoding unit into the point cloud code stream, and skips writing the first parameter into the point cloud code stream.
The encoding end may determine the classification information of the current encoding unit according to the following manner 2 in addition to determining the classification information of the current encoding unit in the foregoing manner 1.
In mode 2, if the first parameter indicates that the classification information is calculated once for every K coding units, the implementation of S202-a at least includes the following two examples:
In example 1, if the current coding unit is the (N·K)-th coding unit in coding order, the classes of the point cloud in the current coding unit are identified and the classification information of the current coding unit is obtained, where K and N are positive integers.
In example 2, if the current coding unit is not an (N·K)-th coding unit in coding order, the classification information of the current coding unit is determined according to the coded information or a default value.
In one embodiment of the method 2, the first parameter is written into the point cloud code stream, and if the current coding unit is the NK-th coding unit in the coding sequence, the classification information of the current coding unit is written into the code stream.
In another embodiment of the method 2, the first parameter is written into the point cloud code stream, and if the current coding unit is a non-NK coding unit in the coding order, writing the classification information of the current coding unit into the code stream is skipped.
In this mode 2, if the first parameter indicates that the classification information is calculated once every K coding units, the encoding end writes into the code stream the classification information of the coding units whose sequence number is 0 or an integer multiple of K, and does not write classification information for the other coding units. For example, if the point cloud sequence comprises 1000 point cloud frames and one point cloud frame is used as one coding unit, the classification information is written 1000/K times instead of 1000 times, which greatly reduces the number of encoding operations, lightens the load of the encoding end, and improves the encoding efficiency.
In another embodiment of this mode 2, the classification information of the current coding unit may be written into the point cloud code stream, and writing the first parameter into the point cloud code stream may be skipped.
In this mode 2, if the current coding unit is not an (N·K)-th coding unit in coding order, the specific process of determining the classification information of the current coding unit according to the coded information may refer to the description of the above step 11, and is not repeated here.
According to the embodiment of the application, the classification information of the current coding unit can be determined according to the mode.
The classification information may be understood as information required to divide the point cloud into different categories. The embodiment of the application does not limit the concrete expression form of the classification information.
In some embodiments, the classification information includes at least one of a first height threshold and a second height threshold, the first height threshold and the second height threshold being used for classification of point clouds in the current coding unit.
Optionally, at least one of the first height threshold and the second height threshold is a preset value.
Optionally, at least one of the first height threshold and the second height threshold is a statistic. For example, as shown in fig. 5, the height values of the points in the point cloud are counted using a histogram, where the horizontal axis of the histogram is the height value and the vertical axis is the number of points at that height value. Fig. 5 takes a radar point cloud as an example, where the height of the radar is the zero point, so the heights of most points are negative. The height value corresponding to the peak of the histogram is then obtained, and the standard deviation of the height values is calculated. Taking the height corresponding to the peak as the center, the threshold a times the standard deviation above the center (for example, a = 1.5) is denoted as the first height threshold Top_thr, and the threshold b times the standard deviation below the center (for example, b = 1.5) is denoted as the second height threshold Bottom_thr.
The first height threshold and the second height threshold divide the point cloud into different categories. For example, points whose height values lie between the first height threshold and the second height threshold are denoted as the first class of point cloud, and points whose height values are greater than the first height threshold or less than the second height threshold are denoted as the second class of point cloud.
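The histogram statistics described for fig. 5 could be computed as in the following sketch; the bin count is an assumption, and a and b default to the 1.5 suggested above:

```python
import numpy as np

def height_thresholds(z: np.ndarray, a: float = 1.5, b: float = 1.5):
    """z: height values of the points; returns (Top_thr, Bottom_thr)."""
    counts, edges = np.histogram(z, bins=256)
    peak = 0.5 * (edges[counts.argmax()] + edges[counts.argmax() + 1])  # histogram peak
    sigma = z.std()  # standard deviation of the height values
    return peak + a * sigma, peak - b * sigma
```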
In some embodiments, if the classification information includes at least one of the first height threshold and the second height threshold, the corresponding first parameter classification_period may include at least one of a first sub-parameter top_threshold_period and a second sub-parameter bottom_threshold_period;
The first sub-parameter top_threshold_period is used for indicating a calculation period of the first height threshold, and the second sub-parameter bottom_threshold_period is used for indicating a calculation period of the second height threshold.
The first sub-parameter and the second sub-parameter may be independently assigned.
Alternatively, the calculation period of the first height threshold may be the same as or different from the calculation period of the second height threshold, which is not limited in the embodiment of the present application.
The above describes the specific process in S202-A of determining the classification information of the current coding unit according to the first parameter; the following describes the specific process in S202-A of determining the motion vector information of the current coding unit according to the second parameter.
In S202-a, the specific implementation manner of determining the motion vector information of the current coding unit by the coding end according to the second parameter includes, but is not limited to, the following:
In mode 1, the step S202-A includes the steps of S202-A-21 and S202-A-22 as follows:
S202-A-21, determining a motion vector information calculation period corresponding to the current coding unit according to the second parameter;
S202-A-22, determining the motion vector information of the current coding unit according to the motion vector information calculation period.
In the embodiment of the application, the motion vector information calculation periods corresponding to different coding units in the point cloud sequence can be the same or different, and the embodiment of the application is not limited to the same.
In some embodiments, the second parameter indicates a calculation period of motion vector information for each coding unit in the point cloud sequence. For example, the second parameter indicates that motion vector information is calculated once every R coding units.
In some embodiments, the calculation periods of the motion vector information corresponding to different coding units in the point cloud sequence are not identical; in this case, the calculation period of the motion vector information of each coding unit in the point cloud sequence may be indicated by a plurality of second parameters. For example, 3 second parameters are determined, where the first of them indicates that the motion vector information is calculated once every R1 coding units, the second indicates that it is calculated once every R2 coding units, and the third indicates that it is calculated once every R3 coding units.
As can be seen from the above, no matter in which form the second parameter indicates the calculation period of the motion vector information, the calculation period of the motion vector information corresponding to the current coding unit can be determined from the second parameter. For example, suppose the current coding unit is a point cloud frame and the second parameter indicates an interval of 4 point cloud frames between calculations; then the motion vector information is calculated at the 0th, 5th and 10th point cloud frames in coding order. The 0th to 4th point cloud frames can be understood as the first calculation period of the motion vector information, and the 5th to 9th point cloud frames as the second calculation period. If the current coding unit is the 6th point cloud frame in coding order, it is located in the second calculation period, so the second calculation period is determined as the motion vector information calculation period corresponding to the current coding unit.
After determining, according to the above steps, the motion vector information calculation period corresponding to the current coding unit, the encoding end determines the motion vector information of the current coding unit according to that calculation period.
The embodiment of the application does not limit the specific mode of determining the motion vector information of the current coding unit according to the motion vector information calculation period corresponding to the current coding unit by the coding end.
In some embodiments, the encoding end and the decoding end agree that the motion vector information of the coding units within a motion vector information calculation period is the default value 1; the encoding end then determines the default value 1 as the motion vector information of the current coding unit.
In some embodiments, the encoding end and the decoding end agree that the motion vector information of the coding units within a motion vector information calculation period is calculated by a preset calculation method. For example, if the current coding unit is an area of the current point cloud frame, the motion vector information of the current coding unit may be determined according to the motion vector information of the coded areas around the current coding unit in the current point cloud frame.
In some embodiments, the encoding end may determine the motion vector information of the current encoding unit according to the position of the current encoding unit in the motion vector information calculation period corresponding to the current encoding unit.
In example 1, if the current coding unit is the first coding unit in the motion vector information calculation period, the motion vector information of the current coding unit is determined according to the reference coding unit of the current coding unit.
In example 2, if the current coding unit is the non-first coding unit in the motion vector information calculation period, the motion vector information of the current coding unit is determined according to the encoded information or the default value.
In this embodiment, after determining the motion vector information calculation period corresponding to the current coding unit, the coding end may determine the motion vector information of the current coding unit according to whether the current coding unit is the first coding unit in the motion vector information calculation period.
With continued reference to the above example, assume that the motion vector information calculation period corresponding to the current coding unit runs from the 5th point cloud frame to the 9th point cloud frame. If the current coding unit is the 5th point cloud frame in coding order, the encoding end directly determines the motion vector information of the current coding unit according to the reference coding unit of the current coding unit. If the current coding unit is not the 5th point cloud frame, for example the 6th point cloud frame, the encoding end determines the default value as the motion vector information of the current coding unit, or determines the motion vector information of the current coding unit according to the coded information.
The embodiment of the present application does not limit the specific implementation of determining the motion vector information of the current coding unit according to the coded information in the above example 2.
In one possible implementation, the motion vector information of the current coding unit is determined according to the motion vector information of the first coding unit in the motion vector information calculation period corresponding to the current coding unit. For example, the motion vector information of the first coding unit in the motion vector information calculation period is determined as the motion vector information of the current coding unit, or the motion vector information of the first coding unit in the motion vector information calculation period is processed to obtain the motion vector information of the current coding unit.
In one possible implementation, the motion vector information of the current coding unit is determined according to the following step 21:
Step 21: determine the motion vector information of the current coding unit according to the motion vector information of S coding units, where the S coding units are S coded coding units located before the current coding unit in coding order, and S is a positive integer.
The embodiment of the application does not limit the specific selection modes of the S coding units.
In some embodiments, the S coding units are sequentially adjacent in coding order without an interval therebetween.
In some embodiments, the S coding units may be any S coding units located before the current coding unit in the coding sequence, that is, the S coding units may be adjacent or not completely adjacent, which is not limited in the embodiment of the present application.
Because the content of adjacent point cloud frames is correlated to a certain extent, in this implementation the encoding end obtains, from the coded information, S coding units located before the current coding unit in coding order, and determines the motion vector information of the current coding unit according to the motion vector information of those S coding units.
In the above step 21, according to the motion vector information of the S coding units, the implementation of determining the motion vector information of the current coding unit at least includes the following examples:
In the first example, if S is equal to 1, the motion vector information of one coding unit located before the current coding unit in the coding order is determined as the motion vector information of the current coding unit. For example, if the current coding unit is the 6 th point cloud frame in the coding sequence, the motion vector information of the 5 th point cloud frame in the coding sequence is determined as the motion vector information of the current coding unit.
In a second example, if S is greater than 1, the motion vector information of S coding units is subjected to preset processing, and the processing result is determined as the motion vector information of the current coding unit.
For example, an average value of motion vector information of S coding units is determined as motion vector information of the current coding unit.
For another example, a weighted average of the motion vector information of the S coding units is determined as the motion vector information of the current coding unit. Optionally, the closer to the current coding unit in the coding sequence, the greater the weight, and the farther from the current coding unit, the lesser the weight, among the S coding units.
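The weighted average over the S previous coding units might look as follows, assuming here that only the offset-vector part of the motion information is averaged; the inverse-distance weights are again an illustrative choice:

```python
import numpy as np

def fuse_motion(vectors: list[np.ndarray]) -> np.ndarray:
    """vectors[0] is the offset vector of the closest coded unit in coding order."""
    weights = np.array([1.0 / (i + 1) for i in range(len(vectors))])
    weights /= weights.sum()  # normalize so the result stays an average
    return (weights[:, None] * np.stack(vectors)).sum(axis=0)

mv = fuse_motion([np.array([0.5, 0.0, 0.1]), np.array([0.4, 0.0, 0.1])])
```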
In one embodiment of the method 1, the second parameter is written into the point cloud code stream, and if the current coding unit is the first coding unit in the motion vector information calculation period, the motion vector information of the current coding unit is written into the point cloud code stream.
In one embodiment of the method 1, the second parameter is written into the point cloud code stream, and if the current coding unit is the non-first coding unit in the motion vector information calculation period, the writing of the motion vector information of the current coding unit into the point cloud code stream is skipped.
That is, in this mode 1, the encoding end writes the second parameter into the code stream and writes the motion vector information of the first coding unit in each motion vector information calculation period into the code stream, while the motion vector information of the coding units that are not first in their calculation period is not written into the code stream, which reduces the load of the code stream.
In one embodiment of this mode 1, the motion vector information of the current coding unit is written into the point cloud code stream, and writing the second parameter into the point cloud code stream is skipped.
The encoding end may determine the motion vector information of the current encoding unit according to the following manner 2, in addition to determining the motion vector information of the current encoding unit in the foregoing manner 1.
In mode 2, if the second parameter indicates that the motion vector information is calculated once for each R coding units, the implementation of S202-a at least includes the following two examples:
In example 1, if the current coding unit is the (N·R)-th coding unit in coding order, the motion vector information of the current coding unit is determined according to the reference coding unit of the current coding unit, where R and N are positive integers.
In example 2, if the current coding unit is not an (N·R)-th coding unit in coding order, the motion vector information of the current coding unit is determined according to the coded information or a default value.
In this mode 2, the motion vector information calculation period is the same for every coding unit in the point cloud sequence; for example, the motion vector information is calculated once every R coding units. Thus, when encoding the current coding unit, the encoding end determines whether the current coding unit is an (N·R)-th coding unit in coding order, that is, whether the sequence number of the current coding unit in coding order is an integer multiple of R. If so, the motion vector information of the current coding unit is determined according to the reference coding unit of the current coding unit. If not, the default value is determined as the motion vector information of the current coding unit, or the motion vector information of the current coding unit is determined according to the coded information.
In one embodiment of this mode 2, the second parameter is written into the point cloud code stream, and if the current coding unit is the (N·R)-th coding unit in coding order, the motion vector information of the current coding unit is written into the code stream.
In another embodiment of this mode 2, the second parameter is written into the point cloud code stream, and if the current coding unit is not an (N·R)-th coding unit in coding order, writing the motion vector information of the current coding unit into the code stream is skipped.
In this mode 2, if the second parameter indicates that the motion vector information is calculated once every R coding units, the encoding end encodes motion vector information once for every R coding units, which reduces the number of times the motion vector information is encoded. For example, if the point cloud sequence comprises 1000 point cloud frames and one point cloud frame is used as one coding unit, the motion vector information is encoded 1000/R times instead of 1000 times, which greatly reduces the number of encoding operations, lightens the load of the encoding end, and improves the encoding efficiency.
In one embodiment of this mode 2, the motion vector information of the current coding unit is written into the point cloud code stream, and writing the second parameter into the point cloud code stream is skipped.
In this mode 2, if the current coding unit is not an (N·R)-th coding unit in coding order, the specific process of determining the motion vector information of the current coding unit according to the coded information may refer to the description of the above step 21, and is not repeated here.
The encoding end may determine the motion vector information of the current encoding unit according to the following manner 3, in addition to determining the motion vector information of the current encoding unit according to the methods described in the foregoing manners 1 and 2.
Mode 3, the encoding end determines motion vector information according to the degree of variation of the classification information of the encoding unit. Namely, the encoding end determines the motion vector information of the current encoding unit according to the following steps 1 and 2:
Step 1: determine the degree of change of the classification information according to the first parameter;
Step 2: determine the motion vector information of the current coding unit according to the degree of change.
In the embodiment of the application, if the classification information of different coding units does not change much, it indicates that the motion vector information of those coding units may also not change much. Conversely, if the classification information of different coding units changes greatly, the motion vector information of those coding units may also change greatly. Accordingly, the motion vector information of the current coding unit can be determined according to the degree of change of the classification information of different coding units.
The embodiment of the application does not limit the specific implementation manner of determining the degree of variation of the classification information of the point cloud according to the first parameter in the step 1.
In some embodiments, classification information for the plurality of coding units is determined based on the first parameter, and a degree of change in the classification information is determined based on the classification information for the plurality of coding units. For example, when the difference between the classification information of the plurality of coding units is large, the degree of change of the classification information is large, and when the difference between the classification information of the plurality of coding units is small, the degree of change of the classification information is small.
In some embodiments, the degree of change in the classification information is determined based on the classification information of the current coding unit and the classification information of the reference coding unit of the current coding unit.
For example, according to the first parameter, the classification information of the current coding unit is determined, and the specific process may refer to the description of the foregoing embodiment, which is not repeated herein. Next, the degree of change between the classification information of the current coding unit and the classification information of the reference coding unit of the current coding unit is determined, for example, the absolute value of the difference between the classification information of the current coding unit and the classification information of the reference coding unit of the current coding unit is determined as the degree of change of the classification information.
According to the method, after the change degree of the classification information is determined, the motion vector information of the current coding unit is determined according to the change degree of the classification information.
For example, if the degree of change of the classification information is less than or equal to the first preset value, the default value or the motion vector information of the previous coding unit of the current coding unit in the coding sequence is determined as the motion vector information of the current coding unit.
For another example, if the degree of change is greater than the first preset value, the motion vector information of the current coding unit is determined according to the reference coding unit of the current coding unit.
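A compact sketch of this mode 3 decision, assuming scalar classification information; the function and parameter names are illustrative, and `threshold` plays the role of the first preset value:

```python
def motion_for_unit(cur_cls, ref_cls, prev_mv, threshold, recompute):
    """Reuse the previous motion vector when the classification barely changed."""
    change = abs(cur_cls - ref_cls)  # step 1: degree of change of the classification info
    if change <= threshold:          # step 2: small change -> reuse previous motion
        return prev_mv
    return recompute()               # large change -> estimate from the reference unit
```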
According to the embodiment of the application, the motion vector information of the current coding unit can be determined according to the mode.
The motion vector information is understood as motion information required for inter prediction by the encoding end. The embodiment of the application does not limit the concrete expression form of the motion vector information.
In some embodiments, the motion vector information includes at least one of a rotation matrix and an offset vector. The rotation matrix describes the three-dimensional rotation between the coding unit and its reference coding unit, and the offset vector describes the offsets of their coordinate origins in three directions.
In some embodiments, if the motion vector information includes at least one of a rotation matrix and an offset vector, the corresponding second parameter motion_period includes at least one of a third sub-parameter rotation_matrix_period and a fourth sub-parameter transform_vector_period.
The third sub-parameter rotation_matrix_period is used for indicating the calculation period of the rotation matrix, and the fourth sub-parameter transform_vector_period is used for indicating the calculation period of the offset vector.
The third sub-parameter and the fourth sub-parameter may be independently assigned.
Alternatively, the calculation period of the rotation matrix and the calculation period of the offset vector may be the same or different, which is not limited in the embodiment of the present application.
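Applying such a rotation matrix R and offset vector t to reference points can be sketched as follows; this is the generic rigid-motion formula, not code from the draft:

```python
import numpy as np

def apply_global_motion(points: np.ndarray, R: np.ndarray, t: np.ndarray) -> np.ndarray:
    """points: (N, 3); R: (3, 3) rotation matrix; t: (3,) offset vector."""
    return points @ R.T + t  # rotate each point, then shift the coordinate origin
```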
In some embodiments, if the current coding unit is the first coding unit in the coding sequence, i.e. the coding sequence number is 0, the coding end writes at least one of the classification information and the motion vector information of the current coding unit into the point cloud code stream.
In the embodiment of the present application, the encoding end determines at least one of the classification information and the motion vector information of the current encoding unit according to the above steps, and then performs the following step S203.
S203, the current coding unit is coded according to at least one of the classification information and the motion vector information of the current coding unit.
Because different objects move differently, in order to improve the encoding accuracy, the classes of the point cloud in the current coding unit are determined according to the classification information of the current coding unit, and different motion vector information is used for inter prediction of the different classes of point cloud. Taking point cloud data scanned by a vehicle-mounted radar as an example, the point cloud may be divided into road points and object points, whose motion vector information is different.
In the step S203, the embodiment of the present application does not limit the specific process of encoding the current encoding unit according to at least one of the classification information and the motion vector information of the current encoding unit.
In some embodiments, if the classification information of the current coding unit is determined according to the above method, but the motion vector information of the current coding unit is not determined, the point cloud in the current coding unit may be classified into a plurality of categories according to the classification information. Different motion vector information is assigned to each category, wherein the motion vector information assigned to different categories may be preset values corresponding to different categories, or values calculated according to the categories, which is not limited in the embodiment of the present application.
In some embodiments, if the motion vector information of the current coding unit is determined according to the above method but the classification information of the current coding unit is not, the encoding end may determine the classification information of the current coding unit by itself, for example according to the coded information of the coded units around the current coding unit. Then, the point cloud in the current coding unit is divided into a plurality of classes according to the classification information, and the motion vector information of each class of point cloud in the current coding unit is determined according to the motion vector information of the current coding unit. For example, if the current coding unit includes a first class of point cloud and a second class of point cloud, the determined motion vector information may be used as the motion vector information of the first class of point cloud, and the motion vector information of the second class of point cloud may be set to a preset value, for example a zero vector.
In some embodiments, if the classification information of the current coding unit and the motion vector information of the current coding unit are determined according to the above steps, the step S203 includes the following steps:
S203-A, dividing the point cloud in the current coding unit into P types of point clouds according to the classification information of the current coding unit, wherein P is a positive integer greater than 1.
In this embodiment, the encoding end divides the point cloud in the current encoding unit into P-type point clouds according to the classification information of the current encoding unit, determines the motion vector information corresponding to the P-type point clouds according to the motion vector information of the current encoding unit, and encodes the current encoding unit according to the motion vector information corresponding to the P-type point clouds. In the embodiment of the application, different motion vector information is adopted for encoding aiming at different types of point clouds in the current encoding unit, so that the encoding accuracy is improved.
The embodiment of the application does not limit the specific manner in S203-A of dividing the point cloud in the current coding unit into P classes of point cloud according to the classification information of the current coding unit.
In some embodiments, the classification information of the current coding unit may be a class identifier, for example, each point in the current coding unit includes a class identifier, so that the point cloud in the current coding unit may be classified into a P-class point cloud according to the class identifier.
In some embodiments, the classification information of the current coding unit includes a first height threshold and a second height threshold, and the first height threshold is greater than the second height threshold, where S203-a includes the following steps:
S203-A1, dividing the point cloud of the current coding unit into P-type point clouds according to the first height threshold and the second height threshold.
For example, point clouds with height values greater than a first height threshold in the current coding unit are classified into one type of point clouds, point clouds with height values between the first height threshold and a second height threshold in the current coding unit are classified into one type of point clouds, and point clouds with height values smaller than the second height threshold in the current coding unit are classified into one type of point clouds.
For another example, the point clouds with the height value of the current coding unit being less than or equal to the first height threshold and greater than or equal to the second height threshold are classified into first class point clouds, and the point clouds with the height value of the current coding unit being greater than the first height threshold or less than the second height threshold are classified into second class point clouds.
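The second division rule above (two classes) could be implemented as in this sketch, which assumes the height is stored in the z coordinate:

```python
import numpy as np

def split_by_height(points: np.ndarray, top_thr: float, bottom_thr: float):
    """points: (N, 3); returns (first class, second class) of point cloud."""
    z = points[:, 2]
    first = (z <= top_thr) & (z >= bottom_thr)  # heights within [Bottom_thr, Top_thr]
    return points[first], points[~first]
```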
S203-B, determining motion vector information corresponding to the P-type point cloud according to the motion vector information of the current coding unit.
According to the steps, after the point cloud in the current coding unit is divided into the P-type point cloud, the motion vector information corresponding to the P-type point cloud is determined according to the motion vector information of the current coding unit. For example, the motion vector information of the current coding unit is determined as the motion vector information of one type of point clouds in the P type of point clouds, and the motion vector information of other types of point clouds in the P type of point clouds can be a preset value.
In one example, the P-type point cloud includes the first-type point cloud and the second-type point cloud, and then the motion vector information of the current coding unit may be determined as the motion vector information of the second-type point cloud, where the motion vector information of the first-type point cloud is a preset value, for example, a zero vector.
Taking the vehicle-mounted point cloud as an example, the first class of point cloud can be understood as the road point cloud and the second class as the non-road point cloud. Because the road changes little, the non-road points are the research focus; therefore the motion vector information of the current coding unit is determined as the motion vector information of the non-road points, and the road points are predicted as static, i.e., zero motion, so the motion vector information of the road points is a zero vector.
S203-C, encoding the current encoding unit according to the motion vector information corresponding to the P-type point cloud.
According to the method, after the motion vector information corresponding to the P-type point cloud in the current coding unit is determined, the current coding unit is coded according to the motion vector information corresponding to the P-type point cloud.
The embodiment of the application does not limit the specific implementation mode of encoding the current encoding unit according to the motion vector information corresponding to the P-type point cloud.
In some embodiments, the encoding end can determine a reference encoding unit of the current encoding unit, perform motion compensation on the reference encoding unit according to the motion vector information of the P-type point cloud to obtain prediction information of the current encoding unit, and encode at least one of geometric information and attribute information of the current encoding unit according to the prediction information.
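Combining the illustrative helpers sketched earlier in this section (split_by_height and apply_global_motion), a hedged sketch of building the prediction unit for P = 2 classes, with road points predicted as static, could be:

```python
import numpy as np

def build_prediction(ref_points, top_thr, bottom_thr, R, t):
    """Motion-compensate the reference unit to obtain the prediction unit."""
    road, objects = split_by_height(ref_points, top_thr, bottom_thr)
    moved = apply_global_motion(objects, R, t)  # non-road points follow the global motion
    return np.vstack([road, moved])             # road points are kept static (zero motion)
```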
In one example, the geometric information of the current coding unit is encoded according to prediction information, which may be understood as a prediction unit of the current coding unit. Therefore, the space occupation condition of the current coding unit can be predicted according to the space occupation condition of the prediction unit, and the geometric information of the current coding unit is coded according to the predicted space occupation condition of the current coding unit, so that the geometric code stream of the current coding unit is obtained.
In another example, the attribute information of the current coding unit is encoded according to prediction information, which may be understood as a prediction unit of the current coding unit. Thus, for each point in the current coding unit, at least one reference point of the point is obtained in the prediction unit, and the attribute information of the point is predicted according to the attribute information of the at least one reference point, so as to obtain the attribute predicted value of the point. And then, determining the attribute residual value of the point according to the attribute predicted value and the attribute value of the point, and further encoding the attribute residual value of the point to form an attribute code stream.
It should be noted that the encoding end may also use other methods to encode the current coding unit according to at least one of the classification information and the motion vector information of the current coding unit, which is not limited in the embodiment of the present application.
From the above, the encoding end may write at least one of the first parameter and the second parameter into the code stream.
Alternatively, at least one of the first parameter and the second parameter may be stored in the form of an unsigned integer, denoted as u (v), representing the use of v bits to describe one parameter.
Optionally, at least one of the first parameter and the second parameter may be stored in the form of an unsigned exponential Golomb code, denoted ue(v): the value of the parameter is first converted into a v-bit binary (0/1) sequence by exponential Golomb coding and then written into the code stream.
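The ue(v) form mentioned here is the standard unsigned exponential Golomb code; the following encoder is the textbook construction, shown only for illustration:

```python
def ue(value: int) -> str:
    """Unsigned exponential Golomb code of value, as a 0/1 string."""
    code = bin(value + 1)[2:]            # binary form of value + 1
    return "0" * (len(code) - 1) + code  # prefix with len(code) - 1 zeros

assert ue(0) == "1"
assert ue(1) == "010"
assert ue(4) == "00101"
```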
In some embodiments, the encoding side writes at least one of the first parameter and the second parameter to a sequence header parameter set.
In one example, the first parameter indicates that the classification information is calculated once every several point cloud frames, and/or the second parameter indicates that the motion vector information is calculated once every several point cloud frames.
Illustratively, the first parameter and the second parameter are stored in the sequence header parameter set as shown in table 1.
In some embodiments, the encoding end writes at least one of the first parameter and the second parameter into the point cloud header information.
In one example, the first parameter indicates that the classification information of the i-th point cloud slice in a point cloud frame is calculated once every several point cloud frames, where i is a positive integer, and/or the second parameter indicates that the motion vector information of the i-th point cloud slice in a point cloud frame is calculated once every several point cloud frames.
In this example, the first parameter and the second parameter are stored in the point cloud header information as shown in table 2.
In one example, the first parameter is used to indicate that classification information is calculated once every several point cloud slices within one point cloud frame, and/or the second parameter is used to indicate that motion vector information is calculated once every several point cloud slices within one point cloud frame.
In this example, the first parameter and the second parameter are stored in the point cloud header information as shown in table 3.
In some embodiments, before determining the first parameter and the second parameter, the encoding end first determines a first flag inter_prediction_flag, which indicates whether inter-prediction encoding is performed; if the first flag inter_prediction_flag indicates that inter-prediction encoding is performed, at least one of the first parameter and the second parameter is determined.
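A minimal sketch of this flag-gated signalling, assuming a hypothetical bitstream reader `bs` with `read_flag()`/`read_ue()` helpers (these names are illustrative, not from the text):

```python
def parse_inter_prediction_params(bs):
    """Sketch of the gating above: the calculation periods are only
    signalled when inter-prediction coding is enabled."""
    params = {"inter_prediction_flag": bs.read_flag()}
    if params["inter_prediction_flag"]:
        params["first_parameter"] = bs.read_ue()   # classification period
        params["second_parameter"] = bs.read_ue()  # motion vector period
    return params
```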
The point cloud coding method provided by the embodiment of the application comprises the following steps: the encoding end determines at least one of a first parameter and a second parameter, where the first parameter is used to indicate the calculation period of the classification information and the second parameter is used to indicate the calculation period of the motion vector information; determines at least one of the classification information and the motion vector information of the current coding unit according to the at least one of the first parameter and the second parameter; and encodes the current coding unit according to the at least one of the classification information and the motion vector information of the current coding unit. In other words, in the embodiment of the application the classification information and the motion vector information are calculated periodically. Compared with calculating the classification information and the motion vector information once for every coding unit, this greatly reduces the number of calculations, shortens the encoding processing time, and improves the encoding efficiency.
It should be understood that fig. 4-6 are only examples of the present application and should not be construed as limiting the present application.
The preferred embodiments of the present application have been described in detail above with reference to the accompanying drawings, but the present application is not limited to the specific details of the above embodiments. Various simple modifications can be made to the technical solution of the present application within the scope of its technical concept, and all such simple modifications belong to the protection scope of the present application. For example, the specific features described in the above embodiments may be combined in any suitable manner; to avoid unnecessary repetition, the various possible combinations are not described further. As another example, the various embodiments of the present application may be combined in any way that does not depart from the spirit of the present application, and such combinations should likewise be regarded as the disclosure of the present application.
It should be further understood that, in the various method embodiments of the present application, the sequence numbers of the foregoing processes do not imply an order of execution; the order of execution should be determined by the functions and internal logic of the processes and should not constitute any limitation on the implementation of the embodiments of the present application. In addition, in the embodiments of the present application, the term "and/or" merely describes an association relationship between associated objects and indicates that three relationships may exist: A and/or B may mean that A exists alone, A and B both exist, or B exists alone. The character "/" herein generally indicates that the associated objects before and after it are in an "or" relationship.
The method embodiments of the present application are described above in detail with reference to fig. 4 to 6, and the apparatus embodiments of the present application are described below in detail with reference to fig. 7 to 10.
Fig. 7 is a schematic block diagram of a point cloud decoding apparatus provided by an embodiment of the present application.
As shown in fig. 7, the point cloud decoding apparatus 10 may include:
A determining unit 11, configured to decode a point cloud code stream and determine at least one of classification information and motion vector information of a current decoding unit, where the classification information is determined based on a first parameter and the motion vector information is determined based on a second parameter, the first parameter being used to indicate a calculation period of the classification information and the second parameter being used to indicate a calculation period of the motion vector information;
And a decoding unit 12 for decoding the current decoding unit according to at least one of the classification information and the motion vector information of the current decoding unit.
In some embodiments, the determining unit 11 is specifically configured to decode at least one of the classification information and the motion vector information of the current decoding unit from the point cloud code stream.
In some embodiments, the determining unit 11 is specifically configured to decode at least one of the first parameter and the second parameter from the point cloud code stream, determine classification information of the current decoding unit according to the first parameter, and/or determine motion vector information of the current decoding unit according to the second parameter.
In some embodiments, the determining unit 11 is specifically configured to determine a classification information calculation period corresponding to the current decoding unit according to the first parameter, and determine classification information of the current decoding unit according to the classification information calculation period.
In some embodiments, the determining unit 11 is specifically configured to decode the point cloud code stream if the current decoding unit is the first decoding unit in the classification information calculation period, so as to obtain the classification information of the current decoding unit.
In some embodiments, the determining unit 11 is specifically configured to determine, if the current decoding unit is a non-first decoding unit in the classification information calculation period, the classification information of the current decoding unit according to the decoded information or a default value.
In some embodiments, the first parameter indicates that the classification information is calculated once for every K decoding units, and the determining unit 11 is specifically configured to decode the point cloud code stream if the current decoding unit is an NK-th decoding unit in decoding order, so as to obtain the classification information of the current decoding unit, where K and N are both positive integers.
In some embodiments, the determining unit 11 is further configured to determine, if the current decoding unit is a non-NK decoding unit in decoding order, classification information of the current decoding unit according to the decoded information or a default value.
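A minimal sketch of this every-K-units rule, assuming 1-based unit indices in decoding order and a hypothetical `bs.read_classification()` helper standing in for decoding from the code stream:

```python
def classification_for_unit(n, K, bs, last_decoded, default=None):
    """Sketch: only every NK-th decoding unit carries classification
    information in the stream; other units reuse decoded information
    or fall back to a default value."""
    if n % K == 0:                       # an NK-th decoding unit
        return bs.read_classification()  # decode from the point cloud code stream
    if last_decoded is not None:
        return last_decoded              # reuse decoded information
    return default                       # otherwise use a default value
```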
In some embodiments, the determining unit 11 is specifically configured to determine the classification information of the current decoding unit according to the classification information of the M decoding units, where the M decoding units are M decoded decoding units located before the current decoding unit in the decoding order, and M is a positive integer.
In some embodiments, the determining unit 11 is specifically configured to determine, if M is equal to 1, the classification information of the decoding unit located immediately before the current decoding unit in decoding order as the classification information of the current decoding unit.
In some embodiments, the determining unit 11 is specifically configured to perform a preset process on the classification information of the M decoding units if the M is greater than 1, and determine the processing result as the classification information of the current decoding unit.
In some embodiments, the determining unit 11 is specifically configured to determine an average value of the classification information of the M decoding units as the classification information of the current decoding unit.
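A sketch of this fallback, treating the preset process as averaging, which is the option named above:

```python
import numpy as np

def classification_from_history(history, M):
    """Sketch: reuse the previous unit's classification information when
    M == 1, otherwise average the classification information of the M
    decoded units before the current one."""
    recent = history[-M:]   # M decoded units preceding the current unit
    if M == 1:
        return recent[0]
    return np.mean(np.asarray(recent, dtype=float), axis=0)
```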
In some embodiments, the classification information includes at least one of a first height threshold and a second height threshold, and the first parameter includes at least one of a first sub-parameter and a second sub-parameter;
The first sub-parameter is used for indicating the calculation period of the first height threshold, the second sub-parameter is used for indicating the calculation period of the second height threshold, and the first height threshold and the second height threshold are used for classifying point clouds in the current decoding unit.
In some embodiments, the determining unit 11 is specifically configured to determine a motion vector information calculation period corresponding to the current decoding unit according to the second parameter, and determine motion vector information of the current decoding unit according to the motion vector information calculation period.
In some embodiments, the determining unit 11 is specifically configured to decode the point cloud code stream if the current decoding unit is the first decoding unit in the motion vector information calculation period, so as to obtain the motion vector information of the current decoding unit.
In some embodiments, the determining unit 11 is specifically configured to determine, if the current decoding unit is a non-first decoding unit in the motion vector information calculation period, the motion vector information of the current decoding unit according to the decoded information or a default value.
In some embodiments, the second parameter indicates that motion vector information is calculated once for every R decoding units, and the determining unit 11 is specifically configured to decode the point cloud code stream if the current decoding unit is an NR-th decoding unit in decoding order, so as to obtain the motion vector information of the current decoding unit, where R and N are both positive integers.
In some embodiments, the determining unit 11 is further configured to determine the motion vector information of the current decoding unit according to the decoded information or the default value if the current decoding unit is a non-NR decoding unit in decoding order.
In some embodiments, the determining unit 11 is specifically configured to determine the motion vector information of the current decoding unit according to the motion vector information of S decoding units, where the S decoding units are S decoded decoding units located before the current decoding unit in a decoding order, and S is a positive integer.
In some embodiments, the determining unit 11 is specifically configured to determine, if S is equal to 1, the motion vector information of the decoding unit located immediately before the current decoding unit in decoding order as the motion vector information of the current decoding unit.
In some embodiments, the determining unit 11 is specifically configured to perform a preset process on the motion vector information of the S decoding units if S is greater than 1, and determine the processing result as the motion vector information of the current decoding unit.
In some embodiments, the determining unit 11 is specifically configured to determine an average value of the motion vector information of the S decoding units as the motion vector information of the current decoding unit.
In some embodiments, the determining unit 11 is further configured to determine a degree of change of the classification information according to the first parameter, and determine the motion vector information of the current decoding unit according to the degree of change.
In some embodiments, the determining unit 11 is specifically configured to determine the classification information of the current decoding unit according to the first parameter, and determine a degree of change between the classification information of the current decoding unit and the classification information of the reference decoding unit of the current decoding unit.
In some embodiments, the determining unit 11 is specifically configured to determine, as the motion vector information of the current decoding unit, a default value or motion vector information of a decoding unit preceding the current decoding unit in the decoding order if the degree of change is less than or equal to a first preset value.
In some embodiments, the determining unit 11 is specifically configured to decode the point cloud code stream if the degree of change is greater than a first preset value, so as to obtain motion vector information of the current decoding unit.
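A sketch of this change-degree gate; the measure of change (largest absolute component difference) is an assumption, since the text does not fix it, and `bs.read_motion_vector()` is a hypothetical helper:

```python
import numpy as np

def mv_by_change_degree(curr_cls, ref_cls, bs, prev_mv, first_preset_value):
    """Sketch: if the classification information of the current unit differs
    little from that of its reference unit, reuse the previous (or default)
    motion vector information; otherwise decode new information."""
    change = np.max(np.abs(np.asarray(curr_cls) - np.asarray(ref_cls)))
    if change <= first_preset_value:
        return prev_mv                 # reuse previous or default MV info
    return bs.read_motion_vector()     # decode from the point cloud code stream
```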
In some embodiments, the motion vector information includes at least one of a rotation matrix and an offset vector, and the second parameter includes at least one of a third sub-parameter and a fourth sub-parameter;
The third sub-parameter is used for indicating the calculation period of the rotation matrix, and the fourth sub-parameter is used for indicating the calculation period of the offset vector.
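Under the rotation-matrix/offset-vector decomposition above, motion compensation can be sketched as a rigid transform p' = R p + t; that the codec applies it exactly in this form is an assumption made for illustration:

```python
import numpy as np

def motion_compensate(ref_points, rotation, offset):
    """Sketch: map each point p of the reference unit to R @ p + t,
    with R the rotation matrix and t the offset vector.
    ref_points: (N, 3), rotation: (3, 3), offset: (3,)."""
    return ref_points @ rotation.T + offset
```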
In some embodiments, the determining unit 11 is further configured to decode the point cloud code stream if the current decoding unit is the first decoding unit in the decoding order, to obtain at least one of classification information and motion vector information of the current decoding unit.
In some embodiments, the decoding unit 12 is specifically configured to divide the point cloud in the current decoding unit into P-type point clouds according to the classification information of the current decoding unit, where P is a positive integer greater than 1, determine motion vector information corresponding to the P-type point clouds according to the motion vector information of the current decoding unit, and decode the current decoding unit according to the motion vector information corresponding to the P-type point clouds.
In some embodiments, the classification information includes a first height threshold and a second height threshold, and the first height threshold is greater than the second height threshold, and the decoding unit 12 is specifically configured to divide the point cloud in the current decoding unit into the point clouds of class P according to the first height threshold and the second height threshold.
In some embodiments, the P-type point clouds include a first-type point cloud and a second-type point cloud, and the decoding unit 12 is specifically configured to divide a point cloud having a height value in the current decoding unit that is less than or equal to the first height threshold and greater than or equal to the second height threshold into the first-type point clouds, and divide a point cloud having a height value in the current decoding unit that is greater than the first height threshold or less than the second height threshold into the second-type point clouds.
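A sketch of this two-class division (P = 2), taking the height value as the z coordinate, which is an assumption since the text does not name the axis:

```python
import numpy as np

def split_two_classes(points, first_height_threshold, second_height_threshold):
    """Sketch: points whose height lies within [second, first] form the
    first-type point cloud; all remaining points form the second type."""
    z = points[:, 2]
    in_band = (z <= first_height_threshold) & (z >= second_height_threshold)
    return points[in_band], points[~in_band]
```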
In some embodiments, the decoding unit 12 is specifically configured to determine a reference decoding unit of the current decoding unit, perform motion compensation on the reference decoding unit according to the motion vector information of the P-type point cloud to obtain prediction information of the current decoding unit, and decode at least one of geometry information and attribute information of the current decoding unit according to the prediction information.
In some embodiments, the current decoding unit is a current point cloud frame or a spatial region of the current point cloud frame.
In some embodiments, the determining unit 11 is specifically configured to decode the sequence header parameter set to obtain at least one of the first parameter and the second parameter.
In some embodiments, the first parameter is used to indicate that classification information is calculated once every several point cloud frames, and/or the second parameter is used to indicate that motion vector information is calculated once every several point cloud frames.
In some embodiments, the current decoding unit is a current point cloud slice, and the determining unit 11 is specifically configured to decode point cloud slice header information to obtain at least one of the first parameter and the second parameter.
In some embodiments, the first parameter is used to indicate that, for the i-th point cloud slice in the point cloud frames, the classification information of the i-th point cloud slice is calculated once every several point cloud frames, where i is a positive integer; and/or the second parameter is used to indicate that, for the i-th point cloud slice in the point cloud frames, the motion vector information of the i-th point cloud slice is calculated once every several point cloud frames.
In some embodiments, the first parameter is used to indicate that classification information is calculated once every several point cloud slices within one point cloud frame, and/or the second parameter is used to indicate that motion vector information is calculated once every several point cloud slices within one point cloud frame.
In some embodiments, the determining unit 11 is specifically configured to decode the point cloud code stream to obtain a first flag, where the first flag is used to indicate whether inter-prediction decoding is performed; and if the first flag indicates that inter-prediction decoding is performed, decode the point cloud code stream to obtain at least one of the first parameter and the second parameter.
It should be understood that the apparatus embodiments and the method embodiments may correspond to each other, and similar descriptions may refer to the method embodiments; to avoid repetition, they are not repeated here. Specifically, the point cloud decoding apparatus 10 shown in fig. 7 may correspond to the body performing the point cloud decoding method of the embodiment of the present application, and the foregoing and other operations and/or functions of each unit in the point cloud decoding apparatus 10 respectively implement the corresponding flows in the point cloud decoding method; for brevity, they are not described here again.
Fig. 8 is a schematic block diagram of a point cloud encoding apparatus provided by an embodiment of the present application.
As shown in fig. 8, the point cloud encoding apparatus 20 includes:
A first determining unit 21 for determining at least one of a first parameter for indicating a calculation period of the classification information and a second parameter for indicating a calculation period of the motion vector information;
A second determining unit 22 for determining at least one of classification information and motion vector information of the current encoding unit according to at least one of the first parameter and the second parameter;
and an encoding unit 23, configured to encode the current encoding unit according to at least one of the classification information and the motion vector information of the current encoding unit.
In some embodiments, the second determining unit 22 is specifically configured to determine the classification information of the current coding unit according to the first parameter, and/or determine the motion vector information of the current coding unit according to the second parameter.
In some embodiments, the second determining unit 22 is specifically configured to determine a classification information calculation period corresponding to the current coding unit according to the first parameter, and determine classification information of the current coding unit according to the classification information calculation period.
In some embodiments, the second determining unit 22 is specifically configured to identify the category of the point cloud in the current coding unit if the current coding unit is the first coding unit in the classification information calculation period, so as to obtain the classification information of the current coding unit.
In some embodiments, the second determining unit 22 is specifically configured to determine, if the current coding unit is the non-first coding unit in the classification information calculation period, the classification information of the current coding unit according to the encoded information or the default value.
In some embodiments, the encoding unit 23 is further configured to write the first parameter into a point cloud code stream, and if the current encoding unit is the first encoding unit in the classification information calculation period, write the classification information of the current encoding unit into the point cloud code stream.
In some embodiments, the encoding unit 23 is further configured to write the first parameter into the point cloud code stream, and skip writing the classification information of the current encoding unit into the point cloud code stream if the current encoding unit is the non-first encoding unit in the classification information calculation period.
In some embodiments, the first parameter indicates that classification information is calculated once for every K coding units, and the second determining unit 22 is specifically configured to identify the category of the point cloud in the current coding unit if the current coding unit is an NK-th coding unit in coding order, so as to obtain the classification information of the current coding unit, where K and N are both positive integers.
In some embodiments, the second determining unit 22 is further configured to determine, if the current coding unit is a non-NK coding unit in the coding order, the classification information of the current coding unit according to the encoded information or a default value.
In some embodiments, the encoding unit 23 is further configured to write the first parameter into a point cloud code stream, and if the current encoding unit is the NK-th encoding unit in the encoding order, write the classification information of the current encoding unit into the code stream.
In some embodiments, the encoding unit 23 is further configured to write the first parameter into a point cloud code stream, and if the current encoding unit is a non-NK encoding unit in the encoding order, skip writing the classification information of the current encoding unit into the code stream.
In some embodiments, the encoding unit 23 is further configured to write classification information of the current encoding unit into a point cloud code stream, and skip writing the first parameter into the point cloud code stream.
In some embodiments, the second determining unit 22 is specifically configured to determine the classification information of the current coding unit according to the classification information of M coding units, where the M coding units are M coded coding units located before the current coding unit in coding order, and M is a positive integer.
In some embodiments, the second determining unit 22 is specifically configured to determine, if M is equal to 1, the classification information of the coding unit located immediately before the current coding unit in coding order as the classification information of the current coding unit.
In some embodiments, the second determining unit 22 is specifically configured to perform a preset process on the classification information of the M coding units if the M is greater than 1, and determine the processing result as the classification information of the current coding unit.
In some embodiments, the second determining unit 22 is specifically configured to determine an average value of the classification information of the M coding units as the classification information of the current coding unit.
In some embodiments, the classification information includes at least one of a first height threshold and a second height threshold, and the first parameter includes at least one of a first sub-parameter and a second sub-parameter;
The first sub-parameter is used for indicating the calculation period of the first height threshold, the second sub-parameter is used for indicating the calculation period of the second height threshold, and the first height threshold and the second height threshold are used for classifying the current coding unit.
In some embodiments, the second determining unit 22 is specifically configured to determine a motion vector information calculation period corresponding to the current encoding unit according to the second parameter, and determine motion vector information of the current encoding unit according to the motion vector information calculation period.
In some embodiments, the second determining unit 22 is specifically configured to determine, if the current coding unit is the first coding unit in the motion vector information calculation period, the motion vector information of the current coding unit according to the reference coding unit of the current coding unit.
In some embodiments, the second determining unit 22 is specifically configured to determine, if the current coding unit is the non-first coding unit in the motion vector information calculation period, the motion vector information of the current coding unit according to the encoded information or the default value.
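For illustration only, a deliberately simple stand-in for the motion estimation from the reference coding unit, which the text leaves unspecified: the offset vector is taken as the centroid difference between the two units, and the rotation matrix is assumed to be the identity.

```python
import numpy as np

def estimate_motion(curr_points, ref_points):
    """Sketch: derive motion vector information (rotation, offset) from
    the reference coding unit via a centroid shift; a real encoder would
    use a proper motion search or registration instead."""
    offset = curr_points.mean(axis=0) - ref_points.mean(axis=0)
    rotation = np.eye(3)   # identity rotation assumed for this sketch
    return rotation, offset
```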
In some embodiments, the encoding unit 23 is further configured to write the second parameter into a point cloud code stream, and if the current encoding unit is the first encoding unit in the motion vector information calculation period, then write the motion vector information of the current encoding unit into the point cloud code stream.
In some embodiments, the encoding unit 23 is further configured to write the second parameter into the point cloud code stream, and skip writing the motion vector information of the current encoding unit into the point cloud code stream if the current encoding unit is the non-first encoding unit in the motion vector information calculation period.
In some embodiments, the second parameter indicates that motion vector information is calculated once for every R coding units, and the second determining unit 22 is specifically configured to determine, if the current coding unit is an NR-th coding unit in coding order, the motion vector information of the current coding unit according to the reference coding unit of the current coding unit, where R and N are both positive integers.
In some embodiments, the second determining unit 22 is further configured to determine, if the current coding unit is a non-NR coding unit in the coding order, motion vector information of the current coding unit according to the coded information or a default value.
In some embodiments, the encoding unit 23 is further configured to write the second parameter into a point cloud code stream, and if the current encoding unit is the NR-th encoding unit in the encoding order, write the motion vector information of the current encoding unit into the code stream.
In some embodiments, the encoding unit 23 is further configured to write the second parameter into the point cloud code stream, and if the current encoding unit is a non-NR encoding unit in the encoding order, skip writing the motion vector information of the current encoding unit into the code stream.
In some embodiments, the encoding unit 23 is further configured to write the motion vector information of the current encoding unit into a point cloud code stream, and skip writing the second parameter into the point cloud code stream.
In some embodiments, the second determining unit 22 is specifically configured to determine the motion vector information of the current coding unit according to the motion vector information of S coding units, where the S coding units are S coded coding units located before the current coding unit in the coding order, and the S is a positive integer.
In some embodiments, the second determining unit 22 is specifically configured to determine, if S is equal to 1, the motion vector information of the coding unit located immediately before the current coding unit in coding order as the motion vector information of the current coding unit.
In some embodiments, the second determining unit 22 is specifically configured to perform a preset process on the motion vector information of the S coding units if S is greater than 1, and determine the processing result as the motion vector information of the current coding unit.
In some embodiments, the second determining unit 22 is specifically configured to determine an average value of the motion vector information of the S coding units as the motion vector information of the current coding unit.
In some embodiments, the second determining unit 22 is further configured to determine a degree of change of the classification information of the different coding units according to the first parameter, and determine the motion vector information of the current coding unit according to the degree of change.
In some embodiments, the second determining unit 22 is specifically configured to determine, according to the first parameter, classification information of the current coding unit, and determine a degree of change between the classification information of the current coding unit and classification information of a reference coding unit of the current coding unit.
In some embodiments, the second determining unit 22 is specifically configured to determine, as the motion vector information of the current coding unit, a default value or motion vector information of a coding unit preceding the current coding unit in the coding sequence if the degree of change is less than or equal to a first preset value.
In some embodiments, the second determining unit 22 is specifically configured to determine, according to the reference coding unit of the current coding unit, the motion vector information of the current coding unit if the degree of change is greater than a first preset value.
In some embodiments, the encoding unit 23 is further configured to write the first parameter into a point cloud code stream, and skip writing the second parameter into the point cloud code stream.
In some embodiments, the motion vector information includes at least one of a rotation matrix and an offset vector, and the second parameter includes at least one of a third sub-parameter and a fourth sub-parameter;
The third sub-parameter is used for indicating the calculation period of the rotation matrix, and the fourth sub-parameter is used for indicating the calculation period of the offset vector.
In some embodiments, the encoding unit 23 is specifically configured to divide a point cloud in the current encoding unit into P-type point clouds according to classification information of the current encoding unit, where P is a positive integer greater than 1, determine motion vector information corresponding to the P-type point clouds according to motion vector information of the current encoding unit, and encode the current encoding unit according to the motion vector information corresponding to the P-type point clouds.
In some embodiments, the classification information includes a first height threshold and a second height threshold, and the first height threshold is greater than the second height threshold, and the encoding unit 23 is specifically configured to divide the point cloud in the current encoding unit into the point clouds of class P according to the first height threshold and the second height threshold.
In some embodiments, the P-type point clouds include a first-type point cloud and a second-type point cloud, and the encoding unit 23 is specifically configured to divide a point cloud having a height value in the current encoding unit that is less than or equal to the first height threshold and greater than or equal to the second height threshold into the first-type point clouds, and divide a point cloud having a height value in the current encoding unit that is greater than the first height threshold or less than the second height threshold into the second-type point clouds.
In some embodiments, the encoding unit 23 is specifically configured to perform motion compensation on a reference encoding unit of the current encoding unit according to the motion vector information of the P-type point cloud to obtain prediction information of the current encoding unit, and encode at least one of geometry information and attribute information of the current encoding unit according to the prediction information.
In some embodiments, the current coding unit is a current point cloud frame or a spatial region of the current point cloud frame.
In some embodiments, the encoding unit 23 is further configured to write at least one of the first parameter and the second parameter into a sequence header parameter set.
In some embodiments, the first parameter is used to indicate that classification information is calculated once every several point cloud frames, and/or the second parameter is used to indicate that motion vector information is calculated once every several point cloud frames.
In some embodiments, the current encoding unit is a current point cloud slice, and the encoding unit 23 is further configured to write at least one of the first parameter and the second parameter into point cloud slice header information.
In some embodiments, the first parameter is used to indicate that, for the i-th point cloud slice in the point cloud frames, the classification information of the i-th point cloud slice is calculated once every several point cloud frames, where i is a positive integer; and/or the second parameter is used to indicate that, for the i-th point cloud slice in the point cloud frames, the motion vector information of the i-th point cloud slice is calculated once every several point cloud frames.
In some embodiments, the first parameter is used to indicate that classification information is calculated once every several point cloud slices within one point cloud frame, and/or the second parameter is used to indicate that motion vector information is calculated once every several point cloud slices within one point cloud frame.
In some embodiments, the first determining unit 21 is further configured to determine a first flag, where the first flag is used to indicate whether inter-prediction encoding is performed, and determine at least one of the first parameter and the second parameter if the first flag indicates that inter-prediction encoding is performed.
It should be understood that the apparatus embodiments and the method embodiments may correspond to each other, and similar descriptions may refer to the method embodiments; to avoid repetition, they are not repeated here. Specifically, the point cloud encoding apparatus 20 shown in fig. 8 may correspond to the body performing the point cloud encoding method of the embodiment of the present application, and the foregoing and other operations and/or functions of each unit in the point cloud encoding apparatus 20 respectively implement the corresponding flows in the point cloud encoding method; for brevity, they are not described here again.
The apparatus and system of the embodiments of the present application are described above in terms of functional units with reference to the accompanying drawings. It should be understood that the functional units may be implemented in hardware, by instructions in software, or by a combination of hardware and software units. Specifically, each step of the method embodiments in the embodiments of the present application may be completed by integrated logic circuits of hardware in a processor and/or instructions in software form, and the steps of the methods disclosed in the embodiments of the present application may be directly performed by a hardware decoding processor or by a combination of hardware and software units in a decoding processor. Optionally, the software units may reside in a mature storage medium in the art, such as a random access memory, a flash memory, a read-only memory, a programmable read-only memory, an electrically erasable programmable memory, or a register. The storage medium is located in the memory, and the processor reads the information in the memory and completes the steps of the above method embodiments in combination with its hardware.
Fig. 9 is a schematic block diagram of an electronic device provided by an embodiment of the present application.
As shown in fig. 9, the electronic device 30 may be a point cloud decoding device or a point cloud encoding device according to an embodiment of the present application, and the electronic device 30 may include:
A memory 33 and a processor 32, the memory 33 being configured to store a computer program 34 and to transfer the computer program 34 to the processor 32. In other words, the processor 32 may call and run the computer program 34 from the memory 33 to implement the method of an embodiment of the present application.
For example, the processor 32 may be configured to perform the steps of the method 200 described above in accordance with instructions in the computer program 34.
In some embodiments of the present application, the processor 32 may include, but is not limited to:
A general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or another programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like.
In some embodiments of the present application, the memory 33 includes, but is not limited to:
Volatile memory and/or non-volatile memory. The non-volatile memory may be a read-only memory (ROM), a programmable ROM (PROM), an erasable PROM (EPROM), an electrically erasable PROM (EEPROM), or a flash memory. The volatile memory may be a random access memory (RAM), which serves as an external cache. By way of example and not limitation, many forms of RAM are available, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchlink DRAM (SLDRAM), and direct Rambus RAM (DR RAM).
In some embodiments of the application, the computer program 34 may be divided into one or more units, which are stored in the memory 33 and executed by the processor 32 to complete the method provided by the application. The one or more units may be a series of computer program instruction segments capable of accomplishing specific functions, the instruction segments describing the execution of the computer program 34 in the electronic device 30.
As shown in fig. 9, the electronic device 30 may further include:
A transceiver 33, the transceiver 33 being connectable to the processor 32 or the memory 33.
The processor 32 may control the transceiver 33 to communicate with other devices, and in particular, may send information or data to other devices or receive information or data sent by other devices. The transceiver 33 may include a transmitter and a receiver. The transceiver 33 may further include antennas, the number of which may be one or more.
It will be appreciated that the various components in the electronic device 30 are connected by a bus system that includes, in addition to a data bus, a power bus, a control bus, and a status signal bus.
Fig. 10 is a schematic block diagram of a point cloud codec system provided by an embodiment of the present application.
As shown in fig. 10, the point cloud encoding and decoding system 40 may include a point cloud encoder 41 and a point cloud decoder 42, wherein the point cloud encoder 41 is configured to perform the point cloud encoding method according to the embodiment of the present application, and the point cloud decoder 42 is configured to perform the point cloud decoding method according to the embodiment of the present application.
The application also provides a code stream which is generated according to the coding method.
The present application also provides a computer storage medium having stored thereon a computer program which, when executed by a computer, enables the computer to perform the method of the above-described method embodiments. Alternatively, embodiments of the present application also provide a computer program product comprising instructions which, when executed by a computer, cause the computer to perform the method of the method embodiments described above.
When implemented in software, the embodiments may be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer program instructions are loaded and executed on a computer, the flows or functions according to the embodiments of the application are produced in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable apparatus. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, the computer instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center by wired (e.g., coaxial cable, optical fiber, digital subscriber line (DSL)) or wireless (e.g., infrared, radio, microwave) means. The computer-readable storage medium may be any available medium that can be accessed by a computer, or a data storage device such as a server or data center that integrates one or more available media. The available medium may be a magnetic medium (e.g., a floppy disk, a hard disk, or a magnetic tape), an optical medium (e.g., a digital video disc (DVD)), or a semiconductor medium (e.g., a solid state disk (SSD)), etc.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the several embodiments provided by the present application, it should be understood that the disclosed systems, devices, and methods may be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative, e.g., the division of the units is merely a logical function division, and there may be additional divisions when actually implemented, e.g., multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be an indirect coupling or communication connection via some interfaces, devices or units, which may be in electrical, mechanical or other form.
The units described as separate units may or may not be physically separate, and units shown as units may or may not be physical units, may be located in one place, or may be distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment. For example, functional units in various embodiments of the application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit.
The foregoing is merely illustrative of the present application, and the present application is not limited thereto, and any person skilled in the art will readily recognize that variations or substitutions are within the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.
Claims (91)
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/CN2022/105000 WO2024011381A1 (en) | 2022-07-11 | 2022-07-11 | Point cloud encoding method and apparatus, point cloud decoding method and apparatus, device and storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN119497991A true CN119497991A (en) | 2025-02-21 |
Family
ID=89535242
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202280098080.1A Pending CN119497991A (en) | 2022-07-11 | 2022-07-11 | Point cloud encoding and decoding method, device, equipment and storage medium |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN119497991A (en) |
WO (1) | WO2024011381A1 (en) |
- 2022-07-11 CN CN202280098080.1A patent/CN119497991A/en active Pending
- 2022-07-11 WO PCT/CN2022/105000 patent/WO2024011381A1/en unknown
Also Published As
Publication number | Publication date |
---|---|
WO2024011381A1 (en) | 2024-01-18 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication ||