CN113141494A - 3D image processing method and device and 3D display terminal
- Publication number
- CN113141494A (application number CN202010073041.9A)
- Authority
- CN
- China
- Prior art keywords
- image
- parallax
- forming
- content
- difference
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/10—Processing, recording or transmission of stereoscopic or multi-view image signals
- H04N13/106—Processing image signals
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/10—Processing, recording or transmission of stereoscopic or multi-view image signals
- H04N13/194—Transmission of image signals
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/50—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
- H04N19/597—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding specially adapted for multi-view video sequence encoding
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Testing, Inspecting, Measuring Of Stereoscopic Televisions And Televisions (AREA)
- Processing Or Creating Images (AREA)
Abstract
The application relates to the technical field of 3D (three-dimensional) display, and discloses a 3D image processing method comprising the following steps: comparing at least two parallax images used to form a 3D image to obtain difference image content, which is used to form image content in the 3D image; determining the difference image content in one of the at least two parallax images for forming the 3D image, and determining the other of the at least two parallax images for forming the 3D image. According to the 3D image processing method, because the difference image content in one of the at least two parallax images is determined to be used for forming the 3D image, both parallax images do not need to be transmitted in full, so the data transmission amount is reduced and the transmission efficiency is improved. The application also discloses a 3D image processing device and a 3D display terminal.
Description
Technical Field
The present application relates to the field of 3D technologies, and for example, to a 3D image processing method and apparatus, and a 3D display terminal.
Background
Currently, when two parallax images are used for 3D display, the two parallax images need to be transmitted to the display side for 3D display.
In the process of implementing the embodiments of the present disclosure, it is found that at least the following problems exist in the related art:
both the two parallax images are transmitted, resulting in a large amount of transmission data and low transmission efficiency.
Disclosure of Invention
The following presents a simplified summary in order to provide a basic understanding of some aspects of the disclosed embodiments. This summary is not an extensive overview, nor is it intended to identify key or critical elements or to delineate the scope of the embodiments; rather, it serves as a prelude to the more detailed description presented later.
Embodiments of the disclosure provide a 3D image processing method and device and a 3D display terminal, to solve the technical problem that, when two parallax images are used for 3D display, both parallax images need to be transmitted, resulting in a large transmission data volume and low transmission efficiency.
The 3D image processing method provided by the embodiment of the disclosure comprises the following steps:
comparing at least two parallax images used to form a 3D image to obtain difference image content, which is used to form image content in the 3D image;
determining the difference image content in one of the at least two parallax images for forming the 3D image, and determining the other of the at least two parallax images for forming the 3D image.
In some embodiments, the comparing to obtain the difference image content may include:
comparing a first parallax image and a second parallax image of the at least two parallax images to obtain the difference image content of at least one of the first parallax image and the second parallax image.
In some embodiments, in the case that the difference image content of the first parallax image is obtained after the comparison, determining the difference image content of one of the at least two parallax images to be used for forming the 3D image may include: determining the difference image content in the first parallax image for forming the 3D image.
In some embodiments, in the case that the difference image content of the first parallax image and the difference image content of the second parallax image are obtained after the comparison, determining the difference image content of one parallax image of the at least two parallax images to be used for forming the 3D image may include:
the disparity image content of the first parallax image or the disparity image content of the second parallax image is selected for forming the 3D image.
In some embodiments, the manner of making the above selection may include:
selecting randomly; or
selecting the parallax image containing the least difference image content; or
selecting the parallax image containing the most difference image content.
In some embodiments, determining that another parallax image of the at least two parallax images is used to form the 3D image in the case where the difference image content in the first parallax image is determined to be used to form the 3D image may include: the second parallax image is determined for forming a 3D image.
In some embodiments, in a case where the difference image content of the first parallax image or the difference image content of the second parallax image is selected for forming the 3D image, determining another parallax image of the at least two parallax images for forming the 3D image may include:
determining that the parallax image, of the first parallax image and the second parallax image, whose difference image content was not selected is used for forming the 3D image.
In some embodiments, the difference image content in the one of the at least two parallax images used to form the 3D image may also be transmitted, and the other of the at least two parallax images used to form the 3D image may also be transmitted.
The 3D image processing method provided by the embodiment of the disclosure comprises the following steps:
acquiring one parallax image of at least two parallax images used for forming a 3D image, and acquiring the difference image content in another parallax image of the at least two parallax images used for forming the 3D image;
performing 3D display based on the one parallax image of the at least two parallax images used for forming the 3D image and the difference image content.
In some embodiments, performing 3D display may include:
integrating the other parallax image of the at least two parallax images used for forming the 3D image based on the difference image content and the one parallax image of the at least two parallax images used for forming the 3D image;
performing 3D display based on the one parallax image and the other parallax image.
The 3D image processing device provided by the embodiment of the disclosure comprises a processor and a memory which stores program instructions; the processor is configured to perform the 3D image processing method described above when executing the program instructions.
The 3D image processing apparatus provided by the embodiment of the present disclosure includes:
a comparison module configured to compare at least two parallax images for forming a 3D image to obtain difference image content for forming 3D image content in the 3D image;
a determination module configured to determine the difference image content in one of the at least two parallax images for forming the 3D image, and to determine another parallax image of the at least two parallax images for forming the 3D image.
In some embodiments, the comparison module may be configured to:
comparing a first parallax image and a second parallax image of the at least two parallax images to obtain the difference image content of at least one of the first parallax image and the second parallax image.
In some embodiments, the determining module may be configured to:
determining that the difference image content in the first parallax image is used for forming the 3D image, in the case that the difference image content of the first parallax image is obtained after the comparison by the comparison module;
or, alternatively,
selecting the difference image content of the first parallax image or the difference image content of the second parallax image for forming the 3D image, in the case that the difference image content of the first parallax image and the difference image content of the second parallax image are obtained after the comparison by the comparison module.
In some embodiments, in making the above selection, the determination module may be configured to:
selecting randomly; or
selecting the parallax image containing the least difference image content; or
selecting the parallax image containing the most difference image content.
In some embodiments, the determining module may be configured to:
determining a second parallax image for forming a 3D image in a case where it is determined that the difference image content in the first parallax image is for forming a 3D image;
or, alternatively,
in the case where the difference image content of the first parallax image or the difference image content of the second parallax image is selected for forming the 3D image, determining that the parallax image, of the first and second parallax images, whose difference image content was not selected is used for forming the 3D image.
In some embodiments, the apparatus may further include a sending module configured to:
transmit the difference image content in one of the at least two parallax images used to form the 3D image, and transmit the other of the at least two parallax images used to form the 3D image.
The 3D image processing apparatus provided by the embodiment of the present disclosure includes:
an acquisition module configured to acquire one of at least two parallax images for forming a 3D image and acquire a difference image content in another of the at least two parallax images for forming the 3D image;
and a display module configured to perform 3D display based on the one of the at least two parallax images used to form the 3D image and the difference image content.
In some embodiments, the display module may include:
a processing unit configured to integrate another parallax image of the at least two parallax images for forming the 3D image based on the difference image content and the one parallax image of the at least two parallax images for forming the 3D image;
and a display unit configured to perform 3D display based on the one parallax image and the other parallax image.
The 3D display terminal provided by the embodiment of the disclosure comprises the 3D image processing device.
The 3D image processing method, the device and the 3D display terminal provided by the embodiment of the disclosure can realize the following technical effects:
the difference image content in one parallax image of the at least two parallax images is determined to be used for forming the 3D image, so that the two parallax images do not need to be transmitted, the transmission data volume is reduced, and the transmission efficiency is improved.
The foregoing general description and the following description are exemplary and explanatory only and are not restrictive of the application.
Drawings
One or more embodiments are illustrated by way of example in the accompanying drawings, which are not limiting; in the drawings, elements having the same reference numerals are shown as like elements, wherein:
fig. 1 is a flowchart of a 3D image processing method provided by an embodiment of the present disclosure;
fig. 2 is a further flowchart of a 3D image processing method provided by an embodiment of the present disclosure;
fig. 3 is a further flowchart of a 3D image processing method provided by an embodiment of the present disclosure;
fig. 4 is a schematic structural diagram of a 3D image processing apparatus provided in an embodiment of the present disclosure;
fig. 5 is a schematic structural diagram of a 3D image processing apparatus provided in an embodiment of the present disclosure;
fig. 6 is a schematic structural diagram of a 3D image processing apparatus provided in an embodiment of the present disclosure;
fig. 7 is a schematic structural diagram of a 3D image processing apparatus provided in an embodiment of the present disclosure;
fig. 8 is a schematic structural diagram of a display module provided in an embodiment of the present disclosure;
fig. 9A, 9B, and 9C are respectively schematic structural diagrams of a 3D display terminal according to an embodiment of the present disclosure.
Reference numerals:
400: a 3D image processing device; 410: a processor; 420: a memory; 430: a communication interface; 440: a bus; 510: a comparison module; 520: a determination module; 530: a sending module; 600: a 3D image processing device; 610: an acquisition module; 620: a display module; 621: a processing unit; 622: a display unit; 700: a 3D display terminal.
Detailed Description
So that the manner in which the features and elements of the disclosed embodiments can be understood in detail, a more particular description of the disclosed embodiments, briefly summarized above, is given by reference to specific embodiments, some of which are illustrated in the appended drawings. In the following description, numerous specific details are set forth for purposes of explanation, in order to provide a thorough understanding of the disclosed embodiments. However, one or more embodiments may be practiced without these details. In other instances, well-known structures and devices may be shown in simplified form in order to simplify the drawings.
Referring to fig. 1, an embodiment of the present disclosure provides a 3D image processing method, including:
step 110: comparing at least two parallax images used to form a 3D image to obtain difference image content, which is used to form image content in the 3D image;
step 120: determining the difference image content in one of the at least two parallax images for forming the 3D image, and determining the other of the at least two parallax images for forming the 3D image.
In some embodiments, the comparing to obtain the difference image content may include:
comparing a first parallax image and a second parallax image of the at least two parallax images to obtain the difference image content of at least one of the first parallax image and the second parallax image.
In some embodiments, the difference image content may be image content that differs between two or more different parallax images. For example, when the image content at the same position of two different parallax images differs, the image content of the two parallax images at that position is determined as the difference image content. Alternatively, the difference image content may be image content of two different parallax images whose difference exceeds a threshold. For example, when the image content at the same position of two different parallax images differs and the difference exceeds a difference threshold, the image content of the two parallax images at that position is determined as the difference image content. The difference threshold may be a specific value, such as 10%, 20%, or 50%, or may be a range of values, such as between 10% and 50% inclusive.
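The threshold-based comparison described above can be sketched roughly as follows. This is a hypothetical Python illustration, not the patent's implementation: the function name `difference_content`, the relative-difference measure, and the representation of difference image content as a map from pixel positions to values are all assumptions made for clarity.

```python
# Hypothetical sketch: compare two parallax images pixel by pixel and keep
# only the positions whose relative difference exceeds a threshold, as the
# "difference image content". Images are toy nested lists of intensities.

def difference_content(img_a, img_b, threshold=0.1):
    """Return {(row, col): value} for pixels of img_a that differ from
    img_b by more than `threshold` (relative difference)."""
    diff = {}
    for r, (row_a, row_b) in enumerate(zip(img_a, img_b)):
        for c, (pa, pb) in enumerate(zip(row_a, row_b)):
            base = max(pa, pb, 1)  # avoid division by zero
            if abs(pa - pb) / base > threshold:
                diff[(r, c)] = pa  # keep img_a's content at this position
    return diff

left  = [[100, 100], [100, 200]]
right = [[100, 100], [100,  50]]
print(difference_content(left, right))  # {(1, 1): 200} — only (1, 1) differs
```

A real implementation would likely operate on image blocks or encoded regions rather than single pixels, but the principle — retaining only the positions where the parallax images disagree — is the same.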
In some embodiments, in the case that the difference image content of the first parallax image is obtained after the comparison, determining the difference image content of one parallax image of the at least two parallax images to be used for forming the 3D image may include: the disparity image content in the first disparity image is determined for forming a 3D image.
In some embodiments, in the case that the difference image content of the first parallax image and the difference image content of the second parallax image are obtained after the comparison, determining the difference image content of one of the at least two parallax images to be used for forming the 3D image may include:
selecting the difference image content of the first parallax image or the difference image content of the second parallax image for forming the 3D image.
In some embodiments, the above-mentioned selection manner may include:
selecting randomly; or
selecting the parallax image containing the least difference image content; or
selecting the parallax image containing the most difference image content.
In some embodiments, the amounts of difference image content contained in the different parallax images obtained by the comparison may differ. In this case, the parallax image containing the least difference image content can be selected, to reduce the amount of transmitted data as much as possible and improve transmission efficiency. Alternatively, the parallax image containing the most difference image content may be selected, which helps ensure high quality in the subsequently formed 3D image while still reducing the amount of transmitted data and improving transmission efficiency. Alternatively, the difference image content of a parallax image may be selected based on other conditions.
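The three selection strategies above (random, least content, most content) might be sketched like this. The function name `select_difference` and the dict-per-image representation of difference content are illustrative assumptions, not the patent's interfaces.

```python
# Hypothetical sketch: choose which parallax image's difference content to
# transmit. "least" minimizes data volume; "most" preserves more detail for
# the subsequently formed 3D image.
import random

def select_difference(contents, strategy="least"):
    """contents: list of {(row, col): value} dicts, one per parallax image.
    Returns the index of the chosen difference image content."""
    if strategy == "random":
        return random.randrange(len(contents))
    if strategy == "least":
        return min(range(len(contents)), key=lambda i: len(contents[i]))
    if strategy == "most":
        return max(range(len(contents)), key=lambda i: len(contents[i]))
    raise ValueError(f"unknown strategy: {strategy}")

contents = [{(0, 0): 10, (1, 1): 20}, {(1, 1): 30}]
print(select_difference(contents, "least"))  # 1 — the second dict is smaller
```

Here "amount of content" is approximated by the number of differing positions; a practical system might instead weigh the encoded byte size of each candidate.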
In some embodiments, determining that another parallax image of the at least two parallax images is used to form the 3D image in the case where the difference image content in the first parallax image is determined to be used to form the 3D image may include: the second parallax image is determined for forming a 3D image.
In some embodiments, in a case where the difference image content of the first parallax image or the difference image content of the second parallax image is selected for forming the 3D image, determining another parallax image of the at least two parallax images for forming the 3D image may include:
determining that the parallax image, of the first parallax image and the second parallax image, whose difference image content was not selected is used for forming the 3D image. In this way, it is possible to determine the difference image content in one of the two parallax images for forming the 3D image and to determine the other of the two parallax images for forming the 3D image.
In some embodiments, the difference image content in the one of the at least two parallax images used to form the 3D image may also be transmitted, and the other of the at least two parallax images used to form the 3D image may also be transmitted.
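The transmission saving claimed above can be illustrated with a toy payload: one full parallax image plus the other image's difference content, instead of two full images. `pack_payload` and the element-count comparison are hypothetical, not the patent's transmission format.

```python
# Hypothetical sketch: pack one full parallax image plus the other's
# difference content for transmission. Sizes here are bare element counts,
# just to illustrate the reduced transmission volume; the receiver can
# rebuild the missing image from these two pieces.

def pack_payload(full_image, diff_content):
    return {"full": full_image, "diff": diff_content}

left = [[100, 100], [100, 200]]
diff_right = {(1, 1): 50}  # the right image differs only at (1, 1)

payload = pack_payload(left, diff_right)
full_count = sum(len(row) for row in left)  # 4 pixels in one image
sent_count = full_count + len(diff_right)   # 4 + 1 = 5 elements sent
both_count = 2 * full_count                 # 8 elements if both images sent
print(sent_count, both_count)  # 5 8
```

The saving grows with image size whenever the two parallax views overlap substantially, which is the typical case for stereo pairs of the same scene.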
Referring to fig. 2, a 3D image processing method provided by the embodiment of the present disclosure includes:
step 210: acquiring one parallax image of at least two parallax images used for forming a 3D image, and acquiring the difference image content in another parallax image of the at least two parallax images used for forming the 3D image;
step 220: performing 3D display based on the one parallax image of the at least two parallax images used for forming the 3D image and the difference image content.
Referring to fig. 3, in some embodiments, performing 3D display may include:
step 310: integrating the other parallax image of the at least two parallax images used for forming the 3D image based on the difference image content and the one parallax image of the at least two parallax images used for forming the 3D image;
step 320: performing 3D display based on the one parallax image and the other parallax image.
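The integration step might look like the following sketch, assuming the difference content is a map from pixel positions to values (an illustrative representation chosen here, not taken from the patent).

```python
# Hypothetical sketch of the integration step: rebuild the second parallax
# image by copying the transmitted first image and overwriting only the
# positions carried in the difference image content.
import copy

def integrate(full_image, diff_content):
    other = copy.deepcopy(full_image)  # do not mutate the received image
    for (r, c), value in diff_content.items():
        other[r][c] = value
    return other

left = [[100, 100], [100, 200]]
diff_right = {(1, 1): 50}
print(integrate(left, diff_right))  # [[100, 100], [100, 50]]
```

Note this is the inverse of the comparison on the sending side: positions absent from the difference content are assumed identical in both views, so they are simply copied.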
In some embodiments, when 3D display is performed based on the two parallax images, the two parallax images may undergo pixel allocation on a 3D display screen of a 3D display terminal; for example, based on the viewer's eye positions (e.g., eyeball coordinates), the two parallax images are allocated to the pixels of the 3D display screen corresponding to those eye positions.
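The pixel-allocation example above could be sketched as a simple column interleaving keyed to a tracked eye coordinate. The parity rule, the function name, and the column-interleaved layout are assumptions for illustration; real autostereoscopic panels map subpixels to views according to their lenticular or barrier geometry.

```python
# Hypothetical sketch: interleave the two parallax images column by column
# on the 3D display, with the column parity chosen from the viewer's
# (tracked) horizontal eye position. Illustrative only.

def allocate_pixels(left, right, left_eye_x=0):
    parity = left_eye_x % 2  # columns with this parity show the left image
    rows, cols = len(left), len(left[0])
    return [
        [left[r][c] if c % 2 == parity else right[r][c] for c in range(cols)]
        for r in range(rows)
    ]

left  = [[1, 1], [1, 1]]
right = [[2, 2], [2, 2]]
print(allocate_pixels(left, right))  # [[1, 2], [1, 2]]
```

When the tracker reports the viewer has shifted by one viewing zone, the parity flips, and the same two images are re-allocated so each eye continues to receive its own view.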
In some embodiments, the 3D image processing apparatus provided by embodiments of the present disclosure includes a processor and a memory storing program instructions; the processor is configured to perform the 3D image processing method described above when executing the program instructions.
Referring to fig. 4, in some embodiments, the 3D image processing apparatus 400 described above may include:
a processor 410 and a memory 420, and may further include a communication interface 430 and a bus 440. The processor 410, the communication interface 430, and the memory 420 can communicate with each other through the bus 440. The communication interface 430 may be used for information transfer. The processor 410 may call logic instructions in the memory 420 to perform the 3D image processing method of the above-described embodiments.
Furthermore, the logic instructions in the memory 420 may be implemented as software functional units and, when sold or used as a stand-alone product, stored in a computer-readable storage medium.
The memory 420 serves as a computer-readable storage medium for storing software programs, computer-executable programs, such as program instructions/modules corresponding to the methods in the embodiments of the present disclosure. The processor 410 executes functional applications and data processing, i.e., implements the 3D image processing method in the above-described method embodiments, by executing program instructions/modules stored in the memory 420.
The memory 420 may include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required for at least one function; the storage data area may store data created according to the use of the terminal device, and the like. Further, memory 420 may include high speed random access memory and may also include non-volatile memory.
Referring to fig. 5, in some embodiments, a 3D image processing apparatus 400 provided by an embodiment of the present disclosure includes:
a comparison module 510 configured to compare at least two parallax images used to form the 3D image to obtain difference image content used to form 3D image content in the 3D image;
a determining module 520 configured to determine a disparity image content in one of the at least two disparity images for forming a 3D image and determine another disparity image in the at least two disparity images for forming a 3D image.
In some embodiments, the comparison module 510 may be configured to:
comparing a first parallax image and a second parallax image of the at least two parallax images to obtain the difference image content of at least one of the first parallax image and the second parallax image.
In some embodiments, the above-mentioned difference image content may be image content having a difference between two or more different parallax images, for example: when the image content of the same position of two different parallax images has a difference, the image content of the two different parallax images at the position is determined as the difference image content. Alternatively, the difference image content may be image content of two different parallax images, where the difference exceeds a threshold, for example: and determining the image content of the two different parallax images at the same position as the difference image content when the image content of the two different parallax images at the same position has a difference and the difference exceeds a difference threshold value. Alternatively, the difference threshold may be a specific value such as 10%, 20%, 50%, or may be a range of values such as 10% or more and 50% or less.
In some embodiments, the determining module 520 may be configured to:
in the case that the difference image content of the first parallax image is obtained after the comparison by the comparison module 510, determining that the difference image content in the first parallax image is used for forming a 3D image;
or, alternatively,
selecting the difference image content of the first parallax image or the difference image content of the second parallax image for forming the 3D image, when the difference image content of the first parallax image and the difference image content of the second parallax image are obtained after the comparison by the comparison module 510.
In some embodiments, the comparing module 510 may be a logic processing chip, a single chip, or the like, or include a logic circuit to implement the corresponding functions described above.
In some embodiments, in making the above selection, the determination module 520 may be configured to:
selecting randomly; or
selecting the parallax image containing the least difference image content; or
selecting the parallax image containing the most difference image content.
In some embodiments, the content of the difference images included in the different parallax images obtained by the comparison may be different. In this case, the parallax image including the least amount of the difference image content can be selected to reduce the amount of transmission data as much as possible and improve the transmission efficiency. Alternatively, the parallax image containing the most content of the difference image may be selected, which can ensure high quality in the subsequent formation of the 3D image while reducing the amount of transmission data and improving the transmission efficiency. Alternatively, the difference image content in the parallax image may also be selected based on other conditions.
In some embodiments, the determining module 520 may be configured to:
determining a second parallax image for forming a 3D image in a case where it is determined that the difference image content in the first parallax image is for forming a 3D image;
or, alternatively,
in the case where the difference image content of the first parallax image or the difference image content of the second parallax image is selected for forming the 3D image, determining that the parallax image, of the first and second parallax images, whose difference image content was not selected is used for forming the 3D image. In this way, it is possible to determine the difference image content in one of the two parallax images for forming the 3D image and to determine the other of the two parallax images for forming the 3D image.
In some embodiments, the determining module 520 may be a logic processing chip, a single chip, or the like, or include a logic circuit to implement the corresponding functions described above.
Referring to fig. 6, in some embodiments, a sending module 530 may be further included, configured to:
the method includes transmitting a disparity image content in one of at least two disparity images used to form the 3D image, and transmitting another one of the at least two disparity images used to form the 3D image.
In some embodiments, the sending module 530 may be a logic processing chip, a single chip, a signal transmitter, or the like, or include a logic circuit to implement the corresponding functions described above.
Referring to fig. 7, in some embodiments, a 3D image processing apparatus 600 provided by an embodiment of the present disclosure includes:
an obtaining module 610 configured to obtain one parallax image of the at least two parallax images used for forming the 3D image, and obtain a difference image content of another parallax image of the at least two parallax images used for forming the 3D image;
and a display module 620 configured to perform 3D display based on one of the at least two parallax images used to form the 3D image and the difference image content.
In some embodiments, the obtaining module 610 may be a logic processing chip, a single chip, a signal receiver, or the like, or include a logic circuit to implement the corresponding functions described above.
In some embodiments, the display module 620 may be a display screen, a display, etc., or include display circuitry to implement the respective functions described above.
Referring to fig. 8, in some embodiments, the display module 620 may include:
a processing unit 621 configured to integrate another parallax image of the at least two parallax images for forming the 3D image based on the difference image content and the one parallax image of the at least two parallax images for forming the 3D image;
and a display unit 622 configured to perform 3D display based on the one parallax image and the other parallax image.
In some embodiments, the processing unit 621 may be a logic processing chip, a single chip, or the like, or include a logic circuit to implement the corresponding functions described above.
In some embodiments, the display unit 622 may be a display screen, a display, etc., or include display circuitry to implement the respective functions described above.
In some embodiments, when 3D display is performed based on the two parallax images, the two parallax images may undergo pixel allocation on a 3D display screen of a 3D display terminal; for example, based on the viewer's eye positions (e.g., eyeball coordinates), the two parallax images are allocated to the pixels of the 3D display screen corresponding to those eye positions.
Referring to fig. 9A, a 3D display terminal 700 provided in an embodiment of the present disclosure includes the 3D image processing apparatus 400 described above.
Referring to fig. 9B, a 3D display terminal 700 provided in an embodiment of the present disclosure includes the 3D image processing apparatus 600 described above.
Referring to fig. 9C, a 3D display terminal 700 provided in an embodiment of the present disclosure includes the 3D image processing apparatus 400 and the 3D image processing apparatus 600 described above.
In some embodiments, the 3D display terminal 700 may include one or both of the 3D image processing apparatus 400 and the 3D image processing apparatus 600, as actual requirements dictate.
The 3D image processing method, the device, and the 3D display terminal provided by the embodiments of the disclosure can determine that the difference image content in one of at least two parallax images is used for forming the 3D image, so that both parallax images do not need to be transmitted in full; the transmission data volume is reduced and the transmission efficiency is improved.
The disclosed embodiments provide a computer-readable storage medium storing computer-executable instructions configured to perform the above-described 3D image processing method.
The disclosed embodiments provide a computer program product comprising a computer program stored on a computer-readable storage medium, the computer program comprising program instructions that, when executed by a computer, cause the computer to perform the above-described 3D image processing method.
The computer-readable storage medium described above may be a transitory computer-readable storage medium or a non-transitory computer-readable storage medium.
The computer-readable storage medium and the computer program product provided by the embodiments of the present disclosure can determine that the difference image content in one of at least two parallax images is used for forming a 3D image. Both parallax images therefore do not need to be transmitted in full, which reduces the amount of transmitted data and improves transmission efficiency.
The technical solutions of the embodiments of the present disclosure may be embodied in the form of a software product stored in a storage medium, including one or more instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to execute all or some of the steps of the methods of the embodiments of the present disclosure. The aforementioned storage medium may be a non-transitory storage medium, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disk, or any other medium capable of storing program code, and may also be a transitory storage medium.
The above description and drawings sufficiently illustrate embodiments of the disclosure to enable those skilled in the art to practice them. Other embodiments may incorporate structural, logical, electrical, process, and other changes. The examples merely typify possible variations. Individual components and functions are optional unless explicitly required, and the sequence of operations may vary. Portions and features of some embodiments may be included in or substituted for those of others. The scope of the disclosed embodiments includes the full ambit of the claims, as well as all available equivalents of the claims.

Although the terms "first," "second," etc. may be used in this application to describe various elements, these elements should not be limited by these terms, which are only used to distinguish one element from another. For example, a first element could be termed a second element, and, similarly, a second element could be termed a first element, without changing the meaning of the description, so long as all occurrences of the "first element" are renamed consistently and all occurrences of the "second element" are renamed consistently. The first element and the second element are both elements, but they may not be the same element.

Furthermore, the words used in the specification are words of description only and are not intended to limit the claims. As used in the description of the embodiments and the claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. Similarly, the term "and/or" as used in this application is meant to encompass any and all possible combinations of one or more of the associated listed items.
Furthermore, the terms "comprises" and/or "comprising," when used in this application, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. Without further limitation, an element preceded by "comprising a(n) ..." does not exclude the presence of other like elements in a process, method, or device that comprises that element. In this document, each embodiment may be described with emphasis on its differences from the other embodiments, and for the same or similar parts the embodiments may be referred to one another. For the methods, products, and the like disclosed in the embodiments, where they correspond to the method sections disclosed herein, reference may be made to the description of those method sections.
Those of skill in the art would appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software may depend upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the disclosed embodiments. It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the embodiments disclosed herein, the disclosed methods, products (including but not limited to devices, apparatuses, etc.) may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, a division of a unit may be merely a division of a logical function, and an actual implementation may have another division, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form. Units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to implement the present embodiment. In addition, each functional unit in the embodiments of the present disclosure may be integrally provided, or each unit may exist alone physically, or two or more units may be integrated into one unit.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. In the description corresponding to the flowcharts and block diagrams in the figures, operations or steps corresponding to different blocks may also occur in different orders than disclosed in the description, and sometimes there is no specific order between the different operations or steps. For example, two sequential operations or steps may in fact be executed substantially concurrently, or they may sometimes be executed in the reverse order, depending upon the functionality involved. Each block of the block diagrams and/or flowchart illustrations, and combinations of blocks in the block diagrams and/or flowchart illustrations, can be implemented by special purpose hardware-based systems that perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
Claims (18)
1. A 3D image processing method, characterized by comprising:
comparing at least two parallax images used to form a 3D image to obtain difference image content used to form 3D image content in the 3D image;
determining that the difference image content in one of the at least two parallax images is used for forming the 3D image, and determining that another parallax image of the at least two parallax images is used for forming the 3D image.
2. The method of claim 1, wherein the comparing to obtain the difference image content comprises:
comparing a first parallax image and a second parallax image in the at least two parallax images to obtain the difference image content of at least one of the first parallax image and the second parallax image.
3. The method of claim 2,
determining that the difference image content in one of the at least two parallax images is used for forming the 3D image when the difference image content of the first parallax image is obtained after the comparing, including:
determining the difference image content in the first parallax image for forming the 3D image;
or, alternatively,
determining that the difference image content in one of the at least two parallax images is used for forming the 3D image when the difference image content of the first parallax image and the difference image content of the second parallax image are obtained after the comparing, including:
selecting the difference image content of the first parallax image or the difference image content of the second parallax image for forming the 3D image.
4. The method of claim 3, wherein the selecting comprises:
randomly selecting; or
selecting the parallax image containing the least difference image content; or
selecting the parallax image containing the most difference image content.
5. The method according to claim 3 or 4,
determining that another parallax image of the at least two parallax images is used to form the 3D image if it is determined that the difference image content in the first parallax image is used to form the 3D image, comprising:
determining the second parallax image for forming the 3D image;
or, alternatively,
in a case where the difference image content of the first parallax image or the difference image content of the second parallax image is selected for forming the 3D image, determining that another parallax image of the at least two parallax images is used for forming the 3D image includes:
determining that the parallax image, of the first parallax image and the second parallax image, whose difference image content is not selected is used for forming the 3D image.
6. The method of any of claims 1 to 5, further comprising:
transmitting the difference image content in one of the at least two parallax images used for forming the 3D image, and transmitting the other parallax image of the at least two parallax images used for forming the 3D image.
7. A 3D image processing method, characterized by comprising:
acquiring one parallax image of at least two parallax images for forming a 3D image, and acquiring the difference image content in another parallax image of the at least two parallax images for forming the 3D image;
and performing 3D display based on the one parallax image of the at least two parallax images for forming the 3D image and the difference image content.
8. The method of claim 7, wherein said performing 3D display comprises:
integrating another parallax image of the at least two parallax images used to form the 3D image based on the difference image content and one parallax image of the at least two parallax images used to form the 3D image;
and performing 3D display based on the one parallax image and the other parallax image.
9. A 3D image processing apparatus comprising a processor and a memory storing program instructions, characterized in that the processor is configured to perform the method of any of claims 1 to 6 or any of claims 7 to 8 when executing the program instructions.
10. A 3D image processing apparatus characterized by comprising:
a comparison module configured to compare at least two parallax images for forming a 3D image to obtain difference image content for forming 3D image content in the 3D image;
a determination module configured to determine that the difference image content in one of the at least two parallax images is used for forming the 3D image and to determine that another parallax image of the at least two parallax images is used for forming the 3D image.
11. The apparatus of claim 10, wherein the comparison module is configured to:
comparing a first parallax image and a second parallax image in the at least two parallax images to obtain the difference image content of at least one of the first parallax image and the second parallax image.
12. The apparatus of claim 11, wherein the determination module is configured to:
determining that the difference image content in the first parallax image is used for forming the 3D image when the difference image content of the first parallax image is obtained after the comparison by the comparison module;
or, alternatively,
when the comparison module obtains the difference image content of the first parallax image and the difference image content of the second parallax image after the comparison, selecting the difference image content of the first parallax image or the difference image content of the second parallax image for forming the 3D image.
13. The apparatus of claim 12, wherein in making the selection, the determination module is configured to:
randomly selecting; or
selecting the parallax image containing the least difference image content; or
selecting the parallax image containing the most difference image content.
14. The apparatus of claim 12 or 13, wherein the determination module is configured to:
determining the second parallax image to be used for forming the 3D image in a case where it is determined that the difference image content in the first parallax image is used for forming the 3D image;
or, alternatively,
in a case where the difference image content of the first parallax image or the difference image content of the second parallax image is selected for forming the 3D image, it is determined that the parallax image of the first parallax image and the second parallax image, from which the difference image content is not selected, is used for forming the 3D image.
15. The apparatus of any one of claims 10 to 14, further comprising a transmitting module configured to:
transmitting the difference image content in one of the at least two parallax images used for forming the 3D image, and transmitting the other parallax image of the at least two parallax images used for forming the 3D image.
16. A 3D image processing apparatus characterized by comprising:
an acquisition module configured to acquire one parallax image of at least two parallax images for forming a 3D image and to acquire the difference image content in another parallax image of the at least two parallax images for forming the 3D image;
a display module configured to perform 3D display based on the one parallax image of the at least two parallax images for forming the 3D image and the difference image content.
17. The apparatus of claim 16, wherein the display module comprises:
a processing unit configured to integrate the other parallax image of the at least two parallax images used for forming the 3D image based on the difference image content and the one parallax image of the at least two parallax images used for forming the 3D image;
a display unit configured to perform 3D display based on the one parallax image and the other parallax image.
18. A 3D display terminal, characterized in that it comprises the apparatus of claim 9, or of any one of claims 10 to 15, or of any one of claims 16 to 17.
Priority Applications (3)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202010073041.9A CN113141494A (en) | 2020-01-20 | 2020-01-20 | 3D image processing method and device and 3D display terminal |
| PCT/CN2021/071702 WO2021147754A1 (en) | 2020-01-20 | 2021-01-14 | 3d image processing method and device, and 3d display terminal |
| TW110101860A TW202130167A (en) | 2020-01-20 | 2021-01-18 | 3D image processing method and device, and 3D display terminal |
Applications Claiming Priority (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202010073041.9A CN113141494A (en) | 2020-01-20 | 2020-01-20 | 3D image processing method and device and 3D display terminal |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| CN113141494A true CN113141494A (en) | 2021-07-20 |
Family
ID=76809205
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN202010073041.9A Pending CN113141494A (en) | 2020-01-20 | 2020-01-20 | 3D image processing method and device and 3D display terminal |
Country Status (3)
| Country | Link |
|---|---|
| CN (1) | CN113141494A (en) |
| TW (1) | TW202130167A (en) |
| WO (1) | WO2021147754A1 (en) |
Citations (4)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN101453662A (en) * | 2007-12-03 | 2009-06-10 | 华为技术有限公司 | Stereo video communication terminal, system and method |
| CN102984548A (en) * | 2011-09-05 | 2013-03-20 | 中国移动通信集团公司 | 3D video coding transmission method and apparatus |
| CN107396082A (en) * | 2017-07-14 | 2017-11-24 | 歌尔股份有限公司 | A kind for the treatment of method and apparatus of view data |
| WO2018115841A1 (en) * | 2016-12-23 | 2018-06-28 | Sony Interactive Entertainment Inc. | Image data encoding and decoding |
Family Cites Families (4)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| KR101875615B1 (en) * | 2010-09-01 | 2018-07-06 | 엘지전자 주식회사 | Method and apparatus for processing and receiving digital broadcast signal for 3-dimensional display |
| US9485494B1 (en) * | 2011-04-10 | 2016-11-01 | Nextvr Inc. | 3D video encoding and decoding methods and apparatus |
| CN107318027B (en) * | 2012-12-27 | 2020-08-28 | 日本电信电话株式会社 | Image encoding/decoding method, image encoding/decoding device, and image encoding/decoding program |
| US9930363B2 (en) * | 2013-04-12 | 2018-03-27 | Nokia Technologies Oy | Harmonized inter-view and view synthesis prediction for 3D video coding |
- 2020-01-20: CN CN202010073041.9A patent/CN113141494A/en (active, Pending)
- 2021-01-14: WO PCT/CN2021/071702 patent/WO2021147754A1/en (not_active, Ceased)
- 2021-01-18: TW TW110101860 patent/TW202130167A/en (unknown)
Also Published As
| Publication number | Publication date |
|---|---|
| TW202130167A (en) | 2021-08-01 |
| WO2021147754A1 (en) | 2021-07-29 |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | PB01 | Publication | |
| | SE01 | Entry into force of request for substantive examination | |
| | RJ01 | Rejection of invention patent application after publication | Application publication date: 20210720 |