CN113840108A - Video image processing method and device, camera equipment and readable storage medium - Google Patents
Video image processing method and device, camera equipment and readable storage medium
- Publication number
- CN113840108A
- Authority
- CN
- China
- Prior art keywords
- image
- yuv
- pixel row
- data
- original
- Prior art date
- Legal status: Pending
Classifications
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N5/00—Details of television systems
- H04N5/76—Television signal recording
- H04N5/91—Television signal processing therefor
Abstract
The application provides a video mirroring method and apparatus, an image capturing device, and a readable storage medium, and relates to the field of image processing. For each frame of original to-be-processed image recorded by the image capturing device, the YUV image data of that frame are swapped about the target central axis of the image in the horizontal direction to obtain target YUV data corresponding to the frame; the target YUV data are then expressed as an image, yielding a target mirror image with a mirrored effect that corresponds to the original to-be-processed image. This achieves fast mirror conversion during mirrored video recording, reduces the probability of real-time recording stutter or large recording delay on the image capturing device, and enhances mirrored-video recording performance.
Description
Technical Field
The present application relates to the field of image processing, and in particular, to a video mirroring method and apparatus, an image capturing device, and a readable storage medium.
Background
With the continuous development of Internet technology, image capturing devices (e.g., smartphones and cameras) bring great convenience to users. A user can record video with such a device to preserve the information the user wants to keep, and during recording the user usually expects the recorded video to match the picture content shown in the video preview.
In practice, however, some image capturing devices record video whose picture content is the opposite of the video preview. For example, when a user records video with the front camera of a smartphone, the recorded video is by default the mirror opposite of the preview picture. To ensure that the video finally output by the image capturing apparatus matches the picture content of the preview, the apparatus usually needs a mirrored video recording function. In the current schemes on the market, the mirror effect is achieved by flipping each recorded frame during recording. Because the mirror image is generated slowly, such schemes often cause real-time recording stutter or large recording delay on the image capturing apparatus, resulting in poor mirrored-video recording performance.
Disclosure of Invention
In view of the above, an object of the present application is to provide a video mirroring method and apparatus, an image capturing device, and a readable storage medium, which can increase the mirror conversion rate during mirrored video recording, reduce the probability of real-time recording stutter or large recording delay, and enhance mirrored-video recording performance.
In order to achieve the above purpose, the embodiments of the present application employ the following technical solutions:
in a first aspect, an embodiment of the present application provides a video mirroring method, which is applied to an image capturing apparatus, and the method includes:
acquiring each frame of original image to be processed recorded by the camera equipment;
extracting YUV image data of each original image to be processed aiming at each frame of original image to be processed;
exchanging data of the YUV image data of the original image to be processed with respect to a target central axis of the original image to be processed in the horizontal direction to obtain corresponding target YUV data;
and performing image expression on the target YUV data to obtain a target mirror image corresponding to the original image to be processed.
In an optional embodiment, the data exchange of the YUV image data of the original image to be processed with respect to the target central axis of the original image to be processed in the horizontal direction includes:
dividing the original image to be processed into a first image area and a second image area by using the target central axis;
sequentially determining, for each first pixel row in the first image region, the second pixel row in the second image region that is symmetric to the first pixel row about the target central axis;
and swapping, within the YUV image data, the YUV data of each pair of positionally symmetric first and second pixel rows.
In an optional embodiment, the swapping, within the YUV image data, of the YUV data of the positionally symmetric first and second pixel rows includes:
sequentially calling a Hexagon Vector Extensions (HVX) instruction, in the arrangement order of the first pixel rows of the first image region, to extract at least one YUV vector from the YUV image data of each first pixel row and of the second pixel row symmetric to it about the target central axis, wherein each YUV vector contains the YUV data of a plurality of sequentially arranged pixel points of the corresponding pixel row;
and swapping the contents of the extracted YUV vectors of the positionally symmetric first and second pixel rows according to the correspondence of pixel positions.
In an optional embodiment, the data exchange of the YUV image data of the original image to be processed with respect to the target central axis of the original image to be processed in the horizontal direction includes:
dividing the original image to be processed into a plurality of third image regions and a plurality of fourth image regions, wherein each third image region has one fourth image region symmetric to it about the target central axis;
for each group of positionally symmetric third and fourth image regions, calling a thread to determine, for each third pixel row in the third image region, the fourth pixel row in the fourth image region that is symmetric to it about the target central axis;
and calling the thread to swap, within the YUV image data, the YUV data of each determined pair of positionally symmetric third and fourth pixel rows.
In an optional implementation manner, the calling of the thread to swap, within the YUV image data, the YUV data of the determined positionally symmetric third and fourth pixel rows includes:
calling the thread to extract from the YUV image data, based on a Hexagon Vector Extensions (HVX) instruction, at least one YUV vector for each of the positionally symmetric third and fourth pixel rows assigned to the thread;
and calling the thread to swap, according to the correspondence of pixel positions, the contents of the YUV vectors of the positionally symmetric third and fourth pixel rows.
In a second aspect, an embodiment of the present application provides a video image processing apparatus, which is applied to an image capturing device, and includes:
the image acquisition module is used for acquiring each frame of original image to be processed recorded by the camera equipment;
the YUV extraction module is used for extracting, for each frame of original image to be processed, the YUV image data of that original image to be processed;
the YUV swapping module is used for swapping the YUV image data of the original image to be processed about the target central axis of the original image to be processed in the horizontal direction to obtain the corresponding target YUV data;
and the mirror image expression module is used for carrying out image expression on the target YUV data to obtain a target mirror image corresponding to the original image to be processed.
In an optional implementation manner, the YUV swapping module includes:
the image dividing submodule is used for dividing the original image to be processed into a first image area and a second image area by using the target central axis;
a pixel row determination submodule, configured to determine, for each first pixel row in the first image region in turn, the second pixel row in the second image region that is symmetric to the first pixel row about the target central axis;
and a YUV swapping sub-module, configured to swap, within the YUV image data, the YUV data of the positionally symmetric first and second pixel rows.
In an optional embodiment, the manner in which the YUV swapping sub-module swaps, within the YUV image data, the YUV data of the positionally symmetric first and second pixel rows includes:
sequentially calling a Hexagon Vector Extensions (HVX) instruction, in the arrangement order of the first pixel rows of the first image region, to extract at least one YUV vector from the YUV image data of each first pixel row and of the second pixel row symmetric to it about the target central axis, wherein each YUV vector contains the YUV data of a plurality of sequentially arranged pixel points of the corresponding pixel row;
and swapping the contents of the extracted YUV vectors of the positionally symmetric first and second pixel rows according to the correspondence of pixel positions.
In an optional implementation manner, the YUV swapping module includes:
the image dividing submodule is used for dividing the original image to be processed into a plurality of third image regions and a plurality of fourth image regions, wherein each third image region has one fourth image region symmetric to it about the target central axis;
the pixel row determining submodule is used for calling, for each group of positionally symmetric third and fourth image regions, a thread to determine, for each third pixel row in the third image region, the fourth pixel row in the fourth image region that is symmetric to it about the target central axis;
and the YUV swapping sub-module is used for calling the thread to swap, within the YUV image data, the YUV data of each determined pair of positionally symmetric third and fourth pixel rows.
In an optional implementation manner, the manner in which the YUV swapping sub-module invokes the thread to swap, within the YUV image data, the YUV data of the determined positionally symmetric third and fourth pixel rows includes:
calling the thread to extract from the YUV image data, based on a Hexagon Vector Extensions (HVX) instruction, at least one YUV vector for each of the positionally symmetric third and fourth pixel rows assigned to the thread;
and calling the thread to swap, according to the correspondence of pixel positions, the contents of the YUV vectors of the positionally symmetric third and fourth pixel rows.
In a third aspect, an embodiment of the present application provides an image capturing apparatus, including a processor and a memory, where the memory stores machine executable instructions executable by the processor, and the processor may execute the machine executable instructions to implement the video mirroring processing method according to any one of the foregoing embodiments.
In a fourth aspect, an embodiment of the present application provides a readable storage medium, on which a computer program is stored, and when the computer program is executed by a processor, the method for processing video images according to any one of the foregoing embodiments is implemented.
The beneficial effects of the embodiment of the application are that:
For each frame of original to-be-processed image recorded by the image capturing device, the YUV image data of that frame are swapped about the target central axis of the image in the horizontal direction to obtain target YUV data corresponding to the frame; the target YUV data are then expressed as an image, yielding a target mirror image with a mirrored effect that corresponds to the original to-be-processed image. This achieves fast mirror conversion during mirrored video recording, reduces the probability of real-time recording stutter or large recording delay on the image capturing device, and enhances mirrored-video recording performance.
In order to make the aforementioned objects, features and advantages of the present application more comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings required in the embodiments are briefly described below. It should be understood that the following drawings only illustrate some embodiments of the present application and therefore should not be considered as limiting the scope; for those skilled in the art, other related drawings can be obtained from these drawings without inventive effort.
Fig. 1 is a schematic structural composition diagram of an image pickup apparatus provided in an embodiment of the present application;
fig. 2 is a schematic flowchart of a video image processing method according to an embodiment of the present application;
FIG. 3 is a flowchart illustrating one of the sub-steps included in step S230 of FIG. 2;
FIG. 4 is a second schematic flowchart of the sub-steps included in step S230 in FIG. 2;
fig. 5 is a schematic diagram illustrating a module composition of a video image processing apparatus according to an embodiment of the present application;
fig. 6 is a schematic diagram of a module composition of the YUV swapping module in fig. 5.
Reference numerals: 10-image pickup apparatus; 11-memory; 12-processor; 13-communication unit; 14-display unit; 15-camera; 100-video image processing apparatus; 110-image acquisition module; 120-YUV extraction module; 130-YUV swapping module; 140-mirror image expression module; 131-image partitioning sub-module; 132-pixel row determination sub-module; 133-YUV swapping sub-module.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some embodiments of the present application, but not all embodiments. The components of the embodiments of the present application, generally described and illustrated in the figures herein, can be arranged and designed in a wide variety of different configurations.
Thus, the following detailed description of the embodiments of the present application, presented in the accompanying drawings, is not intended to limit the scope of the claimed application, but is merely representative of selected embodiments of the application. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, it need not be further defined and explained in subsequent figures.
In the description of the present application, it is to be understood that the terms "first," "second," "third," and the like are used merely for distinguishing between descriptions and are not intended to indicate or imply relative importance.
In the description of the present application, it is further noted that, unless expressly stated or limited otherwise, the terms "disposed," "mounted," "connected," and "coupled" are to be construed broadly, e.g., as a fixed connection, a removable connection, or an integral connection; as a mechanical connection or an electrical connection; as a direct connection, an indirect connection through an intervening medium, or an internal communication between two elements. The specific meaning of the above terms in the present application can be understood by those of ordinary skill in the art on a case-by-case basis.
Some embodiments of the present application will be described in detail below with reference to the accompanying drawings. The embodiments described below and the features of the embodiments can be combined with each other without conflict.
Referring to fig. 1, fig. 1 is a schematic structural composition diagram of an image capturing apparatus 10 according to an embodiment of the present disclosure. In this embodiment of the present application, the camera device 10 may be configured to implement video recording, perform fast mirror image processing on a recorded video image in some video recording processes, implement a fast mirror image conversion function in a mirror image video recording process, reduce a probability of a real-time video jam or a large video delay, and enhance the mirror image video recording performance. The image capturing device 10 may be, but is not limited to, a smart phone including a front camera, a tablet computer, a smart camera, a personal computer, and the like.
In the present embodiment, the image pickup apparatus 10 includes a video image processing device 100, a memory 11, a processor 12, a communication unit 13, a display unit 14, and a camera 15. The memory 11, the processor 12, the communication unit 13, the display unit 14 and the camera 15 are electrically connected to each other directly or indirectly to realize data transmission or interaction. For example, the memory 11, the processor 12, the communication unit 13, the display unit 14 and the camera 15 may be electrically connected to each other through one or more communication buses or signal lines.
In this embodiment, the memory 11 may be used for storing a program, and the processor 12 may execute the program accordingly after receiving an execution instruction. The memory 11 may be, but is not limited to, a Random Access Memory (RAM), a Read-Only Memory (ROM), a Programmable Read-Only Memory (PROM), an Erasable Programmable Read-Only Memory (EPROM), an Electrically Erasable Programmable Read-Only Memory (EEPROM), and the like.
In this embodiment, the processor 12 may be an integrated circuit chip having signal processing capabilities. The processor 12 may be a general-purpose processor, including a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), a Network Processor (NP), a Digital Signal Processor (DSP), and the like. The general-purpose processor may be a microprocessor, or the processor may be any conventional processor that can implement or execute the methods, steps, and logical blocks disclosed in the embodiments of the present application. In one implementation of this embodiment, the processor 12 is a Hexagon compute Digital Signal Processor (CDSP).
In this embodiment, the communication unit 13 is configured to establish a communication connection between the image capturing apparatus 10 and another electronic apparatus via a network, and perform data interaction via the network.
In this embodiment, the display unit 14 includes a display screen, and the image capturing apparatus 10 may implement a video recording preview function through the display screen included in the display unit 14, and may also play the recorded video through the display screen included in the display unit 14.
In this embodiment, the camera 15 is used to implement the photographing function and the video recording function of the image capturing apparatus 10. In one implementation of this embodiment, the camera 15 includes a front-facing camera whose sensor orientation is 90° or 270°, and the user can take selfies through the front-facing camera.
In this embodiment, the video image processing apparatus 100 includes at least one software functional module that can be stored in the memory 11 in the form of software or firmware, or solidified in the operating system of the image capturing device 10. The processor 12 may be used to execute the executable modules stored in the memory 11, such as the software functional modules and computer programs included in the video image processing apparatus 100. Through the video image processing apparatus 100, the image pickup apparatus 10 increases the mirror conversion rate during mirrored video recording, reduces the probability of real-time recording stutter or large recording delay, and enhances mirrored-video recording performance.
It is to be understood that the block diagram shown in fig. 1 is only a structural component diagram of the image pickup apparatus 10, and the image pickup apparatus 10 may include more or less components than those shown in fig. 1, or have a different configuration from that shown in fig. 1. The components shown in fig. 1 may be implemented in hardware, software, or a combination thereof.
In the present application, in order to ensure that the above-mentioned image capturing apparatus 10 can implement a fast mirroring conversion function in a process of recording a mirrored video, the present application implements the above-mentioned function by providing a video mirroring processing method applied to the above-mentioned image capturing apparatus 10. The following describes the video image processing method provided by the present application in detail.
Referring to fig. 2, fig. 2 is a flowchart illustrating a video image processing method according to an embodiment of the present application. In the embodiment of the present application, the video mirroring method shown in fig. 2 is as follows.
Step S210, acquiring each frame of original image to be processed recorded by the image capturing device.
In this embodiment, the original image to be processed is a recorded image that needs to be expressed by a mirror image effect and is selected by a user when performing a video recording operation through the image capturing apparatus 10. In an implementation manner of this embodiment, the original to-be-processed image may be an original image recorded by the camera device 10 through a front-facing camera.
Step S220, extracting YUV image data of each original image to be processed, for each frame of original image to be processed.
In this embodiment, the YUV image data includes Y (brightness) channel data, U (color) channel data, and V (saturation) channel data corresponding to each pixel point in the original image to be processed, where the Y channel data, U channel data, and V channel data of the same pixel point may be recorded in the form Y:U:V or as [Y, U, V]. The image capturing device 10 may calculate the Y channel data of each pixel by the formula Y = 0.299×R + 0.587×G + 0.114×B, the U channel data by the formula U = -0.1687×R - 0.3313×G + 0.5×B + 128, and the V channel data by the formula V = 0.5×R - 0.4187×G - 0.0813×B + 128, where Y, U, and V denote the Y, U, and V channel data of the corresponding pixel, and R, G, and B denote the R (Red), G (Green), and B (Blue) component data of the corresponding pixel.
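As an illustration only, the per-pixel conversion above can be written as the following C sketch. The function and variable names are not from the patent, and the clamping to [0, 255] is an added assumption.

```c
#include <stdint.h>

/* Illustrative scalar per-pixel RGB-to-YUV conversion using the formulas
 * quoted above; clamping to [0, 255] is an added assumption. */
static inline uint8_t clamp_u8(int v) {
    return (uint8_t)(v < 0 ? 0 : (v > 255 ? 255 : v));
}

static void rgb_to_yuv(uint8_t r, uint8_t g, uint8_t b,
                       uint8_t *y, uint8_t *u, uint8_t *v) {
    *y = clamp_u8((int)( 0.299  * r + 0.587  * g + 0.114  * b));
    *u = clamp_u8((int)(-0.1687 * r - 0.3313 * g + 0.5    * b + 128.0));
    *v = clamp_u8((int)( 0.5    * r - 0.4187 * g - 0.0813 * b + 128.0));
}
```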
Step S230, exchanging the YUV image data of the original to-be-processed image with respect to the target central axis of the original to-be-processed image in the horizontal direction to obtain corresponding target YUV data.
In this embodiment, the target central axis is a central line of the corresponding original to-be-processed image in the horizontal direction, and the original to-be-processed image may be folded with respect to the target central axis. Taking an original to-be-processed image with a size of 640 × 480 as an example, the original to-be-processed image includes 480 pixel rows, which can be expressed as 0 th pixel row to 479 th pixel row, and a central axis of the original to-be-processed image in a horizontal direction is a central line between the 239 th pixel row and the 240 th pixel row.
The image pickup apparatus 10 obtains the target YUV data corresponding to the original image to be processed by swapping, within the YUV image data of the original image to be processed, the YUV data of all pixel rows that are positionally symmetric about the target central axis. The YUV data of one pixel row comprise the YUV data of the different pixel points of that row, and the positions of these data within the row's YUV data correspond to the positions of the pixel points in the original image to be processed; when the YUV data of a pixel row are exchanged to another position, the YUV data of each pixel point within that row remain unchanged. The YUV data comprise the Y channel data, U channel data, and V channel data of the corresponding pixel points.
Taking an original to-be-processed image with a size of 640 × 480 as an example, the YUV image data corresponding to the original to-be-processed image includes the YUV data of 480 pixel rows. The image pickup apparatus 10 exchanges the contents of the YUV data of the 0th pixel row and the 479th pixel row, exchanges the contents of the YUV data of the 1st pixel row and the 478th pixel row, and so on, until the contents of the YUV data of the 239th pixel row and the 240th pixel row have been exchanged, thereby obtaining target YUV data that correspond to the original to-be-processed image and can express the mirror effect. The positions of the elements inside the YUV data of each pixel row are unchanged during the content exchange.
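A minimal C sketch of this row swap is given below, assuming the frame is stored as contiguous rows of packed YUV bytes (a layout assumption; the patent does not fix a specific YUV memory format). All names here are illustrative.

```c
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

/* Row i is exchanged with row (height - 1 - i); the byte order inside each
 * row is left untouched, matching the description above. Returns 0 on
 * success, -1 if the scratch buffer cannot be allocated. */
static int mirror_rows(uint8_t *frame, int height, size_t row_bytes) {
    uint8_t *scratch = (uint8_t *)malloc(row_bytes);
    if (scratch == NULL)
        return -1;
    for (int top = 0, bottom = height - 1; top < bottom; ++top, --bottom) {
        uint8_t *a = frame + (size_t)top * row_bytes;
        uint8_t *b = frame + (size_t)bottom * row_bytes;
        memcpy(scratch, a, row_bytes);  /* swap row `top` with its mirror row */
        memcpy(a, b, row_bytes);
        memcpy(b, scratch, row_bytes);
    }
    free(scratch);
    return 0;
}
```

For the 640 × 480 example with 8 bits of YUV data per pixel, height would be 480 and row_bytes would be 640.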
Optionally, referring to fig. 3, fig. 3 is a schematic flowchart of the sub-steps included in step S230 in fig. 2. In an embodiment of the present application, the image capturing apparatus 10 may implement the step S230 by calling one thread alone, and the step S230 may include sub-steps S231 to S233.
And a substep S231, dividing the original image to be processed into a first image area and a second image area by the target central axis.
Sub-step S232, determining, for each first pixel row in the first image region in turn, a second pixel row in the second image region that is symmetric about the target axis for the first pixel row.
In sub-step S233, the YUV data of the positionally symmetric first and second pixel rows are swapped within the YUV image data.
Taking an original to-be-processed image with a size of 640 × 480 as an example, the image capturing apparatus 10 divides the original to-be-processed image into two image regions with a size of 640 × 240, where the pixel rows covered by one image region (the first image region) are the 0th to 239th pixel rows of the original to-be-processed image, and the pixel rows covered by the other image region (the second image region) are the 240th to 479th pixel rows. The imaging apparatus 10 then sequentially determines, in the arrangement order of the first pixel rows in the first image region, that the second pixel rows symmetric to the 0th to 239th pixel rows about the target central axis are the 479th to 240th pixel rows, respectively. It then swaps, within the YUV image data, the YUV data of the 0th and 479th pixel rows, the YUV data of the 1st and 478th pixel rows, the YUV data of the 2nd and 477th pixel rows, and so on, until the YUV data of the 239th and 240th pixel rows have been swapped.
In one implementation of this embodiment, when the processor 12 is a CDSP, the image capturing apparatus 10 may implement the YUV data swapping by calling Hexagon Vector Extensions (HVX) instructions. In this case, swapping the YUV data of the positionally symmetric first and second pixel rows within the YUV image data includes:
sequentially calling an HVX instruction, in the arrangement order of the first pixel rows of the first image region, to extract at least one YUV vector from the YUV image data of each first pixel row and of the second pixel row symmetric to it about the target central axis, wherein each YUV vector contains the YUV data of a plurality of sequentially arranged pixel points of the corresponding pixel row;
and swapping the contents of the extracted YUV vectors of the positionally symmetric first and second pixel rows according to the correspondence of pixel positions.
Taking an original to-be-processed image with a size of 640 × 480 as an example, the first image region of the original to-be-processed image includes the 0th to 239th pixel rows, and the second image region includes the 240th to 479th pixel rows. The data capacity of each YUV vector is 1024 bits, and the YUV data of each pixel point can be represented by 8 bits, so one YUV vector can carry the data of 1024/8 = 128 pixels at a time; the YUV data of one pixel row can therefore be extracted as 5 sequentially arranged YUV vectors, denoted Vector1, Vector2, Vector3, Vector4, and Vector5. In this case, following the arrangement order of the first pixel rows in the first image region, the image capturing apparatus 10 swaps Vector1 to Vector5 of the 0th pixel row with Vector1 to Vector5 of the 479th pixel row, then swaps Vector1 to Vector5 of the 1st pixel row with Vector1 to Vector5 of the 478th pixel row, and so on, until Vector1 to Vector5 of the 239th pixel row and Vector1 to Vector5 of the 240th pixel row have been swapped.
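A scalar stand-in for this vector-wise swap is sketched below: each "vector" is treated as a 128-byte chunk (1024 bits at 8 bits per pixel), which on a Hexagon CDSP would map onto HVX vector loads and stores. The memcpy-based version only illustrates the access pattern and is not the HVX implementation itself; names are illustrative.

```c
#include <stdint.h>
#include <string.h>

/* Each "vector" is a 128-byte chunk: 1024 bits at 8 bits of YUV data per
 * pixel, so a 640-byte row splits into 5 chunks (Vector1..Vector5 above). */
#define VEC_BYTES 128

static void swap_row_vectors(uint8_t *row_a, uint8_t *row_b, size_t row_bytes) {
    uint8_t tmp[VEC_BYTES];
    for (size_t off = 0; off + VEC_BYTES <= row_bytes; off += VEC_BYTES) {
        memcpy(tmp,         row_a + off, VEC_BYTES);  /* Vector_k of row A */
        memcpy(row_a + off, row_b + off, VEC_BYTES);  /* Vector_k of row B */
        memcpy(row_b + off, tmp,         VEC_BYTES);
    }
}
```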
Optionally, referring to fig. 4, fig. 4 is a second schematic flowchart of the sub-steps included in step S230 in fig. 2. In another embodiment of the present application, the image capturing apparatus 10 may implement the step S230 by calling a plurality of threads, and the step S230 may include sub-steps S235 to S237.
And a substep S235 of dividing the original image to be processed into a plurality of third image regions and a plurality of fourth image regions, wherein each third image region has a fourth image region symmetric with respect to the target central axis.
In sub-step S236, for each group of positionally symmetric third and fourth image regions, a thread is invoked to determine, for each third pixel row in the third image region, the fourth pixel row in the fourth image region that is symmetric to it about the target central axis.
In sub-step S237, the thread is invoked to swap, within the YUV image data, the YUV data of each determined pair of positionally symmetric third and fourth pixel rows.
The image capturing apparatus 10 may divide the original image to be processed into a plurality of region combinations, where each region combination includes a third image region and a fourth image region that are symmetric about the target central axis, and then, for each region combination, invoke a thread matching that combination to swap the YUV data of all pixel rows in the combination that are symmetric about the target central axis.
Taking an original to-be-processed image with a size of 640 × 480 as an example, the image capturing apparatus 10 divides the original to-be-processed image into eight image regions with a size of 640 × 60, and these eight image regions can be allocated to 4 region combinations. In the first region combination, the third image region covers the 0th to 59th pixel rows of the original image to be processed and the fourth image region covers the 420th to 479th pixel rows; in the second region combination, the third image region covers the 60th to 119th pixel rows and the fourth image region covers the 360th to 419th pixel rows; in the third region combination, the third image region covers the 120th to 179th pixel rows and the fourth image region covers the 300th to 359th pixel rows; in the fourth region combination, the third image region covers the 180th to 239th pixel rows and the fourth image region covers the 240th to 299th pixel rows.
Each region combination can correspond to one thread. The first region combination can correspond to thread 1, which swaps the data of the 0th to 59th pixel rows with the 420th to 479th pixel rows; the second region combination can correspond to thread 2, which swaps the data of the 60th to 119th pixel rows with the 360th to 419th pixel rows; the third region combination can correspond to thread 3, which swaps the data of the 120th to 179th pixel rows with the 300th to 359th pixel rows; and the fourth region combination can correspond to thread 4, which swaps the data of the 180th to 239th pixel rows with the 240th to 299th pixel rows.
Taking the first region combination as an example, the image capturing apparatus 10 may call thread 1 to determine that the fourth pixel rows corresponding to the 0th to 59th pixel rows are the 479th to 420th pixel rows, respectively. The image capturing apparatus 10 may then call thread 1 to swap, within the YUV image data, the YUV data of the 0th pixel row with that of the 479th pixel row, call thread 1 to swap the YUV data of the 1st pixel row with that of the 478th pixel row, call thread 1 to swap the YUV data of the 2nd pixel row with that of the 477th pixel row, and so on, until the YUV data of the 59th pixel row and the 420th pixel row have been swapped.
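The multi-threaded band split above can be sketched in C as follows. POSIX threads are used only as a generic stand-in for whatever threading mechanism the device provides, the packed-row frame layout is an assumption, and all names are illustrative.

```c
#include <pthread.h>
#include <stdint.h>
#include <string.h>

#define NUM_THREADS 4   /* one thread per region combination, as in the example */

typedef struct {
    uint8_t *frame;      /* packed YUV frame: `height` rows of `row_bytes` bytes */
    int      height;
    size_t   row_bytes;
    int      first_row;  /* first row of this thread's third image region */
    int      band_rows;  /* rows per region: height / (2 * NUM_THREADS)   */
} band_job_t;

/* Swap two rows in vector-sized (128-byte) chunks; a scalar stand-in for HVX. */
static void swap_rows(uint8_t *a, uint8_t *b, size_t n) {
    uint8_t tmp[128];
    size_t off = 0;
    for (; off + sizeof tmp <= n; off += sizeof tmp) {
        memcpy(tmp, a + off, sizeof tmp);
        memcpy(a + off, b + off, sizeof tmp);
        memcpy(b + off, tmp, sizeof tmp);
    }
    for (; off < n; ++off) { uint8_t t = a[off]; a[off] = b[off]; b[off] = t; }
}

/* Each worker swaps every row of its third image region with the mirror row
 * in the symmetric fourth image region. */
static void *swap_band(void *arg) {
    band_job_t *job = (band_job_t *)arg;
    for (int i = 0; i < job->band_rows; ++i) {
        int top    = job->first_row + i;
        int bottom = job->height - 1 - top;    /* row symmetric about the axis */
        swap_rows(job->frame + (size_t)top    * job->row_bytes,
                  job->frame + (size_t)bottom * job->row_bytes,
                  job->row_bytes);
    }
    return NULL;
}

static void mirror_frame_threaded(uint8_t *frame, int height, size_t row_bytes) {
    pthread_t  tid[NUM_THREADS];
    band_job_t job[NUM_THREADS];
    int band_rows = height / (2 * NUM_THREADS);   /* 480 / 8 = 60 in the example */
    for (int k = 0; k < NUM_THREADS; ++k) {
        job[k] = (band_job_t){ frame, height, row_bytes, k * band_rows, band_rows };
        pthread_create(&tid[k], NULL, swap_band, &job[k]);
    }
    for (int k = 0; k < NUM_THREADS; ++k)
        pthread_join(tid[k], NULL);
}
```

Because each thread touches a disjoint pair of bands, the workers need no locking, which is what makes this split attractive for the real-time recording path.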
In one implementation of this embodiment, when the processor 12 is a CDSP, the image capturing apparatus 10 may implement the YUV data swapping by invoking Hexagon Vector Extensions (HVX) instructions. In this case, invoking the thread to swap, within the YUV image data, the YUV data of the positionally symmetric third and fourth pixel rows includes:
calling the thread to extract from the YUV image data, based on an HVX instruction, at least one YUV vector for each of the positionally symmetric third and fourth pixel rows assigned to the thread;
and calling the thread to swap, according to the correspondence of pixel positions, the contents of the YUV vectors of the positionally symmetric third and fourth pixel rows.
Taking as an example that the YUV data of one pixel row of a 640 × 480 original image to be processed can be extracted by the image capturing apparatus 10 as 5 sequentially arranged YUV vectors (Vector1, Vector2, Vector3, Vector4, and Vector5), the image capturing apparatus 10 may call thread 1 to swap Vector1 to Vector5 of the 0th pixel row with Vector1 to Vector5 of the 479th pixel row, then call thread 1 to swap Vector1 to Vector5 of the 1st pixel row with Vector1 to Vector5 of the 478th pixel row, and so on, until Vector1 to Vector5 of the 59th pixel row and Vector1 to Vector5 of the 420th pixel row have been swapped.
Referring to fig. 2 again, in step S240, the target YUV data is subjected to image expression to obtain a target mirror image corresponding to the original image to be processed.
In this embodiment, after the image capturing device 10 obtains the target YUV data of a frame of original image to be processed following the data swapping operation, it may convert the target YUV data into the corresponding RGB component data by the formulas R = Y + 1.402×(V-128), G = Y - 0.34414×(U-128) - 0.71414×(V-128), and B = Y + 1.772×(U-128), thereby completing the image expression of the target YUV data and outputting the target mirror image with a mirrored effect corresponding to the original image to be processed. This quickly implements the mirror conversion function in the mirrored video recording process, reduces the probability of the image capturing device 10 experiencing recording stutter or large recording delay during real-time recording, and enhances mirrored-video recording performance.
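For illustration, the expression step above corresponds to the following per-pixel C sketch; the clamping to [0, 255] is an added assumption and the names are not from the patent.

```c
#include <stdint.h>

/* Illustrative per-pixel YUV-to-RGB conversion using the formulas quoted
 * above; clamping to [0, 255] is an added assumption. */
static inline uint8_t clamp255(int v) {
    return (uint8_t)(v < 0 ? 0 : (v > 255 ? 255 : v));
}

static void yuv_to_rgb(uint8_t y, uint8_t u, uint8_t v,
                       uint8_t *r, uint8_t *g, uint8_t *b) {
    *r = clamp255((int)(y + 1.402   * (v - 128)));
    *g = clamp255((int)(y - 0.34414 * (u - 128) - 0.71414 * (v - 128)));
    *b = clamp255((int)(y + 1.772   * (u - 128)));
}
```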
In this embodiment of the present application, by executing the video mirroring method, the camera device 10 implements fast mirror conversion during mirrored video recording, reduces the probability of real-time recording stutter or large recording delay, and enhances mirrored-video recording performance.
In the present application, in order to ensure that the video image processing apparatus 100 included in the imaging device 10 can be normally implemented, the present application implements the functions of the video image processing apparatus 100 by dividing the functional blocks. The following describes a specific configuration of the video image processing apparatus 100 provided in the present application.
Referring to fig. 5, fig. 5 is a schematic diagram illustrating a module composition of the video image processing apparatus 100 according to an embodiment of the present disclosure. In the embodiment of the present application, the video image processing apparatus 100 includes an image obtaining module 110, a YUV extracting module 120, a YUV exchanging module 130, and an image expression module 140.
And an image obtaining module 110, configured to obtain each frame of original image to be processed recorded by the image capturing apparatus.
A YUV extracting module 120, configured to extract, for each frame of original image to be processed, YUV image data of the original image to be processed.
The YUV swapping module 130 is configured to swap the YUV image data of the original image to be processed with respect to a target central axis of the original image to be processed in the horizontal direction, so as to obtain corresponding target YUV data.
And the mirror image expression module 140 is configured to perform image expression on the target YUV data to obtain a target mirror image corresponding to the original image to be processed.
Referring to fig. 6, fig. 6 is a schematic diagram illustrating the module composition of the YUV swapping module 130 in fig. 5. In the embodiment of the present application, the YUV swapping module 130 may include an image dividing sub-module 131, a pixel row determining sub-module 132, and a YUV swapping sub-module 133.
In one implementation of this embodiment, the image dividing sub-module 131 is configured to divide the original to-be-processed image into a first image region and a second image region by the target central axis. The pixel row determining sub-module 132 is configured to determine, for each first pixel row in the first image region in turn, the second pixel row in the second image region that is symmetric to the first pixel row about the target central axis. The YUV swapping sub-module 133 is configured to swap, within the YUV image data, the YUV data of the positionally symmetric first and second pixel rows.
In another implementation of this embodiment, the image dividing sub-module 131 is configured to divide the original to-be-processed image into a plurality of third image regions and a plurality of fourth image regions, where each third image region has one fourth image region symmetric to it about the target central axis. The pixel row determining sub-module 132 is configured to call, for each group of positionally symmetric third and fourth image regions, a thread to determine, for each third pixel row in the third image region, the fourth pixel row in the fourth image region that is symmetric to it about the target central axis. The YUV swapping sub-module 133 is configured to call the thread to swap, within the YUV image data, the YUV data of each determined pair of positionally symmetric third and fourth pixel rows.
It should be noted that the basic principle and the resulting technical effects of the video image processing apparatus 100 provided in the embodiment of the present application are the same as those of the video mirroring method applied to the image capturing device 10 described above. For brevity, for the parts not mentioned in this embodiment, reference may be made to the foregoing description of the video mirroring method.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. The apparatus embodiments described above are merely illustrative, and for example, the flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of apparatus, methods and computer program products according to embodiments of the present application. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
In addition, functional modules in the embodiments of the present application may be integrated together to form an independent part, or each module may exist separately, or two or more modules may be integrated to form an independent part.
The functions, if implemented in the form of software functional modules and sold or used as a stand-alone product, may be stored in a readable storage medium. Based on such understanding, the technical solution of the present application or portions thereof that substantially contribute to the prior art may be embodied in the form of a software product stored in a readable storage medium, which includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method described in the embodiments of the present application. And the aforementioned readable storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.
In summary, in the video mirroring method and apparatus, the image capturing device, and the readable storage medium provided in the embodiments of the present application, the YUV image data of each frame of original to-be-processed image recorded by the image capturing device are swapped about the target central axis of the image in the horizontal direction to obtain the target YUV data corresponding to the original to-be-processed image, and the target YUV data are then expressed as an image to obtain a target mirror image with a mirrored effect corresponding to the original to-be-processed image. This implements fast mirror conversion in the mirrored video recording process, reduces the probability of real-time recording stutter or large recording delay on the image capturing device, and enhances mirrored-video recording performance.
The above description is only for various embodiments of the present application, but the scope of the present application is not limited thereto, and any person skilled in the art can easily conceive of changes or substitutions within the technical scope of the present application, and all such changes or substitutions are included in the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.
Claims (10)
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010583092.6A CN113840108A (en) | 2020-06-23 | 2020-06-23 | Video image processing method and device, camera equipment and readable storage medium |
PCT/CN2021/101205 WO2021259191A1 (en) | 2020-06-23 | 2021-06-21 | Video mirror processing method and apparatus, photographic device, and readable storage medium |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010583092.6A CN113840108A (en) | 2020-06-23 | 2020-06-23 | Video image processing method and device, camera equipment and readable storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN113840108A true CN113840108A (en) | 2021-12-24 |
Family
ID=78964278
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010583092.6A Pending CN113840108A (en) | 2020-06-23 | 2020-06-23 | Video image processing method and device, camera equipment and readable storage medium |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN113840108A (en) |
WO (1) | WO2021259191A1 (en) |
Family Cites Families (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103686450A (en) * | 2013-12-31 | 2014-03-26 | 广州华多网络科技有限公司 | Video processing method and system |
WO2019075305A1 (en) * | 2017-10-13 | 2019-04-18 | Elwha Llc | Satellite constellation with image edge processing |
- 2020-06-23: CN application CN202010583092.6A filed, publication CN113840108A, status: Pending
- 2021-06-21: PCT application PCT/CN2021/101205 filed, publication WO2021259191A1, status: Application Filing
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20190297339A1 (en) * | 2016-06-30 | 2019-09-26 | Nokia Technologies Oy | An Apparatus, A Method and A Computer Program for Video Coding and Decoding |
CN106127673A (en) * | 2016-07-19 | 2016-11-16 | 腾讯科技(深圳)有限公司 | A kind of method for processing video frequency, device and computer equipment |
CN110324598A (en) * | 2018-03-30 | 2019-10-11 | 武汉斗鱼网络科技有限公司 | A kind of image processing method, device and computer equipment |
CN110213502A (en) * | 2019-06-28 | 2019-09-06 | Oppo广东移动通信有限公司 | Image processing method, image processing device, storage medium and electronic equipment |
Also Published As
Publication number | Publication date |
---|---|
WO2021259191A1 (en) | 2021-12-30 |
Legal Events
Date | Code | Title | Description
---|---|---|---
 | PB01 | Publication | 
 | SE01 | Entry into force of request for substantive examination | 
 | RJ01 | Rejection of invention patent application after publication | Application publication date: 20211224