
CN113225550A - Offset detection method and device, camera module, terminal equipment and storage medium - Google Patents


Info

Publication number
CN113225550A
CN113225550A
Authority
CN
China
Prior art keywords
line segment
edge line
image
target edge
camera module
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
CN202110413657.0A
Other languages
Chinese (zh)
Inventor
罗伟城
陈文章
魏君竹
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanchang OFilm Optoelectronics Technology Co Ltd
Original Assignee
Nanchang OFilm Optoelectronics Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanchang OFilm Optoelectronics Technology Co Ltd filed Critical Nanchang OFilm Optoelectronics Technology Co Ltd
Priority to CN202110413657.0A
Publication of CN113225550A
Legal status: Withdrawn

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 17/00 Diagnosis, testing or measuring for television systems or their details
    • H04N 17/002 Diagnosis, testing or measuring for television systems or their details for television cameras
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/40 Extraction of image or video features
    • G06V 10/48 Extraction of image or video features by mapping characteristic values of the pattern into a parameter space, e.g. Hough transformation

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • General Health & Medical Sciences (AREA)
  • Signal Processing (AREA)
  • Studio Devices (AREA)

Abstract

The application discloses an offset detection method and device, a camera module, terminal equipment and a storage medium, and belongs to the technical field of image processing. The method comprises the following steps: acquiring a first image picture acquired by a camera module; performing edge detection on the first image picture to obtain a first edge image; performing Hough line detection on the edge line segments of the objects in the first edge image to obtain target edge line segments, wherein the target edge line segments are edge line segments whose corresponding accumulated values are larger than a preset accumulated threshold value; detecting the line segment parameters of the target edge line segments; and when the line segment parameters of a target edge line segment conform to a preset parameter range, determining that the position of the optical filter being used by the camera module has shifted. The application can thus detect whether the optical filter has shifted directly from the image picture, improving the space utilization of the camera module.

Description

Offset detection method and device, camera module, terminal equipment and storage medium
Technical Field
The present application relates to the field of image processing technologies, and in particular, to a method and an apparatus for detecting an offset, a camera module, a terminal device, and a storage medium.
Background
With the rapid development of science and technology, terminals can realize more and more functions; for example, more and more terminals can realize shooting, monitoring, video and other functions through installed camera modules. The camera modules arranged on some terminals further comprise various optical filters, and by controlling the switching of the optical filters the terminals can shoot in different scenes. At present, a camera module is often provided with an IR-Cut filter Removable (ICR), and the terminal can switch optical filters through the ICR, so that the camera module can be used in different environments. If the position of an optical filter in the camera module shifts, the images acquired by the camera module are affected; in the related art, whether an optical filter has shifted is detected at the position the optical filter occupies after switching.
In the related art, position detection often requires a plurality of sensors to be arranged at the position the optical filter occupies after switching, which complicates the structure of the camera module.
Disclosure of Invention
The embodiment of the application provides an offset detection method and device, a camera module, terminal equipment and a storage medium, and can avoid the situation that a plurality of sensors are arranged at positions in the camera module after corresponding optical filters are switched, reduce the number of devices in the camera module and improve the space utilization rate of the camera module.
In one aspect, an embodiment of the present application provides an offset detection method, where the offset detection method is used in a camera module, where the camera module includes an optical filter assembly, the optical filter assembly includes at least two optical filters, and frequency bands of light filtered by the at least two optical filters are different; the method comprises the following steps:
acquiring a first image picture acquired by the camera module;
performing edge detection on the first image picture to obtain a first edge image, wherein the first edge image is composed of edge line segments of all objects in the first image picture;
performing Hough line detection on the edge line segments of the objects in the first edge image to obtain target edge line segments, wherein the target edge line segments are edge line segments whose corresponding accumulated values are larger than a preset accumulated threshold value;
detecting the line segment parameters of the target edge line segment;
and when the line segment parameters of the target edge line segment accord with the preset parameter range, determining that the position of the optical filter used by the camera module is shifted.
In the embodiment of the application, a first image picture acquired by the camera module is obtained, edge detection is performed on the first image picture to obtain a first edge image, a target edge line segment is determined from the first edge image, and its line segment parameters are detected; when the line segment parameters of the target edge line segment conform to the preset parameter range, it is determined that the position of the optical filter in use by the camera module has shifted. The application analyzes the image picture acquired by the camera module, determines the target edge line segments in the picture that may be caused by a shift of the optical filter position, and detects them, thereby determining whether the position of the optical filter in use has shifted. No sensors need to be arranged at the positions in the camera module that the optical filters occupy after switching, which simplifies the structure of the camera module and improves its space utilization. Whether the optical filter has shifted can be detected directly from the image, without sensing and transmitting filter position parameters through position sensors, which also improves detection efficiency and detection precision.
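The decision logic described above — keep only the Hough line segments whose accumulated votes exceed the preset threshold, then flag an offset when a surviving segment's parameters fall inside the preset range — can be sketched as follows. This is a minimal illustration, not the patent's implementation: the `accum` key and the `in_param_range` predicate are hypothetical names.

```python
def detect_offset(edge_segments, accum_threshold, in_param_range):
    """Flag an offset when a segment with a large Hough accumulated value
    also has parameters inside the preset parameter range.

    edge_segments: list of dicts with a hypothetical "accum" key holding the
    segment's Hough accumulated (vote) value; in_param_range: predicate
    standing in for the slope/width checks of the method.
    """
    # Step 1: keep only segments whose accumulated value exceeds the threshold
    targets = [s for s in edge_segments if s["accum"] > accum_threshold]
    # Step 2: an offset is reported if any target's parameters are in range
    return any(in_param_range(s) for s in targets)
```

A segment with too few votes is discarded before its parameters are ever examined, which matches the two-stage structure of the claimed method.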
As an optional implementation manner, in an aspect of the embodiment of the present application, the line segment parameters include one or more of: the line segment slope of the target edge line segment in the first edge image, the line segment width occupied by the target edge line segment in the first edge image, the line segment length of the target edge line segment in the first edge image, the pixel area of the target edge line segment in the first edge image, and the line segment path formed by the target edge line segment in the first edge image.
As an optional implementation manner, in an aspect of the embodiment of the present application, the line segment parameters include the line segment slope of the target edge line segment in the first edge image and the line segment width occupied by the target edge line segment in the first edge image, the preset parameter range includes a preset slope range and a preset width range, and the detecting the line segment parameters of the target edge line segment includes:
when the line segment slope is within the preset slope range, acquiring the line segment width occupied by the target edge line segment in the first edge image;
and when the line segment width is within the preset width range, determining that the line segment parameters of the target edge line segment conform to the preset parameter range.
In the embodiment of the application, the line segment parameters comprise a line segment slope and a line segment width, the target edge line segments are respectively screened by combining the line segment slope and the line segment width, the target edge line segments which accord with the preset parameter range are determined, and the accuracy of determining that the target edge line segments accord with the preset parameter range is improved.
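The two-stage screening above — check the slope first, and only then the width — might be sketched as follows; the function name and range representation are illustrative, not from the patent:

```python
def params_in_range(slope, width, slope_range, width_range):
    """Screen a target edge segment: slope check first, then width check,
    mirroring the two-stage screening described in the embodiment."""
    lo_s, hi_s = slope_range
    if not (lo_s <= slope <= hi_s):
        return False              # slope outside the preset slope range
    lo_w, hi_w = width_range
    return lo_w <= width <= hi_w  # width must also be in the preset range
```

Checking the cheap slope condition first means the width (which requires walking the segment's pixels) is only computed for plausible candidates.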
As an optional implementation manner, in an aspect of the embodiments of the present application, the acquiring a line segment width occupied by the target edge line segment in the first edge image when a line segment slope of the target edge line segment in the first edge image is within the preset slope range includes:
acquiring each pixel point occupied by the target edge line segment in the first edge image;
acquiring each group of pixel points in the width direction of the target edge line segment according to each pixel point;
and calculating the average value of the widths of the pixel points of each group according to the width of the pixel points of each group, and acquiring the average value as the line segment width occupied by the target edge line segment in the first edge image.
In the embodiment of the application, when the line segment width is obtained, each group of pixel points is obtained through each pixel point occupied by the target edge line segment in the first edge image, and the line segment width is represented by the average value of the widths of the pixel points of each group, so that the accuracy of obtaining the line segment width is improved.
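The averaging step can be illustrated as follows, assuming the per-group pixels have already been collected — one group per cross-section taken in the segment's width direction. The representation (a list of pixel runs) is a hypothetical stand-in for the pixel groups described above:

```python
def mean_segment_width(pixel_groups):
    """Each group is the run of pixels across the segment at one position
    along its length; the segment width is the mean run length."""
    widths = [len(group) for group in pixel_groups]
    return sum(widths) / len(widths)
```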
As an optional implementation manner, in an aspect of the embodiment of the present application, the obtaining each pixel point occupied by the target edge line segment in the first edge image includes:
acquiring a first pixel point of the target edge line segment in the first edge image;
taking the first pixel point as a pixel starting block, and acquiring an adjacent pixel set, wherein the adjacent pixel set is a set formed by all pixel points adjacent to the pixel starting block;
for the adjacent pixel set, detecting whether the difference value between the pixel value of each pixel point in the adjacent pixel set and the pixel value of the first pixel point is smaller than a preset difference threshold value;
merging each pixel point which is smaller than the preset difference threshold value in the adjacent pixel set with the pixel starting block to serve as a new pixel starting block, and executing the step of obtaining the adjacent pixel set until the difference value between the pixel value of each pixel point in the adjacent pixel set and the pixel value of the first pixel point is not smaller than the preset difference threshold value;
and taking each pixel point contained in the finally obtained pixel start block as each pixel point occupied by the target edge line segment in the first edge image.
In the embodiment of the application, the difference value of the pixel value between a pixel point on the target edge line segment and the adjacent region is judged from the pixel point, so that each pixel point on the same target edge line segment is screened out, and the efficiency and the flexibility for acquiring each pixel point on the target edge line segment are improved.
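The seed-and-merge procedure above is essentially region growing under a pixel-value difference threshold. A minimal 4-connected sketch, assuming a list-of-lists grayscale image (names and connectivity choice are illustrative, not taken from the patent):

```python
def grow_segment(image, seed, diff_threshold):
    """Collect all pixels connected to `seed` whose value differs from the
    seed pixel's value by less than diff_threshold (4-connectivity)."""
    h, w = len(image), len(image[0])
    seed_val = image[seed[0]][seed[1]]
    block, frontier = {seed}, [seed]
    while frontier:                      # expand the pixel start block
        r, c = frontier.pop()
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if (0 <= nr < h and 0 <= nc < w and (nr, nc) not in block
                    and abs(image[nr][nc] - seed_val) < diff_threshold):
                block.add((nr, nc))      # merge similar neighbor into block
                frontier.append((nr, nc))
    return block                         # pixels occupied by the segment
```

Growth stops automatically once every neighbor of the block differs from the seed value by at least the threshold, matching the termination condition stated above.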
As an optional implementation manner, in an aspect of the embodiments of the present application, the determining that the position of the filter being used by the camera module is shifted when the line segment parameter of the target edge line segment meets a preset parameter range includes:
when the line segment parameters of the target edge line segment accord with a preset parameter range, detecting whether a historical edge line segment at the same position as the target edge line segment in a pixel coordinate system is stored in a detection cache, wherein the historical edge line segment is a line segment of which the line segment parameters determined from the previous N frames of image pictures of the first image picture accord with the preset parameter range, the pixel coordinate system is the pixel coordinate system of the first edge image, and N is a positive integer;
and when the historical edge line segment at the same position of the target edge line segment in the pixel coordinate system is stored in the detection cache, determining that the position of the optical filter used by the camera module is shifted.
In the embodiment of the application, when the line segment parameter of the target edge line segment conforms to the preset parameter range, whether a historical edge line segment at the same position as the target edge line segment in the pixel coordinate system is stored in the detection cache is further detected, and if the target edge line segment conforming to the preset parameter range also exists in the historical edge line segment, the target edge line segment is determined to be caused by the deviation of the position of the optical filter which is being used by the camera module, so that the accuracy of judging the deviation of the optical filter is improved.
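The cache check might look like the following sketch, where the cache holds the pixel positions of qualifying segments found in the previous N frames. The position tolerance is an illustrative assumption; the patent only requires "the same position" in the pixel coordinate system:

```python
def confirm_offset(cache, segment_pos, tolerance=2):
    """Confirm the offset only if a segment at (almost) the same pixel
    position was recorded from one of the previous N frames."""
    return any(abs(segment_pos[0] - x) <= tolerance and
               abs(segment_pos[1] - y) <= tolerance
               for x, y in cache)
```

Requiring the segment to persist across frames filters out one-off detections, e.g. a real object edge that happens to match the parameter range in a single frame.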
As an optional implementation manner, in an aspect of the embodiment of the present application, when the number of the target edge line segments is at least two, before the detecting the line segment parameter of the target edge line segment, the method further includes:
sorting each target edge line segment according to the accumulated value corresponding to each target edge line segment;
the detecting the segment parameters of the target edge segment includes:
and detecting the line segment parameters of the target edge line segments in sequence according to the sequencing result.
In the embodiment of the application, when the number of the target edge line segments obtained from the first edge image is at least two, the scheme can also sequence each target edge line segment and detect according to the sequencing result, so that the detection efficiency can be improved.
As an optional implementation manner, in an aspect of this embodiment of the present application, the sorting the target edge line segments includes:
sorting according to the sequence of accumulated values of the target edge line segments from large to small; or,
and sorting according to the difference between the respective accumulated value of each target edge line segment and the preset accumulated threshold value from small to large.
In the embodiment of the application, when the target edge line segments are sorted, the accumulated values may be sorted in an order from large to small, or the accumulated values may be sorted according to a difference between the accumulated values and a preset accumulated threshold, so as to improve the detection efficiency.
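Both orderings described above can be sketched in one helper; the pair representation and names are illustrative:

```python
def sort_targets(segments, accum_threshold, by_difference=False):
    """segments: list of (segment_id, accumulated_value) pairs.
    Default: largest accumulated value first. Alternative: smallest
    |accumulated value - threshold| first."""
    if by_difference:
        return sorted(segments, key=lambda s: abs(s[1] - accum_threshold))
    return sorted(segments, key=lambda s: s[1], reverse=True)
```

Sorting by descending votes inspects the most line-like candidates first; sorting by distance to the threshold inspects the most marginal candidates first.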
As an optional implementation manner, in an aspect of the embodiment of the present application, the camera module further includes a driving device, and after determining that a position of the optical filter in use by the camera module is shifted, the method further includes:
determining a filter which needs to be used currently in the filter component;
and controlling the driving device to provide a driving force to the optical filter assembly so as to adjust the position of the optical filter which needs to be used currently to a fixed position.
In the embodiment of the application, after the position of the optical filter used by the camera module is determined to be shifted, which optical filter is the optical filter which needs to be used currently can be determined, and the driving device is used for providing driving force for the optical filter assembly so as to adjust the shifted optical filter, so that the shift correction of the optical filter is realized, the influence of the shift of the optical filter on an image is reduced, and the accuracy of the camera module for acquiring the image is improved.
As an optional implementation manner, in an aspect of the embodiments of the present application, after the controlling the driving component to provide the driving force to the optical filter assembly to adjust the position of the optical filter that needs to be currently used to a fixed position, the method further includes:
detecting a first providing number, which is the number of times the driving device has provided the driving force to the optical filter assembly;
and when the first providing times do not reach the upper limit of the preset times, executing the step of acquiring the first image picture acquired by the camera module.
In the embodiment of the application, after the position of the optical filter in use by the camera module is determined to have shifted, the number of times the driving device has provided the driving force to the optical filter assembly is recorded and used to decide whether to continue providing the driving force, so that the driving force is controlled and the accuracy of the offset correction is improved.
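The bounded retry loop described above might be sketched as follows; the two callbacks are hypothetical stand-ins for driving the filter assembly and re-running the image-based offset check:

```python
def correct_with_retry(apply_drive, offset_still_present, max_attempts):
    """Drive the filter back toward its fixed position, re-checking the
    image after each attempt; stop once max_attempts is reached."""
    for attempt in range(1, max_attempts + 1):
        apply_drive()                     # provide driving force once
        if not offset_still_present():    # re-acquire image and re-detect
            return True, attempt          # offset corrected
    return False, max_attempts            # exhausted: raise early warning
```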
As an optional implementation manner, in an aspect of the embodiments of the present application, the method further includes:
and when the first providing times reach the preset upper time limit, generating early warning information, wherein the early warning information is used for indicating that the position of the optical filter assembly deviates.
In the embodiment of the present application, when the first providing number reaches the preset upper limit, it indicates that the target edge line segment may be caused by a hardware fault rather than a correctable shift, so early warning information is generated; this improves the accuracy of determining the offset of the optical filter and the accuracy of correcting the offset.
As an optional implementation manner, in an aspect of the embodiments of the present application, before the controlling the driving component to provide the driving force to the optical filter assembly to adjust the position of the optical filter that needs to be currently used to a fixed position, the method further includes:
acquiring an image position corresponding to the optical filter which needs to be used currently in the first image picture;
determining the direction of the driving force according to the position relation between the image position and the target edge line segment;
the controlling the driving assembly to provide the driving force to the optical filter assembly to adjust the position of the currently required optical filter to a fixed position includes:
and controlling the driving assembly to provide the driving force to the optical filter component according to the direction of the driving force so as to adjust the position of the optical filter which needs to be used currently to a fixed position.
In the embodiment of the application, the direction of the driving force is further determined by acquiring the corresponding image position of the optical filter which needs to be used currently in the first image picture, so that the driving force is provided according to the determined direction, and the accuracy of adjusting the offset of the optical filter is improved.
As an alternative implementation, in an aspect of the embodiments of the present application, the determining the optical filter which needs to be used currently in the optical filter assembly includes:
acquiring various optical filters corresponding to the first image picture;
and determining the optical filter which needs to be used currently from the various optical filters according to the illumination intensity of the current environment of the camera module.
In the embodiment of the application, the optical filter which needs to be used at present is determined according to the illumination intensity of the environment where the camera module is located at present, so that the optical filter for adjusting the offset is more in line with the actual application scene, and the accuracy of the offset adjustment is improved.
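A minimal sketch of this selection, assuming a single illuminance threshold separating a bright scene (infrared cut filter) from a dark scene (infrared band-pass filter); the threshold value and filter names are illustrative, not specified by the patent:

```python
def select_filter(lux, ir_cut_threshold=10.0):
    """Pick the filter that should currently be in the light path from the
    ambient illuminance of the camera module's environment."""
    return "ir_cut" if lux >= ir_cut_threshold else "ir_pass"
```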
In another aspect, an embodiment of the present application provides an offset detection apparatus, where the offset detection apparatus is used in a camera module, where the camera module includes an optical filter assembly, the optical filter assembly includes at least two optical filters, and frequency bands of light filtered by the at least two optical filters are different; the device comprises:
the image acquisition module is used for acquiring a first image acquired by the camera module;
the edge detection module is used for performing edge detection on the first image picture to obtain a first edge image, wherein the first edge image is composed of edge line segments of the objects in the first image picture;
a line segment determining module, configured to perform hough line detection on edge line segments of the objects in the first edge image to obtain a target edge line segment, where the target edge line segment is an edge line segment in which an accumulated value corresponding to the edge line segment of each object is greater than a preset accumulated threshold;
the parameter detection module is used for detecting the line segment parameters of the target edge line segment;
and the offset determining module is used for determining that the position of the optical filter used by the camera module is offset when the line segment parameters of the target edge line segment accord with the preset parameter range.
In another aspect, an embodiment of the present application provides a camera module, which includes a memory and a processor, where the memory stores a computer program, and when the computer program is executed by the processor, the processor implements the offset detection method according to the above aspect and any optional implementation manner thereof.
In another aspect, an embodiment of the present application provides a terminal device, where the terminal device includes at least one camera module according to the above aspect.
In another aspect, the present application provides a computer-readable storage medium, on which a computer program is stored, and the computer program, when executed by a processor, implements the offset detection method according to the above aspect and any optional implementation manner thereof.
In another aspect, the present application provides a computer program product, which when run on a computer, causes the computer to execute the offset detection method according to the above one aspect.
In another aspect, an application publishing platform is provided, and the application publishing platform is configured to publish a computer program product, wherein when the computer program product runs on a computer, the computer is caused to perform the offset detection method according to the above aspect.
The technical scheme provided by the embodiment of the application can at least comprise the following beneficial effects:
in the embodiment of the application, a first image picture acquired by the camera module is obtained, edge detection is performed on the first image picture to obtain a first edge image, a target edge line segment is determined from the first edge image, and its line segment parameters are detected; when the line segment parameters of the target edge line segment conform to the preset parameter range, it is determined that the position of the optical filter in use by the camera module has shifted. The application analyzes the image picture acquired by the camera module, determines the target edge line segments in the picture that may be caused by a shift of the optical filter position, and detects them, thereby determining whether the position of the optical filter in use has shifted. No sensors need to be arranged at the positions in the camera module that the optical filters occupy after switching, which simplifies the structure of the camera module and improves its space utilization. Whether the optical filter has shifted can be detected directly from the image, without sensing and transmitting filter position parameters through position sensors, which also improves detection efficiency and detection precision.
Drawings
Fig. 1 is a schematic structural diagram of a filter switching structure according to an exemplary embodiment of the present application;
FIG. 2 is a flow chart of a method of offset detection provided by an exemplary embodiment of the present application;
FIG. 3 is a flowchart of a method of offset detection provided by an exemplary embodiment of the present application;
Fig. 4 and Fig. 5 are schematic diagrams of a first image picture and a first edge image according to an exemplary embodiment of the present application;
FIG. 6 is a schematic diagram of a target edge line segment in a first edge image according to an exemplary embodiment of the present application;
FIG. 7 is a schematic diagram of a target edge line segment in the first edge image of FIG. 6 according to an alternative embodiment of the present application;
FIG. 8 is a schematic illustration of another target edge line segment to which an exemplary embodiment of the present application relates;
FIG. 9 is an interface diagram of a first image frame according to an exemplary embodiment of the present application;
FIG. 10 is a flow chart of a method of offset detection in accordance with an exemplary embodiment of the present application;
fig. 11 is a block diagram of an offset detection apparatus according to an exemplary embodiment of the present application.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The embodiments described in the following exemplary embodiments do not represent all embodiments consistent with the present application. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present application, as detailed in the appended claims.
Reference herein to "a plurality" means two or more. "and/or" describes the association relationship of the associated objects, meaning that there may be three relationships, e.g., a and/or B, which may mean: a exists alone, A and B exist simultaneously, and B exists alone. The character "/" generally indicates that the former and latter associated objects are in an "or" relationship.
It should be noted that the terms "first", "second", "third" and "fourth", etc. in the description and claims of the present application are used for distinguishing different objects, and are not used for describing a specific order. The terms "comprises," "comprising," and "having," and any variations thereof, of the embodiments of the present application, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
The scheme provided by the application can be used in the process of adapting to different use scenes by switching the optical filters when the terminal used by people in daily life comprises the electromagnetic dual-optical-filter switcher, and for convenience of understanding, some terms and application architectures related to the embodiment of the application are briefly introduced below.
A dual-filter switcher (IR-Cut filter Removable, ICR) is a device in which a group of optical filters is arranged in the lens module of a camera module; when an infrared sensing point outside the lens detects a change in light intensity, the built-in ICR automatically switches the filters, so that the switching follows the change of external light intensity and the image achieves the best effect.
Hough Transform (Hough Transform) is one of the basic methods for identifying geometric shapes from images in image processing, and is mainly used for separating geometric shapes (such as straight lines, circles, etc.) with certain identical features from images. The most basic hough transform is to detect straight lines from black and white images.
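As a toy illustration of the voting idea behind Hough line detection (not the patent's implementation): each edge point votes for every (rho, theta) line that could pass through it, and collinear points pile their votes into one accumulator cell, whose count is the "accumulated value" referred to throughout this document.

```python
import math

def hough_accumulator(points, n_theta=180):
    """points: iterable of (x, y) edge pixels. Returns a dict mapping
    (rho, theta_degrees) bins to vote counts, using the normal-form line
    parameterization rho = x*cos(theta) + y*sin(theta)."""
    acc = {}
    for x, y in points:
        for t in range(n_theta):
            theta = math.radians(t)
            rho = round(x * math.cos(theta) + y * math.sin(theta))
            acc[(rho, t)] = acc.get((rho, t), 0) + 1  # one vote per bin
    return acc
```

For twenty points on the vertical line x = 5, the bin (rho=5, theta=0) collects all twenty votes, so thresholding the accumulator recovers the line.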
In daily life, camera modules have been applied to various terminals, and people can take pictures, record videos, and so on with them. The quality of the images acquired by the terminal through the camera module differs across scenes: for example, image quality is better in the daytime, when the light intensity is strong, and poorer at night, when the light intensity is weak.
At present, a set of optical filters is arranged in the lens module of the camera module, and the optical filter in use is switched according to the light intensity conditions in the daytime and at night, so that the quality of images acquired at night can be improved. Please refer to fig. 1, which shows a schematic structural diagram of a filter switching structure according to an exemplary embodiment of the present application. As shown in fig. 1, the filter switching structure 100 includes a filter assembly 110, a rocker arm 120, a magnet 130, an electromagnetic coil 140, a first electrode 150, and a second electrode 160.
Alternatively, the optical filter switching structure 100 may be applied to a camera module, or may be installed in a terminal. The optical filter assembly 110 may at least include a first optical filter and a second optical filter, the optical filter assembly 110 may be mechanically connected to the rocker arm 120, the rocker arm 120 is integrally connected to the magnet 130, and the terminal supplies power to the electromagnetic coil 140 through the first electrode 150 and the second electrode 160, so that the first electromagnetic pole 141 and the second electromagnetic pole 142 are N poles or S poles, respectively, and thus the electromagnetic coil generates electromagnetic force to push the magnet 130 to rotate. For example, in fig. 1, if the terminal supplies power to the electromagnetic coil 140 through the first electrode 150 and the second electrode 160, the first electromagnetic pole 141 is an S pole, and the second electromagnetic pole 142 is an N pole, the magnet 130 can be pushed to rotate counterclockwise, so as to drive the optical filter to switch.
In the electromagnetically controlled filter switching structure shown in fig. 1, after a switch is completed, the active optical filter may drift from its correct position to another position due to vibration of the external environment or a shift in its own hardware, so that light passing through two different optical filters appears in one picture. That is, a single image picture contains a first image region and a second image region, where the first image region corresponds to light transmitted through the infrared band-pass filter and the second image region corresponds to light transmitted through the infrared cut filter. In other words, because the infrared band-pass filter in the camera module has shifted, part of the infrared cut filter and part of the infrared band-pass filter both lie in the path of light entering the camera lens, producing the two image regions.
To detect in time whether the optical filter in the camera module has shifted, a corresponding sensor is usually arranged at a fixed position, for example in the clamping groove inside the camera module where the optical filter is switched; sensors are arranged on the optical filter and in the clamping groove, and whether the position of the optical filter has shifted is determined by these sensors.
In order to reduce the number of devices in the camera module, simplify its structure, and improve its space utilization, a solution is proposed in the present application. Please refer to fig. 2, which shows a flowchart of an offset detection method provided in an exemplary embodiment of the present application. The offset detection method can be used in the camera module shown in fig. 1, where the camera module includes an optical filter assembly, the optical filter assembly includes at least two optical filters, and the at least two optical filters filter light in different frequency bands. As shown in fig. 2, the offset detection method may include the following steps.
Step 201, acquiring a first image picture acquired by a camera module.
Optionally, the first image frame may be any frame of image collected by the camera module in the working process. For example, in the working process of the camera module, the camera module collects images of the external environment in real time through the camera, and accordingly, each frame of image collected in real time can be regarded as a first image picture collected by the camera module.
Optionally, the camera module provided by the present application may include a driving device and at least two optical filters that filter light in different frequency bands; for example, one of the optical filters may be an infrared cut-off filter, another an infrared band-pass filter, and another an ultraviolet filter. Alternatively, the driving device may be an electromagnetic coil as shown in fig. 1, which generates an electromagnetic force when energized so that the optical filter is switched. In the present application, the working filtering frequency bands of the at least two optical filters are different, so that the optical filter assembly can switch between different filters according to the external environment.
Step 202, performing edge detection on the first image picture to obtain a first edge image, where the first edge image is composed of edge line segments of each object in the first image picture.
Optionally, the camera module performs edge detection on the acquired first image picture to obtain a first edge image composed of the edge line segments of each object contained in the first image picture. For example, the camera module may convert the first image picture into a first grayscale image, perform noise reduction on the first grayscale image, determine the luminance gradient in the noise-reduced first grayscale image (the luminance gradient indicates the rate of change of luminance in the grayscale image), and perform edge tracking according to the luminance gradient to obtain the first edge image.
Step 203, performing Hough line detection on the edge line segments of each object in the first edge image to obtain a target edge line segment, where the target edge line segment is an edge line segment whose accumulated value is greater than a preset accumulation threshold.
The Hough line detection may include performing a Hough transform on the pixel points where the edge line segments of each object in the first edge image are located, calculating the accumulated value corresponding to each object's edge line segment, and determining the target edge line segment from these accumulated values, the accumulated value of the target edge line segment being greater than the preset accumulation threshold.
Optionally, the camera module performs hough transform on the pixel points where each edge line segment in the first edge image is located, and calculates an accumulated value corresponding to the edge line segment of each object. For example, there are edge line segments corresponding to three objects in the first edge image, and the camera module may perform hough transform on pixel points where the first object, the second object, and the third object are located, and calculate respective accumulated values of the edge line segments of the first object, the second object, and the third object.
Optionally, the camera module determines a target edge line segment greater than a preset accumulation threshold value through an accumulated value corresponding to the edge line segment of each object. For example, the camera module detects accumulated values corresponding to edge line segments of each object, and determines an edge line segment corresponding to an accumulated value as a target edge line segment when the accumulated value is greater than a preset accumulated threshold value. When a certain accumulated value is not greater than a preset accumulated threshold value, the edge line segment corresponding to the accumulated value can be removed from the first edge image. The preset accumulation threshold may be preset by a developer.
And step 204, detecting the line segment parameters of the target edge line segment.
Optionally, the camera module detects the line segment parameters of each obtained target edge line segment. The line segment parameters are measured in the edge image obtained by removing the non-target edge line segments from the first edge image, and may include one or more of the slope, width, area, path, and curvature of the target edge line segment in the first edge image.
Step 205, when the line segment parameters of the target edge line segment meet the preset parameter range, determining that the position of the optical filter used by the camera module is shifted.
Optionally, the camera module detects line segment parameters of the target edge line segment, and if the line segment parameters of the target edge line segment meet a preset parameter range, it may be determined that the position of the optical filter being used by the camera module is shifted. If the line segment parameters of the target edge line segment do not conform to the preset parameter range, it can be determined that the position of the filter used by the camera module is not shifted. The preset parameter range may also be preset by a developer.
In summary, in the embodiment of the present application, a first image picture acquired by the camera module is obtained, edge detection is performed on it to obtain a first edge image, a target edge line segment is determined from the first edge image, and its line segment parameters are detected; when the line segment parameters of the target edge line segment fall within the preset parameter range, it is determined that the position of the optical filter being used by the camera module has shifted. The present application analyzes the image picture acquired by the camera module, identifies the target edge line segment in the picture that may be caused by a shifted filter position, and detects it, thereby determining whether the filter in use has shifted. There is no need to arrange multiple sensors at the positions the filter occupies after switching, which simplifies the structure of the camera module and improves space utilization. Whether the filter has shifted can be detected directly from the image, without sensing and transmitting filter position parameters through a position sensor, which also improves detection efficiency and detection precision.
In a possible implementation manner, the embodiment shown in fig. 2 is described by taking as an example the case where the line segment parameters of the target edge line segment include the line segment slope of the target edge line segment in the first edge image and the line segment width it occupies in the first edge image, and the preset parameter range includes a preset slope range and a preset width range.
Referring to fig. 3, a flowchart of a method of detecting an offset according to an exemplary embodiment of the present application is shown. The offset detection method can be used in the camera module shown in fig. 1, where the camera module includes an optical filter assembly, the optical filter assembly includes at least two optical filters, and the frequency bands of the at least two optical filters for filtering light are different. As shown in fig. 3, the offset detection method may include several steps as follows.
Step 301, acquiring a first image picture acquired by a camera module.
In the working process of the camera module, the camera module collects images of an external environment in real time through the camera, and correspondingly, each frame of image collected in real time can be regarded as a first image picture collected by the camera module. That is, in the present application, the process of detecting whether the filter is shifted may be detected in real time during the use of the camera module.
Step 302, performing edge detection on the first image picture to obtain a first edge image, where the first edge image is composed of edge line segments of each object in the first image picture.
Optionally, the camera module performs edge detection on the acquired first image picture, so as to obtain a first edge image composed of edge line segments of each object in the first image picture. Wherein, each object in the first image picture can be each object contained in the first image picture.
Optionally, the edge detection may use any edge detection algorithm based on the Sobel, Laplacian, or Canny operator. For example, with a Canny-based algorithm, the camera module may convert the first image picture into a first grayscale image, perform noise reduction on it with a Gaussian filter, calculate the luminance gradient of each pixel in the noise-reduced first grayscale image (the luminance gradient indicates the rate of change of luminance in the grayscale image), and perform edge tracking according to the luminance gradient to obtain the first edge image.
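The gradient step described above can be sketched in pure Python. The snippet below applies Sobel kernels to a small grayscale grid and thresholds the gradient magnitude into a binary edge map; it is a minimal illustration of the luminance-gradient idea only, not the full Canny pipeline (no Gaussian noise reduction, non-maximum suppression, or hysteresis edge tracking), and the threshold value is an arbitrary assumption.

```python
def sobel_edges(gray, thresh=100):
    """Return a binary edge map by thresholding Sobel gradient magnitude.

    `gray` is a list of rows of pixel intensities (0-255); border pixels
    are left as non-edges for simplicity.
    """
    h, w = len(gray), len(gray[0])
    edges = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            # Horizontal and vertical Sobel responses.
            gx = (gray[y-1][x+1] + 2*gray[y][x+1] + gray[y+1][x+1]
                  - gray[y-1][x-1] - 2*gray[y][x-1] - gray[y+1][x-1])
            gy = (gray[y+1][x-1] + 2*gray[y+1][x] + gray[y+1][x+1]
                  - gray[y-1][x-1] - 2*gray[y-1][x] - gray[y-1][x+1])
            if (gx * gx + gy * gy) ** 0.5 > thresh:
                edges[y][x] = 1
    return edges

# A vertical step edge: left half dark, right half bright.
gray = [[0, 0, 0, 255, 255, 255] for _ in range(5)]
edges = sobel_edges(gray)
# Edge pixels appear along the dark/bright boundary (columns 2 and 3).
```

A real module would run this over the full frame; the 5x6 grid here only demonstrates that the gradient peaks exactly where intensity changes.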
Please refer to fig. 4 and fig. 5, which show image diagrams of a first image picture and a first edge image according to an exemplary embodiment of the present application. As shown in fig. 4, the first image picture 400 contains objects 401; after the camera module performs edge detection on the first image picture 400 shown in fig. 4, the first edge image 500 shown in fig. 5 is obtained, which contains the edge line segments 501 of the objects.
Step 303, performing Hough line detection on the edge line segments of each object in the first edge image to obtain a target edge line segment, where the target edge line segment is an edge line segment whose accumulated value is greater than a preset accumulation threshold.
Optionally, the process of obtaining the target edge line segment through hough line detection may be as follows: carrying out Hough transform on pixel points where edge line segments of all objects in the first edge image are located, and calculating accumulated values corresponding to the edge line segments of all the objects; and determining a target edge line segment according to the accumulated value corresponding to the edge line segment of each object, wherein the accumulated value of the target edge line segment is greater than a preset accumulated threshold value.
Optionally, the camera module performs a Hough transform on all non-zero pixel points in the first edge image and calculates the accumulated value corresponding to the edge line segment of each object. For example, the camera module transforms all non-zero pixel points in the first edge image in fig. 6 into Hough space one by one, accumulates the results into a Hough table, and counts the accumulated value of each object. Please refer to table 1, which shows a Hough table according to an exemplary embodiment of the present application.
Accumulated value        Edge line segment
Accumulated value one    Edge line segment one
Accumulated value two    Edge line segment two
Accumulated value three  Edge line segment three
……                       ……
TABLE 1
As shown in table 1, for the first edge image, each accumulated value may represent an edge line segment, that is, the accumulated value obtained through hough transform may have a corresponding relationship with each edge line segment in the first edge image, and the camera module may learn the corresponding edge line segment through the accumulated value.
Optionally, the camera module may screen the Hough table, filter out the edge line segments whose accumulated values are greater than a preset accumulation threshold, and determine them as target edge line segments. For example, the camera module compares the accumulated value corresponding to each object's edge line segment with the preset accumulation threshold; when an accumulated value is greater than the threshold, the corresponding edge line segment is determined to be a target edge line segment, and when an accumulated value is not greater than the threshold, the corresponding edge line segment can be removed from the first edge image. The preset accumulation threshold may be preset by a developer.
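The accumulate-then-screen procedure can be illustrated with a minimal pure-Python Hough transform over (rho, theta) space. The grid size, one-degree angular resolution, and threshold below are illustrative assumptions; a production module would use an optimized implementation.

```python
import math

def hough_accumulate(edges, thetas_deg=range(0, 180)):
    """Accumulate votes in (rho, theta) space for every non-zero edge pixel."""
    acc = {}
    for y, row in enumerate(edges):
        for x, v in enumerate(row):
            if not v:
                continue
            for t in thetas_deg:
                theta = math.radians(t)
                # Normal-form line parameter: rho = x*cos(theta) + y*sin(theta)
                rho = round(x * math.cos(theta) + y * math.sin(theta))
                acc[(rho, t)] = acc.get((rho, t), 0) + 1
    return acc

def target_lines(acc, threshold):
    """Keep only the (rho, theta) cells whose accumulated value exceeds the threshold."""
    return {cell: votes for cell, votes in acc.items() if votes > threshold}

# A vertical line at x = 2 in a 5x5 binary edge map.
edges = [[1 if x == 2 else 0 for x in range(5)] for _ in range(5)]
acc = hough_accumulate(edges)
lines = target_lines(acc, threshold=4)
# The cell rho = 2, theta = 0 degrees collects one vote per pixel, i.e. 5 votes.
```

Collinear pixels all vote for the same (rho, theta) cell, which is why a high accumulated value identifies a line segment.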
In a possible implementation manner, when at least two target edge line segments are determined, the camera module may further sort the target edge line segments according to their corresponding accumulated values; in the subsequent step 304, the line segment parameters of the target edge line segments are detected sequentially according to the sorting result. That is, when at least two edge line segments in the first edge image are determined to be target edge line segments, the camera module sorts them according to their respective accumulated values.
Optionally, the camera module may sort the target edge line segments in descending order of their accumulated values. For example, three target edge line segments are determined: a first, a second, and a third target edge line segment, whose accumulated values are a first, a second, and a third accumulated value, respectively. If the second accumulated value is greater than the third accumulated value, which is greater than the first accumulated value, the camera module sorts the target edge line segments in descending order of accumulated value, obtaining the order: second target edge line segment, third target edge line segment, first target edge line segment.
Alternatively, the camera module may sort the target edge line segments in ascending order of the difference between each accumulated value and the preset accumulation threshold. For example, three target edge line segments are determined: a first, a second, and a third target edge line segment, whose accumulated values are a first, a second, and a third accumulated value, respectively. The difference between the first accumulated value and the preset accumulation threshold is a first difference, that between the second accumulated value and the threshold is a second difference, and that between the third accumulated value and the threshold is a third difference. If the camera module determines that the second difference is greater than the first difference, which is greater than the third difference, it sorts the target edge line segments in ascending order of difference, obtaining the order: third target edge line segment, first target edge line segment, second target edge line segment.
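Both orderings reduce to simple key-based sorts. The dictionary representation of a segment and the numeric accumulated values below are illustrative assumptions:

```python
def sort_by_accumulated_value(segments):
    """Sort target edge line segments by accumulated value, largest first."""
    return sorted(segments, key=lambda s: s["acc"], reverse=True)

def sort_by_distance_to_threshold(segments, threshold):
    """Sort by the gap between each accumulated value and the threshold, smallest first."""
    return sorted(segments, key=lambda s: abs(s["acc"] - threshold))

segments = [
    {"name": "first",  "acc": 120},
    {"name": "second", "acc": 200},
    {"name": "third",  "acc": 150},
]
by_value = sort_by_accumulated_value(segments)
# Descending accumulated value: second (200), third (150), first (120).
by_gap = sort_by_distance_to_threshold(segments, threshold=100)
# Ascending gap to threshold 100: first (20), third (50), second (100).
```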
Optionally, the camera module may have a first cache, where the first cache may cache each accumulated value determined that the accumulated value is greater than the preset accumulated threshold value, and sort the cached accumulated values in subsequent sorting.
And step 304, detecting the line segment parameters of the target edge line segment.
Optionally, the camera module sequentially detects the line segment parameters of the obtained target edge line segments according to the above sorting. Optionally, the camera module may further include a second cache, where the second cache may store the accumulated values of the sorted target edge line segments, and the camera module detects each target edge line segment from the second cache according to the sorting.
The line segment parameters are measured in the edge image obtained by removing the non-target edge line segments from the first edge image, and include the line segment slope and the line segment width. In this step, detecting the line segment parameters of the target edge line segment includes detecting the line segment slope and detecting the line segment width.
Optionally, the camera module may detect the line segment slope first and detect the line segment width only when the slope meets the requirement: when the line segment slope is within the preset slope range, the line segment width occupied by the target edge line segment in the first edge image is acquired, and when that width is within the preset width range, the line segment parameters of the target edge line segment are determined to conform to the preset parameter range. When acquiring slopes, the camera module can record, for the edge line segments obtained after the Hough transform, the slope corresponding to each accumulated value; when the slope of the target edge line segment is needed in this step, it can be looked up among these slopes by the segment's accumulated value.
The camera module obtains the line segment slope of the target edge line segment and compares it with the preset slope range; when the slope is within that range, the slope requirement is met and the line segment width of the target edge line segment is then obtained. For example, if the preset slope range is 89 degrees to 91 degrees and the slope of the target edge line segment is within it, the line segment width is obtained next; if the slope is not within the range, the target edge line segment can be removed from the candidates and the next target edge line segment is detected according to the sorting.
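The slope-first, width-second gate described above might be sketched as follows. The width range used here is an illustrative assumption, since the application only specifies that some preset width range exists:

```python
def segment_offset_suspect(slope_deg, width_px,
                           slope_range=(89.0, 91.0),
                           width_range=(2.0, 8.0)):
    """Check the slope first; only if it falls in the preset range, check the width.

    Returns True when both parameters fall within their preset ranges,
    i.e. the segment is treated as evidence of a shifted filter.
    The width_range default is a hypothetical value for illustration.
    """
    if not (slope_range[0] <= slope_deg <= slope_range[1]):
        return False  # segment discarded; move on to the next candidate
    return width_range[0] <= width_px <= width_range[1]

# A near-vertical, few-pixels-wide boundary line is flagged; an oblique one is not.
```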
In a possible implementation manner, when the slope of the target edge line segment in the first edge image is within the preset slope range, the camera module may acquire the line segment width occupied by the target edge line segment in the first edge image as follows: obtain each pixel point occupied by the target edge line segment in the first edge image, group those pixel points along the width direction of the segment, calculate the average of the widths of the groups, and take that average as the line segment width occupied by the target edge line segment in the first edge image.
Referring to fig. 6, a schematic diagram of a target edge line segment in a first edge image according to an exemplary embodiment of the present application is shown. As shown in fig. 6, the first edge image 600 includes a target edge line segment 601 that occupies pixels across several rows and columns of the image; the camera module obtains each pixel point occupied by the target edge line segment 601 in the first edge image and takes the average width of the pixel groups along the segment's width direction as the width of the target edge line segment.
In a possible implementation manner, when the camera module acquires each pixel point occupied by the target edge line segment in the first edge image, the following manner may be adopted: acquiring a first pixel point of a target edge line segment in a first edge image; taking the first pixel point as a pixel starting block, and acquiring an adjacent pixel set, wherein the adjacent pixel set is a set formed by all pixel points adjacent to the pixel starting block; for the adjacent pixel set, detecting whether the difference value between the pixel value of each pixel point in the adjacent pixel set and the pixel value of the first pixel point is smaller than a preset difference threshold value; merging each pixel point which is smaller than a preset difference threshold value in the adjacent pixel set with the pixel starting block to serve as a new pixel starting block, and executing the step of obtaining the adjacent pixel set until the difference value between the pixel value of each pixel point in the adjacent pixel set and the pixel value of the first pixel point is not smaller than the preset difference threshold value; and taking each pixel point contained in the finally obtained pixel starting block as each pixel point occupied by the target edge line segment in the first edge image.
Optionally, the first pixel point may be the pixel point at the center of the target edge line segment, or any pixel point the target edge line segment occupies in the first edge image; adjacent pixel points are screened according to the pixel value of the first pixel point to determine whether they belong to the same target edge line segment as the first pixel point. Refer to fig. 7, which shows a schematic diagram of a target edge line segment in another first edge image according to an exemplary embodiment of the present application. As shown in fig. 7, the first edge image 700 includes a target edge line segment 701, which contains a first pixel point 702. The camera module expands outward from the first pixel point 702, checking whether the absolute difference between each surrounding pixel's value and the value of the first pixel point 702 is smaller than the preset difference threshold; if so, that pixel is considered to belong to the same target edge line segment as the first pixel point 702, and if not, it is considered not to. After the adjacent pixel points of the pixel starting block 702 have been examined, each pixel point whose difference is smaller than the preset difference threshold is merged with the pixel starting block to form a new pixel starting block, the step of obtaining an adjacent pixel set is executed on the new pixel starting block, and so on, until the difference between the value of every pixel in the adjacent pixel set and the value of the first pixel point is no longer smaller than the preset difference threshold, thereby obtaining every pixel point occupied by the target edge line segment.
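The pixel-starting-block expansion described above is essentially a flood fill gated by a pixel-value difference threshold. Below is a minimal sketch, assuming 8-connected neighbours and an illustrative stripe image:

```python
from collections import deque

def grow_segment_pixels(image, seed, diff_threshold):
    """Collect the pixels belonging to the same edge line segment as the seed.

    Starting from the seed (the "first pixel point"), repeatedly absorb
    8-connected neighbours whose value differs from the seed's value by
    less than the threshold; stop when no neighbour qualifies.
    """
    h, w = len(image), len(image[0])
    sy, sx = seed
    seed_val = image[sy][sx]
    block = {seed}            # the growing pixel starting block
    queue = deque([seed])
    while queue:
        y, x = queue.popleft()
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                ny, nx = y + dy, x + dx
                if (0 <= ny < h and 0 <= nx < w and (ny, nx) not in block
                        and abs(image[ny][nx] - seed_val) < diff_threshold):
                    block.add((ny, nx))
                    queue.append((ny, nx))
    return block

# A bright vertical stripe (value 200) on a dark background (value 10).
image = [[200 if x in (2, 3) else 10 for x in range(6)] for _ in range(5)]
pixels = grow_segment_pixels(image, seed=(2, 2), diff_threshold=50)
# All 10 stripe pixels (rows 0-4, columns 2-3) are collected; background is excluded.
```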
Optionally, when the first pixel point is a pixel point at the center position of the target edge line segment, the camera module may determine the center of the target edge line segment by combining the start point and the end point of the line segment obtained in the hough transform process with the position on the first image picture, so as to obtain the first pixel point, and perform outward diffusion type detection from the center.
Optionally, after the camera module obtains the position of each pixel point occupied by the target edge line segment, it groups the pixel points along the width direction of the segment, calculates the average of the group widths, and takes that average as the line segment width occupied by the target edge line segment in the first edge image. Once the pixel positions are known, the length and width directions of the target edge line segment can be determined, and the average width of the pixel groups along the width direction is calculated. Referring to fig. 8, a schematic diagram of another target edge line segment is shown according to an exemplary embodiment of the present application. As shown in fig. 8, the first edge image 800 includes a target edge line segment 801, a length direction 802, a width direction 803, a first group of pixel points 804, and a second group of pixel points 805; the camera module steps along the length direction 802 pixel by pixel, obtains each group of pixel points of the target edge line segment 801 in the width direction 803, and uses the average width of these groups as the line segment width of the target edge line segment 801.
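The width-averaging step can be sketched as follows, assuming a near-vertical segment so that each image row forms one group in the width direction (the pixel coordinates used are illustrative):

```python
def average_segment_width(pixels):
    """Average the line segment width over groups of pixels taken row by row.

    `pixels` is the set of (row, col) coordinates occupied by the segment;
    for a near-vertical segment each row is one group in the width direction,
    and the group width is the number of pixels in that row.
    """
    widths = {}
    for row, _ in pixels:
        widths[row] = widths.get(row, 0) + 1
    return sum(widths.values()) / len(widths)

# A vertical segment 2 pixels wide over 5 rows, with one 3-pixel-wide row.
pixels = {(r, c) for r in range(5) for c in (2, 3)} | {(2, 4)}
avg = average_segment_width(pixels)
# Group widths per row: 2, 2, 3, 2, 2 -> average 2.2
```

Averaging over all groups smooths out local irregularities in the segment, such as the 3-pixel row above.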
And 305, when the line segment parameters of the target edge line segment accord with the preset parameter range, determining that the position of the optical filter used by the camera module is shifted.
Optionally, when the line segment parameters of the target edge line segment are a line segment slope and a line segment width, and when the line segment slope of the target edge line segment is within a preset slope range and the line segment width is within a preset width range, it is determined that the position of the optical filter in use by the camera module is shifted.
In a possible implementation manner, when the line segment parameters of the target edge line segment conform to the preset parameter range, the camera module may further check whether the detection cache stores a historical edge line segment at the same position in the pixel coordinate system as the target edge line segment. A historical edge line segment is a line segment, determined from the N frames preceding the first image picture, whose line segment parameters conform to the preset parameter range; the pixel coordinate system is that of the first edge image, and N is a positive integer. When a historical edge line segment at the same position in the pixel coordinate system as the target edge line segment is found in the cache, it is determined that the position of the filter being used by the camera module has shifted.
That is, when the line segment parameters of the target edge line segment conform to the preset parameter range, the camera module checks whether the detection cache stores a historical edge line segment at the same position in the pixel coordinate system as the target edge line segment. If it does, the target edge line segment has appeared at the same position in each of the previous N frames, and is therefore caused by the filter position, so the position of the filter being used by the camera module is determined to have shifted. If it does not, the next image frame is acquired and the image pictures captured by the camera module continue to be checked in real time. After judging the target edge line segments of the first image picture, the camera module can also store any target edge line segment of the first image picture that conforms to the preset parameter range into the detection cache, for judging the target edge line segments of subsequent image pictures.
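The detection-cache check might look like the following sketch. The cache layout (a segment's pixel-coordinate position tuple mapped to the recent frame indices it appeared in) and the window size N are illustrative assumptions:

```python
def filter_shift_confirmed(segment_pos, history_cache):
    """Confirm a shift only if the same segment position was seen in earlier frames.

    `history_cache` maps pixel-coordinate positions of previously detected
    qualifying segments (from the last N frames) to the frames they appeared in.
    """
    return segment_pos in history_cache

def record_segment(segment_pos, frame_idx, history_cache, n_frames=5):
    """Store a qualifying segment and drop cache entries older than N frames."""
    history_cache.setdefault(segment_pos, []).append(frame_idx)
    for pos in list(history_cache):
        history_cache[pos] = [f for f in history_cache[pos]
                              if frame_idx - f < n_frames]
        if not history_cache[pos]:
            del history_cache[pos]

cache = {}
# Position encoded as (x1, y1, x2, y2) endpoints of the segment (hypothetical format).
record_segment((320, 0, 320, 479), frame_idx=10, history_cache=cache)
confirmed = filter_shift_confirmed((320, 0, 320, 479), cache)  # seen before -> True
missing = filter_shift_confirmed((100, 0, 100, 479), cache)    # never seen -> False
```

Requiring the same segment position across several consecutive frames filters out one-off scene edges that merely happen to satisfy the parameter range.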
Optionally, in the process of determining whether the position of the optical filter in use by the camera module is shifted by detecting the line segment parameter of the target edge line segment, in addition to detecting the slope and the width of the target edge line segment, the length of the target edge line segment, the area of the pixel point occupied by the target edge line segment, and the like may also be determined, which is not limited in this embodiment.
In step 306, the currently needed filter in the filter assembly is determined.
Optionally, after the camera module determines that the optical filter is shifted, the optical filter that is currently needed by the optical filter assembly may be determined continuously.
In a possible implementation manner, the camera module may obtain the most recent filter switching signal it received, where the filter switching signal is used to control the filter assembly to switch from the previously used filter to the filter that needs to be used, and determine the currently needed filter in the filter assembly from that signal. For example, in the ICR provided by this application, changes in ambient light can be detected through a photoresistor: when the resistance of the photoresistor is lower than a preset threshold, the external environment can be regarded as daytime, and when it is not lower than the preset threshold, the external environment can be regarded as nighttime. When the resistance crosses from one side of the preset threshold to the other, the external environment has changed and a filter switching signal can be sent to the controller. Alternatively, when the external environment is daytime, the filter switcher needs to use the infrared cut filter, and when it is nighttime, the infrared band-pass filter. If the most recent filter switching signal received by the camera module switched from the infrared band-pass filter to the infrared cut filter, the camera module can obtain that signal in this step and thereby determine that the currently needed filter is the infrared cut filter.
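The photoresistor-based decision reduces to a simple threshold test. A minimal sketch; the resistance values and threshold below are illustrative assumptions, since the actual threshold is device-specific:

```python
def required_filter(photoresistor_ohms, day_threshold_ohms):
    """Pick the filter the assembly should be using from the light reading.

    Below the threshold the scene is treated as daytime (infrared cut filter);
    otherwise as nighttime (infrared band-pass filter). A photoresistor's
    resistance drops as illumination increases.
    """
    if photoresistor_ohms < day_threshold_ohms:
        return "infrared cut filter"
    return "infrared band-pass filter"

# Bright daylight -> low resistance -> infrared cut filter.
```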
In a possible implementation manner, the camera module may further obtain the optical filters corresponding to the first image picture, and determine the optical filter that currently needs to be used from among them according to the illumination intensity of the current environment of the camera module. For example, the camera module detects, from the obtained first image picture, that the picture contains an image region obtained through an infrared band-pass filter and an image region obtained through an ultraviolet filter, which indicates that the filters corresponding to the first image picture are the infrared band-pass filter and the ultraviolet filter; the optical filter that currently needs to be used is then determined from these two filters according to the illumination intensity of the current environment of the camera module.
Step 307, controlling the driving assembly to provide a driving force to the filter assembly, so as to adjust the position of the filter that needs to be used at present to a fixed position.
The fixed position is the position of the currently used optical filter after the optical filter is switched. Optionally, the camera module further includes a driving device, and the driving device can control switching of the optical filter.
In a possible implementation manner, after the camera module determines the optical filter which needs to be used currently, the image position corresponding to the optical filter which needs to be used currently can be acquired in the first image picture; determining the direction of the driving force according to the position relation between the image position and the target edge line segment; and controlling the driving assembly to provide driving force to the filter assembly according to the direction of the driving force so as to adjust the position of the filter which needs to be used at present to a fixed position.
For example, please refer to fig. 9, which shows an interface diagram of a first image picture according to an exemplary embodiment of the present application. As shown in fig. 9, the first image picture 900 includes a first image position 901, a second image position 902, and a target edge line segment 903. The first image position 901 is the position corresponding to the image region obtained through the infrared band-pass filter, the second image position 902 is the position corresponding to the image region obtained through the infrared cut filter, and the target edge line segment 903 is the target edge line segment, obtained through the above steps, that meets the preset parameter range. If the optical filter that currently needs to be used is the infrared band-pass filter, the camera module can determine, according to the positional relationship between the first image position 901 and the target edge line segment 903, that the direction of the driving force is rightward, and thus control the driving assembly to provide the driving force to the optical filter assembly so that the infrared band-pass filter is driven rightward and adjusted to the fixed position. For example, in fig. 1 described above, power may be supplied to the electromagnetic coil 140 through the first electrode 150 and the second electrode 160, so that the filter is switched to the infrared band-pass filter.
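The direction decision in this example can be sketched as a one-dimensional comparison along the switching axis. This is a hedged sketch: the coordinate convention (the needed filter's image region lying left of the boundary means the filter must be driven right) is an assumption of the sketch, not stated by the patent.

```python
# Hypothetical sketch of step 307's driving-direction decision.
def driving_direction(image_region_center_x, edge_segment_x):
    """Return the direction of the driving force along the switching axis,
    given the x-centre of the needed filter's image region and the x
    position of the target edge line segment."""
    if image_region_center_x < edge_segment_x:
        return "right"   # region sits left of the boundary: push filter right
    elif image_region_center_x > edge_segment_x:
        return "left"    # region sits right of the boundary: push filter left
    return "none"        # already aligned; no drive needed

# First image position 901 (band-pass region) lies left of segment 903,
# so the infrared band-pass filter is driven rightward, as in fig. 9.
print(driving_direction(image_region_center_x=120, edge_segment_x=480))
```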
Optionally, after the above steps, the camera module may further detect the first providing times, where the first providing times is the number of times the driving assembly has provided the driving force to the optical filter assembly. When the first providing times has not reached the preset upper limit, the step of obtaining the first image picture acquired by the camera module is performed. When the first providing times reaches the preset upper limit, early warning information is generated, where the early warning information is used to indicate that the position of the optical filter assembly has shifted.
Optionally, after one drive is provided, the camera module may record the first providing times corresponding to the target edge line segment in the first image picture that meets the preset parameter range. If the drive was triggered by the first determination that the optical filter has shifted, the first providing times is 1; if providing times have already been recorded for a target edge line segment at the same position in the previous N image frames, the providing times is incremented by 1.
For example, if the target edge line segment was detected at the same position in the previous 3 image frames and a drive was provided after each of those 3 judgments, the first providing times recorded by the camera module is 3, and after the current drive it can be adjusted to 4. Optionally, the first providing times may be recorded by a counter. For example, the camera module further includes a first counter, and after the camera module determines that the position of the optical filter in use has shifted, the first counter records the number of drives corresponding to the target edge line segment at that position. Optionally, the first counter may be opened when the line segment parameters of the target edge line segment are determined to meet the preset parameter range; its value is 0 when opened and is incremented by 1 after each drive is provided. If a target edge line segment exists at the same position in a subsequent frame and another drive is provided, the value of the first counter is incremented by 1 again, and so on. When a subsequent frame no longer contains a target edge line segment at that position, indicating that the previous drive restored the optical filter to the correct position, the first counter may be reset to zero or closed.
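The counter lifecycle described above can be sketched as a small state holder. The class and method names are ours, not the patent's; this is a minimal sketch of the open/increment/reset behaviour.

```python
# Hypothetical sketch of the "first counter" for the first providing times.
class DriveCounter:
    def __init__(self):
        self.value = None  # None means the counter is closed

    def open(self):
        self.value = 0     # opened at 0 when the segment first qualifies

    def on_drive_provided(self):
        self.value += 1    # one more drive for the same segment position

    def on_segment_absent(self):
        self.value = None  # drive succeeded; reset/close the counter

counter = DriveCounter()
counter.open()
for _ in range(3):               # segment seen at the same position in
    counter.on_drive_provided()  # 3 frames, one drive after each
print(counter.value)             # -> 3
counter.on_drive_provided()      # a 4th drive after the 4th detection
print(counter.value)             # -> 4
counter.on_segment_absent()      # segment gone: filter is back in place
print(counter.value)             # -> None
```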
That is to say, during detection of the target edge line segment, if the target edge line segment appears at the same position in multiple image frames (the previous N frames), this indicates that switching of the optical filter assembly in the camera module is abnormal. Optionally, if during drive correction the target edge line segment in subsequent frames moves in one direction until it disappears, the drive correction mechanism is working normally and the shifted optical filter can return to position; if during drive correction the target edge line segment still appears at the same position in subsequent frames, the drive correction mechanism is abnormal or the camera module hardware is defective.
Optionally, after the first providing times is obtained, it is checked against the preset upper limit: when the first providing times has not reached the preset upper limit, the step of obtaining the first image picture acquired by the camera module is performed; when the first providing times reaches the preset upper limit, early warning information is generated, where the early warning information is used to indicate that the position of the optical filter assembly has shifted.
Optionally, the camera module may compare the obtained first providing times with the preset upper limit, and when the first providing times has not reached the preset upper limit, continue to perform the step of obtaining the first image picture acquired by the camera module, that is, return to the first step to detect the next image frame.
When the first providing times reaches the preset upper limit, this indicates that the camera module has performed multiple drive adjustments but the target edge line segment still exists in the image picture. The camera module can then determine that the target edge line segment is caused by a hardware abnormality of the optical filter assembly, and generate early warning information to prompt the user. Optionally, the early warning information may be presented in the form of voice, video, vibration, or the like.
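The retry-then-warn control flow can be sketched as a loop with a capped drive count. This is a hedged sketch: `frame_has_segment` stands in for the whole detection pipeline (steps 301 to 305), and the upper limit value is a placeholder.

```python
# Hypothetical sketch of the monitor loop with the preset upper limit.
PRESET_UPPER_LIMIT = 3  # placeholder upper limit on the first providing times

def monitor(frame_has_segment, provide_drive):
    """Keep driving while a qualifying target edge line segment persists;
    warn once the first providing times reaches the preset upper limit."""
    providing_times = 0
    while frame_has_segment():
        if providing_times >= PRESET_UPPER_LIMIT:
            return "warning: filter assembly position shifted (hardware?)"
        provide_drive()
        providing_times += 1
    return "ok: no target edge line segment"

# Simulate a stuck filter: the segment never disappears despite driving,
# so the loop gives up after the upper limit and emits the warning.
result = monitor(frame_has_segment=lambda: True, provide_drive=lambda: None)
print(result)
```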
In a possible implementation manner, after steps 306 to 307 are completed, the camera module may further detect the optical filter currently in place and determine whether the optical filter that needs to be used has been adjusted to the fixed position. That is, after the position of the optical filter that currently needs to be used is adjusted to the fixed position, the camera module determines whether this position adjustment was correct by checking whether the optical filter at the fixed position is the one that currently needs to be used. If the optical filter at the fixed position is not the optical filter determined in the above steps, the camera module may perform step 307 again to retry the switch and, in one implementation manner, increment the first providing times.
Detecting whether the optical filter at the fixed position is the optical filter that currently needs to be used may be done by identifying, from the picture currently captured by the camera module, which optical filter the light forming the picture passed through, and comparing that filter with the optical filter that currently needs to be used: if they match, the switch was correct; if they do not match, the switch failed, and the process returns to step 307 to switch again.
In summary, in the embodiment of the present application, a first image picture acquired by the camera module is obtained, edge detection is performed on the first image picture to obtain a first edge image, a target edge line segment is determined from the first edge image, and the line segment parameters of the target edge line segment are detected; when the line segment parameters of the target edge line segment meet the preset parameter range, it is determined that the position of the optical filter being used by the camera module has shifted. This application analyzes the image picture acquired by the camera module, identifies and detects target edge line segments in the picture that may result from a shift of the optical filter position, and thereby determines whether the position of the optical filter in use has shifted. There is no need to arrange multiple sensors in the camera module at the positions the optical filters occupy after switching, which simplifies the structure of the camera module and improves space utilization; whether the optical filter has shifted can be detected directly from the image without sensing and transmitting optical filter position parameters through a position sensor, which also improves detection efficiency and detection precision.
In addition, it is further detected whether the detection cache stores a historical edge line segment at the same position in the pixel coordinate system as the target edge line segment. If a segment meeting the preset parameter range also exists among the historical edge line segments, it is determined that the target edge line segment is caused by a shift in the position of the optical filter used by the camera module, thereby improving the accuracy of judging the optical filter shift.
In addition, in the embodiment of the present application, after it is determined that the position of the optical filter in use by the camera module has shifted, the number of times the driving assembly provides the driving force to the optical filter assembly is recorded to decide whether to continue providing the driving force, so that the driving force is controlled and the accuracy of the shift correction is improved.
Optionally, the present application may acquire a monitoring image through a System on Chip (SoC). Please refer to fig. 10, which shows a method flowchart of an offset detection method according to an exemplary embodiment of the present application. The offset detection method can be used in the camera module shown in fig. 1, where the camera module includes an optical filter assembly, the optical filter assembly includes at least two optical filters, and the at least two optical filters filter light in different frequency bands. As shown in fig. 10, the offset detection method may include the following steps.
Step 1001, an SoC monitoring image is obtained.
Step 1002, judging whether the position of the ICR deviates or not according to the SoC monitoring image.
Optionally, the implementation manners of step 1001 to step 1002 may refer to the contents of step 301 to step 306 in the embodiment shown in fig. 3, which are not described herein again.
When the position of the ICR is deviated, the step 1003 is executed, otherwise, the step 1001 is returned to, and the monitoring is continued.
And step 1003, controlling the driving component to provide driving force so as to enable the ICR to return to the fixed position.
In step 1004, it is detected whether the number of times the driving force is provided reaches a number upper limit.
Optionally, the implementation manners of step 1003 to step 1004 may refer to the contents in step 308 in the embodiment shown in fig. 3, and are not described herein again.
Wherein, if the number of times of providing the driving force reaches the upper limit of the number of times, step 1005 is executed, otherwise, the step 1001 is returned to, and the monitoring is continued.
Step 1005, generating early warning information.
To sum up, this application monitors the image in real time through the SoC, analyzes the image picture acquired by the camera module, identifies and detects target edge line segments in the picture that may result from a shift of the optical filter position, and thereby determines whether the position of the optical filter in use has shifted. There is no need to arrange multiple sensors in the camera module at the positions the optical filters occupy after switching, which simplifies the structure of the camera module and improves space utilization; whether the optical filter has shifted can be detected directly from the image without sensing and transmitting optical filter position parameters through a position sensor, which also improves detection efficiency and detection precision.
The following are embodiments of the apparatus of the present application that may be used to perform embodiments of the method of the present application. For details which are not disclosed in the embodiments of the apparatus of the present application, reference is made to the embodiments of the method of the present application.
Referring to fig. 11, a block diagram of an offset detection apparatus 1100 according to an exemplary embodiment of the present disclosure is shown, in which the offset detection apparatus can be applied to a camera module, the camera module includes an optical filter assembly, the optical filter assembly includes at least two optical filters, and frequency bands of light filtered by the at least two optical filters are different; the offset detection device includes:
a picture acquiring module 1101, configured to acquire a first image picture acquired by the camera module;
an image obtaining module 1102, configured to perform edge detection on the first image picture to obtain a first edge image, where the first edge image is composed of edge line segments of each object in the first image picture;
a line segment determining module 1103, configured to perform hough line detection on edge line segments of the objects in the first edge image to obtain a target edge line segment, where the target edge line segment is an edge line segment whose accumulated value corresponding to the edge line segment of each object is greater than a preset accumulation threshold;
a parameter detection module 1104, configured to detect a segment parameter of the target edge segment;
an offset determining module 1105, configured to determine that the position of the optical filter being used by the camera module is offset when the line segment parameter of the target edge line segment meets a preset parameter range.
In summary, in the embodiment of the present application, a first image picture acquired by the camera module is obtained, edge detection is performed on the first image picture to obtain a first edge image, a target edge line segment is determined from the first edge image, and the line segment parameters of the target edge line segment are detected; when the line segment parameters of the target edge line segment meet the preset parameter range, it is determined that the position of the optical filter being used by the camera module has shifted. This application analyzes the image picture acquired by the camera module, identifies and detects target edge line segments in the picture that may result from a shift of the optical filter position, and thereby determines whether the position of the optical filter in use has shifted. There is no need to arrange multiple sensors in the camera module at the positions the optical filters occupy after switching, which simplifies the structure of the camera module and improves space utilization; whether the optical filter has shifted can be detected directly from the image without sensing and transmitting optical filter position parameters through a position sensor, which also improves detection efficiency and detection precision.
Optionally, the line segment parameters include one or more of: the line segment slope of the target edge line segment in the first edge image, the line segment width occupied by the target edge line segment in the first edge image, the line segment length of the target edge line segment in the first edge image, the pixel area occupied by the target edge line segment in the first edge image, and the line segment path formed by the target edge line segment in the first edge image.
Optionally, the line segment parameters of the target edge line segment include the line segment slope of the target edge line segment in the first edge image and the line segment width occupied by the target edge line segment in the first edge image, and the preset parameter range includes a preset slope range and a preset width range, where the parameter detection module 1104 includes: a first obtaining unit and a second determining unit;
the first obtaining unit is configured to obtain a line segment width occupied by the target edge line segment in the first edge image when the slope of the line segment is within the preset slope range;
the second determining unit is configured to determine that the line segment parameter of the target edge line segment conforms to the preset parameter range when the line segment width is within the preset width range.
Optionally, the first obtaining unit includes: the system comprises a first acquisition subunit, a second acquisition subunit and a first calculation subunit;
the first obtaining subunit is configured to obtain each pixel point occupied by the target edge line segment in the first edge image;
the second obtaining subunit is configured to obtain, according to each pixel point, each group of pixel points in the width direction of the target edge line segment;
and the first calculating subunit is configured to calculate an average value of the widths of each group of pixels according to the width of each group of pixels, and obtain the average value as the line segment width occupied by the target edge line segment in the first edge image.
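The three subunits above (collect the occupied pixels, group them along the width direction, then average the group widths) can be sketched as follows. This is a hedged sketch: it assumes a near-vertical segment so that one group of pixels is formed per image row; the grouping convention is ours, not the patent's.

```python
# Hypothetical sketch of the width-averaging computation.
from collections import defaultdict

def segment_width(pixels):
    """pixels: iterable of (row, col) coordinates occupied by the segment.
    Groups pixels per row and returns the mean group width."""
    groups = defaultdict(list)          # one group of pixels per row
    for row, col in pixels:
        groups[row].append(col)
    widths = [max(cols) - min(cols) + 1 for cols in groups.values()]
    return sum(widths) / len(widths)    # average width across the groups

# A 4-row segment that is 3 pixels wide in every row:
pixels = [(r, c) for r in range(4) for c in (10, 11, 12)]
print(segment_width(pixels))  # -> 3.0
```

The returned average would then be compared against the preset width range.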
Optionally, the first obtaining subunit is specifically configured to:
Acquiring a first pixel point of the target edge line segment in the first edge image;
taking the first pixel point as a pixel starting block, and acquiring an adjacent pixel set, wherein the adjacent pixel set is a set formed by all pixel points adjacent to the pixel starting block;
for the adjacent pixel set, detecting whether the difference value between the pixel value of each pixel point in the adjacent pixel set and the pixel value of the first pixel point is smaller than a preset difference threshold value;
merging each pixel point which is smaller than the preset difference threshold value in the adjacent pixel set with the pixel starting block to serve as a new pixel starting block, and executing the step of obtaining the adjacent pixel set until the difference value between the pixel value of each pixel point in the adjacent pixel set and the pixel value of the first pixel point is not smaller than the preset difference threshold value;
and taking each pixel point contained in the finally obtained pixel start block as each pixel point occupied by the target edge line segment in the first edge image.
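The merging procedure in the four steps above is a form of region growing. The sketch below is a minimal illustration under assumed conventions (4-neighbour adjacency, a plain 2-D list as the image); the patent does not fix these details.

```python
# Hypothetical region-growing sketch of the pixel start block expansion.
def grow_segment_pixels(img, seed, preset_diff_threshold):
    """Return all pixels merged into the start block beginning at `seed`,
    keeping neighbours whose value is close to the first pixel's value."""
    rows, cols = len(img), len(img[0])
    seed_val = img[seed[0]][seed[1]]
    block, frontier = {seed}, [seed]
    while frontier:
        nxt = []
        for r, c in frontier:
            for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
                if 0 <= nr < rows and 0 <= nc < cols and (nr, nc) not in block:
                    # merge neighbours within the preset difference threshold
                    if abs(img[nr][nc] - seed_val) < preset_diff_threshold:
                        block.add((nr, nc))
                        nxt.append((nr, nc))
        frontier = nxt  # the merged pixels become the new start block edge
    return block

# A bright vertical stripe (value 200) on a dark background (value 10):
img = [[200 if c == 2 else 10 for c in range(5)] for r in range(4)]
pixels = grow_segment_pixels(img, seed=(0, 2), preset_diff_threshold=50)
print(sorted(pixels))  # -> the 4 stripe pixels in column 2
```

Growth stops exactly when no adjacent pixel's difference from the first pixel is below the threshold, matching the termination condition in the steps above.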
Optionally, the offset determining module 1105 is configured to, when the line segment parameters of the target edge line segment meet the preset parameter range, detect whether the detection cache stores a historical edge line segment at the same position in the pixel coordinate system as the target edge line segment, where the historical edge line segment is a line segment, determined from the N image frames preceding the first image picture, whose line segment parameters meet the preset parameter range, the pixel coordinate system is the pixel coordinate system of the first edge image, and N is a positive integer;
and when the historical edge line segment at the same position of the target edge line segment in the pixel coordinate system is stored in the detection cache, determining that the position of the optical filter used by the camera module is shifted.
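The detection-cache check can be sketched with a bounded history of the qualifying positions seen in the last N frames. This is a hedged sketch: positions are simplified to a single hashable key (here a column index), and requiring the position in every cached frame is one reading of the "same position in the previous N frames" condition.

```python
# Hypothetical sketch of the detection cache over the previous N frames.
from collections import deque

class DetectionCache:
    def __init__(self, n_frames):
        # keep the qualifying segment positions of the last N frames only
        self.history = deque(maxlen=n_frames)

    def push_frame(self, qualifying_positions):
        self.history.append(set(qualifying_positions))

    def shift_confirmed(self, position):
        """True only if all N cached frames contained the same position."""
        return (len(self.history) == self.history.maxlen
                and all(position in frame for frame in self.history))

cache = DetectionCache(n_frames=3)
for _ in range(3):
    cache.push_frame({480})            # same column in three prior frames
print(cache.shift_confirmed(480))      # -> True: treat as a filter shift
print(cache.shift_confirmed(10))       # -> False: a one-off edge, ignore
```

The `deque(maxlen=...)` automatically discards the oldest frame, so the cache never holds more than the previous N frames.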
Optionally, the number of the target edge line segments is at least two, and the apparatus further includes:
a line segment sorting module, configured to, before the parameter detecting module 1105 detects the line segment parameters of the target edge line segments, sort the target edge line segments according to the accumulated values corresponding to the target edge line segments;
the parameter detecting module 1105 is configured to sequentially detect the segment parameters of each target edge segment according to the sorting result.
Optionally, the line segment sorting module includes: a first sorting unit or a second sorting unit;
the first sorting unit is configured to sort the target edge line segments in descending order of their accumulated values; or,
and the second sorting unit is configured to sort the target edge line segments in ascending order of the difference between the accumulated value of each target edge line segment and the preset accumulation threshold.
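The two sorting options can be sketched in a few lines. Segments are represented here as hypothetical (id, accumulated_value) pairs and the threshold value is a placeholder; neither is taken from the patent.

```python
# Hypothetical sketch of the two sorting units.
PRESET_ACCUMULATION_THRESHOLD = 100  # placeholder value

segments = [("a", 180), ("b", 120), ("c", 150)]

# First sorting unit: descending accumulated value (most Hough votes first).
by_value = sorted(segments, key=lambda s: s[1], reverse=True)

# Second sorting unit: ascending distance from the preset threshold
# (segments that barely cleared the threshold are examined first).
by_margin = sorted(segments,
                   key=lambda s: abs(s[1] - PRESET_ACCUMULATION_THRESHOLD))

print([s[0] for s in by_value])   # -> ['a', 'c', 'b']
print([s[0] for s in by_margin])  # -> ['b', 'c', 'a']
```

The parameter detection module 1105 would then walk whichever ordering is chosen and check each segment's parameters in turn.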
Optionally, the camera module further includes a driving device, and the apparatus further includes:
the optical filter determining module is used for determining the optical filter which needs to be used currently in the optical filter assembly after the position of the optical filter which is used by the camera module is determined to be deviated;
and the driving force control module is used for controlling the driving assembly to provide the driving force for the optical filter component so as to adjust the position of the optical filter which needs to be used at present to a fixed position.
Optionally, the apparatus further comprises:
a number detection module, configured to detect the first providing times after the driving force control module controls the driving assembly to provide the driving force to the optical filter assembly to adjust the position of the optical filter that currently needs to be used to the fixed position, where the first providing times is the number of times the driving assembly has provided the driving force to the optical filter assembly;
and the step execution module is used for executing the step of acquiring the first image picture acquired by the camera module when the first providing times do not reach the upper limit of the preset times.
Optionally, the apparatus further comprises:
and the information generation module is used for generating early warning information when the first providing times reach the preset upper time limit, wherein the early warning information is used for indicating the position of the optical filter assembly to deviate.
Optionally, the apparatus further comprises:
a position obtaining module, configured to obtain, in the first image picture, the image position corresponding to the optical filter that currently needs to be used, before the driving assembly is controlled to provide the driving force to the optical filter assembly so as to adjust the position of the optical filter that currently needs to be used to the fixed position;
the direction determining module is used for determining the direction of the driving force according to the position relation between the image position and the target edge line segment;
the driving force control module is used for controlling the driving assembly to provide the driving force for the optical filter component according to the direction of the driving force so as to adjust the position of the optical filter which needs to be used at present to a fixed position.
Optionally, the filter determining module includes: a second acquiring unit and a second determining unit;
the second acquisition unit is used for acquiring various optical filters corresponding to the first image picture;
and the second determining unit is used for determining the optical filter which needs to be used currently from the various optical filters according to the illumination intensity of the current environment of the camera module.
In a possible implementation manner, the offset detection method may be applied to a camera module, where the camera module includes a memory and a processor, and the memory stores a computer program, and the computer program, when executed by the processor, causes the processor to implement the offset detection method as shown in any one or more of fig. 2 or fig. 3.
In a possible implementation manner, the camera module may be applied to a terminal device, and the terminal device may include at least one camera module as described above. Optionally, the terminal device may be a terminal device that can be equipped with a camera module.
For example, the terminal device may be a vehicle-mounted device, for example, a driving computer with a video recording function, or a wireless communication device externally connected to the driving computer.
Alternatively, the terminal device may be a roadside device, for example, a street lamp, a signal lamp or other roadside device with a monitoring function.
Alternatively, the terminal equipment may be user terminal equipment, such as mobile telephones (or "cellular" telephones) and computers with mobile terminals, for example portable, pocket-sized, handheld, computer-built-in or vehicle-mounted mobile devices. Such equipment may also be referred to as a station (STA), a subscriber unit, a subscriber station, a mobile station, a remote station, an access point (AP), a remote terminal, an access terminal, a user terminal, a user agent, a user device, or user equipment (UE). For example, the terminal device 110 may be a mobile phone, a tablet computer, an e-book reader, smart glasses, a smart watch, an MP4 (Moving Picture Experts Group Audio Layer IV) player, a notebook computer, a laptop computer, a desktop computer, and the like.
Optionally, the processor in the terminal device may include one or more processing cores. The processor connects various parts within the overall terminal device using various interfaces and lines, performs various functions of the terminal device and processes data by executing or executing instructions, programs, code sets, or instruction sets stored in the memory, and calling data stored in the memory. Alternatively, the processor may be implemented in hardware using at least one of Digital Signal Processing (DSP), Field-Programmable Gate Array (FPGA), and Programmable Logic Array (PLA). The processor may integrate one or more of a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), a modem, and the like. Wherein, the CPU mainly processes an operating system, a user interface, an application program and the like; the GPU is used for rendering and drawing display content; the modem is used to handle wireless communications. It is to be understood that the modem may be implemented by a communication chip without being integrated into the processor.
The Memory in the terminal device may include a Random Access Memory (RAM) or a Read-Only Memory (ROM). The memory may be used to store an instruction, a program, code, a set of codes, or a set of instructions. The memory may include a stored program area and a stored data area, wherein the stored program area may store instructions for implementing an operating system, instructions for implementing at least one function (such as a touch function, a sound playing function, an image playing function, etc.), instructions for implementing the various method embodiments described above, and the like. The storage data area may also store data created by the terminal device in use, and the like. It is understood that the terminal device may include more or less structural elements than those shown in the above structural block diagrams, for example, a power module, a speaker, a bluetooth module, a sensor, etc., which are not limited herein.
In an example where the terminal device is an in-vehicle terminal, the controller is an Electronic Control Unit (ECU) in the in-vehicle terminal. The vehicle-mounted terminal comprises the camera module shown in the figure 1, and the ECU can receive the optical filter switching signal and further control the optical filter to switch. For example, when the external environment changes from daytime to night, the ECU may control the camera module to switch from the currently used infrared cut filter to the infrared band pass filter, and provide a holding force for the infrared band pass filter in order to improve stability, so that the infrared band pass filter is fixed at the switched position.
The ECU acquires a first image picture acquired by the camera module; performing edge detection on the first image picture to obtain a first edge image, wherein the first edge image is composed of edge line segments of all objects in the first image picture; carrying out Hough transform on pixel points where edge line segments of all objects in the first edge image are located, and calculating accumulated values corresponding to the edge line segments of all the objects; determining a target edge line segment according to the accumulated value corresponding to the edge line segment of each object, wherein the accumulated value of the target edge line segment is greater than a preset accumulated threshold value; detecting the line segment parameters of the target edge line segment; and when the line segment parameters of the target edge line segment accord with the preset parameter range, determining that the position of the optical filter used by the camera module is shifted.
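The ECU pipeline above can be sketched end to end in miniature. This is a heavily hedged stand-in: a real implementation would use library routines (e.g. a Canny edge detector and a full Hough transform), whereas here a gradient-magnitude edge map and a per-column vote (a Hough transform restricted to vertical lines) keep the sketch self-contained; every threshold is a placeholder.

```python
# Hypothetical miniature of the ECU's shift-detection pipeline.
import numpy as np

def edge_image(img):
    """Simple gradient-magnitude edge map (stand-in for Canny)."""
    gx = np.abs(np.diff(img.astype(float), axis=1, prepend=0))
    gy = np.abs(np.diff(img.astype(float), axis=0, prepend=0))
    return (gx + gy) > 40            # placeholder edge threshold

def vertical_hough(edges):
    """Accumulate votes per column: a Hough transform restricted to
    vertical candidate lines. Each column's sum is its accumulated value."""
    return edges.sum(axis=0)

def detect_shift(img, accum_threshold, width_range=(1, 6)):
    """Return (shifted, column): True plus the column of the first target
    edge line segment whose parameters pass the (simplified) checks."""
    edges = edge_image(img)
    votes = vertical_hough(edges)
    for x in np.flatnonzero(votes > accum_threshold):
        width = 1                    # single-column stand-in for the width
        if width_range[0] <= width <= width_range[1]:
            return True, int(x)      # filter position shifted at column x
    return False, None

# A frame with a hard vertical boundary, as when two filters both cover
# part of the sensor: the left half is dark, the right half bright.
frame = np.zeros((60, 80), dtype=np.uint8)
frame[:, 40:] = 200                  # right half imaged through the other filter
shifted, column = detect_shift(frame, accum_threshold=50)
print(shifted, column)               # -> True 40
```

The boundary column wins the vote because every row contributes an edge pixel there, which is exactly why a shifted filter produces a target edge line segment with a large accumulated value.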
In summary, in the embodiment of the present application, a first image picture acquired by the camera module is obtained, edge detection is performed on the first image picture to obtain a first edge image, a target edge line segment is determined from the first edge image, and the line segment parameters of the target edge line segment are detected; when the line segment parameters of the target edge line segment meet the preset parameter range, it is determined that the position of the optical filter being used by the camera module has shifted. This application analyzes the image picture acquired by the camera module, identifies and detects target edge line segments in the picture that may result from a shift of the optical filter position, and thereby determines whether the position of the optical filter in use has shifted. There is no need to arrange multiple sensors in the camera module at the positions the optical filters occupy after switching, which simplifies the structure of the camera module and improves space utilization; whether the optical filter has shifted can be detected directly from the image without sensing and transmitting optical filter position parameters through a position sensor, which also improves detection efficiency and detection precision.
The embodiments of the present application also disclose a computer-readable storage medium storing a computer program which, when executed by a processor, implements the method in the foregoing method embodiments.
The embodiments of the present application also disclose a computer program product comprising a non-transitory computer-readable storage medium storing a computer program, the computer program being operable to cause a computer to perform the method in the foregoing method embodiments.
The embodiments of the present application also disclose an application publishing platform for publishing a computer program product which, when run on a computer, causes the computer to perform the method in the foregoing method embodiments.
It should be appreciated that reference throughout this specification to "one embodiment" or "an embodiment" means that a particular feature, structure or characteristic described in connection with the embodiment is included in at least one embodiment of the present application. Thus, the appearances of the phrases "in one embodiment" or "in an embodiment" in various places throughout this specification are not necessarily all referring to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. Those skilled in the art should also appreciate that the embodiments described in this specification are all alternative embodiments and that the acts and modules involved are not necessarily required for this application.
In the various embodiments of the present application, it should be understood that the sequence numbers of the above processes do not imply an order of execution; the execution order of the processes should be determined by their functions and internal logic, and should not constitute any limitation on the implementation of the embodiments of the present application.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer-accessible memory. Based on this understanding, the technical solution of the present application, or the part of it that in essence contributes to the prior art, or all or part of the technical solution, may be embodied in the form of a software product, which is stored in a memory and includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device, and may specifically be a processor in the computer device) to execute some or all of the steps of the methods of the embodiments of the present application.
It will be understood by those skilled in the art that all or part of the steps in the methods of the above embodiments may be implemented by a program instructing the relevant hardware, and the program may be stored in a computer-readable storage medium. The storage medium includes a Read-Only Memory (ROM), a Random Access Memory (RAM), a Programmable Read-Only Memory (PROM), an Erasable Programmable Read-Only Memory (EPROM), a One-Time Programmable Read-Only Memory (OTPROM), an Electrically Erasable Programmable Read-Only Memory (EEPROM), a Compact Disc Read-Only Memory (CD-ROM) or other optical disc memory, a magnetic disk memory, a magnetic tape memory, or any other computer-readable medium that can be used to carry or store data.
The foregoing describes in detail the offset detection method and apparatus, camera module, terminal device, and storage medium disclosed in the embodiments of the present application. Specific examples are used herein to illustrate the principles and implementations of the present application, and the description of the above embodiments is only intended to help understand the method and core idea of the present application. Meanwhile, a person skilled in the art may, based on the idea of the present application, make changes to the specific implementations and the scope of application. In summary, the content of this specification should not be construed as limiting the present application.

Claims (17)

1. An offset detection method, used in a camera module, wherein the camera module comprises an optical filter assembly, the optical filter assembly comprises at least two optical filters, and the at least two optical filters filter light in different frequency bands; the method comprises the following steps:
acquiring a first image picture acquired by the camera module;
performing edge detection on the first image picture to obtain a first edge image, wherein the first edge image is composed of edge line segments of all objects in the first image picture;
performing Hough line detection on the edge line segments of the objects in the first edge image to obtain a target edge line segment, wherein the target edge line segment is an edge line segment whose corresponding accumulated value is greater than a preset accumulated threshold;
detecting the line segment parameters of the target edge line segment;
and when the line segment parameters of the target edge line segment accord with the preset parameter range, determining that the position of the optical filter used by the camera module is shifted.
2. The method of claim 1, wherein the line segment parameters comprise one or more of: the slope of the target edge line segment in the first edge image, the width of the target edge line segment in the first edge image, the length of the target edge line segment in the first edge image, the pixel area of the target edge line segment in the first edge image, and the path formed by the target edge line segment in the first edge image.
3. The method of claim 2, wherein the line segment parameters of the target edge line segment comprise the slope of the target edge line segment in the first edge image and the width of the target edge line segment in the first edge image, the preset parameter range comprises a preset slope range and a preset width range, and the detecting the line segment parameters of the target edge line segment comprises:
when the slope of the line segment is within the range of the preset slope, acquiring the line segment width occupied by the target edge line segment in the first edge image;
and when the line segment width is within the preset width range, determining that the line segment parameters of the target edge line segment conform to the preset parameter range.
4. The method according to claim 3, wherein the obtaining the line segment width occupied by the target edge line segment in the first edge image when the line segment slope of the target edge line segment in the first edge image is within the preset slope range comprises:
acquiring each pixel point occupied by the target edge line segment in the first edge image;
acquiring each group of pixel points in the width direction of the target edge line segment according to each pixel point;
and calculating the average value of the widths of the pixel points of each group according to the width of the pixel points of each group, and acquiring the average value as the line segment width occupied by the target edge line segment in the first edge image.
5. The method of claim 4, wherein the obtaining of each pixel point occupied by the target edge line segment in the first edge image comprises:
acquiring a first pixel point of the target edge line segment in the first edge image;
taking the first pixel point as a pixel starting block, and acquiring an adjacent pixel set, wherein the adjacent pixel set is a set formed by all pixel points adjacent to the pixel starting block;
for the adjacent pixel set, detecting whether the difference value between the pixel value of each pixel point in the adjacent pixel set and the pixel value of the first pixel point is smaller than a preset difference threshold value;
merging each pixel point in the adjacent pixel set whose difference value is smaller than the preset difference threshold with the pixel starting block to form a new pixel starting block, and repeating the step of obtaining the adjacent pixel set until the difference value between the pixel value of each pixel point in the adjacent pixel set and the pixel value of the first pixel point is not smaller than the preset difference threshold;
and taking each pixel point contained in the finally obtained pixel start block as each pixel point occupied by the target edge line segment in the first edge image.
6. The method according to claim 1, wherein determining that the position of the filter being used by the camera module is shifted when the line segment parameter of the target edge line segment meets a preset parameter range comprises:
when the line segment parameters of the target edge line segment accord with a preset parameter range, detecting whether a historical edge line segment at the same position as the target edge line segment in a pixel coordinate system is stored in a detection cache, wherein the historical edge line segment is a line segment of which the line segment parameters determined from the previous N frames of image pictures of the first image picture accord with the preset parameter range, the pixel coordinate system is the pixel coordinate system of the first edge image, and N is a positive integer;
and when the historical edge line segment at the same position of the target edge line segment in the pixel coordinate system is stored in the detection cache, determining that the position of the optical filter used by the camera module is shifted.
7. The method of claim 1, wherein the number of the target edge line segments is at least two, and before the detecting the line segment parameters of the target edge line segments, the method further comprises:
sorting each target edge line segment according to the accumulated value corresponding to each target edge line segment;
the detecting the segment parameters of the target edge segment includes:
and detecting the line segment parameters of the target edge line segments in sequence according to the sequencing result.
8. The method of claim 7, wherein the sorting the target edge line segments comprises:
sorting the target edge line segments in descending order of their respective accumulated values; or,
sorting the target edge line segments in ascending order of the difference between the respective accumulated value of each target edge line segment and the preset accumulated threshold.
9. The method according to any one of claims 1 to 8, wherein the camera module further comprises a driving assembly, and after the determining that the position of the optical filter being used by the camera module has shifted, the method further comprises:
determining a filter which needs to be used currently in the filter component;
and controlling the driving assembly to provide a driving force to the optical filter assembly, so as to adjust the position of the optical filter that currently needs to be used to a fixed position.
10. The method of claim 9, wherein after the controlling the driving assembly to provide the driving force to the optical filter assembly to adjust the position of the currently required optical filter to a fixed position, further comprising:
detecting first providing times, wherein the first providing times are the number of times the driving assembly provides the driving force to the optical filter assembly;
and when the first providing times do not reach the upper limit of the preset times, executing the step of acquiring the first image picture acquired by the camera module.
11. The method of claim 10, further comprising:
and when the first providing times reach the preset upper limit of times, generating early warning information, wherein the early warning information is used for indicating that the position of the optical filter assembly has shifted.
12. The method of claim 9, further comprising, before the controlling the driving assembly to provide the driving force to the optical filter assembly to adjust the position of the currently required optical filter to a fixed position:
acquiring an image position corresponding to the optical filter which needs to be used currently in the first image picture;
determining the direction of the driving force according to the position relation between the image position and the target edge line segment;
the controlling the driving assembly to provide the driving force to the optical filter assembly to adjust the position of the currently required optical filter to a fixed position includes:
and controlling the driving assembly to provide the driving force to the optical filter assembly in the direction of the driving force, so as to adjust the position of the optical filter that currently needs to be used to a fixed position.
13. The method of claim 9, wherein the determining the optical filter that currently needs to be used in the optical filter assembly comprises:
acquiring various optical filters corresponding to the first image picture;
and determining the optical filter which needs to be used currently from the various optical filters according to the illumination intensity of the current environment of the camera module.
14. An offset detection device, used in a camera module, wherein the camera module comprises an optical filter assembly, the optical filter assembly comprises at least two optical filters, and the at least two optical filters filter light in different frequency bands; the device comprises:
the image acquisition module is used for acquiring a first image picture acquired by the camera module;
the edge detection module is used for performing edge detection on the first image picture to obtain a first edge image, wherein the first edge image is composed of edge line segments of the objects in the first image picture;
a line segment determining module, configured to perform hough line detection on edge line segments of the objects in the first edge image to obtain a target edge line segment, where the target edge line segment is an edge line segment in which an accumulated value corresponding to the edge line segment of each object is greater than a preset accumulated threshold;
the parameter detection module is used for detecting the line segment parameters of the target edge line segment;
and the offset determining module is used for determining that the position of the optical filter used by the camera module is offset when the line segment parameters of the target edge line segment accord with the preset parameter range.
15. A camera module, comprising a memory and a processor, wherein the memory stores a computer program, and the computer program, when executed by the processor, causes the processor to implement the offset detection method according to any one of claims 1 to 13.
16. A terminal device, characterized in that it comprises at least one camera module according to claim 15.
17. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the offset detection method according to any one of claims 1 to 13.
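Claims 4 and 5 above describe collecting the pixels occupied by the target edge line segment through iterative neighbour merging and then averaging the per-group widths. A minimal sketch of that region-growing step follows; the 8-connectivity, the grayscale array layout, and the threshold values are illustrative assumptions, not details fixed by the claims:

```python
import numpy as np
from collections import deque

def grow_segment_pixels(img, seed, diff_threshold):
    """Breadth-first region growing: starting from the seed (first) pixel,
    repeatedly merge 8-connected neighbours whose value differs from the
    seed pixel's value by less than diff_threshold (claims 4-5)."""
    h, w = img.shape
    seed_val = int(img[seed])
    block = {seed}                       # the growing "pixel starting block"
    queue = deque([seed])
    while queue:
        y, x = queue.popleft()
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                ny, nx = y + dy, x + dx
                if (0 <= ny < h and 0 <= nx < w and (ny, nx) not in block
                        and abs(int(img[ny, nx]) - seed_val) < diff_threshold):
                    block.add((ny, nx))
                    queue.append((ny, nx))
    return block

def mean_segment_width(pixels):
    """Average width of the segment: group the collected pixels by column
    (the width direction for a near-horizontal segment) and average the
    per-column pixel counts (claim 4)."""
    widths = {}
    for y, x in pixels:
        widths[x] = widths.get(x, 0) + 1
    return sum(widths.values()) / len(widths)
```

For a near-vertical segment the grouping direction would be rows rather than columns; the claims leave the width direction relative to the segment, so a full implementation would pick it from the segment's slope.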
CN202110413657.0A 2021-04-16 2021-04-16 Offset detection method and device, camera module, terminal equipment and storage medium Withdrawn CN113225550A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110413657.0A CN113225550A (en) 2021-04-16 2021-04-16 Offset detection method and device, camera module, terminal equipment and storage medium

Publications (1)

Publication Number Publication Date
CN113225550A true CN113225550A (en) 2021-08-06

Family

ID=77087663

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110413657.0A Withdrawn CN113225550A (en) 2021-04-16 2021-04-16 Offset detection method and device, camera module, terminal equipment and storage medium

Country Status (1)

Country Link
CN (1) CN113225550A (en)


Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2008236526A (en) * 2007-03-22 2008-10-02 Seiko Epson Corp Image processing method, image processing apparatus, and electronic apparatus
CN103841409A (en) * 2012-11-26 2014-06-04 浙江大华技术股份有限公司 Detecting method, device and system for switching infrared filter
US20170324952A1 (en) * 2016-05-03 2017-11-09 Performance Designed Products Llc Method of calibration for a video gaming system
CN108918093A (en) * 2018-05-23 2018-11-30 精锐视觉智能科技(深圳)有限公司 A kind of optical filter mirror defects detection method, device and terminal device
CN109474776A (en) * 2018-12-11 2019-03-15 努比亚技术有限公司 Switching device of optical fiber, switching method, terminal and storage medium
CN109714529A (en) * 2018-12-24 2019-05-03 浙江大华技术股份有限公司 Optical filter switching method, device and the storage medium of image collecting device
CN110505477A (en) * 2019-09-17 2019-11-26 普联技术有限公司 Double filter test methods, device, equipment and storage medium

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
张进猛等: "图像处理边缘检测技术的应用探索", 《通讯世界》 *

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113676717A (en) * 2021-08-27 2021-11-19 浙江大华技术股份有限公司 Filter disc correcting method, device, equipment and medium
CN113676717B (en) * 2021-08-27 2024-02-06 浙江大华技术股份有限公司 Filter disc correction method, device, equipment and medium
CN113838097A (en) * 2021-09-29 2021-12-24 成都新潮传媒集团有限公司 Camera lens angle deviation detection method and device and storage medium
CN113838097B (en) * 2021-09-29 2024-01-09 成都新潮传媒集团有限公司 Camera lens angle deviation detection method, device and storage medium
CN117179744A (en) * 2023-08-30 2023-12-08 武汉星巡智能科技有限公司 Non-contact infant height measurement method, device, equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WW01 Invention patent application withdrawn after publication

Application publication date: 20210806