CN112702575B - Multi-focal-length image acquisition device based on image fusion technology and analysis method - Google Patents
- Publication number: CN112702575B
- Application number: CN202011542483.XA
- Authority: CN (China)
- Prior art keywords: image, fixed-focus lens, image processing chip
- Prior art date: 2020-12-23
- Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- H04N7/181—Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast, for receiving images from a plurality of remote sources
- H04N23/81—Camera processing pipelines; components thereof for suppressing or minimising disturbance in the image signal generation
- H04N23/951—Computational photography systems, e.g. light-field imaging systems, by using two or more images to influence resolution, frame rate or aspect ratio
Abstract
A multi-focal-length image acquisition device and an analysis method based on image fusion technology are disclosed. The acquisition device comprises an image processing unit and a camera lens array unit, the image processing unit being electrically connected with the camera lens array unit. The camera lens array unit comprises a fixed-focus lens group and an image sensor; the fixed-focus lens group comprises at least two fixed-focus lenses whose adjacent optical imaging areas overlap; the image sensor is electrically connected with the fixed-focus lenses. Long-distance road monitoring is captured in segments, and the adjacent optical imaging areas are stitched and fused during image processing, so that the monitoring image is displayed clearly over a long-distance range.
Description
Technical Field
The invention relates to the field of image recognition, in particular to a multi-focal-length image acquisition device and an analysis method based on an image fusion technology.
Background
At present, China has about 20,000 highway tunnels with a total length of roughly 20 million meters, and safety inside tunnels has always been of great importance in tunnel construction. In recent years, the state has emphasized the upgrading of highway tunnels, mainly by remedying shortcomings in tunnel traffic engineering and auxiliary facilities and by promoting the structural retrofit of in-service highway tunnels, so that highway tunnels can better provide safe and convenient travel services to the public. The most important part of this upgrading is the informatization and intelligent management of tunnels, which depends chiefly on the fast recognition of image information inside the tunnel. Existing tunnel monitoring uses a monocular fixed-focus camera to observe the tunnel environment, and the camera is installed in one of two ways: on the tunnel ceiling or on the tunnel side wall. Ceiling installation places the camera at the highest point of the tunnel arch, typically at 7 meters; side installation mounts the camera on the tunnel side wall, typically at 5 meters. In China, the clear width of a two-lane tunnel is about 10.5 meters, of which the carriageway is 7.5 meters wide. In addition, highway electromechanical system design requirements call for one camera every 150 meters along the carriageway, monitoring in the direction of vehicle travel, so as to achieve full monitoring coverage of the tunnel, with image clarity maintained by means such as backlight compensation and strong-light suppression.
A conventional tunnel camera therefore needs a field of view at least 150 meters long, since each camera must capture at least everything within the range of the adjacent camera. Because near objects image large and far objects image small, and because the angle of view diverges with distance, the near end of the captured image is sharp while the road at the far end occupies only a few effective pixels; effective lane information is thus easily lost, which seriously hinders intelligent management inside the tunnel. Moreover, because a vehicle at the far end of the image occupies too few effective pixels, intelligent analysis algorithms cannot adapt to it at all, and the analysis of people, vehicles, objects and lanes suffers from false alarms and recognition failures, making safety management inside the tunnel difficult to achieve. A method that displays images clearly over a long-distance range is therefore needed.
Disclosure of Invention
The invention aims to overcome the defects of the prior art and provides a multi-focal-length image acquisition device and an analysis method based on an image fusion technology.
A multi-focal-length image acquisition device based on an image fusion technology comprises an image processing unit and a camera lens array unit; the image processing unit is electrically connected with the camera lens array unit; the camera lens array unit comprises a fixed focus lens group and an image sensor; the fixed focus lens group comprises at least two fixed focus lenses; adjacent optical imaging areas in the fixed-focus lens group are overlapped; the image sensor is electrically connected with the fixed-focus lens.
Furthermore, the center lines of the images formed by the fixed-focus lenses in the fixed-focus lens group lie on the same normal line.
Further, the image processing unit comprises an ISP image processing chip, an FPGA processing chip and an image processing chip; the FPGA processing chip is electrically connected with the ISP image processing chip and the image processing chip respectively.
Furthermore, the fixed-focus lens group comprises three fixed-focus lenses, namely a short-distance fixed-focus lens, a middle-distance fixed-focus lens and a long-distance fixed-focus lens; the optical imaging area of the middle-distance fixed-focus lens is positioned between the optical imaging area of the short-distance fixed-focus lens and the optical imaging area of the long-distance fixed-focus lens; and the optical imaging area of the middle-distance fixed-focus lens is respectively overlapped with the optical imaging areas of the short-distance fixed-focus lens and the long-distance fixed-focus lens.
A multi-focal-length image analysis method based on image fusion technology, adopting the multi-focal-length image acquisition device based on image fusion technology described above, comprises the following steps:
step 1: the fixed-focus lenses in the fixed-focus lens groups respectively collect images, and the image sensors convert the optical signals into ISP image data and transmit the ISP image data to the image processing unit;
step 2: and the image processing unit receives the ISP image data, processes the image data to obtain a fused image and transmits the fused image to an external display device.
Further, the processing of the image data in step 2 includes cropping, stitching, fusing and analyzing the images; the cropping of the image is performed by T-type scaling; the stitching and fusion of the images comprises geometric correction of the images and multi-frequency fusion.
Further, the processing of the image data comprises the steps of:
step 21: the ISP image processing chip receives ISP image data output by the image sensor, performs initialization configuration management on the image, generates a color space and transmits the color space to the FPGA processing chip;
step 22: the FPGA processing chip receives the image color space data preprocessed by the ISP image processing chip, crops the image according to an instruction, and outputs the data to the image processing chip; the instruction is issued by the image processing chip;
step 23: the image processing chip receives the pushed data, performs stitching and fusion of the images, and the step ends.
Further, in step 22, the cropping of the image by the FPGA processing chip is a T-type scaling of the image, where the T-type scaling comprises the following steps:
step 221: inputting a parameter matrix of an image;
step 222: determining a range of a stretched portion;
step 223: stretching the stretched part and filling point positions;
step 224: the image stretching is completed and the procedure ends.
Further, the stretching of the stretched portion is row-by-row stretching. If the original horizontal length of the stretched portion is m and the stretched length is x, the stretching step is x/m; a stretching starting point is determined, and the other point positions are computed by adding multiples of the step to the starting point. Point-position filling is then performed on the stretched image; the filling is proportional linear interpolation. A coordinate of the stretched image having a fractional part is written (a.a', b); since the image is stretched only horizontally, the ordinate b is necessarily an integer and needs no filling. The linear interpolation filling of the abscissa is expressed as:
f(a,b)*(1-0.a')+f(a+1,b)*(0.a')
where f(a, b) denotes the parameter value at coordinate (a, b) in the parameter matrix of the original image.
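As a worked example with illustrative numbers (not part of the disclosure): a stretched point whose abscissa is 3.4, so that a = 3 and 0.a' = 0.4, is filled with f(3, b)*(1-0.4) + f(4, b)*0.4, that is, a weighted average of the two horizontally adjacent original points, each weighted by its proximity to the stretched point.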
Further, when the image processing unit is started for the first time or restarted, it first performs an initialization operation; after initialization is complete it loads the running program, performs initialization configuration of the peripheral unit, and confirms that all components of the system are free of abnormality. The image processing unit also initializes the camera lens array unit so that each fixed-focus lens adopts a synchronous acquisition clock and a synchronous configuration mode; the peripheral unit includes an external display device.
The invention has the beneficial effects that:
the long-distance road monitoring is segmented and shot, and adjacent optical imaging areas are spliced and fused during image processing, so that the monitoring image in a long-distance range is clearly displayed;
by carrying out T-shaped zooming before splicing and fusion, the farther away the image is enlarged, and compared with the image acquired by traditional monitoring, the farther away the image can be clearly displayed;
and the smooth transition of the image is ensured by carrying out geometric correction and multi-frequency fusion after the splicing fusion is finished.
Drawings
FIG. 1 is a schematic diagram of an image processing unit according to a first embodiment of the present invention;
FIG. 2 is a flowchart of an analysis method according to a first embodiment of the present invention;
FIG. 3 is a flowchart of the image data processing according to the first embodiment of the present invention;
FIG. 4 is a flowchart of the T-type scaling according to the first embodiment of the present invention;
FIG. 5 is a schematic view of an optical imaging area according to a first embodiment of the present invention;
FIG. 6 is a schematic illustration of a mosaic of optical imaging areas according to a first embodiment of the present invention;
FIG. 7 is an output image according to a first embodiment of the present invention;
FIG. 8 is an output image of a conventional road monitoring apparatus.
Detailed Description
The following embodiments of the present invention are provided by way of specific examples, and other advantages and effects of the present invention will be readily apparent to those skilled in the art from the disclosure herein. The invention is capable of other and different embodiments and of being practiced or of being carried out in various ways, and its several details are capable of modification in various respects, all without departing from the spirit and scope of the present invention. It is to be noted that the features in the following embodiments and examples may be combined with each other without conflict.
It should be noted that the drawings provided in the following embodiments are only for illustrating the basic idea of the present invention, and the components related to the present invention are only shown in the drawings rather than drawn according to the number, shape and size of the components in actual implementation, and the type, quantity and proportion of the components in actual implementation may be changed freely, and the layout of the components may be more complicated.
The first embodiment is as follows:
a multi-focal-length image acquisition device based on an image fusion technology comprises an image processing unit and a camera lens array unit. The image processing unit is electrically connected with the camera lens array unit.
The camera lens array unit comprises a fixed-focus lens group and an image sensor. The fixed-focus lens group includes at least two fixed-focus lenses; in this example there are three, namely a short-distance fixed-focus lens, a middle-distance fixed-focus lens and a long-distance fixed-focus lens. Adjacent optical imaging areas overlap: in this example, the optical imaging area of the middle-distance fixed-focus lens is located between the optical imaging area of the short-distance fixed-focus lens and that of the long-distance fixed-focus lens, so that within the same group the optical imaging area of the middle-distance lens overlaps those of the short-distance and long-distance lenses respectively. The image sensors are electrically connected with the fixed-focus lenses and are arranged in one-to-one correspondence with them. Adjacent fixed-focus lens groups are installed at a set interval. The fixed-focus lens is used to collect images; in this embodiment, a wide-dynamic low-illumination camera is selected as the fixed-focus lens. It should be noted that the center lines of the images formed by the lenses of each fixed-focus lens group lie on the same normal line, ensuring that the subsequently synthesized images do not shift. The image sensor converts the image collected by the fixed-focus lens into ISP image data and transmits it to the image processing unit.
As shown in FIG. 1, the image processing unit includes an ISP image processing chip, an FPGA processing chip and an image processing chip, the FPGA processing chip being electrically connected with the ISP image processing chip and the image processing chip respectively. In this example, the ISP image data produced by the image sensor is first transmitted to the FPGA processing chip, converted there, and then transmitted to the image processing chip; the image processing chip completes the fusion of the images collected by the different fixed-focus lenses of the same fixed-focus lens group. The image processing chip is provided with a NAND Flash interface for connecting Flash memory, a DDR4 interface for connecting memory modules, and a network interface for outputting the fused image, carrying video data, control data and service application data, and realizing service applications. The image processing unit thus performs the cropping, stitching, fusion and analysis of the images.
The camera lens array unit further comprises a fixing unit and a power supply unit. The fixing unit fixes the fixed-focus lenses and the image sensors to the tunnel ceiling or the tunnel side wall; the power supply unit is electrically connected with the fixed-focus lenses and the image sensors to supply them with power.
It should be noted that in some other embodiments, other numbers of fixed-focus lenses may be used to form the fixed-focus lens group.
In the implementation process, the camera lens array unit acquires images in segments and the image processing unit fuses the images of the different segments, achieving long-range image acquisition while ensuring the clarity of the acquired images.
As shown in fig. 2, a multi-focal-length image analysis method based on an image fusion technique includes the following steps:
step 1: the short-distance fixed-focus lens, the middle-distance fixed-focus lens and the long-distance fixed-focus lens in the fixed-focus lens group respectively collect images, and the image sensors convert the optical signals into ISP image data and transmit the ISP image data to the image processing unit;
step 2: and the image processing unit receives the ISP image data, processes the image data to obtain a fused image and transmits the fused image to an external display device.
As shown in FIG. 5, the installation spacing of the fixed-focus lens groups in step 1 is 150 meters. The short-distance fixed-focus lens is denoted C1, the middle-distance fixed-focus lens C2 and the long-distance fixed-focus lens C3, where the focal lengths satisfy f_C1 < f_C2 < f_C3 and adjacent optical imaging areas overlap. The image of the optical imaging area of C1 is denoted A1, that of C2 is denoted A2 and that of C3 is denoted A3; the edges of A1, A2 and A3 have overlapping regions, and the total optical imaging area extends more than 150 meters along the length of the road.
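To make the geometry concrete, the following minimal sketch estimates the horizontal field of view of each lens from a simple pinhole model. The focal lengths and sensor width used here are illustrative assumptions and do not come from the disclosure:

```python
import math

def horizontal_fov_deg(focal_mm: float, sensor_width_mm: float) -> float:
    """Horizontal field of view of a pinhole camera, in degrees."""
    return 2 * math.degrees(math.atan(sensor_width_mm / (2 * focal_mm)))

# Illustrative values only: a sensor about 7.2 mm wide and three focal
# lengths chosen so that f_C1 < f_C2 < f_C3, as the embodiment requires.
SENSOR_WIDTH_MM = 7.2
for name, focal in [("C1", 4.0), ("C2", 8.0), ("C3", 16.0)]:
    fov = horizontal_fov_deg(focal, SENSOR_WIDTH_MM)
    print(f"{name}: f = {focal} mm, horizontal FOV = {fov:.1f} deg")
```

A longer focal length gives a narrower field of view at higher magnification, which is why C3 can be aimed at the distant road segment and C1 at the near one, producing the overlapping imaging areas A1, A2 and A3 along the 150-meter span.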
As shown in FIG. 3 and FIG. 6, the processing of the image data in step 2 includes cropping, stitching, fusion and analysis of the images. Cropping is achieved by T-type scaling. Stitching and fusion include geometric correction of the images and multi-frequency fusion. Geometric correction includes barrel distortion correction, pincushion distortion correction and the like; multi-frequency fusion performs pixel fusion over the image overlap areas. Through stitching and fusion, the color transition between adjacent optical imaging areas changes gradually and naturally, while the misalignment and ghosting caused by viewing-angle differences are reduced, forming a stitched, fused image of the road with more effective pixels. The analysis of the images is conventional intelligent image event detection, which can detect abnormal road information in the image and raise a timely warning. The processing of the image data comprises the following steps:
step 21: the ISP image processing chip receives and reads the ISP image data output by the image sensors, performs initialization configuration management of the images, generates color spaces and transmits them to the FPGA processing chip; in this example, the image data of the optical imaging areas acquired by C1-C3 are denoted RGB Bayer1, RGB Bayer2 and RGB Bayer3, and the generated color spaces are denoted YCbCr RAW1, YCbCr RAW2 and YCbCr RAW3, forming the images A1, A2 and A3 of the optical imaging areas (an illustrative color-conversion sketch is given after step 23 below);
step 22: the FPGA processing chip receives the image color space data preprocessed by the ISP image processing chip, crops the images according to an instruction, and outputs the data, namely YCbCr RAW I, YCbCr RAW II and YCbCr RAW III, to the image processing chip; the instruction is issued by the image processing chip;
step 23: the image processing chip receives the pushed data YCbCr RAW I, YCbCr RAW II and YCbCr RAW III, performs stitching and fusion of the images, and the step ends.
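For illustration, a minimal sketch of the color-space generation mentioned in step 21. It assumes the Bayer data has already been demosaiced to RGB and uses the standard BT.601 full-range conversion, since the disclosure does not specify the exact color matrix:

```python
import numpy as np

def rgb_to_ycbcr_bt601(rgb: np.ndarray) -> np.ndarray:
    """Convert an HxWx3 uint8 RGB image to YCbCr (BT.601, full range)."""
    rgb = rgb.astype(np.float32)
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y  =  0.299 * r + 0.587 * g + 0.114 * b          # luma
    cb = -0.1687 * r - 0.3313 * g + 0.5 * b + 128.0  # blue-difference chroma
    cr =  0.5 * r - 0.4187 * g - 0.0813 * b + 128.0  # red-difference chroma
    return np.clip(np.stack([y, cb, cr], axis=-1), 0, 255).astype(np.uint8)
```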
As shown in FIG. 4, in step 22 the cropping of the image by the FPGA processing chip is a T-type scaling of the image: taking the central line of the image as the reference, a bilaterally symmetric T-shaped region of the image is stretched into a planar image, and the farther an imaged position is from the fixed-focus lens group, the larger its magnification factor. T-type scaling allows distant parts of the image to be displayed clearly. The T-type scaling comprises the following steps:
step 221: inputting a parameter matrix of an image;
step 222: determining the range of the stretched portion; in this example, the stretched portion has the same height as the parameter matrix of the image;
step 223: stretching the stretched part and filling point positions;
step 224: the image stretching is completed and the procedure ends.
In this example, the range of the stretched portion in step 222 is a trapezoidal image area of the road.
In step 223, because the stretched portion has the same height as the parameter matrix, the stretched portion is stretched only in the horizontal direction. In this example the stretching is performed row by row. During stretching, if the original horizontal length of the stretched portion is m and the length after stretching is x, the step is x/m; the starting point of the new image matrix is taken as the starting point of the stretched portion, and the other points of the stretched portion are computed by adding multiples of the step to the starting point. The leftmost point of the new image matrix is selected as the starting point. Since fractional coordinates may arise after the image is stretched, point-position filling must be performed on the stretched image; the filling is proportional linear interpolation. A coordinate of the stretched image with a fractional part is written (a.a', b); since only horizontal stretching is performed, the ordinate b is necessarily an integer. The point filling of the abscissa is expressed as:
f(a,b)*(1-0.a')+f(a+1,b)*(0.a')
where f(a, b) represents the parameter value at coordinate (a, b) in the parameter matrix of the original image.
In step 224, the process ends after the stretching of the parameters of all rows is completed.
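For illustration, a minimal sketch of the row-by-row stretch with proportional linear interpolation described above, written with NumPy. The per-row target widths, which the embodiment derives from the trapezoidal road region, are passed in as an assumed parameter; variable names are illustrative:

```python
import numpy as np

def stretch_row(row: np.ndarray, x: int) -> np.ndarray:
    """Stretch a 1-D row of length m to length x, filling fractional
    positions with f(a,b)*(1-0.a') + f(a+1,b)*(0.a')."""
    m = len(row)
    out = np.empty(x, dtype=np.float32)
    for j in range(x):
        # The stretch step is x/m, so the new point j maps back to the
        # original position j*m/x, whose integer part is a and whose
        # fractional part is 0.a'.
        pos = j * m / x
        a, frac = int(pos), pos - int(pos)
        right = row[min(a + 1, m - 1)]  # clamp at the right border
        out[j] = row[a] * (1.0 - frac) + right * frac
    return out

def t_type_scale(gray: np.ndarray, widths) -> np.ndarray:
    """Stretch each row i of a grayscale image to widths[i] pixels and
    center it, so rows imaging more distant road (assumed wider in
    `widths`) are magnified more while the central line stays fixed."""
    h, _ = gray.shape
    canvas = np.zeros((h, max(widths)), dtype=np.float32)
    for i in range(h):
        stretched = stretch_row(gray[i].astype(np.float32), widths[i])
        start = (canvas.shape[1] - widths[i]) // 2
        canvas[i, start:start + widths[i]] = stretched
    return canvas
```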
In step 23, the images are stitched and fused, including image fusion, brightness adjustment at the image edges, image scaling and the like, so as to form a multi-focal-length monitoring picture.
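The disclosure does not name a specific multi-frequency algorithm. The sketch below shows one common realization, Laplacian-pyramid (multi-band) blending of two aligned images across their overlap, which matches the stated goal of gradual, natural transitions in the overlap area; the level count and mask handling are assumptions:

```python
import cv2
import numpy as np

def multiband_blend(img1: np.ndarray, img2: np.ndarray,
                    mask: np.ndarray, levels: int = 4) -> np.ndarray:
    """Blend two aligned HxWx3 float32 images with a 2-D float32 mask
    (1.0 where img1 dominates, 0.0 where img2 dominates) by fusing the
    bands of their Laplacian pyramids."""
    gm = [mask]                        # Gaussian pyramid of the mask
    for _ in range(levels):
        gm.append(cv2.pyrDown(gm[-1]))

    def laplacian_pyramid(img):
        gp = [img]
        for _ in range(levels):
            gp.append(cv2.pyrDown(gp[-1]))
        lp = []
        for i in range(levels):
            up = cv2.pyrUp(gp[i + 1], dstsize=(gp[i].shape[1], gp[i].shape[0]))
            lp.append(gp[i] - up)      # band-pass detail at level i
        lp.append(gp[levels])          # low-frequency residual
        return lp

    lp1, lp2 = laplacian_pyramid(img1), laplacian_pyramid(img2)
    fused = [l1 * m[..., None] + l2 * (1.0 - m[..., None])
             for l1, l2, m in zip(lp1, lp2, gm)]
    out = fused[-1]                    # collapse the fused pyramid
    for i in range(levels - 1, -1, -1):
        out = cv2.pyrUp(out, dstsize=(fused[i].shape[1], fused[i].shape[0])) + fused[i]
    return out
```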
Before step 21, if the image processing unit is being started for the first time or restarted, it must first perform an initialization operation; after initialization it loads the running program, performs initialization configuration of the peripheral units, and confirms that all components of the system are free of abnormality. The image processing unit also initializes the camera lens array unit so that each fixed-focus lens adopts a synchronous acquisition clock and a synchronous configuration mode, which facilitates the image fusion. The peripheral units include the external display device.
As shown in FIG. 7 and FIG. 8, in the implementation process the adjacent optical imaging areas are stitched and fused, achieving a clear display over a long-distance range. At the same time, T-type scaling before stitching and fusion enlarges the display of distant regions so that, compared with images acquired by conventional monitoring, they are shown clearly; geometric correction and multi-frequency fusion after the stitching and fusion ensure a smooth transition across the image.
The above description is only one specific example of the present invention and should not be construed as limiting the invention in any way. It will be apparent to persons skilled in the relevant art that various modifications and changes in form and detail can be made therein without departing from the spirit and scope of the invention as defined by the appended claims.
Claims (1)
1. A multi-focal-length image analysis method based on an image fusion technology is characterized in that a multi-focal-length image acquisition device based on the image fusion technology adopted by the method comprises an image processing unit and a camera lens array unit; the image processing unit is electrically connected with the camera lens array unit; the camera lens array unit comprises a fixed focus lens group and an image sensor; the fixed focus lens group comprises at least two fixed focus lenses; adjacent optical imaging areas in the fixed-focus lens group are overlapped; the image sensor is electrically connected with the fixed-focus lens;
the central lines of the imaging images of the fixed-focus lens in the fixed-focus lens group are on the same normal;
the image processing unit comprises an ISP image processing chip, an FPGA processing chip and an image processing chip; the FPGA processing chip is electrically connected with the ISP image processing chip and the image processing chip respectively;
the fixed-focus lens group comprises three fixed-focus lenses, namely a short-distance fixed-focus lens, a middle-distance fixed-focus lens and a long-distance fixed-focus lens; the optical imaging area of the middle-distance fixed-focus lens is positioned between the optical imaging area of the short-distance fixed-focus lens and the optical imaging area of the long-distance fixed-focus lens; the optical imaging area of the middle-distance fixed-focus lens is respectively overlapped with the optical imaging areas of the short-distance fixed-focus lens and the long-distance fixed-focus lens;
the method comprises the following steps:
step 1: the short-distance fixed-focus lens, the middle-distance fixed-focus lens and the long-distance fixed-focus lens in the fixed-focus lens group respectively collect images, and the image sensors convert the optical signals into ISP image data and transmit the ISP image data to the image processing unit;
step 2: the image processing unit receives ISP image data, processes the image data to obtain a fused image and transmits the fused image to an external display device;
the close-range prime lens is denoted as C1, the intermediate-range prime lens is denoted as C2, and the long-range prime lens is denoted as C3, where the focal length f of C1 C1 <f C2 <f C3 ;
the processing of the image data in step 2 comprises the steps of cropping, stitching, fusing and analyzing the images; the cropping of the image is performed by T-type scaling; the stitching and fusion of the images comprises geometric correction of the images and multi-frequency fusion, the multi-frequency fusion being used to perform pixel fusion over the image overlap area;
the processing of the image data comprises the steps of:
step 21: the ISP image processing chip receives and reads ISP image data output by the image sensor, carries out initialization configuration management on the image, generates a color space and transmits the color space to the FPGA processing chip;
step 22: the FPGA processing chip receives the image color space data preprocessed by the ISP image processing chip, crops the image according to an instruction, and outputs the data to the image processing chip; the instruction is issued by the image processing chip;
step 23: the image processing chip receives the pushed data, performs stitching and fusion of the images, and the step ends; in step 22, the cropping of the image by the FPGA processing chip is a T-type scaling of the image, where the T-type scaling comprises the following steps:
step 221: inputting a parameter matrix of an image;
step 222: determining a range of a stretched portion;
step 223: stretching the stretched part and filling point positions;
step 224: the image stretching is completed and the step ends;
in step 223, the stretching of the stretched portion is row-by-row stretching; the original horizontal length of the stretched portion is m, and if the stretched length is x, the stretching step is x/m; a stretching starting point is determined, and the other point positions are computed by adding multiples of the step to the starting point; point-position filling is performed on the stretched image, the filling being proportional linear interpolation; a coordinate of the stretched image with a fractional part is written (a.a', b), and since only horizontal stretching is performed, the ordinate b is necessarily an integer; the linear interpolation filling of the abscissa is expressed as:
f(a,b)*(1-0.a')+f(a+1,b)*(0.a')
wherein f(a, b) represents the parameter value at coordinate (a, b) in the parameter matrix of the original image;
when the image processing unit is started for the first time or restarted, it performs an initialization operation after starting, loads the running program after the initialization is finished, performs initialization configuration of the peripheral unit, and confirms that all components of the system are free of abnormality; the image processing unit also initializes the camera lens array unit so that each fixed-focus lens adopts a synchronous acquisition clock and a synchronous configuration mode; the peripheral unit includes an external display device.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title
---|---|---|---
CN202011542483.XA (CN112702575B) | 2020-12-23 | 2020-12-23 | Multi-focal-length image acquisition device based on image fusion technology and analysis method
Publications (2)
Publication Number | Publication Date
---|---
CN112702575A | 2021-04-23
CN112702575B | 2023-04-18
Family ID: 75509501
Family Applications (1)
Application Number | Title | Priority Date | Filing Date
---|---|---|---
CN202011542483.XA (CN112702575B, active) | Multi-focal-length image acquisition device based on image fusion technology and analysis method | 2020-12-23 | 2020-12-23
Country Status (1)
Country | Link
---|---
CN | CN112702575B
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title
---|---|---|---|---
CN119815185A | 2024-12-27 | 2025-04-11 | 四川国创新视超高清视频科技有限公司 | A multi-view fusion visual monitoring method and system for narrow and long areas
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title
---|---|---|---|---
JP2016167839A | 2016-04-18 | 2016-09-15 | オリンパス株式会社 | Imaging apparatus and imaging method
CN109120883A | 2017-06-22 | 2019-01-01 | 杭州海康威视数字技术股份有限公司 | Video monitoring method, device and computer readable storage medium based on far and near views
CN109274939A | 2018-09-29 | 2019-01-25 | 成都臻识科技发展有限公司 | A parking lot entrance monitoring method and system based on three camera modules
Family Cites Families (6)
Publication number | Priority date | Publication date | Assignee | Title
---|---|---|---|---
KR101862199B1 | 2012-02-29 | 2018-05-29 | 삼성전자주식회사 | Method and fusion system of time-of-flight camera and stereo camera for reliable wide-range depth acquisition
CN106384372B | 2016-08-31 | 2019-08-09 | 重庆大学 | View synthesis method and device
CN106713761A | 2017-01-11 | 2017-05-24 | 中控智慧科技股份有限公司 | Image processing method and apparatus
CN109285136B | 2018-08-31 | 2021-06-08 | 清华-伯克利深圳学院筹备办公室 | An image multi-scale fusion method, device, storage medium and terminal
CN111738969B | 2020-06-19 | 2024-05-28 | 无锡英菲感知技术有限公司 | Image fusion method, device and computer readable storage medium
CN111818304B | 2020-07-08 | 2023-04-07 | 杭州萤石软件有限公司 | Image fusion method and device
Also Published As
Publication number | Publication date
---|---
CN112702575A | 2021-04-23
Legal Events
Code | Title
---|---
PB01 | Publication
SE01 | Entry into force of request for substantive examination
GR01 | Patent grant