
CN119815183B - Image stitching method and system for low-computation-power camera module array - Google Patents


Info

Publication number
CN119815183B
Authority
CN
China
Prior art keywords
camera
camera module
image
dynamic
area
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202510273116.0A
Other languages
Chinese (zh)
Other versions
CN119815183A (en)
Inventor
区士超
刘晓涛
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Super Node Innovative Technology Shenzhen Co ltd
Original Assignee
Super Node Innovative Technology Shenzhen Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Super Node Innovative Technology Shenzhen Co ltd filed Critical Super Node Innovative Technology Shenzhen Co ltd
Priority to CN202510273116.0A priority Critical patent/CN119815183B/en
Publication of CN119815183A publication Critical patent/CN119815183A/en
Application granted granted Critical
Publication of CN119815183B publication Critical patent/CN119815183B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Studio Devices (AREA)

Abstract

The invention relates to the technical field of panoramic monitoring and discloses an image stitching method and system for a low-computation-power camera module array. Panoramic images of a target scene are gathered through an array formed by a plurality of camera modules, and initial priorities are configured according to emphasis-annotation instructions from a monitoring user. The method divides the camera modules into key modules and non-key modules, performs timed dynamic detection on the non-key modules and real-time dynamic detection on the key modules, updates the weights when a dynamic block is found, and tracks the dynamic block with a Kalman filtering algorithm to predict its motion trajectory. The scheme dynamically optimizes the allocation of monitoring resources among the camera modules, improves the response speed for key areas, and improves the real-time performance and accuracy of image stitching, effectively raising monitoring efficiency and precision and solving the prior-art problem that a low-computation-power camera module array can hardly achieve efficient and accurate image stitching.

Description

Image stitching method and system for low-computation-power camera module array
Technical Field
The invention relates to the technical field of panoramic monitoring, in particular to an image stitching method and system for a low-computation-power camera module array.
Background
In modern camera array systems, processing multiple video streams simultaneously to achieve real-time stitching is a core challenge. Especially when the number of cameras is large, conventional algorithms often need to stitch every frame from all cameras synchronously. Achieving real-time stitching therefore requires a large amount of GPU computing resources and a strict hardware synchronization mechanism, so efficient and accurate image stitching is difficult to achieve on a low-computation-power camera module array.
Disclosure of Invention
The invention aims to provide an image stitching method and system for a low-computation-power camera module array, so as to solve the prior-art problem that such an array can hardly achieve efficient and accurate image stitching.
In a first aspect, the present invention provides an image stitching method for a low-computation-power camera module array, comprising:
Acquiring panoramic images of a target scene through a camera module array formed by a plurality of camera modules, so as to obtain the camera-area image of the target scene corresponding to each camera module, and combining the camera-area images of the camera modules to obtain a panoramic stitched picture of the target scene;
Acquiring a monitoring-emphasis annotation instruction issued by a monitoring user for the panoramic stitched picture, configuring an initial priority weight for each camera-area image in the panoramic stitched picture according to the instruction, and dividing the camera modules into key camera modules and non-key camera modules according to the initial priority weights of their camera-area images;
Performing dynamic detection at a specified frequency on the camera-area images collected by the non-key camera modules, and performing real-time dynamic detection on the camera-area images collected by the key camera modules, so as to obtain dynamic detection results for both;
When a dynamic detection result shows that a dynamic block appears in a camera-area image, updating the initial priority of the camera module corresponding to that image, so as to obtain the module's real-time priority;
When the camera module corresponding to the camera-area image containing the dynamic block is a key camera module, tracking the dynamic block with a Kalman filtering algorithm to obtain its predicted motion trajectory, and performing dynamic-extension analysis and the corresponding initial-priority update on the key camera module according to the predicted trajectory, so as to obtain the module's real-time priority;
And sequentially performing image stitching on the camera-area images collected by the camera modules according to their real-time priorities, so as to obtain a dynamic stitched image.
In a second aspect, the present invention provides an image stitching system for a low-computation-power camera module array, for implementing the image stitching method of any one of the first aspect, comprising:
The picture stitching module is used for acquiring panoramic images of a target scene through a camera module array formed by a plurality of camera modules, so as to obtain the camera-area image of the target scene corresponding to each camera module, and for combining the camera-area images of the camera modules to obtain a panoramic stitched picture of the target scene;
The emphasis annotation module is used for acquiring a monitoring-emphasis annotation instruction issued by a monitoring user for the panoramic stitched picture, configuring an initial priority weight for each camera-area image in the panoramic stitched picture according to the instruction, and dividing the camera modules into key camera modules and non-key camera modules according to those weights;
The dynamic detection module is used for performing dynamic detection at a specified frequency on the camera-area images collected by the non-key camera modules, and real-time dynamic detection on the camera-area images collected by the key camera modules, so as to obtain dynamic detection results for both;
The weight updating module is used for updating, when a dynamic detection result shows that a dynamic block appears in a camera-area image, the initial priority of the camera module corresponding to that image, so as to obtain the module's real-time priority;
The trajectory tracking module is used for tracking the dynamic block with a Kalman filtering algorithm when the camera module corresponding to the camera-area image containing the dynamic block is a key camera module, so as to obtain the predicted motion trajectory of the dynamic block, and for performing dynamic-extension analysis and the corresponding initial-priority update on the key camera module according to the predicted trajectory, so as to obtain the module's real-time priority;
And the dynamic stitching module is used for sequentially performing image stitching on the camera-area images collected by the camera modules according to their real-time priorities, so as to obtain a dynamic stitched image.
The image stitching method for a low-computation-power camera module array provided by the invention has the following beneficial effects:
Panoramic images of a target scene are acquired by an array formed by a plurality of camera modules; an initial priority is configured in combination with the emphasis-annotation instruction of a monitoring user; the camera modules are divided into key and non-key modules; timed dynamic detection is performed on the non-key modules and real-time dynamic detection on the key modules; the weights are updated when a dynamic block is found; and a Kalman filtering algorithm is adopted to track the dynamic block and predict its motion trajectory. The scheme dynamically optimizes the allocation of monitoring resources among the camera modules, improves the response speed for key areas, improves the real-time performance and accuracy of image stitching, enhances the intelligence and adaptability of the system, effectively raises monitoring efficiency and precision, and solves the prior-art problem that a low-computation-power camera module array can hardly achieve efficient and accurate image stitching.
Drawings
Fig. 1 is a schematic diagram of the steps of an image stitching method for a low-computation-power camera module array according to an embodiment of the present invention;
Fig. 2 is a schematic structural diagram of an image stitching system for a low-computation-power camera module array according to an embodiment of the present invention.
Detailed Description
The present invention will be described in further detail with reference to the drawings and examples, in order to make the objects, technical solutions and advantages of the present invention more apparent. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the invention.
The implementation of the present invention will be described in detail below with reference to specific embodiments.
Referring to fig. 1 and 2, a preferred embodiment of the present invention is provided.
In a first aspect, the present invention provides an image stitching method for a low-computation-power camera module array, including:
S1, acquiring panoramic images of a target scene through a camera module array formed by a plurality of camera modules, so as to obtain the camera-area image of the target scene corresponding to each camera module, and combining the camera-area images of the camera modules to obtain a panoramic stitched picture of the target scene;
S2, acquiring a monitoring-emphasis annotation instruction issued by a monitoring user for the panoramic stitched picture, configuring an initial priority weight for each camera-area image in the panoramic stitched picture according to the instruction, and dividing the camera modules into key camera modules and non-key camera modules according to the initial priority weights of their camera-area images;
S3, performing dynamic detection at a specified frequency on the camera-area images collected by the non-key camera modules, and performing real-time dynamic detection on the camera-area images collected by the key camera modules, so as to obtain dynamic detection results for both;
S4, when a dynamic detection result shows that a dynamic block appears in a camera-area image, updating the initial priority of the camera module corresponding to that image, so as to obtain the module's real-time priority;
S5, when the camera module corresponding to the camera-area image containing the dynamic block is a key camera module, tracking the dynamic block with a Kalman filtering algorithm to obtain its predicted motion trajectory, and performing dynamic-extension analysis and the corresponding initial-priority update on the key camera module according to the predicted trajectory, so as to obtain the module's real-time priority;
And S6, sequentially performing image stitching on the camera-area images collected by the camera modules according to their real-time priorities, so as to obtain a dynamic stitched image.
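The Kalman-filter trajectory tracking of step S5 can be sketched as a constant-velocity filter over the centroid of a dynamic block. The patent does not specify the state layout or noise settings, so the state vector [x, y, vx, vy], the class name, and the parameter values below are illustrative assumptions only:

```python
import numpy as np

class ConstantVelocityKF:
    """Minimal 2-D constant-velocity Kalman filter for the centroid of a
    dynamic block. State vector: [x, y, vx, vy]. Noise magnitudes q and r
    are illustrative, not taken from the patent."""

    def __init__(self, x0, y0, dt=1.0, q=1e-2, r=1.0):
        self.x = np.array([x0, y0, 0.0, 0.0])
        self.P = np.eye(4)                        # state covariance
        self.F = np.array([[1, 0, dt, 0],         # constant-velocity model
                           [0, 1, 0, dt],
                           [0, 0, 1,  0],
                           [0, 0, 0,  1]], dtype=float)
        self.H = np.array([[1, 0, 0, 0],          # position-only measurement
                           [0, 1, 0, 0]], dtype=float)
        self.Q = q * np.eye(4)                    # process noise
        self.R = r * np.eye(2)                    # measurement noise

    def predict(self):
        """Propagate the state one frame ahead; returns predicted (x, y)."""
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.x[:2]

    def update(self, z):
        """Fuse a measured centroid (x, y) into the state estimate."""
        y = np.asarray(z, dtype=float) - self.H @ self.x
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ y
        self.P = (np.eye(4) - K @ self.H) @ self.P
```

Fed with detected centroids frame by frame, `predict()` yields the extrapolated position used for the dynamic-extension analysis, e.g. to anticipate the block crossing into a neighboring module's area.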
Specifically, in step S1 of the provided embodiment, a plurality of camera modules are arranged in an array to cover different areas of a target scene. Each camera module shoots its specific area in real time and captures the image data within it. The modules may differ in viewing angle, focal length, resolution and so on, so that every detail of the target scene is covered. Working cooperatively, the modules can effectively cover a large target scene, and the layout avoids the blind areas that a single camera module cannot capture; because each module acquires images from a different angle, the multidimensional description of the scene is enriched, which helps provide more comprehensive and accurate monitoring images.
More specifically, the image data collected by each camera module is gathered into a central processing system through an interface or transmission mechanism, and an image stitching algorithm is applied. Common stitching algorithms are based on feature-point matching (such as SIFT or SURF) or on pixel-level optical-flow computation. During stitching, the algorithm processes the overlapping areas between images and aligns them at the pixel level to avoid visible seams, and then applies color correction, edge smoothing and other optimizations so that the result is natural and seamless. With accurate alignment and optimization, the final panorama shows no obvious stitching marks; combining the data of multiple camera modules yields a complete, wide-angle panoramic image that helps monitoring staff observe the target scene comprehensively and in detail.
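The alignment itself (feature matching, homography estimation) would need a vision library, but the edge-smoothing step over an overlap region can be illustrated with a minimal NumPy sketch. The function name, the linear alpha ramp, and the grayscale-tile assumption are all illustrative, not the patent's method:

```python
import numpy as np

def feather_blend(left, right, overlap):
    """Blend two horizontally adjacent grayscale tiles with a linear
    alpha ramp across their shared `overlap` columns -- a simple form
    of seam smoothing for the stitched panorama."""
    h, wl = left.shape
    wr = right.shape[1]
    out = np.zeros((h, wl + wr - overlap), dtype=float)
    out[:, :wl - overlap] = left[:, :wl - overlap]    # left-only region
    out[:, wl:] = right[:, overlap:]                  # right-only region
    alpha = np.linspace(1.0, 0.0, overlap)            # weight of the left tile
    out[:, wl - overlap:wl] = (alpha * left[:, wl - overlap:]
                               + (1.0 - alpha) * right[:, :overlap])
    return out
```

The ramp makes the left tile dominate at the start of the overlap and the right tile at its end, so no hard seam appears even when the tiles differ slightly in brightness.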
More specifically, after processing and optimization, the stitched frames form a panoramic image of the complete target scene. The final stitched image can be displayed in the monitoring system, where the user can view the panorama, zoom and pan, and inspect every area of the scene from a unified viewpoint, so that no key information is missed because of blind areas or a limited local view. Stitching the images of multiple camera modules enables efficient real-time monitoring, which is especially suitable for large and complex scenes. Overall, the technical effects are: large-scale coverage, with multiple cooperating camera modules ensuring blind-spot-free coverage of the whole scene; multi-angle views, with each module contributing a different viewpoint so the panorama can be stitched in finer detail and seamlessly; an accurate stitching algorithm that eliminates flaws and unnatural artifacts; and real-time display, letting the user inspect any area of the scene at any time. Particularly in applications with heavy monitoring demands, complex scenes and rich viewpoints (such as city monitoring, traffic monitoring and large-warehouse monitoring), a more comprehensive and detailed view can be provided, helping the user make more accurate decisions.
Specifically, in step S2 of the provided embodiment, the monitoring system offers a user interaction interface through which the user can mark certain areas of the panoramic image and designate monitoring emphasis. Marking can be done with a mouse click, a selection frame, a touch screen and so on, and the user may designate areas directly or select particular camera modules. The system receives the user's annotation instruction, which may specify a focus area (the coordinates of a target region, the area covered by a whole camera module, or a local part of one module's view) or designate certain camera modules as key modules whose collected areas will receive emphasis. Because the user can select and define which areas or modules are emphasized according to actual needs, the flexibility of the monitoring system is improved and the monitoring needs of different scenes are met; through the interaction interface, emphasis marking is intuitive and convenient, avoiding manual searching and analysis and saving time and effort.
More specifically, according to the user's emphasis annotation, the system assigns an initial priority weight to each camera-area image (and hence to its camera module). Emphasized areas or modules are given a high weight and therefore a higher initial priority, meaning they receive more computing resources and attention in the subsequent dynamic detection and image analysis; non-emphasized areas are given a low weight and are processed with lower priority. The weight can be a numeric value, for example in the range 0 to 1 or 0 to 100, with larger values meaning higher priority, and the setting can be adjusted dynamically according to the scene complexity of each module, the importance of its shooting area, and so on. Through this initial weight configuration, the system allocates computing resources and processing time according to the user's key monitoring areas, improving their monitoring efficiency and accuracy: important areas receive more frequent real-time detection, analysis and updating, their situations are handled promptly and accurately, unnecessary processing is reduced, and the response speed and processing capacity of the system are optimized.
More specifically, according to each module's initial priority, the system classifies the camera modules into two categories. Key camera modules are those with higher initial priority; they are generally responsible for collecting the image data of areas marked as important by the user, while non-key camera modules collect the data of the remaining, less important areas. The classification can be performed automatically from the weight values: for example, modules whose weight exceeds a threshold are classified as key camera modules, and those below it as non-key. After the division, the areas that should be monitored preferentially are defined clearly and receive full attention, and the system can schedule and allocate resources according to the different priorities, improving overall monitoring efficiency.
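The weight assignment and threshold split described above can be sketched in a few lines. The concrete weight values (0.9 / 0.2) and the 0.5 threshold are illustrative assumptions; the patent only requires that emphasized modules exceed the threshold and others fall below it:

```python
def initial_weights(module_ids, emphasized, hi=0.9, lo=0.2):
    """Assign an initial priority weight per camera module: modules in the
    user's emphasis annotation get the high weight, all others the low one."""
    return {m: (hi if m in emphasized else lo) for m in module_ids}

def classify_modules(weights, threshold=0.5):
    """Split module ids into (key, non-key) sets by comparing each
    initial weight against the threshold."""
    key = {m for m, w in weights.items() if w >= threshold}
    return key, set(weights) - key
```

For example, with modules `cam0..cam2` and only `cam1` emphasized, `classify_modules` returns `cam1` as the sole key module.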
More specifically, after the initial priority weights are configured, the system does not rely on static configuration alone: in actual operation, a dynamic detection result (such as the appearance of a dynamic block) triggers weight adjustment. Although each camera module has a fixed weight at the initial stage, the system continuously adjusts and optimizes each module's priority weight according to real-time monitoring conditions, ensuring that important areas are always monitored accurately. This dynamic adjustment adapts to changes in complex scenes, such as the appearance of a moving target or an important event, and improves the response speed and monitoring precision of the system. By acquiring the user's monitoring-emphasis annotation, configuring a priority weight for each camera module, and dividing the modules into key and non-key according to those weights, the process achieves fine-grained monitoring (key areas obtain sufficient monitoring resources and processing time), optimized resource allocation (computing and processing resources are concentrated where they matter most), and flexibility (priorities can be readjusted whenever conditions in the monitored scene change).
Specifically, in step S3 of the provided embodiment, the system sets the dynamic detection frequency of the non-key modules according to their initial priority. For a non-key module with a lower priority, the system may reduce the detection frequency to lower the computing load: the image data of a non-emphasized area does not need continuous real-time detection, so the frequency is set lower (for example, one detection every few seconds or minutes). System resources are thereby saved and more computing power is allocated to key areas, while the system still detects the non-key images at a preset interval (for example every 5 seconds, 10 seconds, or longer). This detection involves analyzing and comparing images to find changes or anomalies; dynamic detection of non-key areas typically focuses on changes in the image (the appearance, movement or disappearance of people or objects), which the system marks and records when found. For non-key areas the system can employ simple change-detection algorithms, such as the inter-frame difference method or background modeling, to quickly identify potential events while keeping the consumption of computing resources low.
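The inter-frame difference method mentioned above can be sketched with NumPy. The pixel threshold of 25 and the 50-pixel minimum are illustrative defaults, not values from the patent:

```python
import numpy as np

def detect_dynamic(prev, curr, pixel_thresh=25, min_changed=50):
    """Inter-frame difference: flag a dynamic event when enough pixels
    change by more than pixel_thresh between consecutive grayscale frames.
    Returns (change mask, whether a dynamic block was detected)."""
    diff = np.abs(curr.astype(np.int16) - prev.astype(np.int16))
    mask = diff > pixel_thresh
    return mask, int(mask.sum()) >= min_changed
```

A non-key module would run this every few seconds on its latest pair of frames; the mask's connected region of changed pixels is the "dynamic block" passed on to the weight-update step.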
It can be understood that reducing the dynamic detection frequency of non-key areas cuts unnecessary computation and data processing and optimizes overall system performance. The system can allocate monitoring resources reasonably without sacrificing important monitoring effects, ensure that detection of non-key areas does not affect real-time monitoring of key areas, and still discover and respond in time to changes or potential anomalies in non-key areas, even occasional events, despite the lower frequency.
More specifically, because the key camera modules have high priority, their images are dynamically detected in real time; that is, the image content is monitored and analyzed without interruption. The system continuously monitors these areas, processing and responding to events in time, typically at intervals of one second or less, so that any change or event in the collected image data is found promptly. Because these are important monitoring areas, the system uses more complex and accurate dynamic detection algorithms: for example, Convolutional Neural Networks (CNNs) for target recognition and tracking, fast recognition of dynamic targets (people, vehicles, etc.) and tracking of their trajectories, and real-time pattern-recognition algorithms for specific events or abnormal behaviors (abnormal parking, intrusion, etc.). Once a dynamic event or behavior is detected, the system immediately raises an alarm, records the event, or notifies the relevant monitoring personnel, which is critical for scenarios requiring fast response (such as security or traffic monitoring).
It can be understood that real-time dynamic detection of the key camera modules captures and responds to important changes or abnormal events in the scene more accurately, guaranteeing real-time performance and accuracy during monitoring. Real-time detection ensures the system responds promptly when an anomaly occurs, providing instant alarms and opportunities for intervention and improving security and emergency-response capability; applying complex detection algorithms to key regions identifies targets and events more accurately, raising the system's recognition rate and precision while reducing false alarms and missed reports.
More specifically, after dynamic detection the system generates separate detection results for the non-key and key camera modules. A non-key module's result may include detected changes (such as object movement or interference in an area), but these changes have lower response priority; a key module's result includes detailed information on dynamic events or abnormal behaviors in a key area (such as intrusion or an emergency), and the system gives such results higher processing priority. Based on these results the system executes different processing strategies: for a key module, an instant alarm may be triggered and a detailed event log recorded, while a non-key module's result is archived or processed with low priority unless a major anomaly occurs.
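The strategy dispatch just described can be sketched as a small decision function. The strategy names are illustrative labels, not identifiers from the patent:

```python
def handle_detection(is_key_module, event_detected, major_anomaly=False):
    """Choose a processing strategy from the module class and its
    dynamic detection result."""
    if not event_detected:
        return "idle"
    if is_key_module or major_anomaly:
        return "alert_and_log"      # instant alarm + detailed event log
    return "archive_low_priority"   # file the change for later review
```

A non-key module's ordinary change is archived, but a major anomaly in a non-key area is escalated just like a key-module event.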
It can be understood that combining the dynamic detection results of the key and non-key modules gives the system overall coverage of the monitored scene while keeping real-time attention on important areas. By differentiating the detection frequencies of key and non-key modules, the system handles the monitoring needs of different areas efficiently, avoids wasting resources, and guarantees the response speed and accuracy of important areas; using complex detection algorithms on the key modules identifies anomalies more accurately and reduces false alarms in non-key areas. Real-time detection of the key camera modules ensures comprehensive, instant monitoring of important areas, while low-frequency detection of the non-key modules effectively saves computing resources and reduces system load. With differentiated frequencies and algorithms, the system achieves intelligent resource scheduling, allocating computing resources reasonably and adjusting monitoring intensity to the actual situation, so that it can rapidly identify and respond to anomalies at critical moments; meanwhile, the low-frequency detection of non-key areas still preserves the system's global monitoring capability and overall performance.
Specifically, in step S4 of the provided embodiment, when a camera module performs dynamic detection on its camera area, the image is analyzed for any change or moving object (a person, a vehicle, or the movement of an object). These changes appear as "dynamic blocks": specific areas of the image whose motion or change differs from the background. The system detects dynamic blocks with various image-processing algorithms (background modeling, inter-frame difference, etc.); a dynamic block may represent ongoing activity or a potential event, for example an object moving in the monitored area or a change that does not fit the normal pattern. Once a dynamic block is detected, the system marks the area in the image and generates a corresponding detection result; such blocks may indicate important events or abnormal phenomena such as intrusion or moving objects.
More specifically, each camera module has an initial priority in the system, which is generally determined by the requirements at the time of setting, the sensitivity or priority of the monitored area, for example, important areas (e.g. entrances, gates) generally have a higher initial priority, less important areas (e.g. hallways, parking lots) may have a lower initial priority, and when the system detects a dynamic block, the occurrence of a dynamic event generally means that the monitored area of the camera module has changed significantly, and more computing resources may be required for processing and analyzing, and at this time, the system updates the priority of the camera module according to rules or algorithms to promote its real-time priority.
More specifically, for example, if a camera module detects a dynamic block, the system may automatically increase the real-time priority of the module to a higher level, including increasing the detection frequency of the module, performing higher frequency image analysis, allocating more computing resources to the module, using more complex detection algorithms (e.g., object tracking, abnormal behavior recognition, etc.), prioritizing the event response of the camera module, such as triggering an alarm, initiating a video recording, notifying personnel, etc.
More specifically, the updating of the priority weights may be instantaneous or may be changed step by step, for example, a time window may be set, in which when a dynamic block exists, the priority weights of the camera modules may be gradually increased until the dynamic block disappears or the monitoring task is completed, and then the priority weights are restored to the initial weights, once the priority weights of the camera modules are increased, the system may continuously track the dynamic block situation of the area, so as to ensure that the monitoring can be always kept in a high priority state, and for more complex dynamic blocks (such as a plurality of targets exist simultaneously or for a long time), the system may further strengthen the priority weights of the modules, so as to ensure that the processing is uninterrupted.
More specifically, within the system, updating a module's real-time priority can affect the processing order of other modules: raising the priority weight of a module in which dynamic blocks appear may cause the priorities of the other modules to drop accordingly. This is accomplished through a dynamic scheduling algorithm, for example in a load-balancing manner, to ensure that the overall system load does not become too high. When a dynamic block disappears or is no longer active, the system gradually lowers the real-time priority of the camera module back to its original initial priority according to a preset recovery rule; a certain delay is applied in this process so that the system can accurately judge whether the dynamic block was a real event or only a brief disturbance.
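The gradual raise and gradual recovery of a module's priority weight might look like the following sketch (all step sizes, the cap, and the initial weight are assumed values, not taken from the embodiment):

```python
class ModulePriority:
    """Sketch of the step-by-step raise and gradual recovery of a
    camera module's real-time priority weight."""
    def __init__(self, initial_weight):
        self.initial = initial_weight
        self.current = initial_weight

    def on_dynamic_block(self, step=0.2, cap=1.0):
        # Raise the weight gradually while a dynamic block is present.
        self.current = min(cap, self.current + step)

    def on_idle(self, decay=0.1):
        # Recover slowly toward the initial weight once activity stops,
        # avoiding sharp fluctuations in the scheduler.
        self.current = max(self.initial, self.current - decay)

p = ModulePriority(initial_weight=0.3)
for _ in range(3):
    p.on_dynamic_block()   # weight climbs toward the cap
for _ in range(10):
    p.on_idle()            # settles back to the initial weight
print(p.current)
```

The decay being slower than the raise gives the delay described above, so brief disturbances do not immediately drop a module back to its initial priority.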
It will be appreciated that by adjusting the priority weights of the camera modules based on dynamic blocks, the system is able to focus resources at significant times for more efficient processing. The presence of dynamic blocks means that more important activities may occur in the area, so that the improvement of the priority weight can ensure that more computing resources are allocated to the modules, so that possible security threats can be responded timely, the system can dynamically adjust the resources and processing capacity of each camera module, the overall system performance is optimized according to actual conditions, excessive calculation of unimportant areas is avoided, and the system can respond to dynamic changes of the monitoring area more swiftly by updating the real-time priority weight of the camera modules. The method has the advantages that the areas where the dynamic blocks appear are subjected to priority processing, response delay can be reduced, potential events or anomalies can be captured more quickly, the monitoring capability of the related camera modules can be automatically enhanced when the dynamic blocks appear, the important areas are monitored more accurately, and the overall emergency response capability is improved.
Specifically, in step S5 of the embodiment provided by the present invention, the system detects the occurrence of the dynamic block, and marks the dynamic block that appears in the image, where the dynamic block may represent a moving object in the image, such as a pedestrian, a vehicle, etc., and the camera module (i.e., the key camera module) is responsible for more accurately analyzing and tracking the dynamic block.
More specifically, in order to make accurate trajectory predictions, the system needs to initialize a Kalman filter for each dynamic block. The Kalman filter is a recursive algorithm suited to systems with noisy observations: it estimates the current state from previous state predictions, using a motion model to predict the block's state at the next moment (position, velocity, etc.) from its current motion state, and when the system acquires a new observation (i.e. a new position of the dynamic block) it corrects its own prediction against that observation. This process performs a weighted fusion of the prediction and the observed value to obtain a more accurate motion trajectory; the Kalman filter provides an optimal estimate of the dynamic block at the current moment, including its position, velocity and even acceleration, which together form the block's motion trajectory. Through the recursive updates of the Kalman filter, the system can predict future positions from the block's historical motion data, so that it knows in advance the likely path of movement of the dynamic block.
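A minimal constant-velocity Kalman filter of the kind described above can be written directly with NumPy (a sketch; the noise covariances Q and R, the time step, and the simulated trajectory are all assumed values):

```python
import numpy as np

dt = 1.0
F = np.array([[1, 0, dt, 0],   # state transition for [x, y, vx, vy]
              [0, 1, 0, dt],
              [0, 0, 1, 0],
              [0, 0, 0, 1]], dtype=float)
H = np.array([[1, 0, 0, 0],    # only position is observed
              [0, 1, 0, 0]], dtype=float)
Q = np.eye(4) * 0.01           # process noise (assumed)
R = np.eye(2) * 1.0            # measurement noise (assumed)

x = np.zeros(4)                # initial state estimate
P = np.eye(4) * 10.0           # initial uncertainty

def kalman_step(x, P, z):
    # Predict the next state from the motion model.
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    # Correct with the new observation z: a weighted fusion of
    # prediction and measurement via the Kalman gain K.
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(4) - K @ H) @ P_pred
    return x_new, P_new

# A block moving +2 px/frame in x and +1 px/frame in y:
for t in range(1, 20):
    x, P = kalman_step(x, P, np.array([2.0 * t, 1.0 * t]))

print(np.round(x, 1))  # estimated [x, y, vx, vy]; velocity approaches (2, 1)
```

The velocity components of the converged state are what the extension analysis in the next step extrapolates to obtain future positions.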
More specifically, the system can predict the possible position of the dynamic block in a future period by using the predicted motion track of the dynamic block obtained by the kalman filter, which provides a powerful basis for future dynamic changes, especially when the dynamic block possibly leaves the current shooting area or enters other important areas, and according to the predicted motion track, the system performs dynamic extension analysis to analyze whether the predicted track can affect other shooting areas, especially if the predicted track relates to a high-priority monitoring area (such as an entrance, an important room, etc.), the system dynamically adjusts the priority weight of the area so as to ensure that the important shooting module can continuously track in the area possibly passed by the dynamic block and ensure that the shooting module can effectively monitor the motion path of the dynamic block in the future period. If the track of the dynamic block extends to other important areas, the system automatically adjusts the visual angle, focal length or adding resources of the camera module to ensure that the module can continuously and effectively monitor the dynamic block, and according to the extension analysis of the predicted track of the dynamic block, the system dynamically adjusts the priority of the camera module, and if the predicted track shows that the dynamic block is about to enter a high priority area or has potential influence on the important area, the system can improve the real-time priority of the important camera module. Specifically, the manner of updating the priority weights may include:
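The extension analysis of a predicted trajectory against a high-priority area can be illustrated with a simple constant-velocity extrapolation (a sketch; the region coordinates and prediction horizon are assumed values):

```python
def predict_path(pos, vel, horizon=5):
    """Linear extrapolation of a dynamic block's positions over the
    next `horizon` frames (constant-velocity assumption)."""
    x, y = pos
    vx, vy = vel
    return [(x + vx * t, y + vy * t) for t in range(1, horizon + 1)]

def crosses_region(path, region):
    """True if any predicted position falls inside a rectangular
    high-priority region given as (x0, y0, x1, y1)."""
    x0, y0, x1, y1 = region
    return any(x0 <= px <= x1 and y0 <= py <= y1 for px, py in path)

entrance = (100, 0, 140, 40)      # hypothetical high-priority area
path = predict_path(pos=(80, 20), vel=(10, 0))
if crosses_region(path, entrance):
    print("raise priority of the camera module covering the entrance")
```

In the embodiment the velocity would come from the Kalman filter state rather than being given directly, but the region test is the same.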
More specifically, more computing resources are allocated to the key camera module, possibly including increasing the frequency of image processing, enabling a more efficient target tracking algorithm, etc., and the monitoring strategy of the camera module may be adjusted according to the predicted trajectory. For example, the camera module may scan a certain direction in advance, so as to ensure that the dynamic block can be captured in time, if the motion track of the dynamic block indicates that the dynamic block may be in an important area within a period of time, the system may prolong the monitoring duration of the area and increase the detection frequency of the area, the real-time priority of the camera module may be dynamically updated according to the above extension analysis and priority adjustment, the motion track of the dynamic block not only determines the priority of the module, but also may affect the resource allocation of other camera modules, and the system ensures the intelligence and high efficiency of the monitoring task through the dynamic update mechanism.
More specifically, in the track change process of the dynamic block, the updating of the real-time priority is not completed at one time, the system continuously monitors the change condition of the dynamic block and performs dynamic feedback according to the real-time motion track, for example, if the dynamic block stays in a region for a longer time or the track change is more intense, the system may further adjust the priority of the image capturing module in the region, and when the dynamic block does not stay in a key region any more or the motion track of the dynamic block changes, the system gradually restores the original priority. This recovery process is usually gradual, avoiding severe fluctuations in priority weights, ensuring system stability.
It can be appreciated that the system can know the future motion trend of the dynamic block in advance by performing track prediction on the dynamic block through a kalman filtering algorithm, and adjust the priority of the camera module before the dynamic block enters the critical area. The method and the system greatly improve the precision and response speed of monitoring, ensure that the dynamic block cannot be ignored at key moment, and can intelligently allocate monitoring resources through updating the real-time priority of the key shooting module, for example, when the system predicts that the dynamic block is about to pass through an important area, the system can preferentially schedule the resources to perform high-frequency and high-quality monitoring, avoid wasting the resources of unimportant areas, the system can update the real-time priority according to the predicted track of the dynamic block, flexibly cope with different dynamic changes, and when the track of the dynamic block deviates, the system can automatically adjust the monitoring strategy, ensure that the monitoring of the key area is not influenced, and even if the direction of the dynamic block suddenly changes, the system can timely adjust the priority to ensure the consistency and the effectiveness of the monitoring.
Specifically, in step S6 of the embodiment provided by the present invention, each camera module collects images of different camera areas according to the assigned priority weights, each camera module collects video or still images according to the monitored area, where the images may have differences of viewing angle, resolution, illumination, and the like, the priority weights of the camera modules may be dynamically adjusted, and the camera module with a high priority weight may obtain higher-frequency image data, or obtain more computing resources, so as to update the image data in time when the monitored area changes significantly.
More specifically, before stitching, the images acquired by each camera module may need to be preprocessed to improve the stitching result. For example, distortion correction may be required for the images of different camera modules because of lens distortion, especially with wide-angle lenses; color differences or exposure inconsistencies may exist between the images of different camera modules, so uniform color adjustment and brightness alignment may be needed to keep the stitching consistent; and noise should be removed from the images, particularly those acquired in low-illumination environments, since denoising improves the quality of the stitched image.
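The brightness-alignment step might, in its simplest gain-based form, look like this (a sketch; real systems usually estimate the gain from the overlap region only rather than from the whole image):

```python
import numpy as np

def match_brightness(src, ref):
    """Gain-based brightness alignment: scale `src` so its mean
    intensity matches `ref`, a simple stand-in for the exposure
    alignment described above."""
    gain = ref.mean() / max(src.mean(), 1e-6)  # avoid division by zero
    return np.clip(src.astype(float) * gain, 0, 255).astype(np.uint8)

a = np.full((4, 4), 100, dtype=np.uint8)   # darker camera module
b = np.full((4, 4), 150, dtype=np.uint8)   # brighter neighbour
a_adj = match_brightness(a, b)
print(a_adj.mean())  # mean intensity now matches the reference (150.0)
```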
More specifically, feature points are extracted from each image using a feature point detection algorithm (e.g., SIFT, SURF, ORB, etc.), and these feature points are used to identify correspondence between images, thereby providing basis for image stitching, matching feature points between adjacent images to determine which regions overlap, and providing reference for image alignment and stitching, typically feature matching is performed based on descriptors (e.g., SIFT or ORB) to ensure accurate image butt-joint.
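The descriptor-matching step can be illustrated without any library support: brute-force nearest-neighbour matching with a ratio test, which is the same logic matchers apply to SIFT or ORB descriptors (the toy 4-D descriptors below are fabricated for illustration):

```python
import numpy as np

def match_descriptors(desc_a, desc_b, ratio=0.75):
    """Brute-force nearest-neighbour matching with a ratio test:
    keep a match only if the best distance is clearly smaller than
    the second best, discarding ambiguous correspondences."""
    matches = []
    for i, d in enumerate(desc_a):
        dists = np.linalg.norm(desc_b - d, axis=1)
        order = np.argsort(dists)
        best, second = order[0], order[1]
        if dists[best] < ratio * dists[second]:
            matches.append((i, int(best)))
    return matches

# Toy descriptors: rows 0 and 1 of image A match rows 1 and 0 of image B;
# row 2 is ambiguous and should be rejected by the ratio test.
desc_a = np.array([[1, 0, 0, 0], [0, 1, 0, 0], [0.5, 0.5, 0, 0]], dtype=float)
desc_b = np.array([[0, 1, 0, 0.1], [1, 0, 0.1, 0], [0, 0, 1, 0]], dtype=float)
print(match_descriptors(desc_a, desc_b))  # [(0, 1), (1, 0)]
```

Binary ORB descriptors would use Hamming distance instead of the Euclidean norm, but the matching and ratio-test structure is identical.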
More specifically, based on the real-time priority weights, the stitching order of the images is dynamically adjusted: images collected by camera modules with high priority weights participate in stitching first, ensuring that the monitoring of important areas is displayed preferentially in the stitched image. If the priority weight of a certain camera module is higher, its collected images may occupy a larger proportion in the stitching process, and a higher resolution or a finer image processing method may even be adopted during stitching. If a certain area (such as an important monitored area) lies within the monitoring range of multiple camera modules, the system preferentially selects images from the high-priority modules for stitching that area so as to guarantee the image quality of the important area. The images are aligned using the result of feature matching so that the overlapping areas are stitched together correctly; during alignment, a geometric transformation algorithm (such as homography matrix estimation) can be used to correct the displacement between images and ensure seamless butting, and the multiple images are then fused into a complete stitched image. Generally, the image fusion algorithm handles the transition between different images according to the characteristics of the overlapping area, avoiding obvious traces along the stitching line. Common fusion methods include multi-exposure fusion, which is suitable for images captured under different exposures, and weighted averaging, which smooths brightness differences between images; different weights are given to different images according to their priority weights, so that images with higher priority weights have a larger influence on the stitching result.
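The priority-weighted averaging of an overlap region reduces to a per-pixel weighted mean (a sketch; the weight values 0.25 and 0.75 are assumed, not from the embodiment):

```python
import numpy as np

def weighted_blend(img_a, img_b, w_a, w_b):
    """Fuse the overlap region of two aligned images, giving the image
    from the higher-priority camera module a proportionally larger
    influence on the stitching result."""
    wa = w_a / (w_a + w_b)   # normalize the two priority weights
    blended = wa * img_a.astype(float) + (1 - wa) * img_b.astype(float)
    return blended.astype(np.uint8)

overlap_a = np.full((2, 2), 80, dtype=np.uint8)    # low-priority module
overlap_b = np.full((2, 2), 160, dtype=np.uint8)   # high-priority module
out = weighted_blend(overlap_a, overlap_b, w_a=0.25, w_b=0.75)
print(out)  # every pixel = 0.25*80 + 0.75*160 = 140
```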
More specifically, in order to improve the quality of the stitched image, detail optimization is required. Common optimization steps include: seamless transition processing, in which parameters such as brightness, color and contrast are adjusted in the stitch-line region so that the transition is smoother and obvious stitching marks are eliminated; resolution adjustment and scaling, in which the resolution of the stitched image is adjusted according to system requirements to balance image quality against processing speed, and if certain regions need high-resolution detail the system can dynamically adjust the display quality of those regions; edge smoothing, in which jagged edges that may occur during stitching are processed so that the image looks more natural; and dynamic adjustment of the stitching region, in which, during real-time monitoring, the system adjusts the stitching region according to the real-time priority of the camera modules, for example highlighting a preferentially monitored region through local enhancement of the stitched image so that the key content is clearer.
More specifically, after the processing, the finally generated spliced image presents a seamless and dynamic panoramic image, and the area covered by each camera module can be displayed. The system can display and analyze the spliced image as a new monitoring picture, update and output the spliced image in real time, and can use the spliced image as new visual information for further processing by operators or an automation system. For example, object tracking, abnormality detection, and the like are performed using stitched images.
It can be understood that by adjusting the image acquisition frequency and the stitching order of the camera module according to the real-time priority, the system can ensure the image quality and the monitoring precision of important areas, the areas with high priority in the stitched image can be processed and displayed more accurately, so that key information can not be lost, the dynamic adjustment of the priority can not only optimize the monitoring quality, but also effectively manage computing resources, for example, the system can preferentially process the image stitching task of the camera module with high priority, avoid resource waste, and respond to changes rapidly in real-time monitoring.
The invention provides an image stitching method for a low-computation-force camera module array, which has the following beneficial effects:
According to the invention, a panoramic image of a target scene is acquired by an array formed by a plurality of camera modules, an initial priority is configured by combining with a key annotation instruction of a monitoring user, the camera modules are divided into key and non-key modules, timing dynamic detection is carried out on the non-key modules, real-time dynamic detection is carried out on the key modules, the weight is updated when a dynamic block is found, and a Kalman filtering algorithm is adopted to track the dynamic block so as to predict the motion trail of the dynamic block. According to the scheme, the monitoring resource allocation of the camera module can be dynamically optimized, the response speed of a key area is improved, the instantaneity and the accuracy of image stitching are improved, the intellectualization and the adaptability of a system are enhanced, the monitoring efficiency and the accuracy are effectively improved, and the problem that the low-calculation-power camera module array in the prior art is difficult to realize efficient and accurate image stitching is solved.
Preferably, the step of acquiring a panoramic image of a target scene through an image capturing module array formed by a plurality of image capturing modules to obtain image capturing area images of the target scene corresponding to each image capturing module, and combining the image capturing area images of each image capturing module to obtain a panoramic mosaic picture of the target scene includes:
s11, carrying out initial working parameter configuration on a camera module array preset at a designated position so as to enable the camera module array to be in an initial debugging state, and respectively carrying out image acquisition on a target scene by each camera module in the camera module array in the initial debugging state so as to obtain a camera area image corresponding to each camera module;
And S12, performing image stitching processing of corresponding position relations on the image capturing area images acquired by each image capturing module according to the setting positions of each image capturing module in the image capturing module array so as to obtain a panoramic stitching picture of the target scene.
Specifically, initial configuration of the camera module array is performed, including basic working parameters such as resolution, exposure, white balance, focal length and the like of each camera module, which are helpful to ensure that images acquired by each camera module are consistent in quality, provide a basis for subsequent image stitching processing, and in order to ensure correct configuration and image stitching effects of the camera module array, accurate calibration must be performed according to the camera module positions in the array.
More specifically, after initial configuration and debugging are completed, each camera module of the camera module array starts to acquire an image of a target scene, each camera module shoots a part of the target scene in real time through a specific view angle and a specific position of the camera module to obtain respective camera area images, the shooting areas of each camera module overlap, and the overlapping areas provide references for subsequent splicing, so that images acquired by different camera modules can be accurately spliced, and different camera modules may have different shooting angles and resolutions, therefore, the subsequent image preprocessing (such as distortion removal, color correction and the like) is very important to ensure the quality and consistency of spliced images.
More specifically, because the camera modules may use different lens types, geometric distortion (such as barrel or pincushion distortion) may occur in the captured images, so distortion correction is required. Exposure, white balance and brightness may differ between camera modules, so the color and brightness of each image need to be balanced to ensure color consistency and smooth transitions in the stitched image, and denoising should be applied to images acquired in low-illumination or high-noise environments to improve the clarity and visibility of the stitched result. Feature points are then extracted from each image (e.g. with the SIFT, SURF or ORB algorithms), and the overlapping regions between camera modules are found through feature matching. The feature points help determine the correspondence between images and provide the basis for subsequent geometric transformations.
More specifically, according to the feature point matching result, the images are aligned by using a geometric transformation method (such as homography matrix, perspective transformation and the like), so that the overlapping areas of adjacent images are ensured to be accurately butted, obvious joints are avoided to be generated in the spliced images, the aligned images are fused, in the splicing process, the natural transition of the spliced lines is ensured according to the priority of each camera module, the image quality and the characteristics of the overlapping areas, the obvious splicing marks are avoided, the weighted fusion is performed according to the priority of different images (for example, the images in certain areas possibly have higher priority weight), the details of the areas with high priority are not lost, and the spliced lines are smoothly processed through technologies such as tone adjustment and brightness balance so as to realize seamless transition, and the visual continuity of the final spliced images is ensured.
More specifically, the finally generated image is a panoramic stitching picture after image processing and fusion, and the whole target scene is displayed, at this time, the system can cut, zoom or thumbnail the image according to the requirements, adapt to different display requirements, if the target scene changes, the camera module array can continuously acquire the image in real time, and generate an updated panoramic image through the same stitching process flow, which is particularly important for applications such as dynamic monitoring, automatic driving and the like.
It can be understood that the high-quality image covering the whole target scene can be acquired through multi-angle and multi-view acquisition of the camera module array, distortion can be eliminated, seam and color inconsistency can be reduced by combining the image preprocessing and the image fusion technology, a high-quality and seamless panoramic image is generated, the camera module array can shoot the target scene at multiple angles at different positions and angles, the wider view coverage than a single camera is provided, the spliced panoramic image can comprehensively display the scene, the method is suitable for various application scenes needing large-scale monitoring, such as traffic monitoring, intelligent cities, security monitoring and the like, an accurate spatial relationship can be established between different camera modules through an accurate image splicing algorithm, the accurate and correct splicing result is ensured, meanwhile, reasonable distortion correction and brightness and color adjustment are realized, and the color of the spliced image is consistent and detail is rich.
Preferably, the step of obtaining a monitoring key annotation instruction of a monitoring user for the panoramic stitching picture, configuring initial priority weights for the image capturing area images in the panoramic stitching picture according to the monitoring key annotation instruction, and dividing each image capturing module into a key image capturing module and a non-key image capturing module through the initial priority weights corresponding to the image capturing area images includes:
S21, constructing an interaction port based on the panoramic stitching picture so as to enable a monitoring user of the camera module array to carry out interaction operation on the panoramic stitching picture, thereby generating a monitoring key annotation instruction for the panoramic stitching picture;
s22, analyzing the monitoring key marking instruction to obtain area range information of a key monitoring area fed back by the monitoring key marking, and performing range positioning on the module monitoring area of each camera module according to the area range information to obtain a positioning result of the module monitoring area of each camera module relative to the key monitoring area;
S23, when the positioning result shows that the module monitoring area of the camera module is in the key monitoring area, high-level initial priority weight is distributed to the camera module so as to divide the camera module into key camera modules;
And S24, when the positioning result shows that the module monitoring area of the camera module is not in the key monitoring area, a low-level initial priority weight is distributed to the camera module so as to divide the camera module into non-key camera modules.
Specifically, in order to facilitate monitoring of a user marking a monitoring key region, the system needs to design an interactive port, which is generally a graphical interface-based user interactive platform, a user can operate on a panoramic spliced picture through a mouse, a touch screen or other input devices, the user can select a designated region, drag the mouse or click on certain regions to mark the monitoring key region, or accurately mark the region needing to be focused through drawing polygons, rectangles and the like, and when the user marks the key region on the interactive port, the system records coordinate information of the monitoring key region input by the user, and the information is used for subsequent optimization and weight configuration.
More specifically, after the monitoring user completes the labeling of the key region, the system will acquire and analyze the labeling instruction input by the user, wherein the analysis process includes identifying parameters such as coordinates, shape, size and the like of the monitored region, determining range information of the key region, comparing the information with the monitored region of the camera module, extracting range information of the key monitored region from the user labeling instruction, wherein the information may include coordinates, size, shape (such as rectangle, circle and the like) of the region, and the range information may affect the subsequent priority allocation of the camera module.
More specifically, according to a preset camera module array and a view angle and a monitoring area of each camera module, a specific monitoring area of each camera module is calculated, in general, the monitoring area of each camera module can be represented as a rectangular or polygonal area within a shooting range of the camera module, the monitoring area of each camera module is compared with a key monitoring area calibrated by a user, whether the monitoring area of the camera module is in the key monitoring area is judged, if the monitoring area of a certain camera module is overlapped or intersected with the key monitoring area, the camera module can be judged to be a key camera module, and if the monitoring area is not overlapped with the key area, the camera module belongs to a non-key camera module.
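The overlap judgment between module monitoring areas and the user-annotated key region can be sketched as an axis-aligned rectangle intersection test (the weight values 0.9/0.2 and all coordinates are assumed for illustration):

```python
def rects_overlap(a, b):
    """Axis-aligned overlap test between a module's monitoring area and
    the key region, both given as (x0, y0, x1, y1)."""
    return a[0] < b[2] and b[0] < a[2] and a[1] < b[3] and b[1] < a[3]

HIGH, LOW = 0.9, 0.2   # assumed initial priority weights

def classify_modules(module_areas, key_region):
    """Assign each camera module a high or low initial weight depending
    on whether its monitored area intersects the key region."""
    return {name: (HIGH if rects_overlap(area, key_region) else LOW)
            for name, area in module_areas.items()}

key = (50, 50, 150, 150)                  # annotated key monitoring area
modules = {"cam0": (0, 0, 60, 60),        # overlaps the key region
           "cam1": (200, 200, 300, 300)}  # disjoint from it
print(classify_modules(modules, key))     # cam0 -> key, cam1 -> non-key
```

Polygonal annotations would need a polygon-intersection test instead, but the classification into key and non-key modules follows the same pattern.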
More specifically, depending on the positioning result, the system will assign higher initial priority to those image capturing modules whose monitored areas overlap with the emphasized areas, and in general, these modules will be assigned more resources or higher image processing priorities (e.g., higher resolution, real-time requirements, etc.), these image capturing modules will be considered as "emphasized image capturing modules", special attention and optimization processing is required, for those image capturing modules whose monitored areas do not overlap with the emphasized areas, the system will assign lower initial priority to those image capturing modules whose image processing and data transmission may be performed at lower resolution or lower frequency, these modules will be classified as "non-emphasized image capturing modules", and it is not necessary to provide resources and processing priorities equivalent to those of the emphasized areas.
More specifically, through the steps, each camera module in the camera module array is allocated with different initial priority weights, and is divided into a key camera module and a non-key camera module according to the weights, in the monitoring process, the system can adjust the priority weights of the camera modules in real time according to changed scenes or user requirements, and the key areas are ensured to always keep high definition and real-time response.
Preferably, when the dynamic detection result shows that a dynamic block appears in the image capturing area image, the step of updating the initial priority of the image capturing module corresponding to the image capturing area image with the dynamic block to obtain the real-time priority of the image capturing module includes:
S41, when the dynamic detection result shows that a dynamic block appears in the image of the shooting area, data acquisition of the motion speed and the area occupation ratio of the dynamic block is carried out to obtain the motion speed v and the area occupation ratio s of the dynamic block;
S42, calculating the dynamic block weight of the dynamic block according to a dynamic block weight calculation formula w=αv+βs to obtain the dynamic block weight of the dynamic block, wherein w is the dynamic block weight, and α and β are preset balance adjustment parameters;
S43, calculating the overall weight of the camera module with the dynamic block according to the camera priority calculation formula pi = Σ(j∈Ri) wj / Σ(k=1→n) Σ(j∈Rk) wj to obtain the real-time priority weight of the camera module, wherein pi is the priority of the ith camera, Ri is the set of dynamic areas covered by the ith camera, wj is the weight of the dynamic block in area j, and n is the total number of cameras; the numerator Σ(j∈Ri) wj represents the weight sum of all dynamic blocks covered by the single camera i, while in the denominator the outer summation Σ(k=1→n) traverses all n cameras and the inner summation Σ(j∈Rk) accumulates the weight sum of the dynamic blocks covered by each camera k.
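The formulas of steps S42 and S43 can be computed directly (a sketch; the balance parameters α=0.6, β=0.4 and the per-block data are assumed values):

```python
def block_weight(v, s, alpha=0.6, beta=0.4):
    """Dynamic block weight w = alpha*v + beta*s, where alpha and beta
    are the preset balance adjustment parameters (assumed here)."""
    return alpha * v + beta * s

# Dynamic blocks covered by each camera: (motion speed v, area ratio s).
coverage = {
    "cam0": [(2.0, 0.10), (1.0, 0.05)],   # two fast/large blocks
    "cam1": [(0.5, 0.02)],                # one slow/small block
}
# Numerator of pi: weight sum of the blocks each camera covers.
weights = {cam: sum(block_weight(v, s) for v, s in blocks)
           for cam, blocks in coverage.items()}
# Denominator: double sum over all cameras and their blocks.
total = sum(weights.values())
priority = {cam: w / total for cam, w in weights.items()}
print(priority)  # priorities sum to 1; cam0 dominates
```

The normalization means the priorities always form a distribution over the cameras, so raising one module's share necessarily lowers the others', matching the scheduling behaviour described earlier.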
Specifically, the system detects dynamic blocks in the panoramic image through a dynamic detection algorithm; the dynamic blocks represent moving objects in the image (such as pedestrians or vehicles), and the system can identify these dynamic areas in real time and judge their motion state. The motion speed (v) is calculated by tracking the motion trajectory of a dynamic block across adjacent frames, usually estimated from the image displacement and the time interval; the area ratio (s) is calculated, from the shape and size of the dynamic block, as the proportion of its area within the image of the imaging area. This data helps determine the importance of dynamic blocks in the image; large dynamic blocks may represent important monitored objects.
More specifically, in the dynamic block weight calculation formula w = αv + βs, w is the dynamic block weight, and α and β are preset balance adjustment parameters. Through this formula, the system jointly considers the speed and the area ratio of each dynamic block and assigns it an appropriate weight. In general, dynamic blocks with a higher motion speed or a larger area receive a higher weight, meaning they are more likely to be important targets that the system should process first.
More specifically, in the camera module calculation formula pi = Σ(j∈Ri)wj / Σ(k=1→n)Σ(j∈Rk)wj, pi is the priority of the ith camera, Ri is the set of dynamic areas covered by the ith camera, wj is the weight of the dynamic block in area j, and n is the total number of cameras. Σ(j∈Ri)wj is the weight sum of all dynamic blocks covered by the single camera i; Σ(k=1→n)Σ(j∈Rk)wj is the sum of all dynamic block weights over all n cameras, where the outer summation Σ(k=1→n) traverses all cameras from 1 to n and the inner summation Σ(j∈Rk) computes the weight sum of the dynamic blocks covered by each camera k. For each camera module, the weights of the dynamic blocks within its monitored region are summed and normalized by the total over all cameras, so the resulting priority reflects the module's share of the overall dynamic activity in the scene. In this way, camera modules covering higher-weight dynamic blocks are assigned higher priority weights.
More specifically, the priority of each camera module is adjusted according to the real-time priority computed from the dynamic block weights. Modules that monitor more dynamic blocks, or blocks with higher weights, have their priority raised so that dynamic events can be responded to and processed quickly. The system can also reallocate resources according to these real-time priorities: a higher-priority module may be granted more computing resources, more bandwidth, or a higher image resolution, ensuring that it can process dynamic events efficiently and in real time.
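As an illustration of the weight-update step above (S41-S43), the two formulas can be sketched in a few lines. The parameter values, camera names, and per-block measurements below are illustrative assumptions, not values prescribed by the method.

```python
# Sketch of S41-S43: dynamic block weight w = alpha*v + beta*s, then
# per-camera priority pi = (camera i's block weight sum) / (total over
# all cameras). ALPHA and BETA are assumed example parameters.
ALPHA, BETA = 0.6, 0.4  # preset balance adjustment parameters

def block_weight(v, s, alpha=ALPHA, beta=BETA):
    """Dynamic block weight: w = alpha*v + beta*s."""
    return alpha * v + beta * s

def camera_priorities(coverage):
    """coverage: {camera_id: [(v, s), ...]} -- blocks covered by each camera.
    Returns pi for each camera, normalized over all cameras."""
    sums = {cam: sum(block_weight(v, s) for v, s in blocks)
            for cam, blocks in coverage.items()}
    total = sum(sums.values())  # outer double summation over all cameras
    return {cam: (w / total if total else 0.0) for cam, w in sums.items()}

coverage = {
    "cam1": [(2.0, 0.10), (1.0, 0.05)],  # two fast-moving dynamic blocks
    "cam2": [(0.5, 0.30)],               # one slow but large block
    "cam3": [],                          # no motion detected
}
p = camera_priorities(coverage)
assert abs(sum(p.values()) - 1.0) < 1e-9  # priorities normalize to 1
assert p["cam1"] > p["cam2"] > p["cam3"]  # more/faster motion -> higher pi
```

Because the denominator sums every camera's block weights, the priorities of all modules always sum to one, which makes them directly usable as resource-allocation shares.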
Preferably, when the image capturing module corresponding to the image capturing area image of the dynamic block is an important image capturing module, performing track tracking processing on the dynamic block according to a kalman filtering algorithm to obtain a predicted motion track of the dynamic block, and performing dynamic extension analysis and corresponding initial priority update on the important image capturing module according to the predicted motion track to obtain real-time priority of the image capturing module, where the step of obtaining the real-time priority of the image capturing module includes:
S51, when an image pickup module corresponding to the image pickup area image of the dynamic block is an important image pickup module, performing feature analysis on a position vector and a speed vector of the dynamic block to obtain the position vector and the speed vector of the dynamic block;
S52, carrying out prediction processing on the future change of the position vector and the speed vector of the dynamic block according to a Kalman filtering algorithm to obtain a predicted motion track of the dynamic block;
S53, carrying out dynamic extension analysis on the dynamic block according to the predicted motion trail of the dynamic block to obtain a shooting area to which the dynamic block extends in a future time period, and marking the shooting area as an extension area;
and S54, updating the initial priority of the camera module corresponding to the extension area to obtain the real-time priority of the camera module.
Specifically, for a detected dynamic block, its position in the current image frame is determined first. The position vector can be represented by pixel coordinates, usually the center point of the object in the image. The velocity vector of the dynamic block is calculated from its trajectory across multiple frames and includes both the speed and the direction of motion in image space. Together, the position and velocity vectors comprehensively describe the motion state of the dynamic block and provide the data needed for subsequent prediction and tracking.
More specifically, the Kalman filter is a recursive optimal estimation algorithm suited to estimating and predicting the state of a dynamic system. In this scenario it predicts the position and velocity of a dynamic block: from the position and velocity vectors at the current moment, the filter predicts the block's likely position and velocity at the next moment, then corrects the prediction against the actual observation (the block's position in the next frame), improving accuracy. Through the Kalman filter, the system can continuously track the motion state of the dynamic block and predict its future position and velocity.
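The predict/correct cycle just described can be sketched with a constant-velocity state model. The noise covariances, frame interval, and simulated observations below are illustrative assumptions; the patent does not specify them.

```python
import numpy as np

# Minimal constant-velocity Kalman filter for one dynamic block (S52).
# State x = [px, py, vx, vy]; only the pixel position is observed.
dt = 1.0  # one frame interval (assumed)
F = np.array([[1, 0, dt, 0],   # state transition: position += velocity*dt
              [0, 1, 0, dt],
              [0, 0, 1,  0],
              [0, 0, 0,  1]], dtype=float)
H = np.array([[1, 0, 0, 0],    # measurement model: position only
              [0, 1, 0, 0]], dtype=float)
Q = np.eye(4) * 1e-2           # process noise (assumed)
R = np.eye(2) * 1.0            # measurement noise (assumed)

def kf_step(x, P, z):
    """One predict + correct cycle; z is the observed [px, py]."""
    x = F @ x                          # predict state
    P = F @ P @ F.T + Q                # predict covariance
    y = z - H @ x                      # innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)     # Kalman gain
    x = x + K @ y                      # correct with the observation
    P = (np.eye(4) - K @ H) @ P
    return x, P

x = np.zeros(4)                        # initial state
P = np.eye(4) * 10.0                   # large initial uncertainty
for t in range(1, 6):                  # block moving ~3 px/frame along x
    x, P = kf_step(x, P, np.array([3.0 * t, 0.0]))
pred = F @ x                           # predicted position at the next frame
assert abs(x[2] - 3.0) < 0.5           # velocity estimate converges to ~3
```

After only a few frames of linear motion, the estimated velocity converges toward the true value, which is what enables the dynamic extension analysis of the following step.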
More specifically, based on the predicted trajectory obtained by Kalman filtering, the system analyzes the block's motion trend over the coming time period. For example, it may infer that a dynamic block will move in a certain direction and may cross the monitoring ranges of several camera modules. By analyzing the block's future trajectory, the system can predict which new camera areas the block may cover, mark these as "extension areas", and prepare accordingly, for example by allocating monitoring resources to them or adjusting the priority of the corresponding camera modules in advance.
More specifically, after determining the extension areas that a dynamic block may enter, the system updates the initial priorities of the camera modules covering those areas. In general, an area into which a dynamic block is likely to extend raises the weight of its camera module, ensuring that the module can acquire and process the newly arriving dynamic block first. The updated weights are reflected in the real-time priority adjustment of the camera modules, so that the key modules respond in time when a dynamic event occurs.
More specifically, on the basis of the extension areas, the priority weights of the camera modules are comprehensively updated so that each module can process the predicted dynamic blocks in time. The update can be based on the predicted trajectory of the dynamic blocks and the weights of the extension areas, further improving the modules' responsiveness to future dynamic blocks. Finally, through real-time calculation, the priority weight of each camera module is adjusted according to the motion trend of the dynamic blocks, the extension areas, and other factors.
It can be understood that by accurately predicting the motion trajectory of a dynamic block through Kalman filtering, the system can anticipate the block's motion trend; high-precision trajectory prediction greatly improves the system's responsiveness to dynamic events. Dynamic extension analysis lets the system identify in advance the areas a block may move into, so the relevant camera modules can prepare for the new monitoring area ahead of time. In particular, when a block approaches the boundary between shooting areas, this advance extension handling effectively reduces monitoring blind spots. For key camera modules, the system updates priority weights according to the predicted trajectory and extension areas of the dynamic block, ensuring that these modules acquire and process the block first; this strategy effectively safeguards the monitoring quality of key areas (such as crowded areas and important traffic nodes).
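A minimal sketch of the dynamic extension analysis (S53-S54) follows, under the simplifying assumption that each camera's field of view is an axis-aligned rectangle in the panorama's pixel coordinates; the region layout, block state, and step count are illustrative.

```python
# Extrapolate the Kalman state forward and find which other cameras'
# fields of view the predicted path enters (the "extension areas").
regions = {                       # camera_id: (x_min, y_min, x_max, y_max)
    "cam1": (0, 0, 100, 100),
    "cam2": (100, 0, 200, 100),
    "cam3": (200, 0, 300, 100),
}

def predict_path(pos, vel, steps, dt=1.0):
    """Future positions from the estimated position and velocity."""
    return [(pos[0] + vel[0] * dt * k, pos[1] + vel[1] * dt * k)
            for k in range(1, steps + 1)]

def extension_regions(path, current_cam):
    """Cameras (other than the current one) that the path enters."""
    hit = set()
    for px, py in path:
        for cam, (x0, y0, x1, y1) in regions.items():
            if cam != current_cam and x0 <= px < x1 and y0 <= py < y1:
                hit.add(cam)
    return hit

# Block at (90, 50) inside cam1, moving +12 px/frame in x: within 8
# frames it crosses into cam2's area, so cam2 is marked for a priority raise.
ext = extension_regions(predict_path((90, 50), (12, 0), steps=8), "cam1")
assert ext == {"cam2"}
```

The modules named in `ext` are the ones whose initial priority weight would then be updated in step S54.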
Preferably, the step of sequentially performing image stitching processing on the image capturing area images acquired by each image capturing module according to the real-time priority of the image capturing module to obtain dynamic stitched images includes:
S61, comparing the real-time priorities of the camera modules and sorting them according to the comparison result, to obtain the weight sequence of the camera modules;
S62, sequentially performing image stitching on the camera-area images acquired by each camera module according to the weight sequence, to obtain several dynamic stitched images composed of adjacent camera-area images.
Specifically, the real-time priorities of all camera modules are compared. Each module's real-time priority reflects factors such as the importance of its monitoring area, its coverage, and its response urgency in the current monitoring task; a module with a higher weight typically monitors a more important area or needs resources more urgently. All modules are then sorted by the comparison result, from the highest priority weight to the lowest, so the module with the largest weight comes first and lower-priority modules follow. The sorted result is a weight sequence that determines the monitoring priority and processing order of each camera module.
More specifically, according to the weight sequence generated in the previous step, the images collected by each camera module are stitched in turn; the higher a module's weight, the earlier its image is stitched. The goal of image stitching is to composite the area images collected by multiple camera modules into one large image, usually by means of image processing algorithms such as transformation, registration, and fusion. During stitching, smooth transitions between adjacent images must be ensured, without obvious seams or distortion. The images collected by the camera modules are stitched in sequence, finally yielding a dynamic stitched image composed of multiple adjacent area images. The stitched image provides a continuous monitoring field of view covering several areas, displaying the dynamic scene more comprehensively.
More specifically, the images may overlap during stitching, and the overlapping area must be handled carefully. An image fusion technique is typically used to process the overlap, avoiding repetition, blur, or unnatural transitions, and the edges of the stitched images are smoothed so that no obvious seams or abrupt visual effects appear in the stitching area. This gradual blending enhances the naturalness and visual consistency of the stitched image.
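As a minimal sketch of the overlap handling just described, a linear gradient ("feather") blend across the shared columns of two adjacent grayscale camera-area images might look like this; the image sizes and overlap width are illustrative assumptions.

```python
import numpy as np

def feather_blend(left, right, overlap):
    """Blend two HxW grayscale arrays that share `overlap` columns.
    The left image's weight ramps from 1 to 0 across the overlap."""
    h, w = left.shape
    out_w = 2 * w - overlap
    out = np.zeros((h, out_w), dtype=float)
    out[:, :w - overlap] = left[:, :w - overlap]        # left-only part
    out[:, out_w - (w - overlap):] = right[:, overlap:]  # right-only part
    alpha = np.linspace(1.0, 0.0, overlap)               # feather weights
    out[:, w - overlap:w] = (alpha * left[:, w - overlap:] +
                             (1 - alpha) * right[:, :overlap])
    return out

left = np.full((4, 6), 100.0)    # uniform bright region
right = np.full((4, 6), 200.0)   # uniform brighter region
pano = feather_blend(left, right, overlap=2)
assert pano.shape == (4, 10)     # 6 + 6 - 2 shared columns
```

In a real system the overlap width would come from the registration step, but the ramped weighting is what removes the visible seam.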
It can be understood that determining the stitching order by the real-time priority ranking of the camera modules gives higher-priority regions more attention and ensures that important regions are stitched first, improving the intelligence of the stitching process. Processing higher-priority areas first keeps the details of key areas clearer, and stitching by the priority weight sequence lets the system adjust the stitching order dynamically according to real-time monitoring needs. In dynamic scenes such as crowd flow or traffic monitoring, the stitching process can respond to changes in real time and automatically adjust the key areas of the stitched image, so the most important areas are accurately stitched and presented at every moment.
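The ordering logic of S61-S62 can be sketched as follows; the module names and weights are illustrative, and the fusion of each adjacent pair is left as a placeholder.

```python
def stitch_order(priorities):
    """S61: sort camera modules by real-time priority weight, highest first."""
    return [cam for cam, _ in
            sorted(priorities.items(), key=lambda kv: kv[1], reverse=True)]

priorities = {"cam1": 0.15, "cam2": 0.55, "cam3": 0.30}
order = stitch_order(priorities)
assert order == ["cam2", "cam3", "cam1"]

# S62: visit adjacent-area pairs in that order; the comment marks where
# registration + blending of the two frames would happen.
adjacent = {("cam1", "cam2"), ("cam2", "cam3")}
pairs_done = []
for cam in order:
    for a, b in sorted(adjacent):
        if cam in (a, b) and (a, b) not in pairs_done:
            pairs_done.append((a, b))  # stitch the frames of a and b here
```

Because cam2 has the highest weight, both pairs touching it are processed first, so the region the user cares about most is stitched before the rest of the panorama.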
Referring to fig. 2, in a second aspect, the present invention provides an image stitching system for a low-power camera module array, for implementing an image stitching method for a low-power camera module array according to any one of the first aspect, including:
The picture splicing module is used for acquiring panoramic images of a target scene through a camera module array formed by a plurality of camera modules so as to obtain camera area images of the target scene corresponding to the camera modules, and combining the camera area images of the camera modules so as to obtain a panoramic spliced picture of the target scene;
The key annotation module is used for acquiring a monitoring key annotation instruction of a monitoring user for the panoramic stitching picture, configuring initial priority weights of all the image pickup area images in the panoramic stitching picture according to the monitoring key annotation instruction, and dividing all the image pickup modules into key image pickup modules and non-key image pickup modules through the initial priority weights corresponding to all the image pickup area images;
the dynamic detection module is used for carrying out dynamic detection of appointed frequency on the image of the image pickup area collected by the non-key image pickup module, and carrying out real-time dynamic detection on the image of the image pickup area collected by the key image pickup module so as to obtain dynamic detection results of the non-key image pickup module and the key image pickup module;
The weight updating module is used for updating the initial priority of the image pickup module corresponding to the image pickup area image with the dynamic block when the dynamic detection result shows that the dynamic block appears in the image pickup area image, so as to obtain the real-time priority of the image pickup module;
The track tracking module is used for carrying out track tracking processing on the dynamic block according to a Kalman filtering algorithm when the image pickup module corresponding to the image pickup area image of the dynamic block is a key image pickup module so as to obtain a predicted motion track of the dynamic block, and carrying out dynamic extension analysis and corresponding initial priority updating on the key image pickup module according to the predicted motion track so as to obtain real-time priority of the image pickup module;
And the dynamic splicing module is used for sequentially carrying out image splicing processing on the image capturing area images acquired by each image capturing module according to the real-time priority of the image capturing module so as to obtain dynamic spliced images.
In this embodiment, for specific implementation of each module in the above system embodiment, please refer to the description in the above method embodiment, and no further description is given here.
The foregoing description of the preferred embodiments of the invention is not intended to be limiting, but rather is intended to cover all modifications, equivalents, and alternatives falling within the spirit and principles of the invention.

Claims (5)

1. An image stitching method for a low-computing-power camera module array, characterized by comprising:
acquiring panoramic images of a target scene through a camera module array integrating several camera modules, to obtain a panoramic stitched picture assembled from the camera-area images collected by the camera modules;
obtaining a monitoring user's monitoring-focus annotation instruction for the panoramic stitched picture, configuring an initial priority weight for each camera-area image in the panoramic stitched picture according to the monitoring-focus annotation instruction, and dividing the camera modules into key camera modules and non-key camera modules by the initial priority weight corresponding to each camera-area image;
performing dynamic detection at a specified frequency on the camera-area images collected by the non-key camera modules, and performing real-time dynamic detection on the camera-area images collected by the key camera modules, to obtain the dynamic detection results of the non-key camera modules and the key camera modules;
when the dynamic detection result shows that a dynamic block appears in a camera-area image, updating the initial priority weight of the camera module corresponding to that camera-area image, to obtain the real-time priority weight of the camera module;
when the camera module corresponding to the camera-area image in which the dynamic block appears is a key camera module, performing trajectory tracking on the dynamic block according to a Kalman filter algorithm to obtain the predicted motion trajectory of the dynamic block, and performing dynamic extension analysis and the corresponding initial priority weight update on the key camera module according to the predicted motion trajectory, to obtain the real-time priority weight of the camera module; and
stitching the camera-area images collected by the camera modules in sequence according to the modules' real-time priority weights, to obtain a dynamic stitched image;
wherein the step of obtaining the monitoring-focus annotation instruction, configuring the initial priority weights, and dividing the camera modules comprises:
constructing an interactive port based on the panoramic stitched picture, through which the monitoring user of the camera module array interacts with the panoramic stitched picture, thereby generating the monitoring-focus annotation instruction for the panoramic stitched picture;
parsing the monitoring-focus annotation instruction to obtain the area range information of the key monitoring area it indicates, and locating the monitored area of each camera module relative to the key monitoring area according to the area range information, to obtain a positioning result for each module;
when the positioning result shows that a camera module's monitored area lies within the key monitoring area, assigning the module a high-level initial priority weight and classifying it as a key camera module;
when the positioning result shows that a camera module's monitored area does not lie within the key monitoring area, assigning the module a low-level initial priority weight and classifying it as a non-key camera module;
wherein the step of updating the initial priority weight when a dynamic block appears comprises:
when the dynamic detection result shows that a dynamic block appears in a camera-area image, collecting the motion speed and the area ratio of the dynamic block, to obtain the motion speed v and the area ratio s of the dynamic block;
calculating the dynamic block weight according to the formula w = αv + βs, wherein w is the dynamic block weight, and α and β are preset balance adjustment parameters;
calculating the overall weight of a camera module in which dynamic blocks appear according to the formula pi = Σ(j∈Ri)wj / Σ(k=1→n)Σ(j∈Rk)wj, to obtain the module's real-time priority weight, wherein pi is the priority of the ith camera, Ri is the set of dynamic areas covered by the ith camera, wj is the weight of the dynamic block in area j, and n is the total number of cameras; Σ(j∈Ri)wj represents the weight sum of all dynamic blocks covered by a single camera i, Σ(k=1→n)Σ(j∈Rk)wj represents the sum of all dynamic block weights over all n cameras, the outer summation Σ(k=1→n) traverses all cameras from 1 to n, and the inner summation Σ(j∈Rk) computes the weight sum of all dynamic blocks covered by each camera;
and wherein the step of trajectory tracking and the corresponding priority weight update for a key camera module comprises:
when the camera module corresponding to the camera-area image in which the dynamic block appears is a key camera module, performing feature analysis on the dynamic block to obtain its position vector and velocity vector;
predicting the future change of the position vector and velocity vector of the dynamic block according to the Kalman filter algorithm, to obtain the predicted motion trajectory of the dynamic block;
performing dynamic extension analysis on the dynamic block according to its predicted motion trajectory, to obtain the camera area into which the dynamic block will extend in a future time period, and marking that camera area as an extension area; and
updating the initial priority weight of the camera module corresponding to the extension area, to obtain the module's real-time priority weight.
2. The image stitching method for a low-computing-power camera module array according to claim 1, characterized in that the step of acquiring panoramic images of the target scene through the camera module array composed of several camera modules, obtaining the camera-area images corresponding to the camera modules, and combining them to obtain the panoramic stitched picture of the target scene comprises:
configuring initial working parameters for the camera module array preset at a designated position, so that the array is in an initial debugging state, in which each camera module in the array captures images of the target scene to obtain the camera-area image corresponding to each module; and
stitching the camera-area images acquired by the camera modules according to the positional relationships given by each module's position in the array, to obtain the panoramic stitched picture of the target scene.
3. The image stitching method for a low-computing-power camera module array according to claim 1, characterized in that the step of updating the initial priority weight of the camera module corresponding to the extension area, to obtain the module's real-time priority weight, comprises:
marking the camera module corresponding to the dynamic block as the first camera module, and obtaining the real-time priority weight of the first camera module;
comparing the real-time priority weight of the first camera module with those of the remaining camera modules, to determine the camera module whose real-time priority weight is second only to that of the first camera module, and marking it as the second camera module; and
computing the intermediate value of the real-time priority weights of the first and second camera modules, to obtain the real-time priority weight of the camera module corresponding to the extension area.
4. The image stitching method for a low-computing-power camera module array according to claim 1, characterized in that the step of stitching the camera-area images in sequence according to the modules' real-time priority weights, to obtain a dynamic stitched image, comprises:
comparing the real-time priority weights of the camera modules, and sorting them according to the comparison result, to obtain the weight sequence of the camera modules; and
stitching the camera-area images collected by the camera modules in turn according to the weight sequence, to obtain several dynamic stitched images composed of adjacent camera-area images.
5. An image stitching system for a low-computing-power camera module array, characterized by being used to implement the image stitching method for a low-computing-power camera module array according to any one of claims 1-4, and comprising:
a picture stitching module, configured to acquire panoramic images of a target scene through a camera module array composed of several camera modules, obtain the camera-area images of the target scene corresponding to the camera modules, and combine them to obtain the panoramic stitched picture of the target scene;
a focus annotation module, configured to obtain a monitoring user's monitoring-focus annotation instruction for the panoramic stitched picture, configure an initial priority weight for each camera-area image according to the instruction, and divide the camera modules into key camera modules and non-key camera modules by the initial priority weights corresponding to the camera-area images;
a dynamic detection module, configured to perform dynamic detection at a specified frequency on the camera-area images collected by the non-key camera modules, and real-time dynamic detection on those collected by the key camera modules, to obtain the dynamic detection results of both;
a weight update module, configured to update, when the dynamic detection result shows that a dynamic block appears in a camera-area image, the initial priority weight of the corresponding camera module, to obtain its real-time priority weight;
a trajectory tracking module, configured to track the dynamic block according to a Kalman filter algorithm when the corresponding camera module is a key camera module, obtain the predicted motion trajectory of the dynamic block, and perform dynamic extension analysis and the corresponding initial priority weight update on the key camera module according to that trajectory, to obtain the module's real-time priority weight; and
a dynamic stitching module, configured to stitch the camera-area images collected by the camera modules in turn according to the modules' real-time priority weights, to obtain a dynamic stitched image.
CN202510273116.0A 2025-03-10 2025-03-10 Image stitching method and system for low-computation-power camera module array Active CN119815183B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202510273116.0A CN119815183B (en) 2025-03-10 2025-03-10 Image stitching method and system for low-computation-power camera module array

Publications (2)

Publication Number Publication Date
CN119815183A CN119815183A (en) 2025-04-11
CN119815183B true CN119815183B (en) 2025-11-28

Family

ID=95264988

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202510273116.0A Active CN119815183B (en) 2025-03-10 2025-03-10 Image stitching method and system for low-computation-power camera module array

Country Status (1)

Country Link
CN (1) CN119815183B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN120107707B (en) * 2025-05-09 2025-07-22 四川和生视界医药技术开发有限公司 Lesion analysis method and device based on fundus image
CN121095059A (en) * 2025-11-07 2025-12-09 深圳市元素创达科技有限公司 Panoramic image processing method and device and panoramic unmanned aerial vehicle

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN118233736A (en) * 2024-04-02 2024-06-21 芯粒微(深圳)科技有限公司 Computing resource calling method and related equipment of intelligent camera cluster

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116016862A (en) * 2022-12-27 2023-04-25 贵州宇鹏科技有限责任公司 Panoramic video real-time monitoring system
CN117423052B (en) * 2023-10-20 2024-08-16 山东运泰通信工程有限公司 Monitoring equipment adjustment and measurement system and method based on data analysis
CN118864941A (en) * 2024-07-01 2024-10-29 深圳技术大学 An AI-based image recognition system
CN118741255B (en) * 2024-08-30 2024-12-13 深圳大唐宝昌燃气发电有限公司 Monitoring data transmission method based on 5G
CN119545201A (en) * 2024-11-29 2025-02-28 江苏捷达交通工程集团有限公司 A design method and system for an exterior vehicle monitoring system based on a panoramic camera
CN119541118B (en) * 2025-01-20 2025-05-23 北京金蓝盾保安服务有限公司 A security area intrusion warning method and system based on video images

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN118233736A (en) * 2024-04-02 2024-06-21 芯粒微(深圳)科技有限公司 Computing resource calling method and related equipment of intelligent camera cluster

Also Published As

Publication number Publication date
CN119815183A (en) 2025-04-11

Similar Documents

Publication Publication Date Title
CN119815183B (en) Image stitching method and system for low-computation-power camera module array
CN109104561B (en) System and method for tracking moving objects in a scene
US10339386B2 (en) Unusual event detection in wide-angle video (based on moving object trajectories)
JP4643766B1 (en) Moving body detection apparatus and moving body detection method
US10121079B2 (en) Video tracking systems and methods employing cognitive vision
EP2549738B1 (en) Method and camera for determining an image adjustment parameter
US20150015787A1 (en) Automatic extraction of secondary video streams
KR100879623B1 (en) Automated Wide Area Surveillance System Using PTZ Camera and Its Method
US20040141633A1 (en) Intruding object detection device using background difference method
JP2009533778A (en) Video segmentation using statistical pixel modeling
CN101243470A (en) object tracking system
US20110181716A1 (en) Video surveillance enhancement facilitating real-time proactive decision making
EP2954499A1 (en) Information processing apparatus, information processing method, program, and information processing system
WO2003067884A1 (en) Method and apparatus for video frame sequence-based object tracking
CN102577347A (en) Omni-directional intelligent autotour and situational aware dome surveillance camera system and method
KR100820952B1 (en) Unmanned automatic parking control method using single camera and its system
Quiroga et al. As seen on tv: Automatic basketball video production using gaussian-based actionness and game states recognition
US20250203194A1 (en) Image processing device, image processing method, and program
Kumar et al. Real time target tracking with pan tilt zoom camera
CN114120165B (en) Gun-ball linked target tracking method, device, electronic device and storage medium
US10122984B2 (en) Pan/tilt/zoom camera based video playing method and apparatus
Fehr et al. Counting people in groups
KR20190026625A (en) Image displaying method, Computer program and Recording medium storing computer program for the same
JP2001094968A (en) Video processing equipment
El-Alfy et al. Multi-scale video cropping

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant