CN117315210A - Image blurring method based on stereoscopic imaging and related device - Google Patents
- Publication number
- CN117315210A (application number CN202311604811.8A)
- Authority
- CN
- China
- Prior art keywords
- image
- blurring
- data
- stereoscopic imaging
- region
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- G—PHYSICS; G06—COMPUTING, CALCULATING OR COUNTING; G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL:
- G06T19/20 — Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts (under G06T19/00, manipulating 3D models or images for computer graphics)
- G06T7/11 — Region-based segmentation (under G06T7/00, image analysis; G06T7/10, segmentation, edge detection)
- G06T7/13 — Edge detection
- G06T7/136 — Segmentation or edge detection involving thresholding
- G06T7/187 — Segmentation or edge detection involving region growing, region merging, or connected component labelling
- G06T7/50 — Depth or shape recovery
- G06T2207/10012 — Stereo images (under G06T2207/10, image acquisition modality; G06T2207/10004, still image, photographic image)
- G06T2207/10028 — Range image; depth image; 3D point clouds
- G06T2207/10088 — Magnetic resonance imaging [MRI] (under G06T2207/10072, tomographic images)
- G06T2207/10116 — X-ray image
- G06T2207/20021 — Dividing image into blocks, subimages or windows (under G06T2207/20, special algorithmic details)
- G06T2207/20024 — Filtering details
- G06T2207/20081 — Training; learning
- G06T2207/20084 — Artificial neural networks [ANN]
- G06T2207/30252 — Vehicle exterior; vicinity of vehicle (under G06T2207/30248, vehicle exterior or interior)
- G06T2207/30256 — Lane; road marking
- Y02T10/40 — Engine management systems (under Y02T, climate change mitigation technologies related to transportation; Y02T10/10, internal combustion engine based vehicles)
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Architecture (AREA)
- Computer Graphics (AREA)
- Computer Hardware Design (AREA)
- General Engineering & Computer Science (AREA)
- Software Systems (AREA)
- Image Processing (AREA)
Abstract
The invention relates to the technical field of image processing and discloses an image blurring method based on stereoscopic imaging, together with a related device, for improving the accuracy of image blurring based on stereoscopic imaging. The method comprises the following steps: collecting corresponding stereoscopic imaging data from a database, and performing image data filtering on the stereoscopic imaging data to obtain filtered image data; inputting the filtered image data into an image perception model for image segmentation processing to obtain a corresponding auxiliary atlas, and splitting out an image target subject from the filtered image data through the auxiliary atlas to obtain a target subject region; performing image region segmentation on the filtered image data through the target subject region to obtain a plurality of background region images; performing blurring processing on the plurality of background region images in the filtered image data through a multi-scale Gaussian filtering algorithm to obtain a candidate blurred image; and performing blurring-edge optimization processing on the candidate blurred image to obtain a target blurred image.
Description
Technical Field
The invention relates to the technical field of image processing, in particular to an image blurring method based on stereoscopic imaging and a related device.
Background
In recent years, with the rapid development of computer vision and image processing and the continuous progress of image acquisition technology, image processing and analysis have become core components of many application fields. The acquisition and processing of stereoscopic imaging data has wide application prospects in medicine, autonomous driving, virtual reality, the entertainment industry, and other fields. Stereoscopic imaging techniques provide richer depth information, helping to enhance the realism and information-conveying capability of an image.
In complex scenes containing multiple depth levels, it is difficult for a blurring algorithm to capture all depth information accurately. This produces undesirable blurring effects: parts of the foreground or background may be blurred improperly. As an object or the camera moves through the image, the blurring algorithm cannot track the object's depth changes, which results in an incoherent or inaccurate blurring effect on the object. When occlusion relationships exist between objects, the blurring algorithm cannot correctly identify and process the occlusions, resulting in inaccurate depth information.
Disclosure of Invention
The invention provides an image blurring method based on stereoscopic imaging and a related device, which are used for improving the accuracy of image blurring based on stereoscopic imaging.
The first aspect of the present invention provides an image blurring method based on stereoscopic imaging, the image blurring method based on stereoscopic imaging comprising:
collecting corresponding stereoscopic imaging data from a preset database, and performing image data filtering on the stereoscopic imaging data to obtain filtered image data;
inputting the filtered image data into a preset image perception model for image segmentation processing to obtain a corresponding auxiliary atlas, wherein the auxiliary atlas comprises a focus map subset, a depth map subset, and a mask map subset;
splitting out an image target subject from the filtered image data through the auxiliary atlas to obtain a target subject region;
performing image region segmentation on the filtered image data through the target subject region to obtain a plurality of background region images;
performing blurring processing on the plurality of background region images in the filtered image data through a preset multi-scale Gaussian filtering algorithm to obtain a candidate blurred image; and
performing blurring-edge optimization processing on the candidate blurred image to obtain a target blurred image.
With reference to the first aspect, in a first implementation manner of the first aspect of the present invention, collecting the corresponding stereoscopic imaging data from the preset database and performing image data filtering on the stereoscopic imaging data to obtain the filtered image data comprises:
collecting the corresponding stereoscopic imaging data from the database, and performing perspective matching on the stereoscopic imaging data to obtain a plurality of pieces of view angle information of the stereoscopic imaging data;
performing horizontal edge detection processing on the stereoscopic imaging data based on the plurality of pieces of view angle information to obtain horizontal edges of the stereoscopic imaging data;
performing vertical edge detection on the stereoscopic imaging data to obtain vertical edges of the stereoscopic imaging data;
constructing an image region frame from the horizontal edges and the vertical edges to obtain a target region frame;
performing pixel gradient calculation on the stereoscopic imaging data based on the target region frame to obtain gradient data corresponding to the stereoscopic imaging data; and
performing image filtering processing on the stereoscopic imaging data through the gradient data to obtain the filtered image data.
With reference to the first aspect, in a second implementation manner of the first aspect of the present invention, inputting the filtered image data into the preset image perception model for image segmentation processing to obtain the corresponding auxiliary atlas, wherein the auxiliary atlas comprises the focus map subset, the depth map subset, and the mask map subset, comprises:
inputting the filtered image data into an input layer of the image perception model for monocular depth estimation to obtain corresponding monocular depth estimation information;
performing first image segmentation on the filtered image according to the monocular depth estimation information and a preset image depth threshold sequence to obtain the corresponding depth map subset;
inputting the monocular depth estimation information into a convolution layer of the image perception model and performing data convolution processing on the filtered image data to obtain corresponding image convolution features;
performing image focus calibration on the filtered image through the image convolution features to obtain an image focus set corresponding to the filtered image;
performing second image segmentation processing on the filtered image based on the image focus set to obtain the focus map subset corresponding to the filtered image;
inputting the image focus set into a semantic segmentation layer of the image perception model for semantic segmentation processing to obtain semantic information corresponding to the filtered image; and
performing third image segmentation on the filtered image based on the semantic information corresponding to the filtered image to obtain the mask map subset of the filtered image.
With reference to the first aspect, in a third implementation manner of the first aspect of the present invention, splitting out the image target subject from the filtered image data through the auxiliary atlas to obtain the target subject region comprises:
performing image correlation analysis on the filtered image data through the auxiliary atlas to obtain correlation analysis data;
performing correlation region calibration on the filtered image data based on the correlation analysis data to obtain a plurality of correlation regions; and
performing target subject splitting on the filtered image data based on the plurality of correlation regions to obtain the target subject region.
With reference to the third implementation manner of the first aspect, in a fourth implementation manner of the first aspect of the present invention, performing target subject splitting on the filtered image data based on the plurality of correlation regions to obtain the target subject region comprises:
performing threshold screening on each correlation region to obtain at least one target correlation region;
performing connected region analysis on the at least one target correlation region to obtain a plurality of connected regions; and
performing target subject splitting on the filtered image data based on the plurality of connected regions to obtain the target subject region.
With reference to the first aspect, in a fifth implementation manner of the first aspect of the present invention, performing image region segmentation on the filtered image data through the target subject region to obtain the plurality of background region images comprises:
performing binarization processing on the target subject region to obtain a corresponding binarized region image;
performing mask map matching on the binarized region image based on the mask map subset to obtain a corresponding target mask map set;
performing mask map inversion processing on the binarized region image through the target mask map set to obtain a corresponding inverted region image; and
performing image region segmentation on the filtered image data based on the inverted region image to obtain the plurality of background region images.
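The mask-inversion step above can be sketched as follows. This is a minimal illustration assuming boolean NumPy masks for the binarized subject region and for each member of the mask map subset; the function and variable names are illustrative, not the patent's.

```python
import numpy as np

def split_background(img, subject_mask, mask_maps):
    """Inverted-mask background segmentation: each mask map selects the
    pixels of one background region; subject pixels are zeroed out."""
    background = ~subject_mask          # invert the binarized subject region
    regions = []
    for m in mask_maps:                 # one boolean mask per semantic region
        sel = m & background            # region pixels outside the subject
        regions.append(np.where(sel, img, 0))
    return regions
```

A usage sketch: with a 2x2 image, a one-pixel subject, and two complementary mask maps, the subject pixel is zeroed in every piece while each background pixel survives in exactly one piece.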
With reference to the first aspect, in a sixth implementation manner of the first aspect of the present invention, performing blurring processing on the plurality of background region images in the filtered image data through the preset multi-scale Gaussian filtering algorithm to obtain the candidate blurred image comprises:
performing filter scale matching on the plurality of background region images in the filtered image data to obtain a target filter scale corresponding to each background region image;
performing data standard deviation calculation on the plurality of background region images in the filtered image data through the multi-scale Gaussian filtering algorithm, based on the target filter scale corresponding to each background region image, to obtain a plurality of standard deviations; and
performing blurring processing on the plurality of background region images based on the plurality of standard deviations to obtain the candidate blurred image.
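A minimal sketch of the multi-scale Gaussian blurring just described, in which each background region is filtered at its own standard deviation and composited back while subject pixels are left untouched. The kernel-radius rule and the mask-based compositing are illustrative assumptions, not the patent's exact procedure.

```python
import numpy as np

def gaussian_kernel1d(sigma, radius=None):
    """1-D Gaussian kernel, normalised so the weights sum to 1."""
    radius = radius or max(1, int(3 * sigma))
    x = np.arange(-radius, radius + 1, dtype=float)
    k = np.exp(-x ** 2 / (2 * sigma ** 2))
    return k / k.sum()

def gaussian_blur(img, sigma):
    """Separable Gaussian filtering: convolve rows, then columns."""
    k = gaussian_kernel1d(sigma)
    tmp = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1,
                              img.astype(float))
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, tmp)

def multiscale_blur(img, region_masks, sigmas):
    """Blur each background region at its own matched scale and composite;
    pixels outside every mask (the subject) keep their original values."""
    out = img.astype(float).copy()
    for mask, sigma in zip(region_masks, sigmas):
        out[mask] = gaussian_blur(img, sigma)[mask]
    return out
```

In use, a larger sigma for more distant background regions produces a stronger blur there, while the subject region, excluded from every mask, stays sharp.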
The second aspect of the present invention provides an image blurring apparatus based on stereoscopic imaging, comprising:
an acquisition module, configured to acquire corresponding stereoscopic imaging data from a preset database and perform image data filtering on the stereoscopic imaging data to obtain filtered image data;
a processing module, configured to input the filtered image data into a preset image perception model for image segmentation processing to obtain a corresponding auxiliary atlas, wherein the auxiliary atlas comprises a focus map subset, a depth map subset, and a mask map subset;
a splitting module, configured to split out an image target subject from the filtered image data through the auxiliary atlas to obtain a target subject region;
a segmentation module, configured to perform image region segmentation on the filtered image data through the target subject region to obtain a plurality of background region images;
a blurring module, configured to perform blurring processing on the plurality of background region images in the filtered image data through a preset multi-scale Gaussian filtering algorithm to obtain a candidate blurred image; and
an optimization module, configured to perform blurring-edge optimization processing on the candidate blurred image to obtain a target blurred image.
A third aspect of the present invention provides an image blurring apparatus based on stereoscopic imaging, comprising: a memory and at least one processor, the memory having instructions stored therein; the at least one processor invokes the instructions in the memory to cause the stereoscopic imaging-based image blurring apparatus to perform the stereoscopic imaging-based image blurring method described above.
A fourth aspect of the present invention provides a computer readable storage medium having instructions stored therein which, when run on a computer, cause the computer to perform the stereoscopic imaging-based image blurring method described above.
In the technical solution provided by the invention, corresponding stereoscopic imaging data are collected from a database, and image data filtering is performed on the stereoscopic imaging data to obtain filtered image data; the filtered image data are input into an image perception model for image segmentation processing to obtain a corresponding auxiliary atlas comprising a focus map subset, a depth map subset, and a mask map subset; an image target subject is split out of the filtered image data through the auxiliary atlas to obtain a target subject region; image region segmentation is performed on the filtered image data through the target subject region to obtain a plurality of background region images; blurring processing is performed on the plurality of background region images in the filtered image data through a multi-scale Gaussian filtering algorithm to obtain a candidate blurred image; and blurring-edge optimization processing is performed on the candidate blurred image to obtain a target blurred image. In this solution, filtering the stereoscopic imaging data removes noise and unnecessary detail, enhancing the quality and sharpness of the image. Segmentation with the image perception model divides the filtered image into different regions, including the focus map subset, the depth map subset, and the mask map subset, which helps to better understand the structure and content of the image. Through the auxiliary atlas, the target subject region in the image can be effectively identified and extracted, which is important for subsequent processing such as image synthesis or blurring. After the target subject region is separated, the background can be further divided into a plurality of parts, so that different processing strategies can be applied to different background regions.
Performing blurring processing on the plurality of background region images through the multi-scale Gaussian filtering algorithm effectively blurs the background and highlights the target subject. Optimizing the blurring edges makes the blurring effect more natural and lifelike and reduces the incoherence between the blurred background and the target subject, thereby further improving the accuracy of image blurring based on stereoscopic imaging.
Drawings
FIG. 1 is a schematic diagram of an embodiment of an image blurring method based on stereoscopic imaging according to an embodiment of the present invention;
FIG. 2 is a flowchart of inputting filtered image data into a preset image perception model for image segmentation processing to obtain a corresponding auxiliary atlas in an embodiment of the present invention;
FIG. 3 is a flowchart of splitting an image target subject out of filtered image data through an auxiliary atlas in an embodiment of the present invention;
FIG. 4 is a flowchart of performing target subject splitting on filtered image data based on a plurality of correlation regions in an embodiment of the present invention;
FIG. 5 is a schematic view of an embodiment of a stereoscopic imaging-based image blurring apparatus according to an embodiment of the present invention;
fig. 6 is a schematic diagram of an embodiment of an image blurring apparatus based on stereoscopic imaging in an embodiment of the present invention.
Detailed Description
The embodiment of the invention provides an image blurring method based on stereoscopic imaging and a related device, which are used for improving the accuracy of image blurring based on stereoscopic imaging.
The terms "first," "second," "third," "fourth" and the like in the description and in the claims and in the above drawings, if any, are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that the embodiments described herein may be implemented in other sequences than those illustrated or otherwise described herein. Furthermore, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed or inherent to such process, method, article, or apparatus.
For ease of understanding, a specific flow of an embodiment of the present invention is described below with reference to fig. 1, and an embodiment of a stereoscopic imaging-based image blurring method in an embodiment of the present invention includes:
s101, acquiring corresponding stereoscopic imaging data from a preset database, and performing image data filtering on the stereoscopic imaging data to obtain filtered image data;
it is to be understood that the execution subject of the present invention may be an image blurring device based on stereoscopic imaging, and may also be a terminal or a server, which is not limited herein. In the embodiments of the invention, a server is taken as the execution subject by way of example.
Specifically, stereoscopic imaging data are obtained from a database. These data include a plurality of pieces of view angle information and are an important resource for the blurring process: views from different perspectives provide information about scene depth and object position. Based on the view angle information, horizontal edge detection processing is performed to extract edge information in the horizontal direction from the stereoscopic imaging data. Horizontal edge detection helps to determine horizontal edge structures in the image, which is critical for target localization and edge optimization during image blurring. After the horizontal edge detection, vertical edge detection is performed to extract edge information in the vertical direction, so that the structure of the image is understood more comprehensively; vertical edge detection helps to capture the contours and vertical edges of objects. On the basis of these two steps, an image region frame is constructed to obtain a target region frame. The target region frame contains the horizontal and vertical edge information and helps identify the main objects in the image and the structures around them. Pixel gradients are then calculated based on the target region frame: computing the gradient data for each pixel reveals the changes in the image in more detail, and the gradient information can be used for further image processing and optimization. Finally, image filtering processing is performed on the stereoscopic imaging data using the gradient data. The blurring effect is achieved by applying a suitable filtering algorithm; according to the gradient data, the filtering parameters may be adjusted to blur the background while preserving the sharpness of the main object, thereby producing the final filtered image data.
For example, assume that a server obtains stereoscopic imaging data from a medical database, including views from different angles such as X-ray and magnetic resonance images. By performing horizontal and vertical edge detection on these data, the server can extract information about tissue structure and organ contours. The server then builds a target region frame to ensure that, when the image is blurred, the patient's body parts, rather than the background, are of primary concern.
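As a rough sketch of the edge detection and gradient-guided filtering described above: Sobel operators and a flat-region mean filter stand in for whichever detectors and filters an implementation would actually use, and the gradient threshold value is an assumption.

```python
import numpy as np

def sobel_gradients(img):
    """Horizontal/vertical edge responses via 3x3 Sobel kernels."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)  # vertical edges
    ky = kx.T                                                    # horizontal edges
    pad = np.pad(img.astype(float), 1, mode="edge")
    gx = np.zeros(img.shape)
    gy = np.zeros(img.shape)
    h, w = img.shape
    for i in range(h):
        for j in range(w):
            win = pad[i:i + 3, j:j + 3]
            gx[i, j] = (win * kx).sum()
            gy[i, j] = (win * ky).sum()
    return gx, gy

def gradient_guided_filter(img, thresh=50.0):
    """Smooth low-gradient (flat) pixels with a 3x3 mean; keep edge
    pixels untouched so the main object stays sharp."""
    gx, gy = sobel_gradients(img)
    mag = np.hypot(gx, gy)
    pad = np.pad(img.astype(float), 1, mode="edge")
    out = img.astype(float).copy()
    h, w = img.shape
    for i in range(h):
        for j in range(w):
            if mag[i, j] < thresh:          # flat region: denoise
                out[i, j] = pad[i:i + 3, j:j + 3].mean()
    return out
```

On a step-edge test image, the gradient magnitude peaks along the edge, and the filter smooths only the flat regions on either side of it.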
S102, inputting the filtered image data into a preset image perception model for image segmentation processing to obtain a corresponding auxiliary atlas, wherein the auxiliary atlas comprises a focus map subset, a depth map subset, and a mask map subset;
specifically, the filtered image data are input into the input layer of the image perception model for monocular depth estimation. This step computes depth information for each part of the image: monocular depth estimation is a technique for estimating object distance from the camera, typically implemented with a neural network, and the estimates provide depth information that can be used for further segmentation. According to a preset image depth threshold sequence, first image segmentation is performed based on the monocular depth estimation information to obtain the depth map subset. The depth map subset contains image portions of similar depth, which helps to separate the different depth levels in the image and to place the focus on the main object. Meanwhile, the monocular depth estimation information is input into the convolution layer of the image perception model for data convolution processing, yielding image convolution features. These features carry important information for further processing and help determine structures and characteristics in the image. Image focus calibration is then performed based on the image convolution features to obtain an image focus set. The image focus set indicates which areas of the image are focus areas, i.e., areas where sharpness needs to be maintained; this step helps to separate the main object from the background for better blurring. Based on the image focus set, second image segmentation processing is performed to obtain the focus map subset, which contains the main objects or regions that need to retain sharpness and detail. These image portions become the focus of the final blurred image. The image focus set is then input into the semantic segmentation layer of the image perception model for semantic segmentation processing to obtain semantic information corresponding to the filtered image.
The semantic information helps to further distinguish the different objects in the image, as well as their categories and characteristics. Based on the semantic information of the filtered image, third image segmentation is performed to obtain the mask map subset. The mask map subset contains mask information for the different objects and regions, which facilitates locating and processing those objects in the blurred image; through the mask map subset, the blurring algorithm can correctly handle the relationships and depth information between different objects. For example, assume a vehicle is equipped with a stereo camera to acquire stereoscopic imaging data on a road. These data include multiple perspectives, allowing the server to learn the depth information of different objects and of the road. The server inputs the filtered image data into the image perception model. Through monocular depth estimation, the server computes the depth of the different objects and road portions in the image, and according to this depth information performs first image segmentation to divide the image into different depth layers. The server then uses convolution processing and image focus calibration to determine which areas must remain sharp, which helps separate the main vehicle from the road. The second image segmentation generates the focus map subset, making the vehicle clearer in the image while the background is blurred. Finally, the server performs semantic segmentation and third image segmentation to identify the semantic information of the different objects and the road and to generate the mask map subset. These masks let the server know the locations of traffic signs, pedestrians, and other vehicles on the road, and ensure that they are handled properly in the blurring process.
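Assuming a monocular depth map has already been produced by the perception model's network (not reproduced here), the "first image segmentation" against a preset depth threshold sequence can be sketched as follows; the threshold values and the binary-mask encoding of each depth layer are illustrative assumptions.

```python
import numpy as np

def depth_layer_split(depth, thresholds):
    """Split a depth map into binary layer masks using an ascending
    threshold sequence: each mask selects one depth level."""
    bounds = [-np.inf] + sorted(thresholds) + [np.inf]
    layers = []
    for lo, hi in zip(bounds[:-1], bounds[1:]):
        layers.append((depth >= lo) & (depth < hi))
    return layers  # one boolean mask per depth level
```

Every pixel falls into exactly one layer, so the masks partition the image into the depth levels that later receive different blur strengths.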
S103, splitting out an image target subject from the filtered image data through the auxiliary atlas to obtain a target subject region;
specifically, image correlation analysis is performed on the filtered image data through the auxiliary atlas. The aim is to evaluate the correlation between different parts of the image: correlation analysis helps determine which regions or objects in the image are more strongly related, which is critical for target subject splitting. The analysis may use different image features, such as color, texture, or depth information, to compute the similarity between regions. Based on the result of the correlation analysis, the server obtains correlation analysis data reflecting the degree of correlation between the different regions in the image; a higher correlation indicates that the regions belong to the same object or area. Correlation regions are then calibrated according to the correlation analysis data. This step divides the image into a plurality of correlation regions, where the pixels within each region have similar characteristics or correlation. The calibration of these regions may be done with clustering or segmentation algorithms to ensure that related portions of the image are grouped together. Based on the plurality of correlation regions, target subject splitting is performed: different objects or regions in the image are separated in preparation for the blurring process. The splitting can be adjusted to the needs of different applications to ensure that the main object is clearly located and highlighted. For example, suppose a vehicle captures stereoscopic imaging data of road and traffic conditions with onboard cameras and sensors. Image correlation analysis is performed on the filtered stereoscopic imaging data through the auxiliary atlas, including correlation analysis between vehicles, pedestrians, traffic signals, and other vehicles, to determine the correlation of the different areas.
Based on the correlation analysis data, the server calibrates the correlation regions, dividing the image into different areas, each containing image portions with similar correlation, such as other vehicles, pedestrians, or traffic signals. Target subject splitting is then performed through the calibrated correlation regions: the different road elements are separated, helping the vehicle better understand road conditions, make decisions, and exercise control.
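One concrete way to realize the correlation analysis above is normalised cross-correlation (NCC) of image windows against a reference patch of the subject. The patent does not fix the similarity measure, so NCC is an illustrative choice here, and the function names are assumptions.

```python
import numpy as np

def correlation_map(img, template):
    """Normalised cross-correlation of each template-sized window with a
    reference patch; high values mark candidate subject-related regions."""
    th, tw = template.shape
    t = template - template.mean()
    tn = np.sqrt((t ** 2).sum()) or 1.0       # guard against a flat template
    h, w = img.shape
    out = np.zeros((h - th + 1, w - tw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            win = img[i:i + th, j:j + tw]
            win = win - win.mean()
            wn = np.sqrt((win ** 2).sum()) or 1.0
            out[i, j] = (win * t).sum() / (wn * tn)
    return out  # values in [-1, 1]; 1 means a perfect match
```

Windows whose correlation exceeds a chosen threshold would then be calibrated as correlation regions for the subsequent splitting step.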
Wherein a threshold screening is performed for each correlation region, and a threshold for the correlation region is set to determine which regions are considered as target correlation regions. The threshold may be based on correlation analysis data or may be based on the needs of a particular application. The goal is to select an area that is related to the primary object. And obtaining at least one target correlation area through threshold value screening. These regions are considered to be associated with the primary object, being the initial regions of target subject split. Subsequently, a connected region analysis is performed on the at least one target correlation region. Each target correlation region is divided into a plurality of connected regions. The connected regions are made up of adjacent pixels that are similar in color, texture, or depth. Analyzing the connected region helps to more accurately identify and isolate the target subject. And splitting the target subject for the filtered image data based on the plurality of connected regions. Each connected region is identified as an individual target or target portion. This step helps to separate the main object from the background in preparation for the blurring process. For example, the server determines the primary targets (e.g., other vehicles, pedestrians, or obstacles) on the road and applies a blurring effect to them while keeping the background clear. A threshold screening is performed for each correlation region. The server analyzes the relevance of the different parts of the image and then sets a threshold to select the region relevant to the primary target. This includes identifying areas of relevance for other vehicles or pedestrians. The server obtains at least one target correlation region through threshold screening, and the regions are associated with main targets, such as other vehicles. And carrying out connected region analysis on at least one target correlation region. 
This may help the server subdivide the target relevance area into different connected areas, each representing a separate vehicle. Based on the plurality of connected regions, the server performs target subject splitting, identifying each connected region as a target. This helps the server highlight other vehicles in the blurring process while keeping the road background clear.
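A minimal sketch of the threshold screening and connected region analysis described above, assuming a per-pixel correlation map and using SciPy's connected-component labelling; the `threshold` and `min_area` values are illustrative, not taken from the embodiment:

```python
import numpy as np
from scipy import ndimage

def split_target_subjects(correlation_map, threshold=0.5, min_area=4):
    """Threshold-screen a per-pixel correlation map, then split the
    surviving mask into connected regions, each treated as one target.
    """
    target_mask = correlation_map >= threshold      # threshold screening
    labeled, n = ndimage.label(target_mask)         # connected-component labelling
    regions = []
    for rid in range(1, n + 1):
        region = labeled == rid
        if region.sum() >= min_area:                # discard tiny blobs as noise
            regions.append(region)
    return regions
```

Each returned boolean mask corresponds to one connected region, i.e. one candidate target subject such as an individual vehicle.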
S104, performing image region segmentation on the filtered image data through the target main region to obtain a plurality of background region images;
specifically, binarization processing is performed on the target subject area. The target subject region is divided into two parts: 1 and 0. Typically, the body portion will be labeled 1 and the non-body portion will be labeled 0. This binarization process helps to explicitly indicate which regions are the main object and which are the background. Mask map matching is performed on the binarized area image based on the mask map subset. The mask map subset contains mask information about the object for identifying the location and shape of the object. By matching the mask information, the server determines the exact location of the subject area. And obtaining a target mask graph set through mask graph matching. This set includes mask information for each subject area. And performing mask map inversion processing on the binarized area image through the target mask map set. This step inverts the bulk regions to white and the non-bulk regions to black. This is in preparation for final region segmentation to obtain multiple background region images. Image region segmentation is performed on the filtered image data based on the inverted region image. The inversion region is divided into a plurality of portions, each representing a different background region. These background area images will be used for blurring processing to highlight the main object. For example, in MRI images, the patient's body structures and abnormal areas need to be highlighted while the background is blurred to reduce interference. The image is divided into a main body region representing abnormal tissue and a background region representing normal tissue. This may be achieved by image segmentation methods such as thresholding or region growing. The body region is binarized, and the abnormal tissue is marked as 1 and the normal tissue is marked as 0. This helps to explicitly indicate the location of anomalies in the image. 
Mask map matching is performed on the binarized region images using a mask map subset containing mask information about the abnormal tissue to ensure the accurate location of the abnormal tissue. By mask map matching, a mask map set of the abnormal tissue is obtained, and the mask information is used for inverting the abnormal region. Mask map inversion processing is performed on the binarized region image through the target mask map set, inverting the abnormal tissue to 0 and the normal tissue to 1. Image region segmentation is then carried out on the filtered image data based on the inverted region image to obtain a plurality of background region images, where these images represent normal tissue. These background area images may be blurred to highlight the abnormal tissue.
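The binarization, inversion and background extraction steps can be sketched as follows with NumPy; `subject_mask` stands in for the matched target mask map set, and the actual mask map matching against a mask map subset is not shown:

```python
import numpy as np

def segment_background_regions(image, subject_mask):
    """Binarise the subject region, invert the mask, and use the
    inverted mask to cut background pixels out of the filtered image.
    """
    binary = (subject_mask > 0).astype(np.uint8)   # subject -> 1, rest -> 0
    inverted = 1 - binary                          # subject -> 0, background -> 1
    background = image * inverted[..., None]       # zero out the subject pixels
    return binary, inverted, background
```

The `background` image can then be further partitioned (e.g. by the connected-region analysis above) into the plurality of background area images that S105 blurs.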
S105, blurring processing is carried out on a plurality of background area images in the filtered image data through a preset multi-scale Gaussian filtering algorithm, and candidate blurring images are obtained;
Specifically, filter scale matching is performed, and an appropriate filter scale is selected for each background region of the image. Filter scale matching is the problem of determining which Gaussian filter will be best suited for blurring a particular background region. A target filter scale is thus determined for each background area image. The choice of the target filter scale is based on the characteristics of the background region, including its size and texture. Different background areas require filters of different scales to achieve the best blurring effect. The data standard deviation of each background area image is calculated using a multi-scale Gaussian filtering algorithm. The data standard deviation is a measure for evaluating the degree of variation of pixel values in an image: the larger the standard deviation, the greater the differences between pixel values; the smaller the standard deviation, the smaller the variation. According to the calculated standard deviations, blurring processing is carried out on each background area image to obtain candidate blurring images. Areas of greater standard deviation will be more strongly blurred to reduce their detail and texture, thus making the main object more prominent. For example, it is assumed that stereoscopic imaging data of a road scene is acquired using a stereoscopic imaging sensor. The image is filtered in preparation for blurring. By means of filter scale matching, suitable filter scales are selected for different background areas of the road; for example, the textured and smooth parts of the road require different scales. The data standard deviation of each background area image is then calculated according to the target filter scale. For example, road signs on roads have a large standard deviation, while smooth portions of the road itself have a small standard deviation. 
Blurring is then applied to the different background areas based on the calculated standard deviations. Consistent with the rule above, areas of greater standard deviation (e.g., road signs) are blurred more strongly to suppress their distracting detail and texture, while areas of lesser standard deviation (e.g., the smooth road surface) need only light blurring, since they contain little detail to begin with.
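A simplified sketch of the standard-deviation-driven blurring, assuming single-channel images and SciPy's Gaussian filter; the linear mapping from standard deviation to filter sigma is an assumed choice, not one specified in the embodiment:

```python
import numpy as np
from scipy import ndimage

def blur_background_regions(image, region_masks, base_sigma=1.0):
    """Blur each background region with a sigma scaled by that region's
    pixel standard deviation: the more varied the region, the stronger
    the blur. The scaling constant 50.0 is a hypothetical normaliser.
    """
    out = image.astype(float).copy()
    for mask in region_masks:
        std = image[mask].std()                      # data standard deviation
        sigma = base_sigma * (1.0 + std / 50.0)      # assumed std -> sigma mapping
        blurred = ndimage.gaussian_filter(image.astype(float), sigma=sigma)
        out[mask] = blurred[mask]                    # blur only inside this region
    return out
```

Pixels outside every region mask (i.e. the target subject) are left untouched, which is what keeps the main object sharp in the candidate blurring image.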
S106, performing blurring edge optimization processing on the candidate blurring image to obtain a target blurring image.
Specifically, edge detection is first performed on the candidate blurring image. Edge detection determines the boundary between an object and the background in an image. This can be done using various image processing algorithms, such as Canny edge detection or the Sobel operator. Edge detection generates a binarized image in which edge portions are marked white and non-edge portions are marked black. Then, by fusing the candidate blurring image and the edge detection result, blurring edge optimization is achieved, ensuring that the blurring transition is smooth rather than abrupt, to enhance the realism of the image. The fusion may be achieved in a variety of ways, including pixel-level fusion, filter-level fusion, and the like. For pixel-level fusion, a weight is applied to each pixel to adjust the degree of blurring according to the result of edge detection. Pixels near the edges will be affected less by blurring, while pixels in the center of the background area will be affected more by blurring. This method ensures a smooth transition of the blurring effect, making the image more natural. Filter-level fusion takes into account the filter responses of both the candidate blurring image and the edge detection. This may be achieved by convolving the frequency domain representations of the two. This approach allows finer-grained control of the blurring transitions to ensure consistency in the appearance of the image. A target blurring image is obtained in which the edge portions are optimized to ensure that the contours of the main object are clearly visible, while the background areas remain blurred to improve focus.
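The pixel-level fusion described above can be sketched as a distance-weighted blend between the sharp and blurred images: each pixel's weight grows with its distance from the nearest detected edge. The `falloff` constant (in pixels) is an assumed parameter.

```python
import numpy as np
from scipy import ndimage

def optimize_blur_edges(sharp, blurred, edge_mask, falloff=3.0):
    """Pixel-level fusion: pixels on or near detected edges stay close
    to the sharp image; pixels deep inside the background take the full
    blur, giving a smooth rather than abrupt blurring transition.
    """
    dist = ndimage.distance_transform_edt(~edge_mask)  # distance to nearest edge
    w = np.clip(dist / falloff, 0.0, 1.0)              # 0 at edges -> 1 far away
    return w * blurred + (1.0 - w) * sharp
```

Here `edge_mask` would be the binarized output of a detector such as Canny; any edge detector producing a boolean map fits this sketch.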
In the technical scheme provided by the embodiment of the invention, corresponding stereo imaging data are acquired from a database, and image data filtering is carried out on the stereo imaging data to obtain filtered image data; inputting the filtered image data into an image perception model for image segmentation processing to obtain a corresponding auxiliary image set, wherein the auxiliary image set comprises a focus image subset, a depth image subset and a mask image subset; splitting an image target subject of the filtered image data through the auxiliary atlas to obtain a target subject region; image region segmentation is carried out on the filtered image data through the target main body region, so that a plurality of background region images are obtained; blurring a plurality of background area images in the filtered image data by a multi-scale Gaussian filtering algorithm to obtain candidate blurring images; and carrying out blurring edge optimization processing on the candidate blurring image to obtain a target blurring image. In the scheme, noise and unnecessary details can be removed by filtering the stereoscopic imaging data, so that the quality and definition of the image are enhanced. Segmentation using an image perception model enables the filtered image to be divided into different regions, including a focus map subset, a depth map subset, and a mask map subset. This helps to better understand the structure and content of the image. By the auxiliary atlas, the target subject region in the image can be effectively identified and extracted. This is important for subsequent processing, such as image synthesis or blurring. After the target main body area is separated, the background area can be further divided into a plurality of parts, so that different processing strategies can be adopted for different background areas. 
The blurring processing is carried out on the plurality of background area images through the multi-scale Gaussian filtering algorithm, so that the background can be effectively blurred and the target subject highlighted, making the target more prominent. By optimizing the blurring edges, the blurring effect becomes more natural and lifelike, and the visual discontinuity between the blurred background and the target subject is reduced, thereby further improving the accuracy of image blurring based on stereoscopic imaging.
In a specific embodiment, the process of executing step S101 may specifically include the following steps:
(1) Corresponding stereoscopic imaging data are collected from a database, and perspective matching is carried out on the stereoscopic imaging data to obtain a plurality of perspective information of the stereoscopic imaging data;
(2) Performing horizontal edge detection processing on the stereoscopic imaging data based on the plurality of view angle information to obtain horizontal edges of the stereoscopic imaging data;
(3) Vertical edge detection is carried out on the three-dimensional imaging data, so that the vertical edge of the three-dimensional imaging data is obtained;
(4) Constructing an image area frame for the horizontal edge and the vertical edge to obtain a target area frame;
(5) Performing pixel gradient calculation on stereoscopic imaging data based on the target region frame to obtain gradient data corresponding to the stereoscopic imaging data;
(6) And performing image filtering processing on the stereoscopic imaging data through the gradient data to obtain filtered image data.
Specifically, corresponding stereoscopic imaging data is first obtained from a database. Such data is typically acquired by a dual camera system or other stereoscopic imaging device. Viewing angle matching is performed to ensure that images from different viewing angles are properly aligned. Viewing angle matching is a process of ensuring that pixels of each stereoscopic image correspond correctly in three-dimensional space. This involves finding the translational and rotational relationships between the left and right cameras in order to align them, the alignment results directly affecting the accuracy of the subsequent depth estimation and stereoscopic imaging data. Based on the plurality of perspective information, the server proceeds with horizontal edge detection. The purpose of this step is to capture edges and structures in the horizontal direction in the image. The horizontal edges generally correspond to horizontal features and object boundaries in the scene. These edge information help to extract depth information and identify objects in the image. And simultaneously, vertical edge detection is carried out on the stereoscopic imaging data, so that the vertical edge of the stereoscopic imaging data is obtained. The vertical edges are typically associated with depth information in stereoscopic imaging, as depth variations are typically presented in the form of vertical edges. The server then merges the horizontal edge and vertical edge information to construct a target region box. These region boxes identify regions of interest in the image, including boundaries of objects, locations of depth variations, or other important structures. Based on the target region frame, the server performs pixel gradient calculation to learn gradient information of each pixel in the image. The gradient represents the degree of variation of the pixel values, which is critical for the subsequent depth estimation and blurring process. 
The server performs image filtering processing on stereoscopic imaging data using the gradient data. This step aims to enhance the quality of the image, remove noise and highlight specific features. Image filtering helps prepare the data for further image analysis and processing.
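Steps (2), (3), (5) and (6) above could be sketched as follows, using SciPy's Sobel operators for the horizontal and vertical edge detection, the combined gradient magnitude as the pixel gradient data, and a gradient-weighted blend as a stand-in for the gradient-driven image filtering (the region-frame construction of step (4) is omitted, and the gradient normaliser 50.0 is illustrative):

```python
import numpy as np
from scipy import ndimage

def gradient_guided_filter(image):
    """Detect horizontal/vertical edges, build per-pixel gradient data,
    and smooth flat areas more than edge areas so that noise is reduced
    while edges and structure are preserved.
    """
    img = image.astype(float)
    gy = ndimage.sobel(img, axis=0)          # responds to horizontal edges
    gx = ndimage.sobel(img, axis=1)          # responds to vertical edges
    grad = np.hypot(gx, gy)                  # pixel gradient data
    smooth = ndimage.gaussian_filter(img, sigma=1.5)
    w = np.clip(grad / 50.0, 0.0, 1.0)       # keep detail where gradient is high
    return w * img + (1.0 - w) * smooth, grad
```

A production system would more likely use an edge-preserving filter (bilateral or guided filtering) here; this blend is only meant to show how the gradient data steers the filtering strength.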
In a specific embodiment, as shown in fig. 2, the process of executing step S102 may specifically include the following steps:
s201, inputting the filtered image data into an input layer of an image perception model to perform image monocular depth estimation, and obtaining corresponding monocular depth estimation information;
s202, performing first image segmentation on the filtered image based on the monocular depth estimation information and a preset image depth threshold sequence to obtain a corresponding depth map subset;
s203, inputting monocular depth estimation information into a convolution layer of an image perception model, and performing data convolution processing on the filtered image data to obtain corresponding image convolution characteristics;
s204, calibrating an image focus of the filtered image through the image convolution characteristics to obtain an image focus set corresponding to the filtered image;
s205, performing second image segmentation processing on the filtered image based on the image focus set to obtain a focus map subset corresponding to the filtered image;
s206, inputting the image focus set into a semantic segmentation layer of the image perception model to perform semantic segmentation processing to obtain semantic information corresponding to the filtered image;
s207, performing third image segmentation on the filtered image based on semantic information corresponding to the filtered image to obtain a mask map subset of the filtered image.
The filtered image data is input into an input layer of the image perception model to perform image monocular depth estimation, so as to obtain corresponding monocular depth estimation information. The purpose is to infer depth information for each pixel in the image. Monocular depth estimation typically relies on neural network models to learn to understand the distance and depth relationships of objects in an image, thereby generating a depth map. The server then performs a first image segmentation based on a preset sequence of image depth thresholds. Different parts of the image are divided into different depth layers according to the depth estimation information, forming a depth map subset. This helps to separate objects or scene parts of different depths. The monocular depth estimation information is input into a convolution layer of the image perception model for data convolution processing. This step aims at extracting the feature and structure information of the image to better understand the image content. A convolutional layer can typically learn the texture, edges, and other features of an image. Using the image convolution features, the server performs image focus calibration, which helps determine the focus area in the image. The image focus set defines which regions in the image are sharp and important and which regions are blurred or secondary. Based on the image focus set, a second image segmentation is performed to divide the image into focus map subsets, helping to identify the focus objects or regions in the image. The image focus set is then input into a semantic segmentation layer of the image perception model for semantic segmentation processing, obtaining semantic information corresponding to the filtered image. The purpose is to assign semantic tags to different objects or categories in an image to obtain semantic information of the image. 
Based on the semantic information of the image, a third image segmentation is performed to obtain a mask map subset. This step helps to identify different objects, categories or areas in the image, separate them and extract them. For example, the image perception model may receive images from a vehicle camera, and the monocular depth estimation may help the vehicle understand the distance and location of obstacles on the road. Image focus calibration can determine which objects or regions are particularly important for safe driving. Semantic segmentation helps identify traffic signs, pedestrians, vehicles, etc., and generates semantic maps. The subset of mask maps may be used to identify different areas on the road, such as lanes, shoulders, traffic signals, and the like.
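The first image segmentation against a preset depth threshold sequence (step S202) can be sketched as slicing the estimated depth map into layers, one boolean mask per layer; the threshold values here are illustrative:

```python
import numpy as np

def segment_by_depth(depth_map, thresholds):
    """Slice a monocular depth map into layers using an ascending,
    preset sequence of depth thresholds. Returns one boolean mask per
    layer; together the masks form the depth map subset.
    """
    edges = [-np.inf] + sorted(thresholds) + [np.inf]
    return [(depth_map >= lo) & (depth_map < hi)
            for lo, hi in zip(edges[:-1], edges[1:])]
```

With N thresholds this produces N+1 non-overlapping layers whose union covers every pixel, so each pixel is assigned to exactly one depth layer.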
In a specific embodiment, as shown in fig. 3, the process of executing step S103 may specifically include the following steps:
s301, performing image correlation analysis on the filtered image data through an auxiliary atlas to obtain correlation analysis data;
s302, performing correlation region calibration on the filtered image data based on correlation analysis data to obtain a plurality of correlation regions;
s303, splitting the target subject of the filtered image data based on the correlation areas to obtain a target subject area.
The image correlation analysis is performed using the auxiliary atlas. The auxiliary atlas is a dataset containing information related to the filtered image data and may include a focus map subset, a depth map subset, a mask map subset, and so on. This information is a key factor for understanding objects and structures in the filtered image. By assisting the atlas, the server performs an image correlation analysis with the aim of determining the correlation of different regions in the filtered image, i.e. which regions have similar features or content. This helps the server to identify the interrelationship between different objects, structures or regions in the image, thereby better understanding the image content. Based on the correlation analysis data, the server performs correlation region calibration on the filtered image data. The image is divided into a plurality of correlation regions, where each region contains pixels with similar features or content. This helps group objects or regions in the image together for better analysis and processing thereof. Based on the plurality of correlation areas, the server performs object subject splitting to obtain object subject areas in the image. This step helps to separate the main objects or targets in the image, enabling finer image segmentation and analysis. For example, the auxiliary atlas may include a subset of mask atlas that contains information about roads, vehicles, and pedestrians. Through image correlation analysis, it can be determined which regions in the image contain similar road and vehicle features. Based on these relevance areas, the image can be separated into different road segments and traffic situations. The target subject split may be used to separate different vehicles or pedestrians for traffic monitoring or decision making.
In a specific embodiment, as shown in fig. 4, the process of executing step S303 may specifically include the following steps:
s401, screening a threshold value of each correlation area to obtain at least one target correlation area;
s402, carrying out connected region analysis on at least one target correlation region to obtain a plurality of connected regions;
s403, splitting the target subject of the filtered image data based on the plurality of connected regions to obtain a target subject region.
In particular, in image processing, each correlation region may contain pixels having similar characteristics or content. These correlation regions may represent different objects, structures or areas in the image. But the server is only interested in a specific area or a specific type of correlation region. Thus, a threshold screening is performed on the correlation regions. A threshold value is set, and the correlation regions with specific characteristics or contents are screened out according to a certain standard, thereby obtaining at least one target correlation region. The specific manner of threshold screening depends on the requirements of the application and the characteristics of the image. For example, in autopilot, a threshold may be used to filter out relevant areas containing traffic signs. Next, a connected region analysis is performed to group pixels in the target correlation region into a plurality of connected regions, wherein each connected region is an independent, continuous set of pixels. Connected region analysis typically involves connectivity and adjacency analysis between pixels. Through connected region analysis, the target correlation region may be subdivided into a plurality of connected regions, each representing a separate target or object. This helps to better understand the distribution and structure of objects in the image. For example, in autopilot, a target correlation area containing a plurality of traffic signs may be divided into a plurality of separate connected areas, each area representing a different traffic sign. Target subject splitting is then performed based on the plurality of connected regions. This step helps to separate different targets or objects in the image, thereby enabling finer image segmentation and analysis. The result of target subject splitting is a set of target subject regions, where each region represents an independent object.
In a specific embodiment, the process of executing step S104 may specifically include the following steps:
(1) Performing binarization processing on the target main body area to obtain a corresponding binarization area image;
(2) Based on the mask map subset, performing mask map matching on the binarized region image to obtain a corresponding target mask map set;
(3) Performing mask map inversion processing on the binarized region image through a target mask map set to obtain a corresponding inversion region image;
(4) And carrying out image region segmentation on the filtered image data based on the inversion region image to obtain a plurality of background region images.
Specifically, binarization processing is performed on the target main body region to obtain a corresponding binarized region image. Binarization is the conversion of pixel values in an image into binary values (0 or 1), which pixels should be considered as target areas and which should be considered as background, usually based on a specific threshold. This step helps to separate the object from the background, so that further image processing is better performed. In autopilot, a target subject area (e.g., a road) may be binarized to separate the road from the surrounding environment. Mask map matching is performed on the binarized area image based on the mask map subset. The mask map subset contains information about the target region, typically represented in the form of a mask. By mask map matching, the server associates the target region with the corresponding mask, thereby obtaining more information about the target. This may be used to determine the shape, location or other characteristics of the object. For example, a mask of traffic signs may be used to match a binarized area image of a road to determine the location of the traffic sign. Subsequently, the binarized area image is subjected to mask map inversion processing by the target mask map set. The mask of the target area is inverted to the mask of the background area, thereby acquiring an inverted area image. The inverted region image represents a background region corresponding to the target region, which is very useful for separating the target and the background. For example, by reversing the road mask, the server obtains a background area image outside the road. Image region segmentation is performed on the filtered image data based on the inverted region image to obtain a plurality of background region images. This step helps to divide the image into a plurality of background regions, each region representing a different background element or structure. 
For example, by reversing the road area image, the image can be separated into the background of the road and the surrounding environment.
In a specific embodiment, the process of executing step S105 may specifically include the following steps:
(1) Performing filter scale matching on a plurality of background area images in the filtered image data to obtain a target filter scale corresponding to each background area image;
(2) Based on the target filter scale corresponding to each background area image, performing data standard deviation calculation on a plurality of background area images in the filtered image data through a multi-scale Gaussian filtering algorithm to obtain a plurality of standard deviations;
(3) And blurring the plurality of background area images based on the plurality of standard deviations to obtain candidate blurring images.
Specifically, filter scale matching is performed on the plurality of background area images in the filtered image data, so as to obtain a target filter scale corresponding to each background area image. Filter scale matching determines which filter scale is suitable for blurring each background area image. For example, roads and the surrounding environment require different filter scales because their characteristics differ. Based on the target filter scale corresponding to each background area image, the data standard deviation of the plurality of background area images in the filtered image data is calculated by the multi-scale Gaussian filtering algorithm. Multi-scale Gaussian filtering is a common image processing method used to smooth an image so as to reduce detail and noise. The data standard deviation calculation determines the degree of change in pixel value of each background area image, i.e., the degree of smoothness of the image. The higher the standard deviation, the more detail and texture the image has, while the lower the standard deviation, the smoother the image. This step helps to determine the filter strength that needs to be applied to achieve blurring of different background areas. For example, the standard deviation is lower for the road background area and higher for the surrounding environment background area. Blurring processing is performed on the plurality of background area images based on the plurality of standard deviations to obtain candidate blurring images. This step adjusts the filter strength according to the standard deviation of each background area, thereby realizing different degrees of blurring. The candidate blurring images represent different blurring effects on the background area. For example, the candidate blurring image may show a light blurring of the road and a stronger blurring of the surrounding environment.
The method for image blurring based on stereoscopic imaging in the embodiment of the present invention is described above, and the apparatus for image blurring based on stereoscopic imaging in the embodiment of the present invention is described below, referring to fig. 5, an embodiment of the apparatus for image blurring based on stereoscopic imaging in the embodiment of the present invention includes:
the acquisition module 501 is configured to acquire corresponding stereo imaging data from a preset database, and perform image data filtering on the stereo imaging data to obtain filtered image data;
the processing module 502 is configured to input the filtered image data into a preset image perception model for image segmentation processing, so as to obtain a corresponding auxiliary atlas, where the auxiliary atlas includes a focus map subset, a depth map subset, and a mask map subset;
a splitting module 503, configured to split the image target subject of the filtered image data through the auxiliary atlas, so as to obtain a target subject area;
a segmentation module 504, configured to segment the image region of the filtered image data by using the target subject region, so as to obtain a plurality of background region images;
the blurring module 505 is configured to perform blurring processing on a plurality of background area images in the filtered image data by using a preset multi-scale gaussian filtering algorithm, so as to obtain candidate blurring images;
And an optimizing module 506, configured to perform blurring edge optimization processing on the candidate blurring image, so as to obtain a target blurring image.
Through the cooperation of the above components, the technical solution provided by the invention acquires corresponding stereoscopic imaging data from a database and performs image data filtering on the stereoscopic imaging data to obtain filtered image data; inputs the filtered image data into an image perception model for image segmentation processing to obtain a corresponding auxiliary atlas, where the auxiliary atlas includes a focal map subset, a depth map subset, and a mask map subset; splits the image target subject from the filtered image data through the auxiliary atlas to obtain a target subject region; performs image region segmentation on the filtered image data through the target subject region to obtain a plurality of background region images; performs blurring processing on the plurality of background region images in the filtered image data through a multi-scale Gaussian filtering algorithm to obtain a candidate blurring image; and performs blurring edge optimization processing on the candidate blurring image to obtain a target blurring image.

In this solution, filtering the stereoscopic imaging data removes noise and unnecessary detail, enhancing the quality and definition of the image. Segmentation with the image perception model divides the filtered image into different regions, including the focal map subset, the depth map subset, and the mask map subset, which helps the system better understand the structure and content of the image. Through the auxiliary atlas, the target subject region in the image can be effectively identified and extracted, which is important for subsequent processing such as image synthesis or blurring. Once the target subject region is separated, the background can be further divided into multiple parts, so that different processing strategies can be applied to different background regions.
Blurring the plurality of background region images through the multi-scale Gaussian filtering algorithm effectively blurs the background and highlights the target subject. Optimizing the blurring edges makes the blurring effect more natural and lifelike and reduces the incoherence between the blurred background and the target subject, thereby further improving the accuracy of image blurring based on stereoscopic imaging.
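For intuition, the core of the pipeline (mask-guided background blurring) can be sketched in a few lines of pure Python. This is a minimal illustration, not the claimed implementation: all function names are hypothetical, the patented method operates on full 2-D images with a multi-scale filter bank, and this example blurs a single row of pixels with one fixed-scale Gaussian kernel while keeping subject pixels sharp.

```python
import math

def gaussian_kernel1d(sigma, radius):
    """Normalized 1-D Gaussian kernel of length 2*radius + 1."""
    k = [math.exp(-(x * x) / (2.0 * sigma * sigma)) for x in range(-radius, radius + 1)]
    s = sum(k)
    return [v / s for v in k]

def blur_row(row, kernel):
    """1-D convolution with replicated (clamped) borders."""
    radius = len(kernel) // 2
    n = len(row)
    out = []
    for i in range(n):
        acc = 0.0
        for j, w in enumerate(kernel):
            idx = min(max(i + j - radius, 0), n - 1)
            acc += w * row[idx]
        out.append(acc)
    return out

def composite(row, subject_mask, sigma=1.0, radius=2):
    """Blur pixels outside the subject mask; keep subject pixels sharp."""
    blurred = blur_row(row, gaussian_kernel1d(sigma, radius))
    return [orig if m else soft
            for orig, soft, m in zip(row, blurred, subject_mask)]
```

Extending this to 2-D amounts to applying the same separable kernel along rows and then columns, with a different standard deviation per background region as the multi-scale step describes.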
The stereoscopic imaging-based image blurring apparatus in the embodiment of the present invention is described in detail above with reference to Fig. 5 from the point of view of modularized functional entities; the stereoscopic imaging-based image blurring device in the embodiment of the present invention is described in detail below from the point of view of hardware processing.
Fig. 6 is a schematic structural diagram of a stereoscopic imaging-based image blurring device 600 according to an embodiment of the present invention. The stereoscopic imaging-based image blurring device 600 may vary considerably in configuration or performance, and may include one or more central processing units (CPUs) 610 (for example, one or more processors), a memory 620, and one or more storage media 630 (for example, one or more mass storage devices) storing applications 633 or data 632. The memory 620 and the storage medium 630 may be transient or persistent storage. The program stored in the storage medium 630 may include one or more modules (not shown), and each module may include a series of instruction operations on the stereoscopic imaging-based image blurring device 600. Still further, the processor 610 may be configured to communicate with the storage medium 630 and execute the series of instruction operations in the storage medium 630 on the stereoscopic imaging-based image blurring device 600.
The stereoscopic imaging-based image blurring device 600 may also include one or more power supplies 640, one or more wired or wireless network interfaces 650, one or more input/output interfaces 660, and/or one or more operating systems 631, such as Windows Server, Mac OS X, Unix, Linux, FreeBSD, and the like. It will be appreciated by those skilled in the art that the device structure shown in Fig. 6 does not constitute a limitation on the stereoscopic imaging-based image blurring device, which may include more or fewer components than illustrated, may combine certain components, or may have a different arrangement of components.
The present invention also provides a stereoscopic imaging-based image blurring apparatus, which includes a memory and a processor, wherein the memory stores computer-readable instructions that, when executed by the processor, cause the processor to execute the steps of the stereoscopic imaging-based image blurring method in the above embodiments.
The present invention also provides a computer readable storage medium, which may be a non-volatile computer readable storage medium, and may also be a volatile computer readable storage medium, in which instructions are stored which, when executed on a computer, cause the computer to perform the steps of the stereoscopic imaging-based image blurring method.
It will be clear to those skilled in the art that, for convenience and brevity of description, specific working procedures of the above-described systems, apparatuses and units may refer to corresponding procedures in the foregoing method embodiments, which are not repeated herein.
The integrated units, if implemented in the form of software functional units and sold or used as independent products, may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present invention, in essence or in the part contributing to the prior art, may be wholly or partly embodied in the form of a software product. The software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to perform all or part of the steps of the method according to the embodiments of the present invention. The aforementioned storage medium includes: a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disk, or other media capable of storing program code.
The above embodiments are only for illustrating the technical solution of the present invention, and not for limiting the same; although the invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some technical features thereof can be replaced by equivalents; such modifications and substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present invention.
Claims (10)
1. An image blurring method based on stereoscopic imaging, which is characterized by comprising the following steps:
acquiring corresponding stereoscopic imaging data from a preset database, and performing image data filtering on the stereoscopic imaging data to obtain filtered image data;
inputting the filtered image data into a preset image perception model for image segmentation processing to obtain a corresponding auxiliary image set, wherein the auxiliary image set comprises a focus image subset, a depth image subset and a mask image subset;
splitting an image target subject of the filtered image data through the auxiliary atlas to obtain a target subject region;
performing image region segmentation on the filtered image data through the target subject region to obtain a plurality of background region images;
performing blurring processing on a plurality of background region images in the filtered image data through a preset multi-scale Gaussian filtering algorithm to obtain candidate blurring images;
and carrying out blurring edge optimization processing on the candidate blurring image to obtain a target blurring image.
2. The stereoscopic imaging-based image blurring method of claim 1, wherein the acquiring corresponding stereoscopic imaging data from a preset database and performing image data filtering on the stereoscopic imaging data to obtain filtered image data includes:
acquiring corresponding stereoscopic imaging data from the database, and performing view angle matching on the stereoscopic imaging data to obtain a plurality of pieces of view angle information of the stereoscopic imaging data;
performing horizontal edge detection processing on the stereoscopic imaging data based on the plurality of pieces of view angle information to obtain horizontal edges of the stereoscopic imaging data;
performing vertical edge detection on the stereoscopic imaging data to obtain vertical edges of the stereoscopic imaging data;
constructing an image region frame for the horizontal edges and the vertical edges to obtain a target region frame;
performing pixel gradient calculation on the stereoscopic imaging data based on the target region frame to obtain gradient data corresponding to the stereoscopic imaging data;
and carrying out image filtering processing on the stereoscopic imaging data through the gradient data to obtain the filtered image data.
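As a rough illustration of claim 2's edge-detection and gradient steps (a hedged sketch with hypothetical names; the patent does not specify an operator, so simple forward differences stand in for whatever edge detector is actually used):

```python
import math

def horizontal_edges(img):
    """Differences between adjacent rows: responds to horizontal edges
    (vertical intensity changes). Border rows are clamped."""
    h, w = len(img), len(img[0])
    return [[img[min(y + 1, h - 1)][x] - img[y][x] for x in range(w)]
            for y in range(h)]

def vertical_edges(img):
    """Differences between adjacent columns: responds to vertical edges."""
    h, w = len(img), len(img[0])
    return [[img[y][min(x + 1, w - 1)] - img[y][x] for x in range(w)]
            for y in range(h)]

def gradient_magnitude(img):
    """Per-pixel gradient magnitude combining both edge maps."""
    gy, gx = horizontal_edges(img), vertical_edges(img)
    h, w = len(img), len(img[0])
    return [[math.hypot(gx[y][x], gy[y][x]) for x in range(w)]
            for y in range(h)]
```

In the claimed method the gradient data would then drive the image filtering step inside the target region frame; this sketch stops at the raw gradient map.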
3. The stereoscopic imaging-based image blurring method according to claim 1, wherein the inputting the filtered image data into a preset image perception model for image segmentation processing, to obtain a corresponding auxiliary atlas, wherein the auxiliary atlas includes a focal map subset, a depth map subset, and a mask map subset, includes:
inputting the filtered image data into an input layer of the image perception model to perform image monocular depth estimation to obtain corresponding monocular depth estimation information;
performing first image segmentation on the filtered image according to the monocular depth estimation information and a preset image depth threshold sequence to obtain a corresponding depth map subset;
inputting the monocular depth estimation information into a convolution layer of the image perception model, and carrying out data convolution processing on the filtered image data to obtain corresponding image convolution characteristics;
performing image focus calibration on the filtered image through the image convolution characteristics to obtain an image focus set corresponding to the filtered image;
based on the image focus set, performing second image segmentation processing on the filtered image to obtain a focus map subset corresponding to the filtered image;
inputting the image focus set into a semantic segmentation layer of the image perception model for semantic segmentation processing to obtain semantic information corresponding to the filtered image;
and carrying out third image segmentation on the filtered image based on semantic information corresponding to the filtered image to obtain a mask image subset of the filtered image.
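The first image segmentation of claim 3, which slices the image into a depth map subset using a depth threshold sequence, can be sketched as follows (a hypothetical helper, assuming an ascending threshold list and a per-pixel monocular depth map):

```python
def depth_map_subset(depth, thresholds):
    """Split a depth map into binary masks, one per depth band defined by
    the ascending threshold sequence [t0, t1, ...]."""
    bands = [(-float("inf"), thresholds[0])]
    bands += list(zip(thresholds, thresholds[1:]))
    bands.append((thresholds[-1], float("inf")))
    masks = []
    for lo, hi in bands:
        # A pixel belongs to the band whose half-open interval contains its depth.
        masks.append([[1 if lo <= d < hi else 0 for d in row] for row in depth])
    return masks
```

Each pixel falls into exactly one depth band, so the resulting masks partition the image into near-to-far layers that later blurring stages can treat differently.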
4. The stereoscopic imaging-based image blurring method of claim 1, wherein the splitting of the image target subject from the filtered image data by the auxiliary atlas to obtain a target subject region includes:
performing image correlation analysis on the filtered image data through the auxiliary atlas to obtain correlation analysis data;
performing correlation region calibration on the filtered image data based on the correlation analysis data to obtain a plurality of correlation regions;
and splitting the target main body of the filtered image data based on a plurality of correlation areas to obtain the target main body area.
5. The stereoscopic imaging-based image blurring method of claim 4, wherein the performing object-subject splitting on the filtered image data based on the plurality of correlation regions to obtain the object-subject region includes:
threshold screening is carried out on each correlation area to obtain at least one target correlation area;
performing connected region analysis on the at least one target correlation region to obtain a plurality of connected regions;
and performing target subject splitting on the filtered image data based on the plurality of connected regions to obtain the target subject region.
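The connected region analysis of claim 5 is a standard connected-component labelling problem. A minimal 4-connectivity sketch (hypothetical names, BFS flood fill; picking the largest component as the subject is only one plausible heuristic, not necessarily the claimed rule):

```python
from collections import deque

def connected_regions(mask):
    """Label 4-connected regions of nonzero pixels; returns a list of
    pixel-coordinate sets, one per region."""
    h, w = len(mask), len(mask[0])
    seen = [[False] * w for _ in range(h)]
    regions = []
    for y in range(h):
        for x in range(w):
            if mask[y][x] and not seen[y][x]:
                comp, q = set(), deque([(y, x)])
                seen[y][x] = True
                while q:
                    cy, cx = q.popleft()
                    comp.add((cy, cx))
                    for ny, nx in ((cy - 1, cx), (cy + 1, cx), (cy, cx - 1), (cy, cx + 1)):
                        if 0 <= ny < h and 0 <= nx < w and mask[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            q.append((ny, nx))
                regions.append(comp)
    return regions

def largest_region_mask(mask):
    """Keep only the largest connected region (a simple subject heuristic)."""
    regions = connected_regions(mask)
    best = max(regions, key=len) if regions else set()
    return [[1 if (y, x) in best else 0 for x in range(len(mask[0]))]
            for y in range(len(mask))]
```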
6. The stereoscopic imaging-based image blurring method according to claim 1, wherein the image region segmentation of the filtered image data by the target subject region to obtain a plurality of background region images includes:
performing binarization processing on the target main body area to obtain a corresponding binarization area image;
performing mask map matching on the binarized region image based on the mask map subset to obtain a corresponding target mask map set;
performing mask map inversion processing on the binarized region image through the target mask map set to obtain a corresponding inversion region image;
and performing image region segmentation on the filtered image data based on the inversion region image to obtain a plurality of background region images.
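The binarization and mask inversion steps of claim 6 can be illustrated with a small sketch (hypothetical names; background pixels are recovered by complementing the binarized subject mask):

```python
def binarize(region, threshold=0.5):
    """Threshold a grayscale region into a 0/1 mask."""
    return [[1 if v >= threshold else 0 for v in row] for row in region]

def invert_mask(mask):
    """Mask inversion: the background is the complement of the subject mask."""
    return [[1 - v for v in row] for row in mask]

def extract_background(img, subject):
    """Zero out subject pixels, leaving only the background region image."""
    inv = invert_mask(binarize(subject))
    return [[p * m for p, m in zip(prow, mrow)]
            for prow, mrow in zip(img, inv)]
```

In the claimed method the inverted mask would additionally be matched against the mask map subset before segmentation; this sketch shows only the complement-and-apply core.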
7. The stereoscopic imaging-based image blurring method according to claim 1, wherein blurring a plurality of the background area images in the filtered image data by a preset multi-scale gaussian filtering algorithm to obtain candidate blurring images, comprises:
performing filter scale matching on a plurality of background area images in the filtered image data to obtain a target filter scale corresponding to each background area image;
based on the target filter scale corresponding to each background area image, carrying out data standard deviation calculation on a plurality of background area images in the filtered image data through the multi-scale Gaussian filtering algorithm to obtain a plurality of standard deviations;
and blurring the background area images based on the standard deviations to obtain candidate blurring images.
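The scale-to-standard-deviation mapping of claim 7 can be sketched as follows (a simplified illustration with hypothetical names; the patent does not specify this exact mapping): each background region's filter scale selects a Gaussian standard deviation, and larger standard deviations yield wider normalized kernels and hence stronger blur.

```python
import math

def sigma_for_scale(scale, base_sigma=0.8):
    """Map a filter scale (e.g. derived from region depth) to a Gaussian
    standard deviation; doubling per scale step is an assumed convention."""
    return base_sigma * (2.0 ** scale)

def gaussian_weights(sigma):
    """Normalized 1-D Gaussian weights; the common 3-sigma radius rule."""
    radius = max(1, int(3 * sigma))
    w = [math.exp(-(x * x) / (2 * sigma * sigma)) for x in range(-radius, radius + 1)]
    s = sum(w)
    return [v / s for v in w]
```

Applying `gaussian_weights(sigma_for_scale(k))` separably to the k-th background region image gives the per-region blurring whose composite forms the candidate blurring image.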
8. An image blurring apparatus based on stereoscopic imaging, characterized in that the image blurring apparatus based on stereoscopic imaging comprises:
the acquisition module is used for acquiring corresponding stereoscopic imaging data from a preset database, and filtering the stereoscopic imaging data to obtain filtered image data;
The processing module is used for inputting the filtered image data into a preset image perception model to perform image segmentation processing to obtain a corresponding auxiliary image set, wherein the auxiliary image set comprises a focus image subset, a depth image subset and a mask image subset;
the splitting module is used for splitting the image target main body of the filtered image data through the auxiliary atlas to obtain a target main body area;
the segmentation module is used for carrying out image region segmentation on the filtered image data through the target main body region to obtain a plurality of background region images;
the blurring module is used for blurring a plurality of background area images in the filtered image data through a preset multi-scale Gaussian filtering algorithm to obtain candidate blurring images;
and the optimizing module is used for performing blurring edge optimization processing on the candidate blurring image to obtain a target blurring image.
9. An image blurring apparatus based on stereoscopic imaging, characterized in that the image blurring apparatus based on stereoscopic imaging comprises: a memory and at least one processor, the memory having instructions stored therein;
the at least one processor invokes the instructions in the memory to cause the stereoscopic imaging-based image blurring apparatus to perform the stereoscopic imaging-based image blurring method of any of claims 1-7.
10. A computer readable storage medium having instructions stored thereon, which when executed by a processor, implement the stereoscopic imaging based image blurring method of any of claims 1-7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202311604811.8A CN117315210B (en) | 2023-11-29 | 2023-11-29 | Image blurring method based on stereoscopic imaging and related device |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202311604811.8A CN117315210B (en) | 2023-11-29 | 2023-11-29 | Image blurring method based on stereoscopic imaging and related device |
Publications (2)
Publication Number | Publication Date |
---|---|
CN117315210A true CN117315210A (en) | 2023-12-29 |
CN117315210B CN117315210B (en) | 2024-03-05 |
Family
ID=89274010
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202311604811.8A Active CN117315210B (en) | 2023-11-29 | 2023-11-29 | Image blurring method based on stereoscopic imaging and related device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN117315210B (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN117974493A (en) * | 2024-03-28 | 2024-05-03 | 荣耀终端有限公司 | Image processing method and related device |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2013005489A1 (en) * | 2011-07-06 | 2013-01-10 | Olympus Corporation | Image capture device and image processing device
CN108230333A (en) * | 2017-11-28 | 2018-06-29 | 深圳市商汤科技有限公司 | Image processing method, device, computer program, storage medium and electronic equipment |
CN108399596A (en) * | 2018-02-07 | 2018-08-14 | 深圳奥比中光科技有限公司 | Depth image engine and depth image computational methods |
CN111311481A (en) * | 2018-12-12 | 2020-06-19 | Tcl集团股份有限公司 | Background blurring method and device, terminal equipment and storage medium |
CN111784563A (en) * | 2020-06-24 | 2020-10-16 | 泰康保险集团股份有限公司 | Background blurring method and device, computer equipment and storage medium |
CN113938578A (en) * | 2020-07-13 | 2022-01-14 | 武汉Tcl集团工业研究院有限公司 | Image blurring method, storage medium and terminal device |
CN116980579A (en) * | 2023-08-30 | 2023-10-31 | 深圳优立全息科技有限公司 | Image stereoscopic imaging method based on image blurring and related device |
- 2023-11-29: application CN202311604811.8A granted as patent CN117315210B (status: Active)
Non-Patent Citations (1)
Title |
---|
Li Xiaoying; Yang Hengjie; Yan Zheng; Lian Fang; Wu Meiqin: "Image background blurring algorithm based on color constancy", Laser & Optoelectronics Progress, no. 08 *
Also Published As
Publication number | Publication date |
---|---|
CN117315210B (en) | 2024-03-05 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109685060B (en) | Image processing method and device | |
CN110110617B (en) | Medical image segmentation method and device, electronic equipment and storage medium | |
US20230032036A1 (en) | Three-dimensional scene constructing method, apparatus and system, and storage medium | |
EP3971825B1 (en) | Systems and methods for hybrid depth regularization | |
CN108961327B (en) | Monocular depth estimation method and device, equipment and storage medium thereof | |
EP2811423B1 (en) | Method and apparatus for detecting target | |
AU2021202716B2 (en) | Systems and methods for automated segmentation of individual organs in 3D anatomical images | |
CN112017189A (en) | Image segmentation method and device, computer equipment and storage medium | |
WO2015010451A1 (en) | Method for road detection from one image | |
KR20200060194A (en) | Method of predicting depth values of lines, method of outputting 3d lines and apparatus thereof | |
EP2757529B1 (en) | Systems and methods for 3D data based navigation using descriptor vectors | |
CN112419295A (en) | Medical image processing method, apparatus, computer device and storage medium | |
Juneja et al. | Energy based methods for medical image segmentation | |
CN117292076A (en) | Dynamic three-dimensional reconstruction method and system for local operation scene of engineering machinery | |
Zhao et al. | Automatic blur region segmentation approach using image matting | |
CN117315210B (en) | Image blurring method based on stereoscopic imaging and related device | |
CN112215217B (en) | Digital image recognition method and device for simulating doctor to read film | |
CN115546270A (en) | Image registration method, model training method and equipment for multi-scale feature fusion | |
CN110443228B (en) | Pedestrian matching method and device, electronic equipment and storage medium | |
CN118522055B (en) | Method, system, equipment and storage medium for realizing real wrinkle detection | |
EP2757526A1 (en) | Systems and methods for 3D data based navigation using a watershed method | |
CN117152398B (en) | Three-dimensional image blurring method, device, equipment and storage medium | |
Khoddami et al. | Depth map super resolution using structure-preserving guided filtering | |
EP3018626B1 (en) | Apparatus and method for image segmentation | |
CN118015190A (en) | Autonomous construction method and device of digital twin model |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication ||
SE01 | Entry into force of request for substantive examination ||
GR01 | Patent grant ||