
CN118298094A - Shadow generation method and device and electronic equipment - Google Patents

Shadow generation method and device and electronic equipment

Info

Publication number
CN118298094A
CN118298094A (application CN202410232332.6A)
Authority
CN
China
Prior art keywords
region
area
shadow
determining
data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202410232332.6A
Other languages
Chinese (zh)
Inventor
罗舒仁
郭正扬
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Netease Hangzhou Network Co Ltd
Original Assignee
Netease Hangzhou Network Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Netease Hangzhou Network Co Ltd filed Critical Netease Hangzhou Network Co Ltd
Priority to CN202410232332.6A
Publication of CN118298094A
Legal status: Pending

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 15/00: 3D [Three Dimensional] image rendering
    • G06T 15/50: Lighting effects
    • G06T 15/60: Shadow generation
    • G06T 15/04: Texture mapping

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Graphics (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The application discloses a method, a device, electronic equipment and a computer readable storage medium for generating shadows, wherein the method comprises the following steps: determining a first area with the same texture from an image comprising at least one real object, wherein the first area comprises a bright area illuminated by a real light source and a shadow area generated by the real object under the projection of the real light source; determining superposition data corresponding to the bright region according to shadow information corresponding to the shadow region in the first region and original data of the bright region, wherein the superposition data is used for generating a shadow effect consistent with the shadow region in the bright region; and fusing the superimposed data with the original data corresponding to at least part of the bright areas, and generating shadow effects corresponding to the virtual objects in at least part of the bright areas. The scheme provided by the application can generate a real shadow effect for the virtual object in the augmented reality, thereby bringing real visual experience to the user.

Description

Shadow generation method and device and electronic equipment
Technical Field
The present application relates to the field of computer technologies, and in particular, to a method and apparatus for generating shadows, an electronic device, and a computer readable storage medium.
Background
Augmented reality (Augmented Reality, abbreviated as AR) is a technology that seamlessly fuses virtual information with the real world, and shadow is an important effect in augmented reality. In augmented reality, the shadow effect can give a virtual object a realistic appearance, so that the virtual object blends better into the real scene and brings a realistic visual experience to the user.
In the related art, to achieve a shadow effect in augmented reality, the position of the light source is estimated from the brightness of each pixel point in a real image; shadow information of the virtual object under that light source is then derived from the estimated light source position, and a corresponding shadow effect is generated.
However, locating and detecting highlight pixels may be disturbed by factors such as image quality, noise, occlusion, or reflection, and accurately identifying and tracking highlight points can be difficult. This may make the estimated light source and the generated shadow information inaccurate, so the generated shadow effect looks unrealistic and greatly degrades the user's visual experience.
Disclosure of Invention
The application provides a method, a device, electronic equipment and a computer readable storage medium for generating shadows, which can generate real shadow effects for virtual objects in augmented reality, so that real visual experience is brought to users. The specific scheme is as follows:
in a first aspect, an embodiment of the present application provides a method for generating shadows, including:
determining a first area with the same texture from an image comprising at least one real object, wherein the first area comprises a bright area illuminated by a real light source and a shadow area generated by the real object under the projection of the real light source;
Determining superposition data corresponding to the bright region according to shadow information corresponding to the shadow region in the first region and original data of the bright region, wherein the superposition data is used for generating a shadow effect consistent with the shadow region in the bright region;
And fusing the superimposed data with the original data corresponding to at least part of the bright areas, and generating the shadow effect corresponding to the virtual object in the at least part of the bright areas.
In a second aspect, an embodiment of the present application provides an apparatus for generating shadows, including:
A first determining unit for determining a first region having the same texture from an image including at least one real object, the first region including a bright region illuminated by a real light source and a shadow region generated by the real object under projection of the real light source;
A second determining unit configured to determine superimposed data corresponding to the bright area according to the shadow information corresponding to the shadow area in the first area and the original data of the bright area, the superimposed data being used to generate a shadow effect in accordance with the shadow area in the bright area;
and the fusion unit is used for fusing the superposition data with the original data corresponding to at least part of the bright areas and generating the shadow effect corresponding to the virtual object in the at least part of the bright areas.
In a third aspect, the present application also provides an electronic device, including:
a processor; and
a memory for storing a data processing program; after the electronic device is powered on, the processor executes the program to perform the method according to the first aspect.
In a fourth aspect, embodiments of the present application also provide a computer readable storage medium storing a data processing program for execution by a processor to perform the method of the first aspect.
Compared with the prior art, the application has the following advantages:
The application provides a shadow generating method, which comprises the steps of firstly, determining a first area with the same texture from an image comprising at least one real object, wherein the first area comprises a bright area illuminated by a real light source and a shadow area generated by the real object under the projection of the real light source; secondly, according to the shadow information corresponding to the shadow area in the first area and the original data of the bright area, determining superposition data corresponding to the bright area, wherein the superposition data are used for generating shadow effects consistent with the shadow area in the bright area; and finally, fusing the superimposed data with the original data corresponding to at least part of the bright area, and generating a shadow effect corresponding to the virtual object in at least part of the bright area.
Drawings
FIG. 1 is a flow chart of a method of generating shadows provided by an embodiment of the present application;
FIG. 2 is a schematic diagram of an example of an image including at least one real object in a method for generating shadows according to an embodiment of the present application;
FIG. 3 is a schematic diagram of determining a first region from FIG. 2 in a method of generating shadows according to an embodiment of the present application;
FIG. 4 is a schematic diagram of an example of luminance data of each pixel in a first area in a method for generating shadows according to an embodiment of the present application;
FIG. 5 is a schematic diagram of another example of luminance data of each pixel in the first region in the method for generating shadows according to the embodiment of the present application;
FIG. 6 is a schematic diagram of an example of generating shadows for a virtual object in a method for generating shadows according to an embodiment of the present application;
FIG. 7 is a block diagram showing an example of a shadow generating apparatus according to an embodiment of the present application;
fig. 8 is a block diagram illustrating an example of an electronic device according to an embodiment of the present application.
Detailed Description
In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present application. The present application may be embodied in many other forms than those herein described, and those skilled in the art will readily appreciate that the present application may be similarly embodied without departing from the spirit or essential characteristics thereof, and therefore the present application is not limited to the specific embodiments disclosed below.
It should be noted that the terms "first," "second," "third," and the like in the claims, description, and drawings of the present application are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. The data so used is interchangeable under appropriate circumstances such that the embodiments of the application described herein are capable of operation in sequences other than those illustrated or otherwise described herein. Furthermore, the terms "comprises," "comprising," and their variants are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
Before describing embodiments of the present application in detail, related concepts will be described first and prior art will be further described.
1. Introduction to related concepts
1. Augmented reality: augmented reality (Augmented Reality, AR for short) is a technique that "seamlessly" merges virtual environment information with a real environment. The AR creates virtual environment information by means of an optical system, and performs information annotation on the real environment through superposition and fusion with the real environment, so that the two environments become a whole, human perception is enhanced, and even the original sensory experience is exceeded.
2. Computer vision: Computer vision (Computer Vision) is the science of how to make machines "see": cameras and computers are used in place of human eyes to recognize, track, and measure targets, and to further process the images so that the result is better suited for human observation or for transmission to an instrument for detection.
2. Further description of the prior art
Shadow effect is an important effect in augmented reality technology. In augmented reality, the shadow effect can provide a sense of depth and solidity for the virtual object, so that the virtual object fits the real environment better and gives the user a more realistic feeling. At present, the process of realizing the shadow effect in augmented reality is complex and comprehensively uses knowledge and techniques from multiple fields such as computer graphics, computer vision, and real-time rendering. The following are two prior-art ways of achieving the shadow effect in augmented reality:
Mode one: the position of the light source is estimated through the image, and the direction of the incident light is calculated through capturing the position of the high-brightness point pixel in the image sequence, so that the shadow information is obtained, and the shadow effect is realized. The method comprises the following steps:
Step one, image acquisition: capturing a sequence of images for estimating light source position and shadow information using a camera or sensor;
secondly, detecting high-brightness points: performing image analysis on each image frame in the image sequence, and detecting and locating highlight points in the image frames;
third step, pixel position recording: recording position information of highlight pixels in each image frame;
Fourth step, light source estimation: estimating a position of the light source from position information of the highlight pixels using computer vision and a mathematical method;
fifth, shadow calculation: and according to the estimated light source position and the camera position, combining the geometric information of the virtual object, using a shadow calculation algorithm (such as a projection algorithm or shadow mapping and the like) to calculate the shadow effect generated by the virtual object under the projection of the light source.
Mode two: shadow effects are achieved by training with outdoor panoramic data and using scene images to estimate high dynamic range outdoor shadow information. The method comprises the following steps:
Step one, data collection: collecting a panoramic image dataset containing outdoor scenes; the collected dataset may encompass outdoor environments under different lighting conditions and include simultaneously captured High Dynamic Range (HDR) images and their corresponding Low Dynamic Range (LDR) images;
Second, pairing the high dynamic range image with the low dynamic range image: pairing the HDR image with the corresponding LDR image;
thirdly, marking shadows: marking a shadow area in the paired images in a manual or semi-automatic mode;
Fourth step, model training: training a deep learning model, such as a Convolutional Neural Network (CNN), using the paired images and corresponding shadow annotation data for predicting shadow information from the scene images, wherein a supervised learning method, such as pixel-level classification, semantic segmentation, etc., may be employed for training during the training process;
fifth step, shadow estimation: and inputting the outdoor scene image into the model by using the trained deep learning model so as to predict shadow information, and calculating the shadow effect generated by the virtual object under the projection of the light source according to the shadow information and the position and the direction of the virtual object.
However, in the first manner, the positioning and detection of highlight pixels may be disturbed by factors such as image quality, noise, occlusion, or reflection, and accurately identifying and tracking highlight points can be difficult, which may make the estimated light source and the generated shadow information inaccurate, so the generated shadow effect is not realistic enough. In the second manner, a large amount of training data is needed to train the model, and, because of the diversity and complexity of outdoor environments, it is difficult to capture all possible illumination; the training data are therefore limited, and even a model trained on a large amount of data often cannot cover all situations, so the shadow effect generated with the trained model is poor. Therefore, how to generate a realistic shadow effect for a virtual object in augmented reality becomes extremely important.
For the reasons described above, in order to generate a real shadow effect for a virtual object in augmented reality, so as to bring a real visual experience to a user, the first embodiment of the present application provides a method for generating a shadow, where the method is applied to an electronic device, and the electronic device may be a desktop computer, a notebook computer, a mobile phone, a tablet computer, a server, a terminal device, or other electronic devices capable of generating a shadow.
The method for generating shadows according to the embodiment of the present application is described below with reference to fig. 1 to 6.
As shown in fig. 1, the method for generating shadows according to the present application includes the following steps S101 to S103.
Step S101: a first region of the same texture is determined from an image comprising at least one real object, the first region comprising a bright region illuminated by a real light source and a shadow region of the real object generated by the projection of the real light source.
In the present application, the image including the real object may be an image in the real world, which may be acquired by any one of the following devices: digital cameras, lidar, sensors for capturing images, etc. The present application can place a virtual object that does not exist in the real world in an image including a real object through augmented reality calculation, and add a shadow image to the virtual object.
One or more real objects may be included in the image including the real objects, the real objects may be real objects in the real world, and shadow information corresponding to at least one real object may be included in the image. In practical application, the shadow information corresponding to the real object in the image is shadow information generated by the fact that the real object is irradiated by the real light source in the real world.
When a shadow is added to a certain virtual object in an image, the application can refer to the real shadow corresponding to the real object to generate a vivid natural shadow for the virtual object. Thus, in the present application, a first region including a shadow region of a real object generated under projection of a real light source may be first determined from an image, and a bright region illuminated by the real light source may be further included in the first region. Wherein the bright areas may be used for placing shadows of the virtual objects, i.e. areas of the bright areas comprising shadows for placing virtual objects.
In the real world, real light sources may include natural light sources and artificial light sources. Natural light sources may include, for example but not limited to: the sun, the moon, stars, and fireflies. Artificial light sources may include, for example but not limited to: incandescent lamps, halogen lamps, energy-saving lamps, LED lamps, spotlights, and floodlights. It should be noted that the moon does not emit light itself; it reflects sunlight and thereby produces a lighting effect.
Wherein the first area may have the same texture, e.g. the real object is a house, which under irradiation of the sun generates shadows on the lawn, the first area may be the lawn; for another example, the real object is an automobile, and the automobile generates shadows on the cement floor under the irradiation of the sun, and the first area may be the cement floor.
It should be noted that the real object itself may be located inside or outside the first region; what matters is that the shadow generated by the real object under the projection of the real light source lies within the first region, and the first region determined in the present application may include only a bright region and a shadow region.
By this means, from an image comprising a real object, a first area is determined which comprises a shadow area of the real object generated by the projection of the real light source and a bright area illuminated by the real light source, thus providing a basis for generating shadows of the virtual object within the bright area by the bright area and the shadow area.
Step S102: and determining superposition data corresponding to the bright region according to the shadow information corresponding to the shadow region in the first region and the original data of the bright region, wherein the superposition data is used for generating a shadow effect consistent with the shadow region in the bright region.
In practical application, in order to generate a shadow consistent with a shadow effect of a real shadow corresponding to a real object for a virtual object, superposition data corresponding to a bright region may be determined according to the determined shadow information corresponding to the shadow region in the first region and original data of the bright region, the superposition data being specifically used for being superimposed on the original data of the bright region, and causing the bright region to generate a shadow effect consistent with the shadow region.
In an alternative embodiment, the shadow information corresponding to the shadow region may be the color information of each pixel point in the shadow region. After the first region is determined, the superimposed data corresponding to the bright region can be determined according to the color information of each pixel point in the shadow region of the first region and the color information of each pixel point in the bright region. The determined superimposed data may likewise be color information superimposed per pixel, so that superimposing the data on a single pixel changes the color of that pixel in the bright region, thereby generating the shadow effect corresponding to the virtual object in the bright region.
When the determined superimposed data is superimposed on the bright area, a shadow effect corresponding to the shadow effect of the shadow area in the first area may be generated.
Step S103: and fusing the superimposed data with the original data corresponding to at least part of the bright areas, and generating the shadow effect corresponding to the virtual object in the at least part of the bright areas.
In this step, after the superimposed data is determined, the shadow effect corresponding to the virtual object can be generated in the bright area by fusing the superimposed data with the original data corresponding to the bright area.
The raw data corresponding to at least a part of the bright region may be raw color data corresponding to each pixel point in at least a part of the region.
In a specific implementation, a shadow effect corresponding to the virtual object may be generated in all the bright areas, or a shadow effect corresponding to the virtual object may be generated in a partial area of the bright area, and specifically, at least a partial area may be selected in the bright area according to a position where the virtual object is to be placed and geometric information of the virtual object, and a shadow corresponding to the virtual object may be generated in at least a partial area, where the geometric information of the virtual object may include a size and a shape of the virtual object.
In the present application, superimposed data is data superimposed for each pixel, and thus, original data of a corresponding pixel point can be fused with superimposed data at an arbitrary position in a bright area, thereby generating shadows of arbitrary size and arbitrary shape.
In practical applications, consistency of the shadow effect can be simply understood as color consistency. In the related art, the color of each pixel point in at least part of the bright region is usually directly replaced with the average color of the pixel points in the shadow region of the first region, so that the color of at least part of the bright region matches the color of the shadow region of the first region. However, directly replacing the colors in this way eliminates the original detail features and texture features of the pixels in at least part of the bright region, causing a loss of visual quality.
Therefore, in this step, by fusing the superimposed data with the original data corresponding to at least a part of the bright region, on the basis of the shadow effect that the superimposed data causes the at least a part of the bright region to be generated in correspondence with the shadow effect that the shadow region has, the original detail features and texture features of the at least a part of the bright region can be retained by the original data corresponding to the at least a part of the bright region, so that the shadow effect generated in the at least a part of the bright region retains the original detail features and texture features of the at least a part of the bright region on the basis of the shadow effect corresponding to the shadow region.
For example, when the first area is a grass land, the shadow effect generated for the virtual object is a shadow effect of the grass land, and when the first area is a cement land, the shadow effect generated for the virtual object is a shadow effect of the cement land.
In the present application, when the light projected by the real light source only changes the brightness of the real object and does not change its color, the superimposed data corresponding to the bright region may be the difference between the average luminance of the pixel points in the shadow region of the first region and the average luminance of the pixel points in the bright region. Fusing the superimposed data with the luminance data of at least part of the bright region then means adding the superimposed data to that luminance data, so that the luminance of at least part of the bright region becomes consistent with the luminance of the shadow region in the first region.
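As an illustration of this luminance-offset variant, the following is a minimal sketch assuming a single-channel luminance image in the range [0, 1] and NumPy; the function and mask names are illustrative and not part of the patent:

```python
import numpy as np

def apply_luminance_shadow(luma, shadow_mask, bright_mask, target_mask):
    """Darken the pixels in target_mask so their brightness matches the
    average brightness of the real shadow region.

    luma:        float array (H, W), luminance in [0, 1]
    shadow_mask: bool array (H, W), shadow region inside the first region
    bright_mask: bool array (H, W), bright region inside the first region
    target_mask: bool array (H, W), part of the bright region that should
                 receive the virtual object's shadow
    """
    # Superimposed data: difference between the average luminance of the
    # shadow region and the average luminance of the bright region (negative).
    offset = luma[shadow_mask].mean() - luma[bright_mask].mean()

    # Fusion: add the offset to the original luminance of the target pixels,
    # so their original detail and texture variations are preserved.
    out = luma.copy()
    out[target_mask] = np.clip(out[target_mask] + offset, 0.0, 1.0)
    return out
```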
The application provides a shadow generating method, which comprises the steps of firstly, determining a first area with the same texture from an image comprising at least one real object, wherein the first area comprises a bright area illuminated by a real light source and a shadow area generated by the real object under the projection of the real light source; secondly, according to shadow information corresponding to a shadow area in a first area, superposition data corresponding to a bright area are determined, the superposition data are used for generating shadow effects consistent with the shadow area in the bright area, and because the superposition data are data generated by referring to the shadow information corresponding to the shadow area in the first area and original data of the bright area, after the superposition data are superimposed in the bright area, the shadow effects consistent with the shadow effects corresponding to the shadow area in the first area can be generated in the bright area; and finally, fusing the superimposed data with the original data corresponding to at least part of the bright area, and generating a shadow effect corresponding to the virtual object in at least part of the bright area.
In an alternative embodiment, in order to enable accurate division of a first region having the same texture, and avoid that shadows generated in the first region affect the division of the first region, the present application may determine the first region having the same texture from an image including a real object by:
Determining first color information corresponding to each first pixel point in a region selected by the region selection operation in response to the region selection operation triggered by the image, wherein the region selected by the region selection operation is a region in the first region to be determined, and the first color information comprises at least one of hue information and saturation information;
Determining second color information of each second pixel point in the image except the region selected by the region selecting operation;
determining a second pixel point belonging to a first area to be determined in each second pixel point according to the first color information and the second color information;
And determining an area formed by the area selected by the area selection operation and the second pixel points belonging to the first area to be determined as the first area.
Upon determining the first region, an operation may be first selected for the image trigger region by an interactive operation of the user. It can be understood that the user can trigger the region selection operation on the image through a corresponding control, can also trigger the region selection operation on the image through a specific gesture, and can also trigger the region selection operation on the image through voice.
It should be noted that the purpose of the region selection operation is to accurately determine the first region. The region selected by the region selection operation may therefore be a small region, selected by the user from the image including at least one real object, that lies within the first region to be determined, i.e. the region that includes the shadow cast by the real object under the real light source. For convenience of explanation, the region selected by the region selection operation is hereinafter referred to as the target region. Each pixel point in the image including at least one real object can then be classified according to the selected target region, so that the first region with the same texture is obtained accurately.
Specifically, after the user selects the target area, the first color information corresponding to each first pixel point in the target area may be determined. It should be noted that, since the first area to be determined includes a shadow area generated by the real object under the projection of the real light source, although the pixels in the first area have the same texture, there is a certain difference between the brightness of the pixels in the bright area in the first area and the brightness of the pixels in the shadow area. Therefore, in order to accurately determine the first area when the first area to be determined includes the shadow area corresponding to the real object, the first color information corresponding to each first pixel point in the target area determined in the present application may be color information from which the luminance information is removed, and the first color information may include only hue information, or may include both hue information and saturation information.
Where hue refers to the basic attribute of a color, hue may describe the name and class of the color. For example, common hues include white, yellow, cyan, green, magenta, red, blue, black, and the like. Saturation refers to the purity of a color, which can describe the concentration of the color, and generally the higher the saturation the brighter the color and vice versa. Brightness refers to the brightness of a color, which can describe the darkness of the color, expressed in percent, and in general, a high brightness will cause an object to appear brighter and a low brightness will cause an object to appear darker.
Then, second color information of each second pixel point in the region other than the target region in the image including at least one real object may be determined, and similarly, the second color information may be color information from which the luminance information is removed, and when the first color information includes only hue information, the second color information includes only hue information, and when the first color information includes both hue information and saturation information, the second color information includes both hue information and saturation information.
In this way, for each second pixel point, the corresponding second color information and the first color information corresponding to the target area can be compared. By comparing the second color information with the first color information, a pixel point belonging to the same or similar color as the target area can be determined from an area except the target area in an image including at least one real object, wherein the pixel point is a pixel point belonging to a first area to be determined, and based on the pixel point belonging to the first area to be determined and the target area selected by a user, the pixel point belonging to the first area to be determined and the target area selected by the user can be determined as the first area.
Through the technical means, the target area belonging to the first area to be determined is selected through the user interaction, then the color information of the pixel points in the area except the target area in the image is screened according to the color information of the pixel points in the target area selected by the user, the pixel points with the same or similar color as the pixel points in the target area can be accurately obtained, and the first area with the same texture can be accurately determined from the image comprising at least one real object.
In a specific implementation, the above step of "determining, from the second pixel points, the second pixel points belonging to the first region to be determined according to the first color information and the second color information" may be implemented as follows:
Respectively determining the similarity between second color information corresponding to each second pixel point and the first color information;
and determining the second pixel points with the similarity larger than or equal to the preset similarity in the second pixel points as second pixel points belonging to the first area to be determined.
Note that, the first color information corresponding to each first pixel point in the target area may be an average value of color data of each first pixel point, the second color information corresponding to the second pixel point may be color data of the second pixel point, and in general, the first color information and the second color information may be vector-form data.
For each second pixel point, the similarity between the second color information corresponding to the second pixel point and the first color information can be determined, and the second pixel points belonging to the first area to be determined are screened out from the second pixel points based on the principle that the similarity is greater than or equal to the preset similarity.
It can be understood that the similarity between the second color information and the first color information may be used to describe the degree of association between the second pixel point and the target area selected by the user, where the higher the similarity is, the stronger the association between the second pixel point and the target area is indicated, and the lower the similarity is, the weaker the association between the second pixel point and the target area is indicated. In this way, the pixel points which are strongly related to the target area can be determined from the area except the target area selected by the user in the image, so that the first area with the same texture is obtained.
In the case where the first color information and the second color information are data in the form of vectors, the similarity between the first color information and the second color information may be determined by calculating the distance between the first color information and the second color information.
Common distance measures between vectors include, but are not limited to: Euclidean distance, Manhattan distance, and cosine similarity. The Euclidean distance (also called the Euclidean metric) is the simplest and most intuitive distance measure and represents the true distance between two points in an n-dimensional space. The Manhattan distance is the sum of the absolute differences of the components, i.e. the total distance travelled when one feature vector is moved along the coordinate axes until it coincides with the other. Cosine similarity evaluates the similarity of two feature vectors by the cosine of the angle between them: the smaller the angle, the closer the cosine value is to 1, the more consistent the directions of the two vectors, and the higher the similarity.
The following gives the formulas for the similarity between a vector A and a vector B, where formula (1) is the Euclidean distance between A and B, formula (2) is the Manhattan distance between A and B, and formula (3) is the cosine similarity between A and B. Both A and B are n-dimensional vectors, A = (A1, A2, …, An) and B = (B1, B2, …, Bn).
d(A, B) = √((A1 - B1)² + (A2 - B2)² + … + (An - Bn)²)    formula (1)
d(A, B) = |A1 - B1| + |A2 - B2| + … + |An - Bn|    formula (2)
cos(A, B) = (A1·B1 + A2·B2 + … + An·Bn) / (√(A1² + … + An²) · √(B1² + … + Bn²))    formula (3)
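For reference, the three measures can be computed directly; the short sketch below uses NumPy and is provided only as an illustration of formulas (1) to (3):

```python
import numpy as np

def euclidean_distance(a, b):
    return float(np.sqrt(np.sum((a - b) ** 2)))            # formula (1)

def manhattan_distance(a, b):
    return float(np.sum(np.abs(a - b)))                    # formula (2)

def cosine_similarity(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))  # formula (3)

a = np.array([1.0, 2.0, 3.0])
b = np.array([2.0, 4.0, 6.0])
print(euclidean_distance(a, b))   # about 3.742
print(manhattan_distance(a, b))   # 6.0
print(cosine_similarity(a, b))    # 1.0, because a and b point in the same direction
```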
In an alternative embodiment, in order to better evaluate the similarity between the second color information and the first color information, the correlation between the second color information and the first color information may be determined by calculating the mahalanobis distance between the second color information and the first color information.
Therefore, the step of "determining the similarity between the second color information corresponding to each of the second pixel points and the first color information, respectively" may be achieved by:
respectively determining the mahalanobis distance between the second color information corresponding to each second pixel point and the category formed by the first color information corresponding to each first pixel point in the area selected by the area selecting operation;
And respectively determining the similarity between the second color information corresponding to each second pixel point and the first color information according to the mahalanobis distance, wherein the mahalanobis distance is inversely proportional to the corresponding similarity.
The Mahalanobis distance (Mahalanobis Distance) is a distance measure that represents the covariance distance of the data. It is an effective way to compute the similarity of two unknown sample sets. Unlike the Euclidean distance, the Mahalanobis distance takes the correlations between features into account and is independent of the measurement scale of each feature. For example, there is a certain correlation between height and weight. The Mahalanobis distance can be regarded as a correction of the Euclidean distance that removes the problems of inconsistent scales and correlated dimensions.
The similarity between the two vectors is calculated by the mahalanobis distance because the mahalanobis distance considers the correlation and covariance between the respective features, and the difference between the variables can be more accurately measured. The following is a specific advantage of calculating the similarity between two vectors for the mahalanobis distance:
the first, mahalanobis distance allows capturing correlation information between different features by considering the covariance matrix between the features, which is important when processing data with related features, because a simple euclidean distance cannot capture such a relationship;
Secondly, the Mahalanobis distance normalizes each dimension or feature, so that every feature has a comparable influence on the distance; this prevents features with larger value ranges from dominating the distance calculation;
Third, covariance matrix in mahalanobis distance considers covariance between features, which can help further describe the distribution of features. By introducing covariance information, data points with different covariance structures can be better distinguished.
Therefore, the mahalanobis distance can provide a more accurate way for measuring the similarity between vectors under the condition of considering the characteristic correlation and the covariance, and has wide application in the fields of pattern recognition, cluster analysis and the like.
The correlation among the features is considered in the calculation of the mahalanobis distance, so that the mahalanobis distance can better reflect the actual distribution situation of the data. If the mahalanobis distance between two vectors is small, it is stated that the two vectors are more similar in feature space; if the mahalanobis distance between two vectors is large, it means that the two vectors are relatively different in feature space.
In image processing, mahalanobis distance is typically used to measure similarity between pixels or images. In the shadow generating method provided by the embodiment of the application, the mahalanobis distance between the second color information corresponding to each second pixel point and the first color information corresponding to the target area selected by the user is calculated, so that the similarity between each second pixel point and the target area selected by the user can be judged, whether each second pixel point belongs to the first area or not is determined, and the classification of each second pixel point is completed.
The Mahalanobis distance between a vector x and a vector y is calculated by the following formula (4):
D_m(x, y) = √((x - y)^T · C^(-1) · (x - y))    formula (4)
wherein D_m(x, y) is the Mahalanobis distance between vector x and vector y, x - y is the difference between vector x and vector y, (x - y)^T is the transpose of that difference, C is the covariance matrix of the sample data, and C^(-1) is the inverse of the covariance matrix.
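The following is a small sketch of formula (4), with the covariance matrix C estimated from a set of sample vectors; NumPy is assumed and the function name is illustrative only:

```python
import numpy as np

def mahalanobis_distance(x, y, samples):
    """Mahalanobis distance between vectors x and y, formula (4), with the
    covariance matrix C estimated from samples (shape: n_samples x n_features)."""
    C = np.cov(samples, rowvar=False)        # covariance matrix of the sample data
    C_inv = np.linalg.inv(C)                 # C^-1
    d = x - y                                # x - y
    return float(np.sqrt(d @ C_inv @ d))     # sqrt((x - y)^T C^-1 (x - y))
```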
The covariance matrix is an n×n matrix formed by a set of random variables, where N is the number of random variables. The element on the diagonal of the covariance matrix is the variance of each random variable, and the element on the off-diagonal represents the covariance between two random variables. Covariance is a link between two random variables that represents the correlation of the two variations. If the variation directions of the two are consistent, the covariance is a positive value; otherwise, the covariance is negative.
Standard deviation and variance are used to describe the correlation between one-dimensional data, and when it is desired to obtain the correlation between variables of each dimension in two-dimensional data, the correlation between variables can typically be measured by covariance. In the case where the variables exceed two, then a covariance matrix may be used to measure the correlation between more than two variables.
The variance is the square of the standard deviation and measures how far the data points spread around the mean; it reflects the degree of dispersion of the data in the dataset. Taking two sets as examples, set a is [0, 6, 14, 20] and set b is [8, 9, 11, 12]. The mean of both set a and set b is 10, but the two sets are clearly very different: the standard deviation of set a is about 8.8, while the standard deviation of set b is about 1.8. Set b is obviously more concentrated, so its standard deviation is smaller; the standard deviation describes how scattered the data are.
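The figures in this example can be checked directly (the values quoted above correspond to the sample standard deviation, i.e. dividing by n - 1); a quick illustrative check:

```python
import numpy as np

a = np.array([0, 6, 14, 20], dtype=float)
b = np.array([8, 9, 11, 12], dtype=float)

print(a.mean(), b.mean())            # 10.0 10.0  -- identical means
print(a.std(ddof=1), b.std(ddof=1))  # about 8.79 and 1.83 -- very different spread
```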
In a specific implementation manner, in order to accurately determine the first region from an image including at least one real object, the RGB image may be converted into an HSV color space to determine color information of a pixel point from an H channel and an S channel without being affected by brightness, so that a segmentation operation of the first region is completed from the image.
Thus, in an alternative embodiment, first, pixels in an image comprising at least one real object may be converted from an RGB color space to an HSV color space; secondly, determining the H value and the S value corresponding to each first pixel point in the HSV color space in the region (target region) selected by the region selection operation as first color information corresponding to each first pixel point; then, determining the corresponding H value and S value of each second pixel point in the HSV color space in the region except the region (target region) selected by the region selection operation in the image comprising at least one real object as second color information corresponding to each second pixel point; and finally, determining a target second pixel point with the similarity larger than or equal to the preset similarity according to the similarity of the first color information and the second color information, and determining the determined target second pixel point and a target area selected by a user as a first area.
It should be noted that the RGB color space is commonly used in display systems such as computer and television displays; it is based on the physical principle of superposition of the three primary colors, and its color components R, G, and B are independent of one another. In the HSV color space, H is Hue, S is Saturation, and V is Value (brightness).
In a specific embodiment, an (H, S) plane may be formed by the H dimension and the S dimension in the HSV, the first pixel points in the target area selected by the user may form a class on the (H, S) plane, the second pixel points may be classified by analyzing the hue and saturation of each second pixel point and the similarity between the formed classes, and the areas with similar hue and similar saturation may be classified into the same class, so as to accurately determine the first area.
Based on the category formed by the target area, in the embodiment of the present application, the mahalanobis distance between the second color information corresponding to the second pixel point and the first color information corresponding to the target area selected by the user may be calculated by the following formula (5):
D_m(i, c) = (μ_c - p_i)^T · Σ_c^(-1) · (μ_c - p_i)    formula (5)
wherein D_m(i, c) is the Mahalanobis distance between the second color information corresponding to the second pixel point and the first color information corresponding to the target area selected by the user, μ_c is the mean of the color information of the first pixel points in the target area selected by the user, p_i is the color information of the i-th second pixel point, and Σ_c^(-1) is the inverse of the covariance matrix of the first pixel points in the target area on the (H, S) plane.
Thus, after the Mahalanobis distance between each second pixel point and the target area selected by the user is obtained, the similarity between the second color information corresponding to the second pixel point and the first color information can be determined according to the Mahalanobis distance: the smaller the Mahalanobis distance, the more similar the corresponding second pixel point is to the target area, and the larger the Mahalanobis distance, the lower the similarity between the corresponding second pixel point and the target area.
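Putting the pieces together, the following sketch illustrates the first-region segmentation on the (H, S) plane; OpenCV and NumPy are assumed, and the helper name and the distance threshold `max_distance` are illustrative choices, not values given by the patent:

```python
import cv2
import numpy as np

def segment_first_region(image_bgr, target_mask, max_distance=3.0):
    """Classify every pixel by its Mahalanobis distance, on the (H, S) plane,
    to the class formed by the user-selected target area.

    image_bgr:    uint8 BGR image containing at least one real object
    target_mask:  bool array (H, W), the small area selected by the user
    max_distance: illustrative threshold on the (squared) Mahalanobis distance
    """
    hsv = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2HSV).astype(np.float64)
    hs = hsv[..., :2]                       # hue and saturation only; brightness is ignored
                                            # (hue is circular; ignored here for simplicity)

    # Class formed by the first pixel points of the target area: mean mu_c and
    # covariance Sigma_c on the (H, S) plane.
    samples = hs[target_mask]               # shape (n, 2)
    mu_c = samples.mean(axis=0)
    sigma_inv = np.linalg.inv(np.cov(samples, rowvar=False))

    # Formula (5): D_m(i, c) = (mu_c - p_i)^T * Sigma_c^-1 * (mu_c - p_i)
    diff = hs.reshape(-1, 2) - mu_c
    d2 = np.einsum('ij,jk,ik->i', diff, sigma_inv, diff).reshape(target_mask.shape)

    # Second pixel points whose distance is small enough (similarity high
    # enough) are merged with the target area to form the first region.
    return (d2 <= max_distance) | target_mask
```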
The following describes, by referring to fig. 2 and fig. 3, determination of a first area in a shadow generating method according to an embodiment of the present application:
As shown in fig. 2, which is a schematic diagram of an example of an image including at least one real object in the method for generating shadows according to the embodiment of the present application, it is seen that the exemplary real object 01 and real object 02 are included in fig. 2, and the real object 01 and the real object 02 each have a real shadow in fig. 2.
As shown in fig. 3, in the method for generating shadows according to the embodiment of the present application, a first region is determined from fig. 2: the region other than real object 01 and real object 02 is the determined first region. The area outlined by the white dotted line in fig. 3 is the shadow region, and the area outside the white dotted line is the bright region.
It should be noted that, in the real world, a shadow region generated by a real object under the projection of a real light source generally includes a deep shadow region and a shallow shadow region. Deep and shallow shadows are two different types of shadow created by an object under illumination; they reflect the relative position between the light source and the object and how the light is blocked.
The deep shadow is the darkest area of the shadow and is formed where the light is completely blocked. A deep shadow appears where the object lies in the light path between the light source and the surface, partially or fully blocking the light so that the area receives no direct light; it is the darkest in color, and its extent depends on the size, shape, and geometry of the light source and the object.
The shallow shadow is brighter than the deep shadow and is the area between full illumination and full occlusion. It appears around the deep shadow, typically where the light is partially blocked but still partly reaches. The shallow shadow is darker than the fully illuminated bright area and lighter than the fully occluded deep shadow area; its extent likewise depends on the size, shape, and geometry of the light source and the object.
Therefore, before the superimposed data is determined in step S102, the method for generating shadows according to the embodiment of the present application may further include the steps of: a deep shadow region and a shallow shadow region are determined from the shadow region of the first region.
Accordingly, the determination of the superimposed data in step S102 may be achieved by: and determining first superposition data corresponding to the bright region and used for generating a deep shadow effect according to the shadow information corresponding to the deep shadow region and the original data of the bright region, and determining second superposition data corresponding to the bright region and used for generating a shallow shadow effect according to the shadow information corresponding to the shallow shadow region.
Accordingly, step S103 may be implemented by: fusing the first superposition data with the original data corresponding to a first partial region in the at least partial region, and generating a deep shadow effect corresponding to the virtual object in the first partial region;
and fusing the second superposition data with the original data corresponding to a second partial region in the at least partial region, and generating a shallow shadow effect corresponding to the virtual object in the second partial region.
In this embodiment, since the color of the deep shadow region is darker than the color of the shallow shadow region, and the color of the shallow shadow region is darker than the color of the bright region, the first superimposed data for generating the deep shadow effect in the bright region is generally different from the second superimposed data for generating the shallow shadow effect in the bright region.
According to the method, the first partial region for displaying the deep shadow effect corresponding to the virtual object can be determined from the bright region, and then the second partial region for displaying the shallow shadow effect corresponding to the virtual object can be determined. In general, the second partial area is located at the periphery of the first partial area, so that the original data corresponding to the first partial area is fused with the first superimposed data, a deep shadow effect corresponding to the virtual object is generated in the first partial area, the original data corresponding to the second partial area is fused with the second superimposed data, and a shallow shadow effect corresponding to the virtual object is generated in the second partial area.
Specifically, when fusion is performed, the first superimposed data and the original data of each pixel point in the first partial area can be fused, the second superimposed data and the original data of each pixel point in the second partial area can be fused, and after fusion, the brightness of each pixel point in the first partial area and the second partial area can be correspondingly darkened, so that a deep shadow effect and a shallow shadow effect corresponding to the virtual object can be generated.
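As a continuation of the earlier luminance-offset sketch, the two kinds of superimposed data could be applied to the two sub-regions separately; the mask names below are illustrative assumptions:

```python
import numpy as np

def apply_two_level_shadow(luma, deep_mask, shallow_mask, bright_mask,
                           first_part_mask, second_part_mask):
    """Fuse the first/second superimposed data with the original luminance so
    that the first partial region gets a deep-shadow effect and the second
    partial region (typically its periphery) gets a shallow-shadow effect."""
    bright_mean = luma[bright_mask].mean()
    first_offset = luma[deep_mask].mean() - bright_mean      # first superimposed data
    second_offset = luma[shallow_mask].mean() - bright_mean  # second superimposed data

    out = luma.copy()
    out[first_part_mask] = np.clip(out[first_part_mask] + first_offset, 0.0, 1.0)
    out[second_part_mask] = np.clip(out[second_part_mask] + second_offset, 0.0, 1.0)
    return out
```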
By means of the above technical means, the deep shadow effect and the shallow shadow effect corresponding to the virtual object are generated in the bright region. When the shadow contains both a deep shadow and a shallow shadow, the stereoscopic and layered impression of the image is increased and the shadow has richer texture and variation, so the shadow generated for the virtual object is closer to a real shadow in the real world; the virtual object and its shadow can then blend better into the real image, bringing a realistic visual experience to the user.
In a specific embodiment, after the first region is determined, a shadow region generated by the real object under the projection of the real light source may be determined from the first region, and then a shallow shadow region and a deep shadow region are determined from the shadow region, so that a corresponding shadow effect can be quickly generated in the bright region according to the determined deep shadow region and shallow shadow region.
The following describes a method for generating shadows according to an embodiment of the present application, in which a shadow area is determined from a first area:
Specifically, before the superimposed data is determined in step S102, in the method for generating shadows according to the embodiment of the present application, the shadow area may be determined by:
acquiring brightness data corresponding to each pixel point in the first region, wherein the brightness data is used for representing the illumination state of the corresponding pixel point irradiated by the real light source;
Determining first critical brightness data from brightness data corresponding to each pixel point in the first area, wherein the first critical brightness data is used for distinguishing the shadow area from the bright area;
And determining an area formed by pixels smaller than the first critical brightness data in brightness data corresponding to the pixels in the first area as the shadow area, and determining an area formed by pixels larger than or equal to the first critical brightness data in brightness data corresponding to the pixels in the first area as the bright area.
The luminance data corresponding to each pixel point in the first region may be used to represent the illumination state of the corresponding pixel point under the real light source, where the illumination state may include: direct illumination, indirect illumination, and no illumination. A pixel whose illumination state is direct illumination can be regarded as a pixel of the bright region, a pixel whose illumination state is indirect illumination can be regarded as a pixel of the shallow shadow region, and a pixel whose illumination state is no illumination can be regarded as a pixel of the deep shadow region.
According to the application, the pixel points can be divided into the pixel points in the bright area, the pixel points in the shallow shadow area and the pixel points in the deep shadow area according to the brightness data corresponding to the pixel points in the first area, so that the first area can be accurately divided into the bright area, the shallow shadow area and the deep shadow area from the pixel level.
In specific implementation, the first area is divided into a bright area and a shadow area as follows: the first critical luminance data is determined from the luminance data corresponding to the pixel points in the first area, and the first area is divided into the shadow area and the bright area by the first critical luminance data, wherein the area formed by the pixel points whose luminance data is smaller than the first critical luminance data is determined as the shadow area, and the area formed by the pixel points whose luminance data is greater than or equal to the first critical luminance data is determined as the bright area.
Thereafter, the deep shadow region and the shallow shadow region may be divided from the divided shadow regions, and may be specifically divided as follows:
determining second critical brightness data from brightness data of each pixel point in the shadow region, wherein the second critical brightness data is used for distinguishing the deep shadow region from the shallow shadow region;
And determining an area formed by pixels which are smaller than the second critical brightness data in the brightness data corresponding to the pixels in the shadow area as the deep shadow area, and determining an area formed by pixels which are larger than or equal to the second critical brightness data in the brightness data corresponding to the pixels in the shadow area as the shallow shadow area.
It should be noted that the luminance data of each pixel point in the shallow shadow area is necessarily greater than or equal to the second critical luminance data and less than the first critical luminance data; the shadow area is divided into the deep shadow area and the shallow shadow area by the second critical luminance data.
In the application, the first area is divided into the bright area and the shadow area by the first critical brightness data, and the divided shadow area is divided into the deep shadow area and the shallow shadow area by the second critical brightness data, so that when the shadow effect corresponding to the virtual object is generated, the deep shadow effect corresponding to the virtual object can be generated in the bright area obtained by the area division according to the shadow information of the deep shadow area obtained by the area division, and the shallow shadow effect corresponding to the virtual object can be generated in the bright area obtained by the area division according to the shadow information of the shallow shadow area obtained by the area division.
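As a minimal sketch of this two-level division (assumed function and variable names, not the patent's code), the pixel points of the first area can be classified with the two critical luminance values as follows:

import numpy as np

def classify_first_region(luma, first_critical, second_critical):
    # luma: 1-D array holding the luminance data of the pixel points in the first area
    bright = luma >= first_critical                 # bright area
    shadow = ~bright                                # shadow area: luma < first_critical
    deep = shadow & (luma < second_critical)        # deep shadow area
    shallow = shadow & (luma >= second_critical)    # shallow shadow area
    return bright, shallow, deep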
The following describes a method for determining the first critical luminance data:
alternatively, the first critical luminance data may be determined in the present application by the following steps S1 to S3:
Step S1: for the ith brightness data, determining a set formed by pixels with brightness data smaller than the ith brightness data in the first area as a first set, and determining a set formed by pixels with brightness data larger than or equal to the ith brightness data in the first area as a second set, wherein i traverses 1-N, and N is the number of brightness data which are not repeated mutually in the first area.
Step S2: determining a first degree of difference between the luminance data corresponding to each pixel in the first set, and determining a second degree of difference between the luminance data corresponding to each pixel in the second set.
Step S3: and determining the ith brightness data with the smallest sum of the first difference degree and the second difference degree as the first critical brightness data.
In specific implementation, each specific luminance value in the luminance data can be traversed, and the first area is divided into a first set and a second set through the currently traversed luminance value, wherein the first set is a set formed by pixels, the luminance data of which are smaller than the currently traversed luminance value, in the first area, the second set is a set formed by pixels, the luminance data of which are greater than or equal to the currently traversed luminance value, in the first area, and the pixels in the first set and the pixels in the second set jointly form the first area.
It will be appreciated that, in the first area, the same luminance data may correspond to a plurality of pixel points. The value range of the luminance data may be [0, 1] or [0, 255], where a higher value indicates a brighter pixel point and a lower value indicates a darker pixel point. For convenience of explanation, the present application takes the value range [0, 1] as an example.
As shown in table 1, an exemplary table is an example of luminance data in a first area in a method for generating shadows according to an embodiment of the present application.
Table 1.
Luminance data Number of pixels
0.1 200
0.2 400
0.35 170
0.6 600
0.8 130
0.9 200
In table 1, the number of pixels in the first region for luminance data of 0.1 is 200, the number of pixels for luminance data of 0.2 is 400, the number of pixels for luminance data of 0.35 is 170, the number of pixels for luminance data of 0.6 is 600, the number of pixels for luminance data of 0.8 is 130, and the number of pixels for luminance data of 0.9 is 200. Based on this, a distribution map of luminance data of each pixel in the first region can be created.
In an alternative embodiment, a schematic diagram of an example of luminance data of each pixel in the first area as shown in fig. 4 may be created, where the abscissa in fig. 4 is luminance data and the ordinate is the number of pixels, and fig. 4 may show a distribution of the number of pixels on each luminance data.
In another alternative embodiment, a schematic diagram of another example of the luminance data of each pixel in the first area as shown in fig. 5 may be created, where the abscissa in fig. 5 is the luminance data, the ordinate is the probability density of a pixel, and the probability density of a pixel corresponding to a certain luminance data refers to the probability that the luminance data of a pixel in the first area is the luminance data, and may also be understood as the percentage of the pixels corresponding to the luminance data in the first area to the total pixels in the first area.
It can be understood that the schematic diagram shown in fig. 5 may also be referred to as an intensity histogram, and when the intensity histogram is created, the number of pixels may be normalized first to obtain a probability density of a pixel corresponding to each luminance data, specifically, the number of pixels in the first area may be obtained, and a ratio of the number of pixels corresponding to each luminance data to the number of pixels in the first area may be determined as the probability density of the pixel corresponding to the luminance data.
As shown in table 2, an example table of probability densities of pixels normalized by the number of pixels in table 1 is shown.
Table 2.
Luminance data Probability density of pixels
0.1 0.118
0.2 0.235
0.35 0.100
0.6 0.353
0.8 0.076
0.9 0.118
(The probability densities are the pixel counts of Table 1 divided by the total of 1700 pixel points in the first area, rounded to three decimal places.)
It will be appreciated that the sum of the probability densities of the pixels corresponding to all luminance data is 1.
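For instance, the normalisation that turns Table 1 into Table 2 amounts to dividing each pixel count by the total number of pixel points in the first area; a small Python sketch (the dictionary below simply mirrors Table 1) is:

counts = {0.1: 200, 0.2: 400, 0.35: 170, 0.6: 600, 0.8: 130, 0.9: 200}
total = sum(counts.values())                         # 1700 pixel points in the first area
density = {luma: n / total for luma, n in counts.items()}
assert abs(sum(density.values()) - 1.0) < 1e-9       # probability densities sum to 1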
When determining the first critical luminance data for dividing the first area into the shadow area and the bright area, luminance data that can minimize the sum of the first difference between the luminance data corresponding to the pixels in the first set and the second difference between the luminance data corresponding to the pixels in the second set may be determined from the luminance data corresponding to the first area, which is the first critical luminance data.
In an alternative embodiment, the first threshold luminance data may be determined by means of intra-group variances.
Specifically, the first degree of difference may be determined by: determining the intra-group variance corresponding to the first set, and determining a first number of pixels in the first set; and determining the product of the intra-group variance corresponding to the first set and the first quantity as a first difference degree between brightness data corresponding to each pixel in the first set. Accordingly, the second degree of difference may be determined by: determining the intra-group variance corresponding to the second set, and determining a second number of pixels in the second set; and determining the product of the intra-group variance corresponding to the second set and the second number as a second difference degree between brightness data corresponding to each pixel in the second set.
The following formula (6) is a calculation formula of the sum of the first degree of difference and the second degree of difference:
$D(i) = w_1\sigma_1^2 + w_2\sigma_2^2 \qquad (6)$

wherein $D(i)$ is the sum of the first degree of difference and the second degree of difference obtained when the i-th luminance data is taken as the candidate first critical luminance data, $w_1$ is the number of pixel points whose luminance data is smaller than the i-th luminance data, $\sigma_1^2$ is the intra-group variance corresponding to the first set, $w_2$ is the number of pixel points whose luminance data is greater than or equal to the i-th luminance data, and $\sigma_2^2$ is the intra-group variance corresponding to the second set.
It should be noted that, for different ith luminance data, the number of pixels in the first set is different, the corresponding intra-group variances are also different, and similarly, for different ith luminance data, the number of pixels in the second set is different, and the corresponding intra-group variances are also different.
Therefore, based on the principle that the sum of the first difference degree and the second difference degree is minimum, each piece of brightness data corresponding to the first area can be traversed, so that the brightness data which can enable the sum of the first difference degree and the second difference degree to be minimum is obtained, and the first critical brightness data can be accurately determined. Thus, the first region can be precisely and efficiently divided into the bright region and the shadow region.
In a specific implementation manner, the intra-group variance corresponding to the first set is calculated as follows:
Determining average brightness data corresponding to the first set and average brightness data corresponding to the first region;
And determining the square of the difference value between the average brightness data corresponding to the first set and the average brightness data corresponding to the first area as the intra-group variance corresponding to the first set.
The intra-group variance corresponding to the second set is calculated as follows:
determining average brightness data corresponding to the second set;
And determining the square of the difference between the average luminance data corresponding to the second set and the average luminance data corresponding to the first area as the intra-group variance corresponding to the second set.
The following formulas (7) and (8) are respectively the calculation formulas of the intra-group variance corresponding to the first set and the intra-group variance corresponding to the second set:
$\sigma_1^2 = (\mu_1 - \mu)^2 \qquad (7)$

$\sigma_2^2 = (\mu_2 - \mu)^2 \qquad (8)$

wherein $\sigma_1^2$ is the intra-group variance corresponding to the first set, $\sigma_2^2$ is the intra-group variance corresponding to the second set, $\mu_1$ is the average luminance data corresponding to the pixels in the first set, $\mu_2$ is the average luminance data corresponding to the pixels in the second set, and $\mu$ is the average luminance data corresponding to the pixels in the first area.
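A minimal Python sketch of the search described by formulas (6) to (8) is given below; the function name is an assumption, and candidates that would leave the first set empty are skipped for robustness. The same search can be reused on the luminance data of the shadow area to obtain the second critical luminance data described next.

import numpy as np

def critical_luminance(luma):
    # luma: 1-D array of luminance data of one area; returns the critical luminance value
    luma = np.asarray(luma, dtype=float)
    region_mean = luma.mean()                      # average luminance data of the whole area
    best_value, best_cost = None, np.inf
    for t in np.unique(luma):                      # traverse the mutually distinct luminance values
        first = luma[luma < t]                     # first set
        second = luma[luma >= t]                   # second set
        if first.size == 0:
            continue
        cost = (first.size * (first.mean() - region_mean) ** 2 +
                second.size * (second.mean() - region_mean) ** 2)   # formula (6) with (7) and (8)
        if cost < best_cost:
            best_value, best_cost = t, cost
    return best_value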
The determination method of the second critical luminance data for dividing the shadow region into the deep shadow region and the shallow shadow region is the same as the determination method of the first critical luminance data, and the determination steps are as follows:
For the j-th luminance data in the luminance data of the shadow area, determining the set formed by the pixel points whose luminance data is smaller than the j-th luminance data in the shadow area as a third set, and determining the set formed by the pixel points whose luminance data is greater than or equal to the j-th luminance data in the shadow area as a fourth set, wherein j traverses 1 to M, and M is the number of mutually distinct luminance data values in the shadow area;
Determining a third degree of difference between the luminance data corresponding to the pixels in the third set, and determining a fourth degree of difference between the luminance data corresponding to the pixels in the fourth set;
And determining the j-th brightness data with the minimum sum of the third difference degree and the fourth difference degree as second critical brightness data.
The determination of the second critical luminance data may refer to the above determination of the first critical luminance data, and will not be described herein.
In an alternative embodiment, the superimposed data and the original data corresponding to at least part of the bright area may be fused by means of alpha (transparency channel) blending, so that the shadow effect corresponding to the virtual object is generated in at least part of the bright area.
For convenience of explanation, the present application is described in detail below taking the generation of the deep shadow effect corresponding to the virtual object as an example.
Alpha blending is performed on all pixels in at least part of the bright area and is supported by computer graphics hardware. In its general form, for a pixel k, alpha blending is a linear combination of the pixel's original RGB values and the superimposed data, calculated as shown in equation (9):
$[\hat R_k, \hat G_k, \hat B_k] = a\,[R_k^{s}, G_k^{s}, B_k^{s}] + (1-a)\,[R_k^{o}, G_k^{o}, B_k^{o}] \qquad (9)$

wherein $[\hat R_k, \hat G_k, \hat B_k]$ are the RGB values of pixel k after alpha blending, with $\hat R_k$, $\hat G_k$ and $\hat B_k$ the values of the R, G and B channels respectively; $[R_k^{s}, G_k^{s}, B_k^{s}]$ are the RGB values of the superimposed data superimposed on pixel k in alpha blending; $[R_k^{o}, G_k^{o}, B_k^{o}]$ are the original RGB values of pixel k before alpha blending; $a$ is the first weight of the superimposed data in alpha blending, and $1-a$ is the second weight of the original data of pixel k.

In addition, in the ideal case, $[\hat R_k, \hat G_k, \hat B_k]$ in formula (9) equals the RGB values corresponding to the shadow area in the first area. In practical application, since the shadow area contains a large number of pixel points, the average RGB values $[\bar R_s, \bar G_s, \bar B_s]$ of the pixel points in the shadow area may be used, where $\bar R_s$, $\bar G_s$ and $\bar B_s$ are the averages of the shadow-area pixel points on the R, G and B channels respectively.
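A short Python sketch of formula (9) and of the shadow-area mean that the blended result should ideally reproduce (the function names are assumptions):

import numpy as np

def alpha_blend_pixel(original_rgb, overlay_rgb, a):
    # Formula (9): linear combination of the superimposed data (weight a)
    # and the original data (weight 1 - a) of one pixel point
    return a * np.asarray(overlay_rgb, float) + (1.0 - a) * np.asarray(original_rgb, float)

def shadow_mean_rgb(image, shadow_mask):
    # Average RGB of the pixel points inside the shadow area of the first area
    return image[shadow_mask].mean(axis=0)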
As can be seen from the above formula (9), in the present application, after the shadow area and the bright area are divided from the image, the shadow effect corresponding to the virtual object can be generated according to the first weight, the second weight, the superimposed data and the original data corresponding to at least a part of the bright area by determining the first weight, the second weight and the superimposed data.
Therefore, before the superimposed data is determined in step S102, the method for generating shadows provided by the present application may further include the steps of:
and determining a first weight of the superimposed data, and determining a second weight of the original data corresponding to the bright area, wherein the sum of the first weight and the second weight is 1.
The determination of the superimposed data in step S102 may be achieved by:
and determining superposition data corresponding to the bright region according to the shadow information corresponding to the shadow region in the first region, the original data of the bright region and the first weight.
Accordingly, step S103 may be implemented by: and fusing the superimposed data and the original data corresponding to the at least partial region according to the corresponding first weight and second weight respectively, and generating a shadow effect corresponding to the virtual object in the at least partial region.
It should be noted that, in theory, the first weight may be any value in the interval of [0,1], for example, the first weight may be 0.1, and the corresponding second weight is 0.9; the first weight may be 0.7, and the corresponding second weight is 0.3; the first weight may be 0.8 and the corresponding second weight 0.2. The superimposed data may be understood as superimposed layers, the first weight may be understood as transparency of the superimposed layers, the original data may be understood as the original layers, the second weight may be understood as transparency of the original layers, and the sum of the transparency of the superimposed layers and the original layer is 1.
In practical applications, in order to make the shadow effect corresponding to the generated virtual object keep the original data of the pixel as much as possible, the second weight corresponding to the original data is required to be as large as possible, that is, the first weight corresponding to the superimposed data is as small as possible, and both the first weight and the second weight are positive values.
It should be noted that, when the difference obtained by subtracting the average value of a certain channel in the shadow area from the average data corresponding to a certain channel in the original data of the pixels in the bright area is smaller, it is indicated that the difference between the bright area and the shadow area on the channel is smaller, and the influence of the value of the shadow area on the channel on the shadow effect in the bright area is smaller; on the contrary, when the difference obtained by subtracting the average value of a certain channel in the shadow area from the average data corresponding to a certain channel in the original data of the pixels in the bright area is larger, it is indicated that the difference between the bright area and the shadow area on the channel is larger, and the effect of the shadow effect generated by the value of the shadow area on the channel on the bright area is larger.
The first weight may be determined in the present application by:
determining a first difference value of data corresponding to an R channel in the original data of the bright area, a second difference value of data corresponding to a G channel in the original data of the bright area and a third difference value of data corresponding to a B channel in the original data of the bright area;
And determining the ratio of the largest value among the first difference value, the second difference value and the third difference value to the data of the corresponding channel in the original data of the bright area as the first weight.
The following formula (10) is a determination formula of the first weight:
$a = \dfrac{\max\left(\bar R_b - \bar R_s,\; \bar G_b - \bar G_s,\; \bar B_b - \bar B_s\right)}{\bar C_b} \qquad (10)$

wherein $a$ is the first weight; $\bar R_s$, $\bar G_s$ and $\bar B_s$ are the average values of the pixel points in the shadow area of the first area on the R, G and B channels respectively; $\bar R_b$, $\bar G_b$ and $\bar B_b$ are the original average values of the pixel points in the bright area on the R, G and B channels respectively; and $\bar C_b$ denotes the bright-area average of the channel at which the maximum difference is attained.
Thus, based on the formula (10), a first weight corresponding to the superimposed data is determined, and a second weight is obtained by subtracting the first weight from 1.
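Expressed as a small Python sketch (assumed names, not the claimed implementation), formula (10) reads:

import numpy as np

def first_weight(bright_mean_rgb, shadow_mean_rgb):
    bright = np.asarray(bright_mean_rgb, dtype=float)
    shadow = np.asarray(shadow_mean_rgb, dtype=float)
    diff = bright - shadow                   # first, second and third difference values
    channel = int(np.argmax(diff))           # channel with the largest difference
    a = diff[channel] / bright[channel]      # first weight
    return a, 1.0 - a                        # first weight and second weight

This mirrors the intuition above: the channel on which the bright area and the shadow area differ most is the one that drives the strength of the generated shadow effect.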
And then, determining the superposition data, and generating the shadow effect corresponding to the virtual object according to the first weight, the second weight, the superposition data and the original data corresponding to the bright area.
Since the shadow effect of the virtual object matches the shadow effect corresponding to the shadow area, in an ideal state the average RGB values of the pixel points in the shadow generated for the virtual object are the same as the average RGB values of the pixel points in the shadow area of the first area. Thus, in the ideal state, formula (9) above becomes:

$[\bar R_s, \bar G_s, \bar B_s] = a\,[R_a, G_a, B_a] + (1-a)\,[\bar R_b, \bar G_b, \bar B_b]$
Thus, the following formula (11), formula (12) and formula (13) can be obtained:

$R_a = \dfrac{\bar R_s - (1-a)\,\bar R_b}{a} \qquad (11)$

$G_a = \dfrac{\bar G_s - (1-a)\,\bar G_b}{a} \qquad (12)$

$B_a = \dfrac{\bar B_s - (1-a)\,\bar B_b}{a} \qquad (13)$

wherein $R_a$, $G_a$ and $B_a$ are the superimposed data corresponding to the R, G and B channels respectively; $\bar R_s$, $\bar G_s$ and $\bar B_s$ are the average values of the pixel points in the shadow area of the first area on the R, G and B channels; $\bar R_b$, $\bar G_b$ and $\bar B_b$ are the original average values of the pixel points in the bright area on the R, G and B channels; and $a$ is the first weight corresponding to the superimposed data.
The determination of the superimposed data can be obtained based on the above formula:
multiplying the second weight by the data corresponding to the channel targeted in the original data of the bright area aiming at any channel of the R channel, the G channel and the B channel to obtain the superposition component corresponding to the channel targeted in the original data corresponding to the bright area;
Subtracting the superposition component corresponding to the channel targeted in the original data corresponding to the bright region from the shadow data corresponding to the channel targeted in the shadow information to obtain the superposition component corresponding to the channel targeted in the superposition data to be determined;
And determining the ratio of the superposition component corresponding to the channel aimed at in the superposition data to be determined to the first weight as the channel data corresponding to the channel aimed at in the superposition data to be determined, and obtaining superposition data containing the channel data respectively corresponding to the R channel, the G channel and the B channel.
After obtaining the superimposed data containing the channel data corresponding to the R channel, the G channel and the B channel, formula (9) above may be used to fuse the superimposed data and the original data for each pixel point in at least part of the bright area, so as to generate the shadow effect corresponding to the virtual object in at least part of the bright area.
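Putting formulas (11) to (13) together with the blending of formula (9), a possible Python sketch (the array layout and names are assumptions) is:

import numpy as np

def superimposed_rgb(shadow_mean_rgb, bright_mean_rgb, a):
    shadow = np.asarray(shadow_mean_rgb, dtype=float)
    bright = np.asarray(bright_mean_rgb, dtype=float)
    return (shadow - (1.0 - a) * bright) / a          # formulas (11), (12), (13)

def apply_virtual_shadow(image, region_mask, overlay_rgb, a):
    # image: HxWx3 float array; region_mask: HxW boolean mask of the target part of the bright area
    out = image.copy()
    out[region_mask] = a * np.asarray(overlay_rgb, float) + (1.0 - a) * image[region_mask]  # formula (9)
    return out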
In addition, in order to make the shape of the shadow generated for the virtual object conform to the real situation, for example the shadow corresponding to a cuboid being approximately rectangular and the shadow corresponding to a sphere being approximately circular, in the present application the shape of the virtual object can be stretched and deformed by performing an offset calculation on the outline of the virtual object to determine the shadow shape, so that a more vivid and natural shadow is generated, bringing a realistic visual experience to the user.
Specifically, when generating the shadow effect corresponding to the virtual object, the shape of the virtual object can be deformed to obtain the shadow shape corresponding to the virtual object; determining at least a partial region conforming to the shape of the shadow from the bright region; and fusing the superimposed data with the original data corresponding to at least part of the area, and generating a shadow effect corresponding to the virtual object in at least part of the area.
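The patent does not prescribe a particular deformation, so the following Python sketch is only one possible reading: it assumes a flat ground plane seen in image space and uses hypothetical shear and squash factors (which a fuller implementation would derive from the light direction) to offset and stretch the outline of the virtual object into a shadow shape.

import numpy as np

def shadow_outline(outline_xy, base_y, shear=0.8, squash=0.3):
    # outline_xy: Nx2 array of the virtual object's outline points (image coordinates, y grows downward)
    # base_y:     y coordinate of the line where the virtual object meets the ground
    pts = np.asarray(outline_xy, dtype=float)
    height = base_y - pts[:, 1]                  # how far each outline point lies above the base line
    shadow = np.empty_like(pts)
    shadow[:, 0] = pts[:, 0] + shear * height    # offset points away from the object along x
    shadow[:, 1] = base_y - squash * height      # flatten the outline towards the ground
    return shadow

Under this mapping a rectangular outline stays approximately rectangular and a circular outline becomes a flattened, roughly circular shape, in line with the requirement above.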
In an optional embodiment, in order to make the shadow effect corresponding to the virtual object generated in the application more lifelike and natural, according to the real situation, the position of the real light source can be determined according to the direction of the shadow generated by the real object under the projection of the real light source, then the direction of the shadow corresponding to the virtual object is determined according to the position relationship between the real light source and the virtual object, and further the shadow corresponding to the virtual object is generated based on the determined direction of the shadow corresponding to the virtual object.
As shown in fig. 6, which is a schematic diagram of an example of generating shadows for virtual objects in the method for generating shadows according to the embodiment of the present application, fig. 6 includes a real object 01, a real object 02, a virtual object 03, and a virtual object 04, and it can be seen that, in a case where corresponding real shadows are generated by the real object 01 and the real object 02 under the projection of a real light source, corresponding shadows can be generated for the virtual object 03 and the virtual object 04 respectively according to the real shadows, and the generated shadows are more realistic.
Corresponding to the method for generating shadows according to the first embodiment of the present application, the second embodiment of the present application further provides a device for generating shadows, as shown in fig. 7, where the device 700 for generating shadows includes:
A first determining unit 701 for determining a first region having the same texture from an image including at least one real object, the first region including a bright region illuminated by a real light source and a shadow region generated by the real object under projection of the real light source;
a second determining unit 702, configured to determine superimposed data corresponding to the bright area according to the shadow information corresponding to the shadow area in the first area and the original data of the bright area, where the superimposed data is used to generate a shadow effect consistent with the shadow area in the bright area;
And a fusion unit 703, configured to fuse the superimposed data with original data corresponding to at least a partial area in the bright area, and generate a shadow effect corresponding to the virtual object in the at least partial area.
Optionally, the apparatus 700 for generating shadows further comprises a third determining unit, where the third determining unit is configured to:
determining a deep shadow region and a shallow shadow region from the shadow region of the first region;
The second determining unit 702 is specifically configured to:
And determining first superposition data corresponding to the bright region and used for generating a deep shadow effect according to the shadow information corresponding to the deep shadow region and the original data of the bright region, and determining second superposition data corresponding to the bright region and used for generating a shallow shadow effect according to the shadow information corresponding to the shallow shadow region.
Optionally, the fusion unit 703 is specifically configured to:
Fusing the first superposition data with the original data corresponding to a first partial region in the at least partial region, and generating a deep shadow effect corresponding to the virtual object in the first partial region;
and fusing the second superposition data with the original data corresponding to the second partial region in the at least partial region, and generating the shadow effect corresponding to the virtual object in the second partial region.
Optionally, the first determining unit 701 is specifically configured to:
Determining first color information corresponding to each first pixel point in a region selected by the region selection operation in response to the region selection operation triggered by the image, wherein the region selected by the region selection operation is a region in the first region to be determined, and the first color information comprises hue information;
Determining second color information of each second pixel point in the image except the region selected by the region selecting operation;
determining a second pixel point belonging to a first area to be determined in each second pixel point according to the first color information and the second color information;
And determining an area formed by the area selected by the area selection operation and the second pixel points belonging to the first area to be determined as the first area.
Optionally, the first determining unit 701 is specifically configured to:
Respectively determining the similarity between second color information corresponding to each second pixel point and the first color information;
and determining the second pixel points with the similarity larger than or equal to the preset similarity in the second pixel points as second pixel points belonging to the first area to be determined.
Optionally, the third determining unit is further configured to:
acquiring brightness data corresponding to each pixel point in the first region, wherein the brightness data is used for representing the illumination state of the corresponding pixel point irradiated by the real light source;
Determining first critical brightness data from brightness data corresponding to each pixel point in the first area, wherein the first critical brightness data is used for distinguishing the shadow area from the bright area;
And determining an area formed by pixels smaller than the first critical brightness data in brightness data corresponding to the pixels in the first area as the shadow area, and determining an area formed by pixels larger than or equal to the first critical brightness data in brightness data corresponding to the pixels in the first area as the bright area.
Optionally, the third determining unit is specifically configured to:
determining second critical brightness data from brightness data of each pixel point in the shadow region, wherein the second critical brightness data is used for distinguishing the deep shadow region from the shallow shadow region;
And determining an area formed by pixels which are smaller than the second critical brightness data in the brightness data corresponding to the pixels in the shadow area as the deep shadow area, and determining an area formed by pixels which are larger than or equal to the second critical brightness data in the brightness data corresponding to the pixels in the shadow area as the shallow shadow area.
Optionally, the third determining unit is specifically configured to:
For the ith brightness data, determining a set formed by pixels with brightness data smaller than the ith brightness data in the first area as a first set, and determining a set formed by pixels with brightness data larger than or equal to the ith brightness data in the first area as a second set, wherein i traverses 1-N, and N is the number of brightness data which are not repeated mutually in the first area;
Determining a first difference degree between brightness data corresponding to pixels in the first set, and determining a second difference degree between brightness data corresponding to pixels in the second set;
And determining the ith brightness data with the smallest sum of the first difference degree and the second difference degree as the first critical brightness data.
Optionally, the third determining unit is specifically configured to:
determining the intra-group variance corresponding to the first set, and determining a first number of pixels in the first set;
Determining the product of the intra-group variance corresponding to the first set and the first quantity as a first difference degree between brightness data corresponding to pixels in the first set;
Determining the intra-group variance corresponding to the second set, and determining a second number of pixels in the second set;
and determining the product of the intra-group variance corresponding to the second set and the second number as a second difference degree between brightness data corresponding to each pixel in the second set.
Optionally, the third determining unit is specifically configured to:
Determining average brightness data corresponding to the first set and average brightness data corresponding to the first region;
Determining the square of the difference value between the average brightness data corresponding to the first set and the average brightness data corresponding to the first area as the intra-group variance corresponding to the first set;
determining average brightness data corresponding to the second set;
And determining the square of the difference between the average brightness data corresponding to the second set and the average brightness data corresponding to the first region as the intra-group variance corresponding to the second set.
Optionally, the second determining unit 702 is further specifically configured to:
Determining a first weight of the superimposed data, and determining a second weight of the original data corresponding to the bright area, wherein the sum of the first weight and the second weight is 1;
and determining superposition data corresponding to the bright region according to the shadow information corresponding to the shadow region in the first region, the original data of the bright region and the first weight.
Optionally, the fusion unit 703 is specifically configured to:
And fusing the superimposed data and the original data corresponding to the at least partial region according to the corresponding first weight and second weight respectively, and generating a shadow effect corresponding to the virtual object in the at least partial region.
Optionally, the shadow information includes first channel data of an R channel, second channel data of a G channel, and third channel data of a B channel, and the second determining unit 702 is specifically configured to:
determining a first difference value of data corresponding to an R channel in the original data of the bright area, a second difference value of data corresponding to a G channel in the original data of the bright area and a third difference value of data corresponding to a B channel in the original data of the bright area;
And determining the ratio of the largest value among the first difference value, the second difference value and the third difference value to the data of the corresponding channel in the original data of the bright area as the first weight.
Optionally, the second determining unit 702 is specifically configured to:
multiplying the second weight by the data corresponding to the channel targeted in the original data of the bright area aiming at any channel of the R channel, the G channel and the B channel to obtain the superposition component corresponding to the channel targeted in the original data corresponding to the bright area;
Subtracting the superposition component corresponding to the channel targeted in the original data corresponding to the bright region from the shadow data corresponding to the channel targeted in the shadow information to obtain the superposition component corresponding to the channel targeted in the superposition data to be determined;
And determining the ratio of the superposition component corresponding to the channel aimed at in the superposition data to be determined to the first weight as the channel data corresponding to the channel aimed at in the superposition data to be determined, and obtaining superposition data containing the channel data respectively corresponding to the R channel, the G channel and the B channel.
Optionally, the first determining unit 701 is further specifically configured to:
converting each pixel in the image from an RGB color space to an HSV color space;
Determining an H value and an S value corresponding to each first pixel point in the HSV color space in the region selected by the region selecting operation as first color information corresponding to each first pixel point;
And determining the H value and the S value corresponding to each second pixel point in the HSV color space in the areas except the area selected by the area selecting operation in the image as second color information corresponding to each second pixel point.
Optionally, the first determining unit 701 is specifically configured to:
respectively determining the mahalanobis distance between the second color information corresponding to each second pixel point and the category formed by the first color information corresponding to each first pixel point in the area selected by the area selecting operation;
And respectively determining the similarity between the second color information corresponding to each second pixel point and the first color information according to the mahalanobis distance, wherein the mahalanobis distance is inversely proportional to the corresponding similarity.
Optionally, the fusion unit 703 is specifically configured to:
deforming the shape of the virtual object to obtain a shadow shape corresponding to the virtual object;
Determining at least a partial region from the bright region that conforms to the shadow shape;
And fusing the superposition data with the original data corresponding to the at least partial area, and generating the shadow effect corresponding to the virtual object in the at least partial area.
The third embodiment of the present application also provides an electronic device for generating shadows, corresponding to the method for generating shadows provided by the first embodiment of the present application. As shown in fig. 8, the electronic device 800 includes: a processor 801; and a memory 802 for storing a program of the method for generating shadows, wherein, after the electronic device is powered on and the program is run by the processor 801, the following steps are performed:
determining a first area with the same texture from an image comprising at least one real object, wherein the first area comprises a bright area illuminated by a real light source and a shadow area generated by the real object under the projection of the real light source;
Determining superposition data corresponding to the bright region according to shadow information corresponding to the shadow region in the first region and original data of the bright region, wherein the superposition data is used for generating a shadow effect consistent with the shadow region in the bright region;
And fusing the superimposed data with the original data corresponding to at least part of the bright areas, and generating the shadow effect corresponding to the virtual object in the at least part of the bright areas.
In correspondence with the method of generating shadows provided by the first embodiment of the present application, a fourth embodiment of the present application provides a computer-readable storage medium storing a program of the method of generating shadows, the program being executed by a processor to perform the steps of:
determining a first area with the same texture from an image comprising at least one real object, wherein the first area comprises a bright area illuminated by a real light source and a shadow area generated by the real object under the projection of the real light source;
Determining superposition data corresponding to the bright region according to shadow information corresponding to the shadow region in the first region and original data of the bright region, wherein the superposition data is used for generating a shadow effect consistent with the shadow region in the bright region;
And fusing the superimposed data with the original data corresponding to at least part of the bright areas, and generating the shadow effect corresponding to the virtual object in the at least part of the bright areas.
It should be noted that, for the detailed descriptions of the apparatus, the electronic device, and the computer readable storage medium provided in the second embodiment, the third embodiment, and the fourth embodiment of the present application, reference may be made to the related descriptions of the first embodiment of the present application, and the detailed descriptions are omitted here.
While the application has been described in terms of preferred embodiments, it is not intended to be limiting, but rather, it will be apparent to those skilled in the art that various changes and modifications can be made herein without departing from the spirit and scope of the application as defined by the appended claims.
In one typical configuration, the node devices in the blockchain include one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include volatile memory in a computer-readable medium, such as Random Access Memory (RAM), and/or non-volatile memory, such as Read-Only Memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
Computer readable media, including permanent and non-permanent, removable and non-removable media, may implement information storage by any method or technology. The information may be computer readable instructions, data structures, modules of a program, or other data. Examples of storage media for a computer include, but are not limited to, phase change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read only memory (ROM), electrically erasable programmable read only memory (EEPROM), flash memory or other memory technology, compact disc read only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage media, or any other non-transmission media that can be used to store information accessible by a computing device. Computer readable media, as defined herein, do not include transitory computer readable media (transmission media), such as modulated data signals and carrier waves.
It will be appreciated by those skilled in the art that embodiments of the present application may be provided as a method, a system, or a computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.

Claims (20)

1. A method of generating shadows, the method comprising:
determining a first area with the same texture from an image comprising at least one real object, wherein the first area comprises a bright area illuminated by a real light source and a shadow area generated by the real object under the projection of the real light source;
Determining superposition data corresponding to the bright region according to shadow information corresponding to the shadow region in the first region and original data of the bright region, wherein the superposition data is used for generating a shadow effect consistent with the shadow region in the bright region;
And fusing the superimposed data with the original data corresponding to at least part of the bright areas, and generating the shadow effect corresponding to the virtual object in the at least part of the bright areas.
2. The method of claim 1, wherein prior to said determining superimposed data corresponding to said bright region from said shadow information corresponding to said shadow region in said first region and said raw data of said bright region, said method further comprises:
determining a deep shadow region and a shallow shadow region from the shadow region of the first region;
The determining the superimposed data corresponding to the bright region according to the shadow information corresponding to the shadow region in the first region and the original data of the bright region includes:
and determining first superposition data corresponding to the bright region and used for generating a deep shadow effect according to shadow information corresponding to the deep shadow region and the original data of the bright region, and determining second superposition data corresponding to the bright region and used for generating a shallow shadow effect according to shadow information corresponding to the shallow shadow region.
3. The method according to claim 2, wherein the fusing the superimposed data with the original data corresponding to at least a part of the bright area, and generating the shadow effect corresponding to the virtual object in the at least part of the bright area, includes:
Fusing the first superposition data with the original data corresponding to a first partial region in the at least partial region, and generating a deep shadow effect corresponding to the virtual object in the first partial region;
and fusing the second superposition data with the original data corresponding to the second partial region in the at least partial region, and generating the shadow effect corresponding to the virtual object in the second partial region.
4. The method of claim 1, wherein determining a first region having the same texture from an image including a real object comprises:
Determining first color information corresponding to each first pixel point in a region selected by the region selection operation in response to the region selection operation triggered by the image, wherein the region selected by the region selection operation is a region in the first region to be determined, and the first color information comprises hue information;
Determining second color information of each second pixel point in the image except the region selected by the region selecting operation;
determining a second pixel point belonging to a first area to be determined in each second pixel point according to the first color information and the second color information;
And determining an area formed by the area selected by the area selection operation and the second pixel points belonging to the first area to be determined as the first area.
5. The method of claim 4, wherein determining a second pixel belonging to the first region to be determined from the second pixels according to the first color information and the second color information, comprises:
Respectively determining the similarity between second color information corresponding to each second pixel point and the first color information;
and determining the second pixel points with the similarity larger than or equal to the preset similarity in the second pixel points as second pixel points belonging to the first area to be determined.
6. The method of claim 2, wherein prior to said determining superimposed data corresponding to said bright region from said shadow information corresponding to said shadow region in said first region and said raw data of said bright region, said method further comprises:
acquiring brightness data corresponding to each pixel point in the first region, wherein the brightness data is used for representing the illumination state of the corresponding pixel point irradiated by the real light source;
Determining first critical brightness data from brightness data corresponding to each pixel point in the first area, wherein the first critical brightness data is used for distinguishing the shadow area from the bright area;
And determining an area formed by pixels smaller than the first critical brightness data in brightness data corresponding to the pixels in the first area as the shadow area, and determining an area formed by pixels larger than or equal to the first critical brightness data in brightness data corresponding to the pixels in the first area as the bright area.
7. The method of claim 6, wherein the determining a deep shadow region and a shallow shadow region from the shadow region of the first region comprises:
determining second critical brightness data from brightness data of each pixel point in the shadow region, wherein the second critical brightness data is used for distinguishing the deep shadow region from the shallow shadow region;
And determining an area formed by pixels which are smaller than the second critical brightness data in the brightness data corresponding to the pixels in the shadow area as the deep shadow area, and determining an area formed by pixels which are larger than or equal to the second critical brightness data in the brightness data corresponding to the pixels in the shadow area as the shallow shadow area.
8. The method of claim 6, wherein determining first critical luminance data from luminance data corresponding to each pixel in the first region comprises:
For the ith brightness data, determining a set formed by pixels with brightness data smaller than the ith brightness data in the first area as a first set, and determining a set formed by pixels with brightness data larger than or equal to the ith brightness data in the first area as a second set, wherein i traverses 1-N, and N is the number of brightness data which are not repeated mutually in the first area;
Determining a first difference degree between brightness data corresponding to pixels in the first set, and determining a second difference degree between brightness data corresponding to pixels in the second set;
And determining the ith brightness data with the smallest sum of the first difference degree and the second difference degree as the first critical brightness data.
9. The method of claim 8, wherein determining a first degree of difference between luminance data corresponding to pixels in the first set comprises:
determining the intra-group variance corresponding to the first set, and determining a first number of pixels in the first set;
Determining the product of the intra-group variance corresponding to the first set and the first quantity as a first difference degree between brightness data corresponding to pixels in the first set;
the determining a second degree of difference between the luminance data corresponding to each pixel in the second set includes:
Determining the intra-group variance corresponding to the second set, and determining a second number of pixels in the second set;
and determining the product of the intra-group variance corresponding to the second set and the second number as a second difference degree between brightness data corresponding to each pixel in the second set.
10. The method of claim 9, wherein the determining the intra-group variance corresponding to the first set comprises:
Determining average brightness data corresponding to the first set and average brightness data corresponding to the first region;
Determining the square of the difference value between the average brightness data corresponding to the first set and the average brightness data corresponding to the first area as the intra-group variance corresponding to the first set;
The determining the intra-group variance corresponding to the second set includes:
determining average brightness data corresponding to the second set;
And determining the square of the difference value between the average brightness data corresponding to the second set and the average brightness data corresponding to the first region as the intra-group variance corresponding to the second set.
11. The method of claim 1, wherein prior to said determining superimposed data corresponding to said bright region from said shadow information corresponding to said shadow region in said first region and said raw data of said bright region, said method further comprises:
Determining a first weight of the superimposed data, and determining a second weight of the original data corresponding to the bright area, wherein the sum of the first weight and the second weight is 1;
The determining the superimposed data corresponding to the bright region according to the shadow information corresponding to the shadow region in the first region and the original data of the bright region includes:
and determining superposition data corresponding to the bright region according to the shadow information corresponding to the shadow region in the first region, the original data of the bright region and the first weight.
12. The method of claim 11, wherein the fusing the superimposed data with the raw data corresponding to at least a portion of the bright region, generating a shadow effect corresponding to the virtual object in the at least a portion of the region, comprises:
And fusing the superimposed data and the original data corresponding to the at least partial region according to the corresponding first weight and second weight respectively, and generating a shadow effect corresponding to the virtual object in the at least partial region.
13. The method of claim 11, wherein the shadow information includes first channel data for an R channel, second channel data for a G channel, and third channel data for a B channel, the determining the first weight for the overlay data comprising:
determining a first difference value of data corresponding to an R channel in the original data of the bright area, a second difference value of data corresponding to a G channel in the original data of the bright area and a third difference value of data corresponding to a B channel in the original data of the bright area;
And determining the ratio of the largest value among the first difference value, the second difference value and the third difference value to the data of the corresponding channel in the original data of the bright area as the first weight.
14. The method of claim 13, wherein the determining the superimposed data corresponding to the bright region based on the shadow information corresponding to the shadow region in the first region, the raw data of the bright region, and the first weight comprises:
multiplying the second weight by the data corresponding to the channel targeted in the original data of the bright area aiming at any channel of the R channel, the G channel and the B channel to obtain the superposition component corresponding to the channel targeted in the original data corresponding to the bright area;
Subtracting the superposition component corresponding to the channel targeted in the original data corresponding to the bright region from the shadow data corresponding to the channel targeted in the shadow information to obtain the superposition component corresponding to the channel targeted in the superposition data to be determined;
And determining the ratio of the superposition component corresponding to the channel aimed at in the superposition data to be determined to the first weight as the channel data corresponding to the channel aimed at in the superposition data to be determined, and obtaining superposition data containing the channel data respectively corresponding to the R channel, the G channel and the B channel.
15. The method of claim 4, wherein before the determining the first color information corresponding to each first pixel point in the region selected by the region selecting operation, the method further comprises:
converting each pixel point in the image from an RGB color space to an HSV color space;
wherein the determining the first color information corresponding to each first pixel point in the region selected by the region selecting operation comprises:
determining an H value and an S value, in the HSV color space, of each first pixel point in the region selected by the region selecting operation as the first color information corresponding to each first pixel point;
and the determining the second color information of each second pixel point in the region of the image other than the region selected by the region selecting operation comprises:
determining an H value and an S value, in the HSV color space, of each second pixel point in the region of the image other than the region selected by the region selecting operation as the second color information corresponding to each second pixel point.
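Purely as an illustration of claim 15, the sketch below performs the colour-space conversion and the H/S extraction with OpenCV and NumPy; the use of OpenCV, the BGR input ordering, the file name `frame.png`, and the rectangular placeholder selection are assumptions rather than part of the claimed method.

```python
import cv2
import numpy as np

# The image is assumed to be an H x W x 3 BGR frame, e.g. loaded with cv2.imread.
image = cv2.imread("frame.png")

# Convert every pixel from the RGB/BGR colour space to the HSV colour space.
# With 8-bit input, OpenCV stores H in [0, 179] and S, V in [0, 255].
hsv = cv2.cvtColor(image, cv2.COLOR_BGR2HSV)

# selected_mask stands in for the region chosen by the region selecting operation.
selected_mask = np.zeros(image.shape[:2], dtype=bool)
selected_mask[100:200, 150:300] = True  # placeholder rectangular selection

# First colour information: H and S of every first pixel point inside the selection.
first_color_info = hsv[selected_mask][:, :2]     # shape (N1, 2), columns H and S

# Second colour information: H and S of every second pixel point outside the selection.
second_color_info = hsv[~selected_mask][:, :2]   # shape (N2, 2)
```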
16. The method of claim 5, wherein the determining the similarity between the second color information corresponding to each second pixel point and the first color information comprises:
determining, for each second pixel point, a Mahalanobis distance between the second color information corresponding to the second pixel point and the class formed by the first color information corresponding to the first pixel points in the region selected by the region selecting operation;
determining, according to the Mahalanobis distance, the similarity between the second color information corresponding to each second pixel point and the first color information, wherein the Mahalanobis distance is inversely proportional to the corresponding similarity.
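A minimal sketch of the similarity measure in claim 16, assuming the (H, S) pairs from the previous example; the covariance regularization and the exact inverse-proportional mapping are choices made here for illustration, since the claim only requires that the Mahalanobis distance be inversely proportional to the similarity.

```python
import numpy as np

def mahalanobis_similarity(first_color_info, second_color_info, eps=1e-6):
    """Similarity of each outside pixel's (H, S) pair to the class formed by the
    (H, S) pairs of the selected region, via the Mahalanobis distance."""
    first = np.asarray(first_color_info, dtype=np.float64)    # (N1, 2)
    second = np.asarray(second_color_info, dtype=np.float64)  # (N2, 2)

    # Class statistics of the selected region in (H, S) space.
    mean = first.mean(axis=0)
    cov = np.cov(first, rowvar=False)
    cov_inv = np.linalg.inv(cov + eps * np.eye(2))  # regularized inverse covariance

    # Mahalanobis distance of every second pixel point to the class.
    diff = second - mean
    dist = np.sqrt(np.einsum('ij,jk,ik->i', diff, cov_inv, diff))

    # One inverse-proportional choice: the smaller the distance, the larger the similarity.
    return 1.0 / (dist + eps)
```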
17. The method of claim 1, wherein the fusing the superimposed data with the original data corresponding to the at least partial region and generating the shadow effect corresponding to the virtual object in the at least partial region comprises:
deforming the shape of the virtual object to obtain a shadow shape corresponding to the virtual object;
determining, from the bright region, at least a partial region that conforms to the shadow shape;
fusing the superimposed data with the original data corresponding to the at least partial region, and generating the shadow effect corresponding to the virtual object in the at least partial region.
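As a rough illustration of the first step of claim 17, the sketch below warps a binary silhouette of the virtual object into a shadow-shaped mask; the affine shear-and-squash deformation, the parameter values, and the function name are all assumptions, since the claim does not fix a particular deformation.

```python
import cv2
import numpy as np

def shadow_shape_from_silhouette(silhouette, shear=0.6, squash=0.45):
    """Deform a binary silhouette of the virtual object into a shadow-shaped mask.
    The horizontal shear plus vertical squash used here merely stands in for a
    deformation derived from the estimated light direction."""
    h, w = silhouette.shape
    # Affine transform: x' = x + shear * y, y' = squash * y + h * (1 - squash),
    # which keeps the bottom edge in place and slants the shape sideways.
    matrix = np.float32([[1.0, shear, 0.0],
                         [0.0, squash, h * (1.0 - squash)]])
    warped = cv2.warpAffine(silhouette, matrix, (w, h), flags=cv2.INTER_NEAREST)
    return warped > 0

# Example: a rectangular silhouette warped into a slanted, flattened shadow mask,
# which can then be matched against the bright region to pick the partial region to fuse.
silhouette = np.zeros((240, 320), dtype=np.uint8)
silhouette[60:220, 140:180] = 255
shadow_mask = shadow_shape_from_silhouette(silhouette)
```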
18. An apparatus for generating shadows, the apparatus comprising:
a first determining unit, configured to determine a first region having the same texture from an image comprising at least one real object, the first region comprising a bright region illuminated by a real light source and a shadow region generated by the real object under projection of the real light source;
a second determining unit, configured to determine superimposed data corresponding to the bright region according to shadow information corresponding to the shadow region in the first region and original data of the bright region, the superimposed data being used to generate, in the bright region, a shadow effect consistent with the shadow region; and
a fusion unit, configured to fuse the superimposed data with original data corresponding to at least a partial region of the bright region and generate a shadow effect corresponding to a virtual object in the at least partial region.
19. An electronic device, comprising:
a processor; and
a memory for storing a data processing program, wherein when the electronic device is powered on, the processor runs the program to perform the method of any one of claims 1-17.
20. A computer-readable storage medium, characterized in that the storage medium stores a data processing program which, when run by a processor, performs the method of any one of claims 1-17.
CN202410232332.6A 2024-02-29 2024-02-29 Shadow generation method and device and electronic equipment Pending CN118298094A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410232332.6A CN118298094A (en) 2024-02-29 2024-02-29 Shadow generation method and device and electronic equipment


Publications (1)

Publication Number Publication Date
CN118298094A true CN118298094A (en) 2024-07-05

Family

ID=91683837

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410232332.6A Pending CN118298094A (en) 2024-02-29 2024-02-29 Shadow generation method and device and electronic equipment

Country Status (1)

Country Link
CN (1) CN118298094A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination