CN116580126B - Custom lamp effect configuration method and system based on key frame - Google Patents
- Publication number
- CN116580126B (application CN202310592906.6A)
- Authority
- CN
- China
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T11/00—2D [Two Dimensional] image generation
- G06T11/20—Drawing from basic elements, e.g. lines or circles
- G06T11/203—Drawing of straight lines or curves
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/0464—Convolutional networks [CNN, ConvNet]
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02B—CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO BUILDINGS, e.g. HOUSING, HOUSE APPLIANCES OR RELATED END-USER APPLICATIONS
- Y02B20/00—Energy efficient lighting technologies, e.g. halogen lamps or gas discharge lamps
- Y02B20/40—Control techniques providing energy savings, e.g. smart controller or presence detection
Abstract
A keyframe-based custom light effect configuration method and system: it accepts a light effect hand-drawn curve image input by the user; takes the start point and the end point of the curve in the light effect hand-drawn curve image as a first key frame and a second key frame; and determines an interpolated rate of change from the first key frame to the second key frame based on the shape of the curve in the light effect hand-drawn curve image. In this way, the hand-drawn graph input by the user can be optimized toward the user's intention, so as to improve the final custom light effect.
Description
Technical Field
The application relates to the technical field of intelligent configuration, in particular to a custom light effect configuration method and system based on key frames.
Background
In recent years, colored decorative lamps have become popular on the market, offering dynamic light-changing effects; in particular, some smart decorative lamps can be controlled by sending light-control commands from an APP. However, existing decorative lamps support only a limited set of light effect types, and configuration is not flexible enough when a user wants to customize a light effect.
Thus, an optimized custom lighting configuration scheme is desired.
Disclosure of Invention
The present application has been made to solve the above technical problems. An embodiment of the application provides a key frame-based custom light effect configuration method and system, which accepts a light effect hand-drawn curve image input by the user; takes the start point and the end point of the curve in the light effect hand-drawn curve image as a first key frame and a second key frame; and determines an interpolated rate of change from the first key frame to the second key frame based on the shape of the curve in the light effect hand-drawn curve image. In this way, the hand-drawn graph input by the user can be optimized toward the user's intention, so as to improve the final custom light effect.
In a first aspect, a method for configuring a custom light effect based on a keyframe is provided, which includes: receiving a light effect hand-drawn curve image input by a user; taking the starting point and the end point of a curve in the light effect hand-drawn curve image as a first key frame and a second key frame; and determining an interpolated varying curvature from the first keyframe to the second keyframe based on a shape of a curve in the light effect hand drawn curve image.
In the above key frame-based custom light effect configuration method, determining an interpolated rate of change from the first key frame to the second key frame based on the shape of the curve in the light effect hand-drawn curve image includes: performing image noise reduction on the light effect hand-drawn curve image to obtain a noise-reduced hand-drawn curve image; performing image blocking processing on the noise-reduced hand-drawn curve image to obtain a sequence of hand-drawn curve image blocks; passing each hand-drawn curve image block in the sequence through a shallow feature extractor based on a convolutional neural network model to obtain a plurality of hand-drawn curve image block feature matrices; arranging the hand-drawn curve image block feature matrices according to the positions of the image blocks to obtain a hand-drawn curve image global feature matrix; passing the hand-drawn curve image global feature matrix through a bidirectional attention mechanism to obtain an optimized hand-drawn curve image global feature matrix; performing class probability density discrimination enhancement on the optimized hand-drawn curve image global feature matrix to obtain a re-optimized hand-drawn curve image global feature matrix; passing the re-optimized hand-drawn curve image global feature matrix through a decoder to generate an optimized hand-drawn curve image; and determining the interpolated rate of change from the first key frame to the second key frame based on the shape of the curve in the optimized hand-drawn curve image.
In the above key frame-based custom light effect configuration method, performing image noise reduction on the light effect hand-drawn curve image to obtain a noise-reduced hand-drawn image includes: performing bilinear filtering on the light effect hand-drawn curve image to obtain the noise-reduced hand-drawn image.
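The "bilinear filtering" here most likely corresponds to bilateral filtering, an edge-preserving denoiser (plain bilinear filtering is an interpolation method, not a noise reducer). A minimal NumPy sketch of such a denoising step, with the function name and all parameter values chosen here purely for illustration:

```python
import numpy as np

def bilateral_filter(img, radius=2, sigma_s=2.0, sigma_r=0.1):
    """Edge-preserving denoising: each output pixel is a weighted mean of its
    neighborhood, weighted by both spatial distance and intensity difference,
    so curve edges survive while pixel outliers are smoothed away."""
    h, w = img.shape
    pad = np.pad(img, radius, mode="edge")
    # Precompute the spatial Gaussian kernel once.
    ax = np.arange(-radius, radius + 1)
    xx, yy = np.meshgrid(ax, ax)
    spatial = np.exp(-(xx**2 + yy**2) / (2 * sigma_s**2))
    out = np.empty_like(img)
    for i in range(h):
        for j in range(w):
            patch = pad[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            # Range weight: penalize pixels whose intensity differs from the center.
            rng = np.exp(-(patch - img[i, j])**2 / (2 * sigma_r**2))
            wgt = spatial * rng
            out[i, j] = (wgt * patch).sum() / wgt.sum()
    return out
```

With a small sigma_r, pixels across a curve edge get near-zero weight, which is why the stroke itself stays sharp while hesitation jitter around it is averaged out.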
In the above key frame-based custom light effect configuration method, performing image blocking processing on the noise-reduced hand-drawn curve image to obtain a sequence of hand-drawn curve image blocks includes: performing uniform image block segmentation on the noise-reduced hand-drawn curve image to obtain the sequence of hand-drawn curve image blocks, wherein every hand-drawn curve image block in the sequence has the same size.
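The uniform blocking step can be sketched as follows (the block size is an assumption; the patent does not fix one):

```python
import numpy as np

def split_into_blocks(img, block=8):
    """Uniformly partition an H x W image into equal-size block x block patches.
    H and W are assumed to be multiples of `block` (pad beforehand otherwise)."""
    h, w = img.shape
    assert h % block == 0 and w % block == 0
    # Reshape to (rows, block, cols, block), then reorder into a patch sequence.
    blocks = img.reshape(h // block, block, w // block, block).swapaxes(1, 2)
    return blocks.reshape(-1, block, block)  # row-major sequence of patches
```

Each patch then carries one sub-segment of the hand-drawn curve, so the later per-block optimization works on small, cheap inputs.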
In the above key frame-based custom light effect configuration method, passing each hand-drawn curve image block in the sequence of hand-drawn curve image blocks through a shallow feature extractor based on a convolutional neural network model to obtain a plurality of hand-drawn curve image block feature matrices includes: performing convolution processing, pooling processing and nonlinear activation processing on the input data in the forward pass of each layer of the convolutional-neural-network-based shallow feature extractor, and outputting the shallow feature maps of the extractor as the plurality of hand-drawn curve image block feature matrices.
In the key frame-based self-defined lighting effect configuration method, the shallow layer feature extractor based on the convolutional neural network model comprises 3-5 convolutional layers.
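A NumPy sketch of such a shallow extractor — convolution, ReLU activation, and 2x2 max pooling per layer, three layers deep — illustrating the per-layer processing described above. The kernels here are placeholders; a real extractor learns them during training:

```python
import numpy as np

def conv2d(x, k):
    """'Valid' 2-D convolution (cross-correlation) of a single-channel map."""
    kh, kw = k.shape
    h, w = x.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = (x[i:i + kh, j:j + kw] * k).sum()
    return out

def max_pool(x, s=2):
    """Non-overlapping s x s max pooling; trailing rows/cols are dropped."""
    h, w = x.shape[0] // s * s, x.shape[1] // s * s
    return x[:h, :w].reshape(h // s, s, w // s, s).max(axis=(1, 3))

def shallow_extractor(block, kernels):
    """3-5 conv layers, each: convolution -> ReLU -> 2x2 max pooling.
    Keeping the network shallow preserves edge/shape/texture features,
    which would be submerged by deeper convolutional encoding."""
    x = block
    for k in kernels:                        # one kernel per layer in this sketch
        x = np.maximum(conv2d(x, k), 0.0)    # nonlinear activation (ReLU)
        x = max_pool(x)                      # pooling
    return x                                 # the block's feature matrix
```

The 3-5 layer bound in the text is exactly why a plain stack like this suffices: the goal is shallow edge/shape features of the sub-curve, not deep semantic abstractions.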
In the above key frame-based custom light effect configuration method, passing the hand-drawn curve image global feature matrix through a bidirectional attention mechanism to obtain an optimized hand-drawn curve image global feature matrix includes: pooling the hand-drawn curve image global feature matrix along the horizontal direction and the vertical direction respectively to obtain a first pooling vector and a second pooling vector; performing association coding on the first pooling vector and the second pooling vector to obtain a bidirectional association matrix; inputting the bidirectional association matrix into a Sigmoid activation function to obtain a bidirectional association weight matrix; and computing the point-wise multiplication of the bidirectional association weight matrix and the hand-drawn curve image global feature matrix to obtain the optimized hand-drawn curve image global feature matrix.
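These four steps can be sketched in NumPy as follows. The "association coding" of the two pooled vectors is realized here as an outer product — a common choice, but an assumption, since the patent does not pin the operator down:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def bidirectional_attention(F):
    """Spatial attention over a feature matrix F (H x W):
    1) mean-pool along the horizontal and the vertical direction,
    2) associate the two pooled vectors into a bidirectional matrix,
    3) squash it to weights with Sigmoid,
    4) reweight F point-wise with those weights."""
    row_pool = F.mean(axis=1, keepdims=True)   # H x 1: pooled along horizontal
    col_pool = F.mean(axis=0, keepdims=True)   # 1 x W: pooled along vertical
    assoc = row_pool @ col_pool                # H x W bidirectional association
    weights = sigmoid(assoc)                   # bidirectional weight matrix
    return weights * F                         # point-wise multiplication
```

Because the weight at position (i, j) combines the i-th row statistic with the j-th column statistic, attention is strengthened along both spatial dimensions at once, which is the stated purpose of the mechanism.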
In the above key frame-based custom light effect configuration method, performing class probability density discrimination enhancement on the optimized hand-drawn curve image global feature matrix to obtain a re-optimized hand-drawn curve image global feature matrix includes: performing class probability density discrimination enhancement on the optimized hand-drawn curve image global feature matrix with the following optimization formula to obtain the re-optimized hand-drawn curve image global feature matrix; wherein the optimization formula is:

m_i' = (m_i − μ) / σ

wherein μ and σ are the mean value and standard deviation of the set of feature values of all positions in the optimized hand-drawn curve image global feature matrix, m_i is the feature value of the i-th position of the optimized hand-drawn curve image global feature matrix, and m_i' is the feature value of the i-th position of the re-optimized hand-drawn curve image global feature matrix.
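Given the symbols defined for this step (matrix-wide mean μ and standard deviation σ, per-position values m_i mapped to m_i'), the enhancement reduces to a per-position standardization of the feature matrix; a sketch under that reading:

```python
import numpy as np

def class_probability_density_enhancement(F):
    """Standardize every feature value against the mean and standard deviation
    of the whole matrix, producing a zero-mean, unit-variance feature matrix.
    This sharpens the probability-density discrimination between the local
    distributions of the matrix before it is handed to the decoder."""
    mu = F.mean()
    sigma = F.std()
    return (F - mu) / sigma
```

After this step every local patch of features is expressed on a common Gaussian scale, so no region dominates the decoder's regression purely through its magnitude.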
In the above-mentioned key frame-based custom lighting configuration method, the decoder includes a plurality of deconvolution layers.
In a second aspect, a key frame based custom light efficacy configuration system is provided, comprising: the image receiving module is used for receiving the light effect hand-drawn curve image input by the user; the key frame generation module is used for taking the starting point and the end point of a curve in the light effect hand-drawn curve image as a first key frame and a second key frame; and an interpolation change curvature generation module for determining an interpolation change curvature from the first keyframe to the second keyframe based on a shape of a curve in the light effect hand-drawn curve image.
Compared with the prior art, the key frame-based self-defined light effect configuration method and the system thereof provided by the application accept the light effect hand-drawn curve image input by a user; taking the starting point and the end point of a curve in the light effect hand-drawn curve image as a first key frame and a second key frame; and determining an interpolated varying curvature from the first keyframe to the second keyframe based on a shape of a curve in the light effect hand drawn curve image. In this way, the hand-drawn graph input by the user can be optimized based on the user intention to reduce friction so as to improve the final light effect self-defining effect.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings used in the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present application, and other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a graph of linear interpolation change rate according to an embodiment of the present application.
FIG. 2 is a graph of curve interpolation change rate according to an embodiment of the present application.
FIG. 3 is a graph of polyline interpolation change rate according to an embodiment of the present application.
FIG. 4 is a graph of a custom interpolation rate of change according to an embodiment of the present application.
Fig. 5 is a schematic view of a scenario of a custom lighting configuration method based on a keyframe according to an embodiment of the present application.
Fig. 6 is a flowchart of a key frame based custom light efficacy configuration method according to an embodiment of the present application.
Fig. 7 is a flowchart of the sub-steps of step 130 in the key frame based custom light efficacy configuration method according to an embodiment of the present application.
Fig. 8 is a schematic diagram of the architecture of step 130 in the key frame-based custom light effect configuration method according to an embodiment of the present application.
Fig. 9 is a flowchart of the sub-steps of step 135 in the key frame based custom light efficacy configuration method according to an embodiment of the present application.
Fig. 10 is a block diagram of a key frame based custom light efficacy configuration system in accordance with an embodiment of the present application.
Detailed Description
The following description of the technical solutions according to the embodiments of the present application will be given with reference to the accompanying drawings in the embodiments of the present application, and it is apparent that the described embodiments are some embodiments of the present application, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the application without making any inventive effort, are intended to be within the scope of the application.
Unless defined otherwise, all technical and scientific terms used in the embodiments of the application have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs. The terminology used in the present application is for the purpose of describing particular embodiments only and is not intended to limit the scope of the present application.
In describing embodiments of the present application, unless otherwise indicated and limited thereto, the term "connected" should be construed broadly, for example, it may be an electrical connection, or may be a communication between two elements, or may be a direct connection, or may be an indirect connection via an intermediate medium, and it will be understood by those skilled in the art that the specific meaning of the term may be interpreted according to circumstances.
It should be noted that, the term "first\second\third" related to the embodiment of the present application is merely to distinguish similar objects, and does not represent a specific order for the objects, it is to be understood that "first\second\third" may interchange a specific order or sequence where allowed. It is to be understood that the "first\second\third" distinguishing objects may be interchanged where appropriate such that embodiments of the application described herein may be practiced in sequences other than those illustrated or described herein.
The popular colored decorative lamps on the market provide some dynamic changing effects. A smart decorative colored lamp can receive light-control commands for its lamp strip from an App, allowing the user to define a dynamic effect. The control chain is: App issues command → lamp stores command → lamp MCU executes command → LED bead driver → beads light up. For defining color light effect commands, the industry uses key frame technology, which makes it convenient for users to set custom light effects.
For customizing the color light effect, key frame technology is common in the industry, making it convenient for users to set a custom light effect. It will be appreciated by those of ordinary skill in the art that, owing to persistence of vision, the light must refresh at a minimum of 24 frames per second to appear flicker-free to the naked eye. Decorative colored lamps refresh even faster; assume lighting at 100 frames per second. The light effect of any one frame can be decomposed into hue (H), brightness (B) and saturation (S). If the user wants the light effect to change from H1B1S1 (first key frame) to H2B2S2 (second key frame) within one second, there is no need to set all 100 frames that make up that second, nor the HSB parameters of each frame: only the HSB at second 0 and at second 1 need to be set, and the MCU computes the interpolation itself.
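The constant-velocity interpolation the MCU performs between the two key frames can be sketched as follows (the 100-frame count comes from the example above; the function name is illustrative):

```python
def interpolate_hsb(kf1, kf2, frames=100):
    """Equal-difference (constant-velocity) interpolation of the three HSB
    channels between two key frames; only the two endpoints are stored,
    and every in-between frame is computed on the fly."""
    h1, s1, b1 = kf1
    h2, s2, b2 = kf2
    seq = []
    for i in range(frames + 1):
        t = i / frames                       # 0.0 at key frame 1, 1.0 at key frame 2
        seq.append((h1 + (h2 - h1) * t,
                    s1 + (s2 - s1) * t,
                    b1 + (b2 - b1) * t))
    return seq
```

This is exactly the industry default the next paragraph contrasts against: a straight line in the rate graph, with no user control over the rate.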
In particular, the technical solution of the present application introduces the concept of a rate of change, i.e., at what rate H1B1S1 (the first key frame) changes into H2B2S2 (the second key frame), together with a graphical way of setting it. Those of ordinary skill in the art will appreciate that the common interpolation method in the industry is arithmetic (equal-difference) interpolation, i.e., constant-velocity change, which does not allow user customization. The technical solution of the present application provides various interpolation change rates in a graphical manner, and allows the user to customize the rate of change with a hand-drawn curve.
Specifically, among the curves provided by default in the technical solution of the present application: if the curve is a straight line, as shown in FIG. 1, it represents arithmetic interpolation; if the curve is a polyline, as shown in FIG. 3, it represents a jump at a certain time; if it is a smooth curve, as shown in FIG. 2, it represents a more dynamic rate of change. Besides selecting one of the default curves, the user can also draw a curve by hand to customize the rate of change, and the system performs the interpolation operation by curve fitting.
Still further, the three parameters H, S and B each follow their own independent rate of change from key frame 1 to key frame 2. For example, H changes uniformly from key frame 1 to key frame 2, S adopts a curvilinear rate, and B uses a jump, as shown in FIG. 4. Further, the curve from key frame 1 to key frame 2 need not be monotonically increasing: in terms of value change, the value may overshoot the target value of key frame 2 before returning to the target.
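Per-channel independent rates of change can be sketched with three illustrative easing curves standing in for the default straight-line, curvilinear, and jump graphs (the specific easing functions below are assumptions, not the patent's fitted curves):

```python
def ease_linear(t):
    """Arithmetic interpolation: a straight line in the rate graph (FIG. 1 style)."""
    return t

def ease_smooth(t):
    """A more dynamic curvilinear rate (FIG. 2 style): smoothstep easing."""
    return t * t * (3 - 2 * t)

def ease_jump(t, at=0.5):
    """Polyline rate (FIG. 3 style): hold the start value, then jump at time `at`."""
    return 0.0 if t < at else 1.0

def interpolate_channels(kf1, kf2, easings, frames=100):
    """Each of the three HSB channels follows its own rate-of-change curve."""
    return [tuple(a + (b - a) * ease(i / frames)
                  for a, b, ease in zip(kf1, kf2, easings))
            for i in range(frames + 1)]
```

Usage mirrors the FIG. 4 example: H linear, S curvilinear, B jumping, e.g. `interpolate_channels((0, 0, 0), (360, 100, 100), (ease_linear, ease_smooth, ease_jump))`. A hand-drawn curve would simply replace one of these easing functions with the fitted curve evaluated at t.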
In the present application, on the one hand, the abscissa is monotonically increasing: from key frame 1 to key frame 2 the change is over time, and time cannot flow backwards, so curve drawing is constrained such that there is one and only one point for any given abscissa. On the other hand, on the ordinate of the demonstration graphs, the value of key frame 2 > the value of key frame 1 is merely for convenience of illustration; the actual configuration imposes no such requirement. Whether the value of key frame 2 is greater than, equal to, or less than the value of key frame 1, the fitting can still follow the demonstration curve at calculation time, e.g., by horizontally or vertically flipping the curve, rotating it, etc. For example, offset every value by +100% of its range — H by +360 (since the full color wheel is 360), S or B by +100 — so that the value of key frame 2 is always greater than the value of key frame 1; then compute the value of each intermediate frame by curve fitting, and finally subtract the offset from each value. These schemes are very simple to implement in software.
The key points of the application are as follows: 1. the curve graphically represents the manner in which the interpolation from keyframe 1 to keyframe 2 is calculated. 2. Allowing the user to draw a curve by hand.
In particular, when generating the interpolation change rate between key frame 1 and key frame 2 from the user's hand-drawn curve, the input hand-drawn curve may deviate from the user's intention owing to input deviations of the drawing software and to hesitation or insufficient skill on the user's part while drawing. An image processing scheme is therefore desired to optimize the hand-drawn curve input by the user toward the user's intention, so as to improve the final light effect customization.
Specifically, in the technical scheme of the application, firstly, image noise reduction is performed on the light effect hand-drawn curve image to obtain a noise-reduced hand-drawn image. It should be appreciated that the user may introduce a number of outliers of image pixels (represented as image noise on the image) due to hesitation or insufficient skill in drawing the curve, so that after receiving the light effect hand-drawn curve image, the light effect hand-drawn curve image is first image-noise-reduced, for example, in a specific embodiment, the light effect hand-drawn curve image may be bilinear filtered to achieve image noise reduction.
And then, carrying out image blocking processing on the noise-reduced hand-painted curve image to obtain a sequence of hand-painted curve image blocks. It should be understood that, in the hand-drawn curve image after noise reduction, the curve may be regarded as formed by splicing multiple sections of sub-curves, and the data processing amount and the difficulty of image analysis and processing may be reduced by performing overall collaborative optimization after the multiple sections of sub-curves are individually optimized. Therefore, in the technical scheme of the application, the image segmentation processing is performed on the hand-drawn curve image after noise reduction to obtain the sequence of the hand-drawn curve image blocks. For example, in a specific example of the present application, the noise-reduced hand-drawn curve image is subjected to uniform image block segmentation to obtain the sequence of hand-drawn curve image blocks.
And then, respectively passing each hand-drawn curve image block in the sequence of hand-drawn curve image blocks through a shallow feature extractor based on a convolutional neural network model to obtain a plurality of hand-drawn curve image block feature matrixes. That is, in the technical solution of the present application, the convolutional neural network model is used as a feature extractor to capture the image features of the sub-curve segments in the hand-drawn curve image blocks. Here, it should be appreciated by those of ordinary skill in the art that the convolutional neural network model has the following characteristics in terms of extracting image features: shallow features are edge, shape, texture, etc., while deep features are abstract features of objects, structures, etc., and as convolutional coding deepens, shallow features are progressively submerged or even vanished. Therefore, in the technical scheme of the application, the number of the convolution layers of the convolution neural network model is strictly controlled, and in particular, in the technical scheme of the application, the convolution neural network model comprises 3-5 convolution layers, so that the convolution neural network model can fully and accurately capture the image shallow layer characteristics of the sub-curve segments in each hand-drawn curve image block.
After the characteristic matrixes of the plurality of hand-drawn curve image blocks are obtained, the characteristic matrixes of the plurality of hand-drawn curve image blocks are arranged according to the positions of image blocks so as to obtain a global characteristic matrix of the hand-drawn curve image. That is, after extracting the image features of each sub-curve segment in the hand-drawn curve, rearranging the image features of each sub-curve segment into a hand-drawn curve image global feature matrix according to the segmentation positions of the image segments. Further, the hand-drawn curve image global feature matrix may be decoded by a decoder to generate a generation optimized hand-drawn curve image.
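The rearrangement of per-block feature matrices into the global feature matrix can be sketched as follows (the grid dimensions are whatever the earlier blocking step produced):

```python
import numpy as np

def assemble_global_matrix(feats, grid_rows, grid_cols):
    """Arrange per-block feature matrices back into one global feature matrix
    according to each block's position in the original image grid.
    `feats` is the row-major sequence produced by the blocking step."""
    rows = [np.hstack(feats[r * grid_cols:(r + 1) * grid_cols])
            for r in range(grid_rows)]
    return np.vstack(rows)
```

Because the sequence order matches the row-major order of the segmentation, each sub-curve's features land exactly where that sub-curve sat in the original image, preserving spatial layout for the attention and decoding stages.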
In particular, considering that in the technical solution of the present application, the contribution degree of the feature values of each position of the hand-drawn curve image global feature matrix in the spatial dimension thereof to decoding generation based on a decoder is different, in order to fully utilize the spatial feature distribution significance of the hand-drawn curve image global feature matrix, in the technical solution of the present application, before the hand-drawn curve image global feature matrix is input into the decoder for decoding generation, the hand-drawn curve image global feature matrix is subjected to a bidirectional attention mechanism to obtain an optimized hand-drawn curve image global feature matrix. Here, the bidirectional attention mechanism module further performs attention weight strengthening on the row space and the column space dimensions of the feature matrix to strengthen the space dimension distribution on the attention dimension, so that the overall distribution consistency of the hand-drawn curve image global feature matrix on the space dimension can be improved.
However, the consistency of the overall distribution of the global feature matrix of the optimized hand-drawn curve image in the spatial dimension may further cause a problem of distinguishing degree in the probability density dimension between the local distributions of the global feature matrix of the optimized hand-drawn curve image, thereby affecting the accuracy of the decoding regression of the global feature matrix of the optimized hand-drawn curve image.
Thus, the optimized hand-drawn curve image global feature matrix, e.g., denoted M = {m_i}, is preferably subjected to an orthogonalization of the manifold surface dimension of its Gaussian probability density, specifically:

m_i' = (m_i − μ) / σ

where μ and σ are the mean value and standard deviation of the feature value set {m_i}, and m_i' is the feature value of the i-th position of the optimized hand-drawn curve image global feature matrix after optimization.
Here, by characterizing the unit tangent vector modulo length and the unit normal vector modulo length of the manifold surface with the mean and standard deviation of the high-dimensional feature set expressing the manifold surface, an orthogonal projection with unit modulo length is performed on the tangent plane and the normal plane of the manifold surface of the high-dimensional feature manifold. In this way the probability density of the high-dimensional features is dimensionally reconstructed on the basis of the basic structure of the Gaussian feature manifold geometry, and the accuracy with which the optimized hand-drawn curve image global feature matrix is decoded by the decoder is improved through the dimension orthogonalization of the improved probability density.
That is, the re-optimized hand-drawn curve image global feature matrix is further passed through a decoder to generate an optimized hand-drawn curve image. In a specific example of the present application, the decoder comprises a plurality of deconvolution layers to perform deconvolution decoding generation by deconvolution operations cascaded with each other. Then, based on the shape of the curve in the optimized hand-drawn curve image, the interpolated varying curvature from the first keyframe to the second keyframe is determined.
Fig. 5 is a schematic view of a scenario of a custom lighting configuration method based on a keyframe according to an embodiment of the present application. As shown in fig. 5, in the application scenario, first, a light effect hand-drawn curve image (e.g., C as illustrated in fig. 5) input by a user is accepted; the obtained light effect hand-drawn curve image is then input into a server (e.g., S as illustrated in fig. 5) deployed with a custom light effect configuration algorithm based on a key frame, wherein the server is capable of processing the light effect hand-drawn curve image based on the custom light effect configuration algorithm of a key frame to determine the interpolated varying curvature from the first key frame to the second key frame based on the shape of the curve in the optimized hand-drawn curve image.
Having described the basic principles of the present application, various non-limiting embodiments of the present application will now be described in detail with reference to the accompanying drawings.
In one embodiment of the present application, fig. 6 is a flowchart of a key frame-based custom light effect configuration method according to an embodiment of the present application. As shown in fig. 6, the key frame-based custom light effect configuration method 100 according to an embodiment of the present application includes: 110, accepting a light effect hand-drawn curve image input by a user; 120, taking the starting point and the end point of a curve in the light effect hand-drawn curve image as a first key frame and a second key frame; and, 130, determining an interpolated varying curvature from the first key frame to the second key frame based on the shape of the curve in the light effect hand-drawn curve image.
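Step 130 turns the drawn shape into an interpolation quantity. The patent does not give a formula for the interpolated varying curvature, so the following is only a minimal sketch, assuming the curve is available as sampled 2-D points, that computes the standard discrete curvature profile between the two key frames (the names `discrete_curvature`, `key_frame_1` and `key_frame_2` are illustrative, not from the patent):

```python
import numpy as np

def discrete_curvature(points):
    """Curvature at each sample of a 2-D polyline.

    Uses the standard formula k = |x'y'' - y'x''| / (x'^2 + y'^2)^(3/2)
    with finite differences; the ratio is invariant to the (uniform)
    parameter spacing, so np.gradient's unit spacing is fine.
    """
    x, y = points[:, 0], points[:, 1]
    dx, dy = np.gradient(x), np.gradient(y)
    ddx, ddy = np.gradient(dx), np.gradient(dy)
    denom = (dx ** 2 + dy ** 2) ** 1.5
    return np.abs(dx * ddy - dy * ddx) / np.maximum(denom, 1e-12)

# The start and end points of the curve act as the two key frames, and the
# curvature profile in between would drive the light-effect interpolation.
t = np.linspace(0.0, np.pi, 200)
curve = np.stack([np.cos(t), np.sin(t)], axis=1)  # sampled unit half-circle
key_frame_1, key_frame_2 = curve[0], curve[-1]
kappa = discrete_curvature(curve)
```

On the sampled unit half-circle the interior curvature values come out close to 1, the reciprocal of the radius, which is a quick sanity check on the discretization.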
Fig. 7 is a flowchart of the sub-steps of step 130 in the key frame-based custom light effect configuration method according to an embodiment of the present application. As shown in fig. 7, determining an interpolated varying curvature from the first key frame to the second key frame based on the shape of the curve in the light effect hand-drawn curve image includes: 131, performing image noise reduction on the light effect hand-drawn curve image to obtain a noise-reduced hand-drawn image; 132, performing image blocking processing on the noise-reduced hand-drawn curve image to obtain a sequence of hand-drawn curve image blocks; 133, passing each hand-drawn curve image block in the sequence of hand-drawn curve image blocks through a shallow feature extractor based on a convolutional neural network model to obtain a plurality of hand-drawn curve image block feature matrices; 134, arranging the plurality of hand-drawn curve image block feature matrices according to the positions of the image blocks to obtain a hand-drawn curve image global feature matrix; 135, passing the hand-drawn curve image global feature matrix through a bidirectional attention mechanism to obtain an optimized hand-drawn curve image global feature matrix; 136, performing class probability density discrimination enhancement on the optimized hand-drawn curve image global feature matrix to obtain a re-optimized hand-drawn curve image global feature matrix; 137, passing the re-optimized hand-drawn curve image global feature matrix through a decoder to generate an optimized hand-drawn curve image; and, 138, determining the interpolated varying curvature from the first key frame to the second key frame based on the shape of the curve in the optimized hand-drawn curve image.
Fig. 8 is a schematic diagram of the architecture of step 130 in the key frame-based custom light effect configuration method according to an embodiment of the present application. As shown in fig. 8, in the network architecture, first, image noise reduction is performed on the light effect hand-drawn curve image to obtain a noise-reduced hand-drawn image; then, image blocking processing is performed on the noise-reduced hand-drawn curve image to obtain a sequence of hand-drawn curve image blocks; then, each hand-drawn curve image block in the sequence of hand-drawn curve image blocks is respectively passed through a shallow feature extractor based on a convolutional neural network model to obtain a plurality of hand-drawn curve image block feature matrices; then, the plurality of hand-drawn curve image block feature matrices are arranged according to the positions of the image blocks to obtain a hand-drawn curve image global feature matrix; then, the hand-drawn curve image global feature matrix is passed through a bidirectional attention mechanism to obtain an optimized hand-drawn curve image global feature matrix; then, class probability density discrimination enhancement is performed on the optimized hand-drawn curve image global feature matrix to obtain a re-optimized hand-drawn curve image global feature matrix; then, the re-optimized hand-drawn curve image global feature matrix is passed through a decoder to generate an optimized hand-drawn curve image; and finally, the interpolated varying curvature from the first key frame to the second key frame is determined based on the shape of the curve in the optimized hand-drawn curve image.
Specifically, in step 131, image noise reduction is performed on the light effect hand-drawn curve image to obtain a noise-reduced hand-drawn image. In particular, when generating the interpolation change rate between key frame 1 and key frame 2 from the user's hand-drawn curve, the input hand-drawn curve may deviate from the user's intention because of input-capture deviations in the drawing software and because of the user's hesitation or insufficient skill when drawing the curve. An image processing scheme is therefore desired that optimizes the user's hand-drawn curve according to the user's intention, so as to improve the final light effect customization result.
Specifically, in the technical solution of the present application, first, image noise reduction is performed on the light effect hand-drawn curve image to obtain a noise-reduced hand-drawn image. It should be appreciated that the user may introduce a number of outlier image pixels (appearing as image noise) due to hesitation or insufficient skill when drawing the curve. Therefore, after the light effect hand-drawn curve image is received, it is first subjected to image noise reduction; for example, in a specific embodiment, bilinear filtering may be applied to the light effect hand-drawn curve image to achieve image noise reduction.
Performing image noise reduction on the light effect hand-drawn curve image to obtain a noise-reduced hand-drawn image includes: performing bilinear filtering on the light effect hand-drawn curve image to obtain the noise-reduced hand-drawn image.
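As a concrete illustration of the noise-reduction step, the sketch below implements an edge-preserving, bilateral-style filter in plain NumPy. Note the hedge: the patent names bilinear filtering, and the bilateral weighting used here (as well as the function and parameter names) is this sketch's assumption, chosen because edge-preserving smoothing removes stray outlier pixels while keeping the drawn stroke:

```python
import numpy as np

def bilateral_filter(img, radius=2, sigma_s=2.0, sigma_r=0.1):
    """Edge-preserving noise reduction for a grayscale image in [0, 1].

    Each output pixel is a weighted mean of its neighbourhood; weights fall
    off with both spatial distance (sigma_s) and intensity difference
    (sigma_r), so stray pixels far from the stroke intensity contribute
    little while the stroke edge is preserved.
    """
    h, w = img.shape
    pad = np.pad(img, radius, mode="edge")
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(xs ** 2 + ys ** 2) / (2.0 * sigma_s ** 2))
    out = np.zeros_like(img)
    for i in range(h):
        for j in range(w):
            patch = pad[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            rng = np.exp(-((patch - img[i, j]) ** 2) / (2.0 * sigma_r ** 2))
            weights = spatial * rng
            out[i, j] = np.sum(weights * patch) / np.sum(weights)
    return out
```

A flat region passes through unchanged, while pixel-level noise around a constant intensity is averaged down, which matches the intent of suppressing outlier pixels introduced by hesitant strokes.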
Specifically, in step 132, the noise-reduced hand-drawn curve image is subjected to image blocking processing to obtain a sequence of hand-drawn curve image blocks. It should be understood that, in the noise-reduced hand-drawn curve image, the curve may be regarded as being spliced from multiple sections of sub-curves, and performing overall collaborative optimization after individually optimizing the multiple sections of sub-curves can reduce the data processing amount and the difficulty of image analysis and processing. Therefore, in the technical solution of the present application, image blocking processing is performed on the noise-reduced hand-drawn curve image to obtain the sequence of hand-drawn curve image blocks.
For example, in a specific example of the present application, the noise-reduced hand-drawn curve image is subjected to uniform image block segmentation to obtain the sequence of hand-drawn curve image blocks, where each hand-drawn curve image block in the sequence of hand-drawn curve image blocks has the same size.
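The uniform blocking described above can be sketched as follows, assuming a grayscale image whose sides are divisible by the chosen block size (the function name and return convention are illustrative, not from the patent):

```python
import numpy as np

def split_into_blocks(img, block_size):
    """Uniformly partition an H x W image into equal-size square blocks,
    returned in row-major order together with their grid positions, so
    the blocks can later be reassembled by position."""
    h, w = img.shape
    assert h % block_size == 0 and w % block_size == 0, "image must tile evenly"
    blocks, positions = [], []
    for bi in range(h // block_size):
        for bj in range(w // block_size):
            blocks.append(img[bi * block_size:(bi + 1) * block_size,
                              bj * block_size:(bj + 1) * block_size])
            positions.append((bi, bj))
    return blocks, positions
```

Keeping the grid positions alongside the blocks is what later allows the per-block feature matrices to be arranged back into a global matrix in step 134.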
Specifically, in step 133, each hand-drawn curve image block in the sequence of hand-drawn curve image blocks is respectively passed through a shallow feature extractor based on a convolutional neural network model to obtain a plurality of hand-drawn curve image block feature matrices. That is, in the technical solution of the present application, the convolutional neural network model is used as a feature extractor to capture the image features of the sub-curve segments in the hand-drawn curve image blocks.
Here, it should be appreciated by those of ordinary skill in the art that the convolutional neural network model has the following characteristic when extracting image features: shallow features are edges, shapes, textures and the like, while deep features are abstract features such as object and structure information, and as convolutional coding deepens, the shallow features are progressively submerged or even lost.
Therefore, in the technical solution of the present application, the number of convolutional layers of the convolutional neural network model is strictly controlled; in particular, the convolutional neural network model comprises 3-5 convolutional layers, so that the model can fully and accurately capture the shallow image features of the sub-curve segments in each hand-drawn curve image block.
Wherein, passing each hand-drawn curve image block in the sequence of hand-drawn curve image blocks through a shallow feature extractor based on a convolutional neural network model to obtain a plurality of hand-drawn curve image block feature matrices includes: using each layer of the shallow feature extractor based on the convolutional neural network model to respectively perform convolution processing, pooling processing and nonlinear activation processing on input data in the forward transfer of the layers, so that the shallow-layer output of the shallow feature extractor based on the convolutional neural network model is the plurality of hand-drawn curve image block feature matrices.
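A minimal NumPy forward pass illustrating the convolution/activation/pooling layer sequence with the layer count kept in the 3-5 range; single-channel maps and one kernel per layer are simplifications of this sketch, not the patent's architecture:

```python
import numpy as np

def conv2d(x, kernel):
    """Valid 2-D convolution of a single-channel map with one kernel."""
    kh, kw = kernel.shape
    h, w = x.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * kernel)
    return out

def max_pool(x, size=2):
    """2x2 max pooling (trailing rows/columns that do not fit are dropped)."""
    h, w = x.shape[0] // size * size, x.shape[1] // size * size
    x = x[:h, :w]
    return x.reshape(h // size, size, w // size, size).max(axis=(1, 3))

def shallow_features(block, kernels):
    """Per layer: convolution, ReLU nonlinear activation, then pooling.
    With 3 kernels the depth stays shallow, so edge/shape/texture cues
    of the sub-curve segment survive in the output feature matrix."""
    x = block
    for k in kernels:
        x = np.maximum(conv2d(x, k), 0.0)  # convolution + nonlinear activation
        x = max_pool(x)                    # pooling
    return x

# Example: a 32x32 block through 3 layers of 3x3 kernels yields a small
# feature matrix for that image block.
```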
The convolutional neural network (Convolutional Neural Network, CNN) is an artificial neural network with wide application in fields such as image recognition. A convolutional neural network may include an input layer, a hidden layer and an output layer, where the hidden layer may include a convolutional layer, a pooling layer, an activation layer, a fully connected layer and the like; each layer performs the corresponding operation on its input data and outputs the result to the next layer, and the initial input data yields the final result after the multi-layer operations.
The convolutional neural network model has excellent performance in the aspect of image local feature extraction by taking a convolutional kernel as a feature filtering factor, and has stronger feature extraction generalization capability and fitting capability compared with the traditional image feature extraction algorithm based on statistics or feature engineering.
Specifically, in step 134, the plurality of hand-drawn curve image block feature matrices are arranged according to the positions of the image blocks to obtain a hand-drawn curve image global feature matrix. That is, after extracting the image features of each sub-curve segment in the hand-drawn curve, the image features of the sub-curve segments are rearranged into the hand-drawn curve image global feature matrix according to the segmentation positions of the image blocks. Further, the hand-drawn curve image global feature matrix may be decoded by a decoder to generate an optimized hand-drawn curve image.
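The position-wise arrangement can be sketched as follows, assuming each block's feature matrix has the same shape and the grid positions from the blocking step are available (names are illustrative):

```python
import numpy as np

def arrange_global(feature_mats, positions, grid_shape):
    """Tile per-block feature matrices into one global feature matrix,
    placing each at the grid position of the image block it came from,
    so spatial layout of the original curve is preserved."""
    fh, fw = feature_mats[0].shape
    gh, gw = grid_shape
    out = np.zeros((gh * fh, gw * fw))
    for mat, (bi, bj) in zip(feature_mats, positions):
        out[bi * fh:(bi + 1) * fh, bj * fw:(bj + 1) * fw] = mat
    return out
```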
Specifically, in step 135, the hand-drawn curve image global feature matrix is passed through a bidirectional attention mechanism to obtain an optimized hand-drawn curve image global feature matrix. In particular, in the technical solution of the present application, the feature values at the various positions of the hand-drawn curve image global feature matrix in its spatial dimension contribute differently to the decoder-based decoding generation. Therefore, in order to make full use of the spatial feature distribution significance of the hand-drawn curve image global feature matrix, before the hand-drawn curve image global feature matrix is input into the decoder for decoding generation, it is passed through a bidirectional attention mechanism to obtain the optimized hand-drawn curve image global feature matrix.
Here, the bidirectional attention mechanism module applies attention weighting along both the row-space and column-space dimensions of the feature matrix, strengthening the spatial distribution in the attention dimension, so that the overall distribution consistency of the hand-drawn curve image global feature matrix in the spatial dimension can be improved.
Fig. 9 is a flowchart of the sub-steps of step 135 in the key frame-based custom light effect configuration method according to an embodiment of the present application. As shown in fig. 9, passing the hand-drawn curve image global feature matrix through a bidirectional attention mechanism to obtain an optimized hand-drawn curve image global feature matrix includes: 1351, pooling the hand-drawn curve image global feature matrix along the horizontal direction and the vertical direction respectively to obtain a first pooled vector and a second pooled vector; 1352, performing association coding on the first pooled vector and the second pooled vector to obtain a bidirectional association matrix; 1353, inputting the bidirectional association matrix into a Sigmoid activation function to obtain a bidirectional association weight matrix; and 1354, calculating the position-wise point multiplication between the bidirectional association weight matrix and the hand-drawn curve image global feature matrix to obtain the optimized hand-drawn curve image global feature matrix.
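Sub-steps 1351-1354 can be sketched in NumPy as follows; the patent does not specify the form of the "association coding", so the outer product used here is an assumption of this sketch:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def bidirectional_attention(feat):
    """1351: pool along the horizontal and vertical directions;
    1352: association-code the two pooled vectors (outer product here,
    an assumed realization); 1353: Sigmoid to get the bidirectional
    association weight matrix; 1354: position-wise point multiplication
    with the input feature matrix."""
    pooled_h = feat.mean(axis=1)          # first pooled vector (per row)
    pooled_v = feat.mean(axis=0)          # second pooled vector (per column)
    assoc = np.outer(pooled_h, pooled_v)  # bidirectional association matrix
    weights = sigmoid(assoc)              # bidirectional association weight matrix
    return weights * feat                 # position-wise point multiplication
```

Because the Sigmoid weights lie strictly in (0, 1), the output keeps the sign of every feature value while re-scaling its magnitude according to the joint row/column attention.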
The attention mechanism is a data processing method in machine learning and is widely applied to various machine learning tasks such as natural language processing, image recognition and speech recognition. On the one hand, the attention mechanism lets the network automatically learn which places in a picture or text sequence deserve attention; on the other hand, the attention mechanism generates a mask through neural network operations, where the values on the mask serve as the weights. In general, a spatial attention mechanism calculates the average value over the different channels of the same pixel, then obtains spatial features through several convolution and up-sampling operations, and the pixels of each layer of the spatial features are given different weights.
Specifically, in step 136, class probability density discrimination enhancement is performed on the optimized hand-drawn curve image global feature matrix to obtain a re-optimized hand-drawn curve image global feature matrix. However, improving the overall distribution consistency of the optimized hand-drawn curve image global feature matrix in the spatial dimension may further reduce the degree of discrimination, in the probability density dimension, between the local distributions of the optimized hand-drawn curve image global feature matrix, thereby affecting the accuracy of its decoding regression.
Thus, the optimized hand-drawn curve image global feature matrix, denoted for example as M, is preferably subjected to orthogonalization of the manifold surface dimension of its Gaussian probability density. Specifically, performing class probability density discrimination enhancement on the optimized hand-drawn curve image global feature matrix to obtain a re-optimized hand-drawn curve image global feature matrix includes: performing class probability density discrimination enhancement on the optimized hand-drawn curve image global feature matrix with the following optimization formula to obtain the re-optimized hand-drawn curve image global feature matrix; wherein the optimization formula is: m_i' = m_i/√μ + (m_i − μ)/σ, wherein μ and σ are the mean and standard deviation of the feature value set of all positions in the optimized hand-drawn curve image global feature matrix, m_i is the feature value of the i-th position of the optimized hand-drawn curve image global feature matrix, and m_i' is the feature value of the i-th position of the re-optimized hand-drawn curve image global feature matrix.
Here, by characterizing the unit tangent vector modulo length and the unit normal vector modulo length of the manifold surface with the square root of the mean and the standard deviation of the high-dimensional feature set expressing the manifold surface, the optimized hand-drawn curve image global feature matrix can be orthogonally projected, on the basis of unit modulo length, onto the tangent plane and the normal plane of the manifold surface of the high-dimensional feature manifold. In this way, the dimensions of the probability density of the high-dimensional features are reconstructed on the basic structure of the Gaussian feature manifold geometry, and the dimension orthogonalization of the improved probability density raises the accuracy with which the optimized hand-drawn curve image global feature matrix is decoded and generated by the decoder.
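Under the reading above, one consistent sketch of the enhancement step projects each feature value by the unit tangent modulo length √μ and the unit normal modulo length σ. Since the patent's formula appears only as an image in the source, this exact combination (and the assumption of nonnegative features, e.g. after a ReLU) is hypothetical:

```python
import numpy as np

def class_prob_density_enhance(feat):
    """Hedged sketch of class probability density discrimination enhancement.

    mu and sigma are the mean and standard deviation of all feature values;
    the value is projected by the assumed unit tangent modulo length
    sqrt(mu) and unit normal modulo length sigma.  Features are assumed
    nonnegative so that sqrt(mu) is real.
    """
    mu, sigma = feat.mean(), feat.std()
    return feat / np.sqrt(mu) + (feat - mu) / sigma
```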
Specifically, in steps 137 and 138, passing the re-optimized hand-drawn curve image global feature matrix through a decoder to generate an optimized hand-drawn curve image; and determining the interpolated varying curvature from the first keyframe to the second keyframe based on the shape of the curve in the optimized hand-drawn curve image. That is, the re-optimized hand-drawn curve image global feature matrix is further passed through a decoder to generate an optimized hand-drawn curve image. In a specific example of the present application, the decoder comprises a plurality of deconvolution layers to perform deconvolution decoding generation by deconvolution operations cascaded with each other. Then, based on the shape of the curve in the optimized hand-drawn curve image, the interpolated varying curvature from the first keyframe to the second keyframe is determined.
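The cascaded deconvolution decoding can be sketched as transposed convolution in NumPy: zeros are inserted between samples and a full correlation is applied, so each stage roughly doubles the spatial resolution. Single-channel maps and the ReLU between stages are simplifications of this sketch, not the patent's decoder:

```python
import numpy as np

def deconv2d(x, kernel, stride=2):
    """Transposed convolution ('deconvolution') of a single-channel map:
    insert stride-1 zeros between input samples, then correlate with the
    kernel under full padding, which upsamples the map."""
    h, w = x.shape
    up = np.zeros((h * stride, w * stride))
    up[::stride, ::stride] = x
    kh, kw = kernel.shape
    padded = np.pad(up, ((kh - 1, kh - 1), (kw - 1, kw - 1)))
    oh, ow = padded.shape[0] - kh + 1, padded.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(padded[i:i + kh, j:j + kw] * kernel)
    return out

def decode(feat, kernels):
    """Cascaded deconvolution layers: each stage upsamples the feature map,
    progressively regenerating an image-sized output from the re-optimized
    global feature matrix."""
    x = feat
    for k in kernels:
        x = np.maximum(deconv2d(x, k), 0.0)
    return x
```

With a 4x4 feature matrix and two 3x3 kernels, the two cascaded stages expand the map stage by stage, illustrating why a stack of deconvolution layers can grow a compact feature matrix back to image resolution.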
In summary, the key frame-based custom light effect configuration method 100 according to an embodiment of the present application has been illustrated, which accepts a light effect hand-drawn curve image input by a user; takes the starting point and the end point of a curve in the light effect hand-drawn curve image as a first key frame and a second key frame; and determines an interpolated varying curvature from the first key frame to the second key frame based on the shape of the curve in the light effect hand-drawn curve image. In this way, the hand-drawn graph input by the user can be optimized based on the user's intention, so as to improve the final light effect customization result.
In one embodiment of the present application, fig. 10 is a block diagram of a key frame-based custom light effect configuration system according to an embodiment of the present application. As shown in fig. 10, the key frame-based custom light effect configuration system 200 according to an embodiment of the present application includes: an image receiving module 210, configured to receive a light effect hand-drawn curve image input by a user; a key frame generation module 220, configured to take the starting point and the end point of a curve in the light effect hand-drawn curve image as a first key frame and a second key frame; and an interpolated varying curvature generation module 230, configured to determine an interpolated varying curvature from the first key frame to the second key frame based on the shape of the curve in the light effect hand-drawn curve image.
In a specific example, in the above-mentioned key frame-based custom light effect configuration system, the interpolated varying curvature generation module includes: an image noise reduction unit, configured to perform image noise reduction on the light effect hand-drawn curve image to obtain a noise-reduced hand-drawn image; an image blocking processing unit, configured to perform image blocking processing on the noise-reduced hand-drawn curve image to obtain a sequence of hand-drawn curve image blocks; a shallow feature extraction unit, configured to pass each hand-drawn curve image block in the sequence of hand-drawn curve image blocks through a shallow feature extractor based on a convolutional neural network model to obtain a plurality of hand-drawn curve image block feature matrices; a matrix arrangement unit, configured to arrange the plurality of hand-drawn curve image block feature matrices according to the positions of the image blocks to obtain a hand-drawn curve image global feature matrix; a bidirectional attention unit, configured to pass the hand-drawn curve image global feature matrix through a bidirectional attention mechanism to obtain an optimized hand-drawn curve image global feature matrix; an optimization unit, configured to perform class probability density discrimination enhancement on the optimized hand-drawn curve image global feature matrix to obtain a re-optimized hand-drawn curve image global feature matrix; a decoding unit, configured to pass the re-optimized hand-drawn curve image global feature matrix through a decoder to generate an optimized hand-drawn curve image; and an interpolated varying curvature determination unit, configured to determine the interpolated varying curvature from the first key frame to the second key frame based on the shape of the curve in the optimized hand-drawn curve image.
In a specific example, in the above-mentioned key frame-based custom light effect configuration system, the image noise reduction unit is configured to: perform bilinear filtering on the light effect hand-drawn curve image to obtain the noise-reduced hand-drawn image.
In a specific example, in the above-mentioned key frame-based custom light effect configuration system, the image blocking processing unit is configured to: perform uniform image block division on the noise-reduced hand-drawn curve image to obtain the sequence of hand-drawn curve image blocks, wherein each hand-drawn curve image block in the sequence of hand-drawn curve image blocks has the same size.
In a specific example, in the above-mentioned key frame-based custom light effect configuration system, the shallow feature extraction unit is configured to: use each layer of the shallow feature extractor based on the convolutional neural network model to respectively perform convolution processing, pooling processing and nonlinear activation processing on input data in the forward transfer of the layers, so that the shallow-layer output of the shallow feature extractor based on the convolutional neural network model is the plurality of hand-drawn curve image block feature matrices.
In a specific example, in the above-mentioned key frame-based custom light effect configuration system, the shallow feature extractor based on a convolutional neural network model comprises 3-5 convolutional layers.
In a specific example, in the above-mentioned key frame-based custom light effect configuration system, the bidirectional attention unit includes: a pooling subunit, configured to pool the hand-drawn curve image global feature matrix along the horizontal direction and the vertical direction respectively to obtain a first pooled vector and a second pooled vector; an association coding subunit, configured to perform association coding on the first pooled vector and the second pooled vector to obtain a bidirectional association matrix; an activation subunit, configured to input the bidirectional association matrix into a Sigmoid activation function to obtain a bidirectional association weight matrix; and a matrix calculation subunit, configured to calculate the position-wise point multiplication between the bidirectional association weight matrix and the hand-drawn curve image global feature matrix to obtain the optimized hand-drawn curve image global feature matrix.
In a specific example, in the above-mentioned key frame-based custom light effect configuration system, the optimization unit is configured to: perform class probability density discrimination enhancement on the optimized hand-drawn curve image global feature matrix with the following optimization formula to obtain the re-optimized hand-drawn curve image global feature matrix; wherein the optimization formula is: m_i' = m_i/√μ + (m_i − μ)/σ, wherein μ and σ are the mean and standard deviation of the feature value set of all positions in the optimized hand-drawn curve image global feature matrix, m_i is the feature value of the i-th position of the optimized hand-drawn curve image global feature matrix, and m_i' is the feature value of the i-th position of the re-optimized hand-drawn curve image global feature matrix.
In one specific example, in the above-mentioned key frame-based custom light effect configuration system, the decoder includes a plurality of deconvolution layers.
Here, it will be understood by those skilled in the art that the specific functions and operations of the respective units and modules in the above-described key frame-based custom light effect configuration system have been described in detail in the above description of the key frame-based custom light effect configuration method with reference to fig. 1 to 9, and thus repetitive descriptions thereof will be omitted.
As described above, the key frame-based custom light effect configuration system 200 according to the embodiment of the present application may be implemented in various terminal devices, for example, a server for key frame-based custom light effect configuration, or the like. In one example, the key frame-based custom light effect configuration system 200 according to embodiments of the present application may be integrated into a terminal device as a software module and/or hardware module. For example, the key frame-based custom light effect configuration system 200 may be a software module in the operating system of the terminal device, or may be an application developed for the terminal device; of course, the key frame-based custom light effect configuration system 200 could also be one of many hardware modules of the terminal device.
Alternatively, in another example, the key frame-based custom light effect configuration system 200 and the terminal device may be separate devices, and the key frame-based custom light effect configuration system 200 may be connected to the terminal device via a wired and/or wireless network and communicate interaction information in an agreed data format.
The present application also provides a computer program product comprising instructions which, when executed, cause an apparatus to perform operations corresponding to the above-described method.
In one embodiment of the present application, there is also provided a computer-readable storage medium storing a computer program for executing the above-described method.
It should be appreciated that embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the computer program product may take the form of a computer program product embodied on one or more computer-usable storage media (including but not limited to disk storage, CD-ROM, optical storage, etc.) having computer-usable program code embodied therein.
Methods, systems, and computer program products of embodiments of the present application are described in the flow diagrams and/or block diagrams. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
The basic principles of the present application have been described above in connection with specific embodiments, however, it should be noted that the advantages, benefits, effects, etc. mentioned in the present application are merely examples and not intended to be limiting, and these advantages, benefits, effects, etc. are not to be considered as essential to the various embodiments of the present application. Furthermore, the specific details disclosed herein are for purposes of illustration and understanding only, and are not intended to be limiting, as the application is not necessarily limited to practice with the above described specific details.
The block diagrams of the devices, apparatuses, and systems referred to in the present application are only illustrative examples and are not intended to require or imply that the connections, arrangements, and configurations must be made in the manner shown in the block diagrams. As will be appreciated by one of skill in the art, the devices, apparatuses, and systems may be connected, arranged, and configured in any manner. Words such as "including," "comprising," "having," and the like are open words meaning "including but not limited to," and are used interchangeably therewith. The term "or" as used herein refers to, and is used interchangeably with, the term "and/or," unless the context clearly indicates otherwise. The term "such as" as used herein refers to, and is used interchangeably with, the phrase "such as, but not limited to."
It is also noted that in the apparatus, devices and methods of the present application, the components or steps may be disassembled and/or assembled. Such decomposition and/or recombination should be considered as equivalent aspects of the present application.
The previous description of the disclosed aspects is provided to enable any person skilled in the art to make or use the present application. Various modifications to these aspects will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other aspects without departing from the scope of the application. Thus, the present application is not intended to be limited to the aspects shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
Finally, it is further noted that relational terms such as first and second, and the like are used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Moreover, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or terminal that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or terminal. Without further limitation, an element defined by the phrase "comprising one … …" does not exclude the presence of other like elements in a process, method, article or terminal device comprising the element.
The foregoing description has been presented for purposes of illustration and description. Furthermore, this description is not intended to limit embodiments of the application to the form disclosed herein. Although a number of example aspects and embodiments have been discussed above, a person of ordinary skill in the art will recognize certain variations, modifications, alterations, additions, and subcombinations thereof.
Claims (8)
1. A key frame-based custom light effect configuration method, characterized by comprising the following steps:
receiving a light effect hand-drawn curve image input by a user;
taking the starting point and the end point of a curve in the light effect hand-drawn curve image as a first key frame and a second key frame; and
determining an interpolated change curvature from the first key frame to the second key frame based on the shape of a curve in the light effect hand-drawn curve image;
wherein determining the interpolated change curvature from the first key frame to the second key frame based on the shape of the curve in the light effect hand-drawn curve image comprises:
performing image noise reduction on the light effect hand-drawn curve image to obtain a noise-reduced hand-drawn curve image;
performing image blocking processing on the noise-reduced hand-drawn curve image to obtain a sequence of hand-drawn curve image blocks;
passing each hand-drawn curve image block in the sequence of hand-drawn curve image blocks through a shallow feature extractor based on a convolutional neural network model, respectively, to obtain a plurality of hand-drawn curve image block feature matrices;
arranging the plurality of hand-drawn curve image block feature matrices according to the positions of the image blocks to obtain a hand-drawn curve image global feature matrix;
passing the hand-drawn curve image global feature matrix through a bidirectional attention mechanism to obtain an optimized hand-drawn curve image global feature matrix;
performing class probability density discrimination enhancement on the optimized hand-drawn curve image global feature matrix to obtain a re-optimized hand-drawn curve image global feature matrix;
passing the re-optimized hand-drawn curve image global feature matrix through a decoder to generate an optimized hand-drawn curve image; and
determining the interpolated change curvature from the first key frame to the second key frame based on the shape of the curve in the optimized hand-drawn curve image;
the method for classifying the probability density discrimination enhancement on the optimized hand-drawn curve image global feature matrix to obtain a re-optimized hand-drawn curve image global feature matrix comprises the following steps: performing class probability density discrimination enhancement on the optimized hand-drawn curve image global feature matrix by using the following optimization formula to obtain a re-optimized hand-drawn curve image global feature matrix;
wherein the optimization formula is:

(the formula is rendered as an image in the original publication and is not reproduced here)

wherein μ and σ are the mean and the standard deviation of the set of feature values at all positions in the optimized hand-drawn curve image global feature matrix, f_i is the feature value at the i-th position of the optimized hand-drawn curve image global feature matrix, and f_i' is the feature value at the i-th position of the re-optimized hand-drawn curve image global feature matrix.
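The overall idea of claim 1, taking the drawn curve's endpoints as the two keyframes and deriving the interpolation profile between them from the curve's shape, can be sketched as follows. This is an illustrative reading rather than the patented implementation: `curve_keyframes` and `interpolated_values` are hypothetical names, and treating the curve's y-coordinate as the interpolated light parameter is an assumption.

```python
import math

def curve_keyframes(points):
    """Per claim 1: the start and end points of the drawn curve
    become the first and second keyframes."""
    return points[0], points[-1]

def interpolated_values(points, n_frames):
    """Resample the curve's y-coordinate (assumed here to encode the light
    parameter) at n_frames positions spaced evenly in arc length, giving the
    interpolation profile from the first to the second keyframe.
    Assumes n_frames >= 2 and at least two points."""
    # Cumulative arc length along the polyline.
    d = [0.0]
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        d.append(d[-1] + math.hypot(x1 - x0, y1 - y0))
    total = d[-1]
    out = []
    j = 0
    for k in range(n_frames):
        t = total * k / (n_frames - 1)
        # Advance to the segment containing arc-length position t.
        while j < len(d) - 2 and d[j + 1] < t:
            j += 1
        seg = d[j + 1] - d[j]
        w = 0.0 if seg == 0 else (t - d[j]) / seg
        # Linear interpolation of the y-value within the segment.
        out.append(points[j][1] * (1 - w) + points[j + 1][1] * w)
    return out
```
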
2. The key frame-based custom light effect configuration method of claim 1, wherein performing image noise reduction on the light effect hand-drawn curve image to obtain a noise-reduced hand-drawn curve image comprises:
performing bilinear filtering on the light effect hand-drawn curve image to obtain the noise-reduced hand-drawn curve image.
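Claim 2's denoising step names "bilinear filtering"; in denoising contexts this term is plausibly a rendering of bilateral (edge-preserving) filtering, so the sketch below implements a minimal bilateral filter on a grayscale image. The function name and parameter values are illustrative assumptions, not taken from the patent.

```python
import math

def bilateral_filter(img, radius=1, sigma_s=1.0, sigma_r=25.0):
    """Minimal edge-preserving smoother: each output pixel is a weighted
    average of its neighborhood, where weights combine spatial closeness
    (sigma_s) and intensity similarity (sigma_r)."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            num = den = 0.0
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    # Clamp neighbor coordinates at the image border.
                    yy = min(max(y + dy, 0), h - 1)
                    xx = min(max(x + dx, 0), w - 1)
                    ws = math.exp(-(dx * dx + dy * dy) / (2 * sigma_s ** 2))
                    wr = math.exp(-((img[yy][xx] - img[y][x]) ** 2)
                                  / (2 * sigma_r ** 2))
                    num += ws * wr * img[yy][xx]
                    den += ws * wr
            out[y][x] = num / den
    return out
```

With a large sigma_r the filter behaves like a plain Gaussian blur; with a small sigma_r, sharp strokes (large intensity jumps) are preserved, which is why this reading fits a hand-drawn curve image.
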
3. The key frame-based custom light effect configuration method of claim 2, wherein performing image blocking processing on the noise-reduced hand-drawn curve image to obtain a sequence of hand-drawn curve image blocks comprises: uniformly dividing the noise-reduced hand-drawn curve image into image blocks to obtain the sequence of hand-drawn curve image blocks, wherein each hand-drawn curve image block in the sequence of hand-drawn curve image blocks has the same size.
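The uniform blocking of claim 3 can be sketched as follows. `split_into_blocks` is a hypothetical name, and the assumption that the image dimensions are exact multiples of the block size is the editor's, not the claim's.

```python
def split_into_blocks(img, bh, bw):
    """Uniformly divide a 2-D image (list of rows) into equal-size
    bh x bw blocks, returned in row-major block order."""
    h, w = len(img), len(img[0])
    assert h % bh == 0 and w % bw == 0, "image must tile evenly into blocks"
    blocks = []
    for by in range(0, h, bh):
        for bx in range(0, w, bw):
            blocks.append([row[bx:bx + bw] for row in img[by:by + bh]])
    return blocks
```
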
4. The key frame-based custom light effect configuration method of claim 3, wherein passing each hand-drawn curve image block in the sequence of hand-drawn curve image blocks through a shallow feature extractor based on a convolutional neural network model to obtain a plurality of hand-drawn curve image block feature matrices comprises: performing convolution processing, pooling processing, and nonlinear activation processing on input data in the forward pass of each layer of the shallow feature extractor based on the convolutional neural network model, so that the shallow layers of the shallow feature extractor output the plurality of hand-drawn curve image block feature matrices.
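One layer of the claimed shallow extractor (convolution, nonlinear activation, pooling) might look like this minimal NumPy sketch. The kernel contents, ReLU as the nonlinearity, and 2x2 max pooling are illustrative choices; the patent does not fix them.

```python
import numpy as np

def conv2d_valid(x, k):
    """Plain single-channel 2-D 'valid' convolution (really cross-correlation,
    as in most CNN libraries)."""
    kh, kw = k.shape
    h, w = x.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * k)
    return out

def shallow_layer(x, k):
    """One shallow-extractor layer: convolution, ReLU activation,
    then 2x2 max pooling."""
    a = np.maximum(conv2d_valid(x, k), 0.0)
    # Crop to even dimensions so the 2x2 pooling windows tile exactly.
    h, w = (a.shape[0] // 2) * 2, (a.shape[1] // 2) * 2
    a = a[:h, :w]
    return a.reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))
```

Stacking 3 to 5 such layers (as claim 5 specifies) yields the per-block feature matrices.
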
5. The key frame-based custom light effect configuration method according to claim 4, wherein the shallow feature extractor based on the convolutional neural network model comprises 3 to 5 convolutional layers.
6. The key frame-based custom light effect configuration method of claim 5, wherein passing the hand-drawn curve image global feature matrix through a bidirectional attention mechanism to obtain an optimized hand-drawn curve image global feature matrix comprises:
pooling the hand-drawn curve image global feature matrix along the horizontal direction and the vertical direction, respectively, to obtain a first pooling vector and a second pooling vector;
performing association coding on the first pooling vector and the second pooling vector to obtain a bidirectional association matrix;
inputting the bidirectional association matrix into a Sigmoid activation function to obtain a bidirectional association weight matrix; and
calculating the point-wise multiplication between the bidirectional association weight matrix and the hand-drawn curve image global feature matrix to obtain the optimized hand-drawn curve image global feature matrix.
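The bidirectional attention mechanism of claim 6 can be sketched as below. The claim does not specify the pooling operator or the form of the "association coding"; mean pooling and an outer product are assumed here for illustration.

```python
import numpy as np

def bidirectional_attention(F):
    """Reweight a feature matrix F with weights derived from its own
    horizontal and vertical pooling vectors."""
    p_h = F.mean(axis=1)           # first pooling vector (along horizontal)
    p_v = F.mean(axis=0)           # second pooling vector (along vertical)
    assoc = np.outer(p_h, p_v)     # bidirectional association matrix (assumed form)
    weights = 1.0 / (1.0 + np.exp(-assoc))   # Sigmoid activation
    return weights * F             # point-wise multiplication
```
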
7. The key frame-based custom light effect configuration method according to claim 6, wherein the decoder comprises a plurality of deconvolution layers.
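A single transposed-convolution ("deconvolution") layer, the building block claim 7 names for the decoder, can be sketched as follows. Single-channel operation and the choice of stride and kernel are illustrative assumptions.

```python
import numpy as np

def deconv2d(x, k, stride=2):
    """Minimal single-channel transposed convolution: each input value
    stamps a scaled copy of the kernel onto a stride-spaced output grid,
    upsampling the input (overlapping stamps accumulate)."""
    kh, kw = k.shape
    h, w = x.shape
    out = np.zeros(((h - 1) * stride + kh, (w - 1) * stride + kw))
    for i in range(h):
        for j in range(w):
            out[i * stride:i * stride + kh,
                j * stride:j * stride + kw] += x[i, j] * k
    return out
```

Chaining several such layers maps the re-optimized global feature matrix back up to image resolution.
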
8. A key frame-based custom light effect configuration system, characterized by comprising:
an image receiving module, configured to receive a light effect hand-drawn curve image input by a user;
a key frame generation module, configured to take the starting point and the end point of a curve in the light effect hand-drawn curve image as a first key frame and a second key frame; and
an interpolation change curvature generation module, configured to determine an interpolated change curvature from the first key frame to the second key frame based on the shape of a curve in the light effect hand-drawn curve image;
wherein the interpolation change curvature generation module is configured to:
perform image noise reduction on the light effect hand-drawn curve image to obtain a noise-reduced hand-drawn curve image;
perform image blocking processing on the noise-reduced hand-drawn curve image to obtain a sequence of hand-drawn curve image blocks;
pass each hand-drawn curve image block in the sequence of hand-drawn curve image blocks through a shallow feature extractor based on a convolutional neural network model, respectively, to obtain a plurality of hand-drawn curve image block feature matrices;
arrange the plurality of hand-drawn curve image block feature matrices according to the positions of the image blocks to obtain a hand-drawn curve image global feature matrix;
pass the hand-drawn curve image global feature matrix through a bidirectional attention mechanism to obtain an optimized hand-drawn curve image global feature matrix;
perform class probability density discrimination enhancement on the optimized hand-drawn curve image global feature matrix to obtain a re-optimized hand-drawn curve image global feature matrix;
passing the re-optimized hand-drawn curve image global feature matrix through a decoder to generate an optimized hand-drawn curve image; and
determine the interpolated change curvature from the first key frame to the second key frame based on the shape of the curve in the optimized hand-drawn curve image;
wherein performing class probability density discrimination enhancement on the optimized hand-drawn curve image global feature matrix to obtain a re-optimized hand-drawn curve image global feature matrix comprises: performing class probability density discrimination enhancement on the optimized hand-drawn curve image global feature matrix using the following optimization formula to obtain the re-optimized hand-drawn curve image global feature matrix;
wherein the optimization formula is:

(the formula is rendered as an image in the original publication and is not reproduced here)

wherein μ and σ are the mean and the standard deviation of the set of feature values at all positions in the optimized hand-drawn curve image global feature matrix, f_i is the feature value at the i-th position of the optimized hand-drawn curve image global feature matrix, and f_i' is the feature value at the i-th position of the re-optimized hand-drawn curve image global feature matrix.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310592906.6A CN116580126B (en) | 2023-05-24 | 2023-05-24 | Custom lamp effect configuration method and system based on key frame |
Publications (2)
Publication Number | Publication Date |
---|---|
CN116580126A CN116580126A (en) | 2023-08-11 |
CN116580126B true CN116580126B (en) | 2023-11-07 |
Family
ID=87539406
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202310592906.6A Active CN116580126B (en) | 2023-05-24 | 2023-05-24 | Custom lamp effect configuration method and system based on key frame |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN116580126B (en) |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2001188605A (en) * | 1999-12-28 | 2001-07-10 | Yaskawa Electric Corp | Method for interpolating curve |
CN104318600A (en) * | 2014-10-10 | 2015-01-28 | 无锡梵天信息技术股份有限公司 | Method for achieving role treading track animation by using Bezier curve |
CN110827703A (en) * | 2019-10-29 | 2020-02-21 | 杭州电子科技大学 | An input and display method of hand-painted LED lights based on similarity correction algorithm |
CN115937516A (en) * | 2022-11-21 | 2023-04-07 | 北京邮电大学 | Method, device, storage medium and terminal for image semantic segmentation |
CN116113125A (en) * | 2023-02-14 | 2023-05-12 | 永林电子股份有限公司 | Control method of LED atmosphere lamp group of decoration panel |
Family Cites Families (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7826657B2 (en) * | 2006-12-11 | 2010-11-02 | Yahoo! Inc. | Automatically generating a content-based quality metric for digital images |
US10297227B2 (en) * | 2015-10-16 | 2019-05-21 | Sap Se | Dynamically-themed display utilizing physical ambient conditions |
US9984480B2 (en) * | 2016-03-21 | 2018-05-29 | Adobe Systems Incorporated | Enhancing curves using non-uniformly scaled cubic variation of curvature curves |
WO2019081623A1 (en) * | 2017-10-25 | 2019-05-02 | Deepmind Technologies Limited | Auto-regressive neural network systems with a soft attention mechanism using support data patches |
Non-Patent Citations (1)
Title |
---|
External-Internal Attention for Hyperspectral Image Super-Resolution; Zhiling Guo et al.; IEEE Transactions on Geoscience and Remote Sensing; pp. 1-8 *
Similar Documents
Publication | Publication Date | Title |
---|---|---|
Hui et al. | Fast and accurate single image super-resolution via information distillation network | |
US10832387B2 (en) | Real-time intelligent image manipulation system | |
CN110097609B (en) | Sample domain-based refined embroidery texture migration method | |
CN110533721A (en) | A kind of indoor objects object 6D Attitude estimation method based on enhancing self-encoding encoder | |
CN109949214A (en) | An image style transfer method and system | |
Singla et al. | A review on Single Image Super Resolution techniques using generative adversarial network | |
CN111986075B (en) | Style migration method for target edge clarification | |
CN109816011A (en) | Generate the method and video key frame extracting method of portrait parted pattern | |
CN110322416A (en) | Image processing method, device and computer readable storage medium | |
CN115205544A (en) | A synthetic image harmonization method and system based on foreground reference image | |
CN110458247A (en) | The training method and device of image recognition model, image-recognizing method and device | |
CN110349087A (en) | RGB-D image superior quality grid generation method based on adaptability convolution | |
CN112598602A (en) | Mask-based method for removing Moire of deep learning video | |
CN110097615B (en) | A combined stylized and de-stylized word art editing method and system | |
CN116580126B (en) | Custom lamp effect configuration method and system based on key frame | |
CN117635418A (en) | Generative adversarial network training method, two-way image style conversion method and device | |
CN110363830A (en) | Element image generation method, apparatus and system | |
Ahn et al. | Interactive cartoonization with controllable perceptual factors | |
CN113129347A (en) | Self-supervision single-view three-dimensional hairline model reconstruction method and system | |
CN118537428A (en) | Image generation method, device, computer equipment and storage medium | |
CN107871162A (en) | A kind of image processing method and mobile terminal based on convolutional neural networks | |
Zhu et al. | Realistic real-time processing of anime portraits based on generative adversarial networks | |
CN114582029B (en) | A non-professional dance movement sequence enhancement method and system | |
CN114494523B (en) | Line manuscript automatic coloring model training method and device under limited color space, electronic equipment and storage medium | |
CN116563908A (en) | Face analysis and emotion recognition method based on multitasking cooperative network |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||