CN115391692A - Video processing method and device - Google Patents
- Publication number
- CN115391692A (application number CN202210961407.5A)
- Authority
- CN
- China
- Prior art keywords
- video
- color
- pixel
- frame
- mask
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/90—Details of database functions independent of the retrieved data types
- G06F16/95—Retrieval from the web
- G06F16/957—Browsing optimisation, e.g. caching or content distillation
- G06F16/9577—Optimising the visualization of content, e.g. distillation of HTML documents
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/90—Details of database functions independent of the retrieved data types
- G06F16/95—Retrieval from the web
- G06F16/958—Organisation or management of web site content, e.g. publishing, maintaining pages or automatic linking
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/20—Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
- H04N21/23—Processing of content or additional data; Elementary server operations; Server middleware
- H04N21/234—Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
- H04N21/44—Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
Abstract
The invention discloses a video processing method and device, relating to the field of computer technology. One embodiment of the method comprises: creating a video tag in a browser engine and acquiring a video to be processed based on the video tag, where the video to be processed corresponds to a pre-made mask video; for any pair of video frames from the video to be processed and the mask video, using a shader based on a preset drawing protocol to set the transparency of the background portion pixels identified by a second color in the mask frame of the pair to a first value; for any pixel of the main body portion identified by a first color in the mask frame of the pair, using the shader to assign the color of the corresponding pixel in the to-be-processed frame of the pair to that pixel, and setting its transparency to a second value, so that the mask frame forms a target frame; and outputting a target video composed of the target frames on a web page for display. This implementation enables transparent presentation of a video background in a web page.
Description
Technical Field
The present invention relates to the field of computer technologies, and in particular, to a video processing method and apparatus.
Background
With the gradual adoption of media functions on the Web platform, video has become widely used for web page decoration and content thanks to its high compression ratio and ease of use. However, because current web page video playback does not support transparent background display, video in a web page is difficult to combine well with other page content such as text. For example, special-effect videos of the sky or the sea could enrich a page's expressiveness if organically combined with the web page content, but because the background portion of such special-effect videos cannot be presented transparently under current technology, this organic combination cannot be achieved.
Disclosure of Invention
In view of this, embodiments of the present invention provide a video processing method and apparatus, which can implement transparent presentation of a video background in a webpage.
To achieve the above object, according to one aspect of the present invention, there is provided a video processing method.
The video processing method of the embodiment of the invention comprises the following steps: creating a video tag in a browser engine and acquiring a video to be processed based on the video tag, where the video to be processed corresponds to a pre-made mask video, and the mask video and the video to be processed have a main body portion and a background portion that are consistent in shape and size; the main body portion pixels of the mask video are of a first color, and the background portion pixels are of a second color different from the first color; for any pair of video frames from the video to be processed and the mask video, using a shader based on a preset drawing protocol to set the transparency of the background portion pixels identified by the second color in the mask frame of the pair to a first value representing high transparency; for any pixel of the main body portion identified by the first color in the mask frame of the pair, using the shader to assign the color of the corresponding pixel in the to-be-processed frame of the pair to that pixel, and setting the transparency of that pixel to a second value representing low transparency, so that the mask frame forms a target frame; and outputting a target video composed of the target frames on a web page for display.
Optionally, the method further comprises: before the shader based on the preset drawing protocol sets the transparency of the background portion pixels identified by the second color in the mask frame to the first value representing high transparency, creating a canvas tag in the browser engine, and attaching the video frame as a texture to a buffer created by calling the drawing protocol interface on the canvas formed by the canvas tag.
Optionally, the first color is white, and assigning, by the shader, the color of the corresponding pixel in the to-be-processed frame of the pair to a given pixel comprises: determining the corresponding pixel in the to-be-processed frame that has the same coordinates as the given pixel; and adding the pixel value of the given pixel to the pixel value of the corresponding pixel to obtain the pixel value of the given pixel in the target frame.
Optionally, the second color is black, the first value is zero, the second value is one, and the shader is a vertex shader.
Optionally, the Video tag is the HTML5 (HyperText Markup Language 5) Video tag, the Canvas tag is the HTML5 Canvas tag, and the drawing protocol is WebGL (Web Graphics Library).
To achieve the above object, according to another aspect of the present invention, there is provided a video processing apparatus.
The video processing apparatus of an embodiment of the present invention may include: a video determining unit for creating a video tag in a browser engine and acquiring a video to be processed based on the video tag, where the video to be processed corresponds to a pre-made mask video, the mask video and the video to be processed have a main body portion and a background portion that are consistent in shape and size, the main body portion pixels of the mask video are of a first color, and the background portion pixels are of a second color different from the first color; a target frame forming unit for: for any pair of video frames from the video to be processed and the mask video, setting, by a shader based on a preset drawing protocol, the transparency of the background portion pixels identified by the second color in the mask frame of the pair to a first value representing high transparency, and, for any pixel of the main body portion identified by the first color in the mask frame of the pair, assigning by the shader the color of the corresponding pixel in the to-be-processed frame of the pair to that pixel and setting its transparency to a second value representing low transparency, so that the mask frame forms a target frame; and an output unit for outputting a target video composed of the target frames on a web page for display.
Optionally, the first color is white, and the apparatus further comprises a buffer unit for: before the shader based on the preset drawing protocol sets the transparency of the background portion pixels identified by the second color in the mask frame to the first value representing high transparency, creating a canvas tag in the browser engine and attaching the video frame as a texture to a buffer created by calling the drawing protocol interface on the canvas formed by the canvas tag. The target frame forming unit is further configured to: determine the corresponding pixel in the to-be-processed frame that has the same coordinates as the given pixel, and add the pixel value of the given pixel to the pixel value of the corresponding pixel to obtain the pixel value of the given pixel in the target frame.
Optionally, the second color is black, the first value is zero, the second value is one, and the shader is a vertex shader; the Video tag is the HTML5 (HyperText Markup Language 5) Video tag, the Canvas tag is the HTML5 Canvas tag, and the drawing protocol is WebGL (Web Graphics Library).
To achieve the above object, according to still another aspect of the present invention, there is provided an electronic apparatus.
An electronic device of the present invention includes: one or more processors; and a storage device for storing one or more programs which, when executed by the one or more processors, cause the one or more processors to implement the video processing method provided by the present invention.
To achieve the above object, according to still another aspect of the present invention, there is provided a computer-readable storage medium.
A computer-readable storage medium of the present invention has stored thereon a computer program which, when executed by a processor, implements a video processing method provided by the present invention.
According to the technical scheme of the invention, the embodiment of the invention has the following advantages or beneficial effects:
Before background transparency processing is performed on a video to be processed in a web page, a corresponding mask video is made in advance. The mask video is identical to the video to be processed except for color: the main body portion of the mask video is given a first color and the background portion a second color, so that the main body and the background of the video can be accurately distinguished by color alone. When drawing with the vertex shader, the second color is used to locate the background in the mask video and its transparency is set to zero, while the first color is used to locate the main body in the mask video and the colors of the pixels at the corresponding positions of the video to be processed (i.e., the natural colors) are assigned to it. Background transparency is thus achieved while preserving the picture quality of the video's main body, which helps expand the range of uses for video in web pages, improves the effect of using video, and enriches the content expression of web pages.
Further effects of the above-mentioned non-conventional alternatives will be described below in connection with the embodiments.
Drawings
The drawings are included to provide a better understanding of the invention and are not to be construed as unduly limiting the invention. Wherein:
FIG. 1 is a schematic diagram illustrating the main steps of a video processing method according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of a video to be processed and a mask video according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of a WebGL rendering process in accordance with an embodiment of the present invention;
FIG. 4 is a diagram illustrating steps performed in a video processing method according to an embodiment of the present invention;
FIG. 5 is a schematic diagram of a portion of a video processing apparatus according to an embodiment of the present invention;
FIG. 6 is an exemplary system architecture diagram in which embodiments of the present invention may be employed;
fig. 7 is a schematic structural diagram of an electronic device for implementing a video processing method according to an embodiment of the present invention.
Detailed Description
Exemplary embodiments of the present invention are described below with reference to the accompanying drawings, in which various details of embodiments of the invention are included to assist understanding, and which are to be considered as merely exemplary. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the invention. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
It should be noted that embodiments of the present invention and technical features in the embodiments may be combined with each other without conflict.
Fig. 1 is a schematic diagram of the main steps of a video processing method according to an embodiment of the present invention.
As shown in fig. 1, the video processing method according to the embodiment of the present invention may be executed by a front-end Web platform, and specifically includes the following steps:
step S101: and creating a video tag in a browser engine, and acquiring a video to be processed based on the video tag.
In the embodiment of the present invention, the browser engine may be any browser engine supporting HTML5 (HyperText Markup Language 5), and the video tag may be the HTML5 Video tag, which is used to define a video in a web page, for example a video stream in any of various formats. In this step, in order to present a video with a transparency-processed background in the web page, the Web platform first creates a video tag, assigns values to its parameters (such as the video stream address src), invokes the relevant method for adding the tag to the web page, and thereby initializes the video element, that is, acquires the video to be processed in the web page. It should be noted that the video to be processed refers to the initial video obtained directly via the src parameter; the colors of the subject and background in the video to be processed are the initial colors (i.e., natural colors) before any subsequent processing. Here, the subject refers to the object to be presented in an image or video, and the background refers to the portion other than the subject.
In practical applications of the embodiment of the present invention, for any video to be processed, a corresponding mask video is prepared in advance. The mask video differs from the video to be processed only in the colors of the main body portion or the background portion and is the same in all other respects. That is, the mask video and the corresponding video to be processed have main body portions of consistent shape and size and background portions of consistent shape and size, differing only in that the colors of some pixels of the main body or background portion may differ. In addition, in the mask video the main body portion and the background portion can be accurately distinguished by color (i.e., by pixel value): the main body portion pixels are of a first color and the background portion pixels are of a second color different from the first. For example, if the main body portion of the mask video is set to white (the first color) and the background portion to black (the second color), then all main body pixels in the mask video can be located by selecting the white pixel value, and all background pixels by selecting the black pixel value. Black and white are only examples; any other suitable colors may be used for the first and second colors.
Obviously, for a video to be processed and its corresponding mask video, the pair of video frames at the same playback time (the to-be-processed frame from the video to be processed and the mask frame from the mask video) correspond to each other. That is, within such a pair, the to-be-processed frame and the mask frame may differ only in the color of the main body or background portion, and both have a main body portion and a background portion of consistent shape and size. Fig. 2 is a schematic diagram of a video to be processed and a mask video according to an embodiment of the present invention: the left half is the mask frame of the mask video at a certain time, and the right half is the to-be-processed frame of the video to be processed at the same time. It can be seen that in the mask frame on the left, the main body portion and the background portion are strictly separated by the two colors white and black, while in the to-be-processed frame on the right, the color of the main body portion is the subject's natural color; the left and right halves differ only in the color of the main body portion. In this step, after acquiring the video to be processed, the Web platform acquires the corresponding mask video from a predetermined storage address.
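The color-keyed separation described above can be sketched as a small pixel classifier. This is an illustrative sketch, not code from the patent; the tolerance value is an assumption to accommodate slight off-white/off-black pixel values introduced by video compression:

```typescript
type RGB = [number, number, number]; // channel values in 0..255

// tol is an assumed tolerance for compression noise (not from the patent).
function isNear(c: RGB, target: RGB, tol = 16): boolean {
  return c.every((v, i) => Math.abs(v - target[i]) <= tol);
}

// "body" for (near-)white first-color pixels, "background" for (near-)black
// second-color pixels; anything else (e.g. anti-aliased edges) is "unknown".
function classifyMaskPixel(c: RGB): "body" | "background" | "unknown" {
  if (isNear(c, [255, 255, 255])) return "body";
  if (isNear(c, [0, 0, 0])) return "background";
  return "unknown";
}
```

With an exact black/white mask, as in Fig. 2, the "unknown" branch is never taken; it exists only for real-world masks with soft edges.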
Step S102: for any pair of video frames from the video to be processed and the mask video, setting, by a shader based on a preset drawing protocol, the transparency of the background portion pixels identified by the second color in the mask frame of the pair to a first value representing high transparency; and, for any pixel of the main body portion identified by the first color in the mask frame of the pair, assigning by the shader the color of the corresponding pixel in the to-be-processed frame of the pair to that pixel and setting its transparency to a second value representing low transparency, so that the mask frame forms a target frame.
In this step, a shader created based on a preset drawing protocol, such as WebGL (Web Graphics Library), may be used to perform the background transparency processing. In practical applications, the shaders in a WebGL system include a vertex shader and a fragment shader, both well-known functional modules of the graphics card. The vertex shader is generally used for computations on vertex-related data, while the fragment shader processes the fragments generated after rasterization; in this step, the vertex shader is mainly used to perform the background transparency processing.
Preferably, the Web platform performs the same processing on each pair of video frames from the video to be processed and the mask video to achieve background transparency. Taking any pair of video frames as an example: after obtaining the video to be processed and the mask video, the Web platform first creates a canvas tag (e.g., the HTML5 Canvas tag) in the browser engine and forms a canvas from it, attaches the pair of video frames as textures to a buffer created by calling the drawing protocol interface (e.g., WebGL's createBuffer interface) on that canvas, and can then perform the background transparency processing in the buffer. It should be understood that the canvas tag may also have been created in advance at an earlier time.
Specifically, the Web platform uses the vertex shader to independently carry out two steps on the mask frame; the two steps may be executed in either order or simultaneously. In the first step, the Web platform sets the transparency (i.e., the alpha value) of the background portion pixels identified by the second color in the mask frame to a first value representing high transparency (including complete transparency), which depending on the actual situation may be zero or close to zero (e.g., a positive number smaller than 0.1). The background portion of the mask frame is thus accurately located via the second color, and setting its transparency to zero renders the background transparent.
In the second step, for any pixel of the main body portion identified by the first color in the mask frame, the Web platform assigns to it the color of the corresponding pixel in the to-be-processed frame of the pair (i.e., the pixel whose coordinates in the to-be-processed frame are the same as that pixel's coordinates in the mask frame), and sets its transparency to a second value representing low transparency (including full opacity), which depending on the actual situation may be 1 or slightly less than 1 (e.g., a value between 0.9 and 1). The Web platform thus first uses the first color to accurately locate the main body portion of the mask frame, then assigns it the natural colors of the corresponding pixels in the to-be-processed frame and sets it to low transparency, restoring the color of the main body portion so that it is presented normally in the web page. After this processing, the mask frame becomes a target frame that meets the display requirement, with a transparent background and a main body in natural colors; and when every mask frame in the mask video has become a target frame, the mask video becomes the target video with a transparent background and a natural-color main body.
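The two per-pixel steps above amount to a simple compositing rule. A minimal CPU-side sketch of that rule follows; the patent performs this in a shader, and the function and type names here are illustrative, not from the patent:

```typescript
// Normalized RGBA, all channels in 0..1 (as in a WebGL shader).
type RGBA = [number, number, number, number];

// maskPx: pixel from the mask frame (white main body / black background);
// sourcePx: pixel at the same coordinates in the to-be-processed frame.
function composeTargetPixel(maskPx: RGBA, sourcePx: RGBA): RGBA {
  if (maskPx[0] < 0.5) {
    // Background (second color): first value = high transparency (alpha 0).
    return [0, 0, 0, 0];
  }
  // Main body (first color): take the natural color from the source frame;
  // second value = low transparency (alpha 1, fully opaque).
  return [sourcePx[0], sourcePx[1], sourcePx[2], 1];
}
```

Applying this function to every pixel of a mask frame, with the to-be-processed frame as the color source, yields the target frame described above.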
In practical applications, the background of a video generated by a video tag is generally black, so some conventional techniques achieve background transparency by directly setting the transparency of black pixels to zero. However, because the main body often also contains black pixels, that approach severely damages picture quality. The method of the present invention overcomes this defect: as described above, the invention first strictly delimits the main body and the background via the mask video, and then applies transparency and color assignment separately to the accurately segmented main body and background, achieving background transparency without any damage to the main body of the picture.
In one embodiment, when the color of the main body portion of the mask video is white, the Web platform can implement the color assignment by image addition. Specifically, in this case, the Web platform first determines the corresponding pixel in the to-be-processed frame that has the same coordinates as the given pixel, then adds the pixel value of the given pixel to the pixel value of the corresponding pixel to obtain the pixel value of the given pixel in the target frame, implementing the color assignment of the main body pixels of the mask video more conveniently.
Step S103: outputting the target video composed of the target frames on a web page for display.
In this step, the Web platform draws the previously generated target video, which has a transparent background and a natural-color main body and meets the display requirement, onto the web page for presentation to the user, thereby achieving in-page display of a transparent-background video.
Fig. 3 is a schematic diagram of a WebGL rendering process according to an embodiment of the present invention. Referring to Fig. 3, JavaScript first creates a buffer object through an interface provided by WebGL and transmits the necessary coordinate and color information to the buffer. The vertex shader then reads the buffer object's data and, according to the transmitted parameters, extracts the vertex coordinates and the corresponding RGB (i.e., red-green-blue) color component values. Once the vertex coordinate data is available, WebGL is instructed, via the drawing call, to draw the graphics according to the vertex coordinates. The areas covered by the graphics are then converted into pixel fill information by a rasterization process involving known algorithms such as anti-aliasing and sampling. After rasterization, WebGL calls the fragment shader to draw each fragment; finally, each pixel is filled with the rasterized color and written into the color buffer, so that the final graphics and colors are displayed in the browser.
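A rendering flow like the one just described can be sketched with a minimal shader pair embedded as source strings. The sources, attribute, and uniform names below are illustrative assumptions, not taken from the patent; note also that the patent attributes the per-pixel work to the vertex shader, whereas this sketch places the mask test in the fragment shader, where per-pixel texture sampling conventionally occurs in WebGL:

```typescript
// Vertex shader: pass a full-screen quad through and derive texture
// coordinates from clip-space positions.
const vertexShaderSource = `
  attribute vec2 a_position;
  varying vec2 v_texCoord;
  void main() {
    v_texCoord = a_position * 0.5 + 0.5;     // map clip space [-1,1] to UV [0,1]
    gl_Position = vec4(a_position, 0.0, 1.0);
  }
`;

// Fragment shader: sample both frames; the mask's red channel drives alpha,
// so black background pixels come out fully transparent and white main-body
// pixels keep the natural color of the to-be-processed frame, fully opaque.
const fragmentShaderSource = `
  precision mediump float;
  uniform sampler2D u_sourceFrame; // to-be-processed frame
  uniform sampler2D u_maskFrame;   // pre-made mask frame (white body / black bg)
  varying vec2 v_texCoord;
  void main() {
    vec4 mask = texture2D(u_maskFrame, v_texCoord);
    vec4 color = texture2D(u_sourceFrame, v_texCoord);
    gl_FragColor = vec4(color.rgb, mask.r);
  }
`;
```

Both video frames would be uploaded as textures (e.g., via texImage2D on a canvas-backed WebGL context) before each draw call, matching the buffer-and-texture setup described earlier.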
One particular embodiment of the present invention is described below, which relates to the introduction of animated video in a web page. See fig. 2 and 4.
Existing approaches to introducing animated video into a web page include the following:
First, GIF animation: GIF is a compressed bitmap format that supports transparent background images and works on all common operating systems. It stores multiple images in one file to form an animation, can be produced with related software, and is finally embedded in the web page by a front-end engineer using front-end technology.
Second, Video animation: Video is a tag element newly introduced in HTML5. Assigning a video file address to its src attribute is enough to present the video, and automatic playback and looping can be used to achieve an animation effect.
Third, CSS3 (Cascading Style Sheets Level 3) animation: CSS3 transitions can smoothly animate a change in an element's style or state, producing the effect of an animated video. CSS3 animations are animation in the true sense: by controlling keyframes and repeat counts, page tag elements can transition smoothly according to the configured style changes, and relatively complex animation effects can be achieved.
The above three approaches are only the most frequently used animation techniques in front-end development; animation can also be achieved with other techniques, such as Flash, JavaScript + HTML, JavaScript + Canvas, and the like.
The existing methods for introducing animated video into a web page have the following defects:
(1) GIF animation supports only 256 colors, so detailed animations may show color banding, and it supports only limited transparency, with no semi-transparent or fading effects. High-definition GIF animations are relatively large, and if displayed in compressed form may interact poorly due to dropped frames. The large size also affects front-end page performance: if many GIF images are loaded during first-screen rendering, page rendering can be blocked by the loading, causing a white-screen phenomenon and a very poor user experience.
(2) The background color of the Video tag defaults to black; other colors can be applied by modifying the CSS style and overlaying elements with the z-index hierarchy if desired, but a transparent background cannot be achieved.
(3) CSS3 is a popular animation implementation at present, but its animation is limited to a given tag element, and the effects are simple and single, such as rotation, zooming and fading. These are well suited to optimizing page interactions and satisfying basic user perception, but if one wants to achieve a complex animation effect, CSS3 properties alone are simply not enough.
The method mainly solves the problems that arise when a complex animation effect needs to be displayed on a Web front-end page: conventional technical means introduce too many static resources, too large a volume and slow page loading, while existing resources achieve effects too simple to reach the target. The method uses the natural advantage of WebGL, rendering high-performance interactive 3D and 2D graphics in any compatible Web browser, together with Canvas's good animation-rendering performance, and reconstructs and optimizes the Video tag so that it has an Alpha channel (namely transparency), achieving a transparent background color. Combined with the Video tag's characteristics such as loopability, automatic playback and audio channels, it achieves animation display with good performance, small volume and simultaneous sound playback. Finally, the result is further encapsulated and packaged and published to an NPM (Node Package Manager) source; to a user it is just a plug-in that only needs to be downloaded and configured with the corresponding parameters for simple use.
According to the technical scheme, the result is finally published to NPM and used as a plug-in, which reduces the user's learning cost for the code to a certain extent: the plug-in can be used out of the box. Meanwhile, the plug-in combines the good characteristics of technologies such as Canvas, WebGL and Video, so that when it is used and displayed on a Web front-end page, the performance consumption of the page is reduced, unnecessary blocking is reduced, and the user experience is good.
The specific implementation steps of this embodiment are as follows:
the first step is as follows: defining a class, namely a constructor AlphaVideo in JavaScript, with default parameters autoplay: true (automatic playback), onError (error-handling function) and onPlay (video-playing function); besides these, the constructor accepts parameters src (video stream address), loop (loop playback), canvas (animation Canvas element), width and height. After the constructor is called with the corresponding parameters, it returns the target video with a transparent background.
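The constructor's parameter handling can be sketched as follows. This is a minimal illustration: only the merging of user parameters with defaults is shown, the DOM and WebGL wiring is omitted, and the concrete default values (and the requirement that src be present) are assumptions not spelled out in the source.

```javascript
// Default parameters per the description; the concrete values for loop,
// width and height are illustrative assumptions.
const ALPHA_VIDEO_DEFAULTS = {
  autoplay: true, // automatic playback, as stated in the source
  loop: true,
  width: 375,
  height: 300,
  onError: null,  // error-handling callback
  onPlay: null,   // video-playing callback
};

class AlphaVideo {
  constructor(options = {}) {
    // The video stream address is assumed to be required.
    if (!options.src) {
      throw new Error('AlphaVideo: a video stream address (src) is required');
    }
    // User-supplied parameters override the defaults.
    this.options = Object.assign({}, ALPHA_VIDEO_DEFAULTS, options);
  }
}
```

A caller would then write `new AlphaVideo({ src, canvas, loop })` and receive an instance configured with the merged parameters.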
Secondly, initializing a Video element: firstly calling the document.createElement('video') method to create a Video tag and assigning the user-supplied parameters together with the default parameters; then calling the addEventListener method to add a play (redraw animation) handler and an error (error handling) handler on the Video tag; and finally adding the Video tag to the webpage with the document.body.appendChild method.
And thirdly, after the Video element is initialized, calling the texImage2D method provided by WebGL, setting TEXTURE_2D as the texture target, specifying the texture format RGB, setting the data type of the pixel data to UNSIGNED_BYTE, and finally pointing the image source at the Video element video so as to bind the texture object to the video; then calling the drawArrays method provided by WebGL to start drawing the image.
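A minimal sketch of this texture-binding step, using standard WebGL 1.0 calls. The function name and the wrap/filter parameters are illustrative assumptions (clamping and linear filtering are the usual choices for non-power-of-two video frames); the `gl` context is passed in by the caller.

```javascript
// Bind the current video frame to a WebGL texture object, as described above.
function bindVideoTexture(gl, video) {
  const texture = gl.createTexture();
  gl.bindTexture(gl.TEXTURE_2D, texture);
  // Video frames are generally not power-of-two, so clamp the texture
  // coordinates and use linear filtering (assumed, not stated in the source).
  gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_S, gl.CLAMP_TO_EDGE);
  gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_T, gl.CLAMP_TO_EDGE);
  gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.LINEAR);
  // Upload the frame: mip level 0, RGB format, UNSIGNED_BYTE pixel data,
  // with the Video element itself as the image source.
  gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGB, gl.RGB, gl.UNSIGNED_BYTE, video);
  return texture;
}
```

In a browser, `gl` would be obtained from `canvas.getContext('webgl')`, and `gl.drawArrays` would then be called to draw the image.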
Fourth, since a video animation is formed of many frames, the processing for each frame is the same in this embodiment, and a loop step may be used to process each frame in the video animation.
After each frame of the video is acquired, the region that needs to be transparent in the frame must be found and made transparent. Since the primary color of the animation may itself be black, directly finding black elements and making them transparent would likely damage the original animation. Therefore a UI designer makes a side-by-side video in advance (see fig. 2, i.e. a mask video is made in advance): on the left, the mask video's animation element is white and its background is black; on the right, the video animation to be processed keeps its natural colors for both subject and background. In operation, each left-hand pixel is analyzed: white represents the subject and black represents the transparent background to be produced. For a white pixel, the RGB of the corresponding pixel on the right is obtained and assigned to it and its transparency is set to 1, completing the color restoration of the left-hand subject; for a background pixel on the left, the transparency is set to zero, converting the black background into a transparent one. The main steps involved are as follows:
firstly, obtaining the Canvas element and setting its width and height; if the DOM (Document Object Model) element is detected not to exist, creating it using document.createElement.
Secondly, calling new Float32Array() to set the position coordinates for the buffer, calling createBuffer() provided by WebGL to create the buffer, calling bindBuffer() of WebGL to bind the buffer object for the mask video and the video to be processed, and finally calling the bufferData() method of WebGL to write the vertex data into the buffer object. In practice, two buffers may be created to store the whole frame image (i.e. the mask frame combined with the frame to be processed) and the left-hand image (the mask frame) respectively; the former facilitates the color and transparency assignments, and the latter is used for writing the target frame.
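One way to build the coordinate data for these buffers is with per-half texture coordinates: the drawn quad covers the whole canvas, while the texture coordinates sample only one half of the side-by-side frame. The 0.5 split assumes the mask occupies exactly the left half, as in the source's layout; the function name is illustrative.

```javascript
// Texture coordinates (u, v) for two triangles forming a quad, sampling
// either the left (mask) or right (color) half of the side-by-side frame.
function halfFrameTexCoords(side /* 'left' | 'right' */) {
  const x0 = side === 'left' ? 0.0 : 0.5;
  const x1 = side === 'left' ? 0.5 : 1.0;
  return new Float32Array([
    x0, 0, x1, 0, x0, 1, // first triangle
    x0, 1, x1, 0, x1, 1, // second triangle
  ]);
}
```

In a real WebGL setup each array would then be uploaded with `gl.createBuffer()`, `gl.bindBuffer(gl.ARRAY_BUFFER, buf)` and `gl.bufferData(gl.ARRAY_BUFFER, data, gl.STATIC_DRAW)`.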
Thirdly, the vertex shader reads the data of the buffer object, redraws according to the vertex coordinates of the image and the corresponding RGB values, and calls the texture2D method for sampling, so that the transparent background can be realized.
Fourthly, executing rasterization processing to realize smooth transition of colors.
Fifthly, after rasterization is finished, calling the fragment shader fragment by fragment, filling each pixel with the rasterized color, and presenting the final effect.
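The per-pixel rule carried out by the shader steps above can be sketched on the CPU for clarity. This is an illustration of the logic only (in the embodiment it runs on the GPU); the side-by-side RGBA layout and the white-detection threshold of 127 are assumptions.

```javascript
// Turn one side-by-side RGBA frame (mask on the left, real animation on
// the right) into a target frame with a transparent background.
function makeTargetFrame(sideBySide, width, height) {
  const half = width / 2;
  const out = new Uint8ClampedArray(half * height * 4); // zero-initialized
  for (let y = 0; y < height; y++) {
    for (let x = 0; x < half; x++) {
      const maskIdx = (y * width + x) * 4;         // left (mask) pixel
      const colorIdx = (y * width + x + half) * 4; // matching right pixel
      const outIdx = (y * half + x) * 4;
      const isBody = sideBySide[maskIdx] > 127;    // white mask pixel = subject
      if (isBody) {
        // Restore the subject's true color from the right-hand frame, opaque.
        out[outIdx] = sideBySide[colorIdx];
        out[outIdx + 1] = sideBySide[colorIdx + 1];
        out[outIdx + 2] = sideBySide[colorIdx + 2];
        out[outIdx + 3] = 255; // transparency 1 in the source's terms
      } else {
        out[outIdx + 3] = 0;   // black mask pixel: fully transparent background
      }
    }
  }
  return out;
}
```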
The above process is only a complete step of one frame of image, and for a video, each frame of image needs to be processed circularly, and this function can be realized by using a requestAnimationFrame () method.
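The per-frame loop can be sketched with the scheduler injected, so the same logic works outside a browser; in a page one would pass `window.requestAnimationFrame.bind(window)` as the scheduler. The `maxFrames` guard is an addition for testability, not part of the source's method.

```javascript
// Repeatedly redraw processed frames, queuing each next frame with the
// supplied scheduler (requestAnimationFrame in a browser).
function startRenderLoop(drawFrame, schedule, maxFrames = Infinity) {
  let frame = 0;
  function tick() {
    if (frame >= maxFrames) return; // stop condition for non-browser use
    drawFrame(frame++);             // process and draw one frame
    schedule(tick);                 // queue the next frame
  }
  schedule(tick);
}
```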
When background transparency processing is performed on multiple videos, the AlphaVideo constructor can be exposed externally. A user only needs to create an instance based on the constructor, pass in the predefined parameters, and finally call AV (AV represents an instance name) to present the final effect. The method is finally packaged and published to NPM, and users can download it directly through the npm install command and use it.
Therefore, the transparent-background effect of an animation video can be realized by combining Canvas and Video based on the WebGL technology, the volume of the animation video and the performance consumption of the client can be reduced, and at the same time a simple, out-of-the-box method is provided.
It should be noted that for the above-mentioned embodiments of the method, for convenience of description, the embodiments are described as a series of combinations of actions, but those skilled in the art should understand that the present invention is not limited by the described order of actions, and that some steps may in fact be performed in other orders or simultaneously. Moreover, those skilled in the art will appreciate that the embodiments described in the specification are presently preferred and that no acts or modules are necessarily required to implement the invention.
To facilitate a better implementation of the above-described aspects of embodiments of the present invention, the following also provides relevant means for implementing the above-described aspects.
Referring to fig. 5, a video processing apparatus according to an embodiment of the present invention may include: a video determining unit 501, a target frame forming unit 502, and an output unit 503.
The video determining unit 501 may be configured to create a video tag in a browser engine, and obtain a video to be processed based on the video tag; the video to be processed corresponds to a pre-made mask video, and the mask video and the video to be processed have a main body part and a background part which are consistent in shape and size; the main part pixels of the mask video are in a first color, and the background part pixels are in a second color different from the first color; the target frame forming unit 502 may be configured to: for any pair of video frames in the video to be processed and the mask video, setting the transparency of the background part pixels identified by the second color in the mask frames of the pair of video frames to be a first numerical value representing high transparency by using a shader based on a preset drawing protocol; for any pixel of a main body part identified by a first color in a mask frame of the pair of video frames, assigning the color of a corresponding pixel in a frame to be processed of the pair of video frames to the any pixel by using the shader, and setting the transparency of the any pixel to be a second numerical value representing low transparency so that the mask frame forms a target frame; the output unit 503 may be configured to output a target video composed of the target frames on a webpage for presentation.
In an embodiment of the present invention, the first color is white; the device 500 may further comprise: a buffer unit for: before the shader based on the preset drawing protocol sets the transparency of the background part pixels identified by the second color in the mask frame of the video frame to be a first numerical value representing high transparency, creating a canvas label in the browser engine, and attaching the video frame as a texture in a buffer area created by calling the drawing protocol interface based on the canvas formed by the canvas label; the target frame forming unit 502 may be further configured to: determining a corresponding pixel in the frame to be processed, wherein the corresponding pixel has the same coordinate with any pixel; and adding the pixel value of any pixel and the pixel value of the corresponding pixel to obtain the pixel value of any pixel in the target frame.
Preferably, the second color is black, the first value is zero, the second value is one, and the shader is a vertex shader; the Video tags are Video tags of hypertext markup language HTML5, the Canvas tags are Canvas tags of HTML5, and the drawing protocol is a webpage drawing protocol WebGL.
According to the technical scheme of the embodiment of the invention, before background transparency processing is carried out on the video to be processed in the webpage, a corresponding mask video is made in advance, the mask video is the same in other aspects except that the color of the mask video is different from that of the video to be processed, and in the aspect of color, a main body part and a background part of the mask video are respectively provided with a first color and a second color, so that the main body and the background in the video can be accurately distinguished through the color. When the vertex shader is used for drawing, the background in the second color positioning mask video is set to be zero in transparency, and the main body in the first color positioning mask video is assigned by the color of the pixel at the corresponding position of the video to be processed, so that the background transparency is realized on the premise of ensuring the picture quality of the main body of the video, the application range of the video in a webpage is enlarged, the use effect of the video is improved, and the content expression of the webpage is enriched.
Fig. 6 shows an exemplary system architecture 600 of a video processing method or video processing apparatus to which embodiments of the invention may be applied.
As shown in fig. 6, the system architecture 600 may include terminal devices 601, 602, 603, a network 604, and a server 605 (this architecture is merely an example, and the components included in a particular architecture may be adapted according to application specific circumstances). The network 604 serves as a medium for providing communication links between the terminal devices 601, 602, 603 and the server 605. Network 604 may include various types of connections, such as wire, wireless communication links, or fiber optic cables.
A user may use the terminal devices 601, 602, 603 to interact with the server 605 via the network 604 to receive or send messages or the like. Various client applications, such as a browser application (for example only), may be installed on the terminal devices 601, 602, 603.
The terminal devices 601, 602, 603 may be various electronic devices having a display screen and supporting web browsing, including but not limited to smart phones, tablet computers, laptop portable computers, desktop computers, and the like.
The server 605 may be a server providing various services, such as a web server (for example only) providing support for a user using a browser application operated by the terminal devices 601, 602, 603. The web server may process the received web page request and feed back the processing result (e.g., the requested web page, by way of example only) to the terminal devices 601, 602, 603.
It should be noted that the video processing method provided by the embodiment of the present invention is generally executed by the server 605, and accordingly, the video processing apparatus is generally disposed in the server 605.
It should be understood that the number of terminal devices, networks, and servers in fig. 6 is merely illustrative. There may be any number of terminal devices, networks, and servers, as desired for implementation.
The invention also provides the electronic equipment. The electronic device of the embodiment of the invention comprises: one or more processors; a storage device, configured to store one or more programs, which when executed by the one or more processors, cause the one or more processors to implement the video processing method provided by the present invention.
Referring now to FIG. 7, shown is a block diagram of a computer system 700 suitable for use with the electronic device implementing an embodiment of the present invention. The electronic device shown in fig. 7 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiment of the present invention.
As shown in fig. 7, the computer system 700 includes a Central Processing Unit (CPU) 701, which can perform various appropriate actions and processes according to a program stored in a Read Only Memory (ROM) 702 or a program loaded from a storage section 708 into a Random Access Memory (RAM) 703. In the RAM 703, various programs and data necessary for the operation of the computer system 700 are also stored. The CPU 701, the ROM 702, and the RAM 703 are connected to each other via a bus 704. An input/output (I/O) interface 705 is also connected to bus 704.
The following components are connected to the I/O interface 705: an input portion 706 including a keyboard, a mouse, and the like; an output section 707 including a display such as a Cathode Ray Tube (CRT), a Liquid Crystal Display (LCD), and the like, and a speaker; a storage section 708 including a hard disk and the like; and a communication section 709 including a network interface card such as a LAN card, a modem, or the like. The communication section 709 performs communication processing via a network such as the internet. A drive 710 is also connected to the I/O interface 705 as needed. A removable medium 711 such as a magnetic disk, an optical disk, a magneto-optical disk, a semiconductor memory, or the like is mounted on the drive 710 as necessary, so that a computer program read out therefrom is mounted into the storage section 708 as necessary.
In particular, the processes described in the main step diagrams above may be implemented as computer software programs, according to embodiments of the present disclosure. For example, embodiments of the invention include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method illustrated in the main step diagram. In the above-described embodiment, the computer program can be downloaded and installed from a network through the communication section 709, and/or installed from the removable medium 711. The computer program, when executed by the central processing unit 701, performs the above-described functions defined in the system of the present invention.
It should be noted that the computer readable medium shown in the present invention can be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present invention, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In the present invention, a computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: wireless, wire, fiber optic cable, RF, etc., or any suitable combination of the foregoing.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams or flowchart illustration, and combinations of blocks in the block diagrams or flowchart illustration, can be implemented by special purpose hardware-based systems that perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units described in the embodiments of the present invention may be implemented by software or hardware. The described units may also be provided in a processor, and may be described as: a processor includes a video determination unit, a target frame formation unit, and an output unit. Where the names of these cells do not in some cases constitute a limitation of the cell itself, for example, the video determination unit may also be described as a "cell providing the target frame forming unit with the video to be processed and the mask video".
As another aspect, the present invention also provides a computer-readable medium that may be contained in the apparatus described in the above embodiments; or may be separate and not incorporated into the device. The computer readable medium carries one or more programs which, when executed by the apparatus, cause the apparatus to perform steps comprising: creating a video tag in a browser engine, and acquiring a video to be processed based on the video tag; the video to be processed corresponds to a pre-made mask video, and the mask video and the video to be processed have a main body part and a background part which are consistent in shape and size; the main part pixels of the mask video are in a first color, and the background part pixels are in a second color different from the first color; setting the transparency of the background part pixels marked by the second color in the mask frame of the video frame as a first numerical value representing high transparency by using a shader based on a preset drawing protocol for any pair of video frames in the video to be processed and the mask video; for any pixel of a main body part identified by a first color in a mask frame of the pair of video frames, assigning the color of a corresponding pixel in a frame to be processed of the pair of video frames to the any pixel by using the shader, and setting the transparency of the any pixel to be a second numerical value representing low transparency so that the mask frame forms a target frame; and outputting the target video consisting of the target frames on a webpage for displaying.
In the technical scheme of the embodiment of the invention, before background transparentization processing is carried out on the video to be processed in the webpage, a corresponding mask video is prepared in advance, the mask video is the same in other aspects except that the color of the mask video is different from that of the video to be processed, and in the aspect of color, a main body part and a background part of the mask video are respectively provided with a first color and a second color, so that the main body and the background in the video can be accurately distinguished through the color. When the vertex shader is used for drawing, the background in the second color positioning mask video is assigned to zero, and the main body in the first color positioning mask video is assigned by the color of the pixel at the corresponding position of the video to be processed, so that the background transparency is realized on the premise of ensuring the picture quality of the main body of the video, the use range of the video in a webpage is favorably expanded, the use effect of the video is improved, and the content expression of the webpage is enriched.
The above-described embodiments should not be construed as limiting the scope of the invention. It should be understood by those skilled in the art that various modifications, combinations, sub-combinations and substitutions may occur depending on design requirements and other factors. Any modification, equivalent replacement, and improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.
Claims (10)
1. A video processing method, comprising:
creating a video label in a browser engine, and acquiring a video to be processed based on the video label; the video to be processed corresponds to a pre-made mask video, and the mask video and the video to be processed have a main body part and a background part which are consistent in shape and size; the main part pixels of the mask video are in a first color, and the background part pixels are in a second color different from the first color;
setting the transparency of the background part pixels marked by the second color in the mask frame of the video frame as a first numerical value representing high transparency by using a shader based on a preset drawing protocol for any pair of video frames in the video to be processed and the mask video; for any pixel of a main body part identified by a first color in a mask frame of the pair of video frames, assigning the color of a corresponding pixel in a frame to be processed of the pair of video frames to the pixel by using the shader, and setting the transparency of the pixel to be a second numerical value representing low transparency so as to enable the mask frame to form a target frame;
and outputting the target video consisting of the target frames on a webpage for displaying.
2. The method of claim 1, further comprising:
before the transparency of the background part pixels identified by the second color in the mask frame of the video frame is set to be a first numerical value representing high transparency by the shader based on the preset drawing protocol, creating a canvas label in the browser engine, and attaching the video frame as a texture in a buffer area created by calling the drawing protocol interface based on the canvas formed by the canvas label.
3. The method of claim 1, wherein the first color is white; and assigning, by the shader, the color of the corresponding pixel in the frame to be processed of the pair of video frames to the any pixel, including:
determining a corresponding pixel in the frame to be processed, wherein the corresponding pixel has the same coordinate with any pixel;
and adding the pixel value of any pixel and the pixel value of the corresponding pixel to obtain the pixel value of any pixel in the target frame.
4. The method of claim 3, wherein the second color is black, the first value is zero, the second value is one, and the shader is a vertex shader.
5. The method of claim 2, wherein the Video tag is a Video tag of hypertext markup language (HTML5), the Canvas tag is a Canvas tag of HTML5, and the drawing protocol is a web page drawing protocol (WebGL).
6. A video processing apparatus, comprising:
the video determining unit is used for creating a video label in a browser engine and acquiring a video to be processed based on the video label; the video to be processed corresponds to a pre-made mask video, and the mask video and the video to be processed have a main body part and a background part which are consistent in shape and size; the main part pixels of the mask video are in a first color, and the background part pixels are in a second color different from the first color;
a target frame forming unit for: setting the transparency of the background part pixels marked by the second color in the mask frame of the video frame as a first numerical value representing high transparency by using a shader based on a preset drawing protocol for any pair of video frames in the video to be processed and the mask video; for any pixel of a main body part identified by a first color in a mask frame of the pair of video frames, assigning the color of a corresponding pixel in a frame to be processed of the pair of video frames to the any pixel by using the shader, and setting the transparency of the any pixel to be a second numerical value representing low transparency so that the mask frame forms a target frame;
and the output unit is used for outputting the target video consisting of the target frames to a webpage for displaying.
7. The device of claim 6, wherein the first color is white;
the apparatus further comprises: a buffer unit for: before the shader based on the preset drawing protocol is used for setting the transparency of the pixels of the background part identified by the second color in the mask frame of the video frame to be a first numerical value representing high transparency, a canvas label is created in the browser engine, and the video frame is attached to a buffer area created by calling the drawing protocol interface as a texture based on the canvas formed by the canvas label;
the target frame forming unit is further configured to: determining a corresponding pixel in the frame to be processed, wherein the corresponding pixel has the same coordinate with any pixel; and adding the pixel value of any pixel and the pixel value of the corresponding pixel to obtain the pixel value of any pixel in the target frame.
8. The apparatus of claim 7, wherein the second color is black, the first value is zero, the second value is one, and the shader is a vertex shader;
the Video tags are Video tags of hypertext markup language HTML5, the Canvas tags are Canvas tags of HTML5, and the drawing protocol is a webpage drawing protocol WebGL.
9. An electronic device, comprising:
one or more processors;
a storage device for storing one or more programs,
when executed by the one or more processors, cause the one or more processors to implement the method of any one of claims 1-5.
10. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the method according to any one of claims 1-5.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210961407.5A CN115391692A (en) | 2022-08-11 | 2022-08-11 | Video processing method and device |
Publications (1)
Publication Number | Publication Date |
---|---|
CN115391692A true CN115391692A (en) | 2022-11-25 |
Family
ID=84118715
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210961407.5A Pending CN115391692A (en) | 2022-08-11 | 2022-08-11 | Video processing method and device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN115391692A (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN117093638A (en) * | 2023-10-17 | 2023-11-21 | 博智安全科技股份有限公司 | Micro-service data initialization method, system, electronic equipment and storage medium |
Similar Documents
Publication | Title |
---|---|
CN107832108B | Rendering method and device for 3D canvas webpage elements, and electronic equipment |
US10387549B2 | Procedurally expressing graphic objects for web pages |
US9710883B2 | Flexible control in resizing of visual displays |
CN108665520A | Method and device for page animation rendering |
US9342322B2 | System and method for layering using tile-based renderers |
WO2019228013A1 | Method, apparatus and device for displaying rich text on 3D model |
CA2510776A1 | Common charting using shapes |
CN114782612A | Image rendering method, device, electronic device and storage medium |
US9153193B2 | Primitive rendering using a single primitive type |
US11593908B2 | Method for preprocessing image in augmented reality and related electronic device |
CN111460342B | Page rendering display method and device, electronic equipment and computer storage medium |
KR20160120128A | Display apparatus and control method thereof |
CN111951356A | Animation rendering method based on the JSON data format |
CN109144655B | Method, device, system and medium for dynamically displaying images |
CN115391692A | Video processing method and device |
US20050116946A1 | Graphic decoder including graphic display accelerating function based on commands, graphic display accelerating method therefor and image reproduction apparatus |
KR20050040712A | 2-dimensional graphic decoder including graphic display accelerating function based on commands, graphic display accelerating method therefor and reproduction apparatus |
US10067914B2 | Techniques for blending document objects |
CN116010736A | Vector icon processing method, device, equipment and storage medium |
CN118394312B | 3D large-screen rotation display method and device based on three.js |
US20250118337A1 | Video processing method and apparatus |
CN110288685B | Gear mode data display method and device based on the SVG shade function |
CN110032712A | Font rendering device and terminal |
KR20240062268A | Graphic user interface providing method and apparatus for home menu on IPTV or OTT application |
CN116017058A | Video playing method, device, electronic equipment and readable storage medium |
Legal Events
Code | Title |
---|---|
PB01 | Publication |
SE01 | Entry into force of request for substantive examination |