Disclosure of Invention
The present application provides a page processing method, apparatus, and device, which are used to improve the reliability with which an electronic device displays a webpage.
In a first aspect, an embodiment of the present application provides a page processing method, including:
determining a plurality of elements in a first webpage and element information of the elements, wherein the element information comprises a display period and a display position of the elements in the first webpage;
acquiring the display duration of the first webpage;
processing the plurality of elements according to the display duration and the element information to obtain video information corresponding to the first webpage;
and sending the video information corresponding to the first webpage to the electronic device.
In one possible implementation manner, the plurality of elements comprise static elements and dynamic elements, and processing the plurality of elements according to the display duration and the element information to obtain the video information corresponding to the first webpage comprises:
processing the plurality of static elements according to the display duration and the element information of the static elements to obtain a static video;
and determining the video information corresponding to the first webpage according to the static video, the dynamic element, and the element information of the dynamic element.
In one possible implementation manner, processing the plurality of static elements according to the display duration and the element information of the static elements to obtain the static video comprises:
determining the number N of video frames according to the display duration and a preset frame rate, wherein N is an integer greater than 1;
generating N frames of target images according to the static elements and the element information of the static elements;
and performing splicing processing on the N frames of target images to obtain the static video.
In one possible implementation manner, generating the N frames of target images according to the plurality of static elements and the element information of the plurality of static elements comprises:
according to the element information of each static element, determining an RGB image corresponding to the static element and a frame identifier corresponding to the RGB image, wherein the frame identifier is an integer greater than or equal to 1 and less than or equal to N;
determining N image groups according to the RGB image corresponding to each static element and the frame identifier corresponding to the RGB image, wherein each image group comprises at least one RGB image, each RGB image in the ith image group corresponds to the frame identifier i, and i is an integer greater than or equal to 1 and less than or equal to N;
and performing fusion processing on the RGB images in each image group respectively to obtain the N frames of target images.
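As a non-limiting sketch of the grouping described above, the following snippet collects, for each frame identifier, the group of images to be fused into that frame. Stand-in string values replace real RGB images, and `group_images_by_frame` is an illustrative helper name, not part of the application:

```python
from collections import defaultdict

def group_images_by_frame(images_with_ids, n_frames):
    """Collect, for each frame identifier i (1..n_frames), the group of RGB
    images whose frame identifiers include i; the i-th group is later fused
    into the i-th target image."""
    groups = defaultdict(list)
    for image, frame_ids in images_with_ids:
        for fid in frame_ids:
            if 1 <= fid <= n_frames:
                groups[fid].append(image)
    return groups

# Two stand-in "RGB images" displayed in overlapping frame ranges:
groups = group_images_by_frame([("img_a", [1, 2]), ("img_b", [2, 3])], 3)
```

Here frame 2 would be produced by fusing `img_a` and `img_b`, while frames 1 and 3 each contain a single image.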
In one possible implementation manner, for any static element, determining the RGB image corresponding to the static element and the frame identifier corresponding to the RGB image according to the element information of the static element comprises:
generating the RGB image corresponding to the static element according to the display position of the static element in the first webpage;
and determining the frame identifier corresponding to the RGB image according to the display period of the static element in the first webpage and the preset frame rate, wherein the RGB image corresponds to at least one frame identifier.
In one possible implementation manner, performing the splicing processing on the N frames of target images to obtain the static video comprises:
performing format conversion processing on the N frames of target images to obtain N frames of images in a target format;
and performing splicing processing on the N frames of images in the target format to obtain the static video.
In one possible implementation manner, determining the video information corresponding to the first webpage according to the static video, the dynamic element, and the element information of the dynamic element comprises:
determining that the video information comprises the static video, the dynamic element, and the element information of the dynamic element;
or,
performing fusion processing on the static video and the dynamic element according to the element information of the dynamic element to obtain a fusion video, and determining that the video information comprises the fusion video.
In a second aspect, an embodiment of the present application provides a page processing method, including:
receiving video information corresponding to a first webpage, wherein the first webpage comprises a plurality of elements, the video information is determined according to the element information of the plurality of elements, and the element information comprises a display period and a display position of the elements in the first webpage;
and determining a target video according to the video information and playing the target video.
In one possible implementation manner, the plurality of elements comprise a static element and a dynamic element, wherein:
the video information comprises a static video, the dynamic element, and the element information of the dynamic element, wherein the static video is determined according to the element information of the static element in the first webpage;
or,
the video information comprises a fusion video, wherein the fusion video is obtained by performing fusion processing on the static video and the dynamic element.
In one possible implementation manner, the video information comprises the static video, the dynamic element, and the element information of the dynamic element, and determining the target video according to the video information comprises:
performing fusion processing on the static video and the dynamic element according to the element information of the dynamic element to obtain the target video;
or,
the video information comprises the fusion video, and determining the target video according to the video information comprises:
determining the fusion video as the target video.
In a third aspect, an embodiment of the present application provides a page processing apparatus, comprising a determining module, an acquiring module, a processing module, and a sending module, wherein:
the determining module is configured to determine a plurality of elements in a first webpage and element information of the elements, wherein the element information comprises a display period and a display position of the elements in the first webpage;
the acquiring module is configured to acquire the display duration of the first webpage;
the processing module is configured to process the plurality of elements according to the display duration and the element information to obtain video information corresponding to the first webpage;
and the sending module is configured to send the video information corresponding to the first webpage to the electronic device.
In a possible implementation manner, the processing module is specifically configured to:
process the plurality of static elements according to the display duration and the element information of the static elements to obtain a static video;
and determine the video information corresponding to the first webpage according to the static video, the dynamic element, and the element information of the dynamic element.
In a possible implementation manner, the processing module is specifically configured to:
determine the number N of video frames according to the display duration and a preset frame rate, wherein N is an integer greater than 1;
generate N frames of target images according to the static elements and the element information of the static elements;
and perform splicing processing on the N frames of target images to obtain the static video.
In a possible implementation manner, the processing module is specifically configured to:
determine, according to the element information of each static element, an RGB image corresponding to the static element and a frame identifier corresponding to the RGB image, wherein the frame identifier is an integer greater than or equal to 1 and less than or equal to N;
determine N image groups according to the RGB image corresponding to each static element and the frame identifier corresponding to the RGB image, wherein each image group comprises at least one RGB image, each RGB image in the ith image group corresponds to the frame identifier i, and i is an integer greater than or equal to 1 and less than or equal to N;
and perform fusion processing on the RGB images in each image group respectively to obtain the N frames of target images.
In a possible implementation manner, the processing module is specifically configured to:
generate the RGB image corresponding to the static element according to the display position of the static element in the first webpage;
and determine the frame identifier corresponding to the RGB image according to the display period of the static element in the first webpage and the preset frame rate, wherein the RGB image corresponds to at least one frame identifier.
In a possible implementation manner, the processing module is specifically configured to:
perform format conversion processing on the N frames of target images to obtain N frames of images in a target format;
and perform splicing processing on the N frames of images in the target format to obtain the static video.
In a possible implementation manner, the processing module is specifically configured to:
determine that the video information comprises the static video, the dynamic element, and the element information of the dynamic element;
or,
perform fusion processing on the static video and the dynamic element according to the element information of the dynamic element to obtain a fusion video, and determine that the video information comprises the fusion video.
In a fourth aspect, an embodiment of the present application provides a page processing apparatus, comprising a receiving module, a determining module, and a playing module, wherein:
the receiving module is configured to receive video information corresponding to a first webpage, wherein the first webpage comprises a plurality of elements, the video information is determined according to the element information of the plurality of elements, and the element information comprises a display period and a display position of the elements in the first webpage;
the determining module is configured to determine a target video according to the video information;
and the playing module is configured to play the target video.
In one possible implementation manner, the video information comprises a static video, the dynamic element, and the element information of the dynamic element, wherein the static video is determined according to the element information of the static element in the first webpage;
or,
the video information comprises a fusion video, wherein the fusion video is obtained by performing fusion processing on the static video and the dynamic element.
In one possible implementation manner, the determining module is specifically configured to:
perform fusion processing on the static video and the dynamic element according to the element information of the dynamic element to obtain the target video;
or,
when the video information comprises the fusion video, determine the fusion video as the target video.
In a fifth aspect, an embodiment of the present application provides a cloud device, comprising a memory and a processor, wherein:
the memory stores computer-executable instructions;
and the processor executes the computer-executable instructions stored in the memory, causing the processor to perform the page processing method according to any one of the first aspects.
In a sixth aspect, an embodiment of the present application provides an electronic device, comprising a memory and a processor, wherein:
the memory stores computer-executable instructions;
and the processor executes the computer-executable instructions stored in the memory, causing the processor to perform the page processing method according to any one of the second aspects.
In a seventh aspect, embodiments of the present application provide a computer-readable storage medium having stored therein computer-executable instructions for implementing the page processing method of any one of the first aspects when the computer-executable instructions are executed by a processor.
In an eighth aspect, embodiments of the present application provide a computer-readable storage medium having stored therein computer-executable instructions for implementing the page processing method of any one of the second aspects when the computer-executable instructions are executed by a processor.
In a ninth aspect, an embodiment of the present application provides a computer program product comprising a computer program which, when executed by a processor, implements the page processing method of any one of the first aspects.
In a tenth aspect, embodiments of the present application provide a computer program product comprising a computer program which when executed by a processor implements the page processing method of any of the second aspects.
In the embodiments of the present application, the cloud device can acquire the display duration of a first webpage, a plurality of elements in the first webpage, and the element information of the elements, and can process the plurality of static elements according to the display duration and the element information corresponding to the static elements to obtain a static video. The cloud device can use the static video, the dynamic element, and the element information corresponding to the dynamic element as the video information, or can perform fusion processing on the static video and the dynamic element according to the element information corresponding to the dynamic element to obtain the video information corresponding to the first webpage. The cloud device can then send the video information corresponding to the first webpage to the electronic device. After receiving the video information corresponding to the first webpage, the electronic device can determine a target video according to the video information and play the target video. Because the cloud device converts the webpage into corresponding video information, the problem that the electronic device cannot be compatible with the format of the content in the webpage is avoided, and the reliability with which the electronic device displays the webpage can be improved.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the technical solutions of the present application will be clearly and completely described below with reference to specific embodiments of the present application and corresponding drawings. It will be apparent that the described embodiments are only some, but not all, embodiments of the application. All other embodiments, which can be made by those skilled in the art based on the embodiments of the application without making any inventive effort, are intended to be within the scope of the application.
Fig. 1 is a schematic diagram of an application scenario provided in an exemplary embodiment of the present application. As shown in Fig. 1, the scenario includes a cloud device and a plurality of electronic devices. For example, the plurality of electronic devices may include electronic device 1, electronic device 2, and/or electronic device n. The cloud device can communicate with each electronic device. The cloud device may be a computer device, for example, a computer or the like. Each electronic device has a display screen; for example, the electronic device may be a large-screen terminal, a mobile phone, a computer, or the like.
A worker can create a webpage on the cloud device, and the webpage can comprise one or more elements such as text, images, and videos. After the webpage is created, the cloud device can process the elements in the webpage to convert the webpage into corresponding video information, and send the video information to the electronic device, so that the electronic device plays and displays the content of the webpage according to the video information. The webpage can be a static webpage or a dynamic webpage, and has a certain display duration. For example, when the webpage is a static webpage, the content in the webpage can be a promotional poster with a display duration of 5 s; when the webpage is a dynamic webpage, the content in the webpage can be an advertisement, a promotional film, or the like, with a display duration of 10 s.
In the related art, multimedia information is usually displayed in the form of a webpage. For example, a webpage in HTML format may be written on the cloud device, where the webpage includes text, images, videos, and other content. After the webpage is created on the cloud device, a link to the webpage is generated and sent to the electronic device. When the electronic device needs to play the multimedia information corresponding to the webpage, the electronic device downloads the content of the webpage from the cloud device according to the link and displays it. However, when the electronic device is not compatible with the format of the content in the webpage, the electronic device fails to play the multimedia content, resulting in poor reliability of displaying the webpage.
In the embodiments of the present application, the cloud device can determine a plurality of elements in the webpage and the corresponding element information, process the plurality of elements according to the display duration of the webpage and the element information, and convert the webpage into corresponding video information, so that the electronic device can download the video information and display the webpage content according to it. Because the cloud device converts the webpage into corresponding video information, the problem that the electronic device cannot be compatible with the format of the content in the webpage is avoided, and the reliability with which the electronic device displays the webpage can be improved.
The technical scheme shown in the application is described in detail by specific examples. It should be noted that the following embodiments may exist alone or in combination with each other, and for the same or similar content, the description will not be repeated in different embodiments.
Fig. 2 is a flow chart of a page processing method according to an exemplary embodiment of the present application. Referring to fig. 2, the method may include:
S201, determining a plurality of elements in a first webpage and element information of the elements.
The execution subject of the embodiments of the present application may be a cloud device, or a page processing apparatus disposed in the cloud device. The page processing apparatus may be implemented by software, or by a combination of software and hardware.
The first webpage may be a webpage in HTML format; for example, the first webpage may be an H5 webpage.
The first webpage can be a static webpage or a dynamic webpage, and the first webpage has a corresponding display duration.
When the first webpage is a static webpage, the content in the first webpage may be static content, and the displayed content does not change during the display duration; for example, the content in the first webpage may be a static promotional poster. When the first webpage is a dynamic webpage, the content in the webpage may include dynamic content, and the content displayed in the first webpage differs across display periods. For example, the first webpage may include promotional text and a promotional video.
The first webpage may include a plurality of elements; for example, the elements may be text, images, videos, and the like. The elements can be divided into static elements and dynamic elements, wherein the static elements include static content such as text and images, and the dynamic elements include dynamic content such as videos and animated images.
Each element in the first webpage has corresponding element information, and the element information comprises the display period and the display position of the element in the first webpage.
The display position may be represented by coordinates of the element in the first webpage. For example, the upper left corner of the first webpage may be taken as the origin of coordinates, and the coordinate values may be represented in pixels. For example, for a rectangular element of 100 px × 150 px in the first webpage, the display position may be represented by the upper left corner vertex (10 px, 150 px) and the lower right corner vertex (110 px, 300 px): the upper left corner vertex of the element is 10 px from the left boundary of the first webpage and 150 px from the upper boundary, and the lower right corner vertex of the element is 110 px from the left boundary and 300 px from the upper boundary.
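The relationship between the two corner vertices and the element size in the example above can be checked with a small sketch (`element_size` is an illustrative helper, not part of the application):

```python
def element_size(top_left, bottom_right):
    """Width and height (px) of an element from its upper left and lower
    right corner vertices, with the page origin at the upper left corner."""
    (x1, y1), (x2, y2) = top_left, bottom_right
    return x2 - x1, y2 - y1

# The 100 px x 150 px element with vertices (10, 150) and (110, 300):
width, height = element_size((10, 150), (110, 300))
```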
Different elements can be displayed in different display periods in the first webpage, and different element information is corresponding to the different elements. Next, a plurality of elements in the first web page will be described with reference to fig. 3.
Fig. 3 is a schematic diagram of a first webpage according to an exemplary embodiment of the present application. As shown in Fig. 3, if the display duration of the first webpage is 3 s and 24 display images are included in each second, then the 1st second includes the 24 display images corresponding to 1 ms to 60 ms, the 2nd second includes the 24 display images corresponding to 61 ms to 120 ms, and the 3rd second includes the 24 display images corresponding to 121 ms to 180 ms.
As shown in Fig. 3, the first webpage includes text a, text b, video c, and image d, where text a, text b, and image d are static elements, and video c is a dynamic element.
The display period of text a in the first webpage is 1 ms to 120 ms, and its display position can be represented by the upper left corner vertex (10 px, 10 px) and the lower right corner vertex (60 px, 40 px).
The display period of text b in the first webpage is 121 ms to 180 ms, and its display position can be represented by the upper left corner vertex (10 px, 10 px) and the lower right corner vertex (60 px, 40 px).
The display period of video c in the first webpage is 1 ms to 180 ms, and its display position can be represented by the upper left corner vertex (0 px, 80 px) and the lower right corner vertex (280 px, 180 px).
The display period of image d in the first webpage is 61 ms to 180 ms, and its display position can be represented by the upper left corner vertex (300 px, 10 px) and the lower right corner vertex (420 px, 90 px).
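The element information of the Fig. 3 example can be represented as plain data, for instance as follows (the field names and `ElementInfo` type are illustrative only; display periods use the time axis of Fig. 3):

```python
from dataclasses import dataclass

@dataclass
class ElementInfo:
    name: str
    dynamic: bool          # True for dynamic elements (video, animation)
    period: tuple          # (start, end) of the display period, per Fig. 3
    top_left: tuple        # upper left corner vertex (px)
    bottom_right: tuple    # lower right corner vertex (px)

# The four elements of the first webpage in Fig. 3:
elements = [
    ElementInfo("text a",  False, (1, 120),   (10, 10),  (60, 40)),
    ElementInfo("text b",  False, (121, 180), (10, 10),  (60, 40)),
    ElementInfo("video c", True,  (1, 180),   (0, 80),   (280, 180)),
    ElementInfo("image d", False, (61, 180),  (300, 10), (420, 90)),
]

static_elements = [e for e in elements if not e.dynamic]
dynamic_elements = [e for e in elements if e.dynamic]
```

Splitting the list this way mirrors the later processing, in which static elements are turned into a static video while dynamic elements are handled separately.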
S202, acquiring the display duration of the first webpage.
The display duration of the first webpage refers to the duration for which the electronic device displays the content in the first webpage. The display duration of the first webpage may be preset. If the first webpage includes a dynamic element such as a video, the display duration of the first webpage can also be determined according to the play duration of the dynamic element.
S203, processing the plurality of elements according to the display duration and the element information to obtain video information corresponding to the first webpage.
The first web page may include static elements therein, or the first web page may include dynamic elements therein, or the first web page may include both static and dynamic elements therein. When the elements included in the first webpage are different, the process of determining the video information corresponding to the first webpage is also different, including the following three cases:
Case 1: the first webpage includes static elements.
When the first webpage includes static elements but does not include dynamic elements, the plurality of static elements can be processed according to the display duration of the first webpage and the element information of the static elements to obtain a static video. That is, in this case, the video information corresponding to the first webpage includes the static video.
The process of determining the static video is described in the embodiment shown in Fig. 4 and is not repeated here.
Case 2: the first webpage includes dynamic elements.
When the first webpage includes a dynamic element, but does not include a static element, it may be determined that video information corresponding to the first webpage is the dynamic element.
Case 3: the first webpage includes static elements and dynamic elements.
In this case, the cloud device may process the plurality of static elements according to the display duration and the element information of the static elements to obtain a static video, and determine video information corresponding to the first webpage according to the static video, the dynamic element, and the element information of the dynamic element. The static video is a video obtained by processing the static element.
The cloud device may determine the video information corresponding to the first webpage in multiple manners, including the following two:
In mode 1, after the cloud device obtains the static video corresponding to the static elements, it can determine the static video, the dynamic element, and the element information of the dynamic element as the video information corresponding to the first webpage.
In this manner, the video information includes the static video, the dynamic element, and the element information of the dynamic element.
When the video information in the electronic device needs to be updated, if the dynamic element changes and the static elements do not, the cloud device can send only the dynamic element and the element information of the dynamic element to the electronic device, without sending the static video. Conversely, if the static elements change and the dynamic element does not, the cloud device can send only the static video, without sending the dynamic element and the element information of the dynamic element. This reduces unnecessary data transmission and the workload of the cloud device. In addition, in practical applications, the static elements or dynamic elements in the video information required by different electronic devices may be the same; in this case, the cloud device can flexibly combine the contents of the video information sent to different electronic devices, so that the flexibility of sending the video information is higher.
In mode 2, the cloud device can perform fusion processing on the static video and the dynamic element through a media synthesis algorithm according to the element information of the dynamic element to obtain a fusion video.
In this manner, the video information corresponding to the first webpage may be the fusion video. After the electronic device receives the fusion video, it directly plays the fusion video, so that playing the video information is more convenient for the electronic device.
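The three cases and two modes above can be summarized in a dispatch sketch. All names here are hypothetical, and `make_static_video` merely stands in for the static-element processing described in the Fig. 4 embodiment:

```python
def make_static_video(static_elements, display_duration):
    """Stand-in for the static-element processing of Fig. 4 (hypothetical)."""
    return ("static_video", tuple(static_elements), display_duration)

def video_info_for_page(static_elements, dynamic_elements, display_duration, fuse=False):
    """Case 1: only static elements  -> the video information is a static video.
    Case 2: only dynamic elements    -> the video information is the dynamic element(s).
    Case 3: both kinds of elements   -> mode 1 returns the parts separately;
    mode 2 (fuse=True) returns a fusion of static video and dynamic element(s)."""
    if static_elements and not dynamic_elements:
        return {"static_video": make_static_video(static_elements, display_duration)}
    if dynamic_elements and not static_elements:
        return {"dynamic_elements": list(dynamic_elements)}
    static_video = make_static_video(static_elements, display_duration)
    if fuse:
        return {"fusion_video": (static_video, tuple(dynamic_elements))}
    return {"static_video": static_video, "dynamic_elements": list(dynamic_elements)}
```

Mode 1 (separate parts) allows partial updates when only one kind of element changes; mode 2 (fusion) gives the electronic device a single video to play.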
S204, sending the video information corresponding to the first webpage to the electronic device.
After the cloud device determines the video information corresponding to the first webpage, it can send the video information to the electronic device through streaming media technology, so that the electronic device can process the video information while downloading it and display the content of the first webpage.
Streaming media technology refers to transmitting multimedia files over a network as a continuous, real-time stream. With streaming media technology, the electronic device can download and process the video information at the same time, without waiting for the multimedia file to be completely downloaded.
In the embodiments of the present application, the cloud device can acquire the display duration of the first webpage, the plurality of elements, and the element information, and can process the plurality of static elements according to the display duration and the element information corresponding to the static elements to obtain a static video. The cloud device can use the static video, the dynamic element, and the element information corresponding to the dynamic element as the video information, or can perform fusion processing on the static video and the dynamic element according to the element information corresponding to the dynamic element to obtain the video information corresponding to the first webpage. The cloud device can then send the video information corresponding to the first webpage to the electronic device. Because the cloud device converts the webpage into corresponding video information, the problem that the electronic device cannot be compatible with the format of the content in the webpage is avoided, and the reliability with which the electronic device displays the webpage can be improved.
On the basis of the embodiment shown in fig. 2, when the elements included in the first web page are different, the process of determining the video information corresponding to the first web page is different. Next, a process of determining video information corresponding to the first web page will be described with reference to fig. 4, taking a case where the first web page includes a static element and a dynamic element as an example (S203 in the embodiment of fig. 2).
Fig. 4 is a flowchart illustrating a method for determining video information according to an exemplary embodiment of the present application. Referring to fig. 4, the method may include:
S401, determining the number N of video frames according to the display duration and the preset frame rate.
The preset frame rate refers to the frequency, in frames per second, at which consecutive images are displayed. The preset frame rate may be set in advance. For example, a preset frame rate of 24 frames/second indicates that 24 images are displayed in succession in the first webpage within 1 second.
The number N of video frames refers to the number of image frames corresponding to the first webpage in the display duration. N is an integer greater than 1. For example, if the display duration of the first web page is 10s and the preset frame rate is 24 frames/s, the number N of video frames may be determined to be 240.
The cloud device may determine a product of the display duration of the first webpage and a preset frame rate as a number N of video frames.
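A minimal sketch of S401, with `video_frame_count` as an illustrative helper name:

```python
def video_frame_count(display_duration_s, frame_rate=24):
    """N = display duration x preset frame rate; N must be an integer > 1."""
    n = int(display_duration_s * frame_rate)
    if n <= 1:
        raise ValueError("display duration too short for the preset frame rate")
    return n
```

For the example above, a 10 s display duration at 24 frames/s yields N = 240.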
S402, respectively determining a Red Green Blue (RGB) image corresponding to each static element and a frame identifier corresponding to the RGB image according to the element information of each static element.
The RGB image corresponding to the static element comprises the static element, and the size of the RGB image is the same as the size of the first webpage. The image format of the RGB image is an RGB format.
The RGB image corresponding to the static element may be generated according to a display position of the static element in the first web page. Next, an RGB image corresponding to a static element will be described with reference to fig. 5.
Fig. 5 is a schematic diagram of an RGB image provided by an exemplary embodiment of the present application. Referring to fig. 5, a first web page 501, an RGB image 502, and an RGB image 503 are included. At a certain time, the first web page includes the static elements text 1 and image 1. According to the positions of text 1 and image 1 in the first web page, it may be determined that the RGB image corresponding to text 1 is RGB image 502, and the RGB image corresponding to image 1 is RGB image 503.
The frame identifier corresponding to an RGB image is the identifier of the frame in which the RGB image is displayed; that is, the frame identifier corresponding to the RGB image may indicate in which frames the RGB image is displayed. For example, assuming that the frame identifiers corresponding to an RGB image are 1, 2, and 3, the RGB image is displayed in the 1st, 2nd, and 3rd frames.
The frame identifier corresponding to the RGB image may be determined according to the display period of the static element in the first web page and the preset frame rate, and the RGB image corresponds to at least one frame identifier. For example, the frame identification of the image frame displayed in the display period may be calculated according to the display period and the preset frame rate, and the frame identification of the image frame displayed in the display period may be determined as at least one frame identification corresponding to the RGB image.
For example, assuming that the preset frame rate is 24 frames/s and the display period of static element 1 in the first web page is the 1001st to 2000th millisecond, it may be determined that the frame identifiers of the image frames displayed in this display period are the 25th to 48th frames, and the at least one frame identifier corresponding to static element 1 is the 25th to 48th frames.
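The mapping from a display period to frame identifiers can be sketched as follows, assuming (as an illustrative convention) that frame i, counted from 1, is shown during the interval ((i-1)·1000/rate, i·1000/rate] milliseconds.

```python
import math

def frame_ids_for_period(start_ms: int, end_ms: int, frame_rate: int = 24):
    """Identifiers of the frames whose display interval overlaps
    [start_ms, end_ms], under the 1-based convention described above."""
    first = math.ceil(start_ms * frame_rate / 1000)
    last = math.ceil(end_ms * frame_rate / 1000)
    return list(range(first, last + 1))

# A period covering the 1001st-2000th millisecond at 24 frames/s maps to
# frames 25 through 48.
ids = frame_ids_for_period(1001, 2000, 24)
print(ids[0], ids[-1], len(ids))  # 25 48 24
```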
S403, determining N image groups according to the RGB images corresponding to each static element and the frame identifications corresponding to the RGB images.
If the number of video frames corresponding to the first web page is N, N image groups may be determined according to RGB images corresponding to each static element in the first web page and frame identifiers corresponding to the RGB images.
For any one image group, the image group comprises at least one RGB image, and each RGB image in the ith image group corresponds to a frame identifier i.
Fig. 6 is a schematic diagram of determining image groups according to an exemplary embodiment of the present application. Referring to fig. 6, it is assumed that the first web page includes the static elements text 1 and image 1, where text 1 corresponds to RGB image 1, whose frame identifiers are the 1st to 48th frames, and image 1 corresponds to RGB image 2, whose frame identifiers are the 24th to 72nd frames. The number of video frames corresponding to the first web page is 72, so the number of image groups is 72.
Since frame identifiers 1-23 correspond only to RGB image 1, it may be determined that each of the 1st through 23rd image groups includes RGB image 1.
Since frame identifiers 24-48 correspond to both RGB image 1 and RGB image 2, it may be determined that each of the 24th through 48th image groups includes RGB image 1 and RGB image 2.
Since frame identifiers 49-72 correspond only to RGB image 2, it may be determined that each of the 49th through 72nd image groups includes RGB image 2.
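The grouping in S403 can be sketched as follows. The data structures (image names, frame-identifier lists) are illustrative placeholders for real pixel buffers.

```python
def build_image_groups(n: int, images_with_frames):
    """images_with_frames: list of (image, frame_id_list) pairs.
    Returns a dict mapping frame identifier i (1..n) to the list of
    RGB images belonging to the i-th image group."""
    groups = {i: [] for i in range(1, n + 1)}
    for image, frame_ids in images_with_frames:
        for i in frame_ids:
            groups[i].append(image)
    return groups

# Example from fig. 6: RGB image 1 spans frames 1-48, RGB image 2 frames 24-72.
groups = build_image_groups(72, [
    ("rgb_image_1", list(range(1, 49))),
    ("rgb_image_2", list(range(24, 73))),
])
print(groups[10])  # ['rgb_image_1']
print(groups[30])  # ['rgb_image_1', 'rgb_image_2']
print(groups[60])  # ['rgb_image_2']
```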
S404, respectively carrying out fusion processing on the RGB images in each image group to obtain N frames of target images.
After the cloud device determines the RGB images in each image group, the RGB images in each group may be superimposed to obtain the N frames of target images.
The process of fusing RGB images in each image group is the same, and a description will be given below of the process of fusing RGB images in any one image group with reference to fig. 7.
Fig. 7 is a schematic diagram of image fusion provided by an exemplary embodiment of the present application. Referring to fig. 7, assuming that a certain image group includes an RGB image 1 corresponding to a text 1 and an RGB image 2 corresponding to an image 1, the cloud device may superimpose the RGB image 1 and the RGB image 2 to obtain an RGB image 3, and the RGB image 3 is a target image.
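The superimposition in S404 can be sketched as follows. As an assumption for illustration, each RGB image is modelled as a 2-D grid the size of the web page in which `None` marks an empty (background) pixel; a real implementation would composite full pixel buffers.

```python
def fuse(images):
    """Overlay the RGB images of one image group into a single target
    image; non-empty pixels of later images overwrite earlier ones."""
    height, width = len(images[0]), len(images[0][0])
    target = [[None] * width for _ in range(height)]
    for img in images:
        for y in range(height):
            for x in range(width):
                if img[y][x] is not None:
                    target[y][x] = img[y][x]
    return target

# 2x2 toy example: "text 1" occupies the top-left pixel of its RGB image,
# "image 1" the bottom-right pixel of its own RGB image.
text_img  = [[(255, 0, 0), None], [None, None]]
photo_img = [[None, None], [None, (0, 0, 255)]]
fused = fuse([text_img, photo_img])
print(fused[0][0], fused[1][1])  # (255, 0, 0) (0, 0, 255)
```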
S405, performing splicing processing on the N frames of target images to obtain the static video.
Because the N frames of target images are obtained by superimposing RGB images, each frame of target image is still in the RGB format. The cloud device may perform format conversion processing on the N frames of target images to obtain N frames of images in a target format. For example, the cloud device may convert the target images in the RGB format into video sequence frame images.
After the cloud device obtains the N frames of images in the target format, the images may be spliced according to the preset frame rate through a media synthesis algorithm to obtain the static video. For example, if the cloud device obtains 240 frames of images in the target format, the cloud device may splice them at 24 frames per second to obtain a static video whose display duration is 10 s.
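The splicing step amounts to assigning each converted frame a presentation time at the preset frame rate and handing the sequence to a media-synthesis library. The timestamp computation below is a minimal sketch of that scheduling; it does not encode actual video.

```python
def frame_timestamps_ms(n: int, frame_rate: int = 24):
    """Presentation timestamp, in milliseconds, of each of the n frames
    when played back at the preset frame rate."""
    return [round(i * 1000 / frame_rate) for i in range(n)]

# 240 frames at 24 frames/s: frame 25 (index 24) starts at the 1 s mark,
# and the whole sequence spans the 10 s display duration.
ts = frame_timestamps_ms(240, 24)
print(ts[0], ts[24], len(ts))  # 0 1000 240
```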
S406, determining video information corresponding to the first webpage according to the static video, the dynamic element and the element information of the dynamic element.
It should be noted that, the execution process of S406 may refer to case 3 in S203, and will not be described herein.
In the embodiment of the application, the cloud device may determine the number N of video frames corresponding to the first web page according to the display duration of the first web page and the preset frame rate, and generate a corresponding RGB image according to the display period and display position of each static element in the first web page. The cloud device may determine N image groups according to the RGB image corresponding to each static element and the frame identifiers corresponding to the RGB image, superimpose the RGB images in each image group to obtain N frames of target images, and then perform format conversion processing on the N frames of target images to obtain N frames of images in the target format. The cloud device may splice the images in the target format to obtain a static video, and determine the video information corresponding to the first web page according to the static video, the dynamic element, and the element information of the dynamic element. Because the cloud device converts the web page into corresponding video information, the problem that the electronic device cannot be compatible with the format of the content in the web page is avoided, so the reliability of the electronic device in displaying the web page can be improved.
On the basis of any one of the foregoing embodiments, after the electronic device receives the video information corresponding to the first web page, the electronic device may play a video according to the video information. Next, the process in which the electronic device receives and plays the video information will be described with reference to fig. 8.
Fig. 8 is a flowchart of another page processing method according to an exemplary embodiment of the present application, referring to fig. 8, the method may include:
S801, receiving video information corresponding to a first webpage.
The execution body of the embodiment of the application can be electronic equipment or a page processing device arranged in the electronic equipment. The page processing device may be implemented by software, or may be implemented by a combination of software and hardware.
The first web page comprises a plurality of elements, wherein the plurality of elements comprise static elements and dynamic elements. For example, the static element may include text 1 and image 1, and the dynamic element may include video 1. Each element has corresponding element information including a display period and a display position of the element in the first web page.
The video information is determined based on element information of a plurality of elements in the first web page. The electronic device may receive video information corresponding to the first webpage sent by the cloud device.
S802, determining a target video according to the video information.
After the electronic device receives the video information, the video information may be identified to determine the content it includes. When the content included in the video information differs, the manner in which the electronic device determines the target video also differs; the following 4 cases may be included:
Case 1, the video information includes a static video.
The static video is determined according to element information of the static elements in the first web page.
For example, if the first web page includes text 1 and image 1, static video 1 may be obtained according to text 1 and image 1, and the corresponding video information includes static video 1.
In this case, the electronic device may determine the static video as the target video.
Case 2, the video information includes a dynamic element.
For example, if the first web page includes video 1, the corresponding video information includes video 1.
In this case, the electronic device may determine the dynamic element as the target video.
Case 3, the video information includes the still video, the dynamic element and the element information of the dynamic element.
For example, if the first web page includes text 1, image 1 and video 1, where static video 1 may be generated according to text 1 and image 1, the corresponding video information includes static video 1, video 1, and the display period and display position of video 1.
In this case, the electronic device may perform fusion processing on the static video and the dynamic element according to the element information of the dynamic element to obtain the target video.
Case 4, the video information includes a fused video.
The fused video is obtained by performing fusion processing on the static video and the dynamic element.
For example, if the first web page includes text 1, image 1 and video 1, static video 1 may be generated according to text 1 and image 1, and static video 1 and video 1 may be fused to obtain a fused video; the corresponding video information includes the fused video.
In this case, the electronic device may determine the fused video as the target video.
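The four cases above can be sketched as a dispatch over the received video information. The dictionary keys and the `fuse_video` helper are assumptions for illustration, not names from the application.

```python
def determine_target_video(video_info, fuse_video=None):
    """Pick the target video from the video information (cases 1-4)."""
    if "fused_video" in video_info:                          # case 4
        return video_info["fused_video"]
    if "static_video" in video_info and "dynamic_element" in video_info:
        # case 3: fuse on the device using the dynamic element's info
        return fuse_video(video_info["static_video"],
                          video_info["dynamic_element"],
                          video_info["element_info"])
    if "static_video" in video_info:                         # case 1
        return video_info["static_video"]
    return video_info["dynamic_element"]                     # case 2

print(determine_target_video({"static_video": "still.mp4"}))   # still.mp4
print(determine_target_video({"fused_video": "fused.mp4"}))    # fused.mp4
target = determine_target_video(
    {"static_video": "still.mp4", "dynamic_element": "v1.mp4",
     "element_info": {"period_ms": (0, 5000), "position": (0, 0)}},
    fuse_video=lambda s, d, info: f"fused({s},{d})")
print(target)  # fused(still.mp4,v1.mp4)
```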
S803, playing the target video.
After the electronic device determines the target video, the target video may be played to display the content in the first webpage.
In the embodiment of the application, the electronic device may receive the video information corresponding to the first web page and perform identification processing on the video information. If the video information includes the static video, the dynamic element, and the element information of the dynamic element, the electronic device may fuse the static video and the dynamic element according to the element information of the dynamic element to obtain the target video, and play it to display the content corresponding to the first web page. Because the electronic device receives the video information corresponding to the web page rather than directly downloading the content in the web page, the problem that the electronic device cannot be compatible with the format of the content in the web page is avoided, so the reliability of the electronic device in displaying the web page can be improved.
Fig. 9 is a schematic structural diagram of a page processing apparatus according to an exemplary embodiment of the present application. Referring to fig. 9, the page processing apparatus 10 includes a determining module 11, an acquiring module 12, a processing module 13, and a transmitting module 14, wherein,
The determining module 11 is configured to determine a plurality of elements and element information of the elements in a first web page, where the element information includes a display period and a display position of the elements in the first web page;
The acquiring module 12 is configured to acquire a display duration of the first web page;
The processing module 13 is configured to process the plurality of elements according to the display duration and the element information, so as to obtain video information corresponding to the first webpage;
the sending module 14 is configured to send video information corresponding to the first web page to an electronic device.
The page processing device provided by the embodiment of the application can execute the technical scheme shown in the embodiment of the method, and the implementation principle and the beneficial effects are similar, and are not repeated here.
In one possible embodiment, the processing module 13 is specifically configured to:
processing the plurality of static elements according to the display duration and the element information of the static elements to obtain a static video;
And determining video information corresponding to the first webpage according to the static video, the dynamic element and the element information of the dynamic element.
In one possible embodiment, the processing module 13 is specifically configured to:
determining the number N of video frames according to the display duration and a preset frame rate, wherein N is an integer greater than 1;
Generating N frames of target images according to the static elements and element information of the static elements;
and performing splicing processing on the N frames of target images to obtain the static video.
In one possible embodiment, the processing module 13 is specifically configured to:
According to the element information of each static element, determining an RGB image corresponding to each static element and a frame identifier corresponding to the RGB image, wherein the frame identifier is an integer greater than or equal to 1 and less than or equal to N;
according to the RGB images corresponding to each static element and the frame identifications corresponding to the RGB images, N image groups are determined, wherein the image groups comprise at least one RGB image, each RGB image in the ith image group corresponds to a frame identification i, and the i is an integer which is more than or equal to 1 and less than or equal to N;
And respectively carrying out fusion processing on the RGB images in each image group to obtain the N frames of target images.
In one possible embodiment, the processing module 13 is specifically configured to:
generating an RGB image corresponding to the static element according to the display position of the static element in the first webpage;
And determining a frame identifier corresponding to the RGB image according to the display period of the static element in the first webpage and the preset frame rate, wherein the RGB image corresponds to at least one frame identifier.
In one possible embodiment, the processing module 13 is specifically configured to:
performing format conversion processing on the N frames of target images to obtain N frames of target format images;
and performing splicing processing on the N frames of images in the target format to obtain the static video.
In one possible embodiment, the processing module 13 is specifically configured to:
Determining the video information comprises the static video, the dynamic element and element information of the dynamic element;
Or,
And carrying out fusion processing on the static video and the dynamic element according to the element information of the dynamic element to obtain a fusion video, and determining that the video information comprises the fusion video.
The page processing device provided by the embodiment of the application can execute the technical scheme shown in the embodiment of the method, and the implementation principle and the beneficial effects are similar, and are not repeated here.
Fig. 10 is a schematic structural diagram of another page processing apparatus according to an exemplary embodiment of the present application. Referring to fig. 10, the page processing apparatus 20 includes a receiving module 21, a determining module 22, and a playing module 23, wherein,
The receiving module 21 is configured to receive video information corresponding to a first web page, where the first web page includes a plurality of elements, the video information is determined according to element information of the plurality of elements, and the element information includes a display period and a display position of the element in the first web page;
The determining module 22 is configured to determine a target video according to the video information;
The playing module 23 is configured to play the target video.
The page processing device provided by the embodiment of the application can execute the technical scheme shown in the embodiment of the method, and the implementation principle and the beneficial effects are similar, and are not repeated here.
In one possible implementation manner, the video information comprises the static video, the dynamic element and element information of the dynamic element, wherein the static video is determined according to the element information of the static element in the first webpage;
Or,
The video information comprises a fusion video, wherein the fusion video is obtained by fusion processing of the static video and the dynamic element.
In one possible implementation, the determining module 22 is specifically configured to:
According to the element information of the dynamic element, carrying out fusion processing on the static video and the dynamic element to obtain the target video;
Or, when the video information includes a fused video:
determining the fused video as the target video.
The page processing device provided by the embodiment of the application can execute the technical scheme shown in the embodiment of the method, and the implementation principle and the beneficial effects are similar, and are not repeated here.
An exemplary embodiment of the present application provides a schematic structural diagram of a cloud device. Referring to fig. 11, the cloud device 30 may include a processor 31 and a memory 32. Illustratively, the processor 31 and the memory 32 are interconnected by a bus 33.
The memory 32 stores computer-executable instructions;
The processor 31 executes computer-executable instructions stored in the memory 32, causing the processor 31 to execute the page processing method as shown in the method embodiments described above.
An exemplary embodiment of the present application provides a schematic structural diagram of an electronic device. Referring to fig. 12, the electronic device 40 may include a processor 41 and a memory 42. Illustratively, the processor 41 and the memory 42 are interconnected by a bus 43.
The memory 42 stores computer-executable instructions;
the processor 41 executes computer-executable instructions stored in the memory 42, causing the processor 41 to execute the page processing method as shown in the above-described method embodiment.
Accordingly, an embodiment of the present application provides a computer readable storage medium, where computer executable instructions are stored, for implementing the page processing method described in the above method embodiment when the computer executable instructions are executed by a processor.
Accordingly, embodiments of the present application may also provide a computer program product, including a computer program, which when executed by a processor may implement the page processing method shown in the foregoing method embodiments.
It will be appreciated by those skilled in the art that embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
In one typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include volatile memory in a computer-readable medium, Random Access Memory (RAM) and/or nonvolatile memory, such as Read Only Memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
Computer readable media include permanent and non-permanent, removable and non-removable media, and may implement information storage by any method or technology. The information may be computer readable instructions, data structures, modules of a program, or other data. Examples of computer storage media include, but are not limited to, Phase-change Memory (PRAM), Static Random Access Memory (SRAM), Dynamic Random Access Memory (DRAM), other types of Random Access Memory (RAM), Read Only Memory (ROM), Electrically Erasable Programmable Read Only Memory (EEPROM), flash memory or other memory technology, Compact Disc Read Only Memory (CD-ROM), Digital Versatile Discs (DVD) or other optical storage, magnetic cassettes, magnetic tape/magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, computer-readable media do not include transitory computer-readable media (transmission media), such as modulated data signals and carrier waves.
It should also be noted that the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but also other elements not expressly listed, or elements inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element.
The foregoing is merely exemplary of the present application and is not intended to limit the present application. Various modifications and variations of the present application will be apparent to those skilled in the art. Any modification, equivalent replacement, improvement, etc. which come within the spirit and principles of the application are to be included in the scope of the claims of the present application.