
CN114666621B - Page processing method, device and equipment

Info

Publication number
CN114666621B
Authority
CN
China
Prior art keywords
video
static
information
webpage
elements
Prior art date
Legal status
Active
Application number
CN202210286723.7A
Other languages
Chinese (zh)
Other versions
CN114666621A
Inventor
林啸洋
王鹏
顾文杰
李洪辉
Current Assignee
Alibaba Cloud Computing Ltd
Original Assignee
Alibaba Cloud Computing Ltd
Priority date
Filing date
Publication date
Application filed by Alibaba Cloud Computing Ltd
Priority to CN202210286723.7A
Publication of CN114666621A
Application granted
Publication of CN114666621B

Classifications

    • H ELECTRICITY
      • H04 ELECTRIC COMMUNICATION TECHNIQUE
        • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
          • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
            • H04N 21/20 Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
              • H04N 21/23 Processing of content or additional data; Elementary server operations; Server middleware
                • H04N 21/234 Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs
                • H04N 21/235 Processing of additional data, e.g. scrambling of additional data or processing content descriptors
                  • H04N 21/2355 Processing of additional data involving reformatting operations of additional data, e.g. HTML pages
            • H04N 21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
              • H04N 21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
                • H04N 21/431 Generation of visual interfaces for content selection or interaction; Content or additional data rendering
                  • H04N 21/4312 Generation of visual interfaces involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations
              • H04N 21/47 End-user applications
                • H04N 21/485 End-user interface for client configuration
    • G PHYSICS
      • G06 COMPUTING; CALCULATING OR COUNTING
        • G06F ELECTRIC DIGITAL DATA PROCESSING
          • G06F 16/00 Information retrieval; Database structures therefor; File system structures therefor
            • G06F 16/90 Details of database functions independent of the retrieved data types
              • G06F 16/95 Retrieval from the web
                • G06F 16/957 Browsing optimisation, e.g. caching or content distillation

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Databases & Information Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Information Transfer Between Computers (AREA)

Abstract


An embodiment of the present application provides a page processing method, apparatus, and device. The method includes: determining a plurality of elements in a first webpage and element information of the elements, the element information including the display period and display position of each element in the first webpage; obtaining the display duration of the first webpage; processing the plurality of elements according to the display duration and the element information to obtain video information corresponding to the first webpage; and sending the video information corresponding to the first webpage to an electronic device. This improves the reliability with which the electronic device displays webpages.

Description

Page processing method, device and equipment
Technical Field
The present application relates to the field of computer technologies, and in particular, to a method, an apparatus, and a device for processing a page.
Background
Currently, multimedia information can be played on electronic devices (large screens, mobile phones, computers, etc.), and the multimedia information can include text, images, video, and the like.
In the related art, multimedia information is typically presented in the form of a web page. For example, a web page in hypertext markup language (Hyper Text Markup Language, HTML) format may be written on a cloud device, where the web page includes text, images, video, and the like. After the web page is created on the cloud device, a link to the web page is generated and sent to the electronic device. When the electronic device needs to play the multimedia information corresponding to the web page, it downloads the content of the web page from the cloud device according to the link and displays that content. However, when the electronic device is not compatible with the format of the content in the web page, it fails to play the multimedia content, resulting in poor reliability of displaying the web page.
Disclosure of Invention
The present application provides a page processing method, device and equipment, which are used to improve the reliability with which an electronic device displays a webpage.
In a first aspect, an embodiment of the present application provides a page processing method, including:
Determining a plurality of elements and element information of the elements in a first webpage, wherein the element information comprises a display period and a display position of the elements in the first webpage;
acquiring the display time length of the first webpage;
processing the plurality of elements according to the display duration and the element information to obtain video information corresponding to the first webpage;
and sending the video information corresponding to the first webpage to the electronic equipment.
In one possible implementation manner, the plurality of elements include static elements and dynamic elements, and the processing the plurality of elements according to the display duration and the element information to obtain video information corresponding to the first webpage includes:
processing the plurality of static elements according to the display duration and the element information of the static elements to obtain a static video;
And determining video information corresponding to the first webpage according to the static video, the dynamic element and the element information of the dynamic element.
In one possible implementation manner, according to the display duration and the element information of the static elements, the processing is performed on the plurality of static elements to obtain a static video, including:
determining the number N of video frames according to the display duration and a preset frame rate, wherein N is an integer greater than 1;
Generating N frames of target images according to the static elements and element information of the static elements;
and performing splicing processing on the N frames of target images to obtain the static video.
In one possible implementation, generating the N-frame target image according to the plurality of static elements and element information of the plurality of static elements includes:
According to the element information of each static element, determining an RGB image corresponding to each static element and a frame identifier corresponding to the RGB image, wherein the frame identifier is an integer greater than or equal to 1 and less than or equal to N;
according to the RGB images corresponding to each static element and the frame identifications corresponding to the RGB images, N image groups are determined, wherein the image groups comprise at least one RGB image, each RGB image in the ith image group corresponds to a frame identification i, and the i is an integer which is more than or equal to 1 and less than or equal to N;
And respectively carrying out fusion processing on the RGB images in each image group to obtain the N frames of target images.
In one possible implementation manner, for any static element, determining an RGB image corresponding to the static element and a frame identifier corresponding to the RGB image according to element information of the static element comprises:
generating an RGB image corresponding to the static element according to the display position of the static element in the first webpage;
And determining a frame identifier corresponding to the RGB image according to the display period of the static element in the first webpage and the preset frame rate, wherein the RGB image corresponds to at least one frame identifier.
In a possible implementation manner, the stitching processing is performed on the N frames of target images to obtain the still video, including:
performing format conversion processing on the N frames of target images to obtain N frames of target format images;
and performing splicing processing on the N frames of images in the target format to obtain the static video.
In one possible implementation manner, determining the video information corresponding to the first webpage according to the still video, the dynamic element and the element information of the dynamic element includes:
Determining the video information comprises the static video, the dynamic element and element information of the dynamic element;
Or alternatively
And carrying out fusion processing on the static video and the dynamic element according to the element information of the dynamic element to obtain a fusion video, and determining that the video information comprises the fusion video.
In a second aspect, an embodiment of the present application provides a page processing method, including:
Receiving video information corresponding to a first webpage, wherein the first webpage comprises a plurality of elements, the video information is determined according to the element information of the plurality of elements, and the element information comprises a display period and a display position of the elements in the first webpage;
And determining a target video according to the video information, and playing the target video.
In one possible implementation, the plurality of elements includes a static element and a dynamic element, wherein,
The video information comprises the static video, the dynamic element and element information of the dynamic element, wherein the static video is determined according to the element information of the static element in the first webpage;
Or alternatively
The video information comprises a fusion video, wherein the fusion video is obtained by fusion processing of the static video and the dynamic element.
In one possible implementation, the video information includes the still video, the dynamic element and element information of the dynamic element, and determining a target video according to the video information includes:
According to the element information of the dynamic element, carrying out fusion processing on the static video and the dynamic element to obtain the target video;
Or alternatively
The video information comprises a fusion video, and the target video is determined according to the video information, comprising the following steps:
And determining the fusion video as the target video.
In a third aspect, an embodiment of the present application provides a page processing apparatus, including a determining module, an acquiring module, a processing module, and a sending module, where,
The determining module is used for determining a plurality of elements and element information of the elements in a first webpage, wherein the element information comprises a display period and a display position of the elements in the first webpage;
the acquisition module is used for acquiring the display time length of the first webpage;
The processing module is used for processing the plurality of elements according to the display duration and the element information to obtain video information corresponding to the first webpage;
the sending module is used for sending video information corresponding to the first webpage to the electronic equipment.
In a possible implementation manner, the processing module is specifically configured to:
processing the plurality of static elements according to the display duration and the element information of the static elements to obtain a static video;
And determining video information corresponding to the first webpage according to the static video, the dynamic element and the element information of the dynamic element.
In a possible implementation manner, the processing module is specifically configured to:
determining the number N of video frames according to the display duration and a preset frame rate, wherein N is an integer greater than 1;
Generating N frames of target images according to the static elements and element information of the static elements;
and performing splicing processing on the N frames of target images to obtain the static video.
In a possible implementation manner, the processing module is specifically configured to:
According to the element information of each static element, determining an RGB image corresponding to each static element and a frame identifier corresponding to the RGB image, wherein the frame identifier is an integer greater than or equal to 1 and less than or equal to N;
according to the RGB images corresponding to each static element and the frame identifications corresponding to the RGB images, N image groups are determined, wherein the image groups comprise at least one RGB image, each RGB image in the ith image group corresponds to a frame identification i, and the i is an integer which is more than or equal to 1 and less than or equal to N;
And respectively carrying out fusion processing on the RGB images in each image group to obtain the N frames of target images.
In a possible implementation manner, the processing module is specifically configured to:
generating an RGB image corresponding to the static element according to the display position of the static element in the first webpage;
And determining a frame identifier corresponding to the RGB image according to the display period of the static element in the first webpage and the preset frame rate, wherein the RGB image corresponds to at least one frame identifier.
In a possible implementation manner, the processing module is specifically configured to:
performing format conversion processing on the N frames of target images to obtain N frames of target format images;
and performing splicing processing on the N frames of images in the target format to obtain the static video.
In a possible implementation manner, the processing module is specifically configured to:
Determining the video information comprises the static video, the dynamic element and element information of the dynamic element;
Or alternatively
And carrying out fusion processing on the static video and the dynamic element according to the element information of the dynamic element to obtain a fusion video, and determining that the video information comprises the fusion video.
In a fourth aspect, an embodiment of the present application provides a page processing apparatus, including a receiving module, a determining module, and a playing module, where,
The receiving module is used for receiving video information corresponding to a first webpage, wherein the first webpage comprises a plurality of elements, the video information is determined according to the element information of the plurality of elements, and the element information comprises the display time period and the display position of the elements in the first webpage;
The determining module is used for determining a target video according to the video information;
the playing module is used for playing the target video.
In one possible implementation manner, the video information comprises the static video, the dynamic element and element information of the dynamic element, wherein the static video is determined according to the element information of the static element in the first webpage;
Or alternatively
The video information comprises a fusion video, wherein the fusion video is obtained by fusion processing of the static video and the dynamic element.
In one possible implementation manner, the determining module is specifically configured to:
According to the element information of the dynamic element, carrying out fusion processing on the static video and the dynamic element to obtain the target video;
Or alternatively
The video information comprises a fusion video, and the target video is determined according to the video information, comprising the following steps:
And determining the fusion video as the target video.
In a fifth aspect, an embodiment of the present application provides a cloud device, including a memory and a processor;
The memory stores computer-executable instructions;
the processor executing computer-executable instructions stored in the memory, causing the processor to perform the page processing method of any one of the first aspects.
In a sixth aspect, an embodiment of the present application provides an electronic device, including a memory and a processor;
The memory stores computer-executable instructions;
The processor executing computer-executable instructions stored in the memory, causing the processor to perform the page processing method of any of the second aspects.
In a seventh aspect, embodiments of the present application provide a computer-readable storage medium having stored therein computer-executable instructions for implementing the page processing method of any one of the first aspects when the computer-executable instructions are executed by a processor.
In an eighth aspect, embodiments of the present application provide a computer-readable storage medium having stored therein computer-executable instructions for implementing the page processing method of any one of the second aspects when the computer-executable instructions are executed by a processor.
In a ninth aspect, an embodiment of the present application provides a computer program product comprising a computer program which, when executed by a processor, implements the page processing method of any one of the first aspects.
In a tenth aspect, embodiments of the present application provide a computer program product comprising a computer program which when executed by a processor implements the page processing method of any of the second aspects.
In the embodiment of the application, the cloud device can acquire the display time length, the plurality of elements and the element information of the first webpage, and can process the plurality of static elements according to the display time length and the element information corresponding to the static elements to obtain the static video. The cloud device can use the static video, the dynamic element and the element information corresponding to the dynamic element as video information, or fusion processing is carried out on the static video and the dynamic element according to the element information corresponding to the dynamic element, so as to obtain the video information corresponding to the first webpage. The cloud device may send video information corresponding to the first webpage to the electronic device. After receiving the video information corresponding to the first webpage, the electronic device can determine a target video according to the video information and play the target video. The cloud device can convert the webpage into the corresponding video information, so that the problem that the electronic device cannot be compatible with the format of the content in the webpage is avoided, and the reliability of displaying the webpage by the electronic device can be improved.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this specification, illustrate embodiments of the application and together with the description serve to explain the application and do not constitute a limitation on the application. In the drawings:
Fig. 1 is a schematic diagram of an application scenario provided in an exemplary embodiment of the present application;
FIG. 2 is a schematic flow chart of a page processing method according to an exemplary embodiment of the present application;
FIG. 3 is a schematic diagram of a first web page according to an exemplary embodiment of the present application;
fig. 4 is a flowchart illustrating a method for determining video information according to an exemplary embodiment of the present application;
FIG. 5 is a schematic diagram of an RGB image provided by an exemplary embodiment of the present application;
FIG. 6 is a schematic diagram of a determination of a group of images provided by an exemplary embodiment of the present application;
FIG. 7 is a schematic diagram of image fusion provided by an exemplary embodiment of the present application;
FIG. 8 is a flowchart of yet another page processing method according to an exemplary embodiment of the present application;
fig. 9 is a schematic structural diagram of a page processing apparatus according to an exemplary embodiment of the present application;
Fig. 10 is a schematic structural view of another page processing apparatus according to an exemplary embodiment of the present application;
Fig. 11 is a schematic structural diagram of a cloud device according to an exemplary embodiment of the present application;
Fig. 12 is a schematic structural diagram of an electronic device according to an exemplary embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the technical solutions of the present application will be clearly and completely described below with reference to specific embodiments of the present application and corresponding drawings. It will be apparent that the described embodiments are only some, but not all, embodiments of the application. All other embodiments, which can be made by those skilled in the art based on the embodiments of the application without making any inventive effort, are intended to be within the scope of the application.
Fig. 1 is a schematic diagram of an application scenario provided in an exemplary embodiment of the present application. As shown in fig. 1, the scenario includes a cloud device and a plurality of electronic devices. For example, the plurality of electronic devices may include electronic device 1, electronic device 2, and/or electronic device n. The cloud device may communicate with the electronic devices. The cloud device may be a computer device, for example, a computer or the like. Each electronic device has a display screen; for example, an electronic device may be a large terminal screen, a mobile phone, a computer, and the like.
A worker can create a webpage on the cloud device, and the webpage can include one or more elements such as text, images, and video. After the webpage is created, the cloud device can process the elements in the webpage to convert the webpage into corresponding video information and send the video information to the electronic device, so that the electronic device plays and displays the content of the webpage according to the video information. The webpage can be a static webpage or a dynamic webpage and has a certain display duration. For example, when the webpage is a static webpage, its content can be a promotional poster with a display duration of 5 s; when the webpage is a dynamic webpage, its content can be an advertisement, a promotional video, or the like, with a display duration of 10 s.
In the related art, multimedia information is usually displayed in the form of a web page. For example, an HTML-format web page may be written on a cloud device, where the web page includes text, images, video, and other content. After the web page is created on the cloud device, a link to the web page is generated and sent to the electronic device. When the electronic device needs to play the multimedia information corresponding to the web page, it downloads the content of the web page from the cloud device according to the link and displays that content. However, when the electronic device is not compatible with the format of the content in the web page, it fails to play the multimedia content, resulting in poor reliability of displaying the web page.
In the embodiment of the application, the cloud device can determine a plurality of elements and the corresponding element information in the webpage, process the plurality of elements according to the display duration of the webpage and the element information, and convert the webpage into corresponding video information, so that the electronic device can download the video information and display the webpage content according to it. Because the cloud device converts the webpage into the corresponding video information, the problem that the electronic device is not compatible with the format of the content in the webpage is avoided, and the reliability of displaying the webpage by the electronic device can be improved.
The technical solutions of the present application are described in detail below through specific embodiments. It should be noted that the following embodiments may exist alone or be combined with each other, and the description of the same or similar content will not be repeated in different embodiments.
Fig. 2 is a flow chart of a page processing method according to an exemplary embodiment of the present application. Referring to fig. 2, the method may include:
s201, determining a plurality of elements and element information of the elements in a first webpage.
The execution subject of this embodiment of the application may be a cloud device, or a page processing apparatus provided in the cloud device. The page processing apparatus may be implemented by software, or by a combination of software and hardware.
The first web page may be a web page in HTML format, for example, the first web page may be an H5 web page.
The first webpage can be a static webpage or a dynamic webpage, and the first webpage has a corresponding display duration.
When the first web page is a static web page, the content in the first web page may be static content whose display does not change during the display duration; for example, the content in the first web page may be a static promotional poster. When the first web page is a dynamic web page, the content in the web page may include dynamic content, and the content displayed in the first web page differs in different display periods. For example, the first web page may include promotional text and a promotional video.
The first web page may include a plurality of elements; for example, the elements may be text, images, videos, animated images, and the like. Elements can be divided into static elements and dynamic elements, where static elements include static content such as text and images, and dynamic elements include dynamic content such as videos and animated images.
Each element in the first webpage is provided with corresponding element information, and the element information comprises the display time period and the display position of the element in the first webpage.
The display position may be represented by the coordinates of the element in the first web page. For example, the upper left corner of the first web page may be taken as the origin of coordinates, and the coordinate values may be expressed in pixels. For example, for a rectangular element of 100 px × 150 px in the first web page, its display position may be represented by the upper left corner vertex (10 px, 150 px) and the lower right corner vertex (110 px, 300 px): the upper left corner vertex of the element is 10 px from the left boundary of the first web page and 150 px from its upper boundary, and the lower right corner vertex of the element is 110 px from the left boundary of the first web page and 300 px from its upper boundary.
Different elements can be displayed in different display periods in the first webpage, and different element information is corresponding to the different elements. Next, a plurality of elements in the first web page will be described with reference to fig. 3.
Fig. 3 is a schematic diagram of a first web page according to an exemplary embodiment of the present application. As shown in fig. 3, assume the display duration of the first web page is 3 s and each second contains 24 display images: the 1st second contains the 24 display images corresponding to 1 ms to 60 ms, the 2nd second contains the 24 display images corresponding to 61 ms to 120 ms, and the 3rd second contains the 24 display images corresponding to 121 ms to 180 ms.
As shown in fig. 3, the first web page includes a text a, a text b, a video c, and an image d, where the text a, the text b, and the image d are static elements, and the video c is a dynamic element.
The display period of text a in the first web page is 1 ms to 120 ms, and its display position may be represented by the upper left corner vertex (10 px, 10 px) and the lower right corner vertex (60 px, 40 px).
The display period of text b in the first web page is 121 ms to 180 ms, and its display position may be represented by the upper left corner vertex (10 px, 10 px) and the lower right corner vertex (60 px, 40 px).
The display period of video c in the first web page is 1 ms to 180 ms, and its display position may be represented by the upper left corner vertex (0 px, 80 px) and the lower right corner vertex (280 px, 180 px).
The display period of image d in the first web page is 61 ms to 180 ms, and its display position may be represented by the upper left corner vertex (300 px, 10 px) and the lower right corner vertex (420 px, 90 px).
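For illustration only, the element information described above for fig. 3 could be organized as the following data structure; this minimal Python sketch and its field names are assumptions of this illustration and are not defined by the present application.

```python
from dataclasses import dataclass

@dataclass
class ElementInfo:
    """Element information: type, display period, and display position."""
    kind: str                 # "text", "image", or "video"
    display_period_ms: tuple  # (start, end) on the page timeline
    top_left_px: tuple        # (x, y) measured from the page's upper left corner
    bottom_right_px: tuple

# The four elements of fig. 3
elements = [
    ElementInfo("text",  (1, 120),   (10, 10),   (60, 40)),    # text a
    ElementInfo("text",  (121, 180), (10, 10),   (60, 40)),    # text b
    ElementInfo("video", (1, 180),   (0, 80),    (280, 180)),  # video c
    ElementInfo("image", (61, 180),  (300, 10),  (420, 90)),   # image d
]
```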
S202, acquiring display time length of the first webpage.
The display duration of the first webpage refers to how long the electronic device displays the content of the first webpage. The display duration of the first webpage may be preset. If the first webpage includes a dynamic element such as a video, the display duration of the first webpage can also be determined according to the play duration of the dynamic element.
And S203, processing the plurality of elements according to the display duration and the element information to obtain video information corresponding to the first webpage.
The first web page may include static elements therein, or the first web page may include dynamic elements therein, or the first web page may include both static and dynamic elements therein. When the elements included in the first webpage are different, the process of determining the video information corresponding to the first webpage is also different, including the following three cases:
case 1, the first web page includes static elements.
When the first webpage includes static elements but does not include dynamic elements, a plurality of static elements can be processed according to the display duration of the first webpage and the element information of the static elements to obtain a static video. That is, in this case, the video information corresponding to the first web page includes still video.
In the embodiment shown in fig. 4, the process of determining the still video is described, and will not be described here.
Case 2, the first web page includes a dynamic element.
When the first webpage includes a dynamic element, but does not include a static element, it may be determined that video information corresponding to the first webpage is the dynamic element.
Case 3, the first web page includes static elements and dynamic elements.
In this case, the cloud device may process the plurality of static elements according to the display duration and the element information of the static elements to obtain a static video, and determine video information corresponding to the first webpage according to the static video, the dynamic element, and the element information of the dynamic element. The static video is a video obtained by processing the static element.
The cloud device may determine the video information corresponding to the first webpage in multiple manners, which may include the following two manners:
in the mode 1, after the cloud device obtains the static video corresponding to the static element, the static video, the dynamic element and the element information of the dynamic element can be determined as the video information corresponding to the first webpage.
In this manner, the video information includes still video, dynamic elements, and element information of the dynamic elements.
When the video information in the electronic device needs to be updated, if the dynamic element changes and the static element does not, the cloud device can send only the dynamic element and its element information to the electronic device, without sending the static video. Conversely, if the static element changes and the dynamic element does not, the cloud device can send only the static video, without sending the dynamic element and its element information. This reduces unnecessary data transmission and reduces the workload of the cloud device. In addition, in practical applications, the static elements or dynamic elements in the video information required by different electronic devices may be the same; in this case, the cloud device can flexibly combine the contents of the video information sent to different electronic devices, so that video information can be sent with greater flexibility.
In the mode 2, the cloud device can perform fusion processing on the static video and the dynamic element through a media synthesis algorithm according to the element information of the dynamic element to obtain a fusion video.
In this manner, the video information corresponding to the first web page may be a blended video. After the electronic equipment receives the fusion video, the electronic equipment directly plays the fusion video, so that the convenience of playing the video information by the electronic equipment is higher.
S204, sending video information corresponding to the first webpage to the electronic equipment.
After the cloud device determines the video information corresponding to the first webpage, the video information can be sent to the electronic device through a streaming media technology, so that the electronic device can process the video information while downloading the video information, and the content of the first webpage is displayed.
The streaming media technology refers to a technology of continuously playing multimedia files in real time on a network by adopting a streaming technology. By adopting the streaming media technology, the electronic equipment can download and process at the same time without waiting for the complete downloading of the multimedia file.
In the embodiment of the application, the cloud device can acquire the display time length, the plurality of elements and the element information of the first webpage, and can process the plurality of static elements according to the display time length and the element information corresponding to the static elements to obtain the static video. The cloud device can use the static video, the dynamic element and the element information corresponding to the dynamic element as video information, or fusion processing is carried out on the static video and the dynamic element according to the element information corresponding to the dynamic element, so as to obtain the video information corresponding to the first webpage. The cloud device may send video information corresponding to the first webpage to the electronic device. The cloud device can convert the webpage into the corresponding video information, so that the problem that the electronic device cannot be compatible with the format of the content in the webpage is avoided, and the reliability of displaying the webpage by the electronic device can be improved.
On the basis of the embodiment shown in fig. 2, when the elements included in the first web page are different, the process of determining the video information corresponding to the first web page is different. Next, a process of determining video information corresponding to the first web page will be described with reference to fig. 4, taking a case where the first web page includes a static element and a dynamic element as an example (S203 in the embodiment of fig. 2).
Fig. 4 is a flowchart illustrating a method for determining video information according to an exemplary embodiment of the present application. Referring to fig. 4, the method may include:
s401, determining the number N of video frames according to the display duration and the preset frame rate.
The preset frame rate refers to the frequency, in frames per second, at which images are displayed in succession. The preset frame rate may be set in advance. For example, the preset frame rate may be 24 frames per second, indicating that 24 images are displayed in succession in the first web page within 1 second.
The number N of video frames refers to the number of image frames corresponding to the first webpage in the display duration. N is an integer greater than 1. For example, if the display duration of the first web page is 10s and the preset frame rate is 24 frames/s, the number N of video frames may be determined to be 240.
The cloud device may determine a product of the display duration of the first webpage and a preset frame rate as a number N of video frames.
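As a minimal illustration of this computation (the function name is a placeholder, not part of the present application):

```python
def video_frame_count(display_duration_s: float, fps: int = 24) -> int:
    """Number of video frames N = display duration x preset frame rate."""
    return int(display_duration_s * fps)

# e.g. a 10 s page at a preset frame rate of 24 frames/s gives N = 240
assert video_frame_count(10, 24) == 240
```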
S402, respectively determining a Red Green Blue (RGB) image corresponding to each static element and a frame identification corresponding to the RGB image according to the element information of each static element.
The RGB image corresponding to the static element comprises the static element, and the size of the RGB image is the same as the size of the first webpage. The image format of the RGB image is an RGB format.
The RGB image corresponding to the static element may be generated according to a display position of the static element in the first web page. Next, an RGB image corresponding to a static element will be described with reference to fig. 5.
Fig. 5 is a schematic diagram of an RGB image provided by an exemplary embodiment of the present application. Referring to fig. 5, it includes a first web page 501, an RGB image 502, and an RGB image 503. At a certain time, the first web page includes the static elements text 1 and image 1. According to the position of text 1 in the first web page, it may be determined that the RGB image corresponding to text 1 is RGB image 502; likewise, the RGB image corresponding to image 1 is RGB image 503.
The frame identifier corresponding to an RGB image identifies the frames in which the RGB image is displayed; that is, the frame identifier indicates in which frames the RGB image is displayed. For example, if the frame identifiers corresponding to an RGB image are 1, 2, and 3, the RGB image is displayed in the 1st, 2nd, and 3rd frames.
The frame identifier corresponding to the RGB image may be determined according to the display period of the static element in the first web page and the preset frame rate, and the RGB image corresponds to at least one frame identifier. For example, the frame identification of the image frame displayed in the display period may be calculated according to the display period and the preset frame rate, and the frame identification of the image frame displayed in the display period may be determined as at least one frame identification corresponding to the RGB image.
For example, assuming that the preset frame rate is 24 frames per second and the display period of static element 1 in the first web page is the 61st to 120th milliseconds, it may be determined that the frame identifiers of the image frames displayed in that display period are frames 25 to 48, and the frame identifiers corresponding to static element 1 are therefore frames 25 to 48.
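The mapping from a display period to frame identifiers can be sketched as follows; this sketch assumes display periods are given in milliseconds on the page timeline, and the names are illustrative only.

```python
import math

def frame_ids_for_period(start_ms: int, end_ms: int, fps: int = 24) -> list:
    """1-based frame identifiers of the image frames displayed within the
    element's display period [start_ms, end_ms]."""
    first = start_ms * fps // 1000 + 1
    last = math.ceil(end_ms * fps / 1000)
    return list(range(first, last + 1))
```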
S403, determining N image groups according to the RGB images corresponding to each static element and the frame identifications corresponding to the RGB images.
If the number of video frames corresponding to the first web page is N, N image groups may be determined according to RGB images corresponding to each static element in the first web page and frame identifiers corresponding to the RGB images.
For any one image group, the image group comprises at least one RGB image, and each RGB image in the ith image group corresponds to a frame identifier i.
Fig. 6 is a schematic diagram of the determination of image groups according to an exemplary embodiment of the present application. Referring to fig. 6, assume that the first web page includes the static elements text 1 and image 1, where text 1 corresponds to RGB image 1, whose frame identifiers are frames 1 to 48, and image 1 corresponds to RGB image 2, whose frame identifiers are frames 24 to 72. The number of video frames corresponding to the first web page is 72, so the number of image groups is 72.
Since only RGB image 1 corresponds to frame identifiers 1 to 23, it can be determined that each of the 1st through 23rd image groups includes RGB image 1.
Since both RGB image 1 and RGB image 2 correspond to frame identifiers 24 to 48, it may be determined that each of the 24th through 48th image groups includes RGB image 1 and RGB image 2.
Since only RGB image 2 corresponds to frame identifiers 49 to 72, it may be determined that each of the 49th through 72nd image groups includes RGB image 2.
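A minimal sketch of building the N image groups from the per-element RGB images and their frame identifiers follows; the data layout is an assumption of this illustration.

```python
def build_image_groups(rgb_entries, n_frames):
    """rgb_entries: iterable of (rgb_image, frame_ids) pairs, one pair per
    static element. Returns a dict mapping frame identifier i (1..N) to the
    list of RGB images whose frame identifiers include i, i.e. the i-th
    image group."""
    groups = {i: [] for i in range(1, n_frames + 1)}
    for rgb_image, frame_ids in rgb_entries:
        for i in frame_ids:
            if i in groups:
                groups[i].append(rgb_image)
    return groups
```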
S404, respectively carrying out fusion processing on the RGB images in each image group to obtain N frames of target images.
After the cloud device determines the RGB images in each image group, the RGB images in each group may be superimposed to obtain the N frames of target images.
The process of fusing RGB images in each image group is the same, and a description will be given below of the process of fusing RGB images in any one image group with reference to fig. 7.
Fig. 7 is a schematic diagram of image fusion provided by an exemplary embodiment of the present application. Referring to fig. 7, assuming that a certain image group includes an RGB image 1 corresponding to a text 1 and an RGB image 2 corresponding to an image 1, the cloud device may superimpose the RGB image 1 and the RGB image 2 to obtain an RGB image 3, and the RGB image 3 is a target image.
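The superimposition within one image group can be sketched as follows, assuming the page-sized RGB images use a plain white background for regions not covered by an element (an assumption of this illustration, not a requirement of the present application):

```python
import numpy as np

def fuse_group(rgb_images, background=255):
    """Superimpose the page-sized RGB images of one image group into a single
    target image: later images overwrite earlier ones wherever their pixels
    differ from the background value."""
    target = np.full_like(rgb_images[0], background)
    for img in rgb_images:
        mask = np.any(img != background, axis=-1, keepdims=True)  # element pixels
        target = np.where(mask, img, target)
    return target
```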
And S405, performing stitching processing on the N frames of target images to obtain a static video.
Since the N frames of target images are obtained by superimposing RGB images, each target image is still in RGB format. The cloud device can perform format conversion on the N frames of target images to obtain N target-format images. For example, the cloud device may convert the RGB-format target images into video sequence frame images.
After obtaining the N target-format images, the cloud device can stitch them at the preset frame rate through a media synthesis algorithm to obtain the static video. For example, if the cloud device obtains 240 target-format images, it may stitch them at 24 frames per second to obtain a static video with a display duration of 10 s.
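As an illustration of the stitching step, the sketch below encodes the N target images into a video at the preset frame rate; OpenCV and the mp4v codec are choices of this illustration, since the present application only requires a media synthesis algorithm, not a specific library.

```python
import cv2

def stitch_static_video(target_images_rgb, out_path="static_video.mp4", fps=24):
    """Encode the N fused target images into the static video."""
    height, width, _ = target_images_rgb[0].shape
    writer = cv2.VideoWriter(out_path, cv2.VideoWriter_fourcc(*"mp4v"),
                             fps, (width, height))
    for frame in target_images_rgb:
        # OpenCV expects BGR channel order, so convert each RGB target image
        writer.write(cv2.cvtColor(frame, cv2.COLOR_RGB2BGR))
    writer.release()
    return out_path
```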
S406, determining video information corresponding to the first webpage according to the static video, the dynamic element and the element information of the dynamic element.
It should be noted that, the execution process of S406 may refer to case 3 in S203, and will not be described herein.
In the embodiment of the application, the cloud device can determine the number N of video frames corresponding to the first webpage according to the display duration and the preset frame rate of the first webpage, and generate the corresponding RGB image according to the display time period and the display position of each static element in the first webpage. The cloud device can determine N image groups according to the RGB images corresponding to each static element and the frame identifications corresponding to the RGB images, can superimpose the RGB images in each image group to obtain N frame target images, and further can perform format conversion processing on the N frame target images to obtain N frame target format target images. The cloud device can splice target images in the N-frame target format to obtain a static video, and determine video information corresponding to the first webpage according to the static video, the dynamic elements and the element information of the dynamic elements. The cloud device can convert the webpage into the corresponding video information, so that the problem that the electronic device cannot be compatible with the format of the content in the webpage is avoided, and the reliability of displaying the webpage by the electronic device can be improved.
On the basis of any one of the embodiments, after the electronic device receives the video information corresponding to the first webpage, the electronic device may play the video according to the video information. Next, a process of receiving and playing video information by the electronic device will be described with reference to fig. 8.
Fig. 8 is a flowchart of another page processing method according to an exemplary embodiment of the present application, referring to fig. 8, the method may include:
s801, receiving video information corresponding to a first webpage.
The execution body of the embodiment of the application can be electronic equipment or a page processing device arranged in the electronic equipment. The page processing device may be implemented by software, or may be implemented by a combination of software and hardware.
The first web page comprises a plurality of elements, wherein the plurality of elements comprise static elements and dynamic elements. For example, the static element may include text 1 and image 1, and the dynamic element may include video 1. Each element has corresponding element information including a display period and a display position of the element in the first web page.
The video information is determined based on element information of a plurality of elements in the first web page. The electronic device may receive video information corresponding to the first webpage sent by the cloud device.
S802, determining a target video according to the video information.
After receiving the video information, the electronic device may identify it to determine the content it includes. Depending on the content included in the video information, the manner in which the electronic device determines the target video differs, covering the following 4 cases:
Case 1, the video information includes still video.
The static video is determined according to element information of static elements in the first webpage.
For example, if the first web page includes the text 1 and the image 1, the still video 1 may be obtained according to the text 1 and the image 1, and the corresponding video information includes the still video 1.
In this case, the electronic device may determine the still video as the target video.
Case 2, video information includes dynamic elements.
For example, if the first web page includes video 1, the corresponding video information includes video 1.
In this case, the electronic device may determine that the dynamic element is the target video.
Case 3, the video information includes the still video, the dynamic element and the element information of the dynamic element.
For example, if the first web page includes text 1, image 1, and video 1, still video 1 may be generated according to text 1 and image 1, and the corresponding video information includes still video 1, video 1, and the display period and display position of video 1.
In this case, the electronic device may perform fusion processing on the static video and the dynamic element according to the element information of the dynamic element to obtain the target video.
Case 4, video information includes a fused video.
The fusion video is obtained by fusion processing of the static video and the dynamic element.
For example, if the first web page includes the text 1, the image 1 and the video 1, the still video 1 may be generated according to the text 1 and the image 1, and the still video 1 and the video 1 may be fused to obtain a fused video, and the corresponding video information includes the fused video.
In this case, the electronic device may determine the blended video as the target video.
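The client-side selection among the four cases above can be sketched as follows; the dictionary keys and the fuse() callback (which would overlay the dynamic elements onto the static video according to their element information) are assumptions of this illustration, not names defined by the present application.

```python
def determine_target_video(video_info, fuse):
    """Return the target video for the received video information."""
    if "fused_video" in video_info:                        # case 4
        return video_info["fused_video"]
    if "static_video" in video_info and "dynamic_elements" in video_info:
        return fuse(video_info["static_video"],            # case 3
                    video_info["dynamic_elements"])
    if "static_video" in video_info:                       # case 1
        return video_info["static_video"]
    return video_info["dynamic_elements"][0]               # case 2
```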
S803, playing the target video.
After the electronic device determines the target video, the target video may be played to display the content in the first webpage.
In the embodiment of the application, the electronic equipment can receive the video information corresponding to the first webpage and identify and process the video information. If the video information comprises the element information of the static video, the dynamic element and the dynamic element, the electronic device can conduct fusion processing on the static video and the dynamic element according to the element information of the dynamic element to obtain a target video and conduct playing so as to display the content corresponding to the first webpage. Because the electronic equipment receives the video information corresponding to the webpage, and the content in the webpage is not directly downloaded, the problem that the electronic equipment cannot be compatible with the format of the content in the webpage is avoided, and therefore the reliability of the electronic equipment for displaying the webpage can be improved.
Fig. 9 is a schematic structural diagram of a page processing apparatus according to an exemplary embodiment of the present application. Referring to fig. 9, the page processing apparatus 10 includes a determining module 11, an acquiring module 12, a processing module 13, and a transmitting module 14, wherein,
The determining module 11 is configured to determine a plurality of elements and element information of the elements in a first web page, where the element information includes a display period and a display position of the elements in the first web page;
The acquiring module 12 is configured to acquire a display duration of the first web page;
The processing module 13 is configured to process the plurality of elements according to the display duration and the element information, so as to obtain video information corresponding to the first webpage;
the sending module 14 is configured to send video information corresponding to the first web page to an electronic device.
The page processing device provided by the embodiment of the application can execute the technical scheme shown in the embodiment of the method, and the implementation principle and the beneficial effects are similar, and are not repeated here.
In one possible embodiment, the processing module 13 is specifically configured to:
processing the plurality of static elements according to the display duration and the element information of the static elements to obtain a static video;
And determining video information corresponding to the first webpage according to the static video, the dynamic element and the element information of the dynamic element.
In one possible embodiment, the processing module 13 is specifically configured to:
determining the number N of video frames according to the display duration and a preset frame rate, wherein N is an integer greater than 1;
Generating N frames of target images according to the static elements and element information of the static elements;
and performing splicing processing on the N frames of target images to obtain the static video.
In one possible embodiment, the processing module 13 is specifically configured to:
According to the element information of each static element, determining an RGB image corresponding to each static element and a frame identifier corresponding to the RGB image, wherein the frame identifier is an integer greater than or equal to 1 and less than or equal to N;
according to the RGB images corresponding to each static element and the frame identifications corresponding to the RGB images, N image groups are determined, wherein the image groups comprise at least one RGB image, each RGB image in the ith image group corresponds to a frame identification i, and the i is an integer which is more than or equal to 1 and less than or equal to N;
And respectively carrying out fusion processing on the RGB images in each image group to obtain the N frames of target images.
In one possible embodiment, the processing module 13 is specifically configured to:
generating an RGB image corresponding to the static element according to the display position of the static element in the first webpage;
And determining a frame identifier corresponding to the RGB image according to the display period of the static element in the first webpage and the preset frame rate, wherein the RGB image corresponds to at least one frame identifier.
In one possible embodiment, the processing module 13 is specifically configured to:
perform format conversion on the N frames of target images to obtain N frames of images in a target format;
and splice the N frames of images in the target format to obtain the static video.
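As a sketch of the format-conversion step, each RGB target image could be converted to a planar YUV 4:2:0 buffer before the frames are spliced and encoded; the choice of I420 as the target format is an assumption made for illustration only.

```python
import cv2
import numpy as np


def to_target_format(rgb_frame: np.ndarray) -> np.ndarray:
    """Convert one RGB target image to planar YUV 4:2:0 (I420) — sketch only."""
    # Many video encoders consume YUV 4:2:0 planes rather than packed RGB,
    # so each target image is converted before the N frames are spliced.
    return cv2.cvtColor(rgb_frame, cv2.COLOR_RGB2YUV_I420)
```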
In one possible embodiment, the processing module 13 is specifically configured to:
determine that the video information includes the static video, the dynamic element and the element information of the dynamic element;
or
fuse the static video and the dynamic element according to the element information of the dynamic element to obtain a fused video, and determine that the video information includes the fused video.
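For illustration only, the two alternative shapes of the video information could be carried in a small payload structure such as the following; the structure and field names are hypothetical and not part of the embodiment.

```python
from dataclasses import dataclass, field
from typing import List, Optional, Tuple


@dataclass
class DynamicElement:
    data: bytes                          # e.g. an encoded video or animation element
    display_period: Tuple[float, float]  # (start, end) within the first webpage
    display_position: Tuple[int, int]    # (x, y) within the first webpage


@dataclass
class VideoInfo:
    # Alternative 1: static video plus the dynamic element and its element information.
    static_video: Optional[bytes] = None
    dynamic_elements: List[DynamicElement] = field(default_factory=list)
    # Alternative 2: a single fused video produced on the cloud side.
    fused_video: Optional[bytes] = None
```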
The page processing apparatus provided in this embodiment of the present application can execute the technical solutions shown in the foregoing method embodiments; the implementation principles and beneficial effects are similar and are not repeated here.
Fig. 10 is a schematic structural diagram of another page processing apparatus according to an exemplary embodiment of the present application. Referring to fig. 10, the page processing apparatus 20 includes a receiving module 21, a determining module 22 and a playing module 23, wherein:
The receiving module 21 is configured to receive video information corresponding to a first webpage, where the first webpage includes a plurality of elements, the video information is determined according to element information of the plurality of elements, and the element information includes a display period and a display position of the elements in the first webpage;
The determining module 22 is configured to determine a target video according to the video information;
The playing module 23 is configured to play the target video.
The page processing apparatus provided in this embodiment of the present application can execute the technical solutions shown in the foregoing method embodiments; the implementation principles and beneficial effects are similar and are not repeated here.
In one possible implementation, the video information includes the static video, the dynamic element and the element information of the dynamic element, where the static video is determined according to the element information of the static elements in the first webpage;
or
the video information includes a fused video, where the fused video is obtained by fusing the static video and the dynamic element.
In one possible implementation, the video information includes the static video, the dynamic element and the element information of the dynamic element, and the determining module 22 is specifically configured to:
fuse the static video and the dynamic element according to the element information of the dynamic element to obtain the target video;
or,
the video information includes a fused video, and the determining module 22 is specifically configured to:
determine the fused video as the target video.
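On the electronic-device side, determining the target video then reduces to a simple branch over the received payload. The sketch below reuses the hypothetical VideoInfo structure from the earlier example and leaves the on-device fusion itself as a placeholder, since its details are not specified here.

```python
def fuse_on_device(static_video: bytes, dynamic_elements: list) -> bytes:
    # Placeholder for fusing the static video with the dynamic element according
    # to its element information (display period and display position).
    raise NotImplementedError("on-device fusion is outside the scope of this sketch")


def determine_target_video(info: "VideoInfo") -> bytes:
    """Pick or build the target video from the received video information (sketch)."""
    if info.fused_video is not None:
        # Alternative 2: the cloud side already fused the static video and the dynamic element.
        return info.fused_video
    # Alternative 1: fuse locally, then play the resulting target video.
    return fuse_on_device(info.static_video, info.dynamic_elements)
```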
The page processing apparatus provided in this embodiment of the present application can execute the technical solutions shown in the foregoing method embodiments; the implementation principles and beneficial effects are similar and are not repeated here.
Fig. 11 is a schematic structural diagram of a cloud device according to an exemplary embodiment of the present application. Referring to fig. 11, the cloud device 30 may include a processor 31 and a memory 32. The processor 31 and the memory 32 are, for example, interconnected by a bus 33.
The memory 32 stores computer-executable instructions;
The processor 31 executes computer-executable instructions stored in the memory 32, causing the processor 31 to execute the page processing method as shown in the method embodiments described above.
Fig. 12 is a schematic structural diagram of an electronic device according to an exemplary embodiment of the present application. Referring to fig. 12, the electronic device 40 may include a processor 41 and a memory 42. The processor 41 and the memory 42 are, for example, interconnected by a bus 43.
The memory 42 stores computer-executable instructions;
the processor 41 executes computer-executable instructions stored in the memory 42, causing the processor 41 to execute the page processing method as shown in the above-described method embodiment.
Accordingly, an embodiment of the present application provides a computer-readable storage medium storing computer-executable instructions which, when executed by a processor, implement the page processing method described in the foregoing method embodiments.
Accordingly, an embodiment of the present application also provides a computer program product including a computer program which, when executed by a processor, implements the page processing method shown in the foregoing method embodiments.
It will be appreciated by those skilled in the art that embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
In one typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include volatile memory in a computer-readable medium, such as random access memory (RAM), and/or non-volatile memory, such as read-only memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
Computer-readable media include permanent and non-permanent, removable and non-removable media, and may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, program modules, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape/magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, computer-readable media do not include transitory computer-readable media (transmission media), such as modulated data signals and carrier waves.
It should also be noted that the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such a process, method, article, or apparatus. Without further limitation, an element preceded by the phrase "comprising a ..." does not exclude the presence of other like elements in the process, method, article, or apparatus that comprises the element.
The foregoing descriptions are merely exemplary embodiments of the present application and are not intended to limit the present application. Various modifications and variations of the present application will be apparent to those skilled in the art. Any modification, equivalent replacement, improvement, or the like made within the spirit and principles of the present application shall fall within the scope of the claims of the present application.

Claims (15)

1. A page processing method, comprising:
determining a plurality of elements and element information of the elements in a first webpage, the element information comprising a display period and a display position of the elements in the first webpage;
obtaining a display duration of the first webpage;
processing the plurality of elements according to the display duration and the element information to obtain video information corresponding to the first webpage; and
sending the video information corresponding to the first webpage to an electronic device;
wherein the plurality of elements comprise static elements and dynamic elements, and processing the plurality of elements according to the display duration and the element information to obtain the video information corresponding to the first webpage comprises:
processing the plurality of static elements according to the display duration and element information of the static elements to obtain a static video; and
determining the video information corresponding to the first webpage according to the static video, the dynamic element and element information of the dynamic element;
wherein, when the video information in the electronic device needs to be updated, if the dynamic element changes and the static element does not change, the dynamic element and the element information of the dynamic element are sent to the electronic device, or, if the static element changes and the dynamic element does not change, the static video is sent to the electronic device.

2. The method according to claim 1, wherein processing the plurality of static elements according to the display duration and the element information of the static elements to obtain the static video comprises:
determining a number N of video frames according to the display duration and a preset frame rate, N being an integer greater than 1;
generating N frames of target images according to the plurality of static elements and the element information of the plurality of static elements; and
splicing the N frames of target images to obtain the static video.

3. The method according to claim 2, wherein generating the N frames of target images according to the plurality of static elements and the element information of the plurality of static elements comprises:
determining, according to the element information of each static element, a red-green-blue (RGB) image corresponding to the static element and a frame identifier corresponding to the RGB image, the frame identifier being an integer greater than or equal to 1 and less than or equal to N;
determining N image groups according to the RGB image corresponding to each static element and the frame identifier corresponding to the RGB image, each image group comprising at least one RGB image, each RGB image in the i-th image group corresponding to frame identifier i, i being an integer greater than or equal to 1 and less than or equal to N; and
fusing the RGB images in each image group to obtain the N frames of target images.

4. The method according to claim 3, wherein, for any static element, determining the RGB image corresponding to the static element and the frame identifier corresponding to the RGB image according to the element information of the static element comprises:
generating the RGB image corresponding to the static element according to the display position of the static element in the first webpage; and
determining the frame identifier corresponding to the RGB image according to the display period of the static element in the first webpage and the preset frame rate, the RGB image corresponding to at least one frame identifier.

5. The method according to any one of claims 2 to 4, wherein splicing the N frames of target images to obtain the static video comprises:
performing format conversion on the N frames of target images to obtain N frames of images in a target format; and
splicing the N frames of images in the target format to obtain the static video.

6. The method according to any one of claims 1 to 4, wherein determining the video information corresponding to the first webpage according to the static video, the dynamic element and the element information of the dynamic element comprises:
determining that the video information comprises the static video, the dynamic element and the element information of the dynamic element;
or
fusing the static video and the dynamic element according to the element information of the dynamic element to obtain a fused video, and determining that the video information comprises the fused video.

7. A page processing method, comprising:
receiving video information corresponding to a first webpage, the first webpage comprising a plurality of elements, the plurality of elements comprising static elements and dynamic elements, the video information being determined according to a static video, the dynamic element and element information of the dynamic element, the static video being obtained by processing the plurality of static elements according to a display duration and element information of the static elements, the element information comprising a display period and a display position of the element in the first webpage, wherein, when the video information in the electronic device needs to be updated, if the dynamic element changes and the static element does not change, the dynamic element and the element information of the dynamic element are received, or, if the static element changes and the dynamic element does not change, the static video is received; and
determining a target video according to the video information, and playing the target video.

8. The method according to claim 7, wherein the plurality of elements comprise static elements and dynamic elements, and wherein:
the video information comprises the static video, the dynamic element and the element information of the dynamic element, the static video being determined according to the element information of the static elements in the first webpage;
or
the video information comprises a fused video obtained by fusing the static video and the dynamic element.

9. The method according to claim 8, wherein the video information comprises the static video, the dynamic element and the element information of the dynamic element, and determining the target video according to the video information comprises:
fusing the static video and the dynamic element according to the element information of the dynamic element to obtain the target video;
or, the video information comprises a fused video, and determining the target video according to the video information comprises:
determining the fused video as the target video.

10. A page processing apparatus, comprising a determining module, an acquiring module, a processing module and a sending module, wherein:
the determining module is configured to determine a plurality of elements and element information of the elements in a first webpage, the element information comprising a display period and a display position of the elements in the first webpage;
the acquiring module is configured to acquire a display duration of the first webpage;
the processing module is configured to process the plurality of elements according to the display duration and the element information to obtain video information corresponding to the first webpage;
the sending module is configured to send the video information corresponding to the first webpage to an electronic device;
the plurality of elements comprise static elements and dynamic elements, and the processing module is specifically configured to process the plurality of static elements according to the display duration and the element information of the static elements to obtain a static video, and to determine the video information corresponding to the first webpage according to the static video, the dynamic element and element information of the dynamic element; and
when the video information in the electronic device needs to be updated, if the dynamic element changes and the static element does not change, the dynamic element and the element information of the dynamic element are sent to the electronic device, or, if the static element changes and the dynamic element does not change, the static video is sent to the electronic device.

11. A page processing apparatus, comprising a receiving module, a determining module and a playing module, wherein:
the receiving module is configured to receive video information corresponding to a first webpage, the first webpage comprising a plurality of elements, the plurality of elements comprising static elements and dynamic elements, the video information being determined according to a static video, the dynamic element and element information of the dynamic element, the static video being obtained by processing the plurality of static elements according to a display duration and element information of the static elements, the element information comprising a display period and a display position of the element in the first webpage, wherein, when the video information in the electronic device needs to be updated, if the dynamic element changes and the static element does not change, the dynamic element and the element information of the dynamic element are received, or, if the static element changes and the dynamic element does not change, the static video is received;
the determining module is configured to determine a target video according to the video information; and
the playing module is configured to play the target video.

12. A cloud device, comprising a memory and a processor, wherein:
the memory stores computer-executable instructions; and
the processor executes the computer-executable instructions stored in the memory, causing the processor to perform the page processing method according to any one of claims 1 to 6.

13. An electronic device, comprising a memory and a processor, wherein:
the memory stores computer-executable instructions; and
the processor executes the computer-executable instructions stored in the memory, causing the processor to perform the page processing method according to any one of claims 7 to 9.

14. A computer-readable storage medium storing computer-executable instructions which, when executed by a processor, implement the page processing method according to any one of claims 1 to 6 or the page processing method according to any one of claims 7 to 9.

15. A computer program product, comprising a computer program which, when executed by a processor, implements the page processing method according to any one of claims 1 to 6 or the page processing method according to any one of claims 7 to 9.
CN202210286723.7A 2022-03-22 2022-03-22 Page processing method, device and equipment Active CN114666621B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210286723.7A CN114666621B (en) 2022-03-22 2022-03-22 Page processing method, device and equipment


Publications (2)

Publication Number Publication Date
CN114666621A 2022-06-24
CN114666621B 2024-12-20

Family

ID=82031485

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210286723.7A Active CN114666621B (en) 2022-03-22 2022-03-22 Page processing method, device and equipment

Country Status (1)

Country Link
CN (1) CN114666621B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115396696B (en) * 2022-08-22 2024-04-12 网易(杭州)网络有限公司 Video data transmission method, system, processing device and storage medium

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107480245A (en) * 2017-08-10 2017-12-15 腾讯科技(深圳)有限公司 A kind of generation method of video file, device and storage medium

Family Cites Families (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2000049535A2 (en) * 1999-02-19 2000-08-24 Interactive Video Technologies, Inc. System, method and article for applying temporal elements to the attributes of a static document object
US20170004646A1 (en) * 2015-07-02 2017-01-05 Kelly Phillipps System, method and computer program product for video output from dynamic content
CN105808659B (en) * 2016-02-29 2019-03-05 努比亚技术有限公司 Mobile terminal and its webpage capture method
CN105847870A (en) * 2016-04-20 2016-08-10 乐视控股(北京)有限公司 Server, static video playing page generation method, device and system
US10795700B2 (en) * 2016-07-28 2020-10-06 Accenture Global Solutions Limited Video-integrated user interfaces
CN106649830A (en) * 2016-12-29 2017-05-10 北京奇虎科技有限公司 A method and device for displaying information
CN110324671B (en) * 2018-03-30 2021-06-08 中兴通讯股份有限公司 Webpage video playing method and device, electronic equipment and storage medium
CN109684565A (en) * 2018-12-11 2019-04-26 北京字节跳动网络技术有限公司 The generation of Webpage correlation video and methods of exhibiting, device, system and electronic equipment
CN110457624A (en) * 2019-06-26 2019-11-15 网宿科技股份有限公司 Video generation method, device, server and storage medium
US10984067B2 (en) * 2019-06-26 2021-04-20 Wangsu Science & Technology Co., Ltd. Video generating method, apparatus, server, and storage medium
CN110401660B (en) * 2019-07-26 2022-03-01 秒针信息技术有限公司 False flow identification method and device, processing equipment and storage medium
CN113516740B (en) * 2020-04-10 2024-07-09 阿里巴巴集团控股有限公司 Method and device for adding static element and electronic equipment
CN111654755B (en) * 2020-05-21 2023-04-18 维沃移动通信有限公司 Video editing method and electronic equipment
CN113010825A (en) * 2021-03-09 2021-06-22 腾讯科技(深圳)有限公司 Data processing method and related device

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107480245A (en) * 2017-08-10 2017-12-15 腾讯科技(深圳)有限公司 A kind of generation method of video file, device and storage medium

Also Published As

Publication number Publication date
CN114666621A (en) 2022-06-24

Similar Documents

Publication Publication Date Title
CN109460233B (en) Method, device, terminal equipment and medium for updating native interface display of page
CN109168076B (en) Online course recording method, device, server and medium
US11785195B2 (en) Method and apparatus for processing three-dimensional video, readable storage medium and electronic device
CN105893161A (en) Method and apparatus for calling resource in software program
CN110070593B (en) Method, device, equipment and medium for displaying picture preview information
CN115103236B (en) Image record generation method, device, electronic equipment and storage medium
WO2024131621A1 (en) Special effect generation method and apparatus, electronic device, and storage medium
CN114666621B (en) Page processing method, device and equipment
CN109547851A (en) Video broadcasting method, device and electronic equipment
CN105049910B (en) A kind of method for processing video frequency and device
CN109871465B (en) Time axis calculation method and device, electronic equipment and storage medium
CN110618811A (en) Information presentation method and device
US20240048665A1 (en) Video generation method, video playing method, video generation device, video playing device, electronic apparatus and computer-readable storage medium
CN109640023B (en) Video recording method, device, server and storage medium
US12039628B2 (en) Video mask layer display method, apparatus, device and medium
CN110619615A (en) Method and apparatus for processing image
CN112017261A (en) Sticker generation method and device, electronic equipment and computer readable storage medium
CN114066721B (en) Display method and device and electronic equipment
CN112306339B (en) Method and apparatus for displaying image
EP4421727A1 (en) Image processing method and apparatus, electronic device, and storage medium
US12020347B2 (en) Method and apparatus for text effect processing
CN112148901B (en) Live streaming editing method and device
CN113496534B (en) Dynamic bitmap synthesis method, device, electronic equipment and computer storage medium
CN118710510A (en) A video data processing method, device and related equipment
CN118741242A (en) Video editing method, device, equipment and medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant