CN115361569B - Dynamic frame screen projection method in cloud conference and related products - Google Patents
- Publication number
- CN115361569B (application number CN202210958059.6A)
- Authority
- CN
- China
- Prior art keywords
- cloud
- screen projection
- conference
- frame rate
- cloud conference
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/20—Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
- H04N21/23—Processing of content or additional data; Elementary server operations; Server middleware
- H04N21/234—Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs
- H04N21/2343—Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements
- H04N21/234381—Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements by altering the temporal resolution, e.g. decreasing the frame rate by frame skipping
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/14—Digital output to display device ; Cooperation and interconnection of the display device with other functional units
- G06F3/1454—Digital output to display device ; Cooperation and interconnection of the display device with other functional units involving copying of the display data of a local workstation or window to a remote workstation or window so that an actual copy of the data is displayed simultaneously on two or more displays, e.g. teledisplay
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N7/00—Television systems
- H04N7/14—Systems for two-way working
- H04N7/15—Conference systems
Abstract
The embodiments of the present application provide a dynamic frame screen projection method in a cloud conference and related products. The method comprises the following steps: a cloud conference server acquires parameter information of a presenter terminal and other cloud terminals; the cloud conference server predicts a screen projection strategy for the current conference according to the parameter information; and the cloud conference server dynamically adjusts the screen projection frame rate of the cloud conference according to the screen projection strategy and executes the cloud conference. The technical solution provided by the application has the advantage of improving the cloud conference experience.
Description
Technical Field
The application relates to the technical field of electronics and communication, in particular to a dynamic frame screen projection method in a cloud conference and related products.
Background
Cloud conferencing is an efficient, convenient, and low-cost form of conferencing based on cloud computing technology. Through a simple internet interface, users can quickly and synchronously share voice, data files, and video with groups and clients around the world, while the cloud conference service provider handles the complex underlying work, such as data transmission and processing, during the conference.
In existing cloud conference scenarios, the image data transmission of the shared file area is not matched to the cloud terminals, which degrades the cloud conference and reduces user experience.
Disclosure of Invention
The embodiment of the application discloses a dynamic frame screen projection method in a cloud conference and a related product.
In a first aspect, a method for projecting a dynamic frame in a cloud conference is provided, the method comprising the following steps:
the cloud conference server acquires parameter information of a presenter terminal and other cloud terminals;
the cloud conference server predicts a screen projection strategy of the current conference according to the parameter information;
and the cloud conference server dynamically adjusts the screen projection frame rate of the cloud conference according to the screen projection strategy and executes the cloud conference.
In a second aspect, a dynamic frame projection system in a cloud conference is provided, the system comprising:
the acquisition unit is used for acquiring parameter information of the presenter terminal and other cloud terminals;
the processing unit is used for predicting the screen projection strategy of the current conference according to the parameter information; and dynamically adjusting the screen projection frame rate of the cloud conference according to the screen projection strategy, and executing the cloud conference.
In a third aspect, there is provided an electronic device comprising a processor, a memory, a communication interface, and one or more programs stored in the memory and configured to be executed by the processor, the programs comprising instructions for performing the steps in the method of the first aspect.
In a fourth aspect, a computer-readable storage medium is provided, storing a computer program for electronic data exchange, wherein the computer program causes a computer to perform the method of the first aspect.
In a fifth aspect, a computer program product is provided, wherein the computer program product comprises a non-transitory computer readable storage medium storing a computer program, the computer program being operable to cause a computer to perform some or all of the steps as described in the first aspect of the embodiments of the application. The computer program product may be a software installation package.
According to the technical scheme, the cloud conference server acquires a plurality of items of parameter information from the presenter terminal and other cloud terminals; the cloud conference server predicts the screen projection strategy of the current conference from this parameter information, dynamically adjusts the screen projection frame rate of the cloud conference according to the strategy, and executes the cloud conference. During a cloud conference, the screen projection frame rate can thus be adjusted dynamically from multiple items of parameter information, which reduces the volume of network data transmitted while preserving conference fluency, improving conference quality and user experience, and lowering cost.
Drawings
The drawings used in the embodiments of the present application are described below.
FIG. 1 is a schematic diagram of a cloud conference platform architecture of the present application;
fig. 2 is a schematic flow chart of a dynamic frame screen projection method in a cloud conference;
fig. 3 is a schematic structural diagram of a dynamic frame screen projection system in a cloud conference according to the present application;
fig. 4 is a flow chart of a dynamic frame screen projection method in a cloud conference according to a first embodiment of the present application;
fig. 5 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
Embodiments of the present application will be described below with reference to the accompanying drawings in the embodiments of the present application.
The term "and/or" in the present application is merely an association relation describing the association object, and indicates that three kinds of relations may exist, for example, a and/or B may indicate: a exists alone, A and B exist together, and B exists alone. In this context, the character "/" indicates that the front and rear associated objects are an "or" relationship.
The term "plurality" as used in the embodiments of the present application means two or more. The first, second, etc. descriptions in the embodiments of the present application are only used for illustrating and distinguishing the description objects, and no order is used, nor is the number of the devices in the embodiments of the present application limited, and no limitation on the embodiments of the present application should be construed. The "connection" in the embodiment of the present application refers to various connection manners such as direct connection or indirect connection, so as to implement communication between devices, which is not limited in the embodiment of the present application.
Referring to fig. 1, fig. 1 is a schematic diagram of the framework of a cloud conference platform. As shown in fig. 1, the cloud conference platform includes a plurality of cloud terminals connected together through a cloud conference server. Each cloud terminal may specifically include a processor, a memory, a display screen, a communication circuit, an audio component, and a camera component, which may be connected through a bus or in other ways; the present application does not limit the specific manner of connection. The cloud terminals may connect to the cloud conference platform through a wired network or through the wireless network of a wireless communication system.
The wireless communication system may be: a Global System for Mobile communication (GSM), a Code Division Multiple Access (CDMA) system, a Wideband Code Division Multiple Access (WCDMA) system, a General Packet Radio Service (GPRS), a Long Term Evolution (LTE) system, an LTE-Advanced (LTE-A) system, a New Radio (NR) system, an evolution system of an NR system, LTE-based access to unlicensed spectrum (LTE-U), NR-based access to unlicensed spectrum (NR-U), a Universal Mobile Telecommunication System (UMTS), a next-generation communication system, or another communication system.
Referring to fig. 2, fig. 2 provides a flow chart of a dynamic frame screen projection method in a cloud conference. The method shown in fig. 2 may be performed under the framework of the cloud conference platform shown in fig. 1, specifically by a cloud terminal of that platform or by the cloud conference server. This embodiment takes the cloud conference server as an example; in practical applications the method may also be performed by a cloud terminal. The method shown in fig. 2 includes the following steps:
step S201, a cloud conference server acquires parameter information of a presenter terminal and other cloud terminals;
by way of example, the above parameter information includes, but is not limited to: the operation information of the speaker terminal, the network delay of other cloud terminals, the current lecture content of the cloud conference, and the like.
By way of example, the above-described operational information includes, but is not limited to: terminal interaction information and terminal hardware parameters; such as load rate, hardware usage (e.g., CPU usage, memory usage, etc.).
Step S202, the cloud conference server predicts a screen projection strategy of the current conference according to the parameter information;
for example, the above-mentioned screen projection strategy includes: high frame rate, medium frame rate, low frame rate or variable frame rate.
Step S203, the cloud conference server dynamically adjusts the screen projection frame rate of the cloud conference according to the screen projection strategy, and executes the cloud conference.
The screen projection frame rate of the cloud conference can be dynamically adjusted to match the screen projection strategy, and the parameter information can be acquired periodically so that the adjustment stays dynamic, as illustrated by the sketch below.
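To make the loop concrete, the following Python sketch shows one way the periodic acquire–predict–adjust cycle of steps S201 to S203 could be orchestrated. The `server` object and all of its methods (`collect_parameters`, `predict_policy`, `rate_for_delay`, and so on) are hypothetical names introduced for illustration, not part of the patent, and the frame-rate values are assumptions.

```python
import time

# Illustrative frame-rate targets (fps) for the fixed policies; the patent
# does not specify concrete values, so these numbers are assumptions.
POLICY_FRAME_RATES = {"high": 30, "medium": 15, "low": 5}

def run_conference_loop(server, poll_interval_s=5.0):
    """Periodic acquire -> predict -> adjust cycle (steps S201-S203)."""
    while server.conference_active():
        params = server.collect_parameters()    # S201: presenter + other cloud terminals
        policy = server.predict_policy(params)  # S202: screen projection strategy
        if policy == "variable":
            # Variable frame rate: each terminal gets a rate matching its delay.
            for terminal in params["terminals"]:
                server.set_frame_rate(terminal, server.rate_for_delay(terminal.delay_ms))
        else:
            server.set_frame_rate_all(POLICY_FRAME_RATES[policy])  # S203
        time.sleep(poll_interval_s)             # periodic re-acquisition
```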
According to the technical scheme, the cloud conference server acquires a plurality of items of parameter information from the presenter terminal and other cloud terminals; the cloud conference server predicts the screen projection strategy of the current conference from this parameter information, dynamically adjusts the screen projection frame rate of the cloud conference according to the strategy, and executes the cloud conference. During a cloud conference, the screen projection frame rate can thus be adjusted dynamically from multiple items of parameter information, which reduces the volume of network data transmitted while preserving conference fluency, improving conference quality and user experience, and lowering cost.
For example, the cloud conference server may predict, according to the parameter information, a screen projection policy of the current conference, which specifically includes:
if the parameter information includes the current lecture content of the cloud conference, the cloud conference server inputs the current lecture content into a type recognition model to determine a first type of that content, recognizes the presenter to determine the presenter's first identity, extracts, according to the first identity, the presenter's historical lecture time t for the first type, and determines the screen projection strategy of the current conference according to the interval in which the time t falls.
By way of example, the type recognition model may be a neural network model, a deep neural network model, a support vector machine, etc., and the present application is not limited to the specific form of the model.
Specifically, the determining the screen-projection strategy of the current conference according to the interval where the time t is located may specifically include:
determining the first interval in which the time t falls, extracting the first strategy corresponding to that interval according to the mapping between intervals and screen projection strategies, and taking the first strategy as the screen projection strategy of the current conference.
Specifically, for example, if the time t is 30 minutes, the extracted first strategy may be compressed frame rate screen projection; if the time t is 2 minutes, the first strategy may be medium frame rate screen projection; and so on. The present application does not limit the specific implementation of the mapping between intervals and screen projection strategies.
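A minimal sketch of how such an interval-to-policy mapping could be stored and queried is given below. The interval boundaries are assumptions chosen only to be consistent with the two examples above (t = 2 minutes → medium frame rate, t = 30 minutes → compressed frame rate).

```python
# Hypothetical mapping from the presenter's historical lecture time t
# (in minutes) for the recognized content type to a first strategy.
TIME_POLICY_INTERVALS = [
    (0.0, 5.0, "medium frame rate"),                # e.g. t = 2 minutes
    (5.0, 20.0, "low frame rate"),
    (20.0, float("inf"), "compressed frame rate"),  # e.g. t = 30 minutes
]

def policy_for_lecture_time(t_minutes: float) -> str:
    """Return the strategy of the first interval that contains t."""
    for lower, upper, policy in TIME_POLICY_INTERVALS:
        if lower <= t_minutes < upper:
            return policy
    return "medium frame rate"  # fallback if t falls outside all intervals
```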
For example, the cloud conference server may predict, according to the parameter information, a screen projection policy of the current conference, which specifically includes:
if the parameter information includes the network delays of the other cloud terminals, determining that the screen projection strategy of the current conference is variable frame rate screen projection; specifically, allocating to the terminal corresponding to each network delay the frame rate associated with the delay interval in which that network delay falls.
A concrete implementation of this scheme may be as follows: if the network delay of cloud terminal 1 is low, for example 5 ms, its screen projection frame rate may be determined to be high; if the network delay of cloud terminal 2 is medium, for example 20 ms, its frame rate may be determined to be medium; and if the network delay of cloud terminal 3 is high, for example 100 ms, its frame rate may be determined to be low. Different compression methods can be paired with the different frame rates: for high frame rate projection the transmitted data may be left uncompressed, for medium frame rate projection lossless compression may be used, and for low frame rate projection lossy compression may be used.
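The delay-interval logic above might be sketched as follows. The 10 ms and 50 ms thresholds are assumptions chosen to be consistent with the 5 ms / 20 ms / 100 ms examples, and the pairing of frame rate with compression mode follows the preceding paragraph.

```python
from dataclasses import dataclass

@dataclass
class ProjectionMode:
    frame_rate: str   # "high" / "medium" / "low"
    compression: str  # "none" / "lossless" / "lossy"

def mode_for_delay(delay_ms: float) -> ProjectionMode:
    """Map a terminal's network delay to its frame rate and compression mode."""
    if delay_ms < 10:                               # e.g. cloud terminal 1 at 5 ms
        return ProjectionMode("high", "none")       # uncompressed transmission
    if delay_ms < 50:                               # e.g. cloud terminal 2 at 20 ms
        return ProjectionMode("medium", "lossless")
    return ProjectionMode("low", "lossy")           # e.g. cloud terminal 3 at 100 ms
```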
For example, the method may further include:
the cloud conference server sets transmission priorities for the screen projection data of the other cloud terminals according to their network delays, and sends the screen projection data of the other cloud terminals according to those transmission priorities.
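One plausible reading of this priority rule is sketched below. The text does not fix whether lower delay should mean earlier transmission, so the ordering here, like the `send_projection_data` call, is an assumption for illustration.

```python
def send_by_priority(server, terminals):
    """Send screen projection data to terminals in priority order.

    Assumption: terminals with lower network delay get higher priority,
    so fast links are not held up behind slow ones.
    """
    for terminal in sorted(terminals, key=lambda t: t.delay_ms):
        server.send_projection_data(terminal)  # hypothetical transmission call
```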
Referring to fig. 3, fig. 3 provides a schematic structural diagram of a dynamic frame projection system in a cloud conference, where the system includes:
the acquiring unit 301 is configured to acquire parameter information of the presenter terminal and other cloud terminals;
the processing unit 302 is configured to predict the screen projection strategy of the current conference according to the parameter information, dynamically adjust the screen projection frame rate of the cloud conference according to the screen projection strategy, and execute the cloud conference.
By way of example,
the parameter information includes: operation information of the presenter terminal, the network delays of the other cloud terminals, and the current lecture content of the cloud conference;
the screen projection strategy comprises the following steps: high frame rate, medium frame rate, low frame rate or variable frame rate.
By way of example,
the processing unit is configured, if the parameter information includes the current lecture content of the cloud conference, to input the current lecture content into the type recognition model to determine a first type of that content, recognize the presenter to determine the presenter's first identity, extract, according to the first identity, the presenter's historical lecture time t for the first type, and determine the screen projection strategy of the current conference according to the interval in which the time t falls.
By way of example,
the processing unit is specifically configured, if the parameter information includes the network delays of the other cloud terminals, to determine that the screen projection strategy of the current conference is variable frame rate screen projection; specifically, to allocate to the terminal corresponding to each network delay the frame rate associated with the delay interval in which that network delay falls.
By way of example,
the processing unit is further configured to set transmission priorities for the screen projection data of the other cloud terminals according to their network delays, and to send the screen projection data of the other cloud terminals according to those transmission priorities.
Example 1
Referring to fig. 4, fig. 4 provides a flow chart of a dynamic frame screen projection method in a cloud conference. The method shown in fig. 4 may be performed under the framework of the cloud conference platform shown in fig. 1, specifically by a cloud terminal of that platform or by the cloud conference server (hereinafter referred to as the cloud). This embodiment takes the cloud conference server as an example; in practical applications the method may also be performed by a cloud terminal. The method shown in fig. 4 includes the following steps:
step S401, the cloud acquires the change condition of the cloud desktop image content of the current cloud conference;
step S402, the cloud end adapts a frame rate meeting the user requirement to transmit corresponding cloud desktop image data.
Specifically, in a screen projection scene where adjacent frames vary relatively little, such as a static file projection scene, a low transmission frame rate can be adopted to reduce the amount of data transmitted per unit time and thus reduce power consumption; in a video projection scene, a high transmission frame rate is adopted to improve fluency.
For example, in a PPT screen projection explanation scene, a single frame can be pushed for each page of PPT content when there is no interactive operation, while in a dynamic video explanation scene a high frame rate is required to ensure that the projected video does not stutter.
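A toy sketch of this scene-dependent choice: compare adjacent frames and fall back to a minimal push rate when the desktop is effectively static. The pixel-difference heuristic, the threshold, and the concrete rates are all assumptions, not the patent's detection method.

```python
import numpy as np

def transmission_rate(prev_frame: np.ndarray, cur_frame: np.ndarray,
                      static_threshold: float = 0.01) -> int:
    """Pick a push frame rate (fps) from the change between adjacent frames."""
    changed_fraction = np.mean(prev_frame != cur_frame)  # share of changed pixels
    if changed_fraction < static_threshold:
        return 1   # near-static content (e.g. a document page): minimal rate
    return 30      # dynamic video content: high rate to avoid stuttering
```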
The method may further include:
The cloud can dynamically calculate an adapted frame rate based on, for example, the detected interactive operations of the host on the local device or on the large projection device (in general, when a specific control operation is detected, the conference desktop content must be dynamically updated and pushed for display on the client devices), the type of shared content displayed in the full-screen conference desktop (such as a file or audio/video), and, optionally, the network speed of the conference room projection devices.
For example, the host advances the target PPT file to the fifth page, whose content is a visual flow chart with specific step labels. After the host displays this page, the cloud server predicts that the host will need 30 seconds to explain it; since the page's native annotations clearly require no additional interactive operation to indicate the position being explained, the minimum push frame rate is chosen, for example updating by pushing one frame per second.
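A sketch of how the predicted explanation time and the annotation requirement might be combined into a push rate is shown below; the 10-second threshold and both rates are assumptions.

```python
def push_rate_for_page(predicted_explain_s: float, needs_annotation: bool) -> float:
    """Choose a push frame rate (fps) for a newly displayed PPT page."""
    if not needs_annotation and predicted_explain_s >= 10:
        return 1.0   # minimum rate: push roughly one frame per second
    return 15.0      # otherwise keep a medium rate for interactive updates
```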
The cloud can thus reduce the transmission data volume according to the differences between application scenes, that is, through video transmission specialized for the cloud conference.
The cloud can also provide different transmission modes for different users: the video content that different participants in a cloud conference focus on may differ, so by classifying users, differently compressed pictures can be transmitted to different classes of users, further reducing the data transmission volume.
The cloud can also prioritize transmission, that is, allocate the transmission volume of each transmitted picture according to the delays of the different participants: a high-network-speed line can be allocated higher-definition pictures, and a low-network-speed line lower-definition pictures, keeping picture definition in step with the data volume each line can carry.
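The definition-allocation idea could look like the following sketch. The speed thresholds and resolution labels are assumptions; the text only requires that faster lines receive higher-definition pictures so that each line's data volume stays matched to its capacity.

```python
def definition_for_line(network_speed_mbps: float) -> str:
    """Allocate picture definition according to line speed."""
    if network_speed_mbps >= 20:
        return "1080p"  # high-speed line: higher-definition pictures
    if network_speed_mbps >= 5:
        return "720p"
    return "480p"       # low-speed line: lower-definition pictures
```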
For the PPT screen projection mode, different push modes are adopted for the pages of different PPTs.
It will be appreciated that the apparatus, in order to achieve the above-described functions, comprises corresponding hardware and/or software modules for performing the respective functions. The present application can be implemented in hardware or a combination of hardware and computer software, in conjunction with the example algorithm steps described in connection with the embodiments disclosed herein. Whether a function is implemented as hardware or computer software driven hardware depends upon the particular application and design constraints imposed on the solution. Those skilled in the art may implement the described functionality using different approaches for each particular application in conjunction with the embodiments, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
The present embodiment may divide the electronic device into functional modules according to the above method example; for example, each functional module may correspond to one function, or two or more functions may be integrated into one processing module. The integrated modules may be implemented in hardware. It should be noted that the division of modules in this embodiment is schematic and merely a division by logical function; other division manners may be used in actual implementation.
It should be noted that for all relevant details of the steps in the above method embodiment, reference may be made to the functional description of the corresponding functional module, which is not repeated here.
In case an integrated unit is employed, the user equipment may comprise a processing module and a storage module. The processing module may be configured to control and manage actions of the user equipment, for example, may be configured to support the electronic device to execute the steps executed by the acquiring unit, the communication unit, and the processing unit. The memory module may be used to support the electronic device to execute stored program code, data, etc.
Wherein the processing module may be a processor or a controller. Which may implement or perform the various exemplary logic blocks, modules and circuits described in connection with this disclosure. A processor may also be a combination that performs computing functions, e.g., including one or more microprocessors, digital signal processing (digital signal processing, DSP) and microprocessor combinations, and the like. The memory module may be a memory. The communication module can be a radio frequency circuit, a Bluetooth chip, a Wi-Fi chip and other equipment which interact with other electronic equipment.
It should be understood that the connection relationships between the modules illustrated in this embodiment of the present application are only illustrative and do not limit the structure of the user equipment. In other embodiments of the present application, the user equipment may also use interfacing manners different from those in the foregoing embodiments, or a combination of multiple interfacing manners.
Referring to fig. 5, fig. 5 shows an electronic device 50 provided by an embodiment of the present application. The electronic device 50 includes a processor 501, a memory 502, a communication interface 503, and a display screen 504, where the processor 501, the memory 502, and the communication interface 503 are connected to one another through a bus. In the electronic device:
memory 502 includes, but is not limited to, random access memory (random access memory, RAM), read-only memory (ROM), erasable programmable read-only memory (erasable programmable read only memory, EPROM), or portable read-only memory (compact disc read-only memory, CD-ROM), with memory 502 for associated computer programs and data. The communication interface 503 is used to receive and transmit data.
The processor 501 may be one or more central processing units (central processing unit, CPU), and in the case where the processor 501 is a CPU, the CPU may be a single-core CPU or a multi-core CPU.
The processor 501 may include one or more processing units, such as: the processing units may include an application processor (application processor, AP), a modem processor, a graphics processor (graphics processing unit, GPU), an image signal processor (image signal processor, ISP), a controller, a video codec, a digital signal processor (digital signal processor, DSP), a baseband processor, and/or a neural network processor (neural-network processing unit, NPU), etc. Wherein the different processing units may be separate components or may be integrated in one or more processors. In some embodiments, the user equipment may also include one or more processing units. The controller can generate operation control signals according to the instruction operation codes and the time sequence signals to finish the control of instruction fetching and instruction execution. In other embodiments, memory may also be provided in the processing unit for storing instructions and data. The memory in the processing unit may be a cache memory, for example. The memory may hold instructions or data that the processing unit has just used or recycled. If the processing unit needs to reuse the instruction or data, it can be called directly from the memory. In this way, repeated accesses are avoided, and the latency of the processing unit is reduced, thereby improving the efficiency of the user equipment in processing data or executing instructions.
In some embodiments, processor 501 may include one or more interfaces. The interfaces may include inter-integrated circuit (inter-integrated circuit, I2C) interfaces, inter-integrated circuit audio (inter-integrated circuit sound, I2S) interfaces, pulse code modulation (pulse code modulation, PCM) interfaces, universal asynchronous receiver transmitter (universal asynchronous receiver/transmitter, UART) interfaces, mobile industry processor interfaces (mobile industry processor interface, MIPI), general-purpose input/output (GPIO) interfaces, SIM card interfaces, and/or USB interfaces, among others. The USB interface is an interface conforming to the USB standard specification, and specifically may be a Mini USB interface, a Micro USB interface, a USB Type C interface, or the like. The USB interface can be used for connecting a charger to charge the user equipment and can also be used for transmitting data between the user equipment and the peripheral equipment. The USB interface can also be used for connecting with a headset, and playing audio through the headset.
If the electronic device 50 is a cloud conference server or a cloud device, such as a smart phone, a computer device, or a server, the processor 501 in the electronic device 50 is configured to read the computer program code stored in the memory 502, and perform the following operations:
acquiring parameter information of a speaker terminal and other cloud terminals; predicting a screen projection strategy of the current conference according to the parameter information; and dynamically adjusting the screen projection frame rate of the cloud conference according to the screen projection strategy, and executing the cloud conference.
For example, if the parameter information includes the current lecture content of the cloud conference, the current lecture content is input into a type recognition model to determine a first type of that content; the cloud conference server recognizes the presenter to determine the presenter's first identity, extracts, according to the first identity, the presenter's historical lecture time t for the first type, and determines the screen projection strategy of the current conference according to the interval in which the time t falls.
For example, if the parameter information includes network delay of other cloud terminals, the cloud conference server predicts a screen projection strategy of the current conference according to the parameter information, which specifically includes:
the screen projection strategy of the current conference is determined to be variable frame rate screen projection, specifically: allocating, to the terminal corresponding to each network delay, the frame rate corresponding to the delay interval in which that network delay falls.
By way of example, the cloud conference server sets transmission priorities for the screen projection data of the other cloud terminals according to their network delays, and sends the screen projection data of the other cloud terminals according to those transmission priorities.
For all relevant details of each scenario in the above method embodiment, reference may be made to the functional description of the corresponding functional module, which is not repeated here.
Embodiments of the present application also provide a computer readable storage medium having a computer program stored therein, which when run on a network device, implements the method flow shown in fig. 2.
Embodiments of the present application also provide a computer program product, which when run on a terminal, implements the method flow shown in fig. 2.
Embodiments of the present application also provide an electronic device comprising a processor, a memory, a communication interface, and one or more programs stored in the memory and configured to be executed by the processor, the programs comprising instructions for performing the steps in the method of the embodiment shown in fig. 2.
The foregoing description of the embodiments of the present application has been presented primarily from the perspective of the method. It will be appreciated that the electronic device, in order to achieve the above functions, includes corresponding hardware structures and/or software modules for performing each function. Those skilled in the art will readily appreciate that the various illustrative units and algorithm steps described in connection with the embodiments disclosed herein may be implemented as hardware or as combinations of hardware and computer software. Whether a function is implemented as hardware or as computer-software-driven hardware depends on the particular application and the design constraints of the solution. Skilled artisans may implement the described functionality in different ways for each particular application, but such implementations should not be considered beyond the scope of the present application.
The embodiment of the application can divide the functional units of the electronic device according to the method example, for example, each functional unit can be divided corresponding to each function, and two or more functions can be integrated in one processing unit. The integrated units may be implemented in hardware or in software functional units. It should be noted that, in the embodiment of the present application, the division of the units is schematic, which is merely a logic function division, and other division manners may be implemented in actual practice.
It should be noted that, for simplicity of description, the foregoing method embodiments are described as a series of acts, but those skilled in the art should understand that the present application is not limited by the order of acts described, as some steps may be performed in other orders or concurrently. Further, those skilled in the art should also appreciate that the embodiments described in the specification are preferred embodiments, and that the acts and modules involved are not necessarily essential to the application.
In the foregoing embodiments, the descriptions of the embodiments are emphasized, and for parts of one embodiment that are not described in detail, reference may be made to related descriptions of other embodiments.
In the several embodiments provided by the present application, it should be understood that the disclosed apparatus may be implemented in other manners. For example, the above-described apparatus embodiments are merely illustrative, such as the above-described division of units, merely a division of logic functions, and there may be additional manners of dividing in actual implementation, such as multiple units or components may be combined or integrated into another system, or some features may be omitted, or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be an indirect coupling or communication connection via some interfaces, devices or units, or may be in electrical or other forms.
The units described above as separate components may or may not be physically separate, and components shown as units may or may not be physical units, may be located in one place, or may be distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in the embodiments of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The integrated units described above, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer readable memory. Based on such understanding, the technical solution of the present application may be embodied in essence or a part contributing to the prior art or all or part of the technical solution in the form of a software product stored in a memory, comprising several instructions for causing a computer device (which may be a personal computer, a server or a network device, etc.) to perform all or part of the steps of the above-mentioned method of the various embodiments of the present application. And the aforementioned memory includes: a U-disk, a Read-Only Memory (ROM), a random access Memory (RAM, random Access Memory), a removable hard disk, a magnetic disk, or an optical disk, or other various media capable of storing program codes.
Those of ordinary skill in the art will appreciate that all or a portion of the steps in the various methods of the above embodiments may be implemented by a program that instructs associated hardware, and the program may be stored in a computer readable memory, which may include: flash disk, read-Only Memory (ROM), random access Memory (Random Access Memory, RAM), magnetic disk or optical disk.
Claims (8)
1. A dynamic frame screen projection method in a cloud conference, characterized by comprising the following steps:
the cloud conference server acquires parameter information of a presenter terminal and other cloud terminals;
the cloud conference server predicts a screen projection strategy of the current conference according to the parameter information;
the cloud conference server dynamically adjusts the screen projection frame rate of the cloud conference according to the screen projection strategy, and executes the cloud conference;
if the parameter information includes the current lecture content of the cloud conference, the cloud conference server predicts a screen projection strategy of the current conference according to the parameter information, and the screen projection strategy specifically includes:
the cloud conference server inputs the current lecture content into a type recognition model to determine a first type of the current lecture content, recognizes the presenter to determine a first identity of the presenter, extracts, according to the first identity, a historical lecture time t of the presenter for the first type, and determines the screen projection strategy of the current conference according to the interval in which the time t falls.
2. The method according to claim 1, wherein,
the parameter information includes: operation information of the presenter terminal, network delays of the other cloud terminals, and the current lecture content of the cloud conference;
the screen projection strategy comprises the following steps: high frame rate, medium frame rate, low frame rate or variable frame rate.
3. The method of claim 2, wherein if the parameter information includes network delay of other cloud terminals, the cloud conference server predicts a screen projection strategy of the current conference according to the parameter information, which specifically includes:
the screen projection strategy of the current conference is determined to be screen projection with a changing frame rate, and the method specifically comprises the following steps: and distributing a frame rate corresponding to the delay interval to the terminal corresponding to the network delay according to the delay interval in which the network delay is positioned.
4. A method according to claim 3, characterized in that the method further comprises:
the cloud conference server sets transmission priorities for the screen projection data corresponding to the other cloud terminals according to the network delays of the other cloud terminals, and sends the screen projection data of the other cloud terminals according to the transmission priorities.
5. A dynamic frame screen projection system in a cloud conference, characterized in that the system comprises:
the acquisition unit is used for acquiring parameter information of the presenter terminal and other cloud terminals;
the processing unit is used for predicting the screen projection strategy of the current conference according to the parameter information; dynamically adjusting the screen projection frame rate of the cloud conference according to the screen projection strategy, and executing the cloud conference;
the processing unit is configured, if the parameter information includes the current lecture content of the cloud conference, to input the current lecture content into the type recognition model to determine a first type of that content, recognize the presenter to determine a first identity of the presenter, extract, according to the first identity, a historical lecture time t of the presenter for the first type, and determine the screen projection strategy of the current conference according to the interval in which the time t falls.
6. The system according to claim 5, wherein,
the parameter information includes: operation information of the presenter terminal, network delays of the other cloud terminals, and the current lecture content of the cloud conference;
the screen projection strategy comprises the following steps: high frame rate, medium frame rate, low frame rate or variable frame rate.
7. An electronic device comprising a processor, a memory, a communication interface, and one or more programs stored in the memory and configured to be executed by the processor, the programs comprising instructions for performing the steps of the method of any of claims 1-4.
8. A computer readable storage medium having stored therein a computer program which, when run on a user equipment, performs the method of any of claims 1-4.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210958059.6A CN115361569B (en) | 2022-08-10 | 2022-08-10 | Dynamic frame screen projection method in cloud conference and related products |
Publications (2)
Publication Number | Publication Date |
---|---|
CN115361569A (en) | 2022-11-18
CN115361569B (en) | 2023-10-20
Family ID: 84001279
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210958059.6A Active CN115361569B (en) | 2022-08-10 | 2022-08-10 | Dynamic frame screen projection method in cloud conference and related products |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN115361569B (en) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN118842933A (en) * | 2023-04-23 | 2024-10-25 | 北京字跳网络技术有限公司 | Image processing method and device and electronic equipment |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9325781B2 (en) * | 2005-01-31 | 2016-04-26 | Invention Science Fund I, Llc | Audio sharing |
- 2022-08-10: CN application CN202210958059.6A filed; granted as CN115361569B (status: active)
Patent Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7404001B2 (en) * | 2002-03-27 | 2008-07-22 | Ericsson Ab | Videophone and method for a video call |
CN101984661A (en) * | 2010-11-23 | 2011-03-09 | 广东威创视讯科技股份有限公司 | Information transmission method of video conference system and video conference system |
CN104980411A (en) * | 2014-04-14 | 2015-10-14 | 腾讯科技(深圳)有限公司 | Control method, server and terminal for video call and video call system |
CN108108139A (en) * | 2017-12-19 | 2018-06-01 | 广州敬信药草园信息科技有限公司 | A kind of throwing screen cut-in method of cloud meeting |
CN112752058A (en) * | 2019-10-31 | 2021-05-04 | 华为技术有限公司 | Method and device for adjusting attribute of video stream |
CN114697731A (en) * | 2020-12-31 | 2022-07-01 | 华为技术有限公司 | Screen projection method, electronic device and storage medium |
CN114840477A (en) * | 2022-06-30 | 2022-08-02 | 深圳乐播科技有限公司 | File sensitivity index determining method based on cloud conference and related product |
CN114827134A (en) * | 2022-07-01 | 2022-07-29 | 深圳乐播科技有限公司 | Differentiated pushing method, related device and display method for cloud conference desktop |
Non-Patent Citations (1)
Title |
---|
Construction practice of network-based remote synchronous interactive classrooms; Lin Chao et al.; Straits Science (Issue 02); full text *
Also Published As
Publication number | Publication date |
---|---|
CN115361569A (en) | 2022-11-18 |
Legal Events
Date | Code | Title | Description
---|---|---|---
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| GR01 | Patent grant | |