
CN111427528B - Display method and device and electronic equipment


Info

Publication number
CN111427528B
CN111427528B (application number CN202010205296.6A)
Authority
CN
China
Prior art keywords
focus position
position information
image
time point
shared screen
Prior art date
Legal status
Active
Application number
CN202010205296.6A
Other languages
Chinese (zh)
Other versions
CN111427528A
Inventor
庞浩然
徐倩怡
Current Assignee
Beijing ByteDance Network Technology Co Ltd
Original Assignee
Beijing ByteDance Network Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing ByteDance Network Technology Co Ltd
Priority to CN202010205296.6A
Publication of CN111427528A
Application granted
Publication of CN111427528B
Legal status: Active
Anticipated expiration


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/14 Digital output to display device; Cooperation and interconnection of the display device with other functional units
    • G06F 3/1454 Digital output to display device; Cooperation and interconnection of the display device with other functional units involving copying of the display data of a local workstation or window to a remote workstation or window so that an actual copy of the data is displayed simultaneously on two or more displays, e.g. teledisplay
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/44 Arrangements for executing specific programs
    • G06F 9/451 Execution arrangements for user interfaces
    • G06F 9/452 Remote windowing, e.g. X-Window System, desktop virtualisation

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Software Systems (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

Embodiments of the present invention disclose a display method, a display apparatus, and an electronic device. One embodiment of the method includes: determining a focus position information set, where focus position information indicates the position and operation time of a predefined operation on a shared screen; determining a target time point sequence associated with the current moment; and, for each target time point in the sequence in order, performing the following display steps: determining, based on the target time point and the operation time in the focus position information, an image display style for the area of the shared screen that contains the position indicated by the focus position information; and, at the target time point, adding an image in the determined display style to the shared screen for display. A new display mode can thereby be provided.

Description

Display method and device and electronic equipment
Technical Field
The present disclosure relates to the field of internet technology, and in particular to a display method, a display apparatus, and an electronic device.
Background
With the development of computer and internet technology, documents, videos, and other digital content can be shared over a network. Screen sharing is a common sharing method: a device (the "sender") reproduces its screen interface on the screen of another device (the "receiver") over a network within an acceptable delay, so that the receiver has an interface synchronized with the sender and can view the shared content.
Screen sharing may be used for many scenarios, such as remote desktop (e.g., collaborative office, remote slide show), video conferencing, cloud-based applications (e.g., cloud gaming), and so forth.
It will be appreciated that, through screen sharing, the screen interface presented at both the transmitting end and the receiving end may be referred to as the shared screen.
Disclosure of Invention
This Summary is provided to introduce concepts in a simplified form that are further described in the Detailed Description below. This Summary is not intended to identify key or essential features of the claimed subject matter, nor is it intended to limit the scope of the claimed subject matter.
The embodiment of the disclosure provides a display method, a display device and electronic equipment.
In a first aspect, an embodiment of the present disclosure provides a display method, including: determining a focus position information set, where the focus position information indicates the position and operation time of a predefined operation on a shared screen; determining a target time point sequence associated with the current moment; and, for each target time point in the sequence in order, performing the following display steps: determining, based on the target time point and the operation time in the focus position information, an image display style for the area of the shared screen that contains the position indicated by the focus position information; and, at the target time point, adding an image in the determined display style to the shared screen for display.
In a second aspect, embodiments of the present disclosure provide a display apparatus, including: a set determining unit that determines a focus position information set, where the focus position information indicates the position and operation time of a predefined operation on a shared screen; and a first display unit that determines a target time point sequence associated with the current moment and, for each target time point in the sequence in order, performs the following display steps: determining, based on the target time point and the operation time in the focus position information, an image display style for the area of the shared screen that contains the position indicated by the focus position information; and, at the target time point, adding an image in the determined display style to the shared screen for display.
In a third aspect, an embodiment of the present disclosure provides an electronic device, including: one or more processors; and a storage device for storing one or more programs which, when executed by the one or more processors, cause the one or more processors to implement the display method as described in the first aspect.
In a fourth aspect, embodiments of the present disclosure provide a computer readable medium having stored thereon a computer program which, when executed by a processor, implements the steps of the display method as described in the first aspect.
In the display method, display apparatus, and electronic device provided by embodiments of the present disclosure, a focus position information set is acquired, a target time point sequence associated with the current moment is determined, and a display step is performed at each target time point. The display step may include: for each focus position, determining an image display style based on the operation time and the target time point, and displaying an image in the determined style. Thus, as the display steps are performed over time, the shared screen displays different images at the same focus position at different moments; images are displayed dynamically at the respective focus positions, and, as the operation proceeds, a gradually changing image indicating the predefined operation can be displayed on the shared screen.
Drawings
The above and other features, advantages, and aspects of embodiments of the present disclosure will become more apparent by reference to the following detailed description when taken in conjunction with the accompanying drawings. The same or similar reference numbers will be used throughout the drawings to refer to the same or like elements. It should be understood that the figures are schematic and that elements and components are not necessarily drawn to scale.
FIG. 1 is a flow chart of one embodiment of a display method according to the present disclosure;
FIG. 2 is a schematic diagram of one implementation of a display step according to the present disclosure;
fig. 3A, 3B, and 3C are schematic views of one application scenario of the display method according to the present disclosure;
fig. 4A and 4B are schematic diagrams of another application scenario of the display method according to the present disclosure;
FIG. 5 is a schematic diagram of a structure of one embodiment of a display device according to the present disclosure;
FIG. 6 is an exemplary system architecture to which the display method of one embodiment of the present disclosure may be applied;
fig. 7 is a schematic view of a basic structure of an electronic device provided according to an embodiment of the present disclosure.
Detailed Description
Embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While certain embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be embodied in various forms and should not be construed as limited to the embodiments set forth herein; rather, these embodiments are provided for a more thorough and complete understanding of the present disclosure. It should be understood that the drawings and embodiments of the present disclosure are for illustration only and are not intended to limit the scope of the present disclosure.
It should be understood that the various steps recited in the method embodiments of the present disclosure may be performed in a different order and/or performed in parallel. Furthermore, method embodiments may include additional steps and/or omit performing the illustrated steps. The scope of the present disclosure is not limited in this respect.
The term "including" and variations thereof as used herein are intended to be open-ended, i.e., including, but not limited to. The term "based on" is based at least in part on. The term "one embodiment" means "at least one embodiment"; the term "another embodiment" means "at least one additional embodiment"; the term "some embodiments" means "at least some embodiments. Related definitions of other terms will be given in the description below.
It should be noted that the terms "first," "second," and the like in this disclosure are merely used to distinguish between different devices, modules, or units and are not used to define an order or interdependence of functions performed by the devices, modules, or units.
It should be noted that references to "a", "an", and "a plurality" in this disclosure are illustrative rather than limiting; those of ordinary skill in the art will appreciate that they should be understood as "one or more" unless the context clearly indicates otherwise.
The names of messages or information displayed between the various devices in the embodiments of the present disclosure are for illustrative purposes only and are not intended to limit the scope of such messages or information.
Referring to fig. 1, a flow of one embodiment of a display method according to the present disclosure is shown. The display method as shown in fig. 1 comprises the following steps:
step 101, a focus position information set is acquired.
In the present embodiment, the execution subject (e.g., terminal device) of the display method may acquire the focus position information set.
Here, each piece of focus position information in the focus position information set may indicate the position and operation time of a predefined operation on the shared screen. In other words, a user may perform a predefined operation on the shared screen, and this operation may be continuous. During its duration, the predefined operation may pass through a plurality of operation positions on the shared screen. The time at which the user operates at each operation position may be the operation time corresponding to that position. In this disclosure, a position of the predefined operation may also be referred to as a focus position.
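As an illustration, the focus position information described above might be modeled as follows. This is a minimal sketch in TypeScript; the type and field names are assumptions made for the examples in this description, not part of the disclosure:

```typescript
// A minimal sketch of focus position information: the position of a
// predefined operation on the shared screen plus its operation time.
interface FocusPositionInfo {
  x: number;             // horizontal position on the shared screen, in pixels
  y: number;             // vertical position on the shared screen, in pixels
  operationTime: number; // operation time, e.g. a millisecond timestamp
}

// The focus position information set gathered for one predefined operation.
type FocusPositionInfoSet = FocusPositionInfo[];
```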
In some application scenarios, the screen interfaces presented at the transmitting end and the receiving end in a screen sharing manner may be referred to as a shared screen.
It can be understood that the execution body may be a transmitting end or a receiving end in the screen sharing scene. If the execution subject is a transmitting end, the execution subject locally acquires a focus position information set; if the executing body is a receiving end, the executing body may directly or indirectly receive the focus position information set from other electronic devices (e.g., a server or a transmitting end) to obtain the focus position information set.
The manner and function of the predefined operation may be set according to the actual situation and are not limited here. The predefined operation may be a prompt operation for drawing viewing users' attention to particular content on the shared screen.
As an example, a portion of the "Tengwang Pavilion Preface" (《滕王阁序》) may be displayed on the shared screen. The predefined operation may be directed at the line "落霞与孤鹜齐飞" ("the rosy clouds of sunset and the lone duck fly together") to draw the attention of the users viewing the shared screen to that line.
The specific implementation of the predefined operations is not limited herein.
As an example, the predefined operation may be a user operating a mouse to select a portion of the content in the shared screen.
It will be appreciated that, during the predefined operation, the operation may involve a plurality of operation positions (or sampling positions of the operation); some or all of these positions may be taken as focus positions, and each focus position may correspond to a focus time (i.e., an operation time).
Step 102, determining a target time point sequence associated with the current time instant, and performing the displaying step sequentially for each target time point in the target time point sequence.
In this embodiment, the execution subject may determine a target time point sequence associated with the current time, and execute the display step for each target time point in the target time point sequence in order.
In some application scenarios, the current moment may be the current point in time, which keeps advancing as time passes. It will be appreciated that, as the predefined operation continues, the determined focus position information set may change as the current moment changes.
Here, the target time point sequence associated with the current time point may be a sequence of several time points within a future time period from the current time point, and the several time points are arranged in the sequence in order of time.
Here, the number of target time points and the way they are determined are not limited. Optionally, target time points may be determined within a preset duration range starting from the current moment; alternatively, a preset number of target time points may be determined.
As an example, in response to acquiring the focus position information set, the execution body may determine one target time point every 5 milliseconds starting from the current moment, thereby determining at least one target time point.
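A sketch of how such a target time point sequence might be produced, assuming a fixed 5 ms step and a fixed count (both parameters are illustrative, not prescribed by the disclosure):

```typescript
// Build a sequence of target time points starting from the current moment,
// one every `stepMs` milliseconds. Values are millisecond timestamps and
// are already ordered by time, as the display steps require.
function targetTimePoints(nowMs: number, stepMs = 5, count = 200): number[] {
  const points: number[] = [];
  for (let i = 1; i <= count; i++) {
    points.push(nowMs + i * stepMs);
  }
  return points;
}
```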
In the present embodiment, the execution body may execute the display step for each target time point in the target time point sequence in order. Here, the order may refer to an order of the times indicated by the target time points.
In some application scenarios, performing the display step at each target time point may change the images displayed at the focus positions of the shared screen.
Fig. 2 shows an implementation of the display step in this embodiment. Specifically, the displaying step of the present embodiment may include step 201 and step 202. The method comprises the following steps:
step 201, determining, based on the target time point and the operation time in the focus position information, an image display style for the area of the shared screen that contains the position indicated by the focus position information.
Here, the focus position information may indicate a position, and an image display style of an area including the position may be determined based on the target time point and the operation timing in the focus position information. Thus, for each focus position information, the image display style corresponding to the focus position information can be determined.
Here, determining the image display style based on the target time point and the operation time point may be achieved in various ways. In some application scenarios, the interval between the operation time corresponding to the focus position information and the target time point may be determined as a time interval corresponding to the focus position information, and then, the image display style corresponding to each focus position information may be determined according to the time interval corresponding to each focus position information.
Here, the image display style may indicate a display mode of an image.
Here, the specific content of the image display style is not limited herein. As an example, the image display style may include, but is not limited to, at least one of: image size (number of pixels occupied), color of the image, image content.
step 202, at the target time point, adding the image in the determined image display style to the shared screen for image display.
In this embodiment, the execution subject may add the image of the determined image display style to the shared screen for image display at the target time point.
Here, the presentation time of a single frame may be taken as the target time point. For a single frame presented on the shared screen, the interval between the frame's presentation time and the operation time corresponding to a focus position may be taken as the time interval corresponding to that focus position information. It will be appreciated that, as time passes, the image frames presented on the shared screen are refreshed, successively reaching the target time points in the sequence. During this refreshing, the time interval corresponding to the same focus position information differs from frame to frame, and different time intervals correspond to different image display styles, so the image displayed at the same focus position also changes from frame to frame.
In other words, the time interval corresponding to the same focus position information keeps growing as time passes. Because this interval changes, the images displayed at that focus position differ across display steps; thus, over time, the shared screen displays different images at the same focus position.
If the predefined operation is a continuous process, the focus position information set may indicate a continuous predefined operation. As time passes, the image displayed at a focus position corresponding to a later operation may be the image that was displayed, at an earlier moment, at a focus position corresponding to an earlier operation. Thus, for a sustained predefined operation, a gradually changing image indicating the operation can be displayed on the shared screen as the operation proceeds.
It should be noted that, in the embodiments provided by the present disclosure, the focus position information set is acquired, the target time point sequence associated with the current moment is determined, and the display step is performed at each target time point. The display step may include: for each focus position, determining an image display style based on the operation time and the target time point, and displaying an image in the determined style. Thus, as display steps are performed over time, the shared screen displays different images at the same focus position at different moments; images are displayed dynamically at the respective focus positions, and, as the operation proceeds, a gradually changing image indicating the predefined operation can be displayed on the shared screen.
In some application scenarios, step 201 may include: determining the interval between the operation time in the focus position information and the target time point as the time interval corresponding to that focus position information; and determining, according to the time interval, the image display style for the area of the shared screen that contains the position indicated by the focus position information.
In some application scenarios, a user may click a laser pointer control displayed on the shared screen and then use the virtual laser pointer to draw a line under part of the shared screen's content; this may be the predefined operation. Providing a laser pointer control in a shared-screen scenario can simulate the way a presenter uses a laser pointer to indicate positions in a real meeting.
In some application scenarios, after clicking the laser pointer control, the user can control the simulated laser pointer on the shared screen with the mouse: pressing a mouse button may display a simulated laser point on the screen, holding the button keeps the point displayed, and the point can be moved while the button is held and the mouse moves.
The process of pressing the mouse button and moving can be understood as the predefined operation described above. The execution body can acquire the multiple focus positions of the mouse on the screen during the movement, together with the corresponding operation times. Then, multiple target time points are determined as time passes, and the display step is performed at each target time point. Thus, during the mouse movement, the simulated laser point does not behave like a real one (a real laser point disappears immediately once the pointer's focus position moves); instead, it can remain for a period of time and change as time passes.
It should be noted that, in prior-art simulated laser pointers, the simulated laser point behaves like a real one: the point at the previous moment disappears as soon as the pointer's focus position moves. In a poor network environment, data transmission is not timely, so the shared-with party cannot receive the focus position information in time to display the simulated laser point, and its display is interrupted; moreover, transmission delay is usually followed by a burst of data packets received together at the next moment, which can make the simulated laser point jump erratically among multiple focus positions. In contrast, the simulated laser pointer implemented by the method provided in this disclosure can reduce the amount of data transmitted and the real-time requirements on transmission; even in a poor network environment (slow transmission, stalls, and the like), it can keep the simulated pointer's image displayed without interruption and improve display smoothness.
In some embodiments, the smaller the time interval corresponding to the focus position information, the more pixels the image displayed in the area containing the position indicated by that focus position information occupies. In other words, the smaller the time interval corresponding to the focus position information, the more pixels the image displayed near the focus position occupies.
Referring to figs. 3A, 3B, and 3C, an application scenario of an embodiment of the present disclosure is shown. Figs. 3A, 3B, and 3C may be different states, over time, of the image displayed for a prompt operation on the line "落霞与孤鹜齐飞" on the shared screen 301. As an example, fig. 3A may be the display state at 10:00:00, where a first display image 302 of the prompt operation is displayed under the line. Fig. 3B may be the display state at 10:00:05, where a second display image 303 of the prompt operation is displayed under the line. Fig. 3C may be the display state at 10:00:10, where a third display image 304 of the prompt operation is displayed under the line. From the first display image 302, the second display image 303, and the third display image 304, it can be seen that, as time passes, the displayed image of the prompt operation becomes shorter, and the part of the image corresponding to the earlier portion of the operation fades away.
As can be seen from the above example, viewed dynamically, the display image of the simulated laser pointer visually resembles a "comet". Accordingly, the simulated laser pointer may be referred to as a "comet pen".
In some application scenarios, the comet pen's trail may be continuous; in other scenarios, it may be discontinuous.
In some application scenarios, the specific implementation manner of the "comet pen" may be set according to actual situations, which is not limited herein.
In some application scenarios, the "comet pen style" may be implemented in the following manner: curve fitting can be carried out on the focus position indicated by the focus position information, and the fitted curve is used as an operation track; for a sampling point (which may include a focus) on the operation track, determining a first direction rendering pixel number along a perpendicular direction of a tangent line of the operation track, and a second direction rendering pixel number along the tangent line direction; rendering the pixel number in the first direction and the pixel number in the second direction as an image display style of the region including the position indicated by the focus position information; and at the target time point, adding the image of the determined image display pattern to the shared screen for image display, wherein the number of the rendering pixels in the second direction corresponding to the adjacent sampling points is overlapped in the rendering area on the shared screen.
In some embodiments, the "comet pen style" may be implemented in the following manner: the radius corresponding to the focus position information can be determined according to the time interval; determining a circle having a circle center at the position indicated by the focus position information and a radius corresponding to the circle center as an image display pattern of a region including the position indicated by the focus position information; and at the target time point, adding the image of the determined image display style to the shared screen for image display, wherein circles corresponding to adjacent focus position information are overlapped in a rendering area on the shared screen. In other words, on the shared screen, image rendering is performed with the focus position as the center and the corresponding radius as the radius, and then the image is added to the shared screen for image display.
In both of the implementations described above, the image display areas corresponding to adjacent focus position information overlap on the shared screen, where an image display area is the area that contains the position indicated by the focus position information and in which the image is displayed. This ensures the continuity of the "comet pen" image.
In some embodiments, determining, according to the time interval, the image display style of the area of the shared screen containing the position indicated by the focus position information may include: determining focus position information whose time interval exceeds a preset first duration threshold to be outdated focus position information; and determining the image display style corresponding to outdated focus position information as displaying no image.
Here, the specific value of the preset first time length threshold may be set according to actual situations, which is not limited herein.
It should be noted that, as time passes, the image corresponding to a predefined operation may gradually disappear. Therefore, while a user is explaining content on the shared screen, the explained content can be promptly brought to each shared party's attention, and at the same time the images displayed for earlier predefined operations are guaranteed not to interfere with the shared parties.
In some embodiments, step 101 may include: acquiring the focus position information set, and deleting from it any focus position information whose time interval exceeds the preset first duration threshold.
Deleting such focus position information from the set before the display step reduces the number of focus position information entries and thus the amount of computation performed on them.
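A sketch of this pruning step, with the threshold value assumed for illustration:

```typescript
// Drop focus position information whose time interval already exceeds the
// preset first duration threshold, so later display steps compute less.
function pruneOutdated(
  infos: FocusPositionInfo[],
  nowMs: number,
  thresholdMs = 1500, // assumed value of the preset first duration threshold
): FocusPositionInfo[] {
  return infos.filter((info) => nowMs - info.operationTime <= thresholdMs);
}
```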
In some embodiments, determining, according to the time interval, the image display style of the area of the shared screen containing the position indicated by the focus position information may include: determining the image display style corresponding to each focus position according to preset image gradation correspondence information and the time interval corresponding to each piece of focus position information.
Here, the image gradation correspondence information characterizes the correspondence between image display styles and time intervals.
Here, the image gradation correspondence information may be implemented in various ways, which are not limited here. As an example, it may be, but is not limited to, a functional relationship, a correspondence table, or the like.
Here, by setting the image gradation correspondence information, an image display style can be configured for each time interval. Thus, once the time interval corresponding to focus position information is determined, the corresponding image display style can be determined quickly.
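A table-based sketch of such image gradation correspondence information; the bucket boundaries and style values are assumptions:

```typescript
// Illustrative image display style: size in pixels and opacity.
interface ImageDisplayStyle {
  radiusPx: number;
  opacity: number; // 0 = invisible, 1 = fully opaque
}

// Image gradation correspondence information as a table: each row maps a
// time-interval upper bound (ms) to a display style. All values illustrative.
const GRADATION_TABLE: Array<[number, ImageDisplayStyle]> = [
  [250,  { radiusPx: 8, opacity: 1.0 }],
  [500,  { radiusPx: 6, opacity: 0.8 }],
  [1000, { radiusPx: 4, opacity: 0.5 }],
  [1500, { radiusPx: 2, opacity: 0.2 }],
];

// Look up the style for a time interval; beyond the last bound the focus
// position information is outdated and no image is displayed.
function styleFor(intervalMs: number): ImageDisplayStyle | null {
  for (const [bound, style] of GRADATION_TABLE) {
    if (intervalMs <= bound) return style;
  }
  return null;
}
```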
In some embodiments, the users performing the predefined operation on the shared screen may include at least one user who has shared-screen permission.
Here, one or more users may perform predefined operations on the shared screen.
Here, shared-screen permission may indicate that the user is allowed to edit or doodle on the shared screen.
Here, the focus position information set generated from a user's predefined operation corresponds to that user's identifier. In other words, the focus position information set generated by a user's predefined operation is bound to the user's identifier. Further, when images are displayed based on the focus position information sets, the predefined operations of different users may be displayed in different styles.
In some embodiments, the above method further includes: for a target time point, executing a sharing flow, which may include: sending data pairs, each comprising a focus position and the corresponding image display style, to a second terminal. Here, the second terminal may display images at the focus positions of the shared screen according to the acquired data pairs.
Here, the sharing flow may be executed for every target time point in the target time point sequence, or only for some of the target time points.
Here, at each target time point, for each focus position in the display step, the execution body determines the corresponding image display style, so the focus position and its image display style can be packaged as a data pair. The sharing flow may include: for each target time point, determining a plurality of data pairs from the focus position information set and sending them to the second terminal. The second terminal may receive the data pairs transmitted by multiple executions of the sharing flow, and may display, at the focus positions of the shared screen, the same images as the execution body.
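A sketch of packaging and sending the data pairs for one target time point. The transport and the message shape are assumptions (a real system might use WebSocket, WebRTC data channels, or another channel); `styleFor` is the assumed gradation lookup sketched above:

```typescript
// One data pair: a focus position plus the display style computed for it.
interface DataPair {
  x: number;
  y: number;
  style: ImageDisplayStyle;
}

// For one target time point, compute a data pair per focus position and
// send the batch to the second terminal over an assumed WebSocket.
function shareFrame(
  socket: WebSocket,
  infos: FocusPositionInfo[],
  targetTimeMs: number,
): void {
  const pairs: DataPair[] = [];
  for (const info of infos) {
    const style = styleFor(targetTimeMs - info.operationTime);
    if (style) pairs.push({ x: info.x, y: info.y, style });
  }
  socket.send(JSON.stringify({ targetTimeMs, pairs }));
}
```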
It should be noted that having the execution body determine the image display styles and share them reduces the computation load on the second terminal and improves the consistency of the shared screen across different terminals. By contrast, if the execution body transmitted only the focus position information and the corresponding focus times, the second terminal would have to compute the image display style for each focus position itself; this would increase the second terminal's computation load, and, for various reasons (delay, stalls, computation differences, and the like), the images displayed on the second terminal's shared screen might differ from those on the execution body's.
In some embodiments, the method further includes: acquiring marker images drawn on the shared screen and the corresponding user identifiers; and, in response to acquiring at least two user identifiers, allocating a preset display mode to each user identifier and displaying the marker image corresponding to each user identifier in the allocated display mode.
As an example, the marker images of different users may be displayed in different colors. Displaying different users' marker images in different display modes makes the users' operations easier to tell apart at a glance.
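A sketch of allocating a preset display mode (here, a color) per user identifier; the palette is illustrative:

```typescript
// Assign each user identifier a color from a preset palette, reusing the
// same color for the same user so marker images stay visually distinct.
const PALETTE = ["#e6194b", "#3cb44b", "#4363d8", "#f58231"]; // illustrative
const userColors = new Map<string, string>();

function colorForUser(userId: string): string {
  let color = userColors.get(userId);
  if (color === undefined) {
    color = PALETTE[userColors.size % PALETTE.length];
    userColors.set(userId, color);
  }
  return color;
}
```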
In some embodiments, the above method may further include: displaying, on the shared screen, the marker image in association with drawing-user prompt information. Here, the drawing-user prompt information indicates that the marker image was drawn by the user indicated by the user identifier.
Here, the user drawing on the shared screen may be a user logged in on the execution body, or a user logged in on a terminal other than the execution body. In other words, the drawing operation may be detected by the execution body, which then acquires the drawn marker image and the identifier of the user performing the drawing operation; or another terminal may acquire the drawn marker image and the user identifier, and the execution body may then obtain them from that terminal or from a server. The specific style of the drawing-user prompt information may be set according to the actual situation and is not limited here. In some application scenarios, the drawing-user prompt information includes at least one of: the user identifier, prompt text, and the preset display style for the user identifier.
Here, by displaying the acquired marker image in association with the drawing-user prompt information, the execution body lets a user learn, upon seeing a marker image, who drew it; thus, in a scenario where multiple people share a screen, explicit drawer information can be provided for each marker image.
In some embodiments, the above method further includes: determining a target stop time point in response to determining that drawing is completed; and stopping the display of the drawing-user prompt information in response to determining that the current time point is the target stop time point.
Here, the interval between the above-described target stop time point and the time point when drawing is completed is a preset second duration.
Here, if the execution subject detects the drawing operation as described above, the drawing completion may be directly determined by the execution subject as described above. If the electronic device other than the execution subject detects a drawing operation, a drawing completion instruction signal may be sent to the execution subject by the electronic device, and then the execution subject may determine that drawing is completed in response to receiving the drawing completion instruction signal.
Here, how to determine that drawing is completed by detecting a drawing operation may be set according to actual conditions, and is not limited herein. As an example, the drawing operation may be performed by the user holding the mouse, and when it is detected that the user stops holding the mouse, it may be determined that the drawing is completed.
Here, the specific value of the preset second duration may be set according to the actual situation, which is not limited herein.
It should be noted that, if the current time point is the target stop time point, the display of the drawing-user prompt information may be stopped. In this way, the prompt information is displayed during drawing and for a period of time afterwards, the period in which viewers of the shared screen are paying attention to the new marker image; once that period has elapsed, the prompt is no longer displayed. This ensures that viewing users can learn the drawer information, while stopping the display afterwards reduces the amount of information on the screen and thus the interference on the shared screen.
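A sketch of this timing logic; the value of the preset second duration is an assumption:

```typescript
const SECOND_DURATION_MS = 3000; // assumed value of the preset second duration

// When drawing completes, compute the target stop time point.
function targetStopTimePoint(completedAtMs: number): number {
  return completedAtMs + SECOND_DURATION_MS;
}

// Stop displaying the drawing-user prompt once the current time point
// reaches the target stop time point.
function maybeHidePrompt(nowMs: number, stopAtMs: number, hide: () => void): void {
  if (nowMs >= stopAtMs) hide();
}
```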
As an example, referring to figs. 4A and 4B, figs. 4A and 4B illustrate one implementation of the drawing-user prompt information. In figs. 4A and 4B, the shared screen 401 may show part of the "Tengwang Pavilion Preface", and the user "Wang" may mark the line "云销雨霁，彩彻区明" ("the clouds disperse, the rain clears, and the light shines bright and clear"). In fig. 4A, the marker image 402 is displayed below that line, and the drawing-user prompt information 403 is displayed in association with the marker image 402 to indicate that the marker image 402 was drawn by "Wang". Fig. 4B is a schematic diagram of the state in which the current time point is the target stop time point and the display of the drawing-user prompt information has been stopped.
In some embodiments, the above method may further comprise: acquiring a mark image drawn on a shared screen; the acquired marker image is subjected to smoothing processing to generate an adjusted image.
Here, the above smoothing process may improve the smoothness of the lines of the mark image, so that the mark image as a whole has higher smoothness without the occurrence of sharp points or jaggies.
As an example, smoothing may be performed on a straight line or a curved line.
It is to be understood that, by the smoothing processing, the generated image has higher smoothness, and the adjusted image can be indicated by a smaller amount of data. By smoothing the marker image, the data amount can be compressed. Thus, the data transmission amount can be reduced in the screen sharing process.
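As one way such smoothing could be done; the patent does not specify an algorithm, so the moving-average window and downsampling stride below are assumptions chosen to match the two stated goals (smoother lines, less data):

```typescript
interface Point { x: number; y: number; }

// Smooth a hand-drawn stroke with a small moving-average window, which
// suppresses jitter (sharp points / jagged edges), then keep every
// `stride`-th point so the adjusted image is described by less data.
function smoothStroke(points: Point[], halfWindow = 2, stride = 2): Point[] {
  if (points.length < 3) return points.slice();
  const smoothed: Point[] = [];
  for (let i = 0; i < points.length; i++) {
    let sx = 0, sy = 0, n = 0;
    const lo = Math.max(0, i - halfWindow);
    const hi = Math.min(points.length - 1, i + halfWindow);
    for (let j = lo; j <= hi; j++) {
      sx += points[j].x;
      sy += points[j].y;
      n++;
    }
    smoothed.push({ x: sx / n, y: sy / n });
  }
  // Downsample, always keeping the last point of the stroke.
  const out = smoothed.filter((_, i) => i % stride === 0);
  if ((smoothed.length - 1) % stride !== 0) {
    out.push(smoothed[smoothed.length - 1]);
  }
  return out;
}
```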
With further reference to fig. 5, as an implementation of the method shown in the foregoing figures, the present disclosure provides an embodiment of a display device, where the embodiment of the device corresponds to the embodiment of the method shown in fig. 1, and the device may be specifically applied to various electronic apparatuses.
As shown in fig. 5, the display device of this embodiment includes: a set determining unit 501 and a first display unit 502. The set determining unit 501 determines a focus position information set, where the focus position information indicates the position and operation time of a predefined operation on a shared screen. The first display unit 502 determines a target time point sequence associated with the current moment and, for each target time point in the sequence in order, performs the following display steps: determining, based on the target time point and the operation time in the focus position information, an image display style for the area of the shared screen that contains the position indicated by the focus position information; and, at the target time point, adding an image in the determined display style to the shared screen for display.
In this embodiment, for the specific processing of the set determining unit 501 and the first display unit 502 and their technical effects, reference may be made to the descriptions of step 101 and step 102 in the embodiment corresponding to fig. 1, which are not repeated here.
In some embodiments, the first display unit is further configured to: an interval between the operation time in the focus position information and the target time point is determined as a time interval corresponding to the determined focus position information; according to the time interval, an image display style of an area including the position indicated by the focus position information in the shared screen is determined.
In some embodiments, the first display unit is further configured to: focal position information with interval time being longer than a preset first time length threshold value is determined to be outdated focal position information; and determining the image display pattern corresponding to the outdated focus position information as not displaying the image pattern.
In some embodiments, the set determining unit is further configured to: and acquiring a focus position information set, and deleting focus position information with interval time larger than a preset first time length threshold value from the focus position information set.
In some embodiments, the first display unit is further configured to: determine the image display style corresponding to each focus position according to preset image gradation correspondence information and the time interval corresponding to each piece of focus position information, where the image gradation correspondence information characterizes the correspondence between image display styles and time intervals.
In some embodiments, the smaller the time interval corresponding to the focus position information, the more pixels the image displayed in the area containing the position indicated by the focus position information occupies.
In some embodiments, the adjacent focus position information has an overlap between corresponding image display regions on the shared screen.
In some embodiments, the user performing the predefined operation on the shared screen includes at least one user having shared screen rights, and the set of focus position information generated based on the predefined operation of the user corresponds to the user identification.
In some embodiments, the apparatus further comprises: a sharing unit (not shown) for: executing a sharing process aiming at a target time point, wherein the sharing process comprises the following steps: and sending the data pair comprising the focus position and the corresponding image display pattern to a second terminal, wherein the second terminal displays the image at the focus position of the shared screen according to the acquired data pair.
In some embodiments, the apparatus further comprises: the first acquisition unit is used for acquiring the marked image drawn on the shared screen and the corresponding user identification; and the second display unit is used for distributing a preset display mode for each user identifier in response to the acquisition of at least two user identifiers and displaying the mark image corresponding to the user identifier by adopting the distributed display mode.
In some embodiments, the apparatus further comprises: and the third display unit is used for displaying the marked image and drawing user prompt information in a correlated way on a shared screen, wherein the drawing user prompt information is used for prompting the marked image to be drawn by a user indicated by the user identifier.
In some embodiments, the drawn user prompt includes at least one of: user identification, text for prompt and preset display style for user identification.
In some embodiments, the apparatus further comprises: a time determining unit, configured to determine a target stop time point in response to determining that the drawing is completed, where an interval between the target stop time point and the time point when the drawing is completed is a preset second duration; and the fourth display unit is used for stopping displaying the drawing user prompt information in response to the fact that the current time point is the target stopping time point.
In some embodiments, the method further comprises: a second acquisition unit configured to acquire a marker image drawn on the shared screen; and the processing unit is used for carrying out smoothing processing on the acquired marked image so as to generate an adjusted image.
Referring to fig. 6, fig. 6 illustrates an exemplary system architecture in which a display method of an embodiment of the present disclosure may be applied.
As shown in fig. 6, the system architecture may include terminal devices 601, 602, 603, a network 604, and a server 605. The network 604 is used as a medium to provide communication links between the terminal devices 601, 602, 603 and the server 605. The network 604 may include various connection types, such as wired, wireless communication links, or fiber optic cables, among others.
The terminal devices 601, 602, 603 may interact with the server 605 via the network 604 to receive or send messages and the like. Various client applications, such as web browser applications, search applications, and news applications, may be installed on the terminal devices 601, 602, 603. A client application on the terminal devices 601, 602, 603 may receive a user's instruction and perform the corresponding function, for example adding corresponding information to displayed information according to the user's instruction.
The terminal devices 601, 602, 603 may be hardware or software. When they are hardware, they may be various electronic devices with a display screen that support web browsing, including but not limited to smartphones, tablet computers, e-book readers, MP3 (Moving Picture Experts Group Audio Layer III) players, MP4 (Moving Picture Experts Group Audio Layer IV) players, laptop computers, desktop computers, and the like. When the terminal devices 601, 602, 603 are software, they may be installed in the electronic devices listed above and may be implemented as multiple pieces of software or software modules (e.g., software or software modules for providing distributed services) or as a single piece of software or software module. No specific limitation is made here.
The server 605 may be a server providing various services, for example one that receives information acquisition requests sent by the terminal devices 601, 602, 603, obtains, in various ways according to each request, the presentation information corresponding to it, and sends the relevant data of the presentation information to the terminal devices 601, 602, 603.
It should be noted that, the display method provided by the embodiments of the present disclosure may be performed by the terminal device, and accordingly, the display apparatus may be provided in the terminal devices 601, 602, 603. In addition, the display method provided by the embodiment of the present disclosure may also be performed by the server 605, and accordingly, the display device may be provided in the server 605.
It should be understood that the number of terminal devices, networks and servers in fig. 6 is merely illustrative. There may be any number of terminal devices, networks, and servers, as desired for implementation.
Referring now to fig. 7, a schematic diagram of an electronic device (e.g., a terminal device or the server in fig. 6) suitable for implementing embodiments of the present disclosure is shown. Terminal devices in embodiments of the present disclosure may include, but are not limited to, mobile terminals such as mobile phones, notebook computers, digital broadcast receivers, PDAs (personal digital assistants), PADs (tablet computers), PMPs (portable multimedia players), and in-vehicle terminals (e.g., in-vehicle navigation terminals), as well as stationary terminals such as digital TVs and desktop computers. The electronic device shown in fig. 7 is merely an example and should not impose any limitation on the functionality and scope of use of the disclosed embodiments.
As shown in fig. 7, the electronic device may include a processing means (e.g., a central processing unit, a graphics processor, etc.) 701, which may perform various appropriate actions and processes according to a program stored in a Read Only Memory (ROM) 702 or a program loaded from a storage means 708 into a Random Access Memory (RAM) 703. In the RAM 703, various programs and data required for the operation of the electronic device 700 are also stored. The processing device 701, the ROM 702, and the RAM 703 are connected to each other through a bus 704. An input/output (I/O) interface 705 is also connected to bus 704.
In general, the following devices may be connected to the I/O interface 705: input devices 706 including, for example, a touch screen, touchpad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, and the like; an output device 707 including, for example, a Liquid Crystal Display (LCD), a speaker, a vibrator, and the like; storage 708 including, for example, magnetic tape, hard disk, etc.; and a communication device 709. The communication means 709 may allow the electronic device to communicate with other devices wirelessly or by wire to exchange data. While fig. 7 shows an electronic device having various means, it is to be understood that not all of the illustrated means are required to be implemented or provided. More or fewer devices may be implemented or provided instead.
In particular, according to embodiments of the present disclosure, the processes described above with reference to flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a non-transitory computer readable medium, the computer program comprising program code for performing the method shown in the flow chart. In such an embodiment, the computer program may be downloaded and installed from a network via communication device 709, or installed from storage 708, or installed from ROM 702. The above-described functions defined in the methods of the embodiments of the present disclosure are performed when the computer program is executed by the processing device 701.
It should be noted that the computer readable medium described in the present disclosure may be a computer readable signal medium or a computer readable storage medium, or any combination of the two. The computer readable storage medium can be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this disclosure, a computer-readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In the present disclosure, however, the computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, with the computer-readable program code embodied therein. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination of the foregoing. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, fiber optic cables, RF (radio frequency), and the like, or any suitable combination of the foregoing.
In some implementations, the clients, servers may communicate using any currently known or future developed network protocol, such as HTTP (HyperText Transfer Protocol ), and may be interconnected with any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include a local area network ("LAN"), a wide area network ("WAN"), the internet (e.g., the internet), and peer-to-peer networks (e.g., ad hoc peer-to-peer networks), as well as any currently known or future developed networks.
The computer readable medium may be contained in the electronic device; or may exist alone without being incorporated into the electronic device.
The computer readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to: determining a focus position information set, wherein the focus position information indicates the position and the operation time of a predefined operation on a shared screen; determining a sequence of target time points associated with the current moment, and for each target time point in the sequence of target time points, performing the following display steps in order: determining an image display style of an area including a position indicated by the focus position information in the shared screen based on the target time point and the operation time in the focus position information; and at the target time point, adding the image of the determined image display mode to the sharing screen for image display.
Computer program code for carrying out the operations of the present disclosure may be written in one or more programming languages or a combination thereof, including, but not limited to, object oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. Where a remote computer is involved, the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider).
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units involved in the embodiments of the present disclosure may be implemented by means of software or by means of hardware. The name of a unit does not, in some cases, constitute a limitation on the unit itself; for example, the first acquisition unit may also be described as "a unit that acquires at least the focus position information set".
The functions described above herein may be performed, at least in part, by one or more hardware logic components. For example, without limitation, exemplary types of hardware logic components that may be used include: a Field Programmable Gate Array (FPGA), an Application Specific Integrated Circuit (ASIC), an Application Specific Standard Product (ASSP), a system on a chip (SOC), a Complex Programmable Logic Device (CPLD), and the like.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. The machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
The foregoing description is merely an illustration of the preferred embodiments of the present disclosure and of the technical principles employed. It will be appreciated by persons skilled in the art that the scope of the disclosure is not limited to technical solutions formed by the specific combinations of the features described above, and also covers other technical solutions formed by any combination of the above features or their equivalents without departing from the concept of the disclosure, for example, technical solutions formed by substituting the above features with (but not limited to) technical features having similar functions disclosed in the present disclosure.
Moreover, although operations are depicted in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order. In certain circumstances, multitasking and parallel processing may be advantageous. Likewise, while several specific implementation details are included in the above discussion, these should not be construed as limiting the scope of the present disclosure. Certain features that are described in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable subcombination.
Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are example forms of implementing the claims.

Claims (16)

1. A display method, comprising:
determining a focus position information set, wherein each piece of focus position information indicates the position and operation time of a predefined operation on a shared screen;
determining a target time point sequence associated with the current time point, wherein the target time point sequence is a sequence of a plurality of time points in a future time period starting from the current time point, and performing, for each target time point in the target time point sequence in order, the following display steps:
determining, based on the target time point and the operation time in the focus position information, an image display style for an area of the shared screen that includes the position indicated by the focus position information;
at the target time point, adding an image in the determined image display style to the shared screen for display;
wherein the determining, based on the target time point and the operation time in the focus position information, an image display style for an area of the shared screen that includes the position indicated by the focus position information comprises:
determining the interval between the operation time in the focus position information and the target time point as the time interval corresponding to the focus position information; and
determining, according to the time interval, the image display style for the area of the shared screen that includes the position indicated by the focus position information.
2. The method according to claim 1, wherein the determining, according to the time interval, the image display style for the area including the position indicated by the focus position information comprises:
determining focus position information whose time interval is greater than a preset first duration threshold to be outdated focus position information; and
determining the image display style corresponding to the outdated focus position information to be a style in which no image is displayed.
3. The method according to claim 2, wherein the determining the focus position information set comprises:
acquiring a focus position information set, and deleting, from the focus position information set, focus position information whose time interval is greater than the preset first duration threshold.
4. The method according to claim 1, wherein the determining, according to the time interval, the image display style for the area including the position indicated by the focus position information comprises:
determining the image display style corresponding to each focus position according to preset image gradient correspondence information and the time interval of each piece of focus position information, wherein the image gradient correspondence information represents the correspondence between image display styles and time intervals.
5. The method according to claim 1, wherein the smaller the time interval corresponding to a piece of focus position information, the more pixels are occupied by the image displayed in the area including the position indicated by that focus position information.
6. The method according to claim 5, wherein the image display areas on the shared screen corresponding to adjacent pieces of focus position information overlap.
7. The method according to claim 1, wherein the users performing the predefined operation on the shared screen comprise at least one user having screen sharing rights, and the focus position information set generated based on a user's predefined operation corresponds to that user's identifier.
8. The method according to claim 1, wherein the method further comprises:
executing, for a target time point, a sharing process comprising: sending a data pair comprising the focus position and the corresponding image display style to a second terminal, wherein the second terminal displays an image at the focus position of the shared screen according to the acquired data pair.
9. The method according to claim 1, wherein the method further comprises:
acquiring a marker image drawn on the shared screen and a corresponding user identifier; and
in response to acquiring at least two user identifiers, assigning a preset display mode to each user identifier, and displaying the marker image corresponding to each user identifier in the assigned display mode.
10. The method according to claim 9, wherein the method further comprises:
displaying, on the shared screen, the marker image in association with drawing-user prompt information, wherein the drawing-user prompt information indicates that the marker image was drawn by the user indicated by the user identifier.
11. The method according to claim 10, wherein the drawing-user prompt information comprises at least one of: the user identifier, prompt text, and a preset display style for the user identifier.
12. The method according to claim 10, wherein the method further comprises:
determining a target stop time point in response to determining that drawing is completed, wherein the interval between the target stop time point and the time point at which drawing is completed is a preset second duration; and
stopping display of the drawing-user prompt information in response to determining that the current time point is the target stop time point.
13. The method according to claim 1, wherein the method further comprises:
acquiring a marker image drawn on the shared screen; and
smoothing the acquired marker image to generate an adjusted image.
14. A display device, comprising:
a set determining unit configured to determine a focus position information set, wherein each piece of focus position information indicates the position and operation time of a predefined operation on a shared screen; and
a first display unit configured to determine a target time point sequence associated with the current time point, wherein the target time point sequence is a sequence of a plurality of time points in a future time period starting from the current time point, and to perform, for each target time point in the target time point sequence in order, the following display steps:
determining, based on the target time point and the operation time in the focus position information, an image display style for an area of the shared screen that includes the position indicated by the focus position information;
at the target time point, adding an image in the determined image display style to the shared screen for display;
wherein the determining, based on the target time point and the operation time in the focus position information, an image display style for an area of the shared screen that includes the position indicated by the focus position information comprises:
determining the interval between the operation time in the focus position information and the target time point as the time interval corresponding to the focus position information; and
determining, according to the time interval, the image display style for the area of the shared screen that includes the position indicated by the focus position information.
15. An electronic device, comprising:
one or more processors;
storage means for storing one or more programs,
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method according to any one of claims 1-13.
16. A computer readable medium having a computer program stored thereon, wherein the program, when executed by a processor, implements the method according to any one of claims 1-13.
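Read as an algorithm, the claimed refinements are straightforward to prototype. The sketch below, in the same illustrative spirit as the one in the description, exercises the per-user display modes of claims 9-11, the data pair of claim 8, and the smoothing of claim 13; the round-robin palette, the JSON wire format, and the moving-average smoother are assumptions of this sketch, not requirements of the claims.

```python
import itertools
import json
from typing import Dict, List, Tuple

Point = Tuple[float, float]

# Assumed palette of preset display modes; the claims only require that each
# acquired user identifier be assigned some preset display mode.
_PRESET_MODES = itertools.cycle(["red-solid", "blue-dashed", "green-dotted"])


def assign_display_modes(user_ids: List[str]) -> Dict[str, str]:
    """Assign a preset display mode to each acquired user identifier."""
    return {uid: next(_PRESET_MODES) for uid in user_ids}


def make_sharing_payload(focus_pos: Point, style: dict) -> bytes:
    """Serialize the data pair of focus position and image display style for
    transmission to the second terminal (the JSON wire format is assumed)."""
    return json.dumps({"focus": focus_pos, "style": style}).encode("utf-8")


def smooth_marker_image(points: List[Point], window: int = 3) -> List[Point]:
    """Smooth a drawn marker image; a windowed moving average over the
    stroke points is one plausible choice of smoothing processing."""
    if len(points) < window:
        return list(points)
    half = window // 2
    out: List[Point] = []
    for i in range(len(points)):
        lo, hi = max(0, i - half), min(len(points), i + half + 1)
        seg = points[lo:hi]
        out.append((sum(p[0] for p in seg) / len(seg),
                    sum(p[1] for p in seg) / len(seg)))
    return out


# Example: two annotating users, one jittery stroke, one payload to transmit.
modes = assign_display_modes(["user-a", "user-b"])
adjusted = smooth_marker_image([(0, 0), (1, 2), (2, 1), (3, 3), (4, 2)])
payload = make_sharing_payload((120.0, 80.0), {"opacity": 0.8, "radius_px": 24})
```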
CN202010205296.6A 2020-03-20 2020-03-20 Display method and device and electronic equipment Active CN111427528B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010205296.6A CN111427528B (en) 2020-03-20 2020-03-20 Display method and device and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010205296.6A CN111427528B (en) 2020-03-20 2020-03-20 Display method and device and electronic equipment

Publications (2)

Publication Number Publication Date
CN111427528A CN111427528A (en) 2020-07-17
CN111427528B true CN111427528B (en) 2023-07-25

Family

ID=71555380

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010205296.6A Active CN111427528B (en) 2020-03-20 2020-03-20 Display method and device and electronic equipment

Country Status (1)

Country Link
CN (1) CN111427528B (en)

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN118069007A 2020-08-03 2024-05-24 Tencent Technology (Shenzhen) Co., Ltd. Screen sharing method, device, equipment and storage medium
CN112312060B * 2020-08-28 2023-07-25 Beijing ByteDance Network Technology Co., Ltd. Screen sharing method and device and electronic equipment
CN112698759B * 2020-12-28 2023-04-21 Beijing Zitiao Network Technology Co., Ltd. Labeling method, device and electronic equipment
CN113360058B * 2021-04-21 2025-03-04 Vidaa USA Inc. A display device and information prompting method
CN113254137B * 2021-06-08 2023-03-14 TCL Communication (Ningbo) Co., Ltd. Dynamic image display method and device, storage medium and mobile terminal
CN114064593B * 2021-11-12 2024-03-01 Beijing Zitiao Network Technology Co., Ltd. Document sharing method, device, equipment and medium
CN114968159B * 2022-06-02 2025-05-20 Shenzhen Lebo Technology Co., Ltd. Display control method, electronic equipment and related products
CN115445192B * 2022-09-19 2025-05-27 NetEase (Hangzhou) Network Co., Ltd. Virtual object display method, device, computer equipment and storage medium

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2011076399A * 2009-09-30 2011-04-14 Fujitsu General Ltd Screen sharing system, pointer display method in the same and screen sharing program
JP2012252356A * 2012-08-10 2012-12-20 Fuji Xerox Co Ltd Display control device, display system and program
JP2016206250A * 2015-04-16 2016-12-08 Canon Inc. Display device, display control method, and imaging device
WO2017128895A1 * 2016-01-28 2017-08-03 Huawei Technologies Co., Ltd. Location sharing-based navigation assistance method and terminal
CN109976635A * 2019-03-19 2019-07-05 Shenzhen Huowang Gas Appliance Co., Ltd. Time control method and device based on a smart kitchen appliance touch screen
WO2019140997A1 * 2018-01-19 2019-07-25 Guangzhou Shiyuan Electronics Co., Ltd. Display annotation method, device, apparatus, and storage medium

Family Cites Families (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4666053B2 * 2008-10-28 2011-04-06 Sony Corporation Information processing apparatus, information processing method, and program
JP5789965B2 * 2010-12-01 2015-10-07 Fujitsu Limited Image transmission method, image transmission apparatus, and image transmission program
US20140040767A1 * 2012-08-03 2014-02-06 Oracle International Corporation Shared digital whiteboard
JP5939285B2 * 2014-08-28 2016-06-22 NEC Corporation Drawing control apparatus, information sharing system, drawing control method, and drawing control program
JP6492775B2 * 2015-03-03 2019-04-03 Seiko Epson Corporation Display device and display control method
JP6553418B2 * 2015-06-12 2019-07-31 Panasonic Intellectual Property Corporation of America Display control method, display control device and control program
JP6515787B2 * 2015-11-02 2019-05-22 Fujitsu Limited Virtual desktop program, virtual desktop processing method, and virtual desktop system
CN106909234A * 2015-12-23 2017-06-30 Xiaomi Technology Co., Ltd. Method, control device, terminal, and apparatus for marking a display screen
CN108064375A * 2016-12-30 2018-05-22 Shenzhen Royole Technologies Co., Ltd. A control method and device for screen display
CN108021347A * 2017-12-29 2018-05-11 Aerospace Science and Industry Smart Industry Development Co., Ltd. A method of screen sharing for an Android terminal
CN108958455A * 2018-07-10 2018-12-07 Shenzhen Hongchen Technology Co., Ltd. Handwriting trace projection method and device


Also Published As

Publication number Publication date
CN111427528A (en) 2020-07-17

Similar Documents

Publication Publication Date Title
CN111427528B (en) Display method and device and electronic equipment
CN112261459B (en) Video processing method and device, electronic equipment and storage medium
CN111399956B (en) Content display method and device applied to display equipment and electronic equipment
CN110784754A (en) Video display method and device and electronic equipment
CN113521728B (en) Cloud application implementation method, device, electronic device and storage medium
CN111338537A (en) Method, apparatus, electronic device, and medium for displaying video
EP4425313A1 (en) Data exchange method and apparatus, electronic device, storage medium and program product
CN111459364B (en) Icon updating method and device and electronic equipment
CN110619096A (en) Method and apparatus for synchronizing data
CN114064593A (en) A document sharing method, apparatus, device and medium
CN111596995A (en) Display method and device and electronic equipment
EP4550806A1 (en) Method and apparatus for interaction in live-streaming room, and device and medium
CN117041649A (en) Live broadcast interaction method, device, equipment and medium
CN110022493B (en) Playing progress display method and device, electronic equipment and storage medium
CN110456957B (en) Display interaction method, device, equipment and storage medium
CN110134905B (en) Page update display method, device, equipment and storage medium
CN114417782B (en) Display method, device and electronic device
CN110673886B (en) Method and device for generating thermodynamic diagrams
CN111225255B (en) Target video push playing method and device, electronic equipment and storage medium
CN113382293A (en) Content display method, device, equipment and computer readable storage medium
CN116319932B (en) Content push model training method, device, equipment and storage medium
CN111770385A (en) Card display method and device, electronic equipment and medium
CN114071028B (en) Video generation and playing method and device, electronic equipment and storage medium
CN111385638B (en) Video processing method and device
CN114879926B (en) Information drawing processing method, device, electronic device and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant