Disclosure of Invention
In view of the foregoing, it is desirable to provide a panoramic conference control method, apparatus, and computer device.
In a first aspect, the present disclosure provides a panoramic conference control method. The method is applied to a conference host, the conference host is connected with at least one participant, and the method comprises the following steps:
acquiring panoramic video stream data;
displaying a first partial panoramic image of a panoramic image generated using the panoramic video stream data, and determining viewing angle information corresponding to the currently displayed first partial panoramic image;
and, in response to a viewing angle sharing instruction, sending the viewing angle information to the at least one participant to instruct the at least one participant to acquire the panoramic video stream data and to display a second partial panoramic image of the panoramic image generated using the panoramic video stream data based on the viewing angle information and a display window of the at least one participant.
In one embodiment, the sending the view information to the at least one participant to instruct the at least one participant to obtain the panoramic video stream data, and displaying a second part of the panoramic image generated by using the panoramic video stream data based on the view information and a display window of the at least one participant includes:
collecting the visual angle information according to a preset first time period, and generating a visual angle information queue;
Transmitting the view angle information queue to the at least one participant according to a preset second time period to instruct the at least one participant to acquire panoramic video stream data, acquiring the view angle information in the view angle information queue according to the first time period and time sequence, and displaying a second part of panoramic images in the panoramic images generated by using the panoramic video stream data sequentially based on the acquired view angle information in the view angle information queue and a display window of the at least one participant;
Wherein the first time period is less than or equal to the second time period.
In one embodiment, the method further comprises:
responding to a spotlight instruction received after the viewing angle sharing instruction, and displaying a spotlight area indicated by the spotlight instruction on the displayed first partial panoramic image;
collecting spotlight state information indicated by the spotlight region according to a preset third time period, and generating a spotlight state queue;
And sending the spotlight state queue to the at least one participant according to a preset fourth time period to instruct the at least one participant to acquire spotlight state information in the spotlight state queue according to the third time period, and displaying spotlight areas indicated by the spotlight state information on a displayed second part of panoramic image sequentially based on the acquired spotlight state information in the spotlight state queue, wherein the third time period is less than or equal to the fourth time period.
In one embodiment, the method further comprises:
enlarging the panoramic image of the spotlight area according to a preset enlargement ratio;
And sending the enlargement ratio to the at least one participant to instruct the at least one participant to enlarge the panoramic image of the spotlight area displayed according to the enlargement ratio.
In one embodiment, the method further comprises:
Responding to a collaboration request, and acquiring first position information of a first hot spot marked and/or controlled on a first part of panoramic image and first description information input at the first hot spot;
Transmitting the first location information and the first description information to the at least one participant to instruct the at least one participant to display a first hot spot in the displayed second partial panoramic image based on the first location information, and displaying the first description information on the first hot spot of the second partial panoramic image;
Displaying the first hot spot and first descriptive information at the first hot spot on the displayed first partial panoramic image;
receiving second position information of a second hot spot fed back by a first participant in the at least one participant and second description information input at the second hot spot by the at least one participant;
Displaying the second hot spot and second descriptive information at the second hot spot on the displayed first partial panoramic image;
And sending second position information of the second hot spot and second description information input at the second hot spot by the at least one participant to a second participant except the first participant in the at least one participant so as to instruct the second participant to display the second hot spot in the displayed second partial panoramic image based on the second position information, and displaying the second description information on the second hot spot of the second partial panoramic image.
In a second aspect, the present disclosure further provides a panoramic conference control method applied to at least one participant, the at least one participant being connected to a conference host, the method comprising:
Receiving the view angle information sent by the conference host in response to receiving a view angle sharing instruction;
acquiring panoramic video stream data;
and determining a display window of the at least one participant, and displaying, based on the display window and the viewing angle information, a second partial panoramic image generated based on the panoramic video stream data.
In one embodiment, the displaying a second partial panoramic image generated based on the panoramic video stream data based on the display window and the viewing angle information includes:
acquiring a visual angle information queue sent by a conference host;
and acquiring the view angle information in the view angle information queue according to a preset first time period and time sequence, and displaying a second part of panoramic images in the panoramic images generated by utilizing the panoramic video stream data sequentially based on the acquired view angle information and the size of the display window.
In a third aspect, the present disclosure further provides a panoramic conference control device. The device is applied to a conference host, the conference host is connected with at least one participant, and the device comprises:
the data acquisition module is used for acquiring panoramic video stream data;
The image display module is used for displaying partial panoramic images in the panoramic images generated by utilizing the panoramic video stream data and determining view angle information corresponding to the partial panoramic images which are currently displayed;
And the sharing module is used for responding to a viewing angle sharing instruction, sending the viewing angle information to the at least one participant to instruct the at least one participant to acquire the panoramic video stream data, and adjusting partial panoramic images for display in the panoramic images generated by using the panoramic video stream data based on the viewing angle information.
In a fourth aspect, the present disclosure further provides a panoramic conference control device applied to at least one participant, said at least one participant being connected to a conference host, said device comprising:
The information receiving module is used for receiving the viewing angle information sent by the conference host in response to receiving the viewing angle sharing instruction;
The video data acquisition module is used for acquiring panoramic video stream data;
And the display adjustment module is used for displaying partial panoramic images in the panoramic images generated by utilizing the panoramic video stream data and adjusting the displayed partial panoramic images based on the visual angle information.
In a fifth aspect, the present disclosure also provides a computer device. The computer device comprises a memory storing a computer program and a processor implementing the steps of any of the method embodiments described above when the processor executes the computer program.
In a sixth aspect, the present disclosure also provides a computer-readable storage medium. The computer readable storage medium has stored thereon a computer program which, when executed by a processor, implements the steps of any of the method embodiments described above.
In a seventh aspect, the present disclosure also provides a computer program product. The computer program product comprises a computer program which, when executed by a processor, implements the steps of any of the method embodiments described above.
In the above embodiments, the viewing angle information is sent to the at least one participant so that all participants obtain the same viewing angle experience from the panoramic video stream data, which ensures that interaction and discussion among the participants are based on consistent visual content and improves the efficiency and effect of the conference. The displayed portion of the panoramic image can be adjusted dynamically according to the display windows of different participant terminals and the current viewing angle information, so that diverse display requirements are better met. In addition, because all participant terminals obtain the same viewing angle experience, limitations of time and space are overcome: people in different places can interact with the conditions of the operation site in real time as if they were present there, enabling rapid information transfer, immediate problem solving, and collaborative advancement of work. Whether in complex engineering projects, emergency fault handling, or daily production operations, remote collaboration supported in this way achieves efficient, accurate, and consistent cooperation, thereby greatly improving working efficiency, reducing cost, and safeguarding operation quality. By sharing the viewing angle with the other conference participants, important on-site pictures are focused on synchronously; each participant attends to the same scene and picture, unnecessary communication cost is reduced, the accuracy of information transfer is greatly improved, and the fluency of on-site collaboration is ensured.
Detailed Description
In order to make the objects, technical solutions and advantages of the present disclosure more apparent, the present disclosure will be further described in detail with reference to the accompanying drawings and examples. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the present disclosure.
It should be noted that the terms "first," "second," and the like in the description and claims herein and in the foregoing figures are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that the embodiments described herein may be capable of operation in sequences other than those illustrated or described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, apparatus, article, or device that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed or inherent to such process, method, article, or device.
In this document, the term "and/or" merely describes an association relationship between associated objects and indicates that three relationships may exist. For example, "A and/or B" may mean: A alone, both A and B, or B alone. In addition, the character "/" herein generally indicates that the associated objects before and after it are in an "or" relationship.
The embodiment of the disclosure provides a panoramic conference control method, which can be applied to an application environment as shown in fig. 1. The conference host 102 may be connected to at least one participant 104. The panoramic camera 106 may be connected to a cloud server 108. The conference host 102 and the at least one participant 104 may also each be connected to the cloud server 108, for acquiring in real time panoramic video stream data captured by the panoramic camera 106, or for acquiring panoramic video stream data pre-stored in the cloud server 108. The conference host 102 may obtain the panoramic video stream data. When a user conducts a panoramic video conference through the conference host 102, the conference host 102 displays a partial panoramic image of the panoramic image generated using the panoramic video stream data, and determines viewing angle information corresponding to the currently displayed partial panoramic image. In response to a viewing angle sharing instruction, the conference host 102 transmits the viewing angle information to the at least one participant 104; the at least one participant 104 acquires the panoramic video stream data and adjusts, based on the received viewing angle information, the portion of the panoramic image that is displayed among the panoramic images generated using the panoramic video stream data. The conference host 102 and the participant 104 may be, but are not limited to, various personal computers, notebook computers, smart phones, tablet computers, and portable wearable devices. The portable wearable device may be a smart watch, a smart bracelet, a head-mounted device, or the like.
In one embodiment, as shown in fig. 2, a panoramic conference control method is provided, and the method is applied to the conference host 102 in fig. 1 for illustration, and includes the following steps:
S202, panoramic video stream data is acquired.
The panoramic video stream data refers to video data that captures and presents an omnidirectional (360°) field of view. The content of the panoramic video stream data differs according to the application scene; for example, in an industrial production scene, the panoramic video stream data may be a panoramic video of the current industrial production, and in an equipment training scene it may be a panoramic video of the equipment concerned.
Specifically, a user may initiate a panoramic conference at the conference host. The conference host typically needs to acquire the panoramic video stream data from a cloud server. In some application scenarios, where the cloud server cannot connect to a panoramic camera on site, the panoramic video may first be captured with a panoramic camera, and the panoramic camera then transmits the captured panoramic video stream data to the cloud server (this case is applicable to roaming scenes). In other application scenarios, where the cloud server can be connected to an on-site panoramic camera, the panoramic camera may send the panoramic video to the cloud server in real time, and the conference host can then acquire the panoramic video stream data from the cloud server in real time. In addition, it should be noted that, in some embodiments of the present disclosure, the conference may further include a voice function and/or a video function during the conference; these functions already exist in conventional conferencing technology and are not described in detail here.
In some exemplary embodiments, where the panoramic camera can be connected to the cloud server, the panoramic camera may push the panoramic video stream data to the cloud server through the RTMP (Real-Time Messaging Protocol) network protocol. RTMP is a real-time messaging protocol commonly used for real-time transmission of data, including audio, video, and other multimedia data, over the Internet; the real-time performance of the panoramic video stream data can be ensured through the RTMP protocol. The conference host may acquire the panoramic video stream data through WebRTC streaming technology. WebRTC (Web Real-Time Communication) is a real-time communication technology that can directly implement point-to-point audio, video, and data transmission between web browsers without third-party plug-ins or software. WebRTC is based on JavaScript APIs and enables real-time communication between browsers, including audio/video calls, file sharing, and the like.
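As an illustrative sketch only (the disclosure does not prescribe any particular client implementation), the following TypeScript fragment shows one way a browser-based conference end might attach a remote panoramic stream received over WebRTC to a video element; the signaling exchange with the cloud server is omitted, and the element id "panoramaVideo" and the STUN server address are assumptions made for illustration.

```typescript
// Minimal sketch: receiving a panoramic video stream over WebRTC in a browser.
// The offer/answer signaling with the cloud server is assumed to happen elsewhere;
// "panoramaVideo" is a hypothetical <video> element id used only for this example.
const pc = new RTCPeerConnection({
  iceServers: [{ urls: "stun:stun.example.org" }], // placeholder STUN server
});

pc.ontrack = (event: RTCTrackEvent) => {
  // Attach the incoming panoramic stream to a video element; a panorama viewer
  // can then use this element as the texture source for rendering the panorama.
  const video = document.getElementById("panoramaVideo") as HTMLVideoElement | null;
  if (video && event.streams[0]) {
    video.srcObject = event.streams[0];
    void video.play();
  }
};
```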
S204, displaying a first partial panoramic image of the panoramic image generated using the panoramic video stream data, and determining viewing angle information corresponding to the currently displayed first partial panoramic image. The first partial panoramic image is determined based at least on a display window of the conference host.
The viewing angle information may include a horizontal viewing angle, a vertical viewing angle, and a field angle, among others. The horizontal viewing angle indicates the rotation or offset of the observer's viewing angle in the horizontal direction. The vertical viewing angle indicates the rotation or offset of the observer's viewing angle in the vertical direction. The field angle indicates the observer's field of view, i.e. the angular width of the picture that can be seen.
Specifically, after the conference host acquires the panoramic video stream data, the panoramic image may be generated by rendering based on the panoramic video stream data, and then the panoramic image is displayed on the conference host. Further, since the panoramic image is a 360 ° image, the entire panoramic image is not normally displayed on the conference host side, and only a part of the panoramic image is displayed. For example, a panoramic image of 0-90 degrees is displayed. Because the angles of the displayed images are different, the conference host can determine the view angle information corresponding to the currently displayed first part of panoramic image.
Still further, the displaying of the first portion of the panoramic image generated using the panoramic video stream data includes displaying the first portion of the panoramic image generated using the panoramic video stream data based on a display window of a conference host. It should be noted that, in the process of displaying the panoramic image, the conference host may adjust the displayed first part of the panoramic image according to the window of the panoramic image displayed by the conference host, for example, if the current window is relatively large, the viewing angle of the displayed first part of the panoramic image is relatively large. If the current window is smaller, the view angle of the currently displayed first partial panoramic image is smaller.
And S206, responding to a viewing angle sharing instruction, sending the viewing angle information to the at least one participant to instruct the at least one participant to acquire the panoramic video stream data, and displaying a second part of panoramic images in the panoramic images generated by using the panoramic video stream data based on the viewing angle information and a display window of the at least one participant.
Wherein the viewing angle sharing instruction may be, in some embodiments of the present disclosure, an instruction to share the viewing angle of the conference host with the at least one participant. The second partial panoramic image and the first partial panoramic image have a common portion.
Specifically, the user may click a viewing angle sharing virtual key at the conference host, and when it is detected that the viewing angle sharing virtual key has been clicked, the conference host may determine that a viewing angle sharing instruction is to be responded to. The conference host may send the currently determined viewing angle information to the at least one participant. After receiving the viewing angle information, the at least one participant may acquire the panoramic video stream data from the cloud server. It should be noted that the panoramic video stream data acquired by the at least one participant is generally the same as that acquired by the conference host, so as to ensure display consistency and information synchronization during the panoramic video conference. After the at least one participant acquires the panoramic video stream data, it may generate and display a panoramic image using the panoramic video stream data in the same manner as the conference host described above. During display, to ensure information synchronization, the at least one participant may adjust the displayed second partial panoramic image according to the viewing angle information so that it is consistent with, or similar to, the first partial panoramic image displayed at the conference host. In addition, since the display windows of the at least one participant may differ in size, the displayed second partial panoramic image is not necessarily identical to the first partial panoramic image; therefore, after the at least one participant determines, according to the viewing angle information, the second partial panoramic image that needs to be displayed, it may further adjust that image according to its own display window. At this time, the viewing angle information of the partial panoramic image displayed by the at least one participant is the same as the viewing angle information of the panoramic image at the conference host, but the second partial panoramic image may be either the same as or different from the first partial panoramic image.
In some exemplary embodiments, suppose the conference host currently displays the panoramic image in a window A that can show a 90° range of the panoramic image, and the center of the viewing angle currently displayed by the conference host is at 180°. The first partial panoramic image is then displayed according to the display range of window A, namely from 180° - 45° to 180° + 45°. Suppose further that the at least one participant currently displays the panoramic image in a window B that can show a 100° range. After acquiring the viewing angle information, the participant displays the second partial panoramic image according to the display range of window B, namely from 180° - 50° to 180° + 50°. It is to be understood that the foregoing is only illustrative.
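The range arithmetic in the example above can be written out explicitly. The short TypeScript sketch below (function and variable names are illustrative only, not part of the disclosure) derives the horizontal display range of a window from the shared center angle and the window's field of view, reproducing the 90° and 100° cases described above.

```typescript
// Compute the horizontal angular range shown in a window, given the shared
// center angle and the window's horizontal field of view (all in degrees).
function displayedRange(centerDeg: number, windowFovDeg: number): [number, number] {
  const half = windowFovDeg / 2;
  return [centerDeg - half, centerDeg + half];
}

console.log(displayedRange(180, 90));  // host window A:        [135, 225]
console.log(displayedRange(180, 100)); // participant window B: [130, 230]
```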
According to the panoramic conference control method above, the viewing angle information is sent to the at least one participant so that all participants obtain the same viewing angle experience from the panoramic video stream data, which ensures that interaction and discussion among the participants are based on consistent visual content and improves the efficiency and effect of the conference. The displayed portion of the panoramic image can be adjusted dynamically according to the display windows of different participant terminals and the current viewing angle information, so that diverse display requirements are better met. In addition, because all participant terminals obtain the same viewing angle experience, limitations of time and space are overcome: people in different places can interact with the conditions of the operation site in real time as if they were present there, enabling rapid information transfer, immediate problem solving, and collaborative advancement of work. Whether in complex engineering projects, emergency fault handling, or daily production operations, remote collaboration supported in this way achieves efficient, accurate, and consistent cooperation, thereby greatly improving working efficiency, reducing cost, and safeguarding operation quality. By sharing the viewing angle with the other conference participants, important on-site pictures are focused on synchronously; each participant attends to the same scene and picture, unnecessary communication cost is reduced, the accuracy of information transfer is greatly improved, and the fluency of on-site collaboration is ensured.
In one embodiment, as shown in fig. 3, the sending the view information to the at least one participant to instruct the at least one participant to obtain the panoramic video stream data, and displaying, based on the view information and a display window of the at least one participant, a second part of the panoramic image generated using the panoramic video stream data, includes:
S302, acquiring the visual angle information according to a preset first time period, and generating a visual angle information queue.
S304, sending the view angle information queue to the at least one participant according to a preset second time period to instruct the at least one participant to acquire panoramic video stream data, and acquiring the view angle information in the view angle information queue according to the first time period and time sequence, and displaying a second part of panoramic images in the panoramic images generated by using the panoramic video stream data based on the acquired view angle information in the view angle information queue and a display window of the at least one participant in sequence;
Wherein the first time period is less than or equal to the second time period.
Specifically, in order to reduce frequent communication between the conference host and the at least one participant and to ensure that the at least one participant adjusts consistently according to the viewing angle information, the conference host may continuously collect the viewing angle information at a preset first time period, and generate a viewing angle information queue according to the collection order; the viewing angle information queue generally contains viewing angle information stored in chronological order. The conference host may send the generated viewing angle information queue to the at least one participant at a preset second time period. The at least one participant may receive the viewing angle information queue, acquire the panoramic video stream data from the cloud server, and render the panoramic video stream data to generate a panoramic image. It then obtains the viewing angle information from the viewing angle information queue according to the first time period and in chronological order, and sequentially displays a second partial panoramic image of the panoramic image according to the obtained viewing angle information and the size of its display window.
In some exemplary embodiments, the conference host may collect the viewing angle information at regular intervals (e.g., once every 30 ms) to generate a viewing angle information queue, and broadcast a CUS_EVENT_SHARE_VIEW event to the at least one participant at a fixed frequency, the event carrying the viewing angle information queue. The at least one participant receives the viewing angle information queue and obtains the viewing angle information from the queue at a fixed interval and in order (the fixed interval is generally the same as the collection interval, so that the display effect is the same as the viewing angle switching at the conference host), and displays the second partial panoramic image according to the obtained viewing angle information and the display window of the at least one participant. The CUS_EVENT_SHARE_VIEW event is a custom event used for sharing various kinds of information with the at least one participant.
The data structure of the viewing angle information may be { hlookat, vlookat, fov }. "hlookat" is a parameter in a panorama viewer for controlling the rotation or horizontal deflection of the viewing angle in the horizontal direction; by adjusting the value of the "hlookat" parameter, the horizontal viewing angle in the panoramic image can be changed, thereby controlling the direction of the observed scene. This parameter is typically used in virtual reality (VR) applications, panoramic pictures, or panoramic video players to change the direction of the viewer's line of sight and provide a more interactive and immersive experience. In panorama viewers, the "vlookat" parameter is typically used to control the vertical direction of the viewing angle (the pitch angle); it defines the tilt angle of the head, i.e. the angle of looking up or down, when the user views the panoramic environment. "fov" is the field angle, typically expressed as an angle (in degrees or radians), and describes the angular extent of the scene visible from the camera viewpoint.
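For illustration only, the following TypeScript sketch shows how the { hlookat, vlookat, fov } structure and the queue mechanism described above might be realized on the conference host side. The event name CUS_EVENT_SHARE_VIEW is taken from the description; getCurrentView() and broadcastEvent() are hypothetical hooks into the panorama viewer and the conference transport, and the 30 ms / 300 ms periods are example values only.

```typescript
// Sketch of host-side viewing-angle collection and broadcast. getCurrentView()
// and broadcastEvent() are assumed hooks, not part of the disclosure.
interface ViewAngleInfo {
  hlookat: number; // horizontal viewing angle (degrees)
  vlookat: number; // vertical viewing angle (pitch, degrees)
  fov: number;     // field angle (degrees)
}

declare function getCurrentView(): ViewAngleInfo;                              // viewer hook (assumed)
declare function broadcastEvent(name: string, payload: ViewAngleInfo[]): void; // transport hook (assumed)

const viewQueue: ViewAngleInfo[] = [];
const FIRST_PERIOD_MS = 30;   // first time period: collection interval (example)
const SECOND_PERIOD_MS = 300; // second time period: broadcast interval, >= first (example)

// Collect the host's current viewing angle every first time period.
setInterval(() => viewQueue.push(getCurrentView()), FIRST_PERIOD_MS);

// Broadcast the accumulated queue every second time period, then clear it.
setInterval(() => {
  if (viewQueue.length > 0) {
    broadcastEvent("CUS_EVENT_SHARE_VIEW", viewQueue.splice(0, viewQueue.length));
  }
}, SECOND_PERIOD_MS);
```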
In this embodiment, the viewing angle information is collected according to the first time period, the viewing angle information queue is generated, and then the at least one participant obtains the viewing angle information in the viewing angle information queue according to the first time period and the time sequence, so that the viewing angle change of the conference host and the viewing angle change of the at least one participant are ensured to be consistent.
In one embodiment, as shown in fig. 4, the method further comprises:
S402, responding to a spotlight instruction received after the viewing angle sharing instruction, and displaying a spotlight area indicated by the spotlight instruction on the displayed first partial panoramic image.
The spotlight instruction may be an operation instruction for circling out an area on the displayed first partial panoramic image at the conference host. The spotlight instruction may include a spotlight display instruction and a spotlight control instruction.
Specifically, the user may click a spotlight key at the conference host, and after the spotlight key is clicked, a spotlight region may be displayed on the first partial panoramic image displayed at the conference host. At this time, the user may input a spotlight control instruction through the interface and use the spotlight control instruction to control the spotlight region to move on the first partial panoramic image.
S404, collecting spotlight state information indicated by the spotlight region according to a preset third time period, and generating a spotlight state queue.
And S406, sending the spotlight state queue to the at least one participant according to a preset fourth time period to instruct the at least one participant to acquire spotlight state information in the spotlight state queue according to the third time period, and displaying spotlight areas indicated by the spotlight state information on a second displayed partial panoramic image sequentially based on the acquired spotlight state information in the spotlight state queue, wherein the third time period is less than or equal to the fourth time period.
The third time period may be the same as the first time period or may be different from the first time period. The fourth time period may be the same as the second time period or may be different from the second time period. When the third time period is the same as the first time period, the fourth time period may be the same as the second time period. The spotlight status information may comprise an abscissa and an ordinate of the spotlight region on the first partial panoramic image.
Specifically, the spotlight state information indicated by the spotlight region may be collected at a preset third time period, and a spotlight state queue may then be generated. The spotlight state queue is sent to the at least one participant at a preset fourth time period. The at least one participant may receive the spotlight state queue, obtain the spotlight state information from the spotlight state queue according to the third time period, and then sequentially display, in chronological order, the spotlight areas indicated by the spotlight state information on the displayed second partial panoramic image.
In some exemplary embodiments, in response to the spotlight instruction, the conference host may collect the spotlight state information at regular intervals (e.g., once every 30 ms) to generate a spotlight state queue, and broadcast a CUS_EVENT_SHARE_VIEW event to the at least one participant at a fixed frequency, the event carrying the spotlight state queue. After receiving the spotlight state queue, the at least one participant obtains the spotlight state information from the queue at a fixed interval and in order (the fixed interval is generally the same as the collection interval, so that the movement and display of the spotlight are the same as at the conference host), and sequentially displays the spotlight areas on the second partial panoramic image according to the obtained spotlight state information.
The data structure of the spotlight state information may be { ath, atv }, where "ath" may generally be the abscissa of a position in the panoramic image and "atv" the ordinate of that position.
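A corresponding participant-side sketch for the spotlight queue is given below, again for illustration only: the { ath, atv } fields follow the data structure above, drawSpotlight() is a hypothetical rendering hook, and the 30 ms third time period is an example value.

```typescript
// Participant-side sketch: consume a received spotlight state queue at the
// third time period so the spotlight moves at the same pace as on the host.
interface SpotlightState {
  ath: number; // horizontal coordinate of the spotlight on the panoramic image
  atv: number; // vertical coordinate of the spotlight on the panoramic image
}

declare function drawSpotlight(state: SpotlightState): void; // rendering hook (assumed)

const THIRD_PERIOD_MS = 30; // third time period: same pace as collection on the host (example)

function playSpotlightQueue(queue: SpotlightState[]): void {
  // Dequeue one state per third time period and draw it; stop when the queue is empty.
  const timer = setInterval(() => {
    const state = queue.shift();
    if (state) {
      drawSpotlight(state);
    } else {
      clearInterval(timer);
    }
  }, THIRD_PERIOD_MS);
}
```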
In this embodiment, spotlight state information is collected according to a third time period to generate a spotlight state queue, and then at least one participant obtains spotlight state information according to the third time period and sequence, so that the change of the spotlight of the conference host and the change of the spotlight of the at least one participant can be ensured to be consistent, and conference experience is improved.
In one embodiment, the method further comprises:
enlarging the panoramic image of the spotlight area according to a preset enlargement ratio;
And sending the enlargement ratio to the at least one participant to instruct the at least one participant to enlarge the panoramic image of the spotlight area displayed according to the enlargement ratio.
Specifically, after the spotlight area has been determined at the conference host and the at least one participant, in order to make both parties of the conference focus more on the spotlight area, the conference host may further enlarge and display the panoramic image contained in the spotlight area according to a preset enlargement ratio. Since the display effects of both parties of the conference need to remain consistent, the conference host also sends the preset enlargement ratio to the at least one participant, and after displaying the spotlight area, the at least one participant enlarges and displays the panoramic image contained in the spotlight area according to the same ratio.
In this embodiment, enlarging the spotlight area according to the preset enlargement ratio makes the details contained in the spotlight area clearer, reducing misunderstanding and information omission; the enlarged spotlight area also guides the attention of the participants to key points or important data, so that they can better understand the information contained in the spotlight area.
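As one possible realization (not mandated by the disclosure, which only requires that the same ratio be applied at both ends), a viewer that exposes its field of view can approximate zooming in by a shared ratio by dividing the displayed field of view by that ratio; the hypothetical setFov() hook below stands in for whatever zoom interface the viewer actually provides.

```typescript
// Illustrative zoom-by-ratio helper: applying the shared enlargement ratio by
// shrinking the displayed field of view. setFov() is an assumed viewer hook.
declare function setFov(fovDeg: number): void; // viewer hook (assumed)

function applyEnlargementRatio(currentFovDeg: number, ratio: number): number {
  const zoomedFov = currentFovDeg / ratio; // e.g. a 90° view at ratio 2 becomes 45°
  setFov(zoomedFov);
  return zoomedFov;
}
```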
In one embodiment, as shown in fig. 5, the method further comprises:
S502, first position information of a first hot spot marked and/or controlled on a first part of panoramic image and first description information input at the first hot spot are acquired in response to a collaboration request.
Wherein, the collaboration request may be a request to collaboratively mark information on the panoramic image together with the at least one participant. The first description information may generally be information describing the corresponding first hotspot, such as text or picture information explaining or annotating the first hotspot.
Specifically, the user may click an on-site annotation key at the conference host; if the conference host detects that the on-site annotation key has been clicked, it may determine that collaborative marking is required in response to the collaboration request. The user may then click a position on the first partial panoramic image displayed at the conference host to mark a first hotspot, whereby the first position information is determined; after the first hotspot is marked, an input box is displayed at the first hotspot, and the user can input the first description information in the input box.
And S504, the first position information and the first description information are sent to the at least one participant to instruct the at least one participant to display a first hot spot in the displayed second partial panoramic image based on the first position information, and the first description information is displayed on the first hot spot of the second partial panoramic image.
Specifically, the conference host may record the marked first position information and first description information and send them to the at least one participant. When the at least one participant receives the first position information, it determines the first hotspot in the second partial panoramic image according to the first position information and marks and displays the first hotspot; the first description information is then displayed on the first hotspot of the second partial panoramic image.
And S506, displaying the first hot spot and the first description information at the first hot spot on the displayed first part of panoramic image.
Specifically, the conference host may display the first hotspot marked on the first partial panoramic image according to the first position information, together with the first description information input in the input box at the first hotspot.
S508, receiving second location information of a second hot spot fed back by the first participant in the at least one participant and second description information input at the second hot spot by the at least one participant.
Specifically, in response to the collaboration request, marking is available not only at the conference host but also at the at least one participant. A user may click a position at a first participant among the at least one participant to mark a second hotspot, whereby the second position information is determined; after the second hotspot is marked, an input box is displayed at the second hotspot, and the user may input the second description information in the input box. The first participant may send the second position information and the second description information of the second hotspot to the conference host, and the conference host receives them.
And S510, displaying the second hot spot and second description information at the second hot spot on the displayed first partial panoramic image.
In particular, the conference host may mark and display a second hotspot on the displayed first portion of the panoramic image, and display second descriptive information at the second hotspot.
And S512, sending second position information of the second hot spot and second description information input at the second hot spot by the at least one participant to a second participant except the first participant in the at least one participant so as to instruct the second participant to display the second hot spot in the displayed second partial panoramic image based on the second position information, and displaying the second description information on the second hot spot of the second partial panoramic image.
Specifically, when there are multiple participants, each participant is connected to the conference host, so information synchronization among the remaining participants must also be ensured. After the conference host receives the second position information and the second description information of the second hotspot fed back by the first participant, it may send them to each second participant other than the first participant. After receiving the second position information and the second description information, the second participant determines and displays the second hotspot on its displayed second partial panoramic image according to the second position information, and displays the second description information on that second hotspot.
In this embodiment, hotspots can be added from both the conference host and the participant, and the position information and description information of the hotspots are recorded accurately. The description information supports not only text but also pictures, so participants can mark key and critical information more clearly and explicitly, improving the accuracy and efficiency of information transfer.
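Purely as an illustration of the collaboration flow above, the sketch below shows one possible shape of a hotspot message carrying position and description information, and the relay step by which the conference host forwards a hotspot received from the first participant to every second participant; all field and function names are assumptions, not part of the disclosure.

```typescript
// Illustrative hotspot message and relay logic for the collaboration flow.
interface HotspotMessage {
  position: { ath: number; atv: number };            // location on the panoramic image (assumed fields)
  description: { text?: string; imageUrl?: string }; // text and/or picture description
  origin: "host" | "participant";                    // which end created the hotspot
}

declare function sendTo(participantId: string, msg: HotspotMessage): void; // transport hook (assumed)

// When the host receives a hotspot from the first participant, it relays the
// hotspot to every other participant so all displayed images stay synchronized.
function relayHotspot(msg: HotspotMessage, fromId: string, allIds: string[]): void {
  for (const id of allIds) {
    if (id !== fromId) {
      sendTo(id, msg);
    }
  }
}
```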
In one embodiment, as shown in fig. 6, the present disclosure further provides a panoramic conference control method applied to at least one participant, the at least one participant being connected to a conference host, the method comprising:
s602, receiving the view angle information sent by the conference host in response to receiving the view angle sharing instruction.
S604, obtaining panoramic video stream data.
S606, determining a display window of the at least one participant, and displaying, based on the display window and the viewing angle information, a second partial panoramic image generated based on the panoramic video stream data.
Reference may be made to the foregoing embodiments for specific implementation and limitation in this embodiment, and the detailed description is not repeated here.
In one embodiment, as shown in fig. 7, the displaying a second partial panoramic image generated based on the panoramic video stream data based on the display window and the viewing angle information includes:
S702, obtaining a visual angle information queue sent by a conference host.
S704, acquiring the view angle information in the view angle information queue according to a preset first time period and time sequence;
and S706, displaying a second part of panoramic images generated by using the panoramic video stream data based on the acquired visual angle information and the size of the display window in sequence.
Reference may be made to the foregoing embodiments for specific implementation and limitation in this embodiment, and the detailed description is not repeated here.
It should be understood that, although the steps in the flowcharts related to the embodiments described above are shown sequentially as indicated by the arrows, these steps are not necessarily performed in the order indicated by the arrows. Unless explicitly stated herein, the order of execution of the steps is not strictly limited, and the steps may be performed in other orders. Moreover, at least some of the steps in the flowcharts described in the above embodiments may include a plurality of sub-steps or stages, which are not necessarily performed at the same time but may be performed at different times; the order of these sub-steps or stages is not necessarily sequential, and they may be performed in turn or alternately with at least some of the other steps or with sub-steps or stages of other steps.
Based on the same inventive concept, the embodiment of the disclosure also provides a panoramic conference control device for implementing the panoramic conference control method. The implementation of the solution provided by the device is similar to the implementation described in the above method, so the specific limitation in one or more embodiments of the panoramic conference control device provided below may refer to the limitation of the panoramic conference control method hereinabove, and will not be described herein.
In one embodiment, as shown in fig. 8, there is provided a panoramic conference control device 800 applied to a conference host, the conference host being connected to at least one participant, the device comprising a data acquisition module 802, an image display module 804 and a sharing module 806, wherein:
a data acquisition module 802, configured to acquire panoramic video stream data.
The image display module 804 is configured to display a first part of panoramic images in the panoramic images generated by using the panoramic video stream data, and determine perspective information corresponding to the first part of panoramic images to be currently displayed.
The sharing module 806 is configured to send the view angle information to the at least one participant in response to a view angle sharing instruction, so as to instruct the at least one participant to obtain the panoramic video stream data, and display a second portion of the panoramic image generated using the panoramic video stream data based on the view angle information and a display window of the at least one participant.
In one embodiment of the apparatus, the sharing module 806 includes:
the visual angle information queue generating module is used for acquiring the visual angle information according to a preset first time period and generating a visual angle information queue.
The queue information sending module is used for sending the view angle information queue to the at least one participant according to a preset second time period to instruct the at least one participant to obtain panoramic video stream data, obtaining the view angle information in the view angle information queue according to the first time period and time sequence, and displaying a second part of panoramic images in the panoramic images generated by utilizing the panoramic video stream data based on the obtained view angle information in the view angle information queue and a display window of the at least one participant in sequence, wherein the first time period is less than or equal to the second time period.
In one embodiment of the apparatus, the apparatus further comprises:
and the spotlight control module is used for responding to the spotlight instruction after the viewing angle sharing instruction, and displaying the spotlight area indicated by the spotlight instruction on the displayed first part of panoramic image.
And the spotlight state queue generating module is used for acquiring spotlight state information indicated by the spotlight region according to a preset third time period and generating a spotlight state queue.
The queue information sending module is further configured to send the spotlight state queue to the at least one participant according to a preset fourth time period, so as to instruct the at least one participant to obtain spotlight state information in the spotlight state queue according to the third time period, and sequentially display, on the displayed second part of panoramic images, a spotlight area indicated by the spotlight state information based on the obtained spotlight state information in the spotlight state queue, where the third time period is less than or equal to the fourth time period.
In one embodiment of the apparatus, the apparatus further comprises:
And the enlargement module is used for enlarging the panoramic image of the spotlight area according to a preset enlargement ratio.
And the ratio sending module is used for sending the enlargement ratio to the at least one participant to instruct the at least one participant to enlarge the displayed panoramic image of the spotlight area according to the enlargement ratio.
In one embodiment of the apparatus, the apparatus further comprises:
And the hotspot marking module is used for responding to the collaboration request, and acquiring first position information of a first hotspot marked and/or controlled on the first part of panoramic image and first description information input at the first hotspot.
And the first information sending module is used for sending the first position information and the first description information to the at least one participant so as to instruct the at least one participant to display a first hot spot in the displayed second partial panoramic image based on the first position information, and the first description information is displayed on the first hot spot of the second partial panoramic image.
And the hot spot display module is used for displaying the first hot spot and the first description information at the first hot spot on the displayed first partial panoramic image.
And the second information receiving module is used for receiving second position information of a second hot spot fed back by the first participant in the at least one participant and second description information input at the second hot spot at the at least one participant.
The hotspot display module is further configured to display the second hotspot and second description information at the second hotspot on the displayed first partial panoramic image.
The second information sending module is configured to send second location information of the second hot spot and second description information input at the second hot spot by the at least one participant to a second participant except the first participant in the at least one participant, so as to instruct the second participant to display a second hot spot in the displayed second partial panoramic image based on the second location information, and display the second description information on the second hot spot of the second partial panoramic image.
In one embodiment, as shown in fig. 9, the present disclosure further provides a panoramic conference control device 900 applied to at least one participant, said at least one participant being connected to a conference host, said device comprising:
An information receiving module 902, configured to receive, in response to receiving a viewing angle sharing instruction, the viewing angle information sent by the conference host;
a video data acquisition module 904, configured to acquire panoramic video stream data;
A display adjustment module 906, configured to determine a display window of at least one participant, and display, based on the display window and the view angle information, a second partial panoramic image generated based on the panoramic video stream data.
In one embodiment of the device, the display adjustment module is further configured to obtain a view angle information queue sent by the conference host, obtain view angle information in the view angle information queue according to a preset first time period and time sequence, and display a second part of panoramic images in the panoramic images generated by using the panoramic video stream data sequentially based on the obtained view angle information and the size of the display window.
The respective modules in the panoramic conference control device described above may be implemented in whole or in part by software, hardware, and combinations thereof. The above modules may be embedded in hardware or may be independent of a processor in the computer device, or may be stored in software in a memory in the computer device, so that the processor may call and execute operations corresponding to the above modules.
In one embodiment, a computer device is provided, which may be a terminal, and an internal structure diagram thereof may be as shown in fig. 10. The computer device includes a processor, a memory, a communication interface, a display screen, and an input device connected by a system bus. Wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program. The internal memory provides an environment for the operation of the operating system and computer programs in the non-volatile storage media. The communication interface of the computer device is used for carrying out wired or wireless communication with an external terminal, and the wireless mode can be realized through WIFI, a mobile cellular network, NFC (near field communication) or other technologies. The computer program, when executed by a processor, implements a panoramic conference control method. The display screen of the computer equipment can be a liquid crystal display screen or an electronic ink display screen, and the input device of the computer equipment can be a touch layer covered on the display screen, can also be keys, a track ball or a touch pad arranged on the shell of the computer equipment, and can also be an external keyboard, a touch pad or a mouse and the like.
Those skilled in the art will appreciate that the structures shown in FIG. 10 are only block diagrams of portions of structures associated with the disclosed aspects and are not limiting as to the computer device on which the disclosed aspects may be implemented, and that a particular computer device may include more or less components than those shown, or may combine some of the components, or have a different arrangement of components.
In one embodiment, a computer device is provided, comprising a memory and a processor, the memory having stored therein a computer program, the processor implementing the steps of any of the method embodiments described above when the computer program is executed.
In one embodiment, a computer readable storage medium is provided, having stored thereon a computer program which, when executed by a processor, implements the steps of any of the method embodiments described above.
In an embodiment, a computer program product is provided comprising a computer program which, when executed by a processor, implements the steps of any of the method embodiments described above.
Those skilled in the art will appreciate that implementing all or part of the above described methods may be accomplished by way of a computer program stored on a non-transitory computer readable storage medium, which when executed, may comprise the steps of the embodiments of the methods described above. Any reference to memory, database, or other medium used in embodiments provided by the present disclosure may include at least one of non-volatile and volatile memory. The non-volatile memory may include Read-Only Memory (ROM), magnetic tape, floppy disk, flash memory, optical memory, high-density embedded non-volatile memory, Resistive Random Access Memory (ReRAM), Magnetoresistive Random Access Memory (MRAM), Ferroelectric Random Access Memory (FRAM), Phase Change Memory (PCM), graphene memory, and the like. The volatile memory may include Random Access Memory (RAM) or external cache memory, and the like. By way of illustration, and not limitation, RAM can take various forms such as Static Random Access Memory (SRAM) or Dynamic Random Access Memory (DRAM). The databases referred to in the various embodiments provided by the present disclosure may include at least one of a relational database and a non-relational database. The non-relational database may include, but is not limited to, a blockchain-based distributed database, and the like. The processors involved in the embodiments provided by the present disclosure may be general-purpose processors, central processing units, graphics processors, digital signal processors, programmable logic devices, data processing logic based on quantum computing, etc., without limitation thereto.
The technical features of the above embodiments may be arbitrarily combined, and all possible combinations of the technical features in the above embodiments are not described for brevity of description, however, as long as there is no contradiction between the combinations of the technical features, they should be considered as the scope of the description.
The foregoing examples express only a few embodiments of the present disclosure, which are described specifically and in detail, but are not to be construed as limiting the scope of the present disclosure. It should be noted that variations and modifications can be made by those skilled in the art without departing from the spirit of the disclosure, and such variations and modifications are within the scope of the disclosure. Accordingly, the scope of the present disclosure should be determined by the appended claims.