CN108076307B - AR-based video conference system and AR-based video conference method - Google Patents
- Publication number: CN108076307B (application CN201810081347.1A)
- Authority
- CN
- China
- Prior art keywords
- participant
- conference
- scene
- human body
- actual
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N7/00—Television systems
- H04N7/14—Systems for two-way working
- H04N7/15—Conference systems
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T19/00—Manipulating 3D models or images for computer graphics
- G06T19/006—Mixed reality
Abstract
The invention discloses an AR-based video conference system and an AR-based video conference method. The system comprises a camera module, used for acquiring an initial conference scene of a target conference and acquiring the human body depth information of each participant; an image processing module, used for processing the human body depth information of each participant and extracting the human body contour map of each participant; an AR data processing module, used for processing the human body depth information of each participant to extract the actual participant scene position of each participant and adding it to the appointed participant position in the three-dimensional scene graph of the target conference; and an AP server module, used for synthesizing the human body contour map of each participant and the actual participant scene position of each participant so as to obtain the actual conference scene of the target conference for each participant. In this way, participants located in different places can meet together in one shared scene.
Description
Technical Field
The invention relates to the technical field of video conferences, in particular to an AR-based video conference system and an AR-based video conference method.
Background
Conference systems currently on the market mainly comprise traditional conferences and teleconferences. A traditional conference, in plain terms, is a face-to-face meeting of multiple people. At present, participants who join a conference by mobile phone or telephone have no way to communicate face to face, and the conference scene is fixed: users cannot customize their own scene, or the actual conference scene, according to their needs. To address this, the AR-based video conference display system proposed in this patent combines real and virtual scenes through image processing together with virtual reality and augmented reality techniques, and realizes a real participation scene over the mobile internet by means of AR and VR technology.
Therefore, how to design an AR-based video conference system becomes a technical problem to be solved urgently in the field.
Disclosure of Invention
The invention aims to solve at least one of the technical problems in the prior art, and provides an AR-based video conference system and an AR-based video conference method.
In order to achieve the above object, a first aspect of the present invention provides an AR-based video conference system, the system including a camera module, an image processing module, an AR data processing module, and an AP server module;
the camera module is used for acquiring an initial conference scene of a target conference and acquiring human depth information of each participant;
the image processing module is used for processing the human body depth information of each participant to extract the human body contour map of each participant;
the AR data processing module is used for processing the human body depth information of each participant to extract the actual participant scene position of each participant, and for adding the actual participant scene position of each participant to the appointed participant position in the initial conference scene of the target conference;
and the AP server module is used for synthesizing the human body outline drawing of each participant and the actual participant scene position of each participant so as to obtain the actual conference scene of the target conference of each participant.
Preferably, the system further comprises an audio processing module, and the audio processing module is configured to process the voice information of each participant in real time.
Preferably, the system further comprises a conference content storage module, and the conference content storage module is used for storing data in the system.
Preferably, the system further includes a projection module, and the projection module is configured to project and display the three-dimensional scene graph of each conference participant.
Preferably, the human body depth information comprises a human body depth image and a human body color image, and the camera module comprises a color camera and an infrared camera;
the color camera is used for acquiring the human body color image and an initial conference scene of the target conference;
the infrared camera is used for acquiring the human body depth image;
the image processing module is used for processing the human body color images of the participants to extract the human body contour map of the participants;
and the AR data processing module is used for processing the human body depth image of each participant so as to extract the actual participant scene position of each participant.
In a second aspect of the present invention, an AR-based video conference method is provided, the method comprising:
acquiring an initial conference scene of a target conference and acquiring human depth information of each participant;
processing the human body depth information of each participant to extract a human body contour map of each participant;
processing the human body depth information of each participant to extract the actual participant scene position of each participant, and adding the actual participant scene position of each participant to the designated participant position in the initial conference scene of the target conference;
and synthesizing the human body outline of each participant and the actual participant scene position of each participant to obtain the actual conference scene of the target conference of each participant.
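The synthesis in the last step can be pictured as pasting each participant's cut-out into the shared scene at the designated position. The following is a minimal illustrative sketch, assuming a toy 2D-grid scene representation and top-left anchoring; it is not the patented implementation, and all names are hypothetical:

```python
def synthesize(scene, contour, color_image, position):
    """Copy person pixels (contour == 1) from color_image into a copy of
    the scene grid, anchored at position = (row, col)."""
    out = [row[:] for row in scene]  # leave the original scene untouched
    r0, c0 = position
    for r, row in enumerate(contour):
        for c, inside in enumerate(row):
            # paste only pixels inside the contour that fall within the scene
            if inside and r0 + r < len(out) and c0 + c < len(out[0]):
                out[r0 + r][c0 + c] = color_image[r][c]
    return out
```

Running this once per participant, each at that participant's own designated position, would yield the combined actual conference scene that the final step describes.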
Preferably, the method further comprises:
and processing the voice information of each participant in real time.
Preferably, the method further comprises:
and storing the actual conference scene and the voice information of each participant.
Preferably, the method further comprises:
and projecting and displaying the three-dimensional scene graph of each conference participant.
Preferably, the human depth information includes a human depth image and a human color image;
the step of processing the human body depth information of each participant to extract the human body profile of each participant comprises the following steps:
processing the human body color image of each participant to extract a human body contour map of each participant;
the step of processing the human body depth information of each participant to extract the actual participant scene position of each participant comprises:
and processing the human body depth image of each participant to extract the actual participant scene position of each participant.
In the AR-based video conference system, the camera module first acquires the initial conference scene of a target conference and the human body depth information of each participant. The image processing module then processes the human body depth information of each participant to extract the human body contour map of each participant. Next, the AR data processing module processes the human body depth information of each participant to extract the actual participant scene position of each participant and adds it to the appointed participant position in the initial conference scene of the target conference. Finally, the AP server module synthesizes the human body contour map of each participant with the actual participant scene position of each participant to obtain the actual conference scene of the target conference for each participant. In this way, through three-dimensional virtual-real combination, the actual participant scene position of each participant is added to the appointed participant position in the initial conference scene of the target conference to produce the actual conference scene, so that participants in different places can meet together in one scene. Compared with a traditional teleconference, the AR-based video conference system therefore overcomes the defect that a user cannot customize a conference scene or see a real-time scene.
The AR-based video conference method first acquires the initial conference scene of a target conference and the human body depth information of each participant. It then processes the human body depth information of each participant to extract the human body contour map of each participant. Next, it processes the human body depth image of each participant to extract the actual participant scene position of each participant and adds it to the appointed participant position in the initial conference scene of the target conference. Finally, it synthesizes the human body contour map of each participant with the actual participant scene position of each participant to obtain the actual conference scene of the target conference for each participant. In this way, through three-dimensional virtual-real combination, the actual participant scene position of each participant is added to the appointed participant position in the initial conference scene of the target conference to produce the actual conference scene, so that participants in different places can meet together in one scene. Compared with a traditional teleconference, the AR-based video conference method therefore overcomes the defect that a user cannot customize a conference scene or see a real-time scene.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this specification, illustrate embodiments of the invention and together with the description serve to explain the principles of the invention and not to limit the invention. In the drawings:
fig. 1 is a schematic structural diagram of an AR-based video conference system according to a first embodiment of the present invention;
fig. 2 is a flowchart of an AR-based video conferencing method according to a second embodiment of the present invention.
Description of the reference numerals
100: an AR-based video conferencing system;
110: a camera module;
111: a color camera;
112: an infrared camera;
120: an image processing module;
130: an AR data processing module;
140: an AP server module;
150: an audio processing module;
160: a conference content storage module;
170: and a projection module.
Detailed Description
The following detailed description of embodiments of the invention refers to the accompanying drawings. It should be understood that the detailed description and specific examples are intended to illustrate and explain the present invention, not to limit it.
Referring to fig. 1, a first aspect of the present invention relates to an AR-based video conferencing system 100, which includes a camera module 110, an image processing module 120, an AR data processing module 130, and an AP server module 140.
The camera module 110 is configured to obtain an initial conference scene of a target conference and obtain human depth information of each participant.
The image processing module 120 is configured to process the human depth information of each participant to extract a human body contour map of each participant.
The AR data processing module 130 is configured to process the human depth image of each participant to extract an actual conference scene position of each participant, and add the actual conference scene position of each participant to an assigned conference position in an initial conference scene of the target conference.
The AP server module 140 is configured to synthesize the human body contour map of each participant and the actual conference scene position of each participant to obtain an actual conference scene of the target conference of each participant.
In the AR-based video conference system 100 of the present embodiment, the camera module 110 first acquires the initial conference scene of a target conference and the human body depth information of each participant. The image processing module 120 then processes the human body depth information of each participant to extract the human body contour map of each participant. Next, the AR data processing module 130 processes the human body depth information of each participant to extract the actual participant scene position of each participant and adds it to the appointed participant position in the initial conference scene of the target conference. Finally, the AP server module 140 synthesizes the human body contour map of each participant with the actual participant scene position of each participant to obtain the actual conference scene of the target conference for each participant. In this way, through three-dimensional virtual-real combination, the actual participant scene position of each participant is added to the appointed participant position in the initial conference scene of the target conference to produce the actual conference scene, so that participants in different places can meet together in one scene. Compared with a traditional teleconference, the AR-based video conference system 100 of the present embodiment therefore overcomes the defect that a user cannot customize a conference scene or see a real-time scene.
Preferably, the system further comprises an audio processing module 150, and the audio processing module 150 is configured to process the voice information of each conference participant in real time.
Specifically, the audio processing module 150 may remove interfering noise from the main voice content of each participant and restore the natural quality of the speech, ensuring a better conference effect.
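As an illustration of this kind of real-time cleanup, a simple noise gate suppresses samples below a loudness threshold while passing the dominant speech content. This is a hedged sketch under the assumption of normalized floating-point samples; the patent does not specify the actual noise-removal algorithm, and the threshold value here is arbitrary:

```python
def noise_gate(samples, threshold=0.05):
    """Zero out samples whose magnitude is below the threshold,
    suppressing low-level background hiss while keeping speech."""
    return [s if abs(s) >= threshold else 0.0 for s in samples]
```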
Preferably, the system further comprises a conference content storage module 160, and the conference content storage module 160 is used for storing data in the system.
For example, the conference content storage module 160 may be configured to store an actual conference scene of the target conference, voice information of each participant, an initial conference scene of the target conference, and the like. This may facilitate review of the meeting content at a later time.
Preferably, the system further includes a projection module 170, and the projection module 170 is configured to project and display the three-dimensional scene graph of each conference participant.
That is, the projection module 170 may project the processed and synthesized actual conference scene (for example, onto a display screen or another device with a display function), so as to improve the practical effect of the conference.
Preferably, the human depth information includes a human depth image and a human color image, and the camera module 110 includes a color camera 111 and an infrared camera 112.
The color camera 111 is used for acquiring the human body color image and an initial conference scene of the target conference.
The infrared camera 112 is used for acquiring the human body depth image.
The image processing module 120 is configured to process the human body color image of each participant to extract the human body contour map of each participant. That is to say, the image processing module 120 performs image data analysis, a matting algorithm, and an image synthesis algorithm on the human body color image of each participant, matting out the person so as to obtain an image representing that participant.
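A minimal sketch of such matting follows, assuming a known reference background image and RGB-tuple pixels (both assumptions for illustration; the module's actual matting algorithm is not disclosed at this level of detail):

```python
def matte(color_image, background, threshold=30):
    """Keep pixels that differ clearly from the reference background
    (the person); replace background pixels with None."""
    def dist(a, b):
        # Manhattan distance between two RGB tuples
        return sum(abs(x - y) for x, y in zip(a, b))
    return [
        [px if dist(px, bg) > threshold else None
         for px, bg in zip(row, bg_row)]
        for row, bg_row in zip(color_image, background)
    ]
```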
The AR data processing module 130 is configured to process the human body depth image of each participant to extract the actual participant scene position of each participant. That is to say, the AR data processing module 130 performs data analysis, a noise reduction algorithm, and a depth map extraction algorithm on the human body depth image of each participant, obtains a character skeleton by synthesizing the depth map data, and derives the actual participant scene position of each participant from that skeleton.
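The skeleton-based position extraction can be pictured with a toy reduction of a person's depth mask to two joint estimates. This is purely illustrative; the module's real noise-reduction and skeleton algorithms are not specified in the patent, and the function name is hypothetical:

```python
def coarse_skeleton(mask):
    """mask: 2D list of 0/1 flags marking person pixels in the depth image.
    Returns (head, torso) joint estimates as (row, col), or None if empty."""
    pts = [(r, c) for r, row in enumerate(mask)
           for c, v in enumerate(row) if v]
    if not pts:
        return None
    head = min(pts)  # topmost person pixel (row-major minimum)
    torso = (sum(r for r, _ in pts) // len(pts),
             sum(c for _, c in pts) // len(pts))  # centroid of the mask
    return head, torso
```

A point such as the torso centroid is then what would be mapped to the participant's actual scene position.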
It should be noted that if another person joins the conference later, the AP server module 140 may add that member to the conference scene and then send the synthesized actual conference scene to every participant.
One specific conferencing flow in the AR-based video conferencing system 100 is described in detail below:
and (3) initiating a conference:
the conference host initiates the conference, and the main conference site (the conference site where the initial conference scene of the target conference is located) is within the field angles of the color camera 111 and the infrared camera 112, so as to ensure that all the conference participants can be within the shooting range.
And (4) joining the conference:
A person outside the main conference site can initiate a conference connection request through a mobile terminal, with the data transmitted over the mobile internet. When the request is initiated, the mobile terminal projects the actual scene of the main conference site into the open area around the person, who can then walk into the conference through this virtual scene; that joining person is in turn projected into the main conference site.
The camera module 110 collects data:
The infrared camera 112 emits infrared light, and the infrared receiver captures the scene covered in the conference site. After noise processing, an algorithm extracts the character skeleton, and information such as the human body's joints, together with the depth information, is fed back to the AP server module 140, making it convenient for the AP server module 140 to combine this with the data provided by the color image and display a three-dimensional synthetic figure of each person in the conference site.
The color camera 111 captures the conference-site scene and applies effect processing to the captured image: the people are extracted from it through image processing, virtualized during spatial image processing, and transmitted to the AP server module 140, providing the material for the subsequent person synthesis in the AP server module 140.
The audio processing module 150:
A microphone records the sound channel into the audio processing module 150, which processes the sound signal and inputs the processed data into the AP server module 140.
AP server module 140:
The AP server module 140 receives the data collected by the camera module 110 and synthesizes it together with the data processed by the audio processing module 150.
Conference content storage module 160:
the data synthesized by the AP server module 140 is stored.
The projection module 170:
The data processed by the AP server module 140 is projected by the projection module 170, and the data processed by the audio processing module 150 is played through a speaker. The result is a real-time three-dimensional conference site with sound: all participants can hold the conference in the video conference system through their respective mobile terminals, each with the feeling of meeting in the same conference site.
When the whole conference site is projected, a participant can walk directly into the site, and this is fed back to the other members in the site, so that every member of the conference has the feeling of being personally on the scene.
In a second aspect of the present invention, an AR-based video conference method S100 is provided, where the method S100 includes:
and S110, acquiring an initial conference scene of the target conference and acquiring human depth information of each conference participant.
And S120, processing the human body depth information of each participant to extract the human body contour map of each participant.
And S130, processing the human body depth information of each participant to extract the actual conference scene position of each participant, and adding the actual conference scene position of each participant to the appointed conference position in the initial conference scene of the target conference.
And S140, synthesizing the human body outline of each participant and the actual conference scene position of each participant to obtain the actual conference scene of the target conference of each participant.
In the AR-based video conference method S100 of this embodiment, the initial conference scene of a target conference and the human body depth information of each participant are first acquired. The human body depth information of each participant is then processed to extract the human body contour map of each participant. Next, the human body depth information of each participant is processed to extract the actual participant scene position of each participant, which is added to the appointed participant position in the initial conference scene of the target conference. Finally, the human body contour map of each participant is synthesized with the actual participant scene position of each participant to obtain the actual conference scene of the target conference for each participant. In this way, through three-dimensional virtual-real combination, participants in different places can meet together in one scene. Compared with a traditional teleconference, the AR-based video conference method S100 of this embodiment therefore overcomes the defect that a user cannot customize a conference scene or see a real-time scene.
Preferably, the method S100 further includes:
and processing the voice information of each participant in real time.
Specifically, the audio processing module 150 described above can be used to remove interfering noise from the main voice content of each participant and restore the natural quality of the speech, ensuring a better conference effect.
Preferably, the method S100 further includes:
and storing the actual conference scene and the voice information of each participant.
For example, the conference content storage module 160 may be used to store an actual conference scene of the target conference, voice information of each participant, an initial conference scene of the target conference, and the like. This may facilitate review of the meeting content at a later time.
Preferably, the method S100 further includes:
and projecting and displaying the three-dimensional scene graph of each conference participant.
That is, the projection module 170 may be used to project the processed and synthesized actual conference scene (for example, onto a display screen or another device with a display function), so as to improve the practical effect of the conference.
Preferably, the human depth information includes a human depth image and a human color image;
the step S120 includes:
and processing the human body color image of each participant to extract the human body contour map of each participant.
The step S130 includes:
and processing the human body depth image of each participant to extract the actual participant scene position of each participant.
It will be understood that the above embodiments are merely exemplary embodiments taken to illustrate the principles of the present invention, which is not limited thereto. It will be apparent to those skilled in the art that various modifications and improvements can be made without departing from the spirit and substance of the invention, and these modifications and improvements are also considered to be within the scope of the invention.
Claims (8)
1. An AR-based video conference system is characterized in that the system comprises a camera module, an image processing module, an AR data processing module and an AP server module;
the camera module is used for acquiring an initial conference scene of a target conference and acquiring human depth information of each participant;
the human body depth information comprises a human body depth image and a human body color image, and the camera module comprises a color camera and an infrared camera;
the color camera is used for acquiring the human body color image and an initial conference scene of the target conference;
the infrared camera is used for acquiring the human body depth image;
the image processing module is used for carrying out image data analysis, matting algorithm and image synthesis algorithm processing on the human body color image of each participant so as to extract a human body contour map of each participant;
the AR data processing module is used for carrying out data analysis, noise reduction processing and depth map extraction algorithm processing on the human body depth image of each participant so as to extract the actual participant scene position of each participant and add the actual participant scene position of each participant to the appointed participant position in the initial conference scene of the target conference;
the AP server module is used for synthesizing the human body profile of each participant and the actual participant scene position of each participant to obtain the actual conference scene of the target conference of each participant, and is also used for adding the subsequent participants into the actual conference scene and sending the synthesized actual conference scene to the subsequent participants.
2. The AR-based videoconferencing system of claim 1, further comprising an audio processing module for processing voice information of each participant in real time.
3. The AR-based videoconferencing system of claim 2, further comprising a meeting content storage module for storing data in the system.
4. The AR-based video conferencing system of claim 3, wherein the system further comprises a projection module for projecting the three-dimensional scene graph of each participant.
5. An AR-based video conferencing method, the method comprising:
acquiring an initial conference scene of a target conference and acquiring human depth information of each participant;
the human body depth information comprises a human body depth image and a human body color image;
carrying out image data analysis, matting algorithm and image synthesis algorithm processing on the human body color image of each participant to extract the human body contour map of each participant;
carrying out data analysis, noise reduction processing and depth map extraction algorithm processing on the human body depth image of each participant to extract the actual participant scene position of each participant, and adding the actual participant scene position of each participant to the appointed participant position in the initial conference scene of the target conference;
and synthesizing the human body contour maps of the participants and the actual participant scene positions of the participants to obtain the actual conference scene of the target conference of each participant, adding subsequent participants into the actual conference scene, and sending the synthesized actual conference scene to the subsequent participants.
6. The AR based video conferencing method of claim 5, wherein the method further comprises:
and processing the voice information of each participant in real time.
7. The AR based video conferencing method of claim 6, wherein the method further comprises:
and storing the actual conference scene and the voice information of each participant.
8. The AR-based video conferencing method of claim 7, wherein the method further comprises:
and projecting and displaying the three-dimensional scene graph of each conference participant.
Priority Applications (1)

| Application Number | Priority Date | Filing Date | Title |
| --- | --- | --- | --- |
| CN201810081347.1A (CN108076307B) | 2018-01-26 | 2018-01-26 | AR-based video conference system and AR-based video conference method |
Publications (2)
Publication Number | Publication Date |
---|---|
CN108076307A CN108076307A (en) | 2018-05-25 |
CN108076307B true CN108076307B (en) | 2021-01-05 |
Family
ID=62157186
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810081347.1A Active CN108076307B (en) | 2018-01-26 | 2018-01-26 | AR-based video conference system and AR-based video conference method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108076307B (en) |
Families Citing this family (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109005443B (en) * | 2018-08-24 | 2021-05-28 | 重庆虚拟实境科技有限公司 | Real-person remote interaction method for VR-AR all-in-one machine terminal and system based on same |
CN110060351B (en) * | 2019-04-01 | 2023-04-07 | 叠境数字科技(上海)有限公司 | RGBD camera-based dynamic three-dimensional character reconstruction and live broadcast method |
CN110267029A (en) * | 2019-07-22 | 2019-09-20 | 广州铭维软件有限公司 | Remote holographic figure display technique based on AR glasses |
CN112770074B (en) * | 2019-11-01 | 2024-03-12 | 中兴通讯股份有限公司 | Video conference realization method, device, server and computer storage medium |
WO2021175920A1 (en) | 2020-03-06 | 2021-09-10 | Telefonaktiebolaget Lm Ericsson (Publ) | Methods providing video conferencing with adjusted/modified video and related video conferencing nodes |
CN111582822A (en) * | 2020-05-07 | 2020-08-25 | 维沃移动通信有限公司 | AR-based conference method and device and electronic equipment |
WO2022001635A1 (en) * | 2020-07-03 | 2022-01-06 | 海信视像科技股份有限公司 | Display device and display method |
CN112055167A (en) * | 2020-09-18 | 2020-12-08 | 深圳随锐云网科技有限公司 | Remote collaboration three-dimensional modeling system and method based on 5G cloud video conference |
CN113489938B (en) * | 2020-10-28 | 2024-04-12 | 海信集团控股股份有限公司 | Virtual conference control method, intelligent device and terminal device |
CN113068003A (en) * | 2021-01-29 | 2021-07-02 | 深兰科技(上海)有限公司 | Data display method and device, intelligent glasses, electronic equipment and storage medium |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101610421A (en) * | 2008-06-17 | 2009-12-23 | 深圳华为通信技术有限公司 | Video communication method, Apparatus and system |
CN104349111A (en) * | 2013-07-24 | 2015-02-11 | 华为技术有限公司 | Meeting place creating method and system of video conference |
Family Cites Families (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103209313A (en) * | 2012-01-16 | 2013-07-17 | 华为技术有限公司 | Image processing method, conference terminal, conference place electronic system and video conference system |
CN103034330B (en) * | 2012-12-06 | 2015-08-12 | 中国科学院计算技术研究所 | A kind of eye interaction method for video conference and system |
WO2015072195A1 (en) * | 2013-11-13 | 2015-05-21 | ソニー株式会社 | Display control device, display control method and program |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108076307B (en) | AR-based video conference system and AR-based video conference method | |
US9641585B2 (en) | Automated video editing based on activity in video conference | |
CN103595953B (en) | Method and apparatus for controlling video capture | |
US11076127B1 (en) | System and method for automatically framing conversations in a meeting or a video conference | |
WO2018214746A1 (en) | Video conference realization method, device and system, and computer storage medium | |
JP2003506927A (en) | Method and apparatus for allowing video conferencing participants to appear in front of an opponent user with focus on the camera | |
US20030234859A1 (en) | Method and system for real-time video communication within a virtual environment | |
JPH07154763A (en) | Seated video conferencing equipment | |
US20090207233A1 (en) | Method and system for videoconference configuration | |
EP1912175A1 (en) | System and method for generating a video signal | |
WO2019096027A1 (en) | Communication processing method, terminal, and storage medium | |
JPH07255044A (en) | Animated electronic conference room and video conference system and method | |
CN107578777B (en) | Text information display method, device and system, and voice recognition method and device | |
CN111064919A (en) | VR (virtual reality) teleconference method and device | |
CN104144315B (en) | Display method for a multipoint video conference and multipoint video conference system | |
US10979666B2 (en) | Asymmetric video conferencing system and method | |
EP4106326A1 (en) | Multi-camera automatic framing | |
CN104349111A (en) | Meeting place creating method and system of video conference | |
US12333854B2 (en) | Systems and methods for correlating individuals across outputs of a multi-camera system and framing interactions between meeting participants | |
CN110401810A (en) | Virtual screen processing method, device, system, electronic equipment and storage medium | |
US11831454B2 (en) | Full dome conference | |
CN109788364B (en) | Video call interaction method and device and electronic equipment | |
CN105933637A (en) | Video communication method and system | |
US11792355B1 (en) | Using synchronous recording by front and back smartphone cameras for creating immersive video content and for video conferencing | |
JP3610423B2 (en) | Video display system and method for improving its presence |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||