CN104244086A - Video real-time splicing device and method based on real-time conversation semantic analysis - Google Patents
Video real-time splicing device and method based on real-time conversation semantic analysis
- Publication number
- CN104244086A CN104244086A CN201410445784.9A CN201410445784A CN104244086A CN 104244086 A CN104244086 A CN 104244086A CN 201410445784 A CN201410445784 A CN 201410445784A CN 104244086 A CN104244086 A CN 104244086A
- Authority
- CN
- China
- Prior art keywords
- video
- audio
- module
- real
- unit
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Landscapes
- Television Signal Processing For Recording (AREA)
- Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)
Abstract
The invention discloses a real-time video splicing device and method based on real-time conversation semantic analysis. The method includes the following steps: a video material library and an audio material library, both associated with standard semantics, are established; a semantic analysis module performs semantic analysis on the input, determines from that analysis which video units and audio units should be used, and generates corresponding instructions that are sent to a real-time splicing module for processing; the real-time splicing module retrieves the corresponding video units and audio units from the video material library and the audio material library according to those instructions, performs picture splicing and audio synthesis as the instructions require, and forms complete audio-video segments that are provided to a player for playback. By splicing video and audio in real time according to the semantics, the device and method enable real-time interaction with on-site visitors, so that the content visitors want to learn about can appear in the played audio and video in a timely manner, thereby greatly improving the publicity effect.
Description
Technical field
The present invention relates to the field of computer technology, and in particular to a real-time video splicing apparatus based on real-time conversation semantic analysis and a method thereof.
Background technology
With the rapid development of the economy and continuous technological progress, governments, enterprises, and institutions now commonly use promotional films to introduce local development or products, and tourism departments introduce local scenic spots through films. However, existing films have fixed content and fixed length and can only be played one-way; they lack real-time interaction, so the parts that viewers wish to learn more about cannot appear in the film in time, and the publicity effect therefore falls short of expectations.
Summary of the invention
The object of the present invention is to overcome the deficiencies of the prior art by providing a real-time video splicing apparatus and method based on real-time conversation semantic analysis. By splicing video and audio in real time, the apparatus enables real-time interaction with on-site visitors, so that the content visitors want to learn about can appear in the played audio and video in a timely manner, thereby greatly improving the publicity effect.
The technical solution adopted by the present invention to solve the technical problem is a real-time video splicing apparatus based on real-time conversation semantic analysis, comprising:
A video material library, wherein the material library stores multiple video units formed in a preset manner, and each video unit is associated with a material content index, a material usage mode index, and a material usage occasion index that can be read by the semantic analysis module;
An audio material library, wherein the material library stores multiple audio units formed in a preset manner, and each audio unit is associated with a material content index, a material usage mode index, and a material usage occasion index that can be read by the semantic analysis module;
At least one input unit, wherein the input unit is used for identification and for receiving voice or text input;
A semantic analysis module, wherein the module performs semantic analysis on the input, determines which video units in the video material library should be spliced into the corresponding video and in what manner, determines which audio units in the audio material library should be combined into the corresponding audio and in what manner, and generates corresponding instructions that are sent to the real-time splicing module for processing;
A real-time splicing module, wherein the module retrieves the corresponding video units and audio units from the video material library and the audio material library according to the instructions of the semantic analysis module, performs picture splicing and sound synthesis as required by the instructions, and forms complete audio-video segments for playback by a player.
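As a non-authoritative illustration of the material libraries described above, the following Python sketch shows one possible way to associate each video or audio unit with a content index, a usage mode index, and a usage occasion index, and to look units up by those indexes. All class names, field names, and index values are hypothetical and are not taken from the patent.

```python
from dataclasses import dataclass, field

@dataclass
class MaterialUnit:
    """One video or audio unit stored in a material library."""
    file_path: str            # location of the pre-produced clip
    content_index: str        # what the material is about, e.g. "scenic_spot_intro"
    usage_mode_index: str     # how it may be spliced, e.g. "opening", "body"
    usage_occasion_index: str # where it fits, e.g. "exhibition_hall"

@dataclass
class MaterialLibrary:
    """Material library searchable by the three indexes read by the semantic analysis module."""
    units: list[MaterialUnit] = field(default_factory=list)

    def add(self, unit: MaterialUnit) -> None:
        self.units.append(unit)

    def find(self, content: str, mode: str | None = None,
             occasion: str | None = None) -> list[MaterialUnit]:
        # Filter by content index, and optionally by usage mode and usage occasion.
        return [u for u in self.units
                if u.content_index == content
                and (mode is None or u.usage_mode_index == mode)
                and (occasion is None or u.usage_occasion_index == occasion)]

# Hypothetical usage: a video library with two pre-produced units.
video_library = MaterialLibrary()
video_library.add(MaterialUnit("clips/intro.mp4", "scenic_spot_intro", "opening", "exhibition_hall"))
video_library.add(MaterialUnit("clips/detail.mp4", "scenic_spot_detail", "body", "exhibition_hall"))
print(video_library.find("scenic_spot_intro", occasion="exhibition_hall"))
```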
The real-time splicing module comprises at least a picture splicing submodule, a picture fusion submodule, a scene cutting submodule, and a sound synthesis submodule, which respectively implement picture splicing, picture fusion, scene cutting, and sound synthesis.
The audio material library further comprises a sound generation submodule.
The video material library further comprises a video generation submodule.
The input unit comprises an identification module, which comprises one of the following, or a combination of two or more of them:
Biometric information identification module;
Action information identification module;
Digital information identification module.
The input unit comprises a voice acquisition module for collecting voice input.
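A minimal sketch of how such a voice acquisition module might capture a visitor's question and return the recognized text to the semantic analysis module, assuming the third-party speech_recognition package; the package choice and the language setting are illustrative assumptions, not part of the patent.

```python
import speech_recognition as sr  # third-party package, chosen for illustration only

def acquire_voice_input(language: str = "zh-CN") -> str:
    """Record one utterance from the microphone and return the recognized text."""
    recognizer = sr.Recognizer()
    with sr.Microphone() as source:
        recognizer.adjust_for_ambient_noise(source)  # compensate for background noise
        audio = recognizer.listen(source)
    # Hand the audio to an online recognizer; any speech-to-text engine could be used here.
    return recognizer.recognize_google(audio, language=language)

# The returned text would then be fed to the semantic analysis module.
```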
A real-time video splicing method based on real-time conversation semantic analysis, comprising:
A video material library is preset, wherein the material library stores multiple video units formed in a preset manner, and each video unit is associated with a material content index, a material usage mode index, and a material usage occasion index that can be read by the semantic analysis module;
An audio material library is preset, wherein the material library stores multiple audio units formed in a preset manner, and each audio unit is associated with a material content index, a material usage mode index, and a material usage occasion index that can be read by the semantic analysis module;
A semantic analysis module is preset, which is used to perform semantic analysis, select the required video units and audio units according to the semantic analysis, and generate splicing instructions;
A real-time splicing module is preset, which is used to splice the selected video units and audio units into audio-video segments;
The semantic analysis module performs semantic analysis on the input, determines which video units in the video material library should be spliced into the corresponding video and in what manner, determines which audio units in the audio material library should be combined into the corresponding audio and in what manner, and generates corresponding instructions that are sent to the real-time splicing module for processing; according to the instructions of the semantic analysis module, the real-time splicing module retrieves the corresponding video units and audio units from the video material library and the audio material library, performs picture splicing and sound synthesis as required by the instructions, forms complete audio-video segments, and supplies them to a playback device for playing.
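To make the flow of the method concrete, the following Python sketch (continuing the hypothetical MaterialLibrary above) shows a simplified semantic analysis step that maps keywords in the visitor's input to splicing instructions, and a splicing step that resolves those instructions against the libraries. The keyword table and instruction format are assumptions for illustration only, not the patent's actual semantic analysis.

```python
from dataclasses import dataclass

@dataclass
class SpliceInstruction:
    """One instruction sent from the semantic analysis module to the real-time splicing module."""
    content_index: str   # which material to use
    usage_mode: str      # how to use it, e.g. "opening" or "body"

# Hypothetical keyword table mapping visitor wording to content indexes.
KEYWORD_TABLE = {
    "scenery": "scenic_spot_intro",
    "history": "history_intro",
    "price":   "ticket_price",
}

def analyze(user_input: str) -> list[SpliceInstruction]:
    """Very simplified semantic analysis: pick content indexes by keyword match."""
    instructions = [SpliceInstruction("welcome", "opening")]
    for keyword, content in KEYWORD_TABLE.items():
        if keyword in user_input.lower():
            instructions.append(SpliceInstruction(content, "body"))
    return instructions

def splice(instructions, video_library, audio_library):
    """Resolve instructions to concrete clip paths, in playback order."""
    video_paths, audio_paths = [], []
    for inst in instructions:
        video_paths += [u.file_path for u in video_library.find(inst.content_index, mode=inst.usage_mode)]
        audio_paths += [u.file_path for u in audio_library.find(inst.content_index, mode=inst.usage_mode)]
    return video_paths, audio_paths
```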
Compared with the prior art, the beneficial effects of the invention are as follows:
A video material library and an audio material library are established. The video material library contains multiple video units formed in a preset manner, each associated with a material content index, a material usage mode index, and a material usage occasion index that can be read by the semantic analysis module, and the audio material library likewise contains multiple audio units associated with the same three indexes. The semantic analysis module then performs semantic analysis on the input, determines which video units in the video material library should be spliced into the corresponding video and which audio units in the audio material library should be combined into the corresponding audio, and generates corresponding instructions for the real-time splicing module. According to these instructions, the real-time splicing module retrieves the corresponding video units and audio units from the two libraries, performs picture splicing and sound synthesis as required, forms complete audio-video segments, and supplies them to the playback device for playing. By splicing video and audio in real time, the device and method enable real-time interaction with on-site visitors, so that the content visitors want to learn about can appear in the played audio and video in a timely manner, thereby greatly improving the publicity effect.
The present invention is described in further detail below with reference to the drawings and embodiments; however, the real-time video splicing apparatus and method based on real-time conversation semantic analysis of the present invention are not limited to the embodiments.
Description of the drawings
Fig. 1 is a schematic structural diagram of the device of the present invention;
Fig. 2 is a schematic structural diagram of the real-time splicing module of the device of the present invention.
Detailed description of embodiments
Embodiment
As shown in Fig. 1 and Fig. 2, a real-time video splicing apparatus based on real-time conversation semantic analysis according to the present invention comprises:
A video material library 1, wherein the material library 1 stores multiple video units formed in a preset manner, and each video unit is associated with a material content index, a material usage mode index, and a material usage occasion index that can be read by the semantic analysis module;
An audio material library 2, wherein the material library 2 stores multiple audio units formed in a preset manner, and each audio unit is associated with a material content index, a material usage mode index, and a material usage occasion index that can be read by the semantic analysis module;
At least one input unit, wherein the input unit is used for identification and for receiving voice or text input;
A semantic analysis module 3, wherein the module 3 performs semantic analysis on the input, determines which video units in the video material library should be spliced into the corresponding video and in what manner, determines which audio units in the audio material library should be combined into the corresponding audio and in what manner, and generates corresponding instructions that are sent to the real-time splicing module for processing;
A real-time splicing module 4, wherein the module 4 retrieves the corresponding video units and audio units from the video material library 1 and the audio material library 2 according to the instructions of the semantic analysis module, performs picture splicing and sound synthesis as required by the instructions, and forms complete audio-video segments for playback by a player.
The real-time splicing module 4 comprises at least a picture splicing submodule 41, a picture fusion submodule 42, a scene cutting submodule 43, and a sound synthesis submodule 44, which respectively implement picture splicing, picture fusion, scene cutting, and sound synthesis. Mature techniques already exist for picture splicing, picture fusion, scene cutting, and sound synthesis, so they are not described in detail here.
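Since mature techniques exist for these operations, one common off-the-shelf way to concatenate the selected clips is ffmpeg's concat demuxer, invoked here from Python. This is only one possible realization of the picture splicing submodule, not the patent's implementation, and it assumes all clips share the same codec and resolution so that stream copying is valid.

```python
import subprocess
import tempfile

def concat_clips(clip_paths: list[str], output_path: str) -> None:
    """Concatenate pre-encoded clips with ffmpeg's concat demuxer (no re-encoding)."""
    # Write the clip list in the format expected by the concat demuxer.
    with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False) as list_file:
        for path in clip_paths:
            list_file.write(f"file '{path}'\n")
        list_name = list_file.name
    subprocess.run(
        ["ffmpeg", "-y", "-f", "concat", "-safe", "0",
         "-i", list_name, "-c", "copy", output_path],
        check=True,
    )

# Hypothetical usage with the paths returned by the splicing step:
# concat_clips(["clips/intro.mp4", "clips/detail.mp4"], "out/spliced.mp4")
```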
The audio material library 2 further comprises a sound generation submodule.
The video material library 1 further comprises a video generation submodule.
The input unit comprises an identification module, which comprises one of the following, or a combination of two or more of them:
Biometric information identification module 51;
Action information identification module 52;
Digital information identification module 53;
Mature existing technologies can be applied for the biometric information identification module 51, the action information identification module 52, and the digital information identification module 53 respectively, so they are not described in detail here.
The input unit further comprises a voice acquisition module 54 for collecting voice input, and an input module 55 that supports various input modes such as keyboard input or touch-screen input.
A real-time video splicing method based on real-time conversation semantic analysis according to the present invention comprises:
A video material library 1 is preset, wherein the material library stores multiple video units formed in a preset manner, and each video unit is associated with a material content index, a material usage mode index, and a material usage occasion index that can be read by the semantic analysis module;
An audio material library 2 is preset, wherein the material library stores multiple audio units formed in a preset manner, and each audio unit is associated with a material content index, a material usage mode index, and a material usage occasion index that can be read by the semantic analysis module;
A semantic analysis module 3 is preset, which is used to perform semantic analysis, select the required video units and audio units according to the semantic analysis, and generate splicing instructions;
A real-time splicing module 4 is preset, which is used to splice the selected video units and audio units into audio-video segments;
The semantic analysis module 3 performs semantic analysis on the input, determines which video units in the video material library 1 should be spliced into the corresponding video and in what manner, determines which audio units in the audio material library 2 should be combined into the corresponding audio and in what manner, and generates corresponding instructions that are sent to the real-time splicing module 4 for processing; according to the instructions of the semantic analysis module, the real-time splicing module 4 retrieves the corresponding video units and audio units from the video material library 1 and the audio material library 2, performs picture splicing and sound synthesis as required by the instructions, forms complete audio-video segments, and supplies them to a playback device for playing.
A concrete example is given below to illustrate the real-time video splicing apparatus and method based on real-time conversation semantic analysis of the present invention.
In an exhibition hall, each visitor is given a smartphone. The real-time splicing module 4 splices the opening footage of the film in proportion from video segments of a presenter speaking and waiting silently; for the audio material, a sound generation tool generates speech asking the visitor what content they want to learn about and how much time they want to spend. The visitor makes a choice on the phone, which can be entered by voice, text, or key press. The semantic analysis module 3 analyzes the user's input, determines the user's selection, and sends the result to the real-time splicing module 4, which splices a suitable film according to the user's selection and plays it. In this way, the user's question can be answered accurately in the form of a film in response to the user's query, allowing the film material to have a much greater interactive effect.
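Tying the example together, a hypothetical end-to-end loop for the exhibition-hall scenario might look as follows; every function reused here comes from the earlier sketches and remains an illustrative assumption rather than the patent's actual implementation.

```python
def exhibition_loop(video_library, audio_library):
    """One interaction round: ask, analyze, splice, and hand off for playback."""
    # 1. Collect the visitor's choice (voice here; text or key press would also work).
    user_input = acquire_voice_input()

    # 2. Semantic analysis turns the input into splicing instructions.
    instructions = analyze(user_input)

    # 3. The splicing step resolves instructions against the libraries and concatenates the clips.
    video_paths, _audio_paths = splice(instructions, video_library, audio_library)
    concat_clips(video_paths, "out/answer.mp4")

    # 4. The spliced film is handed to the player (player integration not shown).
    return "out/answer.mp4"
```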
The above embodiment is merely intended to further illustrate the real-time video splicing apparatus and method based on real-time conversation semantic analysis of the present invention; however, the present invention is not limited to this embodiment. Any simple modification, equivalent variation, or alteration made to the above embodiment according to the technical spirit of the present invention falls within the protection scope of the technical solution of the present invention.
Claims (6)
1. A real-time video splicing apparatus based on real-time conversation semantic analysis, characterized by comprising:
A video material library, wherein the material library stores multiple video units formed in a preset manner, and each video unit is associated with a material content index, a material usage mode index, and a material usage occasion index that can be read by the semantic analysis module;
An audio material library, wherein the material library stores multiple audio units formed in a preset manner, and each audio unit is associated with a material content index, a material usage mode index, and a material usage occasion index that can be read by the semantic analysis module;
At least one input unit, wherein the input unit is used for identification and for receiving voice or text input;
A semantic analysis module, wherein the module performs semantic analysis on the input, determines which video units in the video material library should be spliced into the corresponding video and in what manner, determines which audio units in the audio material library should be combined into the corresponding audio and in what manner, and generates corresponding instructions that are sent to the real-time splicing module for processing;
A real-time splicing module, wherein the module retrieves the corresponding video units and audio units from the video material library and the audio material library according to the instructions of the semantic analysis module, performs picture splicing and sound synthesis as required by the instructions, and forms complete audio-video segments for playback by a player.
2. The real-time video splicing apparatus based on real-time conversation semantic analysis according to claim 1, characterized in that the real-time splicing module comprises at least a picture splicing submodule, a picture fusion submodule, a scene cutting submodule, and a sound synthesis submodule, which respectively implement picture splicing, picture fusion, scene cutting, and sound synthesis.
3. The real-time video splicing apparatus based on real-time conversation semantic analysis according to claim 1, characterized in that the audio material library further comprises a sound generation submodule, and the video material library further comprises a video generation submodule.
4. The real-time video splicing apparatus based on real-time conversation semantic analysis according to claim 1, characterized in that the input unit comprises an identification module, which comprises one of the following, or a combination of two or more of them:
Biometric information identification module;
Action information identification module;
Digital information identification module.
5. The real-time video splicing apparatus based on real-time conversation semantic analysis according to claim 1, characterized in that the input unit comprises a voice acquisition module for collecting voice input.
6. A real-time video splicing method based on real-time conversation semantic analysis, characterized by comprising:
A video material library is preset, wherein the material library stores multiple video units formed in a preset manner, and each video unit is associated with a material content index, a material usage mode index, and a material usage occasion index that can be read by the semantic analysis module;
An audio material library is preset, wherein the material library stores multiple audio units formed in a preset manner, and each audio unit is associated with a material content index, a material usage mode index, and a material usage occasion index that can be read by the semantic analysis module;
A semantic analysis module is preset, which is used to perform semantic analysis, select the required video units and audio units according to the semantic analysis, and generate splicing instructions;
A real-time splicing module is preset, which is used to splice the selected video units and audio units into audio-video segments;
The semantic analysis module performs semantic analysis on the input, determines which video units in the video material library should be spliced into the corresponding video and in what manner, determines which audio units in the audio material library should be combined into the corresponding audio and in what manner, and generates corresponding instructions that are sent to the real-time splicing module for processing; according to the instructions of the semantic analysis module, the real-time splicing module retrieves the corresponding video units and audio units from the video material library and the audio material library, performs picture splicing and sound synthesis as required by the instructions, forms complete audio-video segments, and supplies them to a playback device for playing.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201410445784.9A CN104244086A (en) | 2014-09-03 | 2014-09-03 | Video real-time splicing device and method based on real-time conversation semantic analysis |
Publications (1)
Publication Number | Publication Date |
---|---|
CN104244086A true CN104244086A (en) | 2014-12-24 |
Family
ID=52231291
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201410445784.9A Pending CN104244086A (en) | 2014-09-03 | 2014-09-03 | Video real-time splicing device and method based on real-time conversation semantic analysis |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN104244086A (en) |
Patent Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6154600A (en) * | 1996-08-06 | 2000-11-28 | Applied Magic, Inc. | Media editor for non-linear editing system |
CN1423478A (en) * | 2001-11-22 | 2003-06-11 | 刘宝勇 | Method for producing intelligent video frequency programme |
CN102694987A (en) * | 2011-03-25 | 2012-09-26 | 陈鹏 | Method for automatically synthesizing and making video animation program in computer |
CN102110304A (en) * | 2011-03-29 | 2011-06-29 | 华南理工大学 | Material-engine-based automatic cartoon generating method |
CN102984465A (en) * | 2012-12-20 | 2013-03-20 | 北京中科大洋科技发展股份有限公司 | Program synthesis system and method applicable to networked intelligent digital media |
CN103391414A (en) * | 2013-07-24 | 2013-11-13 | 杭州趣维科技有限公司 | Video processing device and processing method applied to mobile phone platform |
Cited By (22)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107622775B (en) * | 2015-03-20 | 2020-12-18 | Oppo广东移动通信有限公司 | Method for splicing songs containing noise and related products |
CN104778219A (en) * | 2015-03-20 | 2015-07-15 | 广东欧珀移动通信有限公司 | Method and device for splicing songs with preset effects |
CN104780438A (en) * | 2015-03-20 | 2015-07-15 | 广东欧珀移动通信有限公司 | Method and device for splicing video and song audio |
CN104778958A (en) * | 2015-03-20 | 2015-07-15 | 广东欧珀移动通信有限公司 | Method and device for splicing noise-containing songs |
CN107622775A (en) * | 2015-03-20 | 2018-01-23 | 广东欧珀移动通信有限公司 | The method and Related product of Noise song splicing |
CN104778219B (en) * | 2015-03-20 | 2018-05-29 | 广东欧珀移动通信有限公司 | A kind of method and device of default effect song splicing |
CN104735468B (en) * | 2015-04-03 | 2018-08-31 | 北京威扬科技有限公司 | A kind of method and system that image is synthesized to new video based on semantic analysis |
CN104735468A (en) * | 2015-04-03 | 2015-06-24 | 北京威扬科技有限公司 | Method and system for synthesizing images into new video based on semantic analysis |
CN107566892A (en) * | 2017-09-18 | 2018-01-09 | 北京小米移动软件有限公司 | Video file processing method, device and computer-readable recording medium |
CN107566892B (en) * | 2017-09-18 | 2020-09-08 | 北京小米移动软件有限公司 | Video file processing method and device and computer readable storage medium |
CN110366013B (en) * | 2018-04-10 | 2021-10-19 | 腾讯科技(深圳)有限公司 | Promotion content pushing method and device and storage medium |
CN110366013A (en) * | 2018-04-10 | 2019-10-22 | 腾讯科技(深圳)有限公司 | Promotional content method for pushing, device and storage medium |
CN110392281A (en) * | 2018-04-20 | 2019-10-29 | 腾讯科技(深圳)有限公司 | Image synthesizing method, device, computer equipment and storage medium |
CN110392281B (en) * | 2018-04-20 | 2022-03-18 | 腾讯科技(深圳)有限公司 | Video synthesis method and device, computer equipment and storage medium |
CN110909185A (en) * | 2018-09-17 | 2020-03-24 | 国家新闻出版广电总局广播科学研究院 | Intelligent radio and television program production method and device |
CN110909185B (en) * | 2018-09-17 | 2022-08-05 | 国家广播电视总局广播电视科学研究院 | Intelligent broadcast television program production method and device |
US12100209B2 (en) | 2019-01-23 | 2024-09-24 | Huawei Cloud Computing Technologies Co., Ltd. | Image analysis method and system |
CN109949792A (en) * | 2019-03-28 | 2019-06-28 | 优信拍(北京)信息科技有限公司 | The synthetic method and device of Multi-audio-frequency |
CN111083396A (en) * | 2019-12-26 | 2020-04-28 | 北京奇艺世纪科技有限公司 | Video synthesis method and device, electronic equipment and computer-readable storage medium |
CN111274415A (en) * | 2020-01-14 | 2020-06-12 | 广州酷狗计算机科技有限公司 | Method, apparatus and computer storage medium for determining alternate video material |
CN111274415B (en) * | 2020-01-14 | 2024-05-24 | 广州酷狗计算机科技有限公司 | Method, device and computer storage medium for determining replacement video material |
CN114189740A (en) * | 2021-10-27 | 2022-03-15 | 杭州摸象大数据科技有限公司 | Video synthesis dialogue construction method and device, computer equipment and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
TA01 | Transfer of patent application right | ||
TA01 | Transfer of patent application right |
Effective date of registration: 2017-11-03. Address after: Room 202, Rainbow Heights, Building 2, Albert Road, Binjiang District, Hangzhou City, Zhejiang Province 310000. Applicants after: Chen Fei; Bao Kejie. Address before: Room 202, Rainbow Heights, Building 2, Albert Road, Binjiang District, Hangzhou City, Zhejiang Province 310000. Applicant before: Chen Fei |
RJ01 | Rejection of invention patent application after publication | ||
RJ01 | Rejection of invention patent application after publication |
Application publication date: 20141224 |