
CN113992926B - Interface display method, device, electronic equipment and storage medium - Google Patents

Interface display method, device, electronic equipment and storage medium

Info

Publication number
CN113992926B
CN113992926B, CN202111214131A, CN113992926A
Authority
CN
China
Prior art keywords
playing
video stream
subtitle
window
live video
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202111214131.6A
Other languages
Chinese (zh)
Other versions
CN113992926A (en)
Inventor
刘坚
李秋平
何心怡
王明轩
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Youzhuju Network Technology Co Ltd
Original Assignee
Beijing Youzhuju Network Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Youzhuju Network Technology Co Ltd filed Critical Beijing Youzhuju Network Technology Co Ltd
Priority to CN202111214131.6A priority Critical patent/CN113992926B/en
Publication of CN113992926A publication Critical patent/CN113992926A/en
Application granted granted Critical
Publication of CN113992926B publication Critical patent/CN113992926B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/21Server components or server architectures
    • H04N21/218Source of audio or video content, e.g. local disk arrays
    • H04N21/2187Live feed
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0484Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/431Generation of visual interfaces for content selection or interaction; Content or additional data rendering
    • H04N21/4312Generation of visual interfaces for content selection or interaction; Content or additional data rendering involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/442Monitoring of processes or resources, e.g. detecting the failure of a recording device, monitoring the downstream bandwidth, the number of times a movie has been viewed, the storage space available from the internal hard disk
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47End-user applications
    • H04N21/472End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content
    • H04N21/47217End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content for controlling playback functions for recorded or on-demand content, e.g. using progress bars, mode or play-point indicators or bookmarks
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47End-user applications
    • H04N21/488Data services, e.g. news ticker
    • H04N21/4884Data services, e.g. news ticker for displaying subtitles

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Databases & Information Systems (AREA)
  • Human Computer Interaction (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

Embodiments of the present disclosure disclose an interface display method and apparatus, an electronic device and a storage medium. The method includes: displaying a user interface, where the user interface includes at least two play windows and one or more first subtitles corresponding to an audio stream in a live video stream; in response to a live video stream playing instruction, playing the recorded live video stream in a first play window of the at least two play windows; during playing of the recorded live video stream, in response to a first subtitle modification instruction, modifying the first subtitle pointed to by the first subtitle modification instruction; and in response to a live video stream playback instruction, playing back, in a second play window of the at least two play windows, the video stream segment already played in the first play window. The interface display scheme provided by the embodiments of the present disclosure achieves the purpose of separately monitoring the quality of multiple video streams based on a user interface.

Description

Interface display method, device, electronic equipment and storage medium
Technical Field
The disclosure relates to the field of information technology, and in particular, to an interface display method, an interface display device, electronic equipment and a storage medium.
Background
With the continuous development of live video technology, users' demand for live video streams is also increasing. To improve the user experience, subtitles are added to the live video stream, and the subtitled live video stream is then sent to user terminals for playing.
In the prior art, subtitles are proofread manually, but manual proofreading is inefficient and error-prone.
Disclosure of Invention
To solve, or at least partially solve, the above technical problems, embodiments of the present disclosure provide an interface display method and apparatus, an electronic device and a storage medium, which achieve the purpose of separately monitoring the quality of multiple video streams based on a user interface.
The embodiment of the disclosure provides an interface display method, which comprises the following steps:
displaying a user interface, wherein the user interface comprises at least two play windows and one or more first subtitles corresponding to an audio stream in a live video stream;
responding to a live video stream playing instruction, and playing the recorded live video stream in a first playing window of the at least two playing windows;
in the process of playing the recorded live video stream, responding to a first subtitle modification instruction, and modifying a first subtitle pointed by the first subtitle modification instruction;
And in response to a live video stream playback instruction, playing back the video stream fragments played in the first playing window in a second playing window of the at least two playing windows.
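By way of illustration only, and not as a limitation of the embodiments of the present disclosure, the steps above could be sketched in a browser-based client roughly as follows; the element identifiers, the Subtitle data shape and the way segment boundaries are obtained are assumptions made for this example.

```typescript
// Illustrative sketch only; element IDs, the Subtitle shape and the segment
// boundaries passed to onPlayback() are assumptions made for this example.
interface Subtitle {
  id: string;
  text: string;       // first subtitle (speech-recognition result to be proofread)
  startTime: number;  // seconds into the recorded live video stream
  endTime: number;
}

const firstWindow = document.querySelector<HTMLVideoElement>('#first-play-window')!;
const secondWindow = document.querySelector<HTMLVideoElement>('#second-play-window')!;
const subtitles: Subtitle[] = [];

// Play the recorded live video stream in the first play window (second step above).
function onPlayLiveStream(recordedStreamUrl: string): void {
  firstWindow.src = recordedStreamUrl;
  void firstWindow.play();
}

// Modify the first subtitle pointed to by a modification instruction (third step above).
function onModifyFirstSubtitle(id: string, correctedText: string): void {
  const target = subtitles.find(s => s.id === id);
  if (target) target.text = correctedText;
}

// Play back, in the second play window, a segment already played in the first
// play window (fourth step above).
function onPlayback(startTime: number, endTime: number): void {
  secondWindow.src = firstWindow.src; // same recorded stream, replayed independently
  secondWindow.addEventListener(
    'loadedmetadata',
    () => {
      secondWindow.currentTime = startTime;
      void secondWindow.play();
    },
    { once: true },
  );
  secondWindow.addEventListener('timeupdate', () => {
    if (secondWindow.currentTime >= endTime) secondWindow.pause();
  });
}
```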
The embodiment of the disclosure also provides an interface display device, which comprises:
the first display module is used for displaying a user interface, and the user interface comprises at least two play windows and one or more first subtitles corresponding to an audio stream in the live video stream;
the first playing module is used for responding to the live video stream playing instruction and playing the recorded live video stream in a first playing window of the at least two playing windows;
the modification module is used for responding to a first subtitle modification instruction in the process of playing the recorded live video stream and modifying a first subtitle pointed by the first subtitle modification instruction;
and the playback module is used for responding to a live video stream playback instruction, and playing back the video stream fragments played in the first play window in a second play window in the at least two play windows.
The embodiment of the disclosure also provides an electronic device, which comprises:
one or more processors;
A storage means for storing one or more programs;
the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the interface display method as described above.
The embodiment of the present disclosure also provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the interface display method as described above.
The disclosed embodiments also provide a computer program product comprising a computer program or instructions which, when executed by a processor, implements the interface display method as described above.
Compared with the prior art, the technical scheme provided by the embodiment of the disclosure has at least the following advantages:
According to the interface display method provided by the embodiments of the present disclosure, a user interface is displayed that includes at least two play windows and one or more first subtitles corresponding to an audio stream in a live video stream; in response to a live video stream playing instruction, the recorded live video stream is played in a first play window of the at least two play windows; and during playing of the recorded live video stream, in response to a first subtitle modification instruction, the first subtitle pointed to by the instruction is modified, so that a user can proofread the first subtitle while watching the live video, which improves proofreading efficiency and accuracy and also allows the quality of the live video stream to be monitored as it is watched. In addition, in response to a live video stream playback instruction, the video stream segment already played in the first play window is played back in a second play window of the at least two play windows, so that the user can watch a given video segment repeatedly, which helps further improve proofreading accuracy.
Drawings
The above and other features, advantages, and aspects of embodiments of the present disclosure will become more apparent by reference to the following detailed description when taken in conjunction with the accompanying drawings. The same or similar reference numbers will be used throughout the drawings to refer to the same or like elements. It should be understood that the figures are schematic and that elements and components are not necessarily drawn to scale.
Fig. 1 is a schematic structural diagram of a live broadcast concurrent hardware device in an embodiment of the disclosure;
fig. 2 is a schematic structural diagram of another live broadcast concurrent hardware device in an embodiment of the disclosure;
FIG. 3 is a flow chart of an interface display method in an embodiment of the disclosure;
FIG. 4 is a schematic diagram of a user interface in an embodiment of the present disclosure;
FIG. 5 is a schematic illustration of another interface display in an embodiment of the present disclosure;
FIG. 6 is a schematic diagram of another user interface in an embodiment of the present disclosure;
FIG. 7 is a schematic diagram of another user interface in an embodiment of the present disclosure;
FIG. 8 is a schematic diagram of another user interface in an embodiment of the present disclosure;
FIG. 9 is a schematic diagram of another user interface in an embodiment of the present disclosure;
FIG. 10 is a schematic diagram of another user interface in an embodiment of the present disclosure;
FIG. 11 is a schematic structural diagram of an interface display device according to an embodiment of the disclosure;
fig. 12 is a schematic structural diagram of an electronic device in an embodiment of the disclosure.
Detailed Description
Embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While certain embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be embodied in various forms and should not be construed as limited to the embodiments set forth herein; rather, these embodiments are provided so that the present disclosure will be understood more thoroughly and completely. It should be understood that the drawings and embodiments of the present disclosure are for illustration purposes only and are not intended to limit the scope of the present disclosure.
It should be understood that the various steps recited in the method embodiments of the present disclosure may be performed in a different order and/or performed in parallel. Furthermore, method embodiments may include additional steps and/or omit performing the illustrated steps. The scope of the present disclosure is not limited in this respect.
The term "including" and variations thereof as used herein are intended to be open-ended, i.e., including, but not limited to. The term "based on" is based at least in part on. The term "one embodiment" means "at least one embodiment"; the term "another embodiment" means "at least one additional embodiment"; the term "some embodiments" means "at least some embodiments. Related definitions of other terms will be given in the description below.
It should be noted that the terms "first," "second," and the like in this disclosure are merely used to distinguish between different devices, modules, or units and are not used to define an order or interdependence of functions performed by the devices, modules, or units.
It should be noted that references to "a", "an" and "a plurality" in this disclosure are illustrative rather than limiting, and those of ordinary skill in the art will appreciate that they should be understood as "one or more" unless the context clearly indicates otherwise.
The names of messages or information interacted between the various devices in the embodiments of the present disclosure are for illustrative purposes only and are not intended to limit the scope of such messages or information.
Before explaining the interface display scheme provided by the embodiments of the present disclosure, the hardware devices and application scenario related to the interface display scheme are briefly introduced, so as to facilitate a better understanding of the interface display scheme provided by the embodiments of the present disclosure.
Live broadcast simultaneous transmission and proofreading mean the following: subtitles are added to the anchor's live content, and the subtitled stream is then sent to the viewing terminals, so that users at the viewing terminals see live pictures with subtitles. In the subtitle-adding process, the live audio is first speech-recognized by a machine to obtain a first subtitle to be proofread, and machine translation is then performed on the first subtitle to obtain a second subtitle to be proofread (for example, the first subtitle is Chinese and the second subtitle is the corresponding English). The original text proofreader proofreads the first subtitle and manually corrects it if errors are found, and the translation proofreader proofreads the second subtitle and manually corrects it if errors are found. It will be appreciated that the original text proofreader and the translation proofreader may be the same person or different persons; typically, to reduce workload and improve efficiency and proofreading accuracy, they are different persons.
The live broadcast simultaneous transmission and proofreading process is as follows: the live broadcast simultaneous transmission hardware device pulls the anchor's live video stream from a server or from the anchor terminal, then records and processes it (the processing includes, for example, extracting the audio from the live video stream, performing speech recognition on the audio to obtain the first subtitle to be proofread, and translating the first subtitle to obtain the second subtitle to be proofread); the recorded live video stream is then played through the audio-video device, and the first subtitle and the second subtitle are displayed on a display interface; the original text proofreader proofreads the first subtitle and manually corrects it if errors are found, and the translation proofreader proofreads the second subtitle and manually corrects it if errors are found.
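By way of a non-limiting illustration, the processing chain described above (pull, record, speech recognition, translation, display) might be organized roughly as follows; recognizeSpeech and translate are hypothetical placeholders for whatever speech-recognition and machine-translation services the hardware device actually uses, not a documented API.

```typescript
// Non-normative sketch of the live simultaneous-transmission pipeline.
// recognizeSpeech and translate are placeholder stubs, not a real API.
interface SubtitlePair {
  original: string;    // first subtitle, same language as the live audio
  translated: string;  // second subtitle, e.g. English for Chinese audio
  startTime: number;
  endTime: number;
}

async function recognizeSpeech(audioChunk: ArrayBuffer): Promise<string> {
  // placeholder for the device's speech-recognition service
  return '';
}

async function translate(text: string): Promise<string> {
  // placeholder for the device's machine-translation service
  return '';
}

async function processAudioChunk(
  audioChunk: ArrayBuffer,
  startTime: number,
  endTime: number,
): Promise<SubtitlePair> {
  const original = await recognizeSpeech(audioChunk);  // first subtitle, to be proofread
  const translated = await translate(original);        // second subtitle, to be proofread
  return { original, translated, startTime, endTime };
}
```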
Optionally, referring to the schematic structural diagram of a live broadcast simultaneous transmission hardware device shown in fig. 1, the live broadcast simultaneous transmission hardware device and the audio-video device are the same device, corresponding to the device 24 in fig. 1. The original text proofreader and the translation proofreader each work on one live broadcast simultaneous transmission hardware device; for example, the original text proofreader proofreads on the device 24 (which can be regarded as a first live broadcast simultaneous transmission hardware device) and the translation proofreader proofreads on the device 25 (a backup of the device 24, which can be regarded as a second live broadcast simultaneous transmission hardware device). The terminal 21 corresponds to the anchor's terminal and uploads the live video stream to the server 22. The device 24 pulls the live video stream from the terminal 21 or the server 22, for example according to the URL (Uniform Resource Locator) of the live video stream. The terminal 27 corresponds to the terminal of a viewing user, and the device 26 corresponds to a server.
The moment at which the device 24 starts pulling the live video stream may be any time. Optionally, the device 24 starts pulling the live video stream after the original text proofreader issues a "start command". For example, if the original text proofreader clicks a button or icon in the user interface of the device 24 at 9:50 that day, i.e., issues a "start command", the device 24 starts pulling the live video stream from 9:50. Further, if the original text proofreader clicks "start live broadcast" on the user interface of the device 24 at 10:00 that day, the device 24 starts recording the live video stream it has pulled from 10:00 and, synchronously, starts processing it from 10:00. The processing includes: extracting the audio from the live video stream, performing speech recognition on the extracted audio to obtain the first subtitle, and displaying the first subtitle on the display interface of the device 24 so that the original text proofreader can proofread it; and translating the speech-recognition result (for example, Chinese text) to obtain a translated text (for example, English), i.e., the second subtitle, and displaying the second subtitle on the display interface of the device 25 so that the translation proofreader can proofread it.
Alternatively, referring to the schematic structural diagram of another live broadcast simultaneous transmission hardware device shown in fig. 2, the live broadcast simultaneous transmission hardware device and the audio-video device are not the same device but two different devices; for example, the live broadcast simultaneous transmission hardware device corresponds to the device 24 in fig. 2 and the audio-video device corresponds to the second server 23 in fig. 2. The original text proofreader and the translation proofreader proofread subtitles on different live broadcast simultaneous transmission hardware devices; for example, the original text proofreader proofreads the first subtitle on the device 24 (which can be regarded as a first live broadcast simultaneous transmission hardware device) and the translation proofreader proofreads the second subtitle on the device 25 (a backup of the device 24, which can be regarded as a second live broadcast simultaneous transmission hardware device). The terminal 21 corresponds to the anchor terminal and uploads the live video stream to the first server 22, and the second server 23 pulls the live video stream from the first server 22 or the terminal 21. The terminal 27 corresponds to the terminal of a viewing user, and the device 26 corresponds to a server.
The moment at which the second server 23 starts pulling the live video stream from the first server 22 or the terminal 21 may be any time. Optionally, the second server 23 starts pulling the live video stream after the original text proofreader issues a "start command". For example, the original text proofreader clicks a button or icon in the user interface of the device 24 at 9:50 that day, thereby issuing a "start command"; the device 24 sends the "start command" to the second server 23, and the second server 23 starts pulling the live video stream from the first server 22 or the terminal 21 after receiving it. At 10:00 that day, the original text proofreader clicks the "start live broadcast" button on the user interface of the device 24, and the device 24 sends a recording instruction to the second server 23 according to this click operation. Assuming the second server 23 receives the recording instruction promptly, i.e., at 10:00, it starts recording the live video stream it has pulled from 10:00 and, synchronously, starts processing it from 10:00; that is, recording and processing of the live video stream are performed synchronously. The processing of the live video stream includes: extracting the audio from the live video stream, performing speech recognition on the extracted audio to obtain the first subtitle, and displaying the first subtitle on the display interface of the device 24 so that the original text proofreader can proofread it; and translating the speech-recognition result (for example, Chinese text) to obtain a translated text (for example, English text), i.e., the second subtitle, and displaying the second subtitle on the display interface of the device 25 so that the translation proofreader can proofread it.
Taking fig. 1 as an example, if the original text proofreader modifies a first subtitle (e.g., Chinese text) while proofreading it on the device 24, the device 24 synchronizes the modified first subtitle to the device 25 so that the translation proofreader can modify the corresponding second subtitle (e.g., English text) based on the modified first subtitle. Further, the device 25 sends the modified second subtitle to the device 24.
Taking fig. 2 as an example, if the original text proofreader modifies the first subtitle (e.g., Chinese text) while proofreading it on the device 24, the device 24 synchronizes the modified first subtitle to the second server 23, which further synchronizes it to the device 25, so that the translation proofreader can modify the corresponding second subtitle (e.g., English text) according to the modified first subtitle. Further, the device 25 sends the modified second subtitle to the second server 23, which synchronizes it to the device 24.
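Purely as an illustration of the synchronization path described in the two examples above, a corrected first subtitle could be pushed to the relay server along these lines; the endpoint path and payload are invented for the example and are not part of the disclosure.

```typescript
// Illustrative only: push a corrected first subtitle to the relay server so the
// translation proofreader's device can update the corresponding second subtitle.
// The '/subtitles/.../original' endpoint and JSON payload are hypothetical.
async function syncModifiedFirstSubtitle(
  serverBaseUrl: string,
  subtitleId: string,
  correctedText: string,
): Promise<void> {
  await fetch(`${serverBaseUrl}/subtitles/${encodeURIComponent(subtitleId)}/original`, {
    method: 'PUT',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ text: correctedText }),
  });
}
```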
Fig. 3 is a flowchart of an interface display method in an embodiment of the disclosure. The interface display method is applied to a live broadcast simultaneous transmission hardware device and aims to improve the proofreading accuracy and efficiency of subtitles to be proofread and to monitor the quality of live video streams through an interface display scheme. The method may be executed by an interface display apparatus, which may be implemented in software and/or hardware and may be configured in a live broadcast simultaneous transmission hardware device, such as an electronic terminal, including but not limited to a smartphone, a palmtop computer, a tablet computer, a wearable device with a display screen, a desktop computer, a notebook computer, an all-in-one machine, a smart home device, and the like. As shown in fig. 3, the method may specifically include the following steps:
Step 301, displaying a user interface, where the user interface includes at least two play windows and one or more first subtitles corresponding to an audio stream in a live video stream.
Specifically, a user interface may be displayed on a display; see, for example, the schematic diagram of a user interface shown in fig. 4, which includes a first playing window 410, a second playing window 420, and a plurality of first subtitles 430. The number of first subtitles 430 may also be one; fig. 4 takes a plurality of first subtitles 430 (a plurality generally means at least two) as an example.
The first subtitle is typically text obtained by extracting the audio from the live video stream and performing speech recognition on the extracted audio. Since audio extraction and speech recognition are usually performed automatically by a machine, their accuracy is limited; for example, the real text corresponding to the audio is "Zhang San" while the speech recognition result is "Zhang Shan". Therefore, to improve the accuracy of the first subtitle, it is usually proofread manually after it is obtained, so that errors can be corrected in time when they are found. While the first subtitle is being proofread, the recorded live video stream is usually played in the first playing window 410, and the original text proofreader can proofread the first subtitle while watching the video, which improves proofreading efficiency and accuracy.
By displaying a plurality of first subtitles in the user interface in context, as shown in fig. 4, the original text proofreader can proofread each first subtitle using the vertical context information and can quickly scan the list to locate and find content, which improves proofreading accuracy and efficiency.
In one embodiment, the language corresponding to the first subtitle is the same as the language corresponding to the audio stream. For example, if the language corresponding to the audio stream is chinese, the first subtitle is a chinese text, and if the language corresponding to the audio stream is english, the first subtitle is an english text.
Step 302, in response to a live video stream playing instruction, playing the recorded live video stream in a first playing window of the at least two playing windows.
Specifically, for example, when the original text proofreader clicks (typically by touch, with a mouse, or the like) the "start live broadcast" icon or button on the user interface of the live broadcast simultaneous transmission hardware device, a live video stream playing instruction is triggered, and in response to this instruction the recorded live video stream is played in the first playing window. The original text proofreader can control the playing of the live video stream in real time according to his or her own proofreading progress, and can proofread while watching the live picture and listening to the audio, i.e., check the first subtitle against the audio heard and the anchor's mouth movements in the live picture, which improves the proofreading accuracy and efficiency of the first subtitle.
Further, preset index information related to the live video stream, such as sound quality, image quality, volume, frame rate, resolution, network speed and frame loss rate, is displayed in the first playing window, so that the original text proofreader or other service personnel can monitor the quality of the live video stream and intervene in time when a problem is found. It should be noted that the live video stream played in the first playing window may be a video stream that the live broadcast simultaneous transmission hardware device pulls from a specified pull address, or a video stream that a live broadcast user pushes directly to the live broadcast simultaneous transmission hardware device according to a specified push address; it may be obtained in different ways for different live broadcast users, so as to ensure the real-time performance of the live video stream as far as possible.
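As one hedged illustration of how such per-window index information might be gathered in a browser-based client, the standard getVideoPlaybackQuality() and intrinsic-size properties of a video element could be polled; the overlay element, the one-second interval and the set of metrics shown are arbitrary choices for the example (sound quality, image quality and network speed would need additional sources not shown here).

```typescript
// Illustrative quality overlay for one playing window; the overlay element and
// the one-second polling interval are arbitrary. Returns the interval id so the
// overlay can later be detached with clearInterval().
function attachQualityOverlay(video: HTMLVideoElement, overlay: HTMLElement): number {
  return window.setInterval(() => {
    const quality = video.getVideoPlaybackQuality();
    const lossRate = quality.totalVideoFrames > 0
      ? quality.droppedVideoFrames / quality.totalVideoFrames
      : 0;
    overlay.textContent =
      `resolution: ${video.videoWidth}x${video.videoHeight}  ` +
      `frames decoded: ${quality.totalVideoFrames}  ` +
      `frame loss: ${(lossRate * 100).toFixed(2)}%  ` +
      `playback volume: ${Math.round(video.volume * 100)}%`;
  }, 1000);
}
```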
Step 303, in the process of playing the recorded live video stream, responding to a first subtitle modification instruction, and modifying a first subtitle pointed by the first subtitle modification instruction.
In the process of proofreading the first subtitle, if the original text proofreader finds that the first subtitle does not match the text he or she determines from the live video stream being heard and seen, the first subtitle is modified, thereby achieving the purpose of proofreading the first subtitle.
Step 304, in response to a live video stream playback instruction, playing back the video stream fragments played in the first playing window in a second playing window of the at least two playing windows.
As shown in fig. 4, when a live video stream playback instruction is received, the video stream segment played in the first playing window 410 is played back in the second playing window 420, so that the original text proofreader can repeatedly watch a given video segment and better proofread the first subtitle corresponding to it, thereby realizing live backtracking.
In some embodiments, the playing back, in response to the live video stream playback instruction, of the video stream segment played in the first playing window in a second playing window of the at least two playing windows includes: in response to a triggering operation on a target subtitle, playing the live video stream segment corresponding to the target subtitle in the second playing window, so that the user can proofread the target subtitle by watching that live video stream segment; the target subtitle is one of the one or more first subtitles. The triggering operation on the target subtitle may be an operation of clicking the target subtitle, an operation of sliding on the target subtitle, an operation of clicking a related control associated with the target subtitle, an operation of triggering a shortcut key while the target subtitle is in a specific state, or the like.
Specifically, the playing, in response to the triggering operation on the target subtitle, of the live video stream segment corresponding to the target subtitle in the second playing window includes:
in response to a triggering operation acting on a play control associated with the target subtitle, playing the live video stream segment corresponding to the target subtitle in the second playing window, where the play control is displayed at a position associated with the target subtitle while the target subtitle is in an editing state. For example, when the mouse hovers over the target subtitle, the target subtitle enters the editing state and the original text proofreader can edit it by deleting, modifying or adding words; alternatively, the target subtitle enters the editing state when it is selected, or when a related control is clicked. When the target subtitle is in the editing state, a play control is displayed at the associated position of the target subtitle: as shown in fig. 5, the target subtitle 430 is in the editing state and a play control 510 is displayed at its associated position; when the original text proofreader clicks the play control 510, the live video stream segment corresponding to the target subtitle 430 is played in the second playing window 420. The live video stream segment corresponding to the target subtitle 430 is the segment whose semantics the target subtitle expresses, i.e., the segment from whose audio the target subtitle 430 can be obtained by speech recognition.
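A minimal sketch of the behaviour just described (a play control shown while a subtitle is in the editing state, which replays the corresponding segment in the second playing window) might look as follows; treating mouse hover as the editing state, and the timing fields attached to each subtitle, are assumptions made for the example.

```typescript
// Illustrative sketch: show a play control while a subtitle row is hovered
// (treated here as the "editing state") and replay its segment in the second
// playing window. The row/control wiring and timing fields are assumed.
function wireSubtitleRow(
  row: HTMLElement,
  playControl: HTMLButtonElement,
  secondWindow: HTMLVideoElement,
  segment: { startTime: number; endTime: number },
): void {
  playControl.hidden = true;

  row.addEventListener('mouseenter', () => { playControl.hidden = false; });
  row.addEventListener('mouseleave', () => { playControl.hidden = true; });

  playControl.addEventListener('click', () => {
    secondWindow.currentTime = segment.startTime;
    void secondWindow.play();
    const stopAtEnd = () => {
      if (secondWindow.currentTime >= segment.endTime) {
        secondWindow.pause();
        secondWindow.removeEventListener('timeupdate', stopAtEnd);
      }
    };
    secondWindow.addEventListener('timeupdate', stopAtEnd);
  });
}
```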
Optionally, the playing, in response to the triggering operation on the target subtitle, of the live video stream segment corresponding to the target subtitle in the second playing window includes:
when the target subtitle is in the editing state, in response to a triggering operation on a preset shortcut key, playing the live video stream segment corresponding to the target subtitle in the second playing window.
Further, the at least two playing windows further include a third playing window; that is, the user interface includes a first playing window, a second playing window and a third playing window. The method further includes: playing, in the third playing window, the live video stream that at least includes the first subtitle according to the push progress of the live video stream, i.e., playing in the third playing window the subtitled video stream that is pushed to users at the viewing end; and displaying, in the third playing window, preset index information of the live video stream that at least includes the first subtitle, such as sound quality, image quality, volume, frame rate, resolution, network speed and frame loss rate, so that the original text proofreader or other service personnel can monitor the quality and effect of the pushed video stream and intervene in time when a push problem is found, for example whether the added subtitle has an appropriate size, whether the subtitle has been successfully added to the live video stream, whether the subtitle style looks good, whether the subtitle line breaks are problematic, and so on. By displaying three playing windows on the same user interface and playing a different video stream in each of them, different video streams can be monitored from the same user interface, and the proofreading efficiency and accuracy of the first subtitle are improved.
In some embodiments, the first playing window, the second playing window and the third playing window are arranged in a preset area of the user interface according to a preset positional relationship. The method may further comprise at least one of the following steps: responding to a first adjusting operation, and adjusting the positions and/or window sizes of the first playing window, the second playing window and/or the third playing window in the preset area; responding to a second adjusting operation, and displaying the first playing window, the second playing window and/or the third playing window in other areas except the preset area in the user interface; and responding to a third adjusting operation, and displaying the first playing window, the second playing window and/or the third playing window on other interfaces related to the user interface.
Specifically, the user can freely choose the playing windows according to the actual situation, and can choose to display two playing windows or three playing windows simultaneously on the user interface. By default the first playing window is the largest, and the second and third playing windows are smaller and arranged side by side below it; fig. 6 is a schematic diagram of a user interface including three playing windows: a first playing window 610, a second playing window 620 and a third playing window 630 arranged in a preset area 600 of the user interface. The original text proofreader can adjust the positions of the first playing window 610, the second playing window 620 and the third playing window 630 in the user interface; for example, while watching the played-back video segment, the second playing window 620 can be moved to the top of the preset area 600 and given the largest window size for easier viewing, as shown in fig. 7, which shows the user interface after the first playing window 610 and the second playing window 620 have been swapped. The positions may be adjusted, for example, by directly dragging a window: the second playing window 620 can be dragged within the preset area 600 with the mouse or by touch, and when it is dragged onto the position of the first playing window 610 it automatically takes that position while the first playing window 610 moves to the default position of the second playing window 620, so that the two windows exchange positions. Because the source stream and the pushed stream carry different potential risks for the live broadcast activity, the live broadcast simultaneous transmission system of the present disclosure allows the original text proofreader to freely adjust the size and position of each playing window according to the actual situation, with the other playing windows adapting accordingly. For example, when the played-back video stream segment does not need to be watched, the original text proofreader may shrink the second playing window 620, and the third playing window 630 is then adaptively enlarged, as shown in the schematic diagram of a user interface in fig. 8, where the second playing window 620 is smaller, the third playing window 630 is adaptively enlarged, and the total size of the preset area 600 occupied by the first playing window 610, the second playing window 620 and the third playing window 630 is unchanged. As another example, the third playing window 630 may be manually closed, or its size reduced, so that it does not interfere with the proofreading of the first subtitle. A sketch of such window adjustment is given below.
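As a toy illustration of the window swapping and resizing described above, each playing window could be given a CSS grid area inside the preset area and the areas exchanged on drop; the grid-area names and the percentage-based resizing are invented for the example and are not part of the disclosed method.

```typescript
// Toy sketch: each playing window occupies a named CSS grid area inside the
// preset area; dropping one window onto another simply exchanges their areas.
function swapWindows(a: HTMLElement, b: HTMLElement): void {
  const areaOfA = a.style.gridArea;
  a.style.gridArea = b.style.gridArea;
  b.style.gridArea = areaOfA;
}

// When one window is shrunk, a sibling window can be grown so the preset area
// stays fully occupied (e.g. shrinking the second window enlarges the third).
function resizeSideBySide(win: HTMLElement, sibling: HTMLElement, widthPercent: number): void {
  win.style.width = `${widthPercent}%`;
  sibling.style.width = `${100 - widthPercent}%`;
}
```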
Further, because the on-site environments in which live broadcasts are supported differ, in order to adapt to professional scenarios with one or even multiple auxiliary screens, the live broadcast simultaneous transmission system of the present disclosure allows the user to freely pop out each playing window individually; that is, besides adjusting the positions of the first playing window 610, the second playing window 620 and the third playing window 630 within the preset area 600, any playing window can be dragged to a position outside the preset area 600 to highlight it, so that the original text proofreader can conveniently watch and monitor the video content played in that window. Fig. 9 is a schematic diagram of a user interface in which the second playing window 620 has been dragged out of the preset area 600 and its window size has become larger.
Furthermore, the live broadcast simultaneous transmission system also supports dragging a popped-out window to an independent display, so as to monitor several streams at once. That is, the first playing window, the second playing window and/or the third playing window may also be displayed on another interface associated with the user interface, the other interface being shown on a different display from the user interface.
In some embodiments, the method further includes: in response to a volume adjustment operation, adjusting the volume of the live video stream played in the first playing window, the second playing window or the third playing window. For example, when the original text proofreader watches the played-back video stream segment, in order to reduce interference from the other video streams, the live video stream played in the first playing window and the subtitled video stream played in the third playing window can both be muted, keeping only the sound of the video stream segment played back in the second playing window. It should be noted that, as shown in the schematic diagram of a user interface in fig. 10, volume identifiers 1011, 1021 and 1031 are displayed in the first playing window 1010, the second playing window 1020 and the third playing window 1030 respectively; these volume identifiers represent the volume of the video streams themselves, not the playing volume of the video streams in the playing windows. In this way, even if the live video stream played in the first playing window 1010 is muted, its own volume can still be known from the volume identifier 1011 displayed in the first playing window 1010. The volume identifiers 1011, 1021 and 1031 are real-time dynamic identifiers that change with the anchor's volume; for example, the volume is generally larger when the anchor is counting down, and the volume identifier changes accordingly to indicate that the current live video stream itself has become louder. Displaying the volume identifiers makes it convenient for the original text proofreader to monitor the quality of the video streams. A possible way of realizing such a meter is sketched below.
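One possible, purely illustrative way to realize a volume identifier that keeps reflecting the stream's own loudness even while a playing window is muted is to tap the audio with the Web Audio API before a gain node that performs the muting; the meter element, the update loop and the peak-based reading are choices made for this sketch, not anything mandated by the disclosure.

```typescript
// Illustrative only: a per-window volume meter driven by an AnalyserNode.
// Muting is done with a GainNode placed after the analyser, so the meter keeps
// showing the stream's own loudness even while the window is "muted".
// Note: browsers may require ctx.resume() after a user gesture (autoplay policy),
// and createMediaElementSource() may only be called once per media element.
function attachVolumeMeter(video: HTMLVideoElement, meter: HTMLElement) {
  const ctx = new AudioContext();
  const source = ctx.createMediaElementSource(video);
  const analyser = ctx.createAnalyser();
  const outputGain = ctx.createGain();
  source.connect(analyser);
  analyser.connect(outputGain);
  outputGain.connect(ctx.destination);

  const samples = new Uint8Array(analyser.fftSize);
  const update = () => {
    analyser.getByteTimeDomainData(samples);
    let peak = 0; // peak deviation from silence (128 is the zero line for byte samples)
    for (const s of samples) peak = Math.max(peak, Math.abs(s - 128));
    meter.textContent = `volume: ${Math.round((peak / 128) * 100)}%`;
    requestAnimationFrame(update);
  };
  requestAnimationFrame(update);

  return {
    mute: () => { outputGain.gain.value = 0; },   // window muted, meter still live
    unmute: () => { outputGain.gain.value = 1; },
  };
}
```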
According to the interface display method provided by the embodiments of the present disclosure, by displaying multiple playing windows on the user interface at the same time, the user can monitor different video streams simultaneously, for example monitoring the subtitled live video stream separately. In addition, live backtracking can be performed through an independent playing window by reviewing the stream with a time shift, and the segment corresponding to the first subtitle currently being modified can be replayed repeatedly, which makes it convenient for the original text proofreader to proofread the first subtitle.
Fig. 11 is a schematic structural diagram of an interface display device according to an embodiment of the disclosure. The device provided by the embodiment of the disclosure can be configured in a live broadcast concurrent hardware device. As shown in fig. 11, the apparatus specifically includes: a first display module 1110, a first play module 1120, a modification module 1130, and a playback module 1140.
The first display module 1110 is configured to display a user interface, where the user interface includes at least two play windows and one or more first subtitles corresponding to an audio stream in a live video stream; a first playing module 1120, configured to respond to a live video stream playing instruction, and play the recorded live video stream in a first playing window of the at least two playing windows; a modifying module 1130, configured to respond to a first subtitle modifying instruction in the process of playing the recorded live video stream, and modify a first subtitle pointed by the first subtitle modifying instruction; a playback module 1140, configured to play back, in response to a live video stream playback instruction, a video stream clip played in the first play window in a second play window of the at least two play windows.
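Purely for orientation, the four modules could be summarized as a simple interface; the method names and signatures below are assumptions made for illustration and do not correspond to any concrete implementation in the disclosure.

```typescript
// Illustrative only: the four modules summarized as an interface; the method
// names and signatures are assumptions, not a concrete implementation.
interface InterfaceDisplayApparatus {
  displayUserInterface(): void;                                 // first display module
  playRecordedLiveStream(streamUrl: string): void;              // first playing module
  modifyFirstSubtitle(subtitleId: string, text: string): void;  // modification module
  playbackSegment(startTime: number, endTime: number): void;    // playback module
}
```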
Optionally, the at least two playing windows further include a third playing window; the apparatus further comprises: and the second playing module is used for playing the live video stream at least comprising the first subtitle in the third playing window according to the pushing progress of the live video stream.
Optionally, preset index information of the video stream played by each window is displayed in the first playing window, the second playing window and the third playing window respectively.
Optionally, the first playing window, the second playing window and the third playing window are arranged in a preset area of the user interface according to a preset position relationship.
Optionally, the apparatus further includes: and the second display module is used for displaying the volume identification of the video stream in the first playing window, the second playing window and the third playing window respectively.
Optionally, the apparatus further includes: an adjustment module for performing at least one of the following steps: responding to a first adjusting operation, and adjusting the positions and/or window sizes of the first playing window, the second playing window and/or the third playing window in the preset area; responding to a second adjusting operation, and displaying the first playing window, the second playing window and/or the third playing window in other areas except the preset area in the user interface; and responding to a third adjusting operation, and displaying the first playing window, the second playing window and/or the third playing window on other interfaces related to the user interface.
Optionally, the playback module 1140 is specifically configured to: responding to triggering operation for a target subtitle, and playing a live video stream fragment corresponding to the target subtitle in the second playing window so that a user can calibrate the target subtitle by watching the live video stream fragment; the target subtitle is a subtitle in the one or more first subtitles.
Optionally, the playback module 1140 is specifically configured to: in response to a triggering operation acting on a play control associated with the target subtitle, playing a live video stream segment corresponding to the target subtitle in the second play window; and when the target subtitle is in an editing state, displaying the play control at the associated position of the target subtitle.
Optionally, the playback module 1140 is specifically configured to: and when the target subtitle is in an editing state, responding to the triggering operation of a preset shortcut key to play the live video stream fragment corresponding to the target subtitle in the second play window.
According to the interface display device provided by the embodiments of the present disclosure, by displaying multiple playing windows on the user interface at the same time, the user can monitor different video streams simultaneously, for example monitoring the subtitled live video stream separately. In addition, live backtracking can be performed through an independent playing window by reviewing the stream with a time shift, and the segment corresponding to the first subtitle currently being modified can be replayed repeatedly, which makes it convenient for the original text proofreader to proofread the first subtitle.
The device provided by the embodiment of the present disclosure can perform the method steps provided by the embodiments of the present disclosure and has corresponding beneficial effects, which are not repeated here.
Fig. 12 is a schematic structural diagram of an electronic device in an embodiment of the disclosure. Referring now in particular to fig. 12, a schematic diagram of an electronic device 500 suitable for use in implementing embodiments of the present disclosure is shown. The electronic device 500 in the embodiments of the present disclosure may include, but is not limited to, mobile terminals such as mobile phones, notebook computers, digital broadcast receivers, PDAs (personal digital assistants), PADs (tablet computers), PMPs (portable multimedia players), in-vehicle terminals (e.g., in-vehicle navigation terminals), wearable electronic devices, and the like, and fixed terminals such as digital TVs, desktop computers, smart home devices, and the like. The electronic device shown in fig. 12 is merely an example and should not be construed to limit the functionality and scope of use of the disclosed embodiments.
As shown in fig. 12, the electronic device 500 may include a processing means (e.g., a central processor, a graphics processor, etc.) 501 that may perform various suitable actions and processes to implement the … method of the embodiments as described in the present disclosure according to a program stored in a Read Only Memory (ROM) 502 or a program loaded from a storage means 508 into a Random Access Memory (RAM) 503. In the RAM 503, various programs and data required for the operation of the electronic apparatus 500 are also stored. The processing device 501, the ROM 502, and the RAM 503 are connected to each other via a bus 504. An input/output (I/O) interface 505 is also connected to bus 504.
In general, the following devices may be connected to the I/O interface 505: input devices 506 including, for example, a touch screen, touchpad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, etc.; an output device 507 including, for example, a Liquid Crystal Display (LCD), a speaker, a vibrator, and the like; storage 508 including, for example, magnetic tape, hard disk, etc.; and communication means 509. The communication means 509 may allow the electronic device 500 to communicate with other devices wirelessly or by wire to exchange data. While fig. 12 shows an electronic device 500 having various means, it is to be understood that not all of the illustrated means are required to be implemented or provided. More or fewer devices may be implemented or provided instead.
In particular, according to embodiments of the present disclosure, the processes described above with reference to flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a non-transitory computer readable medium, the computer program comprising program code for performing the method shown in the flowcharts, thereby implementing the method as described above. In such an embodiment, the computer program may be downloaded and installed from a network via the communication means 509, or from the storage means 508, or from the ROM 502. The above-described functions defined in the methods of the embodiments of the present disclosure are performed when the computer program is executed by the processing device 501.
It should be noted that the computer readable medium described in the present disclosure may be a computer readable signal medium or a computer readable storage medium, or any combination of the two. The computer readable storage medium can be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this disclosure, a computer-readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In the present disclosure, however, the computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, with the computer-readable program code embodied therein. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination of the foregoing. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, fiber optic cables, RF (radio frequency), and the like, or any suitable combination of the foregoing.
In some implementations, the clients and servers may communicate using any currently known or future-developed network protocol, such as HTTP (HyperText Transfer Protocol), and may be interconnected with digital data communication in any form or medium (e.g., a communication network). Examples of communication networks include a local area network ("LAN"), a wide area network ("WAN"), an internetwork (e.g., the Internet), and peer-to-peer networks (e.g., ad hoc peer-to-peer networks), as well as any currently known or future-developed networks.
The computer readable medium may be contained in the electronic device; or may exist alone without being incorporated into the electronic device.
The computer readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to:
display a user interface, wherein the user interface comprises at least two playing windows and one or more first subtitles corresponding to an audio stream in a live video stream; in response to a live video stream playing instruction, play the recorded live video stream in a first playing window of the at least two playing windows; in the process of playing the recorded live video stream, in response to a first subtitle modification instruction, modify the first subtitle to which the first subtitle modification instruction points; and in response to a live video stream playback instruction, play back, in a second playing window of the at least two playing windows, the video stream segment played in the first playing window.
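By way of illustration only, the following TypeScript sketch shows one way a client program might wire these four steps together. Every name in it (Subtitle, PlayWindow, SubtitleEditorUI, the handler signatures, and the millisecond fields) is an assumption made for the sketch, not part of the disclosed embodiments.

```typescript
// Illustrative sketch only: all types and names are hypothetical.
interface Subtitle {
  id: string;
  text: string;
  startMs: number; // start of the audio span the subtitle was recognized from
  endMs: number;   // end of that audio span
}

interface PlayWindow {
  // Plays a source, optionally restricted to a [fromMs, toMs] segment.
  play(src: string, fromMs?: number, toMs?: number): void;
}

class SubtitleEditorUI {
  constructor(
    private firstWindow: PlayWindow,  // plays the recorded live stream
    private secondWindow: PlayWindow, // plays back segments for checking
    private subtitles: Subtitle[],    // the first subtitles shown in the UI
    private recordedStreamUrl: string,
  ) {}

  // Respond to the live video stream playing instruction.
  onPlayInstruction(): void {
    this.firstWindow.play(this.recordedStreamUrl);
  }

  // Modify the first subtitle targeted by a modification instruction.
  onModifyInstruction(subtitleId: string, newText: string): void {
    const target = this.subtitles.find(s => s.id === subtitleId);
    if (target) target.text = newText;
  }

  // Play back, in the second window, the segment already shown in the first window.
  onPlaybackInstruction(subtitleId: string): void {
    const target = this.subtitles.find(s => s.id === subtitleId);
    if (target) {
      this.secondWindow.play(this.recordedStreamUrl, target.startMs, target.endMs);
    }
  }
}
```

Under this reading, the playback instruction only needs the subtitle's time span, so the second window can replay exactly the segment whose audio produced the subtitle.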
Alternatively, when the one or more programs are executed by the electronic device, the electronic device may perform the other steps described in the above embodiments.
Computer program code for carrying out operations of the present disclosure may be written in one or more programming languages, including, but not limited to, object oriented programming languages such as Java, Smalltalk, and C++, and conventional procedural programming languages such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider).
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units involved in the embodiments of the present disclosure may be implemented by software or by hardware. In some cases, the name of a unit does not constitute a limitation of the unit itself.
The functions described above herein may be performed, at least in part, by one or more hardware logic components. For example, without limitation, exemplary types of hardware logic components that may be used include: a Field Programmable Gate Array (FPGA), an Application Specific Integrated Circuit (ASIC), an Application Specific Standard Product (ASSP), a system on a chip (SOC), a Complex Programmable Logic Device (CPLD), and the like.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. The machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
According to one or more embodiments of the present disclosure, the present disclosure provides an interface display method, including: displaying a user interface, wherein the user interface comprises at least two playing windows and one or more first subtitles corresponding to an audio stream in a live video stream; in response to a live video stream playing instruction, playing the recorded live video stream in a first playing window of the at least two playing windows; in the process of playing the recorded live video stream, in response to a first subtitle modification instruction, modifying the first subtitle to which the first subtitle modification instruction points; and in response to a live video stream playback instruction, playing back, in a second playing window of the at least two playing windows, the video stream segment played in the first playing window.
In accordance with one or more embodiments of the present disclosure, in the method provided by the present disclosure, optionally, the at least two playing windows further include a third playing window; the method further includes: playing, in the third playing window, the live video stream including at least the first subtitle according to the pushing progress of the live video stream.
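For illustration, a third window that follows the push progress could be driven by a simple polling loop such as the TypeScript sketch below; the getPushProgressMs() query, the polling interval, and the lag threshold are assumptions for the sketch rather than features of the embodiments.

```typescript
// Illustrative sketch only: the interface and the progress query are hypothetical.
interface PlayWindow {
  play(src: string): void;
  seekTo(positionMs: number): void;
  currentPositionMs(): number;
}

class LiveFollower {
  private timer?: ReturnType<typeof setInterval>;

  constructor(
    private thirdWindow: PlayWindow,
    private liveStreamWithSubtitlesUrl: string,        // stream already carrying the first subtitles
    private getPushProgressMs: () => Promise<number>,  // hypothetical push-progress query
  ) {}

  start(pollEveryMs = 2000, maxLagMs = 3000): void {
    this.thirdWindow.play(this.liveStreamWithSubtitlesUrl);
    this.timer = setInterval(async () => {
      const pushedMs = await this.getPushProgressMs();
      // Re-seek only when the third window has fallen noticeably behind the push progress.
      if (pushedMs - this.thirdWindow.currentPositionMs() > maxLagMs) {
        this.thirdWindow.seekTo(pushedMs);
      }
    }, pollEveryMs);
  }

  stop(): void {
    if (this.timer !== undefined) clearInterval(this.timer);
  }
}
```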
According to one or more embodiments of the present disclosure, in the method provided by the present disclosure, optionally, each of the first playing window, the second playing window, and the third playing window displays preset index information of the video stream it plays.
According to one or more embodiments of the present disclosure, in the method provided by the present disclosure, optionally, the first playing window, the second playing window, and the third playing window are arranged in a preset area of the user interface according to a preset positional relationship.
In accordance with one or more embodiments of the present disclosure, the method provided by the present disclosure optionally further includes: displaying a volume identifier of the video stream in each of the first playing window, the second playing window, and the third playing window.
In accordance with one or more embodiments of the present disclosure, the method provided by the present disclosure optionally further includes at least one of the following steps: in response to a first adjusting operation, adjusting the position and/or window size of the first playing window, the second playing window, and/or the third playing window within the preset area; in response to a second adjusting operation, displaying the first playing window, the second playing window, and/or the third playing window in an area of the user interface other than the preset area; and in response to a third adjusting operation, displaying the first playing window, the second playing window, and/or the third playing window on another interface associated with the user interface.
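The three adjusting operations could be dispatched, for example, as in the following sketch; the WindowLayout shape, the area names, and the default geometry are purely illustrative assumptions.

```typescript
// Illustrative sketch only: layout model and names are hypothetical.
type WindowId = 'first' | 'second' | 'third';

interface WindowLayout {
  area: 'preset' | 'other-area' | 'related-interface';
  x: number;
  y: number;
  width: number;
  height: number;
}

const layouts = new Map<WindowId, WindowLayout>();

// First adjusting operation: move and/or resize a window within the preset area.
function adjustWithinPresetArea(id: WindowId, patch: Partial<WindowLayout>): void {
  const current = layouts.get(id) ?? { area: 'preset', x: 0, y: 0, width: 480, height: 270 };
  layouts.set(id, { ...current, ...patch, area: 'preset' });
}

// Second adjusting operation: display the window in an area other than the preset area.
function moveOutsidePresetArea(id: WindowId): void {
  const current = layouts.get(id);
  if (current) layouts.set(id, { ...current, area: 'other-area' });
}

// Third adjusting operation: display the window on another interface related to the user interface.
function moveToRelatedInterface(id: WindowId): void {
  const current = layouts.get(id);
  if (current) layouts.set(id, { ...current, area: 'related-interface' });
}
```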
According to one or more embodiments of the present disclosure, in the method provided by the present disclosure, optionally, playing back, in a second playing window of the at least two playing windows, the video stream segment played in the first playing window in response to a live video stream playback instruction includes: in response to a triggering operation for a target subtitle, playing, in the second playing window, the live video stream segment corresponding to the target subtitle, so that a user can check the target subtitle by watching the live video stream segment; the target subtitle is one of the one or more first subtitles.
According to one or more embodiments of the present disclosure, in the method provided by the present disclosure, optionally, playing, in the second playing window, the live video stream segment corresponding to the target subtitle in response to a triggering operation for the target subtitle includes: in response to a triggering operation acting on a play control associated with the target subtitle, playing the live video stream segment corresponding to the target subtitle in the second playing window; wherein, when the target subtitle is in an editing state, the play control is displayed at a position associated with the target subtitle.
According to one or more embodiments of the present disclosure, in the method provided by the present disclosure, optionally, playing, in the second playing window, the live video stream segment corresponding to the target subtitle in response to a triggering operation for the target subtitle includes: when the target subtitle is in an editing state, playing, in the second playing window, the live video stream segment corresponding to the target subtitle in response to a triggering operation of a preset shortcut key.
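In a browser-based editor, the two trigger paths just described (a play control shown while a subtitle is in the editing state, and a preset shortcut key) might be wired roughly as in the sketch below; the DOM structure, the playSegment() helper, and the choice of Ctrl+P as the shortcut are assumptions made for illustration.

```typescript
// Illustrative sketch only: DOM wiring, helper names, and the shortcut are hypothetical.
let editingSubtitleId: string | null = null;

function playSegment(subtitleId: string): void {
  // A real implementation would look up the subtitle's time span and play
  // that segment of the recorded live stream in the second playing window.
  console.log(`Playing the segment for subtitle ${subtitleId} in the second window`);
}

// Path 1: show a play control at a position associated with the subtitle
// while that subtitle is in the editing state.
function onSubtitleEnterEditingState(subtitleId: string, subtitleRow: HTMLElement): void {
  editingSubtitleId = subtitleId;
  const control = document.createElement('button');
  control.textContent = 'Play segment';
  control.addEventListener('click', () => playSegment(subtitleId));
  subtitleRow.appendChild(control);
}

// Path 2: a preset shortcut key (assumed here to be Ctrl+P) while a subtitle is being edited.
document.addEventListener('keydown', (event) => {
  if (editingSubtitleId !== null && event.ctrlKey && event.key === 'p') {
    event.preventDefault();
    playSegment(editingSubtitleId);
  }
});
```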
According to one or more embodiments of the present disclosure, the present disclosure provides an interface display device, including: a first display module configured to display a user interface, wherein the user interface comprises at least two playing windows and one or more first subtitles corresponding to an audio stream in a live video stream; a first playing module configured to play, in response to a live video stream playing instruction, the recorded live video stream in a first playing window of the at least two playing windows; a modification module configured to modify, in response to a first subtitle modification instruction received in the process of playing the recorded live video stream, the first subtitle to which the first subtitle modification instruction points; and a playback module configured to play back, in response to a live video stream playback instruction, the video stream segment played in the first playing window in a second playing window of the at least two playing windows.
According to one or more embodiments of the present disclosure, in the interface display device provided by the present disclosure, optionally, the at least two playing windows further include a third playing window; the device further includes: a second playing module configured to play, in the third playing window, the live video stream including at least the first subtitle according to the pushing progress of the live video stream.
According to one or more embodiments of the present disclosure, in the interface display device provided by the present disclosure, optionally, each of the first playing window, the second playing window, and the third playing window displays preset index information of the video stream it plays.
According to one or more embodiments of the present disclosure, optionally, the first playing window, the second playing window, and the third playing window are arranged in a preset area of the user interface according to a preset positional relationship.
According to one or more embodiments of the present disclosure, the interface display device provided by the present disclosure optionally further includes: a second display module configured to display a volume identifier of the video stream in each of the first playing window, the second playing window, and the third playing window.
According to one or more embodiments of the present disclosure, the interface display device provided by the present disclosure optionally further includes an adjustment module configured to perform at least one of the following steps: in response to a first adjusting operation, adjusting the position and/or window size of the first playing window, the second playing window, and/or the third playing window within the preset area; in response to a second adjusting operation, displaying the first playing window, the second playing window, and/or the third playing window in an area of the user interface other than the preset area; and in response to a third adjusting operation, displaying the first playing window, the second playing window, and/or the third playing window on another interface associated with the user interface.
According to one or more embodiments of the present disclosure, in the interface display device provided by the present disclosure, optionally, the playback module is specifically configured to: in response to a triggering operation for a target subtitle, play, in the second playing window, the live video stream segment corresponding to the target subtitle, so that a user can check the target subtitle by watching the live video stream segment; the target subtitle is one of the one or more first subtitles.
According to one or more embodiments of the present disclosure, in the interface display device provided by the present disclosure, optionally, the playback module is specifically configured to: in response to a triggering operation acting on a play control associated with the target subtitle, play the live video stream segment corresponding to the target subtitle in the second playing window; wherein, when the target subtitle is in an editing state, the play control is displayed at a position associated with the target subtitle.
According to one or more embodiments of the present disclosure, in the interface display device provided by the present disclosure, optionally, the playback module is specifically configured to: when the target subtitle is in an editing state, play, in the second playing window, the live video stream segment corresponding to the target subtitle in response to a triggering operation of a preset shortcut key.
According to one or more embodiments of the present disclosure, the present disclosure provides an electronic device comprising:
one or more processors;
a memory for storing one or more programs;
the one or more programs, when executed by the one or more processors, cause the one or more processors to implement any of the methods as provided by the present disclosure.
According to one or more embodiments of the present disclosure, there is provided a computer readable storage medium having stored thereon a computer program which, when executed by a processor, implements any of the methods provided by the present disclosure.
The disclosed embodiments also provide a computer program product comprising a computer program or instructions which, when executed by a processor, implement a method as described above.
The foregoing description is merely a description of the preferred embodiments of the present disclosure and of the technical principles employed. It will be appreciated by persons skilled in the art that the scope of the disclosure referred to herein is not limited to technical solutions formed by the specific combinations of features described above, but also covers other technical solutions formed by any combination of the above features or their equivalents without departing from the concept of the disclosure, for example, technical solutions formed by replacing the above features with technical features having similar functions disclosed in the present disclosure (but not limited thereto).
Moreover, although operations are depicted in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order. In certain circumstances, multitasking and parallel processing may be advantageous. Likewise, while several specific implementation details are included in the above discussion, these should not be construed as limiting the scope of the present disclosure. Certain features that are described in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable subcombination.
Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are example forms of implementing the claims.

Claims (11)

1. An interface display method, characterized in that the method comprises:
displaying a user interface, wherein the user interface comprises at least two play windows and one or more first subtitles corresponding to an audio stream in a live video stream; wherein the one or more first subtitles are displayed in a first region of the user interface, the first region being a region independent of the at least two play windows;
in response to a live video stream playing instruction, playing the recorded live video stream in a first playing window of the at least two play windows;
in the process of playing the recorded live video stream, in response to a first subtitle modification instruction, modifying the first subtitle to which the first subtitle modification instruction points;
in response to a triggering operation for a target subtitle, playing back, in a second playing window of the at least two play windows, the live video stream segment corresponding to the target subtitle that was played in the first playing window, so that a user can check the target subtitle by watching the live video stream segment; wherein the live video stream segment corresponding to the target subtitle refers to a live video stream segment whose audio is subjected to speech recognition to obtain the target subtitle, and the target subtitle is one of the one or more first subtitles.
2. The method of claim 1, wherein the at least two play windows further comprise a third play window;
the method further comprises the steps of:
playing, in the third playing window, the live video stream comprising at least the first subtitle according to the pushing progress of the live video stream.
3. The method of claim 2, wherein preset index information of the video stream played by each window is displayed in the first playing window, the second playing window, and the third playing window.
4. The method of claim 2, wherein the first playing window, the second playing window, and the third playing window are arranged in a preset area of the user interface according to a preset positional relationship.
5. The method as recited in claim 2, further comprising:
displaying a volume identifier of the video stream in each of the first playing window, the second playing window, and the third playing window.
6. The method of claim 2, further comprising at least one of the following steps:
in response to a first adjusting operation, adjusting the position and/or window size of the first playing window, the second playing window, and/or the third playing window within a preset area;
in response to a second adjusting operation, displaying the first playing window, the second playing window, and/or the third playing window in an area of the user interface other than the preset area;
and in response to a third adjusting operation, displaying the first playing window, the second playing window, and/or the third playing window on another interface associated with the user interface.
7. The method of claim 6, wherein the playing, in the second playing window, the live video stream segment corresponding to the target subtitle in response to the triggering operation for the target subtitle comprises:
in response to a triggering operation acting on a play control associated with the target subtitle, playing the live video stream segment corresponding to the target subtitle in the second playing window;
wherein, when the target subtitle is in an editing state, the play control is displayed at a position associated with the target subtitle.
8. The method of claim 6, wherein the playing, in the second playing window, the live video stream segment corresponding to the target subtitle in response to the triggering operation for the target subtitle comprises:
when the target subtitle is in an editing state, playing, in the second playing window, the live video stream segment corresponding to the target subtitle in response to a triggering operation of a preset shortcut key.
9. An interface display device, comprising:
a first display module, configured to display a user interface, wherein the user interface comprises at least two play windows and one or more first subtitles corresponding to an audio stream in a live video stream; wherein the one or more first subtitles are displayed in a first region of the user interface, the first region being a region independent of the at least two play windows;
a first playing module, configured to play, in response to a live video stream playing instruction, the recorded live video stream in a first playing window of the at least two play windows;
a modification module, configured to modify, in response to a first subtitle modification instruction received in the process of playing the recorded live video stream, the first subtitle to which the first subtitle modification instruction points;
a playback module, configured to, in response to a triggering operation for a target subtitle, play back, in a second playing window of the at least two play windows, the live video stream segment corresponding to the target subtitle that was played in the first playing window, so that a user can check the target subtitle by watching the live video stream segment; wherein the live video stream segment corresponding to the target subtitle refers to a live video stream segment whose audio is subjected to speech recognition to obtain the target subtitle, and the target subtitle is one of the one or more first subtitles.
10. An electronic device, the electronic device comprising:
one or more processors;
a storage means for storing one or more programs;
the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method of any of claims 1-8.
11. A computer readable storage medium, on which a computer program is stored, characterized in that the program, when being executed by a processor, implements the method according to any of claims 1-8.
CN202111214131.6A 2021-10-19 2021-10-19 Interface display method, device, electronic equipment and storage medium Active CN113992926B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111214131.6A CN113992926B (en) 2021-10-19 2021-10-19 Interface display method, device, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111214131.6A CN113992926B (en) 2021-10-19 2021-10-19 Interface display method, device, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN113992926A CN113992926A (en) 2022-01-28
CN113992926B 2023-09-12

Family

ID=79739307

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111214131.6A Active CN113992926B (en) 2021-10-19 2021-10-19 Interface display method, device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN113992926B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114640884B (en) * 2022-03-21 2023-01-31 广东易教优培教育科技有限公司 Online video playing quality analysis method, system and computer storage medium
CN115002529A (en) * 2022-05-07 2022-09-02 咪咕文化科技有限公司 Video strip splitting method, device, equipment and storage medium

Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6064965A (en) * 1998-09-02 2000-05-16 International Business Machines Corporation Combined audio playback in speech recognition proofreader
CN101086886A (en) * 2006-06-07 2007-12-12 索尼株式会社 Recording system and recording method
CN106340294A (en) * 2016-09-29 2017-01-18 安徽声讯信息技术有限公司 Synchronous translation-based news live streaming subtitle on-line production system
CN106792069A (en) * 2015-11-19 2017-05-31 北京国双科技有限公司 Method for broadcasting multimedia file and device
CN108063970A (en) * 2017-11-22 2018-05-22 北京奇艺世纪科技有限公司 A kind of method and apparatus for handling live TV stream
CN108259971A (en) * 2018-01-31 2018-07-06 百度在线网络技术(北京)有限公司 Subtitle adding method, device, server and storage medium
CN110769265A (en) * 2019-10-08 2020-02-07 深圳创维-Rgb电子有限公司 Simultaneous caption translation method, smart television and storage medium
CN111968649A (en) * 2020-08-27 2020-11-20 腾讯科技(深圳)有限公司 Subtitle correction method, subtitle display method, device, equipment and medium
CN112437337A (en) * 2020-02-12 2021-03-02 上海哔哩哔哩科技有限公司 Method, system and equipment for realizing live broadcast real-time subtitles
CN112601101A (en) * 2020-12-11 2021-04-02 北京有竹居网络技术有限公司 Subtitle display method and device, electronic equipment and storage medium
CN112599130A (en) * 2020-12-03 2021-04-02 安徽宝信信息科技有限公司 Intelligent conference system based on intelligent screen
CN112601102A (en) * 2020-12-11 2021-04-02 北京有竹居网络技术有限公司 Method and device for determining simultaneous interpretation of subtitles, electronic equipment and storage medium

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11295497B2 (en) * 2019-11-25 2022-04-05 International Business Machines Corporation Dynamic subtitle enhancement

Patent Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6064965A (en) * 1998-09-02 2000-05-16 International Business Machines Corporation Combined audio playback in speech recognition proofreader
CN101086886A (en) * 2006-06-07 2007-12-12 索尼株式会社 Recording system and recording method
CN106792069A (en) * 2015-11-19 2017-05-31 北京国双科技有限公司 Method for broadcasting multimedia file and device
CN106340294A (en) * 2016-09-29 2017-01-18 安徽声讯信息技术有限公司 Synchronous translation-based news live streaming subtitle on-line production system
CN108063970A (en) * 2017-11-22 2018-05-22 北京奇艺世纪科技有限公司 A kind of method and apparatus for handling live TV stream
CN108259971A (en) * 2018-01-31 2018-07-06 百度在线网络技术(北京)有限公司 Subtitle adding method, device, server and storage medium
CN110769265A (en) * 2019-10-08 2020-02-07 深圳创维-Rgb电子有限公司 Simultaneous caption translation method, smart television and storage medium
CN112437337A (en) * 2020-02-12 2021-03-02 上海哔哩哔哩科技有限公司 Method, system and equipment for realizing live broadcast real-time subtitles
CN111968649A (en) * 2020-08-27 2020-11-20 腾讯科技(深圳)有限公司 Subtitle correction method, subtitle display method, device, equipment and medium
CN112599130A (en) * 2020-12-03 2021-04-02 安徽宝信信息科技有限公司 Intelligent conference system based on intelligent screen
CN112601101A (en) * 2020-12-11 2021-04-02 北京有竹居网络技术有限公司 Subtitle display method and device, electronic equipment and storage medium
CN112601102A (en) * 2020-12-11 2021-04-02 北京有竹居网络技术有限公司 Method and device for determining simultaneous interpretation of subtitles, electronic equipment and storage medium

Also Published As

Publication number Publication date
CN113992926A (en) 2022-01-28

Similar Documents

Publication Publication Date Title
US11943486B2 (en) Live video broadcast method, live broadcast device and storage medium
WO2019205872A1 (en) Video stream processing method and apparatus, computer device and storage medium
WO2021196903A1 (en) Video processing method and device, readable medium and electronic device
JP2023541569A (en) Multimedia data processing method, generation method and related equipment
WO2020233142A1 (en) Multimedia file playback method and apparatus, electronic device, and storage medium
CN112616062B (en) Subtitle display method and device, electronic equipment and storage medium
GB2594214A (en) Image display method and apparatus
WO2023104102A1 (en) Live broadcasting comment presentation method and apparatus, and device, program product and medium
CN111064987B (en) Information display method and device and electronic equipment
WO2021114979A1 (en) Video page display method and apparatus, electronic device and computer-readable medium
CN111562895A (en) Multimedia information display method and device and electronic equipment
CN113992926B (en) Interface display method, device, electronic equipment and storage medium
CN109462779B (en) Video preview information playing control method, application client and electronic equipment
CN112601101A (en) Subtitle display method and device, electronic equipment and storage medium
CN114095671A (en) Cloud conference live broadcast system, method, device, equipment and medium
JP2023515392A (en) Information processing method, system, device, electronic device and storage medium
CN113886612A (en) Multimedia browsing method, device, equipment and medium
CN112601102A (en) Method and device for determining simultaneous interpretation of subtitles, electronic equipment and storage medium
US20240028189A1 (en) Interaction method and apparatus, electronic device and computer readable medium
CN114125358A (en) Cloud conference subtitle display method, system, device, electronic equipment and storage medium
US20240296871A1 (en) Method, apparatus, device, storage medium and program product for video generation
CN113891168B (en) Subtitle processing method, subtitle processing device, electronic equipment and storage medium
CN111818383B (en) Video data generation method, system, device, electronic equipment and storage medium
EP3862963A1 (en) Interpretation system, server device, distribution method, and recording medium
CN117793478A (en) Method, apparatus, device, medium, and program product for generating explanation information

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant