
CN116016813A - Bullet time generation method, device and storage medium - Google Patents


Info

Publication number
CN116016813A
CN116016813A (application CN202211690965.9A)
Authority
CN
China
Prior art keywords
time
target
video file
bullet
video
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211690965.9A
Other languages
Chinese (zh)
Inventor
马振同 (Ma Zhentong)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hangzhou Hikvision System Technology Co Ltd
Original Assignee
Hangzhou Hikvision System Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hangzhou Hikvision System Technology Co Ltd filed Critical Hangzhou Hikvision System Technology Co Ltd
Priority to CN202211690965.9A priority Critical patent/CN116016813A/en
Publication of CN116016813A publication Critical patent/CN116016813A/en
Pending legal-status Critical Current

Classifications

  • Television Signal Processing For Recording (AREA)

Abstract

The application discloses a bullet time generation method, device and storage medium, relates to the technical field of video production, and is used for accurately generating bullet time. The method comprises the following steps: acquiring a plurality of video files and synchronizing the device times of a plurality of cameras; while playing a target video file, receiving an operation of a user selecting a target playing time and displaying a target image sequence, where the target image sequence comprises the frames of the target video file within a first preset duration before the target playing time and within a second preset duration after it; receiving an operation of the user selecting a target image in the target image sequence, taking the acquisition time of the target image as the bullet time generation time, and extracting, from each video file other than the target video file, the image whose acquisition time is closest to the bullet time generation time; and generating the bullet time according to the target image and the image in each video file whose acquisition time is closest to the bullet time generation time.

Description

Bullet time generation method, device and storage medium
Technical Field
The present disclosure relates to the field of video production technologies, and in particular, to a bullet time generating method, device and storage medium.
Background
With the rapid development of internet technology and the growing demands of users, a video special effect called bullet time has been developed, which can show users the state of the same object from different viewing angles at the same point in time.
At present, one method of generating bullet time takes the frame number of a target image in the video file from one viewing angle as a target frame number, selects the image with that frame number from the video file of each other viewing angle, and generates the bullet time from the selected images. On the one hand, selecting a target image from the many frames of a video file requires very cumbersome user operations, which harms the user experience; on the other hand, the images selected by frame number from the other video files may not match the shooting time of the target image, or may differ from it by a large interval, so the finally generated bullet time may not meet the user's needs well.
Therefore, how to accurately generate bullet time is a problem to be solved.
Disclosure of Invention
The application provides a bullet time generation method, device and storage medium, which are used for accurately generating bullet time.
In order to achieve the technical purpose, the application adopts the following technical scheme:
in a first aspect, an embodiment of the present application provides a bullet time generating method, where the method includes:
acquiring a plurality of video files, where the plurality of video files are recorded simultaneously by a plurality of cameras of the same target object from different recording angles, the acquisition time of each frame in a video file is stamped by the camera according to the camera's device time, and the device times of the plurality of cameras are synchronized;
playing a target video file, wherein the target video file is any one of a plurality of video files;
receiving, while playing the target video file, an operation of a user selecting a target playing time, and displaying a target image sequence, where the target image sequence comprises the frames of the target video file within a first preset duration before the target playing time and within a second preset duration after it;
receiving an operation of the user selecting a target image in the target image sequence, taking the acquisition time of the target image as the bullet time generation time, and extracting, from each video file other than the target video file, the image whose acquisition time is closest to the bullet time generation time;
and generating the bullet time according to the target image and the image in each video file whose acquisition time is closest to the bullet time generation time.
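The core selection step above can be sketched in code. This is a minimal illustration, not the patent's implementation: it assumes each video file has already been reduced to a sorted list of per-frame acquisition timestamps (stamped by the synchronized camera clocks), and all names here are hypothetical.

```python
from bisect import bisect_left

def closest_frame(timestamps, target):
    """Return the index of the timestamp closest to `target`.
    `timestamps` must be sorted ascending (frames in capture order)."""
    i = bisect_left(timestamps, target)
    if i == 0:
        return 0
    if i == len(timestamps):
        return len(timestamps) - 1
    # pick the nearer of the two neighbouring frames
    return i if timestamps[i] - target < target - timestamps[i - 1] else i - 1

def pick_bullet_time_frames(videos, target_video_id, target_frame_idx):
    """`videos` maps camera id -> sorted per-frame acquisition times.
    Anchors on the target frame's acquisition time (the "bullet time
    generation time") and returns {camera_id: chosen frame index}."""
    t_gen = videos[target_video_id][target_frame_idx]
    return {
        cam: (target_frame_idx if cam == target_video_id
              else closest_frame(ts, t_gen))
        for cam, ts in videos.items()
    }
```

Because the camera clocks are synchronized, a plain nearest-timestamp search per file is sufficient; no frame-number alignment across files is assumed.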
The technical solution provided by the application brings at least the following beneficial effects: in the bullet time generation process, the user only performs the selection of the target playing time and the target image, so the operation steps are few and the method is simple and convenient to use. In addition, the terminal device presents to the user the frames of the target video file within a first preset duration before the target playing time and within a second preset duration after it, from which the user can select the target image that is most accurate and best meets the requirement. Because the device times of the multiple cameras are synchronized, the terminal device can accurately determine, according to the acquisition time of the target image, the image in each other video file closest to it in acquisition time, so the generated bullet time is more accurate and better meets the user's needs.
In one possible implementation, extracting, from each video file other than the target video file, the image whose acquisition time is closest to the bullet time generation time includes: for each such video file, cutting out a video segment to be decoded, the segment covering a third preset duration before the target playing time and a fourth preset duration after it; decoding the segment to obtain multiple frames; and determining, from those frames, the image whose acquisition time is closest to the bullet time generation time. In this way, only the segment within the third preset duration before and the fourth preset duration after the target playing time needs to be decoded, rather than decoding and searching the entire video file, which reduces the decoding workload and generates the bullet time faster.
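The trim-then-search step can be sketched as follows. This is an illustrative sketch under the assumption that decoded frames are available as `(acquisition_time, frame_data)` pairs; the function names are hypothetical, and a real implementation would trim at the container level before decoding.

```python
def clip_segment(frames, target_t, before, after):
    """frames: list of (acquisition_time, frame_data) in capture order.
    Keep only the frames within [target_t - before, target_t + after],
    mirroring the 'video segment to be decoded'."""
    lo, hi = target_t - before, target_t + after
    return [(t, f) for t, f in frames if lo <= t <= hi]

def closest_in_segment(segment, t_gen):
    """From the trimmed segment, return the frame whose acquisition
    time is closest to the bullet time generation time t_gen."""
    return min(segment, key=lambda tf: abs(tf[0] - t_gen))[1]
```

Restricting the search to the trimmed window is what keeps the per-file cost proportional to the preset durations rather than to the full recording length.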
In one possible implementation, displaying the target image sequence includes: according to the target playing time, cutting out from the target video file a video segment to be decoded, the segment covering the first preset duration before the target playing time and the second preset duration after it; and decoding the segment to obtain the target image sequence and displaying it. A selection error may occur when the user selects the target playing time: for example, while watching the target video file the viewed picture meets the requirement, but by the time the user clicks pause the picture has already played past, so there is an error between the target playing time the user intended to select and the one actually determined. By displaying the images within the first preset duration before and the second preset duration after the target playing time, more images are presented to the user, the user is spared adjusting the target playing time repeatedly, the operation steps are simplified, and a more satisfying user experience is provided.
In one possible implementation, generating the bullet time according to the target image and the image in each video file closest in acquisition time to the bullet time generation time includes: taking the target image as the first frame of the bullet time, and combining it with the closest-time image from each other video file according to a preset arrangement order of the images in the bullet time and a preset bullet time frame rate, thereby obtaining the bullet time. Combining the target image with the closest-time image from each video file gives the resulting bullet time a better effect that can meet the user's needs.
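The assembly step can be sketched as below. This is a hedged illustration, not the patent's implementation: the camera layout order, the choice to wrap around the ring of cameras starting from the target camera, and all names are assumptions introduced for the example.

```python
def assemble_bullet_time(selected, camera_order, target_cam, fps=25):
    """selected: {camera_id: frame}; camera_order: camera ids in their
    physical (adjacent) installation order. The target camera's frame
    comes first, then the remaining cameras in layout order starting
    from the target, so adjacent output frames come from adjacently
    installed cameras. Returns (ordered frames, per-frame duration)."""
    start = camera_order.index(target_cam)
    ordered = camera_order[start:] + camera_order[:start]
    frames = [selected[cam] for cam in ordered]
    return frames, 1.0 / fps
```

Ordering by physical adjacency is what the next implementation point requires: it prevents the viewpoint from jumping between non-neighbouring angles during playback.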
In one possible implementation, the cameras corresponding to any two adjacent frames of the bullet time are installed in adjacent positions. Because adjacent frames then come from adjacent recording angles, the viewpoint does not jump between angles when the user watches the bullet time, giving a better viewing experience.
In one possible implementation, the method further includes: receiving timing requests sent by each camera at a preset timing frequency and, in response to the timing requests, sending a standard time to each camera, where the standard time is used by the camera to calibrate its device time so that the device time is synchronized with the standard time; or, receiving a timing request sent by a reference camera at a preset timing frequency and sending the standard time to the reference camera in response, where the reference camera is one of the plurality of cameras, uses the standard time to calibrate its own device time, and times at least one of the other cameras according to the standard time; or, timing the storage device at a preset timing frequency so that the device time of the storage device is synchronized with the standard time, where the storage device stores the image data sent by any camera while that camera records a video file and, according to its own device time, records the moment it stores the first frame of a video file as that file's recording start time and the moment it stores the last frame as that file's recording end time. Acquiring the plurality of video files then includes: acquiring, from the storage device according to a target start time and a target end time input by the user, the video files whose recording start time matches the target start time and whose recording end time matches the target end time.
In this way, with the device times of the cameras synchronized, each camera stamps the acquisition time of its images according to its own device time, so the multiple cameras record the same target object at the same moments and the acquisition times of the images are consistent across the video files. By timing the storage device, the storage device can accurately record when it starts and finishes receiving each video file, i.e., the camera's working period, so the user can easily find the required video files by their recording start and end times.
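The lookup by recording start/end time can be sketched as below. This is an assumed illustration: the index structure, the tolerance parameter, and all names are hypothetical, introduced only to show how synchronized storage-device timestamps make session retrieval a simple filter.

```python
def find_session_files(index, target_start, target_end, tolerance=1.0):
    """index: list of dicts {"path", "rec_start", "rec_end"} kept by the
    storage device, with times stamped by its synchronized device clock.
    Return every file whose recording start and end times fall within
    `tolerance` seconds of the user-supplied target start/end times."""
    return [
        e["path"] for e in index
        if abs(e["rec_start"] - target_start) <= tolerance
        and abs(e["rec_end"] - target_end) <= tolerance
    ]
```

Because all cameras in one session start and stop together on a synchronized clock, a single (start, end) query retrieves the whole multi-angle set at once.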
In a second aspect, the present application provides a bullet time generating apparatus. The bullet time generating apparatus comprises means for performing the method of the first aspect or any one of the possible designs of the first aspect.
In a third aspect, the present application provides a bullet time generating apparatus, including: one or more processors; one or more memories; wherein the one or more memories are configured to store computer program code comprising computer instructions that, when executed by the one or more processors, perform any of the methods of generating bullet time provided in the first aspect described above.
In a fourth aspect, the present application provides a computer-readable storage medium storing computer-executable instructions that, when run on a computer, cause the computer to perform any one of the methods of generating a bullet time provided in the first aspect.
For detailed descriptions of the second to fourth aspects and their various implementations, reference may be made to the detailed description of the first aspect and its implementations; likewise, for the beneficial effects of the second to fourth aspects and their implementations, reference may be made to the analysis of the beneficial effects of the first aspect and its implementations, which are not repeated here.
Drawings
Fig. 1 is a schematic structural diagram of a bullet time generation system to which a bullet time generation method provided in an embodiment of the present application is applicable;
fig. 2 is a first application scenario schematic diagram of a bullet time generation method provided in an embodiment of the present application;
fig. 3 is a schematic structural diagram of a computing device provided in an embodiment of the present application;
fig. 4 is a flowchart of a bullet time generation method provided in an embodiment of the present application;
fig. 5 is a second application scenario schematic diagram of a bullet time generation method provided in an embodiment of the present application;
fig. 6 is a third application scenario schematic diagram of a bullet time generation method provided in an embodiment of the present application;
fig. 7 is a fourth application scenario schematic diagram of a bullet time generation method provided in an embodiment of the present application;
fig. 8 is a fifth application scenario schematic diagram of a bullet time generation method provided in an embodiment of the present application;
fig. 9 is a sixth application scenario schematic diagram of a bullet time generation method provided in an embodiment of the present application;
fig. 10 is a seventh application scenario schematic diagram of a bullet time generation method provided in an embodiment of the present application;
fig. 11 is an eighth application scenario schematic diagram of a bullet time generation method provided in an embodiment of the present application;
fig. 12 is a ninth application scenario schematic diagram of a bullet time generation method provided in an embodiment of the present application;
fig. 13 is a tenth application scenario schematic diagram of a bullet time generation method provided in an embodiment of the present application;
fig. 14 is a schematic structural diagram of a bullet time generating apparatus provided in an embodiment of the present application;
fig. 15 is a schematic structural diagram of another bullet time generating apparatus provided in an embodiment of the present application.
Detailed Description
The term "and/or" herein merely describes an association relationship between associated objects, indicating that three relationships may exist; for example, A and/or B may represent: A exists alone, both A and B exist, or B exists alone.
The terms "first" and "second" and the like in the description and in the drawings are used for distinguishing between different objects or for distinguishing between different processes of the same object and not for describing a particular sequential order of objects.
Furthermore, references to the terms "comprising" and "having" and any variations thereof in the description of the present application are intended to cover a non-exclusive inclusion. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those listed but may optionally include other steps or elements not listed or inherent to such process, method, article, or apparatus.
It should be noted that, in the embodiments of the present application, words such as "exemplary" or "such as" are used to mean serving as an example, instance, or illustration. Any embodiment or design described herein as "exemplary" or "for example" should not be construed as preferred or advantageous over other embodiments or designs. Rather, the use of words such as "exemplary" or "such as" is intended to present related concepts in a concrete fashion.
In the description of the present application, unless otherwise indicated, the meaning of "a plurality" means two or more.
Bullet time, a special video effect also known as time slicing, time freezing, or free viewpoint, is a visual effect in which time appears to stop or slow down. Bullet time is characterized by an extreme transformation of time into space: time is slowed enough to show phenomena that normally cannot be observed at all, such as a flying bullet, while the camera angle, i.e., the viewer's viewpoint, moves around the scene at normal speed. For example, bullet time is often used in movies to depict scenes in which the protagonist dodges bullets.
In scenic-spot sightseeing, bullet time serves as a visual aid that records good memories from all directions, making them vivid. In athletic events, misjudgments often occur because of unclear views or misaligned viewing angles; bullet time can record the competition process accurately, from all angles and without blind spots, avoiding misjudged results, and it also brings spectators a better viewing experience. In programs such as dance and martial arts, bullet time can present multiple viewing angles of the performers to users, likewise bringing a better viewing experience.
To produce the bullet time effect, a plurality of cameras surrounding the subject may be used, for example a camera group consisting of 25 to 150 cameras. The multiple cameras record the subject; one frame that meets the user's requirement is selected from one of the resulting video files according to the desired effect, frames with the corresponding frame number are then selected from the other video files, and the images are combined in a specific order to generate the bullet time. However, to select an image the user must repeatedly drag the playing progress bar by hand, and may still fail to pick the frame that meets the requirement. In addition, the images determined in the other video files by the frame number of the user-selected frame do not necessarily correspond in time to that frame, so the finally generated bullet time may not meet the user's needs well.
Therefore, how to accurately generate bullet time is a problem to be solved.
In this regard, the present application provides a bullet time generation method in which the frames within a first preset duration before a user-determined target playing time and within a second preset duration after it are presented to the user, so that the user can pick the required target image quickly, simply, and accurately from a small number of frames with few operation steps. In addition, because the device times of the cameras are synchronized, the image in each other video file whose acquisition time is closest to that of the target image can be determined accurately, so the generated bullet time is more accurate and better meets the user's needs.
The bullet time generation method provided by the embodiment of the application can be applied to a bullet time generation system. Fig. 1 shows one possible configuration of a bullet time generation system. As shown in fig. 1, a bullet time generation system 1 provided in an embodiment of the present application may include: a plurality of cameras 10, and an electronic device 20.
A communication connection may be established between the cameras 10 and the electronic device 20. It should be appreciated that the connection may be a wireless connection, such as a Bluetooth or Wi-Fi connection; alternatively, it may be a wired connection, such as an optical fiber connection, which is not limited here.
In some embodiments, the multiple cameras 10 record the same target object from different recording angles to obtain multiple video files; the acquisition time of each frame in a video file is stamped by the camera 10 that shot the image according to its own device time, and the device times of the multiple cameras 10 are synchronized. For example, as shown in fig. 2, 16 cameras 10 are uniformly distributed around the target object, and when the 16 cameras 10 operate, 16 video files with different recording angles are obtained.
In some embodiments, camera 10 may be configured to capture one or more images in various ways. As an example, camera 10 may be configured to capture images by a user, programming, hardware settings, or a combination of the above. If the camera 10 is configured to capture images by software or hardware programming or hardware settings, image capture may be performed according to one or more predetermined conditions. For example, the setting of a predetermined condition (e.g., sensing that an actor is beginning an action) may trigger the camera 10 to capture an image. For another example, the camera 10 may capture an image in response to a user operation, such as a user pressing a control button. In this disclosure, "image" may refer, in part or in whole, to a static or dynamic visual representation, including, but not limited to: photographs, pictures, graphics, video, holograms, virtual reality pictures, augmented reality pictures, other visual representations, or combinations thereof.
In some embodiments, the user may preset the camera 10 to be in a mode capable of capturing one or more images.
In some embodiments, the camera 10 may include a micro universal serial bus (universal serial bus, USB) interface, a high-definition multimedia interface (high-definition multimedia interface, HDMI), a Wi-Fi module, a Bluetooth module, etc., to establish connections with other devices so that video and/or audio data may be output to them. The other device may be a controller capable of controlling the operation of the camera 10, a display device capable of displaying the operation of the camera 10, or the like.
In some embodiments, the camera 10 may have a receiving module integrated thereon for receiving control signals from the electronic device 20, so that the user can operate the electronic device 20 to perform the operation of the camera 10.
In some embodiments, camera 10 may be any type of image capture device. For example, the video camera 10 may be an action camera, a digital camera, a web camera. The camera 10 may also be embedded in another device such as a smart phone, a computer, a personal digital assistant (personal digital assistant, PDA), a video game console, etc.
In some embodiments, the electronic device 20 may have bullet time application software installed thereon, which in turn may display various operational pages in the bullet time generation process and the resulting bullet time.
Specifically, based on the communication connection with the plurality of cameras 10, the electronic device 20 acquires the plurality of video files recorded by the cameras 10, receives an operation of the user selecting a target video file from them, and plays the target video file. While playing the target video file, the electronic device 20 receives an operation of the user selecting a target playing time and displays a target image sequence. The electronic device 20 then receives an operation of the user selecting a target image in the target image sequence, takes the acquisition time of the target image as the bullet time generation time, extracts from each video file other than the target video file the image whose acquisition time is closest to the bullet time generation time, and finally generates the bullet time from the target image and those closest-time images. The target image sequence comprises the frames of the target video file within a first preset duration before the target playing time and within a second preset duration after it, for example the images N seconds before and after the target playing time, where N is a positive number.
In some embodiments, the electronic device 20 may include a controller to enable determination of a target image, a sequence of target images, an image closest to the bullet time generation instant, and so forth.
Alternatively, the controller may be a stand-alone device designed specifically for bullet time generation. Alternatively, the controller may be part of a larger device such as a computer. Alternatively, the controller may be implemented by hardware, software, or a combination of hardware and software.
Alternatively, the controller may be used to effect manipulation of multiple cameras 10. For example, when the user wants to record video, the controller sends a control instruction to the camera 10 by issuing a recording start instruction on the electronic device 20, thereby controlling the camera 10 to start recording video.
In some embodiments, the controller may be a central processing unit (central processing unit, CPU), a general-purpose processor, a network processor (network processor, NP), a digital signal processor (digital signal processing, DSP), a programmable logic device (programmable logic device, PLD), a microprocessor, a microcontroller, or any combination thereof. The controller may also be any other device having a processing function, such as a circuit, a device, or a software module, which is not limited in any way by the embodiments of the present application.
In some embodiments, the electronic device 20 may further include a decoding module configured to decode the acquired video file to obtain multiple frame images in the video file.
In some embodiments, the electronic device 20 further includes a display for displaying icons of the plurality of video files, playing the target video file, displaying the sequence of target images, the target image, and so forth.
In some embodiments, the electronic device 20 may further include a human-computer interaction device, such as a touch key, a keyboard, etc., to enable input operations by a user.
In some embodiments, as also shown in fig. 1, a storage device 30 may also be included in the bullet time generation system 1.
Wherein the storage device 30 is used for storing video files recorded by a plurality of cameras 10.
Alternatively, a plurality of video files recorded by a plurality of cameras 10 may be stored in one storage device 30.
Alternatively, the bullet time generating system may include a plurality of storage devices 30, where a plurality of cameras 10 are in one-to-one correspondence with the plurality of storage devices 30, and video files recorded by the cameras 10 are stored in the storage devices 30 corresponding thereto.
Alternatively, the storage device 30 may be integrated with the camera 10. For example, when the storage device 30 is a TF memory card, the TF memory card may be installed in the video camera 10.
In some embodiments, the storage device 30 may be NAND flash memory, double-data-rate synchronous dynamic random access memory (DDR SDRAM), random access memory (RAM), read-only memory (ROM), static random access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), another type of dynamic storage device that can store information and instructions, a magnetic disk storage medium or other magnetic storage device, or any other medium that can be used to carry or store the desired program code in the form of instructions or data structures and that can be accessed by a computer, without being limited thereto.
In some embodiments, the electronic device 20 is also used to time the plurality of cameras 10 and the storage device 30. Specifically, the electronic device 20 receives the precise coordinated universal time (UTC) from an authoritative clock source (e.g., an atomic clock or GPS) and transmits this standard time to the plurality of cameras 10 and the storage device 30 according to the network time protocol (network time protocol, NTP), so that the device times of the plurality of cameras 10 and the storage device 30 are synchronized with the standard time.
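As background on the synchronization step, NTP estimates a device's clock offset from four timestamps collected in one request/response exchange. The sketch below shows the standard NTP offset formula, not this system's specific implementation; the function names are illustrative.

```python
def ntp_offset(t1, t2, t3, t4):
    """Classic NTP clock-offset estimate: t1/t4 are the client's send and
    receive times on its own clock; t2/t3 are the server's (time source's)
    receive and send times. Returns the estimated offset of the server
    clock relative to the client clock, assuming symmetric network delay."""
    return ((t2 - t1) + (t3 - t4)) / 2.0

def calibrate(device_clock, offset):
    """Apply the estimated offset so the device time tracks standard time."""
    return device_clock + offset
```

Each camera (or the storage device) applying such an offset at the preset timing frequency is what keeps the per-frame acquisition timestamps comparable across devices.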
In the embodiments of the present application, the electronic device 20 may be a terminal device, such as a personal computer (personal computer, PC), a notebook computer, a mobile device, a tablet computer, a laptop computer, or the like; the embodiments of the present application do not limit the specific form of the electronic device 20. Alternatively, the electronic device 20 may be a single server or a server cluster, which in some implementations may be a distributed server cluster.
The basic hardware structure of the electronic device 20 described above includes the elements included in the computing apparatus shown in fig. 3. The hardware configuration of the electronic device 20 will be described below using the computing device shown in fig. 3 as an example.
As shown in fig. 3, the computing device may include a processor 501, a memory 502, a communication interface 503, and a bus 504. The processor 501, the memory 502, and the communication interface 503 may be connected by a bus 504.
The processor 501 is a control center of a computing device, and may be one processor or a collective term of a plurality of processing elements. For example, the processor 501 may be a general-purpose central processing unit (central processing unit, CPU), or may be another general-purpose processor. Wherein the general purpose processor may be a microprocessor or any conventional processor or the like.
As one example, processor 501 may include one or more CPUs, such as CPU 0 and CPU 1 shown in fig. 3.
Memory 502 may be, but is not limited to, read-only memory (ROM) or other type of static storage device that can store static information and instructions, random access memory (random access memory, RAM) or other type of dynamic storage device that can store information and instructions, as well as electrically erasable programmable read-only memory (EEPROM), magnetic disk storage or other magnetic storage devices, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer.
In a possible implementation, the memory 502 may exist separately from the processor 501, and the memory 502 may be connected to the processor 501 by the bus 504 for storing instructions or program code. The processor 501, when invoking and executing instructions or program code stored in the memory 502, is capable of implementing the bullet time generation method provided in the embodiments of the present application.
In the embodiment of the present application, the software programs stored in the memory 502 differ from device to device, and the functions implemented by the respective devices differ accordingly. The functions performed by each device are described in connection with the following flowcharts.
In another possible implementation, the memory 502 may also be integrated with the processor 501.
A communication interface 503 for connecting the computing device with other devices via a communication network, which may be ethernet, a radio access network (radio access network, RAN), a wireless local area network (wireless local area networks, WLAN), etc. The communication interface 503 may include a receiving unit for receiving data, and a transmitting unit for transmitting data.
Bus 504 may be an industry standard architecture (ISA) bus, a peripheral component interconnect (PCI) bus, an extended industry standard architecture (EISA) bus, or the like. The bus may be classified as an address bus, a data bus, a control bus, etc. For ease of illustration, only one thick line is shown in fig. 3, but this does not mean there is only one bus or only one type of bus.
It should be noted that the structure shown in fig. 3 does not limit the computing device; the computing device may include more or fewer components than those shown in fig. 3, may combine some components, or may arrange the components differently.
The embodiments provided in the present application are specifically described below with reference to the drawings attached to the specification.
The bullet time generation method provided by the embodiments of the present application may be performed by the electronic device 20.
As shown in fig. 4, an embodiment of the present application provides a bullet time generating method, which includes the following steps:
s101, acquiring a plurality of video files.
The plurality of video files are obtained by a plurality of cameras simultaneously recording the same target object from different recording view angles. The acquisition time of each frame of image in a video file is stamped by the camera according to the camera's device time, and the device times of the plurality of cameras are synchronized.
In some embodiments, the device time synchronization of the cameras may be embodied in several ways:
in mode 1, the terminal device receives a timing request sent by each camera according to a preset timing frequency and, in response to the timing request, sends the standard time to each camera. The standard time is used by the camera to calibrate its device time, so that the device time of the camera is synchronized with the standard time.
Optionally, each camera sends a timing request to the terminal device according to a preset timing frequency.
In mode 2, the terminal device receives a timing request sent by a reference camera according to a preset timing frequency and, in response to the timing request, sends the standard time to the reference camera. The standard time is used by the reference camera to calibrate its device time, so that the device time of the reference camera is synchronized with the standard time. The reference camera is one of the plurality of cameras and is used to time at least one of the remaining cameras according to the standard time.
Taking camera C1 as the reference camera: C1 sends a timing request to the terminal device at the preset timing frequency, receives the standard time returned by the terminal device in response, and sets its device time to the standard time. Camera C2 sends a timing request to C1, receives the device time of C1 returned in response, and sets its own device time to match. By analogy, camera C3 times itself against C2, and so on, until camera Cn times itself against camera Cn-1.
Optionally, the reference camera that each camera uses for timing may be set by the user or set by default by the terminal device.
In mode 3, the reference camera receives timing requests sent by the other cameras and sends its device time to them, so that the device times of the other cameras are synchronized with the device time of the reference camera. The reference camera is one of the plurality of cameras.
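The chained calibration of mode 2 can be sketched as follows. The model is a deliberate simplification (instantaneous, lossless time transfer), and all names are hypothetical.

```python
# Sketch of the chained timing in mode 2: C1 is calibrated against the
# standard clock, C2 against C1, C3 against C2, and so on down the chain.

def chain_calibrate(offsets_before: list[float]) -> list[float]:
    """offsets_before[i] is camera C(i+1)'s clock offset from the
    standard time before calibration (values are discarded, since each
    camera simply adopts its reference's time). Returns each camera's
    offset from the standard time after one calibration pass."""
    offsets_after = []
    upstream_offset = 0.0               # the standard clock's offset is zero
    for _ in offsets_before:
        offsets_after.append(upstream_offset)  # adopt the reference's offset
        upstream_offset = offsets_after[-1]    # next camera references this one
    return offsets_after

after = chain_calibrate([0.3, -1.2, 0.7])
# every camera ends the pass with offset 0.0 from the standard time
```

In practice each hop adds a small network-delay error, which is why mode 1 (every camera syncing directly to the terminal device) bounds the error more tightly than a long chain.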
In this way, because the device times of the cameras are synchronized and each camera stamps image acquisition times according to its own device time while working, the frames that the cameras record of the same target object at the same moment carry consistent acquisition times across the plurality of video files.
S102, playing the target video file.
Wherein the target video file is any one of a plurality of video files.
In some embodiments, when the user selects a video file recorded by the target camera to be viewed on the terminal device, the terminal device takes the video file recorded by the target camera as a target video file and plays the target video file. For example, as shown in fig. 5, the terminal device displays video files recorded by a plurality of cameras, and if the user selects a video file recorded by the camera C3, the terminal device takes the video file recorded by the camera C3 as a target video file and plays the target video file.
S103, receiving the operation of selecting the target playing time by the user in the process of playing the target video file, and displaying a target image sequence.
The target image sequence includes the frames of the target video file within a first preset duration before the target playing time and within a second preset duration after the target playing time.
In some embodiments, the terminal device may determine the target playing time by receiving an operation of suspending playing of the video file by the user, an operation of dragging the video playing progress bar to a specific time, an operation of directly inputting the target playing time, and the like.
For example, as shown in fig. 6, when the target video file is played to 03:28:16 on the terminal device and the user taps the touch screen, the terminal device pauses playback of the target video file and takes 03:28:16 as the target playing time. Alternatively, as shown in fig. 7, when the target video file is played to 03:28:16 and the user drags the playback progress bar to 03:34:27, the terminal device pauses playback and takes 03:34:27 as the target playing time. Alternatively, as shown in fig. 8, the user directly enters 03:28:16 in an input box displayed on the terminal device, and the terminal device takes 03:28:16 as the target playing time.
In some embodiments, the terminal device cuts a video clip to be decoded out of the target video file according to the target playing time, the clip spanning the first preset duration before the target playing time and the second preset duration after it; decodes the clip to obtain the target image sequence; and displays the target image sequence.
Optionally, the first preset duration and the second preset duration may be the same or different.
For example, with a first preset duration of 2 s, a second preset duration of 1 s, and a user-selected target playing time of 03:25:36, the video clip to be decoded spans 03:25:34 to 03:25:37. The terminal device decodes this clip to obtain its frames, i.e., the target image sequence, and presents the sequence to the user. Assuming the frame rate of the video file is 5 frames per second, the target image sequence, as shown in fig. 9, contains the 15 images between 03:25:34 and 03:25:37. The terminal device displays these 15 images; after the user selects an image and taps the 'confirm' control on the display page, the terminal device takes the selected image as the target image.
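The windowing described above can be sketched as follows, assuming timestamps in seconds and an evenly spaced frame grid; the function name and data shapes are hypothetical.

```python
# Sketch of building the target image sequence: the frames within a
# first preset duration before the target playing time and a second
# preset duration after it.

def target_image_window(play_time: float, before: float, after: float,
                        frame_rate: int) -> list[float]:
    """Return the acquisition timestamps of the frames shown to the user."""
    start = play_time - before
    n_frames = int(round((before + after) * frame_rate))
    return [start + i / frame_rate for i in range(n_frames)]

# Example from the text: play time 03:25:36, 2 s before, 1 s after, 5 fps.
t = 3 * 3600 + 25 * 60 + 36             # 03:25:36 expressed in seconds
frames = target_image_window(t, 2.0, 1.0, 5)
# 15 frame timestamps spanning 03:25:34 up to (but not including) 03:25:37
```

The same helper applies to the third/fourth preset durations of step S104 with different `before`/`after` values.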
In this way, selection errors are tolerated. For example, the user may see a satisfactory picture while watching the target video file, but by the time the user taps pause, that picture has already passed, leaving an error between the target playing time the user intended to select and the one actually recorded. By displaying the images within the first preset duration before and the second preset duration after the target playing time, more candidates are presented to the user, the user avoids repeatedly adjusting the target playing time, the operation steps are simplified, and the user experience is improved.
S104, receiving an operation of the user selecting a target image in the target image sequence, taking the acquisition time of the target image as the bullet time generation time, and extracting, from each video file other than the target video file, the image whose acquisition time is closest to the bullet time generation time.
In some embodiments, extracting the image whose acquisition time is closest to the bullet time generation time from each video file other than the target video file may be specifically implemented as: for each video file other than the target video file, cutting a video clip to be decoded out of the video file, the clip spanning a third preset duration before the target playing time and a fourth preset duration after it; decoding the clip to obtain multiple frames of images; and determining, from these frames, the image whose acquisition time is closest to the bullet time generation time.
Optionally, the third preset duration and the fourth preset duration may be the same or different.
For example, with a third preset duration of 3 s, a fourth preset duration of 3 s, and a user-selected target playing time of 03:25:36, the video clip to be decoded in each video file other than the target video file spans 03:25:33 to 03:25:39. The terminal device decodes the clip within this time range to obtain its frames and searches them for the image whose acquisition time is closest to the bullet time generation time.
In this way, the terminal device only needs to decode the clips within the third preset duration before and the fourth preset duration after the target playing time, rather than decoding and searching each entire video file, which reduces the decoding workload and allows the bullet time to be generated more quickly.
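The nearest-frame search of step S104 reduces to a minimum over absolute time differences, which can be sketched as follows; the clip representation is hypothetical.

```python
# Sketch of picking, within the decoded clip of another video file, the
# frame whose acquisition time is closest to the bullet time generation time.

def closest_frame(frames: list[tuple[float, str]], bullet_time: float) -> str:
    """frames: (acquisition_time, frame_id) pairs from one decoded clip."""
    return min(frames, key=lambda f: abs(f[0] - bullet_time))[1]

# A 5 fps clip with frames 0.2 s apart; the generation time falls between
# two frames and the nearer one wins.
clip = [(12345.0, "f0"), (12345.2, "f1"), (12345.4, "f2"), (12345.6, "f3")]
picked = closest_frame(clip, 12345.33)   # f2 is only 0.07 s away
```

Because the device times of all cameras are synchronized, this per-file minimum is what makes the frames from different view angles line up on the same real-world moment.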
In some embodiments, the terminal device may use the target image selected by the user as the first frame of the bullet time, or may use as the first frame one of the images from the other video files whose acquisition time is closest to the bullet time generation time.
S105, generating bullet time according to the target image and the image with the closest acquisition time and bullet time generation time in each video file.
In some embodiments, generating the bullet time from the target image and the image in each video file whose acquisition time is closest to the bullet time generation time comprises: taking the target image as the first frame of the bullet time, and combining it with the closest-in-time image from each video file according to a preset arrangement order of the images within the bullet time and a preset bullet time frame rate, thereby obtaining the bullet time.
For example, 16 cameras are uniformly distributed around the target object. The recording view angle across the finally generated bullet time may change as shown by the arrow direction in fig. 10: starting from the target camera's view angle, rotating clockwise, and finally returning to the target camera's recording view angle. Alternatively, as shown by the arrow direction in fig. 11, it may rotate counterclockwise from the target camera's view angle and finally return to it. Assuming a preset frame rate of 8 frames per second, the duration of the finally generated bullet time is 2 s.
In another example, 8 cameras are distributed in an arc around the target object. The recording view angle of the finally generated bullet time may change as shown by the arrow direction in fig. 12: starting from the target camera's view angle and rotating clockwise, reversing to counterclockwise upon reaching the recording view angle of the rightmost camera of the arc, reversing to clockwise again upon reaching that of the leftmost camera, and finally returning to the target camera's recording view angle. Alternatively, as shown by the arrow direction in fig. 13, it may start counterclockwise from the target camera's view angle, reverse to clockwise at the leftmost camera, reverse again at the rightmost camera, and finally return to the target camera's recording view angle. Assuming a preset frame rate of 8 frames per second, the duration of the finally generated bullet time is 1 s.
Therefore, the target image and the image with the closest acquisition time and bullet time generation time in each video file are combined, so that the obtained bullet time effect is better, and the user requirement can be met.
In some embodiments, within the bullet time, the cameras corresponding to any two adjacent frames are installed at adjacent positions. Because the recording view angles of any two adjacent frames are therefore adjacent, the user perceives no view-angle jumps when watching the bullet time, which gives a better viewing experience.
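The camera ordering constraint above (adjacent bullet-time frames come from adjacent cameras) can be sketched for the ring layout of fig. 10 as follows; the ordering scheme and names are illustrative, not the application's implementation.

```python
# Sketch of step S105 for cameras arranged in a ring: starting from the
# target camera, take one frame per camera in clockwise order so that
# adjacent bullet-time frames come from physically adjacent cameras.

def ring_order(n_cameras: int, target: int) -> list[int]:
    """Clockwise camera indices starting at the target camera's view angle."""
    return [(target + i) % n_cameras for i in range(n_cameras)]

def bullet_time_duration(order: list[int], frame_rate: int) -> float:
    """One frame per camera at the preset bullet time frame rate."""
    return len(order) / frame_rate

order = ring_order(16, target=3)
# 16 frames, each pair of neighbours from adjacent cameras; at 8 fps
# this matches the 2 s duration of the 16-camera example above
```

The counterclockwise variant of fig. 11 is the same list with the step negated, and the arc layouts of figs. 12 and 13 would extend the order with a reversed pass instead of wrapping around.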
The embodiment shown in fig. 4 brings at least the following advantages: in the bullet time generation process, the user only carries out the selection operation of the target playing time and the target image, the operation steps are fewer, and the user is simpler and more convenient to use. In addition, in the generation process of bullet time, the terminal device presents multi-frame images in a first preset time length forward from the target playing time and a second preset time length backward from the target playing time in the target video file to the user for selection, and the user can select the target image which is the most accurate and meets the requirements. Because the time of a plurality of cameras is synchronous, the terminal equipment accurately determines the image closest to the acquisition time of the target image in other video files according to the acquisition time of the target image, so that the generated bullet time is more accurate and meets the user requirements.
In some embodiments, the terminal device times the storage device according to a preset timing frequency so that the device time of the storage device is synchronized with the standard time. While any camera records a video file, the storage device stores the image data, sent by the camera, that makes up the file, and, according to its own device time, takes the moment the first frame of image data of the video file is stored as the recording start time of the video file and the moment the last frame is stored as the recording end time. On this basis, step S101 may be specifically implemented as: according to a target start time and a target end time input by the user, acquiring from the storage device the plurality of video files whose recording start time matches the target start time and whose recording end time matches the target end time.
Illustratively, the target start time input by the user on the terminal device is 08:15:30 and the target end time is 08:36:45. The terminal device searches the storage device for video files whose recording start time is 08:15:30 and whose recording end time is 08:36:45, thereby obtaining the plurality of video files that meet the user's needs.
Therefore, by timing the storage device, the storage device can accurately record when it begins and finishes receiving each video file, i.e., the working period of the camera, so that the user can conveniently retrieve the required video files by recording start time and recording end time.
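The retrieval in S101 can be sketched as a simple filter over stored recording windows; the record format and names are hypothetical.

```python
# Sketch of S101's lookup: the storage device stamps each file's recording
# start/end from its own (synchronized) clock, and the terminal filters
# the stored files by the user-supplied window.

def find_files(records: list[dict], start: str, end: str) -> list[str]:
    """Return names of files whose recording window matches exactly."""
    return [r["name"] for r in records
            if r["start"] == start and r["end"] == end]

records = [
    {"name": "cam1.mp4", "start": "08:15:30", "end": "08:36:45"},
    {"name": "cam2.mp4", "start": "08:15:30", "end": "08:36:45"},
    {"name": "cam3.mp4", "start": "09:00:00", "end": "09:12:00"},
]
matches = find_files(records, "08:15:30", "08:36:45")
# only the two files recorded in the requested window are returned
```

Because all cameras were recording the same session simultaneously with synchronized clocks, an exact-match filter like this suffices; a tolerance window could be substituted if start/end stamps jitter by a frame or two.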
The solution provided in the embodiments of the present application has been described above mainly from the perspective of the method. To implement the above functions, corresponding hardware structures and/or software modules that perform the respective functions are included. Those skilled in the art will readily appreciate that the elements and algorithm steps of the examples described in connection with the embodiments disclosed herein may be implemented as hardware or as combinations of hardware and computer software. Whether a function is implemented as hardware or as computer-software-driven hardware depends upon the particular application and the design constraints of the solution. Skilled artisans may use different methods to implement the described functions for each particular application, but such implementations should not be considered beyond the scope of the present application.
As shown in fig. 14, an embodiment of the present application provides a bullet time generating apparatus for performing the bullet time generation method shown in fig. 4. The bullet time generating apparatus 600 includes: an acquisition module 601, a display module 602, a receiving module 603, and a processing module 604.
The acquiring module 601 is configured to acquire a plurality of video files, where the plurality of video files are recorded by a plurality of cameras at different recording angles of view simultaneously on a same target object, and an acquisition time of each frame of image in the video files is added by the camera according to equipment time of the camera, and the equipment time of the plurality of cameras is synchronous; a display module 602, configured to play a target video file, where the target video file is any one of a plurality of video files; a receiving module 603, configured to receive an operation of selecting a target playing time by a user during a process of playing a target video file; the display module 602 is further configured to display a target image sequence, where the target image sequence includes a plurality of frame images in a first preset duration from a target playing time to a first preset duration and a plurality of frame images in a second preset duration from the target playing time to a second preset duration in the target video file; the receiving module 603 is further configured to receive an operation of selecting a target image in the target image sequence by a user; the processing module 604 is configured to extract, from each video file except the target video file, an image whose acquisition time is closest to the bullet time generation time, with the acquisition time of the target image as the bullet time generation time; the processing module 604 is further configured to generate bullet time according to the target image and the image whose acquisition time is closest to the bullet time generation time in each video file.
In some embodiments, the processing module 604 is specifically configured to: for each video file except the target video file, cutting out a video fragment to be decoded from the video file, wherein the video fragment to be decoded is a fragment with a third preset time length from the target playing time to the front and a fourth preset time length from the target playing time to the rear; decoding the video segment to be decoded to obtain multi-frame images; from the multi-frame images, the image whose acquisition time is closest to the bullet time generation time is determined.
In other embodiments, the processing module 604 is further configured to intercept, according to the target playing time, a video segment to be decoded from the target video file, where the video segment to be decoded is a segment with a first preset duration from the target playing time forward and a second preset duration from the target playing time backward; the processing module 604 is further configured to decode a video segment to be decoded to obtain a target image sequence; the display module 602 is specifically configured to display a target image sequence.
In other embodiments, the processing module 604 is further configured to combine the target image with the image having the closest acquisition time and bullet time generation time in each video file according to the arrangement order of the preset images in the bullet time and the frame rate of the preset bullet time, with the target image being the first frame image of the bullet time, so as to obtain the bullet time.
In other embodiments, the receiving module 603 is further configured to receive timing requests sent by each camera according to a preset timing frequency; the processing module 604 is further configured to send a standard time to each camera in response to the timing request, where the standard time is used by the camera to calibrate the device time of the camera such that the device time of the camera is synchronized with the standard time.
In other embodiments, the receiving module 603 is further configured to receive a timing request sent by the reference camera according to a preset timing frequency; a processing module 604, further configured to send a standard time to the reference camera in response to the timing request, the standard time being used by the reference camera to calibrate a device time of the reference camera such that the device time of the reference camera is synchronized with the standard time; wherein the reference camera is one of a plurality of cameras, and the reference camera is used for timing at least one of the other cameras according to standard time.
In other embodiments, the processing module 604 is further configured to time the storage device according to a preset time-correction frequency, so that a device time of the storage device is synchronized with a standard time; the storage device is used for storing image data which is sent by the camera and is used for forming the video file in the process of recording the video file by any camera, taking the moment of storing the first frame of image data of the video file as the recording starting moment of the video file and the moment of storing the last frame of image data of the video file as the recording ending moment of the video file according to the equipment time of the storage device; the processing module 604 is specifically configured to obtain, from the storage device, a plurality of video files whose recording start time satisfies the target start time and recording end time satisfies the target end time according to the target start time and the target end time input by the user.
It should be noted that the division of the modules in fig. 14 is illustrative and is merely a division by logical function; other division manners may be used in actual implementations. For example, two or more functions may be integrated in one processing module. The integrated modules may be implemented in hardware or as software functional modules.
Another embodiment of the present application further provides a bullet time generating apparatus, as shown in fig. 15, the bullet time generating apparatus 700 includes a memory 701 and a processor 702; memory 701 and processor 702 are coupled; the memory 701 is used to store computer program code, which includes computer instructions. Wherein the processor 702, when executing the computer instructions, causes the bullet time generating apparatus 700 to perform the steps performed by the bullet time generating apparatus in the method flow shown in the method embodiment described above.
In actual implementation, the acquisition module 601, the display module 602, the receiving module 603, and the processing module 604 may be implemented by the processor 702 shown in fig. 15 calling computer program code in the memory 701. For a specific implementation process, reference is made to the description of the bullet time generation method section above, and details are not repeated here.
Embodiments of the present application also provide a computer-readable storage medium. All or part of the flow in the above method embodiments may be implemented by computer instructions instructing related hardware; the program may be stored in the computer-readable storage medium, and when executed, may include the flow of the above method embodiments. The computer-readable storage medium may be the memory described in any of the foregoing embodiments. The computer-readable storage medium may also be an external storage device of the bullet time generating apparatus, for example, a plug-in hard disk, a smart media card (SMC), a secure digital (SD) card, or a flash card equipped on the bullet time generating apparatus. Further, the computer-readable storage medium may include both an internal storage unit and an external storage device of the bullet time generating apparatus. The computer-readable storage medium is used to store the computer program and other programs and data required by the bullet time generating apparatus, and may also be used to temporarily store data that has been output or is to be output.
The present application also provides a computer program product comprising a computer program which, when run on a computer, causes the computer to perform any one of the bullet time generation methods provided in the above embodiments.
Although the present application has been described herein in connection with various embodiments, other variations to the disclosed embodiments can be understood and effected by those skilled in the art in practicing the claimed application, from a study of the figures, the disclosure, and the appended claims. In the claims, the word "comprising" does not exclude other elements or steps, and the article "a" or "an" does not exclude a plurality. A single processor or other unit may fulfill the functions of several items recited in the claims. The mere fact that certain measures are recited in mutually different dependent claims does not indicate that a combination of these measures cannot be used to advantage.
Although the present application has been described in connection with specific features and embodiments thereof, it will be apparent that various modifications and combinations can be made without departing from the spirit and scope of the application. Accordingly, the specification and drawings are merely exemplary illustrations of the present application as defined in the appended claims and are considered to cover any and all modifications, variations, combinations, or equivalents that fall within the scope of the present application. It will be apparent to those skilled in the art that various modifications and variations can be made in the present application without departing from the spirit or scope of the application. Thus, if such modifications and variations of the present application fall within the scope of the claims and the equivalents thereof, the present application is intended to cover such modifications and variations.
The foregoing is merely a specific embodiment of the present application, but the protection scope of the present application is not limited thereto, and any changes or substitutions within the technical scope of the present disclosure should be covered in the protection scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (10)

1. A bullet time generation method, characterized by comprising the following steps:
acquiring a plurality of video files, wherein the plurality of video files are obtained by simultaneously recording the same target object by a plurality of cameras at different recording visual angles, the acquisition time of each frame of image in the video files is added by the cameras according to the equipment time of the cameras, and the equipment time of the plurality of cameras is synchronous;
playing a target video file, wherein the target video file is any one of the plurality of video files;
receiving an operation of a user selecting a target playing time during playback of the target video file, and displaying a target image sequence, wherein the target image sequence comprises multiple frames of images of the target video file within a first preset duration before the target playing time and multiple frames of images within a second preset duration after the target playing time;
receiving an operation of the user selecting a target image in the target image sequence, taking the acquisition time of the target image as a bullet time generation time, and extracting, from each video file other than the target video file, an image whose acquisition time is closest to the bullet time generation time;
and generating bullet time according to the target image and the image with the closest acquisition time and bullet time generation time in each video file.
2. The method of claim 1, wherein extracting the image whose acquisition time is closest to the bullet time generation time from each of the video files other than the target video file comprises:
for each video file other than the target video file, intercepting a video segment to be decoded from the video file, wherein the video segment to be decoded spans a third preset duration before the target playing time and a fourth preset duration after the target playing time;
decoding the video segment to be decoded to obtain multi-frame images;
and determining, from the multi-frame images, the image whose acquisition time is closest to the bullet time generation time.
3. The method of claim 1, wherein displaying the target image sequence comprises:
intercepting a video segment to be decoded from the target video file according to the target playing time, wherein the video segment to be decoded spans the first preset duration before the target playing time and the second preset duration after the target playing time;
and decoding the video segment to be decoded to obtain the target image sequence, and displaying the target image sequence.
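Claims 2 and 3 both avoid decoding whole files by clipping a short window around the target playing time before decoding. A minimal sketch of the window computation follows; the function name is hypothetical, and clamping to the file bounds is an assumption (the claims do not specify boundary handling).

```python
def clip_window(target_time: float, before: float, after: float,
                file_start: float, file_end: float) -> tuple[float, float]:
    """Compute the [start, end] interval of the segment to decode:
    `before` seconds ahead of the target playing time and `after`
    seconds behind it, clamped to the bounds of the video file."""
    start = max(file_start, target_time - before)
    end = min(file_end, target_time + after)
    return start, end
```

For example, with a 2-second window on each side of a target playing time of 10 s in a 60-second file, only the 8 s to 12 s slice needs to be decoded.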
4. The method according to any one of claims 1-3, wherein generating the bullet time according to the target image and the image in each of the video files whose acquisition time is closest to the bullet time generation time comprises:
taking the target image as the first frame image of the bullet time, and combining the target image with the images whose acquisition times are closest to the bullet time generation time in the video files according to a preset arrangement order of images in the bullet time and a preset frame rate of the bullet time, to obtain the bullet time.
5. The method of claim 4, wherein, in the bullet time, the installation positions of the cameras corresponding to any two adjacent frames of images are adjacent.
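Claims 4 and 5 together fix the output ordering: the user-selected target image is the first frame, and the remaining frames follow the physical installation order of the cameras, so that adjacent output frames come from adjacently installed cameras. A sketch under those assumptions (all names are hypothetical):

```python
def assemble_bullet_time(target_image, images_by_camera: dict, camera_order: list) -> list:
    """Order the extracted frames for the bullet time effect: the target
    image comes first, followed by one frame per remaining camera in
    installation order (starting from the camera adjacent to the one
    that captured the target image, per claim 5)."""
    return [target_image] + [images_by_camera[cam] for cam in camera_order]
```

The resulting list, played back at the preset bullet-time frame rate, sweeps the viewpoint around the frozen instant.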
6. A method according to any one of claims 1-3, characterized in that the method further comprises:
receiving time calibration requests sent by each of the cameras at a preset calibration frequency, and sending a standard time to each camera in response to the time calibration requests, wherein the standard time is used by the camera to calibrate its device time so that the device time of the camera is synchronized with the standard time; or,
receiving a time calibration request sent by a reference camera at a preset calibration frequency, and sending the standard time to the reference camera in response to the time calibration request, wherein the standard time is used by the reference camera to calibrate its device time so that the device time of the reference camera is synchronized with the standard time, the reference camera is one of the plurality of cameras, and the reference camera is used for calibrating the time of at least one of the remaining cameras according to the standard time; or,
calibrating the time of a storage device at a preset calibration frequency so that the device time of the storage device is synchronized with the standard time, wherein the storage device is used for storing, while any one of the plurality of cameras is recording a video file, the image data sent by that camera for forming the video file, and for taking, according to the device time of the storage device, the time at which the first frame of image data of the video file is stored as the recording start time of the video file and the time at which the last frame of image data of the video file is stored as the recording end time of the video file;
wherein obtaining the plurality of video files comprises:
obtaining, from the storage device according to a target start time and a target end time input by the user, a plurality of video files whose recording start times satisfy the target start time and whose recording end times satisfy the target end time.
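The three alternatives in claim 6 all reduce to aligning a device clock with a server-issued standard time. A toy model of that calibration, in which a clock's drift offset is zeroed when the standard time is received (the class, the offset model, and all names are illustrative assumptions, not the disclosed protocol):

```python
class DeviceClock:
    """Models a camera's or storage device's clock as standard time plus
    a drift offset; calibration against a received standard time removes
    the drift."""
    def __init__(self, offset_seconds: float):
        self.offset = offset_seconds  # drift relative to standard time

    def device_time(self, standard_now: float) -> float:
        return standard_now + self.offset

    def calibrate(self) -> None:
        # On receiving the standard time from the server (or from the
        # reference camera), set the device time equal to it.
        self.offset = 0.0

def synchronize(clocks: list) -> None:
    """Calibrate every clock so that all device times agree."""
    for clock in clocks:
        clock.calibrate()
```

After synchronization, frames stamped by different devices carry directly comparable timestamps, which is what the nearest-timestamp extraction of claim 1 relies on.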
7. A bullet time generation apparatus, the apparatus comprising:
the acquisition module is used for acquiring a plurality of video files, wherein the plurality of video files are obtained by a plurality of cameras simultaneously recording the same target object from different recording angles, the acquisition time of each frame of image in the video files is added by the camera according to the device time of the camera, and the device times of the plurality of cameras are synchronized;
the display module is used for playing a target video file, wherein the target video file is any one of the plurality of video files;
the receiving module is used for receiving an operation of a user selecting a target playing time in the process of playing the target video file;
the display module is further used for displaying a target image sequence, wherein the target image sequence comprises multi-frame images within a first preset duration before the target playing time and multi-frame images within a second preset duration after the target playing time in the target video file;
the receiving module is further used for receiving an operation of the user selecting a target image in the target image sequence;
the processing module is used for taking the acquisition time of the target image as a bullet time generation time, and for extracting, from each video file other than the target video file, an image whose acquisition time is closest to the bullet time generation time;
and the processing module is further used for generating the bullet time according to the target image and the image in each of the video files whose acquisition time is closest to the bullet time generation time.
8. The apparatus of claim 7, wherein:
the processing module is specifically used for: for each video file other than the target video file, intercepting a video segment to be decoded from the video file, wherein the video segment to be decoded spans a third preset duration before the target playing time and a fourth preset duration after the target playing time; decoding the video segment to be decoded to obtain multi-frame images; and determining, from the multi-frame images, the image whose acquisition time is closest to the bullet time generation time;
the processing module is further used for intercepting a video segment to be decoded from the target video file according to the target playing time, wherein the video segment to be decoded spans the first preset duration before the target playing time and the second preset duration after the target playing time; the processing module is further used for decoding the video segment to be decoded to obtain the target image sequence; and the display module is specifically used for displaying the target image sequence;
the processing module is further used for, taking the target image as the first frame image of the bullet time, combining the target image with the images whose acquisition times are closest to the bullet time generation time in the video files according to a preset arrangement order of images in the bullet time and a preset frame rate of the bullet time, to obtain the bullet time;
in the bullet time, the installation positions of the cameras corresponding to any two adjacent frames of images are adjacent;
the receiving module is further used for receiving time calibration requests sent by each of the cameras at a preset calibration frequency; the processing module is further used for sending a standard time to each camera in response to the time calibration requests, wherein the standard time is used by the camera to calibrate its device time so that the device time of the camera is synchronized with the standard time;
the receiving module is further used for receiving a time calibration request sent by a reference camera at a preset calibration frequency; the processing module is further used for sending the standard time to the reference camera in response to the time calibration request, wherein the standard time is used by the reference camera to calibrate its device time so that the device time of the reference camera is synchronized with the standard time, the reference camera is one of the plurality of cameras, and the reference camera is used for calibrating the time of at least one of the remaining cameras according to the standard time;
the processing module is further used for calibrating the time of a storage device at a preset calibration frequency so that the device time of the storage device is synchronized with the standard time, wherein the storage device is used for storing, while any one of the plurality of cameras is recording a video file, the image data sent by that camera for forming the video file, and for taking, according to the device time of the storage device, the time at which the first frame of image data of the video file is stored as the recording start time of the video file and the time at which the last frame of image data of the video file is stored as the recording end time of the video file; and the processing module is specifically used for obtaining, from the storage device according to a target start time and a target end time input by the user, a plurality of video files whose recording start times satisfy the target start time and whose recording end times satisfy the target end time.
9. A bullet time generation apparatus, comprising:
one or more processors;
one or more memories;
wherein the one or more memories are configured to store computer program code comprising computer instructions that, when executed by the one or more processors, cause the apparatus to perform the bullet time generation method of any one of claims 1-6.
10. A computer-readable storage medium storing computer-executable instructions which, when run on a computer, cause the computer to perform the bullet time generation method of any one of claims 1-6.
CN202211690965.9A 2022-12-27 2022-12-27 Bullet time generation method, device and storage medium Pending CN116016813A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211690965.9A CN116016813A (en) 2022-12-27 2022-12-27 Bullet time generation method, device and storage medium

Publications (1)

Publication Number Publication Date
CN116016813A true CN116016813A (en) 2023-04-25

Family

ID=86024268

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211690965.9A Pending CN116016813A (en) 2022-12-27 2022-12-27 Bullet time generation method, device and storage medium

Country Status (1)

Country Link
CN (1) CN116016813A (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111475676A (en) * 2020-04-07 2020-07-31 深圳市超高清科技有限公司 Video data processing method, system, device, equipment and readable storage medium
CN112738010A (en) * 2019-10-28 2021-04-30 阿里巴巴集团控股有限公司 Data interaction method and system, interaction terminal and readable storage medium
CN112887794A (en) * 2021-01-26 2021-06-01 维沃移动通信有限公司 Video editing method and device
CN114697466A (en) * 2022-03-17 2022-07-01 杭州海康威视数字技术股份有限公司 Video frame acquisition synchronization control

Similar Documents

Publication Publication Date Title
US11943486B2 (en) Live video broadcast method, live broadcast device and storage medium
CN104811814B (en) Information processing method and system, client and server based on video playing
CN106331877B (en) Barrage playback method and device
JP7587063B2 (en) Application page display method, device and equipment
EP4485946A2 (en) Special effect video determination method and apparatus, electronic device and storage medium
WO2019024257A1 (en) Method and device for publishing video files
CN112822541B (en) Video generation method and device, electronic equipment and computer readable medium
CN111683266A (en) Method and terminal for configuring subtitles through simultaneous translation of videos
CA3001480C (en) Video-production system with dve feature
CN111970532A (en) Video playing method, device and equipment
CN111277890B (en) Virtual gift acquisition method and three-dimensional panoramic living broadcast room generation method
CN109348155A (en) Video recording method, device, computer equipment and storage medium
US20170150212A1 (en) Method and electronic device for adjusting video
CN107635153B (en) Interaction method and system based on image data
WO2024051578A1 (en) Image capturing method and apparatus, device, and storage medium
JP2019047432A (en) Information processing apparatus, information processing method, and program
JP2019092146A (en) Distribution server, distribution program and terminal
CN110730340B (en) Virtual audience display method, system and storage medium based on lens transformation
EP4171006A1 (en) Previewing method and apparatus for effect application, and device and storage medium
CN115002335B (en) Video processing method, apparatus, electronic device, and computer-readable storage medium
CN112188219B (en) Video receiving method and device and video transmitting method and device
CN113476837B (en) Image quality display method, device, equipment and storage medium
CN114095785B (en) Video playing method and device and computer equipment
WO2024131577A1 (en) Method and apparatus for creating special effect, and device and medium
CN116582708B (en) Video playing method and device, electronic equipment and readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination