Disclosure of Invention
In a first aspect, an embodiment of the present application provides a video generating method, including:
acquiring a current audience viewing angle and a current audience position of an audience in a target scene;
finding out, from a plurality of cameras, at least one target camera in the same direction as the current audience viewing angle according to the current audience viewing angle, the current audience position, the view field directions of the plurality of cameras, and the distances between the plurality of cameras and a target intersection point;
calculating an included angle between the at least one target camera and the current viewing angle of the audience according to the current audience viewing angle, the current audience position, the view field direction of each target camera, the distance between each target camera and the target intersection point, and the position of the target intersection point;
shifting a video picture shot by the at least one target camera according to the included angle between the at least one target camera and the current viewing angle of the audience;
and generating a target video picture based on the offset video picture;
wherein the view field directions of the plurality of cameras all face the same area in the target scene, and the target intersection point is a common intersection point of the view field central lines of the plurality of cameras.
In one possible design, the generating the target video picture based on the offset video picture includes:
if the at least one target camera is one camera, taking the offset video picture as the target video picture;
and if the at least one target camera is a plurality of cameras, splicing the plurality of offset video pictures to obtain the target video picture.
In one possible design, before acquiring the current audience view angle and the current audience position of the audience in the target scene, the method further includes:
and adjusting the viewing angle and/or the position of the audience in the target scene in response to the adjustment operation of the user.
In one possible design, before adjusting the viewing angle and/or position of the viewer in the target scene in response to the user's adjustment operation, the method further comprises:
initializing the viewing angle and position of the audience in the target scene.
In one possible design, the method further comprises:
calculating the distance between at least one target camera and the current audience position;
before or after shifting the video picture shot by the at least one target camera according to the included angle between the at least one target camera and the current viewing angle of the audience, the method further includes:
scaling the video picture shot by the at least one target camera according to the distance between the at least one target camera and the current audience position;
and the generating a target video picture based on the offset video picture includes:
obtaining the target video picture based on the offset and scaled video picture.
In one possible design, the shifting the video picture shot by at least one target camera according to the included angle between the at least one target camera and the current viewing angle of the audience includes:
calculating an offset angle of a video picture corresponding to the at least one target camera according to the included angle between the at least one target camera and the current viewing angle of the audience;
and shifting the video picture shot by the at least one target camera according to the offset angle of the video picture corresponding to the at least one target camera.
In one possible design, the method further comprises:
receiving the view field directions of the plurality of cameras, the distances from the plurality of cameras to the target intersection point, and the identification numbers of the plurality of cameras, which are uploaded by a third-party device connected to the plurality of cameras.
In a second aspect, an embodiment of the present application provides a video generating apparatus, including:
an acquisition module, used for acquiring a current audience viewing angle and a current audience position of an audience in a target scene;
a searching module, used for finding out, from a plurality of cameras, at least one target camera in the same direction as the current audience viewing angle according to the current audience viewing angle, the current audience position, the view field directions of the plurality of cameras, and the distances between the plurality of cameras and a target intersection point;
a calculation module, used for calculating an included angle between the at least one target camera and the current viewing angle of the audience according to the current audience viewing angle, the current audience position, the view field direction of each target camera, the distance between each target camera and the target intersection point, and the position of the target intersection point;
an offset module, used for shifting a video picture shot by the at least one target camera according to the included angle between the at least one target camera and the current viewing angle of the audience;
and a generating module, used for generating a target video picture based on the offset video picture;
wherein the view field directions of the plurality of cameras all face the same area in the target scene, and the target intersection point is a common intersection point of the view field central lines of the plurality of cameras.
In a third aspect, an embodiment of the present application provides an electronic device, including a processor, a communication interface, a memory, and a communication bus, where the processor, the communication interface, and the memory communicate with one another through the communication bus;
the memory is used for storing a computer program;
and the processor is used for executing the program stored in the memory to implement the following process:
acquiring a current audience viewing angle and a current audience position of an audience in a target scene;
finding out, from a plurality of cameras, at least one target camera in the same direction as the current audience viewing angle according to the current audience viewing angle, the current audience position, the view field directions of the plurality of cameras, and the distances between the plurality of cameras and a target intersection point;
calculating an included angle between the at least one target camera and the current viewing angle of the audience according to the current audience viewing angle, the current audience position, the view field direction of each target camera, the distance between each target camera and the target intersection point, and the position of the target intersection point;
shifting a video picture shot by the at least one target camera according to the included angle between the at least one target camera and the current viewing angle of the audience;
and generating a target video picture based on the offset video picture;
wherein the view field directions of the plurality of cameras all face the same area in the target scene, and the target intersection point is a common intersection point of the view field central lines of the plurality of cameras.
In a fourth aspect, an embodiment of the present application provides a computer-readable storage medium having a computer program stored therein, where the computer program, when executed by a processor, implements the following process:
acquiring a current audience viewing angle and a current audience position of an audience in a target scene;
finding out, from a plurality of cameras, at least one target camera in the same direction as the current audience viewing angle according to the current audience viewing angle, the current audience position, the view field directions of the plurality of cameras, and the distances between the plurality of cameras and a target intersection point;
calculating an included angle between the at least one target camera and the current viewing angle of the audience according to the current audience viewing angle, the current audience position, the view field direction of each target camera, the distance between each target camera and the target intersection point, and the position of the target intersection point;
shifting a video picture shot by the at least one target camera according to the included angle between the at least one target camera and the current viewing angle of the audience;
and generating a target video picture based on the offset video picture;
wherein the view field directions of the plurality of cameras all face the same area in the target scene, and the target intersection point is a common intersection point of the view field central lines of the plurality of cameras.
The above at least one technical solution adopted by one or more embodiments of the present application can achieve the following beneficial effects:
By finding out a target camera in the same direction as the current viewing angle of the audience, calculating the included angle between the target camera and the current viewing angle of the audience, shifting the video picture shot by the at least one target camera based on that included angle, and generating a target video picture based on the offset video picture, the displayed video picture can change as the viewing angle of the audience changes. The video can therefore be switched to any viewing angle, and interaction between the audience and the scene is realized.
Detailed Description
In order to facilitate switching a video to any viewing angle, embodiments of the present application provide a video generating method, apparatus, electronic device, and storage medium, which enable a displayed video picture to change as the viewing angle of the audience changes.
First, in order to more intuitively understand the scheme provided by the embodiment of the present application, a system architecture of the video generation scheme provided by the embodiment of the present application is described below with reference to fig. 1.
Fig. 1 is a schematic view of an application environment of a video generating method, an apparatus, an electronic device, and a storage medium according to one or more embodiments of the present application. As shown in fig. 1, the third party device is connected to a plurality of cameras and is communicatively connected to a video playback device. The view field directions of the cameras are all oriented to the same area in the target scene, and the view field central lines of the cameras have a common intersection point, namely a target intersection point. The third party device may transmit data such as a field of view direction of the plurality of cameras, identification numbers of the plurality of cameras, distances of the plurality of cameras from the target intersection point, videos captured by the plurality of cameras, and the number of the plurality of cameras to the video playing device. The third party device may be, but is not limited to, a server, a personal computer or other devices for data summarizing and forwarding, the video playing device may be, but is not limited to, a personal computer, a smart phone, a tablet computer, a smart television or other devices with video playing function, and the identification number may be a number or a device code for uniquely identifying the camera.
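For intuition only (this sketch is not part of the original disclosure), the per-camera data that the third party device uploads to the video playing device might be modelled as a small record; the field names below are assumptions:

```python
from dataclasses import dataclass

@dataclass
class CameraInfo:
    camera_id: str               # identification number uniquely identifying the camera
    fov_dir_deg: float           # view field direction, in degrees in the scene's ground plane
    dist_to_intersection: float  # distance from the camera to the target intersection point

# Example: a camera facing "east" (90 degrees), 5 units from the target intersection.
cam = CameraInfo(camera_id="cam-01", fov_dir_deg=90.0, dist_to_intersection=5.0)
```

The third party device would send one such record per camera, along with the camera count and the synchronized video streams.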
The video generation method provided by the embodiment of the application will be described in detail.
The video generation method provided by the embodiment of the application can be applied to video playing equipment. For convenience of description, embodiments of the present application will be described with reference to a video playback device as an execution body, unless otherwise specified.
It will be appreciated that the execution body is not to be construed as limiting the embodiments of the application.
As shown in fig. 2, the video generating method provided by the embodiment of the present application may include the following steps:
Step S201, the current viewing angle of the audience in the target scene and the current audience position are acquired.
In the embodiment of the application, when the video playing device starts to play the video pictures shot by the cameras, the viewing angle and the position of the audience in the target scene may be initialized first. A user on the video playing device side may then initiate an adjustment operation on the viewing angle and/or the position of the audience; the video playing device responds to the adjustment operation of the user and adjusts the viewing angle and/or the position of the audience in the target scene.
After each response to an adjustment operation of the user, the video playing device may reacquire the current viewing angle and the current position of the audience in the target scene.
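The initialize-then-adjust flow in step S201 can be sketched as follows; the class and method names are illustrative assumptions, not part of the disclosure:

```python
class ViewerState:
    """Viewing angle (degrees) and position of the audience in the target scene."""

    def __init__(self, angle_deg=0.0, position=(0.0, 0.0)):
        # Initialisation of the audience's viewing angle and position,
        # performed before any user adjustment operation.
        self.angle_deg = angle_deg
        self.position = position

    def adjust(self, d_angle=0.0, dx=0.0, dy=0.0):
        # Applied in response to a user adjustment operation; afterwards the
        # current viewing angle/position are re-read to regenerate the picture.
        self.angle_deg = (self.angle_deg + d_angle) % 360.0
        self.position = (self.position[0] + dx, self.position[1] + dy)

viewer = ViewerState()
viewer.adjust(d_angle=450.0, dx=1.0)  # angle wraps modulo 360
```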
Step S202, at least one target camera in the same direction as the current viewing angle of the audience is found out from the plurality of cameras according to the current viewing angle of the audience, the current audience position, the view field directions of the plurality of cameras, and the distances between the plurality of cameras and the target intersection point.
In the embodiment of the application, the third party device is connected with the plurality of cameras and can send data such as the view field directions of the plurality of cameras, the identification numbers of the plurality of cameras, the distances between the plurality of cameras and the target intersection point, the videos shot by the plurality of cameras, and the number of the plurality of cameras to the video playing device. The video playing device may find out, from the plurality of cameras, at least one target camera co-directional with the current viewing angle of the audience based on the current viewing angle of the audience, the current audience position, the view field directions of the plurality of cameras, and the distances between the plurality of cameras and the target intersection point.
In the target scene, if the central line of the current viewing angle of the audience falls within the field of view of one or more cameras, the one or more cameras may be called target cameras in the same direction as the current viewing angle of the audience.
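The co-directional test of step S202 could be sketched as follows. This is a deliberate simplification (an assumption, not the disclosed algorithm): each camera's view field is treated as symmetric about its central line with an assumed angular half-width, so a camera qualifies when the audience's viewing direction deviates from the camera's view field direction by no more than that half-width.

```python
def angle_diff_deg(a, b):
    # Smallest absolute difference between two headings, folded into [0, 180].
    return abs((a - b + 180.0) % 360.0 - 180.0)

def find_target_cameras(viewer_dir_deg, cameras, half_fov_deg=30.0):
    # A camera counts as co-directional when the central line of the audience's
    # current viewing angle lies within the camera's view field, approximated
    # here as: heading difference <= half the assumed field width.
    return [cam for cam in cameras
            if angle_diff_deg(viewer_dir_deg, cam["fov_dir_deg"]) <= half_fov_deg]

cams = [{"id": 1, "fov_dir_deg": 0.0},
        {"id": 2, "fov_dir_deg": 90.0},
        {"id": 3, "fov_dir_deg": 180.0}]
targets = find_target_cameras(10.0, cams)  # only the camera facing 0 degrees qualifies
```

A real implementation would also use the current audience position and the camera positions to test whether the viewing centreline actually enters each camera's view field.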
It should be noted that when the third party device sends the videos shot by the plurality of cameras to the video playing device, the videos need to be sent synchronously under a unified clock generator, so as to ensure that the plurality of video pictures received by the video playing device remain clock-synchronized and to avoid spliced pictures being inconsistent with the actual scene due to asynchronous clocks during subsequent picture splicing.
Step S203, the included angle between the at least one target camera and the current viewing angle of the audience is calculated according to the current viewing angle of the audience, the current audience position, the view field direction of each target camera, the distance between each target camera and the target intersection point, and the position of the target intersection point.
The included angle between a target camera and the current viewing angle of the audience may be the angle between the view field central line of the target camera and the central line of the current viewing angle of the audience.
In the embodiment of the application, the target intersection point is the center point of the target scene, and the position of the target intersection point is known. When the included angle between the at least one target camera and the current viewing angle of the audience is calculated, the view field central line direction of each of the at least one target camera can be determined according to the view field direction of that target camera, and the position of each target camera can be determined according to its view field central line direction, its distance from the target intersection point, and the position of the target intersection point. The included angle between each target camera and the current viewing angle of the audience is then calculated according to the position of the target camera, its view field central line direction, the central line direction of the current viewing angle of the audience, and the current audience position.
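The geometry described above might be sketched in a 2D ground plane as follows. The coordinate conventions (intersection as origin, headings in degrees, camera facing toward the intersection) are assumptions made for illustration:

```python
import math

def camera_position(intersection, fov_dir_deg, dist):
    # The camera faces the target intersection along its view field central
    # line, so it sits `dist` behind the intersection, along the opposite of
    # its view field direction.
    rad = math.radians(fov_dir_deg)
    return (intersection[0] - dist * math.cos(rad),
            intersection[1] - dist * math.sin(rad))

def included_angle_deg(viewer_dir_deg, cam_fov_dir_deg):
    # Angle between the central line of the audience's current viewing angle
    # and the camera's view field central line, folded into [0, 180].
    return abs((viewer_dir_deg - cam_fov_dir_deg + 180.0) % 360.0 - 180.0)

pos = camera_position((0.0, 0.0), 0.0, 5.0)   # camera 5 units "west" of the origin
angle = included_angle_deg(30.0, 350.0)        # 40-degree included angle
```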
Step S204, the video picture shot by the at least one target camera is shifted according to the included angle between the at least one target camera and the current viewing angle of the audience.
Specifically, the offset angle of the video picture corresponding to each target camera may be calculated according to the included angle between each of the at least one target camera and the current viewing angle of the audience; the video picture shot by each target camera is then shifted according to the offset angle of the video picture corresponding to that target camera.
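One way the shift of step S204 could be realized is to map the offset angle to a horizontal pixel translation; the linear (equiangular) projection model below is an assumption for illustration, not the disclosed method:

```python
def pixel_offset(offset_angle_deg, frame_width_px, horizontal_fov_deg):
    # Under a simple equiangular model, an angular offset maps linearly to a
    # horizontal pixel shift across the picture width.
    return round(offset_angle_deg * frame_width_px / horizontal_fov_deg)

def shift_row(row, shift_px, fill=0):
    # Shift one row of pixels horizontally, padding exposed pixels with `fill`.
    if shift_px >= 0:
        return [fill] * shift_px + row[:len(row) - shift_px]
    return row[-shift_px:] + [fill] * (-shift_px)

px = pixel_offset(10.0, 1920, 60.0)  # 10-degree offset in a 60-degree-wide picture
```

Shifting the whole picture then amounts to applying `shift_row` to every row.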
In one possible design, the video playback device may also calculate the distance of each target camera from the current viewer position based on the current viewer position and the position of each target camera. Before or after the video picture shot by at least one target camera is shifted, the video picture shot by each target camera can be scaled according to the distance between each target camera and the current audience position. Specifically, the video picture shot by each target camera can be amplified, and the larger the distance between the target camera and the current audience position is, the larger the amplification factor is. In this way, the size of the object in the scaled video picture is kept consistent with the size of the object photographed from the current viewer position, thereby avoiding distortion of the video picture.
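The scaling rule above (farther camera, larger magnification) is consistent with a pinhole-camera simplification, sketched below. This model is an assumption, not the disclosed formula: apparent size is inversely proportional to distance from the subject, so enlarging by the ratio of distances makes the camera's picture match what would be seen from the audience's virtual position; when the camera lies farther from the subject than the audience, a greater camera-to-audience distance yields a greater factor.

```python
def scale_factor(cam_to_subject_dist, viewer_to_subject_dist):
    # Pinhole-model simplification (assumed, not from the disclosure):
    # magnify the camera's picture by the ratio of the camera's and the
    # audience's distances from the photographed subject.
    return cam_to_subject_dist / viewer_to_subject_dist

factor = scale_factor(20.0, 10.0)  # camera twice as far as the audience
```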
Step S205, a target video picture is generated based on the offset video picture.
Specifically, if the at least one target camera is one camera, the offset video picture is taken as the target video picture. If the at least one target camera is a plurality of cameras, the plurality of offset video pictures are spliced to obtain the target video picture.
Splicing of video pictures is known in the prior art and is not described in detail in the embodiments of the present application.
In one possible design, if the video pictures shot by the target cameras have also been scaled, the target video picture is generated based on the offset and scaled video pictures; if the at least one target camera is a plurality of cameras, the plurality of offset and scaled video pictures are spliced to obtain the target video picture.
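The one-camera versus many-camera branching of step S205 can be sketched as follows; representing a picture as a list of equal-length pixel rows, and splicing as simple side-by-side concatenation, are illustrative assumptions standing in for the real splicing step, which the text treats as known art:

```python
def generate_target_picture(frames):
    # frames: the offset (and possibly scaled) pictures, one per target
    # camera, each given as a list of pixel rows of equal height.
    if len(frames) == 1:
        # One target camera: its picture is the target video picture.
        return frames[0]
    # Several target cameras: splice the pictures side by side, row by row.
    return [sum((frame[r] for frame in frames), [])
            for r in range(len(frames[0]))]

a = [[1, 2], [3, 4]]  # tiny 2x2 "pictures" for illustration
b = [[5, 6], [7, 8]]
single = generate_target_picture([a])
spliced = generate_target_picture([a, b])
```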
In summary, in the video generating method provided by the embodiment of the application, a target camera in the same direction as the current viewing angle of the audience is found out, the included angle between the target camera and the current viewing angle of the audience is calculated, the video picture shot by the at least one target camera is shifted based on that included angle, and a target video picture is generated based on the offset video picture. The displayed video picture can therefore change as the viewing angle of the audience changes, so that the video can be switched to any viewing angle and interaction between the audience and the scene is realized. Meanwhile, when the third party device sends the videos shot by the plurality of cameras to the video playing device, the videos are sent synchronously, which ensures clock synchronization of the plurality of video pictures received by the video playing device and avoids spliced pictures being inconsistent with the actual scene due to asynchronous clocks during subsequent picture splicing. In addition, the video picture shot by each target camera can be scaled according to the distance between the target camera and the current audience position, so that the size of objects in the scaled video picture is consistent with the size of objects shot from the current audience position, thereby avoiding distortion of the video picture.
Fig. 3 is a schematic structural diagram of an electronic device according to an embodiment of the present application. Referring to fig. 3, at the hardware level, the electronic device includes a processor and, optionally, an internal bus, a network interface, and a memory. The memory may include an internal memory, such as a random-access memory (RAM), and may further include a non-volatile memory, such as at least one disk memory. Of course, the electronic device may also include hardware required for other services.
The processor, the network interface, and the memory may be interconnected by the internal bus, which may be an Industry Standard Architecture (ISA) bus, a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. The bus may be classified into an address bus, a data bus, a control bus, and the like. For ease of illustration, only one bi-directional arrow is shown in fig. 3, but this does not mean that there is only one bus or one type of bus.
The memory is used for storing a program. Specifically, the program may include program code, and the program code includes computer operating instructions. The memory may include an internal memory and a non-volatile memory, and provides instructions and data to the processor.
The processor reads the corresponding computer program from the non-volatile memory into the internal memory and then runs it, forming the video generating apparatus at the logic level. The processor executes the program stored in the memory and is specifically used for performing the following operations:
acquiring a current audience viewing angle and a current audience position of an audience in a target scene;
finding out, from a plurality of cameras, at least one target camera in the same direction as the current audience viewing angle according to the current audience viewing angle, the current audience position, the view field directions of the plurality of cameras, and the distances between the plurality of cameras and a target intersection point;
calculating an included angle between the at least one target camera and the current viewing angle of the audience according to the current audience viewing angle, the current audience position, the view field direction of each target camera, the distance between each target camera and the target intersection point, and the position of the target intersection point;
shifting a video picture shot by the at least one target camera according to the included angle between the at least one target camera and the current viewing angle of the audience;
and generating a target video picture based on the offset video picture;
wherein the view field directions of the plurality of cameras all face the same area in the target scene, and the target intersection point is a common intersection point of the view field central lines of the plurality of cameras.
The method performed by the video generating apparatus disclosed in the embodiment of fig. 3 of the present application may be applied to a processor or implemented by a processor. The processor may be an integrated circuit chip having signal processing capabilities. In implementation, the steps of the above method may be completed by integrated logic circuits of hardware in the processor or by instructions in the form of software. The processor may be a general-purpose processor, including a central processing unit (CPU), a network processor (NP), and the like; it may also be a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or another programmable logic device, a discrete gate or transistor logic device, or discrete hardware components. The methods, steps, and logic diagrams disclosed in one or more embodiments of the present application may be implemented or performed by such a processor. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like. The steps of the method disclosed in connection with one or more embodiments of the application may be embodied directly as being executed by a hardware decoding processor, or executed by a combination of hardware and software modules in a decoding processor. The software module may be located in a storage medium well known in the art, such as a random access memory, a flash memory, a read-only memory, a programmable read-only memory, an electrically erasable programmable memory, or a register. The storage medium is located in the memory, and the processor reads the information in the memory and completes the steps of the above method in combination with its hardware.
The electronic device may also execute the method of fig. 2 and implement the functions of the video generating apparatus in the embodiment shown in fig. 3, which is not described herein.
Of course, the electronic device of the present application does not exclude other implementations, such as a logic device or a combination of software and hardware; that is, the execution body of the above processing flow is not limited to each logic unit, and may also be hardware or a logic device.
The embodiments of the present application also provide a computer-readable storage medium storing one or more programs, the one or more programs comprising instructions, which when executed by a portable electronic device comprising a plurality of application programs, enable the portable electronic device to perform the method of the embodiment of fig. 2, and in particular to perform the operations of:
acquiring a current audience viewing angle and a current audience position of an audience in a target scene;
finding out, from a plurality of cameras, at least one target camera in the same direction as the current audience viewing angle according to the current audience viewing angle, the current audience position, the view field directions of the plurality of cameras, and the distances between the plurality of cameras and a target intersection point;
calculating an included angle between the at least one target camera and the current viewing angle of the audience according to the current audience viewing angle, the current audience position, the view field direction of each target camera, the distance between each target camera and the target intersection point, and the position of the target intersection point;
shifting a video picture shot by the at least one target camera according to the included angle between the at least one target camera and the current viewing angle of the audience;
and generating a target video picture based on the offset video picture;
wherein the view field directions of the plurality of cameras all face the same area in the target scene, and the target intersection point is a common intersection point of the view field central lines of the plurality of cameras.
Fig. 4 is a schematic structural diagram of a video generating apparatus according to an embodiment of the present application. Referring to fig. 4, in a software implementation, the video generating apparatus includes:
an acquisition module, used for acquiring a current audience viewing angle and a current audience position of an audience in a target scene;
a searching module, used for finding out, from a plurality of cameras, at least one target camera in the same direction as the current audience viewing angle according to the current audience viewing angle, the current audience position, the view field directions of the plurality of cameras, and the distances between the plurality of cameras and a target intersection point;
a calculation module, used for calculating an included angle between the at least one target camera and the current viewing angle of the audience according to the current audience viewing angle, the current audience position, the view field direction of each target camera, the distance between each target camera and the target intersection point, and the position of the target intersection point;
an offset module, used for shifting a video picture shot by the at least one target camera according to the included angle between the at least one target camera and the current viewing angle of the audience;
and a generating module, used for generating a target video picture based on the offset video picture;
wherein the view field directions of the plurality of cameras all face the same area in the target scene, and the target intersection point is a common intersection point of the view field central lines of the plurality of cameras.
In summary, the foregoing description is only a preferred embodiment of the present document, and is not intended to limit the scope of the present document. Any modifications, equivalent substitutions, improvements, etc. made within the spirit and principles of this document should be included within the scope of protection of this document.
The system, apparatus, module or unit set forth in the above embodiments may be implemented in particular by a computer chip or entity, or by a product having a certain function. One typical implementation is a computer. In particular, the computer may be, for example, a personal computer, a laptop computer, a cellular telephone, a camera phone, a smart phone, a personal digital assistant, a media player, a navigation device, an email device, a game console, a tablet computer, a wearable device, or a combination of any of these devices.
Computer readable media include permanent and non-permanent, removable and non-removable media, and may implement information storage by any method or technology. The information may be computer readable instructions, data structures, modules of a program, or other data. Examples of storage media for a computer include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium, which can be used to store information that can be accessed by a computing device. As defined herein, computer readable media do not include transitory computer readable media (transmission media), such as modulated data signals and carrier waves.
All embodiments in this document are described in a progressive manner; identical or similar parts of the embodiments may be referred to one another, and each embodiment focuses on its differences from the other embodiments. In particular, the system embodiments are described relatively simply because they are substantially similar to the method embodiments; for relevant parts, reference may be made to the description of the method embodiments.