
CN113596544B - Video generation method, device, electronic equipment and storage medium - Google Patents


Info

Publication number
CN113596544B
CN113596544B (application number CN202110844901A)
Authority
CN
China
Prior art keywords
target
audience
current
cameras
target camera
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110844901.9A
Other languages
Chinese (zh)
Other versions
CN113596544A (en)
Inventor
王博
刘智美
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Super Language (Beijing) Technology Co.,Ltd.
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Individual filed Critical Individual
Priority to CN202110844901.9A priority Critical patent/CN113596544B/en
Publication of CN113596544A publication Critical patent/CN113596544A/en
Application granted granted Critical
Publication of CN113596544B publication Critical patent/CN113596544B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/41Structure of client; Structure of client peripherals
    • H04N21/426Internal components of the client; Characteristics thereof
    • H04N21/42653Internal components of the client; Characteristics thereof for processing graphics
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
    • H04N21/4402Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs involving reformatting operations of video signals for household redistribution, storage or real-time display
    • H04N21/440263Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs involving reformatting operations of video signals for household redistribution, storage or real-time display by altering the spatial resolution, e.g. for displaying on a connected PDA
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/442Monitoring of processes or resources, e.g. detecting the failure of a recording device, monitoring the downstream bandwidth, the number of times a movie has been viewed, the storage space available from the internal hard disk
    • H04N21/44213Monitoring of end-user related data
    • H04N21/44218Detecting physical presence or behaviour of the user, e.g. using sensors to detect if the user is leaving the room or changes his face expression during a TV program
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/45Management operations performed by the client for facilitating the reception of or the interaction with the content or administrating data related to the end-user or to the client device itself, e.g. learning user preferences for recommending movies, resolving scheduling conflicts
    • H04N21/4508Management of client data or end-user data
    • H04N21/4524Management of client data or end-user data involving the geographical location of the client
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/90Arrangement of cameras or camera modules, e.g. multiple cameras in TV studios or sports stadiums

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Databases & Information Systems (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Social Psychology (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Computer Graphics (AREA)
  • Television Signal Processing For Recording (AREA)
  • Studio Devices (AREA)

Abstract

The present application provides a video generation method, device, electronic device and storage medium, which relates to the field of image processing technology. The method includes obtaining the current viewer perspective and current viewer position of the viewer in the target scene; according to the current viewer perspective, the current viewer position, the field of view directions of multiple cameras and the distances of multiple cameras from the target intersection, finding at least one target camera in the same direction as the current viewer perspective; according to the current viewer perspective, the current viewer position, the field of view directions of each target camera, the distances of each target camera from the target intersection and the position of the target intersection, calculating the current view angle between the target camera and the viewer; according to the current view angle between the target camera and the viewer, offsetting the video picture captured by the target camera; based on the offset video picture, generating the target video picture. The method, device, electronic device and storage medium provided in the present application can realize arbitrary view angle switching of the video.

Description

Video generation method, device, electronic equipment and storage medium
Technical Field
The present application relates to the field of image processing technologies, and in particular, to a video generating method, a video generating device, an electronic device, and a storage medium.
Background
With the popularity of live broadcast and short video, people place higher demands on the form in which video is viewed, and panoramic video is a technology that arose in response to this demand. However, panoramic video is mainly applied to recording a fixed range, and during viewing the audience can only watch from a designated angle and cannot interact with the scene. For example, in a panoramic video showing a study, a viewer has no way to walk over to the desk to see the brand of the desk, nor to walk around to the other side of the desk to see what is on that side. The viewing angle of the viewer is limited to the position of the shooting equipment; the viewer can neither approach an object of interest nor change the angle from which the same object is seen.
Therefore, how to provide an effective solution that enables arbitrary view angle switching of video has become a challenge in the prior art.
Disclosure of Invention
In a first aspect, an embodiment of the present application provides a video generating method, including:
acquiring a current audience view angle and a current audience position of an audience in a target scene;
finding out at least one target camera in the same direction as the current audience view angle from a plurality of cameras according to the current audience view angle, the current audience position, the view field directions of the plurality of cameras and the distances between the plurality of cameras and a target intersection point;
calculating a current view angle included angle between the at least one target camera and the audience according to the current audience view angle, the current audience position, the view field direction of each target camera, the distance between each target camera and the target intersection point and the position of the target intersection point;
offsetting the video picture shot by the at least one target camera according to the included angle between the at least one target camera and the current view angle of the audience;
generating a target video picture based on the offset video picture;
wherein the view field directions of the cameras all face the same area in the target scene, and the target intersection point is a common intersection point of the view field center lines of the cameras.
In one possible design, the generating the target video frame based on the offset video frame includes:
if the at least one target camera is a single camera, taking the video picture obtained after the offset as the target video picture;
and if the at least one target camera is a plurality of cameras, splicing the plurality of video pictures obtained after the offset to obtain the target video picture.
In one possible design, before acquiring the current audience view angle and the current audience position of the audience in the target scene, the method further includes:
and adjusting the viewing angle and/or the position of the audience in the target scene in response to the adjustment operation of the user.
In one possible design, before adjusting the viewing angle and/or position of the viewer in the target scene in response to the user's adjustment operation, the method further comprises:
The viewing angle and the position of the viewer in the target scene are initialized.
In one possible design, the method further comprises:
calculating the distance between the at least one target camera and the current audience position;
before or after offsetting the video picture shot by the at least one target camera according to the included angle between the at least one target camera and the current viewing angle of the audience, the method further includes:
scaling the video picture shot by the at least one target camera according to the distance between the at least one target camera and the current audience position;
the generating a target video picture based on the offset video picture includes:
obtaining the target video picture based on the offset and scaled video picture.
In one possible design, the shifting the video picture shot by at least one target camera according to the included angle between the at least one target camera and the current viewing angle of the audience includes:
calculating an offset angle of the video picture corresponding to the at least one target camera according to the included angle between the at least one target camera and the current viewing angle of the audience;
and offsetting the video picture shot by the at least one target camera according to the offset angle of the video picture corresponding to the at least one target camera.
In one possible design, the method further comprises:
receiving the view field directions of the cameras, the distances from the cameras to the target intersection point and the identification numbers of the cameras, which are uploaded by a third-party device connected with the cameras.
In a second aspect, an embodiment of the present application provides a video generating apparatus, including:
the acquisition module is used for acquiring the current audience view angle and the current audience position of the audience in the target scene;
the searching module is used for finding out at least one target camera in the same direction as the current audience view angle from the plurality of cameras according to the current audience view angle, the current audience position, the view field directions of the plurality of cameras and the distances between the plurality of cameras and the target intersection point;
the calculation module is used for calculating the current view angle included angle between the at least one target camera and the audience according to the current audience view angle, the current audience position, the view field direction of each target camera, the distance between each target camera and the target intersection point and the position of the target intersection point;
the offset module is used for offsetting the video picture shot by the at least one target camera according to the included angle between the at least one target camera and the current view angle of the audience;
the generating module is used for generating the target video picture based on the offset video picture;
wherein the view field directions of the cameras all face the same area in the target scene, and the target intersection point is a common intersection point of the view field center lines of the cameras.
In a third aspect, an embodiment of the present application provides an electronic device, including a processor, a communication interface, a memory, and a communication bus, where the processor, the communication interface, and the memory communicate with each other through the communication bus;
a memory for storing a computer program;
and a processor for executing the program stored in the memory and implementing the following process:
acquiring a current audience view angle and a current audience position of an audience in a target scene;
finding out at least one target camera in the same direction as the current audience view angle from a plurality of cameras according to the current audience view angle, the current audience position, the view field directions of the plurality of cameras and the distances between the plurality of cameras and a target intersection point;
calculating a current view angle included angle between the at least one target camera and the audience according to the current audience view angle, the current audience position, the view field direction of each target camera, the distance between each target camera and the target intersection point and the position of the target intersection point;
offsetting the video picture shot by the at least one target camera according to the included angle between the at least one target camera and the current view angle of the audience;
generating a target video picture based on the offset video picture;
wherein the view field directions of the cameras all face the same area in the target scene, and the target intersection point is a common intersection point of the view field center lines of the cameras.
In a fourth aspect, embodiments of the present application provide a computer-readable storage medium having a computer program stored therein, where the computer program, when executed by a processor, implements the following process:
acquiring a current audience view angle and a current audience position of an audience in a target scene;
finding out at least one target camera in the same direction as the current audience view angle from a plurality of cameras according to the current audience view angle, the current audience position, the view field directions of the plurality of cameras and the distances between the plurality of cameras and a target intersection point;
calculating a current view angle included angle between the at least one target camera and the audience according to the current audience view angle, the current audience position, the view field direction of each target camera, the distance between each target camera and the target intersection point and the position of the target intersection point;
offsetting the video picture shot by the at least one target camera according to the included angle between the at least one target camera and the current view angle of the audience;
generating a target video picture based on the offset video picture;
wherein the view field directions of the cameras all face the same area in the target scene, and the target intersection point is a common intersection point of the view field center lines of the cameras.
The above at least one technical solution adopted by one or more embodiments of the present application can achieve the following beneficial effects:
In the method, a target camera in the same direction as the current viewing angle of the audience is found, the current viewing angle included angle between the target camera and the audience is calculated, the video picture shot by the at least one target camera is then offset based on that included angle, and the target video picture is generated based on the offset video picture. In this way, the displayed video picture changes as the viewing angle of the audience changes, so the video can be switched to any viewing angle and interaction between the audience and the scene is realized.
Drawings
The accompanying drawings, which are included to provide a further understanding of the present application, illustrate and explain the present application and are not to be construed as limiting it. In the drawings:
Fig. 1 is a schematic view of an application environment of a video generating method, a video generating device, an electronic device and a storage medium according to an embodiment of the present application.
Fig. 2 is a flowchart of a video generating method according to an embodiment of the present application.
Fig. 3 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Fig. 4 is a schematic structural diagram of a video generating apparatus according to an embodiment of the present application.
Detailed Description
In order to facilitate switching a video to any viewing angle, embodiments of the present application provide a video generating method, apparatus, electronic device, and storage medium that enable the displayed video picture to change as the viewing angle of the viewer changes, thereby facilitating switching the video to any viewing angle.
First, in order to more intuitively understand the scheme provided by the embodiment of the present application, a system architecture of the video generation scheme provided by the embodiment of the present application is described below with reference to fig. 1.
Fig. 1 is a schematic view of an application environment of a video generating method, an apparatus, an electronic device, and a storage medium according to one or more embodiments of the present application. As shown in fig. 1, the third party device is connected to a plurality of cameras and is communicatively connected to a video playback device. The view field directions of the cameras are all oriented to the same area in the target scene, and the view field central lines of the cameras have a common intersection point, namely a target intersection point. The third party device may transmit data such as a field of view direction of the plurality of cameras, identification numbers of the plurality of cameras, distances of the plurality of cameras from the target intersection point, videos captured by the plurality of cameras, and the number of the plurality of cameras to the video playing device. The third party device may be, but is not limited to, a server, a personal computer or other devices for data summarizing and forwarding, the video playing device may be, but is not limited to, a personal computer, a smart phone, a tablet computer, a smart television or other devices with video playing function, and the identification number may be a number or a device code for uniquely identifying the camera.
The video generation method provided by the embodiment of the application will be described in detail.
The video generation method provided by the embodiment of the application can be applied to video playing equipment. For convenience of description, embodiments of the present application will be described with reference to a video playback device as an execution body, unless otherwise specified.
It will be appreciated that the execution body is not to be construed as limiting the embodiments of the application.
As shown in fig. 2, the video generating method provided by the embodiment of the present application may include the following steps:
step S201, a current viewing angle of the viewer in the target scene and a current viewer position are acquired.
In the embodiment of the application, when the video playing device starts to play the video pictures shot by the cameras, the viewing angle and the position of the audience in the target scene may first be initialized. A user on the video playing device side may then initiate an adjustment operation for the viewing angle and/or the position of the audience, and the video playing device responds to the adjustment operation and adjusts the viewing angle and/or the position of the audience in the target scene.
After each response to an adjustment operation of the user, the video playing device can reacquire the current viewing angle and the current position of the audience in the target scene.
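As an illustration only (not part of the patent disclosure), the viewer-state handling of this step can be sketched in Python; the class name `ViewerPose`, the 2-D scene plane, and bearings in degrees are all assumptions:

```python
# Hypothetical sketch of the viewer state used in step S201: the playback
# device initializes a viewing angle and position, then updates them in
# response to user adjustment operations.
class ViewerPose:
    def __init__(self, angle_deg=0.0, x=0.0, y=0.0):
        self.angle_deg = angle_deg  # bearing of the viewer's view center line
        self.x, self.y = x, y       # viewer position in the scene plane

    def adjust(self, d_angle=0.0, dx=0.0, dy=0.0):
        # Respond to an adjustment operation by updating the pose.
        self.angle_deg = (self.angle_deg + d_angle) % 360.0
        self.x += dx
        self.y += dy
        return self.angle_deg, (self.x, self.y)

pose = ViewerPose()                       # initialization before step S201
print(pose.adjust(d_angle=30.0, dx=1.0))  # (30.0, (1.0, 0.0))
```

After each `adjust` call, the returned angle and position correspond to the "current audience view angle" and "current audience position" that the playing device reacquires.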
Step S202, at least one target camera which is in the same direction as the current viewing angle of the audience is found out from the cameras according to the current viewing angle of the audience, the current audience position, the view field directions of the cameras and the distances between the cameras and the target intersection point.
In the embodiment of the application, the third-party device is connected with the plurality of cameras, and can send data such as the view field directions of the plurality of cameras, the identification numbers of the plurality of cameras, the distances between the plurality of cameras and the target intersection point, the videos shot by the plurality of cameras, and the number of the plurality of cameras to the video playing device. The video playing device may then find, from the plurality of cameras, at least one target camera that is co-directional with the current viewing angle of the viewer based on the current viewing angle of the viewer, the current viewer position, the field of view directions of the plurality of cameras, and the distances of the plurality of cameras from the target intersection point.
In the target scene, if the center line of the current viewing angle of the audience falls within the field of view of one or more cameras, those one or more cameras are called target cameras in the same direction as the current viewing angle of the audience.
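The selection criterion can be sketched as follows (a simplified 2-D model; treating a camera as "same direction" when the viewer's view bearing deviates from the camera's field-of-view direction by less than half the camera's field of view is an assumption for illustration, not the patented criterion):

```python
def angular_diff(a, b):
    """Smallest absolute difference between two bearings in degrees."""
    return abs((a - b + 180.0) % 360.0 - 180.0)

def find_target_cameras(viewer_angle_deg, cameras, half_fov_deg=30.0):
    # cameras: list of (identification number, field-of-view direction in deg).
    # A camera qualifies if the viewer's view center line deviates from the
    # camera's field-of-view direction by less than half the camera FOV.
    return [cam_id for cam_id, fov_dir in cameras
            if angular_diff(viewer_angle_deg, fov_dir) < half_fov_deg]

cameras = [("cam0", 0.0), ("cam1", 45.0), ("cam2", 90.0)]
print(find_target_cameras(20.0, cameras))  # ['cam0', 'cam1']
```

With the viewer looking at bearing 20°, both cam0 (20° away) and cam1 (25° away) fall inside the 30° half-FOV threshold, so more than one target camera can be selected at once, as the method allows.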
It should be noted that, when the third-party device sends the videos shot by the plurality of cameras to the video playing device, the videos need to be sent synchronously under a unified clock generator, so as to ensure that the multiple video pictures received by the video playing device remain clock-synchronized and to avoid spliced pictures that are inconsistent with the actual scene because the clocks were not synchronized during subsequent picture splicing.
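The synchronization requirement can be illustrated with a minimal sketch (the timestamp-keyed streams and the function name are hypothetical; a real deployment would rely on the unified clock generator mentioned above rather than set arithmetic):

```python
def align_by_timestamp(streams):
    # Keep only timestamps present in every stream, so the frames that are
    # later spliced together share a common clock. Each stream is a mapping
    # from timestamp (ms) to a frame; dict iteration yields the keys.
    common = set(streams[0]).intersection(*streams[1:])
    return sorted(common)

s1 = {0: "f0", 40: "f1", 80: "f2"}   # camera 1: frames every 40 ms
s2 = {0: "g0", 40: "g1", 120: "g3"}  # camera 2: one frame lost
print(align_by_timestamp([s1, s2]))  # [0, 40]
```

Only frames stamped 0 ms and 40 ms exist in both streams, so only those pairs would be spliced; unmatched frames are dropped rather than combined with a frame from a different instant.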
Step S203, calculating the current viewing angle included angle between at least one target camera and the audience according to the current viewing angle of the audience, the current audience position, the viewing field direction of each target camera, the distance between each target camera and the target intersection point and the position of the target intersection point.
The included angle between the target camera and the current viewing angle of the audience can be an included angle between the central line of the viewing field of the target camera and the central line of the viewing angle of the current audience.
In the embodiment of the application, the target intersection point is the center point of the target scene, and its position is known. When the included angle between the at least one target camera and the current viewing angle of the audience is calculated, the center line direction of the viewing field of each target camera can be determined from that camera's viewing field direction, and the position of each target camera can be determined from the center line direction of its viewing field, the distance between the camera and the target intersection point, and the position of the target intersection point. The current viewing angle included angle between each target camera and the audience is then calculated from the position of each target camera, the center line direction of its viewing field, the center line direction of the current viewing angle of the audience, and the current audience position.
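The geometry can be sketched in a hypothetical 2-D scene plane (function names and the degree convention are assumptions for illustration). The camera faces the target intersection point along its field-of-view center line, so its position lies at the known distance in the opposite direction:

```python
import math

def camera_position(intersection, fov_dir_deg, dist):
    # The camera looks toward the target intersection along its field-of-view
    # center line, so it sits `dist` away in the opposite direction.
    ix, iy = intersection
    rad = math.radians(fov_dir_deg)
    return (ix - dist * math.cos(rad), iy - dist * math.sin(rad))

def included_angle(cam_pos, intersection, viewer_dir_deg):
    # Bearing of the camera's view center line (camera -> intersection),
    # compared with the bearing of the viewer's current view center line.
    cam_bearing = math.degrees(math.atan2(intersection[1] - cam_pos[1],
                                          intersection[0] - cam_pos[0]))
    return abs((cam_bearing - viewer_dir_deg + 180.0) % 360.0 - 180.0)

cam = camera_position((0.0, 0.0), fov_dir_deg=0.0, dist=5.0)
print(cam)                                    # (-5.0, 0.0)
print(included_angle(cam, (0.0, 0.0), 30.0))  # 30.0
```

A camera 5 units west of the intersection and looking east, with the viewer looking at bearing 30°, gives a 30° included angle, which drives the offset in the next step.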
Step S204, offsetting the video frame shot by the at least one target camera according to the included angle between the at least one target camera and the current viewing angle of the audience.
Specifically, the offset angle of the video frame corresponding to each target camera may be calculated according to the included angle between each target camera in at least one target camera and the current viewing angle of the audience. And then, shifting the video pictures shot by each target camera according to the shifting angles of the video pictures corresponding to each target camera.
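A minimal sketch of the offset step, assuming the frame spans a fixed horizontal field of view and the offset angle maps linearly to a horizontal pixel shift (both are assumptions; the patent does not fix this mapping). Frames are plain row lists for illustration, with vacated columns zero-filled:

```python
def shift_frame(frame, offset_deg, horizontal_fov_deg=60.0):
    # Map the offset angle to a horizontal pixel shift: a positive angle
    # shifts the picture right, a negative angle shifts it left.
    width = len(frame[0])
    shift = round(offset_deg / horizontal_fov_deg * width)
    shifted = []
    for row in frame:
        if shift >= 0:
            shifted.append([0] * shift + row[:width - shift])
        else:
            shifted.append(row[-shift:] + [0] * (-shift))
    return shifted

frame = [[1, 2, 3, 4, 5, 6]]
print(shift_frame(frame, offset_deg=10.0))  # [[0, 1, 2, 3, 4, 5]]
```

A 10° offset over a 60° FOV moves a 6-pixel-wide row by one pixel; a real implementation would use an image warp rather than integer column shifts.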
In one possible design, the video playback device may also calculate the distance of each target camera from the current viewer position based on the current viewer position and the position of each target camera. Before or after the video picture shot by at least one target camera is shifted, the video picture shot by each target camera can be scaled according to the distance between each target camera and the current audience position. Specifically, the video picture shot by each target camera can be amplified, and the larger the distance between the target camera and the current audience position is, the larger the amplification factor is. In this way, the size of the object in the scaled video picture is kept consistent with the size of the object photographed from the current viewer position, thereby avoiding distortion of the video picture.
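The scaling described above can be sketched as nearest-neighbour magnification whose factor grows with the camera-to-viewer distance; the linear mapping and the reference distance are assumptions for illustration, not the patented formula:

```python
def scale_frame(frame, cam_to_viewer_dist, ref_dist=1.0):
    # Magnification grows with the distance between the target camera and
    # the current viewer position; frames closer than the reference
    # distance are left unscaled.
    factor = max(1.0, cam_to_viewer_dist / ref_dist)
    h, w = len(frame), len(frame[0])
    out_h, out_w = round(h * factor), round(w * factor)
    # Nearest-neighbour resampling over plain row lists.
    return [[frame[int(r / factor)][int(c / factor)]
             for c in range(out_w)] for r in range(out_h)]

print(scale_frame([[1, 2], [3, 4]], cam_to_viewer_dist=2.0))
# [[1, 1, 2, 2], [1, 1, 2, 2], [3, 3, 4, 4], [3, 3, 4, 4]]
```

Doubling the distance doubles each dimension, so objects in the scaled picture appear as large as they would from the viewer's closer vantage point.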
In step S205, a target video frame is generated based on the shifted video frame.
Specifically, if at least one target camera is one camera, the video picture obtained after the offset is taken as a target video picture. And if the at least one target camera is a plurality of cameras, splicing the plurality of video pictures obtained after the offset to obtain a target video picture.
Splicing video pictures is prior art, and a specific description is not given in the embodiments of the present application.
In one possible design, if the video pictures shot by each target camera are scaled, the target video picture may be generated based on the offset and scaled video pictures. If the at least one target camera is a plurality of cameras, the plurality of video pictures obtained after the offset and the scaling are spliced to obtain the target video picture.
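Step S205 can be sketched as follows; true splicing involves feature matching and blending, so simple side-by-side concatenation of equal-height frames stands in for it here (an illustration, not the patented method):

```python
def generate_target_frame(frames):
    # A single target camera's offset frame is used directly; frames from
    # multiple target cameras are spliced. Frames are equal-height row
    # lists, concatenated row by row.
    if len(frames) == 1:
        return frames[0]
    return [sum((f[r] for f in frames), []) for r in range(len(frames[0]))]

a = [[1, 2], [3, 4]]
b = [[5, 6], [7, 8]]
print(generate_target_frame([a, b]))  # [[1, 2, 5, 6], [3, 4, 7, 8]]
```

The one-camera branch mirrors the first design option (use the offset picture as-is); the multi-camera branch mirrors the splicing option.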
In summary, in the video generating method provided by the embodiment of the application, a target camera in the same direction as the current viewing angle of the audience is found, the current viewing angle included angle between the target camera and the audience is calculated, the video picture shot by the at least one target camera is offset based on that included angle, and the target video picture is generated based on the offset video picture. The displayed video picture can therefore change as the viewing angle of the audience changes, so the video can be switched to any viewing angle and interaction between the audience and the scene is realized. Meanwhile, the third-party device sends the videos shot by the cameras to the video playing device synchronously, which keeps the multiple video pictures received by the video playing device clock-synchronized and avoids spliced pictures that are inconsistent with the actual scene due to asynchronous clocks. In addition, the video pictures shot by the target cameras can be scaled according to the distance between each target camera and the current audience position, so that the size of an object in the scaled picture matches its size as seen from the current audience position, thereby avoiding distortion of the video picture.
Fig. 3 is a schematic structural diagram of an electronic device according to an embodiment of the present application. Referring to fig. 3, at the hardware level, the electronic device includes a processor and, optionally, an internal bus, a network interface, and a memory. The memory may include volatile memory, such as random-access memory (RAM), and may further include non-volatile memory, such as at least one disk memory. Of course, the electronic device may also include hardware required for other services.
The processor, the network interface, and the memory may be interconnected by the internal bus, which may be an ISA (Industry Standard Architecture) bus, a PCI (Peripheral Component Interconnect) bus, an EISA (Extended Industry Standard Architecture) bus, or the like. The buses may be classified as address buses, data buses, control buses, and so on. For ease of illustration, only one bidirectional arrow is shown in fig. 3, but this does not mean there is only one bus or one type of bus.
The memory is used for storing a program. In particular, the program may include program code, and the program code includes computer operation instructions. The memory may include volatile memory and non-volatile storage, and provides instructions and data to the processor.
The processor reads the corresponding computer program from the nonvolatile memory into the memory and then runs the computer program to form the video generating device on a logic level. The processor is used for executing the programs stored in the memory and is specifically used for executing the following operations:
acquiring a current audience view angle and a current audience position of an audience in a target scene;
finding out at least one target camera in the same direction as the current audience viewing angle from the plurality of cameras according to the current audience viewing angle, the current audience position, the view field directions of the plurality of cameras and the distances between the plurality of cameras and the target intersection point;
calculating a current viewing angle included angle between at least one target camera and the audience according to the current viewing angle of the audience, the current position of the audience, the viewing field direction of each target camera, the distance between each target camera and the target intersection point and the position of the target intersection point;
shifting the video picture shot by at least one target camera according to the current viewing angle included angle between the at least one target camera and the audience;
generating a target video picture based on the offset video picture;
The view field directions of the cameras face the same area in the target scene, and the target intersection point is a common intersection point of view field central lines of the cameras.
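The included-angle calculation in the operations above can be illustrated with the target intersection point as the vertex. This is a hedged sketch under an assumed 2-D coordinate model; the patent does not prescribe this exact formula, and the function name is illustrative.

```python
import math

def included_angle(audience_pos, camera_pos, intersection):
    """Angle (degrees) at the target intersection point between the
    camera's sight-line and the audience's sight-line, both assumed to
    pass through the common intersection of the cameras' field-of-view
    center lines."""
    vc = (camera_pos[0] - intersection[0], camera_pos[1] - intersection[1])
    va = (audience_pos[0] - intersection[0], audience_pos[1] - intersection[1])
    dot = vc[0] * va[0] + vc[1] * va[1]
    norm = math.hypot(*vc) * math.hypot(*va)
    # Clamp to guard against floating-point drift outside [-1, 1].
    return math.degrees(math.acos(max(-1.0, min(1.0, dot / norm))))
```

An audience due north of the intersection and a camera due east of it give a 90-degree included angle; an audience on the camera's own sight-line gives 0 degrees.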
The method performed by the video generating apparatus disclosed in the embodiment of fig. 3 of the present application may be applied to, or implemented by, a processor. The processor may be an integrated circuit chip with signal processing capability. During implementation, the steps of the above method may be completed by integrated logic circuits of hardware in the processor or by instructions in the form of software. The processor may be a general-purpose processor, including a central processing unit (CPU), a network processor (NP), and the like; it may also be a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or discrete hardware components. The methods, steps, and logic diagrams disclosed in one or more embodiments of the present application may be implemented or performed. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor, or the like. The steps of a method disclosed in connection with one or more embodiments of the application may be embodied directly as being executed by a hardware decoding processor, or executed by a combination of hardware and software modules in a decoding processor. The software module may be located in a storage medium mature in the art, such as a random access memory, a flash memory, a read-only memory, a programmable read-only memory, an electrically erasable programmable memory, or a register. The storage medium is located in the memory, and the processor reads the information in the memory and completes the steps of the above method in combination with its hardware.
The electronic device may also execute the method of fig. 2 and implement the functions of the video generating apparatus in the embodiment shown in fig. 3, which are not repeated here.
Of course, the electronic device of the present application does not exclude other implementations, such as a logic device or a combination of software and hardware; that is, the execution subject of the following processing flow is not limited to individual logic units and may also be hardware or a logic device.
The embodiments of the present application also provide a computer-readable storage medium storing one or more programs. The one or more programs include instructions which, when executed by a portable electronic device including a plurality of application programs, enable the portable electronic device to perform the method of the embodiment of fig. 2, and specifically to perform the following operations:
acquiring a current audience view angle and a current audience position of an audience in a target scene;
finding out at least one target camera in the same direction as the current audience viewing angle from the plurality of cameras according to the current audience viewing angle, the current audience position, the view field directions of the plurality of cameras and the distances between the plurality of cameras and the target intersection point;
calculating a current viewing angle included angle between at least one target camera and the audience according to the current viewing angle of the audience, the current position of the audience, the viewing field direction of each target camera, the distance between each target camera and the target intersection point and the position of the target intersection point;
shifting the video picture shot by at least one target camera according to the current viewing angle included angle between the at least one target camera and the audience;
generating a target video picture based on the offset video picture;
The view field directions of the cameras face the same area in the target scene, and the target intersection point is a common intersection point of view field central lines of the cameras.
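The shifting operation above can be sketched as a horizontal picture translation proportional to the viewing-angle difference. This is a minimal sketch under assumed conditions (a frame as a list of pixel rows, a fixed degrees-per-pixel ratio); real implementations would operate on image buffers, and the names here are illustrative only.

```python
def offset_frame(frame, included_angle_deg, degrees_per_pixel):
    """Shift a frame (list of pixel rows) horizontally by the pixel
    count corresponding to the viewing-angle difference; pixels vacated
    by the shift are filled with 0."""
    shift = round(included_angle_deg / degrees_per_pixel)
    shifted = []
    for row in frame:
        if shift >= 0:
            # Shift right: pad on the left, drop overflow on the right.
            shifted.append([0] * shift + row[:len(row) - shift])
        else:
            # Shift left: drop from the left, pad on the right.
            shifted.append(row[-shift:] + [0] * (-shift))
    return shifted
```

A 2-degree included angle at 1 degree per pixel shifts the row two pixels to the right; a negative angle shifts it left.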
Fig. 4 is a schematic structural diagram of a video generating apparatus according to an embodiment of the present application. Referring to fig. 4, in a software implementation, the video generating apparatus includes:
The acquisition module is used for acquiring the current audience view angle and the current audience position of the audience in the target scene;
the searching module is used for searching at least one target camera which is in the same direction as the current audience view angle from the plurality of cameras according to the current audience view angle, the current audience position, the view field directions of the plurality of cameras and the distance between the plurality of cameras and the target intersection point;
The calculation module is used for calculating the current viewing angle included angle between at least one target camera and the audience according to the current viewing angle of the audience, the current viewing position, the viewing field direction of each target camera, the distance between each target camera and the target intersection point and the position of the target intersection point;
the offset module is used for offsetting the video picture shot by at least one target camera according to the included angle between the at least one target camera and the current visual angle of the audience;
the generating module is used for generating a target video picture based on the offset video picture;
The view field directions of the cameras face the same area in the target scene, and the target intersection point is a common intersection point of view field central lines of the cameras.
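One plausible reading of the generating module's splicing behaviour (a single offset picture used directly, several offset pictures spliced) can be sketched as follows. This is an assumption-laden illustration: frames are modelled as lists of pixel rows and splicing is a naive horizontal concatenation, which stands in for real picture stitching.

```python
def generate_target_frame(offset_frames):
    """If one target camera was selected, use its offset frame directly;
    if several were selected, splice their offset frames side by side
    (a naive horizontal concatenation stands in for real splicing)."""
    if len(offset_frames) == 1:
        return offset_frames[0]
    rows = len(offset_frames[0])
    # Concatenate row r of every frame, left to right.
    return [sum((f[r] for f in offset_frames), []) for r in range(rows)]
```

Two one-row frames `[[1, 2]]` and `[[3, 4]]` splice into `[[1, 2, 3, 4]]`, while a lone frame is returned unchanged.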
In summary, the foregoing describes only preferred embodiments of this document and is not intended to limit its scope of protection. Any modification, equivalent replacement, improvement, or the like made within the spirit and principles of this document shall be included within its scope of protection.
The system, apparatus, module or unit set forth in the above embodiments may be implemented in particular by a computer chip or entity, or by a product having a certain function. One typical implementation is a computer. In particular, the computer may be, for example, a personal computer, a laptop computer, a cellular telephone, a camera phone, a smart phone, a personal digital assistant, a media player, a navigation device, an email device, a game console, a tablet computer, a wearable device, or a combination of any of these devices.
Computer-readable media include permanent and non-permanent, removable and non-removable media, and may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, program modules, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random-access memory (SRAM), dynamic random-access memory (DRAM), other types of random-access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technologies, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape or magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, computer-readable media do not include transitory computer-readable media (transitory media) such as modulated data signals and carrier waves.
The embodiments in this document are described in a progressive manner; identical or similar parts of the embodiments may be referred to each other, and each embodiment focuses on its differences from the other embodiments. In particular, the system embodiments are described relatively simply because they are substantially similar to the method embodiments; for the relevant parts, refer to the description of the method embodiments.

Claims (9)

1. A video generation method, comprising:
acquiring a current audience view angle and a current audience position of an audience in a target scene;
finding out, from the plurality of cameras, at least one target camera in the same direction as the current audience viewing angle according to the current audience viewing angle, the current audience position, the view field directions of the plurality of cameras, and the distances between the plurality of cameras and the target intersection point, wherein the center line of the current audience viewing angle falls within the view field range of one or more cameras, and the one or more cameras are called target cameras in the same direction as the current audience viewing angle;
calculating a current viewing angle included angle between at least one target camera and the audience according to the current viewing angle of the audience, the current position of the audience, the viewing field direction of each target camera, the distance between each target camera and the target intersection point and the position of the target intersection point;
shifting the video picture shot by the at least one target camera according to the current viewing angle included angle between the at least one target camera and the audience, specifically: calculating an offset angle of the video picture corresponding to the at least one target camera according to the current viewing angle included angle, and shifting the video picture shot by the at least one target camera according to the offset angle of the video picture corresponding to the at least one target camera;
generating a target video picture based on the offset video picture;
The view field directions of the cameras face the same area in the target scene, and the target intersection point is a common intersection point of view field central lines of the cameras.
2. The method of claim 1, wherein generating the target video picture based on the offset video picture comprises:
if the at least one target camera is a single camera, taking the video picture obtained after the shift as the target video picture;
and if the at least one target camera is a plurality of cameras, splicing the plurality of video pictures obtained after the shift to obtain the target video picture.
3. The method of claim 1, wherein prior to obtaining the current audience view angle and the current audience position of the audience in the target scene, the method further comprises:
and adjusting the viewing angle and/or the position of the audience in the target scene in response to the adjustment operation of the user.
4. A method according to claim 3, wherein before adjusting the viewing angle and/or position of the viewer in the target scene in response to the user's adjustment, the method further comprises:
The viewing angle and position of the viewer in the target scene is initialized.
5. The method according to claim 1, wherein the method further comprises:
calculating the distance between at least one target camera and the current audience position;
before or after shifting the video picture shot by the at least one target camera according to the current viewing angle included angle between the at least one target camera and the audience, the method further comprises:
Scaling a video picture shot by at least one target camera according to the distance between the at least one target camera and the current audience position;
The generating a target video picture based on the offset video picture comprises the following steps:
And obtaining a target video picture based on the offset and the scaled video picture.
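As an illustrative sketch (not part of the claims) of the distance-based scaling in claim 5, under the assumption that a frame is a 2-D list of pixel values and that nearest-neighbour resampling is acceptable:

```python
def scale_frame(frame, camera_distance, audience_distance):
    """Nearest-neighbour resize of a frame (list of pixel rows) by the
    ratio of the camera-to-subject distance to the audience-to-subject
    distance, so object sizes match what would be seen from the current
    audience position."""
    factor = camera_distance / audience_distance
    h, w = len(frame), len(frame[0])
    nh, nw = max(1, round(h * factor)), max(1, round(w * factor))
    # Map each output pixel back to its source pixel.
    return [[frame[min(h - 1, int(r / factor))][min(w - 1, int(c / factor))]
             for c in range(nw)] for r in range(nh)]
```

A camera twice as far from the subject as the audience (factor 2) doubles the picture in each dimension; half as far (factor 0.5) shrinks it.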
6. The method according to claim 1, wherein the method further comprises:
receiving the view field directions of the plurality of cameras, the distances from the plurality of cameras to the target intersection point, and the identification numbers of the plurality of cameras, which are uploaded by a third-party device connected with the plurality of cameras.
7. A video generating apparatus, comprising:
The acquisition module is used for acquiring the current audience view angle and the current audience position of the audience in the target scene;
the searching module is used for finding out, from the plurality of cameras, at least one target camera in the same direction as the current audience viewing angle according to the current audience viewing angle, the current audience position, the view field directions of the plurality of cameras, and the distances between the plurality of cameras and the target intersection point;
The calculation module is used for calculating the current viewing angle included angle between at least one target camera and the audience according to the current viewing angle of the audience, the current viewing position, the viewing field direction of each target camera, the distance between each target camera and the target intersection point and the position of the target intersection point;
the offset module is used for shifting the video picture shot by the at least one target camera according to the current viewing angle included angle between the at least one target camera and the audience, specifically: calculating an offset angle of the video picture corresponding to the at least one target camera according to the current viewing angle included angle, and shifting the video picture shot by the at least one target camera according to the offset angle;
the generating module is used for generating a target video picture based on the offset video picture;
The view field directions of the cameras face the same area in the target scene, and the target intersection point is a common intersection point of view field central lines of the cameras.
8. An electronic device, comprising a processor, a communication interface, a memory and a communication bus, wherein the processor, the communication interface and the memory communicate with each other through the bus;
A memory for storing a computer program;
and the processor is used for executing the programs stored in the memory and realizing the following processes:
acquiring a current audience view angle and a current audience position of an audience in a target scene;
finding out, from the plurality of cameras, at least one target camera in the same direction as the current audience viewing angle according to the current audience viewing angle, the current audience position, the view field directions of the plurality of cameras, and the distances between the plurality of cameras and the target intersection point, wherein the center line of the current audience viewing angle falls within the view field range of one or more cameras, and the one or more cameras are called target cameras in the same direction as the current audience viewing angle;
calculating a current viewing angle included angle between at least one target camera and the audience according to the current viewing angle of the audience, the current position of the audience, the viewing field direction of each target camera, the distance between each target camera and the target intersection point and the position of the target intersection point;
shifting the video picture shot by the at least one target camera according to the current viewing angle included angle between the at least one target camera and the audience, specifically: calculating an offset angle of the video picture corresponding to the at least one target camera according to the current viewing angle included angle, and shifting the video picture shot by the at least one target camera according to the offset angle of the video picture corresponding to the at least one target camera;
generating a target video picture based on the offset video picture;
The view field directions of the cameras face the same area in the target scene, and the target intersection point is a common intersection point of view field central lines of the cameras.
9. A computer readable storage medium, wherein a computer program is stored in the storage medium, the computer program realizing the following flow when executed by a processor:
acquiring a current audience view angle and a current audience position of an audience in a target scene;
finding out, from the plurality of cameras, at least one target camera in the same direction as the current audience viewing angle according to the current audience viewing angle, the current audience position, the view field directions of the plurality of cameras, and the distances between the plurality of cameras and the target intersection point, wherein the center line of the current audience viewing angle falls within the view field range of one or more cameras, and the one or more cameras are called target cameras in the same direction as the current audience viewing angle;
calculating a current viewing angle included angle between at least one target camera and the audience according to the current viewing angle of the audience, the current position of the audience, the viewing field direction of each target camera, the distance between each target camera and the target intersection point and the position of the target intersection point;
shifting the video picture shot by the at least one target camera according to the current viewing angle included angle between the at least one target camera and the audience, specifically: calculating an offset angle of the video picture corresponding to the at least one target camera according to the current viewing angle included angle, and shifting the video picture shot by the at least one target camera according to the offset angle of the video picture corresponding to the at least one target camera;
generating a target video picture based on the offset video picture;
The view field directions of the cameras face the same area in the target scene, and the target intersection point is a common intersection point of view field central lines of the cameras.
CN202110844901.9A 2021-07-26 2021-07-26 Video generation method, device, electronic equipment and storage medium Active CN113596544B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110844901.9A CN113596544B (en) 2021-07-26 2021-07-26 Video generation method, device, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN113596544A CN113596544A (en) 2021-11-02
CN113596544B true CN113596544B (en) 2025-04-08

Family

ID=78249991

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110844901.9A Active CN113596544B (en) 2021-07-26 2021-07-26 Video generation method, device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN113596544B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114500846B (en) * 2022-02-12 2024-04-02 北京蜂巢世纪科技有限公司 Live action viewing angle switching method, device, equipment and readable storage medium
CN114745504A (en) * 2022-04-28 2022-07-12 维沃移动通信有限公司 Shooting method and electronic equipment
CN115373571B (en) * 2022-10-26 2023-02-03 四川中绳矩阵技术发展有限公司 Image display device, method, equipment and medium

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103546720A (en) * 2012-07-13 2014-01-29 晶睿通讯股份有限公司 Processing system and processing method for synthesizing virtual visual angle image
CN108174240A (en) * 2017-12-29 2018-06-15 哈尔滨市舍科技有限公司 Panoramic video playback method and system based on user location
CN113038117A (en) * 2021-03-08 2021-06-25 烽火通信科技股份有限公司 Panoramic playing method and device based on multiple visual angles

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10390007B1 (en) * 2016-05-08 2019-08-20 Scott Zhihao Chen Method and system for panoramic 3D video capture and display

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20250417

Address after: Room 15, A02, 3rd Floor, 3rd Floor, No.17 Guangshun North Street, Chaoyang District, Beijing 100000

Patentee after: Super Language (Beijing) Technology Co.,Ltd.

Country or region after: China

Address before: 100000 No. 30, wangjiamo village, Dashiwo Town, Fangshan District, Beijing

Patentee before: Wang Bo

Country or region before: China