CN120786187A - Shooting method, equipment, storage medium and program product for multi-camera shooting system - Google Patents
- Publication number
- CN120786187A (application No. CN202510762426.9A)
- Authority
- CN
- China
- Prior art keywords
- electronic device
- video image
- standby
- video
- wireless communication
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
- H04N23/69—Control of means for changing angle of the field of view, e.g. optical zoom objectives or electronic zooming
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/20—Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
- H04N21/21—Server components or server architectures
- H04N21/218—Source of audio or video content, e.g. local disk arrays
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
- H04N23/667—Camera operation mode switching, e.g. between still and video, sport and normal or high- and low-resolution modes
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
- H04N23/695—Control of camera direction for changing a field of view, e.g. pan, tilt or based on tracking of objects
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/90—Arrangement of cameras or camera modules, e.g. multiple cameras in TV studios or sports stadiums
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N5/00—Details of television systems
- H04N5/222—Studio circuitry; Studio devices; Studio equipment
- H04N5/262—Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects
- H04N5/268—Signal distribution or switching
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N7/00—Television systems
- H04N7/18—Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
- H04N7/181—Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast for receiving images from a plurality of remote sources
Abstract
The shooting method for a multi-camera shooting system is applied to a first electronic device, which serves as the host position of the system; the system further comprises at least one auxiliary position. The method comprises: the first electronic device displays a main interface that includes a first video image; in a video shooting mode, the first electronic device receives a first user operation and displays a second video image according to it, where the first video image comes from the host position or an auxiliary position and the second video image comes from an auxiliary position; the first electronic device receives a video switching operation directed at the second video image; and the first electronic device switches the first video image and the second video image according to the video switching operation.
Description
The present application is a divisional application of application No. 202210501226.4, filed May 9, 2022, the entire contents of which are incorporated herein by reference.
Technical Field
The embodiments of the present application relate to the technical field of video shooting, and in particular to a shooting method, device, storage medium, and program product for a multi-camera shooting system.
Background
A multi-camera shooting system is a commonly used shooting technique in film production, live-streaming, and stage shooting; it can make stories and live pictures more vivid and give the audience a stronger sense of immersion.
Mobile phones and small handheld shooting devices can be combined into a multi-position shooting system through a short-range communication technology such as Wi-Fi, bringing users a more convenient shooting experience. In a multi-position shooting system composed of mobile phones or small handheld shooting devices, the photographer often needs to select among, or splice together, the video images of multiple camera positions during creation. The preview picture showing the video of multiple positions is generally called the guide picture (the director's preview); the mobile phone or wireless shooting device that controls shooting at the other positions in the system is called the host, and a controlled mobile phone or wireless shooting device is called an auxiliary position.
In the prior art, multi-position shooting systems composed of mobile phones and small shooting devices handle the guide picture in one of two ways. In the first, there is no guide picture at all: if the host chooses to switch shots while recording, it can only display a prompt box listing the candidate lens names, and once a selection succeeds it cuts directly to the corresponding lens image. In the second, preview images are shown only before recording begins: all available lens images are displayed, the photographer selects any two of them to start shooting, and once recording begins those two lenses are locked, so the video image cannot be switched again during shooting. Because the prior-art shooting process either cannot present a guide picture or cannot switch the shot image, the flexibility of video shooting is poor.
Disclosure of Invention
The embodiments of the present application provide a shooting method, device, storage medium, and program product for a multi-camera shooting system, which can present a guide picture and switch video images in multi-camera shooting, thereby improving the flexibility of video shooting.
In a first aspect, a shooting method for a multi-camera shooting system is provided, applied to a first electronic device that serves as the host position of the system; the system further comprises at least one auxiliary position. The method comprises: the first electronic device displays a main interface that includes a first video image; in a video shooting mode, the first electronic device receives a first user operation and displays a second video image according to it, where the first video image comes from the host position or an auxiliary position and the second video image comes from an auxiliary position; the first electronic device receives a video switching operation directed at the second video image; and the first electronic device switches the first video image and the second video image according to the video switching operation.
When the host is in the video shooting mode, it displays, after receiving the first user operation, a second video image on the main interface alongside the first video image, and switches the two images after receiving the video switching operation. The multi-camera shooting interface can thus present a guide picture, and the video images can be switched during shooting through the guide picture, which improves the flexibility of video shooting.
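The display-then-switch behavior described above can be sketched in a minimal hypothetical example; the class, method, and source names below are illustrative only and are not part of the application:

```python
class MainInterface:
    """Models the host's main interface: one main (recorded) video image
    plus an optional guide-picture preview image."""

    def __init__(self, main_source: str):
        self.main = main_source   # first video image, shown full-screen
        self.preview = None       # second video image (guide picture), hidden at first

    def show_preview(self, source: str):
        # First user operation: display the second video image alongside the first.
        self.preview = source

    def switch(self):
        # Video switching operation directed at the second video image:
        # swap which feed is the main image and which is the preview.
        if self.preview is None:
            raise RuntimeError("no guide picture displayed; nothing to switch")
        self.main, self.preview = self.preview, self.main

ui = MainInterface(main_source="host_camera")
ui.show_preview("auxiliary_1")   # first user operation
ui.switch()                      # video switching operation
assert ui.main == "auxiliary_1" and ui.preview == "host_camera"
```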
In one possible implementation, the main interface includes an operation control for triggering display of the second video image on the main interface; for example, the first user operation includes a click operation on the operation control. Clicking the operation control provides convenient control based on the guide picture, improving the flexibility of video shooting.
In one possible implementation, the first user operation includes a first swipe gesture operation at a preset position of the main interface. A swipe gesture at the preset position provides convenient control based on the guide picture, improving the flexibility of video shooting.
In one possible implementation, the video switching operation includes a single-click operation on the second video image in the main interface. Clicking on the video image provides convenient control based on the guide picture, improving the flexibility of video shooting.
In one possible implementation, one of the first video image and the second video image comes from a second electronic device, which is one auxiliary position of the multi-camera shooting system; the other comes from the first electronic device or a third electronic device, which is another auxiliary position of the multi-camera shooting system.
In one possible implementation, after the first electronic device switches the first video image and the second video image according to the video switching operation, the method further includes: the first electronic device receives a second user operation and hides the first video image on the main interface according to it; the first electronic device then receives a third user operation and displays the first video image on the main interface according to it. A guide-picture hiding function is thus provided, so that the user can show or hide the guide picture as needed.
In one possible implementation, the main interface includes an operation control for triggering hiding and displaying of the first video image on the main interface; for example, the second user operation and the third user operation each include a click operation on the operation control.
In one possible embodiment, the third user operation comprises a first swipe gesture operation acting at a preset position, and the second user operation comprises a second swipe gesture operation acting at a preset position.
In one possible implementation, after the first electronic device switches the first video image and the second video image according to the video switching operation, the method further includes: the first electronic device receives a position switching operation directed at the first video image, and replaces the first video image with a third video image according to it, where the first video image and the third video image are provided by different positions in the multi-camera shooting system. The user can thus switch the guide picture among different positions.
In one possible implementation, the position switching operation includes a double-click operation on the first video image.
In one possible implementation, after the first electronic device switches the first video image and the second video image according to the video switching operation, the method further includes: the first electronic device receives a camera switching operation directed at the first video image, and replaces the first video image with a fourth video image according to it, where the first video image and the fourth video image are provided by different cameras at the same position in the multi-camera shooting system. The user can thus switch the guide picture among different cameras at the same position.
In one possible implementation, the camera switching operation includes a triple-click operation on the first video image.
In one possible implementation, after the first electronic device switches the first video image and the second video image according to the video switching operation, the method further includes: the first electronic device receives a picture-in-picture display operation directed at the first video image, and according to it displays on the main interface a fifth video image at least partially surrounded by the second video image, the fifth video image being identical to the first video image. A picture-in-picture recording effect is thus achieved.
In one possible implementation, the picture-in-picture display operation includes a drag operation on the first video image.
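The "at least partially surrounded" layout can be illustrated by computing an inset rectangle for the picture-in-picture window; the function name, corner placement, and default values below are hypothetical choices, not specified by the application:

```python
def pip_rect(w: int, h: int, scale: float = 0.3, margin: int = 20):
    """Place a picture-in-picture window (the fifth video image) inside the
    full-screen second video image: a bottom-right inset, so the small window
    is surrounded by the large one. Scale and margin are illustrative."""
    pw, ph = int(w * scale), int(h * scale)   # PiP width/height
    x, y = w - pw - margin, h - ph - margin   # top-left corner of the inset
    return (x, y, pw, ph)

x, y, pw, ph = pip_rect(1920, 1080)
# The PiP rectangle lies entirely within the full-screen image.
assert 0 <= x and x + pw <= 1920 and 0 <= y and y + ph <= 1080
```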
In one possible implementation, after the first electronic device switches the first video image and the second video image according to the video switching operation, the first video image comes from a second electronic device, which is an auxiliary position of the multi-camera shooting system. The method further includes: after the first video image is hidden on the main interface and the first electronic device no longer displays any video image from the second electronic device, the first electronic device sends a standby control instruction to the second electronic device. The standby control instruction instructs the second electronic device to turn off, or run in a low-power mode, any one or any combination of its camera module, image signal processor (ISP), video processor (VPU), display screen, and the wireless communication link used to transmit the video image from the second electronic device to the first electronic device. By sending the standby control instruction, an unused position or its communication link can be put on standby to reduce power consumption.
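The phrase "any one or any combination" of components maps naturally onto a bit-flag set. The following sketch of building such a standby control instruction is hypothetical; the enum, field names, and message shape are illustrative, not the application's actual protocol:

```python
from enum import Flag, auto

class Component(Flag):
    # Components the standby control instruction may close or run in low-power mode.
    CAMERA = auto()         # camera module
    ISP = auto()            # image signal processor
    VPU = auto()            # video processor
    DISPLAY = auto()        # display screen
    WIRELESS_LINK = auto()  # link carrying video to the host

def build_standby_instruction(targets: Component) -> dict:
    """Build a standby control instruction naming which components the
    auxiliary position should power down (any one or any combination)."""
    return {"cmd": "standby", "targets": targets}

instr = build_standby_instruction(Component.CAMERA | Component.DISPLAY)
assert Component.CAMERA in instr["targets"]
assert Component.VPU not in instr["targets"]
```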
In one possible implementation, the standby control instruction further instructs the second electronic device to change the wireless communication link from a continuous communication state to a periodic dormant communication state. When the link has remained in the periodic dormant communication state for a first preset duration, the first electronic device establishes a Bluetooth Low Energy (BLE) connection with the second electronic device and disconnects the wireless communication link. After sending the standby control instruction to the second electronic device, the first electronic device executes a first standby recovery flow or a second standby recovery flow. Both flows include: the first electronic device receives a third user operation and displays the first video image on the main interface according to it. In the first standby recovery flow, the third user operation itself serves as the standby recovery operation; in the second standby recovery flow, the first electronic device additionally receives a standby recovery operation before the third user operation. In both flows, according to the standby recovery operation, if the standby recovery operation is received while the wireless communication link is in the periodic dormant communication state, the first electronic device sends a standby recovery control instruction to the second electronic device through the wireless communication link; if it is received after the wireless communication link has been disconnected, the first electronic device sends the standby recovery control instruction through the BLE connection. The standby recovery control instruction instructs the second electronic device to restore the wireless communication link to the continuous communication state. In the second standby recovery flow, the wireless communication link is restored in advance, based on a standby recovery operation received before the third user operation, and the first video image is then displayed according to the third user operation, improving the response speed of displaying the video image.
In one possible embodiment, in the second standby recovery procedure, the standby recovery operation includes a touch operation acting on a preset position.
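The two-case routing of the standby recovery control instruction can be sketched as follows; the function and state names are illustrative assumptions, not the application's terminology:

```python
def recovery_channel(link_state: str) -> str:
    """Choose the channel that carries the standby recovery control
    instruction, depending on when the recovery operation arrives."""
    if link_state == "periodic_dormant":
        # The wireless link still exists (merely dormant): send over it directly.
        return "wireless_link"
    if link_state == "disconnected":
        # The link was torn down after the first preset duration;
        # only the BLE connection remains, so send over BLE.
        return "ble"
    raise ValueError(f"unexpected link state: {link_state}")

assert recovery_channel("periodic_dormant") == "wireless_link"
assert recovery_channel("disconnected") == "ble"
```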
In one possible implementation, the standby control instruction instructs the second electronic device to change the wireless communication link from the continuous communication state to a first periodic dormant communication state. The method further includes: when the link has remained in the first periodic dormant communication state for a first preset duration, the first electronic device changes the link to a second periodic dormant communication state, where the dormancy period of the first periodic dormant communication state is shorter than that of the second. After sending the standby control instruction to the second electronic device, the first electronic device executes a third standby recovery flow or a fourth standby recovery flow. Both flows include: receiving a third user operation and displaying the first video image on the main interface according to it. In the third standby recovery flow, the third user operation itself serves as the standby recovery operation; in the fourth standby recovery flow, the first electronic device additionally receives a standby recovery operation before the third user operation. In both flows, according to the standby recovery operation, the first electronic device sends a standby recovery control instruction to the second electronic device through the wireless communication link; the standby recovery control instruction instructs the second electronic device to restore the wireless communication link to the continuous communication state. In the fourth standby recovery flow, the wireless communication link is restored in advance, based on a standby recovery operation received before the third user operation, and the video image is then displayed according to the third user operation, improving the response speed of displaying the video image.
In one possible embodiment, in the fourth standby recovery procedure, the standby recovery operation is a touch operation acting on a preset position.
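The two-stage dormancy in this variant can be sketched as a simple time-based state function; the 30-second default is a purely illustrative stand-in for the first preset duration:

```python
def link_state(elapsed_s: float, first_preset_s: float = 30.0) -> str:
    """Two-stage dormancy: the link first enters a dormant state with a
    short sleep period (faster wake-up), then, once the first preset
    duration elapses, a dormant state with a longer sleep period
    (lower power consumption)."""
    if elapsed_s < first_preset_s:
        return "first_periodic_dormant"
    return "second_periodic_dormant"

assert link_state(5.0) == "first_periodic_dormant"
assert link_state(60.0) == "second_periodic_dormant"
```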
In one possible implementation, after sending the standby recovery control instruction to the second electronic device, the method further includes: the first electronic device displays a transition image at the preset position according to the third user operation; and displaying the first video image on the main interface according to the third user operation includes replacing the transition image with the first video image once the first electronic device receives the first video image from the second electronic device over the wireless communication link restored to the continuous communication state. By first showing a transition image on receiving the third user operation, and replacing it with the video image once the wireless communication link is restored, the display delay caused by the video image not yet being transmittable is masked.
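The transition-image behavior reduces to a small display decision; the function and return-value names below are illustrative assumptions:

```python
def frame_to_display(first_frame_arrived: bool) -> str:
    """While the wireless link is still resuming, show a transition image at
    the preset position; replace it with the live first video image once a
    frame arrives from the second electronic device."""
    return "first_video_image" if first_frame_arrived else "transition_image"

assert frame_to_display(False) == "transition_image"  # link still resuming
assert frame_to_display(True) == "first_video_image"  # first frame received
```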
In a second aspect, a shooting apparatus for a multi-camera shooting system is provided, applied to a first electronic device that serves as the host position of the system; the system further comprises at least one auxiliary position. The apparatus comprises an interface display module, an operation receiving module, and a video switching module. The interface display module is configured to display a main interface that includes a first video image; the operation receiving module is configured to receive a first user operation in a video shooting mode; the interface display module is further configured to display a second video image on the main interface according to the first user operation, where the first video image comes from the host position or an auxiliary position and the second video image comes from an auxiliary position; the operation receiving module is further configured to receive a video switching operation directed at the second video image; and the video switching module is configured to switch the first video image and the second video image according to the video switching operation.
In one possible implementation, the main interface includes an operation control for triggering display of the second video image on the main interface; for example, the first user operation includes a click operation on the operation control.
In one possible embodiment, the first user operation includes a first swipe gesture operation for a preset position of the main interface.
In one possible implementation, the video switching operation includes a single-click operation on the second video image in the main interface.
In one possible implementation, one of the first video image and the second video image comes from a second electronic device, which is one auxiliary position of the multi-camera shooting system; the other comes from the first electronic device or a third electronic device, which is another auxiliary position of the multi-camera shooting system.
In one possible implementation, the operation receiving module is further configured to receive a second user operation, the interface display module is further configured to hide the first video image on the main interface according to the second user operation, the operation receiving module is further configured to receive a third user operation, and the interface display module is further configured to display the first video image on the main interface according to the third user operation.
In one possible implementation, the main interface includes an operation control for triggering hiding and displaying of the first video image on the main interface; for example, the second user operation and the third user operation each include a click operation on the operation control.
In one possible embodiment, the third user operation comprises a first swipe gesture operation acting at a preset position, and the second user operation comprises a second swipe gesture operation acting at a preset position.
In one possible implementation, the operation receiving module is further configured to receive a position switching operation directed at the first video image, and the interface display module is further configured to replace the first video image with a third video image according to the position switching operation, where the first video image and the third video image are provided by different positions in the multi-camera shooting system.
In one possible implementation, the position switching operation includes a double-click operation on the first video image.
In one possible implementation, the operation receiving module is further configured to receive a camera switching operation directed at the first video image, and the interface display module is further configured to replace the first video image with a fourth video image according to the camera switching operation, where the first video image and the fourth video image are provided by different cameras at the same position in the multi-camera shooting system.
In one possible implementation, the camera switching operation includes a triple-click operation on the first video image.
In one possible implementation, the operation receiving module is further configured to receive a picture-in-picture display operation directed at the first video image, and the interface display module is further configured to display on the main interface, according to the picture-in-picture display operation, a fifth video image at least partially surrounded by the second video image, the fifth video image being identical to the first video image.
In one possible implementation, the picture-in-picture display operation includes a drag operation on the first video image.
In one possible implementation, the apparatus further comprises a standby control module configured to send a standby control instruction to the second electronic device after the first video image is hidden on the main interface and the first electronic device no longer displays any video image from the second electronic device. The standby control instruction instructs the second electronic device to turn off, or run in a low-power mode, any one or any combination of its camera module, image signal processor (ISP), video processor (VPU), display screen, and the wireless communication link used to transmit the video image from the second electronic device to the first electronic device. The second electronic device is an auxiliary position of the multi-camera shooting system, and the video image is the first video image or the second video image.
In one possible implementation, the standby control instruction further instructs the second electronic device to change the wireless communication link from the continuous communication state to the periodic dormant communication state. The apparatus further comprises a communication control module configured to establish a Bluetooth Low Energy (BLE) connection between the first electronic device and the second electronic device, and to disconnect the wireless communication link, when the link has remained in the periodic dormant communication state for a first preset duration. The operation receiving module is specifically configured to receive a third user operation in a first standby recovery flow, or to receive a standby recovery operation and a third user operation in a second standby recovery flow; in the first standby recovery flow, the third user operation itself serves as the standby recovery operation. The standby control module is further configured to send a standby recovery control instruction to the second electronic device through the wireless communication link if the standby recovery operation is received while the link is in the periodic dormant communication state, and through the BLE connection if the standby recovery operation is received after the link has been disconnected; the standby recovery control instruction instructs the second electronic device to restore the wireless communication link to the continuous communication state.
In one possible embodiment, in the second standby recovery procedure, the standby recovery operation includes a touch operation acting on a preset position.
In one possible implementation, the standby control instruction instructs the second electronic device to change the wireless communication link from the continuous communication state to a first periodic dormant communication state. The apparatus further comprises a communication control module configured to change the link to a second periodic dormant communication state when the link has remained in the first periodic dormant communication state for a first preset duration, where the dormancy period of the first periodic dormant communication state is shorter than that of the second. The operation receiving module is specifically configured to receive a third user operation in a third standby recovery flow, or to receive a standby recovery operation and a third user operation in a fourth standby recovery flow; in the third standby recovery flow, the third user operation itself serves as the standby recovery operation. The standby control module is further configured to send a standby recovery control instruction to the second electronic device through the wireless communication link according to the standby recovery operation; the standby recovery control instruction instructs the second electronic device to restore the wireless communication link to the continuous communication state.
In one possible embodiment, in the fourth standby recovery procedure, the standby recovery operation is a touch operation acting on a preset position.
In one possible implementation, the interface display module is further configured to display a transition image at the preset position according to the third user operation, and to replace the transition image with the first video image when the first electronic device receives the video image from the second electronic device over the wireless communication link restored to the continuous communication state according to the third user operation.
In a third aspect, an electronic device is provided comprising a memory for storing computer program instructions and a processor for executing the computer program instructions, wherein the computer program instructions, when executed by the processor, trigger the electronic device to perform the method described above.
In a fourth aspect, a computer readable storage medium is provided. The computer readable storage medium includes a stored program which, when run, controls a device in which the computer readable storage medium is located to perform the method described above.
In a fifth aspect, a computer program product is provided, the computer program product comprising executable instructions which, when executed on a computer, cause the computer to perform the method described above.
According to the shooting method, device, storage medium and program product of the multi-camera shooting system, when the host is in the video shooting mode, the second video image is displayed on the main interface together with the first video image after the first user operation is received, and the first video image and the second video image are switched after the video switching operation is received. A guide picture is thereby displayed on the multi-camera shooting interface, the video images can be switched during shooting through the guide picture, and the flexibility of video shooting is improved.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings needed in the embodiments are briefly described below. It is obvious that the drawings in the following description are only some embodiments of the present application, and that other drawings may be obtained from these drawings by a person skilled in the art without inventive effort.
Fig. 1 is a schematic structural diagram of a multi-camera shooting system according to an embodiment of the present application;
Fig. 2 is a schematic diagram of a camera of an electronic device according to an embodiment of the present application;
Fig. 3 is a flowchart of a shooting method of a multi-camera shooting system according to an embodiment of the present application;
Fig. 4a is a schematic diagram of a main interface of a first electronic device according to an embodiment of the present application including a first video image;
Fig. 4b is a schematic diagram of a main interface of a first electronic device according to an embodiment of the present application including a first video image and a second video image;
Fig. 4c is a schematic diagram of the first video image and the second video image of Fig. 4b after being switched;
Fig. 4d is a schematic diagram of the first electronic device of Fig. 4c after hiding the first video image;
Fig. 5a is a flowchart of a shooting method of another multi-camera shooting system according to an embodiment of the present application;
Fig. 5b is a flowchart of a shooting method of another multi-camera shooting system according to an embodiment of the present application;
Fig. 6 is a schematic diagram of a scenario in which a first electronic device performs a machine position switching operation according to an embodiment of the present application;
Fig. 7 is a schematic view of a scenario in which a first electronic device performs a camera switching operation according to an embodiment of the present application;
Fig. 8 is a schematic view of a scene of a first electronic device displaying a picture-in-picture (PiP) effect according to an embodiment of the present application;
Fig. 9 is a schematic view of a scene of a first electronic device hiding a PiP effect according to an embodiment of the present application;
Fig. 10 is a schematic diagram of a transmission queue for communication via WiFi according to an embodiment of the present application;
Fig. 11 is a flowchart of a shooting method of a multi-camera shooting system including a first standby recovery procedure according to an embodiment of the present application;
Fig. 12 is a schematic diagram of a change in a wireless communication link state according to an embodiment of the present application;
Fig. 13 is a flowchart of a shooting method of a multi-camera shooting system including a second standby recovery procedure according to an embodiment of the present application;
Fig. 14 is a schematic diagram of another change in the state of a wireless communication link according to an embodiment of the present application;
Fig. 15 is a schematic diagram of a change in the state of a wireless communication link according to another embodiment of the present application;
Fig. 16 is a flowchart of a shooting method of a multi-camera shooting system including a third standby recovery procedure according to an embodiment of the present application;
Fig. 17 is a flowchart of a shooting method of a multi-camera shooting system including a fourth standby recovery procedure according to an embodiment of the present application;
Fig. 18 is a schematic diagram of a synchronization period of a wireless communication link according to an embodiment of the present application;
Fig. 19 is a schematic diagram of another multi-camera shooting system according to an embodiment of the present application;
Fig. 20 is a schematic view of a scene of a first electronic device displaying a transition image according to an embodiment of the present application;
Fig. 21 is a schematic view of a scene of a multi-camera shooting system formed by mobile phones according to an embodiment of the present application;
Fig. 22 is a schematic view of a scene of a multi-camera shooting system formed by a mobile phone and a shooting device according to an embodiment of the present application;
Fig. 23 is a block diagram of an electronic device according to an embodiment of the present application.
Detailed Description
For a better understanding of the technical solutions of the present specification, the following detailed description of the embodiments of the present application refers to the accompanying drawings.
It should be understood that the described embodiments are only some, but not all, of the embodiments of the present application. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present application without inventive effort shall fall within the scope of protection of the present application.
The terminology used in the embodiments of the application is for the purpose of describing particular embodiments only and is not intended to limit the present application. As used in this application and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
It should be understood that the term "and/or" as used herein merely describes an association relationship between associated objects, and indicates that three relationships may exist. For example, "a and/or b" may mean that a exists alone, that both a and b exist, or that b exists alone. In addition, the character "/" herein generally indicates that the associated objects before and after it are in an "or" relationship.
It should be understood that, in embodiments of the present application, "first," "second," etc. are merely intended to refer to different objects, and are not meant to limit the referenced objects otherwise.
Referring to fig. 1, fig. 1 is a schematic structural diagram of a multi-camera shooting system provided by an embodiment of the present application. The shooting method and device provided by the embodiments of the present application may be applied to a multi-camera shooting system, where the multi-camera shooting system includes at least two wireless shooting devices. For example, the multi-camera shooting system 10 includes three wireless shooting devices, namely a first electronic device 100, a second electronic device 200 and a third electronic device 300, where one wireless shooting device serves as one machine position of the multi-camera shooting system. In order to select and switch the video images shot by each machine position of the multi-camera shooting system, one convenient way is to integrate a guide picture into the host of the multi-camera shooting system. Accordingly, the first electronic device 100 serves as the host of the multi-camera shooting system; the first electronic device 100 serving as the host needs to shoot video images and needs to have at least shooting, display, communication and data processing functions, and it may be a mobile phone or another wireless shooting device. The second electronic device 200 and the third electronic device 300 are auxiliary machine positions of the multi-camera shooting system; an electronic device serving as an auxiliary machine position needs to shoot video images and transmit the video images to the host, so the second electronic device 200 and the third electronic device 300 need to have at least shooting and communication functions.
Before shooting, the host machine position and the auxiliary machine positions in the multi-camera shooting system first need to be networked to establish a communication connection. The host machine position can then send control instructions to an auxiliary machine position through the communication connection, and the auxiliary machine position can, according to the control instructions, control the picture of the shot video, control the shooting camera, control whether video images are transmitted to the host machine position, control whether the auxiliary machine position is dormant, and the like.
The first electronic device 100, the second electronic device 200, and the third electronic device 300 establish a connection through a wireless communication network; for example, a wireless communication connection may be established through WiFi, and instruction transmission and data transmission between the first electronic device 100, the second electronic device 200, and the third electronic device 300 may be carried over a WiFi wireless communication link. In order to allow the host machine position to manage and control the video images shot by the auxiliary machine positions, the host machine position may be provided with a camera hardware abstraction layer (CameraHAL), and each auxiliary machine position may be provided with a virtual camera agent (VirtualCameraAgent). When the host machine position detects that a control instruction relates to a remote auxiliary machine position, the control instruction is forwarded through the CameraHAL to the VirtualCameraAgent of the peer auxiliary machine position to control that auxiliary machine position, and the VirtualCameraAgent of the auxiliary machine position translates the control instruction sent by the peer host machine position into a corresponding ISP command so as to control the camera. In this way, the host machine position can, for example, cause the peer auxiliary machine position to transmit video images to the host machine position, and the video images of the auxiliary machine position are presented on the host machine position.
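The instruction-forwarding path described above, where the host-side CameraHAL routes a control instruction either to the local camera or to the VirtualCameraAgent of a remote auxiliary machine position, which then translates it for its ISP, might be modeled as follows. This is an illustrative sketch only; the class and method names and the string command format are assumptions, not part of the disclosure.

```python
class VirtualCameraAgent:
    """Auxiliary-side agent: translates host control instructions for the local ISP."""
    def __init__(self, device_id):
        self.device_id = device_id
        self.isp_commands = []

    def handle(self, instruction):
        # Translate the abstract control instruction into an ISP-level command
        # (represented here as a tagged string for illustration).
        self.isp_commands.append(f"ISP:{instruction}")

class CameraHAL:
    """Host-side camera hardware abstraction layer that routes instructions."""
    LOCAL = "local"

    def __init__(self):
        self.agents = {}        # device_id -> VirtualCameraAgent (remote auxiliaries)
        self.local_commands = []

    def register(self, agent):
        self.agents[agent.device_id] = agent

    def dispatch(self, target, instruction):
        if target == self.LOCAL:
            self.local_commands.append(instruction)   # handled by the local camera
        else:
            self.agents[target].handle(instruction)   # forwarded over the wireless link
```

For example, a hypothetical instruction to start streaming from an auxiliary device would be dispatched to that device's agent, while a zoom instruction for the host's own camera would stay local.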
It should be understood that the configuration of the multi-camera shooting system 10 illustrated in the embodiments of the present application does not constitute a limitation on the multi-camera shooting system 10; in other embodiments of the present application, the multi-camera shooting system 10 may include more or fewer wireless shooting devices than illustrated. The multi-camera shooting system 10 typically has one host machine position, and in special cases may have multiple host machine positions.
Referring to fig. 2, fig. 2 is a schematic diagram of a camera of an electronic device according to an embodiment of the present application. In fig. 2, the first electronic device 100 in fig. 1 is illustrated by taking a mobile phone as an example, and fig. 2 shows a front view and a rear view of the mobile phone, where two front cameras 111 and 112 are disposed on the front side of the mobile phone, and four rear cameras 121, 122, 123 and 124 are disposed on the rear side of the mobile phone. By configuring a plurality of cameras, a plurality of shooting modes can be provided for a user, and the flexibility of shooting videos for the user is improved.
It should be understood that the illustration in fig. 2 is only an exemplary illustration and should not be taken as a limitation on the scope of the application. For example, the number and locations of the cameras may differ for different mobile phones. In addition, the electronic device according to the embodiments of the present application may, besides a mobile phone, be a tablet computer, a Personal Computer (PC), a Personal Digital Assistant (PDA), a smart watch, a netbook, a wearable electronic device, an augmented reality (AR) device, a virtual reality (VR) device, a vehicle-mounted device, a smart car, a smart speaker, a robot, smart glasses, a smart television, or the like. In some possible implementations, the electronic device may also be referred to as a terminal device, a User Equipment (UE), or the like, which is not limited by the embodiments of the present application. In addition, the second electronic device and the third electronic device shown in fig. 1 may also employ the electronic device shown in fig. 2.
In an actual application scenario, after the multi-camera shooting system is networked and the connection is successfully established, the electronic devices of the machine positions enter a working mode, and after the electronic device serving as the host machine position enters the video shooting mode, the host machine position displays a main interface. The main interface is used for displaying video images, where the displayed video images are provided by the host machine position or an auxiliary machine position. When the host machine position is successfully networked for the first time, one path of video image provided by the host machine position or an auxiliary machine position in the multi-camera shooting system may be displayed in the main interface as an initial video image according to a system preset.
Referring to fig. 3, fig. 3 is a flowchart of a shooting method of a multi-camera shooting system according to an embodiment of the present application, which is applied to the first electronic device 100 of the multi-camera shooting system shown in fig. 1, where the first electronic device 100 is the host of the multi-camera shooting system. As shown in fig. 3, the shooting method may include:
Step 1000, as shown in fig. 4a, the first electronic device displays a main interface 20, where the main interface 20 is the interface watched by the anchor or recorder, and the main interface 20 includes a first video image 201, the first video image 201 being, for example, a hand-held bicycle picture;
Step 1001, a first electronic device receives a first user operation in a video shooting mode;
Step 1002, as shown in fig. 4b, according to the first user operation, the first electronic device displays a second video image 202 on the main interface 20, where the second video image 202 is, for example, a sitting picture. The first video image 201 and the second video image 202 may be displayed on the main interface 20 in an overlapping manner, the display area of the first video image 201 may be larger than that of the second video image 202, and the second video image 202 may be displayed on an upper layer of the first video image 201. The first video image 201 comes from the host machine position or an auxiliary machine position, and the second video image 202 comes from an auxiliary machine position; that is, the first video image 201 and the second video image 202 are video images provided by different machine positions in the multi-camera shooting system;
At this time, the first video image 201 is a recording picture or a live picture, that is, a picture that is recorded and saved or transmitted to a server for live broadcast, i.e., the video image finally presented to the viewers. The second video image 202 is a non-recording picture, or guide picture; that is, the second video image 202 is only used for the anchor to preview and is not presented to the viewers. If the second video image 202 comes from the second electronic device, then when the second video image 202 is displayed on the main interface 20, the first electronic device sends an instruction to the second electronic device to acquire the second video image, and the second electronic device transmits the captured second video image to the first electronic device according to the instruction.
Step 1003, the first electronic device receives a video switching operation, where the video switching operation is directed at the second video image 202; for example, the user clicks the sitting picture in the main interface 20;
Step 1004, as shown in fig. 4c, the first electronic device switches the first video image 201 and the second video image 202 according to the video switching operation; that is, the first video image 201 is switched to the original display area of the second video image 202, and the second video image 202 is switched to the original display area of the first video image 201. After the switching, the first video image 201 becomes a guide picture that is only used for the anchor to preview and is no longer presented to the viewers, while the second video image 202 becomes the recording picture or live picture presented to the viewers; at this time, the first video image 201 may be displayed on the upper layer of the second video image 202.
For example, the user clicks the sitting picture in the main interface 20 shown in fig. 4b, the first electronic device switches the video images, and the main interface 20 changes from that shown in fig. 4b to that shown in fig. 4c. Besides switching the video images in the interface, the video switching operation functionally switches the sitting picture to the recording picture and the hand-held bicycle picture to the guide picture.
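The effect of the video switching operation, swapping both the display areas and the recording/guide roles of the two pictures, can be sketched in a few lines. The dictionary field names below are illustrative assumptions, not identifiers from the disclosure.

```python
def switch_video_images(main_interface):
    """Swap the display areas and roles of the two video images on the main interface.

    `main_interface` maps "first" and "second" to image descriptors, each with
    a display "area" and a "role" (recording/live picture vs. guide picture).
    """
    first, second = main_interface["first"], main_interface["second"]
    # Each image moves to the other's original display area.
    first["area"], second["area"] = second["area"], first["area"]
    # The recording picture and the guide picture exchange roles in the background.
    first["role"], second["role"] = second["role"], first["role"]
    return main_interface
```

Applying the function twice restores the original layout, matching the behavior described below in which a second switching operation changes the interface back.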
Specifically, after the first electronic device (the host machine position) and the auxiliary machine positions are networked successfully, the first electronic device 100 enters a shooting application, where the shooting application includes, but is not limited to, a video recording or live broadcast application, and may be the camera application on a mobile phone or another application supporting a multi-camera shooting system. When the first electronic device 100 enters the shooting application and is in the video shooting mode, the user may see the first video image 201 on the main interface 20, and may perform the first user operation to cause the second video image 202, serving as a guide picture, to be displayed on the main interface 20, so that the video image from one machine position can be viewed on the main interface 20 at the same time as the video image from another machine position.
When the user wishes to switch the second video image 202 to the recording picture, the video switching operation is performed. It should be noted that the main interface 20 may include a plurality of second video images 202; for example, the user may see, through the main interface 20, a plurality of second video images 202 provided by a plurality of machine positions, and may select from the main interface 20 a certain path of second video image to be recorded and switch it with the first video image 201, so as to record that path of second video image 202. That is, through the video switching operation, the user selects on the main interface 20 the second video image 202 to be recorded and switches it with the first video image 201. Switching the video images improves the flexibility of video shooting. In addition, the first video image 201 originally on the main interface 20 is switched to the guide picture, so that the anchor can continue to watch the first video image 201 in the main interface 20.
Illustratively, the main interface 20 includes operation controls, such as a video recording button, a pause button, a hover ball, and a zoom button, where the hover ball may be dragged to any position and is used to trigger the display of the second video image on the main interface. For example, the first user operation includes a click operation on an operation control (e.g., the hover ball), or the first user operation includes a first sliding gesture operation on a preset position 30. As shown in fig. 4b, the preset position 30 is the display area corresponding to the second video image 202 before the video switching operation, and the video switching operation includes a single click operation on the second video image 202 before the video switching operation, where the single click operation may be a quick click operation or a long press operation. For example, when the user performs a right swipe gesture operation at the preset position 30 on the lower left side of the main interface 20 as shown in fig. 4a, the main interface 20 becomes as shown in fig. 4b, and the second video image 202 is displayed. For another example, when the first electronic device displays the hover ball and the user clicks the hover ball as shown in fig. 4a, the main interface 20 may change to display the second video image 202 as shown in fig. 4b. The first user operation may also be another type of user operation, such as a virtual key operation or a physical key operation; the form of the first user operation is not limited in this embodiment.
For example, the video switching operation may include a long press operation on the second video image 202 by the user. When the user long-presses the second video image 202 shown in fig. 4b, the main interface 20 becomes as shown in fig. 4c, and the first video image 201 and the second video image 202 are switched; at this time, if the user long-presses the first video image 201, the first video image 201 and the second video image 202 are switched again, that is, the main interface 20 changes back to that shown in fig. 4b. The video switching operation may also be another form of user operation, such as a virtual key operation or a physical key operation; for example, the main interface 20 may include a video switching control, and the video switching operation may include an operation acting on the video switching control.
Further, when the user does not need to use the non-recorded picture, the second video image 202 shown in fig. 4b may be hidden, or the first video image 201 shown in fig. 4c may be hidden, so that the user may better watch a video image being recorded.
As shown in fig. 4a to 4d and fig. 5a, on the basis of the embodiment shown in fig. 3, after the process of switching the first video image 201 and the second video image 202 by the first electronic device in step 1004 according to the video switching operation, the shooting method may further include:
step 1005, the first electronic device receives a second user operation;
Step 1006, the first electronic device hides the first video image 201 on the main interface according to the second user operation; that is, the screen display state of the first electronic device changes from fig. 4c to fig. 4d according to the second user operation, and after the first video image 201 is hidden, the main interface 20 displays only the second video image 202.
When the user again needs the first video image 201 to be displayed, steps 1007 and 1008 may also be performed: step 1007 is receiving a third user operation, and step 1008 is displaying the first video image 201 on the main interface 20 according to the third user operation. Displaying or hiding the non-recording picture as needed improves the flexibility of video shooting. If the first video image 201 comes from the second electronic device, then when the first electronic device hides the first video image 201 on the main interface, there are two possible cases for the second electronic device. In one case, the second electronic device still transmits the first video image to the first electronic device, so that the first electronic device can quickly resume displaying the first video image. In the other case, the second electronic device receives an instruction from the first electronic device and, according to the instruction, stops transmitting the first video image to the first electronic device so as to save power consumption; however, if the first electronic device needs to resume displaying the first video image, it needs to send an instruction for resuming the display of the first video image to the second electronic device, and the second electronic device receives the instruction and resumes transmitting the first video image to the first electronic device accordingly. If the second electronic device is a mobile phone, then when the first video image is hidden on the first electronic device, the screen of the second electronic device may still display the first video image, or may hide the first video image at the same time.
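The two hiding policies just described, keeping the auxiliary device transmitting for fast resume versus stopping transmission to save power at the cost of an extra resume instruction, can be sketched as follows. The class, flag, and instruction names are illustrative assumptions, not identifiers from the disclosure.

```python
class GuidePicture:
    """A non-recording (guide) picture from an auxiliary device, with two hiding policies.

    stop_on_hide=False: the auxiliary keeps transmitting while hidden (fast resume).
    stop_on_hide=True:  transmission stops while hidden (power saving), and an
                        extra instruction is needed to resume it.
    """
    def __init__(self, stop_on_hide):
        self.stop_on_hide = stop_on_hide
        self.visible = True
        self.transmitting = True
        self.instructions_sent = []  # instructions sent to the auxiliary device

    def hide(self):
        self.visible = False
        if self.stop_on_hide:
            self.instructions_sent.append("stop_transmission")
            self.transmitting = False

    def show(self):
        self.visible = True
        if not self.transmitting:
            self.instructions_sent.append("resume_transmission")
            self.transmitting = True
```

In the fast-resume policy no instruction traffic is needed at all, while the power-saving policy trades a round trip on resume for lower transmission cost while hidden.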
The flowchart shown in fig. 5b is another shooting procedure parallel to the flowchart shown in fig. 5a. In this procedure, after the first electronic device displays the second video image and before the first electronic device receives the video switching operation, i.e., after step 1002 and before step 1003, the shooting method further includes: receiving a second user operation, and hiding the second video image on the main interface according to the second user operation, that is, the interface shown in fig. 4b becomes the interface shown in fig. 4a; and receiving a third user operation, and displaying the second video image on the main interface according to the third user operation, that is, the interface shown in fig. 4a is restored to the interface shown in fig. 4b. Steps 1003 and 1004 may then be performed to effect the video switching, changing the interface shown in fig. 4b to the interface shown in fig. 4c.
That is, in the flow shown in fig. 5a, the user hides the first video image 201, which serves as the non-recording picture, after performing the video switching operation; in the flow shown in fig. 5b, the user hides the second video image 202, which serves as the non-recording picture, and can view the first video image 201 in a larger picture through the main interface 20. When video switching is required, the user resumes the display of the second video image 202 by performing the third user operation, and then performs the video switching operation.
In addition, in the flow shown in fig. 5a and 5b, the third user operation may be the same as the first user operation described above or may be different from the second user operation described above.
Illustratively, the operation control (e.g., the hover ball) is used to trigger hiding the first video image on the main interface and displaying the first video image on the main interface; for example, the second user operation and the third user operation include a click operation on the operation control (e.g., the hover ball). Further, the third user operation may include a first sliding gesture operation on the preset position 30, the second user operation may include a second sliding gesture operation on the preset position 30, and the first and second sliding gesture operations may be sliding gestures in opposite directions; for example, the second sliding gesture operation is a left swipe gesture operation. When the user performs a left swipe gesture operation at the preset position 30 as shown in fig. 4c, or clicks the hover ball, the first video image 201 may be hidden, and the main interface 20 then becomes as shown in fig. 4d; if the user performs a right swipe gesture operation at the preset position 30 as shown in fig. 4d, or clicks the hover ball, the main interface 20 becomes as shown in fig. 4c, i.e., the display of the first video image 201 is resumed. In addition, the second user operation and the third user operation may be other types of user operations, such as a virtual key operation or a physical key operation; the forms of the second user operation and the third user operation are not limited in this embodiment.
The above embodiments are further described below based on fig. 4a to 4 d.
Referring to fig. 4a, a first video image 201 is displayed on the main interface 20 of the first electronic device, where the first video image 201 is one path of video image provided by the first electronic device or an auxiliary machine position.
The first video image 201 may be recorded. When the user performs a video recording operation, for example after the user clicks the video recording button, the first electronic device receives the video recording operation and stores the data of the first video image 201, or transmits the data of the first video image 201 to a server for live broadcast, and so on. The data of the video image is obtained by the machine position providing the video image and transmitted to the first electronic device, and the first electronic device stores the data after encoding processing.
It can be appreciated that, in order to display the first video image 201 more clearly, the first video image 201 may be displayed on the main interface 20 in full screen, or may be displayed in a medium or small window rather than full screen, which may be set according to user requirements; the size of the first video image 201 is not limited in this embodiment.
In addition, the first video image 201 is a video image provided by the first electronic device or an auxiliary machine position. If the display screen of the first electronic device is small, the main interface 20 may display only one path of first video image; if the display screen is large, the main interface 20 may also display multiple paths of first video images, where the multiple paths may come from different machine positions or from different cameras of the same machine position. For example, when the main interface 20 displays two paths of first video images, the two paths may be tiled vertically or horizontally in the main interface. In a practical application scenario, the number of first video images displayed in the main interface may be set based on the screen size of the first electronic device.
See fig. 4b. After the first user operation is received, a second video image 202 is displayed on the main interface 20, where the second video image 202 is a video image provided by the first electronic device or an auxiliary machine position, and the first video image 201 and the second video image 202 are video images provided by different machine positions.
It will be appreciated that, in order not to affect the user's viewing of the first video image 201, the display size of the second video image 202 is not set too large; generally, the display size of the second video image 202 is smaller than that of the first video image 201, and the resolution of the second video image 202 is smaller than that of the first video image 201. When the video image of an auxiliary machine position serves as the second video image 202, the first electronic device sends a video acquisition instruction to the auxiliary machine position, and the auxiliary machine position can transmit a low-resolution video image to the first electronic device so as to reduce the amount of data transmitted. If the first video image 201 is displayed in full screen, the second video image 202 may be displayed superimposed on it; if the first video image 201 is not displayed in full screen, the second video image 202 may or may not be displayed superimposed on it. The size and position of the second video image 202, and its positional relationship with the first video image, are not limited in this embodiment.
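The low-resolution request for the guide picture can be sketched as a small helper that builds the video acquisition instruction. The function name, the role strings, the default resolution, and the downscale factor are all illustrative assumptions; the disclosure does not specify concrete values.

```python
def video_acquisition_instruction(role, full_resolution=(1920, 1080), scale=4):
    """Build a video acquisition instruction for an auxiliary machine position.

    The guide (non-recording) picture is requested at a reduced resolution to
    cut the amount of data transmitted over the wireless link; the recording
    picture is requested at full resolution.
    """
    if role == "guide":
        w, h = full_resolution
        return {"role": role, "resolution": (w // scale, h // scale)}
    return {"role": role, "resolution": full_resolution}
```

With these placeholder values, the guide picture would be requested at one sixteenth of the pixel count of the recording picture, which is the kind of reduction that makes the superimposed small window cheap to transmit.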
In addition, when the display screen of the first electronic device is relatively large, the main interface 20 may include more than one path of second video image; for example, the main interface 20 may display two paths of second video images, tiled vertically or horizontally in a partial area of the main interface 20. In a practical application scenario, the number of second video images 202 may be set based on the size of the display screen and the size of the main interface.
See fig. 4c. In the scenario shown in fig. 4b, the first electronic device receives a video switching operation of the user, where the video switching operation is directed to the second video image 202. The first electronic device then exchanges the display effects of the first video image 201 and the second video image 202 in the main interface 20, as shown in fig. 4c, and at the same time, in the background, switches the first video image 201 to a non-recording picture and the second video image 202 to a recording picture.
In one possible implementation, one of the first video image and the second video image is from the second electronic device 200, which is one auxiliary machine position of the multi-camera shooting system, and the other is from the first electronic device 100 or the third electronic device 300, where the third electronic device 300 is another auxiliary machine position of the system. That is, in the scene shown in fig. 4b or fig. 4c, in combination with the multi-camera shooting system shown in fig. 1, in one possible shooting process one of the first video image 201 and the second video image 202 is shot by the first electronic device 100 itself and the other is shot by the second electronic device 200; in another possible shooting process, one of them is shot by the second electronic device 200 and the other is shot by the third electronic device 300.
Referring to fig. 4d, fig. 4d is a schematic view of a scene of fig. 4c after the first electronic device conceals the first video image. After receiving the second user operation, as shown in fig. 4d, the main interface 20 hides the first video image, and at this time, the main interface 20 displays only the second video image 202.
In some embodiments, when the user needs to view video images of other machine positions, this may be accomplished by performing a machine position switching operation.
As shown in fig. 6, on the basis of the embodiment shown in fig. 3, after the first electronic device switches the first video image 201 and the second video image 202 according to the video switching operation, the photographing method may further include:
Executing step 1009, the first electronic device receives a machine position switching operation, where the machine position switching operation is directed to the first video image 201;
Step 1010 is executed in which the first electronic device replaces the first video image 201 with the third video image 203 according to the machine position switching operation, where the first video image 201 and the third video image 203 are video images provided by different machine positions in the multi-machine position shooting system.
For example, a multi-camera shooting system includes three electronic devices. Before the machine position switching operation, the second video image 202 in the main interface 20 of the first electronic device 100 is a video image captured by a camera of the first electronic device 100, for example a sitting picture; the first video image 201 in the main interface 20 is a video image captured by the second electronic device 200, for example a hand-held bicycle picture; and the video image captured by the third electronic device 300 (for example, a landscape picture) is not yet displayed on the first electronic device 100. After receiving the machine position switching operation, the first electronic device 100 switches the hand-held bicycle picture in the main interface 20 to the landscape picture according to the machine position switching operation. That is, the non-recording picture in the main interface 20 is switched to a video image provided by a different machine position.
When the user needs to switch the non-recording picture by machine position, a machine position switching operation may be performed, so that the user views video images provided by other machine positions through the main interface 20 of the first electronic device 100. When the first electronic device 100 receives the machine position switching operation, a video image of another machine position, provided by the main or an auxiliary machine position, is displayed at the position of the non-recording picture. For example, suppose the main and auxiliary machine positions provide four video images in total, of which one is the recording picture of the main interface 20 and the remaining three can be presented as non-recording pictures. If the current non-recording picture displays the video image of only one machine position, and the video images of the other two machine positions are not displayed on the main interface 20, performing a machine position switching operation switches the non-recording picture of the main interface 20 to the video image of another machine position, and by repeatedly performing the operation the video images of all three machine positions can be viewed in turn. Through the machine position switching operation, the user can easily view the video images of all machine positions that have not entered the main interface 20, which makes it convenient to watch the video images of different machine positions.
Illustratively, the machine position switching operation includes a double-click operation on the first video image 201. For example, when the user double-clicks the first video image 201 displayed on the main interface 20, the first video image 201 is switched to the third video image 203 of another machine position; if the user then double-clicks the third video image 203, it is switched to the video image of yet another machine position, and so on, with one switch per double-click. The order of machine position switching may be preset by the system. The machine position switching operation may also be another type of operation, such as a virtual key operation or a physical key operation; for example, the main interface may include a machine position switching control, and the machine position switching operation may include an operation acting on that control.
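The cycling behaviour described above, where each double-click advances the non-recording picture to the next machine position while skipping the one being recorded, can be sketched as follows. This is an illustrative model only; the class and field names are assumptions, not taken from the patent.

```python
# Hypothetical sketch: cycle the non-recording picture through the machine
# positions that are not currently the recording picture.

class PositionSwitcher:
    def __init__(self, positions, recording, displayed):
        # positions: ordered list of all machine-position ids in the system
        # recording: the position whose image is the recording picture
        # displayed: the position currently shown as the non-recording picture
        self.positions = positions
        self.recording = recording
        self.displayed = displayed

    def next_position(self):
        """Return the next position to show, skipping the recording one."""
        candidates = [p for p in self.positions if p != self.recording]
        i = candidates.index(self.displayed)
        return candidates[(i + 1) % len(candidates)]

    def on_double_click(self):
        # Each double-click advances the non-recording picture by one position.
        self.displayed = self.next_position()
        return self.displayed
```

With positions host, B, C and D, where the host provides the recording picture and B is currently displayed, successive double-clicks cycle the non-recording picture through C, D and back to B.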
During machine position switching, the second electronic device and the third electronic device may behave differently. During the switching, the second electronic device may continue to transmit the first video image to the first electronic device, and the third electronic device may likewise transmit the third video image, so that the first electronic device can quickly resume displaying either image. Alternatively, the first electronic device may send instructions to the second and third electronic devices: for example, the second electronic device receives an instruction from the first electronic device and stops transmitting the first video image according to the instruction, so as to save power consumption, while the third electronic device receives an instruction and resumes transmitting the third video image according to it. If the second and third electronic devices are mobile phones, then during the machine position switching of the first electronic device the screen of the second electronic device may always display the first video image, or may hide the first video image after the first electronic device switches the first video image to the third video image; similarly, the screen of the third electronic device may always display the third video image, or may start displaying it only after the switch.
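The stop/resume transmission instructions mentioned above can be modelled minimally as an auxiliary-side handler; the instruction strings below are illustrative assumptions, not the patent's actual message format.

```python
# Hedged sketch: an auxiliary position reacts to a transmit-control
# instruction from the first electronic device. "pause_stream" stops the
# video image transmission to save power; "resume_stream" restarts it.

def on_instruction(streaming, instruction):
    """streaming: current bool state; returns the new streaming state."""
    if instruction == "pause_stream":
        return False  # stop sending the video image to save power
    if instruction == "resume_stream":
        return True   # resume sending so the host can display it again
    return streaming  # unrecognised instructions leave the state unchanged
```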
In some embodiments, as shown in fig. 7, when the second electronic device 200 is a mobile phone including a front camera and a rear camera as shown in fig. 2, the first video image in the main interface 20 of the first electronic device 100 comes from the rear camera of the second electronic device 200; in order to view the video image provided by the front camera of the second electronic device 200, a camera switching operation may be performed.
On the basis of the embodiment shown in fig. 3, after the first electronic device 100 switches the first video image 201 and the second video image 202 according to the video switching operation, the photographing method may further include:
Step 1011 is executed, wherein the first electronic device 100 receives a camera switching operation, the camera switching operation being directed to the first video image 201;
step 1012 is executed in which the first electronic device 100 replaces the first video image 201 with the fourth video image 204 according to the camera switching operation, where the first video image 201 and the fourth video image 204 are video images provided by different cameras of the same machine position in the multi-camera shooting system.
For example, the multi-camera shooting system includes a first electronic device 100 and a second electronic device 200, where the second electronic device 200 has a front camera and a rear camera. Before the camera switching operation is received, the second video image 202 in the main interface 20 of the first electronic device 100 is a video image shot by the camera of the first electronic device 100, such as a sitting picture, and the first video image 201 is a video image shot by the rear camera of the second electronic device 200, such as a hand-held bicycle picture; at this time the second electronic device 200 displays the hand-held bicycle picture shot by its rear camera. After the camera switching operation is received, the first electronic device 100 switches the first video image 201 in the main interface 20 to a fourth video image 204 according to the operation, where the fourth video image 204 is a self-portrait picture shot by the front camera of the second electronic device 200. That is, the non-recording picture of the main interface 20 in the first electronic device 100 is switched to a video image provided by a different camera of the same machine position.
When the user needs to switch the non-recording picture of the main interface 20 in the first electronic device 100 to a video image provided by another camera of the same machine position, the camera switching operation may be performed. When the first electronic device 100 receives the camera switching operation, it switches the current non-recording picture in the main interface 20 to a video image provided by another camera of the same machine position. Through the camera switching operation, the user can easily view the video images provided by each camera of the same machine position, which makes it convenient to watch the video images of different cameras and improves the flexibility of video shooting.
Illustratively, the camera switching operation includes a triple-click operation on the first video image 201. For example, when the user triple-clicks the first video image 201 on the main interface 20, the first video image 201 is switched to a video image provided by another camera of the same machine position; if the machine position has more than two cameras, continuing to triple-click switches again, with one switch per triple-click. The order in which the cameras are switched may be preset by the system. The camera switching operation may also be another type of operation, such as a virtual key operation or a physical key operation; for example, the main interface 20 may include a camera switching control, and the camera switching operation may include an operation acting on that control.
During camera switching, different situations may exist for the second electronic device. The second electronic device may continuously transmit both the first video image and the fourth video image to the first electronic device, so that the first electronic device can quickly resume displaying either of them. Alternatively, the first electronic device may send an instruction to the second electronic device when the camera is switched: for example, before the camera switching operation is received, both the first electronic device and the second electronic device display the first video image; when the first electronic device receives the camera switching operation, it sends a corresponding instruction to the second electronic device, the second electronic device displays the fourth video image according to the instruction, and the first electronic device switches the first video image to the fourth video image. In addition, before the camera is switched, the second electronic device may not yet be transmitting the fourth video image to the first electronic device; in that case, when the second electronic device receives the instruction from the first electronic device, it may begin transmitting the fourth video image so that the first electronic device can perform the camera switch.
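The instruction-driven behaviour on the auxiliary side can be sketched as follows; the message shape and field names are assumptions made for illustration, not the patent's wire format.

```python
# Illustrative sketch: an auxiliary machine position applies a camera-switch
# instruction from the first electronic device, switching its active camera
# and ensuring the new camera's image is being transmitted.

def handle_camera_switch(aux_state, instruction):
    """aux_state: dict with 'active_camera' and 'streaming' keys.
    instruction: dict like {'cmd': 'switch_camera', 'target': 'front'}.
    Returns the camera whose image the auxiliary will now transmit.
    """
    if instruction.get("cmd") != "switch_camera":
        return aux_state["active_camera"]  # ignore unrelated instructions
    aux_state["active_camera"] = instruction["target"]
    aux_state["streaming"] = True  # start transmitting the new camera's image
    return aux_state["active_camera"]
```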
In some embodiments, in order to meet users' diversified requirements for shooting videos and enrich the expression forms of video, a picture-in-picture effect may also be added to the recorded picture.
As shown in fig. 8, on the basis of the embodiment shown in fig. 3, after the process of switching the first video image 201 and the second video image 202 by the first electronic device 100 according to the video switching operation, the photographing method may further include:
Step 1013 is performed in which the first electronic device 100 receives a picture-in-picture display operation for the first video image 201;
Step 1014 is performed in which the first electronic device 100 displays, on the main interface 20 according to the picture-in-picture display operation, a fifth video image 205 at least partially surrounded by the second video image 202, where the fifth video image 205 is identical to the first video image 201, and the fifth video image 205 and the second video image 202 together form the recording picture or live picture, i.e. the superimposed picture of the fifth video image 205 and the second video image 202 is presented to a viewer for viewing.
For example, the picture-in-picture display operation may include a drag operation on the first video image 201: when the user presses and holds the first video image 201 and drags it upward, the fifth video image 205 is generated.
It will be appreciated that the display size of the fifth video image 205 is smaller than that of the second video image 202, and the resolution of the fifth video image 205 is lower than that of the second video image 202. When the fifth video image 205 comes from an auxiliary machine position, that machine position may transmit a low-resolution video image to the first electronic device to reduce the amount of data transmitted.
When the user needs a picture-in-picture effect in the recording picture, the picture-in-picture display operation may be performed, so that the main interface 20 of the first electronic device 100 displays the fifth video image 205 as part of the recording picture, with at least part of the fifth video image 205 surrounded by the second video image 202. The fifth video image 205 and the second video image 202 are superimposed to form the recording picture; that is, they are recorded as a superimposed video image, which can be saved as a video with the picture-in-picture effect, or transmitted to a server for live broadcast to form a live picture with the picture-in-picture effect. The picture-in-picture recording effect enriches the expression forms of video and improves the flexibility of video shooting.
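The superposition described above amounts to drawing the small picture inside the large one before encoding. A minimal sketch, with frames modelled as 2-D lists of pixel values and illustrative coordinates:

```python
# Hedged sketch: composite the picture-in-picture frame by overlaying the
# fifth video image (pip_frame) onto the second video image (main_frame)
# at position (x, y). The frame representation is a simplification.

def compose_pip(main_frame, pip_frame, x, y):
    """Return a new frame with pip_frame drawn over main_frame at (x, y)."""
    out = [row[:] for row in main_frame]  # copy so the input is untouched
    for r, row in enumerate(pip_frame):
        for c, px in enumerate(row):
            # Clip the overlay so it never writes outside the main frame.
            if 0 <= y + r < len(out) and 0 <= x + c < len(out[0]):
                out[y + r][x + c] = px
    return out
```

The clipping check also covers the case where the user drags the fifth video image partly beyond the main interface boundary.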
For example, the picture-in-picture display operation may include a drag operation on the first video image 201. For example, when the user presses and holds the first video image 201 displayed on the main interface 20 and drags it upward or toward the upper right, a fifth video image 205 identical to the first video image 201 is additionally displayed on the main interface 20. The picture-in-picture display operation may also be another type of operation, such as a virtual key operation or a physical key operation; for example, the main interface may include a picture-in-picture control, and the picture-in-picture display operation may include an operation acting on that control. The form of the picture-in-picture display operation is not limited in this embodiment. In addition, the fifth video image 205 may be dragged to any area covered by the main interface 20.
Illustratively, the user may implement the position movement of the fifth video image 205 in the main interface 20 by dragging the fifth video image 205 in the main interface 20, and the user may select the position where the fifth video image 205 is placed according to the requirement, thereby improving the flexibility of video capturing.
Further, when the user does not need to display the fifth video image 205, the fifth video image 205 may be hidden to end the picture-in-picture display effect.
As shown in fig. 9, when the main interface 20 of the first electronic device 100 displays the fifth video image 205, the photographing method may further include:
Step 1015 is performed in which the first electronic device 100 receives a picture-in-picture hiding operation. The picture-in-picture hiding operation may include a drag operation on the fifth video image 205: for example, the user presses and holds the fifth video image 205 and slides it out of the boundary of the main interface 20, thereby hiding the fifth video image 205, i.e. the main interface 20 of the first electronic device 100 no longer displays it.
Step 1016 is performed in which the first electronic device 100 hides the fifth video image 205 on the main interface 20 according to the picture-in-picture hiding operation.
It should be noted that, when the user no longer needs the pip effect, a pip hiding operation may be performed to hide the fifth video image 205. When the user again desires the pip display, the pip display operation may also be performed again to cause the fifth video image 205 to be displayed. The picture-in-picture effect is displayed or hidden as required, so that the flexibility of video shooting is improved.
For example, besides the drag operation, the picture-in-picture hiding operation may be another type of operation, such as a virtual key operation or a physical key operation; for example, the main interface includes a picture-in-picture control, and the picture-in-picture hiding operation may be an operation acting on that control. The form of the picture-in-picture hiding operation is not limited in this embodiment.
It may be appreciated that, in fig. 8 and fig. 9, the superposition of the second video image 202 and the fifth video image 205 is the recording picture. When the user performs a video recording operation, for example clicks a video recording key, the first electronic device 100 receives the video recording operation and stores video image data, where the video image data corresponds to the video image obtained by superimposing the second video image 202 and the fifth video image 205. The data are acquired by the machine positions providing the video images and transmitted to the first electronic device 100, and the first electronic device 100 stores them, or encodes them and transmits them to a server for live broadcast.
In some embodiments, the user may perform picture control on the video images displayed on the main interface 20 of the first electronic device 100, and picture control such as zooming and focusing may be applied separately to different video images. Accordingly, the main interface 20 of the first electronic device 100 may present the video image after picture control. One implementation is that the first electronic device 100 transmits a picture control instruction to the corresponding machine position, the corresponding machine position adjusts the video image according to the instruction, and the adjusted video image is transmitted back to the first electronic device 100 for display.
In addition, during video image shooting there are usually special effects such as beautification. These effects may be applied to the video image directly on the first electronic device, or a special-effect control instruction may be transmitted to the auxiliary machine position corresponding to the video image, which adjusts the video image according to the instruction and transmits the adjusted video image to the first electronic device 100 for display. In addition, if the first electronic device 100 displays the fifth video image, i.e. the picture-in-picture recording mode is used, the same special-effect control may be applied to the second video image and the fifth video image, or different special-effect controls may be applied to each of them respectively.
In some embodiments, when the user operates a video image displayed on the main interface 20 of the first electronic device 100 and the operated video image is provided by an auxiliary machine position, the first electronic device 100 needs to send a shooting control instruction to the auxiliary machine position, so that the auxiliary machine position transmits the operated video image to the first electronic device 100 for display. In order to reduce the delay in image display, the shooting control instructions sent from the first electronic device 100 to the auxiliary machine position are all transmitted with high priority.
In some embodiments, as shown in fig. 1 and 3, it is assumed that in step 1002 the second video image is from the second electronic device 200, and the second electronic device 200 is an auxiliary machine position of the multi-camera shooting system. The process of step 1002, in which the first electronic device 100 displays the second video image 202 on the main interface 20 according to the first user operation, includes:
Step 10021, the first electronic device 100 generates a data transmission control instruction according to the first user operation, where the data transmission control instruction carries a high-priority tag;
step 10022, the first electronic device 100 transmits the data transmission control instruction to the second electronic device 200 based on the high priority;
the data transmission control instruction is used to control the second electronic device 200 to transmit the second video image shot by the second electronic device back to the first electronic device 100;
Step 10023, the first electronic device 100 receives the second video image sent by the second electronic device 200;
at step 10024, the first electronic device 100 displays the second video image 202 on the main interface 20.
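Steps 10021 to 10024 can be sketched as a compact request/display sequence. All function names below are illustrative stand-ins injected as parameters, not a real API of the devices described.

```python
# Hedged sketch of steps 10021-10024: the first electronic device builds a
# high-priority data-transmission control instruction, sends it, then
# receives and displays the returned second video image.

def request_second_video(send, receive, display):
    instr = {"cmd": "data_transmission", "priority": "high"}  # step 10021
    send(instr)        # step 10022: transmitted based on the high priority
    frame = receive()  # step 10023: second video image arrives from device 200
    display(frame)     # step 10024: shown on the main interface 20
    return frame
```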
In addition, the above method may further include sending a focus control instruction, a zoom control instruction, or the like to the second electronic device 200 based on the high priority according to the user operation. When the first electronic device 100 and the second electronic device 200 communicate through WiFi, shooting control instructions such as the data transmission control instruction, the focus control instruction, and the zoom control instruction are transmitted to the auxiliary machine position through WiFi to perform shooting control. Fig. 10 illustrates a schematic diagram of transmission queues, where MAC service data units (MSDUs) are classified into a plurality of transmission queues through a mapping of transmission queues and access categories; the transmission queues are enhanced distributed channel access (EDCA) queues, which, in order of priority, include VO (voice), VI (video), BE (best effort), and BK (background). To give the user a better operating experience, in the embodiment of the application the data packets of the shooting control instructions carry high-priority labels; the data packets carrying high-priority labels are placed, in a differentiated manner, into the high-priority EDCA queue of WiFi (i.e. the VO queue in fig. 10) and transmitted to the second electronic device, while control instructions other than the shooting control instructions, and the transmission data of the video images, are placed in the default queue according to the protocol. Transmitting the shooting control instructions to the second electronic device based on high priority reduces the communication delay between the first electronic device and the auxiliary machine position.
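The differentiated queueing described above can be sketched as a simple classifier: shooting control instructions go to the high-priority VO access category, everything else to the default best-effort category. The packet fields and command names are illustrative assumptions; the access-category names follow the 802.11 EDCA convention.

```python
# Hedged sketch of the priority tagging: shooting-control instructions are
# mapped to the Wi-Fi EDCA VO (voice) queue, while video data and other
# commands use the default BE (best effort) queue.

SHOOTING_CONTROL_CMDS = {"data_transmission", "focus", "zoom"}

def classify_packet(packet):
    """Return the EDCA access category for an outgoing packet."""
    if packet.get("type") == "control" and packet.get("cmd") in SHOOTING_CONTROL_CMDS:
        return "AC_VO"   # high-priority queue for shooting control
    return "AC_BE"       # default queue for video data and other commands
```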
In some embodiments, in order to reduce power consumption of each machine in the multi-machine shooting system, when a video image of a certain machine is not displayed on a main interface of the first electronic device, the corresponding machine may be controlled to enter a low-power consumption working state.
Fig. 11 is a flowchart of a shooting method of another multi-camera shooting system according to an embodiment of the present application. On the basis of the embodiment shown in fig. 3, before a third user operation is received, for example after step 1004, assuming the first video image 201 is from the second electronic device 200 and the second electronic device 200 is an auxiliary machine position of the multi-camera shooting system, the shooting method may further include:
Step 1005, receiving a second user operation;
step 1006, hiding the first video image 201 at the main interface 20 according to the second user operation, wherein the specific process of steps 1005 and 1006 can refer to the descriptions related to fig. 5a and 5 b;
Step 1017: after the main interface 20 hides the first video image 201, and the first electronic device 100 no longer displays any video image from the second electronic device 200, the first electronic device 100 sends a standby control instruction to the second electronic device 200. The standby control instruction is used to instruct the second electronic device 200 to turn off, or operate in a low-power-consumption mode, any component or combination of components among the camera module, the image signal processor (ISP), the video processor (VPU), the display screen, and the wireless communication link, where the wireless communication link is used to transmit video images from the second electronic device 200 to the first electronic device 100. If the second electronic device 200 receives the standby control instruction, it enters standby according to the instruction, for example turning off any of the above components or combinations, or switching them to operate in the low-power-consumption mode.
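How an auxiliary position might apply such a standby control instruction can be sketched as follows; the component names mirror the description above, while the instruction structure is an assumption made for illustration.

```python
# Hedged sketch: apply a standby control instruction on the auxiliary side,
# turning the named components off or into a low-power mode.

LOW_POWER_CAPABLE = {"camera", "isp", "vpu", "display", "wifi_link"}

def apply_standby(components, instruction):
    """components: dict of component name -> state ('on' initially).
    instruction: dict with optional 'power_down' and 'low_power' lists.
    Returns the updated component states.
    """
    for name in instruction.get("power_down", []):
        if name in LOW_POWER_CAPABLE and name in components:
            components[name] = "off"
    for name in instruction.get("low_power", []):
        if name in LOW_POWER_CAPABLE and name in components:
            components[name] = "low_power"
    return components
```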
The machine position in the multi-machine position shooting system can be a mobile device or other wireless shooting devices, and can only rely on the electric quantity of the mobile device or other wireless shooting devices. For example, the second electronic device 200 may be a mobile phone, and the reduction of the shooting power consumption of the mobile phone needs to be considered, where the power consumption is reduced by turning off or controlling at least part of the components of the second electronic device 200 to operate in a low power consumption mode when the second electronic device 200 is not in use, so as to keep the low power consumption of the device, for example, controlling the second electronic device 200 to enter a standby mode. That is, for the multi-camera shooting system, if a video image of a certain camera is not displayed by the first electronic device 100, i.e., is not used by the first electronic device 100, the first electronic device 100 can control the corresponding camera to reduce power consumption by transmitting a standby control instruction thereto. In addition, for the first electronic device 100, if all the video images displayed by the first electronic device 100 come from the auxiliary units, that is, the first electronic device 100 does not display the video images shot by the first electronic device 100, the camera module of the first electronic device 100, the image signal processor ISP and other devices that do not affect the display can be controlled to be turned off or controlled to operate in the low power consumption mode. For example, the multi-camera shooting system includes a first electronic device 100 and a second electronic device 200, as shown in fig. 4c and fig. 
4d, the second video image 202 is from a video image shot by the first electronic device 100, the first video image 201 is from the second electronic device 200, and when the first electronic device 100 hides the first video image 201 at the main interface 20, the first electronic device 100 may send a corresponding standby control instruction to the second electronic device 200 to control the second electronic device 200 to enter a standby mode, so as to save power consumption.
In addition, in one possible implementation, the user may control the first video image 201 to be displayed again very shortly after hiding it. To improve the situation in which the second electronic device 200 would otherwise need to be frequently put into and woken from standby, step 1017, i.e. the process in which the first electronic device 100 sends the standby control instruction to the second electronic device 200 after the main interface 20 hides the first video image 201 and the first electronic device 100 no longer displays a video image from the second electronic device 200, may include:
When the first electronic device 100 has hidden the first video image 201 on the main interface 20 for more than a preset time, and the first electronic device 100 has not displayed any video image from the second electronic device 200 within the preset time, the first electronic device 100 sends the standby control instruction to the second electronic device 200. This avoids frequently waking the second electronic device 200 and the wireless communication link from the standby mode.
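The preset-time guard above amounts to a debounce check before sending the standby instruction. A minimal sketch, where timestamps are plain numbers and the preset duration is an assumed example value (the patent does not fix one):

```python
# Hedged sketch: only send the standby control instruction if the hidden
# image has stayed unused for the whole preset duration, so the auxiliary
# is not put into and woken from standby in quick succession.

PRESET_SECONDS = 10  # illustrative value, not specified by the description

def should_send_standby(hidden_at, last_used_at, now, preset=PRESET_SECONDS):
    """True when the image has been hidden, and not re-displayed, for `preset`."""
    if hidden_at is None:
        return False  # the image was never hidden
    if last_used_at is not None and last_used_at > hidden_at:
        return False  # the image was displayed again after being hidden
    return now - hidden_at >= preset
```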
The first electronic device 100 may be provided with a detection module, configured to periodically detect a target machine position whose video image is not used in the display screen of the first electronic device; when the target machine position is detected, the target machine position is notified to enter the standby mode and turn off the corresponding components.
When the second electronic device 200 is at least partially turned off or controlled to operate in the low-power-consumption mode, if the first electronic device 100 uses the video image of the second electronic device 200 again, the auxiliary machine position needs to be woken quickly. It should be noted that quickly restarting the second electronic device 200 from the standby mode requires 1) quick recovery of the wireless communication link, and 2) quick recovery of the powered-down components, such as the camera module, ISP, VPU, and/or display screen.
In some embodiments, as shown in fig. 12, the wireless communication link has a continuous communication state in which video image transmission efficiency between the first electronic device 100 and the second electronic device 200 is high, so that the first electronic device 100 receives the video image from the second electronic device 200 through the wireless communication link and displays it on the main interface 20 of the first electronic device 100. When the first electronic device 100 has hidden the first video image 201 for more than a preset time and has not displayed any video image from the second electronic device 200 within the preset time, it is likely that the user will not use the video image from the second electronic device 200 for a period of time, so the first electronic device 100 sends a standby control instruction to the second electronic device 200. The standby control instruction instructs the second electronic device 200 to turn off components such as its camera, and also instructs the second electronic device 200 to change the wireless communication link from the continuous communication state to a periodic dormant communication state. In the periodic dormant communication state, the wireless communication link alternates between dormant phases and wake-up phases, and data can be transmitted in the wake-up phases. That is, data transmission can be performed intermittently, which saves power consumption while preserving the basic information transmission function of the wireless communication link.
When the duration for which the wireless communication link stays in the periodic dormant communication state reaches a first preset duration, a Bluetooth Low Energy (BLE) connection between the first electronic device 100 and the second electronic device 200 is established and the wireless communication link is disconnected. For example, the first electronic device 100 and the second electronic device 200 may use an inactivity timer to decide when to disconnect the wireless communication link (i.e., the WiFi link): when the duration of the periodic dormant communication state reaches the first preset duration, that is, when the inactivity timer expires, the wireless communication link is disconnected to reduce power consumption. However, to ensure that the first electronic device 100 and the second electronic device 200 can still communicate after the wireless communication link is disconnected, a BLE connection between the two may be established to carry functions such as command transmission.
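The link transitions described above (continuous, then periodic dormant, then BLE-only once the inactivity timer expires) can be sketched as a small state machine. The state names and the LinkManager class are illustrative assumptions, not from the patent:

```python
from enum import Enum, auto

class LinkState(Enum):
    CONTINUOUS = auto()        # WiFi link, full-rate video transmission
    PERIODIC_DORMANT = auto()  # WiFi link, alternating dormant/wake-up phases
    BLE_ONLY = auto()          # WiFi disconnected, BLE carries commands

class LinkManager:
    def __init__(self, first_preset_duration_s):
        self.first_preset_duration_s = first_preset_duration_s
        self.state = LinkState.CONTINUOUS
        self.dormant_elapsed_s = 0.0

    def on_standby_instruction(self):
        self.state = LinkState.PERIODIC_DORMANT
        self.dormant_elapsed_s = 0.0

    def tick(self, dt_s):
        """Advance the inactivity timer while in the dormant state."""
        if self.state is LinkState.PERIODIC_DORMANT:
            self.dormant_elapsed_s += dt_s
            if self.dormant_elapsed_s >= self.first_preset_duration_s:
                # Inactivity timer expired: establish BLE, drop the WiFi link.
                self.state = LinkState.BLE_ONLY

    def on_standby_resume(self):
        # From PERIODIC_DORMANT the WiFi link resumes directly; from
        # BLE_ONLY the resume command travels over BLE and the WiFi link
        # must be re-authenticated before returning to CONTINUOUS.
        self.state = LinkState.CONTINUOUS
```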
As shown in fig. 11 and 13, after the first electronic device 100 sends the standby control instruction to the second electronic device 200, the method further includes the first electronic device 100 executing a first standby recovery procedure S1 or a second standby recovery procedure S2. The first standby recovery procedure S1 and the second standby recovery procedure S2 each include a process in which the first electronic device 100 receives a third user operation and displays the first video image 201 on the main interface 20 according to the third user operation.
As shown in fig. 11, in the first standby recovery procedure S1, the third user operation is the standby recovery operation. That is, the first standby recovery procedure S1 includes step 1018, in which the first electronic device 100 receives the standby recovery operation, and step 1021, in which the first electronic device 100 displays the first video image 201 on the main interface 20 according to the standby recovery operation.
As shown in fig. 13, the second standby recovery process S2 includes:
Step 1007, the first electronic device 100 receives a third user operation;
The first electronic device 100 performs step 1021 of displaying the first video image 201 on the main interface 20 according to the third user operation;
The second standby recovery procedure S2 further includes performing step 1018 before receiving the third user operation, in which the first electronic device 100 receives a standby recovery operation.
As shown in fig. 11 and 13, before the process of displaying the first video image 201 by the main interface 20, the first standby recovery process S1 and the second standby recovery process S2 further include:
The first electronic device 100 performs step 1019: according to the standby recovery operation, determining whether the wireless communication link is in the periodic dormant communication state or in the disconnected state at the time the standby recovery operation is received;
As shown in fig. 14, if the standby recovery operation is received while the wireless communication link is in the periodic dormant communication state, step 10201 is performed: a standby recovery control instruction instructing the second electronic device 200 to restore the wireless communication link to the continuous communication state is sent to the second electronic device 200 through the wireless communication link. In this case, since the wireless communication link has not been disconnected, no BLE connection needs to be established and the link can be quickly restored to the continuous communication state;
as shown in fig. 12, if the standby recovery operation is received while the wireless communication link is disconnected, step 10202 is performed: the standby recovery control instruction is sent to the second electronic device 200 through the Bluetooth Low Energy (BLE) connection, instructing the second electronic device 200 to restore the wireless communication link to the continuous communication state, after which the BLE connection may be disconnected to save power consumption. In this case, since the wireless communication link was disconnected earlier, authentication is required first, and only after authentication succeeds can the wireless communication link be restored to the continuous communication state.
After the wireless communication link is restored to the continuous communication state, the first electronic device 100 may receive the video image from the second electronic device 200 through the wireless communication link, and may then perform step 1021 of displaying the first video image 201 on the main interface 20 according to the standby recovery operation. In the first standby recovery procedure S1, a single standby recovery operation triggers both the sending of the standby recovery control instruction and the display of the first video image 201, whereas in the second standby recovery procedure S2, the standby recovery operation triggers the sending of the standby recovery instruction, and a separate third user operation triggers the display of the first video image 201. Therefore, in the second standby recovery procedure S2, the standby recovery operation may be set as a pre-operation of the third user operation. That is, the standby recovery instruction can be sent in advance, before the user performs the third user operation, so that the wireless communication link and the turned-off components of the second electronic device are restored ahead of time. When the third user operation is then received, the first video image 201 from the second electronic device can be displayed on the main interface 20 immediately, improving the response speed of displaying the first video image 201.
For example, in the first standby recovery procedure S1, the third user operation is the standby recovery operation. That is, the standby recovery operation includes a click operation on an operation control (e.g., a hover ball), and the third user operation may further include a first slide gesture operation on the preset position 30. As shown in fig. 11, after step 1017, when the first electronic device receives the user's click operation on the hover ball or the first slide gesture operation on the preset position 30, the process from step 1019 to step 1021 is performed to restore the display of the first video image 201.
For example, in the second standby recovery procedure S2, the standby recovery operation includes a touch operation applied to the preset position 30. Assuming that the preset position 30 is the to-be-displayed region corresponding to the first video image, as shown in fig. 13, after step 1017, when the first electronic device receives the user's touch operation on the preset position 30, steps 1019 to 10202 are performed to send the standby recovery control instruction to the second electronic device, and when the first electronic device receives the user's first slide gesture operation on the preset position 30, step 1021 is performed to restore the display of the first video image 201. To perform the first slide gesture operation on the preset position 30, the user must first touch the preset position 30. In other words, when the user performs the third user operation, its preceding touch triggers the sending of the standby recovery control instruction in advance, and the first video image 201 is then displayed according to the third user operation.
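The pre-operation idea above can be sketched as two handlers: the touch-down that necessarily precedes a slide gesture fires the wake-up early, so the link is already recovering by the time the slide completes. The names (GestureRouter, send_resume, show_image) are illustrative assumptions:

```python
class GestureRouter:
    def __init__(self, send_resume, show_image):
        self.send_resume = send_resume  # sends the standby recovery instruction
        self.show_image = show_image    # displays the first video image
        self.resume_sent = False

    def on_touch_down(self, in_preset_area):
        # Standby recovery operation: fires as soon as the finger lands
        # on the preset position.
        if in_preset_area and not self.resume_sent:
            self.send_resume()
            self.resume_sent = True

    def on_slide_complete(self, in_preset_area):
        # Third user operation: fires only when the slide gesture
        # finishes, by which time the link and camera are already
        # recovering.
        if in_preset_area:
            self.show_image()
            self.resume_sent = False
```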
It should be noted that, in the photographing method of the multi-camera photographing system of the embodiment of the present application, the broadcast period interval of the BLE connection is shortened relative to the broadcast period used in other processes. For example, when BLE is used for smart home appliance control, the broadcast period interval of the BLE connection is 200 ms, whereas in the photographing method of the multi-camera photographing system of the embodiment of the present application, the broadcast period interval of the BLE connection can be adjusted to 50 ms, so that during photographing, an instruction sent to the second electronic device through the BLE connection can be responded to quickly.
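The latency benefit of the shorter interval can be estimated with simple arithmetic: a command arriving at a random moment waits, on average, half the broadcast interval for the next window. A minimal sketch using the 200 ms and 50 ms figures from the text (the averaging model itself is an assumption):

```python
def mean_command_wait_ms(broadcast_interval_ms):
    """Mean wait until the next broadcast window for a command that
    arrives at a uniformly random moment within the interval."""
    return broadcast_interval_ms / 2.0

# 200 ms interval (e.g. smart home appliance control): ~100 ms mean wait.
# 50 ms interval (this photographing method): ~25 ms mean wait.
assert mean_command_wait_ms(200) == 100.0
assert mean_command_wait_ms(50) == 25.0
```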
In other embodiments, as shown in fig. 15, the standby control instruction instructs the second electronic device to change the wireless communication link from the continuous communication state to a first periodic dormant communication state, whose power consumption is lower than that of the continuous communication state. When the duration for which the wireless communication link stays in the first periodic dormant communication state reaches a first preset duration, the first electronic device changes the wireless communication link to a second periodic dormant communication state. The dormant period of the first periodic dormant communication state is shorter than that of the second periodic dormant communication state, so changing the link to the second periodic dormant communication state further reduces power consumption.
Specifically, as shown in figs. 15 to 18, after the standby control instruction is sent to the second electronic device, the method further includes the first electronic device executing a third standby recovery procedure S3 or a fourth standby recovery procedure S4, each of which includes receiving the third user operation and displaying the first video image 201 on the main interface 20 according to the third user operation;
As shown in fig. 16, in the third standby recovery procedure S3, the third user operation is the standby recovery operation. That is, the third standby recovery procedure S3 includes step 1018 of receiving the standby recovery operation and step 1021 of displaying the first video image 201 on the main interface 20 according to the standby recovery operation.
As shown in fig. 17, the fourth standby recovery process S4 includes:
Step 1007, receiving a third user operation;
According to the third user operation, step 1021 is performed, in which the first video image 201 is displayed on the main interface 20;
the fourth standby recovery procedure S4 further includes performing step 1018, receiving a standby recovery operation, before receiving the third user operation.
As shown in fig. 16 and 17, before the process of displaying the first video image 201 by the main interface 20, the third standby recovery process S3 and the fourth standby recovery process S4 further include:
according to the standby restoration operation, step 10201 is performed of transmitting a standby restoration control instruction to the second electronic device over the wireless communication link, the standby restoration control instruction being for instructing the second electronic device to restore the wireless communication link to the continuous communication state.
After the wireless communication link is restored to the continuous communication state, the first electronic device 100 may receive the video image from the second electronic device 200 through the wireless communication link, and then may perform step 1021 according to the standby restoring operation, which includes displaying the first video image 201 on the main interface 20.
For example, in the third standby recovery procedure S3, the third user operation is the standby recovery operation. That is, the standby recovery operation includes a click operation on an operation control (e.g., a hover ball), and the third user operation may further include a first slide gesture operation on the preset position 30. As shown in fig. 16, after step 1017, when the first electronic device receives the user's click operation on the hover ball or the first slide gesture operation on the preset position 30, the process of steps 10201 to 1021 is performed to restore the display of the first video image 201.
For example, in the fourth standby recovery procedure S4, the standby recovery operation includes a touch operation applied to the preset position 30. Assuming that the preset position 30 is the to-be-displayed region corresponding to the first video image, as shown in fig. 17, after step 1017, when the first electronic device receives the user's touch operation on the preset position 30, step 10201 is performed to send the standby recovery control instruction to the second electronic device, and when the first electronic device receives the user's first slide gesture operation on the preset position 30, step 1021 is performed to restore the display of the first video image 201. To perform the first slide gesture operation on the preset position 30, the user must first touch the preset position 30. In other words, when the user performs the third user operation, its preceding touch triggers the sending of the standby recovery control instruction in advance, and the first video image 201 is then displayed according to the third user operation.
In the embodiments shown in figs. 16 and 17, the data transmission process shown in fig. 18 may be applied to the wireless communication link. The wireless communication link includes a plurality of synchronization periods, each lasting, for example, 524 ms, and each synchronization period includes an exploration window and a data transmission stage, the exploration window lasting, for example, 16 ms. In each exploration window, the master device (Master) on the wireless communication link may send synchronization beacons (Beacons), and the slave device (Slave) only needs to receive the synchronization beacons to stay synchronized with the master device. Therefore, in the embodiment of the present application, if the multi-camera shooting system includes a mobile phone and a wireless shooting device, the mobile phone serves as the master device and the wireless shooting device serves as the slave device, which reduces the power consumption of the wireless shooting device. As shown in figs. 15 and 18, in the continuous communication state, data is transmitted in the data transmission stage of every synchronization period. In the first and second periodic dormant communication states, the wireless communication link alternates between dormant stages and wake-up stages: data can be transmitted in the data transmission stage of a wake-up stage but not in a dormant stage, and the dormant stage of the first periodic dormant communication state is shorter than that of the second periodic dormant communication state.
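The power saving of the dormant states can be illustrated by an approximate radio duty cycle for the sync-period structure above (524 ms periods, 16 ms exploration windows). Only those two figures come from the text; the awake-to-total period ratios below are illustrative assumptions:

```python
def duty_cycle(sync_period_ms, explore_window_ms, awake_periods, total_periods):
    """Fraction of time the slave's radio must be active: the exploration
    window in every period (to receive beacons) plus the full data
    transmission stage in the awake periods."""
    data_stage_ms = sync_period_ms - explore_window_ms
    active_ms = total_periods * explore_window_ms + awake_periods * data_stage_ms
    return active_ms / (total_periods * sync_period_ms)

# Continuous state: every period transmits data.
continuous = duty_cycle(524, 16, awake_periods=8, total_periods=8)
# First periodic dormant state: e.g. awake every 2nd period (assumed ratio).
first_dormant = duty_cycle(524, 16, awake_periods=4, total_periods=8)
# Second periodic dormant state: e.g. awake every 8th period (assumed ratio).
second_dormant = duty_cycle(524, 16, awake_periods=1, total_periods=8)
assert continuous > first_dormant > second_dormant
```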
In some embodiments, when the second electronic device enters the standby mode, the auxiliary light on the second electronic device may be controlled to flash, reminding the user that the device is still part of the multi-camera shooting system in the standby mode even though its lens and other components are not currently in use. It will be appreciated that the user may be reminded in other ways as well, not only by a flashing auxiliary light, so this embodiment does not limit the way of reminding the user that the standby device is operating in the standby mode. As shown in fig. 19, taking as an example a specific scenario in which the first electronic device 100 and the second electronic device 200 are mobile phones and the third electronic device 300 is a wireless photographing device, where the second electronic device 200 and the third electronic device 300 are each provided with an auxiliary lamp 900: when the first electronic device 100 displays only the picture shot by itself, the first electronic device 100 sends the standby control instruction to the second electronic device 200 and the third electronic device 300, so that the screen and camera of the second electronic device 200 are turned off, the camera of the third electronic device 300 is turned off, and the auxiliary lamps on both devices flash to indicate to the user that each device is still part of the standby multi-camera setup, even though its camera is not in use.
In some embodiments, as shown in fig. 20, the first electronic device 100 receives a touch operation or a click operation performed by the user on the preset position 30 and sends a standby recovery control instruction to the second electronic device according to that operation. After the standby recovery control instruction is sent to the second electronic device, the method further includes:
The first electronic device 100 displays the transition image 209 at the preset position 30 according to a third user operation, for example, a first slide gesture operation acting on the preset position 30;
The process in which the first electronic device 100 displays the first video image on the main interface according to the third user operation includes: when the first electronic device 100 receives the first video image from the second electronic device through the wireless communication link restored to the continuous communication state, the transition image 209 is replaced with the first video image 201. That is, the restored first video image 201, which comes from the second electronic device, is displayed on the first electronic device 100.
For example, on the basis of the procedure shown in fig. 11 or fig. 16, the third user operation is the standby recovery operation. That is, the restoration of the wireless communication link, the restoration of the turned-off components of the second electronic device, and the restoration of the display of the first video image 201 on the first electronic device may all be triggered by the same user operation. After the standby recovery control instruction is sent to the second electronic device, the method further includes:
The first electronic device 100 displays a transition image at a preset position according to the standby restoration operation;
The first electronic device 100 replaces the transition image 209 with the first video image 201 when it receives the first video image from the second electronic device through the wireless communication link restored to the continuous communication state according to the standby recovery operation. That is, a step of displaying the transition image is added to the procedure shown in fig. 11 or fig. 16, and step 1021 is replaced with the process of replacing the transition image 209 with the first video image 201.
For example, the third user operation, i.e., the standby recovery operation, is the first slide gesture operation on the preset position 30. After the first electronic device 100 hides the first video image 201 and sends the standby control instruction to the second electronic device, when the user wishes to resume displaying the first video image 201, the user may perform the first slide gesture operation on the preset position 30 of the first electronic device 100. Since displaying the video image of the auxiliary camera requires the auxiliary camera to transmit the video image to the first electronic device 100, a time difference may occur between the first slide gesture operation and the display of the auxiliary camera's video image on the main interface. During this time difference, the first electronic device 100 displays a pre-stored transition image 209 at the preset position 30 according to the first slide gesture operation, and when the second electronic device and the wireless communication link have recovered from the low-power state, the first electronic device 100 replaces the transition image 209 with the first video image 201, so that the first video image 201 is again displayed on the first electronic device 100 and the problem of image display delay is mitigated. The transition image may be a dynamic blurred-rotation effect or another type of fixed image; this embodiment does not limit the type of the transition image.
Similarly, for example, on the basis of the procedure shown in fig. 13 or fig. 17, the standby recovery operation and the third user operation are different operations. For example, the standby recovery operation is a touch operation or a click operation on the preset position 30. That is, after the second electronic device enters the standby mode, if the user needs to use the video image from the second electronic device, recovery may be triggered by a touch operation at the preset position 30 on the first electronic device. After the standby recovery control instruction is sent to the second electronic device, the method further includes:
the first electronic device 100 displays a transition image at a preset position according to a third user operation;
The first electronic device 100 replaces the transition image 209 with the first video image 201 when it receives the first video image from the second electronic device through the wireless communication link restored to the continuous communication state according to the standby recovery operation. That is, a step of displaying the transition image is added to the procedure shown in fig. 13 or fig. 17, and step 1021 is replaced with the process of replacing the transition image 209 with the first video image 201.
Specifically, for example, the standby recovery operation is a touch operation or a click operation on the preset position 30 of the first electronic device 100, and the third user operation is the first slide gesture operation on the preset position 30. After the first electronic device 100 hides the first video image 201 and sends the standby control instruction to the second electronic device, when the user wishes to resume the display of the first video image 201, the user touches the preset position 30 of the first electronic device 100 and then performs the first slide gesture operation, so that the standby recovery operation and the third user operation are performed in sequence. When the standby recovery operation of touching the preset position 30 is received, the standby recovery control instruction is sent to the second electronic device to start recovery from standby in advance. After the first slide gesture operation is received, since displaying the video image of the auxiliary camera requires the auxiliary camera to transmit the video image to the first electronic device 100, a time difference may occur between the first slide gesture operation and the display of the auxiliary camera's video image on the main interface. During this time difference, the first electronic device 100 displays a pre-stored transition image 209 at the preset position 30 according to the first slide gesture operation, and when the second electronic device and the wireless communication link have recovered from the low-power state, the first electronic device 100 replaces the transition image 209 with the first video image 201, so that the first video image 201 is again displayed on the first electronic device 100 and the problem of image display delay is mitigated.
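The transition-image mechanism above can be sketched as a display slot that shows a pre-stored placeholder immediately and swaps in the first remote frame once the link is back. The names (PreviewSlot, TRANSITION) are illustrative assumptions:

```python
TRANSITION = "transition_image_209"  # stands in for the pre-stored image

class PreviewSlot:
    """The to-be-displayed region at the preset position 30."""
    def __init__(self):
        self.displayed = None

    def on_slide_gesture(self):
        # The remote frame is not here yet; cover the latency with the
        # pre-stored transition image.
        self.displayed = TRANSITION

    def on_remote_frame(self, frame):
        # The first frame received over the restored continuous link
        # replaces the transition image; later frames update the display.
        self.displayed = frame

slot = PreviewSlot()
slot.on_slide_gesture()
assert slot.displayed == TRANSITION
slot.on_remote_frame("first_video_image_201")
assert slot.displayed == "first_video_image_201"
```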
In addition, different auxiliary cameras may correspond to different preset positions on the first electronic device. For example, the preset position corresponding to the second electronic device may be the upper left corner of the screen of the first electronic device, while the preset position corresponding to the third electronic device may be the upper right corner. Assume that the first electronic device currently displays only the second video image 202, which is a video image captured by the first electronic device itself, and that both the second electronic device and the third electronic device may be in standby mode.
For example, in a first scenario, when the user clicks, for example, the upper left corner, or swipes the screen from the upper left corner toward the lower right corner, as the third user operation, the first electronic device may send a standby recovery control instruction to the second electronic device.
For example, in a second scenario, when the user clicks, for example, the upper right corner, or swipes the screen from the upper right corner toward the lower left corner, the first electronic device may send a standby recovery control instruction to the third electronic device. An embodiment of a process for controlling the wireless communication link between the first electronic device and the second electronic device is described below. In the initial state, the first electronic device displays only the second video image 202, which is a video image captured by the first electronic device itself. The user then controls the first electronic device through a touch event. If the touch event involves screen control of the remote second electronic device, for example, the user requests that the first video image from the second electronic device be displayed on the first electronic device, the control is forwarded through the CameraHAL to the VirtualCameraAgent of the second electronic device, so that the video image captured by the second electronic device is transmitted to the first electronic device and forms the first video image displayed in the main interface.
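The two scenarios above amount to routing a corner gesture to the auxiliary device that owns that preset position. A minimal sketch, where the corner-to-device mapping mirrors the example (upper left to the second device, upper right to the third) and the dict-based dispatch is an illustrative implementation choice:

```python
PRESET_POSITIONS = {
    "upper_left": "second_electronic_device",
    "upper_right": "third_electronic_device",
}

def route_standby_recovery(corner, send_instruction):
    """Send the standby recovery control instruction to whichever
    auxiliary device owns the touched preset position; ignore gestures
    on other screen areas."""
    target = PRESET_POSITIONS.get(corner)
    if target is not None:
        send_instruction(target)
    return target
```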
The embodiments of the present application will be described below by way of two specific examples.
As shown in fig. 21, two mobile phones supporting multi-camera shooting serve as the first electronic device 100 and the second electronic device 200 respectively. The display and hiding of a video image may be achieved by slide-in and slide-out operations acting on the preset position 30 of the screen of the first electronic device 100. For example, the main interface of the first electronic device 100 may display the second video image 202 captured by the first electronic device 100, with the second video image 202 serving as the recording picture. A slide-out operation (rightward swipe) on the preset position 30 causes the first video image 201 captured by the second electronic device 200 to be displayed on the main interface of the first electronic device 100; a video switching operation swaps the first video image 201 and the second video image 202 in the main interface, so that the first video image 201 becomes the recording picture; and a slide-in operation (leftward swipe) on the preset position 30 hides the second video image 202. In addition, picture-in-picture recording can also be realized. When the first electronic device 100 hides the first video image 201 and the video image of the second electronic device 200 is not in use, the second electronic device 200 may choose whether or not to enable the low power mode. For control transmission to the second electronic device 200, the delay of control operations on the first electronic device 100 can be reduced by the priority optimization strategy described above.
As shown in fig. 22, a mobile phone supporting multi-camera shooting serves as the first electronic device 100 and is networked with a fourth electronic device 400, where the fourth electronic device 400 is a shooting device. The specific process and principle may be similar to those described above in relation to fig. 21, except that the first video image 201 in fig. 22 comes from the fourth electronic device 400, and the fourth electronic device 400 may be provided with a low-power indicator light and a start indicator light. When the first electronic device 100 is not using the video image of the fourth electronic device 400, the low-power indicator light of the fourth electronic device 400 blinks, indicating that the device is networked but is not being used for shooting or preview; when the slid-out first video image 201 is an image from the fourth electronic device 400, the start indicator light of the fourth electronic device 400 blinks, indicating that the device has been enabled for shooting or preview.
The embodiment of the present application further provides a photographing apparatus of a multi-camera photographing system, applied to the first electronic device, where the first electronic device is the main camera of the multi-camera photographing system and the system further includes at least one auxiliary camera. The apparatus includes an interface display module, an operation receiving module, and a video switching module. The interface display module is configured to display a main interface, the main interface including a first video image. The operation receiving module is configured to receive a first user operation in a video shooting mode. The interface display module is further configured to display a second video image on the main interface according to the first user operation, where the first video image comes from the main camera or an auxiliary camera and the second video image comes from an auxiliary camera, that is, the first video image and the second video image are video images provided by different cameras in the multi-camera photographing system. The operation receiving module is further configured to receive a video switching operation directed at the second video image, and the video switching module is configured to switch the first video image and the second video image according to the video switching operation. The photographing apparatus of the multi-camera photographing system can be applied to the photographing method of the multi-camera photographing system in any of the above embodiments; the specific processes and principles are the same as those of the above embodiments and are not repeated here.
It should be understood that the above division of the photographing device of the multi-camera photographing system is merely a division of logical functions; in actual implementation the modules may be fully or partially integrated into one physical entity or may be physically separated. Some modules may be implemented in the form of software invoked by a processing element, and others in the form of hardware. For example, any one of the interface display module, the operation receiving module, and the video switching module may be a separately established processing element, may be integrated in a certain chip of the photographing device, or may be stored in a memory of the photographing device in program form, with its function invoked and executed by a processing element of the photographing device. The implementation of the other modules is similar. In addition, all or part of the modules can be integrated together or implemented independently. The processing element described herein may be an integrated circuit having signal processing capabilities. In implementation, each step of the above method, or each module above, may be implemented by an integrated logic circuit of hardware in a processor element or by instructions in software form.
For example, the modules above may be one or more integrated circuits configured to implement the above methods, such as one or more application specific integrated circuits (application specific integrated circuit, ASIC), one or more digital signal processors (digital signal processor, DSP), or one or more field programmable gate arrays (field programmable gate array, FPGA), or the like. For another example, when a module above is implemented in the form of a processing element scheduling a program, the processing element may be a general purpose processor, such as a central processing unit (central processing unit, CPU) or another processor that can invoke a program. For another example, the modules may be integrated together and implemented in the form of a system-on-a-chip (SoC).
In some embodiments, the main interface includes an operation control for triggering the display of the second video image at the main interface, e.g., the first user operation includes a click operation for the operation control.
In some embodiments, the first user operation includes a first swipe gesture operation for a preset position of the main interface.
In some embodiments, the video switching operation includes a single click operation on a second video image in the main interface.
In some embodiments, one of the first video image and the second video image is from a second electronic device, the second electronic device being one auxiliary camera of the multi-camera shooting system, and the other of the first video image and the second video image is from the first electronic device or a third electronic device, the third electronic device being another auxiliary camera of the multi-camera shooting system.
In some embodiments, the operation receiving module is further configured to receive a second user operation, the interface display module is further configured to hide the first video image at the main interface according to the second user operation, the operation receiving module is further configured to receive a third user operation, and the interface display module is further configured to display the first video image at the main interface according to the third user operation.
In some embodiments, the main interface includes an operation control for triggering hiding of the first video image at the main interface and redisplaying of the first video image at the main interface, e.g., the second user operation and the third user operation each include a click operation on the operation control.
In some embodiments, the third user operation comprises a first swipe gesture operation at a preset location, and the second user operation comprises a second swipe gesture operation at a preset location.
In some embodiments, the operation receiving module is further configured to receive a position switching operation, where the position switching operation is directed at the first video image, and the interface display module is further configured to replace the first video image with a third video image according to the position switching operation, where the first video image and the third video image are video images provided by different camera positions in the multi-camera shooting system.
In some embodiments, the position switching operation includes a double-click operation on the first video image.
In some embodiments, the operation receiving module is further configured to receive a camera switching operation, where the camera switching operation is for a first video image, and the interface display module is further configured to replace the first video image with a fourth video image according to the camera switching operation, where the first video image and the fourth video image are video images provided by different cameras in a same camera in the multi-camera shooting system.
In some embodiments, the camera switching operation includes a triple-click operation on the first video image.
In some embodiments, the operation receiving module is further configured to receive a picture-in-picture display operation, the picture-in-picture display operation being for a first video image, and the interface display module is further configured to display a fifth video image at least partially surrounded by the second video image on the main interface according to the picture-in-picture display operation, the fifth video image being identical to the first video image.
In some embodiments, the picture-in-picture display operation includes a drag operation on the first video image.
In some embodiments, the apparatus further comprises a standby control module configured to, after the first video image is hidden at the main interface, send a standby control instruction to the second electronic device, where the standby control instruction is configured to instruct the second electronic device to turn off, or operate in a low power mode, any one or any combination of: the camera module, the image signal processor (ISP), the video processor (VPU), the display screen, and the wireless communication link. The wireless communication link is configured to transmit a video image from the second electronic device to the first electronic device, the second electronic device is an auxiliary camera of the multi-camera shooting system, and the video image is the first video image or the second video image.
In some embodiments, the standby control instruction is further configured to instruct the second electronic device to change the wireless communication link from a continuous communication state to a periodic dormant communication state. The apparatus further includes a communication control module configured to establish a Bluetooth Low Energy connection between the first electronic device and the second electronic device and to disconnect the wireless communication link when the duration of the wireless communication link in the periodic dormant communication state reaches a first preset duration. The operation receiving module is specifically configured to receive the third user operation in a first standby recovery procedure, in which the third user operation doubles as the standby recovery operation, or to receive a standby recovery operation and the third user operation in a second standby recovery procedure. The standby control module is further configured to send a standby recovery control instruction to the second electronic device through the wireless communication link if the standby recovery operation is received while the wireless communication link is in the periodic dormant communication state, or through the Bluetooth Low Energy connection if the standby recovery operation is received while the wireless communication link is disconnected. The standby recovery control instruction is configured to instruct the second electronic device to restore the wireless communication link to the continuous communication state.
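The standby and recovery flow described above can be sketched as a small state machine. This is an illustrative model only; the state names and the timeout value are assumptions, not values from the patent:

```python
# Hypothetical sketch of the standby-recovery flows described above:
# continuous -> periodic dormant -> (after a preset duration) disconnected
# with only a Bluetooth Low Energy connection kept alive.

FIRST_PRESET_DURATION = 30  # seconds; illustrative value

class WirelessLink:
    def __init__(self):
        self.state = "continuous"
        self.ble_connected = False
        self.dormant_elapsed = 0

    def enter_standby(self):
        # Standby control instruction: continuous -> periodic dormant.
        self.state = "periodic_dormant"
        self.dormant_elapsed = 0

    def tick(self, seconds):
        # While dormant, time accrues; past the preset duration the main
        # link is torn down and only the BLE connection remains.
        if self.state == "periodic_dormant":
            self.dormant_elapsed += seconds
            if self.dormant_elapsed >= FIRST_PRESET_DURATION:
                self.state = "disconnected"
                self.ble_connected = True

    def standby_recovery(self):
        # First flow: the recovery instruction travels over the dormant
        # link itself. Second flow: the link is gone, so it travels over
        # the BLE connection instead.
        if self.state == "periodic_dormant":
            self.state = "continuous"
        elif self.state == "disconnected" and self.ble_connected:
            self.state = "continuous"
            self.ble_connected = False

link = WirelessLink()
link.enter_standby()
link.tick(40)                 # exceeds the preset duration
print(link.state)             # disconnected; BLE kept alive
link.standby_recovery()
print(link.state)             # continuous again
```

The design point of keeping a BLE connection after the main link is dropped is that it gives the main camera a low-power control channel through which the standby recovery control instruction can still reach the auxiliary device.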
In some embodiments, in the second standby recovery procedure, the standby recovery operation includes a touch operation acting on a preset position.
In some embodiments, the standby control instruction is configured to instruct the second electronic device to change the wireless communication link from the continuous communication state to a first periodic dormant communication state. The apparatus further comprises a communication control module configured to change the wireless communication link to a second periodic dormant communication state when the duration of the wireless communication link in the first periodic dormant communication state reaches a first preset duration, where the dormant period of the first periodic dormant communication state is smaller than the dormant period of the second periodic dormant communication state. The operation receiving module is specifically configured to receive the third user operation in a third standby recovery procedure, in which the third user operation doubles as the standby recovery operation, or to receive a standby recovery operation and the third user operation in a fourth standby recovery procedure. The standby control module is further configured to send a standby recovery control instruction to the second electronic device through the wireless communication link according to the standby recovery operation, and the standby recovery control instruction is configured to instruct the second electronic device to restore the wireless communication link to the continuous communication state.
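The two-tier dormancy in this variant (a short sleep period at first, a longer one after the preset duration) can be sketched as follows; the periods and the timeout are illustrative assumptions, not values from the patent:

```python
# Hypothetical sketch of the two-tier periodic dormancy described above.
FIRST_PRESET_DURATION = 30   # seconds in standby before escalating, assumed
SHORT_SLEEP_PERIOD = 1       # dormant period of the first state, assumed
LONG_SLEEP_PERIOD = 8        # dormant period of the second state, assumed

def dormant_period(elapsed_seconds):
    """Return the sleep period the link should use after `elapsed_seconds`
    in standby: the short period at first, the longer one once the first
    preset duration has passed."""
    if elapsed_seconds < FIRST_PRESET_DURATION:
        return SHORT_SLEEP_PERIOD
    return LONG_SLEEP_PERIOD

print(dormant_period(10))  # still in the first periodic dormant state -> 1
print(dormant_period(60))  # escalated to the second periodic dormant state -> 8
```

Compared with the BLE-disconnect variant above, this keeps the main link alive throughout, trading some standby power for a simpler recovery path: the recovery instruction always travels over the (dormant) wireless communication link itself.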
In some embodiments, in the fourth standby recovery procedure, the standby recovery operation is a touch operation acting on a preset position.
In some embodiments, the interface display module is further configured to display a transition image at a preset position according to the third user operation, and to replace the transition image with the first video image when the first electronic device receives the video image from the second electronic device via the wireless communication link restored to the continuous communication state according to the third user operation.
Corresponding to the above method embodiments, the present application also provides an electronic device comprising a memory for storing computer program instructions and a processor for executing the program instructions, wherein the computer program instructions, when executed by the processor, trigger the electronic device to perform part or all of the steps in the above method embodiments.
Referring to fig. 23, fig. 23 is a schematic structural diagram of an electronic device according to an embodiment of the present application. The electronic device 500 may be any of the electronic devices described in any of the embodiments above. The electronic device 500 may include a processor 410, an external memory interface 420, an internal memory 421, a universal serial bus (universal serial bus, USB) interface 430, a charge management module 440, a power management module 441, a battery 442, an antenna 1, an antenna 2, a mobile communication module 450, a wireless communication module 460, an audio module 470, a speaker 470A, a receiver 470B, a microphone 470C, an earphone interface 470D, a sensor module 480, keys 490, a motor 491, an indicator 492, a camera 493, a display screen 494, a subscriber identity module (subscriber identity module, SIM) card interface 495, and so on. The sensor module 480 may include a pressure sensor 480A, a gyroscope sensor 480B, a barometric pressure sensor 480C, a magnetic sensor 480D, an acceleration sensor 480E, a distance sensor 480F, a proximity sensor 480G, a fingerprint sensor 480H, a temperature sensor 480J, a touch sensor 480K, an ambient light sensor 480L, a bone conduction sensor 480M, and the like.
It should be understood that the illustrated structure of the embodiment of the present invention does not constitute a specific limitation on the electronic device 500.
In other embodiments of the application, electronic device 500 may include more or fewer components than shown, or may combine certain components, or split certain components, or a different arrangement of components. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
The processor 410 may include one or more processing units; for example, the processor 410 may include an application processor (application processor, AP), a modem processor, a graphics processor (graphics processing unit, GPU), an image signal processor (image signal processor, ISP), a controller, a video codec, a digital signal processor (digital signal processor, DSP), a baseband processor, and/or a neural network processor (neural-network processing unit, NPU), etc. The different processing units may be separate devices or may be integrated in one or more processors.
The controller can generate operation control signals according to the instruction operation codes and the time sequence signals to finish the control of instruction fetching and instruction execution.
A memory may also be provided in the processor 410 for storing instructions and data. In some embodiments, the memory in the processor 410 is a cache memory. The memory may hold instructions or data that the processor 410 has just used or recycled. If the processor 410 needs to reuse the instruction or data, it may be called directly from the memory. Repeated accesses are avoided, reducing the latency of the processor 410 and thus improving the efficiency of the system.
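The caching behaviour described above (recently used instructions or data are kept close to the processor so repeated requests avoid the slower path) can be illustrated with a simple software cache; this is an analogy, not the device's actual cache:

```python
# Illustrative sketch of the caching idea above: recently used results are
# retained so repeated requests skip the slow path.
from functools import lru_cache

calls = 0

@lru_cache(maxsize=16)
def load_data(address):
    global calls
    calls += 1            # counts trips down the slow "memory access" path
    return address * 2    # stand-in for the data fetched at that address

load_data(7)   # miss: goes to the slow path
load_data(7)   # hit: served from the cache
print(calls)   # the slow path ran only once -> 1
```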
In some embodiments, processor 410 may include one or more interfaces. The interfaces may include an inter-integrated circuit (inter-integrated circuit, I2C) interface, an inter-integrated circuit sound (inter-integrated circuit sound, I2S) interface, a pulse code modulation (pulse code modulation, PCM) interface, a universal asynchronous receiver/transmitter (universal asynchronous receiver/transmitter, UART) interface, a mobile industry processor interface (mobile industry processor interface, MIPI), a general-purpose input/output (general-purpose input/output, GPIO) interface, a subscriber identity module (subscriber identity module, SIM) interface, and/or a universal serial bus (universal serial bus, USB) interface, among others.
The I2C interface is a bi-directional synchronous serial bus comprising a serial data line (serial data line, SDA) and a serial clock line (serial clock line, SCL). In some embodiments, the processor 410 may contain multiple sets of I2C buses. The processor 410 may be coupled to the touch sensor 480K, charger, flash, camera 493, etc., respectively, through different I2C bus interfaces. For example, the processor 410 may couple the touch sensor 480K through an I2C interface, causing the processor 410 to communicate with the touch sensor 480K through an I2C bus interface, implementing the touch functionality of the electronic device 500.
The I2S interface may be used for audio communication. In some embodiments, the processor 410 may contain multiple sets of I2S buses. The processor 410 may be coupled to the audio module 470 via an I2S bus to enable communication between the processor 410 and the audio module 470. In some embodiments, the audio module 470 may communicate audio signals to the wireless communication module 460 through the I2S interface to implement a function of answering a call through a bluetooth headset.
PCM interfaces may also be used for audio communication to sample, quantize and encode analog signals. In some embodiments, the audio module 470 and the wireless communication module 460 may be coupled by a PCM bus interface. In some embodiments, the audio module 470 may also transmit audio signals to the wireless communication module 460 through the PCM interface to implement a function of answering a call through the bluetooth headset. Both the I2S interface and the PCM interface may be used for audio communication.
The UART interface is a universal serial data bus for asynchronous communications. The bus may be a bi-directional communication bus. It converts the data to be transmitted between serial communication and parallel communication. In some embodiments, a UART interface is typically used to connect the processor 410 with the wireless communication module 460. For example, the processor 410 communicates with a bluetooth module in the wireless communication module 460 through a UART interface to implement bluetooth functions. In some embodiments, the audio module 470 may transmit an audio signal to the wireless communication module 460 through a UART interface, implementing a function of playing music through a bluetooth headset.
The MIPI interface may be used to connect the processor 410 to peripheral devices such as the display screen 494 and the camera 493. The MIPI interfaces include a camera serial interface (camera serial interface, CSI), a display serial interface (display serial interface, DSI), and the like. In some embodiments, the processor 410 and the camera 493 communicate through a CSI interface to implement the photographing function of the electronic device 500. The processor 410 and the display screen 494 communicate via a DSI interface to implement the display functionality of the electronic device 500.
The GPIO interface may be configured by software. The GPIO interface may be configured as a control signal or as a data signal. In some embodiments, a GPIO interface may be used to connect the processor 410 with the camera 493, display screen 494, wireless communication module 460, audio module 470, sensor module 480, and the like. The GPIO interface may also be configured as an I2C interface, an I2S interface, a UART interface, a MIPI interface, etc.
The USB interface 430 is an interface conforming to the USB standard specification, and may specifically be a Mini USB interface, a Micro USB interface, a USB Type-C interface, or the like. The USB interface 430 may be used to connect a charger to charge the electronic device 500, or to transfer data between the electronic device 500 and a peripheral device. It can also be used to connect a headset and play audio through the headset, or to connect other electronic devices, such as AR devices.
It should be understood that the connection between the modules illustrated in the embodiments of the present application is only illustrative, and does not limit the structure of the electronic device 500. In other embodiments of the present application, the electronic device 500 may also employ different interfacing manners in the above embodiments, or a combination of multiple interfacing manners.
The charge management module 440 is configured to receive a charge input from a charger. The charger can be a wireless charger or a wired charger. In some wired charging embodiments, the charge management module 440 may receive a charging input of a wired charger through the USB interface 430. In some wireless charging embodiments, the charge management module 440 may receive wireless charging input through a wireless charging coil of the electronic device 500. The battery 442 may be charged by the charge management module 440, and the electronic device may be powered by the power management module 441.
The power management module 441 is configured to connect the battery 442, the charge management module 440 and the processor 410. The power management module 441 receives input from the battery 442 and/or the charge management module 440 to power the processor 410, the internal memory 421, the display screen 494, the camera 493, the wireless communication module 460, and the like. The power management module 441 may also be configured to monitor battery capacity, battery cycle number, battery health (leakage, impedance) and other parameters. In other embodiments, the power management module 441 may also be disposed in the processor 410. In other embodiments, the power management module 441 and the charge management module 440 may be disposed in the same device.
The wireless communication function of the electronic device 500 may be implemented by the antenna 1, the antenna 2, the mobile communication module 450, the wireless communication module 460, the modem processor, the baseband processor, and the like.
The antennas 1 and 2 are used for transmitting and receiving electromagnetic wave signals. Each antenna in electronic device 500 may be used to cover a single or multiple communication bands. Different antennas may also be multiplexed to improve the utilization of the antennas. For example, the antenna 1 may be multiplexed into a diversity antenna of a wireless local area network. In other embodiments, the antenna may be used in conjunction with a tuning switch.
The mobile communication module 450 may provide a solution for wireless communication, including 2G/3G/4G/5G, as applied to the electronic device 500. The mobile communication module 450 may include at least one filter, switch, power amplifier, low noise amplifier (low noise amplifier, LNA), or the like. The mobile communication module 450 may receive electromagnetic waves from the antenna 1, perform processes such as filtering, amplifying, and the like on the received electromagnetic waves, and transmit the processed electromagnetic waves to the modem processor for demodulation. The mobile communication module 450 may amplify the signal modulated by the modem processor, and convert the signal into electromagnetic waves through the antenna 1 to radiate the electromagnetic waves. In some embodiments, at least some of the functional modules of the mobile communication module 450 may be disposed in the processor 410. In some embodiments, at least some of the functional modules of the mobile communication module 450 may be disposed in the same device as at least some of the modules of the processor 410.
The modem processor may include a modulator and a demodulator. The modulator is used for modulating the low-frequency baseband signal to be transmitted into a medium-high frequency signal. The demodulator is used for demodulating the received electromagnetic wave signal into a low-frequency baseband signal. The demodulator then transmits the demodulated low frequency baseband signal to the baseband processor for processing. The low frequency baseband signal is processed by the baseband processor and then transferred to the application processor. The application processor outputs sound signals through audio devices (not limited to speaker 470A, receiver 470B, etc.), or displays images or video through display screen 494. In some embodiments, the modem processor may be a stand-alone device. In other embodiments, the modem processor may be provided in the same device as the mobile communication module 450 or other functional module, independent of the processor 410.
The wireless communication module 460 may provide solutions for wireless communication applied to the electronic device 500, including wireless local area network (wireless local area networks, WLAN) (e.g., wireless fidelity (wireless fidelity, Wi-Fi) network), Bluetooth (bluetooth, BT), global navigation satellite system (global navigation satellite system, GNSS), frequency modulation (frequency modulation, FM), near field communication (near field communication, NFC), infrared (infrared, IR), etc. The wireless communication module 460 may be one or more devices that integrate at least one communication processing module. The wireless communication module 460 receives electromagnetic waves via the antenna 2, frequency modulates and filters the electromagnetic wave signals, and transmits the processed signals to the processor 410. The wireless communication module 460 may also receive a signal to be transmitted from the processor 410, frequency modulate it, amplify it, and convert it to electromagnetic waves for radiation via the antenna 2.
In some embodiments, antenna 1 and the mobile communication module 450 of the electronic device 500 are coupled, and antenna 2 and the wireless communication module 460 are coupled, such that the electronic device 500 may communicate with a network and other devices through wireless communication techniques. The wireless communication techniques may include the global system for mobile communications (global system for mobile communications, GSM), general packet radio service (general packet radio service, GPRS), code division multiple access (code division multiple access, CDMA), wideband code division multiple access (wideband code division multiple access, WCDMA), time-division code division multiple access (time-division code division multiple access, TD-SCDMA), long term evolution (long term evolution, LTE), BT, GNSS, WLAN, NFC, FM, and/or IR techniques, among others. The GNSS may include the global positioning system (global positioning system, GPS), the global navigation satellite system (global navigation satellite system, GLONASS), the BeiDou navigation satellite system (beidou navigation satellite system, BDS), the quasi-zenith satellite system (quasi-zenith satellite system, QZSS), and/or a satellite based augmentation system (satellite based augmentation systems, SBAS).
The electronic device 500 implements display functions through a GPU, a display screen 494, and an application processor, etc. The GPU is a microprocessor for image processing, and is connected to the display screen 494 and the application processor. The GPU is used to perform mathematical and geometric calculations for graphics rendering. Processor 410 may include one or more GPUs that execute program instructions to generate or change display information.
The display screen 494 is used to display images, videos, and the like. The display screen 494 includes a display panel. The display panel may employ a liquid crystal display (liquid crystal display, LCD), an organic light-emitting diode (organic light-emitting diode, OLED), an active-matrix organic light-emitting diode (active-matrix organic light-emitting diode, AMOLED), a flexible light-emitting diode (flex light-emitting diode, FLED), a mini LED, a micro LED, a micro-OLED, a quantum dot light-emitting diode (quantum dot light emitting diodes, QLED), or the like. In some embodiments, the electronic device 500 may include 1 or N display screens 494, N being a positive integer greater than 1.
Electronic device 500 may implement shooting functionality through an ISP, camera 493, video codec, GPU, display screen 494, and application processor, among others.
The ISP is used to process the data fed back by the camera 493. For example, when photographing, the shutter is opened, light is transmitted through the lens to the camera's photosensitive element, the optical signal is converted into an electrical signal, and the photosensitive element transmits the electrical signal to the ISP for processing, converting it into an image visible to the naked eye. The ISP can also optimize the noise, brightness, and skin color of the image, as well as parameters such as the exposure and color temperature of the shooting scene. In some embodiments, the ISP may be provided in the camera 493.
The camera 493 is used to capture still images or video. The object generates an optical image through the lens and projects it onto the photosensitive element. The photosensitive element may be a charge coupled device (charge coupled device, CCD) or a complementary metal-oxide-semiconductor (complementary metal-oxide-semiconductor, CMOS) phototransistor. The photosensitive element converts the optical signal into an electrical signal, which is then transferred to the ISP to be converted into a digital image signal. The ISP outputs the digital image signal to the DSP for processing. The DSP converts the digital image signal into an image signal in a standard format such as RGB or YUV. In some embodiments, the electronic device 500 may include 1 or N cameras 493, N being a positive integer greater than 1.
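The final DSP step above converts the digital image signal into a standard format such as RGB or YUV. As an illustration of what such a conversion involves, a single full-range BT.601 YUV sample can be mapped to RGB as follows; the coefficients are the standard BT.601 ones, not values taken from the patent, and the device's actual conversion is not specified here:

```python
# Illustrative YUV -> RGB conversion for one full-range BT.601 sample.

def yuv_to_rgb(y, u, v):
    """Convert one full-range BT.601 YUV sample to an 8-bit RGB triple."""
    r = y + 1.402 * (v - 128)
    g = y - 0.344136 * (u - 128) - 0.714136 * (v - 128)
    b = y + 1.772 * (u - 128)
    clamp = lambda x: max(0, min(255, round(x)))  # keep 8-bit range
    return clamp(r), clamp(g), clamp(b)

print(yuv_to_rgb(128, 128, 128))  # neutral grey -> (128, 128, 128)
```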
The digital signal processor is used for processing digital signals, and can process other digital signals besides digital image signals. For example, when the electronic device 500 is selecting a frequency bin, the digital signal processor is used to fourier transform the frequency bin energy, or the like.
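The Fourier transform of frequency-bin energy mentioned above can be illustrated with a minimal discrete Fourier transform in pure Python; this makes no assumption about the device's actual DSP routines:

```python
# Illustrative per-bin energy computation via a discrete Fourier transform.
import cmath

def bin_energies(samples):
    """Return |X[k]|^2 for each DFT bin k of the input signal."""
    n = len(samples)
    energies = []
    for k in range(n):
        acc = sum(samples[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                  for t in range(n))
        energies.append(abs(acc) ** 2)
    return energies

# A pure complex tone at bin 1 of a 4-sample frame concentrates all its
# energy in that bin, so the strongest bin identifies the frequency.
tone = [cmath.exp(2j * cmath.pi * t / 4) for t in range(4)]
e = bin_energies(tone)
print(max(range(4), key=lambda k: e[k]))  # -> 1
```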
Video codecs are used to compress or decompress digital video. The electronic device 500 may support one or more video codecs. Thus, the electronic device 500 may play or record video in a variety of encoding formats, such as moving picture experts group (moving picture experts group, MPEG) 1, MPEG2, MPEG3, MPEG4, and the like.
The NPU is a neural-network (NN) computing processor, and can rapidly process input information by referencing a biological neural network structure, for example, referencing a transmission mode between human brain neurons, and can also continuously perform self-learning. Applications such as intelligent recognition of the electronic device 500, for example, image recognition, face recognition, voice recognition, text understanding, etc., may be implemented through the NPU.
The external memory interface 420 may be used to connect an external memory card, such as a Micro SD card, to enable expansion of the memory capabilities of the electronic device 500. The external memory card communicates with the processor 410 through an external memory interface 420 to implement data storage functions. For example, files such as music, video, etc. are stored in an external memory card.
The internal memory 421 may be used to store computer-executable program code that includes instructions. The internal memory 421 may include a storage program area and a storage data area. The storage program area may store an application program (such as a sound playing function, an image playing function, etc.) required for at least one function of the operating system, etc. The storage data area may store data created during use of the electronic device 500 (e.g., audio data, phonebook, etc.), and so on. In addition, the internal memory 421 may include a high-speed random access memory, and may also include a nonvolatile memory, such as at least one magnetic disk storage device, a flash memory device, a universal flash memory (universal flash storage, UFS), and the like. The processor 410 performs various functional applications and data processing of the electronic device 500 by executing instructions stored in the internal memory 421 and/or instructions stored in a memory provided in the processor.
The electronic device 500 may implement audio functions, such as music playing and recording, through the audio module 470, the speaker 470A, the receiver 470B, the microphone 470C, the earphone interface 470D, the application processor, and the like.
The audio module 470 is used to convert digital audio information into an analog audio signal output and also to convert an analog audio input into a digital audio signal. The audio module 470 may also be used to encode and decode audio signals. In some embodiments, the audio module 470 may be disposed in the processor 410, or some functional modules of the audio module 470 may be disposed in the processor 410.
Speaker 470A, also referred to as a "horn," is used to convert audio electrical signals into sound signals. The electronic device 500 may listen to music, or to hands-free conversations, through the speaker 470A.
A receiver 470B, also referred to as an "earpiece," is used to convert the audio electrical signal into a sound signal. When the electronic device 500 is answering a telephone call or a voice message, voice may be heard by placing the receiver 470B close to the human ear.
The microphone 470C, also referred to as a "mic" or "mouthpiece," is used to convert sound signals into electrical signals. When making a call or sending voice information, the user can speak with the mouth close to the microphone 470C, inputting a sound signal into the microphone 470C. The electronic device 500 may be provided with at least one microphone 470C. In other embodiments, the electronic device 500 may be provided with two microphones 470C, which, in addition to collecting sound signals, may implement a noise reduction function. In still other embodiments, the electronic device 500 may be provided with three, four, or more microphones 470C to implement sound signal collection, noise reduction, sound source identification, directional recording functions, and the like.
The headphone interface 470D is used to connect wired headphones. The headphone interface 470D may be the USB interface 430, a 3.5 mm open mobile terminal platform (OMTP) standard interface, or a Cellular Telecommunications Industry Association of the USA (CTIA) standard interface.
The pressure sensor 480A is used to sense a pressure signal and can convert the pressure signal into an electrical signal. In some embodiments, the pressure sensor 480A may be disposed on the display screen 494. The pressure sensor 480A comes in various types, such as a resistive pressure sensor, an inductive pressure sensor, and a capacitive pressure sensor. A capacitive pressure sensor may comprise at least two parallel plates with conductive material. When a force is applied to the pressure sensor 480A, the capacitance between the electrodes changes, and the electronic device 500 determines the strength of the pressure from the change in capacitance. When a touch operation acts on the display screen 494, the electronic device 500 detects the intensity of the touch operation through the pressure sensor 480A. The electronic device 500 may also calculate the touch location based on the detection signal of the pressure sensor 480A. In some embodiments, touch operations that act on the same touch location but with different intensities may correspond to different operation instructions. For example, when a touch operation with an intensity smaller than a first pressure threshold acts on the short message application icon, an instruction to view the short message is executed; when a touch operation with an intensity greater than or equal to the first pressure threshold acts on the short message application icon, an instruction to create a new short message is executed.
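The intensity-dependent dispatch described above can be sketched as follows. This is an illustrative sketch only, not the patent's implementation; the threshold value, the normalized intensity scale, and the names `FIRST_PRESSURE_THRESHOLD` and `handle_touch` are all assumptions.

```python
# Hypothetical sketch: the same touch location maps to different instructions
# depending on whether the measured intensity crosses a first pressure threshold.
FIRST_PRESSURE_THRESHOLD = 0.5  # assumed normalized pressure scale

def handle_touch(icon: str, intensity: float) -> str:
    """Map a touch on the short-message icon to an instruction by intensity."""
    if icon != "messages":
        return "ignore"
    if intensity < FIRST_PRESSURE_THRESHOLD:
        return "view_message"        # light press: view the short message
    return "create_new_message"      # firm press: create a new short message
```

A usage example: a light tap (`intensity=0.2`) opens the message, while a hard press (`intensity=0.7`) starts a new one.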
The gyro sensor 480B may be used to determine the motion posture of the electronic device 500. In some embodiments, the angular velocity of the electronic device 500 about three axes (i.e., the x, y, and z axes) may be determined by the gyro sensor 480B. The gyro sensor 480B may be used for anti-shake during photographing. For example, when the shutter is pressed, the gyro sensor 480B detects the shake angle of the electronic device 500, calculates the distance the lens module needs to compensate according to the angle, and lets the lens counteract the shake of the electronic device 500 through reverse motion, thereby implementing anti-shake. The gyro sensor 480B may also be used for navigation and motion-sensing game scenarios.
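The compensation step above (shake angle in, lens offset out) can be illustrated with a simple geometric model. This is a sketch under assumptions: the small-angle relation offset ≈ f·tan θ, the millimeter units, and the function name are all illustrative, not taken from the patent.

```python
import math

def lens_compensation_mm(shake_angle_deg: float, focal_length_mm: float) -> float:
    """Return the lens offset (mm) that counteracts the detected shake angle.

    Assumed model: the image shifts by roughly f * tan(theta) for a shake of
    theta, so the lens moves by the same amount in the opposite direction.
    """
    offset = focal_length_mm * math.tan(math.radians(shake_angle_deg))
    return -offset  # negative sign: reverse motion counteracts the shake
```

For a 26 mm-equivalent lens and a 1° shake, the sketch yields a compensation of roughly half a millimeter in the opposite direction.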
The air pressure sensor 480C is used to measure air pressure. In some embodiments, the electronic device 500 calculates the altitude from the barometric pressure value measured by the air pressure sensor 480C to assist positioning and navigation.
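The altitude calculation mentioned above is commonly done with the standard-atmosphere barometric formula; the patent does not specify a formula, so the one below is an assumption, as are the constants and names.

```python
SEA_LEVEL_HPA = 1013.25  # assumed standard sea-level pressure

def altitude_m(pressure_hpa: float, sea_level_hpa: float = SEA_LEVEL_HPA) -> float:
    """Estimate altitude (m) from measured pressure via the standard
    barometric formula h = 44330 * (1 - (p / p0) ** (1 / 5.255))."""
    return 44330.0 * (1.0 - (pressure_hpa / sea_level_hpa) ** (1.0 / 5.255))
```

At a measured pressure of 900 hPa this sketch yields an altitude of roughly 1 km, which is the kind of value a positioning aid would consume.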
The magnetic sensor 480D includes a Hall sensor. The electronic device 500 may detect the opening and closing of a flip holster using the magnetic sensor 480D. In some embodiments, when the electronic device 500 is a flip phone, the electronic device 500 may detect the opening and closing of the flip cover according to the magnetic sensor 480D. Features such as automatic unlocking upon flip opening can then be set according to the detected open or closed state of the holster or flip cover.
The acceleration sensor 480E may detect the magnitude of acceleration of the electronic device 500 in various directions (typically along three axes). The magnitude and direction of gravity may be detected when the electronic device 500 is stationary. It may also be used to recognize the posture of the electronic device, and is applied to landscape/portrait screen switching, pedometers, and other applications.
The distance sensor 480F is used to measure distance. The electronic device 500 may measure distance by infrared or laser. In some embodiments, the electronic device 500 may use the distance sensor 480F to measure distance to achieve fast focusing.
The proximity light sensor 480G may include, for example, a light-emitting diode (LED) and a light detector, such as a photodiode. The light-emitting diode may be an infrared light-emitting diode. The electronic device 500 emits infrared light outward through the light-emitting diode and uses the photodiode to detect infrared light reflected from nearby objects. When sufficient reflected light is detected, it may be determined that an object is near the electronic device 500; when insufficient reflected light is detected, the electronic device 500 may determine that there is no object nearby. The electronic device 500 may use the proximity light sensor 480G to detect that the user is holding the electronic device 500 close to the ear, so as to automatically turn off the screen to save power. The proximity light sensor 480G may also be used in holster mode and pocket mode to automatically unlock and lock the screen.
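The near/far decision and the ear-detection screen-off behavior described above amount to a threshold test plus a state check. The sketch below is illustrative; the threshold value, ADC-count units, and function names are assumptions, not part of the patent.

```python
REFLECTANCE_THRESHOLD = 100  # assumed ADC counts of reflected IR

def object_nearby(reflected_light: int) -> bool:
    """Sufficient reflected IR means an object is near the device."""
    return reflected_light >= REFLECTANCE_THRESHOLD

def screen_should_turn_off(proximity_near: bool, in_call: bool) -> bool:
    # Turn the screen off only when the device is held to the ear during a call.
    return in_call and proximity_near
```

In this model a strong reflection during a call turns the screen off, while the same reflection outside a call (e.g., a finger hovering over the sensor) leaves it on.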
The ambient light sensor 480L is used to sense ambient light level. The electronic device 500 may adaptively adjust the brightness of the display screen 494 based on the perceived ambient light level. The ambient light sensor 480L may also be used to automatically adjust white balance during photographing. Ambient light sensor 480L may also cooperate with proximity light sensor 480G to detect whether electronic device 500 is in a pocket to prevent false touches.
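The adaptive brightness adjustment above can be sketched as a clamped mapping from ambient illuminance to a brightness level. The linear curve, the lux saturation point, and the 10–255 brightness range are all illustrative assumptions; real devices use tuned, often nonlinear, curves.

```python
def display_brightness(ambient_lux: float, min_b: int = 10, max_b: int = 255) -> int:
    """Map perceived ambient light to a display brightness level (linear sketch)."""
    FULL_BRIGHT_LUX = 1000.0  # assumed illuminance at which brightness saturates
    frac = min(max(ambient_lux / FULL_BRIGHT_LUX, 0.0), 1.0)  # clamp to [0, 1]
    return int(min_b + frac * (max_b - min_b))
```

Darkness maps to the minimum level, and anything at or above the saturation point maps to full brightness.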
The fingerprint sensor 480H is used to collect fingerprints. The electronic device 500 may use the collected fingerprint features to implement fingerprint unlocking, application-lock access, fingerprint-based photographing, fingerprint-based call answering, and the like.
The temperature sensor 480J is used to detect temperature. In some embodiments, the electronic device 500 executes a temperature processing strategy using the temperature detected by the temperature sensor 480J. For example, when the temperature reported by the temperature sensor 480J exceeds a threshold, the electronic device 500 reduces the performance of a processor located near the temperature sensor 480J in order to lower power consumption and implement thermal protection. In other embodiments, when the temperature is below another threshold, the electronic device 500 heats the battery 442 to prevent an abnormal shutdown of the electronic device 500 caused by low temperature. In still other embodiments, when the temperature is below a further threshold, the electronic device 500 boosts the output voltage of the battery 442 to prevent an abnormal shutdown caused by low temperature.
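The three-threshold strategy above (throttle when hot, heat the battery when cold, boost the voltage when colder still) can be sketched as a simple policy function. All threshold values and names below are assumptions for illustration; the patent does not specify them.

```python
def thermal_policy(temp_c: float) -> str:
    """Pick an action from the temperature thresholds described above
    (threshold values assumed for illustration)."""
    HIGH, LOW_HEAT, LOW_BOOST = 45.0, 0.0, -10.0  # illustrative thresholds, in Celsius
    if temp_c > HIGH:
        return "throttle_cpu"            # reduce nearby processor performance
    if temp_c < LOW_BOOST:
        return "boost_battery_voltage"   # avoid low-temperature abnormal shutdown
    if temp_c < LOW_HEAT:
        return "heat_battery"
    return "normal"
```

Note the ordering: the coldest threshold is checked before the milder one, so a reading of -20 °C triggers the voltage boost rather than mere battery heating.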
The touch sensor 480K is also referred to as a "touch device." The touch sensor 480K may be disposed on the display screen 494; the touch sensor 480K and the display screen 494 form a touchscreen, also called a "touch panel." The touch sensor 480K is used to detect a touch operation acting on or near it. The touch sensor may pass the detected touch operation to the application processor to determine the type of touch event. Visual output related to the touch operation may be provided through the display screen 494. In other embodiments, the touch sensor 480K may also be disposed on a surface of the electronic device 500 at a location different from that of the display screen 494.
The bone conduction sensor 480M may acquire vibration signals. In some embodiments, the bone conduction sensor 480M may acquire the vibration signal of the vibrating bone mass of the human vocal part. The bone conduction sensor 480M may also contact the human pulse to receive the blood pressure pulsation signal. In some embodiments, the bone conduction sensor 480M may also be provided in a headset, combined into a bone conduction headset. The audio module 470 may parse out a voice signal based on the vibration signal of the vibrating bone mass of the vocal part acquired by the bone conduction sensor 480M, thereby implementing a voice function. The application processor may parse out heart rate information based on the blood pressure pulsation signal acquired by the bone conduction sensor 480M, thereby implementing a heart rate detection function.

The keys 490 include a power key, volume keys, and the like. The keys 490 may be mechanical keys or touch keys. The electronic device 500 may receive key inputs and generate key signal inputs related to user settings and function control of the electronic device 500.
The motor 491 may generate a vibration cue. The motor 491 may be used for incoming-call vibration alerts as well as touch vibration feedback. For example, touch operations acting on different applications (e.g., photographing, audio playing, etc.) may correspond to different vibration feedback effects. Touch operations applied to different areas of the display screen 494 may also correspond to different vibration feedback effects of the motor 491. Different application scenarios (such as time reminders, receiving information, alarm clocks, games, etc.) may also correspond to different vibration feedback effects. The touch vibration feedback effect may also support customization.
The indicator 492 may be an indicator light, which may be used to indicate a state of charge, a change in charge, an indication message, a missed call, a notification, or the like.
The SIM card interface 495 is used to connect a SIM card. A SIM card may be inserted into the SIM card interface 495 or removed from the SIM card interface 495 to make contact with or separate from the electronic device 500. The electronic device 500 may support 1 or N SIM card interfaces, where N is a positive integer greater than 1. The SIM card interface 495 may support Nano SIM cards, Micro SIM cards, and the like. Multiple cards may be inserted into the same SIM card interface 495 simultaneously; the types of the multiple cards may be the same or different. The SIM card interface 495 may also be compatible with different types of SIM cards, and with external memory cards. The electronic device 500 interacts with the network through the SIM card to implement functions such as calls and data communication. In some embodiments, the electronic device 500 employs an eSIM, i.e., an embedded SIM card. The eSIM card can be embedded in the electronic device 500 and cannot be separated from the electronic device 500.
An embodiment of the present application also provides a computer storage medium. The computer storage medium may store a program, and when executed, the program may control a device in which the computer-readable storage medium is located to perform some or all of the steps in the above embodiments. The storage medium may be a magnetic disk, an optical disk, a read-only memory (ROM), a random access memory (RAM), or the like.
Embodiments of the present application also provide a computer program product containing executable instructions which, when executed on a computer, cause the computer to perform some or all of the steps of the method embodiments described above.
In the embodiments of the present application, "at least one" means one or more, and "a plurality" means two or more. "And/or" describes an association relation between associated objects and indicates that three kinds of relations may exist; for example, A and/or B may indicate: A alone, both A and B, or B alone, where A and B may be singular or plural. The character "/" generally indicates that the associated objects are in an "or" relationship. "At least one of the following" and similar expressions mean any combination of these items, including any combination of single items or plural items. For example, at least one of a, b, and c may represent: a, b, c, a-b, a-c, b-c, or a-b-c, where a, b, and c may each be single or plural.
Those of ordinary skill in the art will appreciate that the various units and algorithm steps described in the embodiments disclosed herein can be implemented as electronic hardware, or as a combination of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and the design constraints imposed on the technical solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
It will be clear to those skilled in the art that, for convenience and brevity of description, specific working procedures of the above-described systems, apparatuses and units may refer to corresponding procedures in the foregoing method embodiments, and are not repeated herein.
In the several embodiments provided by the present invention, any of the functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present invention, in essence, or the part contributing to the prior art, or a part of the technical solution, may be embodied in the form of a software product stored in a storage medium, comprising several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to perform all or part of the steps of the method according to the embodiments of the present invention. The storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
The foregoing is merely exemplary embodiments of the present invention; any changes or substitutions that a person skilled in the art could readily conceive of within the technical scope disclosed by the present invention shall be covered by the present invention. The protection scope of the present invention shall be subject to the protection scope of the claims.
Claims (28)
Applications Claiming Priority (3)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN2022101139872 | 2022-01-30 | ||
| CN202210113987 | 2022-01-30 | ||
| CN202210501226.4A CN116582754B (en) | 2022-01-30 | 2022-05-09 | Shooting method, device, storage medium and program product of multi-camera shooting system |
Related Parent Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN202210501226.4A Division CN116582754B (en) | 2022-01-30 | 2022-05-09 | Shooting method, device, storage medium and program product of multi-camera shooting system |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| CN120786187A true CN120786187A (en) | 2025-10-14 |
Family
ID=87470575
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN202510762426.9A Pending CN120786187A (en) | 2022-01-30 | 2022-05-09 | Shooting method, equipment, storage medium and program product for multi-camera shooting system |
Country Status (2)
| Country | Link |
|---|---|
| CN (1) | CN120786187A (en) |
| WO (1) | WO2023142959A1 (en) |
Families Citing this family (1)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN118540575B (en) * | 2024-07-24 | 2024-12-06 | 荣耀终端有限公司 | Same-camera shooting method, electronic device, storage medium and program product |
Family Cites Families (7)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US8451312B2 (en) * | 2010-01-06 | 2013-05-28 | Apple Inc. | Automatic video stream selection |
| CN105872570A (en) * | 2015-12-11 | 2016-08-17 | 乐视网信息技术(北京)股份有限公司 | Method and apparatus for implementing multi-camera video synchronous playing |
| WO2018000227A1 (en) * | 2016-06-29 | 2018-01-04 | 北京小米移动软件有限公司 | Video broadcast method and device |
| CN106791485B (en) * | 2016-11-16 | 2020-02-07 | 深圳市异度信息产业有限公司 | Video switching method and device |
| CN106802759A (en) * | 2016-12-21 | 2017-06-06 | 华为技术有限公司 | The method and terminal device of video playback |
| CN107071329A (en) * | 2017-02-27 | 2017-08-18 | 努比亚技术有限公司 | The method and device of automatic switchover camera in video call process |
| CN113596319A (en) * | 2021-06-16 | 2021-11-02 | 荣耀终端有限公司 | Picture-in-picture based image processing method, apparatus, storage medium, and program product |
- 2022
  - 2022-05-09: CN application CN202510762426.9A filed (published as CN120786187A), status Pending
- 2023
  - 2023-01-06: WO application PCT/CN2023/070822 filed (published as WO2023142959A1), status Ceased
Also Published As
| Publication number | Publication date |
|---|---|
| WO2023142959A1 (en) | 2023-08-03 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| US20230396886A1 (en) | Multi-channel video recording method and device | |
| CN112333380B (en) | Shooting method and equipment | |
| CN113535284B (en) | Full-screen display methods, devices and electronic devices | |
| CN118051111A (en) | A high energy efficiency display processing method and device | |
| CN118192856B (en) | Display screen window switching method and electronic equipment | |
| WO2021013106A1 (en) | Foldable screen illumination method and apparatus | |
| CN114339429A (en) | Audio and video playing control method, electronic equipment and storage medium | |
| CN116582754B (en) | Shooting method, device, storage medium and program product of multi-camera shooting system | |
| CN113867520B (en) | Device control method, electronic device, and computer-readable storage medium | |
| CN114302063B (en) | Shooting method and equipment | |
| CN120786187A (en) | Shooting method, equipment, storage medium and program product for multi-camera shooting system | |
| CN116782023A (en) | A photographing method and electronic device | |
| CN116582743B (en) | Shooting method, electronic equipment and medium | |
| CN116528337B (en) | Business collaboration method, electronic device, readable storage medium, and chip system | |
| CN115762108B (en) | Remote control method, remote control device, and controlled device | |
| CN116437194B (en) | Method, apparatus and readable storage medium for displaying preview image | |
| WO2023071497A1 (en) | Photographing parameter adjusting method, electronic device, and storage medium | |
| CN117319369A (en) | Document delivery methods, electronic equipment and storage media | |
| CN116820222A (en) | Method and device for preventing false touch | |
| CN114579900A (en) | Cross-device page switching method, electronic device and storage medium | |
| CN114691066B (en) | Application display method and electronic equipment | |
| US12549860B2 (en) | Multi-channel video recording method and device | |
| CN114520870B (en) | A display method and terminal | |
| CN118317205B (en) | Image processing method and terminal device | |
| CN110531864B (en) | A gesture interaction method, device and terminal equipment |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| PB01 | Publication | ||
| SE01 | Entry into force of request for substantive examination | ||