CN120640124A - Interaction method, device and electronic device - Google Patents
Info
- Publication number
- CN120640124A (application CN202511004107.8A)
- Authority
- CN
- China
- Prior art keywords
- input
- shooting
- image
- additional lens
- video
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
- H04N23/63—Control of cameras or camera modules by using electronic viewfinders
- H04N23/631—Graphical user interfaces [GUI] specially adapted for controlling image capture or setting capture parameters
- H04N23/632—Graphical user interfaces [GUI] specially adapted for controlling image capture or setting capture parameters for displaying or modifying preview images prior to image capturing, e.g. variety of image resolutions or capturing parameters
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/80—Camera processing pipelines; Components thereof
- H04N23/81—Camera processing pipelines; Components thereof for suppressing or minimising disturbance in the image signal generation
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/95—Computational photography systems, e.g. light-field imaging systems
- H04N23/951—Computational photography systems, e.g. light-field imaging systems by using two or more images to influence resolution, frame rate or aspect ratio
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Computing Systems (AREA)
- Theoretical Computer Science (AREA)
- Human Computer Interaction (AREA)
- Studio Devices (AREA)
Abstract
The application discloses an interaction method, an interaction device and an electronic device, and belongs to the technical field of camera shooting. The method includes: displaying a shooting preview interface corresponding to a camera of the electronic device; receiving a first input to the shooting preview interface; and, in response to the first input, displaying part of the shooting objects in the shooting preview interface in an enlarged manner, where the image corresponding to the enlarged part of the shooting objects is obtained based on an image acquired by an additional lens.
Description
Technical Field
The application belongs to the technical field of camera shooting, and particularly relates to an interaction method, an interaction device and electronic equipment.
Background
With the continuous progress of imaging technology, telephoto lenses are used far more frequently in shooting scenes, and the requirements on high-magnification preview quality and final image sharpness are becoming more stringent. Thanks to its external optical path structure, an additional lens can effectively improve high-magnification preview sharpness and final image quality, enhance the rendering of subject detail, and offers clear advantages in improving the user's shooting experience and the product's market competitiveness.
However, while optimizing high-magnification preview sharpness and the presentation of subject details, an additional lens inevitably introduces a loss of field of view. When the user wants to keep the complete scene composition while also highlighting subject details, this limitation becomes prominent, the application scenarios of the additional lens are restricted, and the user's demand for high-quality images cannot be met effectively.
In short, current solutions cannot guarantee subject detail while also covering the complete shooting scene.
Disclosure of Invention
The embodiments of the application aim to provide an interaction method, an interaction device and an electronic device, which can solve the problem that subject detail and the complete shooting scene cannot currently be obtained at the same time.
In a first aspect, an embodiment of the present application provides an interaction method performed by an electronic device, where the electronic device is connected to an additional lens, the method including:
displaying a shooting preview interface corresponding to a camera of the electronic equipment;
receiving a first input to a shooting preview interface;
and in response to the first input, magnifying and displaying part of the shooting objects in the shooting preview interface, wherein the image corresponding to the magnified and displayed part of the shooting objects is obtained based on the image acquired by the additional lens.
In a second aspect, an embodiment of the present application provides an interaction device, where the interaction device is connected to an additional lens, and the interaction device includes:
the display module is used for displaying a shooting preview interface corresponding to a camera of the device;
The receiving module is used for receiving a first input of the shooting preview interface;
And the display module is also used for responding to the first input and displaying part of shooting objects in the shooting preview interface in an enlarged mode, wherein the image corresponding to the part of shooting objects in the enlarged mode is obtained based on the image acquired by the additional lens.
In a third aspect, an embodiment of the present application provides an electronic device comprising a processor and a memory storing a program or instructions executable on the processor, which when executed by the processor, implement the steps of the method as described in the first aspect.
In a fourth aspect, embodiments of the present application provide a readable storage medium having stored thereon a program or instructions which when executed by a processor perform the steps of the method according to the first aspect.
In a fifth aspect, an embodiment of the present application provides a chip, where the chip includes a processor and a communication interface, where the communication interface is coupled to the processor, and where the processor is configured to execute a program or instructions to implement a method according to the first aspect.
In a sixth aspect, embodiments of the present application provide a computer program product stored in a storage medium, the program product being executable by at least one processor to implement the method according to the first aspect.
In the embodiments of the application, a shooting preview interface corresponding to a camera of the electronic device is displayed, and part of the shooting objects in the shooting preview interface are displayed in an enlarged manner in response to a first input to the shooting preview interface, where the enlarged image of the part of the shooting objects is obtained based on an image acquired by the additional lens. The user can therefore see, directly on the preview interface, the sharp result of optically magnifying the part of the shooting objects through the additional lens. Because the optical zoom capability of the additional lens is applied to preview interaction, the sharpness and detail rendering are superior to digital enlargement that relies solely on the main camera image. Subject detail of the part of the shooting objects is thus guaranteed while the complete shooting scene is preserved.
Drawings
FIG. 1 is a flow chart of an interaction method provided by an embodiment of the present application;
fig. 2 is a schematic diagram of a main area in a first frame according to an embodiment of the present application;
FIG. 3 is a schematic diagram of a function icon of an additional lens according to an embodiment of the present application;
FIG. 4 is a flow chart of an interaction method provided by an embodiment of the present application;
FIG. 5 is a schematic diagram of a partial frame in a second frame according to an embodiment of the present application;
FIG. 6 is a schematic diagram of a first frame according to an embodiment of the present application;
FIG. 7 is a schematic diagram of another first frame according to an embodiment of the present application;
FIG. 8 is a schematic diagram of an input for adjusting magnification according to an embodiment of the present application;
FIG. 9 is a schematic diagram of another magnification adjustment input provided by an embodiment of the present application;
FIG. 10 is a schematic view of still another magnification adjustment input provided by an embodiment of the present application;
FIG. 11 is an interface schematic diagram of a filter library according to an embodiment of the present application;
FIG. 12 is a schematic diagram of an interface of a shape library provided by an embodiment of the present application;
FIG. 13 is an interface schematic diagram of a special effects library provided by an embodiment of the present application;
Fig. 14 is a schematic view of a dynamic image provided by an embodiment of the present application;
fig. 15 is a schematic view of another dynamic image provided by an embodiment of the present application;
fig. 16 is a schematic view of still another dynamic image provided by an embodiment of the present application;
FIG. 17 is a schematic diagram of a video processing interface according to an embodiment of the present application;
FIG. 18 is a block diagram of an interactive device according to an embodiment of the present application;
FIG. 19 is one of the schematic structural diagrams of the electronic device according to the embodiment of the present application;
fig. 20 is a second schematic diagram of a hardware structure of an electronic device according to an embodiment of the application.
Detailed Description
The technical solutions of the embodiments of the present application will be described clearly below with reference to the accompanying drawings. It is apparent that the described embodiments are only some, not all, of the embodiments of the present application. All other embodiments obtained by a person skilled in the art based on the embodiments of the present application fall within the scope of protection of the present application.
The terms "first", "second" and the like in the description of the present application are used to distinguish between similar objects and do not necessarily describe a particular sequence or chronological order. It is to be understood that the terms so used may be interchanged where appropriate, so that the embodiments of the present application can be implemented in orders other than those illustrated or described herein. Objects identified by "first", "second", etc. are generally of one type, and the number of objects is not limited; for example, the first object may be one or more. In addition, "and/or" in the specification denotes at least one of the connected objects, and the character "/" generally indicates an "or" relationship between the associated objects.
The interaction method provided by the embodiment of the application can be at least applied to the following application scenes, and the following explanation is provided.
Currently, in concert scenes, users often face the dilemma that the shooting equipment can hardly capture the stage panorama and a clear image of the singer at the same time. By lengthening the optical path and increasing the optical zoom factor, an additional lens can significantly improve subject detail capture and present clear pictures of the singer's facial expression, clothing texture and the like in high-magnification preview and in the final image. However, due to its optical characteristics, the additional lens greatly compresses the field of view (FOV) while increasing the magnification. When a user shoots a concert with the additional lens, even if the lens is aimed at the stage, only a local area of the stage can be captured; the whole stage setting, lighting effects and dance troupe can hardly be accommodated, the composition is incomplete, and the macro scene of the concert cannot be presented.
A conventional camera built into the electronic device can capture a wider field of view and record the whole stage, but its optical zoom capability is limited. When a distant singer is forcibly zoomed in, enlarging the picture by digital zoom causes pixel interpolation, which reduces resolution, increases noise, blurs details such as the singer's face and limbs, and may even produce mosaic artifacts, so the user's demand for high-quality images cannot be met. These limitations of the additional lens and of the built-in camera in concert shooting force the user to choose between recording the panorama and shooting the subject, seriously affecting the shooting experience and video quality.
To address these problems of the related art, the embodiments of the application provide an interaction method, an interaction device and an electronic device, which can solve the problem that subject detail and the complete shooting scene cannot be obtained at the same time.
The following describes in detail the interaction method provided by the embodiment of the present application through specific embodiments and application scenarios thereof with reference to the accompanying drawings.
Fig. 1 is a flowchart of an interaction method according to an embodiment of the present application.
As shown in fig. 1, the interaction method may include steps 110 to 130. The method is executed by an electronic device connected to an additional lens and is applied to an interaction apparatus, as follows:
Step 110, displaying a shooting preview interface corresponding to a camera of the electronic equipment;
The shooting preview interface is a graphical user interface that displays, in real time on the screen of the electronic device, the picture captured by the camera, and lets the user compose the shot and adjust parameters before taking a photo or video.
Step 120, receiving a first input to the shooting preview interface;
The first input is an interactive action performed by a user on a shooting preview interface of the electronic device, such as a single-touch click, a long press or a double-finger pinch gesture of a specific area on a preview screen, so as to indicate that the user wants to enlarge a part of shooting objects in the displayed shooting preview interface.
As shown in fig. 2, a first input to a capture preview interface is received.
The electronic device may render the shot preview interface in real time. After the user observes the preview screen, if a clearer view of certain specific details in the screen is desired, a first input may be performed.
And 130, in response to the first input, magnifying and displaying part of the shooting objects in the shooting preview interface, wherein the magnified and displayed image corresponding to the part of the shooting objects is obtained based on the image acquired by the additional lens.
And the partial shooting object refers to a local area of the picture, which is designated by a user through a first input in a shooting preview interface and needs to be emphasized or enlarged. Part of the shot object typically contains a body or detail of interest to the user, such as a human face, a distant scene, or a tiny object texture.
The additional lens is an external optical lens accessory physically attached in front of the main camera of the electronic device. Its optical characteristics usually differ from those of the camera built into the device, and it is often used to provide optical zoom, wide-angle, macro or special-effect capabilities so as to extend the device's native shooting ability. It does not interfere with the hardware operation of the device's camera; it changes the imaging result purely through optical superposition.
After the additional lens is installed, the camera of the electronic device can still capture pictures normally. The additional lens first processes the light through optical refraction and then guides it into the camera, acting as a front optical component of the original lens: ambient light is first refracted by the additional lens, which changes the light path, then enters the camera of the electronic device, where photoelectric conversion and image acquisition are finally completed by the image sensor.
The optical center of the additional lens coincides with the lens center of the camera of the electronic device, i.e. the optical axes are coaxial, which ensures that light refracted by the additional lens passes accurately through the camera's lens group and is focused onto the sensor.
Displaying part of the shooting objects in the shooting preview interface in an enlarged manner means displaying the image content corresponding to that part on the screen at a larger size and a higher visual resolution than in the original preview picture. The enlarged image of the part of the shooting objects is obtained based on the image acquired by the additional lens; that is, the clear image data finally enlarged on the screen depends on the optical capture and sensor imaging of the additional lens. The enlargement therefore differs from simple pixel stretching and provides clearer details.
In response to the first input, the position coordinates of the first input are parsed precisely to determine the display region of the part of the shooting objects that the user wants to enlarge. For that display region, an image processing algorithm extracts the high-quality image block corresponding to the part of the shooting objects from the image information collected by the additional lens.
The electronic device then dynamically enlarges and renders the extracted high-quality image region on the shooting preview interface, replacing or overlaying the original preview content of the part of the shooting objects. Because the enlarged view is derived directly from the optical imaging of the additional lens, it offers better sharpness and detail than digitally enlarging the image from the device's own camera.
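As a minimal illustration of how the first input could be turned into a region to be enlarged, the following Python sketch maps a tap position to a clamped main body box and a default magnification. The box size, frame dimensions and default scale are illustrative assumptions, not values specified by this application.

```python
def subject_region_from_tap(tap_x, tap_y, frame_w, frame_h,
                            box_w=320, box_h=320, default_scale=2.0):
    """Turn a first input (tap) into a main body box clamped to the preview frame.

    box_w, box_h and default_scale are illustrative defaults only; the text
    later fixes the default magnification at 2x but not the box size.
    """
    # Centre the box on the tap, then clamp it so it stays inside the frame.
    x0 = min(max(tap_x - box_w // 2, 0), frame_w - box_w)
    y0 = min(max(tap_y - box_h // 2, 0), frame_h - box_h)
    return {"box": (x0, y0, box_w, box_h), "scale": default_scale}

# Example: a tap near the right edge of a 1920x1080 preview.
print(subject_region_from_tap(1900, 500, 1920, 1080))
# {'box': (1600, 340, 320, 320), 'scale': 2.0}
```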
Therefore, before formal shooting, the user can directly see on the preview interface the sharp result of optically magnifying the part of the shooting objects through the additional lens. Applying the optical zoom capability of the additional lens to preview interaction avoids the inconvenience of seeing only a blurry digital enlargement during preview and having to wait until shooting is complete to see the optical result. Since the target details can be clearly magnified and observed at the preview stage, the user can compose more accurately, confirm whether the subject is in the ideal position, select a focus point manually or check the autofocus result more precisely, which significantly improves the success rate of shooting, especially in scenes that require fine operation such as telephoto or macro shooting.
The optical magnification capability of the additional lens is thus fully exploited: before the user decides to press the shutter, the magnified preview already shows the real optical image quality that the additional lens will finally capture, eliminating the gap between preview and final image. For photography enthusiasts or professional users, this provides an experience similar to inspecting the magnified focus area in the optical or electronic viewfinder of a professional camera, enables more professional preview operations on a mobile device, and increases confidence and control when shooting with an additional lens.
In a possible embodiment, in a case that the additional lens is operatively connected to the electronic device, displaying a function icon of the additional lens on the photographing preview interface;
Receiving a second input of a function icon for the additional lens;
And controlling the additional lens to acquire an image in response to the second input.
Being operatively connected means that the electronic device can recognize the additional lens and use its functions normally.
The function icon is an icon displayed on the shooting interface of the electronic device that represents a function of the additional lens; by tapping the icon the user makes the additional lens start acquiring pictures.
The physical connection state between the additional lens and the electronic device is detected in real time. When an effective connection is detected, the function icon of the additional lens is displayed on the shooting interface to indicate to the user that the additional lens is available. When the user taps the function icon, the additional lens is controlled to acquire images.
As shown in fig. 3, in the photographing preview mode, when the user selects 0.6X magnification, the electronic device invokes the wide-angle camera to collect an image in real time. At this time, the wide-angle camera of the electronic device is in a working state with the largest angle of view and the shortest focal length, and can completely reserve the panoramic scene and atmosphere of the concert scene, but the optical characteristics of the camera of the electronic device limit that the presentation capability of main body details such as stars on a stage is insufficient, so that only the main body outline can be displayed.
When the additional lens is physically connected to the electronic device and recognized as effectively connected, a magnifier-shaped function icon 212 is displayed on the shooting mode interface. If the electronic device is currently shooting with a short- or medium-focal-length module, such as the main or wide-angle camera, the interface prompts the user to tap the icon above to enable the ultra-clear magnified display function of the additional lens.
As shown in fig. 4, after the user taps the function icon 212, the icon is highlighted. In response to this operation, the additional lens is controlled to start working, begins collecting image data in real time, and saves the collected images in a buffer frame, while the preview interface still displays the picture captured by the short- or medium-focal-length module. If the additional lens is not effectively connected to the electronic device, the magnifier-shaped function icon is automatically hidden to avoid accidental operation.
After the user carries the electronic equipment provided with the additional lens to arrive at the concert site and correctly installs the additional lens on the electronic equipment, the electronic equipment detects that the physical connection state of the additional lens is an effective connection state, and functional icons of the additional lens are displayed on a shooting interface. The user wants to shoot high-definition details of singers on the stage, clicks a functional icon of the selected additional lens, and the electronic equipment controls the additional lens to acquire images, so that an ideal concert shooting result is finally obtained.
Therefore, displaying the function icon on the shooting interface clearly indicates to the user that the additional lens is available and simplifies the operation flow for enabling it. The user can quickly and intuitively control the additional lens to acquire pictures and conveniently switch between different display modes, which improves the efficiency and fluency of the shooting operation, enhances the experience of shooting with the additional lens, gives full play to its functional advantages, and improves the quality of the shooting result.
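A minimal sketch of the icon-visibility and second-input logic described above is shown below. The `camera` object and its methods `is_addon_attached()`, `start_addon_capture()` are hypothetical stand-ins for the real driver interface, not an actual device API.

```python
class AddonLensUI:
    """Sketch of the function-icon logic: show the icon only while the
    additional lens is attached, and start buffered capture on the second input."""

    def __init__(self, camera):
        self.camera = camera          # hypothetical driver object
        self.icon_visible = False
        self.addon_active = False

    def poll_connection(self):
        # Show the function icon only while the additional lens is attached;
        # hide it otherwise to avoid accidental taps.
        self.icon_visible = self.camera.is_addon_attached()
        if not self.icon_visible:
            self.addon_active = False

    def on_icon_tapped(self):
        # Second input: highlight the icon state and start collecting
        # additional-lens frames into a buffer; the preview keeps showing
        # the built-in camera until a main body region is selected.
        if not self.icon_visible:
            return
        self.addon_active = True
        self.camera.start_addon_capture(buffered=True)
```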
In a possible embodiment, in step 130, the following steps may be specifically included:
Step 410: in the case where the image collected by the additional lens includes a main body image corresponding to the main body region, crop, from the image collected by the additional lens, the image corresponding to the part of the shooting objects in the main body region;
Step 420: resize the cropped image corresponding to the part of the shooting objects acquired by the additional lens to obtain an adjusted local image, and, according to the position information of the main body region in the shooting preview interface, overlay the adjusted local image on the main body region of the shooting preview interface.
And the local image is an image part which is cut out from the image acquired by the additional lens and corresponds to the main body area, and comprises high-definition details of the main body.
When the image acquired by the additional lens includes the main body image corresponding to the main body region, the image acquired by the additional lens is first cropped precisely according to the position and size of the main body region in the shooting preview interface, yielding a local image that contains the high-definition details of the main body. Then, according to the position information of the main body region in the shooting preview interface, the cropped local image is overlaid precisely on the corresponding main body region, and the stitching traces are removed by image fusion, so that both the panoramic scene and the high-definition details of the main body are retained.
Illustratively, in a concert, a user photographs using an electronic device equipped with an additional lens. The electronic equipment simultaneously displays a shooting preview interface, the shooting preview interface completely presents gorgeous stage panorama, and the singer in the passion of the center of the stage is clearly captured by the image acquired by the additional lens. The user selects the singer as a main body area in the shooting preview interface, detects that the main body image of the singer is contained in the image acquired by the additional lens, cuts out a local high-definition image of the singer from the image acquired by the additional lens, and precisely covers and displays the local high-definition image on the singer position of the shooting preview interface, so that not only is the macro stage scene of the concert reserved, but also details of the singer are clearly visible.
According to the position information of the main body region in the shooting preview interface, a scale parameter is computed between the size of the local image acquired by the additional lens and the size of the region to be filled in the shooting preview interface. The local image is then enlarged or reduced according to this scale parameter so that the adjusted local image exactly fits the region of the shooting preview interface from which the original content was removed.
As shown in fig. 5, the adjusted local image 214 is accurately overlaid on the main body area of the shooting preview interface, filling the region that was removed for image fusion. This achieves precise fusion of the main body details collected by the additional lens with the wide-angle panoramic image, preserving the panoramic atmosphere while enhancing the presentation of main body details.
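A minimal NumPy sketch of steps 410 and 420 is given below. It assumes the additional-lens patch has already been registered and cropped to the same main body that the box describes, and uses a simple nearest-neighbour resize in place of the device's real scaling and fusion pipeline.

```python
import numpy as np

def fuse_subject(preview, addon_patch, subject_box):
    """Overlay a high-detail additional-lens patch onto the wide-angle preview.

    preview:     HxWx3 uint8 wide-angle preview frame.
    addon_patch: already-cropped main body image from the additional lens
                 (assumed registered to the same subject as subject_box).
    subject_box: (x, y, w, h) of the main body region in preview coordinates.
    """
    x, y, w, h = subject_box
    # Scale parameter: ratio between the patch size and the region to fill.
    src_h, src_w = addon_patch.shape[:2]
    rows = np.arange(h) * src_h // h
    cols = np.arange(w) * src_w // w
    resized = addon_patch[rows][:, cols]     # nearest-neighbour resize
    fused = preview.copy()
    fused[y:y + h, x:x + w] = resized        # cover the main body region
    return fused
```

A production pipeline would additionally feather or blend the seam (the image fusion step mentioned above) rather than hard-replacing the pixels.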
Therefore, overlaying the local image on the main body area of the shooting preview interface according to the position information of the main body region achieves efficient fusion of the shooting preview interface with the image acquired by the additional lens. The main body details the user cares about are highlighted without losing the panoramic scene, the visual expressiveness and information delivery of the image are improved, the user's demand for high-quality shot images is met, and an ideal shooting result is easy to obtain.
In addition, in the dual-lens collaborative work flow, when a user touches and selects a new main body area, processing is carried out according to the position relation between the new main body area and the current acquired image of the additional lens in two cases:
If the new main body area and the current star main body area both lie within the range of the image currently acquired by the additional lens, the system directly reuses the registered image in the additional lens buffer frame; the extraction algorithm locates the new main body area, and the switch and ultra-clear magnified display can be completed without any motor movement. As shown in fig. 6, in response to the first input to the partial object 215a, the enlarged partial object 215b is obtained. When the user selects another partial object, in response to the first input to the other partial object 216a, the enlarged partial object 216b is obtained. Since both the partial object 215a and the other partial object 216a lie within the picture currently acquired by the additional lens, the next star main body can be displayed in ultra-clear magnification without moving the motor.
As shown in FIG. 7, in response to the first input to the partial object 215a, the enlarged partial object 215b is obtained. When the user then selects another partial object 217a that is far from the partial object 215a and outside the current acquisition range of the additional lens, the displacement is calculated from the current center positions of the partial object 217a and the additional lens, the additional lens is driven to the target position, and it is refocused after the motor has settled to obtain a clear image of the new main body area. An image registration algorithm then establishes a mapping between the new image and the wide-angle image, the new main body area is cropped and enlarged according to the magnification, and finally the processed high-definition main body image is overlaid at the corresponding position of the wide-angle image, achieving the switching and enhanced display of a main body area across ranges and yielding the enlarged partial object 217b.
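The two cases above can be summarized with the following decision sketch. Rectangles are (x, y, w, h) in wide-angle preview coordinates; `lens.move_to()` and `lens.refocus()` are hypothetical driver calls standing in for the motor drive and refocus steps described above.

```python
def switch_subject(new_box, addon_fov_box, lens):
    """Decide how to serve a newly selected main body region.

    new_box:       the newly selected main body region.
    addon_fov_box: where the additional lens currently looks, mapped into
                   wide-angle preview coordinates after registration.
    """
    nx, ny, nw, nh = new_box
    ax, ay, aw, ah = addon_fov_box

    inside = (nx >= ax and ny >= ay and
              nx + nw <= ax + aw and ny + nh <= ay + ah)
    if inside:
        # Case 1: the new main body is already inside the additional-lens
        # frame; re-crop the cached buffer frame, no motor movement needed.
        return ("crop_cached", (nx - ax, ny - ay, nw, nh))

    # Case 2: the new main body lies outside the current field of view;
    # shift the lens by the offset between the two centres, then refocus
    # and wait for a fresh frame before registering and cropping again.
    dx = (nx + nw / 2) - (ax + aw / 2)
    dy = (ny + nh / 2) - (ay + ah / 2)
    lens.move_to(dx, dy)      # hypothetical motor drive
    lens.refocus()            # hypothetical refocus after the motor settles
    return ("recapture", (dx, dy))
```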
In a possible embodiment, before step 130, the following steps may be further included:
Determining the deviation amount of the center coordinates of the main body region and the center coordinates of the image acquired by the additional lens under the condition that the image acquired by the additional lens does not comprise the main body image corresponding to the main body region;
And controlling the prism in the additional lens to move according to the deviation amount, so that the image acquired by the additional lens comprises the main image corresponding to the main area.
The main body image is the image content corresponding to the main body region, i.e. the presentation, within the image, of the object the user wants to capture clearly.
The deviation amount is the difference between the center coordinates of the main body area and the center coordinates of the image collected by the additional lens, and measures how far the main body area deviates from the center of the image collected by the additional lens.
When the image acquired by the additional lens does not include the main body image corresponding to the main body region, the center coordinates of the main body region and the center coordinates of the image acquired by the additional lens are first determined by an image recognition algorithm, and the deviation amount between them is calculated. A control command is then generated according to the direction and magnitude of the deviation amount to drive the prism in the additional lens. Because the prism moves in the direction opposite to the deviation, adjusting the prism position changes the optical path of the additional lens so that the image it acquires covers the main body image corresponding to the main body region.
Illustratively, at a concert site, a user shoots with an electronic device equipped with an additional lens and selects the singer at the center of the stage as the main body area in the first image. However, the image collected by the additional lens only captures the backing dancers on one side of the stage and does not contain the singer. After the deviation between the center coordinate of the main body area and the center coordinate of the image acquired by the additional lens is calculated, the prism is controlled to move, the additional lens successfully brings the singer into its shooting range, and a high-quality image with both the panorama and the main body details can be obtained in the subsequent fusion with the first image, improving the shooting result and the user experience.
Therefore, controlling the movement of the prism in the additional lens according to the deviation amount ensures that the image collected by the additional lens includes the main body image corresponding to the main body region, so that fusing it with the first image yields a high-quality image with both the panorama and the main body details, improving the shooting result and the user experience.
In a possible embodiment, in the step of determining the deviation amount between the center coordinates of the subject area and the center coordinates of the image acquired by the additional lens, when the image acquired by the additional lens does not include the subject image corresponding to the subject area, the method specifically may include the following steps:
determining the lateral deviation amount and the longitudinal deviation amount of the central coordinate of the main body area and the central coordinate of the image acquired by the additional lens;
the controlling the prism movement in the additional lens according to the deviation amount includes:
generating a first motor control instruction according to the lateral deviation amount so as to drive a prism in the additional lens to move laterally;
generating a second motor control instruction according to the longitudinal deviation amount so as to drive a prism in the additional lens to longitudinally move;
wherein the direction of movement of the prism in the additional lens is opposite to the direction of the deviation amount.
The lateral deviation is the difference between the center coordinates of the main body area and the center coordinates of the image acquired by the additional lens in the horizontal direction.
The longitudinal deviation is the difference between the center coordinates of the main body area and the center coordinates of the image acquired by the additional lens in the vertical direction.
And a first motor control instruction, which is generated according to the lateral deviation amount and is used for driving the prism in the additional lens to move laterally.
And a second motor control instruction for generating an instruction for driving the prism in the additional lens to move longitudinally according to the longitudinal deviation amount.
After the lateral deviation amount and the longitudinal deviation amount between the center coordinates of the main body area and the center coordinates of the image acquired by the additional lens are determined, a first motor control instruction and a second motor control instruction are generated from the two deviation amounts respectively. The first motor control instruction drives the prism laterally according to the lateral deviation amount, the second drives it longitudinally according to the longitudinal deviation amount, and the prism always moves in the direction opposite to the deviation. When the lateral deviation exceeds a first preset threshold, the motor is controlled to drive the horizontal movement at a first speed coefficient; when the longitudinal deviation exceeds a second preset threshold, the motor is controlled to drive the vertical movement at a second speed coefficient, so that the prism is adjusted into place efficiently and safely.
During the cooperative operation of the shooting preview and the additional lens, take a stage lamp captured by the additional lens as an example: after registration, its coordinates in the wide-angle image are A(x1, y1) and its coordinates in the main body area are B(x2, y2); the initial position of the motor is M(0, 0), and the position of the motor after the N-th movement is P(hx, hy). The x-axis motion coefficient of the motor is Kx and its effective stroke is Lx μm; the y-axis motion coefficient is Ky and its effective stroke is Ly μm; Kx and Ky are positive integers greater than 0.
When x2 > x1, the motor drives the x-axis forward by a displacement determined by the deviation (x2 - x1) and the coefficient Kx, and the image acquired by the additional lens correspondingly moves upward.
When x2 < x1, the motor drives the x-axis in reverse, and the image acquired by the additional lens correspondingly moves downward.
When y2 > y1, the motor drives the y-axis forward by a displacement determined by the deviation (y2 - y1) and the coefficient Ky, and the image of the additional lens correspondingly moves to the right.
When y2 < y1, the motor drives the y-axis in reverse, and the image of the additional lens correspondingly moves to the left. The x-axis and y-axis movements can be performed by the motor simultaneously, so the image captured by the additional lens can be adjusted both horizontally and vertically to cover the main body area.
When the lateral deviation exceeds a first preset threshold, the motor is controlled to drive the horizontal movement at a first speed coefficient; when the longitudinal deviation exceeds a second preset threshold, the motor is controlled to drive the vertical movement at a second speed coefficient. The first and second speed coefficients are calibrated in advance according to the physical stroke range of the prism in the additional lens.
The first preset threshold value is a preset transverse deviation critical value and is used for judging whether the prism needs to be driven to transversely move at a specific speed. And the second preset threshold value is a preset longitudinal deviation critical value and is used for judging whether the prism needs to be driven to longitudinally move at a specific speed. And the first speed coefficient and the second speed coefficient are speed adjustment coefficients calibrated in advance according to the physical stroke range of the prism in the additional lens and are used for controlling the speed of the motor driving the prism to move.
Illustratively, the user selects the lead singer at the upper left corner of the stage as the main body region. It is detected that the center coordinate of the main body area deviates laterally 60 pixels to the left from the center coordinate of the image acquired by the additional lens, exceeding the first preset threshold of 50 pixels, and deviates 20 pixels longitudinally, which does not exceed the second preset threshold of 25 pixels. A first motor control instruction is therefore generated from the lateral deviation to drive the prism to the right at the first speed coefficient, while the prism stays unchanged longitudinally because the longitudinal deviation does not exceed its threshold.
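The deviation-to-command mapping and the threshold gating can be sketched as follows. The threshold values and the coefficients kx, ky are illustrative placeholders for the first/second preset thresholds and the pre-calibrated speed coefficients; the prism is driven opposite to the deviation so the main body drifts back toward the center of the additional-lens image.

```python
def prism_commands(subject_center, addon_center,
                   lat_threshold=50, lon_threshold=25,
                   kx=1.0, ky=1.0):
    """Generate motor control instructions from the center deviation.

    subject_center / addon_center: (x, y) pixel coordinates of the main body
    area center and of the additional-lens image center, respectively.
    """
    dx = subject_center[0] - addon_center[0]   # lateral deviation
    dy = subject_center[1] - addon_center[1]   # longitudinal deviation

    commands = []
    if abs(dx) > lat_threshold:
        commands.append(("x_axis", -dx * kx))  # first motor control instruction
    if abs(dy) > lon_threshold:
        commands.append(("y_axis", -dy * ky))  # second motor control instruction
    return commands

# Example from the text: 60 px leftward lateral deviation (above the 50 px
# threshold), 20 px longitudinal deviation (below the 25 px threshold).
print(prism_commands((440, 320), (500, 300)))
# [('x_axis', 60.0)]  -> drive the prism to the right, opposite the deviation
```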
In one possible embodiment, the first input includes a first sub-input and a second sub-input, and the magnifying display of the portion of the photographic subject in the photographic preview interface in response to the first input includes:
determining a part of shooting objects in the shooting preview interface in response to the first sub-input;
receiving a second sub-input;
And responding to the second sub-input, and displaying the part of shooting objects in an enlarged mode according to the magnification indicated by the second sub-input.
The first sub-input refers to an initial interaction action performed by the user on the shooting preview interface for designating a part of shooting objects in the shooting preview interface. The first sub-input is used to explicitly indicate to the electronic device a specific location or range of the part of the photographic subject that the user wishes to zoom in subsequently. The input forms of the first sub-input include, but are not limited to:
and (3) clicking through single touch, namely directly clicking the center of the part of shooting objects interested by the user in the preview picture.
Long press, continuously pressing the screen on part of the shooting object for a period of time.
Frame gesture, namely dragging a finger on a screen to form a rectangular frame, and framing a part of the shooting object which a user wants to enlarge.
A specific gesture is to draw a closed contour around a part of the photographed object.
The partial shooting object is a specific picture element or local area, such as a distant building or a person's face, that the user explicitly designates in the shooting preview interface through the first sub-input and wants to observe magnified or to shoot.
The second sub-input is the subsequent interaction performed by the user immediately after the first sub-input to set the magnification. The second sub-input tells the electronic device to what extent the user wants the selected part of the shooting objects to be enlarged. The second sub-input relates to:
Magnification: the scale factor between the display size of the part of the shooting objects when enlarged on the screen and its display size in the original shooting preview interface. For example, a magnification of 2 means the object is displayed on screen at twice its size in the original preview picture. The magnification is specified by the user through the second sub-input.
When a user observes a shooting preview interface, if the user wants to zoom in on a part of shooting objects in a picture, first sub-input is executed. The electronic equipment analyzes the input space coordinates or region information in real time, and accurately locks the part of the shooting object which is intended to be amplified by the user. At this point, a preliminary action may be performed, such as slightly adjusting the focus of the additional lens or initially invoking its image data.
The electronic device responds to the second sub-input and parses the magnification instruction it represents. And then, according to the part of the shooting object and the magnification selected by the user, carrying out accurate clipping and scaling calculation on the high-quality image data. And the electronic equipment dynamically renders and displays the processed image on a shooting preview interface. It is ensured that the enlarged picture core information is directly derived from the optical imaging of the additional lens.
Through this intuitive, continuous operation, the additional lens is used to acquire a high-quality image of the selected area and display it at the specified magnification. Separating the first and second sub-inputs lets the user express a complex intention step by step and unambiguously, significantly reduces the chance of misoperation, and improves the sense of control and precision of the operation. The user does not need to rely on preset fixed magnification levels and can dynamically adjust to the optimal magnification according to actual observation needs, making the experience smoother and more natural.
Because the enlargement display is completely based on the image acquired by the additional lens, and the magnification is directly set by the user, the enlargement effect seen by the user in the preview stage is highly consistent with the actual photo or video effect finally shot by using the additional lens, and the deviation between the preview and the result is eliminated.
Therefore, through definite target area selection and flexible multiplying power control, a user can realize highly controllable and visible local optical amplifying experience on a shooting preview interface by combining a high-quality image source provided by an additional lens, and the convenience, the accuracy and the predictability of a final imaging effect of the external lens are greatly enhanced.
Wherein, in the case where the second sub-input is a pressing operation, the magnification is determined according to a pressing intensity of the pressing operation;
In the case where the second sub-input is a slide operation, the magnification is determined according to a slide trajectory of the slide operation;
in the case where the second sub-input is a zoom operation, the magnification is determined according to a zoom distance of the zoom operation.
The second sub-input, i.e. the user's operation on the local picture, may be of different types such as a pressing operation, a sliding operation or a zoom operation.
Pressing operation: the user applies pressure of varying intensity on the screen of the electronic device; the pressure reflects how much the user wants to adjust the local picture.
Sliding operation: the user slides a finger on the screen of the electronic device; the length and direction of the sliding trajectory determine the magnification of the local picture.
Zoom operation: the user performs a pinch gesture on the screen of the electronic device; the distance between the opening and closing fingers determines how much the local picture is enlarged or reduced.
The magnification is the parameter, determined from the different types of second sub-input, that adjusts how the local picture is displayed.
When the user performs the second sub-input, its type is first identified. For a pressing operation, the pressure sensor of the electronic device measures how hard the user presses the screen; a preset correspondence between pressing intensity and magnification determines the magnification, and the harder the press, the higher the magnification and the larger the local picture. For a sliding operation, the length and direction of the finger's trajectory are recorded and converted into a magnification by a preset algorithm; the longer the trajectory, the larger the magnification. For a zoom operation, the touch sensor of the electronic device measures the finger spread distance, which is mapped to a magnification according to a set rule; the wider the spread, the higher the magnification of the local picture.
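A minimal sketch of this mapping is shown below, assuming a simple linear transfer from the measured input value to a magnification in the 1x-4x range used in the examples that follow; a real device would tune the transfer curve per sensor and input type.

```python
def magnification_from_input(kind, value, max_value, lo=1.0, hi=4.0):
    """Map a second sub-input to a magnification in [lo, hi].

    kind:  "press" -> value is the pressing intensity   (0 .. max_value)
           "slide" -> value is the slide-track length   (0 .. max_value)
           "pinch" -> value is the finger spread change (0 .. max_value)
    A single linear mapping is used here for all three kinds; only the
    measured quantity differs.
    """
    t = min(max(value / max_value, 0.0), 1.0)   # normalise to 0..1
    return lo + t * (hi - lo)

# Examples (illustrative units):
print(magnification_from_input("press", 0.0, 1.0))   # 1.0x  -- lightest touch
print(magnification_from_input("press", 0.5, 1.0))   # 2.5x
print(magnification_from_input("pinch", 300, 400))   # 3.25x -- wide finger spread
```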
In the video shooting or processing process, a plurality of interaction modes are provided to realize the adjustment of the magnification of the super-clear part shooting object, and finally the image cutting and fusion are completed, specifically as follows:
When the user touches the ultra-clear main body area on the screen, the pressure sensor of the electronic device senses the pressing intensity in real time. The partial shooting object acquired by the additional lens is displayed at 2x magnification by default. If the user lightens the touch, the magnification decreases with the pressing intensity; when the pressing intensity drops to 0, the magnification reaches its minimum of 1x. Even then, thanks to the high-definition capture capability of the additional lens, the partial shooting object, though reduced, is still noticeably sharper than the original partial shooting object captured by the camera of the electronic device. If the user presses harder, the magnification increases with the pressing intensity, reaching 4x when the pressing intensity reaches the set maximum. After the magnification is adjusted, the picture is cropped, and the processed ultra-clear partial shooting object is fused with the wide-angle picture.
As shown in fig. 8, when the user selects a partial subject 218a by the first sub-input and the second sub-input is a pressing operation, the magnification is determined according to the pressing intensity of the pressing operation, and the partial subject is displayed by magnification, thereby obtaining a magnified partial subject 218b.
When the user draws a circle gesture in the ultra-clear main body area, a yellow magnification adjustment frame immediately appears at the edge of the main body. By sliding along the adjustment frame, the magnification can be set precisely: when the user's finger rotates clockwise along the frame, the magnification gradually decreases from 4x to 1x, and when it rotates counterclockwise, it gradually increases from 1x to 4x. During the operation, the system responds to the gesture changes in real time and updates the magnification. After the magnification is adjusted, the picture acquired by the additional lens is cropped, and the processed partial shooting object is fused with the wide-angle picture so that the final picture offers both the panoramic view and high-definition details of the main body.
As shown in fig. 9, when the user selects a partial object 219a through the first sub-input and the second sub-input is a slide operation, the user determines the magnification according to the slide trajectory of the slide operation, and enlarges the partial object based on the magnification, thereby obtaining an enlarged partial object 219b.
When the user performs a two-finger operation in the ultra-clear main body area, the magnification is adjusted according to the two-finger motion: pinching inward gradually reduces the magnification from 4x to 1x and the ultra-clear partial shooting object shrinks accordingly, while spreading outward gradually increases the magnification from 1x to 4x and the partial shooting object is enlarged synchronously. During the two-finger operation, the system captures the change in finger distance in real time and dynamically adjusts the magnification. After the adjustment is complete, the picture is cropped and the optimized ultra-clear partial shooting object is fused with the wide-angle picture to output a high-quality composite picture.
As shown in fig. 10, when the user selects a partial subject 2110a through the first sub-input and the second sub-input is a zoom operation, the magnification is determined according to the zoom distance of the zoom operation, and the partial subject 2110b is displayed by being magnified based on the magnification.
Therefore, various methods for determining the magnification based on different operation modes of the user are provided, interaction modes of the user and a shot picture are enriched, the user can realize fine adjustment of the display effect of the local picture by more natural and convenient operation, diversified requirements of different users under different shooting scenes are met, flexibility and interestingness of shooting operation are improved, and user experience is enhanced.
In a possible embodiment, in the case where the additional lens is an optical zoom lens, the optical zoom focal length is determined according to the magnification, and the enlarged image of the part of the shooting objects is obtained based on the image acquired by the additional lens at that optical zoom focal length.
The additional lens here is an external lens with optical zoom capability: the optical zoom focal length is changed by physically moving the internal lens group, so the viewing angle and imaging size change without loss. From the magnification set by the user, the position of the part of the shooting objects in the picture, the characteristics of the sensor and the physical parameters of the optical zoom lens, the system calculates, through a preset internal mapping model or algorithm, the optical zoom focal length that exactly matches the magnification requirement.
After the calculation, the electronic device sends an instruction to the optical zoom additional lens through the connection interface, drives its internal motor or mechanical structure to move the lens group to the corresponding physical position, and adjusts the optical focal length of the lens to the target optical zoom focal length. The system then uses mainly or entirely the high-resolution raw image data acquired by the optical zoom lens at this focal length. For the region of the part of the shooting objects selected by the user, the system crops and adapts this high-quality optical zoom image data and finally displays the part of the shooting objects enlarged on the shooting preview interface. The core of the whole flow is that the enlarged image is derived directly from the optical information captured by the additional lens at the physical focal length corresponding to the magnification the user requires.
In the case where the additional lens supports optical zoom, the digital magnification and the optical zoom magnification can be mapped dynamically. The 35 mm-equivalent focal length of the wide-angle camera of the electronic device is a known fixed value, and the 35 mm-equivalent focal length of the additional lens ranges from 56 mm to 200 mm. When the user selects the default magnification of 2x, the system drives the zoom motor of the additional lens to move the focal length from the initial 56 mm to 112 mm, where 56 mm corresponds to 2x optical zoom of the wide-angle focal length and 112 mm corresponds to 4x. Physical focal-length adjustment and digital magnification are thus optimized together, improving main body detail while maintaining picture sharpness.
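Following the example above, the magnification-to-focal-length mapping can be sketched as a simple linear relation from the lens's 56 mm starting point, clamped to its physical 56-200 mm range; this is an illustrative interpretation of the example, not a formula stated in the text.

```python
ADDON_MIN_MM = 56.0    # shortest focal length of the additional zoom lens
ADDON_MAX_MM = 200.0   # longest focal length of the additional zoom lens
# (The example implies a ~28 mm-equivalent wide camera, since 56 mm is its 2x.)

def addon_focal_for_magnification(magnification):
    """Map the user-selected magnification to an optical-zoom focal length.

    A 2x user magnification drives the lens from 56 mm to 112 mm in the
    example above, i.e. target = 56 mm * magnification, clamped to the
    lens's physical range.
    """
    target = ADDON_MIN_MM * magnification
    return max(ADDON_MIN_MM, min(ADDON_MAX_MM, target))

print(addon_focal_for_magnification(2.0))   # 112.0 mm (the example above)
print(addon_focal_for_magnification(4.0))   # 200.0 mm (clamped to the physical limit)
```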
The enlarged partial shooting object the user sees on the screen has definition, detail richness and image quality fully equivalent to an actual photo taken at the same physical focal length with the optical zoom lens, because the preview view and the final image share the same high-quality optical imaging source. The blurring, increased noise and loss of detail caused by digital magnification during preview are fundamentally avoided, and the user obtains very high accuracy and confidence when composing, confirming focus accuracy and checking subject details. Lossless optical magnification is achieved in the preview phase.
Meanwhile, the user's visual magnification requirement is automatically and accurately converted into the physical focal length adjustment that drives the optical lens, which greatly simplifies the operation of professional-level optical zoom and improves usability. The high-quality optical magnification preview thus obtained significantly improves the predictability of the shooting result and the user's overall satisfaction, and fully releases the hardware potential of the optical zoom additional lens.
In a possible embodiment, after step 130, the following steps may be further included:
shooting to obtain a target file in response to shooting operation, wherein the target file comprises at least one of an image and a video;
receiving a fourth input under the condition of displaying an editing interface of the target file;
And responding to the fourth input, and updating display parameters of the part of shooting objects in the target file, wherein the display parameters comprise at least one of display outlines, filter parameters and display special effects.
The object file refers to an image or video file generated by the user through a photographing operation, the contents of which include a part of the photographing object previously enlarged and displayed in step 130.
The editing interface is a graphical operating environment in which the electronic device displays the target file and provides editing functions. The fourth input is a touch, gesture or parameter adjustment operation performed by the user on the target file in this interface. The display parameters are adjustable attributes that control the visual presentation of the partial shooting object in the target file: the display outline refers to an edge-enhancement effect on the subject area, the filter parameters refer to colour/stylization coefficients applied to the subject, and the display special effects refer to dynamic visual effects superimposed on the subject.
When the user opens the target file in the editing interface, the system automatically identifies the previously defined partial shooting object in the file. After receiving the fourth input, the system analyses the input type: if it is an outline-drawing gesture, a vector path fitted to the subject edge is generated and rendered as the display outline; if it is a filter parameter adjustment, the colour mapping matrix of the subject area is adjusted in real time; if a special effect is selected, a particle animation is superimposed on the subject. All parameter updates act only on the layer where the partial shooting object is located, and independent processing of subject and background is achieved through image segmentation and layered rendering, as in the sketch below. For example, in video editing the system applies the same filter parameters to the subject area frame by frame, ensuring consistency of the dynamic effect.
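A minimal sketch of such subject-only adjustment is given below, assuming the subject mask has already been produced by a segmentation step; the saturation and brightness operations are simplified stand-ins for the colour mapping described above.

```python
import numpy as np

def apply_to_subject(frame: np.ndarray, mask: np.ndarray,
                     saturation: float = 1.0, brightness: float = 0.0) -> np.ndarray:
    """frame: HxWx3 float image in [0, 1]; mask: HxW bool array marking the subject layer.
    Only the masked pixels are adjusted; the background is left untouched."""
    out = frame.copy()
    region = out[mask]                                    # pixels of the subject layer, shape (N, 3)
    gray = region.mean(axis=1, keepdims=True)             # per-pixel luminance proxy
    region = gray + (region - gray) * saturation          # scale colour away from gray
    region = np.clip(region + brightness, 0.0, 1.0)       # shift brightness, keep valid range
    out[mask] = region
    return out

frame = np.random.rand(4, 4, 3)
mask = np.zeros((4, 4), dtype=bool); mask[1:3, 1:3] = True    # assumed segmentation result
edited = apply_to_subject(frame, mask, saturation=1.4, brightness=0.05)
```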
And displaying a filter library interface, a shape library interface and a special effect library interface on a shooting interface of the electronic equipment for a user to select editing materials.
As shown in fig. 11, after the filter option is clicked, a preset filter group is displayed below the photo, including but not limited to: a happy filter, which brightens the picture by increasing saturation and contrast and suits high-spirited scenes such as concerts and celebrations; a natural filter, which uses low saturation and softening to restore the real colours of the scene and suits natural scenery or portrait close-ups; and a melancholy filter, which reduces brightness and adds a cool tone to create a subdued atmosphere for narrative photography.
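Purely as an illustration, the filter group could be represented as parameter presets such as the following; the numerical coefficients are assumptions, and only the direction of each adjustment (more saturation and contrast for the happy filter, less saturation and softening for the natural filter, lower brightness and a cooler tone for the melancholy filter) follows the description above.

```python
# Hypothetical parameter presets for the filter group of fig. 11.
FILTER_PRESETS = {
    "happy":      {"saturation": 1.3, "contrast": 1.2, "brightness": 0.05, "warmth": 0.0},
    "natural":    {"saturation": 0.85, "contrast": 1.0, "brightness": 0.0, "soften": 0.3},
    "melancholy": {"saturation": 0.9, "contrast": 1.0, "brightness": -0.1, "warmth": -0.2},
}

def preset_for_scene(scene: str) -> dict:
    """Pick a preset by scene type, mirroring the suggested use cases."""
    if scene in ("concert", "celebration"):
        return FILTER_PRESETS["happy"]
    if scene in ("landscape", "portrait"):
        return FILTER_PRESETS["natural"]
    return FILTER_PRESETS["melancholy"]

print(preset_for_scene("concert"))
```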
As shown in fig. 12, after the enlarged-frame shape option is clicked, multiple geometric outline templates are provided: a round frame, which generates a smooth rounded edge and is preferred for highlighting a face or a close-up of a person; a sawtooth frame, whose irregular edge adds a mechanical feel and suits industrial subjects such as car shows and technology products; and a love-heart frame, which generates a heart-shaped outline for emotional expression scenes.
As shown in fig. 13, after the enlargement special-effect option is clicked, a dynamic background synthesis function is provided: a firework effect superimposes a particle firework animation on the edge of the enlarged subject area to evoke special moments such as New Year's Eve or a celebration; a lightning effect creates a thunderstorm or tense atmosphere through irregular light effects and contrast changes; and a sunlight effect adds halo and scattering to the background to enhance the bright feel of a sunny or cheerful scene.
Therefore, by providing various filters, shapes and special-effect materials, rich picture editing functions are given to the user, so that the user can edit the picture according to the shooting scene and personal creativity. This meets diverse creative needs, improves the artistry and viewing value of the work, and enhances the user's creative expression and involvement in the shooting process. The user can directly control the visual effect of the core object through the fourth input without manually selecting the subject, for example adding a glowing outline to a person so that they stand out from a complex background, or independently enhancing the colour saturation of a macro-shot insect without affecting the environmental tone. The user can quickly generate a short video in which the subject flickers into focus, or make a display image with a blurred background and a highlighted product outline, greatly lowering the technical threshold of professional-level regional editing.
In a possible embodiment, after step 130, the following steps may be further included:
The method comprises the steps of responding to shooting operation, shooting to obtain a dynamic image, wherein the dynamic image comprises a main body image area and a background image area, the image of the background image area is obtained based on a first video acquired by a camera of the electronic equipment, and the image of the main body image area is obtained based on a second video acquired by the additional lens;
receiving a third input to the dynamic image;
And responding to the third input, and displaying a dynamic image corresponding to a target image area according to the input parameters of the third input, wherein the target image area comprises at least one of the background image area and the main image area.
The moving image refers to a video file or image sequence generated by a photographing operation, and is composed of a subject image area and a background image area. The main image area refers to a video stream area corresponding to a core object captured by a user through an additional lens focus, and the image data of the main image area is derived from a second video. The background image area refers to a scene portion other than the subject in the moving image, and its image data is derived from the first video.
After the user finishes the enlargement operation on the partial shooting object in the preview interface and triggers the shooting operation, the camera continuously records the first video to obtain the panoramic background, while the additional lens records the second video, focusing on capturing the subject designated by the user. The first video and the second video are composited into a dynamic image in real time through timestamp alignment and an image fusion algorithm, in which the subject occupies the subject image area and the surrounding environment forms the background image area. When the user plays back the dynamic image and wants to emphasize a specific region, a third input may be performed.
Therefore, by integrating video and image information acquired by different devices, a dynamic image with a distinctive display effect is generated. The dynamic background of the complete scene is retained while the high-definition dynamic details of the subject are highlighted. Compared with traditional single-view video shooting, this provides the user with layered, visually striking video content, enriches the forms that shooting results can take, meets the user's demand for diverse, high-quality image creation, and expands the user's creative space and the quality of the work.
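A simplified sketch of this composition step is shown below; timestamp alignment is reduced to nearest-frame matching, and the subject rectangle is assumed to be known and already matched to the size of the telephoto crop, both of which are simplifications of the fusion algorithm described above.

```python
import numpy as np

def nearest_frame(frames, timestamps, t):
    """Pick the frame whose timestamp is closest to t (simplified alignment)."""
    idx = min(range(len(timestamps)), key=lambda i: abs(timestamps[i] - t))
    return frames[idx]

def compose(wide_frames, wide_ts, tele_frames, tele_ts, t, subject_box):
    """Background from the first video (wide camera), subject region from the second video (additional lens)."""
    x0, y0, x1, y1 = subject_box                          # subject rectangle in output coordinates
    out = nearest_frame(wide_frames, wide_ts, t).copy()   # panoramic background frame
    tele = nearest_frame(tele_frames, tele_ts, t)         # telephoto subject frame (assumed pre-scaled)
    out[y0:y1, x0:x1] = tele[:y1 - y0, :x1 - x0]          # paste the subject crop over the background
    return out

wide = [np.zeros((120, 160, 3), dtype=np.uint8) for _ in range(3)]
tele = [np.full((40, 60, 3), 255, dtype=np.uint8) for _ in range(3)]
frame = compose(wide, [0.0, 0.033, 0.066], tele, [0.001, 0.034, 0.067], 0.03, (50, 40, 110, 80))
```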
Specifically, in response to a third input to the background image area, a dynamic picture of the background image area is played according to the first video while the subject image area is kept static; in response to a third input to the subject image area, a dynamic picture of the subject image area is played according to the second video while the background image area is kept static; and in response to a third input to both the subject image area and the background image area, dynamic pictures of the two areas are played according to the first video and the second video respectively.
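This dispatch rule can be summarized in a short sketch; the RegionPlayer interface and its method names are assumptions introduced only for illustration.

```python
class RegionPlayer:
    """Minimal stand-in for a per-region video player (assumed interface)."""
    def __init__(self, name):
        self.name, self.state = name, "paused"
    def play(self):
        self.state = "playing"
    def freeze_current_frame(self):
        self.state = "frozen"

def handle_third_input(tapped_regions, subject_player, background_player):
    """tapped_regions: subset of {"subject", "background"} hit by the third input.
    The subject player draws from the second video (additional lens),
    the background player from the first video (main camera)."""
    if tapped_regions == {"subject"}:
        background_player.freeze_current_frame()    # background stays still
        subject_player.play()                       # subject plays from the second video
    elif tapped_regions == {"background"}:
        subject_player.freeze_current_frame()       # subject stays still
        background_player.play()                    # background plays from the first video
    elif tapped_regions == {"subject", "background"}:
        subject_player.play()                       # both areas play dynamically
        background_player.play()

subject, background = RegionPlayer("subject"), RegionPlayer("background")
handle_third_input({"subject"}, subject, background)
print(subject.state, background.state)              # playing frozen
```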
When a third input by the user to the subject image area is received, the input position is identified as belonging to the subject image area. Dynamic playback of the background image area is then paused and its current frame is kept as a still display. Meanwhile, the segment relevant to the current moment is extracted from the second video and its dynamic picture is played in the subject image area, producing an effect in which the subject area moves while the background area stays still.
As shown in fig. 14, at a concert the user uses the electronic device to shoot and generate a dynamic image. When a star takes the stage, the user touches the star's subject image area on the screen. In response to this input, dynamic playback of the background image area is suspended so as to keep a still picture of the audience. Meanwhile, a dynamic segment covering the 3 seconds around the moment the star takes the stage is extracted from the second video and played in the subject image area, that is, the dynamic image corresponding to the target image area 140 is displayed, showing the highlight of the star rising from the stage and singing, and forming a strong visual contrast.
Therefore, the audience's attention can be focused on the dynamic performance of the main body, the action details and expression changes of the main body are highlighted through static comparison of the background, the drama feeling and expressive force of the picture are enhanced, unique visual experience is provided for the user, and the method is particularly suitable for capturing key moment and wonderful performance.
When a third input by the user to the background image area is detected, the input position is determined to belong to the background image area. Dynamic playback of the subject image area is then paused and its current frame is kept as a still display. At the same time, the corresponding segment is extracted from the first video and its dynamic picture is played in the background image area, producing an effect in which the subject area is still while the background area moves.
As shown in fig. 15, in a seaside shooting scene, the dynamic image generated by the user shows a dolphin performance. At the moment the dolphin leaps high into the air, the user touches the background image area 150 on the screen. In response to this input, dynamic playback of the subject image area is suspended, keeping a freeze-frame of the dolphin's leap. Meanwhile, a dynamic segment of the audience area is extracted from the first video, and the dynamic picture of the whole audience cheering and applauding in unison is played in the background image area 150, forming an interesting contrast between the static subject and the dynamic background.
Therefore, contrasting the dynamic change of the background with the freeze-frame of the subject highlights the atmosphere and liveliness of the scene while preserving the subject's highlight moment, creating a creative visual effect for the user; this is suitable for recording scenes with a strong ambient atmosphere.
When a third input by the user is received for the subject image region and the background image region, the input is recognized to cover both regions. And simultaneously, corresponding fragments are extracted from the first video and the second video, and dynamic pictures of the two fragments are respectively played in a background image area and a main body image area, so that the effect of simultaneously and dynamically playing the main body and the background is realized.
As shown in fig. 16, in a grassland shooting scene, the dynamic image generated by the user shows a girl on the grassland. When the user touches the girl's subject image area 160 and the background image area on the screen at the same time, in response to this third input to the dynamic image, a dynamic segment of the grass swaying in the wind is extracted from the first video and played in the background image area, and a dynamic segment of the girl brushing back the hair blown across her forehead is extracted from the second video and played in the subject image area. Finally, a harmonious picture in which both subject and background are dynamic is displayed.
Therefore, the advantages of dual-lens shooting can be fully used to present the dynamic details of both subject and background, making the picture more vivid and realistic and strengthening the viewer's sense of immersion; this is suitable for recording pictures that need to present the whole dynamic scene, such as natural scenery or interaction between people and their environment.
By giving the user autonomous control over the display effect of the dynamic image, receiving the user's third input and switching the display state according to the input position, the display mode of the dynamic image can be flexibly adjusted to the user's viewing needs and aesthetic preferences. This interaction strengthens the connection between the user and the shooting result, increases the interest and engagement of viewing the image content, meets diverse viewing needs, and further optimizes the user's experience of image creation and appreciation.
In a possible embodiment, the target file includes a video, and after the capturing, the method may further include the following steps:
Receiving a fifth input;
Responding to the fifth input, displaying a video editing interface of the target file, wherein the video editing interface comprises a first video display area corresponding to the part of shooting objects, a second video display area corresponding to other shooting objects except the part of shooting objects and an editing area;
Receiving a sixth input;
And responding to the sixth input, editing the video corresponding to the first video display area or editing the video corresponding to the second video display area.
The target file refers to a video file generated by shooting by a user, and the content of the video file includes a part of shooting objects and other shooting objects defined by the step 130.
And the editing area is used for displaying an editing function area aiming at the video in the video processing interface, such as editing tools of cutting, filter adding, special effect making and the like, so that the user can conduct personalized processing on the video conveniently.
The fifth input is a touch instruction executed by the user on the target file, and is used for triggering the video editing interface. The interface includes a first video display area, a second video display area, and an editing area. The sixth input is a touch operation of the user at the editing interface.
In response to the fifth input, the video stream is separated into two logical layers using pre-stored subject position metadata: the layer corresponding to the partial shooting object is rendered to the first video display area, and the background layer corresponding to the other shooting objects is rendered to the second video display area. Both areas support synchronized or independent playback control. When the user operates a function control in the editing area through the sixth input, the target area is automatically associated according to the input focus:
When the user selects an editing function in the editing area, the corresponding video is processed according to the function type; for example, the cropping function changes the visible range of the video picture through an image cropping algorithm, and the filter function changes the video tone through a colour adjustment algorithm. When the user performs a split operation, the target file is split, according to the identifier of the video acquisition device, into a second video corresponding to the other shooting objects and a first video corresponding to the partial shooting object, and the album display interface is re-laid out so that playback, editing and preview of the two videos are independent of each other and can be operated separately. Finally, the user can save the processed videos to device storage through an export function.
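The split operation itself can be sketched as grouping frames by the identifier of the acquisition device; the metadata keys and device identifiers used below are assumptions about how the dual-lens container tags its frames.

```python
def split_target_file(frames, wide_id="main_camera", tele_id="additional_lens"):
    """frames: list of dicts like {"device": ..., "frame": ...} in capture order.
    Returns (second_video, first_video): wide-angle frames and additional-lens frames."""
    second_video = [f["frame"] for f in frames if f["device"] == wide_id]   # other shooting objects
    first_video  = [f["frame"] for f in frames if f["device"] == tele_id]   # partial shooting object
    return second_video, first_video

frames = [
    {"device": "main_camera",     "frame": "wide_0001"},
    {"device": "additional_lens", "frame": "tele_0001"},
    {"device": "main_camera",     "frame": "wide_0002"},
]
wide_video, tele_video = split_target_file(frames)
print(len(wide_video), len(tele_video))   # 2 1
```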
In the album interface, after the user clicks the split option in the menu at the upper right of the video playback window, the enlarged target file collected cooperatively through the dual lenses is decoupled and separated into two independent video streams: one is the wide-angle video stream collected by the camera of the electronic device, and the other is the telephoto video stream collected by the additional lens.
As shown in fig. 17, the album display interface is automatically reconfigured into an upper functional area and a lower functional area. The upper half is a dual-video parallel playback area: a first video display area 1701 corresponding to the partial shooting object plays the high-definition subject video acquired by the additional lens, and a second video display area 1702 corresponding to the other shooting objects plays the panoramic video acquired by the wide-angle lens; the two playback windows each have independent play and pause controls. The lower half is a dual-track video editing area: the upper track displays the frame-sequence thumbnail of the additional-lens video, the lower track displays the frame-sequence thumbnail of the wide-angle-lens video, and the user can independently remove redundant frames from either track with a drag operation. After editing, the user can save the two processed videos to device storage through the 'export wide-angle video' and 'export additional lens video' options in the settings menu.
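A minimal sketch of the dual-track frame rejection and export step follows; the track representation and output file names are illustrative assumptions.

```python
def reject_frames(track, indices_to_remove):
    """Drop the frames at the given indices from a track (a list of frames)."""
    drop = set(indices_to_remove)
    return [frame for i, frame in enumerate(track) if i not in drop]

def export_tracks(tele_track, wide_track):
    """Pair each edited track with its own output file; a real implementation
    would re-encode and write the files to device storage."""
    return {
        "additional_lens_video.mp4": tele_track,   # 'export additional lens video'
        "wide_angle_video.mp4": wide_track,        # 'export wide-angle video'
    }

tele_track = reject_frames(["t0", "t1", "t2", "t3"], [1])   # remove a redundant frame
wide_track = reject_frames(["w0", "w1", "w2", "w3"], [])
files = export_tracks(tele_track, wide_track)
```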
After a concert, the user shoots live video using the attached additional lens and then opens the video processing interface of the electronic device. A first playback area in the interface plays the second video, collected by the camera and corresponding to the other shooting objects, fully presenting the brightly lit stage panorama and the lively audience; a second playback area plays the first video, collected by the additional lens and corresponding to the partial shooting object, clearly showing the highlight performance details of the singer on stage. Wanting to split the additional-lens video from the wide-angle video, the user clicks the split key in the settings menu at the upper right of the video. In response to the click, the previously enlarged target file is split into two independent videos, corresponding respectively to the second video collected by the wide-angle lens for the other shooting objects and the first video collected by the additional lens for the partial shooting object.
Correspondingly, the album display interface changes: the upper half becomes a video playback area subdivided into a playback area for the video acquired by the additional lens and a playback area for the wide-angle-lens video, and the user can play and pause the videos of the two areas independently; the lower half is a video editing and preview area, likewise divided into a frame preview and editing area for the additional-lens video and one for the wide-angle-lens video, and the user can independently remove unsatisfactory frames from the video of either area. Finally, the user clicks the 'export wide-angle video' and 'export additional lens video' options, and the split and edited videos are exported and saved to the device album respectively.
Thus, by providing an intuitive and powerful video processing interface, a user can clearly distinguish and manipulate videos captured by different devices. Through independent play and editing areas, a user can conduct personalized processing on panoramic videos and detail videos, and diversified video editing requirements are met. The function of splitting the video enables a user to flexibly manage shooting materials, and facilitates subsequent further creation or sharing of specific videos. Independent editing functions such as frame elimination and the like improve the fineness of video processing, are beneficial to users to obtain higher-quality video works, effectively improve the efficiency and experience of video processing of the users, and enhance the practicability and the professionality of the electronic equipment in video shooting and processing.
The user can independently adjust the visual effect of the subject and the environment, avoiding the tedious frame-by-frame matting of traditional video editing while keeping parameter adjustment spatially accurate. Synchronized display of the two regions provides an intuitive comparison preview, and combined with the centralized controls of the editing area it greatly reduces the complexity of multi-track operation, significantly improving the efficiency and quality of professional video creation.
In a possible embodiment, after step 130, the following steps may be further included:
receiving a seventh input to the capture preview interface;
in response to the seventh input, the partial photographic subject of the enlarged display is updated.
The seventh input refers to a new touch operation performed by the user on the photographing preview interface that is already in the enlarged display state. Part of the shooting objects refer to core targets which are displayed in an enlarged mode, and image data of the shooting objects are generated based on images acquired by the additional lens. Updating the enlarged display means terminating the enlarged state of the current object according to the user instruction and determining another photographic subject as a new enlarged subject in the preview interface.
When a certain partial shooting object is displayed enlarged, if the user triggers a seventh input on another area of the preview interface, the system analyses the input coordinates in real time and determines the target position newly designated by the user. A new partial shooting object region is dynamically defined around that position by an image recognition algorithm. The enlarged display of the original subject is then terminated, and the newly identified region is displayed enlarged, at the same or an inherited magnification, based on the image stream re-acquired by the additional lens.
The user can thus dynamically switch the focus of observation without leaving the enlargement mode; real-time target re-positioning in cooperation with the lens reduces a complex re-framing process to an intuitive tap or slide. This is particularly suitable for scenes that require attention to shift quickly, such as follow shots of motion or event recording, greatly improving the flexibility and controllability of the shooting preparation stage and the operating efficiency of multi-target shooting scenes.
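The re-targeting step can be sketched as follows; the fixed half-size of the new region is an assumption standing in for the image recognition algorithm, and the magnification is simply inherited from the current state.

```python
def retarget_subject(tap_x, tap_y, frame_w, frame_h, magnification, half=120):
    """Return the new subject rectangle (clamped to the frame) and the magnification to keep."""
    x0, y0 = max(0, tap_x - half), max(0, tap_y - half)
    x1, y1 = min(frame_w, tap_x + half), min(frame_h, tap_y + half)
    return {"box": (x0, y0, x1, y1), "magnification": magnification}

state = {"magnification": 4.0}                      # current enlarged state
new_subject = retarget_subject(900, 300, 1920, 1080, state["magnification"])
# The previous subject's enlarged display is ended, and the additional-lens
# stream is cropped to new_subject["box"] at the inherited magnification.
```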
In the embodiment of the application, a shooting preview interface corresponding to the camera of the electronic device is displayed, and in response to a first input to the shooting preview interface a partial shooting object in the interface is displayed enlarged, the image corresponding to the enlarged partial shooting object being obtained based on the image acquired by the additional lens. The user can therefore see directly on the preview interface the clear effect of the partial shooting object after optical magnification by the additional lens; the optical zoom capability of the additional lens is seamlessly integrated into the preview interaction, providing definition and detail superior to digital magnification that relies purely on the main camera image. Subject detail of the partial shooting object is thus ensured while the complete shooting scene is still taken into account.
According to the interaction method provided by the embodiment of the application, the execution subject can be an interaction device. In the embodiment of the application, an interaction device is taken as an example to execute an interaction method, and the interaction device provided by the embodiment of the application is described.
Fig. 18 is a block diagram of an interaction device according to an embodiment of the present application, where the device 1800 includes:
The display module 1810 is configured to display a shooting preview interface corresponding to a camera of the device;
a receiving module 1820, configured to receive a first input to the shooting preview interface;
and the display module 1810 is further configured to enlarge and display a part of the shooting objects in the shooting preview interface in response to the first input, where an image corresponding to the enlarged and displayed part of the shooting objects is obtained based on the image acquired by the additional lens.
In one possible embodiment, the first input includes a first sub-input and a second sub-input, and the display module 1810 is specifically configured to:
determining a part of shooting objects in the shooting preview interface in response to the first sub-input;
a receiving module 1820 for receiving a second sub-input;
And a display module 1810, configured to respond to the second sub-input, and enlarge and display the part of the shooting object according to the magnification indicated by the second sub-input.
In a possible embodiment, in case the second sub-input is a press operation, the magnification is determined according to a press intensity of the press operation;
In the case where the second sub-input is a slide operation, the magnification is determined according to a slide trajectory of the slide operation;
in the case where the second sub-input is a zoom operation, the magnification is determined according to a zoom distance of the zoom operation.
In one possible embodiment, the apparatus 1800 may further include:
and the determination module is used for determining the optical zoom focal length according to the magnification when the additional lens is an optical zoom lens, and the image corresponding to the part of the shooting objects which are displayed in a magnified manner is obtained based on the image acquired by the additional lens with the optical zoom focal length.
In a possible embodiment, the display module 1810 is further configured to display a functional icon of the additional lens on the shooting preview interface when the additional lens is effectively connected to the electronic device;
the apparatus 1800 may further include a control module:
the receiving module 1820 is configured to receive a second input of a function icon of the additional lens;
the control module is used for controlling the additional lens to acquire an image in response to the second input.
In one possible embodiment, the apparatus 1800 may further include:
The shooting module is used for shooting to obtain a dynamic image in response to a shooting operation, wherein the dynamic image comprises a main body image area and a background image area, the image of the background image area is obtained based on a first video acquired by a camera of the electronic equipment, and the image of the main body image area is obtained based on a second video acquired by the additional lens;
the receiving module 1820 is further configured to receive a third input to the dynamic image;
The display module 1810 is further configured to display, in response to the third input, a dynamic image corresponding to a target image area according to an input parameter of the third input, where the target image area includes at least one of the background image area and the main image area.
In a possible embodiment, the shooting module is further used for responding to shooting operation to obtain a target file through shooting, wherein the target file comprises at least one of an image and a video;
The receiving module 1820 is further configured to receive a fourth input if the editing interface of the target file is displayed;
The apparatus 1800 may further include:
And the updating module is used for responding to the fourth input and updating the display parameters of the part of shooting objects in the target file, wherein the display parameters comprise at least one of display outlines, filter parameters and display special effects.
In a possible embodiment, the receiving module 1820 is further configured to receive a fifth input;
the display module 1810 is further configured to display, in response to the fifth input, a video editing interface of the target file, where the video editing interface includes a first video display area corresponding to the part of the shooting objects, a second video display area corresponding to other shooting objects except the part of the shooting objects, and an editing area;
the receiving module 1820 is further configured to receive a sixth input;
The apparatus 1800 may further include:
And the editing module is used for responding to the sixth input and editing the video corresponding to the first video display area or the video corresponding to the second video display area.
In a possible embodiment, the receiving module 1820 is further configured to receive a seventh input to the shooting preview interface;
The display module 1810 is further configured to update the part of the shooting object displayed in an enlarged manner in response to the seventh input.
In the embodiment of the application, the subject area that the user wants to display in refined detail is determined by receiving a first input on the subject area in a first picture. In response to the first input, the first picture and a second picture are fused according to the position information of the subject area in the first picture, the first picture being acquired by the camera of the electronic device and the second picture by the additional lens coaxially aligned with the camera, the focal length of the additional lens when acquiring the second picture being larger than the focal length of the camera when acquiring the first picture. The camera of the electronic device acquires the first picture containing the whole shooting scene by virtue of its wide-angle characteristic, while the additional lens acquires the second picture with high-definition detail by virtue of its long focal length. Fusing the two integrates their image information into a third picture containing both the panorama and the subject area, so that the picture quality of the subject is ensured while the complete shooting scene is taken into account, and the shooting content is enriched.
The interaction device in the embodiment of the application can be an electronic device or a component in the electronic device, such as an integrated circuit or a chip. The electronic device may be a terminal, or may be a device other than a terminal. The electronic device may be a mobile phone, a tablet computer, a notebook computer, a palm computer, a vehicle-mounted electronic device, a mobile internet device (MID), an augmented reality (AR)/virtual reality (VR) device, a robot, a wearable device, an ultra-mobile personal computer (UMPC), a netbook or a personal digital assistant (PDA), and may also be a server, a network attached storage (NAS), a personal computer (PC), a television (TV), a teller machine, a self-service machine, etc., which are not particularly limited in the embodiments of the present application.
The interaction device of the embodiment of the application can be a device with an operating system. The operating system may be an Android operating system, an iOS operating system, or another possible operating system, which is not specifically limited in the embodiment of the application.
The interaction device provided by the embodiment of the present application can implement each process implemented by the above method embodiment, and in order to avoid repetition, details are not repeated here.
Optionally, as shown in fig. 19, an embodiment of the present application further provides an electronic device 1910, including a processor 1911, a memory 1912, and a program or an instruction stored in the memory 1912 and capable of being executed on the processor 1911, where the program or the instruction implements each step of any one of the above interaction method embodiments when executed by the processor 1911, and the steps achieve the same technical effects, and are not repeated herein.
The electronic device of the embodiment of the application includes the mobile electronic device and the non-mobile electronic device.
Fig. 20 is a schematic diagram of a hardware structure of an electronic device implementing an embodiment of the present application.
The electronic device 2000 includes, but is not limited to, a radio frequency unit 2001, a network module 2002, an audio output unit 2003, an input unit 2004, a sensor 2005, a display unit 2006, a user input unit 2007, an interface unit 2008, a memory 2009, and a processor 2010. The electronic device 2000 is coupled to additional lenses that may be controlled by the processor 2010.
Those skilled in the art will appreciate that the electronic device 2000 may also include a power source (e.g., a battery) for powering the various components, which may be logically connected to the processor 2010 through a power management system so as to perform functions such as managing charging, discharging, and power consumption by the power management system. The electronic device structure shown in fig. 20 does not constitute a limitation of the electronic device, and the electronic device may include more or less components than those shown in the drawings, or may combine some components, or may be arranged in different components, which will not be described in detail herein.
The display unit 2006 is configured to display a shooting preview interface corresponding to a camera of the device;
A user input unit 2007 for receiving a first input to the shooting preview interface;
and the display unit 2006 is further configured to enlarge and display a part of the shooting objects in the shooting preview interface in response to the first input, where an image corresponding to the enlarged and displayed part of the shooting objects is obtained based on the image acquired by the additional lens.
Optionally, the first input includes a first sub-input and a second sub-input, and the processor 2010 is configured to determine a portion of the photographic subject in the photographic preview interface in response to the first sub-input;
A user input unit 2007 for receiving a second sub-input;
And a display unit 2006 for displaying the partial photographic subject in an enlarged manner according to the magnification indicated by the second sub-input in response to the second sub-input.
Optionally, in the case that the second sub-input is a pressing operation, the magnification is determined according to a pressing intensity of the pressing operation, in the case that the second sub-input is a sliding operation, the magnification is determined according to a sliding track of the sliding operation, and in the case that the second sub-input is a zooming operation, the magnification is determined according to a zooming distance of the zooming operation.
Optionally, the processor 2010 is further configured to determine an optical zoom focal length according to the magnification when the additional lens is an optical zoom lens, and the image corresponding to the part of the shooting objects that is displayed in a magnified manner is obtained based on the image acquired by the additional lens at the optical zoom focal length.
Optionally, the display unit 2006 is further configured to display, in the case where the additional lens is effectively connected to the electronic device, a function icon of the additional lens on the shooting preview interface;
A user input unit 2007 for receiving a second input of a function icon of the additional lens;
processor 2010 is also configured to control the additional lens to capture an image in response to the second input.
Optionally, the processor 2010 is further configured to capture a dynamic image in response to a capturing operation, where the dynamic image includes a main image area and a background image area, an image of the background image area is obtained based on a first video captured by a camera of the electronic device, and an image of the main image area is obtained based on a second video captured by the additional lens;
A user input unit 2007 for receiving a third input of the moving image;
And the display unit 2006 is further configured to display, in response to the third input, a dynamic image corresponding to a target image area according to an input parameter of the third input, where the target image area includes at least one of the background image area and the main image area.
Optionally, the processor 2010 is further configured to obtain a target file in response to the shooting operation, where the target file includes at least one of an image and a video;
a user input unit 2007 for receiving a fourth input in the case of displaying an editing interface of the target file;
The processor 2010 is further configured to update, in response to the fourth input, display parameters of the part of the shooting objects in the target file, where the display parameters include at least one of a display profile, a filter parameter, and a display special effect.
Optionally, the user input unit 2007 is further configured to receive a fifth input;
A display unit 2006, further configured to display, in response to the fifth input, a video editing interface of the target file, where the video editing interface includes a first video display area corresponding to the part of the shooting objects, a second video display area corresponding to other shooting objects except the part of the shooting objects, and an editing area;
A user input unit 2007 for receiving a sixth input;
The processor 2010 is further configured to edit the video corresponding to the first video display area or edit the video corresponding to the second video display area in response to the sixth input.
Optionally, the user input unit 2007 is further configured to receive a seventh input to the shooting preview interface;
and a display unit 2006 for updating the part of the shooting object displayed in enlargement in response to the seventh input.
In the embodiment of the application, a shooting preview interface corresponding to a camera of the electronic equipment is displayed, and part of shooting objects in the shooting preview interface are enlarged and displayed in response to a first input of the shooting preview interface, wherein the image corresponding to the enlarged and displayed part of shooting objects is obtained based on the image acquired by the additional lens, so that the clear effect of the part of shooting objects after being optically enlarged by the additional lens can be directly seen on the preview interface, the optical zooming capability of the additional lens is seamlessly integrated into preview interaction, and definition and detail representation superior to digital enlargement purely depending on the image of the main camera can be provided. Therefore, the main body details of part of shooting objects are ensured, and the complete shooting scene is considered.
It should be appreciated that in embodiments of the present application, the input unit 2004 may include a graphics processing unit (GPU) 20041 and a microphone 20042; the graphics processor 20041 processes image data of still pictures or video obtained by an image capture device (e.g., a camera) in a video capture mode or an image capture mode. The display unit 2006 may include a display panel 20061, which may be configured in the form of a liquid crystal display, an organic light-emitting diode, or the like. The user input unit 2007 includes at least one of a touch panel 20071 and other input devices 20072. The touch panel 20071, also known as a touch screen, may include two parts, a touch detection device and a touch controller. Other input devices 20072 may include, but are not limited to, a physical keyboard, function keys (e.g., volume control keys, switch keys, etc.), a trackball, a mouse and a joystick, which are not described in detail herein. The memory 2009 may be used to store software programs as well as various data, including but not limited to application programs and an operating system. The processor 2010 may integrate an application processor and a modem processor, wherein the application processor primarily handles the operating system, user interfaces, applications, etc., and the modem processor primarily handles wireless communication. It will be appreciated that the modem processor may alternatively not be integrated into the processor 2010.
The memory 2009 may be used to store software programs as well as various data. The memory 2009 may mainly include a first storage area storing programs or instructions and a second storage area storing data, wherein the first storage area may store an operating system and application programs or instructions required for at least one function (such as a sound playing function, an image playing function, etc.). Further, the memory 2009 may include volatile memory or nonvolatile memory, or the memory 2009 may include both volatile and nonvolatile memory. The nonvolatile memory may be a read-only memory (ROM), a programmable ROM (PROM), an erasable PROM (EPROM), an electrically erasable PROM (EEPROM) or a flash memory. The volatile memory may be a random access memory (RAM), a static RAM (SRAM), a dynamic RAM (DRAM), a synchronous DRAM (SDRAM), a double data rate SDRAM (DDR SDRAM), an enhanced SDRAM (ESDRAM), a synch-link DRAM (SLDRAM) or a direct Rambus RAM (DRRAM). The memory 2009 in embodiments of the present application includes, but is not limited to, these and any other suitable types of memory.
Processor 2010 may include one or more processing units and, optionally, processor 2010 integrates an application processor primarily processing operations involving an operating system, user interface, application program, etc., and a modem processor primarily processing wireless communication signals such as a baseband processor. It will be appreciated that the modem processor described above may not be integrated into the processor 2010.
The embodiment of the application also provides a readable storage medium, on which a program or an instruction is stored, which when executed by a processor, implements each process of the above-mentioned interactive method embodiment, and can achieve the same technical effects, and in order to avoid repetition, the description is omitted here.
Wherein the processor is a processor in the electronic device described in the above embodiment. The readable storage medium includes computer readable storage medium such as computer readable memory ROM, random access memory RAM, magnetic or optical disk, etc.
The embodiment of the application further provides a chip, which comprises a processor and a communication interface, wherein the communication interface is coupled with the processor, and the processor is used for running programs or instructions to realize the processes of the above-mentioned interactive method embodiment, and can achieve the same technical effects, so that repetition is avoided, and the description is omitted here.
It should be understood that the chips referred to in the embodiments of the present application may also be referred to as system-on-chip chips, chip systems, or system-on-chip chips, etc.
Embodiments of the present application provide a computer program product stored in a storage medium, where the program product is executed by at least one processor to implement the respective processes of the above-mentioned interaction method embodiments, and achieve the same technical effects, and for avoiding repetition, a detailed description is omitted here.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises that element. Furthermore, it should be noted that the scope of the methods and apparatus in the embodiments of the present application is not limited to performing the functions in the order shown or discussed, but may also include performing the functions in a substantially simultaneous manner or in the reverse order depending on the functions involved; for example, the described methods may be performed in an order different from that described, and various steps may be added, omitted, or combined. Additionally, features described with reference to certain examples may be combined in other examples.
From the above description of the embodiments, it will be clear to those skilled in the art that the above-described embodiment method may be implemented by means of software plus a necessary general hardware platform, but of course may also be implemented by means of hardware, but in many cases the former is a preferred embodiment. Based on such understanding, the technical solution of the present application may be embodied essentially or in a part contributing to the prior art in the form of a computer software product stored in a storage medium (e.g. ROM/RAM, magnetic disk, optical disk) comprising instructions for causing a terminal (which may be a mobile phone, a computer, a server, or a network device, etc.) to perform the method according to the embodiments of the present application.
The embodiments of the present application have been described above with reference to the accompanying drawings, but the present application is not limited to the above-described embodiments, which are merely illustrative and not restrictive, and many forms may be made by those having ordinary skill in the art without departing from the spirit of the present application and the scope of the claims, which are to be protected by the present application.
Claims (11)
1. An interaction method performed by an electronic device, the electronic device being coupled to an additional lens, the method comprising:
displaying a shooting preview interface corresponding to a camera of the electronic equipment;
Receiving a first input to the shooting preview interface;
and responding to the first input, and magnifying and displaying part of shooting objects in the shooting preview interface, wherein the magnified and displayed image corresponding to the part of shooting objects is obtained based on the image acquired by the additional lens.
2. The method of claim 1, wherein the first input comprises a first sub-input and a second sub-input, and wherein the zooming in on the portion of the photographic subject in the photographic preview interface in response to the first input comprises:
determining a part of shooting objects in the shooting preview interface in response to the first sub-input;
receiving a second sub-input;
And responding to the second sub-input, and displaying the part of shooting objects in an enlarged mode according to the magnification indicated by the second sub-input.
3. The method according to claim 2, wherein in the case where the second sub-input is a pressing operation, the magnification is determined according to a pressing intensity of the pressing operation;
In the case where the second sub-input is a slide operation, the magnification is determined according to a slide trajectory of the slide operation;
in the case where the second sub-input is a zoom operation, the magnification is determined according to a zoom distance of the zoom operation.
4. The method according to claim 2, wherein the method further comprises:
And when the additional lens is an optical zoom lens, determining an optical zoom focal length according to the magnification, wherein the image corresponding to the part of shooting objects which are displayed in a magnified manner is obtained based on the image acquired by the additional lens with the optical zoom focal length.
5. The method according to claim 1, wherein the method further comprises:
Displaying functional icons of the additional lens on the shooting preview interface under the condition that the additional lens is effectively connected with the electronic equipment;
Receiving a second input of a function icon for the additional lens;
And controlling the additional lens to acquire an image in response to the second input.
6. The method of claim 1, wherein the method further comprises, in response to the first input, after magnifying the display of the portion of the photographic subject in the photographic preview interface:
The method comprises the steps of responding to shooting operation, shooting to obtain a dynamic image, wherein the dynamic image comprises a main body image area and a background image area, the image of the background image area is obtained based on a first video acquired by a camera of the electronic equipment, and the image of the main body image area is obtained based on a second video acquired by the additional lens;
receiving a third input to the dynamic image;
And responding to the third input, and displaying a dynamic image corresponding to a target image area according to the input parameters of the third input, wherein the target image area comprises at least one of the background image area and the main image area.
7. The method of claim 1, wherein the method further comprises, in response to the first input, after magnifying the display of the portion of the photographic subject in the photographic preview interface:
shooting to obtain a target file in response to shooting operation, wherein the target file comprises at least one of an image and a video;
receiving a fourth input under the condition of displaying an editing interface of the target file;
And responding to the fourth input, and updating display parameters of the part of shooting objects in the target file, wherein the display parameters comprise at least one of display outlines, filter parameters and display special effects.
8. The method of claim 7, wherein the target file comprises video, and wherein after the capturing the target file, the method further comprises receiving a fifth input;
Responding to the fifth input, displaying a video editing interface of the target file, wherein the video editing interface comprises a first video display area corresponding to the part of shooting objects, a second video display area corresponding to other shooting objects except the part of shooting objects and an editing area;
Receiving a sixth input;
And responding to the sixth input, editing the video corresponding to the first video display area or editing the video corresponding to the second video display area.
9. The method of claim 1, wherein the method further comprises, in response to the first input, after magnifying the display of the portion of the photographic subject in the photographic preview interface:
receiving a seventh input to the capture preview interface;
in response to the seventh input, the partial photographic subject of the enlarged display is updated.
10. An interactive apparatus, wherein the apparatus is coupled to an additional lens, the apparatus comprising:
the display module is used for displaying a shooting preview interface corresponding to a camera of the device;
The receiving module is used for receiving a first input of the shooting preview interface;
And the display module is also used for responding to the first input and displaying part of shooting objects in the shooting preview interface in an enlarged mode, wherein the image corresponding to the part of shooting objects in the enlarged mode is obtained based on the image acquired by the additional lens.
11. An electronic device comprising a processor and a memory storing a program or instructions executable on the processor, which when executed by the processor, implement the steps of the method of any one of claims 1 to 9.
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202511004107.8A CN120640124A (en) | 2025-07-21 | 2025-07-21 | Interaction method, device and electronic device |
Applications Claiming Priority (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202511004107.8A CN120640124A (en) | 2025-07-21 | 2025-07-21 | Interaction method, device and electronic device |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| CN120640124A true CN120640124A (en) | 2025-09-12 |
Family
ID=96969701
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN202511004107.8A Pending CN120640124A (en) | 2025-07-21 | 2025-07-21 | Interaction method, device and electronic device |
Country Status (1)
| Country | Link |
|---|---|
| CN (1) | CN120640124A (en) |
- 2025-07-21 CN CN202511004107.8A patent/CN120640124A/en active Pending
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| PB01 | Publication | ||
| PB01 | Publication | ||
| SE01 | Entry into force of request for substantive examination | ||
| SE01 | Entry into force of request for substantive examination |