
CN109005337B - Photographing method and terminal - Google Patents

Photographing method and terminal

Info

Publication number
CN109005337B
Authority
CN
China
Prior art keywords
screen
camera
terminal
target object
displaying
Prior art date
Legal status
Active
Application number
CN201810729882.3A
Other languages
Chinese (zh)
Other versions
CN109005337A (en)
Inventor
何佳佳
Current Assignee
Vivo Mobile Communication Co Ltd
Original Assignee
Vivo Mobile Communication Co Ltd
Priority date
Filing date
Publication date
Application filed by Vivo Mobile Communication Co Ltd
Priority to CN201810729882.3A
Publication of CN109005337A
Application granted
Publication of CN109005337B
Legal status: Active
Anticipated expiration

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/63Control of cameras or camera modules by using electronic viewfinders
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/62Control of parameters via user interfaces

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Human Computer Interaction (AREA)
  • Studio Devices (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The invention provides a photographing method and a terminal. The method is applied to a terminal that comprises a first screen and a second screen, where the first screen is positioned on a first surface of the terminal, the second screen is positioned on a second surface of the terminal facing away from the first surface, and a first camera is arranged on the first surface of the terminal. The method comprises the following steps: displaying preview images acquired by the first camera on the first screen and the second screen respectively; displaying, on the first screen and in a first proportion, an enlarged target object in the preview image acquired by the first camera; and if a first input for indicating photographing is received, outputting a photographed image. In the invention, the photographed person can adjust his or her photographing posture and/or facial expression according to the picture displayed on the first screen, thereby improving the quality of the photographed image and the photographing experience.

Description

Photographing method and terminal
Technical Field
The embodiment of the invention relates to the technical field of communication, in particular to a photographing method and a terminal.
Background
In recent years, with the popularization of terminals such as digital cameras and mobile phones, taking and keeping photographs with a terminal has become a habit for many people. In particular, with the rise of the mobile internet, people increasingly rely on terminals to record new experiences and share them on the network.
At present, a terminal includes a first face and a second face facing away from each other, where a first camera is arranged on the first face, and a screen and a second camera are arranged on the second face. When photographing with such a terminal, regardless of whether the first camera or the second camera is used, the preview image collected by the camera is displayed on the screen on the second face. Therefore, if the first camera is used for photographing, the photographed person cannot view the preview image displayed on that screen, and thus cannot adjust his or her photographing posture and/or facial expression according to the displayed picture, so both the quality of the photographed image and the photographing experience are poor.
Disclosure of Invention
The embodiment of the invention provides a photographing method and a terminal, aiming to solve the problem in the prior art that a photographed person cannot view the preview image acquired by the camera and displayed on the screen, and therefore cannot adjust the photographing posture and/or facial expression according to the displayed picture, so that the quality of the photographed image and the photographing experience are poor.
In order to solve the above problems, the invention is realized as follows:
In a first aspect, an embodiment of the present invention provides a photographing method applied to a terminal, where the terminal includes a first screen and a second screen, the first screen is located on a first surface of the terminal, the second screen is located on a second surface of the terminal facing away from the first surface, a first camera is provided on the first surface of the terminal, and a second camera is provided on the second surface of the terminal, and the method includes:
displaying preview images acquired by the first camera on the first screen and the second screen respectively;
displaying, on the first screen and in a first proportion, an enlarged target object in the preview image acquired by the first camera;
and if a first input for indicating photographing is received, outputting a photographed image.
In a second aspect, an embodiment of the present invention further provides a terminal, where the terminal includes a first screen and a second screen, the first screen is located on a first surface of the terminal, the second screen is located on a second surface of the terminal facing away from the first surface, a first camera is provided on the first surface of the terminal, and a second camera is provided on the second surface of the terminal, and the terminal further includes:
a first display module, configured to display preview images acquired by the first camera on the first screen and the second screen respectively;
a second display module, configured to display, on the first screen and in a first proportion, an enlarged target object in the preview image acquired by the first camera;
and an output module, configured to output a photographed image if a first input for indicating photographing is received.
In a third aspect, an embodiment of the present invention further provides a terminal, where the terminal includes a processor, a memory, and a computer program stored in the memory and being executable on the processor, and when the computer program is executed by the processor, the steps of the photographing method described above are implemented.
In a fourth aspect, the embodiment of the present invention further provides a computer-readable storage medium, where a computer program is stored on the computer-readable storage medium, and when being executed by a processor, the computer program implements the steps of the photographing method as described above.
Therefore, in the embodiment of the invention, the terminal displays the preview image acquired by the first camera on the first screen, which is arranged on the same surface of the terminal as the first camera, and further increases the display proportion of the target object on the first screen. In this way, the photographed person corresponding to the target object can adjust his or her photographing posture and/or facial expression according to the picture displayed on the first screen, so that the quality of the photographed image can be improved and the photographing experience can also be improved.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings needed to be used in the description of the embodiments of the present invention will be briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art that other drawings can be obtained according to these drawings without inventive exercise.
FIG. 1 is a flowchart of a photographing method according to an embodiment of the present invention;
fig. 2a is a schematic diagram of a terminal provided in an embodiment of the present invention;
fig. 2b is a first schematic diagram of the first screen of the terminal provided by the embodiment of the present invention;
fig. 2c is a schematic diagram of a second screen of the terminal provided by the embodiment of the present invention;
fig. 3a is a second schematic diagram of the first screen of the terminal according to the embodiment of the present invention;
fig. 3b is a third schematic diagram of the first screen of the terminal according to the embodiment of the present invention;
fig. 4 is a fourth schematic diagram of the first screen of the terminal according to the embodiment of the present invention;
fig. 5 is a first structural diagram of a terminal provided in an embodiment of the present invention;
fig. 6 is a second structural diagram of a terminal according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The photographing method of the embodiment of the invention is mainly applied to the terminal.
It should be noted that the terminal of the embodiment of the present invention includes a first screen and a second screen, where the first screen is located on a first face of the terminal and the second screen is located on a second face of the terminal facing away from the first face, that is, the first screen and the second screen are arranged in a folded manner; and a first camera is arranged on the first face of the terminal. Further, a second camera may be disposed on the second face of the terminal. That is, in the embodiment of the present invention, the terminal may be provided with only one camera, or with one camera on each of the first face and the second face.
It should be understood that the terminal of the embodiments of the present invention may be a folding dual-screen terminal. Therefore, in a photographing scene, the first screen and the second screen can be folded to realize the photographing method of the embodiment of the invention; in other application scenarios, such as a video viewing scenario, the first screen and the second screen may be tiled to increase the display area.
In particular, the terminal may be a mobile phone, a tablet personal computer, a laptop computer, a personal digital assistant (PDA), a mobile Internet device (MID), a wearable device, or the like.
The photographing method of the embodiment of the present invention is explained below.
Referring to fig. 1, fig. 1 is a flowchart of a photographing method according to an embodiment of the present invention. As shown in fig. 1, the photographing method of the present embodiment includes the following steps:
step 101, displaying the preview images acquired by the first camera on the first screen and the second screen respectively.
In this step, the terminal starts the first camera to shoot and displays the preview image collected by the first camera on both the first screen and the second screen, so that the photographed person can view the preview image collected by the first camera through the first screen, and the photographer can view the preview image collected by the first camera through the second screen, thereby improving the utilization rate of the terminal screens.
For ease of understanding, please refer to fig. 2a, 2b and 2c together.
As shown in fig. 2a, the terminal comprises a first face 21 and a second face 22 facing away from each other. As shown in fig. 2b, a first screen 211 and a first camera 212 may be disposed on the first side 21. As shown in fig. 2c, a second screen 221 and a second camera 222 may be disposed on the second side 22.
As shown in fig. 2b, the first screen 211 can display the preview image 23 collected by the first camera 212 in full screen, so that the display proportion of the preview image 23 on the first screen 211 can be increased and the display effect improved; as shown in fig. 2c, in order to facilitate the photographer in adjusting the photographing parameters or triggering the terminal to perform the photographing operation, the second screen 221 may display, in addition to the preview image 23 collected by the first camera 212, a function bar 24 including auxiliary function controls such as a setting control and a photographing control. As can be seen, the proportion at which the first screen 211 displays the preview image 23 may be greater than the proportion at which the second screen 221 displays the preview image 23.
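To make the dual-screen preview flow of step 101 concrete, the following is a minimal Kotlin sketch under assumed, illustrative types (Screen, Frame, and FunctionBar are not part of any real camera API): the same preview frame is pushed to both screens, full screen on the face of the photographed person and alongside a function bar on the photographer's side.

```kotlin
// Illustrative sketch only; Screen, Frame and FunctionBar are assumed types.
class Frame(val pixels: ByteArray, val width: Int, val height: Int)
class FunctionBar(val controls: List<String>)

interface Screen {
    fun show(frame: Frame, fullScreen: Boolean)
    fun showOverlay(bar: FunctionBar)
}

class DualScreenPreview(
    private val firstScreen: Screen,   // same face as the first camera, faces the photographed person
    private val secondScreen: Screen   // opposite face, faces the photographer
) {
    fun onPreviewFrame(frame: Frame) {
        // First screen 211: full-screen preview, larger display proportion (fig. 2b).
        firstScreen.show(frame, fullScreen = true)
        // Second screen 221: preview plus auxiliary controls such as setting and photographing (fig. 2c).
        secondScreen.show(frame, fullScreen = false)
        secondScreen.showOverlay(FunctionBar(listOf("setting", "photographing")))
    }
}
```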
And 102, displaying, on the first screen and in a first proportion, the enlarged target object in the preview image acquired by the first camera.
In this step, since the first screen displays the enlarged target object, it can be understood that the first proportion is larger than the original display proportion of the un-enlarged target object on the first screen. In some embodiments, the terminal may display the enlarged target object full screen on the first screen. The target object can be understood as the imaging of the photographed person in the preview image.
It should be understood that when the terminal displays the target object in an enlarged manner, it may enlarge and display the entire target object, or may enlarge and display only a target region of the target object, such as the face region or the hand region.
For ease of understanding, please refer to fig. 3a and 3b together. In fig. 2b and 2c, the preview image 23 captured by the first camera 212 includes a first object 231 and a second object 232. It is assumed that the target object is the first object 231 and the target region of the target object is the face region of the first object 231.
In a scene in which the entire target object is displayed in an enlarged manner, as shown in fig. 3a, the target object, i.e., the first object 231, is displayed on the first screen 211 in a first proportion; in a scene in which only a target region of the target object is displayed in an enlarged manner, as shown in fig. 3b, the target region, i.e., the face region of the first object 231, is displayed on the first screen 211 in a first proportion.
It should be understood that the target object may also be the second object 232 in the preview image 23, or the first object 231 and the second object 232 in the preview image 23; the target area may be other local areas such as a hand area of the target object, and may be determined specifically according to actual needs, which is not limited in the embodiment of the present invention.
Therefore, the display proportion of the entire target object, or of a target region of the target object, on the first screen can be increased, so that the photographed person corresponding to the target object can adjust his or her photographing posture and/or facial expression according to the picture displayed on the first screen, which improves the quality of the photographed image and the photographing efficiency.
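As a rough illustration of step 102, the sketch below (with assumed types Region and FirstScreenCanvas, not an actual API) enlarges either the whole target object or a selected target region such as the face and draws it on the first screen in the first proportion.

```kotlin
// Illustrative sketch only; Region and FirstScreenCanvas are assumed types.
data class Region(val x: Float, val y: Float, val w: Float, val h: Float) // normalized to 0..1 of the preview

class FirstScreenCanvas(val widthPx: Int, val heightPx: Int) {
    fun drawEnlarged(region: Region, firstProportion: Float) {
        // A real implementation would crop the preview buffer to `region` and upscale it;
        // here we only compute the destination size on the first screen.
        val destW = (region.w * widthPx * firstProportion).toInt()
        val destH = (region.h * heightPx * firstProportion).toInt()
        println("enlarged target drawn at ${destW}x${destH} px")
    }
}

fun displayEnlargedTarget(
    firstScreen: FirstScreenCanvas,
    wholeObject: Region,        // imaging of the photographed person in the preview image
    targetRegion: Region?,      // e.g. the face region or the hand region, if one was selected
    firstProportion: Float
) {
    // Enlarge the selected local region when there is one, otherwise the entire target object.
    firstScreen.drawEnlarged(targetRegion ?: wholeObject, firstProportion)
}
```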
And 103, outputting the photographed image if a first input for indicating photographing is received.
In a specific implementation, the first input may be represented as any one of a voice input, a gesture input, and a touch input. The first input may be performed by the photographed person or by the photographer, which may be determined according to actual needs and is not limited in the embodiment of the present invention.
For example, the photographed person may fix the terminal at a certain position, take a self-portrait with the first camera by performing the first input, and view the shooting effect in real time through the first screen, which is on the same face as the first camera.
In this step, the output photographed image may be generated based on the preview image acquired by the first camera, in which case the target object is displayed in the photographed image at its original display proportion; the output photographed image may also be generated based on the enlarged target object in the preview image, in which case the target object is displayed in the photographed image in the first proportion. This may be set according to actual needs, which is not limited in the embodiment of the present invention.
According to the photographing method of this embodiment, the terminal displays the preview image acquired by the first camera on the first screen, which is arranged on the same face of the terminal as the first camera, and further increases the display proportion of the target object on the first screen. Therefore, the photographed person corresponding to the target object can adjust his or her photographing posture and/or facial expression according to the picture displayed on the first screen, so that the quality of the photographed image can be improved and the photographing efficiency can be improved.
In this embodiment of the present invention, the terminal may be triggered in a variety of ways to display the enlarged target object on the first screen in the first proportion. Optionally, the displaying, on the first screen and in a first proportion, the enlarged target object in the preview image acquired by the first camera includes:
firstly, if a second input for indicating that a target object in the preview image acquired by the first camera is to be displayed in an enlarged manner is received, displaying the enlarged target object on the first screen in a first proportion;
or,
and secondly, determining an object whose line of sight faces the first camera in the preview image acquired by the first camera as the target object, and displaying the enlarged target object on the first screen in a first proportion.
In the first mode, the user may trigger the terminal to display the enlarged target object on the first screen by performing the second input. Therefore, in the first mode the terminal enlarges and displays the target object passively.
In a specific implementation, the user may be the photographed person or the photographer, and the second input may be represented as a gesture input or a voice input.
For example, in a scenario where the second input is represented as a gesture input, the terminal may determine the object making the gesture as the target object. The terminal may pre-store a correspondence between gestures and regions, where the regions may include the global region of the target object and the local regions of the target object, and the global region of the target object can be understood as the entire target object. In this correspondence, a first gesture corresponds to the global region of the target object, a second gesture corresponds to the face region of the target object, a third gesture corresponds to the hand region of the target object, and so on.
In a specific implementation, if image recognition detects that an object in the preview image is making the first gesture, indicating that the photographed person corresponding to the object wants to observe his or her global region, the terminal can determine the object as the target object and enlarge and display the entire target object on the first screen; if the second gesture is recognized in the preview image, indicating that the photographed person corresponding to the object wants to observe his or her face region, the terminal can determine the object as the target object and enlarge and display the face region of the target object on the first screen.
Of course, in a scenario where the second input is represented as a gesture input, the terminal may also pre-store a correspondence between gestures and objects, so as to determine the object corresponding to a gesture as the target object. Assume that, in the correspondence between gestures and objects, gesture A corresponds to the first object from left to right in the preview image, gesture B corresponds to the second object from left to right in the preview image, and so on; and gesture a corresponds to the face region of an object, gesture b corresponds to the hand region of an object, and so on.
In a specific implementation, if an object making gesture A is recognized in the preview image, the terminal can determine the first object, which corresponds to gesture A, as the target object and enlarge and display the entire first object on the first screen; if gesture B and gesture a are recognized in the preview image, the terminal can determine the second object, which corresponds to gesture B, as the target object and enlarge and display the face region of the second object on the first screen.
In a scenario where the second input is represented as a voice input, the terminal may pre-store a correspondence between keywords and objects, so as to determine the object corresponding to a keyword as the target object. Assume that, in the correspondence between keywords and objects, the keyword "one" corresponds to the first object from left to right in the preview image, the keyword "two" corresponds to the second object from left to right in the preview image, and so on; and the keyword "face" corresponds to the face region of an object, the keyword "hand" corresponds to the hand region of an object, and so on.
In a specific implementation, if the keyword "one" is recognized by voice, the terminal can determine the first object, which corresponds to the keyword "one", as the target object and enlarge and display the entire first object on the first screen; if the keywords "two" and "face" are recognized, the terminal can determine the second object, which corresponds to the keyword "two", as the target object and enlarge and display the face region of the second object on the first screen.
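A minimal sketch of the pre-stored correspondences described above is given below; the table contents and the resolveTarget helper are assumptions about one possible layout, not the claimed implementation.

```kotlin
// Illustrative sketch only; the maps and resolveTarget are assumed, not a real API.
enum class TargetRegion { GLOBAL, FACE, HAND }

// Gestures "A"/"B" select the first/second object from left to right; "a"/"b" select a region.
val gestureToObjectIndex = mapOf("A" to 0, "B" to 1)
val gestureToRegion = mapOf("a" to TargetRegion.FACE, "b" to TargetRegion.HAND)

// Voice keywords "one"/"two" select an object; "face"/"hand" select a region.
val keywordToObjectIndex = mapOf("one" to 0, "two" to 1)
val keywordToRegion = mapOf("face" to TargetRegion.FACE, "hand" to TargetRegion.HAND)

// Resolve recognized gesture or keyword tokens to (object index, region to enlarge).
fun resolveTarget(tokens: List<String>): Pair<Int, TargetRegion>? {
    val index = tokens.firstNotNullOfOrNull { gestureToObjectIndex[it] ?: keywordToObjectIndex[it] }
        ?: return null                      // no object selected, nothing to enlarge
    val region = tokens.firstNotNullOfOrNull { gestureToRegion[it] ?: keywordToRegion[it] }
        ?: TargetRegion.GLOBAL              // default: enlarge the entire target object
    return index to region
}

// Example: resolveTarget(listOf("two", "face")) would yield Pair(1, FACE).
```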
With the first mode, the terminal determines the target object based on the second input, so that the photographed person corresponding to the target object can adjust his or her photographing posture and/or facial expression based on the picture displayed on the first screen, and photographed persons corresponding to other objects can prompt the photographed person corresponding to the target object to adjust the photographing posture and/or facial expression based on the picture displayed on the first screen, especially in a scene where the photographed person corresponding to the target object is not facing the first camera, so that the quality of the photographed image and the filming rate can be improved.
In the second mode, after detecting an object whose line of sight is directed at the first camera, the terminal may determine that object as the target object and display the enlarged target object on the first screen. Therefore, in the second mode the terminal enlarges and displays the target object actively.
With the second mode, the terminal determines the object whose line of sight faces the first camera as the target object, which avoids the terminal mistakenly enlarging and displaying the image of a passerby who wanders into the frame, and also reduces the processing burden of the terminal.
In addition, compared with the first mode, in the second mode the photographed person does not need to perform the second input and only needs to look at the first camera to trigger the terminal to enlarge and display his or her image in the preview image, so the operation is simplified.
Compared with the second mode, the first mode determines the target object based on the second input, so the flexibility of determining the target object can be improved, and the enlarged target object displayed on the first screen better matches the expectation of the photographed person.
It should be noted that, in the second mode, the determining, as the target object, an object whose line of sight faces the first camera in the preview image acquired by the first camera, and displaying the enlarged target object on the first screen in the first proportion may also be represented as:
determining an object whose line of sight faces the first camera in the preview image acquired by the first camera as the target object;
and if a third input for indicating that the target object is to be displayed in an enlarged manner is received, displaying the enlarged target object on the first screen in a first proportion.
Therefore, the flexibility of display can be improved, and the picture displayed on the first screen better matches the user's expectation.
In this embodiment of the present invention, optionally, the number of target objects is N, where N is an integer greater than 1. For this application scene, the terminal may display the N enlarged target objects on the first screen one after another, that is, only one enlarged target object is displayed on the first screen at a time; or the N enlarged target objects may be displayed on the first screen in a split-screen manner, that is, the first screen displays the N enlarged target objects simultaneously. The details are as follows.
The displaying, on the first screen and in a first proportion, the enlarged target object in the preview image acquired by the first camera includes:
firstly, sequentially switching and displaying, on the first screen and in a first proportion, the N enlarged target objects in the preview image acquired by the first camera;
or,
and secondly, displaying, on the first screen and in a first proportion, the N enlarged target objects in the preview image acquired by the first camera in a split-screen manner.
In the first mode, the sequentially switching and displaying, on the first screen and in a first proportion, the N enlarged target objects in the preview image acquired by the first camera may specifically be as follows:
displaying an i-th target object of the N enlarged target objects on the first screen in a first proportion;
when it is detected that the display duration of the i-th target object exceeds a preset duration, or when a switching input is received, displaying an (i+1)-th target object of the N enlarged target objects on the first screen in a first proportion;
wherein i is an integer greater than 0 and less than N.
In this embodiment, the display order of the N target objects may be determined based on the positions of the N target objects in the preview image; for example, the terminal may enlarge and display the N target objects in the preview image sequentially from left to right. The display order may also be determined based on the switching input; for example, the terminal may determine the target object currently displayed enlarged on the first screen as the first target object of the enlarged display, and then determine the target object indicated by the switching input as the next target object to be displayed enlarged.
For a scene in which the (i+1)-th target object of the N enlarged target objects is displayed on the first screen in the first proportion when it is detected that the display duration of the i-th target object exceeds the preset duration, the terminal may automatically switch the target object enlarged and displayed on the first screen after determining the enlarged display order of the N target objects, so that the operation can be simplified. The preset duration may be set according to actual requirements, for example 5 seconds, and is not particularly limited.
For a scene in which the (i+1)-th target object of the N enlarged target objects is displayed on the first screen in the first proportion when a switching input is received, the terminal does not need to determine the enlarged display order of the N target objects in advance, and may determine the next target object to be displayed enlarged based on the indication of the switching input, so that the flexibility of determining the enlarged display order of the N target objects can be improved.
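The following coroutine-based sketch illustrates mode one under stated assumptions (kotlinx.coroutines, a Channel carrying switching inputs, and a showEnlarged callback are illustrative choices, not the patented implementation): each of the N targets stays enlarged until the preset duration elapses or a switching input arrives, whichever comes first.

```kotlin
// Illustrative sketch only; the names and the coroutine structure are assumptions.
import kotlinx.coroutines.channels.Channel
import kotlinx.coroutines.withTimeoutOrNull

suspend fun cycleEnlargedTargets(
    targets: List<String>,             // stand-ins for the N target objects, already ordered
    switchInputs: Channel<Unit>,       // emits whenever the user performs a switching input
    presetMillis: Long = 5_000L,       // preset duration, e.g. 5 seconds
    showEnlarged: (String) -> Unit     // draws one enlarged target on the first screen in the first proportion
) {
    for (target in targets) {
        showEnlarged(target)
        // Advance on whichever comes first: the preset duration elapsing or a switching input.
        withTimeoutOrNull(presetMillis) { switchInputs.receive() }
    }
}
```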
In the second mode, displaying, on the first screen and in a first proportion, the N enlarged target objects in the preview image acquired by the first camera in a split-screen manner may be understood as follows: the N target objects displayed in the split screens together occupy the first proportion of the first screen.
For ease of understanding, please refer to FIG. 4. In fig. 4, the target objects include a first object 231 and a second object 232, and the first screen 211 displays the enlarged first object 231 and second object 232 split up and down.
It should be understood that the split-screen display manner and the proportion of each target object on the first screen in fig. 4 are only examples; in other embodiments, the split-screen display may be a left-right split screen, and the like, and the proportions of the target objects on the first screen may be equal or unequal, which may be determined according to actual needs and is not limited in this embodiment of the present invention.
Compared with the first mode, in which the N enlarged target objects are sequentially switched and displayed, the second mode can display the N enlarged target objects on the first screen simultaneously, so that the display efficiency can be improved and switching operations can be reduced.
In addition, compared with the second mode, in the first mode the first screen displays only one of the N enlarged target objects at a time; therefore, the display proportion of the target object on the first screen is larger than in the second mode, so more details of the target object can be shown, which improves the accuracy with which the photographed person corresponding to the target object adjusts the photographing posture or facial expression.
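For mode two, a small sketch of one possible split-screen layout is shown below (an equal top-to-bottom split as in fig. 4; the SubScreen type and the equal shares are assumptions, since the text notes the shares may also be unequal).

```kotlin
// Illustrative sketch only; SubScreen and the equal split are assumptions.
data class SubScreen(val top: Float, val bottom: Float) // fractions of the first screen's height

// Divide the first screen into N horizontal bands, one enlarged target object per band.
fun splitScreenLayout(n: Int): List<SubScreen> =
    List(n) { i -> SubScreen(top = i.toFloat() / n, bottom = (i + 1).toFloat() / n) }

// Example: splitScreenLayout(2) yields [SubScreen(0.0, 0.5), SubScreen(0.5, 1.0)], as in fig. 4.
```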
It should be understood that, for multiple local regions of the same target object, that is, a scene in which the target object includes K regions and K is an integer greater than 1, the terminal may display the local regions using the same display methods as those for the enlarged display of N target objects, which may be embodied as:
the displaying the enlarged target object on the first screen in a first proportion includes:
sequentially switching and displaying, on the first screen and in a first proportion, the K enlarged regions of the target object; or,
displaying, on the first screen and in a first proportion, the K enlarged regions of the target object in a split-screen manner.
The implementation of this manner is similar to the implementation principle of displaying N target objects; for details, reference may be made to the above description of displaying N target objects, which is not repeated here.
In the embodiment of the invention, when the terminal receives the first input for instructing photographing, the content displayed on the first screen may be the enlarged target object or the un-enlarged preview image.
Optionally, after the enlarged target object in the preview image collected by the first camera is displayed on the first screen in the first proportion and before the photographed image is output, the method further includes:
if a third input is received, displaying, on the first screen and in a second proportion, the target object in the preview image acquired by the first camera;
wherein the proportional value of the second proportion is smaller than the proportional value of the first proportion. That is, when the target object is displayed on the first screen in the second proportion, its display size is smaller than when it is displayed on the first screen in the first proportion. Illustratively, the first proportion may be 1:1, i.e., the target object in the preview image is displayed full screen on the first screen; the second proportion may be 1:3, i.e., the display size of the target object occupies only 1/4 of the first screen.
In this embodiment, when the terminal receives the third input, the content displayed on the first screen may be the preview image without enlargement processing. The second proportion may be the original display proportion of the target object in the preview image.
In a specific implementation, the third input may be a gesture input or a voice input, which may be determined according to actual needs, and this is not limited in the embodiment of the present invention.
In this way, the photographed person can view the display effect of the un-enlarged preview image, so as to determine whether the photographing posture and/or facial expression needs to be further adjusted, thereby improving the quality of the photographed image and the filming rate.
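A brief sketch of this scale switch follows; the DisplayState type and the handler name are assumptions for illustration, and the proportions are passed in rather than prescribed.

```kotlin
// Illustrative sketch only; DisplayState and onThirdInput are assumed names.
data class DisplayState(val proportion: Float)

// On the third input, fall back from the first proportion to the smaller second proportion,
// e.g. from a full-screen target object back to its original display proportion in the preview.
fun onThirdInput(firstProportion: Float, secondProportion: Float): DisplayState {
    require(secondProportion < firstProportion) { "the second proportion must be smaller than the first" }
    return DisplayState(proportion = secondProportion)
}
```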
In this embodiment of the present invention, in order to improve the display effect of the photographed image, optionally, before the preview images collected by the first camera are respectively displayed on the first screen and the second screen, the method further includes:
displaying a reference image on the first screen;
the displaying the preview images collected by the first camera on the first screen and the second screen respectively comprises:
and if a fourth input is received, displaying the preview image acquired by the first camera on the first screen, or displaying the reference image and the preview image acquired by the first camera on the first screen in a split screen mode.
In this embodiment, before, while or after starting the first camera, the terminal displays the reference image on the first screen in advance, so as to present an image that the photographed person can refer to for the photographing posture and the like.
If the fourth input is received, the terminal can switch the display content of the first screen to the preview image acquired by the first camera, so that the photographed person can adjust his or her photographing posture and/or facial expression based on his or her appearance on the first screen, thereby improving the quality of the photographed image.
Or the terminal can switch the display content of the first screen to the reference image together with the preview image acquired by the first camera; compared with displaying only the preview image, the photographed person can adjust his or her photographing posture and/or facial expression based on the displayed reference image, so the adjustment is more accurate and the photographing effect better matches the display effect of the reference image.
It should be understood that, in a scene where the reference image and the preview image captured by the first camera are displayed on the first screen in a split-screen manner, the reference image may remain displayed on the first screen even when the terminal displays the enlarged target object.
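One way to picture the reference-image flow is the small state sketch below (the FirstScreenMode enum and the onFourthInput helper are assumptions for illustration): before the fourth input the first screen shows only the reference image, and the fourth input switches it either to the preview alone or to the reference image and preview displayed side by side.

```kotlin
// Illustrative sketch only; FirstScreenMode and onFourthInput are assumed names.
enum class FirstScreenMode { REFERENCE_ONLY, PREVIEW_ONLY, REFERENCE_AND_PREVIEW }

// Before the fourth input the first screen stays in REFERENCE_ONLY mode.
fun onFourthInput(showReferenceAlongsidePreview: Boolean): FirstScreenMode =
    if (showReferenceAlongsidePreview) FirstScreenMode.REFERENCE_AND_PREVIEW
    else FirstScreenMode.PREVIEW_ONLY
```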
In the embodiment of the invention, if the terminal starts a target camera to take a picture, the terminal displays the enlarged target object in the preview image acquired by the target camera on the target screen located on the same face as the target camera. The target camera may be the first camera or the second camera. Specifically, if the target camera is the first camera, the target screen is the first screen; and if the target camera is the second camera, the target screen is the second screen.
Therefore, the photographed person corresponding to the target object can adjust the photographing posture and/or facial expression of the person according to the display picture of the target screen, and the quality of the photographed image can be improved.
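The rule in the two paragraphs above reduces to a simple mapping, sketched here with assumed enums (not a real API): the enlarged target object is always rendered on the screen that shares a face with the active camera.

```kotlin
// Illustrative sketch only; the enums and function name are assumptions.
enum class TargetCamera { FIRST, SECOND }
enum class TargetScreen { FIRST, SECOND }

// The target screen is the screen located on the same face as the target camera.
fun targetScreenFor(camera: TargetCamera): TargetScreen =
    when (camera) {
        TargetCamera.FIRST -> TargetScreen.FIRST
        TargetCamera.SECOND -> TargetScreen.SECOND
    }
```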
It should be noted that the content of the preview image displayed on the second screen may be kept consistent with the content displayed on the first screen, so that the photographer can also view the picture on the second screen and help the photographed person adjust the photographing posture and/or facial expression, thereby improving the display effect of the photographed image and increasing the filming rate. Of course, the terminal may also leave the preview image displayed on the second screen unprocessed, which may be determined according to actual needs and is not limited in the embodiment of the present invention.
In addition, various optional implementations described in the embodiments of the present invention may be implemented in combination with each other or implemented separately, and the embodiments of the present invention are not limited thereto.
Referring to fig. 5, fig. 5 is a diagram illustrating a structure of a terminal according to an embodiment of the present invention. The terminal 500 includes a first screen and a second screen, the first screen is located on a first face of the terminal 500, the second screen is located on a second face of the terminal 500 facing away from the first face, a first camera is provided on the first face of the terminal 500, and a second camera is provided on the second face of the terminal 500. As shown in fig. 5, the terminal 500 includes:
a first display module 501, configured to display preview images acquired by the first camera on the first screen and the second screen respectively;
a second display module 502, configured to display, on the first screen and in a first proportion, the enlarged target object in the preview image acquired by the first camera;
the output module 503 is configured to output the photographed image if a first input for instructing photographing is received.
The modules included in the terminal 500 and the units included in each module are described below with reference to fig. 5.
Optionally, the second display module 502 is specifically configured to:
if a second input for indicating that the target object in the preview image collected by the first camera is displayed in an enlarged mode is received, displaying the enlarged target object on the first screen in a first proportion;
or,
and determining an object whose line of sight faces the first camera in the preview image acquired by the first camera as the target object, and displaying the enlarged target object on the first screen in a first proportion.
Optionally, the number of the target objects is N, where N is an integer greater than 1;
the second display module 502 is specifically configured to:
sequentially switching and displaying, on the first screen and in a first proportion, the N enlarged target objects in the preview image acquired by the first camera;
or,
and displaying, on the first screen and in a first proportion, the N enlarged target objects in the preview image acquired by the first camera in a split-screen manner.
Optionally, the terminal 500 further includes:
the third display module is used for displaying the target object in the preview image acquired by the first camera on the first screen in a second proportion after the enlarged target object in the preview image acquired by the first camera is displayed on the first screen in a first proportion and before a photographed image is output;
wherein the proportional value of the second proportion is smaller than the proportional value of the first proportion.
Optionally, the terminal 500 further includes:
the fourth display module is used for displaying a reference image on the first screen before the preview image acquired by the first camera is displayed on the first screen and the second screen respectively;
the first display module 501 is specifically configured to:
and if a fourth input is received, displaying the preview image acquired by the first camera on the first screen, or displaying the reference image and the preview image acquired by the first camera on the first screen in a split screen mode.
The terminal 500 can implement each process in the method embodiment of the present invention and achieve the same beneficial effects, and is not described herein again to avoid repetition.
Referring to fig. 6, fig. 6 is a second structural diagram of a terminal according to an embodiment of the present invention, which may serve as a hardware structural diagram of a terminal for implementing various embodiments of the present invention. The terminal 600 includes a first screen and a second screen, the first screen is located on a first face of the terminal 600, the second screen is located on a second face of the terminal 600 facing away from the first face, a first camera is provided on the first face of the terminal 600, and a second camera is provided on the second face of the terminal 600.
As shown in fig. 6, terminal 600 includes, but is not limited to: a radio frequency unit 601, a network module 602, an audio output unit 603, an input unit 604, a sensor 605, a display unit 606, a user input unit 607, an interface unit 608, a memory 609, a processor 610, and a power supply 611. Those skilled in the art will appreciate that the terminal configuration shown in fig. 6 is not intended to be limiting, and that the terminal may include more or fewer components than shown, or some components may be combined, or a different arrangement of components. In the embodiment of the present invention, the terminal includes, but is not limited to, a mobile phone, a tablet computer, a notebook computer, a palm computer, a vehicle-mounted terminal, a wearable device, a pedometer, and the like.
Wherein, the processor 610 is configured to:
displaying preview images acquired by the first camera on the first screen and the second screen respectively;
displaying, on the first screen and in a first proportion, an enlarged target object in the preview image acquired by the first camera;
and if a first input for indicating photographing is received, outputting a photographed image.
Optionally, the processor 610 is further configured to:
if a second input for indicating that the target object in the preview image collected by the first camera is displayed in an enlarged mode is received, displaying the enlarged target object on the first screen in a first proportion;
or,
and determining an object whose line of sight faces the first camera in the preview image acquired by the first camera as the target object, and displaying the enlarged target object on the first screen in a first proportion.
Optionally, the number of the target objects is N, where N is an integer greater than 1;
a processor 610, further configured to:
sequentially switching and displaying, on the first screen and in a first proportion, the N enlarged target objects in the preview image acquired by the first camera;
or,
and displaying, on the first screen and in a first proportion, the N enlarged target objects in the preview image acquired by the first camera in a split-screen manner.
Optionally, the processor 610 is further configured to: if a third input is received, displaying a target object in the preview image acquired by the first camera on the first screen in a second proportion;
wherein the proportional value of the second proportion is smaller than the proportional value of the first proportion.
Optionally, the processor 610 is further configured to: displaying a reference image on the first screen;
and if a fourth input is received, displaying the preview image acquired by the first camera on the first screen, or displaying the reference image and the preview image acquired by the first camera on the first screen in a split screen mode.
It should be noted that, in this embodiment, the terminal 600 may implement each process in the method embodiment of the present invention and achieve the same beneficial effects, and for avoiding repetition, details are not described here.
It should be understood that, in the embodiment of the present invention, the radio frequency unit 601 may be used for receiving and sending signals during a message transceiving process or a call process; specifically, it receives downlink data from a base station and forwards the downlink data to the processor 610 for processing, and it also transmits uplink data to the base station. In general, the radio frequency unit 601 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier, a duplexer, and the like. Further, the radio frequency unit 601 may also communicate with a network and other devices through a wireless communication system.
The terminal provides wireless broadband internet access to the user through the network module 602, such as helping the user send and receive e-mails, browse web pages, and access streaming media.
The audio output unit 603 may convert audio data received by the radio frequency unit 601 or the network module 602 or stored in the memory 609 into an audio signal and output as sound. Also, the audio output unit 603 can also provide audio output related to a specific function performed by the terminal 600 (e.g., a call signal reception sound, a message reception sound, etc.). The audio output unit 603 includes a speaker, a buzzer, a receiver, and the like.
The input unit 604 is used to receive audio or video signals. The input unit 604 may include a graphics processing unit (GPU) 6041 and a microphone 6042, and the graphics processor 6041 processes image data of a still picture or video obtained by an image capturing apparatus (such as a camera) in a video capture mode or an image capture mode. The processed image frames may be displayed on the display unit 606. The image frames processed by the graphics processor 6041 may be stored in the memory 609 (or other storage medium) or transmitted via the radio frequency unit 601 or the network module 602. The microphone 6042 can receive sound and process it into audio data. In the phone call mode, the processed audio data may be converted into a format that can be transmitted to a mobile communication base station via the radio frequency unit 601 and output.
The terminal 600 also includes at least one sensor 605, such as a light sensor, motion sensor, and other sensors. Specifically, the light sensor includes an ambient light sensor that can adjust the brightness of the display panel 6061 according to the brightness of ambient light, and a proximity sensor that can turn off the display panel 6061 and/or the backlight when the terminal 600 is moved to the ear. As one of the motion sensors, the accelerometer sensor can detect the magnitude of acceleration in each direction (generally three axes), detect the magnitude and direction of gravity when stationary, and can be used to identify the terminal posture (such as horizontal and vertical screen switching, related games, magnetometer posture calibration), vibration identification related functions (such as pedometer, tapping), and the like; the sensors 605 may also include fingerprint sensors, pressure sensors, iris sensors, molecular sensors, gyroscopes, barometers, hygrometers, thermometers, infrared sensors, etc., which are not described in detail herein.
The display unit 606 is used to display information input by the user or information provided to the user. The Display unit 606 may include a Display panel 6061, and the Display panel 6061 may be configured by a Liquid Crystal Display (LCD), an Organic Light-Emitting Diode (OLED), or the like.
The user input unit 607 may be used to receive input numeric or character information and generate key signal inputs related to user settings and function control of the terminal. Specifically, the user input unit 607 includes a touch panel 6071 and other input devices 6072. Touch panel 6071, also referred to as a touch screen, may collect touch operations by a user on or near it (e.g., operations by a user on or near touch panel 6071 using a finger, stylus, or any suitable object or accessory). The touch panel 6071 may include two parts of a touch detection device and a touch controller. The touch detection device detects the touch direction of a user, detects a signal brought by touch operation and transmits the signal to the touch controller; the touch controller receives touch information from the touch sensing device, converts the touch information into touch point coordinates, sends the touch point coordinates to the processor 610, receives a command from the processor 610, and executes the command. In addition, the touch panel 6071 can be implemented by various types such as a resistive type, a capacitive type, an infrared ray, and a surface acoustic wave. The user input unit 607 may include other input devices 6072 in addition to the touch panel 6071. Specifically, the other input devices 6072 may include, but are not limited to, a physical keyboard, function keys (such as volume control keys, switch keys, etc.), a track ball, a mouse, and a joystick, which are not described herein again.
Further, the touch panel 6071 can be overlaid on the display panel 6061, and when the touch panel 6071 detects a touch operation on or near the touch panel 6071, the touch operation is transmitted to the processor 610 to determine the type of the touch event, and then the processor 610 provides a corresponding visual output on the display panel 6061 according to the type of the touch event. Although in fig. 6, the touch panel 6071 and the display panel 6061 are two independent components to realize the input and output functions of the terminal, in some embodiments, the touch panel 6071 and the display panel 6061 may be integrated to realize the input and output functions of the terminal, and this is not limited here.
The interface unit 608 is an interface for connecting an external device to the terminal 600. For example, the external device may include a wired or wireless headset port, an external power supply (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device having an identification module, an audio input/output (I/O) port, a video I/O port, an earphone port, and the like. The interface unit 608 may be used to receive input (e.g., data information, power, etc.) from an external device and transmit the received input to one or more elements within the terminal 600 or may be used to transmit data between the terminal 600 and an external device.
The memory 609 may be used to store software programs as well as various data. The memory 609 may mainly include a program storage area and a data storage area, wherein the program storage area may store an operating system, an application program required by at least one function (such as a sound playing function, an image playing function, etc.), and the like; the storage data area may store data (such as audio data, a phonebook, etc.) created according to the use of the cellular phone, and the like. Further, the memory 609 may include high speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other volatile solid state storage device.
The processor 610 is a control center of the terminal, connects various parts of the entire terminal using various interfaces and lines, and performs various functions of the terminal and processes data by operating or executing software programs and/or modules stored in the memory 609 and calling data stored in the memory 609, thereby performing overall monitoring of the terminal. Processor 610 may include one or more processing units; preferably, the processor 610 may integrate an application processor, which mainly handles operating systems, user interfaces, application programs, etc., and a modem processor, which mainly handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into the processor 610.
The terminal 600 may further include a power supply 611 (e.g., a battery) for supplying power to the various components, and preferably, the power supply 611 is logically connected to the processor 610 via a power management system, so that functions of managing charging, discharging, and power consumption are performed via the power management system.
In addition, the terminal 600 includes some functional modules that are not shown, and are not described in detail herein.
Preferably, an embodiment of the present invention further provides a terminal, including a processor 610, a memory 609, and a computer program stored in the memory 609 and capable of running on the processor 610, where the computer program is executed by the processor 610 to implement each process of the foregoing photographing method embodiment, and can achieve the same technical effect, and in order to avoid repetition, details are not described here again.
The embodiment of the present invention further provides a computer-readable storage medium, where a computer program is stored on the computer-readable storage medium, and when the computer program is executed by a processor, the computer program implements the processes of the above-mentioned photographing method embodiment, and can achieve the same technical effects, and in order to avoid repetition, the descriptions thereof are omitted here. The computer-readable storage medium may be a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present invention may be embodied in the form of a software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal (such as a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the method according to the embodiments of the present invention.
While the present invention has been described with reference to the embodiments shown in the drawings, the invention is not limited to these embodiments, which are illustrative rather than restrictive; those skilled in the art may make various changes and modifications without departing from the spirit and scope of the invention as defined in the appended claims.

Claims (7)

1. A photographing method applied to a terminal, wherein the terminal comprises a first screen and a second screen, the first screen is located on a first face of the terminal, the second screen is located on a second face of the terminal facing away from the first face, and a first camera is arranged on the first face of the terminal, the method comprising:
displaying a preview image acquired by the first camera on the first screen and the second screen respectively;
displaying, on the first screen, a target object in the preview image acquired by the first camera enlarged at a first proportion;
if a first input indicating photographing is received, outputting a photographed image, wherein the photographed image is generated based on the enlarged target object in the preview image, and the target object in the preview image is displayed at the first proportion in the photographed image;
wherein the displaying, on the first screen, the target object in the preview image acquired by the first camera enlarged at the first proportion comprises:
if a second input indicating that the target object in the preview image acquired by the first camera is to be displayed enlarged is received, displaying the enlarged target object on the first screen at the first proportion, wherein the region of the target object to be enlarged is determined according to a correspondence, prestored in the terminal, between gesture inputs and regions of the target object;
or,
determining, as the target object, an object in the preview image acquired by the first camera whose line of sight faces the first camera, and displaying the enlarged target object on the first screen at the first proportion;
and wherein, in a case where the number of target objects is N and N is an integer greater than 1, the displaying, on the first screen, the target objects in the preview image acquired by the first camera enlarged at the first proportion comprises:
sequentially switching among and displaying, on the first screen, the N target objects in the preview image acquired by the first camera, each enlarged at the first proportion;
or,
displaying, on the first screen in a split-screen mode, the N target objects in the preview image acquired by the first camera, each enlarged at the first proportion.
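As an informal illustration only (outside the claims), the following minimal Kotlin sketch walks through the gaze-based branch of claim 1. DualScreenPreview, Region, Face, and the two screen callbacks are hypothetical names invented for this sketch; they are not taken from the patent or from any real camera API.

```kotlin
// Hypothetical types standing in for the preview frame and detected subjects.
data class Region(val x: Int, val y: Int, val width: Int, val height: Int)
data class Face(val region: Region, val gazeTowardCamera: Boolean)

class DualScreenPreview(
    // Draws a region of the preview on the subject-facing first screen, enlarged by `proportion`.
    private val firstScreen: (Region, Float) -> Unit,
    // Draws the full preview on the photographer-facing second screen.
    private val secondScreen: (Region) -> Unit
) {
    // Both screens show the preview; the first screen additionally shows the
    // target object enlarged at the first proportion (claim 1).
    fun onPreviewFrame(frame: Region, faces: List<Face>, firstProportion: Float) {
        secondScreen(frame)
        firstScreen(frame, 1.0f)

        // Gaze-based branch: pick objects whose line of sight faces the camera.
        val targets = faces.filter { it.gazeTowardCamera }.map { it.region }
        when {
            targets.isEmpty() -> return
            targets.size == 1 -> firstScreen(targets[0], firstProportion)
            // N > 1 targets: show each enlarged target in turn; the split-screen
            // layout is the claimed alternative and is not sketched here.
            else -> targets.forEach { firstScreen(it, firstProportion) }
        }
    }
}
```

The gesture-based branch, in which the enlarged region is chosen from a gesture-to-region correspondence prestored in the terminal, is omitted from the sketch for brevity.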
2. The method according to claim 1, wherein, after the displaying, on the first screen, the target object in the preview image acquired by the first camera enlarged at the first proportion, and before the outputting the photographed image, the method further comprises:
if a third input is received, displaying, on the first screen, the target object in the preview image acquired by the first camera at a second proportion;
wherein a value of the second proportion is smaller than a value of the first proportion.
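A brief, hypothetical sketch of the claim 2 behaviour follows; ProportionController and the redraw callback are names made up for illustration and do not appear in the patent.

```kotlin
// Switches the first screen from the first (larger) proportion back to a
// second, smaller proportion when the third input arrives.
class ProportionController(
    private val firstProportion: Float,
    private val secondProportion: Float,
    private val redraw: (Float) -> Unit   // re-renders the target at the given proportion
) {
    init {
        require(secondProportion < firstProportion) {
            "the second proportion must be smaller than the first"
        }
    }

    fun onThirdInput() = redraw(secondProportion)
}
```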
3. The method according to claim 1, wherein, before the displaying the preview image acquired by the first camera on the first screen and the second screen respectively, the method further comprises:
displaying a reference image on the first screen;
and wherein the displaying the preview image acquired by the first camera on the first screen and the second screen respectively comprises:
if a fourth input is received, displaying the preview image acquired by the first camera on the first screen, or displaying the reference image and the preview image acquired by the first camera on the first screen in a split-screen mode.
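The reference-image behaviour of claim 3 could look roughly like the sketch below; FirstScreenController, Image, and the draw callback are hypothetical names, and the split-screen layout is simply modelled as drawing two images.

```kotlin
data class Image(val name: String)   // placeholder for a real bitmap type

class FirstScreenController(private val draw: (List<Image>) -> Unit) {
    private var reference: Image? = null

    // Before the preview is shown, the first screen displays a reference image
    // (for example, a sample pose) for the photographed person.
    fun showReference(image: Image) {
        reference = image
        draw(listOf(image))
    }

    // On the fourth input, either replace the reference with the live preview,
    // or show the reference and the preview side by side in split-screen mode.
    fun onFourthInput(preview: Image, splitScreen: Boolean) {
        val ref = reference
        draw(if (splitScreen && ref != null) listOf(ref, preview) else listOf(preview))
    }
}
```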
4. A terminal, wherein the terminal comprises a first screen and a second screen, the first screen is located on a first face of the terminal, the second screen is located on a second face of the terminal facing away from the first face, and a first camera is arranged on the first face of the terminal, the terminal comprising:
a first display module, configured to display a preview image acquired by the first camera on the first screen and the second screen respectively;
a second display module, configured to display, on the first screen, a target object in the preview image acquired by the first camera enlarged at a first proportion; and
an output module, configured to output a photographed image if a first input indicating photographing is received, wherein the photographed image is generated based on the enlarged target object in the preview image, and the target object in the preview image is displayed at the first proportion in the photographed image;
wherein the second display module is specifically configured to:
if a second input indicating that the target object in the preview image acquired by the first camera is to be displayed enlarged is received, display the enlarged target object on the first screen at the first proportion, wherein the region of the target object to be enlarged is determined according to a correspondence, prestored in the terminal, between gesture inputs and regions of the target object;
or,
determine, as the target object, an object in the preview image acquired by the first camera whose line of sight faces the first camera, and display the enlarged target object on the first screen at the first proportion;
wherein the number of target objects is N, and N is an integer greater than 1;
and the second display module is specifically configured to:
sequentially switch among and display, on the first screen, the N target objects in the preview image acquired by the first camera, each enlarged at the first proportion;
or,
display, on the first screen in a split-screen mode, the N target objects in the preview image acquired by the first camera, each enlarged at the first proportion.
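For the N-target branch of the second display module, a small hypothetical helper such as the one below could cycle the first screen through the enlarged targets; TargetCycler and the show callback are illustrative names only.

```kotlin
// Cycles through N enlarged target objects on the first screen, one per call,
// wrapping around after the last target (the sequential-switching alternative).
class TargetCycler<T>(
    private val targets: List<T>,
    private val show: (T) -> Unit
) {
    private var index = 0

    fun showNext() {
        if (targets.isEmpty()) return
        show(targets[index])
        index = (index + 1) % targets.size
    }
}
```

The split-screen alternative would instead lay out all N enlarged targets on the first screen at once.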
5. The terminal of claim 4, further comprising:
a third display module, configured to display, on the first screen, the target object in the preview image acquired by the first camera at a second proportion, after the enlarged target object in the preview image acquired by the first camera is displayed on the first screen at the first proportion and before the photographed image is output;
wherein a value of the second proportion is smaller than a value of the first proportion.
6. The terminal of claim 4, further comprising:
a fourth display module, configured to display a reference image on the first screen before the preview image acquired by the first camera is displayed on the first screen and the second screen respectively;
wherein the first display module is specifically configured to:
if a fourth input is received, display the preview image acquired by the first camera on the first screen, or display the reference image and the preview image acquired by the first camera on the first screen in a split-screen mode.
7. A terminal, comprising a processor, a memory, and a computer program stored on the memory and executable on the processor, wherein the computer program, when executed by the processor, implements the steps of the photographing method according to any one of claims 1 to 3.
CN201810729882.3A 2018-07-05 2018-07-05 Photographing method and terminal Active CN109005337B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810729882.3A CN109005337B (en) 2018-07-05 2018-07-05 Photographing method and terminal

Publications (2)

Publication Number Publication Date
CN109005337A CN109005337A (en) 2018-12-14
CN109005337B true CN109005337B (en) 2021-08-24

Family

ID=64598679

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810729882.3A Active CN109005337B (en) 2018-07-05 2018-07-05 Photographing method and terminal

Country Status (1)

Country Link
CN (1) CN109005337B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110365898A (en) * 2019-07-11 2019-10-22 维沃移动通信有限公司 A kind of image pickup method and terminal device
JP7408358B2 (en) * 2019-11-18 2024-01-05 キヤノン株式会社 Information processing device, program, storage medium, and information processing method
CN112153283B (en) * 2020-09-22 2022-08-12 维沃移动通信有限公司 Shooting method, device and electronic device

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103428423A (en) * 2012-05-24 2013-12-04 联发科技股份有限公司 Preview system and preview method
CN107623793A (en) * 2017-10-19 2018-01-23 广东欧珀移动通信有限公司 Method and device for image capture and processing
CN107770312A (en) * 2017-11-07 2018-03-06 广东欧珀移动通信有限公司 Information display method, device and terminal
CN108174042A (en) * 2018-01-23 2018-06-15 北京珠穆朗玛移动通信有限公司 Image pickup method, mobile terminal and the device of mobile terminal

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR102081932B1 (en) * 2013-03-21 2020-04-14 엘지전자 주식회사 Display device and method for controlling the same

Also Published As

Publication number Publication date
CN109005337A (en) 2018-12-14

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant