Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Referring to fig. 1, fig. 1 is a flowchart of a self-photographing method according to an embodiment of the present invention. The self-photographing method is applied to a mobile terminal including a front camera, and as shown in fig. 1, the self-photographing method includes the following steps:
Step 101, starting a front camera, and displaying a default preview frame of the front camera.
In the embodiment of the present invention, an image of the user can be collected through the front camera, and the user can take a self-portrait through the front camera. During the self-photographing process, if the user's eyes look directly at the front camera, the front camera can collect a self-portrait photo in which the user looks straight into the lens. If the included angle between the gazing direction of the user's eyes and the direct-view direction (the direction when the eyes look directly at the front camera) is very small, the front camera can acquire a self-portrait photo similar to one taken while looking straight into the lens. For example, when the included angle between the gazing direction of the user's eyes and the direct-view direction is 5 degrees, the self-portrait photo taken by the front camera has an effect similar to that of looking straight into the lens.
In this embodiment, the area of the default preview frame is a fixed value, and the manner of displaying the default preview frame may be set as follows: the default preview frame is displayed at a fixed position on the screen of the mobile terminal each time the camera is opened. The default preview frame may be a combination of a preview image and control buttons, or may be a frame that includes only the preview image without any control button. The default preview frame may be displayed in a full-screen mode, or the combination of the preview image and the control buttons may occupy the full screen.
Step 102, generating an image preview frame in a screen area which is a preset distance away from the front camera.
In step 102, the area of the image preview frame is smaller than the area of the default preview frame. Specifically, the length and width of the default preview frame may be acquired, and the image preview frame may be generated in a screen area that is a preset distance away from the front camera according to size data smaller than the length and width of the default preview frame. The area of the image preview frame is generally 30% or less of the area of the default preview frame.
In this embodiment, the screen area is a screen area of the mobile terminal, and the preset distance may be a default value or a user-defined value. For example, the preset distance may be 3 centimeters. The size of the image preview frame may be a preset size parameter, or a size parameter calculated according to the distance between the face of the user and the front camera. The image preview frame may be displayed in the center of the screen area, or in the left or right screen area. Specifically, the image preview frame may be displayed in the screen area directly below the front camera.
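As an illustrative sketch of this placement (not part of the embodiment), the preview-frame rectangle could be computed as follows. The function name, pixel units, and the use of the 30% area fraction from step 102 as a parameter are assumptions:

```python
# Sketch of placing the image preview frame a preset distance below the
# front camera. Coordinates are assumed: origin at the top-left of the
# screen, units in pixels. All names here are illustrative.

def place_preview_frame(screen_w, screen_h, camera_x, preset_distance_px,
                        area_fraction=0.30):
    """Return (x, y, w, h) of an image preview frame whose area is at most
    area_fraction of the full-screen default preview frame."""
    default_area = screen_w * screen_h          # default frame shown full screen
    target_area = default_area * area_fraction  # e.g. 30% or less (step 102)
    aspect = screen_w / screen_h                # keep the screen's aspect ratio
    h = int((target_area / aspect) ** 0.5)
    w = int(h * aspect)
    x = int(camera_x - w / 2)                   # centered under the camera
    x = max(0, min(x, screen_w - w))            # clamp to the screen
    y = preset_distance_px                      # preset distance below the top edge
    return x, y, w, h
```

The clamping keeps the frame on-screen when the camera sits near a screen edge.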
Step 103, displaying the image acquired by the front camera in the image preview frame.
In step 103, the image acquired by the front camera may include a user image and/or an image of another object. Because the image preview frame is located in the screen area which is a preset distance away from the front camera, when the preset distance is small, the user can clearly see the image displayed in the image preview frame. In this embodiment, when the image preview frame displays the image collected by the front camera, a reminding message may be sent, where the reminding message is used to remind the user to look at the image preview frame, so that the user can check whether the preview image meets the user's requirements. The reminding manner includes flashing the image preview frame, changing the color of the edge of the image preview frame, alternately reducing and enlarging the image preview frame, and the like.
Referring to fig. 2, fig. 2 is a schematic diagram of a self-timer scenario according to an embodiment of the present invention. Fig. 2 includes a mobile terminal 200, a front camera 201, a screen 202, and an image preview frame 203. In fig. 2, an image preview frame 203 is displayed in a screen area directly below a front camera 201, and a distance between the image preview frame 203 and the front camera 201 is a preset distance. The image captured by the front camera is displayed in the screen 202. A preview image is displayed in the image preview frame 203.
Step 104, detecting the sight line of the user, and judging whether the sight line of the user is a sight line watching the image preview frame.
In this embodiment, the user's sight line may be detected by a pupil-cornea reflection method, an iris-sclera boundary method, or the like. The pupil-cornea reflection method requires detecting the position of the pupil center and the position of the corneal reflection point. The iris-sclera boundary method estimates the sight direction by detecting the size and elliptical shape of the circle where the iris and the sclera meet in the picture taken by the camera. It should be added that the circle at the boundary of the iris and the sclera is imaged as an ellipse in the camera.
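The iris-sclera boundary idea can be sketched with a strong simplification: a circle viewed off-axis images as an ellipse whose minor-to-major axis ratio is approximately the cosine of the viewing angle. The following function is illustrative only; it ignores perspective distortion and camera calibration, which a real gaze tracker must handle:

```python
import math

# Illustrative estimate of gaze deviation from the ellipse that the
# iris-sclera boundary circle forms in the camera image. Assumes the
# simplified relation minor/major = cos(viewing angle).

def gaze_deviation_from_ellipse(major_axis, minor_axis):
    """Angle (degrees) between the gaze direction and the camera axis."""
    ratio = max(0.0, min(1.0, minor_axis / major_axis))
    return math.degrees(math.acos(ratio))
```

A perfectly circular boundary (ratio 1) means the eye looks straight at the camera; the flatter the ellipse, the larger the deviation.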
In step 104, if the user's sight line is a sight line watching the image preview frame, which indicates that the user is approximately looking straight at the front camera, the front camera can capture an image of the user appearing to look approximately straight into the lens, and step 105 is executed. If the user's sight line is not a sight line watching the image preview frame, which indicates that the user is not looking at the front camera with a near-direct-view sight line, the process ends, or step 104 continues to be executed until the detected sight line of the user is a sight line watching the image preview frame, after which step 105 is executed. The flowchart shown in fig. 1 is illustrated with the flow ending, but is not limited thereto.
Step 105, taking a picture through the front camera.
In step 105, the taken picture includes a self-portrait of the user's face, and the user's sight line in the self-portrait is approximately straight ahead, so that a self-portrait photo with a good effect can be obtained.
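Steps 103 to 105 can be summarized as a small control loop. The sketch below is schematic, not the embodiment's implementation; the callbacks `get_image`, `detect_gaze_point`, and `take_picture` are hypothetical placeholders for platform camera and eye-tracking APIs:

```python
# Schematic loop for steps 103-105 of fig. 1: show the preview, wait
# until the detected gaze point falls inside the preview frame, then
# take the picture. Callback names are hypothetical.

def self_photograph(get_image, detect_gaze_point, frame_rect, take_picture,
                    max_attempts=100):
    x, y, w, h = frame_rect
    for _ in range(max_attempts):
        _ = get_image()                 # step 103: image shown in the frame
        gx, gy = detect_gaze_point()    # step 104: detect the user's gaze
        if x <= gx <= x + w and y <= gy <= y + h:
            return take_picture()       # step 105: gaze is on the frame
    return None                         # flow ends without a photo
```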
In the embodiment of the present invention, the mobile terminal may be any mobile terminal with a front camera, for example: a Mobile phone, a Tablet Personal Computer (Tablet Personal Computer), a Laptop Computer (Laptop Computer), a Personal Digital Assistant (PDA), a Mobile Internet Device (MID), a Wearable Device (Wearable Device), or the like.
According to the self-photographing method, the front camera is started, and a default preview frame of the front camera is displayed; an image preview frame is generated in a screen area which is a preset distance away from the front camera, and an image collected by the front camera is displayed in the image preview frame, where the area of the image preview frame is smaller than that of the default preview frame; the sight line of the user is detected, and whether the sight line of the user is a sight line watching the image preview frame is judged; and if the sight line of the user is a sight line watching the image preview frame, a picture is taken through the front camera. Therefore, the user can check the shooting effect by watching the image preview frame, a self-portrait photo in which the user appears to look approximately straight ahead can be taken, and the self-photographing effect of the mobile terminal is improved.
Referring to fig. 3, fig. 3 is a flowchart of another self-photographing method according to an embodiment of the present invention. The self-photographing method is applied to a mobile terminal including a front camera, and as shown in fig. 3, includes the following steps:
Step 301, opening a front camera, and displaying a default preview frame of the front camera.
Step 301 is the same as step 101 in the embodiment of the present invention, and is not described herein again.
Step 302, detecting the distance between the front camera and the face of the user.
In step 302, the distance between the front camera and the face of the user may be detected by a distance sensor. Referring to fig. 4, fig. 4 is a schematic diagram of another self-photographing scenario according to an embodiment of the invention. Fig. 4 shows a mobile terminal 400, where the mobile terminal 400 includes a front camera 401, a distance sensor 402, a screen 403, an included angle 404, a first position 405, a second position 406, a first distance 407 between the face of the user and the mobile terminal, a second distance 4071 between the face of the user and the mobile terminal, and a user 4010. The included angle 404 represents the angle between the sight line of the user 4010 looking straight at the front camera 401 and the sight line of the user looking at the screen 403. Based on fig. 4, the first distance 407 between the front camera 401 and the face of the user when the mobile terminal 400 is at the first position 405 can be detected by the distance sensor 402, and the second distance 4071 between the front camera 401 and the face of the user when the mobile terminal 400 is at the second position 406 can be detected by the distance sensor 402.
Step 303, judging whether the distance between the front camera and the face of the user is smaller than a preset distance.
The distance between the mobile terminal and the face of the user is different in different situations. Referring to fig. 4, when the mobile terminal 400 is at the first position 405, it is assumed that the distance between the mobile terminal 400 and the face of the user is the first distance 407, and when the mobile terminal 400 is at the second position 406, it is assumed that the distance between the mobile terminal 400 and the face of the user is the second distance 4071, and the first distance 407 is smaller than the second distance 4071.
In this embodiment, when the distance between the mobile terminal and the face of the user exceeds the preset distance, the picture is directly taken, and when the distance between the mobile terminal and the face of the user is less than the preset distance, the image preview frame is provided, so that the user can clearly view the preview image. The preset distance can be a default setting parameter or a user-defined parameter.
In step 303, if the distance between the front camera and the face of the user is smaller than a preset distance, step 304 is executed, and if the distance between the front camera and the face of the user is greater than the preset distance, step 307 is executed.
Step 304, generating an image preview frame in a screen area which is a preset distance away from the front camera.
In this step 304, the area of the image preview frame is smaller than the area of the default preview frame. Specifically, the length and width of the default preview frame may be acquired, and the image preview frame may be generated in a screen area that is a preset distance away from the front camera according to size data smaller than the length and width of the default preview frame.
Optionally, the step 304 specifically includes the following steps:
if the distance between the front camera and the face of the user is smaller than a preset distance, calculating the target size of the image preview frame according to the distance between the front camera and the face of the user, the distance between the front camera and the screen and a preset angle, and generating the image preview frame with the target size in the screen area.
Specifically, the length of the image preview frame can be calculated according to the following formula: L = S · tan θ − X, where L is the length of the image preview frame, S is the distance between the mobile terminal and the face of the user, X is the distance between the camera and the screen, and θ is a preset angle, which may be a default value or a user-defined value. The aspect ratio of the screen is acquired, and the width of the image preview frame is determined according to the aspect ratio of the screen and the calculated length of the image preview frame. The target size is determined according to the calculated length and width of the image preview frame.
Specifically, referring to fig. 4 again, fig. 4 further includes a first image preview frame 408 and a second image preview frame 409. The first image preview frame 408 is the image preview frame determined according to the first distance 407, the second image preview frame 409 is the image preview frame determined according to the second distance 4071, and the length of the first image preview frame 408 is greater than the length of the second image preview frame 409.
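Using the step-304 formula (with L the frame length, S the face-to-camera distance, X the camera-to-screen distance, and θ the preset angle), the target size might be computed as follows. This is a sketch; the units, the clamping to zero, and the convention width = length × aspect ratio are assumptions:

```python
import math

# Illustrative computation of the target size of the image preview frame
# (step 304), following L = S * tan(theta) - X from the embodiment.

def preview_frame_target_size(face_distance, camera_to_screen,
                              preset_angle_deg, screen_aspect):
    """Return (length, width) of the image preview frame."""
    length = face_distance * math.tan(math.radians(preset_angle_deg)) \
             - camera_to_screen
    length = max(length, 0.0)           # a length cannot be negative
    width = length * screen_aspect      # width from the screen aspect ratio
    return length, width
```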
It should be noted that, in the embodiment, steps 302 to 304 are specific steps of step 102 in the embodiment of the present invention. Optionally, step 102 in this embodiment of the present invention may further include the following steps:
judging whether the front camera shoots the front face of the user;
if the front camera shoots the front face of the user, judging whether an included angle between an original sight line and a first sight line of the user is larger than a preset angle, wherein the original sight line is the sight line of the user watching the front camera, and the first sight line is the sight line of the user watching the screen area at a first moment;
and if the included angle between the original sight line and the first sight line of the user is larger than the preset angle, generating an image preview frame in a screen area which is a preset distance away from the front camera.
Step 305, displaying the image acquired by the front camera in the image preview frame.
In step 305, the image acquired by the front camera may include a user image and/or an image of another object. When the image preview frame displays the image collected by the front camera, a reminding message may be sent, where the reminding message is used to remind the user to look at the image preview frame, so that the user can check whether the preview image meets the user's requirements.
Referring to fig. 5, fig. 5 is a schematic diagram of another self-photographing scenario according to an embodiment of the invention. Fig. 5 includes a mobile terminal 500, a front camera 501, a screen 502, a first image preview frame 503, a second image preview frame 504, and a user 505. In fig. 5, the first image preview frame 503 is the image preview frame when the distance between the front camera 501 and the face of the user 505 is a first distance, and the second image preview frame 504 is the image preview frame when that distance is a second distance, where the first distance is smaller than the second distance. The first image preview frame 503 and the second image preview frame 504 are displayed in the screen area directly below the front camera 501, and the distance between each image preview frame and the front camera 501 is the preset distance. The image captured by the front camera is displayed on the screen 502, and preview images are displayed in the first image preview frame 503 and the second image preview frame 504.
Step 306, detecting the sight line of the user, and judging whether the sight line of the user is a sight line watching the image preview frame.
Optionally, in step 306, the step of detecting the line of sight of the user specifically includes the following steps:
acquiring a pupil image of a user;
extracting a user pupil contour feature from the user pupil image;
searching a user sight direction corresponding to the pupil contour characteristics of the user from a preset corresponding relation between the pupil contour characteristics and the sight direction;
and determining the sight of the user according to the sight direction of the user and the position of the eyes of the user.
Specifically, the face image of the user can be acquired through the front camera, and the pupil image of the user can be obtained from the face image by an image recognition technology. The pupil contour features of the user are extracted from the pupil image, where the pupil contour features include curvature information of the pupil contour. The correspondence between pupil contour features and sight directions can be obtained through the analysis of a large amount of pupil image data, so that the user sight direction corresponding to the user's pupil contour features can be found from the preset correspondence. In addition, the face of the user is photographed through the front camera, the two-dimensional position of the user's eyes in the picture is determined, the three-dimensional position of the eyes is determined according to the two-dimensional position of the eyes in the picture and the hardware and shooting parameters of the camera, and the sight line of the user is determined according to the three-dimensional position of the eyes and the sight direction.
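A minimal sketch of the lookup-and-determine steps follows, assuming a single scalar eccentricity value stands in for the pupil contour features and a nearest-neighbour table stands in for the preset correspondence; both are hypothetical simplifications of what a real system would store:

```python
# Illustrative sketch of step 306's sub-steps: look up a gaze direction
# from a pupil-contour feature in a precomputed correspondence table,
# then form the line of sight from the 3-D eye position.

def lookup_gaze_direction(pupil_eccentricity, correspondence):
    """Return the gaze direction whose stored feature value is nearest
    to the measured pupil-contour feature."""
    best = min(correspondence, key=lambda f: abs(f - pupil_eccentricity))
    return correspondence[best]

def line_of_sight(eye_position_3d, gaze_direction):
    """Represent a line of sight as (origin point, unit direction vector)."""
    norm = sum(c * c for c in gaze_direction) ** 0.5
    return eye_position_3d, tuple(c / norm for c in gaze_direction)
```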
Optionally, in step 306, the step of determining whether the line of sight of the user is a line of sight watching the image preview frame specifically includes the following steps:
judging whether the position of the user's sight line on the screen is within the image preview frame; if the position is within the image preview frame, determining that the user's sight line is a sight line watching the image preview frame; and if not, determining that the user's sight line is not a sight line watching the image preview frame.
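This judgment can be sketched geometrically: intersect the user's line of sight with the screen plane and test whether the hit point lies inside the preview-frame rectangle. The coordinate convention (screen in the plane z = 0, eye at positive z, gaze with a negative z component) is an assumption for illustration:

```python
# Illustrative hit test for step 306: project the line of sight onto the
# screen plane and check containment in the preview frame rectangle.

def gaze_in_frame(eye, direction, frame_rect):
    ex, ey, ez = eye
    dx, dy, dz = direction
    if dz >= 0:                       # gaze does not reach the screen plane
        return False
    t = -ez / dz                      # ray parameter where it meets z = 0
    px, py = ex + t * dx, ey + t * dy
    x, y, w, h = frame_rect
    return x <= px <= x + w and y <= py <= y + h
```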
In step 306, if the user's sight line is a sight line watching the image preview frame, which indicates that the user is approximately looking straight at the front camera, the front camera can capture an image of the user appearing to look approximately straight into the lens, and step 307 is executed. If the user's sight line is not a sight line watching the image preview frame, which indicates that the user is not looking at the front camera with a near-direct-view sight line, the process ends, or step 306 continues to be executed until the detected sight line of the user is a sight line watching the image preview frame, after which step 307 is executed. The flowchart shown in fig. 3 is illustrated with the flow ending, but is not limited thereto.
Step 307, taking a picture through the front camera.
Optionally, the step 307 specifically includes the following steps:
judging whether an included angle between the original sight line and a second sight line of the user is smaller than a preset angle or not, if so, shooting a picture through a front camera, wherein the second sight line is the sight line of the user watching the image preview frame at a second moment, and the second moment is after the first moment;
if the included angle between the original sight line and the second sight line of the user is larger than a preset angle, reducing the size of the image preview frame;
detecting a third sight of a user, judging whether the third sight of the user is a sight watching the reduced image preview frame, if so, judging whether an included angle between the third sight of the user and the original sight is smaller than a preset angle, wherein the third sight is a sight watching the screen area by the user at a third moment, and the third moment is after the second moment;
and if the included angle between the third sight line of the user and the original sight line is smaller than a preset angle, shooting a picture through the front camera.
Optionally, the step of reducing the size of the image preview frame if the included angle between the original line of sight and the second line of sight of the user is greater than a preset angle specifically includes the following steps:
and if the included angle between the original sight line and the second sight line of the user is larger than a preset angle, reducing the size of the image preview frame according to a preset proportion, for example, reducing the image preview frame according to a proportion of 5%. If the included angle between the sight line of the image preview frame watched by the user and the original sight line is larger than the preset angle after the image preview frame is reduced according to the preset proportion, the image preview frame is reduced again according to the preset proportion until the included angle between the sight line of the image preview frame watched by the user and the original sight line is smaller than the preset angle.
It should be noted that the preset angle may be a default value or a user-defined value. For example, the preset angle may be 5 degrees. The smaller the preset angle is, the closer the user's eyes are to looking straight into the lens when the included angle between the original sight line and the user's sight line is smaller than the preset angle.
According to the self-photographing method, the front camera is started, and a default preview frame of the front camera is displayed; the distance between the front camera and the face of the user is detected, and whether that distance is smaller than a preset distance is judged; if the distance between the front camera and the face of the user is smaller than the preset distance, an image preview frame is generated in a screen area which is a preset distance away from the front camera, and an image collected by the front camera is displayed in the image preview frame, where the area of the image preview frame is smaller than that of the default preview frame; the sight line of the user is detected, and whether the sight line of the user is a sight line watching the image preview frame is judged; and if the sight line of the user is a sight line watching the image preview frame, a picture is taken through the front camera. Therefore, the user can check the shooting effect by watching the image preview frame, a self-portrait photo in which the user appears to look approximately straight ahead can be taken, and the self-photographing effect of the mobile terminal is improved.
Referring to fig. 6, fig. 6 is a structural diagram of a mobile terminal according to an embodiment of the present invention, as shown in fig. 6, the mobile terminal 600 includes an opening module 601, a generating module 602, a determining module 603, and a shooting module 604, where:
the starting module 601 is configured to start the front-facing camera and display a default preview frame of the front-facing camera;
a generating module 602, configured to generate an image preview frame in a screen area that is a preset distance away from the front-facing camera, and display an image acquired by the front-facing camera in the image preview frame, where an area of the image preview frame is smaller than an area of the default preview frame;
the judging module 603 is configured to detect a line of sight of a user, and judge whether the line of sight of the user is a line of sight watching the image preview frame;
a shooting module 604, configured to take a picture through the front-facing camera if the user's sight line is a sight line watching the image preview frame.
Optionally, referring to fig. 7, fig. 7 is a structural diagram of another mobile terminal provided in an embodiment of the present invention, and as shown in fig. 7, the generating module 602 includes:
a first detection submodule 6021, configured to detect a distance between the front-facing camera and a face of the user;
the first judgment submodule 6022 is configured to judge whether a distance between the front camera and the face of the user is smaller than a preset distance;
the first generation submodule 6023 is configured to generate an image preview frame in a screen area that is a preset distance away from the front camera if the distance between the front camera and the face of the user is smaller than the preset distance.
Optionally, referring to fig. 8, fig. 8 is a structural diagram of another mobile terminal provided in an embodiment of the present invention, and as shown in fig. 8, the generating module 602 includes:
a second judgment submodule 6024, configured to judge whether the front-facing camera shoots a front face of the user;
a third determining submodule 6025, configured to determine whether an angle between an original line of sight and a first line of sight of the user is greater than a preset angle if the front camera shoots a front face of the user, where the original line of sight is a line of sight of the user watching the front camera, and the first line of sight is a line of sight of the user watching the screen area at a first time;
and the second generating submodule 6026 is configured to generate an image preview frame in a screen area which is a preset distance away from the front camera if an included angle between the original sight line and the first sight line of the user is greater than a preset angle.
Optionally, the first generation submodule 6023 is further configured to, if the distance between the front camera and the face of the user is smaller than a preset distance, calculate a target size of the image preview frame according to the distance between the front camera and the face of the user, the distance between the front camera and the screen, and a preset angle, and generate the image preview frame with the target size in the screen area.
Optionally, referring to fig. 9, fig. 9 is a structural diagram of another mobile terminal provided in an embodiment of the present invention, and as shown in fig. 9, the shooting module 604 includes:
a fourth determining submodule 6041, configured to determine whether an included angle between the original line of sight and a second line of sight of the user is smaller than a preset angle, and if the included angle between the original line of sight and the second line of sight of the user is larger than the preset angle, reduce the size of the image preview frame, where the second line of sight is a line of sight that the user gazes at the image preview frame at a second time, and the second time is after the first time;
a fifth judging submodule 6042, configured to detect a third line of sight of the user, judge whether the third line of sight of the user is a line of sight looking at the reduced image preview frame, and if the third line of sight of the user is a line of sight looking at the reduced image preview frame, judge whether an included angle between the third line of sight of the user and the original line of sight is smaller than a preset angle, where the third line of sight is a line of sight of the user looking at the screen area at a third time, and the third time is after the second time;
and a shooting submodule 6043, configured to shoot a picture through the front camera if an included angle between the third line of sight of the user and the original line of sight is smaller than a preset angle.
Optionally, referring to fig. 10, fig. 10 is a structural diagram of another mobile terminal provided in an embodiment of the present invention, where the determining module 603 includes:
an acquisition sub-module 6031 for acquiring a pupil image of the user;
an extraction sub-module 6032, configured to extract a user pupil profile feature from the user pupil image;
the searching sub-module 6033 is configured to search, from a preset correspondence between the pupil contour feature and the gaze direction, a user gaze direction corresponding to the user pupil contour feature;
and a determining sub-module 6034, configured to determine the line of sight of the user according to the line of sight direction of the user and the position of the eyes of the user.
The mobile terminal 600 can implement each process implemented by the mobile terminal in the method embodiments of fig. 1 and fig. 3, and is not described herein again to avoid repetition.
The mobile terminal 600 of the embodiment of the present invention starts the front camera and displays a default preview frame of the front camera; generates an image preview frame in a screen area which is a preset distance away from the front camera, and displays an image collected by the front camera in the image preview frame, where the area of the image preview frame is smaller than that of the default preview frame; detects the sight line of the user, and judges whether the sight line of the user is a sight line watching the image preview frame; and if the sight line of the user is a sight line watching the image preview frame, takes a picture through the front camera. Therefore, the user can check the shooting effect by watching the image preview frame, a self-portrait photo in which the user appears to look approximately straight ahead can be taken, and the self-photographing effect of the mobile terminal is improved.
Referring to fig. 11, fig. 11 is a structural diagram of a mobile terminal according to an embodiment of the present invention, which can implement the details of the self-photographing method in the foregoing embodiments and achieve the same effect. As shown in fig. 11, the mobile terminal 1100 includes: at least one processor 1101, a memory 1102, at least one user interface 1103, and a network interface 1104. The various components in the mobile terminal 1100 are coupled together by a bus system 1105. It is understood that the bus system 1105 is used to enable communications among these components. In addition to a data bus, the bus system 1105 includes a power bus, a control bus, and a status signal bus; for clarity of illustration, however, the various buses are labeled as the bus system 1105 in fig. 11. The mobile terminal 1100 further includes a front-facing camera 1106, and the front-facing camera 1106 is coupled to the various components of the mobile terminal via the bus system 1105.
The user interface 1103 may include a display, a keyboard, or a pointing device (e.g., a mouse, a trackball, a touch pad, or a touch screen, among others).
It is to be understood that the memory 1102 in embodiments of the present invention can be volatile memory or nonvolatile memory, or can include both volatile and nonvolatile memory. The nonvolatile memory may be a Read-Only Memory (ROM), a Programmable ROM (PROM), an Erasable PROM (EPROM), an Electrically Erasable PROM (EEPROM), or a flash memory. The volatile memory may be a Random Access Memory (RAM), which acts as an external cache. By way of example, but not limitation, many forms of RAM are available, such as Static RAM (SRAM), Dynamic RAM (DRAM), Synchronous DRAM (SDRAM), Double Data Rate SDRAM (DDR SDRAM), Enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), and Direct Rambus RAM (DRRAM). The memory 1102 of the systems and methods described herein is intended to comprise, without being limited to, these and any other suitable types of memory.
In some embodiments, memory 1102 stores the following elements, executable modules or data structures, or a subset thereof, or an expanded set thereof: an operating system 11021 and application programs 11022.
The operating system 11021 includes various system programs, such as a framework layer, a core library layer, a driver layer, and the like, for implementing various basic services and processing hardware-based tasks. The application 11022 contains various applications such as a Media Player (Media Player), a Browser (Browser), etc. for implementing various application services. Programs that implement methods in accordance with embodiments of the invention may be included in application 11022.
In the embodiment of the present invention, the mobile terminal 1100 further includes: a computer program stored on the memory 1102 and executable on the processor 1101, the computer program when executed by the processor 1101 performing the steps of:
opening the front camera, and displaying a default preview frame of the front camera;
generating an image preview frame in a screen area at a preset distance from the front camera, and displaying the image collected by the front camera in the image preview frame, wherein the area of the image preview frame is smaller than that of the default preview frame;
detecting the line of sight of the user, and judging whether the line of sight of the user is a line of sight gazing at the image preview frame;
and if the line of sight of the user is a line of sight gazing at the image preview frame, shooting a photograph through the front camera.
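The four steps above can be sketched as a minimal control flow. The following Python sketch is illustrative only: the camera, preview, and gaze-detection back ends are hypothetical stub callables, not an actual mobile-terminal API.

```python
# Hypothetical sketch of the self-timer control flow described above.
# The camera, display, and gaze-detection back ends are passed in as
# stub callables so the sequence of steps can be exercised on its own.

def self_timer(open_camera, show_preview_frame, detect_gaze, take_photo):
    """Run the four steps: open the front camera, show the small preview
    frame near the camera, test whether the user gazes at it, then shoot."""
    open_camera()                      # step 1: start the front camera
    frame = show_preview_frame()       # step 2: small frame near the camera
    if detect_gaze(frame):             # step 3: is the user gazing at it?
        return take_photo()            # step 4: capture the photograph
    return None                        # no gaze detected: do not shoot
```

In a real implementation, each stub would be backed by the camera driver, the display subsystem, and the gaze detector described later in this embodiment.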
The methods disclosed in the embodiments of the present invention described above may be applied to the processor 1101 or implemented by the processor 1101. The processor 1101 may be an integrated circuit chip having signal processing capabilities. In implementation, the steps of the above method may be completed by an integrated logic circuit of hardware in the processor 1101 or by instructions in the form of software. The processor 1101 may be a general purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component, and may implement or perform the various methods, steps, and logic blocks disclosed in the embodiments of the present invention. A general purpose processor may be a microprocessor, or the processor may be any conventional processor or the like. The steps of the method disclosed in connection with the embodiments of the present invention may be directly embodied as being executed by a hardware decoding processor, or executed by a combination of hardware and software modules in the decoding processor. The software module may be located in a storage medium well known in the art, such as a RAM, a flash memory, a ROM, a PROM or EPROM, or a register. The storage medium is located in the memory 1102, and the processor 1101 reads the information in the memory 1102 and completes the steps of the above method in combination with its hardware.
It is to be understood that the embodiments described herein may be implemented in hardware, software, firmware, middleware, microcode, or any combination thereof. For a hardware implementation, the Processing units may be implemented within one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), general purpose processors, controllers, micro-controllers, microprocessors, other electronic units configured to perform the functions described herein, or a combination thereof.
For a software implementation, the techniques described herein may be implemented with modules (e.g., procedures, functions, and so on) that perform the functions described herein. The software codes may be stored in a memory and executed by a processor. The memory may be implemented within the processor or external to the processor.
Optionally, the computer program when executed by the processor 1101 may further implement the steps of:
detecting the distance between the front camera and the face of the user;
judging whether the distance between the front camera and the face of the user is smaller than a preset distance or not;
and if the distance between the front camera and the face of the user is smaller than a preset distance, generating an image preview frame in a screen area which is away from the front camera by the preset distance.
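A minimal sketch of this distance check, assuming a hypothetical `face_distance_cm` value supplied by a proximity or depth sensor and an illustrative preset threshold (both names and the 40 cm default are assumptions, not part of this embodiment):

```python
def should_create_preview_frame(face_distance_cm, preset_distance_cm=40.0):
    """Return True when the detected face is closer to the front camera
    than the preset distance, i.e. when the small image preview frame
    should be generated near the camera."""
    return face_distance_cm < preset_distance_cm
```

When the face is farther away than the preset distance, the angular offset between gazing at the default preview and gazing at the camera is already small, so the default preview frame can be kept.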
Optionally, the computer program when executed by the processor 1101 may further implement the steps of:
judging whether the front camera has captured the front face of the user;
if the front camera has captured the front face of the user, judging whether the included angle between an original line of sight and a first line of sight of the user is larger than a preset angle, wherein the original line of sight is the line of sight of the user gazing at the front camera, and the first line of sight is the line of sight of the user gazing at the screen area at a first moment;
and if the included angle between the original line of sight and the first line of sight of the user is larger than the preset angle, generating an image preview frame in a screen area at a preset distance from the front camera.
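The included-angle test can be approximated with simple trigonometry, under the assumption that the user's eye is roughly level with the front camera and the offset of the gaze point on the screen is known; the 5-degree threshold echoes the example given earlier in this description, and all names here are illustrative:

```python
import math

def gaze_angle_deg(face_distance, gaze_offset):
    """Included angle (degrees) between the 'original' line of sight
    (straight toward the camera) and a gaze aimed gaze_offset away on
    the screen, for a face face_distance from the camera (same units)."""
    return math.degrees(math.atan2(gaze_offset, face_distance))

def needs_preview_frame(face_distance, gaze_offset, preset_angle_deg=5.0):
    """True when the current gaze deviates from the camera by more than
    the preset angle, so the small preview frame should be generated."""
    return gaze_angle_deg(face_distance, gaze_offset) > preset_angle_deg
```

For example, a gaze point 10 cm from the camera viewed from 40 cm away deviates by about 14 degrees, well beyond a 5-degree preset angle, so the small preview frame would be generated.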
Optionally, the computer program when executed by the processor 1101 may further implement the steps of:
if the distance between the front camera and the face of the user is smaller than a preset distance, calculating the target size of the image preview frame according to the distance between the front camera and the face of the user, the distance between the front camera and the screen and a preset angle, and generating the image preview frame with the target size in the screen area.
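One plausible reading of this size calculation, stated only as an assumption, is that the preview frame must lie entirely within the screen region that subtends the preset angle at the user's eye: with the eye taken level with the camera and the frame's near edge placed `frame_offset` from the camera, the usable extent is whatever remains of `d * tan(theta)`. The geometry and names below are a sketch, not the embodiment's actual formula.

```python
import math

def preview_frame_size(face_distance, frame_offset, preset_angle_deg):
    """Largest frame extent (measured from its near edge, frame_offset
    from the camera) whose far edge still subtends at most
    preset_angle_deg at the user's eye.  Assumes the eye is level with
    the camera and the screen is roughly perpendicular to the view axis."""
    reach = face_distance * math.tan(math.radians(preset_angle_deg))
    return max(0.0, reach - frame_offset)
```

At 400 mm from the camera with the frame starting 10 mm below it and a 5-degree preset angle, this yields roughly a 25 mm frame; when the user is too close for any frame to fit, the size bottoms out at zero.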
Optionally, the computer program when executed by the processor 1101 may further implement the steps of:
judging whether the included angle between the original line of sight and a second line of sight of the user is smaller than a preset angle, and if the included angle between the original line of sight and the second line of sight of the user is larger than the preset angle, reducing the size of the image preview frame, wherein the second line of sight is the line of sight of the user gazing at the image preview frame at a second moment, and the second moment is after the first moment;
detecting a third line of sight of the user, judging whether the third line of sight of the user is a line of sight gazing at the reduced image preview frame, and if so, judging whether the included angle between the third line of sight of the user and the original line of sight is smaller than the preset angle, wherein the third line of sight is the line of sight of the user gazing at the screen area at a third moment, and the third moment is after the second moment;
and if the included angle between the third line of sight of the user and the original line of sight is smaller than the preset angle, shooting a photograph through the front camera.
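The shrink-and-recheck loop described in these steps can be sketched as follows. Here `gaze_angle_fn` is a hypothetical stand-in for the real gaze detector, mapping a frame size to the included angle observed when the user gazes at a frame of that size; the shrink factor and minimum size are illustrative assumptions.

```python
def converge_to_direct_view(gaze_angle_fn, frame_size, preset_angle_deg=5.0,
                            shrink=0.5, min_size=1.0):
    """Repeatedly shrink the preview frame until gazing at it brings the
    user's line of sight within preset_angle_deg of the camera direction,
    then return the size at which the photo would be taken.  Returns None
    if the frame reaches the minimum size without converging."""
    while frame_size >= min_size:
        if gaze_angle_fn(frame_size) < preset_angle_deg:
            return frame_size          # within tolerance: shoot the photo
        frame_size *= shrink           # still too far off-axis: shrink
    return None
```

A smaller frame pulls the user's gaze point closer to the camera, which is why shrinking reduces the included angle in practice.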
Optionally, the computer program when executed by the processor 1101 may further implement the steps of:
acquiring a pupil image of the user;
extracting a pupil contour feature of the user from the pupil image;
searching, in a preset correspondence between pupil contour features and gaze directions, for the gaze direction corresponding to the pupil contour feature of the user;
and determining the line of sight of the user according to the gaze direction of the user and the position of the eyes of the user.
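These four steps amount to a table lookup followed by combining a direction with an origin. A minimal sketch, in which the feature encoding, the keys of the preset table, and the returned representation are all illustrative assumptions:

```python
def gaze_from_pupil(contour_feature, feature_to_direction, eye_position):
    """Look up the gaze direction for an extracted pupil-contour feature
    in a preset feature->direction table, then pair it with the detected
    eye position to give a full line of sight (origin plus direction).
    Returns None when the feature has no entry in the preset mapping."""
    direction = feature_to_direction.get(contour_feature)
    if direction is None:
        return None
    return {"origin": eye_position, "direction": direction}
```

In practice the correspondence table would be built in advance, since the apparent pupil contour deforms predictably as the eye rotates.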
The mobile terminal 1100 is capable of implementing each process implemented by the mobile terminal in the foregoing embodiments, and details are not repeated here to avoid repetition.
In the mobile terminal 1100 according to the embodiment of the present invention, the front camera is turned on, and a default preview frame of the front camera is displayed; an image preview frame is generated in a screen area at a preset distance from the front camera, and the image collected by the front camera is displayed in the image preview frame, the area of the image preview frame being smaller than that of the default preview frame; the line of sight of the user is detected, and it is judged whether the line of sight of the user is a line of sight gazing at the image preview frame; and if so, a photograph is shot through the front camera. In this way, the user can check the shooting effect by gazing at the image preview frame and take a self-portrait that approximates looking directly at the lens, which improves the self-timer effect of the mobile terminal.
The present invention also provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, performs the steps of:
opening the front camera, and displaying a default preview frame of the front camera;
generating an image preview frame in a screen area at a preset distance from the front camera, and displaying the image collected by the front camera in the image preview frame, wherein the area of the image preview frame is smaller than that of the default preview frame;
detecting the line of sight of the user, and judging whether the line of sight of the user is a line of sight gazing at the image preview frame;
and if the line of sight of the user is a line of sight gazing at the image preview frame, shooting a photograph through the front camera.
Optionally, the step of generating an image preview frame in the screen area away from the front camera by a preset distance includes:
detecting the distance between the front camera and the face of the user;
judging whether the distance between the front camera and the face of the user is smaller than a preset distance or not;
and if the distance between the front camera and the face of the user is smaller than a preset distance, generating an image preview frame in a screen area which is away from the front camera by the preset distance.
Optionally, the step of generating an image preview frame in the screen area away from the front camera by a preset distance includes:
judging whether the front camera has captured the front face of the user;
if the front camera has captured the front face of the user, judging whether the included angle between an original line of sight and a first line of sight of the user is larger than a preset angle, wherein the original line of sight is the line of sight of the user gazing at the front camera, and the first line of sight is the line of sight of the user gazing at the screen area at a first moment;
and if the included angle between the original line of sight and the first line of sight of the user is larger than the preset angle, generating an image preview frame in a screen area at a preset distance from the front camera.
Optionally, if the distance between the front-facing camera and the face of the user is smaller than a preset distance, the step of generating an image preview frame in a screen area that is a preset distance away from the front-facing camera includes:
if the distance between the front camera and the face of the user is smaller than a preset distance, calculating the target size of the image preview frame according to the distance between the front camera and the face of the user, the distance between the front camera and the screen and a preset angle, and generating the image preview frame with the target size in the screen area.
Optionally, if the line of sight of the user is a line of sight gazing at the image preview frame, the step of shooting a photograph through the front camera includes:
judging whether the included angle between the original line of sight and a second line of sight of the user is smaller than a preset angle, and if the included angle between the original line of sight and the second line of sight of the user is larger than the preset angle, reducing the size of the image preview frame, wherein the second line of sight is the line of sight of the user gazing at the image preview frame at a second moment, and the second moment is after the first moment;
detecting a third line of sight of the user, judging whether the third line of sight of the user is a line of sight gazing at the reduced image preview frame, and if so, judging whether the included angle between the third line of sight of the user and the original line of sight is smaller than the preset angle, wherein the third line of sight is the line of sight of the user gazing at the screen area at a third moment, and the third moment is after the second moment;
and if the included angle between the third line of sight of the user and the original line of sight is smaller than the preset angle, shooting a photograph through the front camera.
Optionally, the step of detecting the line of sight of the user includes:
acquiring a pupil image of the user;
extracting a pupil contour feature of the user from the pupil image;
searching, in a preset correspondence between pupil contour features and gaze directions, for the gaze direction corresponding to the pupil contour feature of the user;
and determining the line of sight of the user according to the gaze direction of the user and the position of the eyes of the user.
The computer-readable storage medium includes a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and the like.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the units is only one logical division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment of the present invention.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a ROM, a RAM, a magnetic disk, or an optical disk.
The above description is only for the specific embodiments of the present invention, but the scope of the present invention is not limited thereto, and any person skilled in the art can easily conceive of the changes or substitutions within the technical scope of the present invention, and all the changes or substitutions should be covered within the scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.