
CN110958392A - Shooting method of terminal equipment, terminal equipment and storage medium - Google Patents

Info

Publication number
CN110958392A
Authority
CN
China
Prior art keywords
image
module
portrait
camera
terminal device
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201911280912.8A
Other languages
Chinese (zh)
Inventor
许玉新
张刘哲
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Huizhou TCL Mobile Communication Co Ltd
Original Assignee
Huizhou TCL Mobile Communication Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huizhou TCL Mobile Communication Co Ltd filed Critical Huizhou TCL Mobile Communication Co Ltd
Priority to CN201911280912.8A
Publication of CN110958392A
Legal status: Pending

Classifications

    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 - Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/80 - Camera processing pipelines; Components thereof

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Studio Devices (AREA)

Abstract

The invention provides a shooting method for a terminal device, the terminal device, and a storage medium. According to the invention, the first camera and the second camera are opened simultaneously to acquire a first image and a second image respectively, the first image is uploaded to a server to obtain a portrait outline, a portrait image is extracted from the first image according to the portrait outline, and the portrait image is synthesized with the second image. This achieves the purpose of simulating the rear camera photographing the user and the user's surroundings, and enlarges the field of view of the surroundings when the user takes a self-portrait.

Description

Shooting method of terminal equipment, terminal equipment and storage medium
Technical Field
The invention belongs to the field of camera shooting on terminals, and in particular relates to a shooting method for a terminal device, a terminal device, and a storage medium.
Background
With the development of mobile device technology, mobile devices have become items that users carry with them at all times. Taking photos with a mobile phone is commonplace, and the demand for photography grows by the day. As the processing power of smartphones improves and AI technology develops, the photographing functions of smartphones become more and more powerful, and taking self-portraits with a smartphone has become routine for most users. When travelling, users want to be able to photograph themselves together with the scenery anytime and anywhere, so as not to miss any view. However, for such a group photo the user would often prefer to shoot with the rear camera: the scenery is relatively far from the camera, and a wide-angle lens can be selected so that more of the scenery fits into the frame. In that case, though, the user has to find someone else to help take the photo, because the front camera has the following disadvantages: 1. When the user takes a selfie, the distance between the user and the camera is short, so the user's portrait occupies most of the preview frame and the scenery occupies only a small part. 2. The field of view of a front camera is generally small, and its view of the scenery is not as good as that of a wide-angle camera, so the group photo of the user and the scenery turns out poorly.
For this reason, it is desirable to provide a shooting method for a terminal device that simulates the rear camera photographing both the user and the user's environment (e.g., scenery), thereby enlarging the field of view of the environment when the user takes a self-portrait.
Disclosure of Invention
The embodiments of the invention provide a shooting method for a terminal device, the terminal device, and a storage medium. According to the invention, the first camera and the second camera are opened simultaneously to acquire a first image and a second image respectively, the first image is uploaded to a server to obtain a portrait outline, a portrait image is extracted from the first image according to the portrait outline, and the portrait image is synthesized with the second image. This achieves the purpose of simulating the rear camera photographing the user and the user's surroundings, and enlarges the field of view of the surroundings when the user takes a self-portrait.
According to a first aspect of the present invention, there is provided a photographing method of a terminal device having a first camera and a second camera, the method comprising: calling a first camera and a second camera to respectively obtain a first image and a second image; uploading the first image to a server; acquiring a portrait outline generated by the server; extracting a portrait image of the first image according to the portrait outline; synthesizing the portrait image with the second image to obtain a composite image; and outputting the composite image.
Further, in the step of invoking the first camera and the second camera to respectively acquire the first image and the second image, the method further comprises: displaying the first image and the second image on a display screen of the terminal device in a split-screen manner.
Further, in the step of uploading the first image to a server, the server generates the portrait contour through neural network calculation.
Further, after the step of extracting the portrait image of the first image according to the portrait contour and before the step of synthesizing the portrait image with the second image, the method further comprises: scaling the portrait image by a corresponding ratio.
According to a second aspect of the present invention, there is provided a terminal device comprising: the calling module is used for calling the first camera and the second camera to respectively acquire a first image and a second image; the uploading module is connected with the calling module and is used for uploading the first image to a server; the acquisition module is used for acquiring the portrait outline generated by the server; the extraction module is respectively connected with the calling module and the acquisition module, and is used for extracting the portrait image of the first image according to the portrait outline; the synthesis module is respectively connected with the calling module and the extraction module and is used for synthesizing the portrait image with the second image to obtain a composite image; and the output module is connected with the synthesis module and used for outputting the composite image.
Furthermore, the terminal device further comprises a display module connected with the calling module, and the display module is used for displaying the first image and the second image on a display screen of the terminal device in a split screen manner.
Furthermore, the terminal device further comprises a scaling module connected with the extraction module, and the scaling module is used for scaling the portrait image by a corresponding ratio.
Further, the server is used for generating the portrait outline through neural network calculation.
Further, the terminal device further comprises a processor and a memory, wherein the processor is electrically connected with the memory, the memory is used for storing instructions and data, and the processor is used for executing the steps in the shooting method.
According to a third aspect of the present invention, there is provided a storage medium having stored therein a plurality of instructions adapted to be loaded by a processor to perform the above-described photographing method.
According to the embodiment of the invention, the first camera and the second camera are opened simultaneously to acquire a first image and a second image respectively, the first image is uploaded to a server to obtain a portrait outline, a portrait image is extracted from the first image according to the portrait outline, and the portrait image is synthesized with the second image. This achieves the effect of simulating the rear camera photographing the user and the user's surroundings, and enlarges the field of view of the surroundings when the user takes a self-portrait.
Drawings
The technical solution and the advantages of the present invention will be apparent from the following detailed description of the embodiments of the present invention with reference to the accompanying drawings.
Fig. 1 is a schematic flowchart illustrating steps of a shooting method of a terminal device according to an embodiment of the present invention.
Fig. 2 is a schematic diagram of a first structure of a terminal device according to an embodiment of the present invention.
Fig. 3 is a schematic diagram of a second structure of the terminal device according to the embodiment of the present invention.
Fig. 4 is a schematic diagram of a third structure of the terminal device according to the embodiment of the present invention.
Fig. 5 is a schematic structural diagram of a terminal device according to an embodiment of the present invention.
Detailed Description
The technical solution in the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present invention. It is to be understood that the described embodiments are merely exemplary of the invention, and not restrictive of the full scope of the invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The terms "first," "second," "third," and the like in the description and in the claims, as well as in the drawings, if any, are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the objects so described are interchangeable under appropriate circumstances. Furthermore, the terms "comprising" and "having," as well as any variations thereof, are intended to cover non-exclusive inclusions.
In particular embodiments, the drawings discussed below and the embodiments used to describe the principles of the present disclosure are by way of illustration only and should not be construed to limit the scope of the present disclosure. Those skilled in the art will understand that the principles of the present invention may be implemented in any suitably arranged system. Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. Further, a terminal according to an exemplary embodiment will be described in detail with reference to the accompanying drawings. Like reference symbols in the various drawings indicate like elements.
The terminology used in the detailed description is for the purpose of describing particular embodiments only and is not intended to be limiting of the inventive concepts. Unless the context clearly dictates otherwise, expressions used in the singular form encompass expressions in the plural form. In the present specification, it is to be understood that terms such as "comprising," "having," and "containing" are intended to specify the presence of stated features, integers, steps, acts, or combinations thereof, as taught in the present specification, and are not intended to preclude the presence or addition of one or more other features, integers, steps, acts, or combinations thereof. Like reference symbols in the various drawings indicate like elements.
As shown in fig. 1, the present invention provides a shooting method of a terminal device having a first camera and a second camera. The photographing method includes the following steps.
Step S10, call the first camera and the second camera to acquire the first image and the second image, respectively.
In the embodiment of the invention, the first camera is a front camera used for self-portraits, and the second camera is a rear camera, for example, but not limited to, a wide-angle camera. Optionally, after the first image and the second image are acquired, they are displayed on the display screen of the terminal device in a split-screen manner so that the user can monitor the shot.
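As a rough illustration of step S10 (not part of the patent itself), the following Python sketch captures one frame from each of two cameras and builds a split-screen preview. The OpenCV camera indices 0 and 1 are assumptions; a real phone would use its platform camera API instead.

```python
# Illustrative sketch only: grab one frame from an assumed front camera (index 0)
# and an assumed rear wide-angle camera (index 1), then build a side-by-side preview.
import cv2

def capture_first_and_second_image(front_index=0, rear_index=1):
    front_cam = cv2.VideoCapture(front_index)
    rear_cam = cv2.VideoCapture(rear_index)
    try:
        ok1, first_image = front_cam.read()
        ok2, second_image = rear_cam.read()
        if not (ok1 and ok2):
            raise RuntimeError("failed to read a frame from one of the cameras")
        return first_image, second_image
    finally:
        front_cam.release()
        rear_cam.release()

def split_screen_preview(first_image, second_image):
    # Scale both frames to the same height and place them side by side.
    h = min(first_image.shape[0], second_image.shape[0])
    left = cv2.resize(first_image, (first_image.shape[1] * h // first_image.shape[0], h))
    right = cv2.resize(second_image, (second_image.shape[1] * h // second_image.shape[0], h))
    return cv2.hconcat([left, right])
```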
Step S20, upload the first image to a server.
In the embodiment of the invention, the first image is uploaded to the server over a network, and the portrait outline in the first image is computed by a neural network on the server. Because a neural network, a deep learning method, is used, the portrait outline in the first image can be extracted more accurately. The network may be a wired network or a wireless network.
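A minimal client-side sketch of steps S20 and S30 follows, assuming a hypothetical HTTP endpoint that accepts a JPEG upload and returns the portrait outline as a JSON list of [x, y] pixel coordinates; the patent does not specify the actual server interface.

```python
# Hypothetical client for steps S20/S30: upload the first image and receive the
# portrait outline computed by the server-side neural network.
import cv2
import numpy as np
import requests

def request_portrait_outline(first_image, server_url="https://example.com/portrait-outline"):
    ok, encoded = cv2.imencode(".jpg", first_image)  # serialize the frame as JPEG
    if not ok:
        raise RuntimeError("JPEG encoding failed")
    response = requests.post(
        server_url,
        files={"image": ("first_image.jpg", encoded.tobytes(), "image/jpeg")},
        timeout=10,
    )
    response.raise_for_status()
    # The outline is returned as an N x 2 array of integer pixel coordinates.
    return np.array(response.json()["outline"], dtype=np.int32)
```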
Step S30, acquire the portrait outline generated by the server.
In the embodiment of the invention, the server generates the portrait outline and feeds the portrait outline back to the terminal equipment through the network.
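For completeness, here is one possible shape of the server side, sketched with Flask; the segmentation network itself is left as a clearly labeled placeholder, since the patent only says that a neural network computes the outline.

```python
# Hypothetical server-side counterpart: decode the uploaded image, run a
# person-segmentation network, and return the largest portrait outline.
import cv2
import numpy as np
from flask import Flask, jsonify, request

app = Flask(__name__)

def run_segmentation_network(image_bgr):
    # Placeholder for the neural network mentioned in the patent: it should
    # return a binary mask (1 = person, 0 = background) the size of the image.
    raise NotImplementedError("plug in a trained person-segmentation model here")

@app.route("/portrait-outline", methods=["POST"])
def portrait_outline():
    data = np.frombuffer(request.files["image"].read(), dtype=np.uint8)
    image = cv2.imdecode(data, cv2.IMREAD_COLOR)
    mask = run_segmentation_network(image).astype(np.uint8)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    largest = max(contours, key=cv2.contourArea)            # keep the main subject
    return jsonify({"outline": largest.reshape(-1, 2).tolist()})
```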
Step S40, extract a portrait image from the first image according to the portrait outline.
In the embodiment of the invention, the terminal device extracts the portrait from the first image using the portrait outline generated by the server, so the portrait image is more accurate and natural. After the portrait image is obtained, it is scaled by a corresponding ratio to simulate the imaging effect of the rear camera, where the distance from the user to the camera is relatively large.
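A sketch of step S40 plus the scaling step, under the assumption that the outline arrives as pixel coordinates in the first image: the outline is rasterized into a mask, the portrait is cut out with an alpha channel, and then shrunk so the subject looks farther from the camera.

```python
# Illustrative extraction and scaling (step S40 and the scaling operation).
import cv2
import numpy as np

def extract_portrait(first_image, outline):
    mask = np.zeros(first_image.shape[:2], dtype=np.uint8)
    cv2.fillPoly(mask, [outline], 255)                 # fill the inside of the outline
    portrait_bgra = cv2.cvtColor(first_image, cv2.COLOR_BGR2BGRA)
    portrait_bgra[:, :, 3] = mask                      # the mask becomes the alpha channel
    x, y, w, h = cv2.boundingRect(outline)             # crop to the portrait region
    return portrait_bgra[y:y + h, x:x + w]

def scale_portrait(portrait_bgra, ratio=0.5):
    # ratio < 1 simulates the larger subject-to-camera distance of a rear-camera shot.
    new_size = (max(1, int(portrait_bgra.shape[1] * ratio)),
                max(1, int(portrait_bgra.shape[0] * ratio)))
    return cv2.resize(portrait_bgra, new_size, interpolation=cv2.INTER_AREA)
```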
Step S50, synthesize the portrait image with the second image to obtain a composite image.
In the embodiment of the present invention, when the portrait image is synthesized with the second image, the position of the portrait image in the second image, its size, or its rotation angle may be adjusted. During synthesis, the lighting angle of the second image at the time of shooting can also be analyzed, and details such as shadows can be added to the portrait image so that the composite image looks closer to an image shot directly with the rear camera.
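The synthesis in step S50 can be as simple as an alpha blend at a chosen position, as in the sketch below; rotation and the lighting/shadow refinements described above are omitted for brevity, and the position is assumed to keep the portrait fully inside the second image.

```python
# Illustrative alpha compositing for step S50.
import numpy as np

def synthesize(portrait_bgra, second_image, top_left=(0, 0)):
    result = second_image.copy()
    x, y = top_left
    h, w = portrait_bgra.shape[:2]
    roi = result[y:y + h, x:x + w]                       # region the portrait will cover
    alpha = portrait_bgra[:, :, 3:4].astype(np.float32) / 255.0
    blended = alpha * portrait_bgra[:, :, :3] + (1.0 - alpha) * roi
    result[y:y + h, x:x + w] = blended.astype(second_image.dtype)
    return result
```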
Step S60, output the composite image.
According to the embodiment of the invention, the first camera and the second camera are opened simultaneously to acquire a first image and a second image respectively, the first image is uploaded to a server to obtain a portrait outline, a portrait image is extracted from the first image according to the portrait outline, and the portrait image is synthesized with the second image. This achieves the effect of simulating the rear camera photographing the user and the user's surroundings, and enlarges the field of view of the surroundings when the user takes a self-portrait.
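Tying the pieces together, a hypothetical end-to-end flow built from the sketches above might look like this; none of these names come from the patent.

```python
# Hypothetical orchestration of steps S10-S60 using the helper sketches above.
import cv2

def shoot_composite_photo(server_url):
    first_image, second_image = capture_first_and_second_image()             # step S10
    outline = request_portrait_outline(first_image, server_url)              # steps S20/S30
    portrait = scale_portrait(extract_portrait(first_image, outline))        # step S40 + scaling
    composite_image = synthesize(portrait, second_image, top_left=(50, 50))  # step S50
    cv2.imwrite("composite.jpg", composite_image)                            # step S60: output
    return composite_image
```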
In other embodiments, while the first camera continuously acquires first images, the portrait outline can be continuously extracted by artificial intelligence and multiple frames can be compared, so that the skin tone of the subject and the color of the clothes are analyzed and the portrait outline is extracted more accurately. When the image is previewed, the edge of the portrait is given a smooth transition (for example, gradual transparency).
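One common way to get such a smooth, gradually transparent edge is to blur the alpha channel of the extracted portrait, as in this small sketch; the kernel size is an arbitrary assumption.

```python
# Illustrative edge feathering: soften the alpha channel so the portrait edge
# fades out over a few pixels instead of cutting off abruptly.
import cv2

def feather_portrait_edge(portrait_bgra, kernel_size=15):
    alpha = portrait_bgra[:, :, 3]
    out = portrait_bgra.copy()
    out[:, :, 3] = cv2.GaussianBlur(alpha, (kernel_size, kernel_size), 0)
    return out
```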
As shown in fig. 2, an embodiment of the present invention provides a terminal device, which includes a calling module 110, an uploading module 120, an obtaining module 130, an extracting module 140, a synthesizing module 150, and an outputting module 160.
The invoking module 110 is configured to invoke the first camera and the second camera to respectively acquire the first image and the second image.
In the embodiment of the invention, the first camera is a front camera used for self-portraits, and the second camera is a rear camera, for example, but not limited to, a wide-angle camera. Optionally, after the first image and the second image are acquired, they are displayed on the display screen of the terminal device in a split-screen manner so that the user can monitor the shot.
The upload module 120 is connected to the call module 110. The upload module 120 is configured to upload the first image to a server.
In the embodiment of the invention, the first image is uploaded to the server over a network, and the portrait outline in the first image is computed by a neural network on the server. Because a neural network, a deep learning method, is used, the portrait outline in the first image can be extracted more accurately. The network may be a wired network or a wireless network.
The obtaining module 130 is configured to obtain the portrait contour generated by the server.
In the embodiment of the invention, the server generates the portrait outline and feeds the portrait outline back to the terminal equipment through the network.
The extraction module 140 is connected to the calling module 110 and the obtaining module 130, respectively. The extracting module 140 is configured to extract a portrait image of the first image according to the portrait contour.
In the embodiment of the invention, the terminal device extracts the portrait from the first image using the portrait outline generated by the server, so the portrait image is more accurate and natural. After the portrait image is obtained, it is scaled by a corresponding ratio to simulate the imaging effect of the rear camera, where the distance from the user to the camera is relatively large.
The synthesizing module 150 is connected to the calling module 110 and the extracting module 140 respectively. The synthesis module 150 is configured to synthesize the portrait image with the second image to obtain a composite image.
In the embodiment of the present invention, when the portrait image is synthesized with the second image, the position of the portrait image in the second image, its size, or its rotation angle may be adjusted. During synthesis, the lighting angle of the second image at the time of shooting can also be analyzed, and details such as shadows can be added to the portrait image so that the composite image looks closer to an image shot directly with the rear camera.
The output module 160 is connected to the synthesis module 150. The output module 160 is configured to output the composite image.
According to the embodiment of the invention, the first camera and the second camera are opened simultaneously to acquire a first image and a second image respectively, the first image is uploaded to a server to obtain a portrait outline, a portrait image is extracted from the first image according to the portrait outline, and the portrait image is synthesized with the second image. This achieves the effect of simulating the rear camera photographing the user and the user's surroundings, and enlarges the field of view of the surroundings when the user takes a self-portrait.
As shown in fig. 3, an embodiment of the present invention provides a terminal device, which includes a calling module 110, an uploading module 120, an obtaining module 130, an extracting module 140, a synthesizing module 150, an output module 160, a display module 170, and a scaling module 180.
The invoking module 110 is configured to invoke the first camera and the second camera to respectively acquire the first image and the second image.
In the embodiment of the invention, the first camera is a front camera used for self-portraits, and the second camera is a rear camera, for example, but not limited to, a wide-angle camera. Optionally, after the first image and the second image are acquired, they are displayed on the display screen of the terminal device in a split-screen manner so that the user can monitor the shot.
The upload module 120 is connected to the call module 110. The upload module 120 is configured to upload the first image to a server.
In the embodiment of the invention, the first image is uploaded to the server over a network, and the portrait outline in the first image is computed by a neural network on the server. Because a neural network, a deep learning method, is used, the portrait outline in the first image can be extracted more accurately. The network may be a wired network or a wireless network.
The obtaining module 130 is configured to obtain the portrait contour generated by the server.
In the embodiment of the invention, the server generates the portrait outline and feeds the portrait outline back to the terminal equipment through the network.
The extraction module 140 is connected to the calling module 110 and the obtaining module 130, respectively. The extracting module 140 is configured to extract a portrait image of the first image according to the portrait contour.
In the embodiment of the invention, the terminal device extracts the portrait from the first image using the portrait outline generated by the server, so the portrait image is more accurate and natural. After the portrait image is obtained, it is scaled by a corresponding ratio to simulate the imaging effect of the rear camera, where the distance from the user to the camera is relatively large.
The synthesizing module 150 is connected to the calling module 110 and the extracting module 140 respectively. The synthesis module 150 is configured to synthesize the portrait image with the second image to obtain a composite image.
In the embodiment of the present invention, when the portrait image is synthesized with the second image, the position of the portrait image in the second image, its size, or its rotation angle may be adjusted. During synthesis, the lighting angle of the second image at the time of shooting can also be analyzed, and details such as shadows can be added to the portrait image so that the composite image looks closer to an image shot directly with the rear camera.
The output module 160 is connected to the synthesis module 150. The output module 160 is configured to output the composite image.
The display module 170 is connected to the calling module 110. The display module 170 is configured to display the first image and the second image on a display screen of the terminal device in a split-screen manner.
In the embodiment of the invention, after the first image and the second image are acquired, they are displayed on the display screen of the terminal device in a split-screen manner so that the user can monitor the shot.
The scaling module 180 is connected to the extraction module 140. The scaling module 180 is configured to scale the portrait image by a corresponding ratio.
In the embodiment of the invention, after the portrait image is obtained, it is scaled by a corresponding ratio to simulate the imaging effect of the rear camera, where the distance between the user and the camera is relatively large.
According to the embodiment of the invention, the first camera and the second camera are opened simultaneously to acquire a first image and a second image respectively, the first image is uploaded to a server to obtain a portrait outline, a portrait image is extracted from the first image according to the portrait outline, and the portrait image is synthesized with the second image. This achieves the effect of simulating the rear camera photographing the user and the user's surroundings, and enlarges the field of view of the surroundings when the user takes a self-portrait.
Referring to fig. 4, an embodiment of the present invention further provides a terminal device 200, where the terminal device 200 may be a mobile phone, a tablet, a computer, or other devices. As shown in fig. 4, the terminal device 200 includes a processor 201 and a memory 202. Wherein the processor 201 is connected to the memory 202.
The processor 201 is a control center of the terminal device 200, connects various parts of the entire terminal device by using various interfaces and lines, and performs various functions of the terminal device and processes data by running or loading an application program stored in the memory 202 and calling data stored in the memory 202, thereby performing overall monitoring of the terminal device.
In this embodiment, the terminal device 200 is provided with a plurality of memory partitions, including a system partition and a target partition. The processor 201 in the terminal device 200 loads instructions corresponding to the processes of one or more application programs into the memory 202, and runs the application programs stored in the memory 202, so as to implement the following functions:
calling a first camera and a second camera to respectively obtain a first image and a second image;
uploading the first image to a server;
acquiring a portrait outline generated by the server;
extracting a portrait image of the first image according to the portrait outline;
synthesizing the portrait image with the second image to obtain a composite image; and
outputting the composite image.
Fig. 5 shows a specific block diagram of a terminal device 300 according to an embodiment of the present invention, where the terminal device 300 may be used to implement the shooting method of the terminal device provided in the above-described embodiment. The terminal device 300 may be a mobile phone or a tablet.
The RF circuit 310 is used for receiving and transmitting electromagnetic waves and converting between electromagnetic waves and electrical signals, so as to communicate with a communication network or other devices. The RF circuitry 310 may include various existing circuit elements for performing these functions, such as an antenna, a radio frequency transceiver, a digital signal processor, an encryption/decryption chip, a Subscriber Identity Module (SIM) card, memory, and so on. The RF circuit 310 may communicate with various networks, such as the Internet, an intranet, or a wireless network, or may communicate with other devices over a wireless network. The wireless network may be a cellular telephone network, a wireless local area network, or a metropolitan area network. The wireless network may use various communication standards, protocols, and technologies, including but not limited to Global System for Mobile Communication (GSM), Enhanced Data GSM Environment (EDGE), Wideband Code Division Multiple Access (WCDMA), Code Division Multiple Access (CDMA), Time Division Multiple Access (TDMA), Wireless Fidelity (Wi-Fi) (e.g., IEEE 802.11a, IEEE 802.11b, IEEE 802.11g and/or IEEE 802.11n), Voice over Internet Protocol (VoIP), Worldwide Interoperability for Microwave Access (WiMAX), other protocols for e-mail, instant messaging, and short message service, any other suitable communication protocol, and even protocols that have not yet been developed.
The memory 320 may be used to store software programs and modules, such as program instructions/modules corresponding to the photographing method in the above-described embodiment, and the processor 380 executes various functional applications and data processing by running the software programs and modules stored in the memory 320, so as to implement a photographing function. The memory 320 may include high speed random access memory and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory. In some examples, memory 320 may further include memory located remotely from processor 380, which may be connected to terminal device 300 via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The input unit 330 may be used to receive input numeric or character information and generate keyboard, mouse, joystick, optical or trackball signal inputs related to user settings and function control. In particular, the input unit 330 may include a touch-sensitive surface 331 as well as other input devices 332. The touch-sensitive surface 331, also referred to as a touch screen or touch pad, may collect touch operations by a user on or near the touch-sensitive surface 331 (e.g., operations by a user on or near the touch-sensitive surface 331 using a finger, a stylus, or any other suitable object or attachment), and drive the corresponding connection device according to a predetermined program. Alternatively, the touch sensitive surface 331 may comprise two parts, a touch detection means and a touch controller. The touch detection device detects the touch direction of a user, detects a signal brought by touch operation and transmits the signal to the touch controller; the touch controller receives touch information from the touch sensing device, converts the touch information into touch point coordinates, sends the touch point coordinates to the processor 380, and can receive and execute commands sent by the processor 380. In addition, the touch-sensitive surface 331 may be implemented using various types of resistive, capacitive, infrared, and surface acoustic waves. The input unit 330 may comprise other input devices 332 in addition to the touch sensitive surface 331. In particular, other input devices 332 may include, but are not limited to, one or more of a physical keyboard, function keys (such as volume control keys, switch keys, etc.), a trackball, a mouse, a joystick, and the like.
The display unit 340 may be used to display information input by or provided to the user and various graphic user interfaces of the terminal apparatus 300, which may be configured by graphics, text, icons, video, and any combination thereof. The Display unit 340 may include a Display panel 341, and optionally, the Display panel 341 may be configured in the form of an LCD (Liquid Crystal Display), an OLED (Organic Light-emitting diode), or the like. Further, touch-sensitive surface 331 may overlay display panel 341, and when touch-sensitive surface 331 detects a touch operation thereon or thereabout, communicate to processor 380 to determine the type of touch event, and processor 380 then provides a corresponding visual output on display panel 341 in accordance with the type of touch event. Although in FIG. 5, touch-sensitive surface 331 and display panel 341 are implemented as two separate components for input and output functions, in some embodiments, touch-sensitive surface 331 and display panel 341 may be integrated for input and output functions.
The terminal device 300 may also include at least one sensor 350, such as a light sensor, a motion sensor, and other sensors. Specifically, the light sensor may include an ambient light sensor that may adjust the brightness of the display panel 341 according to the brightness of ambient light, and a proximity sensor that may turn off the display panel 341 and/or the backlight when the terminal device 300 is moved to the ear. As one of the motion sensors, the gravity acceleration sensor can detect the magnitude of acceleration in each direction (generally, three axes), can detect the magnitude and direction of gravity when the mobile phone is stationary, and can be used for applications of recognizing the posture of the mobile phone (such as horizontal and vertical screen switching, related games, magnetometer posture calibration), vibration recognition related functions (such as pedometer and tapping), and the like; as for other sensors such as a gyroscope, a barometer, a hygrometer, a thermometer, and an infrared sensor, which can be configured in the terminal device 300, detailed descriptions thereof are omitted.
Audio circuitry 360, speaker 361, microphone 362 may provide an audio interface between a user and terminal device 300. The audio circuit 360 may transmit the electrical signal converted from the received audio data to the speaker 361, and the audio signal is converted by the speaker 361 and output; on the other hand, the microphone 362 converts the collected sound signal into an electrical signal, which is received by the audio circuit 360 and converted into audio data, which is then processed by the audio data output processor 380 and then transmitted to, for example, another terminal via the RF circuit 310, or the audio data is output to the memory 320 for further processing. The audio circuit 360 may also include an earbud jack to provide communication of peripheral headphones with the terminal device 300.
The terminal device 300 may assist the user in e-mail, web browsing, streaming media access, etc. through the transmission module 370 (e.g., a Wi-Fi module), which provides the user with wireless broadband internet access. Although fig. 5 shows the transmission module 370, it is understood that it does not belong to the essential constitution of the terminal device 300, and may be omitted entirely as needed within the scope not changing the essence of the invention.
The processor 380 is a control center of the terminal device 300, connects various parts of the entire mobile phone using various interfaces and lines, and performs various functions of the terminal device 300 and processes data by running or executing software programs and/or modules stored in the memory 320 and calling data stored in the memory 320, thereby performing overall monitoring of the mobile phone. Optionally, processor 380 may include one or more processing cores; in some embodiments, processor 380 may integrate an application processor, which primarily handles operating systems, user interfaces, applications, etc., and a modem processor, which primarily handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into processor 380.
Terminal device 300 also includes a power supply 390 (e.g., a battery) for powering the various components, which may be logically coupled to processor 380 via a power management system in some embodiments to manage charging, discharging, and power consumption management functions via the power management system. The power supply 390 may also include any component including one or more of a dc or ac power source, a recharging system, a power failure detection circuit, a power converter or inverter, a power status indicator, and the like.
Although not shown, the terminal device 300 may further include a camera (e.g., a front camera, a rear camera), a bluetooth module, and the like, which are not described in detail herein. Specifically, in this embodiment, the display unit of the terminal device is a touch screen display, the terminal device further includes a memory, and one or more programs, where the one or more programs are stored in the memory and configured to be executed by the one or more processors, and the one or more programs include instructions for:
calling a first camera and a second camera to respectively obtain a first image and a second image;
uploading the first image to a server;
acquiring a portrait outline generated by the server;
extracting a portrait image of the first image according to the portrait outline;
synthesizing the portrait image with the second image to obtain a composite image; and
outputting the composite image.
In specific implementation, the above modules may be implemented as independent entities, or may be combined arbitrarily to be implemented as the same or several entities, and specific implementation of the above modules may refer to the foregoing method embodiments, which are not described herein again.
It will be understood by those skilled in the art that all or part of the steps of the methods of the above embodiments may be performed by instructions or by instructions controlling associated hardware, and the instructions may be stored in a computer-readable storage medium and loaded and executed by a processor. To this end, embodiments of the present invention provide a storage medium, in which a plurality of instructions are stored, and the instructions can be loaded by a processor to execute the steps in any one of the photographing methods provided by the embodiments of the present invention.
Wherein the storage medium may include: read Only Memory (ROM), Random Access Memory (RAM), magnetic or optical disks, and the like.
Since the instructions stored in the storage medium can execute the steps in any shooting method provided by the embodiment of the present invention, the beneficial effects that can be achieved by any shooting method provided by the embodiment of the present invention can be achieved, for details, see the foregoing embodiments, and are not described herein again. The above operations can be implemented in the foregoing embodiments, and are not described in detail herein.
According to the embodiment of the invention, the first camera and the second camera are opened simultaneously to acquire a first image and a second image respectively, the first image is uploaded to a server to obtain a portrait outline, a portrait image is extracted from the first image according to the portrait outline, and the portrait image is synthesized with the second image. This achieves the effect of simulating the rear camera photographing the user and the user's surroundings, and enlarges the field of view of the surroundings when the user takes a self-portrait.
The shooting method of a terminal device, the terminal device, and the storage medium provided in the embodiments of the present invention have been described in detail above. Specific examples are used herein to explain the principles and implementation of the invention, and the description of the above embodiments is only intended to help understand the method of the invention and its core idea. At the same time, those skilled in the art may make changes to the specific embodiments and the application scope according to the idea of the invention. In summary, the content of this specification should not be construed as limiting the invention.

Claims (10)

1. A shooting method of a terminal device, the terminal device having a first camera and a second camera, the method comprising:
calling a first camera and a second camera to respectively obtain a first image and a second image;
uploading the first image to a server;
acquiring a portrait outline generated by the server;
extracting a portrait image of the first image according to the portrait outline;
synthesizing the portrait image with the second image to obtain a composite image; and
outputting the composite image.
2. The shooting method according to claim 1, wherein the step of calling the first camera and the second camera to respectively acquire the first image and the second image further comprises: displaying the first image and the second image on a display screen of the terminal device in a split-screen manner.
3. The photographing method according to claim 1, wherein in the step of uploading the first image to a server, the server generates the portrait outline by neural network calculation.
4. The photographing method according to claim 1, wherein after the step of extracting the portrait image of the first image from the portrait outline and before the step of synthesizing the portrait image with the second image, the method further comprises: scaling the portrait image by a corresponding ratio.
5. A terminal device, comprising:
the calling module is used for calling the first camera and the second camera to respectively acquire a first image and a second image;
the uploading module is connected with the calling module and is used for uploading the first image to a server;
the acquisition module is used for acquiring the portrait outline generated by the server;
the extraction module is respectively connected with the calling module and the acquisition module, and is used for extracting the portrait image of the first image according to the portrait outline;
the synthesis module is respectively connected with the calling module and the extraction module, and is used for synthesizing the portrait image with the second image to obtain a composite image; and
the output module is connected with the synthesis module and is used for outputting the composite image.
6. The terminal device according to claim 5, further comprising a display module connected to the invoking module, wherein the display module is configured to display the first image and the second image on a display screen of the terminal device in a split-screen manner.
7. The terminal device according to claim 5, further comprising a scaling module connected to the extraction module, wherein the scaling module is used for scaling the portrait image by a corresponding ratio.
8. The terminal device of claim 5, wherein the server is configured to generate the portrait outline through a neural network calculation.
9. The terminal device according to claim 5, further comprising a processor and a memory, wherein the processor is electrically connected to the memory, the memory is used for storing instructions and data, and the processor is used for executing the steps in the photographing method according to any one of claims 1 to 4.
10. A storage medium having stored therein a plurality of instructions adapted to be loaded by a processor to perform the photographing method according to any one of claims 1 to 4.
CN201911280912.8A 2019-12-13 2019-12-13 Shooting method of terminal equipment, terminal equipment and storage medium Pending CN110958392A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911280912.8A CN110958392A (en) 2019-12-13 2019-12-13 Shooting method of terminal equipment, terminal equipment and storage medium

Publications (1)

Publication Number Publication Date
CN110958392A (en) 2020-04-03

Family

ID=69981402

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911280912.8A Pending CN110958392A (en) 2019-12-13 2019-12-13 Shooting method of terminal equipment, terminal equipment and storage medium

Country Status (1)

Country Link
CN (1) CN110958392A (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105847676A (en) * 2016-03-28 2016-08-10 乐视控股(北京)有限公司 Image processing method and apparatus
CN107169939A (en) * 2017-05-31 2017-09-15 广东欧珀移动通信有限公司 Image processing method and related product
CN109089045A (en) * 2018-09-18 2018-12-25 上海连尚网络科技有限公司 A kind of image capture method and equipment and its terminal based on multiple photographic devices
CN110266942A (en) * 2019-06-03 2019-09-20 Oppo(重庆)智能科技有限公司 The synthetic method and Related product of picture

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112287155A (en) * 2020-10-30 2021-01-29 维沃移动通信有限公司 Image processing method and device
CN112287155B (en) * 2020-10-30 2024-03-22 维沃移动通信有限公司 Picture processing method and device

Legal Events

PB01 - Publication
SE01 - Entry into force of request for substantive examination
RJ01 - Rejection of invention patent application after publication (application publication date: 20200403)