
CN106534590B - Photo processing method, device and terminal - Google Patents


Info

Publication number
CN106534590B
CN106534590B (application CN201611224704.2A)
Authority
CN
China
Prior art keywords
target person
camera
depth
photo
mobile
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201611224704.2A
Other languages
Chinese (zh)
Other versions
CN106534590A (en)
Inventor
张宇希
张腾
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nubia Technology Co Ltd
Original Assignee
Nubia Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nubia Technology Co Ltd filed Critical Nubia Technology Co Ltd
Priority to CN201611224704.2A priority Critical patent/CN106534590B/en
Publication of CN106534590A publication Critical patent/CN106534590A/en
Application granted granted Critical
Publication of CN106534590B publication Critical patent/CN106534590B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04MTELEPHONIC COMMUNICATION
    • H04M1/00Substation equipment, e.g. for use by subscribers
    • H04M1/72Mobile telephones; Cordless telephones, i.e. devices for establishing wireless links to base stations without route selection
    • H04M1/724User interfaces specially adapted for cordless or mobile telephones
    • H04M1/72403User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality
    • H04M1/7243User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality with interactive means for internal management of messages
    • H04M1/72439User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality with interactive means for internal management of messages for image or video messaging
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/95Computational photography systems, e.g. light-field imaging systems
    • H04N23/951Computational photography systems, e.g. light-field imaging systems by using two or more images to influence resolution, frame rate or aspect ratio
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/14Picture signal circuitry for video frequency region
    • H04N5/144Movement detection
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/222Studio circuitry; Studio devices; Studio equipment
    • H04N5/2224Studio circuitry; Studio devices; Studio equipment related to virtual studio applications
    • H04N5/2226Determination of depth image, e.g. for foreground/background separation

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Computing Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Business, Economics & Management (AREA)
  • General Business, Economics & Management (AREA)
  • Human Computer Interaction (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Telephone Function (AREA)

Abstract

The invention discloses a photo processing method. The method comprises: identifying a target person in a photo captured by a first camera and a second camera; obtaining depth-of-field information for the target person in the photo through the first camera and the second camera; and, after the identified target person is moved from a first position to a second position, adjusting the size of the target person according to the change in depth of field. The invention also discloses a photo processing device and a terminal. This solves the problem in the related art that, after a person in a photo is moved, the person's size no longer matches the background: the target person's size is automatically adjusted according to the change in depth of field, so that the overall effect of the photo is better coordinated and the user experience is improved.

Description

Photo processing method, device and terminal
Technical field
The present invention relates to the field of terminal technology, and in particular to a photo processing method, device and terminal.
Background technique
With the development of the mobile internet and the popularization of intelligent mobile terminals, the user base of intelligent mobile terminals keeps growing, and users place increasingly intelligent and user-friendly demands on software.
In existing technology, an intelligent mobile terminal can in fact serve the user as a game console or a television, as a learning machine, or even as a playground for a child, bringing more enjoyment to our lives.
As users grow more dependent on mobile terminals, the applications on the terminal keep multiplying. Current mobile terminals can take photos with twin lenses: one camera measures the depth of field of the subject, another camera shoots the subject, and the information captured by the two cameras is synthesized into one photo.
After the photo is taken, if the subject is moved to another position, the depth-of-field information of that position is unknown; as a result, after the move, the size of the moved subject may be out of proportion with the surrounding scenery.
No solution has yet been proposed for the problem in the related art that, after a person in a photo is moved, the person's size does not match the background.
Summary of the invention
The main object of the present invention is to propose a photo processing method, device and terminal, aiming to solve the problem in the related art that, after a person in a photo is moved, the person's size does not match the background.
To achieve the above object, the present invention provides a photo processing method, comprising:
identifying a target person in a photo captured by a first camera and a second camera, wherein the first camera is used to measure the depth of field of the subject, the second camera is used to shoot the subject, and the photo is synthesized from the information captured by the first camera and the second camera;
obtaining depth-of-field information for the target person in the photo through the first camera and the second camera;
after the identified target person is moved from a first position to a second position, adjusting the size of the target person according to the change in depth of field.
Further, identifying the target person in the photo captured by the first camera and the second camera comprises:
performing image segmentation by combining the depth image with the original image, separating out the target person and the background.
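As a rough illustration of the segmentation step, a foreground mask can be thresholded from the per-pixel depth map before being refined with the original image. The threshold value and the list-of-lists depth representation below are assumptions made for this sketch, not details from the patent:

```python
def segment_person(depth_map, person_max_depth):
    """Return a boolean mask marking pixels closer than person_max_depth.

    depth_map is a 2-D list of per-pixel depths (as obtained from the
    dual camera); the depth threshold separating person from background
    is an assumed parameter. A real system would refine this mask using
    the colour image as the patent describes.
    """
    return [[d < person_max_depth for d in row] for row in depth_map]

depth = [[1.0, 1.1, 4.0],
         [1.2, 1.0, 4.2],
         [4.1, 4.0, 4.3]]
mask = segment_person(depth, 2.0)
# Foreground (person) pixels are those with depth under the 2.0 m threshold
print(mask[0])  # [True, True, False]
```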
Further, obtaining the depth-of-field information of all target persons in a group photo through the first camera and the second camera comprises: obtaining the depth information of the scene through a binocular ranging platform or a depth sensor.
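Binocular ranging recovers depth from the disparity between the two cameras' views of the same scene point. A minimal sketch of the underlying pinhole relation Z = f·B/d, with purely illustrative calibration values since the patent gives no numbers:

```python
def depth_from_disparity(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """Classic pinhole stereo relation: Z = f * B / d.

    focal_px: focal length in pixels; baseline_m: distance between the
    two cameras; disparity_px: horizontal pixel offset of the same scene
    point between the two images. All values here are illustrative.
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return focal_px * baseline_m / disparity_px

# Example: 1000 px focal length, 5 cm baseline, 25 px disparity -> 2.0 m
print(depth_from_disparity(1000.0, 0.05, 25.0))  # 2.0
```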
Further, adjusting the size of the target person according to the change in depth of field comprises: determining the distance the target person has moved according to the depth-of-field information; and adjusting the size of the target person according to the determined distance.
Further, adjusting the size of the target person according to the determined distance the target person has moved comprises:
after the identified target person is moved from the first position to the second position, in the case where the depth of field of the target person becomes larger, reducing the target person according to a preset ratio;
after the identified target person is moved from the first position to the second position, in the case where the depth of field of the target person becomes smaller, enlarging the target person according to a preset ratio.
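One way to realize the "preset ratio" in the two cases above is the inverse-depth ratio of a pinhole camera, under which apparent size shrinks as depth grows. Treating the ratio this way is an assumption of this sketch; the patent only requires that some preset ratio be applied:

```python
def rescale_factor(depth_before: float, depth_after: float) -> float:
    """Scale factor for the person after moving between depths.

    Under a pinhole model, apparent size is inversely proportional to
    depth, so the factor is depth_before / depth_after: below 1 when
    the person moves farther away (shrink), above 1 when the person
    moves nearer (enlarge). This inverse-depth choice is an assumption;
    the patent's claims permit any preset ratio.
    """
    if depth_after <= 0:
        raise ValueError("depth must be positive")
    return depth_before / depth_after

print(rescale_factor(2.0, 4.0))  # 0.5: moved farther, person shrinks
print(rescale_factor(4.0, 2.0))  # 2.0: moved nearer, person enlarges
```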
Further, after adjusting the size of the target person according to the change in depth of field, the method further comprises: performing background filling, according to the depth-of-field information, on the first position vacated by the target person.
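The background-filling step could, in the simplest case, paint the region the person vacated with a background value. The constant-fill below is a deliberately crude stand-in; a real implementation would inpaint texture from the surroundings, guided by the depth information:

```python
def fill_background(image, mask, fill_value):
    """Fill the pixels the person vacated (mask == True) with a constant
    background value. This is the crudest possible stand-in for the
    patent's background filling; practical systems would synthesize the
    fill from neighbouring background texture instead of a constant.
    """
    return [[fill_value if m else px
             for px, m in zip(img_row, mask_row)]
            for img_row, mask_row in zip(image, mask)]

image = [[9, 9, 3],
         [9, 9, 3]]
hole  = [[True, True, False],
         [True, True, False]]
print(fill_background(image, hole, 3))  # [[3, 3, 3], [3, 3, 3]]
```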
According to another aspect of the present invention, a photo processing device is provided, comprising:
an identification module, configured to identify a target person in a photo captured by a first camera and a second camera, wherein the first camera is used to measure the depth of field of the subject, the second camera is used to shoot the subject, and the photo is synthesized from the information captured by the first camera and the second camera;
an obtaining module, configured to obtain depth-of-field information for the target person in the photo through the first camera and the second camera;
an adjustment module, configured to adjust the size of the target person according to the change in depth of field after the identified target person is moved from a first position to a second position.
Further, the identification module is also configured to perform image segmentation by combining the depth image with the original image, separating out the target person and the background.
Further, the obtaining module is also configured to obtain the depth information of the scene through a binocular ranging platform or a depth sensor.
Further, the adjustment module comprises:
a determination unit, configured to determine the distance the target person has moved according to the depth-of-field information;
an adjustment unit, configured to adjust the size of the target person according to the determined distance the target person has moved.
Further, the adjustment module comprises:
a reducing unit, configured to reduce the target person according to a preset ratio when, after the identified target person is moved from the first position to the second position, the depth of field of the target person becomes larger;
an amplifying unit, configured to enlarge the target person according to a preset ratio when, after the identified target person is moved from the first position to the second position, the depth of field of the target person becomes smaller.
Further, the device further comprises: a background filling module, configured to perform background filling, according to the depth-of-field information, on the first position vacated by the target person.
According to another aspect of the present invention, a terminal is additionally provided, comprising the above device.
Through the invention, a target person in a photo captured by a first camera and a second camera is identified; depth-of-field information for the target person in the photo is obtained through the first camera and the second camera; and, after the identified target person is moved from a first position to a second position, the size of the target person is adjusted according to the change in depth of field. This solves the problem in the related art that, after a person in a photo is moved, the person's size does not match the background: the target person's size is automatically adjusted according to the change in depth of field, so that the overall effect of the photo is better coordinated and the user experience is improved.
Detailed description of the invention
Fig. 1 is a hardware structural diagram of a mobile terminal for realizing embodiments of the present invention;
Fig. 2 is the wireless communication system schematic diagram of mobile terminal as shown in Figure 1;
Fig. 3 is the flow chart of photo processing method according to an embodiment of the present invention;
Fig. 4 is a schematic diagram of adjusting a person after movement according to an embodiment of the present invention;
Fig. 5 is the schematic diagram of stereoscopic imaging apparatus according to an embodiment of the present invention;
Fig. 6 is the schematic diagram one of binocular ranging basic principle according to an embodiment of the present invention;
Fig. 7 is the schematic diagram two of binocular ranging basic principle according to an embodiment of the present invention;
Fig. 8 is the schematic diagram three of binocular ranging basic principle according to an embodiment of the present invention;
Fig. 9 is the block diagram of picture processing device according to an embodiment of the present invention;
Figure 10 is the block diagram one of picture processing device according to the preferred embodiment of the invention;
Figure 11 is the block diagram two of picture processing device according to the preferred embodiment of the invention;
Figure 12 is the block diagram three of picture processing device according to the preferred embodiment of the invention.
The realization of the object, functional characteristics and advantages of the present invention will be further described with reference to the accompanying drawings in conjunction with the embodiments.
Specific embodiment
It should be appreciated that the specific embodiments described herein are merely illustrative of the present invention, it is not intended to limit the present invention.
The mobile terminal of each embodiment of the present invention will now be described with reference to the drawings. In the following description, suffixes used to denote elements, such as "module", "component" or "unit", are used only to facilitate the description of the invention and have no specific meaning in themselves. Therefore, "module" and "component" can be used interchangeably.
Mobile terminals can be implemented in various forms. For example, the terminals described in the present invention may include mobile terminals such as mobile phones, smart phones, laptops, digital broadcast receivers, PDAs (personal digital assistants), PADs (tablet computers), PMPs (portable media players) and navigation devices, as well as fixed terminals such as digital TVs and desktop computers. Hereinafter it is assumed that the terminal is a mobile terminal. However, those skilled in the art will understand that, except for elements used particularly for mobile purposes, the construction according to embodiments of the present invention can also be applied to fixed-type terminals.
Fig. 1 is a hardware structural diagram of a mobile terminal for realizing embodiments of the present invention.
The mobile terminal 100 may include a wireless communication unit 110, an A/V (audio/video) input unit 120, a user input unit 130, a sensing unit 140, an output unit 150, a memory 160, an interface unit 170, a controller 180, a power supply unit 190, etc.
Fig. 1 shows the mobile terminal 100 having various components, but it should be understood that implementing all of the illustrated components is not a requirement. More or fewer components may alternatively be implemented. The elements of the mobile terminal 100 will be described in more detail below.
The wireless communication unit 110 typically includes one or more components allowing radio communication between the mobile terminal 100 and a wireless communication system or network. For example, the wireless communication unit 110 may include at least one of a broadcast receiving module 111, a mobile communication module 112, a wireless internet module 113, a short-range communication module 114 and a location information module 115.
The broadcast receiving module 111 receives a broadcast signal and/or broadcast-related information from an external broadcast management server via a broadcast channel. The broadcast channel may include a satellite channel and/or a terrestrial channel. The broadcast management server may be a server that generates and sends a broadcast signal and/or broadcast-related information, or a server that receives a previously generated broadcast signal and/or broadcast-related information and sends it to a terminal. The broadcast signal may include a TV broadcast signal, a radio broadcast signal, a data broadcast signal, etc. Moreover, the broadcast signal may further include a broadcast signal combined with a TV or radio broadcast signal. Broadcast-related information can also be provided via a mobile communication network, in which case the broadcast-related information can be received by the mobile communication module 112. The broadcast signal can exist in various forms; for example, it can exist in the form of an electronic program guide (EPG) of digital multimedia broadcasting (DMB), an electronic service guide (ESG) of digital video broadcasting-handheld (DVB-H), etc. The broadcast receiving module 111 can receive broadcast signals by using various types of broadcast systems. In particular, the broadcast receiving module 111 can receive digital broadcasts using digital broadcast systems such as digital multimedia broadcasting-terrestrial (DMB-T), digital multimedia broadcasting-satellite (DMB-S), digital video broadcasting-handheld (DVB-H), the forward link media (MediaFLO®) radio data system, integrated services digital broadcasting-terrestrial (ISDB-T), etc. The broadcast receiving module 111 may be constructed to be suitable for the various broadcast systems providing broadcast signals as well as the above-mentioned digital broadcast systems. The broadcast signal and/or broadcast-related information received via the broadcast receiving module 111 can be stored in the memory 160 (or other type of storage medium).
The mobile communication module 112 sends radio signals to and/or receives radio signals from at least one of a base station (for example, an access point, a node B, etc.), an external terminal and a server. Such radio signals may include voice call signals, video call signals, or various types of data sent and/or received according to text and/or multimedia messages.
The wireless internet module 113 supports wireless internet access of the mobile terminal. The module can be internally or externally coupled to the terminal. The wireless internet access technologies involved in the module may include WLAN (wireless LAN) (Wi-Fi), Wibro (wireless broadband), Wimax (worldwide interoperability for microwave access), HSDPA (high-speed downlink packet access), etc.
The short-range communication module 114 is a module for supporting short-range communication. Some examples of short-range communication technologies include Bluetooth™, radio frequency identification (RFID), Infrared Data Association (IrDA), ultra wideband (UWB), ZigBee™, etc.
The location information module 115 is a module for checking or obtaining the location information of the mobile terminal. A typical example of the location information module 115 is a GPS (global positioning system) module. According to current technology, GPS calculates distance information from three or more satellites together with precise time information and applies triangulation to the calculated information, thereby accurately calculating three-dimensional current location information in terms of longitude, latitude and altitude. Currently, the method for calculating position and time information uses three satellites and corrects errors in the calculated position and time information by using one additional satellite. In addition, GPS can calculate speed information by continuously calculating the current location in real time.
The A/V input unit 120 is for receiving audio or video signals. The A/V input unit 120 may include a camera 121 and a microphone 122. The camera 121 processes image data of still pictures or video obtained by the image capture apparatus in a video capture mode or an image capture mode. The processed image frames can be displayed on the display unit 151. The image frames processed by the camera 121 can be stored in the memory 160 (or other storage medium) or sent via the wireless communication unit 110; two or more cameras 121 can be provided according to the construction of the mobile terminal 100. The microphone 122 can receive sound (audio data) in operation modes such as a phone call mode, a recording mode and a speech recognition mode, and can process such sound into audio data. In the phone call mode, the processed audio (voice) data can be converted into a format that can be sent to a mobile communication base station via the mobile communication module 112. The microphone 122 can implement various types of noise elimination (or suppression) algorithms to eliminate (or suppress) noise or interference generated while sending and receiving audio signals.
The user input unit 130 can generate key input data according to commands input by the user, to control various operations of the mobile terminal 100. The user input unit 130 allows the user to input various types of information and may include a keyboard, a dome switch, a touch pad (for example, a touch-sensitive component detecting changes in resistance, pressure, capacitance, etc. caused by contact), a jog wheel, a jog switch, etc. In particular, when the touch pad is superimposed on the display unit 151 in the form of a layer, a touch screen can be formed.
The sensing unit 140 detects the current state of the mobile terminal 100 (for example, the open or closed state of the mobile terminal 100), the position of the mobile terminal 100, the presence or absence of user contact with the mobile terminal 100 (that is, touch input), the orientation of the mobile terminal 100, the acceleration or deceleration movement and direction of the mobile terminal 100, etc., and generates commands or signals for controlling the operation of the mobile terminal 100. For example, when the mobile terminal 100 is implemented as a slide-type mobile phone, the sensing unit 140 can sense whether the slide-type phone is open or closed. In addition, the sensing unit 140 can detect whether the power supply unit 190 supplies power or whether the interface unit 170 is coupled with an external device. The sensing unit 140 may include a proximity sensor 141.
The interface unit 170 serves as an interface through which at least one external device can connect with the mobile terminal 100. For example, external devices may include wired or wireless headset ports, external power supply (or battery charger) ports, wired or wireless data ports, memory card ports, ports for connecting a device having an identification module, audio input/output (I/O) ports, video I/O ports, earphone ports, etc. The identification module can store various information for verifying the user's use of the mobile terminal 100 and may include a user identity module (UIM), a subscriber identity module (SIM), a universal subscriber identity module (USIM), etc. In addition, the device having the identification module (hereinafter referred to as an "identification device") can take the form of a smart card; therefore, the identification device can be connected with the mobile terminal 100 via a port or other connection device. The interface unit 170 can be used to receive input (for example, data information, power, etc.) from an external device and transfer the received input to one or more elements within the mobile terminal 100, or can be used to transmit data between the mobile terminal 100 and an external device.
In addition, when the mobile terminal 100 is connected with an external cradle, the interface unit 170 may serve as a path allowing power to be supplied from the cradle to the mobile terminal 100, or may serve as a path allowing various command signals input from the cradle to be transferred to the mobile terminal 100. Various command signals or power input from the cradle may serve as signals for recognizing whether the mobile terminal 100 is accurately mounted on the cradle. The output unit 150 is configured to provide output signals in a visual, audio and/or tactile manner (for example, audio signals, video signals, alarm signals, vibration signals, etc.). The output unit 150 may include a display unit 151, an audio output module 152, an alarm unit 153, etc.
The display unit 151 can display information processed in the mobile terminal 100. For example, when the mobile terminal 100 is in a phone call mode, the display unit 151 can display a user interface (UI) or graphical user interface (GUI) related to the call or other communication (for example, text messaging, multimedia file downloading, etc.). When the mobile terminal 100 is in a video call mode or an image capture mode, the display unit 151 can display captured images and/or received images, a UI or GUI showing video or images and related functions, etc.
Meanwhile, when the display unit 151 and the touch pad are superimposed on each other in the form of layers to form a touch screen, the display unit 151 can serve as both an input device and an output device. The display unit 151 may include at least one of a liquid crystal display (LCD), a thin film transistor LCD (TFT-LCD), an organic light-emitting diode (OLED) display, a flexible display, a three-dimensional (3D) display, etc. Some of these displays may be constructed to be transparent to allow the user to view through them from the outside; these can be called transparent displays. A typical transparent display can be, for example, a TOLED (transparent organic light-emitting diode) display, etc. According to a specifically desired embodiment, the mobile terminal 100 may include two or more display units (or other display devices); for example, the mobile terminal 100 may include an external display unit (not shown) and an internal display unit (not shown). The touch screen can be used to detect touch input pressure as well as touch input position and touch input area.
When the mobile terminal 100 is in a call signal receiving mode, a call mode, a recording mode, a speech recognition mode, a broadcast receiving mode or the like, the audio output module 152 can convert audio data received by the wireless communication unit 110 or stored in the memory 160 into an audio signal and output it as sound. Moreover, the audio output module 152 can provide audio output related to a specific function executed by the mobile terminal 100 (for example, a call signal receiving sound, a message receiving sound, etc.). The audio output module 152 may include a speaker, a buzzer, etc.
The alarm unit 153 can provide output to notify the mobile terminal 100 of the occurrence of an event. Typical events may include call reception, message reception, key signal input, touch input, etc. In addition to audio or video output, the alarm unit 153 can provide output in different ways to notify of the occurrence of an event. For example, the alarm unit 153 can provide output in the form of vibration; when a call, a message or some other incoming communication is received, the alarm unit 153 can provide a tactile output (that is, vibration) to notify the user. By providing such tactile output, the user can recognize the occurrence of various events even when the user's mobile phone is in the user's pocket. The alarm unit 153 can also provide output notifying the occurrence of an event via the display unit 151 or the audio output module 152.
The memory 160 can store software programs for the processing and control operations executed by the controller 180, or can temporarily store data that have already been output or are to be output (for example, a phone book, messages, still images, videos, etc.). Moreover, the memory 160 can store data about vibrations of various modes and audio signals output when a touch is applied to the touch screen.
The memory 160 may include at least one type of storage medium, including flash memory, hard disk, multimedia card, card-type memory (for example, SD or DX memory, etc.), random access memory (RAM), static random access memory (SRAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), programmable read-only memory (PROM), magnetic memory, magnetic disk, optical disc, etc. Moreover, the mobile terminal 100 can cooperate with a network storage device that performs the storage function of the memory 160 through a network connection.
The controller 180 typically controls the overall operation of the mobile terminal. For example, the controller 180 performs control and processing related to voice calls, data communication, video calls, etc. In addition, the controller 180 may include a multimedia module 181 for reproducing (or playing back) multimedia data; the multimedia module 181 can be constructed within the controller 180 or can be constructed separately from the controller 180. The controller 180 can perform pattern recognition processing to recognize handwriting input or drawing input performed on the touch screen as characters or images.
The power supply unit 190 receives external power or internal power under the control of the controller 180 and provides the appropriate power required to operate each element and component.
The various embodiments described herein can be implemented with a computer-readable medium using, for example, computer software, hardware or any combination thereof. For hardware implementation, the embodiments described herein can be implemented by using at least one of an application-specific integrated circuit (ASIC), a digital signal processor (DSP), a digital signal processing device (DSPD), a programmable logic device (PLD), a field programmable gate array (FPGA), a processor, a controller, a microcontroller, a microprocessor, and an electronic unit designed to perform the functions described herein; in some cases, such embodiments can be implemented in the controller 180. For software implementation, embodiments such as processes or functions can be implemented with a separate software module that allows at least one function or operation to be performed. Software code can be implemented by a software application (or program) written in any appropriate programming language; the software code can be stored in the memory 160 and executed by the controller 180.
So far, the mobile terminal 100 has been described in terms of its functions. In addition, the mobile terminal 100 in the embodiments of the present invention can be a folding-type, bar-type, swing-type, slide-type or other various types of mobile terminal, which is not specifically limited here.
The mobile terminal 100 as shown in Fig. 1 may be constructed to operate with wired and wireless communication systems that send data via frames or packets, as well as satellite-based communication systems.
A communication system in which the mobile terminal according to the present invention can operate will now be described with reference to Fig. 2.
Such communication systems can use different air interfaces and/or physical layers. For example, air interfaces used by communication systems include frequency division multiple access (FDMA), time division multiple access (TDMA), code division multiple access (CDMA) and the universal mobile telecommunications system (UMTS) (in particular, long term evolution (LTE)), the global system for mobile communications (GSM), etc. As a non-limiting example, the following description relates to a CDMA communication system, but such teachings are equally applicable to other types of systems.
Referring to Fig. 2, a CDMA wireless communication system may include a plurality of mobile terminals 100, a plurality of base stations (BSs) 270, base station controllers (BSCs) 275, and a mobile switching center (MSC) 280. The MSC 280 is configured to interface with a public switched telephone network (PSTN) 290. The MSC 280 is also configured to interface with the BSCs 275, which may be coupled to the base stations 270 via backhaul links. The backhaul links may be constructed according to any of several known interfaces, including, for example, European standard/US standard high-capacity digital lines (E1/T1), asynchronous transfer mode (ATM), Internet protocol (IP), point-to-point protocol (PPP), frame relay, high-bit-rate digital subscriber line (HDSL), asymmetric digital subscriber line (ADSL), or various types of digital subscriber lines (xDSL). It will be appreciated that the system as shown in Fig. 2 may include a plurality of BSCs 275.
Each BS 270 may serve one or more sectors (or regions), each sector covered by an omnidirectional antenna or an antenna pointed in a particular direction radially away from the BS 270. Alternatively, each sector may be covered by two or more antennas for diversity reception. Each BS 270 may be configured to support a plurality of frequency assignments, with each frequency assignment having a particular spectrum (e.g., 1.25 MHz, 5 MHz, etc.).
The intersection of a sector and a frequency assignment may be referred to as a CDMA channel. The BS 270 may also be referred to as a base transceiver subsystem (BTS) or by other equivalent terms. In this case, the term "base station" may be used to broadly refer to a single BSC 275 and at least one BS 270. A base station may also be referred to as a "cell site". Alternatively, the individual sectors of a particular BS 270 may be referred to as a plurality of cell sites.
As shown in Fig. 2, a broadcast transmitter (BT) 295 transmits a broadcast signal to the mobile terminals 100 operating within the system. The broadcast receiving module 111 as shown in Fig. 1 is provided at the mobile terminal 100 to receive the broadcast signal transmitted by the BT 295. Fig. 2 also shows several global positioning system (GPS) satellites 300. The satellites 300 help locate at least one of the plurality of mobile terminals 100.
Although a plurality of satellites 300 are depicted in Fig. 2, it is understood that useful positioning information may be obtained with any number of satellites. The location information module 115 as shown in Fig. 1 (e.g., GPS) is typically configured to cooperate with the satellites 300 to obtain the desired positioning information. Instead of, or in addition to, GPS tracking techniques, other techniques capable of tracking the position of the mobile terminal may be used. In addition, at least one GPS satellite 300 may selectively or additionally handle satellite DMB transmission.
As one typical operation of the wireless communication system, the BSs 270 receive reverse-link signals from various mobile terminals 100. The mobile terminals 100 typically engage in calls, messaging, and other types of communications. Each reverse-link signal received by a particular base station is processed within that BS 270, and the resulting data is forwarded to the associated BSC 275. The BSC provides call resource allocation and mobility management functions, including coordination of soft handoff procedures between the BSs 270. The BSC 275 also routes the received data to the MSC 280, which provides additional routing services for interfacing with the PSTN 290. Similarly, the PSTN 290 interfaces with the MSC 280, the MSC interfaces with the BSCs 275, and the BSCs 275 in turn control the BSs 270 to transmit forward-link signals to the mobile terminals 100.
Based on the above mobile terminal, an embodiment of the present invention provides a photo processing method. Fig. 3 is a flowchart of the photo processing method according to an embodiment of the present invention. As shown in Fig. 3, the method includes the following steps:
Step S302: identifying a target person in a photo taken by a first camera and a second camera, wherein the first camera is used to measure the depth of field of the photographed object, the second camera is used to photograph the object, and the photo is synthesized from the information captured by the first camera and the second camera;
Step S304: obtaining depth-of-field information of the target person in the photo by means of the first camera and the second camera;
Step S306: after the identified target person is moved from a first position to a second position, adjusting the size of the target person according to the change in the depth of field.
Through the above steps, the target person in the photo taken by the dual cameras is identified, the depth-of-field information of the target person in the photo is obtained by means of the dual cameras, and after the identified target person is moved from the first position to the second position, the size of the target person is adjusted according to the change in the depth of field. This solves the problem in the related art that the size of a person moved within a photo does not fit the background: the size of the target person is automatically adjusted according to the change in the depth of field, so that the overall effect of the photo is more harmonious and the user experience is improved.
Fig. 4 is a schematic diagram of adjusting a moved person according to an embodiment of the present invention. As shown in Fig. 4, after a photo is taken and a photographed person is moved to another position, the size of the moved person is inconsistent with the surrounding scenery, so the size of the moved person needs to be adjusted. Using the depth-of-field information, it is determined how far the moved object's new position is from its previous position, and the moved object is then enlarged or reduced according to the moving distance: moving forward enlarges it, and moving backward reduces it; that is, when the depth of field increases, the moved object is reduced, and when the depth of field decreases, the moved object is enlarged. Adjusting the size of the target person according to the change in the depth of field includes: determining the distance the target person is moved according to the depth-of-field information; and adjusting the size of the target person according to the determined distance. In another optional embodiment, after the identified target person is moved from the first position to the second position, if the depth of field of the target person becomes larger, the target person is reduced according to a preset ratio; if the depth of field of the target person becomes smaller, the target person is enlarged according to a preset ratio.
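The scaling rule described above (depth of field increases, shrink the person; decreases, enlarge the person) can be sketched as follows. This is an illustrative sketch under a pinhole-camera assumption; the function name and the 1/depth scaling model are our own, not prescribed by the patent.

```python
def adjusted_size(width_px, height_px, old_depth_m, new_depth_m):
    """Return the new pixel size of a person cut-out after it is moved
    from a region at depth old_depth_m to one at depth new_depth_m.

    Under a pinhole-camera model, the image size of an object scales as
    1/depth, so the scale factor is old_depth / new_depth: a larger new
    depth (person moved farther back) shrinks the cut-out, a smaller
    one enlarges it.
    """
    if old_depth_m <= 0 or new_depth_m <= 0:
        raise ValueError("depths must be positive")
    scale = old_depth_m / new_depth_m
    return round(width_px * scale), round(height_px * scale)

# A 200x400 px person at 2 m moved to a spot at 4 m: the depth doubles,
# so the cut-out is halved to 100x200 px.
print(adjusted_size(200, 400, 2.0, 4.0))  # -> (100, 200)
```

Symmetrically, moving the same cut-out back from 4 m to 2 m restores the original 200x400 px size.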
As can be seen from Fig. 4, the moved person fits the background better after being enlarged.
Fig. 5 is a schematic diagram of a stereoscopic imaging apparatus according to an embodiment of the present invention. As shown in Fig. 5, the stereoscopic imaging apparatus is composed of two or more digital cameras whose relative positions are fixed, so that images can be acquired from different viewpoints at the same moment. Reference numerals 11 and 12 denote two digital cameras, and 13 denotes a connecting component on which the cameras 11 and 12 are fixed. This imaging system can obtain two photos at the same moment; the two photos are handed to subsequent modules for processing and can be used for subsequent stereo rectification, stereo matching, and depth-of-field measurement. After the size of the target person is adjusted according to the change in the depth of field, background filling is performed, according to the depth-of-field information, on the first position from which the target person was removed.
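A minimal sketch of the background filling step: once the person's cut-out is removed from the first position, the vacated pixels can be filled from the neighboring background, for example per scan line from the nearest unmasked pixel. This row-wise fill is only one simple illustration of the idea; the patent does not prescribe a particular inpainting algorithm.

```python
def fill_row(pixels, mask):
    """Fill masked pixels (mask[i] == 1, the vacated region) with the
    nearest unmasked pixel to their left; pixels with no unmasked
    neighbor on the left are filled from the right instead."""
    out = list(pixels)
    filled = [not m for m in mask]
    last = None
    for i in range(len(out)):              # left-to-right pass
        if filled[i]:
            last = out[i]
        elif last is not None:
            out[i] = last
            filled[i] = True
    last = None
    for i in range(len(out) - 1, -1, -1):  # right-to-left pass for leading gaps
        if filled[i]:
            last = out[i]
        elif last is not None:
            out[i] = last
    return out

# The hole left at columns 2-3 is filled from the background on its left.
print(fill_row([10, 10, 99, 99, 20], [0, 0, 1, 1, 0]))  # -> [10, 10, 10, 10, 20]
```

Applying `fill_row` to every row of the image (restricted to the mask of the vacated first position) gives a crude but complete background fill.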
Obtaining the depth-of-field information of all target persons in a group photo by means of the first camera and the second camera includes: obtaining the depth information of the scene through a binocular ranging platform or a depth sensor. A depth measurement module obtains the photos of different viewpoints taken by the stereoscopic imaging apparatus, and generates a depth map for the foreground region of the two photos using a stereoscopic measurement method. A specific embodiment is given below.
Fig. 6 is a first schematic diagram of the basic principle of binocular ranging according to an embodiment of the present invention. As shown in Fig. 6, binocular vision simulates the principle of human vision: it is a method of passively perceiving distance with a computer. An object is observed from two or more viewpoints, obtaining images from different perspectives.
Fig. 7 is a second schematic diagram of the basic principle of binocular ranging according to an embodiment of the present invention. As shown in Fig. 7, P is a point in physical space observed by two cameras c1 and c2 from different positions; the imaging positions of P in the two cameras are m and m'.
According to the matching relationship of pixels between the images, the offset between the pixels is computed by the principle of triangulation to obtain the three-dimensional information of the object. P is a point in space, Ol and Or are the centers of the left and right cameras, respectively, and xl and xr are the imaging points on the left and right sides.
The disparity of the imaging points of the point P in the left and right images is d = xl − xr, and the distance Z of the point P is computed using the following formula: Z = f·T / (xl − xr).
where f is the focal length of the two digital cameras in the stereoscopic imaging apparatus (assuming the two cameras have the same focal length), and T is the spacing between the two digital cameras.
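The triangulation relation above can be sketched directly. The unit conventions (f in pixels, T in meters, image coordinates in pixels) are our own illustrative choices, not fixed by the patent.

```python
def depth_from_disparity(f_px, baseline_m, xl_px, xr_px):
    """Depth of the point P from its disparity d = xl - xr:
    Z = f * T / d, assuming rectified images and two cameras
    with the same focal length."""
    d = xl_px - xr_px
    if d <= 0:
        raise ValueError("disparity must be positive for a point in front of the cameras")
    return f_px * baseline_m / d

# f = 700 px, baseline T = 0.1 m, disparity 20 px -> Z = 700 * 0.1 / 20 = 3.5 m
print(depth_from_disparity(700.0, 0.1, 120.0, 100.0))  # -> 3.5
```

Note the inverse relation: halving the disparity doubles the computed depth, which is why distant points (small disparity) have the largest depth uncertainty.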
The stereo matching algorithm mainly matches xl and xr with each other. Fig. 8 is a third schematic diagram of the basic principle of binocular ranging according to an embodiment of the present invention. As shown in Fig. 8, for a point p in the reference image, the other image is scanned to find the pixel q most similar to p. Matching similarity is defined as: the difference between the local grayscale windows of the two pixels is minimal.
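The window-based search described above (scan the other image's scanline and pick the pixel whose local gray window differs least) can be sketched as a small sum-of-absolute-differences matcher. The window size, disparity range, and SAD cost are illustrative choices; real systems add rectification, sub-pixel refinement, and left-right consistency checks.

```python
def disparity_at(left, right, row, x, win=2, max_disp=16):
    """For pixel (row, x) of the left image, scan the same row of the
    right image and return the disparity x - x' that minimizes the sum
    of absolute differences (SAD) between the two (2*win+1)^2 gray
    windows.  Images are lists of lists of gray values."""
    def sad(xr):
        return sum(abs(left[row + dy][x + dx] - right[row + dy][xr + dx])
                   for dy in range(-win, win + 1)
                   for dx in range(-win, win + 1))
    # For a point in front of the cameras, x' <= x (i.e. xl >= xr).
    candidates = range(max(win, x - max_disp), x + 1)
    best_xr = min(candidates, key=sad)
    return x - best_xr

# Synthetic rectified pair: the right image equals the left shifted by
# 4 columns, so the recovered disparity at an interior pixel is 4.
import random
random.seed(0)
left = [[random.randint(0, 255) for _ in range(40)] for _ in range(10)]
right = [[left[r][min(c + 4, 39)] for c in range(40)] for r in range(10)]
print(disparity_at(left, right, row=5, x=20, win=2, max_disp=8))  # -> 4
```

Running `disparity_at` over every foreground pixel and feeding the disparities into the Z = f·T/d relation yields the depth map used by the later steps.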
Once the depth-of-field information of an object is obtained, the actual distance between the object and the camera, the three-dimensional size of the object, and the actual distance between two points can all be computed. A depth sensor, by contrast, obtains the distance information of the scene by actively emitting infrared light and measuring its reflection in the scene. Identifying the target person in the photo taken by the first camera and the second camera includes: performing image segmentation in combination with the depth image and the original image, and separating out the target person and the background object.
Since the subject and the background region of the scene are at different distances from the camera, the depth values of the subject and the background region also differ. This provides a spatial reference feature for the subsequent separation of the subject and the background image, which benefits the accuracy of the image segmentation algorithm.
Traditional image segmentation algorithms operate in the 2D plane and lack the important spatial-distance feature of the scene, so they generally have difficulty precisely separating the background and the subject in a scene. Here, the scene depth information is used in combination with popular traditional algorithms, such as the graph cut algorithm or the meanshift algorithm, to perform subject and background image segmentation.
After the image segmentation algorithm obtains the different image regions, the contours of the image also need to be extracted and interior holes in the regions filled by means of morphological operations, to ensure the integrity of the image segmentation regions.
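A toy version of this pipeline — threshold the depth map to get a subject mask, then apply a morphological closing (dilation followed by erosion) to fill interior holes — might look like the following. The 3x3 structuring element and the plain depth threshold are simplifications; the patent combines depth with graph cut or meanshift segmentation rather than a bare threshold.

```python
def person_mask(depth, max_depth):
    """Foreground mask: 1 where the pixel is closer than max_depth."""
    return [[1 if d < max_depth else 0 for d in row] for row in depth]

def _neighbors(mask, r, c):
    """Yield the 3x3 neighborhood values of (r, c) that lie in bounds."""
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            rr, cc = r + dr, c + dc
            if 0 <= rr < len(mask) and 0 <= cc < len(mask[0]):
                yield mask[rr][cc]

def dilate(mask):
    return [[1 if any(_neighbors(mask, r, c)) else 0
             for c in range(len(mask[0]))] for r in range(len(mask))]

def erode(mask):
    return [[1 if all(_neighbors(mask, r, c)) else 0
             for c in range(len(mask[0]))] for r in range(len(mask))]

def close_holes(mask):
    """Morphological closing: dilation then erosion fills small holes."""
    return erode(dilate(mask))

# 5x5 region of a person at ~1.5 m on a 5 m background, with one noisy
# depth reading in the middle that punches a hole in the raw mask.
depth = [[1.5] * 5 for _ in range(5)]
depth[2][2] = 5.0                       # spurious sensor reading
mask = person_mask(depth, max_depth=3.0)
print(mask[2][2])                       # -> 0 (hole in the raw mask)
print(close_holes(mask)[2][2])          # -> 1 (hole filled by closing)
```

The closing fills the one-pixel hole while leaving the rest of the mask untouched, which is exactly the "region integrity" repair the paragraph above describes.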
An embodiment of the present invention provides a photo processing apparatus. Fig. 9 is a block diagram of the photo processing apparatus according to an embodiment of the present invention. As shown in Fig. 9, the apparatus includes:
an identification module 92, configured to identify a target person in a photo taken by a first camera and a second camera, wherein the first camera is used to measure the depth of field of the photographed object, the second camera is used to photograph the object, and the photo is synthesized from the information captured by the first camera and the second camera;
an obtaining module 94, configured to obtain depth-of-field information of the target person in the photo by means of the first camera and the second camera;
an adjustment module 96, configured to, after the identified target person is moved from a first position to a second position, adjust the size of the target person according to the change in the depth of field.
Further, the identification module 92 is also configured to perform image segmentation in combination with the depth image and the original image, separating out the target person and the background object.
Further, the obtaining module 94 is also configured to obtain the depth information of the scene through a binocular ranging platform or a depth sensor.
Fig. 10 is a first block diagram of the photo processing apparatus according to a preferred embodiment of the present invention. As shown in Fig. 10, the adjustment module 96 includes:
a determination unit 102, configured to determine the distance the target person is moved according to the depth-of-field information;
an adjustment unit 104, configured to adjust the size of the target person according to the determined distance the target person is moved.
Fig. 11 is a second block diagram of the photo processing apparatus according to a preferred embodiment of the present invention. As shown in Fig. 11, the adjustment module includes:
a reducing unit 112, configured to, after the identified target person is moved from the first position to the second position and the depth of field of the target person becomes larger, reduce the target person according to a preset ratio;
an amplifying unit 114, configured to, after the identified target person is moved from the first position to the second position and the depth of field of the target person becomes smaller, enlarge the target person according to a preset ratio.
Fig. 12 is a third block diagram of the photo processing apparatus according to a preferred embodiment of the present invention. As shown in Fig. 12, the apparatus further includes: a background filling module 122, configured to perform background filling, according to the depth-of-field information, on the first position from which the target person was removed.
An embodiment of the present invention also provides a terminal, including the above apparatus.
Through the present invention, the target person in the photo taken by the dual cameras is identified, the depth-of-field information of the target person in the photo is obtained by means of the dual cameras, and after the identified target person is moved from the first position to the second position, the size of the target person is adjusted according to the change in the depth of field. This solves the problem in the related art that the size of a person moved within a photo does not fit the background: the size of the target person is automatically adjusted according to the change in the depth of field, so that the overall effect of the photo is more harmonious and the user experience is improved.
It should be noted that, in this document, the terms "include", "comprise", or any other variant thereof are intended to cover a non-exclusive inclusion, so that a process, method, article, or apparatus that includes a series of elements includes not only those elements but also other elements not explicitly listed, or elements inherent to such a process, method, article, or apparatus. In the absence of further limitations, an element defined by the phrase "including a ..." does not exclude the existence of other identical elements in the process, method, article, or apparatus that includes that element.
The serial numbers of the above embodiments of the present invention are for description only and do not represent the superiority or inferiority of the embodiments.
Through the above description of the embodiments, those skilled in the art can clearly understand that the methods of the above embodiments can be implemented by means of software plus the necessary general-purpose hardware platform, and of course also by hardware, although in many cases the former is the preferred implementation. Based on this understanding, the technical solution of the present invention, in essence or in the part that contributes to the prior art, can be embodied in the form of a software product. The computer software product is stored in a storage medium (such as a ROM/RAM, magnetic disk, or optical disc) and includes several instructions to cause a terminal device (which may be a mobile phone, computer, server, air conditioner, network device, or the like) to execute the methods of the embodiments of the present invention.
Obviously, those skilled in the art should understand that the modules or steps of the present invention described above can be implemented by a general-purpose computing device; they can be concentrated on a single computing device or distributed across a network formed by multiple computing devices. Optionally, they can be implemented with program code executable by a computing device, so that they can be stored in a storage device and executed by the computing device; in some cases, the steps shown or described can be performed in an order different from that described here, or they can be made into individual integrated circuit modules, or multiple modules or steps among them can be made into a single integrated circuit module. Thus, the present invention is not limited to any specific combination of hardware and software.
The above is only a preferred embodiment of the present invention and is not intended to limit the scope of the present invention. Any equivalent structural or process transformation made using the contents of the specification and drawings of the present invention, applied directly or indirectly in other related technical fields, is likewise included within the scope of patent protection of the present invention.

Claims (7)

1. A photo processing method, comprising:
identifying a target person in a photo taken by a first camera and a second camera, wherein the first camera is used to measure the depth of field of the photographed object, the second camera is used to photograph the object, and the photo is synthesized from the information captured by the first camera and the second camera;
obtaining depth-of-field information of the target person in the photo by means of the first camera and the second camera, comprising: obtaining the depth information of the scene through a binocular ranging platform or a depth sensor;
after the identified target person is moved from a first position to a second position, adjusting the size of the target person according to the change in the depth of field, comprising: determining the distance the target person is moved according to the depth-of-field information; and adjusting the size of the target person according to the determined distance the target person is moved.
2. The method according to claim 1, wherein identifying the target person in the photo taken by the first camera and the second camera comprises:
performing image segmentation in combination with the depth image and the original image, and separating out the target person and the background object.
3. The method according to claim 1 or 2, wherein adjusting the size of the target person according to the determined distance the target person is moved comprises:
after the identified target person is moved from the first position to the second position, if the depth of field of the target person becomes larger, reducing the target person according to a preset ratio;
after the identified target person is moved from the first position to the second position, if the depth of field of the target person becomes smaller, enlarging the target person according to a preset ratio.
4. The method according to claim 3, wherein after adjusting the size of the target person according to the change in the depth of field, the method further comprises:
performing background filling, according to the depth-of-field information, on the first position from which the target person was removed.
5. A photo processing apparatus, comprising:
an identification module, configured to identify a target person in a photo taken by a first camera and a second camera, wherein the first camera is used to measure the depth of field of the photographed object, the second camera is used to photograph the object, and the photo is synthesized from the information captured by the first camera and the second camera;
an obtaining module, configured to obtain depth-of-field information of the target person in the photo by means of the first camera and the second camera, comprising: obtaining the depth information of the scene through a binocular ranging platform or a depth sensor;
an adjustment module, configured to, after the identified target person is moved from a first position to a second position, adjust the size of the target person according to the change in the depth of field, comprising: a determination unit, configured to determine the distance the target person is moved according to the depth-of-field information; and an adjustment unit, configured to adjust the size of the target person according to the determined distance the target person is moved.
6. The apparatus according to claim 5, wherein the identification module is also configured to perform image segmentation in combination with the depth image and the original image, separating out the target person and the background object.
7. A terminal, comprising the apparatus according to any one of claims 5 to 6.
CN201611224704.2A 2016-12-27 2016-12-27 A kind of photo processing method, device and terminal Active CN106534590B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201611224704.2A CN106534590B (en) 2016-12-27 2016-12-27 A kind of photo processing method, device and terminal

Publications (2)

Publication Number Publication Date
CN106534590A CN106534590A (en) 2017-03-22
CN106534590B true CN106534590B (en) 2019-08-20

Family

ID=58338432

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant