
WO2017088714A1 - Mobile terminal and three-dimensional image generation method therefor - Google Patents


Info

Publication number
WO2017088714A1
WO2017088714A1 · PCT/CN2016/106637
Authority
WO
WIPO (PCT)
Prior art keywords
dimensional
image
mobile terminal
character
feature data
Prior art date
Application number
PCT/CN2016/106637
Other languages
French (fr)
Chinese (zh)
Inventor
张圣杰
金蓉
Original Assignee
Nubia Technology Co., Ltd. (努比亚技术有限公司)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nubia Technology Co., Ltd. (努比亚技术有限公司)
Publication of WO2017088714A1 publication Critical patent/WO2017088714A1/en

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 3D [Three Dimensional] image rendering

Definitions

  • the present application relates to, but is not limited to, the field of communication technologies, and in particular to a mobile terminal and a method for generating a three-dimensional image thereof.
  • This paper provides a mobile terminal and a method for generating a three-dimensional image thereof, which can generate a vivid three-dimensional image in a daily use scenario of the mobile terminal.
  • a reading module configured to: read panoramic image data corresponding to the panoramic photo of the captured three-dimensional object;
  • an extraction module configured to: extract, from the panoramic image data, the feature data required to generate a three-dimensional image;
  • and a generating module configured to: according to the extracted feature data, start a three-dimensional image engine to generate a three-dimensional image corresponding to the captured three-dimensional object.
  • the three-dimensional object is a three-dimensional character object
  • the extraction module includes:
  • a pre-processing unit configured to: extract the overall image data of the three-dimensional character object from the panoramic image data, and calibrate the three-dimensional character objects of different orientations in the overall image data;
  • a first data extracting unit configured to: extract face image data from the image data corresponding to the three-dimensional character object in different orientations, and extract relevant feature data from the face image data, where the relevant feature data includes texture feature data of the face image;
  • a ratio calculating unit configured to: distinguish the head, the upper body, the lower body, and the limbs in the image data corresponding to the different orientations of the three-dimensional character object, so as to measure the length ratios of the head, the upper body, the lower body, and the limbs of the three-dimensional character object;
  • the second data extracting unit is configured to extract other feature data from the image data corresponding to the three-dimensional character object in different orientations, and the other feature data includes hairstyle feature data, wearing feature data, and color feature data.
  • the pre-processing unit is configured to: use an image edge detection algorithm to distinguish the three-dimensional character from the background environment, and extract the image data enclosed by the detected closed pixel edge to obtain the overall image data of the three-dimensional character object.
  • the first data extracting unit is further configured to: after determining the area where the face image is located in the image data corresponding to the three-dimensional character object in different orientations, process the face image by one or more of zooming, rotating, and stretching to obtain a face image of a preset standard size.
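  • The zoom/rotate/stretch normalization to a preset standard size can be sketched as follows; the patent does not name an algorithm, so this uses simple nearest-neighbour scaling, and the function name and the 4x4 "standard size" are assumptions for illustration only:

```python
def normalize_face(pixels, target_w, target_h):
    """Scale a face crop to a preset standard size using nearest-neighbour
    sampling (a stand-in for the zoom/rotate/stretch pipeline; production
    code would use an image-processing library)."""
    src_h, src_w = len(pixels), len(pixels[0])
    out = []
    for y in range(target_h):
        src_y = min(src_h - 1, y * src_h // target_h)
        row = [pixels[src_y][min(src_w - 1, x * src_w // target_w)]
               for x in range(target_w)]
        out.append(row)
    return out

# A 2x2 face crop scaled up to a hypothetical 4x4 standard size.
face = [[1, 2],
        [3, 4]]
standard = normalize_face(face, 4, 4)
```

The same resampling loop covers both enlargement and reduction; rotation would be an additional coordinate transform before sampling.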
  • the generating module includes:
  • a model building unit configured to: perform three-dimensional reconstruction according to the part of the extracted feature data related to constructing the character model, to generate a character model corresponding to the captured three-dimensional character object;
  • a model rendering unit configured to: render the character model according to the other part of the extracted feature data related to character model rendering, to generate a three-dimensional character image corresponding to the captured three-dimensional character object.
  • the model building unit is configured to: calculate the length, width, height, and limb-ratio data of the overall character image in three-dimensional space from the obtained length ratios of the head, the upper body, the lower body, and the limbs, so as to generate the character model corresponding to the captured three-dimensional character object.
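  • As an illustration of how the measured length ratios could feed the character model, here is a minimal sketch; the function names, the pixel measurements, and the 1.8-unit model height are assumptions, not values from the patent:

```python
def limb_ratios(head_px, upper_px, lower_px, limb_px):
    """Length ratios of head / upper body / lower body / limbs, measured
    in pixels from the calibrated orientation images (illustrative)."""
    total = head_px + upper_px + lower_px  # standing height in pixels
    return {
        "head": head_px / total,
        "upper_body": upper_px / total,
        "lower_body": lower_px / total,
        "limbs": limb_px / total,
    }

def model_dimensions(ratios, model_height):
    """Scale the ratios up to a character model of a chosen height
    in three-dimensional space units."""
    return {part: r * model_height for part, r in ratios.items()}

r = limb_ratios(head_px=120, upper_px=360, lower_px=480, limb_px=420)
dims = model_dimensions(r, model_height=1.8)
```

Because only ratios are stored, the same character model can be regenerated at any absolute size in the engine's coordinate system.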
  • the model rendering unit is further configured to: use a panoramic stitching fusion technique to splice the image information of different orientations, so as to generate a three-dimensional character image corresponding to the captured three-dimensional character object.
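  • The panoramic stitching fusion step is not specified in detail; a toy stand-in is a linear cross-fade over the overlapping columns of two adjacent strips, as sketched below (the strip representation and function name are assumptions):

```python
def stitch_pair(left, right, overlap):
    """Blend two horizontally adjacent strips whose last/first `overlap`
    columns cover the same scene region, using a linear cross-fade.
    A toy stand-in for panoramic stitching fusion."""
    out = []
    for lrow, rrow in zip(left, right):
        blended = []
        for i in range(overlap):
            w = (i + 1) / (overlap + 1)  # weight ramps toward the right strip
            blended.append((1 - w) * lrow[len(lrow) - overlap + i] + w * rrow[i])
        out.append(lrow[:-overlap] + blended + rrow[overlap:])
    return out

# Two 1-row strips sharing one overlapping column.
strip_a = [[10.0, 10.0, 20.0]]
strip_b = [[40.0, 30.0, 30.0]]
pano = stitch_pair(strip_a, strip_b, overlap=1)
```

Real stitching also aligns the strips geometrically first; the cross-fade only illustrates the fusion of the overlapping region.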
  • the mobile terminal further includes:
  • the application association module is configured to: associate the generated three-dimensional character image with an application scenario in the mobile terminal;
  • the three-dimensional character image display module is configured to: when the associated application scene is in an active state, display a three-dimensional character image corresponding to the associated application scene on a display screen of the mobile terminal.
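  • The association between a generated character image and application scenarios can be modelled as a simple registry; the class, method, and scenario names below are illustrative only, not part of the patent:

```python
class AvatarAssociator:
    """Associates a generated three-dimensional character image with
    application scenarios and shows it while a scenario is active
    (a minimal sketch of the association / display modules)."""

    def __init__(self):
        self._by_scene = {}
        self.displayed = None

    def associate(self, scene, avatar_id):
        """Register an avatar for an application scenario."""
        self._by_scene[scene] = avatar_id

    def on_scene_active(self, scene):
        """Display the associated avatar only if one was registered."""
        self.displayed = self._by_scene.get(scene)
        return self.displayed

assoc = AvatarAssociator()
assoc.associate("lock_screen", "avatar_001")
shown = assoc.on_scene_active("lock_screen")
```

A scenario with no registered avatar simply displays nothing, which matches the claim's "when the associated application scene is in an active state" condition.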
  • the mobile terminal further includes:
  • a shooting module configured to: start a panoramic shooting mode in a camera application of the mobile terminal to capture and store a panoramic photo of the three-dimensional object, wherein during panoramic shooting it is detected in real time whether the current shooting angle of the mobile terminal is within a set shooting angle range, and if not, a corresponding correction prompt is issued.
  • the shooting module is configured to: detect in real time, by using one or more of a gravity sensor, an attitude sensor, a gyroscope, and a compass, whether the current shooting angle of the mobile terminal is within the set shooting angle range.
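  • A minimal sketch of the real-time angle check follows; the ±5° window and the prompt wording are assumed examples, since the patent leaves the set range unspecified:

```python
def check_shooting_angle(angle_deg, low=-5.0, high=5.0):
    """Check that the terminal's shooting angle stays within a set range
    during panoramic capture; return a correction prompt otherwise.
    The ±5° window is an assumed example, not a value from the patent."""
    if low <= angle_deg <= high:
        return None  # within range: no prompt needed
    direction = "down" if angle_deg > high else "up"
    return f"Tilt the phone {direction} to stay on the panorama path"

prompt = check_shooting_angle(9.0)
```

In practice this function would be called on every sensor update while the panorama is being swept.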
  • the embodiment of the present invention further provides a method for generating a three-dimensional image, which is applied to a mobile terminal, and the method for generating the three-dimensional image includes:
  • a three-dimensional image engine is launched to generate a three-dimensional image corresponding to the captured three-dimensional object.
  • the three-dimensional object is a three-dimensional character object; and extracting, from the panoramic image data, feature data required to generate a three-dimensional image includes:
  • Other feature data is extracted from the image data corresponding to the three-dimensional character object in different orientations, the other feature data including hairstyle feature data, wearing feature data, and color feature data.
  • the extracting the overall image data of the three-dimensional character object from the panoramic image data includes:
  • An image edge detection algorithm is used to distinguish the three-dimensional character from the background environment, and the image data enclosed by the detected closed pixel edge is extracted to obtain the overall image data of the three-dimensional character object.
  • the method further includes:
  • the face image is processed by one or more of zooming, rotating, and stretching to obtain a preset standard size face image.
  • the generating, according to the extracted feature data, the three-dimensional image engine to generate a three-dimensional image corresponding to the captured three-dimensional object comprises:
  • Character model rendering is performed according to the other part of the extracted feature data related to character model rendering, to generate a three-dimensional character image corresponding to the captured three-dimensional character object.
  • the method further includes:
  • a three-dimensional character image corresponding to the associated application scenario is displayed on a display screen of the mobile terminal.
  • performing three-dimensional reconstruction according to the part of the extracted feature data related to constructing the character model, to generate a character model corresponding to the captured three-dimensional character object, includes:
  • the method further includes:
  • the image information of different orientations is spliced by a panoramic stitching fusion technique to generate a three-dimensional character image corresponding to the captured three-dimensional character object.
  • the reading the panoramic image data corresponding to the panoramic photo of the captured three-dimensional object includes:
  • Starting a panoramic photographing mode in the camera application of the mobile terminal to capture and store a panoramic photo of the three-dimensional object, wherein during the panoramic photographing process it is detected in real time whether the current shooting angle of the mobile terminal is within the set shooting angle range, and if not, a corresponding correction prompt is issued.
  • the real-time detecting whether the shooting angle of the current mobile terminal is within a set shooting angle range includes:
  • By using one or more of a gravity sensor, an attitude sensor, a gyroscope, and a compass, it is detected in real time whether the current shooting angle of the mobile terminal is within the set shooting angle range.
  • the embodiment of the invention further provides a computer readable storage medium storing computer executable instructions that, when executed by a processor, implement the above method for generating a three-dimensional image.
  • the mobile terminal extracts, based on the panoramic photo of the three-dimensional object, the feature data required to generate a three-dimensional image, and then starts the three-dimensional image engine to generate a corresponding three-dimensional image according to the extracted feature data.
  • the embodiment of the invention can conveniently and quickly generate a three-dimensional image of the captured object and allows the user to associate it with related applications, thereby satisfying the user's personalized needs and improving the user experience.
  • FIG. 1 is a schematic structural diagram of hardware of an optional mobile terminal in implementing an embodiment of the present invention
  • Figure 2 is a block diagram showing the electrical structure of the camera of Figure 1;
  • FIG. 3 is a schematic diagram of functional modules of a first embodiment of a mobile terminal according to the present invention.
  • FIG. 4 is a schematic diagram of a refinement function module of the extraction module of FIG. 3;
  • FIG. 5 is a schematic diagram of a refinement function module of the generation module in FIG. 3;
  • FIG. 6 is a schematic diagram of functional modules of a second embodiment of a mobile terminal according to the present invention.
  • FIG. 7 is a schematic diagram of functional modules of a third embodiment of a mobile terminal according to the present invention.
  • FIG. 8 is a schematic diagram of an embodiment of a panoramic photo taken by a mobile terminal according to the present invention.
  • FIG. 9 is a schematic flow chart of a first embodiment of a method for generating a three-dimensional image according to the present invention.
  • FIG. 10 is a schematic flowchart of the refinement of step S20;
  • FIG. 11 is a schematic flowchart of the refinement of step S30;
  • FIG. 12 is a schematic flow chart of a second embodiment of a method for generating a three-dimensional image according to the present invention.
  • FIG. 13 is a schematic flow chart of a third embodiment of a method for generating a three-dimensional image according to the present invention.
  • the mobile terminal can be implemented in a variety of forms.
  • the terminals described herein may include mobile terminals such as mobile phones, smart phones, notebook computers, digital broadcast receivers, PDAs (Personal Digital Assistants), PADs (tablet computers), PMPs (Portable Multimedia Players), and navigation devices, as well as fixed terminals such as digital TVs and desktop computers.
  • In the following, it is assumed that the terminal is a mobile terminal.
  • However, except for components designed specifically for mobile purposes, configurations in accordance with embodiments of the present invention can also be applied to fixed-type terminals.
  • FIG. 1 is a schematic structural diagram of hardware of an optional mobile terminal in implementing an embodiment of the present invention.
  • the mobile terminal 100 may include a wireless communication unit 110, an A/V (Audio/Video) input unit 120, a user input unit 130, a sensing unit 140, an output unit 150, a memory 160, a controller 170, and the like.
  • Figure 1 illustrates a mobile terminal having various components, but it should be understood that not all illustrated components are required to be implemented. More or fewer components can be implemented instead. The elements of the mobile terminal will be described in detail below.
  • Wireless communication unit 110 typically includes one or more components that permit radio communication between mobile terminal 100 and a wireless communication device or network.
  • the A/V input unit 120 is arranged to receive an audio or video signal.
  • the A/V input unit 120 may include a camera 121 that processes image data of still pictures or video obtained by an image capturing device in a video capturing mode or an image capturing mode.
  • the processed image frame can be displayed on the display unit 151.
  • the image frames processed by the camera 121 may be stored in the memory 160 (or other storage medium) or transmitted via the wireless communication unit 110, and two or more cameras 121 may be provided according to the configuration of the mobile terminal.
  • the user input unit 130 may generate key input data according to a command input by the user to control the operation of the mobile terminal.
  • the user input unit 130 allows the user to input various types of information, and may include a keyboard, a dome switch, a touch panel (e.g., a touch-sensitive component that detects changes in resistance, pressure, capacitance, etc. due to contact), a jog wheel, a jog switch, and the like.
  • a touch screen may be formed when the touch panel is superimposed on the display unit 151 in the form of a layer.
  • the sensing unit 140 detects the current state of the mobile terminal 100 (e.g., the open or closed state of the mobile terminal 100), the location of the mobile terminal 100, the presence or absence of user contact with the mobile terminal 100 (i.e., touch input), the orientation of the mobile terminal 100, the movement direction and tilt angle of the mobile terminal 100, and the like, and generates a command or signal for controlling the operation of the mobile terminal 100.
  • the sensing unit 140 includes an accelerometer 141 configured to detect the real-time acceleration of the mobile terminal 100 so as to derive its direction of motion, and a gyroscope 142 configured to detect the tilt angle of the mobile terminal 100 relative to the plane on which it is located.
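  • One common way to derive such a tilt angle from a 3-axis accelerometer (gravity) reading is sketched below; this is an illustration of the general technique, not necessarily the computation used by the terminal:

```python
import math

def tilt_angle_deg(ax, ay, az):
    """Tilt of the device plane relative to the horizontal, derived from a
    3-axis accelerometer reading of the gravity vector. When the device
    lies flat, gravity is entirely along the z axis and the tilt is 0."""
    g_xy = math.hypot(ax, ay)          # gravity component in the device plane
    return math.degrees(math.atan2(g_xy, az))

flat = tilt_angle_deg(0.0, 0.0, 9.81)      # device lying flat on a table
upright = tilt_angle_deg(0.0, 9.81, 0.0)   # device held vertically
```

Combining this with gyroscope integration (sensor fusion) reduces noise during fast motion, which is why the patent lists several sensor types as alternatives.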
  • Output unit 150 is configured to provide an output signal (eg, an audio signal, a video signal, a vibration signal, etc.) in a visual, audio, and/or tactile manner.
  • the output unit 150 may include a display unit 151, an audio output module 152, and the like.
  • the display unit 151 can display information processed in the mobile terminal 100. For example, when the mobile terminal 100 is in a phone call mode, the display unit 151 can display a user interface (UI) or a graphical user interface (GUI) related to a call or other communication (eg, text messaging, multimedia file download, etc.). When the mobile terminal 100 is in a video call mode or an image capturing mode, the display unit 151 may display a captured image and/or a received image, a UI or GUI showing a video or image and related functions, and the like.
  • the display unit 151 can function as an input device and an output device.
  • the display unit 151 may include at least one of a liquid crystal display (LCD), a thin film transistor LCD (TFT-LCD), an organic light emitting diode (OLED) display, a flexible display, a three-dimensional (3D) display, and the like.
  • Some of these displays may be configured to be transparent so that the user can see through them from the outside; these may be referred to as transparent displays, a typical example being a TOLED (Transparent Organic Light Emitting Diode) display.
  • the mobile terminal 100 may include two or more display units (or other display devices); for example, the mobile terminal may include an external display unit (not shown) and an internal display unit (not shown).
  • the touch screen can be used to detect touch input pressure values as well as touch input positions and touch input areas.
  • When the mobile terminal is in a call signal receiving mode, a call mode, a recording mode, a voice recognition mode, a broadcast receiving mode, or the like, the audio output module 152 may convert audio data received by the wireless communication unit 110 or stored in the memory 160 into an audio signal and output it as sound.
  • the audio output module 152 can provide audio output (eg, call signal reception sound, message reception sound, etc.) associated with a particular function performed by the mobile terminal 100.
  • the audio output module 152 may include a speaker, a buzzer, and the like.
  • the memory 160 may store a software program or the like for processing and control operations performed by the controller 170, or may temporarily store data (for example, a phone book, a message, a still image, a video, etc.) that has been output or is to be output. Moreover, the memory 160 may store data regarding vibration and audio signals of various manners that are output when a touch is applied to the touch screen.
  • the memory 160 may include at least one type of storage medium, including a flash memory, a hard disk, a multimedia card, a card-type memory (e.g., SD or DX memory), a random access memory (RAM), a static random access memory (SRAM), a read-only memory (ROM), an electrically erasable programmable read-only memory (EEPROM), a programmable read-only memory (PROM), a magnetic memory, a magnetic disk, an optical disk, and the like.
  • the mobile terminal 100 can cooperate with a network storage device that performs a storage function of the memory 160 through a network connection.
  • Controller 170 typically controls the overall operation of the mobile terminal. For example, controller 170 performs the control and processing associated with voice calls, data communications, video calls, and the like. The controller 170 may perform a pattern recognition process to recognize a handwriting input or a picture drawing input performed on the touch screen as a character or an image.
  • the embodiments described herein can be implemented in a computer readable medium using, for example, computer software, hardware, or any combination thereof.
  • the embodiments described herein may be implemented by using at least one of application specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field programmable gate arrays (FPGAs), processors, controllers, microcontrollers, microprocessors, and electronic units designed to perform the functions described herein; in some cases, such embodiments may be implemented in the controller 170.
  • implementations such as procedures or functions may be implemented with separate software modules that permit the execution of at least one function or operation.
  • the software code can be implemented by a software application (or program) written in any suitable programming language, which can be stored in the memory 160 and executed by the controller 170.
  • the mobile terminal of the embodiment of the present invention may further include: a reading module 510, an extracting module 520, and a generating module 530, where
  • the reading module 510 is configured to: read the panoramic image data corresponding to the captured panoramic photo of the three-dimensional object; the extracting module 520 is configured to: extract, from the panoramic image data, the feature data required to generate a three-dimensional image; the generating module 530 is configured to: according to the extracted feature data, start the three-dimensional image engine to generate a three-dimensional image corresponding to the captured three-dimensional object.
  • FIG. 2 is a block diagram of the electrical structure of the camera of FIG. 1.
  • the photographic lens 1211 is composed of a plurality of optical lenses for forming a subject image, and is a single focus lens or a zoom lens.
  • the photographic lens 1211 is movable in the optical-axis direction under the control of the lens driver 1221; the lens driver 1221 controls the focus position of the photographic lens 1211 in accordance with a control signal from the lens drive control circuit 1222 and, in the case of a zoom lens, can also control the focal distance.
  • the lens drive control circuit 1222 performs drive control of the lens driver 1221 in accordance with a control command from the microcomputer 1217.
  • An imaging element 1212 is disposed on the optical axis of the photographic lens 1211 near the position of the subject image formed by the photographic lens 1211.
  • the imaging element 1212 is provided to image the subject image and acquire captured image data.
  • Photodiodes constituting each pixel are arranged two-dimensionally and in a matrix on the imaging element 1212.
  • the photodiode generates a photoelectric conversion current corresponding to the amount of received light, and the photoelectric conversion current is charged by a capacitor connected to the photodiode.
  • the front surface of each pixel is provided with a Bayer array of RGB color filters.
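  • Under a Bayer colour filter array, each photodiode records only one colour channel depending on its position; the schematic model below illustrates the sampling pattern, assuming the common RGGB layout (the patent does not state which variant is used):

```python
def bayer_sample(rgb, x, y):
    """Channel recorded at pixel (x, y) under an RGGB Bayer colour filter
    array: each photodiode sees only one of R, G, B, and the missing
    channels are later interpolated (demosaiced) by the image processor."""
    r, g, b = rgb
    if y % 2 == 0:
        return r if x % 2 == 0 else g   # even rows alternate R, G
    return g if x % 2 == 0 else b       # odd rows alternate G, B

# Sample a uniform scene colour (R=1, G=2, B=3) through a 4x2 mosaic.
pattern = [[bayer_sample((1, 2, 3), x, y) for x in range(4)]
           for y in range(2)]
```

Note the green channel appears twice per 2x2 cell, matching the eye's higher sensitivity to green.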
  • the imaging element 1212 is connected to the imaging circuit 1213.
  • the imaging circuit 1213 performs charge accumulation control and image signal readout control in the imaging element 1212, and performs waveform shaping after reducing the reset noise of the read image signal (analog image signal). Further, gain improvement or the like is performed to obtain an appropriate signal level.
  • the imaging circuit 1213 is connected to an A/D converter 1214 that performs analog-to-digital conversion on the analog image signal and outputs a digital image signal (hereinafter referred to as image data) to the bus 1227.
  • the bus 1227 is a transmission path for transmitting a variety of data read or generated inside the camera.
  • the A/D converter 1214 is connected to the bus 1227, to which an image processor 1215, a JPEG processor 1216, a microcomputer 1217, an SDRAM (Synchronous Dynamic Random Access Memory) 1218, a memory interface (hereinafter referred to as memory I/F) 1219, and an LCD (Liquid Crystal Display) driver 1220 are also connected.
  • the image processor 1215 performs OB subtraction processing, white balance adjustment, color matrix calculation, gamma conversion, color-difference signal processing, noise removal processing, simultaneous processing, edge processing, and the like on the image data based on the output of the imaging element 1212.
  • When image data is recorded on the recording medium 1225, the JPEG processor 1216 compresses the image data read from the SDRAM 1218 in accordance with the JPEG compression method. The JPEG processor 1216 also decompresses JPEG image data for image reproduction and display: the file recorded on the recording medium 1225 is read, decompressed in the JPEG processor 1216, and the decompressed image data is temporarily stored in the SDRAM 1218 and displayed on the LCD 1226. In the present embodiment, the JPEG method is adopted as the image compression/decompression method; however, the method is not limited thereto, and other compression/decompression methods such as MPEG, TIFF, and H.264 may be used.
  • the microcomputer 1217 functions as a control unit of the entire camera, and collectively controls various processing sequences of the camera.
  • the microcomputer 1217 is connected to the operation unit 1223 and the flash memory 1224.
  • the operating unit 1223 includes, but is not limited to, physical or virtual buttons, such as a power button, a camera button, an edit button, a moving-image button, a playback button, a menu button, a cross key, an OK button, a delete button, an enlarge button, and various other input keys and operational controls, and detects the operational state of these controls.
  • the detection result is output to the microcomputer 1217. Further, a touch panel is provided on the front surface of the LCD 1226 serving as a display; the user's touch position is detected and output to the microcomputer 1217.
  • the microcomputer 1217 executes processing sequences corresponding to the user's operation in accordance with the detection result from the operation unit 1223.
  • the flash memory 1224 stores programs for executing various processing sequences of the microcomputer 1217.
  • the microcomputer 1217 performs overall control of the camera in accordance with the program. Further, the flash memory 1224 stores various adjustment values of the camera, and the microcomputer 1217 reads out the adjustment value, and performs control of the camera in accordance with the adjustment value.
  • the SDRAM 1218 is an electrically rewritable volatile memory for temporarily storing image data or the like.
  • the SDRAM 1218 temporarily stores image data output from the A/D converter 1214 and image data processed in the image processor 1215, the JPEG processor 1216, and the like.
  • the memory interface 1219 is connected to the recording medium 1225, and performs control for writing image data and a file header attached to the image data to the recording medium 1225 and reading out from the recording medium 1225.
  • the recording medium 1225 is, for example, a recording medium such as a memory card that can be detachably attached to the camera body.
  • the recording medium 1225 is not limited thereto, and may be a hard disk or the like built in the camera body.
  • the LCD driver 1220 is connected to the LCD 1226. Image data processed by the image processor 1215 is stored in the SDRAM 1218; when display is required, it is read from the SDRAM 1218 and displayed on the LCD 1226. Alternatively, image data compressed by the JPEG processor 1216 is stored in the SDRAM 1218; when display is required, the JPEG processor 1216 reads the compressed image data from the SDRAM 1218, decompresses it, and the decompressed image data is displayed on the LCD 1226.
  • the LCD 1226 is configured to display an image on the back of the camera body.
  • the LCD 1226 is an LCD, but is not limited thereto, and various display panels such as an organic EL may be used.
  • FIG. 3 is a schematic diagram of functional modules of a first embodiment of a mobile terminal according to the present invention.
  • the mobile terminal includes:
  • the reading module 510 is configured to: read panoramic image data corresponding to the panoramic photo of the captured three-dimensional object;
  • A panoramic photo of the subject is, in other words, a 360-degree image corresponding to the subject.
  • the photographic subject in this embodiment is three-dimensional, and the form of the three-dimensional object is not limited, for example, a panoramic photo of a physical person may be taken, or a panoramic photo of a solid animal or a physical item may also be taken for generating a corresponding image on the mobile terminal.
  • In order to allow subsequent image stitching for the three-dimensional image, there must be enough overlapping information between the images taken at different angles in the acquired panoramic photo.
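  • The "enough overlap" requirement can be modelled roughly from the lens field of view and the angular step between shots; this back-of-the-envelope helper is an illustration, not part of the patent:

```python
def overlap_fraction(fov_deg, step_deg):
    """Fraction of each frame shared with its neighbour when the camera
    rotates `step_deg` between shots with a horizontal field of view of
    `fov_deg`. A rough model of the overlap needed for stitching."""
    if step_deg >= fov_deg:
        return 0.0  # frames no longer overlap at all
    return 1.0 - step_deg / fov_deg

# e.g. a 60-degree lens advanced 20 degrees per shot keeps ~2/3 overlap
f = overlap_fraction(60.0, 20.0)
```

Stitching pipelines commonly aim for substantial overlap between neighbouring frames so that matching features can be found reliably.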
  • the panoramic photo of the three-dimensional object is saved on the mobile terminal or another device after shooting is completed, and is read by the reading module 510 when the three-dimensional image generation process is required; the reading method is not limited and may be set according to actual needs.
  • the extracting module 520 is configured to: extract feature data required to generate a three-dimensional image from the panoramic image data;
  • the feature data required to generate a three-dimensional image, such as facial texture, height, clothing, skin color, and limb ratios, are extracted from the read panoramic image data by the extraction module 520; a corresponding three-dimensional image can then be generated according to the extracted feature data.
  • the method for extracting feature data from an image is not limited and may be set according to actual needs; for example, edge detection may be used to extract the three-dimensional object from the background environment, or a face detection algorithm may be used to detect the face position and extract the facial texture features of the three-dimensional character object.
  • the generating module 530 is configured to: according to the extracted feature data, activate a three-dimensional image engine to generate a three-dimensional image corresponding to the captured three-dimensional object.
  • the generating module 530 synthesizes all the extracted feature data by using a preset three-dimensional image engine, thereby generating the corresponding three-dimensional image.
  • the three-dimensional image engine is a three-dimensional graphics engine developed in this embodiment for facilitating generation of a three-dimensional image in the embodiment of the present invention.
  • large-scale development tools such as OpenGL or DirectX are generally used to write 3D graphics applications on a microcomputer.
  • Because 3D graphics involve many algorithms and much expertise, it is still difficult to develop 3D applications quickly; therefore, 3D application development requires an easy-to-use, feature-rich three-dimensional graphics development environment that encapsulates hardware operations and graphics algorithms.
  • This three-dimensional graphics development environment can be called a three-dimensional graphics engine.
  • examples include OGRE (Object-Oriented Graphics Rendering Engine) and OSG (OpenSceneGraph).
  • the mobile terminal performs extraction of feature data required to generate a three-dimensional image based on the panoramic photo of the three-dimensional object, and then starts the three-dimensional image engine to generate a corresponding three-dimensional image according to the extracted feature data.
  • the embodiment of the invention can conveniently and quickly generate a three-dimensional image of the captured object and makes it easy for the user to associate that image with related applications, thereby satisfying the user's personalized needs and improving the user experience.
  • FIG. 4 is a schematic diagram of refined functional modules of the extraction module described above. Based on the above embodiment, in this embodiment the three-dimensional object is described as a three-dimensional character object.
  • the extraction module 520 includes:
  • the pre-processing unit 5201 is configured to: extract overall image data of the three-dimensional character object from the panoramic image data, and calibrate the three-dimensional character object in different orientations within the overall image data;
  • the pre-processing unit 5201 may use an image edge detection algorithm to distinguish the three-dimensional character from the background environment, extract the image data enclosed by the detected pixel edges, and thereby obtain the overall image data of the three-dimensional character object.
  • since the panoramic image data obtained by shooting generally includes both the image data of the three-dimensional character object and the image data of the environment in which the character is located, the pre-processing unit 5201 extracts the overall image data of the three-dimensional character object from the environment image and processes it separately. In addition, since the overall image data includes image data of the character in different orientations, the pre-processing unit 5201 also calibrates the character one orientation at a time so the orientations can be distinguished.
  • the manner of extracting the overall image data of the three-dimensional character object is not limited. Since the three-dimensional character object in the panoramic image data forms a closed region, an image edge detection algorithm may, for example, be used to distinguish the character from the background environment, and the image data enclosed by the detected pixel edges is then extracted to obtain the overall image data of the three-dimensional character object.
  • the manner of calibrating the three-dimensional character object in different orientations within the overall image data is likewise not limited and is set according to actual needs.
  • for example, a human body orientation detection algorithm may be used: taking the front of the character as the reference, one orientation may be calibrated every 45°, so that the character's full 360° of orientations is calibrated into eight orientations.
  • because the feature data in the character images corresponding to different orientations largely differs, feature data extraction can then be performed per orientation.
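The 45° calibration described above can be sketched as a simple binning function; this is purely illustrative and the function name and angle convention are invented:

```python
def orientation_bin(yaw_degrees, step=45):
    """Map a body yaw angle (degrees, 0 = facing the camera) to one of 360/step bins."""
    return int(round((yaw_degrees % 360) / step)) % (360 // step)

# With 45-degree steps, a full 360-degree turn is calibrated into 8 orientations.
bins = {orientation_bin(a) for a in range(0, 360, 45)}
```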
  • the first data extracting unit 5202 is configured to: determine the area where the face image is located in the image data corresponding to each orientation of the three-dimensional character object, and extract related feature data from the face image data, the related feature data including at least texture feature data of the face image;
  • the first data extracting unit 5202 is further configured to: after determining the area where the face image is located, apply one or more of zooming, rotation, and stretching to the face image to obtain a face image of a preset standard size.
  • in the present embodiment, the first data extracting unit 5202 performs face detection on the image data of each orientation and determines the area occupied by the face in the image data that contains one.
  • it then locates the key points of the face, such as the eye centers, mouth corners, and nose bridge. Because the shooting distance and angle differ from shot to shot, the size and angular orientation of the head also differ between the corresponding images.
  • the face can therefore be processed by zooming and/or rotating and/or stretching to obtain a frontal face image of a preset standard size, from which the face region feature data is extracted.
  • the manner of extracting the feature data of the face region is not limited.
  • for example, the LBP algorithm (Local Binary Patterns), the HOG algorithm (Histogram of Oriented Gradients), or a Gabor filter algorithm may be used to extract image features, such as the texture feature data and brightness feature data of the face image.
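As a hedged sketch of the LBP idea just mentioned (a minimal 3x3 variant, not the patent's implementation), each pixel's eight neighbours are compared with the centre pixel and packed into an 8-bit code, and the histogram of codes over a region serves as a simple texture descriptor:

```python
import numpy as np

def lbp_code(gray, y, x):
    """Basic 3x3 LBP: compare the 8 neighbours of (y, x) with the centre pixel."""
    center = gray[y, x]
    # Neighbours in clockwise order starting from the top-left.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    code = 0
    for bit, (dy, dx) in enumerate(offsets):
        if gray[y + dy, x + dx] >= center:
            code |= 1 << bit
    return code

def lbp_histogram(gray):
    """Histogram of LBP codes over interior pixels: a simple texture descriptor."""
    h, w = gray.shape
    hist = np.zeros(256, dtype=int)
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            hist[lbp_code(gray, y, x)] += 1
    return hist

# On a perfectly uniform patch every neighbour ties with the centre,
# so every interior pixel produces the all-ones code 255.
hist = lbp_histogram(np.ones((5, 5)))
```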
  • the ratio calculating unit 5203 is configured to: distinguish the head, upper body, lower body, and limbs in the image data corresponding to each orientation of the three-dimensional character object, so as to calculate the length ratios of the head, upper body, lower body, and limbs of the three-dimensional object;
  • the ratio calculating unit 5203 further determines the areas occupied by the head, upper body, lower body, and limbs in the image data of each orientation, and from these areas calculates the corresponding length ratios.
  • the manner of calculating the length ratios of the head, upper body, lower body, and limbs of the three-dimensional object is not limited and may be set according to actual needs.
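A minimal sketch of such a ratio calculation, assuming each body part's vertical extent has already been determined from its segmented region (all names and numbers here are invented for illustration):

```python
def body_ratios(head_box, torso_box, lower_box, limb_box):
    """Compute length ratios of body regions from their (y_top, y_bottom) extents.

    Each *_box is a (y_top, y_bottom) pair in pixel coordinates, e.g. taken
    from the maximum coordinate distance of the segmented region for that part.
    """
    lengths = [bottom - top for top, bottom in
               (head_box, torso_box, lower_box, limb_box)]
    head = lengths[0]
    # Express every part relative to the head length ("head count" proportions).
    return [round(length / head, 2) for length in lengths]

# A figure whose torso is 1.5 heads, lower body 3 heads, and limbs 4.5 heads.
ratios = body_ratios((0, 40), (40, 100), (100, 220), (40, 220))
```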
  • the second data extracting unit 5204 is configured to: extract other feature data from the image data corresponding to each orientation of the three-dimensional character object, the other feature data including at least one of hairstyle feature data, wearing feature data, and color feature data.
  • in this embodiment, the second data extracting unit 5204 continues to acquire the hairstyle feature data, wearing feature data, color feature data, and the like of the captured three-dimensional character object.
  • for example, a combination of edge detection and feature extraction is used to obtain 360° appearance feature data of the character's hairstyle; feature detection is performed on the upper-body and lower-body regions, for example extracting the style of the clothing and feature data such as its main print; and color feature data such as the character's hair color, skin color, pupil color, and clothing color can be further extracted.
  • the extraction method for the feature data is not limited; for example, the LBP algorithm (Local Binary Patterns), the HOG algorithm (Histogram of Oriented Gradients), or a Gabor filter algorithm may be used to extract the image features.
  • in this embodiment, more feature data is extracted from the overall image data of the three-dimensional character object, including distinctive facial feature data, body limb ratio data, hairstyle feature data, wearing feature data, and color feature data, thereby providing the user with a more playable three-dimensional character image.
  • FIG. 5 is a schematic diagram of refined functional modules of the generation module described above. Based on the foregoing embodiment, in this embodiment the generating module 530 includes:
  • the model building unit 5301 is configured to: perform three-dimensional reconstruction according to the extracted part of the feature data related to building the character model, to generate a character model corresponding to the captured three-dimensional character object;
  • the model building unit 5301 is configured to: calculate, from the obtained length ratios of the character's head, upper body, lower body, and limbs, the length, width, height, and limb ratio data of the overall character image in three-dimensional space, to generate the character model corresponding to the captured three-dimensional character object.
  • since the captured panoramic image is a two-dimensional image, the model building unit 5301 performs three-dimensional reconstruction on all of the previously extracted two-dimensional feature data, obtaining the corresponding three-dimensional feature data through dimension-raising processing.
  • for example, the length, width, height, and limb ratio data of the overall character in three-dimensional space are calculated, and a preliminary three-dimensional character model corresponding to the captured three-dimensional character object is generated.
  • the model rendering unit 5302 is configured to: perform character model rendering according to the other extracted feature data related to character model rendering, to generate a three-dimensional character image corresponding to the captured three-dimensional character object.
  • the model rendering unit 5302 is further configured to: splice the image information of the different orientations using a panoramic stitching and fusion technique, to generate the three-dimensional character image corresponding to the captured three-dimensional character object.
  • the model rendering unit 5302 renders the preliminary three-dimensional character model obtained by the model building unit 5301, applying one by one the feature data extracted from the corresponding character image data in the panorama, such as the facial feature data, hairstyle feature data, wearing feature data, and color feature data. In addition, the panoramic stitching and fusion technique may be used to splice the image information of the different orientations, finally generating the three-dimensional character image corresponding to the captured three-dimensional character object.
  • rendering with the extracted feature data makes the generated three-dimensional character image truer to life and closer to the captured three-dimensional character, bringing the user a more interesting experience.
  • FIG. 6 is a schematic diagram of functional modules of a second embodiment of a mobile terminal according to the present invention.
  • the mobile terminal further includes:
  • the application association module 540 is configured to: associate the generated three-dimensional character image with an application scenario in the mobile terminal;
  • the three-dimensional character image display module 550 is configured to display a three-dimensional character image corresponding to the associated application scene on the display screen of the mobile terminal when the associated application scene is in an active state.
  • the application association module 540 associates the generated three-dimensional character image with an application scenario in the mobile terminal; for example, the three-dimensional character image of Wang Er is associated with Wang Er's contact phone number, and the three-dimensional character image of Li San is associated with the voice assistant.
  • in this embodiment, the three-dimensional character image related to an application scenario is displayed on the mobile terminal, thereby satisfying the user's personalized interaction needs in that application, such as giving the three-dimensional character a voice or facial expressions, and so providing the user with a more user-friendly and playable experience.
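One possible way to model the association step in code (a hypothetical sketch; the class, scene keys, and avatar handles are invented, not the terminal's actual API):

```python
class AvatarRegistry:
    """Associate generated 3-D avatars with application scenes on the terminal."""

    def __init__(self):
        self._by_scene = {}

    def associate(self, scene, avatar):
        # e.g. scene = ("contact", name), avatar = a renderable model handle
        self._by_scene[scene] = avatar

    def on_scene_active(self, scene):
        """Return the avatar to display when the scene becomes active, if any."""
        return self._by_scene.get(scene)

registry = AvatarRegistry()
registry.associate(("contact", "Wang Er"), "wang_er_3d_model")
registry.associate(("voice_assistant", None), "li_san_3d_model")
```

When an incoming call from Wang Er activates the contact scene, `on_scene_active` yields the avatar to render on the display.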
  • FIG. 7 is a schematic diagram of functional modules of a third embodiment of a mobile terminal according to the present invention.
  • the mobile terminal further includes:
  • the shooting module 560 is configured to: start a panoramic shooting mode in the camera application of the mobile terminal to capture and store a panoramic photo of the three-dimensional object, wherein during the panoramic shooting it is detected in real time whether the current shooting angle of the mobile terminal is within a set shooting angle range, and if not, a corresponding correction prompt is issued.
  • the shooting module 560 may detect in real time, using one or more of a gravity sensor, an attitude sensor, a gyroscope, and a compass, whether the current shooting angle of the mobile terminal is within the set shooting angle range.
  • in this embodiment, the panoramic shooting mode in the camera application is started by the shooting module 560, the subject (a person, animal, or item for which a three-dimensional image is to be generated) is focused, and the terminal then moves in a circle around the subject at a constant radius, either clockwise or counterclockwise, until the camera has acquired image data covering the subject's full 360° of orientations, as shown in FIG. 8.
  • the shooting module 560 saves the image data of the captured panoramic photo to the mobile terminal to facilitate subsequent generation processing of the three-dimensional image.
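The real-time angle check could look like the following hedged sketch, assuming a sensor fusion layer already reports the terminal's pitch in degrees (the function name, tolerance, and prompt text are invented for illustration):

```python
def check_shooting_angle(pitch_deg, target_pitch=0.0, tolerance=5.0):
    """Check a sensor-reported pitch against the allowed shooting range.

    Returns (ok, prompt): ok is True when the angle is within range; otherwise
    prompt describes the correction (tilt up/down) the user should make.
    """
    delta = pitch_deg - target_pitch
    if abs(delta) <= tolerance:
        return True, ""
    direction = "down" if delta > 0 else "up"
    excess = abs(delta) - tolerance
    return False, f"Tilt the terminal {direction} by {excess:.1f} degrees"
```

During panoramic capture this check would run on every sensor update, and the prompt would be surfaced in the viewfinder when `ok` is False.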
  • FIG. 9 is a schematic flowchart diagram of a first embodiment of a method for generating a three-dimensional image according to the present invention.
  • the method is applied to a mobile terminal, and the method for generating the three-dimensional image includes:
  • Step S10: reading panoramic image data corresponding to the captured panoramic photo of the three-dimensional object;
  • a panoramic photo of the subject is a 360-degree image corresponding to the subject.
  • the subject in this embodiment is three-dimensional, and the form of the three-dimensional object is not limited; for example, a panoramic photo of a real person may be taken, or a panoramic photo of a real animal or a physical item, for generating the corresponding three-dimensional image on the mobile terminal. To enable the subsequent image stitching processing for the three-dimensional image, the images taken at different angles in the acquired panoramic photo have sufficient overlapping information between them.
  • the panoramic photo of the three-dimensional object is saved on the mobile terminal or other device after the shooting is completed, and is read when the three-dimensional image generation process is required.
  • the method of reading is not limited, and is set according to actual needs.
  • Step S20 extracting feature data required to generate a three-dimensional image from the panoramic image data
  • the feature data required to generate a three-dimensional image, such as facial texture, height, clothing, skin color, and limb proportions, is extracted from the read panoramic image data, and the corresponding three-dimensional image can then be generated from the extracted feature data.
  • the method for extracting feature data from an image is not limited here and may be chosen according to actual needs.
  • for example, edge detection may be used to separate the three-dimensional object from the environment background, or a face detection algorithm may be used to locate the face and extract the facial texture features of the three-dimensional object.
  • Step S30 according to the extracted feature data, launch a three-dimensional image engine to generate a three-dimensional image corresponding to the captured three-dimensional object.
  • in this step, all of the extracted feature data is synthesized using a preset three-dimensional image engine, thereby generating the corresponding three-dimensional image.
  • in the embodiment of the invention, the mobile terminal extracts the feature data required to generate a three-dimensional image from the panoramic photo of the three-dimensional object, and then starts the three-dimensional image engine to generate the corresponding three-dimensional image according to the extracted feature data.
  • the embodiment of the invention can conveniently and quickly generate a three-dimensional image of the captured object and makes it easy for the user to associate that image with related applications, thereby satisfying the user's personalized needs and improving the user experience.
  • FIG. 10 is a schematic diagram of the refinement process of step S20.
  • in this embodiment, the three-dimensional object is taken as a three-dimensional character object, and step S20 includes:
  • Step S201 extracting overall image data of the three-dimensional human object from the panoramic image data, and calibrating the three-dimensional human object in different orientations in the overall image data;
  • in step S201, extracting the overall image data of the three-dimensional character object from the panoramic image data includes: using an image edge detection algorithm to distinguish the three-dimensional character from the background environment, and extracting the image data enclosed by the detected pixel edges to obtain the overall image data of the three-dimensional character object.
  • since the panoramic image data obtained by shooting generally includes both the image data of the three-dimensional character object and the image data of the environment in which the character is located, the overall image data of the three-dimensional character object is extracted from the environment image and processed separately.
  • in addition, since the overall image data of the three-dimensional character object includes image data of the character in different orientations, the character is calibrated one orientation at a time so the orientations can be distinguished.
  • the manner of extracting the overall image data of the three-dimensional character object is not limited. Since the three-dimensional character object in the panoramic image data forms a closed region, an image edge detection algorithm may, for example, be used to distinguish the character from the background environment, and the image data enclosed by the detected pixel edges is then extracted to obtain the overall image data of the three-dimensional character object.
  • the manner of calibrating the three-dimensional character object in different orientations within the overall image data is likewise not limited and is set according to actual needs.
  • for example, a human body orientation detection algorithm may be used: taking the front of the character as the reference, one orientation may be calibrated every 45°, so that the character's full 360° of orientations is calibrated into eight orientations.
  • because the feature data in the character images corresponding to different orientations largely differs, feature data extraction can then be performed per orientation.
  • Step S202: extracting face image data from the image data corresponding to each orientation of the three-dimensional character object, and extracting related feature data from the face image data, the related feature data including at least texture feature data of the face image;
  • in step S202, after the area where the face image is located is determined in the image data of the different orientations, the method further includes: applying one or more of zooming, rotation, and stretching to the face image to obtain a face image of a preset standard size.
  • because the facial image data is an important distinguishing feature of a character, face detection is performed on the image data of every orientation, and the area occupied by the face is determined in the image data that contains one. The key points of the face, such as the eye centers, mouth corners, and nose bridge, are then located. Because the shooting distance and angle differ from shot to shot, the size and angular orientation of the head also differ between the corresponding images; the face can therefore be processed by zooming and/or rotating and/or stretching to obtain a frontal face image of a preset standard size, from which the face region feature data is extracted.
  • the manner of extracting the feature data of the face region is not limited.
  • for example, the LBP algorithm (Local Binary Patterns), the HOG algorithm (Histogram of Oriented Gradients), or a Gabor filter algorithm may be used to extract image features, such as the texture feature data and brightness feature data of the face image.
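As an illustrative sketch of the scaling step in the normalization above (nearest-neighbour only; rotation and stretching are omitted, and the 64x64 target size is an assumed standard size, not one specified by the patent):

```python
import numpy as np

def resize_nearest(img, out_h, out_w):
    """Nearest-neighbour rescale of a 2-D grayscale face crop to a standard size."""
    in_h, in_w = img.shape
    ys = np.arange(out_h) * in_h // out_h   # source row for each output row
    xs = np.arange(out_w) * in_w // out_w   # source column for each output column
    return img[np.ix_(ys, xs)]

# Normalise a detected face crop of arbitrary size to a fixed 64x64 patch.
face_crop = np.arange(100.0).reshape(10, 10)
standard_face = resize_nearest(face_crop, 64, 64)
```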
  • Step S203 distinguishing between the head, the upper body, the lower body, and the limbs in the image data corresponding to the different orientations of the three-dimensional human object, to correspondingly calculate the length ratio of the head, the upper body, the lower body, and the limbs of the three-dimensional object.
  • this embodiment further determines the areas occupied by the head, upper body, lower body, and limbs in the image data corresponding to the different orientations, and from them obtains the length ratios of the head, upper body, lower body, and limbs of the three-dimensional object. For example, the head, upper body, lower body, and limbs are distinguished in the character image according to the relative positions of the different parts of the human body and a correlation function, and the length ratios are then calculated from the pixel coordinates of each part's area, such as the maximum coordinate distance.
  • the manner of calculating the length ratios of the head, upper body, lower body, and limbs of the three-dimensional object is not limited and may be set according to actual needs.
  • Step S204 extracting other feature data from the image data corresponding to the three-dimensional character object in different orientations, the other feature data including at least one of hairstyle feature data, wearing feature data, and color feature data.
  • in this embodiment, the hairstyle feature data, wearing feature data, color feature data, and the like of the captured three-dimensional character object are further acquired.
  • for example, a combination of edge detection and feature extraction is used to obtain 360° appearance feature data of the character's hairstyle, and feature detection is performed on the upper-body and lower-body regions, for example extracting the style of the clothing and feature data such as its main print.
  • color feature data such as the character's hair color, skin color, pupil color, and clothing color can be further extracted.
  • the extraction method for the feature data is not limited; for example, the LBP algorithm (Local Binary Patterns), the HOG algorithm (Histogram of Oriented Gradients), or a Gabor filter algorithm may be used to extract the image features.
  • the execution order of the above steps S202, S203, and S204 is not limited.
  • in this embodiment, more feature data is extracted from the overall image data of the three-dimensional character object, including distinctive facial feature data, body limb ratio data, hairstyle feature data, wearing feature data, and color feature data, thereby providing the user with a more playable three-dimensional character image.
  • FIG. 11 is a schematic flowchart of the refinement of step S30. Based on the above embodiment, in this embodiment the foregoing step S30 includes:
  • Step S301: performing three-dimensional reconstruction according to the extracted part of the feature data related to building the character model, to generate a character model corresponding to the captured three-dimensional character object;
  • step S301 includes: calculating, from the obtained length ratios of the character's head, upper body, lower body, and limbs, the length, width, height, and limb ratio data of the overall character in three-dimensional space, to generate the character model corresponding to the captured three-dimensional character object.
  • since the captured panoramic image is a two-dimensional image, three-dimensional reconstruction is performed on all of the previously extracted two-dimensional feature data, and the corresponding three-dimensional feature data is obtained through dimension-raising processing.
  • for example, the length, width, height, and limb ratio data of the overall character in three-dimensional space are calculated, and a preliminary three-dimensional character model corresponding to the captured three-dimensional character object is generated.
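The dimension-raising step can be illustrated with a toy calculation that converts head-relative ratios into absolute model heights (the 1.75 m height and the ratios are assumed example values, not taken from the patent):

```python
def model_segment_heights(total_height_m, ratios):
    """Convert head-relative length ratios into absolute 3-D model heights.

    ratios: head-relative lengths for (head, upper body, lower body) whose sum
    spans the full standing height; total_height_m is the person's real or
    assumed height in metres.
    """
    unit = total_height_m / sum(ratios)      # metres per "head length"
    return [round(unit * r, 3) for r in ratios]

# A 1.75 m figure whose head : upper body : lower body ratios are 1 : 2.5 : 3.5.
heights = model_segment_heights(1.75, [1.0, 2.5, 3.5])
```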
  • Step S302 Perform character model rendering according to the extracted other feature data related to the character model rendering in the feature data to generate a three-dimensional character image corresponding to the captured three-dimensional character object.
  • the step S302 may further include: performing splicing processing on the image information of different orientations by using a panoramic stitching fusion technology to generate a three-dimensional character image corresponding to the photographed three-dimensional character object.
  • in this step, the obtained preliminary three-dimensional character model is rendered, applying one by one the feature data extracted from the corresponding character image data in the panorama, such as the facial feature data, hairstyle feature data, wearing feature data, and color feature data. In addition, the panoramic stitching and fusion technique may be used to splice the image information of the different orientations, finally generating the three-dimensional character image corresponding to the captured three-dimensional character object.
  • rendering with the extracted feature data makes the generated three-dimensional character image truer to life and closer to the captured three-dimensional character, bringing the user a more interesting experience.
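A hedged sketch of the splicing idea: linearly blending the overlapping columns of two adjacent views, which is the simplest form of panoramic stitching fusion (the array sizes and overlap width are invented; real stitching would also align the views geometrically first):

```python
import numpy as np

def blend_overlap(left, right, overlap):
    """Stitch two horizontally adjacent views by linearly blending their overlap."""
    alpha = np.linspace(1.0, 0.0, overlap)              # weight for the left view
    seam = left[:, -overlap:] * alpha + right[:, :overlap] * (1.0 - alpha)
    return np.hstack([left[:, :-overlap], seam, right[:, overlap:]])

# Two 4x6 views sharing a 2-pixel overlap stitch into a 4x10 strip.
a = np.full((4, 6), 10.0)
b = np.full((4, 6), 20.0)
pano = blend_overlap(a, b, overlap=2)
```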
  • FIG. 12 is a schematic flowchart diagram of a second embodiment of a method for generating a three-dimensional image according to the present invention. Based on the above embodiment, in this implementation, after the step S30, the method further includes:
  • Step S40 associate the generated three-dimensional character image with an application scenario in the mobile terminal
  • step S50 when the associated application scenario is in an active state, a three-dimensional character image corresponding to the associated application scenario is displayed on the display screen of the mobile terminal.
  • in this embodiment, the generated three-dimensional character image is associated with an application scenario in the mobile terminal; for example, the three-dimensional character image of Wang Er is associated with Wang Er's contact phone number, and the three-dimensional character image of Li San is associated with the voice assistant. A three-dimensional character image corresponding to the associated application scene is then displayed on the display screen of the mobile terminal: for example, when a call from Wang Er is received, Wang Er's three-dimensional character image is displayed on the display screen; or when the voice assistant is opened, Li San's three-dimensional character image is displayed.
  • in this embodiment, the three-dimensional character image related to an application scenario is displayed on the mobile terminal, thereby satisfying the user's personalized interaction needs in that application, such as giving the three-dimensional character a voice or facial expressions, and so providing the user with a more user-friendly and playable experience.
  • FIG. 13 is a schematic flowchart diagram of a third embodiment of a method for generating a three-dimensional image according to the present invention. Based on the foregoing embodiment, in this implementation, before step S10, the method further includes:
  • Step S01: starting a panoramic shooting mode in the camera application of the mobile terminal to capture and store a panoramic photo of the three-dimensional object, wherein during the panoramic shooting it is detected in real time whether the current shooting angle of the mobile terminal is within a set shooting angle range, and if not, a corresponding correction prompt is issued.
  • in step S01, one or more of a gravity sensor, an attitude sensor, a gyroscope, and a compass may be used to detect in real time whether the current shooting angle of the mobile terminal is within the set shooting angle range.
  • in this embodiment, the panoramic shooting mode in the camera application is started, the subject (a person, animal, or item for which a three-dimensional image is to be generated) is focused, and the terminal then moves in a circle around the subject at a constant radius, either clockwise or counterclockwise, until the camera has acquired image data covering the subject's full 360° of orientations, as shown in FIG. 8.
  • the image data of the captured panoramic photo is saved in the mobile terminal to facilitate subsequent generation processing of the three-dimensional image.
  • an embodiment of the invention further provides a computer-readable storage medium storing computer-executable instructions which, when executed by a processor, implement the above method for generating a three-dimensional image.
  • the embodiment of the invention can conveniently and quickly generate a three-dimensional image of the captured object and makes it easy for the user to associate that image with related applications, thereby satisfying the user's personalized needs and improving the user experience.

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Graphics (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Studio Devices (AREA)
  • Processing Or Creating Images (AREA)
  • Image Processing (AREA)

Abstract

A mobile terminal and a three-dimensional image generation method therefor. The mobile terminal comprises: a reading module (510), configured to: read panoramic image data corresponding to a panoramic picture of a shot three-dimensional object; an extraction module (520), configured to: extract characteristic data needed for generation of a three-dimensional image from the panoramic image data; and a generation module (530), configured to: initiate a three-dimensional image engine according to the extracted characteristic data so as to generate a three-dimensional image corresponding to the shot three-dimensional object.

Description

移动终端及其三维形象的生成方法Mobile terminal and method for generating three-dimensional image thereof 技术领域Technical field
本申请涉及但不限于通信技术领域,尤指移动终端及其三维形象的生成方法。The present application relates to, but is not limited to, the field of communication technologies, and in particular to a mobile terminal and a method for generating a three-dimensional image thereof.
背景技术Background technique
相关技术，在计算机上生成三维人物形象需要用到专业的三维人物模型和开发工具，同时对于开发人员的专业素质要求也比较高。此外在计算机上也无法快速生成三维人物形象，而这在移动终端上更是如此，因而相关技术上无法在移动终端的日常使用场景下生成生动活泼的三维形象，进而无法满足用户在相关应用场景下日益增长的个性化交互需求。In the related art, generating a three-dimensional character image on a computer requires professional three-dimensional character models and development tools, and places high demands on developers' expertise. Moreover, a three-dimensional character image cannot be generated quickly on a computer, let alone on a mobile terminal. The related art therefore cannot generate a vivid three-dimensional image in the daily use scenarios of a mobile terminal, and thus cannot satisfy users' growing demand for personalized interaction in related application scenarios.
发明内容Summary of the invention
以下是对本文详细描述的主题的概述。本概述并非是为了限制权利要求的保护范围。The following is an overview of the topics detailed in this document. This Summary is not intended to limit the scope of the claims.
本文提供一种移动终端及其三维形象的生成方法,可以实现在移动终端的日常使用场景下生成生动活泼的三维形象。This paper provides a mobile terminal and a method for generating a three-dimensional image thereof, which can generate a vivid three-dimensional image in a daily use scenario of the mobile terminal.
本发明实施例提供的一种移动终端，所述移动终端包括：A mobile terminal provided by an embodiment of the present invention includes:
读取模块,设置为:读取与所拍摄的三维对象的全景照片相对应的全景图像数据;a reading module configured to: read panoramic image data corresponding to the panoramic photo of the captured three-dimensional object;
提取模块,设置为:从所述全景图像数据中提取生成三维形象所需的特征数据;An extraction module, configured to: extract feature data required to generate a three-dimensional image from the panoramic image data;
生成模块，设置为：根据所提取的所述特征数据，启动三维形象引擎以生成与所拍摄的所述三维对象相对应的三维形象。a generation module, configured to: according to the extracted feature data, start a three-dimensional image engine to generate a three-dimensional image corresponding to the captured three-dimensional object.
可选地,所述三维对象为三维人物对象;所述提取模块包括:Optionally, the three-dimensional object is a three-dimensional character object; the extraction module includes:
预处理单元，设置为：从所述全景图像数据中提取所述三维人物对象的整体图像数据，并对所述整体图像数据中不同朝向的所述三维人物对象进行标定；a pre-processing unit, configured to: extract overall image data of the three-dimensional character object from the panoramic image data, and calibrate the three-dimensional character objects of different orientations in the overall image data;
第一数据提取单元，设置为：从所述三维人物对象在不同朝向所对应的图像数据中提取人脸图像数据，并从所述人脸图像数据中提取相关特征数据，所述相关特征数据包括人脸图像的纹理特征数据；a first data extraction unit, configured to: extract face image data from the image data corresponding to the three-dimensional character object in different orientations, and extract relevant feature data from the face image data, the relevant feature data including texture feature data of the face image;
比例测算单元，设置为：区分所述三维人物对象在不同朝向所对应的图像数据中的头部、上半身、下半身及四肢所在区域，以相应测算所述三维人物对象的头部、上半身、下半身及四肢的长度比例；a ratio calculating unit, configured to: distinguish the regions of the head, upper body, lower body, and limbs in the image data corresponding to the three-dimensional character object in different orientations, so as to correspondingly calculate the length ratios of the head, upper body, lower body, and limbs of the three-dimensional character object;
第二数据提取单元,设置为:从所述三维人物对象在不同朝向所对应的图像数据中提取其他特征数据,所述其他特征数据包括发型特征数据、穿着特征数据、颜色特征数据。The second data extracting unit is configured to extract other feature data from the image data corresponding to the three-dimensional character object in different orientations, and the other feature data includes hairstyle feature data, wearing feature data, and color feature data.
可选地，所述预处理单元，设置为：采用图像边缘检测算法区分三维人物与背景环境，将所检测确定的像素边缘闭合后所对应的图像数据提取出来，得到三维人物对象的整体图像数据。Optionally, the pre-processing unit is configured to: use an image edge detection algorithm to distinguish the three-dimensional character from the background environment, and extract the image data enclosed by the detected pixel edges once they are closed, to obtain the overall image data of the three-dimensional character object.
可选地,所述第一数据提取单元,还设置为:从所述三维人物对象在不同朝向所对应的图像数据中确定人脸图像所在区域之后,将人脸图像通过缩放、旋转、拉伸中的一种或多种处理得到预设的标准大小的人脸图像。Optionally, the first data extracting unit is further configured to: after determining the area where the face image is located in the image data corresponding to the three-dimensional character object in different orientations, the face image is zoomed, rotated, and stretched One or more of the processes result in a preset standard size face image.
可选地,所述生成模块包括:Optionally, the generating module includes:
模型构建单元,设置为:根据所提取的所述特征数据中与构建人物模型相关的部分特征数据进行三维重建,以生成与所拍摄的所述三维人物对象相对应的人物模型;a model building unit, configured to: perform three-dimensional reconstruction according to the extracted partial feature data related to the constructing character model in the feature data, to generate a character model corresponding to the captured three-dimensional character object;
模型渲染单元,设置为:根据所提取的所述特征数据中与人物模型渲染相关的其他部分特征数据进行人物模型渲染,以生成与所拍摄的所述三维人物对象相对应的三维人物形象。The model rendering unit is configured to: perform a character model rendering according to the extracted other part of the feature data related to the character model rendering to generate a three-dimensional character image corresponding to the captured three-dimensional character object.
可选地,所述模型构建单元,设置为:利用获取的人物头部、上半身、下半身及四肢的长度比例,计算出在三维空间中整体人物形象的长宽高及四肢比例数据,以生成与所拍摄的所述三维人物对象相对应的人物模型。Optionally, the model building unit is configured to: calculate the length, width, height, and limb ratio data of the overall character image in the three-dimensional space by using the obtained length ratios of the head, the upper body, the lower body, and the limbs of the person to generate and The character model corresponding to the captured three-dimensional character object.
可选地，所述模型渲染单元，还设置为：采用全景拼接融合技术将不同朝向的图像信息进行拼接处理，以生成与所拍摄的所述三维人物对象相对应的三维人物形象。Optionally, the model rendering unit is further configured to: stitch the image information of different orientations by using a panoramic stitching and fusion technique, to generate a three-dimensional character image corresponding to the captured three-dimensional character object.
可选地,所述移动终端还包括:Optionally, the mobile terminal further includes:
应用关联模块,设置为:将生成的三维人物形象与所述移动终端内的应用场景进行关联;The application association module is configured to: associate the generated three-dimensional character image with an application scenario in the mobile terminal;
三维人物形象显示模块,设置为:当所关联的应用场景处于激活状态时,在所述移动终端的显示屏上显示与所关联的应用场景相对应的三维人物形象。The three-dimensional character image display module is configured to: when the associated application scene is in an active state, display a three-dimensional character image corresponding to the associated application scene on a display screen of the mobile terminal.
可选地,所述移动终端还包括:Optionally, the mobile terminal further includes:
拍摄模块，设置为：启动所述移动终端的摄像头应用中的全景拍摄模式以拍摄并存储所述三维对象的全景照片，其中，在所述移动终端进行全景拍摄过程中，实时检测当前所述移动终端的拍摄角度是否处于设定的拍摄角度范围之内，若否，则发出相应修正提示。a shooting module, configured to: start a panoramic shooting mode in a camera application of the mobile terminal to capture and store a panoramic photo of the three-dimensional object, wherein, during the panoramic shooting by the mobile terminal, whether the current shooting angle of the mobile terminal is within a set shooting-angle range is detected in real time, and if not, a corresponding correction prompt is issued.
可选地，所述拍摄模块，设置为：通过采用重力传感器、姿态传感器、陀螺仪、罗盘中的一种或多种，实时检测当前所述移动终端的拍摄角度是否处于设定的拍摄角度范围之内。Optionally, the shooting module is configured to: detect in real time, by using one or more of a gravity sensor, an attitude sensor, a gyroscope, and a compass, whether the current shooting angle of the mobile terminal is within the set shooting-angle range.
本发明实施例还提供一种三维形象的生成方法,应用于移动终端,所述三维形象的生成方法包括:The embodiment of the present invention further provides a method for generating a three-dimensional image, which is applied to a mobile terminal, and the method for generating the three-dimensional image includes:
读取与所拍摄的三维对象的全景照片相对应的全景图像数据;Reading panoramic image data corresponding to the panoramic photo of the captured three-dimensional object;
从所述全景图像数据中提取生成三维形象所需的特征数据;Extracting feature data required to generate a three-dimensional image from the panoramic image data;
根据所提取的所述特征数据,启动三维形象引擎以生成与所拍摄的所述三维对象相对应的三维形象。Based on the extracted feature data, a three-dimensional image engine is launched to generate a three-dimensional image corresponding to the captured three-dimensional object.
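The three claimed steps — read, extract, generate — can be sketched as a simple pipeline. The callables below are placeholders for the terminal's actual reading/extraction/engine modules, not an implementation of them:

```python
def generate_three_dimensional_image(panorama_path, read, extract, engine):
    """Mirror the three claimed steps: read the panoramic image data,
    extract the feature data, then start the 3-D image engine."""
    panoramic_data = read(panorama_path)   # step 1: read
    features = extract(panoramic_data)     # step 2: extract
    return engine(features)                # step 3: generate

# Toy stand-ins that only show the data flow between the steps.
result = generate_three_dimensional_image(
    "pano.jpg",
    read=lambda p: {"path": p},
    extract=lambda d: {"face": "...", "ratios": (1, 3, 4)},
    engine=lambda f: ("3d-figure", f["ratios"]),
)
```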
可选地,所述三维对象为三维人物对象;所述从所述全景图像数据中提取生成三维形象所需的特征数据包括:Optionally, the three-dimensional object is a three-dimensional character object; and extracting, from the panoramic image data, feature data required to generate a three-dimensional image includes:
从所述全景图像数据中提取所述三维人物对象的整体图像数据,并对所述整体图像数据中不同朝向的所述三维人物对象进行标定; Extracting overall image data of the three-dimensional human object from the panoramic image data, and calibrating the three-dimensional human object in different orientations in the overall image data;
从所述三维人物对象在不同朝向所对应的图像数据中提取人脸图像数据,并从所述人脸图像数据中提取相关特征数据,所述相关特征数据至少包括人脸图像的纹理特征数据;Extracting face image data from the image data corresponding to the three-dimensional character object in different orientations, and extracting relevant feature data from the face image data, the related feature data including at least texture feature data of the face image;
区分所述三维人物对象在不同朝向所对应的图像数据中的头部、上半身、下半身及四肢所在区域,以相应测算所述三维人物对象的头部、上半身、下半身及四肢的长度比例;Distinguishing between the head, the upper body, the lower body, and the limbs in the image data corresponding to the different orientations of the three-dimensional character object, to correspondingly calculate the length ratios of the head, the upper body, the lower body, and the limbs of the three-dimensional object;
从所述三维人物对象在不同朝向所对应的图像数据中提取其他特征数据,所述其他特征数据包括发型特征数据、穿着特征数据、颜色特征数据。Other feature data is extracted from the image data corresponding to the three-dimensional character object in different orientations, the other feature data including hairstyle feature data, wearing feature data, and color feature data.
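A minimal illustration of the length-ratio step above, assuming the head, upper-body, and lower-body regions have already been segmented into pixel-row spans for one orientation (the segmentation itself, and the limbs, are omitted from this sketch):

```python
def body_length_ratios(head_rows, upper_rows, lower_rows):
    """Given (top, bottom) pixel-row spans of the head, upper body,
    and lower body, return their lengths normalised by the head
    length. Assumed input format, not specified by the embodiment."""
    lengths = [bottom - top
               for top, bottom in (head_rows, upper_rows, lower_rows)]
    head = lengths[0]
    return tuple(round(length / head, 2) for length in lengths)
```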
可选地,所述从所述全景图像数据中提取所述三维人物对象的整体图像数据包括:Optionally, the extracting the overall image data of the three-dimensional character object from the panoramic image data includes:
采用图像边缘检测算法区分三维人物与背景环境，将所检测确定的像素边缘闭合后所对应的图像数据提取出来，得到三维人物对象的整体图像数据。An image edge detection algorithm is used to distinguish the three-dimensional character from the background environment, and the image data enclosed by the detected pixel edges once they are closed is extracted, to obtain the overall image data of the three-dimensional character object.
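As a toy stand-in for the edge-detection step (a production implementation would use Sobel or Canny operators and then close the detected contours), the following marks pixels whose neighbour intensity difference exceeds a threshold:

```python
def edge_mask(gray, threshold=50):
    """Mark a pixel as an edge when its horizontal or vertical
    intensity difference exceeds `threshold`. `gray` is a 2-D list
    of grayscale values; a simplified sketch of edge detection."""
    h, w = len(gray), len(gray[0])
    mask = [[0] * w for _ in range(h)]
    for y in range(h - 1):
        for x in range(w - 1):
            gx = abs(gray[y][x + 1] - gray[y][x])  # horizontal gradient
            gy = abs(gray[y + 1][x] - gray[y][x])  # vertical gradient
            if max(gx, gy) > threshold:
                mask[y][x] = 1
    return mask
```

On a flat background with a bright subject, the mask is non-zero only along the silhouette, which is the boundary the extraction step closes and cuts along.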
可选地,所述从所述三维人物对象在不同朝向所对应的图像数据中确定人脸图像所在区域之后,还包括:Optionally, after determining the area where the face image is located in the image data corresponding to the different orientations of the three-dimensional character object, the method further includes:
将人脸图像通过缩放、旋转、拉伸中的一种或多种处理得到预设的标准大小的人脸图像。The face image is processed by one or more of zooming, rotating, and stretching to obtain a preset standard size face image.
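The normalisation to a preset standard size can be illustrated with a nearest-neighbour rescale; rotation and stretching, which the embodiment also allows, are omitted from this sketch:

```python
def resize_nearest(img, out_h, out_w):
    """Nearest-neighbour rescale of a 2-D pixel grid to a preset
    standard size — a stand-in for the zoom part of the
    zoom/rotate/stretch processing."""
    in_h, in_w = len(img), len(img[0])
    return [[img[y * in_h // out_h][x * in_w // out_w]
             for x in range(out_w)]
            for y in range(out_h)]
```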
可选地,所述根据所提取的所述特征数据,启动三维形象引擎以生成与所拍摄的所述三维对象相对应的三维形象包括:Optionally, the generating, according to the extracted feature data, the three-dimensional image engine to generate a three-dimensional image corresponding to the captured three-dimensional object comprises:
根据所提取的所述特征数据中与构建人物模型相关的部分特征数据进行三维重建,以生成与所拍摄的所述三维人物对象相对应的人物模型;And performing three-dimensional reconstruction according to the extracted partial feature data related to the constructing character model in the feature data to generate a character model corresponding to the captured three-dimensional character object;
根据所提取的所述特征数据中与人物模型渲染相关的其他部分特征数据进行人物模型渲染,以生成与所拍摄的所述三维人物对象相对应的三维人物形象。Character model rendering is performed according to the extracted other feature data related to the character model rendering in the feature data to generate a three-dimensional character image corresponding to the captured three-dimensional character object.
可选地,所述根据所提取的所述特征数据,启动三维形象引擎以生成与所拍摄的所述三维对象相对应的三维形象之后还包括:Optionally, after the three-dimensional image engine is started to generate the three-dimensional image corresponding to the captured three-dimensional object according to the extracted feature data, the method further includes:
将生成的三维人物形象与所述移动终端内的应用场景进行关联; Associating the generated three-dimensional character image with an application scenario in the mobile terminal;
当所关联的应用场景处于激活状态时,在所述移动终端的显示屏上显示与所关联的应用场景相对应的三维人物形象。When the associated application scenario is in an active state, a three-dimensional character image corresponding to the associated application scenario is displayed on a display screen of the mobile terminal.
可选地,所述根据所提取的所述特征数据中与构建人物模型相关的部分特征数据进行三维重建,以生成与所拍摄的所述三维人物对象相对应的人物模型包括:Optionally, performing three-dimensional reconstruction according to the extracted partial feature data related to the built-in character model in the extracted feature data to generate a character model corresponding to the captured three-dimensional character object includes:
利用获取的人物头部、上半身、下半身及四肢的长度比例,计算出在三维空间中整体人物形象的长宽高及四肢比例数据,以生成与所拍摄的所述三维人物对象相对应的人物模型。Calculating the length, width, height and limb ratio data of the overall character in the three-dimensional space by using the obtained length ratios of the head, upper body, lower body and limbs of the person to generate a character model corresponding to the captured three-dimensional object .
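One hedged reading of this step: given the measured length ratios, scale a unit figure so that the parts sum to a target model height in three-dimensional space. The ratio and height values below are illustrative only, and the limbs and width/depth dimensions are left out:

```python
def figure_dimensions(ratios, model_height=1.8):
    """Scale head/upper-body/lower-body lengths (given relative to the
    head) so they sum to `model_height`. Illustrative sketch; the
    embodiment does not fix a target height."""
    unit = model_height / sum(ratios)
    return {part: round(unit * r, 3)
            for part, r in zip(("head", "upper_body", "lower_body"), ratios)}
```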
可选地,所述根据所提取的所述特征数据中与构建人物模型相关的部分特征数据进行三维重建,以生成与所拍摄的所述三维人物对象相对应的人物模型之后,还包括:Optionally, after the three-dimensional reconstruction is performed on the part of the feature data related to the built-in character model in the extracted feature data, to generate a character model corresponding to the captured three-dimensional character object, the method further includes:
采用全景拼接融合技术将不同朝向的图像信息进行拼接处理,以生成与所拍摄的所述三维人物对象相对应的三维人物形象。The image information of different orientations is spliced by a panoramic stitching fusion technique to generate a three-dimensional character image corresponding to the captured three-dimensional character object.
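A one-dimensional toy version of the stitching/fusion step — real panoramic stitching also aligns and warps the neighbouring views before blending, which this sketch omits:

```python
def stitch_rows(left, right, overlap):
    """Blend two horizontally overlapping scanlines by averaging the
    shared `overlap` pixels, then concatenate the remainder. A 1-D
    simplification of panoramic stitching and fusion."""
    blended = [(a + b) // 2
               for a, b in zip(left[-overlap:], right[:overlap])]
    return left[:-overlap] + blended + right[overlap:]
```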
可选地,所述读取与所拍摄的三维对象的全景照片相对应的全景图像数据之前包括:Optionally, the reading the panoramic image data corresponding to the panoramic photo of the captured three-dimensional object includes:
启动所述移动终端的摄像头应用中的全景拍摄模式以拍摄并存储所述三维对象的全景照片，其中，在所述移动终端进行全景拍摄过程中，实时检测当前所述移动终端的拍摄角度是否处于设定的拍摄角度范围之内，若否，则发出相应修正提示。Starting a panoramic shooting mode in the camera application of the mobile terminal to capture and store a panoramic photo of the three-dimensional object, wherein, during the panoramic shooting by the mobile terminal, whether the current shooting angle of the mobile terminal is within a set shooting-angle range is detected in real time, and if not, a corresponding correction prompt is issued.
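The real-time angle check and correction prompt might look like the following sketch; the target angle, tolerance, and prompt wording are assumptions, not values specified by the embodiment:

```python
def check_shooting_angle(pitch_deg, target=90.0, tolerance=5.0):
    """Return (ok, prompt). `pitch_deg` is the device pitch read from
    the gravity/attitude sensor; when it drifts outside the set range,
    a correction hint is produced (hypothetical range and wording)."""
    deviation = pitch_deg - target
    if abs(deviation) <= tolerance:
        return True, ""
    direction = "down" if deviation > 0 else "up"
    return False, f"Tilt the phone {direction} by {abs(deviation) - tolerance:.1f} degrees"
```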
可选地,所述实时检测当前所述移动终端的拍摄角度是否处于设定的拍摄角度范围之内包括:Optionally, the real-time detecting whether the shooting angle of the current mobile terminal is within a set shooting angle range includes:
通过采用重力传感器、姿态传感器、陀螺仪、罗盘中的一种或多种,实时检测当前所述移动终端的拍摄角度是否处于设定的拍摄角度范围之内。By using one or more of a gravity sensor, an attitude sensor, a gyroscope, and a compass, it is detected in real time whether the shooting angle of the current mobile terminal is within a set shooting angle range.
本发明实施例还提供一种计算机可读存储介质，存储有计算机可执行指令，所述计算机可执行指令被处理器执行时实现上述方法。The embodiment of the present invention further provides a computer-readable storage medium storing computer-executable instructions which, when executed by a processor, implement the above method.
本发明实施例中，移动终端基于三维对象的全景照片进行生成三维形象所需的特征数据的提取，进而根据所提取的特征数据，启动三维形象引擎以生成相应的三维形象。本发明实施例能够方便快捷生成所拍摄对象的三维形象，并便于用户与相关应用进行关联，从而满足用户的个性化使用需求，提升用户使用体验。In the embodiment of the present invention, the mobile terminal extracts, based on a panoramic photo of a three-dimensional object, the feature data required to generate a three-dimensional image, and then starts a three-dimensional image engine according to the extracted feature data to generate the corresponding three-dimensional image. The embodiment of the present invention can conveniently and quickly generate a three-dimensional image of the captured object and makes it easy for the user to associate the image with related applications, thereby satisfying the user's personalized needs and improving the user experience.
在阅读并理解了附图和详细描述后,可以明白其他方面。Other aspects will be apparent upon reading and understanding the drawings and detailed description.
附图概述BRIEF abstract
图1为实现本发明实施例中一个可选的移动终端的硬件结构示意图;1 is a schematic structural diagram of hardware of an optional mobile terminal in implementing an embodiment of the present invention;
图2为图1中相机的电气结构框图;Figure 2 is a block diagram showing the electrical structure of the camera of Figure 1;
图3为本发明移动终端第一实施例的功能模块示意图;3 is a schematic diagram of functional modules of a first embodiment of a mobile terminal according to the present invention;
图4为图3中提取模块的细化功能模块示意图;4 is a schematic diagram of a refinement function module of the extraction module of FIG. 3;
图5为图3中生成模块的细化功能模块示意图;5 is a schematic diagram of a refinement function module of the generation module in FIG. 3;
图6为本发明移动终端第二实施例的功能模块示意图;6 is a schematic diagram of functional modules of a second embodiment of a mobile terminal according to the present invention;
图7为本发明移动终端第三实施例的功能模块示意图;7 is a schematic diagram of functional modules of a third embodiment of a mobile terminal according to the present invention;
图8为本发明移动终端拍摄全景照片一实施例的示意图;8 is a schematic diagram of an embodiment of a panoramic photo taken by a mobile terminal according to the present invention;
图9为本发明三维形象的生成方法第一实施例的流程示意图;9 is a schematic flow chart of a first embodiment of a method for generating a three-dimensional image according to the present invention;
图10为步骤S20的细化流程示意图;FIG. 10 is a schematic diagram of the refinement process of step S20;
图11为步骤S30的细化流程示意图;11 is a schematic flowchart of the refinement of step S30;
图12为本发明三维形象的生成方法第二实施例的流程示意图;12 is a schematic flow chart of a second embodiment of a method for generating a three-dimensional image according to the present invention;
图13为本发明三维形象的生成方法第三实施例的流程示意图。FIG. 13 is a schematic flow chart of a third embodiment of a method for generating a three-dimensional image according to the present invention.
本发明的实施方式Embodiments of the invention
应当理解,此处所描述的实施例仅用以解释本申请,并不用于限定本申请。It is to be understood that the embodiments described herein are merely illustrative of the application and are not intended to be limiting.
现在将参考附图描述实现本发明实施例的移动终端。在后续的描述中，使用用于表示元件的诸如"模块"、"部件"或"单元"的后缀仅为了有利于本发明实施例的说明，其本身并没有特定的意义。因此，"模块"与"部件"可以混合地使用。A mobile terminal implementing an embodiment of the present invention will now be described with reference to the accompanying drawings. In the following description, suffixes such as "module", "part", or "unit" used to denote elements are adopted merely to facilitate the description of the embodiments of the present invention and have no specific meaning in themselves. Therefore, "module" and "component" may be used interchangeably.
移动终端可以以多种形式来实施。例如,本文中描述的终端可以包括诸如移动电话、智能电话、笔记本电脑、数字广播接收器、PDA(个人数字助理)、PAD(平板电脑)、PMP(便携式多媒体播放器)、导航装置等等的移动终端以及诸如数字TV、台式计算机等等的固定终端。下面,假设终端是移动终端。然而,本领域技术人员将理解的是,除了特别用于移动目的的元件之外,根据本发明的实施方式的构造也能够应用于固定类型的终端。The mobile terminal can be implemented in a variety of forms. For example, the terminals described herein may include, for example, mobile phones, smart phones, notebook computers, digital broadcast receivers, PDAs (Personal Digital Assistants), PADs (Tablets), PMPs (Portable Multimedia Players), navigation devices, and the like. Mobile terminals and fixed terminals such as digital TVs, desktop computers, and the like. In the following, it is assumed that the terminal is a mobile terminal. However, those skilled in the art will appreciate that configurations in accordance with embodiments of the present invention can be applied to fixed type terminals in addition to components that are specifically for mobile purposes.
图1为实现本发明实施例中一个可选的移动终端的硬件结构示意图。FIG. 1 is a schematic structural diagram of hardware of an optional mobile terminal in implementing an embodiment of the present invention.
移动终端100可以包括无线通信单元110、A/V(音频/视频)输入单元120、用户输入单元130、感测单元140、输出单元150、存储器160、控制器170等等。图1示出了具有多种组件的移动终端,但是应理解的是,并不要求实施所有示出的组件。可以替代地实施更多或更少的组件。将在下面详细描述移动终端的元件。The mobile terminal 100 may include a wireless communication unit 110, an A/V (Audio/Video) input unit 120, a user input unit 130, a sensing unit 140, an output unit 150, a memory 160, a controller 170, and the like. Figure 1 illustrates a mobile terminal having various components, but it should be understood that not all illustrated components are required to be implemented. More or fewer components can be implemented instead. The elements of the mobile terminal will be described in detail below.
无线通信单元110通常包括一个或多个组件,其允许移动终端100与无线通信装置或网络之间的无线电通信。 Wireless communication unit 110 typically includes one or more components that permit radio communication between mobile terminal 100 and a wireless communication device or network.
A/V输入单元120设置为接收音频或视频信号。A/V输入单元120可以包括相机121,相机121对在视频捕获模式或图像捕获模式中由图像捕获装置获得的静态图片或视频的图像数据进行处理。处理后的图像帧可以显示在显示单元151上。经相机121处理后的图像帧可以存储在存储器160(或其它存储介质)中或者经由无线通信单元110进行发送,可以根据移动终端的构造提供两个或更多相机121。The A/V input unit 120 is arranged to receive an audio or video signal. The A/V input unit 120 may include a camera 121 that processes image data of still pictures or video obtained by an image capturing device in a video capturing mode or an image capturing mode. The processed image frame can be displayed on the display unit 151. The image frames processed by the camera 121 may be stored in the memory 160 (or other storage medium) or transmitted via the wireless communication unit 110, and two or more cameras 121 may be provided according to the configuration of the mobile terminal.
用户输入单元130可以根据用户输入的命令生成键输入数据以控制移动终端的操作。用户输入单元130允许用户输入多种类型的信息,并且可以包括键盘、锅仔片、触摸板(例如,检测由于被接触而导致的电阻、压力值、电容等等的变化的触敏组件)、滚轮、摇杆等等。特别地,当触摸板以层的形式叠加在显示单元151上时可以形成触摸屏。The user input unit 130 may generate key input data according to a command input by the user to control the operation of the mobile terminal. The user input unit 130 allows the user to input various types of information, and may include a keyboard, a pot, a touch panel (eg, a touch sensitive component that detects changes in resistance, pressure values, capacitance, etc. due to contact), Roller, rocker, etc. In particular, a touch screen may be formed when the touch panel is superimposed on the display unit 151 in the form of a layer.
感测单元140检测移动终端100的当前状态（例如，移动终端100的打开或关闭状态）、移动终端100的位置、用户对于移动终端100的接触（即，触摸输入）的有无、移动终端100的取向、移动终端100的运动方向和倾斜角度等等，并且生成用于控制移动终端100的操作的命令或信号。例如，感测单元140包括加速度计141和陀螺仪142，加速度计141设置为检测移动终端100的实时加速度以得出移动终端100的运动方向，陀螺仪142设置为检测移动终端100相对于其所在平面的倾斜角度。The sensing unit 140 detects the current state of the mobile terminal 100 (for example, the open or closed state of the mobile terminal 100), the location of the mobile terminal 100, the presence or absence of user contact with the mobile terminal 100 (i.e., touch input), the orientation of the mobile terminal 100, the movement direction and tilt angle of the mobile terminal 100, and the like, and generates commands or signals for controlling the operation of the mobile terminal 100. For example, the sensing unit 140 includes an accelerometer 141 and a gyroscope 142: the accelerometer 141 is configured to detect the real-time acceleration of the mobile terminal 100 to derive its movement direction, and the gyroscope 142 is configured to detect the tilt angle of the mobile terminal 100 relative to the plane on which it lies.
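For illustration, the tilt angle mentioned here can also be derived from a static 3-axis accelerometer reading; this sketch assumes the device is held still so gravity is the only measured acceleration:

```python
import math

def tilt_angle_deg(ax, ay, az):
    """Tilt of the device relative to the horizontal plane, from a
    3-axis accelerometer reading in m/s^2 while the device is static.
    A sketch of one quantity sensing unit 140 could compute."""
    g_xy = math.hypot(ax, ay)          # gravity component in the screen plane
    return math.degrees(math.atan2(g_xy, az))
```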
输出单元150被构造为以视觉、音频和/或触觉方式提供输出信号(例如,音频信号、视频信号、振动信号等等)。输出单元150可以包括显示单元151、音频输出模块152等等。 Output unit 150 is configured to provide an output signal (eg, an audio signal, a video signal, a vibration signal, etc.) in a visual, audio, and/or tactile manner. The output unit 150 may include a display unit 151, an audio output module 152, and the like.
显示单元151可以显示在移动终端100中处理的信息。例如,当移动终端100处于电话通话模式时,显示单元151可以显示与通话或其它通信(例如,文本消息收发、多媒体文件下载等等)相关的用户界面(UI)或图形用户界面(GUI)。当移动终端100处于视频通话模式或者图像捕获模式时,显示单元151可以显示捕获的图像和/或接收的图像、示出视频或图像以及相关功能的UI或GUI等等。The display unit 151 can display information processed in the mobile terminal 100. For example, when the mobile terminal 100 is in a phone call mode, the display unit 151 can display a user interface (UI) or a graphical user interface (GUI) related to a call or other communication (eg, text messaging, multimedia file download, etc.). When the mobile terminal 100 is in a video call mode or an image capturing mode, the display unit 151 may display a captured image and/or a received image, a UI or GUI showing a video or image and related functions, and the like.
同时,当显示单元151和触摸板以层的形式彼此叠加以形成触摸屏时,显示单元151可以用作输入装置和输出装置。显示单元151可以包括液晶显示器(LCD)、薄膜晶体管LCD(TFT-LCD)、有机发光二极管(OLED)显示器、柔性显示器、三维(3D)显示器等等中的至少一种。这些显示器中的一些可以被构造为透明状以允许用户从外部观看,这可以称为透明显示器,典型的透明显示器可以例如为TOLED(透明有机发光二极管)显示器等等。根据特定想要的实施方式,移动终端100可以包括两个或更多显示单元(或其它显示装置),例如,移动终端可以包括外部显示单元(未示出)和内部显示单元(未示出)。触摸屏可用于检测触摸输入压力值以及触摸输入位置和触摸输入面积。Meanwhile, when the display unit 151 and the touch panel are superposed on each other in the form of a layer to form a touch screen, the display unit 151 can function as an input device and an output device. The display unit 151 may include at least one of a liquid crystal display (LCD), a thin film transistor LCD (TFT-LCD), an organic light emitting diode (OLED) display, a flexible display, a three-dimensional (3D) display, and the like. Some of these displays may be configured to be transparent to allow a user to view from the outside, which may be referred to as a transparent display, and a typical transparent display may be, for example, a TOLED (Transparent Organic Light Emitting Diode) display or the like. According to a particular desired embodiment, the mobile terminal 100 may include two or more display units (or other display devices), for example, the mobile terminal may include an external display unit (not shown) and an internal display unit (not shown) . The touch screen can be used to detect touch input pressure values as well as touch input positions and touch input areas.
音频输出模块152可以在移动终端处于呼叫信号接收模式、通话模式、记录模式、语音识别模式、广播接收模式等等模式下时，将无线通信单元110接收的或者在存储器160中存储的音频数据转换为音频信号并且输出为声音。而且，音频输出模块152可以提供与移动终端100执行的特定功能相关的音频输出（例如，呼叫信号接收声音、消息接收声音等等）。音频输出模块152可以包括拾音器、蜂鸣器等等。When the mobile terminal is in a call signal receiving mode, a call mode, a recording mode, a voice recognition mode, a broadcast receiving mode, or the like, the audio output module 152 may convert audio data received by the wireless communication unit 110 or stored in the memory 160 into an audio signal and output it as sound. Moreover, the audio output module 152 may provide audio output related to a specific function performed by the mobile terminal 100 (for example, a call signal reception sound, a message reception sound, and the like). The audio output module 152 may include a speaker, a buzzer, and the like.
存储器160可以存储由控制器170执行的处理和控制操作的软件程序等等,或者可以暂时地存储己经输出或将要输出的数据(例如,电话簿、消息、静态图像、视频等等)。而且,存储器160可以存储关于当触摸施加到触摸屏时输出的多种方式的振动和音频信号的数据。The memory 160 may store a software program or the like for processing and control operations performed by the controller 170, or may temporarily store data (for example, a phone book, a message, a still image, a video, etc.) that has been output or is to be output. Moreover, the memory 160 may store data regarding vibration and audio signals of various manners that are output when a touch is applied to the touch screen.
存储器160可以包括至少一种类型的存储介质,所述存储介质包括闪存、硬盘、多媒体卡、卡型存储器(例如,SD或DX存储器等等)、随机访问存储器(RAM)、静态随机访问存储器(SRAM)、只读存储器(ROM)、电可擦除可编程只读存储器(EEPROM)、可编程只读存储器(PROM)、磁性存储器、磁盘、光盘等等。而且,移动终端100可以与通过网络连接执行存储器160的存储功能的网络存储装置协作。The memory 160 may include at least one type of storage medium including a flash memory, a hard disk, a multimedia card, a card type memory (eg, SD or DX memory, etc.), a random access memory (RAM), a static random access memory ( SRAM), read only memory (ROM), electrically erasable programmable read only memory (EEPROM), programmable read only memory (PROM), magnetic memory, magnetic disk, optical disk, and the like. Moreover, the mobile terminal 100 can cooperate with a network storage device that performs a storage function of the memory 160 through a network connection.
控制器170通常控制移动终端的总体操作。例如,控制器170执行与语音通话、数据通信、视频通话等等相关的控制和处理。控制器170可以执行模式识别处理,以将在触摸屏上执行的手写输入或者图片绘制输入识别为字符或图像。 Controller 170 typically controls the overall operation of the mobile terminal. For example, controller 170 performs the control and processing associated with voice calls, data communications, video calls, and the like. The controller 170 may perform a pattern recognition process to recognize a handwriting input or a picture drawing input performed on the touch screen as a character or an image.
这里描述的实施方式可以以使用例如计算机软件、硬件或其任何组合的计算机可读介质来实施。对于硬件实施，这里描述的实施方式可以通过使用特定用途集成电路(ASIC)、数字信号处理器(DSP)、数字信号处理装置(DSPD)、可编程逻辑装置(PLD)、现场可编程门阵列(FPGA)、处理器、控制器、微控制器、微处理器、被设计为执行这里描述的功能的电子单元中的至少一种来实施，在一些情况下，这样的实施方式可以在控制器170中实施。对于软件实施，诸如过程或功能的实施方式可以与允许执行至少一种功能或操作的单独的软件模块来实施。软件代码可以由以任何适当的编程语言编写的软件应用程序（或程序）来实施，软件代码可以存储在存储器160中并且由控制器170执行。The embodiments described herein may be implemented in a computer-readable medium using, for example, computer software, hardware, or any combination thereof. For a hardware implementation, the embodiments described herein may be implemented by using at least one of an application-specific integrated circuit (ASIC), a digital signal processor (DSP), a digital signal processing device (DSPD), a programmable logic device (PLD), a field-programmable gate array (FPGA), a processor, a controller, a microcontroller, a microprocessor, or an electronic unit designed to perform the functions described herein; in some cases, such embodiments may be implemented in the controller 170. For a software implementation, embodiments such as procedures or functions may be implemented with separate software modules that perform at least one function or operation. Software code may be implemented by a software application (or program) written in any suitable programming language; the software code may be stored in the memory 160 and executed by the controller 170.
此外,本发明实施例的移动终端还可以包括:读取模块510、提取模块520、生成模块530,其中,In addition, the mobile terminal of the embodiment of the present invention may further include: a reading module 510, an extracting module 520, and a generating module 530, where
读取模块510，设置为：读取与所拍摄的三维对象的全景照片相对应的全景图像数据；提取模块520，设置为：从所述全景图像数据中提取生成三维形象所需的特征数据；生成模块530，设置为：根据所提取的所述特征数据，启动三维形象引擎以生成与所拍摄的所述三维对象相对应的三维形象。The reading module 510 is configured to: read panoramic image data corresponding to the captured panoramic photo of the three-dimensional object; the extraction module 520 is configured to: extract, from the panoramic image data, the feature data required to generate a three-dimensional image; and the generation module 530 is configured to: according to the extracted feature data, start a three-dimensional image engine to generate a three-dimensional image corresponding to the captured three-dimensional object.
此外,参照图2,图2为图1中相机的电气结构框图。In addition, referring to FIG. 2, FIG. 2 is a block diagram of the electrical structure of the camera of FIG. 1.
摄影镜头1211由用于形成被摄体像的多个光学镜头构成,为单焦点镜头或变焦镜头。摄影镜头1211在镜头驱动器1221的控制下能够在光轴方向上移动,镜头驱动器1221根据来自镜头驱动控制电路1222的控制信号,控制摄影镜头1211的焦点位置,在变焦镜头的情况下,也可控制焦点距离。镜头驱动控制电路1222按照来自微型计算机1217的控制命令进行镜头驱动器1221的驱动控制。The photographic lens 1211 is composed of a plurality of optical lenses for forming a subject image, and is a single focus lens or a zoom lens. The photographic lens 1211 is movable in the optical axis direction under the control of the lens driver 1221, and the lens driver 1221 controls the focus position of the photographic lens 1211 in accordance with a control signal from the lens driving control circuit 1222, and can also be controlled in the case of the zoom lens. Focus distance. The lens drive control circuit 1222 performs drive control of the lens driver 1221 in accordance with a control command from the microcomputer 1217.
在摄影镜头1211的光轴上、由摄影镜头1211形成的被摄体像的位置附近配置有摄像元件1212。摄像元件1212设置为对被摄体像摄像并取得摄像图像数据。在摄像元件1212上二维且呈矩阵状配置有构成每个像素的光电二极管。光电二极管产生与受光量对应的光电转换电流,该光电转换电流由与光电二极管连接的电容器进行电荷蓄积。每个像素的前表面配置有拜耳排列的RGB滤色器。An imaging element 1212 is disposed on the optical axis of the photographic lens 1211 near the position of the subject image formed by the photographic lens 1211. The imaging element 1212 is provided to image the subject image and acquire captured image data. Photodiodes constituting each pixel are arranged two-dimensionally and in a matrix on the imaging element 1212. The photodiode generates a photoelectric conversion current corresponding to the amount of received light, and the photoelectric conversion current is charged by a capacitor connected to the photodiode. The front surface of each pixel is provided with a Bayer array of RGB color filters.
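Assuming the common RGGB variant of the Bayer arrangement (the text only says "Bayer-arranged RGB color filters"), the filter colour over each pixel is determined by row/column parity:

```python
def bayer_color_at(y, x):
    """Colour of the filter over pixel (y, x) in an RGGB Bayer layout:
    R-G pairs on even rows, G-B pairs on odd rows. The RGGB ordering
    is an assumption; other Bayer variants permute the corners."""
    if y % 2 == 0:
        return "R" if x % 2 == 0 else "G"
    return "G" if x % 2 == 0 else "B"
```

Demosaicing later interpolates the two missing colour channels at each pixel from its neighbours, which is why the green sites outnumber red and blue two to one.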
摄像元件1212与摄像电路1213连接,该摄像电路1213在摄像元件1212中进行电荷蓄积控制和图像信号读出控制,对该读出的图像信号(模拟图像信号)降低重置噪声后进行波形整形,进而进行增益提高等以成为适当的信号电平。摄像电路1213与A/D转换器1214连接,该A/D转换器1214对模拟图像信号进行模数转换,向总线1227输出数字图像信号(以下称之为图像数据)。The imaging element 1212 is connected to the imaging circuit 1213. The imaging circuit 1213 performs charge accumulation control and image signal readout control in the imaging element 1212, and performs waveform shaping after reducing the reset noise of the read image signal (analog image signal). Further, gain improvement or the like is performed to obtain an appropriate signal level. The imaging circuit 1213 is connected to an A/D converter 1214 that performs analog-to-digital conversion on the analog image signal and outputs a digital image signal (hereinafter referred to as image data) to the bus 1227.
总线1227是用于传送在相机的内部读出或生成的多种数据的传送路径。在总线1227连接着上述A/D转换器1214,此外还连接着图像处理器1215、JPEG处理器1216、微型计算机1217、SDRAM(Synchronous Dynamic random access memory,同步动态随机存取内存)1218、存储器接口(以下称之为存储器I/F)1219、LCD(Liquid Crystal Display,液晶显示器)驱动器1220。The bus 1227 is a transmission path for transmitting a variety of data read or generated inside the camera. The A/D converter 1214 is connected to the bus 1227, and an image processor 1215, a JPEG processor 1216, a microcomputer 1217, a SDRAM (Synchronous Dynamic Random Access Memory) 1218, and a memory interface are also connected. (hereinafter referred to as memory I/F) 1219, LCD (Liquid Crystal Display) driver 1220.
图像处理器1215对基于摄像元件1212的输出的图像数据进行OB相减处理、白平衡调整、颜色矩阵运算、伽马转换、色差信号处理、噪声去除处理、同时化处理、边缘处理等多种图像处理。JPEG处理器1216在将图像数据记录于记录介质1225时,按照JPEG压缩方式压缩从SDRAM1218读出的图像数据。此外,JPEG处理器1216为了进行图像再现显示而进行JPEG图像数据的解压缩。进行解压缩时,读出记录在记录介质1225中的文件,在JPEG处理器1216中实施了解压缩处理后,将解压缩的图像数据暂时存储于SDRAM1218中并在LCD1226上进行显示。另外,在本实施方式中,作为图像压缩解压缩方式采用的是JPEG方式,然而压缩解压缩方式不限于此,当然可以采用MPEG、TIFF、H.264等其他的压缩解压缩方式。The image processor 1215 performs various kinds of image processing, such as OB subtraction, white balance adjustment, color matrix calculation, gamma conversion, color difference signal processing, noise removal, synchronization processing, and edge processing, on the image data based on the output of the imaging element 1212. When recording image data on the recording medium 1225, the JPEG processor 1216 compresses the image data read from the SDRAM 1218 in accordance with the JPEG compression method. Further, the JPEG processor 1216 decompresses JPEG image data for image reproduction and display. During decompression, the file recorded on the recording medium 1225 is read, decompression processing is performed in the JPEG processor 1216, and the decompressed image data is temporarily stored in the SDRAM 1218 and displayed on the LCD 1226. In the present embodiment, the JPEG method is adopted as the image compression/decompression method; however, the compression/decompression method is not limited thereto, and other compression/decompression methods such as MPEG, TIFF, and H.264 may of course be used.
微型计算机1217发挥作为该相机整体的控制部的功能,统一控制相机的多种处理序列。微型计算机1217连接着操作单元1223和闪存1224。The microcomputer 1217 functions as a control unit of the entire camera, and collectively controls various processing sequences of the camera. The microcomputer 1217 is connected to the operation unit 1223 and the flash memory 1224.
操作单元1223包括但不限于实体按键或者虚拟按键,该实体或虚拟按键可以为电源按钮、拍照键、编辑按键、动态图像按钮、再现按钮、菜单按钮、十字键、OK按钮、删除按钮、放大按钮等多种输入按钮和多种输入键等操作控件,检测这些操作控件的操作状态。The operation unit 1223 includes, but is not limited to, physical or virtual keys. These physical or virtual keys may be operation controls such as a power button, a camera key, an edit key, a moving-image button, a reproduction button, a menu button, a cross key, an OK button, a delete button, a zoom-in button, and various other input buttons and keys; the operation unit detects the operation states of these controls.
将检测结果向微型计算机1217输出。此外,在作为显示器的LCD1226的前表面设有触摸面板,检测用户的触摸位置,将该触摸位置向微型计算机1217输出。微型计算机1217根据来自操作单元1223的操作位置的检测结果,执行与用户的操作对应的多种处理序列。The detection result is output to the microcomputer 1217. Further, a touch panel is provided on the front surface of the LCD 1226 as a display, and the touch position of the user is detected, and the touch position is output to the microcomputer 1217. The microcomputer 1217 executes a plurality of processing sequences corresponding to the user's operation in accordance with the detection result from the operation position of the operation unit 1223.
闪存1224存储用于执行微型计算机1217的多种处理序列的程序。微型计算机1217根据该程序进行相机整体的控制。此外,闪存1224存储相机的多种调整值,微型计算机1217读出调整值,按照该调整值进行相机的控制。The flash memory 1224 stores programs for executing various processing sequences of the microcomputer 1217. The microcomputer 1217 performs overall control of the camera in accordance with the program. Further, the flash memory 1224 stores various adjustment values of the camera, and the microcomputer 1217 reads out the adjustment value, and performs control of the camera in accordance with the adjustment value.
SDRAM1218是用于对图像数据等进行暂时存储的可电改写的易失性存储器。该SDRAM1218暂时存储从A/D转换器1214输出的图像数据和在图像处理器1215、JPEG处理器1216等中进行了处理后的图像数据。The SDRAM 1218 is an electrically rewritable volatile memory for temporarily storing image data or the like. The SDRAM 1218 temporarily stores image data output from the A/D converter 1214 and image data processed in the image processor 1215, the JPEG processor 1216, and the like.
存储器接口1219与记录介质1225连接,进行将图像数据和附加在图像数据中的文件头等数据写入记录介质1225和从记录介质1225中读出的控制。记录介质1225例如为能够在相机主体上自由拆装的存储器卡等记录介质,然而不限于此,也可以是内置在相机主体中的硬盘等。The memory interface 1219 is connected to the recording medium 1225, and performs control for writing image data and a file header attached to the image data to the recording medium 1225 and reading out from the recording medium 1225. The recording medium 1225 is, for example, a recording medium such as a memory card that can be detachably attached to the camera body. However, the recording medium 1225 is not limited thereto, and may be a hard disk or the like built in the camera body.
LCD驱动器1210与LCD1226连接,将由图像处理器1215处理后的图像数据存储于SDRAM1218,需要显示时,读取SDRAM1218存储的图像数据并在LCD1226上显示,或JPEG处理器1216压缩过的图像数据存储于SDRAM1218,在需要显示时,JPEG处理器1216读取SDRAM1218的压缩过的图像数据,再进行解压缩,将解压缩后的图像数据通过LCD1226进行显示。LCD1226配置在相机主体的背面进行图像显示。该LCD1226采用LCD,然而不限于此,也可以采用有机EL等多种显示面板。The LCD driver 1210 is connected to the LCD 1226. Image data processed by the image processor 1215 is stored in the SDRAM 1218; when display is required, the image data stored in the SDRAM 1218 is read and displayed on the LCD 1226. Alternatively, image data compressed by the JPEG processor 1216 is stored in the SDRAM 1218; when display is required, the JPEG processor 1216 reads the compressed image data from the SDRAM 1218, decompresses it, and displays the decompressed image data on the LCD 1226. The LCD 1226 is arranged on the back of the camera body to display images. The LCD 1226 here is an LCD, but is not limited thereto, and various display panels such as organic EL panels may also be used.
基于上述移动终端硬件结构、相机的电气结构,提出本发明移动终端及其三维形象的生成方法实施例。Based on the hardware structure of the mobile terminal and the electrical structure of the camera, an embodiment of a method for generating a mobile terminal and a three-dimensional image thereof according to the present invention is proposed.
参照图3,图3为本发明移动终端第一实施例的功能模块示意图。本实施例中,所述移动终端包括:Referring to FIG. 3, FIG. 3 is a schematic diagram of the functional modules of a first embodiment of a mobile terminal according to the present invention. In this embodiment, the mobile terminal includes:
读取模块510,设置为:读取与所拍摄的三维对象的全景照片相对应的全景图像数据;The reading module 510 is configured to: read panoramic image data corresponding to the panoramic photo of the captured three-dimensional object;
本实施例中,为完成三维形象的制作,需要获得被拍摄对象的全景照片,也即对应拍摄对象的360°全方位的图像。本实施例中的拍摄对象是三维的,所述三维对象的形式不限,比如可拍摄实体人物的全景照片,或者也可以拍摄实体动物或者实体物品的全景照片以用于在移动终端上生成相应的三维形象。为实现后续进行三维形象的图像拼接处理,因此获取到的全景照片中不同角度拍摄的图片之间具有足够多的重叠信息。In this embodiment, in order to complete the creation of the three-dimensional image, a panoramic photo of the subject, i.e. a full 360° set of images of the subject, needs to be obtained. The subject in this embodiment is three-dimensional, and the form of the three-dimensional object is not limited: for example, a panoramic photo of a real person may be taken, or a panoramic photo of a real animal or physical item may be taken for generating a corresponding three-dimensional image on the mobile terminal. To enable the subsequent image stitching processing for the three-dimensional image, the pictures taken from different angles in the acquired panoramic photo must share sufficient overlapping information.
此外,三维对象的全景照片在拍摄完成后保存于移动终端或其他设备上,当需要进行三维形象生成处理时通过读取模块510进行读取,其中,读取的方式方法不限,根据实际需要进行设置。In addition, the panoramic photo of the three-dimensional object is saved on the mobile terminal or another device after shooting is completed, and is read by the reading module 510 when three-dimensional image generation is required; the reading method is not limited and may be set according to actual needs.
提取模块520,设置为:从所述全景图像数据中提取生成三维形象所需的特征数据;The extracting module 520 is configured to: extract feature data required to generate a three-dimensional image from the panoramic image data;
本实施例中,通过提取模块520从读取的全景图像数据中提取生成三维形象所需的特征数据,比如面部纹理、身高、穿着、皮肤颜色、四肢比例等特征数据,进而根据所提取的特征数据即可用于生成相应的三维形象。In this embodiment, the extraction module 520 extracts, from the read panoramic image data, the feature data required to generate a three-dimensional image, such as facial texture, height, clothing, skin color, and limb proportions; the extracted feature data can then be used to generate the corresponding three-dimensional image.
此外,本实施例中对于图像中特征数据的提取方式方法不限,可以根据实际需要进行设置。比如采用边缘检测法将三维对象从环境背景中抽取出来,或者采用人脸检测算法检测人脸位置并提取三维人物对象的脸部纹路特征等。In addition, in this embodiment, the method of extracting feature data from an image is not limited and may be set according to actual needs, for example, using an edge detection method to extract the three-dimensional object from the background environment, or using a face detection algorithm to detect the face position and extract the facial texture features of a three-dimensional character object.
生成模块530,设置为:根据所提取的所述特征数据,启动三维形象引擎以生成与所拍摄的所述三维对象相对应的三维形象。The generating module 530 is configured to: according to the extracted feature data, activate a three-dimensional image engine to generate a three-dimensional image corresponding to the captured three-dimensional object.
本实施例中,为便于快速生成与拍摄的三维对象相对应的可在移动终端上显示的三维形象,生成模块530通过预置的三维形象引擎完成对提取的所有特征数据进行合成,进而生成相应的三维形象。In this embodiment, in order to quickly generate a three-dimensional image that corresponds to the captured three-dimensional object and can be displayed on the mobile terminal, the generating module 530 synthesizes all the extracted feature data through a preset three-dimensional image engine, thereby generating the corresponding three-dimensional image.
三维形象引擎为本实施例中为便于生成本发明实施例中的三维形象而开发的一种三维图形引擎。当前一般都是在微机上使用OpenGL或DirectX等大型开发工具编写三维图形应用,但由于三维图形涉及到许多算法和专业知识,要快速地开发三维应用程序仍然具有一定的困难。因此,3D应用程序的开发需要一个封装了硬件操作和图形算法,同时也简单易用且功能丰富的三维图形开发环境,而这个三维图形开发环境可以称作三维图形引擎。比如OGRE(Object-Oriented Graphics Rendering Engine,面向对象图形渲染引擎)、OSG(Open Scene Graph)等。The three-dimensional image engine is a three-dimensional graphics engine developed in this embodiment to facilitate generating the three-dimensional image of the embodiments of the present invention. Currently, 3D graphics applications are generally written on microcomputers using large development tools such as OpenGL or DirectX; however, since 3D graphics involve many algorithms and much specialized knowledge, quickly developing a 3D application remains difficult. Therefore, the development of 3D applications requires a 3D graphics development environment that encapsulates hardware operations and graphics algorithms while remaining easy to use and feature-rich; such a 3D graphics development environment can be called a 3D graphics engine, for example OGRE (Object-Oriented Graphics Rendering Engine) or OSG (Open Scene Graph).
本实施例中,移动终端基于三维对象的全景照片进行生成三维形象所需的特征数据的提取,进而根据所提取的特征数据,启动三维形象引擎以生成相应的三维形象。本发明实施例能够方便快捷生成所拍摄对象的三维形象,并便于用户与相关应用进行关联,从而满足用户的个性化使用需求,提升用户使用体验。In this embodiment, the mobile terminal performs extraction of feature data required to generate a three-dimensional image based on the panoramic photo of the three-dimensional object, and then starts the three-dimensional image engine to generate a corresponding three-dimensional image according to the extracted feature data. The embodiment of the invention can conveniently and quickly generate a three-dimensional image of the captured object, and facilitate the user to associate with the related application, thereby satisfying the personalized use requirement of the user and improving the user experience.
参照图4,图4为图3中提取模块的细化功能模块示意图。基于上述实施例,本实施例中以三维对象为三维人物对象进行举例说明,本实施例中,所述提取模块520包括:Referring to FIG. 4, FIG. 4 is a schematic diagram of the refined functional modules of the extraction module in FIG. 3. Based on the above embodiment, this embodiment is illustrated with the three-dimensional object being a three-dimensional character object. In this embodiment, the extraction module 520 includes:
预处理单元5201,设置为:从所述全景图像数据中提取所述三维人物对象的整体图像数据,并对所述整体图像数据中不同朝向的所述三维人物对象进行标定;The pre-processing unit 5201 is configured to: extract overall image data of the three-dimensional human object from the panoramic image data, and perform calibration on the three-dimensional human object in different orientations in the overall image data;
可选地,预处理单元5201,设置为:采用图像边缘检测算法区分三维人物与背景环境,将所检测确定的像素边缘闭合后所对应的图像数据提取出来,得到三维人物对象的整体图像数据。Optionally, the pre-processing unit 5201 is configured to use an image edge detection algorithm to distinguish the three-dimensional character from the background environment, close the detected pixel edges, and extract the image data enclosed by them, thereby obtaining the overall image data of the three-dimensional character object.
本实施例中,由于拍摄所得到的全景图像数据中一般都会包含有三维人物对象的图像数据以及该人物对象所在环境的图像数据,因此,预处理单元5201将三维人物对象的整体图像数据从其所在环境图像中提取出来单独进行处理。此外,又由于三维人物对象的整体图像数据包含有三维人物对象不同方位朝向的图像数据,因此预处理单元5201还对三维人物对象的整体图像数据中不同朝向的三维人物对象进行一一标定以用于进行区分。In this embodiment, since the panoramic image data obtained by shooting generally contains both the image data of the three-dimensional character object and the image data of the environment in which it is located, the pre-processing unit 5201 extracts the overall image data of the three-dimensional character object from the image of its environment and processes it separately. In addition, since the overall image data of the three-dimensional character object contains image data of the object facing different directions, the pre-processing unit 5201 also calibrates, one by one, the differently oriented views of the three-dimensional character object in the overall image data in order to distinguish them.
本实施例中,对于提取三维人物对象的整体图像数据的方式不限,由于全景图像数据中三维人物对象为一整体的闭合区域,因此比如可采用图像边缘检测算法区分三维人物与背景环境,进而将所检测确定的像素边缘闭合后所对应的图像数据提取出来即可得到三维人物对象的整体图像数据。In this embodiment, the manner of extracting the overall image data of the three-dimensional character object is not limited. Since the three-dimensional character object in the panoramic image data forms a single closed region, an image edge detection algorithm may, for example, be used to distinguish the three-dimensional character from the background environment, and the image data enclosed by the detected, closed pixel edges can then be extracted to obtain the overall image data of the three-dimensional character object.
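As an illustrative, non-limiting sketch of separating the subject from its background, the Python fragment below substitutes a simple background-difference mask and bounding box for a full edge-detection pipeline (e.g. Canny edge detection followed by contour closing); the function name and threshold are assumptions for illustration only.

```python
def extract_subject_mask(image, background, threshold=30):
    """Separate subject pixels from a known background by absolute
    grayscale difference.

    image, background: 2D lists of grayscale values (0-255).
    Returns a binary mask and the subject's bounding box
    (x_min, y_min, x_max, y_max), or None if no subject pixel is found.
    """
    h, w = len(image), len(image[0])
    mask = [[abs(image[y][x] - background[y][x]) > threshold
             for x in range(w)] for y in range(h)]
    ys = [y for y in range(h) for x in range(w) if mask[y][x]]
    xs = [x for y in range(h) for x in range(w) if mask[y][x]]
    if not ys:
        return mask, None
    return mask, (min(xs), min(ys), max(xs), max(ys))
```

A production implementation would operate on real image arrays and close the detected edges into a contour before extracting the enclosed region.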
此外,本实施例中,对于三维人物对象的整体图像数据中不同朝向的三维人物对象进行标定的方式也不限,根据实际需要进行设置。例如,可采用人体朝向检测算法对三维人物对象进行标定,比如以人物对象的正面为参照,每隔45°标定一个人体朝向,则对于360°方位的人物对象的朝向可标定为八个朝向。不同朝向对应的人物对象图像中的特征数据绝大部分都不相同,因此可进行不同朝向方向下的人物对象特征数据提取。In addition, in the present embodiment, the manner of calibrating the three-dimensional human objects in different directions in the overall image data of the three-dimensional human object is not limited, and is set according to actual needs. For example, a three-dimensional human object may be calibrated using a human body orientation detection algorithm, such as calibrating a human body orientation every 45° with reference to the front of the human object, and the orientation of the human object for 360° orientation may be calibrated to eight orientations. The feature data in the image of the person object corresponding to the different orientations is mostly different, so the character object feature data extraction in different orientation directions can be performed.
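The 45° calibration described above amounts to quantizing the detected body-facing angle into eight orientation bins. A minimal sketch, assuming the facing angle in degrees has already been estimated by a body-orientation detector (the function name is illustrative):

```python
def orientation_bin(angle_deg, bins=8):
    """Quantize a body-facing angle (degrees, 0 = frontal) into one of
    `bins` orientation labels, one label every 360/bins degrees."""
    step = 360.0 / bins
    # Round to the nearest bin centre so e.g. 350 degrees maps back to
    # the frontal bin 0.
    return int(round((angle_deg % 360) / step)) % bins
```

With eight bins, angles near 0°, 45°, 90°, ... map to labels 0 through 7, matching the eight calibrated orientations in the text.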
第一数据提取单元5202,设置为:从所述三维人物对象在不同朝向所对应的图像数据中确定人脸图像所在区域,并从所述人脸图像的数据中提取相关特征数据,所述相关特征数据至少包括人脸图像的纹理特征数据;The first data extraction unit 5202 is configured to determine the region where the face image is located in the image data corresponding to the different orientations of the three-dimensional character object, and to extract relevant feature data from the data of the face image, the relevant feature data including at least the texture feature data of the face image;
可选地,第一数据提取单元5202,还设置为:从所述三维人物对象在不同朝向所对应的图像数据中确定人脸图像所在区域之后,将人脸图像通过缩放、旋转、拉伸中的一种或多种处理得到预设的标准大小的人脸图像。Optionally, the first data extraction unit 5202 is further configured to, after determining the region where the face image is located in the image data corresponding to the different orientations of the three-dimensional character object, process the face image by one or more of scaling, rotation, and stretching to obtain a face image of a preset standard size.
由于脸部图像数据特征是一个重要的区别特征,因此,本实施例中第一数据提取单元5202对不同朝向的所有图像数据进行人脸检测,并确定存在人脸的图像数据中人脸图像所在位置区域,然后再在此基础上定位人脸关键点的位置,比如眼睛中心、嘴角、鼻梁等,由于拍摄过程中所选定的拍摄距离、角度的不同,因而对应的图像中人物的头部大小、角度朝向也不相同,因此可通过将人脸通过缩放和/或旋转和/或拉伸等处理以得到一个预设的标准大小的正常脸部头像后再进行脸部区域特征数据的提取。Since facial image data is an important distinguishing feature, the first data extraction unit 5202 in this embodiment performs face detection on the image data of all orientations, determines the region where the face image is located in the image data containing a face, and on this basis locates the facial key points, such as the eye centers, mouth corners, and nose bridge. Because the shooting distance and angle vary during capture, the size and orientation of the head differ between the corresponding images; the face can therefore be processed by scaling and/or rotation and/or stretching to obtain a normal face image of a preset standard size before the facial region feature data is extracted.
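The scale/rotate normalization step can be sketched as deriving a similarity transform from the two detected eye centers; this is a simplified illustration, and the target eye distance is an assumed parameter rather than a value specified in this embodiment.

```python
import math

def face_alignment_params(left_eye, right_eye, target_eye_dist=60.0):
    """Compute the rotation angle (degrees) and scale factor that would
    level the eyes and bring them to a standard separation, the first
    step of normalizing a detected face to a preset standard size.

    left_eye, right_eye: (x, y) pixel coordinates of the eye centres.
    Rotating the image by -angle and multiplying by the scale levels
    and standardizes the face.
    """
    dx = right_eye[0] - left_eye[0]
    dy = right_eye[1] - left_eye[1]
    angle = math.degrees(math.atan2(dy, dx))
    dist = math.hypot(dx, dy)
    return angle, target_eye_dist / dist
```

The translation component (centering the face) is omitted here for brevity; a full aligner also crops to the standard frame after warping.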
本实施例中,对于脸部区域特征数据的提取方式不限,例如可采用LBP算法(Local Binary Patterns,局部二值模式),或者HOG算法(Histogram of Oriented Gradient,方向梯度直方图)、Gabor滤波器算法等进行图像的特征提取。比如提取人脸图像的纹理特征数据、亮度特征数据等。In this embodiment, the manner of extracting the facial region feature data is not limited; for example, the LBP algorithm (Local Binary Patterns), the HOG algorithm (Histogram of Oriented Gradients), or a Gabor filter algorithm may be used for image feature extraction, e.g. extracting the texture feature data and brightness feature data of the face image.
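As an illustration of the LBP operator mentioned above, a minimal 3x3 variant in pure Python; a real extractor scans every pixel of the normalized face and histograms the resulting codes per region.

```python
def lbp_code(patch):
    """Basic 3x3 Local Binary Pattern: compare the 8 neighbours of the
    centre pixel with the centre, clockwise from the top-left corner,
    packing the comparisons into an 8-bit texture code."""
    c = patch[1][1]
    neighbours = [patch[0][0], patch[0][1], patch[0][2], patch[1][2],
                  patch[2][2], patch[2][1], patch[2][0], patch[1][0]]
    code = 0
    for i, n in enumerate(neighbours):
        if n >= c:          # neighbour at least as bright as the centre
            code |= 1 << i  # set the corresponding bit
    return code
```

The histogram of these codes over a face region is a compact texture descriptor that is robust to monotonic lighting changes, which is why LBP is a common choice for facial texture features.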
比例测算单元5203,设置为:区分所述三维人物对象在不同朝向所对应的图像数据中的头部、上半身、下半身及四肢所在区域,以相应测算所述三维人物对象的头部、上半身、下半身及四肢的长度比例;The proportion measurement unit 5203 is configured to distinguish the regions of the head, upper body, lower body, and limbs in the image data corresponding to the different orientations of the three-dimensional character object, so as to correspondingly measure the length proportions of the head, upper body, lower body, and limbs of the three-dimensional character object;
为使得生成的三维人物形象更为逼真,因此,除提取三维人物的脸部特征数据外,本实施例中进一步通过比例测算单元5203确定不同朝向所对应的图像数据中的头部、上半身、下半身及四肢所在区域以相应测算获得不同朝向三维人物对象的头部、上半身、下半身及四肢的长度比例。比如从人物图像中根据人体不同部位的相对位置以及相关度函数来区分出人物的头部、上半身、下半身和四肢的区域,然后可根据每个部位区域的像素坐标,比如最大坐标距离,测算并确定头部、上半身、下半身及四肢的长度比例。To make the generated three-dimensional character image more lifelike, in addition to extracting the facial feature data of the three-dimensional character, this embodiment further uses the proportion measurement unit 5203 to determine the regions of the head, upper body, lower body, and limbs in the image data of the different orientations, and correspondingly measures the length proportions of the head, upper body, lower body, and limbs of the three-dimensional character object for each orientation. For example, the regions of the head, upper body, lower body, and limbs can be distinguished in the character image according to the relative positions of the different body parts and a correlation function, and the length proportions of the head, upper body, lower body, and limbs can then be measured and determined from the pixel coordinates of each region, for example the maximum coordinate distance.
如果拍摄对象为人物对象,则由于人体除头部外的其他部位的区分特征不是特别明显,因此也可以不用计算三维人物对象的头部、上半身、下半身及四肢的长度比例,可以根据实际需要进行设置。If the subject is a person, since the distinguishing features of the human body other than the head are not particularly obvious, the length proportions of the head, upper body, lower body, and limbs of the three-dimensional character object need not necessarily be calculated; this may be set according to actual needs.
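The proportion measurement from pixel coordinates might be sketched as follows, assuming each body part has already been segmented into a vertical pixel extent; the region names and coordinates are illustrative.

```python
def part_length_ratios(regions):
    """Measure body-part length proportions from pixel coordinates.

    regions: {'head': (y_min, y_max), 'torso': (...), 'legs': (...)}
    giving the vertical pixel extent of each segmented body part.
    Returns each part's length as a fraction of the full figure height,
    so the proportions stay comparable across photos taken at
    different distances.
    """
    lengths = {name: y1 - y0 for name, (y0, y1) in regions.items()}
    total = (max(r[1] for r in regions.values())
             - min(r[0] for r in regions.values()))
    return {name: length / total for name, length in lengths.items()}
```

This corresponds to using the maximum coordinate distance of each region, as suggested in the text, with normalization added so ratios rather than raw pixel lengths are compared.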
第二数据提取单元5204,设置为:从所述三维人物对象在不同朝向所对应的图像数据中提取其他特征数据,所述其他特征数据至少包括发型特征数据、穿着特征数据、颜色特征数据中的一种。The second data extraction unit 5204 is configured to extract other feature data from the image data corresponding to the different orientations of the three-dimensional character object, the other feature data including at least one of hairstyle feature data, clothing feature data, and color feature data.
此外,为使得生成的三维人物形象更为逼真,本实施例中还通过第二数据提取单元5204继续获取拍摄的三维人物对象的发型特征数据、穿着特征数据、颜色特征数据等。比如采用边缘检测及特征提取相结合的方式,获取三维人物发型的360°外观特征数据;根据上半身与下半身区域,对三维人物的穿着进行特征检测,比如抽取衣着的外形款式以及主要印花等特征数据;然后进一步可对三维人物的头发颜色、皮肤颜色、瞳孔颜色、穿着颜色等颜色特征数据进行抽取。其中,特征数据的提取方式不限,比如可采用LBP算法(Local Binary Patterns,局部二值模式),或者HOG算法(Histogram of Oriented Gradient,方向梯度直方图)、Gabor滤波器算法等进行图像的特征提取。In addition, to make the generated three-dimensional character image more lifelike, this embodiment also uses the second data extraction unit 5204 to further acquire the hairstyle feature data, clothing feature data, color feature data, and the like of the captured three-dimensional character object. For example, a combination of edge detection and feature extraction may be used to obtain 360° appearance feature data of the character's hairstyle; feature detection may be performed on the character's clothing according to the upper-body and lower-body regions, for example extracting feature data such as the cut of the clothing and its main prints; and color feature data such as hair color, skin color, pupil color, and clothing color may then be extracted. The feature data extraction method is not limited; for example, the LBP algorithm (Local Binary Patterns), the HOG algorithm (Histogram of Oriented Gradients), or a Gabor filter algorithm may be used for image feature extraction.
本实施例中,为使得最终生成的三维人物形象更为逼真,因此,从三维人物对象的整体图像数据中提取更多的特征数据,包括具有区别性的脸部特征数据以及身体四肢比例数据、发型特征数据、穿着特征数据、颜色特征数据等,从而为用户提供更具可玩性的三维人物形象。In this embodiment, to make the finally generated three-dimensional character image more lifelike, more feature data is extracted from the overall image data of the three-dimensional character object, including distinctive facial feature data as well as body and limb proportion data, hairstyle feature data, clothing feature data, color feature data, and so on, thereby providing the user with a more engaging three-dimensional character image.
参照图5,图5为图3中生成模块的细化功能模块示意图。基于上述实施例,本实施例中,所述生成模块530包括:Referring to FIG. 5, FIG. 5 is a schematic diagram of the refined functional modules of the generation module in FIG. 3. Based on the above embodiment, in this embodiment, the generation module 530 includes:
模型构建单元5301,设置为:根据所提取的所述特征数据中与构建人物模型相关的部分特征数据进行三维重建,以生成与所拍摄的所述三维人物对象相对应的人物模型;The model construction unit 5301 is configured to perform three-dimensional reconstruction according to the part of the extracted feature data that is related to constructing the character model, so as to generate a character model corresponding to the captured three-dimensional character object;
可选地,模型构建单元5301,设置为:利用获取的人物头部、上半身、下半身及四肢的长度比例,计算出在三维空间中整体人物形象的长宽高及四肢比例数据,以生成与所拍摄的所述三维人物对象相对应的人物模型。Optionally, the model construction unit 5301 is configured to use the acquired length proportions of the character's head, upper body, lower body, and limbs to calculate the length, width, height, and limb proportion data of the overall character figure in three-dimensional space, so as to generate a character model corresponding to the captured three-dimensional character object.
本实施例中,由于拍摄的全景图像为二维图像,因此,为得到相应的三维图像,通过模型构建单元5301对之前提取的所有二维特征数据进行三维重建,以升维处理方式得到相应的三维特征数据。In this embodiment, since the captured panoramic images are two-dimensional, in order to obtain the corresponding three-dimensional image, the model construction unit 5301 performs three-dimensional reconstruction on all the previously extracted two-dimensional feature data, so that the corresponding three-dimensional feature data is obtained through dimension-raising processing.
另外,同时利用获取的人物头部、上半身、下半身及四肢的长度比例,计算出在三维空间中整体人物形象的长宽高及四肢比例数据,以用于生成与所拍摄的所述三维人物对象相对应的初步的三维人物模型。In addition, the acquired length proportions of the character's head, upper body, lower body, and limbs are used to calculate the length, width, height, and limb proportion data of the overall character figure in three-dimensional space, which is used to generate a preliminary three-dimensional character model corresponding to the captured three-dimensional character object.
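Turning the measured proportions into dimensions for the preliminary model can be sketched as a simple scaling step; the overall model height is an assumed parameter, not a value from this embodiment.

```python
def model_dimensions(part_ratios, model_height_units=1.8):
    """Convert the per-part length ratios measured from the photos into
    absolute part heights for a 3-D character model of a chosen overall
    height (in arbitrary model units).  The ratio keys are whatever the
    proportion measurement step produced, e.g. head/torso/legs."""
    return {part: ratio * model_height_units
            for part, ratio in part_ratios.items()}
```

Because the inputs are ratios rather than raw pixel lengths, the same measurements produce a consistently proportioned model regardless of the distance at which the photos were taken.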
模型渲染单元5302,设置为:根据所提取的所述特征数据中与人物模型渲染相关的其他部分特征数据进行人物模型渲染,以生成与所拍摄的所述三维人物对象相对应的三维人物形象。The model rendering unit 5302 is configured to perform character model rendering according to the extracted other partial feature data related to the character model rendering in the feature data to generate a three-dimensional character image corresponding to the captured three-dimensional character object.
可选地,模型渲染单元5302,还设置为:采用全景拼接融合技术将不同朝向的图像信息进行拼接处理,以生成与所拍摄的所述三维人物对象相对应的三维人物形象。 Optionally, the model rendering unit 5302 is further configured to perform splicing processing on the image information of different orientations by using a panoramic stitching fusion technique to generate a three-dimensional character image corresponding to the photographed three-dimensional character object.
本实施例中,模型渲染单元5302对通过模型构建单元5301处理后所得到的初步的三维人物模型进行人物模型渲染,使用从全景图片中对应人物图像数据中所提取的特征数据,比如脸部特征数据、发型特征数据、穿着特征数据、颜色特征数据等一一进行渲染;此外还可采用全景拼接融合技术将不同朝向的图像信息进行拼接处理,从而最终生成与所拍摄的所述三维人物对象相对应的三维人物形象。In this embodiment, the model rendering unit 5302 performs character model rendering on the preliminary three-dimensional character model obtained from the model construction unit 5301, rendering one by one with the feature data extracted from the character image data in the panoramic picture, such as facial feature data, hairstyle feature data, clothing feature data, and color feature data; in addition, panoramic stitching and fusion technology may be used to stitch together the image information of the different orientations, thereby finally generating a three-dimensional character image corresponding to the captured three-dimensional character object.
本实施例中,采用所提取的特征数据进行渲染能够使得生成的三维人物形象与所拍摄的三维人物更为贴近逼真,从而给用户带来更加有趣的使用体验。In this embodiment, the rendering using the extracted feature data enables the generated three-dimensional character image to be closer to life than the captured three-dimensional character, thereby bringing a more interesting use experience to the user.
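Panoramic stitching depends on locating the overlap between neighboring views. The sketch below reduces that idea to one dimension, matching the trailing samples of one strip against the leading samples of the next by mean squared difference; a real stitcher matches 2-D feature points and blends the seam, and the function name here is illustrative.

```python
def best_overlap_offset(left, right, min_overlap=3):
    """Find how many trailing samples of `left` best match the leading
    samples of `right` (lowest mean squared difference), a 1-D stand-in
    for the overlap search panoramic stitching performs on image columns."""
    best, best_err = min_overlap, float('inf')
    for k in range(min_overlap, min(len(left), len(right)) + 1):
        err = sum((a - b) ** 2 for a, b in zip(left[-k:], right[:k])) / k
        if err < best_err:
            best, best_err = k, err
    return best
```

This also illustrates why adjacent shots need sufficient overlap, as required earlier in the text: with too little shared content, the overlap search has nothing to lock onto.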
参照图6,图6为本发明移动终端第二实施例的功能模块示意图。本实施例中,所述移动终端还包括:Referring to FIG. 6, FIG. 6 is a schematic diagram of functional modules of a second embodiment of a mobile terminal according to the present invention. In this embodiment, the mobile terminal further includes:
应用关联模块540,设置为:将生成的三维人物形象与所述移动终端内的应用场景进行关联;The application association module 540 is configured to: associate the generated three-dimensional character image with an application scenario in the mobile terminal;
三维人物形象显示模块550,设置为:当所关联的应用场景处于激活状态时,在所述移动终端的显示屏上显示与所关联的应用场景相对应的三维人物形象。The three-dimensional character image display module 550 is configured to display a three-dimensional character image corresponding to the associated application scene on the display screen of the mobile terminal when the associated application scene is in an active state.
本实施例中,为进一步满足用户在使用应用时的个性化交互需求,通过应用关联模块540将生成的三维人物形象与移动终端内的应用场景进行关联,比如将王二的三维人物形象与王二的联系电话相关联;将李三的三维人物形象与语音助手相关联。以及通过三维人物形象显示模块550在移动终端的显示屏上显示与所关联的应用场景相对应的三维人物形象,例如当接收到王二的来电时,在移动终端显示屏上显示王二的三维人物形象;或者打开语音助手时,显示李三的三维人物形象。In this embodiment, to further meet the user's personalized interaction needs when using applications, the application association module 540 associates the generated three-dimensional character image with an application scene in the mobile terminal, for example associating Wang Er's three-dimensional character image with Wang Er's contact number, or Li San's three-dimensional character image with the voice assistant. The three-dimensional character image display module 550 then displays, on the display screen of the mobile terminal, the three-dimensional character image corresponding to the associated application scene; for example, when a call from Wang Er is received, Wang Er's three-dimensional character image is displayed on the screen, or when the voice assistant is opened, Li San's three-dimensional character image is displayed.
本实施例中,通过将三维人物形象与移动终端内置应用的应用场景进行关联,当该应用的相应应用场景被激活时,在移动终端上显示与该应用场景相关的三维人物形象,从而满足用户使用应用的个性化交互需求,比如还可以对三维人物形象赋予语音功能,或者面部表情功能,从而为用户提供更加人性化以及可玩性更强的使用体验。In this embodiment, by associating the three-dimensional character image with an application scene of a built-in application of the mobile terminal, when the corresponding application scene is activated, the three-dimensional character image related to that scene is displayed on the mobile terminal, thereby meeting the user's personalized interaction needs when using applications; for example, the three-dimensional character image may also be given a voice function or facial expression functions, providing the user with a more personable and engaging experience.
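The scene-to-figure association could be modeled as a small registry keyed by application scene; the class name, scene strings, and avatar identifiers below are purely illustrative and not part of this embodiment.

```python
class AvatarRegistry:
    """Map application scenes (e.g. a contact's incoming call, the voice
    assistant) to generated 3-D figures, so the right figure can be shown
    when a scene becomes active."""

    def __init__(self):
        self._by_scene = {}

    def associate(self, scene, avatar):
        """Bind a generated 3-D figure to an application scene."""
        self._by_scene[scene] = avatar

    def on_scene_activated(self, scene):
        """Return the figure to display for this scene, or None if the
        scene has no associated figure."""
        return self._by_scene.get(scene)
```

In a real terminal the activation hook would be driven by system events (incoming call, assistant launch) and the avatar value would reference a renderable 3-D asset rather than a string.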
参照图7,图7为本发明移动终端第三实施例的功能模块示意图。本实施例中,所述移动终端还包括:Referring to FIG. 7, FIG. 7 is a schematic diagram of functional modules of a third embodiment of a mobile terminal according to the present invention. In this embodiment, the mobile terminal further includes:
拍摄模块560,设置为:启动所述移动终端的摄像头应用中的全景拍摄模式以拍摄并存储所述三维对象的全景照片,其中,在所述移动终端进行全景拍摄过程中,实时检测当前所述移动终端的拍摄角度是否处于设定的拍摄角度范围之内,若否,则发出相应修正提示。The shooting module 560 is configured to start the panoramic shooting mode in the camera application of the mobile terminal to capture and store a panoramic photo of the three-dimensional object, wherein, during panoramic shooting, the mobile terminal detects in real time whether its current shooting angle is within a set shooting angle range and, if not, issues a corresponding correction prompt.
可选地,拍摄模块560,设置为:通过采用重力传感器、姿态传感器、陀螺仪、罗盘中的一种或多种,实时检测当前所述移动终端的拍摄角度是否处于设定的拍摄角度范围之内。Optionally, the shooting module 560 is configured to detect in real time, by means of one or more of a gravity sensor, an attitude sensor, a gyroscope, and a compass, whether the current shooting angle of the mobile terminal is within the set shooting angle range.
本实施例中,通过拍摄模块560启动摄像头应用中的全景拍摄模式,对拍照对象(需要生成三维形象的人物或动物或物品等)进行对焦,然后沿同一半径距离开始围绕拍照对象进行圆周运动,顺时针或逆时针均可,直到摄像头获取了拍照对象360°方位的图像数据,过程如图8所示。In this embodiment, the shooting module 560 starts the panoramic shooting mode in the camera application, focuses on the subject (the person, animal, or item for which a three-dimensional image is to be generated), and then moves around the subject in a circle at the same radial distance, clockwise or counterclockwise, until the camera has acquired image data of the subject over a full 360° of orientations, as shown in FIG. 8.
本实施例中,考虑到拍摄全景照片的相关要求,比如拍摄时获取的不同角度及方位的图像之间有足够的重叠信息以供后续进行三维模型的拼接处理,因此,在拍摄过程中可使用重力传感器、姿态传感器、陀螺仪、罗盘等对拍摄过程进行监测,以判断当前的移动终端是否处于合适的水平位置。比如判断移动终端移动时的拍摄角度是否处于设定的拍摄角度范围之内,并根据检测情况给予用户一定的语音提示。比如:摄像头是否移动在一个可容忍的拍摄角度位置,摄像头转动的角度是否过大或过小等,从而使得获取的不同角度及方位的图像之间有足够的重叠信息。待拍摄完成后,拍摄模块560把拍摄的全景照片的图像数据保存至移动终端内以便于后续进行三维形象的生成处理。In this embodiment, considering the requirements for taking a panoramic photo, for example that the images acquired at different angles and orientations must share sufficient overlapping information for the subsequent stitching of the three-dimensional model, a gravity sensor, attitude sensor, gyroscope, compass, or the like can be used during shooting to monitor the capture process and determine whether the mobile terminal is currently in a suitable horizontal position, for example whether the shooting angle while the mobile terminal moves is within the set shooting angle range. The user is then given voice prompts according to the detection results, such as whether the camera is moving within a tolerable shooting angle and whether the camera's rotation angle is too large or too small, so that the images acquired at different angles and orientations share sufficient overlapping information. After shooting is completed, the shooting module 560 saves the image data of the captured panoramic photo in the mobile terminal for the subsequent three-dimensional image generation processing.
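The real-time angle check during capture can be sketched as comparing sensor-derived pitch and roll against a tolerance; the tolerance value and the hint strings are assumptions for illustration.

```python
def check_capture_angle(pitch_deg, roll_deg, tolerance_deg=10.0):
    """Return (ok, hint): whether the phone's current attitude, as
    derived from a gravity sensor / gyroscope reading, is within the
    tolerated range for panorama capture, plus a correction hint to
    speak to the user when it is not."""
    if abs(pitch_deg) > tolerance_deg:
        return False, 'tilt the phone %s' % ('down' if pitch_deg > 0 else 'up')
    if abs(roll_deg) > tolerance_deg:
        return False, 'level the phone horizontally'
    return True, ''
```

In practice this check would run on each sensor update during the circular sweep, feeding the hint string to the voice-prompt mechanism described above.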
Referring to FIG. 9, FIG. 9 is a schematic flowchart of a first embodiment of a method for generating a three-dimensional image according to the present invention. In this embodiment, the method is applied to a mobile terminal, and the method for generating a three-dimensional image includes:
Step S10: reading panoramic image data corresponding to a captured panoramic photo of a three-dimensional object;
In this embodiment, in order to create the three-dimensional image, a panoramic photo of the subject is needed, that is, a full 360° image of the subject. The subject in this embodiment is three-dimensional, and the form of the three-dimensional object is not limited: a panoramic photo of a real person may be taken, or a panoramic photo of a real animal or a physical item may be taken, in order to generate the corresponding three-dimensional image on the mobile terminal. To enable the subsequent image stitching for the three-dimensional image, the pictures taken at different angles within the acquired panoramic photo overlap sufficiently.
In addition, after shooting, the panoramic photo of the three-dimensional object is stored on the mobile terminal or another device and is read when three-dimensional image generation is required; the manner of reading is not limited and may be set according to actual needs.
Step S20: extracting, from the panoramic image data, the feature data required to generate a three-dimensional image;
In this embodiment, the extraction module 520 extracts, from the read panoramic image data, the feature data required to generate a three-dimensional image, such as facial texture, height, clothing, skin color, and limb proportions; the extracted feature data can then be used to generate the corresponding three-dimensional image.
In addition, the manner of extracting the feature data from the image is not limited in this embodiment and may be set according to actual needs. For example, an edge detection method may be used to separate the three-dimensional object from the background environment, or a face detection algorithm may be used to locate the face and extract the facial texture features of a three-dimensional person.
Step S30: according to the extracted feature data, starting a three-dimensional image engine to generate a three-dimensional image corresponding to the captured three-dimensional object.
In this embodiment, to quickly generate a three-dimensional image, displayable on the mobile terminal, that corresponds to the captured three-dimensional object, the generation module 530 synthesizes all of the extracted feature data with a preset three-dimensional image engine, thereby generating the corresponding three-dimensional image.
In this embodiment, the mobile terminal extracts the feature data required to generate a three-dimensional image from the panoramic photo of the three-dimensional object, and then starts the three-dimensional image engine to generate the corresponding three-dimensional image from the extracted feature data. The embodiment of the present invention can generate a three-dimensional image of the captured object quickly and conveniently, and makes it easy for the user to associate the image with related applications, thereby meeting the user's personalized needs and improving the user experience.
Referring to FIG. 10, FIG. 10 is a schematic flowchart of the refinement of step S20. Based on the foregoing embodiment, this embodiment takes the captured three-dimensional object to be a three-dimensional person by way of example, and step S20 includes:
Step S201: extracting overall image data of the three-dimensional person from the panoramic image data, and calibrating the three-dimensional person at different orientations within the overall image data;
Optionally, in step S201, extracting the overall image data of the three-dimensional person from the panoramic image data includes: using an image edge detection algorithm to separate the three-dimensional person from the background environment, and extracting the image data enclosed by the detected, closed pixel edges to obtain the overall image data of the three-dimensional person.
In this embodiment, because the panoramic image data obtained by shooting generally contains both image data of the three-dimensional person and image data of the environment in which the person is located, the overall image data of the three-dimensional person is extracted from the environment image and processed separately. Moreover, because the overall image data of the three-dimensional person contains image data of the person at different orientations, the three-dimensional person at each orientation within the overall image data is calibrated one by one so that the orientations can be distinguished.
In this embodiment, the manner of extracting the overall image data of the three-dimensional person is not limited. Since the three-dimensional person forms a single closed region within the panoramic image data, an image edge detection algorithm may, for example, be used to separate the three-dimensional person from the background environment; the image data enclosed by the detected, closed pixel edges is then extracted to obtain the overall image data of the three-dimensional person.
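As one highly simplified illustration of extracting the region enclosed by detected, closed edges, the sketch below marks, for each image row, the pixels between the first and last edge pixel as foreground. A real implementation would use a proper edge detector (for example Canny) followed by contour filling; the 0/1 mask format here is an assumption.

```python
def fill_between_edges(edge_mask):
    """edge_mask: rows of 0/1 values, 1 where an edge pixel was detected.
    Returns a foreground mask filled row-wise between the outermost edges."""
    fg = [[0] * len(row) for row in edge_mask]
    for y, row in enumerate(edge_mask):
        xs = [x for x, v in enumerate(row) if v]
        if len(xs) >= 2:  # edges on both sides -> fill the span between them
            for x in range(xs[0], xs[-1] + 1):
                fg[y][x] = 1
    return fg
```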
Moreover, in this embodiment, the manner of calibrating the three-dimensional person at different orientations within the overall image data is also not limited and may be set according to actual needs. For example, a body-orientation detection algorithm may be used: taking the front of the person as the reference and calibrating one body orientation every 45°, the 360° of possible orientations of the person can be calibrated as eight orientations. Because most of the feature data differs between person images at different orientations, the person's feature data can be extracted separately for each orientation.
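The 45° calibration just described is a simple quantization of the body yaw angle; assuming a yaw angle of 0° means the person faces the camera, it can be sketched as:

```python
def orientation_bin(yaw_deg: float, bin_width_deg: float = 45.0) -> int:
    """Quantize a body yaw angle (0 = facing the camera) into one of
    360/bin_width orientation labels; 45 degrees yields eight labels."""
    return int((yaw_deg % 360.0) // bin_width_deg)
```

Feature extraction is then run per label, since images from different bins share little feature data.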
Step S202: extracting face image data from the image data corresponding to the three-dimensional person at different orientations, and extracting related feature data from the face image data, the related feature data including at least texture feature data of the face image;
Optionally, in step S202, after determining the region of the face image within the image data corresponding to the three-dimensional person at different orientations, the method further includes: obtaining a face image of a preset standard size by applying one or more of scaling, rotating, and stretching to the face image.
Because facial image features are an important distinguishing characteristic, in this embodiment face detection is performed on the image data of all orientations, and the region containing the face image is determined within the image data in which a face is present; the positions of facial key points, such as the centers of the eyes, the corners of the mouth, and the bridge of the nose, are then located on that basis. Because the shooting distance and angle vary during shooting, the size and angular orientation of the person's head differ between the corresponding images; the face may therefore be scaled and/or rotated and/or stretched to obtain a normal face image of a preset standard size before the facial-region feature data is extracted.
In this embodiment, the manner of extracting the facial-region feature data is not limited: for example, the LBP (Local Binary Patterns) algorithm, the HOG (Histogram of Oriented Gradients) algorithm, or a Gabor filter algorithm may be used to extract image features, such as the texture feature data and brightness feature data of the face image.
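For reference, the classic 8-neighbour LBP operator named above fits in a few lines. This is a plain, unoptimised sketch over a grayscale image given as a list of lists; production code would operate on image arrays.

```python
def lbp_code(img, y, x):
    """Classic 8-neighbour LBP: threshold each neighbour against the
    centre pixel and pack the results into an 8-bit code."""
    c = img[y][x]
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    code = 0
    for bit, (dy, dx) in enumerate(offsets):
        if img[y + dy][x + dx] >= c:
            code |= 1 << bit
    return code

def lbp_histogram(img):
    """Histogram of LBP codes over all interior pixels; this histogram
    is the texture feature vector for the region."""
    hist = [0] * 256
    for y in range(1, len(img) - 1):
        for x in range(1, len(img[0]) - 1):
            hist[lbp_code(img, y, x)] += 1
    return hist
```

On a perfectly flat region every neighbour equals the centre, so every interior pixel yields code 255 — a quick sanity check for the implementation.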
Step S203: distinguishing the regions of the head, upper body, lower body, and limbs within the image data corresponding to the three-dimensional person at different orientations, so as to calculate the length proportions of the head, upper body, lower body, and limbs of the three-dimensional person accordingly;
To make the generated three-dimensional character image more lifelike, in addition to extracting the facial feature data of the three-dimensional person, this embodiment further determines the regions of the head, upper body, lower body, and limbs within the image data corresponding to the different orientations, so as to calculate the length proportions of the head, upper body, lower body, and limbs of the three-dimensional person at each orientation. For example, the regions of the head, upper body, lower body, and limbs can be distinguished within the person image according to the relative positions of the different body parts and a correlation function; the length proportions of the head, upper body, lower body, and limbs can then be calculated and determined from the pixel coordinates of each region, for example from the maximum coordinate distance.
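The proportion calculation described above — part lengths derived from the pixel coordinates of each body region — can be sketched as follows. The region boxes (top/bottom pixel rows) are hypothetical outputs of an upstream body-part detector.

```python
def region_length(box):
    """box = (top_row, bottom_row) in pixel coordinates of one body region."""
    top, bottom = box
    return bottom - top

def length_proportions(head_box, upper_box, lower_box):
    """Return head : upper-body : lower-body lengths, normalised so that
    the head has length 1.0."""
    head = region_length(head_box)
    return (1.0,
            region_length(upper_box) / head,
            region_length(lower_box) / head)
```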
If the subject is a person, the distinguishing features of body parts other than the head are not particularly prominent, so the length proportions of the head, upper body, lower body, and limbs of the three-dimensional person need not necessarily be calculated; this may be set according to actual needs.
Step S204: extracting other feature data from the image data corresponding to the three-dimensional person at different orientations, the other feature data including at least one of hairstyle feature data, clothing feature data, and color feature data.
In addition, to make the generated three-dimensional character image more lifelike, this embodiment also acquires the hairstyle feature data, clothing feature data, color feature data, and so on of the captured three-dimensional person. For example, a combination of edge detection and feature extraction may be used to obtain 360° appearance feature data of the person's hairstyle; feature detection may be performed on the person's clothing according to the upper-body and lower-body regions, for example by extracting feature data such as the style of the clothing and its main prints; and color feature data such as the hair color, skin color, pupil color, and clothing color of the three-dimensional person may then be further extracted. The manner of extracting the feature data is not limited: for example, the LBP (Local Binary Patterns) algorithm, the HOG (Histogram of Oriented Gradients) algorithm, or a Gabor filter algorithm may be used to extract the image features.
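Colour feature extraction for a region (hair, skin, clothing) can be as simple as averaging the region's pixels. The sketch below assumes the region's pixels have already been isolated as (r, g, b) tuples; a real pipeline might instead take the dominant colour via clustering.

```python
def mean_color(pixels):
    """pixels: non-empty list of (r, g, b) tuples from one body region.
    Returns the average colour, rounded to integer channels."""
    n = len(pixels)
    r = sum(p[0] for p in pixels) / n
    g = sum(p[1] for p in pixels) / n
    b = sum(p[2] for p in pixels) / n
    return (round(r), round(g), round(b))
```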
In this embodiment, the order in which steps S202, S203, and S204 are performed is not limited. To make the finally generated three-dimensional character image more lifelike, more feature data is extracted from the overall image data of the three-dimensional person, including distinctive facial feature data as well as body and limb proportion data, hairstyle feature data, clothing feature data, and color feature data, thereby providing the user with a more engaging three-dimensional character image.
Referring to FIG. 11, FIG. 11 is a schematic flowchart of the refinement of step S30. Based on the foregoing embodiment, in this embodiment, step S30 includes:
Step S301: performing three-dimensional reconstruction according to the part of the extracted feature data related to building the character model, so as to generate a character model corresponding to the captured three-dimensional person;
Optionally, step S301 includes: using the acquired length proportions of the person's head, upper body, lower body, and limbs to calculate the length, width, height, and limb proportion data of the overall character in three-dimensional space, so as to generate a character model corresponding to the captured three-dimensional person.
In this embodiment, because the captured panoramic image is two-dimensional, the corresponding three-dimensional image is obtained by performing three-dimensional reconstruction on all of the previously extracted two-dimensional feature data; this dimension-raising processing yields the corresponding three-dimensional feature data.
At the same time, the acquired length proportions of the person's head, upper body, lower body, and limbs are used to calculate the length, width, height, and limb proportion data of the overall character in three-dimensional space, which is used to generate a preliminary three-dimensional character model corresponding to the captured three-dimensional person.
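One way to turn the measured proportions into model dimensions is to anchor them to an assumed real-world head height; the sketch below does exactly that. The 0.23 m default head height is an illustrative assumption, not a value from the application.

```python
def model_dimensions(proportions, head_height_m=0.23):
    """proportions: (head, upper_body, lower_body) relative lengths,
    e.g. as produced from the per-region pixel measurements.
    Returns metric lengths for each part plus the total height."""
    head_r, upper_r, lower_r = proportions
    scale = head_height_m / head_r  # metres per proportion unit
    dims = {
        "head": head_r * scale,
        "upper_body": upper_r * scale,
        "lower_body": lower_r * scale,
    }
    dims["total"] = sum(dims.values())
    return dims
```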
Step S302: rendering the character model according to the other part of the extracted feature data related to character-model rendering, so as to generate a three-dimensional character image corresponding to the captured three-dimensional person.
Optionally, step S302 may further include: stitching together the image information of the different orientations by means of panoramic stitching and fusion, so as to generate a three-dimensional character image corresponding to the captured three-dimensional person.
In this embodiment, character-model rendering is performed on the obtained preliminary three-dimensional character model, rendering one by one with the feature data extracted from the person image data in the panoramic picture, such as the facial feature data, hairstyle feature data, clothing feature data, and color feature data. In addition, panoramic stitching and fusion may be used to stitch together the image information of the different orientations, finally generating a three-dimensional character image corresponding to the captured three-dimensional person.
In this embodiment, rendering with the extracted feature data makes the generated three-dimensional character image closer to the captured three-dimensional person and more lifelike, giving the user a more enjoyable experience.
Referring to FIG. 12, FIG. 12 is a schematic flowchart of a second embodiment of the method for generating a three-dimensional image according to the present invention. Based on the foregoing embodiment, in this embodiment, the following steps are included after step S30:
Step S40: associating the generated three-dimensional character image with an application scenario within the mobile terminal;
Step S50: when the associated application scenario is activated, displaying, on the display screen of the mobile terminal, the three-dimensional character image corresponding to the associated application scenario.
In this embodiment, to further satisfy the user's need for personalized interaction when using applications, the generated three-dimensional character image is associated with an application scenario within the mobile terminal: for example, the three-dimensional character image of Wang Er is associated with Wang Er's phone number, and the three-dimensional character image of Li San is associated with the voice assistant. The three-dimensional character image corresponding to the associated application scenario is then displayed on the display screen of the mobile terminal: for example, when a call from Wang Er is received, the three-dimensional character image of Wang Er is displayed on the display screen of the mobile terminal, or when the voice assistant is opened, the three-dimensional character image of Li San is displayed.
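The association in steps S40/S50 amounts to a lookup table from application scenarios to generated avatars; a minimal sketch follows, in which the scene and avatar identifiers are hypothetical.

```python
class AvatarRegistry:
    """Bind generated 3-D character images to application scenarios
    and resolve the bound avatar when a scenario is activated
    (steps S40/S50 above)."""

    def __init__(self):
        self._avatar_by_scene = {}

    def associate(self, scene_id: str, avatar_id: str) -> None:
        """Step S40: remember which avatar belongs to which scenario."""
        self._avatar_by_scene[scene_id] = avatar_id

    def on_scene_activated(self, scene_id: str):
        """Step S50: return the avatar to display, or None if none is bound."""
        return self._avatar_by_scene.get(scene_id)
```

On activation (an incoming call, the voice assistant opening), the terminal would look up and render the returned avatar instead of a static contact photo.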
In this embodiment, by associating the three-dimensional character image with an application scenario of an application built into the mobile terminal, the three-dimensional character image related to that application scenario is displayed on the mobile terminal when the corresponding scenario is activated, thereby satisfying the user's need for personalized interaction with the application. For example, the three-dimensional character image may additionally be given a voice function or facial expressions, providing the user with a more personable and more engaging experience.
Referring to FIG. 13, FIG. 13 is a schematic flowchart of a third embodiment of the method for generating a three-dimensional image according to the present invention. Based on the foregoing embodiment, in this embodiment, the following step is included before step S10:
Step S01: starting the panorama shooting mode in the camera application of the mobile terminal to capture and store a panoramic photo of the three-dimensional object, wherein, during the panoramic shooting performed by the mobile terminal, whether the current shooting angle of the mobile terminal is within a set shooting-angle range is detected in real time, and if not, a corresponding correction prompt is issued.
Optionally, in step S01, whether the current shooting angle of the mobile terminal is within the set shooting-angle range is detected in real time by using one or more of a gravity sensor, an attitude sensor, a gyroscope, and a compass.
In this embodiment, the panorama shooting mode in the camera application is started to focus on the subject (the person, animal, object, or the like for which a three-dimensional image is to be generated); the mobile terminal then moves in a circle around the subject at a constant radius, either clockwise or counterclockwise, until the camera has acquired image data covering all 360° of the subject, as shown in FIG. 8.
In this embodiment, taking a panoramic photo has certain requirements; for example, the images acquired at different angles and orientations must overlap sufficiently for the subsequent stitching of the three-dimensional model. During shooting, a gravity sensor, an attitude sensor, a gyroscope, a compass, or the like may therefore be used to monitor the shooting process and determine whether the mobile terminal is in a suitable horizontal position, for example whether the shooting angle of the moving terminal stays within the set shooting-angle range. Voice prompts are given to the user according to the detection result, for instance whether the camera is at a tolerable shooting-angle position or whether its rotation angle is too large or too small, so that the images acquired at different angles and orientations overlap sufficiently. After shooting is complete, the image data of the captured panoramic photo is saved to the mobile terminal for the subsequent generation of the three-dimensional image.
An embodiment of the present invention further provides a computer-readable storage medium storing computer-executable instructions which, when executed by a processor, implement the above method for generating a three-dimensional image.
The above are only optional embodiments of the present invention and do not thereby limit the patent scope of the present application. Any equivalent structure or equivalent process transformation made using the contents of the specification and drawings of the present application, whether applied directly or indirectly in other related technical fields, is likewise included within the scope of patent protection of the present application.
Industrial Applicability
The embodiments of the present invention can generate a three-dimensional image of the captured object quickly and conveniently, and make it easy for the user to associate the image with related applications, thereby meeting the user's personalized needs and improving the user experience.

Claims (20)

  1. A mobile terminal, the mobile terminal comprising:
    a reading module configured to: read panoramic image data corresponding to a captured panoramic photo of a three-dimensional object;
    an extraction module configured to: extract, from the panoramic image data, feature data required to generate a three-dimensional image; and
    a generation module configured to: according to the extracted feature data, start a three-dimensional image engine to generate a three-dimensional image corresponding to the captured three-dimensional object.
  2. The mobile terminal according to claim 1, wherein the three-dimensional object is a three-dimensional person, and the extraction module comprises:
    a preprocessing unit configured to: extract overall image data of the three-dimensional person from the panoramic image data, and calibrate the three-dimensional person at different orientations within the overall image data;
    a first data extraction unit configured to: determine a region of a face image within the image data corresponding to the three-dimensional person at different orientations, and extract related feature data from the data of the face image, the related feature data comprising texture feature data of the face image;
    a proportion calculation unit configured to: distinguish regions of the head, upper body, lower body, and limbs within the image data corresponding to the three-dimensional person at different orientations, so as to calculate length proportions of the head, upper body, lower body, and limbs of the three-dimensional person accordingly; and
    a second data extraction unit configured to: extract other feature data from the image data corresponding to the three-dimensional person at different orientations, the other feature data comprising at least one of hairstyle feature data, clothing feature data, and color feature data.
  3. The mobile terminal according to claim 2, wherein
    the preprocessing unit is configured to: use an image edge detection algorithm to separate the three-dimensional person from the background environment, and extract the image data enclosed by the detected, closed pixel edges to obtain the overall image data of the three-dimensional person.
  4. The mobile terminal according to claim 2, wherein
    the first data extraction unit is further configured to: after determining the region of the face image within the image data corresponding to the three-dimensional person at different orientations, obtain a face image of a preset standard size by applying one or more of scaling, rotating, and stretching to the face image.
  5. The mobile terminal according to claim 2, wherein the generation module comprises:
    a model construction unit configured to: perform three-dimensional reconstruction according to the part of the extracted feature data related to building a character model, so as to generate a character model corresponding to the captured three-dimensional person; and
    a model rendering unit configured to: render the character model according to the other part of the extracted feature data related to character-model rendering, so as to generate a three-dimensional character image corresponding to the captured three-dimensional person.
  6. The mobile terminal according to claim 5, wherein
    the model construction unit is configured to: use the acquired length proportions of the person's head, upper body, lower body, and limbs to calculate length, width, height, and limb proportion data of the overall character in three-dimensional space, so as to generate the character model corresponding to the captured three-dimensional person.
  7. The mobile terminal according to claim 5, wherein
    the model rendering unit is further configured to: stitch together the image information of the different orientations by means of panoramic stitching and fusion, so as to generate the three-dimensional character image corresponding to the captured three-dimensional person.
  8. The mobile terminal according to claim 5, further comprising:
    an application association module configured to: associate the generated three-dimensional character image with an application scenario within the mobile terminal; and
    a three-dimensional character image display module configured to: when the associated application scenario is activated, display, on a display screen of the mobile terminal, the three-dimensional character image corresponding to the associated application scenario.
  9. The mobile terminal according to any one of claims 1 to 8, further comprising:
    a shooting module configured to: start a panorama shooting mode in a camera application of the mobile terminal to capture and store the panoramic photo of the three-dimensional object, wherein, during the panoramic shooting performed by the mobile terminal, whether a current shooting angle of the mobile terminal is within a set shooting-angle range is detected in real time, and if not, a corresponding correction prompt is issued.
  10. The mobile terminal according to claim 9, wherein
    the shooting module is configured to: detect in real time, by using one or more of a gravity sensor, an attitude sensor, a gyroscope, and a compass, whether the current shooting angle of the mobile terminal is within the set shooting-angle range.
  11. A method for generating a three-dimensional image, applied to a mobile terminal, the method comprising:
    Reading panoramic image data corresponding to a panoramic photo of a captured three-dimensional object;
    Extracting, from the panoramic image data, the feature data required to generate a three-dimensional image;
    Starting, according to the extracted feature data, a three-dimensional image engine to generate a three-dimensional image corresponding to the captured three-dimensional object.
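The three steps of claim 11 — read the panoramic data, extract feature data, start a three-dimensional image engine — can be sketched as a plain pipeline. The stage functions below are hypothetical placeholders for the claimed components, not an actual engine:

```python
# Hedged sketch of the claim-11 pipeline. Each stage is a hypothetical
# placeholder: a real terminal would decode a stored panoramic photo,
# run feature extraction, and invoke its 3D image engine.

def read_panorama(path):
    # Placeholder for reading panoramic image data from a stored photo.
    return {"path": path, "pixels": [[0] * 8 for _ in range(4)]}

def extract_features(panorama):
    # Placeholder for extracting the feature data the engine needs
    # (face texture, body proportions, hairstyle/clothing/color, ...).
    return {"source": panorama["path"], "face_texture": [], "proportions": {}}

def generate_3d_image(features):
    # Placeholder for starting the 3D image engine with the features.
    return {"model": "3d-figure", "from": features["source"]}

def build_3d_figure(photo_path):
    panorama = read_panorama(photo_path)
    features = extract_features(panorama)
    return generate_3d_image(features)
```

For example, `build_3d_figure("pano.jpg")` threads a photo path through all three stages and returns the placeholder model description.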
  12. The method for generating a three-dimensional image of claim 11, wherein the three-dimensional object is a three-dimensional character object, and extracting, from the panoramic image data, the feature data required to generate a three-dimensional image comprises:
    Extracting overall image data of the three-dimensional character object from the panoramic image data, and calibrating the three-dimensional character object at its different orientations in the overall image data;
    Determining the region of the face image in the image data corresponding to the different orientations of the three-dimensional character object, and extracting relevant feature data from the face image data, the relevant feature data including texture feature data of the face image;
    Distinguishing the regions of the head, upper body, lower body, and limbs in the image data corresponding to the different orientations of the three-dimensional character object, so as to calculate the length proportions of the head, upper body, lower body, and limbs of the three-dimensional character object;
    Extracting other feature data from the image data corresponding to the different orientations of the three-dimensional character object, the other feature data including at least one of hairstyle feature data, clothing feature data, and color feature data.
  13. The method for generating a three-dimensional image of claim 12, wherein extracting the overall image data of the three-dimensional character object from the panoramic image data comprises:
    Distinguishing the three-dimensional character from the background environment using an image edge detection algorithm, closing the detected pixel edges, and extracting the image data enclosed by the closed edges, to obtain the overall image data of the three-dimensional character object.
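At its simplest, the edge-based separation in claim 13 amounts to marking pixels that differ sharply from the background and keeping the region they enclose. The toy sketch below uses a plain intensity threshold on a grayscale grid instead of a production edge detector (such as Canny followed by contour closing), so treat it only as an illustration of the idea:

```python
# Toy foreground/background separation: mark pixels whose intensity is far
# from the (assumed uniform) background level, then return the bounding box
# of the marked region as a stand-in for the closed-edge extraction step.

def foreground_bbox(image, background=0, threshold=50):
    """Return (top, left, bottom, right) of the non-background region, or None."""
    rows, cols = [], []
    for r, row in enumerate(image):
        for c, value in enumerate(row):
            if abs(value - background) > threshold:
                rows.append(r)
                cols.append(c)
    if not rows:
        return None  # nothing detected against the background
    return (min(rows), min(cols), max(rows), max(cols))

# A 5x5 "photo": dark background with a bright 2x2 figure in the middle.
image = [
    [0, 0,   0,   0, 0],
    [0, 0, 200, 200, 0],
    [0, 0, 200, 200, 0],
    [0, 0,   0,   0, 0],
    [0, 0,   0,   0, 0],
]
```

Here `foreground_bbox(image)` reports the figure's bounding box `(1, 2, 2, 3)`; the pixels inside it would be the "overall image data" handed to the later steps.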
  14. The method for generating a three-dimensional image of claim 12, wherein, after determining the region of the face image in the image data corresponding to the different orientations of the three-dimensional character object, the method further comprises:
    Processing the face image by one or more of scaling, rotation, and stretching to obtain a face image of a preset standard size.
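Claim 14's normalisation maps an arbitrarily sized face crop onto a preset standard size. A minimal sketch, assuming nearest-neighbour resampling stands in for whatever combination of scaling, rotation, and stretching the implementation actually applies:

```python
# Nearest-neighbour resize of a 2D pixel grid to a preset standard size.
# Rotation and stretching are omitted; this only illustrates the rescale.

def resize_nearest(image, out_h, out_w):
    in_h, in_w = len(image), len(image[0])
    return [
        [image[r * in_h // out_h][c * in_w // out_w] for c in range(out_w)]
        for r in range(out_h)
    ]

face = [[1, 2],
        [3, 4]]
standard = resize_nearest(face, 4, 4)  # 2x2 crop mapped to a 4x4 "standard" size
```

Each source pixel is simply repeated to fill the standard grid, which keeps the sketch exact on integer data.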
  15. The method for generating a three-dimensional image of claim 12, wherein starting, according to the extracted feature data, a three-dimensional image engine to generate a three-dimensional image corresponding to the captured three-dimensional object comprises:
    Performing three-dimensional reconstruction according to the part of the extracted feature data related to building the character model, to generate a character model corresponding to the captured three-dimensional character object;
    Performing character model rendering according to the other part of the extracted feature data related to character model rendering, to generate a three-dimensional character image corresponding to the captured three-dimensional character object.
  16. The method for generating a three-dimensional image of claim 15, wherein performing three-dimensional reconstruction according to the part of the extracted feature data related to building the character model, to generate a character model corresponding to the captured three-dimensional character object, comprises:
    Calculating the length, width, height, and limb-proportion data of the overall character image in three-dimensional space from the obtained length proportions of the character's head, upper body, lower body, and limbs, to generate a character model corresponding to the captured three-dimensional character object.
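Claim 16 scales the model from head/upper-body/lower-body/limb length proportions. A small sketch of that ratio computation: normalise measured segment lengths so the head is the unit, the familiar "N heads tall" convention (the pixel measurements below are invented for illustration):

```python
# Express measured body-segment lengths as head-relative proportions,
# the kind of ratio data a character model could be scaled from.

def length_proportions(segments):
    """Map part -> length (pixels) to part -> length in 'heads'."""
    head = segments["head"]
    return {part: length / head for part, length in segments.items()}

# Hypothetical measurements taken from the calibrated front-facing view.
measured = {"head": 30, "upper_body": 90, "lower_body": 120, "limbs": 75}
ratios = length_proportions(measured)
```

The figure above comes out 1 : 3 : 4 head-units for head, upper body, and lower body — data a reconstruction step could use to scale a template model in three-dimensional space.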
  17. The method for generating a three-dimensional image of claim 15, wherein, after performing three-dimensional reconstruction according to the part of the extracted feature data related to building the character model, to generate a character model corresponding to the captured three-dimensional character object, the method further comprises:
    Stitching the image information of the different orientations using a panoramic stitching and fusion technique, to generate a three-dimensional character image corresponding to the captured three-dimensional character object.
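Panoramic stitching as in claim 17 aligns overlapping views and fuses them. Real pipelines match image features and blend seams (OpenCV's `Stitcher` class is a common off-the-shelf choice); the one-dimensional toy below only shows the core idea of choosing the overlap at which the two strips agree best:

```python
# Toy 1-D "stitcher": try every overlap length, keep the one where the
# strip ends agree best, then join the strips at that overlap.

def stitch_strips(left, right):
    best_len, best_err = 0, float("inf")
    for n in range(1, min(len(left), len(right)) + 1):
        err = sum((a - b) ** 2 for a, b in zip(left[-n:], right[:n]))
        if err < best_err:
            best_err, best_len = err, n
    return left + right[best_len:]

a = [10, 20, 30, 40, 50]   # e.g. intensities along one view
b = [40, 50, 60, 70]       # neighbouring view sharing two samples
panorama = stitch_strips(a, b)
```

`panorama` is `[10, 20, 30, 40, 50, 60, 70]`: the shared `[40, 50]` samples are detected as the zero-error overlap and fused once rather than duplicated.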
  18. The method for generating a three-dimensional image of claim 15, wherein, after starting, according to the extracted feature data, a three-dimensional image engine to generate a three-dimensional image corresponding to the captured three-dimensional object, the method further comprises:
    Associating the generated three-dimensional character image with an application scenario in the mobile terminal;
    When the associated application scenario is active, displaying the three-dimensional character image corresponding to the associated application scenario on the display screen of the mobile terminal.
  19. The method for generating a three-dimensional image of any one of claims 11-18, wherein, before reading the panoramic image data corresponding to the panoramic photo of the captured three-dimensional object, the method further comprises:
    Activating the panoramic shooting mode in the camera application of the mobile terminal to capture and store a panoramic photo of the three-dimensional object, wherein, during panoramic shooting by the mobile terminal, it is detected in real time whether the current shooting angle of the mobile terminal is within a set shooting angle range, and if not, a corresponding correction prompt is issued.
  20. The method for generating a three-dimensional image of claim 19, wherein detecting in real time whether the current shooting angle of the mobile terminal is within the set shooting angle range comprises:
    Detecting in real time, by using one or more of a gravity sensor, an attitude sensor, a gyroscope, and a compass, whether the current shooting angle of the mobile terminal is within the set shooting angle range.
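The real-time angle check of claim 20 can be grounded in a gravity-sensor reading: compute the device pitch from the acceleration vector and compare it with the allowed range. The axis convention, the 9.81 m/s² readings, and the ±5° tolerance below are all illustrative assumptions, not values from the patent:

```python
import math

def pitch_degrees(ax, ay, az):
    # Pitch implied by a gravity-sensor vector (device-axis convention assumed).
    return math.degrees(math.atan2(-ax, math.sqrt(ay * ay + az * az)))

def angle_ok(ax, ay, az, target=0.0, tolerance=5.0):
    # True when the shooting angle is inside the set range; otherwise the
    # terminal would issue the correction prompt described in the claim.
    return abs(pitch_degrees(ax, ay, az) - target) <= tolerance

level = angle_ok(0.0, 9.81, 0.0)   # gravity along y: phone held upright
tilted = angle_ok(5.0, 8.4, 0.0)   # sideways gravity component: tilted phone
```

On a real device this check would run per sensor event during panoramic capture; a gyroscope or compass could refine the same decision for yaw drift.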
PCT/CN2016/106637 2015-11-25 2016-11-21 Mobile terminal and three-dimensional image generation method therefor WO2017088714A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201510833487.6 2015-11-25
CN201510833487.6A CN105427369A (en) 2015-11-25 2015-11-25 Mobile terminal and method for generating three-dimensional image of mobile terminal

Publications (1)

Publication Number Publication Date
WO2017088714A1 true WO2017088714A1 (en) 2017-06-01

Family

ID=55505548

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2016/106637 WO2017088714A1 (en) 2015-11-25 2016-11-21 Mobile terminal and three-dimensional image generation method therefor

Country Status (2)

Country Link
CN (1) CN105427369A (en)
WO (1) WO2017088714A1 (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113544746A * 2019-03-11 2021-10-22 Sony Group Corporation Image processing device and image processing method
CN112560556A * 2019-09-25 2021-03-26 Hangzhou Hikvision Digital Technology Co., Ltd. Action behavior image generation method, device, equipment and storage medium
CN110717964A * 2019-09-26 2020-01-21 Shenzhen Mingtong Technology Co., Ltd. Scene modeling method, terminal and readable storage medium
CN110717964B 2019-09-26 2023-05-02 Shenzhen Mingtong Technology Co., Ltd. Scene modeling method, terminal and readable storage medium

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105427369A * 2015-11-25 2016-03-23 Nubia Technology Co., Ltd. Mobile terminal and method for generating three-dimensional image of mobile terminal
CN106887033A * 2017-01-20 2017-06-23 Tencent Technology (Shenzhen) Co., Ltd. Scene rendering method and device
CN107845129A * 2017-11-07 2018-03-27 Shenzhen Gowild Intelligent Technology Co., Ltd. Three-dimensional reconstruction method and device, and augmented reality method and device
WO2019127508A1 * 2017-12-29 2019-07-04 Shenzhen A&E Intelligent Technology Institute Co., Ltd. Smart terminal and 3D imaging method and 3D imaging system therefor
CN111862296B * 2019-04-24 2023-09-29 BOE Technology Group Co., Ltd. Three-dimensional reconstruction method, device and system, model training method, and storage medium
CN110308792B * 2019-07-01 2023-12-12 Beijing Baidu Netcom Science and Technology Co., Ltd. Virtual character control method, device, equipment and readable storage medium
CN113132717A * 2019-12-31 2021-07-16 Huawei Technologies Co., Ltd. Data processing method, terminal and server

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101968892A * 2009-07-28 2011-02-09 Shanghai Bingdong Information Technology Co., Ltd. Method for automatically adjusting a three-dimensional face model according to one face picture
CN103473804A * 2013-08-29 2013-12-25 Xiaomi Inc. Image processing method, device and terminal equipment
CN104268928A * 2014-08-29 2015-01-07 Xiaomi Inc. Picture processing method and device
CN105427369A * 2015-11-25 2016-03-23 Nubia Technology Co., Ltd. Mobile terminal and method for generating three-dimensional image of mobile terminal

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101686335A * 2008-09-28 2010-03-31 Xin'aote (Beijing) Video Technology Co., Ltd. Method and device for acquiring a three-dimensional image model
JP5578149B2 * 2010-10-15 2014-08-27 Casio Computer Co., Ltd. Image composition apparatus, image retrieval method, and program
CN103679788B * 2013-12-06 2017-12-15 Huawei Device (Dongguan) Co., Ltd. Method and device for generating a 3D image in a mobile terminal
CN104133553B * 2014-07-30 2018-04-06 Xiaomi Inc. Webpage content display method and device
CN104349155B * 2014-11-25 2017-02-01 Shenzhen Super Perfect Optics Co., Ltd. Method and equipment for displaying a simulated three-dimensional image


Also Published As

Publication number Publication date
CN105427369A (en) 2016-03-23


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 16867937; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: DE)
122 Ep: pct application non-entry in european phase (Ref document number: 16867937; Country of ref document: EP; Kind code of ref document: A1)