
WO2020135269A1 - Session creation method and terminal device - Google Patents


Info

Publication number
WO2020135269A1
WO2020135269A1 (PCT application PCT/CN2019/127140)
Authority
WO
WIPO (PCT)
Prior art keywords
input
user
target
image
face image
Prior art date
Application number
PCT/CN2019/127140
Other languages
English (en)
French (fr)
Inventor
ZHANG Yubing (张玉炳)
Original Assignee
VIVO MOBILE COMMUNICATION CO., LTD. (维沃移动通信有限公司)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by VIVO MOBILE COMMUNICATION CO., LTD.
Priority to EP19905951.0A (EP3905037B1)
Priority to KR1020217021573A (KR102657949B1)
Priority to ES19905951T (ES2976717T3)
Priority to JP2021537142A (JP7194286B2)
Publication of WO2020135269A1
Priority to US17/357,130 (US12028476B2)

Classifications

    • H04N23/631 Graphical user interfaces [GUI] specially adapted for controlling image capture or setting capture parameters
    • H04N23/632 Graphical user interfaces [GUI] specially adapted for displaying or modifying preview images prior to image capturing
    • G06F18/22 Pattern recognition: matching criteria, e.g. proximity measures
    • G06F3/04817 Interaction techniques based on graphical user interfaces [GUI] using icons
    • G06F3/0482 Interaction with lists of selectable items, e.g. menus
    • G06F3/0486 Drag-and-drop
    • G06F3/0488 GUI interaction using a touch-screen or digitiser, e.g. input of commands through traced gestures
    • G06F3/04883 Touch-screen or digitiser input for inputting data by handwriting, e.g. gesture or text
    • G06F9/4484 Executing subprograms
    • G06F9/449 Object-oriented method invocation or resolution
    • G06F9/451 Execution arrangements for user interfaces
    • G06V40/161 Human faces: detection; localisation; normalisation
    • H04L12/1818 Conference organisation arrangements, e.g. handling schedules, setting up parameters needed by nodes to attend a conference
    • H04L67/146 Markers for unambiguous identification of a particular session, e.g. session cookie or URL-encoding
    • H04M1/72469 User interfaces for mobile telephones for operating the device by selecting functions from two or more displayed items, e.g. menus or icons
    • H04M1/72472 User interfaces for mobile telephones wherein the displayed items are sorted according to specific criteria, e.g. frequency of use
    • H04M1/27475 Methods of retrieving data using interactive graphical means or pictorial representations
    • H04M1/72436 User interfaces for mobile telephones with interactive means for internal management of messages, for text messaging, e.g. short messaging services [SMS] or e-mails
    • H04M2250/62 User interface aspects of conference calls

Definitions

  • The embodiments of the present disclosure relate to the field of communication technologies, and in particular to a session creation method and a terminal device.
  • The user can find multiple contacts in the contact list of a communication program and trigger the terminal device to create a group chat for these contacts; the user can then trigger the terminal device to send messages to these contacts through the group chat, that is, these contacts can receive the messages triggered by the user.
  • Embodiments of the present disclosure provide a session creation method and a terminal device to solve the problem that creating a group chat is slow when the user cannot obtain a contact's name.
  • In a first aspect, an embodiment of the present disclosure provides a session creation method: receiving a user's first input on a first image including at least one face image; in response to the first input, displaying an icon of at least one communication program; receiving the user's second input; and, in response to the second input, displaying a conversation interface. The conversation interface includes M target identifiers, each target identifier is used to indicate a user, the M users indicated by the M target identifiers include the users indicated by K face images in the at least one face image, and the M target identifiers are identifiers in the target communication program corresponding to the second input. M and K are both positive integers, and K is less than or equal to M.
  • In a second aspect, an embodiment of the present disclosure also provides a terminal device including a receiving module and a display module. The receiving module is configured to receive a user's first input on a first image including at least one face image. The display module is configured to display an icon of at least one communication program in response to the first input received by the receiving module. The receiving module is further configured to receive the user's second input. The display module is further configured to display, in response to the second input, a conversation interface including M target identifiers, each of which is used to indicate a user; the M users indicated by the M target identifiers include the users indicated by K face images in the at least one face image, the M target identifiers are identifiers in the target communication program corresponding to the second input, M and K are both positive integers, and K is less than or equal to M.
  • In another aspect, an embodiment of the present disclosure provides a terminal device including a processor, a memory, and a computer program stored on the memory and executable on the processor; when the computer program is executed by the processor, the steps of the session creation method described in the first aspect are implemented.
  • In another aspect, an embodiment of the present disclosure provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the steps of the session creation method described in the first aspect.
  • In the embodiments of the present disclosure, the terminal device receives a user's first input on a first image including at least one face image. In response to the first input, the terminal device displays an icon of at least one communication program. Next, the terminal device receives the user's second input. Finally, in response to the second input, the terminal device displays a conversation interface; the conversation interface includes M target identifiers. Each target identifier is used to indicate a user; the M users indicated by the M target identifiers include the users indicated by K face images in the at least one face image, and the M target identifiers are identifiers in the target communication program corresponding to the second input. M and K are positive integers, and K is less than or equal to M.
  • In this way, the terminal device can display the icon of at least one communication program according to the received first input of the user on the first image, enabling the user to select a communication program and to select which face images correspond to the users of interest. After the user's selection is completed, the terminal device displays a session interface for the users indicated by the K face images in the at least one face image. Therefore, with the session creation method provided by the embodiments of the present disclosure, the user can quickly find the desired contacts based on their face images, and can then quickly create a conversation or add those users to an existing group chat.
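As an illustration only (not the patented implementation), the flow of selecting K face images and resolving them to M target identifiers, with K less than or equal to M, can be sketched as a small model. All function names, face-image ids, and contact names below are hypothetical:

```python
# Minimal sketch: K selected face images are resolved to M target
# identifiers in the chosen communication program, then a conversation
# holding those identifiers is created. Purely illustrative.

def create_session(selected_faces, contacts_by_face, extra_members=()):
    """Resolve selected face images to users, then build a conversation.

    contacts_by_face maps a face-image id to one or more target
    identifiers; one face image may match several accounts, so K <= M.
    """
    members = []
    for face in selected_faces:
        for identifier in contacts_by_face.get(face, []):
            if identifier not in members:
                members.append(identifier)
    # users added without a face image (e.g. picked from a contact list)
    members.extend(m for m in extra_members if m not in members)
    return {"type": "group_chat", "target_identifiers": members}

session = create_session(
    ["face_31", "face_32"],
    {"face_31": ["Alice"], "face_32": ["Bob", "Bob (work)"]},
)
```

Note how K = 2 selected face images yield M = 3 target identifiers here, matching the K <= M relationship in the claims.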
  • FIG. 1 is a schematic structural diagram of a possible Android operating system provided by an embodiment of the present disclosure
  • FIG. 2 is a schematic flowchart of a session creation method according to an embodiment of the present disclosure
  • FIG. 3 is a schematic diagram of a display interface provided by an embodiment of the present disclosure.
  • FIG. 4 is a second schematic diagram of a display interface provided by an embodiment of the present disclosure.
  • FIG. 5 is a third schematic diagram of a display interface provided by an embodiment of the present disclosure.
  • FIG. 6 is a fourth schematic diagram of a display interface provided by an embodiment of the present disclosure.
  • FIG. 7 is a fifth schematic diagram of a display interface provided by an embodiment of the present disclosure.
  • FIG. 8 is a sixth schematic diagram of a display interface provided by an embodiment of the present disclosure.
  • FIG. 9 is a seventh schematic diagram of a display interface provided by an embodiment of the present disclosure.
  • FIG. 10 is an eighth schematic diagram of a display interface provided by an embodiment of the present disclosure.
  • FIG. 11 is a ninth schematic diagram of a display interface provided by an embodiment of the present disclosure.
  • FIG. 12 is a tenth schematic diagram of a display interface provided by an embodiment of the present disclosure.
  • FIG. 13 is an eleventh schematic diagram of a display interface provided by an embodiment of the present disclosure.
  • FIG. 14 is a twelfth schematic diagram of a display interface provided by an embodiment of the present disclosure.
  • FIG. 15 is a thirteenth schematic diagram of a display interface provided by an embodiment of the present disclosure.
  • FIG. 16 is a fourteenth schematic diagram of a display interface provided by an embodiment of the present disclosure.
  • FIG. 17 is a schematic structural diagram of a terminal device according to an embodiment of the present disclosure.
  • FIG. 18 is a schematic diagram of a hardware structure of a terminal device according to various embodiments of the present disclosure.
  • The terms "first" and "second" in the specification and claims of the present disclosure are used to distinguish different objects, not to describe a specific order of objects.
  • For example, the first added control and the second added control are used to distinguish different added controls, rather than to describe a specific order of the added controls.
  • the terminal device in the embodiment of the present disclosure may be a terminal device with an operating system.
  • The operating system may be an Android operating system, an iOS operating system, or another possible operating system, which is not specifically limited in the embodiments of the present disclosure.
  • the following uses the Android operating system as an example to introduce the software environment to which the session creation method provided by the embodiments of the present disclosure is applied.
  • FIG. 1 it is a schematic structural diagram of a possible Android operating system provided by an embodiment of the present disclosure.
  • the architecture of the Android operating system includes four layers, namely: an application program layer, an application program framework layer, a system runtime library layer, and a kernel layer (specifically, a Linux kernel layer).
  • the application layer includes various applications in the Android operating system (including system applications and third-party applications).
  • the application framework layer is the framework of the application. Developers can develop some applications based on the application framework layer while observing the development principles of the application framework.
  • the system runtime library layer includes a library (also called a system library) and an Android operating system operating environment.
  • the library mainly provides various resources required by the Android operating system.
  • the operating environment of the Android operating system is used to provide a software environment for the Android operating system.
  • the kernel layer is the operating system layer of the Android operating system and is the lowest layer of the Android software stack.
  • the kernel layer provides core system services and hardware-related drivers for the Android operating system based on the Linux kernel.
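The four-layer architecture described above can be summarized as a simple lookup table. This is only an illustrative restatement of the layers and their roles, not Android API code:

```python
# Illustrative summary of the Android layers described in the text,
# ordered from top (index 0) to bottom (index 3).
ANDROID_LAYERS = [
    ("application layer", "system applications and third-party applications"),
    ("application framework layer", "framework that applications are developed against"),
    ("system runtime library layer", "system library plus the Android runtime environment"),
    ("kernel layer", "Linux-based core system services and hardware drivers"),
]

def layer_of(keyword):
    # Hypothetical helper: return the index of the first layer whose
    # name contains the given keyword (0 = top of the stack).
    for index, (name, _role) in enumerate(ANDROID_LAYERS):
        if keyword in name:
            return index
    raise KeyError(keyword)
```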
  • In the embodiments of the present disclosure, a developer may develop, based on the system architecture of the Android operating system shown in FIG. 1, a software program that implements the session creation method provided by the embodiments of the present disclosure, so that the session creation method can run on the Android operating system shown in FIG. 1. That is, the processor of the terminal device can implement the session creation method by running the software program in the Android operating system.
  • FIG. 2 is a schematic flowchart of a session creation method according to an embodiment of the present disclosure. As shown in FIG. 2, the session creation method includes steps 201 to 204:
  • Step 201: The terminal device receives a user's first input on a first image including at least one face image.
  • In the embodiments of the present disclosure, the case where the first image is displayed in a first interface is taken as an example for description.
  • The first interface may be an interface on which the terminal device collects images (i.e., a shooting preview interface), or an interface on which the terminal device displays images (for example, an interface for viewing an image that the user selects from an album or receives in an application), which is not specifically limited in the embodiments of the present disclosure.
  • FIG. 3 is a schematic diagram of a display interface provided by an embodiment of the present disclosure.
  • the first interface may be the interface 301a shown in (a) in FIG. 3 or the interface 301b shown in (b) in FIG. 3.
  • the interface 301a is a shooting preview interface of the camera of the terminal device
  • the interface 301b is a display interface for displaying images of the terminal device.
  • Optionally, a "face check person" or "check person" control may also be displayed in the first interface. It may be displayed in an area adjacent to other controls in the shooting interface (for example, to the right of "Record" in the interface 301a), or displayed adjacent to other controls after the user selects the first image. In this case, the first input may be an input on the "face check person" or "check person" control.
  • Alternatively, the "face check person" and "check person" controls may not be displayed in the first interface, and the face-based person lookup function may be enabled by a shortcut input from the user (for example, long-pressing the screen); this is not specifically limited in the embodiments of the present disclosure.
  • the first input may be touch screen input, fingerprint input, gravity input, key input, and the like.
  • Touch screen input is the user's input on the touch screen of the terminal device, such as long-press input, sliding input, click input, or floating input (input near the touch screen without touching it).
  • Fingerprint input is the user's input on the fingerprint reader of the terminal device, such as a sliding fingerprint, a long-press fingerprint, a single-click fingerprint, or a double-click fingerprint.
  • Gravity input refers to input such as shaking of the terminal device in a specific direction and a specified number of times.
  • Key input is the user's input on a key of the terminal device, such as single-click input, double-click input, long-press input, or combination-key input on the power key, volume key, or home key.
  • the embodiment of the present disclosure does not specifically limit the manner of the first input, and may be any achievable manner.
  • the first input may be a continuous input, or may include a plurality of discontinuous sub-inputs, which is not specifically limited in the embodiment of the present disclosure.
  • Step 202: In response to the first input, the terminal device displays an icon of at least one communication program.
  • the interface where the terminal device displays the icon of at least one communication program is the second interface.
  • the terminal device updates and displays the above-mentioned first interface as a second interface, and the second interface includes an icon of at least one communication program.
  • The at least one communication program in the embodiments of the present disclosure means a communication program that is installed in the terminal device and maintains contacts.
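A minimal sketch of this filtering step, under the assumption that the terminal can enumerate installed programs and knows which of them maintain contacts; the app table and names below are invented for illustration:

```python
# Illustrative table of installed programs; only communication programs
# that maintain a contact list get an icon in the second interface.
INSTALLED_APPS = [
    {"name": "Chat1", "has_contacts": True},
    {"name": "Camera", "has_contacts": False},
    {"name": "Mail1", "has_contacts": True},
]

def icons_to_display(apps):
    # step 202: keep only communication programs with contacts
    return [app["name"] for app in apps if app["has_contacts"]]
```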
  • Step 203: The terminal device receives the user's second input.
  • the second input may be a continuous input or an input composed of a plurality of discontinuous sub-inputs, which is not specifically limited in this embodiment of the present disclosure.
  • Optionally, the second input may be an input by which the user selects a face image in the first image and selects the icon of a communication program.
  • Step 204: In response to the second input, the terminal device displays a conversation interface, and the conversation interface includes M target identifiers.
  • Each target identifier is used to indicate a user; the M users indicated by the M target identifiers include the users indicated by K face images in the at least one face image, and the M target identifiers are identifiers in the target communication program corresponding to the second input. M and K are both positive integers, and K is less than or equal to M.
  • It should be noted that the K face images may correspond to more than K users, which is why K may be less than M.
  • the target identifier may be the user's memo name, nickname, user name, etc.
  • the conversation interface may be a group chat interface or a group messaging interface, which is not specifically limited in the embodiment of the present disclosure.
  • the terminal device updates and displays the above second interface as a conversation interface, and the conversation interface includes M target identifiers.
  • the conversation interface is a group chat interface
  • In the group chat interface, the user can send a message to these contacts, and these contacts can receive the message sent by the user; any one of these contacts can also send messages in the group chat, and the other users in the group chat can receive them.
  • the conversation interface is a group sending interface
  • the user can send a message to these contacts in the group sending interface, and these contacts can all receive the message sent by the user.
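The delivery difference between a group chat and a group-send conversation described above can be sketched as follows; the `deliver` function and the `"me"` sender label are illustrative assumptions, not part of the disclosure:

```python
# Illustrative model: in a group chat, any member's message reaches every
# other member; in a group-send conversation, only the creator ("me")
# broadcasts, and recipients cannot post to the group as a whole.
def deliver(conversation_type, sender, members, message):
    if conversation_type == "group_chat":
        # any member may post; all other members receive it
        return {m: message for m in members if m != sender}
    if conversation_type == "group_send":
        if sender != "me":
            return {}  # recipients cannot post into the broadcast
        return {m: message for m in members if m != "me"}
    raise ValueError(conversation_type)
```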
  • In the embodiments of the present disclosure, the terminal device receives a user's first input on a first image including at least one face image. In response to the first input, the terminal device displays an icon of at least one communication program. Next, the terminal device receives the user's second input. Finally, in response to the second input, the terminal device displays a conversation interface, and the conversation interface includes M target identifiers. Each target identifier is used to indicate a user; the M users indicated by the M target identifiers include the users indicated by K face images in the at least one face image, and the M target identifiers are identifiers in the target communication program corresponding to the second input. M and K are positive integers, and K is less than or equal to M.
  • In this way, the terminal device can display the icon of at least one communication program according to the user's first input on the first image, enabling the user to select a communication program and to select the users corresponding to the face images. After the selection is completed, the terminal device displays a session interface including the users indicated by the K face images in the at least one face image. Therefore, with the session creation method provided by the embodiments of the present disclosure, the user can quickly find the required contacts based on an image of their faces, and can then quickly create a conversation or add those users to an existing group.
  • step 202 may specifically be executed by step 202a1:
  • Step 202a1: In response to the first input, the terminal device displays the at least one face image in the first image and the icon of the at least one communication program.
  • the terminal device updates and displays the first interface as a second interface, and the second interface includes the at least one face image and the icon of at least one communication program in the first image.
  • the second interface may be an interface 302a
  • The interface 302a includes three face images and the icons of five communication programs, namely: face image 31, face image 32, face image 33, icon 1 of communication program 1, icon 2 of communication program 2, icon 3 of communication program 3, icon 4 of communication program 4, and icon 5 of communication program 5.
  • Optionally, the second input may be an input in which the user selects only the icon of a communication program on the second interface; in this case, the second input may default to selecting all the face images in the second interface together with the selected icon.
  • Alternatively, the second input may include the user's sub-input on a face image and sub-input on an icon, which is not specifically limited in the embodiments of the present disclosure.
  • the second interface may further include a selection control.
  • The selection control allows the user to select the required contacts.
  • In a case where the terminal device updates the first interface to the second interface, the second interface may be the interface 302b, which further includes a selection control 34.
  • The selection control 34 in the interface 302b can circle all the face images within a region surrounded by a dashed line, indicating that all the face images have been selected.
  • The user can move any face image in the second interface; for example, the user can remove any face image from the area enclosed by the dotted line (including deleting it or moving it to another area of the second interface).
  • the conversation interface displayed by the terminal device may be the interface 303a shown in (a) in FIG. 5 or the interface 303b shown in (b) in FIG. 5 .
  • the interface 303a may be a group chat interface, and the group chat interface may include three user names corresponding to three face images.
  • the interface 303b may be a group sending interface, and the group sending interface may also include 3 user names corresponding to 3 face images.
  • The terminal device can display the at least one face image and the icon of the at least one communication program according to the user's first input, thereby enabling the user, based on the displayed face images, to select the users with whom to establish a session. Therefore, in the session creation method provided by the embodiments of the present disclosure, the user can conveniently and quickly find the required contacts according to the at least one face image displayed on the terminal device.
  • the session creation method provided by the embodiment of the present disclosure further includes step 205 and step 206 after step 203:
  • Step 205: In response to the second input, the terminal device displays N face images and N target identifiers.
  • Each face image corresponds to one target identifier.
  • The N users indicated by the N target identifiers include the users indicated by P face images among the at least one face image.
  • The N target identifiers are identifiers in the target communication program.
  • P is an integer less than or equal to N.
  • The interface in which the terminal device displays the N face images and N target identifiers is referred to as the third interface.
  • Specifically, the terminal device may update the above-mentioned second interface to the third interface, where the third interface includes the N face images and N target identifiers.
  • the third interface may be an interface for establishing a group chat, and after receiving the second input from the user, the terminal device may display the interface 304a shown in (a) in FIG. 6.
  • Step 206 The terminal device receives the third input of the user.
  • The third input is an input for the user to confirm establishing the session, or an input to add the selected users to a group chat; it may be a continuous input or a plurality of discontinuous sub-inputs, and this is not specifically limited in the embodiments of the present disclosure.
  • the third input may be a user input on the third interface.
  • The third input may be an input on a session-establishing control in the interface; for example, the third input may be the user clicking "group chat" in the interface 304c shown in FIG. 7. The third input may also be a quick input; for example, it may be the user sliding up from the bottom of the screen shown in the interface 304c.
  • step 204 can be performed by step 204a.
  • Step 204a In response to the third input, the terminal device displays a conversation interface.
  • the terminal device may update and display the second interface as a conversation interface.
  • The user can determine, according to the displayed target identifier and face image, whether the contact corresponding to the face image is the contact required by the user.
  • The third input is a sliding input by the user in a preset direction in a blank area other than the N face images and the N target identifiers.
  • The third input is an input by which the user confirms establishing a session, or confirms joining users to an established session.
  • For example, the third input may be an input in which the user slides toward the top of the screen in the blank area.
  • The user can thus perform a sliding input in the preset direction in the blank area to control the terminal device to display the conversation interface, making the operation using the third input faster.
  • the method for creating a session provided by an embodiment of the present disclosure further includes steps 207 to 209 after step 203:
  • Step 207 The terminal device displays preset controls.
  • the above-mentioned third interface further includes preset controls.
  • The preset control may be an add control represented by text, or an add control represented by an icon.
  • the embodiment of the present disclosure does not specifically limit the type and display position of the preset control.
  • The preset control in the interface 304a shown in (a) in FIG. 6 is an "add" control 35, which is a text-type add control; the preset control in the interface 304b shown in (b) in FIG. 6 is the camera icon 36, which is an icon-type add control.
  • Step 208 The terminal device receives the fourth input from the user to the preset control.
  • The user may also add a contact from the contact list of the communication program through the preset control; that is, the session established by the session creation method of the embodiments of the present disclosure may also include contacts manually selected by the user directly from the contact list.
  • The fourth input may be an input in which the user selects the camera icon 36 (i.e., the preset control), or an input in which the user selects the camera icon 36 and slides up, such as the input shown in the interface 304b1 in (a) of FIG. 8.
  • the fourth input may be a continuous input or an input composed of multiple sub-inputs, which is not specifically limited in the embodiment of the present disclosure.
  • Step 209: In response to the fourth input, the terminal device displays T face images and T target identifiers.
  • The terminal device updates the third interface, and the updated third interface includes the T face images and T target identifiers.
  • The T face images include the N face images, and the T target identifiers include the N target identifiers.
  • The face images in the T face images other than the N face images are face images in a second image, where the second image is the image corresponding to the fourth input.
  • The users indicated by the target identifiers in the T target identifiers other than the N target identifiers are the users indicated by those other face images, and T is a positive integer.
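The relationship between the N displayed entries and the T entries after the fourth input can be sketched as a list-extension step. The data layout (parallel lists plus a contact dictionary) is a hypothetical simplification, not the disclosed data structure.

```python
# Extending the N displayed face images/identifiers with faces detected in a
# second image, yielding T face images and T target identifiers (T >= N).

def extend_with_second_image(n_faces, n_ids, second_image_faces, contact_book):
    """Keep the first N entries; append faces from the second image that
    match a contact and are not already displayed."""
    t_faces, t_ids = list(n_faces), list(n_ids)
    for face in second_image_faces:
        contact = contact_book.get(face)
        if contact is not None and face not in t_faces:
            t_faces.append(face)
            t_ids.append(contact)
    return t_faces, t_ids

contact_book = {"face_A": "Zhang San", "face_B": "Li Si", "face_C": "Wang Wu"}
n_faces, n_ids = ["face_A", "face_B"], ["Zhang San", "Li Si"]
t_faces, t_ids = extend_with_second_image(n_faces, n_ids, ["face_C", "face_A"], contact_book)
# t_faces == ["face_A", "face_B", "face_C"]
# t_ids == ["Zhang San", "Li Si", "Wang Wu"]
```

Note that a face already shown (face_A here) is not duplicated, so only genuinely new faces from the second image grow the list from N to T.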
  • The terminal device displays a preset control in the third interface, which facilitates the user in deciding, according to the N target identifiers and N face images displayed from the first image, whether to continue adding other contacts.
  • the fourth input includes a first sub-input and a second sub-input.
  • step 209 may also be executed through steps 209a and 209b:
  • Step 209a: In a case where the N face images and N target identifiers are displayed in a first area, in response to the user's first sub-input to the preset control, the terminal device displays a shooting preview interface in a second area.
  • Step 209b: In response to the user's second sub-input to the preset control, the terminal device performs a shooting operation, displays the captured second image in the second area, and displays, in the first area, the first face image in the second image, the first target identifier, the N face images, and the N target identifiers.
  • the second image may include at least one face image.
  • The fourth input is an input composed of multiple sub-inputs. As shown in (a) in FIG. 8, the user first selects the camera icon 36 on the interface 304b1 and drags it upward; the terminal device then displays the interface 304b2 shown in (b) in FIG. 8.
  • The user can then select the camera icon 36 again in the interface 304b2 and slide down, as shown in the interface 304b3 in (a) of FIG. 9.
  • the terminal device may display the face images in the image acquired in the image collection area and the target identifiers corresponding to these face images on the interface 304b4 shown in (b) in FIG. 9.
  • the interface 304b2 only uses the image acquisition area (including the camera preview interface) as an example for description.
  • The interface 304b2 may also display a contact list of the communication program, and the user may also select the contact to be added from the contact list; this is not specifically limited in the embodiments of the present disclosure.
  • the first area in the interface 304b4 shown in (b) of FIG. 9 may display the target identifier corresponding to the multiple face images.
  • The terminal device can display a shooting preview interface in the second area according to the user's first sub-input to the preset control. The terminal device then receives the user's second sub-input to the preset control, performs the shooting action, displays the captured second image in the second area, and displays in the first area the first face image, the first target identifier, and the previously displayed N face images and N target identifiers, which enables the user to continue adding users based on an image containing face images.
  • the session creation method provided by the embodiment of the present disclosure further includes step 210 and step 211 after step 205:
  • Step 210 The terminal device receives the fifth input of the user.
  • the terminal device may receive the fifth input of the user on the third interface.
  • the fifth input may be a continuous input or a plurality of discontinuous inputs, which is not specifically limited in this embodiment of the present disclosure.
  • the fifth input is an input for the user to remove unnecessary contacts.
  • the third interface may further include a delete control
  • the fifth input may specifically be a user's input to the second face image and the delete control.
  • Step 211: In response to the fifth input, the terminal device displays J face images and J target identifiers.
  • The J face images are images among the N face images, the J target identifiers are identifiers among the N target identifiers, and J is a positive integer less than N.
  • The terminal device updates the third interface, and the updated third interface includes the J face images and J target identifiers.
  • For example, the interface 304b5 shown in (a) in FIG. 10 is the third interface.
  • The user can slide down from the position of "Wang Wu" on the interface 304b5, and the terminal device updates the third interface to the interface 304b6 shown in (b) of FIG. 10 (that is, the updated third interface); the interface 304b6 includes Zhang San and Li Si and the face images corresponding to them.
  • the user can delete unwanted contacts in the third interface.
  • the method for creating a session may further include step 210a and step 211a after step 205:
  • Step 210a Receive the fifth input of the second face image from the user.
  • the second face image is the face image in the N face images.
  • Step 211a: In response to the fifth input, delete the second face image and the corresponding at least one target identifier.
  • The second face image and all of its corresponding target identifiers may be deleted, or the second face image and only part of its corresponding identifiers may be deleted.
  • The N face images may include identical face images, which is not specifically limited in the embodiments of the present disclosure.
  • The terminal device may delete the second face image and at least one target identifier corresponding to the second face image according to the user's input on the second face image among the N face images displayed on the terminal device, so that the delete operation is more convenient.
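The deletion in step 211a can be sketched as follows; each face image is assumed (for illustration only) to map to one or more target identifiers.

```python
# Remove a selected face image and every target identifier mapped to it,
# as in step 211a. The data layout is a hypothetical simplification.

def delete_face(faces, ids_by_face, target_face):
    """Return the face list and identifier map with target_face removed."""
    remaining_faces = [f for f in faces if f != target_face]
    remaining_ids = {f: ids for f, ids in ids_by_face.items() if f != target_face}
    return remaining_faces, remaining_ids

faces = ["Zhang San", "Li Si", "Wang Wu"]
ids_by_face = {"Zhang San": ["id_1"], "Li Si": ["id_2", "id_3"], "Wang Wu": ["id_4"]}
faces, ids_by_face = delete_face(faces, ids_by_face, "Li Si")
# faces == ["Zhang San", "Wang Wu"]; "Li Si" and both of its identifiers are gone
```

Because one face image may correspond to several identifiers (the text allows N face images to include duplicates), the map deletes all identifiers tied to the removed face in one step; partial deletion would filter the identifier list instead.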
  • the first input includes a third sub-input and a fourth sub-input.
  • step 202 may specifically be executed by steps 202a and 202b:
  • Step 202a In response to the received third sub-input of the user, the terminal device displays the first control.
  • the terminal device displays the first control in the first interface.
  • the third sub-input may be an input that the user clicks on the screen or a sliding input of the user on the screen.
  • The third sub-input may be the input of clicking the screen in the interface 305a shown in FIG. 11, and the first control may be the control 37 in the interface 305a; the text "establish group chat" or "create session" may be displayed in the control 37.
  • The third sub-input may also be a sliding input in two opposite directions, as shown in the interface 306a shown in FIG. 12.
  • the first control may also be the control 38 in the interface 306a.
  • the control 38 is a circular control, and the words "join group chat" are displayed in the control 38.
  • The control 38 may have other shapes, and other words may be displayed on it; this is not specifically limited in the embodiments of the present disclosure.
  • Step 202b In response to the received fourth sub-input of the first control by the user, the terminal device updates the interface displaying the first image to display an interface including at least one communication program icon.
  • In response to the received fourth sub-input of the user to the first control, the terminal device updates the first interface to the second interface.
  • the second interface may also be the interface 306a shown in FIG. 12.
  • the user can choose to move the first control to an icon in the interface 306a, for example, as shown in the interface 306a1 in FIG. 13, so that a conversation can be established in the communication program corresponding to the icon.
  • the terminal device may also display icons of multiple communication programs when the first control is displayed.
  • the terminal device can display the first control on the display interface, so that the user can select the information to be acquired by operating on the first control.
  • step 205 may be specifically executed by step 205a:
  • Step 205a In response to the second input, the terminal device displays the N face images, the N target identifiers, and at least one alternative session identifier, and each session identifier is used to indicate an established session.
  • the above-mentioned third interface further includes at least one alternative session identifier.
  • step 206 can be executed by step 206a:
  • Step 206a The terminal device receives the user's third input to the first session identifier.
  • step 204a can be executed by step 204a1:
  • Step 204a1: In response to the third input, the terminal device displays a session interface that includes all the target identifiers of the session indicated by the first session identifier together with the N target identifiers.
  • The first session identifier is one of the at least one candidate session identifier.
  • Alternatively, the third input may be a user input on the first session identifier and a first target identifier, where the first session identifier is an identifier among the at least one candidate session identifier and the first target identifier is an identifier among the N target identifiers.
  • In this case, the M target identifiers include identifiers indicating the users corresponding to the first session and the first target identifier, where the first session is the session indicated by the first session identifier.
  • the session identifier may be the name of the session, for example, the name of a group chat.
  • As in the interface 304c shown in FIG. 14, at least one session identifier may be displayed in the third interface; the user may select the icon of a session and the name of a contact (i.e., the first target identifier) to add that contact to the session. Of course, the user may also click on the session identifier to add all users in the third interface to the session; this is not specifically limited in the embodiments of the present disclosure.
  • The terminal device displays at least one candidate session identifier on the third interface, which enables the user to add the contacts determined from the image to one of those sessions, making it more convenient and quick to join an existing session.
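Joining selected users to an established session, as described above, amounts to a union of member lists. The sketch below uses a hypothetical dictionary of sessions keyed by session identifier; it is an illustration, not the disclosed implementation.

```python
# Merge the N selected target identifiers into an established session chosen
# via the first session identifier, skipping members already present.

def add_to_session(sessions, session_id, selected_ids):
    """Union the selected identifiers into the session's member list."""
    members = sessions[session_id]
    for ident in selected_ids:
        if ident not in members:
            members.append(ident)
    return sessions

sessions = {"Group A": ["Zhang San"]}
sessions = add_to_session(sessions, "Group A", ["Zhang San", "Li Si", "Wang Wu"])
# sessions["Group A"] == ["Zhang San", "Li Si", "Wang Wu"]
```

The duplicate check reflects the text's behavior: a user corresponding to the first session who is also among the N identifiers appears in the resulting session only once.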
  • The method for creating a session provided by an embodiment of the present disclosure further includes step 212 after step 203:
  • Step 212 The terminal device displays N indication marks corresponding to the N face images.
  • An indication identifier is used to indicate the similarity between a face image and a third image.
  • The third image is an image, among at least one target image, whose similarity with the face image is greater than or equal to a similarity threshold.
  • The at least one target image is an image corresponding to a second target identifier in the target communication program, where the second target identifier is the target identifier corresponding to the face image.
  • the third interface further includes N indication identifiers corresponding to N face images.
  • the N indication marks may be digital marks, text marks, or color marks, which are not specifically limited in the embodiment of the present disclosure.
  • In the interface 304a1 in FIG. 15, the rows are arranged from top to bottom in descending order of similarity: the face image in Zhang San's user information has the highest similarity with the corresponding face image in the first image, followed by Li Si and Wang Wu.
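The top-to-bottom ordering described above is simply a descending sort on the similarity value. The scores below are invented for illustration.

```python
# Rows of (target identifier, similarity to the corresponding face image in
# the first image); the similarity scores are made-up examples.
rows = [("Li Si", 0.88), ("Wang Wu", 0.71), ("Zhang San", 0.95)]

# Arrange from top to bottom in descending order of similarity.
rows_sorted = sorted(rows, key=lambda row: row[1], reverse=True)
# [("Zhang San", 0.95), ("Li Si", 0.88), ("Wang Wu", 0.71)]
```

With these example scores, Zhang San's row appears first, matching the ordering shown in interface 304a1.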
  • When the first target similarity is greater than or equal to a first threshold, the terminal device determines the contact corresponding to the first target similarity as the contact corresponding to the first face image; when the first target similarity is less than the first threshold and the second target similarity is greater than or equal to the first threshold, the terminal device determines the contact corresponding to the second target similarity as the contact corresponding to the first face image. Here, the first target similarity is the similarity between the first face image and a second face image, where the second face image is a face image used as an avatar in the contact list or a face image in a contact label, and the first face image is any face image among the at least one face image in the first image; the second target similarity is the similarity between the first face image and a third face image, where the third face image is an avatar face image that is not in the contact list but is contained in a second session that includes the user, the second session being a session in the target communication program.
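The two-tier matching rule above can be sketched as a small function. The function name and the 0-to-1 similarity scale are assumptions for illustration; the disclosure does not fix a particular scale or API.

```python
def resolve_contact(first_sim, first_contact, second_sim, second_contact, threshold):
    """Two-tier matching: prefer the contact-list match (avatar or label face);
    fall back to a group-chat avatar match only when the first similarity is
    below the threshold. Return None if neither tier passes."""
    if first_sim >= threshold:
        return first_contact
    if second_sim >= threshold:
        return second_contact
    return None

# First tier passes: the contact-list match wins even if the fallback also scores well.
match = resolve_contact(0.92, "Zhang San", 0.80, "Li Si", threshold=0.85)
# match == "Zhang San"

# First tier fails, second tier passes: the group-chat avatar match is used.
fallback = resolve_contact(0.60, "Zhang San", 0.90, "Li Si", threshold=0.85)
# fallback == "Li Si"
```

The asymmetry is deliberate: the second target similarity is consulted only when the contact-list comparison falls below the first threshold, exactly as the text specifies.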
  • the terminal device can select user information in a communication program when calculating the similarity.
  • The user information may include an avatar and a label. The avatar may be a face image that the user has noted for another user in the user's own terminal device (for example, as shown in the interface 307 in FIG. 16, the avatar is the face image noted for the contact "The Pig Can Fly"), or an image set by the other user. Similarly, the image in the label may be a face image noted for another user in the user's own terminal device, or an image set by the other user.
  • For example, the label in FIG. 16 is the image of a puppy set by the contact "The Pig Can Fly". The embodiments of the present disclosure do not specifically limit this.
  • Any of the above third interfaces can display an indication identifier, and the user can determine, according to the indication identifier, which user's face image in the user information has a high similarity with the face image in the first image, and can thus refer to it when selecting the contact with whom to create a session for sending information.
  • The terminal device can display, in the third interface, the similarity between each contact and the corresponding face image, which makes the acquired information more accurate and can be referred to when selecting the contact with whom to create a session for sending information.
  • FIG. 17 is a possible structural schematic diagram of a terminal device provided by an embodiment of the present disclosure.
  • The terminal device 400 includes a receiving module 401 and a display module 402. The receiving module 401 is configured to receive a user's first input to a first image including at least one face image; the display module 402 is configured to display an icon of at least one communication program in response to the first input received by the receiving module 401; the receiving module 401 is further configured to receive the user's second input; and the display module 402 is further configured to display, in response to the second input, a conversation interface including M target identifiers.
  • Each target identifier is used to indicate a user, the M users indicated by the M target identifiers include the users indicated by K face images among the at least one face image, the M target identifiers are identifiers in the target communication program corresponding to the second input, M and K are both positive integers, and K is less than or equal to M.
  • the display module 402 is specifically configured to display the at least one face image and the icon of the at least one communication program in the first image in response to the first input received by the receiving module 401.
  • The display module 402 is further configured to display N face images and N target identifiers in response to the second input after the receiving module 401 receives the user's second input, wherein each face image corresponds to one target identifier, the N users indicated by the N target identifiers include the users indicated by P face images among the at least one face image, the N target identifiers are identifiers in the target communication program, and P is an integer less than or equal to N. The receiving module 401 is further configured to receive the third input of the user, and the display module 402 is specifically configured to display the conversation interface in response to the third input received by the receiving module 401.
  • The third input is a sliding input by the user in a preset direction in a blank area other than the N face images and the N target identifiers.
  • The display module 402 is further configured to display the preset control after the receiving module 401 receives the second input; the receiving module 401 is further configured to receive the user's fourth input to the preset control; the display module 402 is configured to display T face images and T target identifiers in response to the fourth input received by the receiving module 401, wherein the T face images include the N face images, the T target identifiers include the N target identifiers, the face images in the T face images other than the N face images are face images in the second image, the second image is an image corresponding to the fourth input, the users indicated by the target identifiers in the T target identifiers other than the N target identifiers are the users indicated by those other face images, and T is a positive integer.
  • the fourth input includes a first sub-input and a second sub-input;
  • The display module 402 is specifically configured to: in a case where the N face images and the N target identifiers are displayed in the first area, display the shooting preview interface in the second area in response to the user's first sub-input to the preset control; and, in response to the user's second sub-input to the preset control, perform a shooting operation, display the captured second image in the second area, and display, in the first area, the first face image in the second image, the first target identifier, the N face images, and the N target identifiers.
  • The receiving module 401 is further configured to receive the user's fifth input after the display module 402 displays the N face images and N target identifiers; the display module 402 is further configured to display, in response to the fifth input received by the receiving module 401, J face images and J target identifiers, where the J face images are images among the N face images, the J target identifiers are identifiers among the N target identifiers, and J is a positive integer less than N.
  • the receiving module 401 is also used to receive the fifth input of the second face image by the user after the display module 402 displays the N face images and N target identifiers; the display module 402 is also used to respond to the received The fifth input received by the module 401 deletes the second face image and the corresponding at least one target identifier.
  • The first input includes a first sub-input and a second sub-input.
  • The display module 402 is specifically configured to display an add control in response to the user's first sub-input received by the receiving module 401, and to display an icon of at least one communication program in response to the user's second sub-input to the add control received by the receiving module 401.
  • the display module 402 is specifically configured to display the N face images, the N target identifiers, and at least one candidate session identifier in response to the second input, and each session identifier is used to indicate an established session;
  • the receiving module 401 is specifically configured to receive the third input of the user corresponding to the first session identifier; the display module 402 is specifically configured to display all target identifiers including the first session identifier in response to the third input received by the receiving module 401 A conversation interface with the N target identifiers; where the first conversation identifier is an identifier in at least one candidate conversation identifier.
  • The display module 402 is further configured to display N indication identifiers corresponding to the N face images after the receiving module 401 receives the second input from the user; one indication identifier is used to indicate the similarity between a face image and a third image.
  • The third image is an image, among at least one target image, whose similarity with the face image is greater than or equal to the similarity threshold; the at least one target image is the image corresponding to the second target identifier, and the second target identifier is the target identifier corresponding to the face image.
  • the terminal device 400 provided by an embodiment of the present disclosure can implement various processes implemented by the terminal device in the foregoing method embodiments, and to avoid repetition, details are not described herein again.
  • The terminal device receives a user's first input to a first image including at least one face image. Then, in response to the first input, the terminal device displays an icon of at least one communication program. Next, the terminal device receives the user's second input. Finally, in response to the second input, the terminal device displays a conversation interface that includes M target identifiers. Each target identifier is used to indicate a user, the M users indicated by the M target identifiers include the users indicated by K face images among the at least one face image, the M target identifiers are identifiers in the target communication program corresponding to the second input, M and K are positive integers, and K is less than or equal to M.
  • The terminal device can display the icon of at least one communication program according to the received first input of the user on the first image, thereby enabling the user to select a communication program and to select the users corresponding to particular face images. After the selection is completed, the terminal device displays a session interface for the users indicated by the K face images among the at least one face image. Therefore, the session creation method provided by the embodiments of the present disclosure can quickly find the desired contacts based on an image containing face images, and can then quickly create a conversation or add the users to an existing group chat.
  • the terminal device 100 includes but is not limited to: a radio frequency unit 101, a network module 102, an audio output unit 103, an input unit 104, a sensor 105, and a display unit 106, user input unit 107, interface unit 108, memory 109, processor 110, power supply 111 and other components.
  • the structure of the terminal device shown in FIG. 18 does not constitute a limitation on the terminal device, and the terminal device may include more or fewer components than the illustration, or combine some components, or different components Layout.
  • terminal devices include, but are not limited to, mobile phones, tablet computers, notebook computers, palmtop computers, in-vehicle terminal devices, wearable devices, and pedometers.
  • The user input unit 107 is configured to receive a user's first input to a first image including at least one face image; the display unit 106 is configured to display an icon of at least one communication program in response to the first input; the user input unit 107 is further configured to receive the user's second input; the display unit 106 is further configured to display a conversation interface in response to the second input, the conversation interface including M target identifiers, wherein each target identifier is used to indicate a user, the M users indicated by the M target identifiers include the users indicated by the K face images among the at least one face image, the M target identifiers are identifiers in the target communication program corresponding to the second input, M and K are positive integers, and K is less than or equal to M.
  • the terminal device receives a first input of a first image including at least one face image by a user. Then, in response to the first input, the terminal device displays an icon of at least one communication program. Secondly, the terminal device receives the user's second input. Finally, in response to the second input, the terminal device displays a conversation interface, which includes M target identifiers.
  • Each of the M target identifiers is used to indicate a user; the M users indicated by the M target identifiers include the users indicated by K face images among the at least one face image; the M target identifiers are identifiers in the target communication program corresponding to the second input; M and K are positive integers, and K is less than or equal to M.
  • The terminal device can display the icon of at least one communication program according to the received first input of the user on the first image, thereby enabling the user to select a communication program and to select the users corresponding to particular face images. After the selection is completed, the terminal device displays a session interface for the users indicated by the K face images among the at least one face image. Therefore, the session creation method provided by the embodiments of the present disclosure can quickly find the desired contacts based on an image containing face images, and can then quickly create a conversation or add the users to an existing group chat.
  • the radio frequency unit 101 may be used to receive and send signals during information transmission and reception or during a call. Specifically, downlink data received from a base station is passed to the processor 110 for processing, and uplink data is sent to the base station.
  • the radio frequency unit 101 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier, a duplexer, and the like.
  • the radio frequency unit 101 can also communicate with the network and other devices through a wireless communication system.
  • the terminal device provides wireless broadband Internet access to the user through the network module 102, such as helping the user to send and receive emails, browse web pages, and access streaming media.
  • the audio output unit 103 may convert the audio data received by the radio frequency unit 101 or the network module 102 or stored in the memory 109 into an audio signal and output as sound. Moreover, the audio output unit 103 may also provide audio output related to a specific function performed by the terminal device 100 (for example, call signal reception sound, message reception sound, etc.).
  • the audio output unit 103 includes a speaker, a buzzer, a receiver, and the like.
  • the input unit 104 is used to receive audio or video signals.
  • the input unit 104 may include a graphics processing unit (GPU) 1041 and a microphone 1042.
  • the graphics processor 1041 processes image data of still pictures or videos obtained by an image capture device (such as a camera) in a video capture mode or an image capture mode.
  • the processed image frame may be displayed on the display unit 106.
  • the image frame processed by the graphics processor 1041 may be stored in the memory 109 (or other storage medium) or sent via the radio frequency unit 101 or the network module 102.
  • the microphone 1042 can receive sound, and can process such sound into audio data.
  • in a telephone call mode, the processed audio data can be converted into a format that can be sent to a mobile communication base station via the radio frequency unit 101 and output.
  • the terminal device 100 further includes at least one sensor 105, such as a light sensor, a motion sensor, and other sensors.
  • the light sensor includes an ambient light sensor and a proximity sensor, wherein the ambient light sensor can adjust the brightness of the display panel 1061 according to the brightness of the ambient light, and the proximity sensor can turn off the display panel 1061 and/or the backlight when the terminal device 100 is moved close to the ear.
  • as one type of motion sensor, the accelerometer sensor can detect the magnitude of acceleration in each direction (generally three axes), and can detect the magnitude and direction of gravity when at rest; it can be used to identify the posture of the terminal device (such as portrait/landscape switching, related games, and magnetometer attitude calibration) and for vibration-recognition functions (such as a pedometer or tap detection). The sensor 105 may also include a fingerprint sensor, a pressure sensor, an iris sensor, a molecular sensor, a gyroscope, a barometer, a hygrometer, a thermometer, an infrared sensor, and the like, which are not repeated here.
  • the display unit 106 is used to display information input by the user or information provided to the user.
  • the display unit 106 may include a display panel 1061, and the display panel 1061 may be configured in the form of a liquid crystal display (LCD), an organic light-emitting diode (OLED) display, or the like.
  • the user input unit 107 may be used to receive input numeric or character information, and generate key signal input related to user settings and function control of the terminal device.
  • the user input unit 107 includes a touch panel 1071 and other input devices 1072.
  • the touch panel 1071, also known as a touch screen, can collect the user's touch operations on or near it (for example, operations performed by the user on or near the touch panel 1071 with a finger, a stylus, or any other suitable object or accessory).
  • the touch panel 1071 may include a touch detection device and a touch controller.
  • the touch detection device detects the user's touch position, detects the signal generated by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into contact coordinates, and sends them to the processor 110, then receives and executes commands sent by the processor 110.
  • the touch panel 1071 may be implemented in various types such as resistive, capacitive, infrared, and surface acoustic waves.
  • the user input unit 107 may also include other input devices 1072.
  • other input devices 1072 may include, but are not limited to, physical keyboards, function keys (such as volume control keys, switch keys, etc.), trackballs, mice, and joysticks, which will not be repeated here.
  • the touch panel 1071 may be overlaid on the display panel 1061.
  • when the touch panel 1071 detects a touch operation on or near it, the operation is transmitted to the processor 110 to determine the type of the touch event, and the processor 110 then provides a corresponding visual output on the display panel 1061 according to the type of the touch event.
  • although the touch panel 1071 and the display panel 1061 are implemented as two independent components to realize the input and output functions of the terminal device, in some embodiments the touch panel 1071 and the display panel 1061 may be integrated to realize the input and output functions of the terminal device; this is not specifically limited here.
  • the interface unit 108 is an interface for connecting an external device to the terminal device 100.
  • the external device may include a wired or wireless headset port, an external power (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device with an identification module, audio input/output (I/O) port, video I/O port, headphone port, etc.
  • the interface unit 108 may be used to receive input (e.g., data information, power, etc.) from an external device and transmit the received input to one or more elements within the terminal device 100, or may be used to transfer data between the terminal device 100 and external devices.
  • the memory 109 may be used to store software programs and various data.
  • the memory 109 may mainly include a storage program area and a storage data area, wherein the storage program area may store an operating system and application programs required by at least one function (such as a sound playback function, an image playback function, etc.), and the storage data area may store data created according to the use of the mobile phone (such as audio data, a phone book, etc.).
  • the memory 109 may include a high-speed random access memory, and may also include a non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or another non-volatile solid-state storage device.
  • the processor 110 is the control center of the terminal device. It connects the various parts of the entire terminal device using various interfaces and lines, and performs the various functions of the terminal device and processes data by running or executing the software programs and/or modules stored in the memory 109 and invoking the data stored in the memory 109, thereby monitoring the terminal device as a whole.
  • the processor 110 may include one or more processing units; optionally, the processor 110 may integrate an application processor and a modem processor, where the application processor mainly handles the operating system, the user interface, application programs, etc., and the modem processor mainly handles wireless communication. It can be understood that the modem processor may alternatively not be integrated into the processor 110.
  • the terminal device 100 may further include a power supply 111 (such as a battery) that supplies power to the various components. Optionally, the power supply 111 may be logically connected to the processor 110 through a power management system, so that functions such as charge management, discharge management, and power consumption management are implemented through the power management system.
  • in addition, the terminal device 100 includes some functional modules that are not shown, which are not repeated here.
  • an embodiment of the present disclosure further provides a terminal device which, with reference to FIG. 18, includes a processor 110, a memory 109, and a computer program stored in the memory 109 and executable on the processor 110. When the computer program is executed by the processor 110, the processes of the above session creation method embodiments are implemented and the same technical effects can be achieved; to avoid repetition, details are not repeated here.
  • embodiments of the present disclosure also provide a computer-readable storage medium storing a computer program. When the computer program is executed by a processor, each process of the foregoing session creation method embodiments is implemented and the same technical effects can be achieved; to avoid repetition, details are not repeated here.
  • the computer-readable storage medium such as read-only memory (Read-Only Memory, ROM), random access memory (Random Access Memory, RAM), magnetic disk or optical disk, etc.
  • the methods in the above embodiments can be implemented by means of software plus a necessary general-purpose hardware platform, and of course can also be implemented by hardware, but in many cases the former is the better implementation.
  • the technical solutions of the present disclosure, in essence, or the part contributing to the related art, can be embodied in the form of a software product. The computer software product is stored in a storage medium (such as a ROM/RAM, a magnetic disk, or an optical disc) and includes several instructions for enabling a terminal device (which may be a mobile phone, a computer, a server, an air conditioner, a network device, or the like) to execute the methods described in the various embodiments of the present disclosure.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • Signal Processing (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Multimedia (AREA)
  • Data Mining & Analysis (AREA)
  • Business, Economics & Management (AREA)
  • General Business, Economics & Management (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • Evolutionary Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • User Interface Of Digital Computer (AREA)
  • Information Transfer Between Computers (AREA)

Abstract

A session creation method and a terminal device. The method includes: a terminal device receives a user's first input on a first image including at least one face image (201); in response to the first input, an icon of at least one communication program is displayed (202); the terminal device receives the user's second input (203); in response to the second input, a conversation interface is displayed, the conversation interface including M target identifiers (204); where each target identifier is used to indicate one user, the M users indicated by the M target identifiers include users indicated by K face images among the at least one face image, the M target identifiers are identifiers in a target communication program corresponding to the second input, M and K are both positive integers, and K is less than or equal to M.

Description

Session creation method and terminal device
Cross-reference to related applications
This application claims priority to Chinese Patent Application No. 201811584399.7, filed in China on December 24, 2018, the entire contents of which are incorporated herein by reference.
Technical field
Embodiments of the present disclosure relate to the field of communication technologies, and in particular, to a session creation method and a terminal device.
Background
With the development of communication technologies, users have more and more ways of sending information using terminal devices.
Generally, if a user needs to send the same message to multiple contacts through a communication program on a terminal device, the user can search for the multiple contacts in the communication program's contact list and trigger the terminal device to create a group chat for these contacts. The user can then trigger the terminal device, through the group chat, to send a message to these contacts, and all of these contacts can receive the message that the user triggered to be sent.
However, if the user cannot learn a contact's name, the above way of searching for contacts may fail to find the desired contact quickly, which makes the creation of a group chat slow.
Summary
Embodiments of the present disclosure provide a session creation method and a terminal device, to solve the problem that creating a group chat is slow when the user cannot obtain contacts' names.
To solve the above technical problem, the embodiments of the present disclosure are implemented as follows:
In a first aspect, an embodiment of the present disclosure provides a session creation method: receiving a user's first input on a first image including at least one face image; in response to the first input, displaying an icon of at least one communication program; receiving the user's second input; and in response to the second input, displaying a conversation interface; where the conversation interface includes M target identifiers, each target identifier is used to indicate one user, the M users indicated by the M target identifiers include users indicated by K face images among the at least one face image, the M target identifiers are identifiers in a target communication program corresponding to the second input, M and K are both positive integers, and K is less than or equal to M.
In a second aspect, an embodiment of the present disclosure further provides a terminal device, including a receiving module and a display module. The receiving module is configured to receive a user's first input on a first image including at least one face image; the display module is configured to display an icon of at least one communication program in response to the first input received by the receiving module; the receiving module is further configured to receive the user's second input; and in response to the second input received by the receiving module, a conversation interface is displayed; where the conversation interface includes M target identifiers, each target identifier is used to indicate one user, the M users indicated by the M target identifiers include users indicated by K face images among the at least one face image, the M target identifiers are identifiers in a target communication program corresponding to the second input, M and K are both positive integers, and K is less than or equal to M.
In a third aspect, an embodiment of the present disclosure provides a terminal device, including a processor, a memory, and a computer program stored in the memory and executable on the processor, where the computer program, when executed by the processor, implements the steps of the session creation method according to the first aspect.
In a fourth aspect, an embodiment of the present disclosure provides a computer-readable storage medium storing a computer program, where the computer program, when executed by a processor, implements the steps of the session creation method according to the first aspect.
In the embodiments of the present disclosure, the terminal device first receives a user's first input on a first image including at least one face image. Then, in response to the first input, the terminal device displays an icon of at least one communication program. Next, the terminal device receives the user's second input. Finally, in response to the second input, the terminal device displays a conversation interface including M target identifiers. Each target identifier indicates one user; the M users indicated by the M target identifiers include users indicated by K face images among the at least one face image; the M target identifiers are identifiers in a target communication program corresponding to the second input; M and K are both positive integers, and K is less than or equal to M. Since the first image includes face images, the terminal device can display, according to the received first input on the first image, an icon of at least one communication program, enabling the user to select a communication program and to select which face images correspond to the intended users. After the selection is complete, the terminal device displays a conversation interface including the users indicated by the K face images among the at least one face image. Therefore, the session creation method provided by the embodiments of the present disclosure can quickly find the desired contacts from an image that includes face images, and can then quickly create a conversation or add the users to an existing group chat.
Brief description of the drawings
FIG. 1 is a schematic architectural diagram of a possible Android operating system according to an embodiment of the present disclosure;
FIG. 2 is a schematic flowchart of a session creation method according to an embodiment of the present disclosure;
FIG. 3 is a first schematic diagram of a display interface according to an embodiment of the present disclosure;
FIG. 4 is a second schematic diagram of a display interface according to an embodiment of the present disclosure;
FIG. 5 is a third schematic diagram of a display interface according to an embodiment of the present disclosure;
FIG. 6 is a fourth schematic diagram of a display interface according to an embodiment of the present disclosure;
FIG. 7 is a fifth schematic diagram of a display interface according to an embodiment of the present disclosure;
FIG. 8 is a sixth schematic diagram of a display interface according to an embodiment of the present disclosure;
FIG. 9 is a seventh schematic diagram of a display interface according to an embodiment of the present disclosure;
FIG. 10 is an eighth schematic diagram of a display interface according to an embodiment of the present disclosure;
FIG. 11 is a ninth schematic diagram of a display interface according to an embodiment of the present disclosure;
FIG. 12 is a tenth schematic diagram of a display interface according to an embodiment of the present disclosure;
FIG. 13 is an eleventh schematic diagram of a display interface according to an embodiment of the present disclosure;
FIG. 14 is a twelfth schematic diagram of a display interface according to an embodiment of the present disclosure;
FIG. 15 is a thirteenth schematic diagram of a display interface according to an embodiment of the present disclosure;
FIG. 16 is a fourteenth schematic diagram of a display interface according to an embodiment of the present disclosure;
FIG. 17 is a schematic diagram of a possible structure of a terminal device according to an embodiment of the present disclosure;
FIG. 18 is a schematic diagram of the hardware structure of a terminal device according to the embodiments of the present disclosure.
具体实施方式
下面将结合本公开实施例中的附图,对本公开实施例中的技术方案进行清楚、完整地描述,显然,所描述的实施例是本公开一部分实施例,而不是全部的实施例。基于本公开 中的实施例,本领域普通技术人员在没有作出创造性劳动前提下所获得的所有其他实施例,都属于本公开保护的范围。
需要说明的是,本文中的“/”表示或的意思,例如,A/B可以表示A或B;本文中的“和/或”仅仅是一种描述关联对象的关联关系,表示可以存在三种关系,例如,A和/或B,可以表示:单独存在A,同时存在A和B,单独存在B这三种情况。“多个”是指两个或多于两个。
本公开的说明书和权利要求书中的术语“第一”和“第二”等是用于区别不同的对象,而不是用于描述对象的特定顺序。例如,第一添加控件和第二添加控件等是用于区别不同的添加控件,而不是用于描述添加控件的特定顺序。
需要说明的是,本公开实施例中,“示例性的”或者“例如”等词用于表示作例子、例证或说明。本公开实施例中被描述为“示例性的”或者“例如”的任何实施例或设计方案不应被解释为比其它实施例或设计方案更可选或更具优势。确切而言,使用“示例性的”或者“例如”等词旨在以具体方式呈现相关概念。
本公开实施例中的终端设备可以为具有操作系统的终端设备。该操作系统可以为安卓(Android)操作系统,可以为ios操作系统,还可以为其他可能的操作系统,本公开实施例不作具体限定。
下面以安卓操作系统为例,介绍一下本公开实施例提供的会话创建方法所应用的软件环境。
如图1所示,为本公开实施例提供的一种可能的安卓操作系统的架构示意图。在图1中,安卓操作系统的架构包括4层,分别为:应用程序层、应用程序框架层、系统运行库层和内核层(具体可以为Linux内核层)。
其中,应用程序层包括安卓操作系统中的各个应用程序(包括系统应用程序和第三方应用程序)。
应用程序框架层是应用程序的框架,开发人员可以在遵守应用程序的框架的开发原则的情况下,基于应用程序框架层开发一些应用程序。
系统运行库层包括库(也称为系统库)和安卓操作系统运行环境。库主要为安卓操作系统提供其所需的各类资源。安卓操作系统运行环境用于为安卓操作系统提供软件环境。
内核层是安卓操作系统的操作系统层,属于安卓操作系统软件层次的最底层。内核层基于Linux内核为安卓操作系统提供核心系统服务和与硬件相关的驱动程序。
以安卓操作系统为例,本公开实施例中,开发人员可以基于上述如图1所示的安卓操作系统的系统架构,开发实现本公开实施例提供的会话创建方法的软件程序,从而使得该会话创建方法可以基于如图1所示的安卓操作系统运行。即处理器或者终端设备可以通过在安卓操作系统中运行该软件程序实现本公开实施例提供的会话创建方法。
下面结合图2中对本公开实施例的会话创建方法进行说明。图2为本公开实施例提供的一种会话创建方法的流程示意图,如图2所示,该会话创建方法包括步骤201至步骤204:
步骤201、终端设备接收用户对包括至少一个人脸图像的第一图像的第一输入。
为了便于说明,以第一图像显示在第一界面中为例进行说明,第一界面可以为终端设备采集图像的界面(即,拍摄预览界面),也可以为终端设备显示图像的界面(例如,用户从相册中或者接收图像的应用中选择一张图像查看的界面),本公开实施例对此不作具 体限定。
示例性,图3为本公开实施例提供的一种显示界面的示意图。其中,第一界面可以为图3中的(a)所示的界面301a,也可以为图3中的(b)所示的界面301b。其中,界面301a为终端设备的相机的拍摄预览界面,界面301b为终端设备展示图像的展示界面。
需要说明的是,在第一界面中还可以显示一个“人脸查人”或者“查人”的控件,可以显示在拍摄界面中的其他控件的相邻区域(例如,可以显示在界面301a中“录像”的右边区域),或者在用户选中第一图像之后,可以显示在其他控件的相邻区域,则第一输入可以为对该“人脸查人”或者“查人”控件的输入。当然,第一界面中也可以不显示“人脸查人”和“查人”控件,可以通过接收用户的快捷输入(例如,长按屏幕)启用人脸查人功能,本公开实施例对此不作具体限定。
可选的,第一输入可以为触屏输入、指纹输入、重力输入、按键输入等。其中,触屏输入为用户对终端设备的触控屏的按压输入、长按输入、滑动输入、点击输入、悬浮输入(用户在触控屏附近的输入)等输入。指纹输入为用户对终端设备的指纹识别器的滑动指纹、长按指纹、单击指纹和双击指纹等输入。重力输入为用户对终端设备特定方向的晃动、特定次数的晃动等输入。按键输入对应于用户对终端设备的电源键、音量键、Home键等按键的单击输入、双击输入、长按输入、组合按键输入等输入。具体的,本公开实施例对第一输入的方式不作具体限定,可以为任一可实现的方式。
需要说明的是,本公开实施例中,第一输入可以为一个连续的输入,也可以为包括多个不连续的子输入,本公开实施例对此不作具体限定。
步骤202、响应于第一输入,终端设备显示至少一个通讯程序的图标。
为了便于说明,假设端设备显示至少一个通讯程序的图标的界面为第二界面。
具体的,终端设备将上述的第一界面更新显示为第二界面,第二界面包括至少一个通讯程序的图标。
可以理解的是,本公开实施例中的至少一个通讯程序为终端设备中已安装的具有联系人的通讯程序。
步骤203、终端设备接收用户的第二输入。
可选的,第二输入可以为一个连续的输入,也可以为不连续的多个子输入组成的输入,本公开实施例对此不作具体限定。
可以理解,第二输入可以为用户选择第一图像中的人脸图像,以及选择通讯程序的图标的输入。
步骤204、响应于第二输入,终端设备显示会话界面,该会话界面包括M个目标标识。
其中,每个目标标识用于指示一个用户,该M个目标标识指示的M个用户包括至少一个个人脸图像中的K个人脸图像指示的用户,该M个目标标识为与第二输入对应的目标通讯程序中的标识,M和K均为正整数,且K小于或等于M。
需要说明的是,由于不同的用户可能采用相同的图像的为头像,因此,K个人脸图像可以对应多于K个用户。
可选的,目标标识可以为用户的备注名称、昵称、用户名等。
可选的,在本公开实施例中,会话界面可以为群聊界面,也可以为群发消息界面,本公开实施例对此不作具体限定。
具体的,终端设备将上述第二界面更新显示为会话界面,会话界面包括M个目标标识。
通常,若会话界面为群聊界面时,用户可以在群聊界面中向这些联系人发送消息,这些联系人均可以接收到用户发送的该消息,这些联系人中的任意一个联系人也可以在该群聊中发送消息,其他用户均可以接收到这些联系人在该群聊中发送的消息。若会话界面为群发界面时,用户可以在群发界面中向这些联系人发送消息,这些联系人均可以接收到用户发送的该消息。
本公开实施例提供的会话创建方法,首先,终端设备接收用户对包括至少一个人脸图像的第一图像的第一输入。然后,响应于第一输入,终端设备显示至少一个通讯程序的图标。其次,终端设备接收用户的第二输入。最后,响应于第二输入,终端设备显示为话界面,该会话界面包括M个目标标识。每个目标标识用于指示一个用户,M个目标标识指示的M个用户包括至少一个人脸图像中的K个人脸图像指示的用户,M个目标标识为与第二输入对应的目标通讯程序中的标识,M和K均为正整数,且K小于或等于M。由于第一图像包括人脸图像,终端设备可以根据接收到的用户对第一图像的第一输入,向用户显示至少一个通讯程序的图标,从而能够使得用户选择通讯程序,以及选择与哪可以些人脸图像对应的用户,在用户选择完成之后,终端设备显示包括该至少一个人脸图像中的K个人脸图像指示的用户会话界面,因此,本公开实施例提供的会话创建方法可以根据包括人脸图像的图像快速找到需要的联系人,进而能够快速创建会话或者将用户加入已有群中。
一种可能的实现方式中,本公开实施例提供的会话创建方法,步骤202具体可以通过步骤202a1执行:
步骤202a1、响应于第一输入,显示第一图像中的该至少一个人脸图像和该至少一个通讯程序的图标。
为了便于说明,假设第二界面中还可以显示至少一个人脸图像。具体的,终端设备将上述第一界面更新显示为第二界面,该第二界面包括上述第一图像中的该至少一个人脸图像和至少一个通讯程序的图标。
示例性的,结合图3,如图4中的(a)所示,第二界面可以为界面302a,界面302a中包括3个人脸图像和5个通讯程序的图标,分别为:人脸图像31、人脸图像32、人脸图像33、通讯程序1的图标1、通讯程序2的图标2、通讯程序3的图标3、通讯程序4的图标4以及通讯程序5的图标5。
进而,第二输入可以为用户在第二界面上仅选择通讯程序的图标的输入,此时第二输入可以默认为用户选择了第二界面中所有的人脸图像以及对应的通讯程序的图标;或者,第二输入可以包括用户对人脸图像的子输入和对图标的子输入,本公开实施例对此不作具体限定。
可选的,第二界面中还可以包括一个选择控件。其中,该选择控件可以用于用户选择哪些联系人为需要的联系人。
示例性的,如图4中的(b)所示,第二界面也可以为界面302b,界面302b中还包括一个选择控件34,在终端设备将第一界面更新显示为第二界面的情况下,界面302b中的选择控件34可以将全部的人脸图像圈在一个虚线围成的区域内,用于表示已选择全部的人脸图像,当然,用户可以移动第二界面中的任意一个人脸图像,例如用户可以将虚线围 成的区域内的任意一个人脸图像移除(包括删除和移动到第二界面中的其他区域)。假设第二输入为用户在界面302a中选择了人脸图像31,人脸图像32和人脸图像33,并将图标1移动至三个人脸图像所在的区域(例如,虚线圆圈34内)的输入。在将三个人脸图像全部移动至虚线圆圈内之后,终端设备显示的会话界面可以为图5中的(a)所示的界面303a,也可以为图5中的(b)所示的界面303b。其中,界面303a可以为一个群聊界面,该群聊界面中可以包括3个人脸图像对应的3个用户姓名。界面303b可以为一个群发界面,该群发界面中也可以包括3个人脸图像对应的3个用户姓名。
基于该方案,终端设备可以根据用户的第一输入显示上述至少一个人脸图像和上述至少一个通讯程序的图标,从而能够使得用户能够根据显示的该至少一个人脸图像,选择与哪些人脸图像对应的用户建立会话,因此,本公开实施例提供的会话创建方法,用户可以更加方便地根据终端设备显示的至少一个人脸图像,选择快速找到需要的联系人。
一种可能的实现方式中,本公开实施例提供的会话创建方法,在步骤203之后,还包括步骤205和步骤206:
步骤205、响应于第二输入,终端设备显示N个人脸图像和N个目标标识。
其中,每个人脸图像分别对应一个目标标识,该N个目标标识指示的N个用户包括至少一个人脸图像中的P个人脸图像指示的用户,该N个目标标识为目标通讯程序中的标识,P为小于或等于N的整数。
为了便于理解,以终端设备显示N个人脸图像和N个目标标识的界面为第三界面。具体的,终端设备可以将上述的第二界面更新显示为第三界面,第三界面包括N个人脸图像和N个目标标识。
示例性的,第三界面可以为一个建立群聊的界面,终端设备在接收到用户的第二输入之后,可以显示图6中的(a)所示的界面304a。
步骤206、终端设备接收用户的第三输入。
需要说明的是,第三输入为用户确定建立会话的输入,或者将选择的用户加入群聊的输入,可以为一个连续的输入,也可以为不连续的多个子输入,本公开实施例对此不作具体限定。
具体的,第三输入可以为用户在第三界面上的输入。
示例性的,第三输入可以为用户在界面中的确定建立的控件的输入,例如第三输入可以为在图7所示的界面304c中用户点击“群聊”的输入;第三输入也可以为一个快捷输入,例如,第三输入也可以是界面304c中所示的用户从屏幕底端向上滑动的输入。
进而,步骤204可以通过步骤204a执行。
步骤204a、响应于第三输入,终端设备显示会话界面。
具体的,响应于第三输入,终端设备可以将上述第二界面更新显示为会话界面。
基于该方案,终端设备显示目标标识之后,用户可以根据显示的目标标识和人脸图像确定人脸图像对应的联系人是否为用户需要的联系人。
可选的,第三输入为用户在显示该N个人脸图像和该N个目标标识之外的空白区域朝预设方向的滑动输入。
可以理解,第三输入为用户确定建立会话,或者确定加入以建立会话的输入。
例如,第三输入可以为用户在空白区域朝屏幕顶端滑动的输入。
基于该方案,用户可以在空白区域朝预设方向的滑动输入,来控制终端设备显示会话界面,使用该第三输入的操作更加快捷。
一种可能的实现方式中,本公开实施例提供的会话创建方法,在步骤203之后,还包括步骤207至步骤209:
步骤207、终端设备显示预设控件。
可选的,上述的第三界面还包括预设控件。
可选的,预设控件可以为一个文字表示的具有添加功能的控件,也可以为图标表示的具有添加功能的控件,本公开实施例对于预设控件的类型和显示的位置不作具体限定。
例如,图6中的(a)所示的界面304a中的预设控件为“添加”控件35,为文字类型的添加控件,图6中的(b)所示的界面304b中的预设控件为相机图标36,为图标类型的添加控件。
步骤208、终端设备接收用户对预设控件的第四输入。
可以理解的是,本公开实施例中,用户也可以通过预设控件在通讯程序的联系人列表中添加联系人,即本公开实施例的会话创建方法建立的会话中还可以包括用户直接从联系人列表中手动选择的联系人。
示例性的,第四输入可以为用户选中相机图标36(即预设控件)的输入,也可以为用户选中相机图标36并向上滑动的输入,例如在图8中的(a)所示的界面304b1中所示的输入。
需要说明的是,第四输入可以为一个连续的输入,可以为多个子输入组成的输入,本公开实施例对此不作具体限定。
步骤209、响应于第四输入,终端设备显示T个人脸图像和T个目标标识。
具体的,响应于第四输入,终端设备更新第三界面,更新后的第三界面包括T个人脸图像和T个目标标识。
其中,上述T个人脸图像包括上述N个人脸图像,上述T个目标标识包括上述N个目标标识,上述T个人脸图像中除上述N个人脸图像之外的其他人脸图像为第二图像中的人脸图像,第二图像为与第四输入对应的图像,上述T个目标标识中除上述N个目标标识之外的其他目标标识指示的用户为其他人脸图像指示的用户,T为正整数。
基于该方案,终端设备在第三界面中显示预设控件,可以方便用户在根据第一图像显示的N个目标标识和N个人脸图像确定是否继续添加其他联系人。
可选的,第四输入包括第一子输入和第二子输入。
一种可能的实现方式中,本公开实施例提供的会话创建方法,步骤209还可以通过步骤209a和步骤209b执行:
步骤209a、在第一区域显示N个人脸图像和N个目标标识的情况下,响应于用户对预设控件的第一子输入,终端设备在第二区域显示拍摄预览界面。
步骤209b、响应于用户对预设控件的第二子输入,执行拍摄操作,并在第二区域显示拍摄的第二图像,在第一区域显示第二图像中的第一人脸图像、第一目标标识,该N个人脸图像和该N个目标标识。
可以理解的是,第二图像中可以包括至少一个人脸图像。
示例性的,当第四输入为多个子输入组成的输入时,如图8中的(a)所示,用户在界 面304b1中先选中相机图标36并向上拖动,然后终端设备显示图8中的(b)所示的界面304b2,其中,界面304b2中包括图像采集区域(例如,拍摄预览界面)。用户可以在界面304b2中再次选中相机图标36并向下滑动,如图9中的(a)所示的界面304b3所示。之后终端设备可以在图9中的(b)所示的界面304b4中显示在图像采集区域获取的图像中的人脸图像和这些人脸图像对应的目标标识。
需要说明的是,本公开实施例中,界面304b2中仅以图像采集区域(包括相机预览界面)为例进行说明,界面304b2中也可以显示通讯程序的联系人列表,用户也可以在联系人列表中选择要添加的联系人,本公开实施例对此不作具体限定。
可以理解的是,本公开实施例中,仅以在图像采集区域中采集的包括一个人脸图像的图像为例进行说明,在实际通讯程序中,若图像采集区域中采集的图像包括多个人脸图像,则在图9中的(b)所示的界面304b4中的第一区域可以显示该多个人脸图像对应的目标标识。
基于该方案,终端设备可以根据用户对预设控件的第一子输入,在第二区域显示一个拍摄预览界面,然后终端设备接收用户对预设控件执行第二子输入,执行拍摄动作,并在第二区域显示拍摄的第二图像,在第一区域显示第二图像中的第一人脸图像、第一目标标识和之前显示的N个人脸图像和N个目标标识,能够使得用户可以根据具有人脸图像的图像继续添加用户。
一种可能的实现方式中,本公开实施例提供的会话创建方法,在步骤205之后,还包括步骤210和步骤211:
步骤210、终端设备接收用户的第五输入。
具体的,终端设备可以接收用户在上述第三界面上的第五输入。
可选的,第五输入可以为一个连续的输入,也可以为多个不连续的输入,本公开实施例对此不作具体限定。
可以理解的是,第五输入为用户将不需要的联系人移除的输入。
可选的,第三界面中还可以包括删除控件,第五输入具体可以为用户对第二人脸图像和删除控件的输入。
步骤211、响应于第五输入,终端设备显示J个人脸图像和J个目标标识。
其中,J个人脸图像为N个人脸图像中的图像,J个目标标识为N个目标标识中的标识,J为小于N的正整数。
响应于第五输入,终端设备更新第三界面,更新后的第三界面包括J个人脸图像和J个目标标识。
示例性的,假设图10中的(a)所示的界面304b5为第三界面,则用户可以在界面304b5中从“王五”所在的位置开始向下滑动,则终端设备可以将第三界面更新为图10中的(b)所示的界面304b6(即为更新后的第三界面),界面304b6中包括张三和李四,以及与张三和李四对应的人脸图像。
基于该方案,用户可以在第三界面中删除不需要的联系人。
一种可能的实现方式中,本公开实施例提供的会话创建方法,在步骤205之后,还可以包括步骤210a和步骤211a:
步骤210a、接收用户对第二人脸图像的第五输入。
其中,第二人脸图像为该N个人脸图像中的人脸图像。
步骤211a、响应于第五输入,删除第二人脸图像和对应的至少一个目标标识。
示例性的,假设用户一个人脸图像对应的一个或者对个目标标识,则当终端设备接收用户对第二人脸图像的滑动输入时,可以将该第二人脸图像和对应的目标标识均删除,也可以将该第二人脸图像和对应的部分标识删除。
需要说明的是,该N个人脸图像中可以包括相同的人脸图像,本公开实施例对此不作具体限定。
基于该方案,终端设备可以根据用户对终端设备显示的N个人脸图像中的第二人脸图像的输入,删除该第二人脸图像以及该第二人脸图像对应的至少一个目标标识,使得删除操作更加便捷。
可选的,第一输入包括第三子输入和第四子输入。
一种可能的实现方式中,本公开实施例提供的会话创建方法,步骤202具体可以通过步骤202a和步骤202b执行:
步骤202a、响应于接收到的用户的第三子输入,终端设备显示第一控件。
具体的,响应于接收到的用户的第三子输入,终端设备在第一界面中显示第一控件。
示例性的,第三子输入可以为用户点击屏幕的输入,也可以是用户在屏幕上的滑动输入。具体的,第三子输入可以为图11所示的界面305a中点击屏幕的输入,第一控件可以为界面305a中的控件37,其中控件37中可以显示文字“建立群聊”,或者也可以显示“创建会话”。当然,第三子输入也可以图12中所示的界面306a所示的向两个相反方向的滑动输入。第一控件也可以为界面306a中的控件38,控件38为一个圆形的控件,控件38中显示“加入群聊”的字样,当然控件38也可以为其他的形状,控件38上也可以显示其他的字样,本公开实施例对此不作具体限定。
步骤202b、响应于接收到的用户对第一控件的第四子输入,终端设备将显示第一图像的界面更新显示为包括至少一个通讯程序的图标的界面。
具体的,响应于接收到的用户对添加控件的第四子输入,终端设备将第一界面更新显示为第二界面。
示例性的,第二界面也可以为图12所示的界面306a。用户可以在界面306a中选择将第一控件移动至一个图标上,例如图13中的界面306a1所示,从而可以在该图标对应的通讯程序中建立会话。
需要说明的是,在第一输入为图12中所示的输入时,终端设备也可以在显示第一控件的情况下并显示多个通讯程序的图标。
基于该方案,终端设备可以在显示界面中显示第一控件,从而使得用户在第一控件上操作选择需要获取的信息。
一种可能的实现方式中,本公开实施例提供的会话创建方法,步骤205具体可以通过步骤205a执行:
步骤205a、响应于第二输入,终端设备显示该N个人脸图像、该N个目标标识和至少一个备选的会话标识,每个会话标识用于指示一个已建立的会话。
具体的,上述的第三界面还包括至少一个备选的会话标识。
进而,步骤206可以通过步骤206a执行:
步骤206a、终端设备接收用户对第一会话标识的第三输入。
进而,步骤204a可以通过步骤204a1执行:
步骤204a1、响应于第三输入,终端设备显示包括所述第一会话标识中的所有目标标识和该N个目标标识的会话界面。
其中,所述第一会话标识为所述至少一个备选会话标识中的一个标识。
可选的,第三输入还可以为用户对第一会话标识和第一目标标识的输入。第一会话标识为至少一个备选的会话标识中的标识,第一目标标识为N个目标标识中的标识,M个目标标识包括用于指示与第一会话对应的用户的标识和第一目标标识,第一会话为第一会话标识指示的会话。
示例性的,会话标识可以为会话的名称,例如一个群聊的名称。如图14中所示的界面304c所示,可以在第三界面中显示至少一个会话标识,用户可以选择一个会话的图标和联系人的名称(即第一目标标识),将该联系人加入该会话中,当然用户也可以点击会话标识,将第三界面中所有用户添加至该会话中,本公开实施例对此不作具体限定。
基于该方案,终端设备在第三界面显示了至少一个备选会话标识,能够使得用户将根据图像确定的联系人加入其中至少一个会话,使得加入现有的会话中的方式更加简便快捷。
一种可能的实现方式中,本公开实施例提供的会话创建方法,在步骤203之后,步骤212:
步骤212、终端设备显示所述N个人脸图像对应的N个指示标识。
其中,一个指示标识用于指示一个人脸图像与第三图像之间的相似度,第三图像为至少一个目标图像中与该一个人脸图像之间的相似度大于或等于相似度阈值的图像,该至少一个目标图像为目标通讯程序中与第二目标标识对应的图像,第二目标标识为与该一个人脸图像对应的目标标识。
可选的,第三界面还包括与N个人脸图像对应的N个指示标识。
可选的,N个指示标识可以为数字标识,也可以为文字标识,也可以是颜色标识,本公开实施例对此不作具体限定。
示例性的,以指示标识为数字标识为例进行说明,图15中的界面304a1中,依次从上到下按照相似度从大到小排列。其中,张三的用户信息中的人脸图像与张三所在行的第一图像中对应的人脸图像的相似度最高,其次是李四和王五。
具体的,在第一目标相似度大于或等于第一阈值的情况下,终端设备将第一目标相似度对应的联系人确定为第一人脸图像对应的联系人;在第一目标相似度小于第一阈值,且第二目标相似度大于或等于第一阈值的情况下,终端设备将第二目标相似度对应的联系人确定为第一人脸图像对应的联系人;其中,第一目标相似度为第一人脸图像和第二人脸图像的相似度,第二人脸图像为联系人列表中的头像的人脸图像或者联系人标签中的人脸图像,第一人脸图像为第一图像中的至少一个人脸图像中的任意一个人脸图像;第二目标相似度为第一人脸图像和第三人脸图像的相似度,第三人脸图像为不在联系人列表中且在包含用户的第二会话中的头像的人脸图像,第二会话为目标通讯程序中的会话。
需要说明的是,终端设备在计算相似度时可以选择一个通讯程序中的用户信息,用户信息中可以包括头像和标签,其中,头像可以为用户在自己的终端设备中为其他用户备注 的人脸图像,例如图16中所示的界面307中所示,头像为用户自己为用户会飞的猪备注的人脸图像,也可以为其他用户自己设置的图像;标签中的图像也可以为用户在自己的终端设备中为其他用户备注的人脸图像,也可以为其他用户自己设置的图像例如图16中的标签为用户会飞的猪自己设置的小狗的图像。本公开实施例对此不作具体限定。
可以理解的是,上述的任意一个第三界面,例如界面304b4、界面304c等都可以显示指示标识,用户可以根据指示标识确定哪些用户的用户信息中的人脸图像与第一图像中的人脸图像的相似度较高,从而可以参考选择需要创建会话发送信息的联系人。
基于该方案,终端设备可以在第三界面中显示每个联系人与对应的人脸图像的相似度,能够使得获取的信息更加准确,从而可以参考选择需要创建会话发送信息的联系人。
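The two-stage contact matching described in the passage above — first compare a detected face against the avatar/tag face images of contacts in the contact list; only if no candidate reaches the first threshold, fall back to avatars of users who are not in the contact list but share a conversation with the user — can be sketched as follows. The threshold value, all function names, and the abstract `similarity` callback are illustrative assumptions, not part of the disclosure:

```python
FIRST_THRESHOLD = 0.8  # illustrative; the disclosure does not fix a concrete value

def resolve_contact(face, contact_list, group_chat_members, similarity):
    """Two-stage matching: try the contact list first (first target
    similarity); if no candidate reaches the threshold, fall back to
    non-contact members of conversations that include the user
    (second target similarity). `similarity(face, candidate)` is any
    face-comparison function returning a score in [0, 1]."""
    # Stage 1: best match among contact-list avatars/tags.
    best = max(contact_list, key=lambda c: similarity(face, c), default=None)
    if best is not None and similarity(face, best) >= FIRST_THRESHOLD:
        return best
    # Stage 2: best match among group-chat members not in the contact list.
    best = max(group_chat_members, key=lambda c: similarity(face, c), default=None)
    if best is not None and similarity(face, best) >= FIRST_THRESHOLD:
        return best
    return None  # no candidate reached the threshold
```

Ranking all candidates by score, rather than returning only the best, would also support the similarity indication identifiers (e.g., the descending order shown for Zhang San, Li Si, and Wang Wu in FIG. 15).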
图17为本公开实施例提供的一种终端设备可能的结构示意图,如图17所示,终端设备400包括:接收模块401和显示模块402;接收模块401,用于接收用户对包括至少一个人脸图像的第一图像的第一输入;显示模块402,用于响应于接收模块401接收的第一输入,显示至少一个通讯程序的图标;接收模块401,还用于接收用户的第二输入;响应于接收模块401接收的第二输入,显示会话界面,会话界面包括M个目标标识;其中,每个目标标识用于指示一个用户,该M个目标标识指示的M个用户包括至少一个人脸图像中的K个人脸图像指示的用户,该M个目标标识为与第二输入对应的目标通讯程序中的标识,M和K均为正整数,且K小于或等于M。
可选的,显示模块402,具体用于响应于接收模块401接收的第一输入,显示第一图像中的上述至少一个人脸图像和上述至少一个通讯程序的图标。
可选的,显示模块402,还用于在接收模块401接收用户的第二输入之后,响应于第二输入,显示N个人脸图像和N个目标标识;其中,每个人脸图像分别对应一个目标标识,N个目标标识指示的N个用户包括至少一个人脸图像中的P个人脸图像指示的用户,该N个目标标识为目标通讯程序中的标识,P为小于或等于N的整数;接收模块401,还用于接收用户的第三输入;显示模块402,具体用于响应于接收模块401接收的第三输入,显示会话界面。
可选的,第三输入为用户在显示该N个人脸图像和该N个目标标识之外的空白区域朝预设方向的滑动输入。
可选的,显示模块402还用于在接收模块401接收第二输入之后,显示预设控件;接收模块401,还用于接收用户对预设控件的第四输入;显示模块402,用于响应于接收模块401接收的第四输入,显示T个人脸图像和T个目标标识;其中,T个人脸图像包括该N个人脸图像,该T个目标标识包括该N个目标标识,该T个人脸图像中除该N个人脸图像之外的其他人脸图像为第二图像中的人脸图像,第二图像为与第四输入对应的图像,该T个目标标识中除该N个目标标识之外的其他目标标识指示的用户为其他人脸图像指示的用户,T为正整数。
可选的,第四输入包括第一子输入和第二子输入;显示模块402,具体用于在第一区域显示该N个人脸图像和该N个目标标识的情况下,响应于用户对预设控件的第一子输入,在第二区域显示拍摄预览界面;响应于用户对预设控件的第二子输入,执行拍摄操作,在第二区域显示拍摄的第二图像,并在第一区域显示第二图像中的第一人脸图像、第一目标标识、该N个人脸图像和该N个目标标识。
可选的,接收模块401,还用于在显示模块402显示N个人脸图像和N个目标标识之后,接收用户的第五输入;显示模块402,还用于响应于接收模块401接收的第五输入,显示J个人脸图像和J个目标标识,J个人脸图像为N个人脸图像中的图像,J个目标标识为N个目标标识中的标识,J为小于N的正整数。
可选的,接收模块401,还用于在显示模块402显示N个人脸图像和N个目标标识之后,接收用户对第二人脸图像的第五输入;显示模块402,还用于响应于接收模块401接收的第五输入,删除第二人脸图像和对应的至少一个目标标识。
可选的,第一输入包括第一子输入和第二子输入;显示模块402,具体用于响应于接收模块401接收到的用户的第一子输入,显示添加控件;响应于接收模块401接收到的用户对添加控件的第二子输入,显示至少一个通讯程序的图标。
可选的,显示模块402,具体用于响应于第二输入,显示该N个人脸图像、该N个目标标识和至少一个备选会话标识,每个会话标识用于指示一个已建立的会话;接收模块401,具体用于接收用户对应第一会话标识的第三输入;显示模块402,具体用于,响应于接收模块401接收的第三输入,显示包括该第一会话标识中的所有目标标识和该N个目标标识的会话界面;其中,第一会话标识为至少一个备选会话标识中的标识。
可选的,显示模块402,还用于在接收模块401接收用户的第二输入之后,显示该N个人脸图像对应的N个指示标识;其中,一个指示标识用于指示一个人脸图像与第三图像之间的相似度,第三图像为至少一个目标图像中与该一个人脸图像之间的相似度大于或等于相似度阈值的图像,该至少一个目标图像为与目标通讯程序中与第二目标标识对应的图像,第二目标标识为与一个人脸图像对应的目标标识。
本公开实施例提供的终端设备400能够实现上述方法实施例中终端设备实现的各个过程,为避免重复,这里不再赘述。
本公开实施例提供的终端设备,首先,终端设备接收用户对包括至少一个人脸图像的第一图像的第一输入。然后,响应于第一输入,终端设备显示至少一个通讯程序的图标。其次,终端设备接收用户第二输入。最后,响应于第二输入,终端设备显示会话界面,该会话界面包括M个目标标识。每个目标标识用于指示一个用户,M个目标标识指示的M个用户包括至少一个人脸图像中的K个人脸图像指示的用户,M个目标标识为与第二输入对应的目标通讯程序中的标识,M和K均为正整数,且K小于或等于M。由于第一图像包括人脸图像,终端设备可以根据接收到的用户对第一图像的第一输入,向用户显示至少一个通讯程序的图标,从而能够使得用户选择通讯程序,以及选择与哪可以些人脸图像对应的用户,在用户选择完成之后,终端设备显示包括该至少一个人脸图像中的K个人脸图像指示的用户会话界面,因此,本公开实施例提供的会话创建方法可以根据包括人脸图像的图像快速找到需要的联系人,进而能够快速创建会话或者将用户加入已有群聊中。
图18为实现本公开各个实施例的一种终端设备的硬件结构示意图,该终端设备100包括但不限于:射频单元101、网络模块102、音频输出单元103、输入单元104、传感器105、显示单元106、用户输入单元107、接口单元108、存储器109、处理器110、以及电源111等部件。本领域技术人员可以理解,图18中示出的终端设备结构并不构成对终端设备的限定,终端设备可以包括比图示更多或更少的部件,或者组合某些部件,或者不同的部件布置。在本公开实施例中,终端设备包括但不限于手机、平板电脑、笔记本电脑、 掌上电脑、车载终端设备、可穿戴设备、以及计步器等。
其中,用户输入单元107,用于接收用户对包括至少一个人脸图像的第一图像的第一输入;显示单元106,用于响应于该第一输入,显示至少一个通讯程序的图标;用户输入单元107,还用于接收用户的第二输入;显示单元106,还用于响应于该第二输入,显示会话界面,该会话界面包括M个目标标识;其中,每个目标标识用于指示一个用户,该M个目标标识指示的M个用户包括该至少一个人脸图像中的K个人脸图像指示的用户,该M个目标标识为与该第二输入对应的目标通讯程序中的标识,M和K均为正整数,且K小于或等于M。
本公开实施例提供的终端设备,首先,终端设备接收用户对包括至少一个人脸图像的第一图像的第一输入。然后,响应于第一输入,终端设备显示至少一个通讯程序的图标。其次,终端设备接收用户的第二输入。最后,响应于第二输入,终端设备显示会话界面,该会话界面包括M个目标标识。其中,每个目标标识用于指示一个用户,M个目标标识指示的M个用户包括至少一个人脸图像中的K个人脸图像指示的用户,M个目标标识为与第二输入对应的目标通讯程序中的标识,M和K均为正整数,且K小于或等于M。由于第一图像包括人脸图像,终端设备可以根据接收到的用户对第一图像的第一输入,向用户显示至少一个通讯程序的图标,从而能够使得用户选择通讯程序,以及选择与哪可以些人脸图像对应的用户,在用户选择完成之后,终端设备显示包括该至少一个人脸图像中的K个人脸图像指示的用户会话界面,因此,本公开实施例提供的会话创建方法可以根据包括人脸图像的图像快速找到需要的联系人,进而能够快速创建会话或者将用户加入已有群聊中。
应理解的是,本公开实施例中,射频单元101可用于收发信息或通话过程中,信号的接收和发送,具体的,将来自基站的下行数据接收后,给处理器110处理;另外,将上行的数据发送给基站。通常,射频单元101包括但不限于天线、至少一个放大器、收发信机、耦合器、低噪声放大器、双工器等。此外,射频单元101还可以通过无线通信系统与网络和其他设备通信。
终端设备通过网络模块102为用户提供了无线的宽带互联网访问,如帮助用户收发电子邮件、浏览网页和访问流式媒体等。
音频输出单元103可以将射频单元101或网络模块102接收的或者在存储器109中存储的音频数据转换成音频信号并且输出为声音。而且,音频输出单元103还可以提供与终端设备100执行的特定功能相关的音频输出(例如,呼叫信号接收声音、消息接收声音等等)。音频输出单元103包括扬声器、蜂鸣器以及受话器等。
输入单元104用于接收音频或视频信号。输入单元104可以包括图形处理器(Graphics Processing Unit,GPU)1041和麦克风1042,图形处理器1041对在视频捕获模式或图像捕获模式中由图像捕获装置(如摄像头)获得的静态图片或视频的图像数据进行处理。处理后的图像帧可以显示在显示单元106上。经图形处理器1041处理后的图像帧可以存储在存储器109(或其它存储介质)中或者经由射频单元101或网络模块102进行发送。麦克风1042可以接收声音,并且能够将这样的声音处理为音频数据。处理后的音频数据可以在电话通话模式的情况下转换为可经由射频单元101发送到移动通信基站的格式输出。
终端设备100还包括至少一种传感器105,比如光传感器、运动传感器以及其他传感 器。具体地,光传感器包括环境光传感器及接近传感器,其中,环境光传感器可根据环境光线的明暗来调节显示面板1061的亮度,接近传感器可在终端设备100移动到耳边时,关闭显示面板1061和/或背光。作为运动传感器的一种,加速计传感器可检测各个方向上(一般为三轴)加速度的大小,静止时可检测出重力的大小及方向,可用于识别终端设备姿态(比如横竖屏切换、相关游戏、磁力计姿态校准)、振动识别相关功能(比如计步器、敲击)等;传感器105还可以包括指纹传感器、压力传感器、虹膜传感器、分子传感器、陀螺仪、气压计、湿度计、温度计、红外线传感器等,在此不再赘述。
显示单元106用于显示由用户输入的信息或提供给用户的信息。显示单元106可包括显示面板1061,可以采用液晶显示器(Liquid Crystal Display,LCD)、有机发光二极管(Organic Light-Emitting Diode,OLED)等形式来配置显示面板1061。
用户输入单元107可用于接收输入的数字或字符信息,以及产生与终端设备的用户设置以及功能控制有关的键信号输入。具体地,用户输入单元107包括触控面板1071以及其他输入设备1072。触控面板1071,也称为触摸屏,可收集用户在其上或附近的触摸操作(比如用户使用手指、触笔等任何适合的物体或附件在触控面板1071上或在触控面板1071附近的操作)。触控面板1071可包括触摸检测装置和触摸控制器两个部分。其中,触摸检测装置检测用户的触摸方位,并检测触摸操作带来的信号,将信号传送给触摸控制器;触摸控制器从触摸检测装置上接收触摸信息,并将它转换成触点坐标,再送给处理器110,接收处理器110发来的命令并加以执行。此外,可以采用电阻式、电容式、红外线以及表面声波等多种类型实现触控面板1071。除了触控面板1071,用户输入单元107还可以包括其他输入设备1072。具体地,其他输入设备1072可以包括但不限于物理键盘、功能键(比如音量控制按键、开关按键等)、轨迹球、鼠标、操作杆,在此不再赘述。
进一步的,触控面板1071可覆盖在显示面板1061上,当触控面板1071检测到在其上或附近的触摸操作后,传送给处理器110以确定触摸事件的类型,随后处理器110根据触摸事件的类型在显示面板1061上提供相应的视觉输出。虽然在图18中,触控面板1071与显示面板1061是作为两个独立的部件来实现终端设备的输入和输出功能,但是在某些实施例中,可以将触控面板1071与显示面板1061集成而实现终端设备的输入和输出功能,具体此处不做限定。
接口单元108为外部装置与终端设备100连接的接口。例如,外部装置可以包括有线或无线头戴式耳机端口、外部电源(或电池充电器)端口、有线或无线数据端口、存储卡端口、用于连接具有识别模块的装置的端口、音频输入/输出(I/O)端口、视频I/O端口、耳机端口等等。接口单元108可以用于接收来自外部装置的输入(例如,数据信息、电力等等)并且将接收到的输入传输到终端设备100内的一个或多个元件或者可以用于在终端设备100和外部装置之间传输数据。
存储器109可用于存储软件程序以及各种数据。存储器109可主要包括存储程序区和存储数据区,其中,存储程序区可存储操作系统、至少一个功能所需的应用程序(比如声音播放功能、图像播放功能等)等;存储数据区可存储根据手机的使用所创建的数据(比如音频数据、电话本等)等。此外,存储器109可以包括高速随机存取存储器,还可以包括非易失性存储器,例如至少一个磁盘存储器件、闪存器件、或其他易失性固态存储器件。
处理器110是终端设备的控制中心,利用各种接口和线路连接整个终端设备的各个部 分,通过运行或执行存储在存储器109内的软件程序和/或模块,以及调用存储在存储器109内的数据,执行终端设备的各种功能和处理数据,从而对终端设备进行整体监控。处理器110可包括一个或多个处理单元;可选地,处理器110可集成应用处理器和调制解调处理器,其中,应用处理器主要处理操作系统、用户界面和应用程序等,调制解调处理器主要处理无线通信。可以理解的是,上述调制解调处理器也可以不集成到处理器110中。
终端设备100还可以包括给各个部件供电的电源111(比如电池),可选地,电源111可以通过电源管理系统与处理器110逻辑相连,从而通过电源管理系统实现管理充电、放电、以及功耗管理等功能。
另外,终端设备100包括一些未示出的功能模块,在此不再赘述。
可选的,本公开实施例还提供一种终端设备,结合图18,包括处理器110,存储器109,存储在存储器109上并可在处理器110上运行的计算机程序,该计算机程序被处理器110执行时实现上述会话创建方法实施例的各个过程,且能达到相同的技术效果,为避免重复,这里不再赘述。
本公开实施例还提供一种计算机可读存储介质,计算机可读存储介质上存储有计算机程序,该计算机程序被处理器执行时实现上述会话创建方法实施例的各个过程,且能达到相同的技术效果,为避免重复,这里不再赘述。其中,所述的计算机可读存储介质,如只读存储器(Read-Only Memory,ROM)、随机存取存储器(Random Access Memory,RAM)、磁碟或者光盘等。
需要说明的是,在本文中,术语“包括”、“包含”或者其任何其他变体意在涵盖非排他性的包含,从而使得包括一系列要素的过程、方法、物品或者装置不仅包括那些要素,而且还包括没有明确列出的其他要素,或者是还包括为这种过程、方法、物品或者装置所固有的要素。在没有更多限制的情况下,由语句“包括一个……”限定的要素,并不排除在包括该要素的过程、方法、物品或者装置中还存在另外的相同要素。
通过以上的实施方式的描述,本领域的技术人员可以清楚地了解到上述实施例方法可借助软件加必需的通用硬件平台的方式来实现,当然也可以通过硬件,但很多情况下前者是更佳的实施方式。基于这样的理解,本公开的技术方案本质上或者说对相关技术中做出贡献的部分可以以软件产品的形式体现出来,该计算机软件产品存储在一个存储介质(如ROM/RAM、磁碟、光盘)中,包括若干指令用以使得一台终端设备(可以是手机,计算机,服务器,空调器,或者网络设备等)执行本公开各个实施例所述的方法。
上面结合附图对本公开的实施例进行了描述,但是本公开并不局限于上述的具体实施方式,上述的具体实施方式仅仅是示意性的,而不是限制性的,本领域的普通技术人员在本公开的启示下,在不脱离本公开宗旨和权利要求所保护的范围情况下,还可做出很多形式,均属于本公开的保护之内。

Claims (12)

  1. A session creation method, wherein the method comprises:
    receiving a user's first input on a first image comprising at least one face image;
    in response to the first input, displaying an icon of at least one communication program;
    receiving the user's second input;
    in response to the second input, displaying a conversation interface, the conversation interface comprising M target identifiers;
    wherein each target identifier is used to indicate one user, the M users indicated by the M target identifiers comprise users indicated by K face images among the at least one face image, the M target identifiers are identifiers in a target communication program corresponding to the second input, M and K are both positive integers, and K is less than or equal to M.
  2. The method according to claim 1, wherein the displaying an icon of at least one communication program in response to the first input comprises:
    in response to the first input, displaying the at least one face image in the first image and the icon of the at least one communication program.
  3. The method according to claim 2, wherein after the receiving the user's second input, the method further comprises:
    in response to the second input, displaying N face images and N target identifiers, wherein each face image corresponds to one target identifier, the N users indicated by the N target identifiers comprise users indicated by P face images among the at least one face image, the N target identifiers are identifiers in the target communication program, and P is an integer less than or equal to N;
    receiving the user's third input;
    wherein the displaying a conversation interface in response to the second input comprises:
    in response to the third input, displaying the conversation interface.
  4. The method according to claim 3, wherein the third input is a sliding input by the user in a preset direction on a blank area outside the displayed N face images and N target identifiers.
  5. The method according to claim 3, wherein after the receiving the second input, the method further comprises:
    displaying a preset control;
    receiving the user's fourth input on the preset control;
    in response to the fourth input, displaying T face images and T target identifiers;
    wherein the T face images comprise the N face images, the T target identifiers comprise the N target identifiers, face images among the T face images other than the N face images are face images in a second image, the second image is an image corresponding to the fourth input, users indicated by target identifiers among the T target identifiers other than the N target identifiers are users indicated by the other face images, and T is a positive integer.
  6. The method according to claim 5, wherein the fourth input comprises a first sub-input and a second sub-input; and the displaying T face images and T target identifiers in response to the fourth input comprises:
    in a case where the N face images and the N target identifiers are displayed in a first area, in response to the user's first sub-input on the preset control, displaying a shooting preview interface in a second area;
    in response to the user's second sub-input on the preset control, performing a shooting operation, displaying the captured second image in the second area, and displaying, in the first area, a first face image in the second image, a first target identifier, the N face images, and the N target identifiers.
  7. The method according to claim 3, wherein after the displaying N face images and N target identifiers, the method further comprises:
    receiving the user's fifth input on a second face image;
    in response to the fifth input, deleting the second face image and the corresponding at least one target identifier.
  8. The method according to claim 3, wherein the displaying a conversation interface in response to the second input comprises:
    in response to the second input, displaying the N face images, the N target identifiers, and at least one candidate conversation identifier, each conversation identifier being used to indicate an established conversation;
    the receiving the user's third input comprises:
    receiving the user's third input on a first conversation identifier;
    and the displaying the conversation interface in response to the third input comprises:
    in response to the third input, displaying a conversation interface comprising all target identifiers in the first conversation identifier and the N target identifiers;
    wherein the first conversation identifier is one of the at least one candidate conversation identifier.
  9. The method according to claim 3, wherein after the receiving the user's second input, the method further comprises:
    displaying N indication identifiers corresponding to the N face images;
    wherein one indication identifier is used to indicate the similarity between one face image and a third image, the third image is an image, among at least one target image, whose similarity to the one face image is greater than or equal to a similarity threshold, the at least one target image is an image in the target communication program corresponding to a second target identifier, and the second target identifier is a target identifier corresponding to the one face image.
  10. 一种终端设备,其中,所述终端设备包括:接收模块和显示模块;
    所述接收模块,用于接收用户对包括至少一个人脸图像的第一图像的第一输入;
    所述显示模块,用于响应于所述接收模块接收的所述第一输入,显示至少一个通讯程序的图标;
    所述接收模块,还用于接收用户的第二输入;
    响应于所述接收模块接收的所述第二输入,显示会话界面,所述会话界面包括M个目标标识;
    其中,每个目标标识用于指示一个用户,所述M个目标标识指示的M个用户包括所述至少一个人脸图像中的K个人脸图像指示的用户,所述M个目标标识为与所述第二输入对应的目标通讯程序中的标识,M和K均为正整数,且K小于或等于M。
  11. 一种终端设备,其中,所述终端设备包括处理器、存储器及存储在所述存储器上并可在所述处理器上运行的计算机程序,所述计算机程序被所述处理器执行时实现如权利要求1至9中任一项所述的会话创建方法的步骤。
  12. 一种计算机可读存储介质,其中,所述计算机可读存储介质上存储计算机程序,所述计算机程序被处理器执行时实现如权利要求1至9中任一项所述的会话创建方法的步骤。
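Purely as an illustration of claim 1 (and not as the patent's own implementation), the mapping from detected face images to session members can be sketched in Python. Every name below (create_session, target_program_contacts, the face and user identifiers) is hypothetical; the sketch only shows how K recognized faces can yield K user identifiers, with further identifiers added so that the session interface ends up with M >= K target identifiers:

```python
# Illustrative sketch of the session-creation flow in claim 1.
# All identifiers here are invented for illustration; they do not
# come from the patent or from any real messaging API.

def create_session(first_image_faces, target_program_contacts, extra_user_ids=()):
    """Resolve face images detected in the first image to user identifiers
    of the target communication program, then assemble the session members.

    first_image_faces: face descriptors detected in the first image
    target_program_contacts: mapping from a face descriptor to a user
        identifier registered in the target communication program
    extra_user_ids: identifiers added beyond the recognized faces, so that
        K (matched faces) <= M (total target identifiers in the session)
    """
    matched_ids = []
    for face in first_image_faces:
        user_id = target_program_contacts.get(face)
        if user_id is not None:  # the K faces that resolve to known users
            matched_ids.append(user_id)
    member_ids = matched_ids + list(extra_user_ids)  # M identifiers in total
    return {"members": member_ids, "matched_faces": len(matched_ids)}


session = create_session(
    ["face_a", "face_b", "face_c"],
    {"face_a": "alice", "face_b": "bob"},  # face_c has no contact match
    extra_user_ids=["carol"],
)
# Here K = 2 matched faces and M = 3 target identifiers, so K <= M.
```

The dictionary lookup stands in for whatever face-recognition and contact-matching the terminal device actually performs; the point is only the K-of-M relationship stated in the claim.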
PCT/CN2019/127140 2018-12-24 2019-12-20 Session creation method and terminal device WO2020135269A1 (zh)

Priority Applications (5)

Application Number Priority Date Filing Date Title
EP19905951.0A EP3905037B1 (en) 2018-12-24 2019-12-20 Session creation method and terminal device
KR1020217021573A KR102657949B1 (ko) 2018-12-24 2019-12-20 세션 생성 방법 및 단말 장치
ES19905951T ES2976717T3 (es) 2018-12-24 2019-12-20 Método de creación de sesión y dispositivo terminal
JP2021537142A JP7194286B2 (ja) 2018-12-24 2019-12-20 セッション作成方法及び端末機器
US17/357,130 US12028476B2 (en) 2018-12-24 2021-06-24 Conversation creating method and terminal device

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201811584399.7 2018-12-24
CN201811584399.7A 2018-12-24 Session creation method and terminal device

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US17/357,130 Continuation US12028476B2 (en) 2018-12-24 2021-06-24 Conversation creating method and terminal device

Publications (1)

Publication Number Publication Date
WO2020135269A1 true WO2020135269A1 (zh) 2020-07-02

Family

ID=66451382

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/127140 WO2020135269A1 (zh) 2018-12-24 2019-12-20 Session creation method and terminal device

Country Status (7)

Country Link
US (1) US12028476B2 (zh)
EP (1) EP3905037B1 (zh)
JP (1) JP7194286B2 (zh)
KR (1) KR102657949B1 (zh)
CN (1) CN109766156B (zh)
ES (1) ES2976717T3 (zh)
WO (1) WO2020135269A1 (zh)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109766156B (zh) 2018-12-24 2020-09-29 Vivo Mobile Communication Co., Ltd. Session creation method and terminal device
CN111835531B (zh) * 2020-07-30 2023-08-25 Tencent Technology (Shenzhen) Company Limited Session processing method and apparatus, computer device, and storage medium
CN115378897B (zh) * 2022-08-26 2024-08-06 Vivo Mobile Communication Co., Ltd. Temporary session establishment method and apparatus, electronic device, and readable storage medium
CN117676312A (zh) * 2023-12-01 2024-03-08 Vivo Mobile Communication Co., Ltd. Communication method, communication apparatus, and electronic device

Citations (5)

Publication number Priority date Publication date Assignee Title
CN105120084A (zh) * 2015-07-29 2015-12-02 小米科技有限责任公司 基于图像的通信方法及装置
US20160315886A1 (en) * 2014-06-24 2016-10-27 Tencent Technology (Shenzhen) Company Limited Network information push method, apparatus and system based on instant messaging
CN106559558A (zh) * 2015-09-30 2017-04-05 北京奇虎科技有限公司 一种基于图像识别的获取和激活通讯方式的方法及装置
CN106791182A (zh) * 2017-01-20 2017-05-31 维沃移动通信有限公司 一种基于图像的聊天方法及移动终端
CN109766156A (zh) * 2018-12-24 2019-05-17 维沃移动通信有限公司 一种会话创建方法及终端设备

Family Cites Families (4)

Publication number Priority date Publication date Assignee Title
KR101978205B1 (ko) * 2012-06-07 2019-05-14 LG Electronics Inc. Mobile terminal, control method thereof, and recording medium therefor
CN104363166B (zh) * 2014-11-27 2018-09-04 Xiaomi Inc. Instant messaging method and apparatus, and smart terminal
KR101632435B1 (ko) * 2015-10-20 2016-06-21 이요훈 SNS system using a wired/wireless IP-based GUI and call method using the same
CN106302137A (zh) * 2016-10-31 2017-01-04 Nubia Technology Co., Ltd. Group chat message processing apparatus and method


Non-Patent Citations (1)

Title
See also references of EP3905037A4 *

Also Published As

Publication number Publication date
KR102657949B1 (ko) 2024-04-15
CN109766156A (zh) 2019-05-17
JP7194286B2 (ja) 2022-12-21
KR20210100171A (ko) 2021-08-13
EP3905037A4 (en) 2022-02-23
JP2022515443A (ja) 2022-02-18
EP3905037B1 (en) 2024-03-13
US12028476B2 (en) 2024-07-02
ES2976717T3 (es) 2024-08-07
US20210320995A1 (en) 2021-10-14
CN109766156B (zh) 2020-09-29
EP3905037A1 (en) 2021-11-03


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 19905951; Country of ref document: EP; Kind code of ref document: A1)
ENP Entry into the national phase (Ref document number: 2021537142; Country of ref document: JP; Kind code of ref document: A)
NENP Non-entry into the national phase (Ref country code: DE)
ENP Entry into the national phase (Ref document number: 20217021573; Country of ref document: KR; Kind code of ref document: A)
ENP Entry into the national phase (Ref document number: 2019905951; Country of ref document: EP; Effective date: 20210726)