WO2020135269A1 - Session creation method and terminal device
- Publication number
- WO2020135269A1 (PCT application No. PCT/CN2019/127140)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- input
- user
- target
- image
- face image
- Prior art date
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
- H04N23/63—Control of cameras or camera modules by using electronic viewfinders
- H04N23/631—Graphical user interfaces [GUI] specially adapted for controlling image capture or setting capture parameters
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/22—Matching criteria, e.g. proximity measures
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0481—Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
- G06F3/04817—Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance using icons
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0481—Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
- G06F3/0482—Interaction with lists of selectable items, e.g. menus
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0484—Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
- G06F3/0486—Drag-and-drop
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0487—Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
- G06F3/0488—Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0487—Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
- G06F3/0488—Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures
- G06F3/04883—Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures for inputting data by handwriting, e.g. gesture or text
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/44—Arrangements for executing specific programs
- G06F9/448—Execution paradigms, e.g. implementations of programming paradigms
- G06F9/4482—Procedural
- G06F9/4484—Executing subprograms
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/44—Arrangements for executing specific programs
- G06F9/448—Execution paradigms, e.g. implementations of programming paradigms
- G06F9/4488—Object-oriented
- G06F9/449—Object-oriented method invocation or resolution
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/44—Arrangements for executing specific programs
- G06F9/451—Execution arrangements for user interfaces
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/161—Detection; Localisation; Normalisation
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L12/00—Data switching networks
- H04L12/02—Details
- H04L12/16—Arrangements for providing special services to substations
- H04L12/18—Arrangements for providing special services to substations for broadcast or conference, e.g. multicast
- H04L12/1813—Arrangements for providing special services to substations for broadcast or conference, e.g. multicast for computer conferences, e.g. chat rooms
- H04L12/1818—Conference organisation arrangements, e.g. handling schedules, setting up parameters needed by nodes to attend a conference, booking network resources, notifying involved parties
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L67/00—Network arrangements or protocols for supporting network services or applications
- H04L67/14—Session management
- H04L67/146—Markers for unambiguous identification of a particular session, e.g. session cookie or URL-encoding
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04M—TELEPHONIC COMMUNICATION
- H04M1/00—Substation equipment, e.g. for use by subscribers
- H04M1/72—Mobile telephones; Cordless telephones, i.e. devices for establishing wireless links to base stations without route selection
- H04M1/724—User interfaces specially adapted for cordless or mobile telephones
- H04M1/72469—User interfaces specially adapted for cordless or mobile telephones for operating the device by selecting functions from two or more displayed items, e.g. menus or icons
- H04M1/72472—User interfaces specially adapted for cordless or mobile telephones for operating the device by selecting functions from two or more displayed items, e.g. menus or icons wherein the items are sorted according to specific criteria, e.g. frequency of use
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
- H04N23/63—Control of cameras or camera modules by using electronic viewfinders
- H04N23/631—Graphical user interfaces [GUI] specially adapted for controlling image capture or setting capture parameters
- H04N23/632—Graphical user interfaces [GUI] specially adapted for controlling image capture or setting capture parameters for displaying or modifying preview images prior to image capturing, e.g. variety of image resolutions or capturing parameters
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04M—TELEPHONIC COMMUNICATION
- H04M1/00—Substation equipment, e.g. for use by subscribers
- H04M1/26—Devices for calling a subscriber
- H04M1/27—Devices whereby a plurality of signals may be stored simultaneously
- H04M1/274—Devices whereby a plurality of signals may be stored simultaneously with provision for storing more than one subscriber number at a time, e.g. using toothed disc
- H04M1/2745—Devices whereby a plurality of signals may be stored simultaneously with provision for storing more than one subscriber number at a time, e.g. using toothed disc using static electronic memories, e.g. chips
- H04M1/27467—Methods of retrieving data
- H04M1/27475—Methods of retrieving data using interactive graphical means or pictorial representations
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04M—TELEPHONIC COMMUNICATION
- H04M1/00—Substation equipment, e.g. for use by subscribers
- H04M1/72—Mobile telephones; Cordless telephones, i.e. devices for establishing wireless links to base stations without route selection
- H04M1/724—User interfaces specially adapted for cordless or mobile telephones
- H04M1/72403—User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality
- H04M1/7243—User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality with interactive means for internal management of messages
- H04M1/72436—User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality with interactive means for internal management of messages for text messaging, e.g. short messaging services [SMS] or e-mails
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04M—TELEPHONIC COMMUNICATION
- H04M2250/00—Details of telephonic subscriber devices
- H04M2250/62—Details of telephonic subscriber devices user interface aspects of conference calls
Definitions
- the embodiments of the present disclosure relate to the field of communication technologies, and in particular, to a session creation method and terminal device.
- the user can find the multiple contacts in the contact list of a communication program and trigger the terminal device to create a group chat for these contacts; the user can then trigger the terminal device to send messages to these contacts through the group chat, so that these contacts receive the messages triggered by the user.
- Embodiments of the present disclosure provide a session creation method and a terminal device to solve the problem that group chat creation is slow when a user cannot obtain a contact's name.
- an embodiment of the present disclosure provides a session creation method, including: receiving a user's first input to a first image that includes at least one face image; in response to the first input, displaying an icon of at least one communication program; receiving a second input of the user; and in response to the second input, displaying a conversation interface, where the conversation interface includes M target identifiers, each target identifier is used to indicate a user, the M users indicated by the M target identifiers include users indicated by K face images in the at least one face image, the M target identifiers are identifiers in the target communication program corresponding to the second input, M and K are both positive integers, and K is less than or equal to M.
- an embodiment of the present disclosure also provides a terminal device that includes a receiving module and a display module. The receiving module is configured to receive a user's first input to a first image that includes at least one face image; the display module is configured to display an icon of at least one communication program in response to the first input received by the receiving module; the receiving module is further configured to receive a second input of the user; and the display module is configured to display, in response to the second input received by the receiving module, a conversation interface that includes M target identifiers. Each target identifier is used to indicate a user, the M users indicated by the M target identifiers include users indicated by K face images in the at least one face image, the M target identifiers are identifiers in the target communication program corresponding to the second input, M and K are both positive integers, and K is less than or equal to M.
- an embodiment of the present disclosure also provides a terminal device, including a processor, a memory, and a computer program stored on the memory and executable on the processor; when the computer program is executed by the processor, the steps of the session creation method described in the first aspect are implemented.
- an embodiment of the present disclosure also provides a computer-readable storage medium storing a computer program; when the computer program is executed by a processor, the steps of the session creation method described in the first aspect are implemented.
- the terminal device receives a user's first input to a first image that includes at least one face image. In response to the first input, the terminal device displays an icon of at least one communication program. The terminal device then receives the user's second input and, in response to the second input, displays a conversation interface that includes M target identifiers. Each target identifier is used to indicate a user, the M users indicated by the M target identifiers include users indicated by K face images in the at least one face image, and the M target identifiers are identifiers in the target communication program corresponding to the second input.
- M and K are positive integers, and K is less than or equal to M.
- in this way, according to the received first input of the user to the first image, the terminal device can display the icon of at least one communication program to the user, so that the user can select a communication program and select the users corresponding to the face images. After the user's selection is completed, the terminal device displays a conversation interface that includes identifiers of the users indicated by K face images in the at least one face image. Therefore, with the session creation method provided by the embodiments of the present disclosure, the user can quickly find the required contacts based on an image containing face images, and can then quickly create a conversation or add the users to an existing group chat.
- FIG. 1 is a schematic structural diagram of a possible Android operating system provided by an embodiment of the present disclosure
- FIG. 2 is a schematic flowchart of a session creation method according to an embodiment of the present disclosure
- FIG. 3 is a first schematic diagram of a display interface provided by an embodiment of the present disclosure.
- FIG. 4 is a second schematic diagram of a display interface provided by an embodiment of the present disclosure.
- FIG. 5 is a third schematic diagram of a display interface provided by an embodiment of the present disclosure.
- FIG. 6 is a fourth schematic diagram of a display interface provided by an embodiment of the present disclosure.
- FIG. 7 is a fifth schematic diagram of a display interface provided by an embodiment of the present disclosure.
- FIG. 8 is a sixth schematic diagram of a display interface provided by an embodiment of the present disclosure.
- FIG. 9 is a seventh schematic diagram of a display interface provided by an embodiment of the present disclosure.
- FIG. 10 is an eighth schematic diagram of a display interface provided by an embodiment of the present disclosure.
- FIG. 11 is a ninth schematic diagram of a display interface provided by an embodiment of the present disclosure.
- FIG. 12 is a tenth schematic diagram of a display interface provided by an embodiment of the present disclosure.
- FIG. 13 is an eleventh schematic diagram of a display interface provided by an embodiment of the present disclosure.
- FIG. 14 is a twelfth schematic diagram of a display interface provided by an embodiment of the present disclosure.
- FIG. 15 is a thirteenth schematic diagram of a display interface provided by an embodiment of the present disclosure.
- FIG. 16 is a fourteenth schematic diagram of a display interface provided by an embodiment of the present disclosure.
- FIG. 17 is a schematic structural diagram of a terminal device according to an embodiment of the present disclosure.
- FIG. 18 is a schematic diagram of a hardware structure of a terminal device according to various embodiments of the present disclosure.
- the terms "first" and "second" in the specification and claims of the present disclosure are used to distinguish different objects, not to describe a specific order of the objects.
- for example, the first added control and the second added control are used to distinguish different added controls, rather than to describe a specific order of the added controls.
- the terminal device in the embodiment of the present disclosure may be a terminal device with an operating system.
- the operating system may be an Android operating system, an iOS operating system, or another possible operating system, which is not specifically limited in the embodiments of the present disclosure.
- the following uses the Android operating system as an example to introduce the software environment to which the session creation method provided by the embodiments of the present disclosure is applied.
- FIG. 1 is a schematic structural diagram of a possible Android operating system provided by an embodiment of the present disclosure.
- the architecture of the Android operating system includes four layers, namely: an application program layer, an application program framework layer, a system runtime library layer, and a kernel layer (specifically, a Linux kernel layer).
- the application layer includes various applications in the Android operating system (including system applications and third-party applications).
- the application framework layer is the framework of the application. Developers can develop some applications based on the application framework layer while observing the development principles of the application framework.
- the system runtime library layer includes a library (also called a system library) and an Android operating system operating environment.
- the library mainly provides various resources required by the Android operating system.
- the operating environment of the Android operating system is used to provide a software environment for the Android operating system.
- the kernel layer is the operating system layer of the Android operating system, and belongs to the bottom layer of the Android operating system software layer.
- the kernel layer provides core system services and hardware-related drivers for the Android operating system based on the Linux kernel.
- a developer may develop, based on the system architecture of the Android operating system shown in FIG. 1, a software program that implements the session creation method provided by the embodiments of the present disclosure, so that the session creation method can run on the Android operating system shown in FIG. 1; that is, the processor or the terminal device can implement the session creation method provided by the embodiments of the present disclosure by running the software program on the Android operating system.
- FIG. 2 is a schematic flowchart of a session creation method according to an embodiment of the present disclosure. As shown in FIG. 2, the session creation method includes steps 201 to 204:
- Step 201 The terminal device receives a user's first input of a first image including at least one face image.
- in the embodiments of the present disclosure, the case where the first image is displayed in a first interface is taken as an example for description.
- the first interface may be an interface in which the terminal device collects images (i.e., a shooting preview interface), or an interface in which the terminal device displays an image (for example, an interface for viewing an image that the user selects from an album or receives in an application), which is not specifically limited in the embodiments of the present disclosure.
- FIG. 3 is a schematic diagram of a display interface provided by an embodiment of the present disclosure.
- the first interface may be the interface 301a shown in (a) in FIG. 3 or the interface 301b shown in (b) in FIG. 3.
- the interface 301a is a shooting preview interface of the camera of the terminal device
- the interface 301b is a display interface for displaying images of the terminal device.
- a "face check person” or “check person” control can also be displayed in the first interface, and can be displayed in the adjacent area of other controls in the shooting interface (for example, can be displayed in the interface 301a The right area of "recording"), or after the user selects the first image, it can be displayed in the adjacent area of other controls, then the first input can be the input to the "face check person” or "check person” control.
- the "face check person” and “check person” controls may not be displayed in the first interface, and the face check person function may be enabled by receiving a user's quick input (for example, long-pressing the screen). No specific restrictions.
- the first input may be touch screen input, fingerprint input, gravity input, key input, and the like.
- the touch screen input is a user's input on the touch screen of the terminal device, such as a long-press input, a sliding input, a click input, or a floating input (an input by the user near the touch screen).
- the fingerprint input is the user's input on the fingerprint reader of the terminal device, such as sliding a fingerprint, long-pressing a fingerprint, single-clicking a fingerprint, or double-clicking a fingerprint.
- the gravity input refers to an input such as shaking the terminal device in a specific direction or shaking it a specified number of times.
- the key input corresponds to the user's input on keys of the terminal device such as the power key, the volume key, and the home key, for example, a single-click input, a double-click input, a long-press input, or a combination-key input.
- the embodiment of the present disclosure does not specifically limit the manner of the first input, and may be any achievable manner.
- the first input may be a continuous input, or may include a plurality of discontinuous sub-inputs, which is not specifically limited in the embodiment of the present disclosure.
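- as a non-limiting illustration of how such a first input might be dispatched, the following Kotlin sketch models the input types listed above with a sealed class and enables the face-lookup function on a long-press or on a tap of the "check person" control; all type names and the shake-count policy are assumptions made for this sketch, not part of the disclosure.

```kotlin
// Minimal sketch of dispatching the first input; all types and names are illustrative.
sealed class FirstInput {
    data class TouchScreen(val kind: TouchKind) : FirstInput()
    data class Fingerprint(val gesture: String) : FirstInput()
    data class Gravity(val shakes: Int) : FirstInput()
    data class Key(val combo: List<String>) : FirstInput()
}

enum class TouchKind { LONG_PRESS, SLIDE, CLICK, FLOATING, CHECK_PERSON_CONTROL_TAP }

/** Returns true when the input should enable the "check person" (face lookup) function. */
fun triggersFaceLookup(input: FirstInput): Boolean = when (input) {
    is FirstInput.TouchScreen ->
        input.kind == TouchKind.LONG_PRESS || input.kind == TouchKind.CHECK_PERSON_CONTROL_TAP
    is FirstInput.Gravity -> input.shakes >= 2            // e.g. shake twice (assumed policy)
    is FirstInput.Fingerprint, is FirstInput.Key -> false // other mappings are equally possible
}

fun main() {
    println(triggersFaceLookup(FirstInput.TouchScreen(TouchKind.LONG_PRESS))) // true
}
```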
- Step 202 In response to the first input, the terminal device displays an icon of at least one communication program.
- the interface where the terminal device displays the icon of at least one communication program is the second interface.
- the terminal device updates and displays the above-mentioned first interface as a second interface, and the second interface includes an icon of at least one communication program.
- At least one communication program in the embodiment of the present disclosure is a communication program with contacts installed in the terminal device.
- Step 203 The terminal device receives the second input of the user.
- the second input may be a continuous input or an input composed of a plurality of discontinuous sub-inputs, which is not specifically limited in this embodiment of the present disclosure.
- the second input may be an input for the user to select the face image in the first image and to select the icon of the communication program.
- Step 204 In response to the second input, the terminal device displays a conversation interface, and the conversation interface includes M target identifiers.
- Each target identifier is used to indicate a user, the M users indicated by the M target identifiers include users indicated by K face images in the at least one face image, and the M target identifiers are identifiers in the target communication program corresponding to the second input.
- M and K are positive integers, and K is less than or equal to M.
- it should be noted that the K face images may correspond to more than K users.
- the target identifier may be the user's memo name, nickname, user name, etc.
- the conversation interface may be a group chat interface or a group messaging interface, which is not specifically limited in the embodiment of the present disclosure.
- the terminal device updates and displays the above second interface as a conversation interface, and the conversation interface includes M target identifiers.
- the conversation interface is a group chat interface
- the user can send a message to these contacts in the group chat interface, and these contacts can receive the message sent by the user; any one of these contacts can also send messages in the group chat, and the other users can receive the messages sent by that contact in the group chat.
- the conversation interface is a group sending interface
- the user can send a message to these contacts in the group sending interface, and these contacts can all receive the message sent by the user.
- the terminal device receives a user's first input to a first image including at least one face image. Then, in response to the first input, the terminal device displays an icon of at least one communication program. Secondly, the terminal device receives the user's second input. Finally, in response to the second input, the terminal device displays a conversation interface, and the conversation interface includes M target identifiers.
- Each target identifier is used to indicate a user, the M users indicated by the M target identifiers include users indicated by K face images in the at least one face image, the M target identifiers are identifiers in the target communication program corresponding to the second input, M and K are positive integers, and K is less than or equal to M.
- in this way, according to the received first input of the user to the first image, the terminal device can display the icon of at least one communication program to the user, so that the user can select a communication program and select the users corresponding to the face images. After the user's selection is completed, the terminal device displays a conversation interface that includes identifiers of the users indicated by K face images in the at least one face image. Therefore, with the session creation method provided by the embodiments of the present disclosure, the user can quickly find the required contacts based on an image containing face images, and can then quickly create a conversation or add the users to an existing group chat.
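- for readers who prefer pseudocode, the following Kotlin sketch walks through steps 201 to 204 end to end; the contact resolver is a hypothetical interface standing in for whatever lookup the terminal device actually performs, and the sketch only illustrates that M target identifiers are produced from K face images (one face image may map to several users, so K ≤ M).

```kotlin
// Illustrative model of steps 201-204; all interfaces and names are assumptions, not the actual implementation.
data class FaceImage(val id: Int)
data class TargetIdentifier(val name: String)       // memo name, nickname, or user name
data class CommunicationProgram(val name: String)

interface ContactResolver {
    /** Maps one face image to the users it indicates in the given program (possibly more than one). */
    fun resolve(face: FaceImage, program: CommunicationProgram): List<TargetIdentifier>
}

class SessionCreationFlow(private val resolver: ContactResolver) {

    // Step 202: in response to the first input, return the programs whose icons are displayed.
    fun onFirstInput(installedPrograms: List<CommunicationProgram>): List<CommunicationProgram> =
        installedPrograms

    // Steps 203-204: the second input carries the chosen program and the K selected face images;
    // the result is the M target identifiers shown in the conversation interface.
    fun onSecondInput(
        targetProgram: CommunicationProgram,
        selectedFaces: List<FaceImage>               // K face images
    ): List<TargetIdentifier> =
        selectedFaces.flatMap { resolver.resolve(it, targetProgram) }.distinct() // M identifiers, K <= M
}

fun main() {
    val resolver = object : ContactResolver {
        override fun resolve(face: FaceImage, program: CommunicationProgram) =
            listOf(TargetIdentifier("user-${face.id}"))
    }
    val flow = SessionCreationFlow(resolver)
    val program = CommunicationProgram("program 1")
    println(flow.onFirstInput(listOf(program)))
    println(flow.onSecondInput(program, listOf(FaceImage(1), FaceImage(2))))
}
```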
- step 202 may specifically be executed by step 202a1:
- Step 202a1 In response to the first input, the terminal device displays the at least one face image in the first image and the icon of the at least one communication program.
- that is, the terminal device updates and displays the first interface as a second interface, and the second interface includes the at least one face image in the first image and the icon of the at least one communication program.
- the second interface may be an interface 302a
- the interface 302a includes 3 face images and the icons of 5 communication programs, namely: face image 31, face image 32, face image 33, icon 1 of communication program 1, icon 2 of communication program 2, icon 3 of communication program 3, icon 4 of communication program 4, and icon 5 of communication program 5.
- the second input may be an input in which the user selects only the icon of the communication program on the second interface.
- the second input may default to the user selecting all the face images in the second interface and the corresponding icon of the communication program;
- the second input may include a user's sub-input to a face image and a sub-input to an icon, which are not specifically limited in the embodiments of the present disclosure.
- the second interface may further include a selection control.
- the selection control can be used for the user to select which contacts are needed.
- in a case where the terminal device updates the first interface and displays the second interface, the second interface may also be the interface 302b, and the interface 302b further includes a selection control 34.
- the selection control 34 in the interface 302b can circle all the face images in a region surrounded by a dashed line, and is used to indicate that all the face images have been selected.
- the user can move or remove any face image in the area enclosed by the dashed line (including deleting it or moving it to another area in the second interface).
- the conversation interface displayed by the terminal device may be the interface 303a shown in (a) in FIG. 5 or the interface 303b shown in (b) in FIG. 5 .
- the interface 303a may be a group chat interface, and the group chat interface may include three user names corresponding to three face images.
- the interface 303b may be a group sending interface, and the group sending interface may also include 3 user names corresponding to 3 face images.
- in this way, the terminal device can display the at least one face image and the icon of the at least one communication program according to the user's first input, thereby enabling the user to select, based on the displayed at least one face image, the face images whose corresponding users a session is to be established with. Therefore, in the session creation method provided by the embodiments of the present disclosure, the user can more conveniently and quickly find the required contacts according to the at least one face image displayed on the terminal device.
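- the selection semantics described above (all face images treated as selected by default, with the option to remove individual faces from the dashed-line region before choosing a program icon) could be modelled roughly as in the following hedged Kotlin sketch; the class and method names are illustrative only.

```kotlin
// Sketch of second-input selection semantics; an assumption about one possible implementation.
data class FaceImage(val id: Int)
data class CommunicationProgram(val name: String)

class SecondInputSelection(allFaces: List<FaceImage>) {
    // By default, all face images in the second interface are treated as selected.
    private val selected = allFaces.toMutableSet()

    /** Removing a face from the dashed-line region deselects it. */
    fun removeFace(face: FaceImage) { selected.remove(face) }

    /** Selecting a program icon completes the second input for the current selection. */
    fun chooseProgram(program: CommunicationProgram): Pair<CommunicationProgram, Set<FaceImage>> =
        program to selected.toSet()
}

fun main() {
    val selection = SecondInputSelection(listOf(FaceImage(31), FaceImage(32), FaceImage(33)))
    selection.removeFace(FaceImage(33))                        // user drags face 33 out of the region
    println(selection.chooseProgram(CommunicationProgram("program 2")))
}
```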
- the session creation method provided by the embodiment of the present disclosure further includes step 205 and step 206 after step 203:
- Step 205 In response to the second input, the terminal device displays N personal face images and N target identifiers.
- Each face image corresponds to a target identifier
- the N users indicated by the N target identifiers include users indicated by P personal face images in at least one face image
- the N target identifiers are the identifiers in the target communication program
- P is an integer less than or equal to N.
- the interface in which the terminal device displays the N face images and the N target identifiers is referred to as a third interface.
- the terminal device may update and display the above-mentioned second interface as a third interface, and the third interface includes N personal face images and N target identifiers.
- the third interface may be an interface for establishing a group chat, and after receiving the second input from the user, the terminal device may display the interface 304a shown in (a) in FIG. 6.
- Step 206 The terminal device receives the third input of the user.
- the third input is an input by which the user confirms establishing the session, or an input to add the selected users to a group chat; it may be a continuous input or a plurality of discontinuous sub-inputs, which is not specifically limited in the embodiments of the present disclosure.
- the third input may be a user input on the third interface.
- the third input may be an input to a session-establishing control in the interface; for example, the third input may be an input in which the user clicks "group chat" in the interface 304c shown in FIG. 7. The third input may also be a quick input; for example, an input in which the user slides up from the bottom of the screen shown in the interface 304c.
- step 204 can be performed by step 204a.
- Step 204a In response to the third input, the terminal device displays a conversation interface.
- the terminal device may update and display the second interface as a conversation interface.
- the user can determine whether the contact corresponding to the face image is the contact required by the user according to the displayed target identification and the face image.
- the third input is a sliding input of the user in a preset direction in a blank area other than the N face images and the N target marks.
- the third input is an input that the user determines to establish a session, or determines to join to establish a session.
- the third input may be an input that the user slides toward the top of the screen in the blank area.
- in this way, the user can perform a sliding input in the preset direction in the blank area to control the terminal device to display the conversation interface, so the operation using the third input is quicker.
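- one way to recognise this third input, i.e. a sliding input in a preset direction that starts in the blank area outside the N face images and N target identifiers, is sketched below in Kotlin; the geometry types and the minimum swipe distance are assumptions made for illustration.

```kotlin
// Sketch of recognising the third input: an upward swipe that starts in the blank area.
data class Point(val x: Float, val y: Float)
data class Rect(val left: Float, val top: Float, val right: Float, val bottom: Float) {
    fun contains(p: Point) = p.x in left..right && p.y in top..bottom
}

/**
 * Returns true when the gesture starts outside every face-image/identifier bounds
 * and moves upward by more than a minimum distance (preset direction = toward the top).
 */
fun isThirdInput(start: Point, end: Point, occupied: List<Rect>, minDistance: Float = 120f): Boolean {
    val startsInBlankArea = occupied.none { it.contains(start) }
    val swipesUp = (start.y - end.y) > minDistance   // screen y grows downward
    return startsInBlankArea && swipesUp
}

fun main() {
    val rows = listOf(Rect(0f, 0f, 1080f, 200f))     // area occupied by the faces and identifiers
    println(isThirdInput(Point(540f, 1500f), Point(540f, 900f), rows)) // true: swipe up in blank area
}
```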
- the method for creating a session provided by an embodiment of the present disclosure further includes steps 207 to 209 after step 203:
- Step 207 The terminal device displays preset controls.
- the above-mentioned third interface further includes preset controls.
- the preset control may be a control with added function represented by a text, or a control with added function represented by an icon.
- the embodiment of the present disclosure does not specifically limit the type and display position of the preset control.
- the preset control in the interface 304a shown in (a) in FIG. 6 is an "add" control 35, which is a text-type add control, and the preset control in the interface 304b shown in (b) in FIG. 6 is the camera icon 36, which is an icon-type add control.
- Step 208 The terminal device receives the fourth input from the user to the preset control.
- the user may also add, through the preset control, a contact from the contact list of the communication program; that is, the session established by the session creation method of the embodiments of the present disclosure may also include contacts manually selected by the user directly from the contact list.
- the fourth input may be an input in which the user selects the camera icon 36 (i.e., the preset control), or an input in which the user selects the camera icon 36 and slides up, such as the input shown in the interface 304b1 shown in (a) in FIG. 8.
- the fourth input may be a continuous input or an input composed of multiple sub-inputs, which is not specifically limited in the embodiment of the present disclosure.
- Step 209 In response to the fourth input, the terminal device displays T personal face images and T target identifiers.
- the terminal device updates the third interface, and the updated third interface includes T personal face images and T target identifiers.
- the T face images include the N face images
- the T target identifiers include the N target identifiers
- the face images in the T face images other than the N face images are face images in a second image
- the second image is the image corresponding to the fourth input
- the users indicated by the target identifiers in the T target identifiers other than the N target identifiers are the users indicated by the other face images
- T is a positive integer.
- in this way, the terminal device displays the preset control in the third interface, which makes it convenient for the user to decide, according to the N target identifiers and N face images displayed based on the first image, whether to continue adding other contacts.
- the fourth input includes a first sub-input and a second sub-input.
- step 209 may also be executed through steps 209a and 209b:
- Step 209a In the case where N personal face images and N target identifiers are displayed in the first area, in response to the user's first sub-input to the preset control, the terminal device displays the shooting preview interface in the second area.
- Step 209b In response to the user's second sub-input to the preset control, the terminal device performs a shooting operation, displays the captured second image in the second area, and displays, in the first area, the first face image in the second image, the first target identifier, the N face images, and the N target identifiers.
- the second image may include at least one face image.
- in a case where the fourth input is an input composed of multiple sub-inputs, as shown in (a) in FIG. 8, the user first selects the camera icon 36 on the interface 304b1 and drags it upward, and the terminal device then displays the interface 304b2 shown in (b) in FIG. 8.
- the user can select the camera icon 36 again in the interface 304b2 and slide down, as shown in the interface 304b3 shown in (a) in FIG. 9.
- the terminal device may display the face images in the image acquired in the image collection area and the target identifiers corresponding to these face images on the interface 304b4 shown in (b) in FIG. 9.
- the interface 304b2 only uses the image acquisition area (including the camera preview interface) as an example for description.
- the interface 304b2 may also display a contact list of the communication program, and the user may also select the contact to be added from the contact list, which is not specifically limited in the embodiments of the present disclosure.
- the first area in the interface 304b4 shown in (b) of FIG. 9 may display the target identifier corresponding to the multiple face images.
- in this way, the terminal device can display a shooting preview interface in the second area according to the user's first sub-input to the preset control; the terminal device then receives the user's second sub-input to the preset control, performs the shooting operation, displays the captured second image in the second area, and displays in the first area the first face image, the first target identifier, and the previously displayed N face images and N target identifiers, which enables the user to continue adding users based on an image containing face images.
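- the fourth-input flow (show a preview in the second area, capture a second image, then append the newly recognised users to the first area so that T ≥ N) might take a shape like the following Kotlin sketch; the face detector, the contact matcher, and the contact name used in the example are hypothetical placeholders.

```kotlin
// Sketch of the fourth input: capture a second image and merge its contacts into the first area.
data class FaceImage(val id: Int)
data class TargetIdentifier(val name: String)

interface FaceDetector { fun detect(imageBytes: ByteArray): List<FaceImage> }   // assumed interface
interface ContactMatcher { fun match(face: FaceImage): TargetIdentifier? }      // assumed interface

class AddViaPresetControl(
    private val detector: FaceDetector,
    private val matcher: ContactMatcher
) {
    // First-area state: N face images and their N target identifiers.
    val firstArea = mutableListOf<Pair<FaceImage, TargetIdentifier>>()

    /** First sub-input: show the shooting preview in the second area (represented here by a flag). */
    var previewVisible = false
        private set
    fun onFirstSubInput() { previewVisible = true }

    /** Second sub-input: perform the shooting operation and append matches from the second image. */
    fun onSecondSubInput(capturedSecondImage: ByteArray) {
        val newEntries = detector.detect(capturedSecondImage)
            .mapNotNull { face -> matcher.match(face)?.let { face to it } }
        firstArea.addAll(newEntries)   // T face images = N previous + the newly matched ones
    }
}

fun main() {
    val detector = object : FaceDetector { override fun detect(imageBytes: ByteArray) = listOf(FaceImage(4)) }
    val matcher = object : ContactMatcher { override fun match(face: FaceImage) = TargetIdentifier("Zhao Liu") }
    val flow = AddViaPresetControl(detector, matcher)
    flow.onFirstSubInput()
    flow.onSecondSubInput(ByteArray(0))
    println(flow.firstArea)            // the first area now also contains the user from the second image
}
```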
- the session creation method provided by the embodiment of the present disclosure further includes step 210 and step 211 after step 205:
- Step 210 The terminal device receives the fifth input of the user.
- the terminal device may receive the fifth input of the user on the third interface.
- the fifth input may be a continuous input or a plurality of discontinuous inputs, which is not specifically limited in this embodiment of the present disclosure.
- the fifth input is an input for the user to remove unnecessary contacts.
- the third interface may further include a delete control
- the fifth input may specifically be a user's input to the second face image and the delete control.
- Step 211 In response to the fifth input, the terminal device displays J personal face images and J target identifiers.
- the J face images are images among the N face images
- the J target identifiers are identifiers among the N target identifiers
- J is a positive integer less than N.
- the terminal device updates the third interface, and the updated third interface includes J personal face images and J target identifiers.
- the interface 304b5 shown in (a) in FIG. 10 is the third interface
- the user can slide down from the position of "Wang Wu" on the interface 304b5, and the terminal device then updates the third interface to the interface 304b6 shown in (b) in FIG. 10 (that is, the updated third interface); the interface 304b6 includes Zhang San and Li Si and the face images corresponding to Zhang San and Li Si.
- the user can delete unwanted contacts in the third interface.
- the method for creating a session may further include step 210a and step 211a after step 205:
- Step 210a Receive the fifth input of the second face image from the user.
- the second face image is the face image in the N face images.
- Step 211a In response to the fifth input, delete the second face image and the corresponding at least one target identifier.
- both the second face image and all of its corresponding target identifiers may be deleted, or the second face image and only part of its corresponding identifiers may be deleted.
- N face images may include the same face images, which is not specifically limited in the embodiment of the present disclosure.
- in this way, the terminal device may delete the second face image and at least one target identifier corresponding to the second face image according to the user's input to the second face image among the N face images displayed on the terminal device, so that the delete operation is more convenient.
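- a possible shape for this fifth-input handling, deleting a second face image together with all or part of its corresponding identifiers so that J < N entries remain, is sketched below in Kotlin; it is illustrative only and not the actual implementation.

```kotlin
// Sketch of the fifth input: deleting a face image and some or all of its target identifiers.
data class FaceImage(val id: Int)
data class TargetIdentifier(val name: String)

class ThirdInterfaceState(entries: List<Pair<FaceImage, TargetIdentifier>>) {
    private val entries = entries.toMutableList()   // the N face images may include duplicates

    /** Deletes the second face image and every identifier that corresponds to it. */
    fun deleteAll(face: FaceImage) { entries.removeAll { it.first == face } }

    /** Deletes the second face image only for the given identifiers (partial deletion). */
    fun deleteSome(face: FaceImage, identifiers: Set<TargetIdentifier>) {
        entries.removeAll { it.first == face && it.second in identifiers }
    }

    fun remaining(): List<Pair<FaceImage, TargetIdentifier>> = entries.toList() // J entries, J < N
}

fun main() {
    val state = ThirdInterfaceState(
        listOf(
            FaceImage(1) to TargetIdentifier("Zhang San"),
            FaceImage(2) to TargetIdentifier("Li Si"),
            FaceImage(3) to TargetIdentifier("Wang Wu")
        )
    )
    state.deleteAll(FaceImage(3))        // e.g. the user slides down from "Wang Wu"
    println(state.remaining())           // Zhang San and Li Si remain
}
```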
- the first input includes a third sub-input and a fourth sub-input.
- step 202 may specifically be executed by steps 202a and 202b:
- Step 202a In response to the received third sub-input of the user, the terminal device displays the first control.
- the terminal device displays the first control in the first interface.
- the third sub-input may be an input that the user clicks on the screen or a sliding input of the user on the screen.
- the third sub-input may be an input of clicking the screen in the interface 305a shown in FIG. 11, and the first control may be the control 37 in the interface 305a; the text "establish group chat" or "create session" may be displayed in the control 37.
- the third sub-input may also be a sliding input in two opposite directions, as shown in the interface 306a shown in FIG. 12.
- the first control may also be the control 38 in the interface 306a.
- the control 38 is a circular control, and the words "join group chat" are displayed in the control 38.
- the control 38 may be of another shape, and other words may also be displayed on the control 38, which is not specifically limited in the embodiments of the present disclosure.
- Step 202b In response to the received fourth sub-input of the user to the first control, the terminal device updates the interface displaying the first image to an interface that includes the icon of at least one communication program.
- that is, in response to the received fourth sub-input of the user to the first control, the terminal device updates and displays the first interface as the second interface.
- the second interface may also be the interface 306a shown in FIG. 12.
- the user can choose to move the first control to an icon in the interface 306a, for example, as shown in the interface 306a1 in FIG. 13, so that a conversation can be established in the communication program corresponding to the icon.
- the terminal device may also display icons of multiple communication programs when the first control is displayed.
- the terminal device can display the first control on the display interface, so that the user can select the information to be acquired by operating on the first control.
- step 205 may be specifically executed by step 205a:
- Step 205a In response to the second input, the terminal device displays the N face images, the N target identifiers, and at least one alternative session identifier, and each session identifier is used to indicate an established session.
- the above-mentioned third interface further includes at least one alternative session identifier.
- step 206 can be executed by step 206a:
- Step 206a The terminal device receives the user's third input to the first session identifier.
- step 204a can be executed by step 204a1:
- Step 204a1 In response to the third input, the terminal device displays a conversation interface that includes all of the target identifiers corresponding to the first session identifier and the N target identifiers.
- the first session identifier is one of the at least one candidate session identifier.
- the third input may also be a user input to the first session identifier and the first target identifier.
- the first session identifier is an identifier in at least one alternative session identifier
- the first target identifier is an identifier among N target identifiers
- the M target identifiers include an identifier for indicating a user corresponding to the first session and the first target ID
- the first session is the session indicated by the first session ID.
- the session identifier may be the name of the session, for example, the name of a group chat.
- as shown in the interface 304c shown in FIG. 14, at least one session identifier may be displayed in the third interface; the user may select the icon of a session and the name of a contact (i.e., the first target identifier) to add that contact to the session, or the user may click the session identifier to add all the users in the third interface to the session, which is not specifically limited in the embodiments of the present disclosure.
- the terminal device displays at least one alternative session identifier on the third interface, which enables the user to add the contacts determined according to the image to at least one of the sessions, making it more convenient and quick to join an existing session.
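- adding image-derived contacts to an existing session could be expressed roughly as follows; the in-memory session registry is a stand-in for whatever the target communication program actually exposes, and all names are assumptions.

```kotlin
// Sketch of joining an existing session selected via the first session identifier.
data class TargetIdentifier(val name: String)
data class SessionIdentifier(val name: String)

class SessionRegistry {
    // Established sessions and their members; illustrative in-memory store.
    private val sessions = mutableMapOf<SessionIdentifier, MutableSet<TargetIdentifier>>()

    fun create(id: SessionIdentifier, members: Collection<TargetIdentifier>) {
        sessions[id] = members.toMutableSet()
    }

    fun members(id: SessionIdentifier): Set<TargetIdentifier> = sessions[id].orEmpty()

    /** Third input on a first session identifier: add the chosen identifiers and return the M members. */
    fun addToSession(id: SessionIdentifier, toAdd: Collection<TargetIdentifier>): Set<TargetIdentifier> {
        val members = sessions.getOrPut(id) { mutableSetOf() }
        members.addAll(toAdd)
        return members.toSet()
    }
}

fun main() {
    val registry = SessionRegistry()
    val groupChat = SessionIdentifier("project group")
    registry.create(groupChat, listOf(TargetIdentifier("Zhang San")))
    // Either a single first target identifier or all N identifiers from the third interface may be added.
    val m = registry.addToSession(groupChat, listOf(TargetIdentifier("Li Si"), TargetIdentifier("Wang Wu")))
    println(m)   // M target identifiers: the existing members plus the newly added ones
}
```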
- the session creation method provided by an embodiment of the present disclosure further includes step 212 after step 203:
- Step 212 The terminal device displays N indication marks corresponding to the N face images.
- one indication identifier is used to indicate the similarity between one face image and a third image
- the third image is an image, among at least one target image, whose similarity to the one face image is greater than or equal to a similarity threshold
- the at least one target image is an image corresponding to a second target identifier in the target communication program
- the second target identifier is the target identifier corresponding to the one face image.
- the third interface further includes N indication identifiers corresponding to N face images.
- the N indication marks may be digital marks, text marks, or color marks, which are not specifically limited in the embodiment of the present disclosure.
- in the interface 304a1 in FIG. 15, the entries are arranged from top to bottom in descending order of similarity.
- the face image in Zhang San's user information has the highest similarity to the corresponding face image in the first image, followed by Li Si and Wang Wu.
- optionally, when a first target similarity is greater than or equal to a first threshold, the terminal device determines the contact corresponding to the first target similarity as the contact corresponding to the first face image; when the first target similarity is less than the first threshold and a second target similarity is greater than or equal to the first threshold, the terminal device determines the contact corresponding to the second target similarity as the contact corresponding to the first face image. The first target similarity is the similarity between the first face image and a second face image, where the second face image is a face image in an avatar in the contact list or a face image in a contact label, and the first face image is any face image in the at least one face image in the first image. The second target similarity is the similarity between the first face image and a third face image, where the third face image is a face image contained in an avatar that is not in the contact list but appears in a second session of the user, and the second session is a session in the target communication program.
- when calculating the similarity, the terminal device can use the user information in a communication program.
- the user information can include an avatar and a label. The avatar can be a face image that the user has noted for another user in the user's own terminal device; for example, as shown in the interface 307 shown in FIG. 16, the avatar is the face image that the user has noted for the contact nicknamed "Pig Can Fly", or it can be an image set by the other user themselves. The image in the label can likewise be a face image noted for another user in the user's own terminal device, or an image set by the other user themselves.
- the label in FIG. 16 is an image of a puppy set by the contact "Pig Can Fly". The embodiments of the present disclosure do not specifically limit this.
- any one of the above third interfaces can display the indication identifiers, and according to the indication identifiers the user can determine which user's face image in the user information has a high similarity to the face image in the first image, and can refer to this when selecting the contacts for whom a session needs to be created to send information.
- in this way, the terminal device can display in the third interface the similarity between each contact and the corresponding face image, which makes the acquired information more accurate, so that the user can refer to it when selecting the contacts for whom a session needs to be created to send information.
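- the threshold-and-fallback matching and the similarity-ordered indication identifiers described above might be implemented along the following lines; the cosine-similarity measure, the embedding representation, and the split between contact-list candidates and session-only candidates are placeholders chosen for this sketch.

```kotlin
// Sketch of matching a first face image against candidate face images with a threshold fallback,
// and of ordering the matches for display as indication identifiers. Illustrative only.
import kotlin.math.sqrt

data class Candidate(val name: String, val faceEmbedding: List<Double>, val inContactList: Boolean)

/** Cosine similarity between two embeddings; any similarity measure could be substituted. */
fun similarity(a: List<Double>, b: List<Double>): Double {
    var dot = 0.0; var na = 0.0; var nb = 0.0
    for (i in a.indices) { dot += a[i] * b[i]; na += a[i] * a[i]; nb += b[i] * b[i] }
    return if (na == 0.0 || nb == 0.0) 0.0 else dot / (sqrt(na) * sqrt(nb))
}

/**
 * First target similarity: best match among contact-list avatars/labels.
 * Second target similarity: best match among avatars that only appear in the user's sessions.
 * The contact-list match wins when it reaches the first threshold; otherwise the session match is used.
 */
fun matchContact(faceEmbedding: List<Double>, candidates: List<Candidate>, firstThreshold: Double): Candidate? {
    val scored = candidates.map { it to similarity(faceEmbedding, it.faceEmbedding) }
    val first = scored.filter { it.first.inContactList }.maxByOrNull { it.second }
    val second = scored.filter { !it.first.inContactList }.maxByOrNull { it.second }
    return when {
        first != null && first.second >= firstThreshold -> first.first
        second != null && second.second >= firstThreshold -> second.first
        else -> null
    }
}

/** Orders candidates by similarity, top to bottom, for the indication identifiers in the third interface. */
fun indicationOrder(faceEmbedding: List<Double>, candidates: List<Candidate>): List<Pair<String, Double>> =
    candidates.map { it.name to similarity(faceEmbedding, it.faceEmbedding) }
        .sortedByDescending { it.second }

fun main() {
    val face = listOf(1.0, 0.0, 0.0)
    val candidates = listOf(
        Candidate("Zhang San", listOf(0.9, 0.1, 0.0), inContactList = true),
        Candidate("Li Si", listOf(0.6, 0.4, 0.0), inContactList = true),
        Candidate("Wang Wu", listOf(0.2, 0.9, 0.1), inContactList = false)
    )
    println(matchContact(face, candidates, firstThreshold = 0.8))  // Zhang San (contact-list match wins)
    println(indicationOrder(face, candidates))                     // Zhang San, then Li Si, then Wang Wu
}
```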
- FIG. 17 is a possible structural schematic diagram of a terminal device provided by an embodiment of the present disclosure.
- the terminal device 400 includes a receiving module 401 and a display module 402. The receiving module 401 is configured to receive a user's first input to a first image that includes at least one face image; the display module 402 is configured to display an icon of at least one communication program in response to the first input received by the receiving module 401; the receiving module 401 is further configured to receive a second input of the user; and the display module 402 is configured to display, in response to the second input received by the receiving module 401, a conversation interface that includes M target identifiers. Each target identifier is used to indicate a user, the M users indicated by the M target identifiers include users indicated by K face images in the at least one face image, the M target identifiers are identifiers in the target communication program corresponding to the second input, M and K are both positive integers, and K is less than or equal to M.
- the display module 402 is specifically configured to display the at least one face image and the icon of the at least one communication program in the first image in response to the first input received by the receiving module 401.
- the display module 402 is further configured to display N face images and N target identifiers in response to the second input after the receiving module 401 receives the user's second input, where each face image corresponds to one target identifier, the N users indicated by the N target identifiers include users indicated by P face images in the at least one face image, the N target identifiers are identifiers in the target communication program, and P is an integer less than or equal to N; the receiving module 401 is further configured to receive a third input of the user; and the display module 402 is specifically configured to display the conversation interface in response to the third input received by the receiving module 401.
- the third input is a sliding input of the user in a preset direction in a blank area other than the N face images and the N target marks.
- the display module 402 is further configured to display the preset control after the receiving module 401 receives the second input; the receiving module 401 is further configured to receive the user's fourth input to the preset control; and the display module 402 is configured to display T face images and T target identifiers in response to the fourth input received by the receiving module 401, where the T face images include the N face images, the T target identifiers include the N target identifiers, the face images in the T face images other than the N face images are face images in the second image, the second image is an image corresponding to the fourth input, the users indicated by the target identifiers in the T target identifiers other than the N target identifiers are the users indicated by the other face images, and T is a positive integer.
- the fourth input includes a first sub-input and a second sub-input;
- the display module 402 is specifically configured to, in a case where the N face images and the N target identifiers are displayed in the first area, display a shooting preview interface in the second area in response to the user's first sub-input to the preset control, and, in response to the user's second sub-input to the preset control, perform a shooting operation, display the captured second image in the second area, and display in the first area the first face image in the second image, the first target identifier, the N face images, and the N target identifiers.
- the receiving module 401 is further configured to receive the user's fifth input after the display module 402 displays the N face images and the N target identifiers; the display module 402 is further configured to display, in response to the fifth input received by the receiving module 401, J face images and J target identifiers, where the J face images are images among the N face images, the J target identifiers are identifiers among the N target identifiers, and J is a positive integer less than N.
- the receiving module 401 is further configured to receive a fifth input performed by the user on a second face image after the display module 402 displays the N face images and the N target identifiers; and the display module 402 is further configured to delete the second face image and the corresponding at least one target identifier in response to the fifth input received by the receiving module 401.
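- For the two fifth-input behaviours just described (narrowing the list down to J entries, or deleting one face image together with its identifier), a tiny Kotlin sketch could look as follows; the helper names are invented for illustration only.

```kotlin
// Hypothetical helpers for the fifth-input behaviours described above.

data class FaceCandidate(val face: String, val targetId: String)

// Keep only the J candidates the user has selected (J <= N).
fun keepSelected(candidates: List<FaceCandidate>, selectedFaces: Set<String>): List<FaceCandidate> =
    candidates.filter { it.face in selectedFaces }

// Remove a single face image together with its corresponding target identifier.
fun removeFace(candidates: List<FaceCandidate>, faceToRemove: String): List<FaceCandidate> =
    candidates.filterNot { it.face == faceToRemove }
```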
- the first input includes a first sub-input and a second sub-input
- the display module 402 is specifically configured to display an add control in response to the user's first sub-input received by the receiving module 401, and to display the icon of the at least one communication program in response to the user's second sub-input on the add control received by the receiving module 401.
- the display module 402 is specifically configured to display the N face images, the N target identifiers, and at least one candidate session identifier in response to the second input, and each session identifier is used to indicate an established session;
- the receiving module 401 is specifically configured to receive the user's third input on a first session identifier; and the display module 402 is specifically configured to display, in response to the third input received by the receiving module 401, a conversation interface including all target identifiers in the first session identifier and the N target identifiers; where the first session identifier is one of the at least one candidate session identifier.
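- To illustrate the branch in which an established session is chosen via a first session identifier, here is a hedged Kotlin sketch (the `Session` type and its fields are assumptions, not part of the disclosure): the resulting member set is the union of the existing session's target identifiers and the N identifiers resolved from the face images.

```kotlin
// Hypothetical sketch of adding the N resolved identifiers to an established session.

data class Session(val sessionId: String, val memberIds: Set<String>)

fun buildConversationMembers(existing: Session, resolvedTargetIds: List<String>): Set<String> {
    // The conversation interface then contains all target identifiers of the
    // selected session plus the N identifiers derived from the face images.
    return existing.memberIds + resolvedTargetIds
}

fun main() {
    val groupChat = Session("family", setOf("alice", "bob"))
    println(buildConversationMembers(groupChat, listOf("carol", "bob")))
    // [alice, bob, carol]: duplicates collapse because members form a set
}
```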
- the display module 402 is further configured to display N indication marks corresponding to the N face images after the receiving module 401 receives the user's second input; where one indication mark is used to indicate a similarity between one face image and a third image, the third image is an image, among at least one target image, whose similarity to the one face image is greater than or equal to a similarity threshold, the at least one target image is an image corresponding to a second target identifier in the target communication program, and the second target identifier is the target identifier corresponding to the one face image.
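- How such a similarity might be computed is not specified in the disclosure; as one common, purely illustrative approach, the Kotlin sketch below compares face embeddings by cosine similarity and returns the best score that reaches an assumed threshold (0.8 is an arbitrary example value), which is the kind of number an indication mark could display.

```kotlin
// Illustrative only: cosine similarity between face embeddings, with an
// assumed threshold; neither the metric nor the value comes from the disclosure.
import kotlin.math.sqrt

fun cosineSimilarity(a: FloatArray, b: FloatArray): Float {
    require(a.size == b.size) { "embeddings must have the same length" }
    var dot = 0f
    var normA = 0f
    var normB = 0f
    for (i in a.indices) {
        dot += a[i] * b[i]
        normA += a[i] * a[i]
        normB += b[i] * b[i]
    }
    return dot / (sqrt(normA) * sqrt(normB))
}

// Returns the similarity to show on the indication mark, or null when no
// target image reaches the threshold (i.e. there is no qualifying third image).
fun indicationSimilarity(
    face: FloatArray,
    targetImages: List<FloatArray>,
    threshold: Float = 0.8f
): Float? =
    targetImages.map { cosineSimilarity(face, it) }
        .filter { it >= threshold }
        .maxOrNull()
```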
- the terminal device 400 provided by an embodiment of the present disclosure can implement various processes implemented by the terminal device in the foregoing method embodiments, and to avoid repetition, details are not described herein again.
- the terminal device receives a user's first input on a first image including at least one face image. Then, in response to the first input, the terminal device displays an icon of at least one communication program. Next, the terminal device receives the user's second input. Finally, in response to the second input, the terminal device displays a conversation interface that includes M target identifiers. Each target identifier is used to indicate a user, the M users indicated by the M target identifiers include users indicated by K face images in the at least one face image, the M target identifiers are identifiers in the target communication program corresponding to the second input, M and K are both positive integers, and K is less than or equal to M.
- In this way, the terminal device can display an icon of at least one communication program according to the received first input performed by the user on the first image, so that the user can select a communication program and select the users corresponding to particular face images. After the selection is completed, the terminal device displays a conversation interface including the users indicated by K face images in the at least one face image. Therefore, with the session creation method provided by the embodiments of the present disclosure, the user can quickly find the desired contacts based on the face images in an image, and can then quickly create a conversation or add those users to an existing group chat.
- the terminal device 100 includes but is not limited to: a radio frequency unit 101, a network module 102, an audio output unit 103, an input unit 104, a sensor 105, a display unit 106, a user input unit 107, an interface unit 108, a memory 109, a processor 110, a power supply 111, and other components.
- the structure of the terminal device shown in FIG. 18 does not constitute a limitation on the terminal device; the terminal device may include more or fewer components than illustrated, combine some components, or use a different component arrangement.
- terminal devices include, but are not limited to, mobile phones, tablet computers, notebook computers, palmtop computers, in-vehicle terminal devices, wearable devices, and pedometers.
- the user input unit 107 is configured to receive a user's first input on a first image including at least one face image; the display unit 106 is configured to display an icon of at least one communication program in response to the first input; the user input unit 107 is further configured to receive a second input of the user; and the display unit 106 is further configured to display a conversation interface in response to the second input, where the conversation interface includes M target identifiers; each target identifier is used to indicate a user, the M users indicated by the M target identifiers include users indicated by K face images in the at least one face image, the M target identifiers are identifiers in the target communication program corresponding to the second input, M and K are both positive integers, and K is less than or equal to M.
- the terminal device receives a user's first input on a first image including at least one face image. Then, in response to the first input, the terminal device displays an icon of at least one communication program. Next, the terminal device receives the user's second input. Finally, in response to the second input, the terminal device displays a conversation interface that includes M target identifiers; each target identifier is used to indicate a user, the M users indicated by the M target identifiers include users indicated by K face images in the at least one face image, the M target identifiers are identifiers in the target communication program corresponding to the second input, M and K are both positive integers, and K is less than or equal to M.
- In this way, the terminal device can display an icon of at least one communication program according to the received first input performed by the user on the first image, so that the user can select a communication program and select the users corresponding to particular face images. After the selection is completed, the terminal device displays a conversation interface including the users indicated by K face images in the at least one face image. Therefore, with the session creation method provided by the embodiments of the present disclosure, the user can quickly find the desired contacts based on the face images in an image, and can then quickly create a conversation or add those users to an existing group chat.
- the radio frequency unit 101 may be used to receive and send signals in the process of sending and receiving information or during a call; specifically, after receiving downlink data from a base station, the radio frequency unit 101 delivers it to the processor 110 for processing, and in addition it sends uplink data to the base station.
- the radio frequency unit 101 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier, a duplexer, and the like.
- the radio frequency unit 101 can also communicate with the network and other devices through a wireless communication system.
- the terminal device provides wireless broadband Internet access to the user through the network module 102, such as helping the user to send and receive emails, browse web pages, and access streaming media.
- the audio output unit 103 may convert the audio data received by the radio frequency unit 101 or the network module 102, or stored in the memory 109, into an audio signal and output it as sound. Moreover, the audio output unit 103 may also provide audio output related to a specific function performed by the terminal device 100 (for example, a call signal reception sound or a message reception sound).
- the audio output unit 103 includes a speaker, a buzzer, a receiver, and the like.
- the input unit 104 is used to receive audio or video signals.
- the input unit 104 may include a graphics processing unit (Graphics Processing Unit, GPU) 1041 and a microphone 1042.
- the graphics processor 1041 processes image data of still pictures or videos obtained by an image capture device (such as a camera) in a video capture mode or an image capture mode.
- the processed image frame may be displayed on the display unit 106.
- the image frame processed by the graphics processor 1041 may be stored in the memory 109 (or other storage medium) or sent via the radio frequency unit 101 or the network module 102.
- the microphone 1042 can receive sound, and can process such sound into audio data.
- in a telephone call mode, the processed audio data can be converted into a format that can be sent to a mobile communication base station via the radio frequency unit 101 and then output.
- the terminal device 100 further includes at least one sensor 105, such as a light sensor, a motion sensor, and other sensors.
- the light sensor includes an ambient light sensor and a proximity sensor; the ambient light sensor can adjust the brightness of the display panel 1061 according to the brightness of the ambient light, and the proximity sensor can turn off the display panel 1061 and/or the backlight when the terminal device 100 is moved to the ear.
- the accelerometer sensor can detect the magnitude of acceleration in various directions (generally along three axes), and can detect the magnitude and direction of gravity when the device is stationary; it can be used to identify the posture of the terminal device (such as switching between horizontal and vertical screens, related games, and magnetometer attitude calibration) and for vibration-recognition-related functions (such as a pedometer or tapping); the sensor 105 may also include a fingerprint sensor, a pressure sensor, an iris sensor, a molecular sensor, a gyroscope, a barometer, a hygrometer, a thermometer, an infrared sensor, and the like, which will not be repeated here.
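- Purely as an illustration of how such a motion sensor might be read on an Android-style terminal device (the disclosure does not tie itself to any particular OS API), the sketch below registers an accelerometer listener and uses the gravity components to guess a portrait-like versus landscape-like posture.

```kotlin
// Android-flavored, illustrative sketch of posture detection via the accelerometer.
import android.content.Context
import android.hardware.Sensor
import android.hardware.SensorEvent
import android.hardware.SensorEventListener
import android.hardware.SensorManager
import kotlin.math.abs

class PostureWatcher(context: Context) : SensorEventListener {
    private val sensorManager = context.getSystemService(Context.SENSOR_SERVICE) as SensorManager
    private val accelerometer: Sensor? = sensorManager.getDefaultSensor(Sensor.TYPE_ACCELEROMETER)

    fun start() {
        accelerometer?.let {
            sensorManager.registerListener(this, it, SensorManager.SENSOR_DELAY_NORMAL)
        }
    }

    fun stop() = sensorManager.unregisterListener(this)

    override fun onSensorChanged(event: SensorEvent) {
        val x = event.values[0]  // gravity component along the device's x axis
        val y = event.values[1]  // gravity component along the device's y axis
        val portraitLike = abs(y) > abs(x)
        // A real implementation would debounce this before switching layouts.
        println(if (portraitLike) "portrait-like posture" else "landscape-like posture")
    }

    override fun onAccuracyChanged(sensor: Sensor, accuracy: Int) = Unit
}
```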
- the display unit 106 is used to display information input by the user or information provided to the user.
- the display unit 106 may include a display panel 1061, and the display panel 1061 may be configured in the form of a liquid crystal display (LCD), an organic light-emitting diode (OLED), or the like.
- the user input unit 107 may be used to receive input numeric or character information, and generate key signal input related to user settings and function control of the terminal device.
- the user input unit 107 includes a touch panel 1071 and other input devices 1072.
- the touch panel 1071, also known as a touch screen, can collect the user's touch operations on or near it (for example, operations performed by the user on or near the touch panel 1071 with a finger, a stylus, or any other suitable object or accessory).
- the touch panel 1071 may include a touch detection device and a touch controller.
- the touch detection device detects the user's touch orientation, detects the signal brought by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into contact coordinates, and sends them to the processor 110, and then receives and executes the commands sent by the processor 110.
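- The detection-then-conversion pipeline just described can be sketched, framework-free and with invented names, as follows: a raw touch sample is converted into contact coordinates by a controller object and handed to a processor-side callback.

```kotlin
// Framework-free, illustrative sketch of the touch controller pipeline above.

// Raw sample as reported by the touch detection device; rawX/rawY are assumed
// to be normalized to the 0..1 range for this sketch.
data class TouchSample(val rawX: Float, val rawY: Float, val pressure: Float)
data class Contact(val x: Int, val y: Int)

class TouchController(private val sendToProcessor: (Contact) -> Unit) {
    fun onTouchDetected(sample: TouchSample, panelWidthPx: Int, panelHeightPx: Int) {
        // Convert the detected signal into contact coordinates, then forward
        // them, mirroring the controller-to-processor hand-off described above.
        val contact = Contact(
            x = (sample.rawX * panelWidthPx).toInt().coerceIn(0, panelWidthPx - 1),
            y = (sample.rawY * panelHeightPx).toInt().coerceIn(0, panelHeightPx - 1)
        )
        sendToProcessor(contact)
    }
}
```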
- the touch panel 1071 may be implemented in various types such as resistive, capacitive, infrared, and surface acoustic waves.
- the user input unit 107 may also include other input devices 1072.
- other input devices 1072 may include, but are not limited to, physical keyboards, function keys (such as volume control keys, switch keys, etc.), trackballs, mice, and joysticks, which will not be repeated here.
- the touch panel 1071 may be overlaid on the display panel 1061.
- after the touch panel 1071 detects a touch operation on or near it, the touch operation is transmitted to the processor 110 to determine the type of the touch event, and the processor 110 then provides a corresponding visual output on the display panel 1061 according to the type of the touch event.
- although the touch panel 1071 and the display panel 1061 are shown as two independent components to realize the input and output functions of the terminal device, in some embodiments the touch panel 1071 and the display panel 1061 may be integrated to realize the input and output functions of the terminal device; this is not specifically limited here.
- the interface unit 108 is an interface for connecting an external device to the terminal device 100.
- the external device may include a wired or wireless headset port, an external power (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device with an identification module, audio input/output (I/O) port, video I/O port, headphone port, etc.
- the interface unit 108 may be used to receive input (for example, data information or power) from an external device and transmit the received input to one or more elements within the terminal device 100, or may be used to transmit data between the terminal device 100 and an external device.
- the memory 109 may be used to store software programs and various data.
- the memory 109 may mainly include a program storage area and a data storage area; the program storage area may store an operating system, an application program required by at least one function (such as a sound playback function or an image playback function), and the like, and the data storage area may store data created according to the use of the mobile phone (such as audio data or a phone book), and the like.
- the memory 109 may include a high-speed random access memory, and may also include a non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or another non-volatile solid-state storage device.
- the processor 110 is the control center of the terminal device; it uses various interfaces and lines to connect the parts of the entire terminal device, and performs the various functions of the terminal device and processes data by running or executing the software programs and/or modules stored in the memory 109 and calling the data stored in the memory 109, so as to monitor the terminal device as a whole.
- the processor 110 may include one or more processing units; optionally, the processor 110 may integrate an application processor and a modem processor, where the application processor mainly handles the operating system, the user interface, application programs, and the like, and the modem processor mainly handles wireless communication. It can be understood that the foregoing modem processor may alternatively not be integrated into the processor 110.
- the terminal device 100 may further include a power supply 111 (such as a battery) that supplies power to various components.
- the power supply 111 may be logically connected to the processor 110 through a power management system, so that functions such as charging management, discharging management, and power consumption management are implemented through the power management system.
- the terminal device 100 may further include some functional modules that are not shown, which will not be repeated here.
- an embodiment of the present disclosure further provides a terminal device which, with reference to FIG. 18, includes a processor 110, a memory 109, and a computer program stored on the memory 109 and executable on the processor 110. When the computer program is executed by the processor 110, the processes of the foregoing session creation method embodiments are implemented and the same technical effect can be achieved; to avoid repetition, details are not described here again.
- Embodiments of the present disclosure also provide a computer-readable storage medium storing a computer program. When the computer program is executed by a processor, the processes of the foregoing session creation method embodiments are implemented and the same technical effect can be achieved; to avoid repetition, details are not described here again.
- the computer-readable storage medium such as read-only memory (Read-Only Memory, ROM), random access memory (Random Access Memory, RAM), magnetic disk or optical disk, etc.
- the methods in the above embodiments can be implemented by means of software plus a necessary general-purpose hardware platform, and of course can also be implemented by hardware, but in many cases the former is the better implementation.
- the technical solutions of the present disclosure, or the part contributing to the related technologies, can essentially be embodied in the form of a software product; the computer software product is stored in a storage medium (such as a ROM/RAM, a magnetic disk, or an optical disc) and includes several instructions that enable a terminal device (which may be a mobile phone, a computer, a server, an air conditioner, a network device, or the like) to execute the methods described in the various embodiments of the present disclosure.
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- General Engineering & Computer Science (AREA)
- Human Computer Interaction (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Software Systems (AREA)
- Signal Processing (AREA)
- Computer Networks & Wireless Communication (AREA)
- Multimedia (AREA)
- Data Mining & Analysis (AREA)
- Business, Economics & Management (AREA)
- General Business, Economics & Management (AREA)
- Artificial Intelligence (AREA)
- Evolutionary Computation (AREA)
- Evolutionary Biology (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Health & Medical Sciences (AREA)
- General Health & Medical Sciences (AREA)
- Oral & Maxillofacial Surgery (AREA)
- Bioinformatics & Computational Biology (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Life Sciences & Earth Sciences (AREA)
- User Interface Of Digital Computer (AREA)
- Information Transfer Between Computers (AREA)
Abstract
Description
Claims (12)
- A session creation method, wherein the method comprises: receiving a first input performed by a user on a first image including at least one face image; displaying an icon of at least one communication program in response to the first input; receiving a second input of the user; and displaying a conversation interface in response to the second input, the conversation interface comprising M target identifiers; wherein each target identifier is used to indicate a user, the M users indicated by the M target identifiers comprise users indicated by K face images in the at least one face image, the M target identifiers are identifiers in a target communication program corresponding to the second input, M and K are both positive integers, and K is less than or equal to M.
- The method according to claim 1, wherein the displaying an icon of at least one communication program in response to the first input comprises: displaying, in response to the first input, the at least one face image in the first image and the icon of the at least one communication program.
- The method according to claim 2, wherein after the receiving a second input of the user, the method further comprises: displaying N face images and N target identifiers in response to the second input, wherein each face image corresponds to one target identifier, N users indicated by the N target identifiers comprise users indicated by P face images in the at least one face image, the N target identifiers are identifiers in the target communication program, and P is an integer less than or equal to N; and receiving a third input of the user; and the displaying a conversation interface in response to the second input comprises: displaying the conversation interface in response to the third input.
- The method according to claim 3, wherein the third input is a sliding input performed by the user in a preset direction in a blank area other than the area in which the N face images and the N target identifiers are displayed.
- The method according to claim 3, wherein after the receiving the second input, the method further comprises: displaying a preset control; receiving a fourth input performed by the user on the preset control; and displaying T face images and T target identifiers in response to the fourth input; wherein the T face images comprise the N face images, the T target identifiers comprise the N target identifiers, the other face images in the T face images other than the N face images are face images in a second image, the second image is an image corresponding to the fourth input, users indicated by the other target identifiers in the T target identifiers other than the N target identifiers are the users indicated by the other face images, and T is a positive integer.
- The method according to claim 5, wherein the fourth input comprises a first sub-input and a second sub-input; and the displaying T face images and T target identifiers in response to the fourth input comprises: in a case where the N face images and the N target identifiers are displayed in a first area, displaying a shooting preview interface in a second area in response to the first sub-input performed by the user on the preset control; and, in response to the second sub-input performed by the user on the preset control, performing a shooting operation, displaying the captured second image in the second area, and displaying, in the first area, a first face image in the second image, a first target identifier, the N face images, and the N target identifiers.
- The method according to claim 3, wherein after the displaying N face images and N target identifiers, the method further comprises: receiving a fifth input performed by the user on a second face image; and deleting the second face image and the corresponding at least one target identifier in response to the fifth input.
- The method according to claim 3, wherein the displaying a conversation interface in response to the second input comprises: displaying, in response to the second input, the N face images, the N target identifiers, and at least one candidate session identifier, each session identifier being used to indicate an established session; the receiving a third input of the user comprises: receiving a third input performed by the user on a first session identifier; and the displaying the conversation interface in response to the third input comprises: displaying, in response to the third input, a conversation interface comprising all target identifiers in the first session identifier and the N target identifiers; wherein the first session identifier is one of the at least one candidate session identifier.
- The method according to claim 3, wherein after the receiving a second input of the user, the method further comprises: displaying N indication marks corresponding to the N face images; wherein one indication mark is used to indicate a similarity between one face image and a third image, the third image is an image, in at least one target image, whose similarity to the one face image is greater than or equal to a similarity threshold, the at least one target image is an image corresponding to a second target identifier in the target communication program, and the second target identifier is a target identifier corresponding to the one face image.
- A terminal device, wherein the terminal device comprises a receiving module and a display module; the receiving module is configured to receive a first input performed by a user on a first image including at least one face image; the display module is configured to display an icon of at least one communication program in response to the first input received by the receiving module; the receiving module is further configured to receive a second input of the user; and a conversation interface is displayed in response to the second input received by the receiving module, the conversation interface comprising M target identifiers; wherein each target identifier is used to indicate a user, the M users indicated by the M target identifiers comprise users indicated by K face images in the at least one face image, the M target identifiers are identifiers in a target communication program corresponding to the second input, M and K are both positive integers, and K is less than or equal to M.
- A terminal device, wherein the terminal device comprises a processor, a memory, and a computer program stored in the memory and executable on the processor, wherein the computer program, when executed by the processor, implements the steps of the session creation method according to any one of claims 1 to 9.
- A computer-readable storage medium, wherein a computer program is stored on the computer-readable storage medium, and the computer program, when executed by a processor, implements the steps of the session creation method according to any one of claims 1 to 9.
Priority Applications (5)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
EP19905951.0A EP3905037B1 (en) | 2018-12-24 | 2019-12-20 | Session creation method and terminal device |
KR1020217021573A KR102657949B1 (ko) | 2018-12-24 | 2019-12-20 | 세션 생성 방법 및 단말 장치 |
ES19905951T ES2976717T3 (es) | 2018-12-24 | 2019-12-20 | Método de creación de sesión y dispositivo terminal |
JP2021537142A JP7194286B2 (ja) | 2018-12-24 | 2019-12-20 | セッション作成方法及び端末機器 |
US17/357,130 US12028476B2 (en) | 2018-12-24 | 2021-06-24 | Conversation creating method and terminal device |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811584399.7 | 2018-12-24 | ||
CN201811584399.7A CN109766156B (zh) | 2018-12-24 | 2018-12-24 | 一种会话创建方法及终端设备 |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US17/357,130 Continuation US12028476B2 (en) | 2018-12-24 | 2021-06-24 | Conversation creating method and terminal device |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2020135269A1 true WO2020135269A1 (zh) | 2020-07-02 |
Family
ID=66451382
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2019/127140 WO2020135269A1 (zh) | 2018-12-24 | 2019-12-20 | 会话创建方法及终端设备 |
Country Status (7)
Country | Link |
---|---|
US (1) | US12028476B2 (zh) |
EP (1) | EP3905037B1 (zh) |
JP (1) | JP7194286B2 (zh) |
KR (1) | KR102657949B1 (zh) |
CN (1) | CN109766156B (zh) |
ES (1) | ES2976717T3 (zh) |
WO (1) | WO2020135269A1 (zh) |
Families Citing this family (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109766156B (zh) | 2018-12-24 | 2020-09-29 | 维沃移动通信有限公司 | 一种会话创建方法及终端设备 |
CN111835531B (zh) * | 2020-07-30 | 2023-08-25 | 腾讯科技(深圳)有限公司 | 会话处理方法、装置、计算机设备及存储介质 |
CN115378897B (zh) * | 2022-08-26 | 2024-08-06 | 维沃移动通信有限公司 | 临时会话建立方法、装置、电子设备及可读存储介质 |
CN117676312A (zh) * | 2023-12-01 | 2024-03-08 | 维沃移动通信有限公司 | 通讯方法、通讯装置及电子设备 |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105120084A (zh) * | 2015-07-29 | 2015-12-02 | 小米科技有限责任公司 | 基于图像的通信方法及装置 |
US20160315886A1 (en) * | 2014-06-24 | 2016-10-27 | Tencent Technology (Shenzhen) Company Limited | Network information push method, apparatus and system based on instant messaging |
CN106559558A (zh) * | 2015-09-30 | 2017-04-05 | 北京奇虎科技有限公司 | 一种基于图像识别的获取和激活通讯方式的方法及装置 |
CN106791182A (zh) * | 2017-01-20 | 2017-05-31 | 维沃移动通信有限公司 | 一种基于图像的聊天方法及移动终端 |
CN109766156A (zh) * | 2018-12-24 | 2019-05-17 | 维沃移动通信有限公司 | 一种会话创建方法及终端设备 |
Family Cites Families (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR101978205B1 (ko) * | 2012-06-07 | 2019-05-14 | 엘지전자 주식회사 | 이동 단말기 및 그 제어 방법, 이를 위한 기록 매체 |
CN104363166B (zh) * | 2014-11-27 | 2018-09-04 | 小米科技有限责任公司 | 即时通信方法、装置和智能终端 |
KR101632435B1 (ko) * | 2015-10-20 | 2016-06-21 | 이요훈 | 유무선ip기반 gui를 활용한 sns 시스템 및 이를 이용한 통화 방법 |
CN106302137A (zh) * | 2016-10-31 | 2017-01-04 | 努比亚技术有限公司 | 群聊消息处理装置及方法 |
- 2018
  - 2018-12-24: CN CN201811584399.7A patent/CN109766156B/zh active Active
- 2019
  - 2019-12-20: JP JP2021537142A patent/JP7194286B2/ja active Active
  - 2019-12-20: KR KR1020217021573A patent/KR102657949B1/ko active Active
  - 2019-12-20: ES ES19905951T patent/ES2976717T3/es active Active
  - 2019-12-20: WO PCT/CN2019/127140 patent/WO2020135269A1/zh unknown
  - 2019-12-20: EP EP19905951.0A patent/EP3905037B1/en active Active
- 2021
  - 2021-06-24: US US17/357,130 patent/US12028476B2/en active Active
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20160315886A1 (en) * | 2014-06-24 | 2016-10-27 | Tencent Technology (Shenzhen) Company Limited | Network information push method, apparatus and system based on instant messaging |
CN105120084A (zh) * | 2015-07-29 | 2015-12-02 | 小米科技有限责任公司 | 基于图像的通信方法及装置 |
CN106559558A (zh) * | 2015-09-30 | 2017-04-05 | 北京奇虎科技有限公司 | 一种基于图像识别的获取和激活通讯方式的方法及装置 |
CN106791182A (zh) * | 2017-01-20 | 2017-05-31 | 维沃移动通信有限公司 | 一种基于图像的聊天方法及移动终端 |
CN109766156A (zh) * | 2018-12-24 | 2019-05-17 | 维沃移动通信有限公司 | 一种会话创建方法及终端设备 |
Non-Patent Citations (1)
Title |
---|
See also references of EP3905037A4 * |
Also Published As
Publication number | Publication date |
---|---|
KR102657949B1 (ko) | 2024-04-15 |
CN109766156A (zh) | 2019-05-17 |
JP7194286B2 (ja) | 2022-12-21 |
KR20210100171A (ko) | 2021-08-13 |
EP3905037A4 (en) | 2022-02-23 |
JP2022515443A (ja) | 2022-02-18 |
EP3905037B1 (en) | 2024-03-13 |
US12028476B2 (en) | 2024-07-02 |
ES2976717T3 (es) | 2024-08-07 |
US20210320995A1 (en) | 2021-10-14 |
CN109766156B (zh) | 2020-09-29 |
EP3905037A1 (en) | 2021-11-03 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2019137429A1 (zh) | 图片处理方法及移动终端 | |
CN108563378B (zh) | 一种消息管理方法及终端 | |
CN109343755B (zh) | 一种文件处理方法及终端设备 | |
WO2021169954A1 (zh) | 搜索方法及电子设备 | |
WO2020135269A1 (zh) | 会话创建方法及终端设备 | |
WO2019196691A1 (zh) | 一种键盘界面显示方法和移动终端 | |
CN109871164B (zh) | 一种消息发送方法及终端设备 | |
WO2020182035A1 (zh) | 图像处理方法及终端设备 | |
WO2019149028A1 (zh) | 应用程序的下载方法及终端 | |
CN109828731B (zh) | 一种搜索方法及终端设备 | |
WO2021129536A1 (zh) | 图标移动方法及电子设备 | |
CN110868633A (zh) | 一种视频处理方法及电子设备 | |
CN110865745A (zh) | 一种截屏方法及终端设备 | |
WO2020192299A1 (zh) | 信息显示方法及终端设备 | |
WO2021057301A1 (zh) | 文件控制方法及电子设备 | |
CN110196668A (zh) | 信息处理方法和终端设备 | |
CN110768804A (zh) | 一种群组创建方法及终端设备 | |
CN110233929A (zh) | 一种显示控制方法及终端设备 | |
KR20220154825A (ko) | 노트 생성 방법 및 전자기기 | |
CN109542311B (zh) | 一种文件处理方法及电子设备 | |
US12118199B2 (en) | Face picture information display method and terminal device | |
CN110515507A (zh) | 一种图标显示方法及终端 | |
CN111610909B (zh) | 一种截图方法、装置及电子设备 | |
CN111142998B (zh) | 后台应用的分享方法及电子设备 | |
WO2021036504A1 (zh) | 图片删除方法及终端设备 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
 | 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 19905951; Country of ref document: EP; Kind code of ref document: A1 |
 | ENP | Entry into the national phase | Ref document number: 2021537142; Country of ref document: JP; Kind code of ref document: A |
 | NENP | Non-entry into the national phase | Ref country code: DE |
 | ENP | Entry into the national phase | Ref document number: 20217021573; Country of ref document: KR; Kind code of ref document: A |
 | ENP | Entry into the national phase | Ref document number: 2019905951; Country of ref document: EP; Effective date: 20210726 |