CN105809660A - Information processing method and electronic device - Google Patents
- Publication number: CN105809660A (application CN201410849588.8A)
- Authority: CN (China)
- Prior art keywords: image, electronic device, display, image model, display image
- Legal status: Granted (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Abstract
The invention provides an information processing method and an electronic device. While a preset application program is running, an acquisition device captures a first image of a user within a first acquisition range, and the first image is analyzed to obtain a corresponding first display image. The first display image is transmitted to a second electronic device, which then displays it. The image shown on the second electronic device is therefore not the actual facial image of the user of the first electronic device but a virtual image produced by the conversion. Since the first user does not need to select a virtual image to transmit manually, the method is convenient to use and expresses the user's emotion genuinely.
Description
Technical field
The present application relates to the field of electronic technology, and in particular to an information processing method and an electronic device.
Background technology
As electronic technology develops, portable electronic devices such as mobile phones and tablet computers become increasingly common.
Current electronic devices can carry out instant messaging with other electronic devices through instant messaging software; for example, some chat software supports both text and video communication, which greatly facilitates users.
However, to express a current emotion through instant messaging software, a user must manually select the corresponding expression image in the software. This is inconvenient and cannot truly convey the user's emotion.
Summary of the invention
Embodiments of the present invention provide an information processing method and an electronic device to solve the prior art problem that a user who wants to express a current emotion through instant messaging software must manually select the corresponding expression image in the software, which is inconvenient and cannot truly convey the user's emotion.
The specific technical solutions are as follows:
An information processing method, applied to a first electronic device, the method including:
while a predetermined application is running, acquiring, by an acquisition device, a first image of a user within an acquisition range;
analyzing the first image to determine a first display image corresponding to the first image;
sending the first display image to a second electronic device so that the second electronic device displays the first display image, the first display image being a virtual image corresponding to content information of the first image.
Optionally, analyzing the first image to determine the first display image corresponding to the first image includes:
matching the acquired first image against each pre-stored image model;
when the first image matches a first image model, determining, according to a pre-stored correspondence between image models and display images, the first display image corresponding to the first image model.
Optionally, the matching of the acquired first image against the pre-stored image models further includes:
obtaining a difference value between a predetermined region of a first acquired image and a predetermined region of an adjacent second acquired image;
judging whether the difference value is greater than or equal to a predetermined threshold;
if the difference value is greater than or equal to the predetermined threshold, taking the second acquired image as the first image and matching the acquired first image against the pre-stored image models.
Optionally, determining the first display image corresponding to the first image model according to the pre-stored correspondence between image models and display images is specifically: determining, according to a correspondence between image models and virtual animations, a first virtual animation corresponding to the first image model;
and sending the first display image to the second electronic device is specifically:
sending the first virtual animation to the second electronic device so that the second electronic device displays the first virtual animation.
Optionally, determining the first display image corresponding to the first image model according to the pre-stored correspondence between image models and display images is specifically: determining, according to a correspondence between image models and display background content, a first display background content corresponding to the first image model;
and sending the first display image to the second electronic device is specifically:
sending the first display background content to the second electronic device so that the second electronic device displays the first display background content.
Optionally, analyzing the first image to determine the first display image corresponding to the first image includes:
acquiring first sound information by a sound acquisition device;
analyzing the first image and the first sound information to determine the first display image corresponding to the first image and the first sound information.
Optionally, the first image includes a facial image and/or a posture image.
Optionally, the method further includes:
receiving and displaying a second display image sent by the second electronic device, wherein the second display image characterizes the current emotion of the user of the second electronic device.
Optionally, the second display image is any one of, or a combination of two or more of, a second expression image, a second virtual expression animation, and a second display background color.
An electronic device, including:
an image acquisition device for acquiring a first image of a user within an acquisition range;
a processor, connected with the image acquisition device, for analyzing the first image to determine a first display image corresponding to the first image; and
a transmitter, connected with the processor, for sending the first display image to a second electronic device so that the second electronic device displays the first display image, the first display image being a virtual image corresponding to content information of the first image.
Optionally, the processor is specifically configured to match the acquired first image against each pre-stored image model and, when the first image matches a first image model, determine, according to a pre-stored correspondence between image models and display images, the first display image corresponding to the first image model.
Optionally, the processor is specifically configured to obtain a difference value between a predetermined region of a first acquired image and a predetermined region of an adjacent second acquired image, judge whether the difference value is greater than or equal to a predetermined threshold, and, if so, take the second acquired image as the first image and match the acquired first image against the pre-stored image models.
Optionally, the device further includes a sound acquisition device for acquiring first sound information;
and the processor is specifically configured to analyze the first image and the first sound information to determine the first display image corresponding to the first image and the first sound information.
Optionally, the device further includes:
a receiver for receiving a second display image sent by the second electronic device and sending the second display image to a display device of the first electronic device so that the display device displays the second display image.
In embodiments of the present invention, when the user of a first electronic device interacts with the user of a second electronic device through instant messaging software, the first electronic device can capture the user's face through an acquisition device, convert the facial image into a virtual image, and send the virtual image to the second electronic device so that the second electronic device displays it. In other words, what the second electronic device displays is not the real facial image of the user of the first electronic device but the converted virtual image. The first user therefore no longer needs to select a virtual image to send manually, which is not only more convenient but also expresses the user's emotion genuinely.
Brief description of the drawings
Fig. 1 is a flow chart of an information processing method in an embodiment of the present invention;
Fig. 2 is a flow chart of a method of analyzing the first image in an embodiment of the present invention;
Fig. 3 is a structural schematic diagram of an electronic device in an embodiment of the present invention.
Detailed description of the invention
Embodiments of the present invention provide an information processing method including: while a predetermined application is running, acquiring, by an acquisition device, a first image of a user within an acquisition range; analyzing the first image to obtain a first display image corresponding to the first image; and sending the first display image to a second electronic device so that the second electronic device displays it.
Specifically, in embodiments of the present invention, when the user of a first electronic device interacts with the user of a second electronic device through instant messaging software, the first electronic device can capture the user's face through an acquisition device, convert the facial image into a virtual image, and send the virtual image to the second electronic device so that the second electronic device displays it. In other words, what the second electronic device displays is not the real facial image of the user of the first electronic device but the converted virtual image. The first user therefore no longer needs to select a virtual image to send manually, which is not only more convenient but also expresses the user's emotion genuinely. Of course, the second electronic device can also carry out the same method flow as the first electronic device.
The technical solutions of the present invention are described in detail below with reference to the accompanying drawings and specific embodiments.
Fig. 1 is a flow chart of an information processing method in an embodiment of the present invention. The method includes:
S101: while a predetermined application is running, acquiring, by an acquisition device, a first image of a user within an acquisition range.
First, the method is applied in a first electronic device. The predetermined application may specifically be instant messaging software, such as chat software or video-calling software; of course, different applications can be set as the predetermined application according to the user's needs. The possibilities are not enumerated one by one in the embodiments of the present invention.
When the first electronic device runs the predetermined application, the acquisition device in the first electronic device acquires a first image of a user within an acquisition range. The acquisition device here may be an image acquisition device, through which the first image of the user is captured. The user here is the user corresponding to the first electronic device, and this user needs to be located within the acquisition range of the image acquisition device.
The first image acquired by the acquisition device may be a facial image of the user, a gesture image of the user, a body posture image of the user, and so on; the possibilities are not enumerated one by one in the embodiments of the present invention.
S102: analyzing the first image to determine a first display image corresponding to the first image.
S103: sending the first display image to a second electronic device.
Specifically, when the acquisition device in the first electronic device has acquired the first image of the user, the first electronic device analyzes the first image according to the method flow shown in Fig. 2:
S201: matching the acquired first image against each pre-stored image model.
Specifically, in embodiments of the present invention, various image models are pre-stored in the first electronic device. The type of image model is determined according to the type of the first image: if the first image is a facial image of the user, the pre-stored image models may be facial image models; if the first image is a gesture image, they may be gesture image models; if the first image is a posture image, they may be posture image models. Of course, the type of the first image may also differ from the type of the pre-stored image models; for example, the first image may be a facial image of the user while the pre-stored image models are cartoon face image models. The type of the pre-stored image models is not limited in the embodiments of the present invention.
When matching the first image against each pre-stored image model, a similarity may specifically be computed between the first image and each image model, and the image model with the highest similarity to the first image is determined as the image model that the first image matches.
For example, when the first image is a facial image of the user, the first electronic device matches the acquired facial image against the preset facial image models, thereby determining the facial image model with the highest similarity to the acquired facial image.
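The patent specifies only that the model with the highest similarity is selected, without naming a similarity measure. A minimal sketch of this step, assuming cosine similarity over flattened pixel arrays (one of many possible measures; the function name is illustrative):

```python
import numpy as np

def match_image_model(first_image, image_models):
    """Return the index of the pre-stored image model with the highest
    similarity to the acquired first image.

    Similarity here is cosine similarity over flattened pixels; the
    patent does not prescribe a particular measure."""
    v = np.asarray(first_image, dtype=float).ravel()
    best_idx, best_sim = -1, -1.0
    for i, model in enumerate(image_models):
        m = np.asarray(model, dtype=float).ravel()
        sim = float(v @ m) / (np.linalg.norm(v) * np.linalg.norm(m) + 1e-12)
        if sim > best_sim:
            best_idx, best_sim = i, sim
    return best_idx
```

In practice the models would more likely be feature templates (e.g. facial landmarks) rather than raw pixel arrays; the lookup structure is the same either way.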
Further, in embodiments of the present invention, the above matching of the first image against the image models may be carried out in real time or not in real time. For example, when the acquisition device acquires images in real time, the first electronic device can match the acquired images against the pre-stored image models in real time, which ensures that the real-time expression of the user of the first electronic device can change the corresponding display image.
Of course, this is optional; in concrete applications, the user can choose, according to his or her own needs, whether the acquired images should be converted into corresponding display images in real time.
Of course, when the first image is a gesture image or a posture image, the same processing can be used.
Optionally, in embodiments of the present invention, step S201 specifically includes: obtaining a difference value between a predetermined region of a first acquired image and a predetermined region of an adjacent second acquired image, where the first and second acquired images are both images captured by the acquisition device. The predetermined region in each image may be the region where the user's face is located, and the difference value can be understood as the pixel difference between the face region of the first acquired image and the face region of the second acquired image; this pixel difference characterizes the change in the user's expression.
In addition, it should be noted that if the acquisition device can capture 30 frames per second, the first acquired image may be the 30th frame and the second acquired image the 31st frame; if there is a third acquired image, it may be the 61st frame. In other words, which frames serve as the first and second acquired images can be determined according to the acquisition device.
The first electronic device then judges whether the difference value is greater than or equal to a predetermined threshold. If so, the second acquired image is taken as the first image and matched against the pre-stored image models; if the difference value is less than the threshold, the acquisition device continues to capture images and no matching is performed. Briefly, the device judges whether the user's facial expression has changed significantly: if it has, image matching is performed; if the change is small, it is not.
For example, after the image acquisition device captures two consecutive frames of the user's face, the first electronic device obtains the difference value between the facial images in the two frames and judges whether it is greater than or equal to the predetermined threshold. If it is, the user's facial expression has changed significantly, so the first electronic device takes the later frame as the first image and matches it against the pre-stored image models; if it is less than the threshold, the expression change is small, so the first electronic device does not perform matching and continues acquiring facial images of the user.
If the first electronic device judges that the difference value is less than the predetermined threshold, it does not perform the matching step. In this way the first electronic device performs image matching only when images that differ significantly have been captured, which avoids repeating the same matching process and thus avoids wasting the first electronic device's resources.
Of course, in embodiments of the present invention, the difference value need not be determined from two consecutive frames; it may also be determined from, for example, the first frame and the fourth or fifth frame, and whether to perform the matching step is then decided according to that difference value. Other possibilities are not illustrated one by one here.
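The frame-difference gate described above can be sketched as follows. This is an illustrative reading of the embodiment, assuming the difference value is the mean absolute pixel difference over the face region (the patent says only "pixel difference"); the function and parameter names are not from the patent:

```python
import numpy as np

def expression_changed(first_frame, second_frame, face_region, threshold):
    """Compare the face region of two acquired frames and return True
    when the mean absolute pixel difference reaches the threshold,
    i.e. when the expression changed enough to warrant re-matching.

    face_region is (y0, y1, x0, x1) in pixel coordinates."""
    y0, y1, x0, x1 = face_region
    a = np.asarray(first_frame, dtype=float)[y0:y1, x0:x1]
    b = np.asarray(second_frame, dtype=float)[y0:y1, x0:x1]
    return float(np.abs(a - b).mean()) >= threshold
```

Matching is then invoked only when this returns True, which is how the embodiment avoids redundant matching passes.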
S202: when the first image matches a first image model, determining, according to a pre-stored correspondence between image models and display images, the first display image corresponding to the first image model.
In embodiments of the present invention, the first electronic device may also store a correspondence between image models and display images. For example, if the image model is a happy face, the corresponding display image may be a happy cartoon image; if the image model is an angry face, the corresponding display image may be an angry cartoon image. Of course, gesture images and posture images can be handled in the same way as facial images.
When the first image matches a pre-stored image model, the first display image corresponding to that first image model can be determined according to the pre-stored correspondence between image models and display images.
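The pre-stored correspondence is, in essence, a lookup table. A minimal sketch using the happy-face and angry-face examples above; the key and value names are illustrative, not taken from the patent:

```python
# Hypothetical correspondence between matched image models and display
# images, mirroring the happy/angry cartoon example in the description.
MODEL_TO_DISPLAY_IMAGE = {
    "happy_face_model": "happy_cartoon_image",
    "angry_face_model": "angry_cartoon_image",
}

def determine_display_image(matched_model):
    """Look up the first display image for the matched image model;
    return None when no correspondence is stored for it."""
    return MODEL_TO_DISPLAY_IMAGE.get(matched_model)
```

The same table shape serves all three cases described below; only the values change (expression images, virtual animations, or display background content).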
Specifically, in embodiments of the present invention, since the display images can be set concretely according to the user's different usage conditions, the setting of display images can be more personalized and the display images more diverse. The display images that an image model may correspond to are described below for different cases.
Case 1:
Determining the first display image corresponding to the first image model according to the pre-stored correspondence between image models and display images may specifically be: determining, according to a correspondence between image models and expression images, a first expression image corresponding to the first image model.
Specifically, what is stored in the first electronic device is a correspondence between image models and expression images. An expression image here is a kind of virtual cartoon image, such as the expression images commonly used in the QQ chat software, or other usable expression images.
After the image model is determined, a first expression image can be determined according to this correspondence, and the first electronic device sends this first expression image to the second electronic device so that the second electronic device displays it.
For example, when the user of the first electronic device chats with the user of the second electronic device through the QQ chat software, the image acquisition device of the first electronic device captures the facial image of the user within the corresponding acquisition range, and the first electronic device matches this facial image against the pre-stored facial image models. If it matches a first facial image model, the first electronic device determines, according to the correspondence between facial image models and expression images, the first expression image corresponding to the first facial image model; here the first expression image may be an expression image in the QQ chat software. The first electronic device then sends the determined first expression image to the second electronic device so that the second electronic device displays it. The user of the first electronic device thus does not need to select an expression image from an expression image library to send to the second electronic device; instead, a first expression image is determined directly from the user's facial image and sent, which not only reduces the user's operating steps but also expresses the user's current emotion more genuinely.
Case 2:
Determining the first display image corresponding to the first image model according to the pre-stored correspondence between image models and display images may specifically be: determining, according to a correspondence between image models and virtual animations, a first virtual animation corresponding to the first image model.
Specifically, what is stored in the first electronic device is a correspondence between image models and virtual animations; a virtual animation here is a kind of virtual cartoon image.
After the image model is determined, a first virtual animation can be determined according to this correspondence, and the first electronic device sends this first virtual animation to the second electronic device so that the second electronic device displays it.
For example, when the user of the first electronic device chats with the user of the second electronic device through the QQ chat software, the image acquisition device of the first electronic device captures the facial image of the user within the corresponding acquisition range, and the first electronic device matches this facial image against the pre-stored facial image models. If it matches a first facial image model, the first electronic device determines, according to the correspondence between facial image models and virtual animations, the first virtual animation corresponding to the first facial image model; here the first virtual animation is an animated image with an expression. The first electronic device then sends the determined first virtual animation to the second electronic device so that the second electronic device displays it. The user of the first electronic device thus does not need to select an expression image from an expression image library to send to the second electronic device; instead, a first virtual animation is determined directly from the user's facial image and sent, which not only reduces the user's operating steps and expresses the user's current emotion more genuinely, but also makes the chat between users more interesting.
Case 3:
Determining the first display image corresponding to the first image model according to the pre-stored correspondence between image models and display images may specifically be: determining, according to a correspondence between image models and display background content, a first display background content corresponding to the first image model.
Specifically, what is stored in the first electronic device is a correspondence between image models and display background content. Display background content here is the display content of a background display interface, such as the background of the display interface in the QQ chat software.
After the image model is determined, a first display background content can be determined according to this correspondence, and the first electronic device sends this first display background content to the second electronic device so that the second electronic device displays it.
For example, when the user of the first electronic device chats with the user of the second electronic device through the QQ chat software, the image acquisition device of the first electronic device captures the facial image of the user within the corresponding acquisition range, and the first electronic device matches this facial image against the pre-stored facial image models. If it matches a first facial image model, the first electronic device determines, according to the correspondence between facial image models and display background content, the first display background content corresponding to the first facial image model; here the first display background content may be background content of the display interface in the QQ chat software. The first electronic device then sends the determined first display background content to the second electronic device so that the second electronic device displays it. Briefly, the first display background content may be the color of the communication interface; that is, through the user's facial image, the first electronic device can inform the second electronic device to adjust the color of the communication interface: for example, the communication interface of the second electronic device is adjusted to red when the user is happy and to black when the user is unhappy.
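The red-when-happy, black-when-unhappy example above can be sketched as a small mapping plus the message informing the second device which color to show. The color choices follow the description; the message format and all names are illustrative assumptions, not from the patent:

```python
# Hypothetical mapping from matched facial image models to the
# communication-interface color the second device should display,
# following the patent's red-when-happy / black-when-unhappy example.
MODEL_TO_BACKGROUND_COLOR = {
    "happy_face_model": "red",
    "unhappy_face_model": "black",
}

def background_message(matched_model, default_color="white"):
    """Build the (illustrative) message that tells the second electronic
    device which interface color to display."""
    color = MODEL_TO_BACKGROUND_COLOR.get(matched_model, default_color)
    return {"type": "display_background", "color": color}
```

Sending only a short color message rather than an image also keeps the transmission lightweight, which fits the patent's aim of conveying emotion without sending the real facial image.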
The user of the first electronic device thus does not need to select an expression image from an expression image library to send to the second electronic device; instead, a first display background content is determined directly from the user's facial image and sent, which not only reduces the user's operating steps but also expresses the user's current emotion more genuinely.
In all three cases above, the captured image of the user is converted into a virtual image, which is then sent to and displayed on the second electronic device. This not only reduces the user's operating steps but also expresses the user's current emotion more genuinely.
Optionally, in embodiments of the present invention, besides determining the corresponding first display image from the first image, the first electronic device can also acquire first sound information through a sound acquisition device and analyze the first image together with the first sound information to determine the first display image corresponding to both.
That is, when the first electronic device captures the first image of the user, it also captures the user's first sound information and then analyzes the first image and the first sound information. The analysis of the first image is as in the flow above, and the analysis of the first sound information can be performed as follows:
Determine the decibel value of the first acoustic information, according to the corresponding relation between decibel value and display image, it is determined that go out the display image to be determined that the decibel value of the first acoustic information is corresponding.Then the first corresponding with the first image for display image to be determined display image is mated by the first electronic equipment, if display image to be determined and first shows in images match, it is determined that the first display image is send the image to the second electronic equipment.
Briefly, when the decibel value of the acoustic information of user is bigger, then illustrating that the emotion of user is comparatively exciting, the first display image now determined is exactly facial expression image out of sorts.Except of course that can also analyze the first acoustic information in other way outside analyzing the first acoustic information decibel value, the tone such as analyzing the first acoustic information is determined.By further determining the first display image in conjunction with first acoustic information of the first electronic equipment correspondence user, it is possible to make the first display image determined more accurate.
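A minimal sketch of the decibel-based analysis above: compute the decibel value of the first sound information, map it to a candidate display image, and keep the image-derived candidate only when the two agree. The decibel threshold and the two expression labels are assumptions for illustration, not values from the patent.

```python
import math

def decibel_value(samples):
    """RMS level of the audio samples, in dB relative to full scale."""
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    return 20.0 * math.log10(max(rms, 1e-12))

def candidate_for_sound(db, excited_threshold=-10.0):
    # Assumed correspondence between decibel value and display image:
    # a loud voice suggests an agitated emotion, a quiet one a calm emotion.
    return "excited" if db >= excited_threshold else "calm"

def confirm_display_image(image_candidate, db):
    """Send the image-derived candidate only if the sound-derived one matches."""
    return image_candidate if image_candidate == candidate_for_sound(db) else None
```

A tone-based analysis, also mentioned above, would replace `decibel_value` with a pitch estimate while keeping the same confirmation step.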
Further, in an embodiment of the present invention, after the first electronic device sends the first display image to the second electronic device, the first electronic device also receives and displays a second display image sent by the second electronic device. Here the second display image characterizes the current emotion of the user of the second electronic device, and the second display image may be generated by the second electronic device in the same manner as the first display image, so that the user of the first electronic device and the user of the second electronic device can know each other's emotions at any time. The second display image may be any one or a combination of two or more of a second facial expression image, a second virtual expression animation, and a second display background color.
Based on the same inventive concept, an embodiment of the present invention further provides an electronic device. Fig. 3 is a structural schematic diagram of an electronic device in an embodiment of the present invention; the electronic device includes:
an image acquisition device 301, configured to acquire a first image of a user within an acquisition range;
a processor 302, connected with the image acquisition device 301 and configured to analyze the first image and determine a first display image corresponding to the first image;
a transmitter 303, connected with the processor 302 and configured to send the first display image to a second electronic device, so that the second electronic device displays the first display image, the first display image being a virtual image corresponding to the content information of the first image.
Further, in an embodiment of the present invention, the processor 302 is specifically configured to match the acquired first image against each pre-stored image model; and when the first image matches a first image model, determine, according to the correspondence between the pre-stored image models and display images, the first display image corresponding to the first image model.
Optionally, the processor 302 is specifically configured to obtain a difference value between a predetermined region of an adjacent first acquired image and the predetermined region of a second acquired image, judge whether the difference value is greater than or equal to a predetermined threshold, and if the difference value is greater than or equal to the predetermined threshold, take the second acquired image as the first image and match the acquired first image against the pre-stored image models.
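The frame-difference check above can be sketched as follows: the difference value over the predetermined region of two adjacent acquired images is compared against the predetermined threshold, and only a sufficiently changed frame is taken as the first image and re-matched. Frames here are toy 2-D grayscale arrays and the region format is an assumption for illustration.

```python
def region_difference(frame_a, frame_b, region):
    """Sum of absolute pixel differences over the predetermined region."""
    (x0, y0), (x1, y1) = region
    return sum(
        abs(frame_a[y][x] - frame_b[y][x])
        for y in range(y0, y1)
        for x in range(x0, x1)
    )

def should_rematch(prev_frame, new_frame, region, threshold):
    """True when the new frame changed enough to be taken as the first image."""
    return region_difference(prev_frame, new_frame, region) >= threshold
```

Skipping the model-matching step for near-identical frames avoids redundant work when the user's expression has not changed.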
Optionally, in an embodiment of the present invention, the processor 302 is specifically configured to determine, according to a correspondence between image models and facial expression images, a first facial expression image corresponding to the first image model;
and the transmitter 303 is specifically configured to send the first facial expression image to the second electronic device, so that the second electronic device displays the first facial expression image.
Optionally, in an embodiment of the present invention, the processor 302 is specifically configured to determine, according to a correspondence between image models and virtual animations, a first virtual animation corresponding to the first image model;
and the transmitter 303 is specifically configured to send the first virtual animation to the second electronic device, so that the second electronic device displays the first virtual animation.
Optionally, in an embodiment of the present invention, the processor 302 is specifically configured to determine, according to a correspondence between image models and display background content, first display background content corresponding to the first image model;
and the transmitter 303 is specifically configured to send the first display background content to the second electronic device, so that the second electronic device displays the first display background content.
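The three optional correspondences above (image model to facial expression image, virtual animation, or display background content) amount to simple lookup tables keyed by the matched model. The sketch below is illustrative only; every model identifier and file name in it is a hypothetical placeholder.

```python
# Assumed correspondence tables; none of these names come from the patent.
EXPRESSION_BY_MODEL = {"model_happy": "smile.png", "model_sad": "frown.png"}
ANIMATION_BY_MODEL = {"model_happy": "smile.gif", "model_sad": "frown.gif"}
BACKGROUND_BY_MODEL = {"model_happy": "red", "model_sad": "black"}

def first_display_image(model_id, kind):
    """Resolve the first display image for a matched model and display kind."""
    table = {
        "expression": EXPRESSION_BY_MODEL,
        "animation": ANIMATION_BY_MODEL,
        "background": BACKGROUND_BY_MODEL,
    }[kind]
    return table.get(model_id)  # None when the model has no entry
```

Whichever kind is resolved, the transmitter then sends the result to the second electronic device for display.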
Optionally, in an embodiment of the present invention, the electronic device may further include: a sound acquisition device, configured to acquire first sound information;
the processor 302 being specifically configured to analyze the first image and the first sound information, and determine the first display image corresponding to the first image and the first sound information.
Optionally, in an embodiment of the present invention, the electronic device further includes: a receiver, configured to receive a second display image sent by the second electronic device, and send the second display image to a display device of the first electronic device, so that the display device displays the second display image.
Although preferred embodiments of the present application have been described, those skilled in the art can make additional changes and modifications to these embodiments once they learn of the basic inventive concept. Therefore, the appended claims are intended to be construed as including the preferred embodiments and all changes and modifications that fall within the scope of the present application.
Obviously, those skilled in the art can make various changes and variations to the present application without departing from its spirit and scope. Thus, if these modifications and variations of the present application fall within the scope of the claims of the present application and their technical equivalents, the present application is also intended to include these changes and variations.
Claims (14)
1. An information processing method, characterized in that it is applied to a first electronic device, the method comprising:
when a predetermined application is running, acquiring, by an acquisition device, a first image of a user within an acquisition range;
analyzing the first image and determining a first display image corresponding to the first image;
sending the first display image to a second electronic device, so that the second electronic device displays the first display image, the first display image being a virtual image corresponding to content information of the first image.
2. The method as claimed in claim 1, characterized in that analyzing the first image and determining the first display image corresponding to the first image comprises:
matching the acquired first image against pre-stored image models;
when the first image matches a first image model, determining, according to a correspondence between the pre-stored image models and display images, the first display image corresponding to the first image model.
3. The method as claimed in claim 2, characterized in that matching the acquired first image against the pre-stored image models further comprises:
obtaining a difference value between a predetermined region of an adjacent first acquired image and the predetermined region of a second acquired image;
judging whether the difference value is greater than or equal to a predetermined threshold;
if the difference value is greater than or equal to the predetermined threshold, taking the second acquired image as the first image, and matching the acquired first image against the pre-stored image models.
4. The method as claimed in claim 2, characterized in that determining, according to the correspondence between the pre-stored image models and display images, the first display image corresponding to the first image model specifically comprises: determining, according to a correspondence between image models and virtual animations, a first virtual animation corresponding to the first image model;
and sending the first display image to the second electronic device specifically comprises:
sending the first virtual animation to the second electronic device, so that the second electronic device displays the first virtual animation.
5. The method as claimed in claim 2, characterized in that determining, according to the correspondence between the pre-stored image models and display images, the first display image corresponding to the first image model specifically comprises: determining, according to a correspondence between image models and display background content, first display background content corresponding to the first image model;
and sending the first display image to the second electronic device specifically comprises:
sending the first display background content to the second electronic device, so that the second electronic device displays the first display background content.
6. The method as claimed in claim 1, characterized in that analyzing the first image and determining the first display image corresponding to the first image comprises:
acquiring first sound information by a sound acquisition device;
analyzing the first image and the first sound information, and determining the first display image corresponding to the first image and the first sound information.
7. The method as claimed in any one of claims 1-6, characterized in that the first image comprises a face image and/or a posture image.
8. The method as claimed in claim 1, characterized in that the method further comprises:
receiving and displaying a second display image sent by the second electronic device, wherein the second display image characterizes a current emotion of a user of the second electronic device.
9. The method as claimed in claim 8, characterized in that the second display image is any one or a combination of two or more of a second facial expression image, a second virtual expression animation, and a second display background color.
10. An electronic device, characterized in that it comprises:
an image acquisition device, configured to acquire a first image of a user within an acquisition range;
a processor, connected with the image acquisition device and configured to analyze the first image and determine a first display image corresponding to the first image;
a transmitter, connected with the processor and configured to send the first display image to a second electronic device, so that the second electronic device displays the first display image, the first display image being a virtual image corresponding to content information of the first image.
11. The electronic device as claimed in claim 10, characterized in that the processor is specifically configured to match the acquired first image against each pre-stored image model; and when the first image matches a first image model, determine, according to a correspondence between the pre-stored image models and display images, the first display image corresponding to the first image model.
12. The electronic device as claimed in claim 11, characterized in that the processor is specifically configured to obtain a difference value between a predetermined region of an adjacent first acquired image and the predetermined region of a second acquired image, judge whether the difference value is greater than or equal to a predetermined threshold, and if the difference value is greater than or equal to the predetermined threshold, take the second acquired image as the first image and match the acquired first image against the pre-stored image models.
13. The electronic device as claimed in claim 10, characterized in that it further comprises: a sound acquisition device, configured to acquire first sound information;
the processor being specifically configured to analyze the first image and the first sound information, and determine the first display image corresponding to the first image and the first sound information.
14. The electronic device as claimed in claim 10, characterized in that it further comprises:
a receiver, configured to receive a second display image sent by the second electronic device, and send the second display image to a display device of the first electronic device, so that the display device displays the second display image.
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN201410849588.8A CN105809660B (en) | 2014-12-29 | 2014-12-29 | A kind of information processing method and electronic equipment |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| CN105809660A true CN105809660A (en) | 2016-07-27 |
| CN105809660B CN105809660B (en) | 2019-06-25 |
Family
ID=56421060
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN201410849588.8A Active CN105809660B (en) | 2014-12-29 | 2014-12-29 | A kind of information processing method and electronic equipment |
Country Status (1)
| Country | Link |
|---|---|
| CN (1) | CN105809660B (en) |
Cited By (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN109167723A (en) * | 2018-08-28 | 2019-01-08 | Oppo(重庆)智能科技有限公司 | Processing method, device, storage medium and the electronic equipment of image |
| CN110298326A (en) * | 2019-07-03 | 2019-10-01 | 北京字节跳动网络技术有限公司 | A kind of image processing method and device, storage medium and terminal |
Citations (3)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN103093490A (en) * | 2013-02-02 | 2013-05-08 | 浙江大学 | Real-time facial animation method based on single video camera |
| CN103842991A (en) * | 2011-10-03 | 2014-06-04 | 索尼公司 | Image processing apparatus, image processing method, and program |
| WO2014145722A2 (en) * | 2013-03-15 | 2014-09-18 | Digimarc Corporation | Cooperative photography |
Also Published As
| Publication number | Publication date |
|---|---|
| CN105809660B (en) | 2019-06-25 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| CN109670427B (en) | Image information processing method and device and storage medium | |
| CN103533016B (en) | A kind of broadband network tests the speed and intelligent analysis method | |
| CN101673475B (en) | Method for realizing making-up guidance at terminal and equipment and system | |
| CN104065911B (en) | Display control method and device | |
| US8441514B2 (en) | Method and apparatus for transmitting and receiving data using mobile terminal | |
| CN108513088B (en) | Method and device for group video session | |
| CN105207880B (en) | Group recommending method and device | |
| CN109862301B (en) | Screen video sharing method, device and electronic device | |
| CN103476145B (en) | wireless network connection processing method and device | |
| CN104066009A (en) | Method, device, terminal, server and system for program identification | |
| CN104320586A (en) | Photographing method, system and terminal | |
| CN103905296A (en) | Emotion information processing method and device | |
| EP2892205A1 (en) | Method and device for determining terminal to be shared and system | |
| CN109194866B (en) | Image acquisition method, device, system, terminal equipment and storage medium | |
| CN104301714A (en) | Method and device for detecting channel switching response time of television equipment | |
| JP2010213133A (en) | Conference terminal device, display control method, and display control program | |
| JP2010239499A (en) | COMMUNICATION TERMINAL DEVICE, COMMUNICATION CONTROL DEVICE, COMMUNICATION TERMINAL DEVICE COMMUNICATION CONTROL METHOD, COMMUNICATION CONTROL PROGRAM | |
| CN105809660A (en) | Information processing method and electronic device | |
| EP2555127A2 (en) | Display apparatus for translating conversations | |
| CN110109594A (en) | A kind of draw data sharing method, device, storage medium and equipment | |
| CN115623243A (en) | Display device, terminal device and action following method | |
| CN113419932B (en) | A device performance analysis method and device | |
| CN104104899B (en) | The method and apparatus that information transmits in video conference | |
| CN107870752B (en) | Terminal window wall mounting method, terminal, video wall and system | |
| CN104717123A (en) | Message sending/receiving method and device and message interaction system |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| C06 | Publication | ||
| PB01 | Publication | ||
| C10 | Entry into substantive examination | ||
| SE01 | Entry into force of request for substantive examination | ||
| GR01 | Patent grant | ||