CN108769799B - Information processing method and electronic equipment
- Publication number: CN108769799B (application CN201810556320.3A)
- Authority: CN (China)
- Prior art keywords: information, sound, playing, playing strategy, strategy
- Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
- H04N21/439—Processing of audio elementary streams
- H04N21/4394—Processing of audio elementary streams involving operations for analysing the audio stream, e.g. detecting features or characteristics in audio streams
- H04N21/441—Acquiring end-user identification, e.g. using personal code sent by the remote control or by inserting a card
- H04N21/4415—Acquiring end-user identification using biometric characteristics of the user, e.g. by voice recognition or fingerprint scanning
- H04N21/442—Monitoring of processes or resources, e.g. detecting the failure of a recording device, monitoring the downstream bandwidth, the number of times a movie has been viewed, the storage space available from the internal hard disk
- H04N21/44213—Monitoring of end-user related data
- H04N21/44218—Detecting physical presence or behaviour of the user, e.g. using sensors to detect if the user is leaving the room or changes his face expression during a TV program
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Health & Medical Sciences (AREA)
- General Health & Medical Sciences (AREA)
- Social Psychology (AREA)
- Computer Networks & Wireless Communication (AREA)
- Databases & Information Systems (AREA)
- Biomedical Technology (AREA)
- Human Computer Interaction (AREA)
- Theoretical Computer Science (AREA)
- Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)
Abstract
An embodiment of the present application provides an information processing method and an electronic device. The method includes: if the playing strategy for audio content is a first playing strategy, obtaining input information; and determining, based on the input information, a second playing strategy different from the first playing strategy, wherein the minimum sound intensity value output by the second playing strategy at at least one sound frequency value of the audio content differs from the minimum sound intensity value output by the first playing strategy at the same sound frequency value. The information processing method can intelligently select a playing strategy matched to different users and greatly improves the user experience.
Description
Technical Field
The embodiments of the present application relate to the field of intelligent devices, and in particular to an information processing method that can intelligently select a playing strategy matched to different users, and to an electronic device applying the method.
Background
With the diversification of the functions of electronic devices such as mobile phones, computers and smart televisions, users now frequently play audio content such as videos or music on these devices for leisure and entertainment. However, the sound intensity that a current electronic device can output in each frequency band of the audio content is preset, while the auditory response characteristics of different users differ, so the device's playing strategy cannot adapt to each user's auditory response characteristics. The prior art therefore has the technical problem that an electronic device cannot intelligently provide a playing strategy adapted to different users, resulting in a poor user experience.
Summary of the Application
The present application provides an information processing method that can intelligently select a playing strategy matched to different users, and an electronic device applying the method.
In order to solve the above technical problem, an embodiment of the present application provides an information processing method, including:
if the playing strategy of the audio content is the first playing strategy, obtaining input information;
determining a second playback strategy that is different from the first playback strategy based on the input information;
wherein the minimum sound intensity value output by the second playing strategy at at least one sound frequency value of the audio content differs from the minimum sound intensity value output by the first playing strategy at the same sound frequency value.
Preferably, the obtaining of the input information specifically includes:
acquiring information representing an auditory response in a preset sound frequency range, wherein the information representing the auditory response at least comprises, for each of a plurality of sound frequency values, the minimum sound intensity value for which response feedback can be received, and the preset sound frequency range comprises each sound frequency value of the audio content;
the determining of the second playing strategy based on the input information specifically includes:
generating the second playing strategy based on the information representing the auditory response.
Preferably, the acquiring of the information representing the auditory response in the preset sound frequency range specifically includes:
respectively outputting a plurality of pieces of test information with different sound intensity values based on each sound frequency value in the sound frequency range;
collecting one or more of action information, expression information and voice information of a feedback person as the feedback person's feedback information for each piece of test information;
determining the information characterizing the auditory response based on a plurality of the feedback information.
Preferably, the acquiring of the auditory response information in the preset sound frequency range specifically includes:
receiving, from a hearing detection device, biophysical characteristic information generated by the eardrum of a feedback person in response to test information of different sound intensity values, wherein the test information of different sound intensities is output by the hearing detection device for each sound frequency value in the sound frequency range;
determining the information characterizing the auditory response based on the sound frequency value, the test information for different sound intensity values, and corresponding biophysical characteristic information.
Preferably, the obtaining of the input information specifically includes:
acquiring biological characteristic information;
the determining of the second playing strategy based on the input information specifically includes:
determining, based on the biological characteristic information, a third playing strategy matched with the biological characteristic information from candidate playing strategies as the second playing strategy.
Preferably, the acquiring of the biological characteristic information specifically includes:
acquiring a plurality of pieces of biological characteristic information;
the determining of the second play strategy based on the input information specifically includes:
determining a third playing strategy matched with each piece of biological characteristic information from candidate playing strategies respectively on the basis of the acquired biological characteristic information;
determining the second playing strategy based on a plurality of the third playing strategies.
Preferably, the determining of the second playing strategy based on the plurality of third playing strategies specifically includes:
respectively determining a frequency response range corresponding to each third playing strategy based on each third playing strategy;
respectively determining a lowest effective sound intensity range in each frequency response range based on a plurality of the frequency response ranges;
and determining whether the lowest effective sound intensity ranges have intersection, and if not, selecting the third playing strategy corresponding to the minimum value in the lowest effective sound intensity ranges as the second playing strategy.
Preferably, the determining of the second playing strategy based on the plurality of third playing strategies specifically includes:
respectively determining a frequency response range corresponding to each third playing strategy based on each third playing strategy;
determining a minimum value of a plurality of lowest effective sound intensity values corresponding to each sound frequency value based on a plurality of said frequency response ranges;
and generating the second playing strategy based on each sound frequency value and the minimum value.
Preferably, the first and second play strategies have a first difference in a first frequency band and a second difference in a second frequency band, and the first and second differences are different.
Embodiments of the present application also provide an electronic device, including:
processing means for obtaining input information when the playing strategy of audio content is a first playing strategy, and for determining, based on the input information, a second playing strategy different from the first playing strategy;
wherein the sound intensity value output by the second playing strategy in at least one frequency band of the audio content differs from the sound intensity value output by the first playing strategy in the same frequency band.
Based on the disclosure of the above embodiments, it can be known that the embodiments of the present application have the following beneficial effects:
The electronic device effectively solves the technical problem in the prior art that an electronic device can only provide the same playing strategy to every user, resulting in a poor playing effect and a poor user experience.
Drawings
Fig. 1 is a schematic flow chart of an information processing method according to an embodiment of the present invention.
Fig. 2 is a schematic flow chart of an information processing method according to another embodiment of the present invention.
Fig. 3 is a schematic flow chart of an information processing method according to another embodiment of the present invention.
Fig. 4 is a schematic flowchart of an information processing method according to another embodiment of the present invention.
Fig. 5 is a schematic flow chart of an information processing method according to another embodiment of the present invention.
Fig. 6 is a schematic flowchart of an information processing method according to another embodiment of the present invention.
Fig. 7 is a schematic flowchart of an information processing method according to another embodiment of the present invention.
Fig. 8 is a processing state diagram of an information processing method according to another embodiment of the present invention in actual use.
Fig. 9 is another processing state diagram of an information processing method according to another embodiment of the present invention when actually applied.
Fig. 10 is a schematic block diagram of an electronic device according to another embodiment of the present invention.
Fig. 11 is a schematic block diagram of an electronic device according to another embodiment of the present invention.
Detailed Description
Specific embodiments of the present application will be described in detail below with reference to the accompanying drawings, but the present application is not limited thereto.
It will be understood that various modifications may be made to the embodiments disclosed herein. Accordingly, the foregoing description should not be construed as limiting, but merely as exemplifications of embodiments. Other modifications will occur to those skilled in the art within the scope and spirit of the disclosure.
The accompanying drawings, which are incorporated in and constitute a part of the specification, illustrate embodiments of the disclosure and, together with a general description of the disclosure given above, and the detailed description of the embodiments given below, serve to explain the principles of the disclosure.
These and other characteristics of the present application will become apparent from the following description of preferred forms of embodiment, given as non-limiting examples, with reference to the attached drawings.
It should also be understood that, although the present application has been described with reference to some specific examples, a person of skill in the art shall certainly be able to achieve many other equivalent forms of application, having the characteristics as set forth in the claims and hence all coming within the field of protection defined thereby.
The above and other aspects, features and advantages of the present disclosure will become more apparent in view of the following detailed description when taken in conjunction with the accompanying drawings.
Specific embodiments of the present disclosure are described hereinafter with reference to the accompanying drawings; however, it is to be understood that the disclosed embodiments are merely examples of the disclosure, which may be embodied in various forms. Well-known and/or repeated functions and structures have not been described in detail so as not to obscure the present disclosure with unnecessary detail. Therefore, specific structural and functional details disclosed herein are not to be interpreted as limiting, but merely as a basis for the claims and as a representative basis for teaching one skilled in the art to variously employ the present disclosure in virtually any appropriately detailed structure.
The specification may use the phrases "in one embodiment," "in another embodiment," "in yet another embodiment," or "in other embodiments," which may each refer to one or more of the same or different embodiments in accordance with the disclosure.
Hereinafter, embodiments of the present application will be described in detail with reference to the accompanying drawings. Fig. 1 is a schematic flow chart of an information processing method according to an embodiment of the present invention. As shown in fig. 1, an information processing method provided in an embodiment of the present application includes:
s1, if the playing strategy of the audio content is the first playing strategy, obtaining the input information;
s2, determining a second playing strategy different from the first playing strategy based on the input information;
wherein the minimum sound intensity value output by the second playing strategy at at least one sound frequency value of the audio content differs from the minimum sound intensity value output by the first playing strategy at the same sound frequency value.
The specific contents of the first playing strategy and the second playing strategy are not fixed: the first playing strategy is the current playing strategy, and the second playing strategy is the adjusted playing strategy, which is adapted to the auditory response characteristics of the current user. That is, under the second playing strategy the lowest sound intensity value at each frequency point of the audio content equals the lowest sound intensity value at which the user can make an auditory response at that frequency point. Different users have different auditory response characteristics, and this difference is reflected at least in the minimum sound intensity value at some sound frequency value of the audio content. Therefore, in the present application, the second playing strategy differs from the first playing strategy in the minimum sound intensity value at at least one sound frequency value of the audio content. Alternatively, the first playing strategy and the second playing strategy may have a first difference in a first frequency band and a second difference in a second frequency band, the first difference being different from the second difference. That is, if the first playing strategy is the strategy adapted to user A and the second playing strategy is the strategy adapted to user B, the minimum sound intensity values at which user A and user B can make an auditory response differ at every sound frequency value of the audio content, with no regular pattern. Consequently, when the electronic device plays the audio content based on the first playing strategy, the sound intensity value output at each sound frequency value differs from the sound intensity value output at the corresponding sound frequency value when the device plays the audio content based on the second playing strategy.
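For illustration only (the patent does not specify any data structure), a playing strategy can be sketched as a mapping from sound frequency values to the minimum sound intensity values output at those frequencies; the helper below checks the condition stated above, namely that the two strategies differ at at least one sound frequency value. All names and numbers are hypothetical.

```python
def strategies_differ(first: dict, second: dict) -> bool:
    """True if the minimum output intensities differ at at least one common frequency."""
    return any(first[f] != second[f] for f in first.keys() & second.keys())

# Hypothetical strategies: sound frequency value (Hz) -> minimum output sound intensity.
first_play_strategy = {125: 20.0, 1000: 15.0, 8000: 25.0}
second_play_strategy = {125: 30.0, 1000: 15.0, 8000: 18.0}  # adapted to a different user
print(strategies_differ(first_play_strategy, second_play_strategy))  # True
```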
Further, the first playing strategy and the second playing strategy are simply the playing strategies before and after adjustment; they are relative to each other and can therefore swap roles. For example, suppose the electronic device is a smart television (it may equally be a notebook computer, a tablet computer, a mobile phone, and so on). When a first user (not necessarily a human being; the first user may also be an animal, i.e., the first user is in fact a first biological user) is the first to use the smart television after it is turned on, or is the operator when the television first receives an instruction to play a television program, a video, music or the like after being turned on, the first playing strategy is the standard playing strategy of the electronic device and the second playing strategy is the playing strategy adapted to the first user's auditory response characteristics. If the first user then leaves the viewing range of the smart television without turning it off and a second user takes over, the smart television treats the playing strategy adapted to the first user as the first playing strategy and the newly determined playing strategy matched to the second user's auditory response characteristics as the second playing strategy.
Further, in step S1 input information is obtained. The specific content of the input information is not unique, and different input contents may lead to the second playing strategy being determined differently in step S2. For example:
in the first embodiment shown in fig. 2:
s1: the obtained input information is specifically:
s110, acquiring information representing an auditory response in a preset sound frequency range, wherein the information at least comprises, for each of a plurality of sound frequency values, the minimum sound intensity value at which an auditory response can be received, and the preset sound frequency range covers every frequency band of the audio content;
s2, determining the second playing strategy based on the input information specifically as follows:
and S210, generating a second playing strategy based on the auditory response information.
Specifically, before the input information is obtained, a preset sound frequency range may be set; this range needs to cover all frequency bands of any audio content. For example, the preset sound frequency range may be set to the maximum sound frequency range perceivable by the human ear, i.e., the entire audible range of 20 Hz to 20 kHz. Of course, it may also be narrower, for example the sound frequency range of the audio content encountered in daily life. After the user starts the electronic device, the device can obtain information capable of characterizing an auditory response (i.e., information characterizing the user's auditory response characteristics, hereinafter referred to as auditory response information), for example by collecting it itself or by receiving it from elsewhere. The auditory response information at least includes, for each sound frequency value, the minimum sound intensity value at which the user can make an auditory response and give response feedback. The electronic device then determines, based on the plurality of sound frequency values and the corresponding minimum sound intensity values, a second playing strategy adapted to the user.
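A hedged sketch of step S210, under the assumption that the auditory response information is a list of (sound frequency value, minimum responsive intensity) pairs collected over the preset range; the second playing strategy then simply adopts, at each frequency, the minimum intensity the user responded to. The function name and values are illustrative, not taken from the patent.

```python
def generate_second_play_strategy(auditory_response):
    """Map each sound frequency value to the user's minimum responsive intensity."""
    return {freq_hz: min_intensity for freq_hz, min_intensity in auditory_response}

# Hypothetical auditory response information: (frequency in Hz, minimum responsive intensity).
auditory_response_info = [(250, 22.0), (1000, 12.0), (4000, 18.0), (8000, 30.0)]
print(generate_second_play_strategy(auditory_response_info))
```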
Further, step S110 can be implemented either by collecting or by receiving, as noted above: the electronic device may actively collect the information characterizing the auditory response, or the information may be collected by another device and then input into the electronic device. As shown in fig. 3, taking active collection of the auditory response information as an example:
s110: the acquiring of the information characterizing the auditory response in the preset sound frequency range specifically includes:
s111, respectively outputting a plurality of test information with different sound intensity values based on each sound frequency value in the sound frequency range;
s112, collecting one or more of the action information, expression information and voice information of the feedback person (namely the user; hereinafter referred to as the user) as the user's feedback information for each piece of test information;
s113, determining the information characterizing the auditory response based on the plurality of pieces of feedback information.
Specifically, still taking the electronic device to be a smart television: based on the first sound frequency value in the preset sound frequency range, the smart television outputs test information at a series of sound intensity values from high to low. For example, when the smart television starts its data collection mode, it first outputs the test prompt "Can you hear this?" at a sound intensity value of 10 while displaying the same prompt on the screen, and then collects the user's feedback information. The feedback can be obtained from one or more of the user's action information (for example nodding, shaking the head, a gesture indicating that the test information can or cannot be heard, pressing the confirmation key or negative key of the remote control, and the like), the user's expression information (for example smiling or frowning), and the user's voice information (for example "yes", "I can hear it", "I can't hear it", and the like). If the smart television captures the user nodding while saying "I can hear it", the feedback information indicates that the user heard the test information just output; if it captures the user shaking the head while saying "I can't hear it", the feedback information indicates that the user did not hear it. Alternatively, the user may respond only when the test information is heard and give no response otherwise. By analysing whether the feedback information indicates that the user heard the test information just output, the smart television decides whether to continue outputting test information at a different sound intensity value. If the analysis result is yes, it outputs the test prompt at a sound intensity value of 9, again collects the user's feedback, and if the feedback still indicates that the user heard it, continues with a sound intensity value of 8, and so on, until the user's feedback indicates that the test information just output cannot be heard. At that point the smart television stops outputting test information at further sound intensity values for the current sound frequency value, switches to the next sound frequency value, again outputs test information at the series of sound intensity values from high to low, and continues collecting feedback in the same way, stopping only after every sound frequency value in the preset audio frequency range has been traversed. The smart television then determines, from the collected pieces of feedback information, the minimum sound intensity value for which the user gave positive feedback at each sound frequency value, i.e., the minimum sound intensity value at which the user can make an auditory response at that sound frequency value.
The smart television then determines the final auditory response information based on the determined sound frequency values and the corresponding minimum sound intensity values.
In addition, before deciding to switch to the next sound frequency value, the smart television may output the last test information to the user once more, or output test information with an even lower sound intensity value, i.e., re-test the user to confirm whether collection of feedback information at the current sound frequency value can stop.
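The collection loop described above can be sketched as follows, with user feedback abstracted into a callback that returns whether the just-output test information was heard; the 10-to-1 intensity scale, the confirmation re-test policy and all identifiers are assumptions made only for illustration.

```python
def collect_auditory_response(frequencies, heard,
                              intensities=(10, 9, 8, 7, 6, 5, 4, 3, 2, 1)):
    """Return {frequency: minimum intensity that still received positive feedback}."""
    response = {}
    for freq in frequencies:
        lowest_heard = None
        for intensity in intensities:          # output test information from loud to quiet
            if heard(freq, intensity):         # feedback says the test information was heard
                lowest_heard = intensity
            else:
                break                          # stop once a test tone is missed
        # confirmation re-test before switching to the next sound frequency value
        if lowest_heard is not None and not heard(freq, lowest_heard):
            idx = intensities.index(lowest_heard)
            lowest_heard = intensities[idx - 1] if idx > 0 else None
        if lowest_heard is not None:
            response[freq] = lowest_heard
    return response

# Hypothetical feedback: this "user" hears intensities of 4 and above at every frequency.
print(collect_auditory_response([250, 1000, 4000], lambda f, i: i >= 4))
# {250: 4, 1000: 4, 4000: 4}
```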
Further, taking the receiving of the auditory response information as an example, as shown in fig. 4:
s110: the acquiring of the auditory response information in the preset sound frequency range specifically includes:
s114: receiving, from a hearing detection device, biophysical characteristic information generated by the user's eardrum in response to test information of different sound intensity values, wherein the test information of different sound intensities is output by the hearing detection device for each sound frequency value in the sound frequency range;
s115: information characterizing the auditory response is determined based on the sound frequency values, the test information for different sound intensity values, and the corresponding biophysical characteristic information.
For example, still taking the electronic device to be a smart television: the smart television is communicatively connected in advance to a hearing detection device that collects the user's auditory response information, which provides the basis for subsequently receiving the auditory response information collected and sent by that device; of course, the hearing detection device may instead perform the test first and then establish communication with the smart television to transmit the information. Specifically, the hearing detection device may be, for example, an eardrum sound reflector connected to the smart television through, for example, a USB interface. When the test starts, the hearing detection device outputs a plurality of pieces of test information with different sound intensities for each sound frequency value in the sound frequency range; alternatively, the smart television may output the test information, in which case each sound frequency value must be input into the hearing detection device in advance so that the device can record the corresponding biophysical characteristic information of the eardrum.
Furthermore, the pieces of test information with different sound intensities are output in order from high intensity to low. Each time test information at one sound intensity is output, the hearing detection device detects whether the user's eardrum vibrates in response to the test sound just output; if it does, the device determines that the user can hear test information at that sound intensity. If the hearing detection device detects no eardrum vibration after outputting test information at a given sound intensity value, it determines that test information at that intensity cannot be heard, switches to the next sound frequency value and repeats the above steps, until every sound frequency value in the sound frequency range has been traversed, at which point the detection ends. After the detection is finished, the hearing detection device transmits to the smart television, for each sound frequency value, the information indicating which sound intensity values of test information caused the user's eardrum to vibrate, and the smart television determines the user's auditory response information based on the sound frequency values, the test information of different sound intensity values and the corresponding biophysical characteristic information.
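A minimal sketch of steps s114 and s115, under the assumption that the hearing detection device reports, for each test tone, whether eardrum vibration was detected; the lowest intensity that still produced vibration at each sound frequency value becomes that frequency's entry in the auditory response information. The record format is hypothetical.

```python
from collections import defaultdict

def auditory_response_from_eardrum_records(records):
    """records: iterable of (frequency_hz, intensity, eardrum_vibrated) tuples."""
    vibrating = defaultdict(list)
    for freq, intensity, vibrated in records:
        if vibrated:
            vibrating[freq].append(intensity)
    # lowest intensity at which the eardrum vibrated, i.e. an auditory response occurred
    return {freq: min(values) for freq, values in vibrating.items()}

# Hypothetical records reported by the hearing detection device.
device_records = [(1000, 10, True), (1000, 8, True), (1000, 6, False),
                  (4000, 10, True), (4000, 8, False)]
print(auditory_response_from_eardrum_records(device_records))  # {1000: 8, 4000: 10}
```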
In the second embodiment shown in fig. 5:
s1: the obtaining of the input information is specifically:
s120: acquiring biological characteristic information of a user;
s2: determining the second play strategy based on the input information specifically comprises:
and S220, determining a third playing strategy matched with the biological characteristic information from the candidate playing strategies based on the biological characteristic information as a second playing strategy.
In a specific implementation, still taking the electronic device to be a smart television: a camera, a fingerprint collector or the like may be arranged on the smart television to obtain the biological characteristic information of the current viewer, such as face information or fingerprint information. The identification is not limited to biological characteristic information; it may also be a particular voice command or gesture command of the user, or a signal emitted by a specific key of the remote control that is associated with the user. The smart television then determines, based on the acquired biological characteristic information, a third playing strategy matched with that information from the candidate playing strategies as the second playing strategy, and starts playing the audio content based on the second playing strategy.
The candidate playing strategies may be input into the smart television by the user in advance, or the user's biological characteristic information (or other characteristic information) may be acquired after the user's auditory response information has been obtained as described above, and the two are then stored in the smart television in association with each other. The smart television can store the playing strategies of many different users, so that whenever a different user watches television it can immediately invoke the playing strategy matched to that user to play the audio content.
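As a hedged sketch of this matching step, the candidate playing strategies can be kept in a store keyed by an identifier derived from the biological characteristic (or other) information; falling back to the current strategy when no match exists is an added assumption, not a requirement of the patent.

```python
# Hypothetical store: identifier derived from biometric info -> playing strategy.
candidate_play_strategies = {
    "face:user_a": {250: 20.0, 1000: 12.0, 8000: 25.0},
    "fingerprint:user_b": {250: 28.0, 1000: 15.0, 8000: 30.0},
}

def select_second_play_strategy(identifier, candidates, current_strategy):
    """Return the candidate matched to the identifier, else keep the current strategy."""
    return candidates.get(identifier, current_strategy)

print(select_second_play_strategy("face:user_a", candidate_play_strategies, {}))
```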
However, when the smart television is used in a home, family members often watch television together, and each family member may have stored in the smart television in advance a playing strategy adapted to himself or herself. For this situation, in order to satisfy the viewing effect for all viewers, this embodiment further includes the following method steps, as shown in fig. 6:
s120: the method for acquiring the biological characteristic information of the user specifically comprises the following steps:
s121: obtaining biological characteristic information of a plurality of users;
s2: determining the second play strategy based on the input information specifically comprises:
s211: respectively determining a third playing strategy matched with each piece of biological characteristic information from the candidate playing strategies based on the obtained plurality of pieces of biological characteristic information;
s212: the second playback strategy is determined based on a plurality of third playback strategies.
That is, if the biological characteristic information of multiple users is obtained at the same time, the smart television determines, from the candidate playing strategies and based on the obtained pieces of biological characteristic information, a third playing strategy matched with each piece, and then derives from the determined third playing strategies a second playing strategy adapted to all of the current users simultaneously. If, when obtaining the third playing strategies, some piece of biological characteristic information has no matching playing strategy, a prompt may be sent to the user and the second playing strategy is then determined based on the user's instruction, or the final second playing strategy is determined based on the third playing strategies already obtained, or the unmatched information is simply ignored.
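A brief sketch of the multi-user case, assuming the same keyed store as above: each captured identifier is matched to a third playing strategy, and identifiers without a stored strategy are returned separately so that the device can prompt the user or ignore them.

```python
def match_third_play_strategies(identifiers, candidates):
    """Return (matched third playing strategies, identifiers without a match)."""
    matched, unmatched = [], []
    for identifier in identifiers:
        if identifier in candidates:
            matched.append(candidates[identifier])
        else:
            unmatched.append(identifier)
    return matched, unmatched

# Hypothetical store and captured identifiers.
candidates = {"face:user_a": {1000: 12.0}, "face:user_b": {1000: 18.0}}
third_strategies, missing = match_third_play_strategies(["face:user_a", "face:guest"],
                                                         candidates)
print(len(third_strategies), missing)  # 1 ['face:guest']
```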
Further, when the smart television executes step S212 and determines the second playing strategy based on the plurality of third playing strategies, the guiding principle is to ensure that all current users (i.e., the users for whom a corresponding third playing strategy could be obtained) can hear the specific audio content when it is played, avoiding the situation in which some users cannot hear the sound in at least some frequency band or at some frequency point. When executing step S212, the smart television may therefore encounter two situations: in the first, one of the third playing strategies is selected directly as the second playing strategy; in the second, a new playing strategy is generated from the third playing strategies and used as the second playing strategy. As shown in fig. 7, the specific steps are as follows:
in the first case described above:
s212: determining the second play strategy based on the plurality of third play strategies specifically includes:
s2120: respectively determining frequency response ranges corresponding to the third playing strategies based on the third playing strategies;
s2121: respectively determining a lowest effective sound intensity range in each frequency response range based on the plurality of frequency response ranges;
s2122: determining whether the lowest effective sound intensity ranges intersect, and if not, selecting the third playing strategy corresponding to the lowest of the lowest effective sound intensity ranges as the second playing strategy.
Specifically, the smart television first determines, based on each third playing strategy, the frequency response range corresponding to that strategy, i.e., its sound intensity variation curve over the sound frequency range (its frequency response curve). It then determines, from the plurality of frequency response ranges, the lowest effective sound intensity range within each frequency response range; this range contains, for each frequency point, the lowest sound intensity value to which the user can make an auditory response. Finally, the smart television determines whether the lowest effective sound intensity ranges intersect. Here, "no intersection" means either that the lowest effective sound intensity ranges have no common intersection, or that none of them intersects the lowest of the ranges (i.e., the range whose lowest effective sound intensities are lower than, or mostly lower than, those corresponding to the other ranges); an intersection whose numerical extent lies within a preset range may also be ignored, in which case the lowest range is still treated as having no intersection. If neither type of intersection occurs, the third playing strategy corresponding to the lowest range can be used as the second playing strategy finally adapted to all current users.
For example, suppose the smart television obtains three third playing strategies and determines three lowest effective sound intensity ranges from them. As shown in fig. 8, the three curves a, b and c in the figure represent these three ranges. The figure shows that curves a and b intersect, but neither intersects curve c; since the sound intensity value at every point of curve c is lower than those of curves a and b, curve c is the lowest of the three ranges. The smart television may therefore use the third playing strategy corresponding to curve c as the second playing strategy. If either curve a or curve b intersected curve c, the playing strategy corresponding to curve c could not be selected directly as the second playing strategy, and the final second playing strategy should instead be determined by the following method.
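The first situation can be sketched as follows, assuming each lowest effective sound intensity range is represented as a curve mapping frequency to lowest effective intensity; the preset tolerance for small intersections mentioned above is omitted for brevity.

```python
def pick_lowest_curve(curves):
    """Return the index of a curve strictly below every other curve, else None."""
    for i, candidate in enumerate(curves):
        others = [c for j, c in enumerate(curves) if j != i]
        if all(candidate[f] < other[f] for other in others for f in candidate):
            return i
    return None

# Hypothetical lowest effective sound intensity curves: frequency -> lowest effective intensity.
a = {250: 30, 1000: 22, 4000: 28}
b = {250: 32, 1000: 20, 4000: 30}   # crosses curve a between 250 Hz and 1000 Hz
c = {250: 18, 1000: 12, 4000: 16}   # below both a and b everywhere
print(pick_lowest_curve([a, b, c]))  # 2 -> the third playing strategy behind curve c
```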
Continuing with fig. 7, in the second situation described above:
s212: determining the second play strategy based on the plurality of third play strategies specifically includes:
s2123: respectively determining frequency response ranges corresponding to the third playing strategies based on the third playing strategies;
s2124: determining a minimum value of the plurality of lowest effective sound intensity values corresponding to each sound frequency value based on the plurality of frequency response ranges;
s2125: and generating a second playing strategy based on the sound frequency values and the minimum value.
Specifically, as in the foregoing situation, the smart television first determines the frequency response range corresponding to each third playing strategy, and then determines, from the plurality of frequency response ranges, the minimum of the lowest effective sound intensity values corresponding to each sound frequency value. For example, still taking the three users above, as shown in fig. 9, the smart television determines three frequency response curves a, b and c from the obtained frequency response ranges. Each of the three users has a lowest effective sound intensity value at every sound frequency value, so every sound frequency value corresponds to three lowest effective sound intensity values; for instance, curves a, b and c all have a lowest effective intensity value at the point corresponding to sound frequency value x, and these three values may be identical, only partially identical, or entirely different. The smart television determines the minimum of the three lowest effective sound intensity values corresponding to each sound frequency value, generates a new frequency response curve d from the sound frequency values and the corresponding minima, and determines the final second playing strategy from the new curve d.
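The second situation can be sketched as a pointwise minimum over the users' curves, yielding the new curve d; the curve contents below are hypothetical.

```python
def pointwise_minimum_curve(curves):
    """Return {frequency: minimum of the lowest effective intensities across all curves}."""
    shared_frequencies = set.intersection(*(set(curve) for curve in curves))
    return {f: min(curve[f] for curve in curves) for f in sorted(shared_frequencies)}

# Hypothetical curves for the three users.
a = {250: 30, 1000: 22, 4000: 28}
b = {250: 26, 1000: 25, 4000: 24}
c = {250: 28, 1000: 20, 4000: 26}
print(pointwise_minimum_curve([a, b, c]))  # {250: 26, 1000: 20, 4000: 24}
```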
Further, once the smart television has determined, by either of the two methods above, a playing strategy adapted to the multiple users simultaneously, it may store that playing strategy in association with the biological characteristic information of those users, so that if the same group of users watches television together again, the adapted playing strategy can be determined quickly.
The information processing method according to the embodiments of the present invention has been described in detail above with reference to the embodiments; the electronic device according to the present invention will now be described with reference to the drawings and embodiments. Fig. 10 is a schematic block diagram of an electronic device according to another embodiment of the present invention. The electronic device shown in fig. 10 includes:
the audio playing method comprises an obtaining module 610 and a determining module 620, wherein the obtaining module 610 is used for obtaining input information when a playing strategy of audio content is a first playing strategy, and the determining module 620 determines a second playing strategy different from the first playing strategy based on the input information; the sound intensity value of the second playing strategy at least on a certain frequency band of the audio content is different from the sound intensity value of the first playing strategy on the same frequency band.
The specific contents of the first playing strategy and the second playing strategy are not fixed: the first playing strategy is the current playing strategy, and the second playing strategy is the adjusted playing strategy, which is adapted to the auditory response characteristics of the current user. That is, under the second playing strategy the lowest sound intensity value at each frequency point of the audio content equals the lowest sound intensity value at which the user can make an auditory response at that frequency point. Different users have different auditory response characteristics, and this difference is reflected at least in the minimum sound intensity value at some sound frequency value of the audio content. Therefore, in the present application, the second playing strategy differs from the first playing strategy in the minimum sound intensity value at at least one sound frequency value of the audio content. Alternatively, the first playing strategy and the second playing strategy may have a first difference in a first frequency band and a second difference in a second frequency band, the first difference being different from the second difference. That is, if the first playing strategy is the strategy adapted to user A and the second playing strategy is the strategy adapted to user B, the minimum sound intensity values at which user A and user B can make an auditory response differ at every sound frequency value of the audio content, with no regular pattern. Consequently, when the electronic device plays the audio content based on the first playing strategy, the sound intensity value output at each sound frequency value differs from the sound intensity value output at the corresponding sound frequency value when the device plays the audio content based on the second playing strategy.
Further, the first playing strategy and the second playing strategy are simply the playing strategies before and after adjustment; they are relative to each other and can therefore swap roles. For example, suppose the electronic device is a smart television (it may equally be a notebook computer, a tablet computer, a mobile phone, and so on). When a first user (not necessarily a human being; the first user may also be an animal, i.e., the first user is in fact a first biological user) is the first to use the smart television after it is turned on, or is the operator when the television first receives an instruction to play a television program, a video, music or the like after being turned on, the first playing strategy is the standard playing strategy of the electronic device and the second playing strategy is the playing strategy adapted to the first user's auditory response characteristics. If the first user then leaves the viewing range of the smart television without turning it off and a second user takes over, the smart television treats the playing strategy adapted to the first user as the first playing strategy and the newly determined playing strategy matched to the second user's auditory response characteristics as the second playing strategy.
Further, in step S1 the obtaining module 610 obtains input information. The specific content of the input information is not unique, and different input content may cause the determining module 620 to determine the second playing strategy differently in step S2. For example:
in the first embodiment shown in fig. 2:
s1: the input information obtained by the obtaining module 610 specifically includes:
s110, acquiring information representing an auditory response in a preset sound frequency range, wherein the information at least comprises, for each of a plurality of sound frequency values, the minimum sound intensity value at which an auditory response can be received, and the preset sound frequency range covers every frequency band of the audio content;
s2, the determining module 620 determines the second playing strategy based on the input information specifically as follows:
and S210, generating a second playing strategy based on the auditory response information.
Specifically, before the input information is obtained, a preset sound frequency range may be set; this range needs to cover all frequency bands of any audio content. For example, the preset sound frequency range may be set to the maximum sound frequency range perceivable by the human ear, i.e., the entire audible range of 20 Hz to 20 kHz. Of course, it may also be narrower, for example the sound frequency range of the audio content encountered in daily life. After the user starts the electronic device, the device can obtain information capable of characterizing an auditory response (i.e., information characterizing the user's auditory response characteristics, hereinafter referred to as auditory response information), for example by collecting it itself or by receiving it from elsewhere. The auditory response information at least includes, for each sound frequency value, the minimum sound intensity value at which the user can make an auditory response and give response feedback. The electronic device then determines, based on the plurality of sound frequency values and the corresponding minimum sound intensity values, a second playing strategy adapted to the user.
Further, step S110 can be implemented either by collecting or by receiving, as noted above: the electronic device may actively collect the information characterizing the auditory response, or the information may be collected by another device and then input into the electronic device. As shown in fig. 3, taking active collection of the auditory response information as an example:
s110: the acquiring of the information characterizing the auditory response in the preset sound frequency range specifically includes:
s111, respectively outputting a plurality of test information with different sound intensity values based on each sound frequency value in the sound frequency range;
s112, collecting one or more of the action information, expression information and voice information of the feedback person (namely the user; hereinafter referred to as the user) as the user's feedback information for each piece of test information;
s113, determining the information characterizing the auditory response based on the plurality of pieces of feedback information.
Specifically, still taking the electronic device to be a smart television: based on the first sound frequency value in the preset sound frequency range, the smart television outputs test information at a series of sound intensity values from high to low. For example, when the smart television starts its data collection mode, it first outputs the test prompt "Can you hear this?" at a sound intensity value of 10 while displaying the same prompt on the screen, and then collects the user's feedback information. The feedback can be obtained from one or more of the user's action information (for example nodding, shaking the head, a gesture indicating that the test information can or cannot be heard, pressing the confirmation key or negative key of the remote control, and the like), the user's expression information (for example smiling or frowning), and the user's voice information (for example "yes", "I can hear it", "I can't hear it", and the like). If the smart television captures the user nodding while saying "I can hear it", the feedback information indicates that the user heard the test information just output; if it captures the user shaking the head while saying "I can't hear it", the feedback information indicates that the user did not hear it. By analysing whether the feedback information indicates that the user heard the test information just output, the smart television decides whether to continue outputting test information at a different sound intensity value. If the analysis result is yes, it outputs the test prompt at a sound intensity value of 9, again collects the user's feedback, and if the feedback still indicates that the user heard it, continues with a sound intensity value of 8, and so on, until the user's feedback indicates that the test information just output cannot be heard. At that point the smart television stops outputting test information at further sound intensity values for the current sound frequency value, switches to the next sound frequency value, again outputs test information at the series of sound intensity values from high to low, and continues collecting feedback in the same way, stopping only after every sound frequency value in the preset audio frequency range has been traversed. The smart television then determines, from the collected pieces of feedback information, the minimum sound intensity value for which the user gave positive feedback at each sound frequency value, i.e., the minimum sound intensity value at which the user can make an auditory response at that sound frequency value. Finally, the smart television determines the final auditory response information based on the determined sound frequency values and the corresponding minimum sound intensity values.
In addition, before deciding to switch to the next sound frequency value, the smart television may output the last test information to the user once more, or output test information with an even lower sound intensity value, i.e., re-test the user to confirm whether collection of feedback information at the current sound frequency value can stop.
Further, taking the receiving of the auditory response information as an example, as shown in fig. 4:
s110: the acquiring of the auditory response information in the preset sound frequency range specifically includes:
s114: receiving, from a hearing detection device, biophysical characteristic information generated by the user's eardrum in response to test information of different sound intensity values, wherein the test information of different sound intensities is output by the hearing detection device for each sound frequency value in the sound frequency range;
s115: information characterizing the auditory response is determined based on the sound frequency values, the test information for different sound intensity values, and the corresponding biophysical characteristic information.
For example, still taking an electronic device as an intelligent television as an example, the intelligent television is in communication connection with a hearing detection device for collecting hearing response information of a user in advance, so as to provide a basis for subsequently receiving the hearing response information collected and sent by the hearing detection device; of course, the hearing test device may also perform the test first and then communicate with the smart tv to send information to the smart tv. Specifically, the hearing test device may be, for example, an eardrum sound reflector, which may be connected to the smart television through, for example, a USB interface, and when starting the test, the hearing test device outputs a plurality of test information with different sound intensities based on each sound frequency value in the sound frequency range, or the smart television may output the test information, but at the same time, each sound frequency value needs to be input into the hearing test device in advance, so that the subsequent hearing test device correspondingly records the biophysical characteristic information of the eardrum.
Furthermore, the pieces of test information with different sound intensities are output in order of sound intensity from large to small, and each time a piece of test information at one sound intensity is output, the hearing detection device detects whether the user's tympanic membrane vibrates in response to the test sound just output; if it does, the device determines that the user can hear the test information at that sound intensity. If the hearing detection device detects no eardrum vibration after outputting the test information at a given sound intensity value, it determines that the test information at that sound intensity cannot be heard, switches to the next sound frequency value, and repeats the above steps, until every sound frequency value in the sound frequency range has been traversed, at which point the detection ends. After the detection is finished, the hearing detection device transmits to the smart television, for each sound frequency value, the information about which sound intensity values caused the user's eardrum to vibrate, and the smart television determines the user's auditory response information based on the sound frequency values, the test information of different sound intensity values, and the corresponding biophysical characteristic information.
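A minimal sketch of how the television might turn the records received from such a detection device into auditory response information follows; the record layout (frequency, intensity, vibrated-or-not triples) is an assumption made for illustration, not a format defined by the patent.

```python
from collections import defaultdict

def thresholds_from_eardrum_records(records):
    """records: iterable of (frequency_hz, intensity, vibrated) triples
    reported by the hearing detection device.

    Returns {frequency_hz: minimum intensity at which the eardrum vibrated},
    i.e. the per-frequency auditory response information described above.
    """
    heard = defaultdict(list)
    for freq, intensity, vibrated in records:
        if vibrated:                      # eardrum responded to this test tone
            heard[freq].append(intensity)
    return {freq: min(levels) for freq, levels in heard.items()}

# Example with made-up data: at 1 kHz the eardrum stopped responding below 4.
records = [(1000, 10, True), (1000, 7, True), (1000, 4, True), (1000, 3, False)]
print(thresholds_from_eardrum_records(records))   # {1000: 4}
```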
In the second embodiment shown in fig. 5:
S1: The obtaining module 610 obtains the input information specifically as follows:
S120: Acquiring biometric information of a user;
S2: The determining module 620 determines the second playing strategy based on the input information specifically as follows:
S220: Determining, from the candidate playing strategies and based on the biometric information, a third playing strategy matched with the biometric information, as the second playing strategy.
In a specific implementation, still taking the electronic device as a smart television as an example, a camera or a fingerprint collector may be arranged on the smart television; the camera or fingerprint collector serves as the obtaining module 610 and obtains the biometric information of the current viewer, such as face information or fingerprint information. Of course, the input is not limited to biometric information; it may also be a particular voice command or gesture command of the user, or a signal emitted by a specific key of the remote controller that is associated with the user. The smart television then determines, from the candidate playing strategies and based on the acquired biometric information, a third playing strategy matched with the biometric information as the second playing strategy, and starts playing the audio content based on the second playing strategy.
A candidate playing strategy may be input into the smart television by the user in advance, or the user's biometric information (or other characteristic information) may be collected after the user's auditory response information has been acquired as described above, and the two may then be associated and stored in the smart television. In this way, the smart television can store playing strategies for a number of different users, so that when a different user watches television, the smart television can immediately call up the playing strategy matched with that user to play the audio content.
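The association between biometric information and stored playing strategies can be pictured as a keyed lookup, as in the sketch below; the `identify_user` helper (a hypothetical face- or fingerprint-recognition routine) and the fallback to a default strategy are illustrative assumptions rather than details fixed by the patent.

```python
def select_play_strategy(biometric_sample, identify_user, strategy_store,
                         default_strategy):
    """Return the stored playing strategy matched to the recognised viewer.

    strategy_store maps a user identifier to that user's playing strategy,
    e.g. a {frequency: minimum intensity} dict built from a hearing test.
    """
    user_id = identify_user(biometric_sample)        # face / fingerprint match
    # Fall back to the standard (first) playing strategy for unknown viewers.
    return strategy_store.get(user_id, default_strategy)
```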
However, when the smart television is used in a home, family members often watch television together, and each family member may have stored a playing strategy adapted to himself or herself in the smart television in advance. For this situation, in order to satisfy the viewing experience of all viewers, this embodiment further includes the following method steps, as shown in fig. 6:
S120: The acquiring of the biometric information of the user specifically comprises the following steps:
S121: Obtaining biometric information of a plurality of users;
S2: Determining the second playing strategy based on the input information specifically comprises:
S211: Respectively determining, from the candidate playing strategies and based on the obtained pieces of biometric information, a third playing strategy matched with each piece of biometric information;
S212: Determining the second playing strategy based on the plurality of third playing strategies.
That is, if biometric information of multiple users is obtained at the same time, the smart television determines, from the candidate playing strategies and based on the obtained pieces of biometric information, a third playing strategy matched with each piece of biometric information, and then, based on the determined third playing strategies, derives a second playing strategy adapted to all of the current users at once. If, when obtaining the third playing strategies, some piece of biometric information has no matching playing strategy, a prompt may be sent to the user and the second playing strategy then determined according to the user's instruction; alternatively, the final second playing strategy may be determined based only on the third playing strategies that were obtained, or the unmatched biometric information may simply be ignored.
Further, when the smart television executes step S212 and determines the second playing strategy based on the plurality of third playing strategies, the guiding principle is that all current users (i.e., the users for whom a corresponding third playing strategy could be obtained) must be able to hear the audio content when it is played, avoiding the situation in which some users cannot hear the sound in at least a certain frequency band or at a certain frequency point. Accordingly, when executing step S212 the smart television may handle two situations: one is to directly select one of the third playing strategies as the second playing strategy, and the other is to generate a new playing strategy based on the third playing strategies and use it as the second playing strategy. As shown in fig. 7, the specific steps are as follows:
In the first case described above:
S212: Determining the second playing strategy based on the plurality of third playing strategies specifically includes:
S2120: Respectively determining, based on each third playing strategy, the frequency response range corresponding to that third playing strategy;
S2121: Respectively determining, based on the plurality of frequency response ranges, the lowest effective sound intensity range in each frequency response range;
S2122: Determining whether the lowest effective sound intensity ranges intersect, and if not, selecting the third playing strategy corresponding to the minimum among the lowest effective sound intensity ranges as the second playing strategy.
Specifically, the smart television first determines, based on each third playing strategy, the frequency response range corresponding to that strategy, that is, the curve of sound intensity over the sound frequency range (the frequency response curve). It then determines, based on the plurality of frequency response ranges, the lowest effective sound intensity range within each frequency response range, where a lowest effective sound intensity range contains, for each frequency point, the lowest sound intensity value to which the user can make an auditory response. Finally, the smart television determines whether these lowest effective sound intensity ranges intersect. Here "no intersection" means either that the lowest effective sound intensity ranges have no common intersection, or that none of them intersects the minimum among the lowest effective sound intensity ranges (that is, the range whose lowest effective sound intensities are lower than, or mostly lower than, those of the other ranges); an intersection whose numerical extent falls within a preset range may also be ignored, in which case the minimum is still treated as having no intersection. If neither kind of intersection occurs, the third playing strategy corresponding to that minimum can be used as the second playing strategy finally adapted to all current users.
For example, suppose the smart television obtains three third playing strategies in total and determines three lowest effective sound intensity ranges from them. As shown in fig. 8, the three curves a, b and c in the figure represent these three lowest effective sound intensity ranges. From the figure it can be seen that curves a and b intersect each other but neither intersects curve c, and since every point on curve c has a lower sound intensity value than curves a and b, curve c is the minimum among the three lowest effective sound intensity ranges. The smart television may therefore use the third playing strategy corresponding to curve c as the second playing strategy. If either curve a or curve b intersected curve c, the playing strategy corresponding to curve c could not be selected directly as the second playing strategy, and the final second playing strategy would have to be determined by the following method.
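The selection rule of the first case can be sketched as follows, representing each lowest effective sound intensity range as a dict from frequency to threshold; the strict "every point lower" test used here is one simple reading of "no intersection with the minimum curve", and the preset tolerance mentioned in the text is ignored for brevity.

```python
def pick_dominant_strategy(curves):
    """curves: {strategy_id: {frequency: lowest effective intensity}}.

    Assumes every curve is sampled at the same sound frequency values.
    Returns the strategy whose curve lies strictly below every other curve
    at every frequency (the "minimum" with no intersection), or None if no
    such strategy exists and a new strategy must be generated instead.
    """
    for cand_id, cand in curves.items():
        dominated = all(
            cand[f] < other[f]
            for other_id, other in curves.items() if other_id != cand_id
            for f in cand
        )
        if dominated:
            return cand_id            # e.g. curve c in the fig. 8 example
    return None                       # fall back to generating a new strategy
```

In the fig. 8 example this would return the strategy for curve c; if curve a or b dipped below curve c anywhere, it would return None and the second case described below would apply.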
Continuing with fig. 7, in the second case described above:
S212: Determining the second playing strategy based on the plurality of third playing strategies specifically includes:
S2123: Respectively determining, based on each third playing strategy, the frequency response range corresponding to that third playing strategy;
S2124: Determining, based on the plurality of frequency response ranges, the minimum among the lowest effective sound intensity values corresponding to each sound frequency value;
S2125: Generating the second playing strategy based on the sound frequency values and the corresponding minimum values.
Specifically, as in the foregoing embodiment, the smart television first determines the frequency response range corresponding to each third playing strategy, and then determines, based on the plurality of frequency response ranges, the minimum among the lowest effective sound intensity values corresponding to each sound frequency value. For example, still taking the three users above as an example, as shown in fig. 9, the smart television determines three frequency response curves a, b and c from the obtained frequency response ranges; each user has a lowest effective sound intensity value at each sound frequency value, so every sound frequency value has three lowest effective sound intensity values. For instance, the three frequency response curves a, b and c in the figure each have a lowest effective sound intensity value at the point corresponding to the sound frequency value x, and these three values may be entirely the same, partially the same, or entirely different. The smart television determines the minimum of the three lowest effective sound intensity values corresponding to the same sound frequency value, then generates a new frequency response curve d based on each sound frequency value and its corresponding minimum, and determines the final second playing strategy based on the new curve d.
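The second case reduces to a per-frequency minimum across the users' curves, as in this sketch; the dict-of-dicts layout and the assumption that all curves are sampled at the same sound frequency values carry over from the previous example.

```python
def merge_strategies(curves):
    """curves: {strategy_id: {frequency: lowest effective intensity}}.

    Builds the new curve d described above by taking, at each sound frequency
    value, the minimum of the users' lowest effective sound intensity values.
    """
    frequencies = next(iter(curves.values())).keys()
    return {f: min(curve[f] for curve in curves.values()) for f in frequencies}

# Example with made-up thresholds for users a, b and c at two frequencies.
curves = {
    "a": {250: 6, 1000: 4},
    "b": {250: 5, 1000: 7},
    "c": {250: 3, 1000: 5},
}
print(merge_strategies(curves))   # {250: 3, 1000: 4}
```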
Further, once the smart television has determined, by either of the two methods above, a playing strategy adapted to the multiple users at the same time, it may associate that playing strategy with the biometric information of those users and store it, so that if the same group of users watches television together again, the smart television can quickly determine the playing strategy adapted to them.
In addition, the embodiments of the electronic device correspond to the embodiments described above. Fig. 11 is a schematic block diagram of an electronic device according to another embodiment of the present invention; the electronic device comprises:
a processor 710, a memory 720, and a communication bus 730, where the processor 710 is configured to invoke programs and code in the memory 720 via the communication bus 730 to implement the following: when the playing strategy of audio content is a first playing strategy, obtaining input information, and determining, based on the input information, a second playing strategy different from the first playing strategy; wherein the sound intensity value of the second playing strategy in at least a certain frequency band of the audio content is different from the sound intensity value of the first playing strategy in the same frequency band.
The specific contents of the first and second playing strategies are not fixed: the first playing strategy is the current playing strategy, and the second playing strategy is the adjusted playing strategy, adapted to the auditory response characteristics of the current user. That is, in the second playing strategy the lowest sound intensity values at the different frequency points of the audio content equal the lowest sound intensity values at which the user can make an auditory response at those frequency points. Different users have different auditory response characteristics, and the difference is reflected at least in the minimum sound intensity value at some sound frequency value of the audio content. Therefore, in the present application, the minimum sound intensity value at at least one sound frequency value of the audio content differs between the second playing strategy and the first playing strategy. Alternatively, the first and second playing strategies may have a first difference in a first frequency band and a second difference in a second frequency band, with the first difference and the second difference being different. For example, if the first playing strategy is adapted to user A and the second playing strategy is adapted to user B, the minimum sound intensity values at which users A and B can make auditory responses at the various sound frequency values of the audio content are all different and irregular. Therefore, when the electronic device plays the audio content based on the first playing strategy, the sound intensity value output at each sound frequency value differs from the sound intensity value that the electronic device would output at the corresponding sound frequency value when playing the audio content based on the second playing strategy.
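The band-wise comparison between two playing strategies can be illustrated as follows; representing a strategy as a dict from frequency-band label to output sound intensity is an assumption made for the sketch, not a data structure defined by the patent.

```python
def strategy_differences(first, second):
    """first, second: {band: sound intensity value output in that band}.

    Returns {band: second[band] - first[band]} for the bands both strategies
    cover, showing that the per-band differences need not be equal (e.g. a
    first difference in one band and a distinct second difference in another).
    """
    return {band: second[band] - first[band] for band in first if band in second}

# Hypothetical numbers: the two strategies differ by +3 in the low band
# and -2 in the high band, i.e. two unequal differences.
first = {"low": 5, "mid": 6, "high": 8}
second = {"low": 8, "mid": 6, "high": 6}
print(strategy_differences(first, second))   # {'low': 3, 'mid': 0, 'high': -2}
```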
Further, the first and second playing strategies are simply the playing strategies before and after an adjustment; they are relative to one another and may swap roles. For example, suppose the electronic device is a smart television (it could equally be a notebook computer, a tablet computer, a mobile phone, and so on). When a first user (not necessarily a human being; it may also be an animal, i.e., the first user is simply a first biological user) is the first to use the smart television after it is switched on, or is the operator when the television first receives an instruction to play a program, a video, music, or the like after being switched on, the first playing strategy is the standard playing strategy of the electronic device and the second playing strategy is the playing strategy adapted to the first user's auditory response characteristics. If the first user then leaves the viewing range of the smart television without turning it off and a second user takes over, the smart television treats the playing strategy adapted to the first user as the first playing strategy and the newly determined playing strategy matched to the second user's auditory response characteristics as the second playing strategy.
Further, in step S1 the processor 710 obtains the input information. The specific content of the input information is not unique, and different input contents may lead to different ways of determining the second playing strategy in step S2, for example:
In the first embodiment shown in fig. 2:
S1: The input information obtained by the processor 710 is specifically:
S110: Acquiring information characterizing an auditory response in a preset sound frequency range, wherein the information characterizing the auditory response at least comprises, for a plurality of sound frequency values, the corresponding minimum sound intensity values at which response feedback can be received, and the preset sound frequency range covers each frequency band of the audio content;
S2: Determining the second playing strategy based on the input information specifically as follows:
S210: Generating the second playing strategy based on the auditory response information.
Specifically, before the input information is obtained, a preset sound frequency range may be set; this range needs to cover all frequency bands of any audio content. For example, the preset sound frequency range may be set to the widest sound frequency range perceivable by the human ear, that is, the entire audible range of 20 Hz to 20 kHz. Of course, a narrower range may also be used, for example the sound frequency range of the audio content encountered in daily life. After the user starts the electronic device, the electronic device may obtain information capable of characterizing an auditory response (i.e., information characterizing the user's auditory response characteristics, hereinafter referred to as auditory response information), for example by collecting it itself or by receiving it. The auditory response information at least comprises, for each sound frequency value, the minimum sound intensity value at which the user can make an auditory response and give response feedback. The electronic device then determines a second playing strategy adapted to the user based on the plurality of sound frequency values and the corresponding minimum sound intensity values.
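One way to picture step S210 is the sketch below, which builds a playing strategy whose per-frequency floor equals the user's measured minimum audible intensity and then keeps playback above that floor; the strategy representation and the clamping rule in `apply_strategy` are illustrative assumptions, since the patent does not fix how the strategy is applied during playback.

```python
def generate_play_strategy(thresholds):
    """Build the second playing strategy: its minimum output sound intensity at
    each sound frequency value equals the user's measured threshold."""
    return dict(thresholds)

def apply_strategy(strategy, band_levels):
    """Raise any band of the audio content whose level falls below the
    strategy's floor so the user can still hear it (one plausible use of the
    strategy during playback)."""
    return {f: max(level, strategy.get(f, level)) for f, level in band_levels.items()}

# Hypothetical thresholds from a hearing test, applied to one frame's levels.
strategy = generate_play_strategy({250: 4, 1000: 2, 4000: 6})
print(apply_strategy(strategy, {250: 5, 1000: 1, 4000: 5}))  # {250: 5, 1000: 2, 4000: 6}
```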
Further, step S110 may be implemented either by collection or by reception, as described above: the electronic device may actively collect the information characterizing the auditory response, or the information may be collected by another device and then input into the electronic device. As shown in fig. 3, taking active collection of the auditory response information as an example:
S110: The acquiring of the information characterizing the auditory response in the preset sound frequency range specifically comprises:
S111: Respectively outputting a plurality of pieces of test information with different sound intensity values based on each sound frequency value in the sound frequency range;
S112: Collecting one or more of the action information, expression information and voice information of a feedback person (namely a user, hereinafter referred to as the user) as the user's feedback information for each piece of test information;
S113: Determining the information characterizing the auditory response based on the plurality of pieces of feedback information.
Specifically, still taking the electronic device as a smart television as an example, the smart television sequentially outputs test information at a plurality of sound intensity values, from large to small, based on the first sound frequency value in the preset sound frequency range. For example, when the smart television starts the data acquisition mode, it first outputs the test information "Can you hear it?" at a sound intensity value of 10 while displaying the same test information on the screen, and then collects the user's feedback information. The feedback information may be obtained through one or more of the user's action information (for example, nodding, shaking the head, making a gesture indicating that the test information can or cannot be heard, pressing a confirmation or negative key on the remote controller, and the like), the user's expression information (for example, a smile, a frown, and the like), and the user's voice information (for example, "yes", "I can hear it", "I can't hear it", and the like). When the smart television collects information indicating that the user nods while smiling, the feedback information represents that the user heard the test information just output; if the smart television collects the words "I can't hear it" while the user shakes the head, the feedback information represents that the user did not hear the test information just output. Alternatively, the user may choose not to respond when the test information is not heard and respond only when it is heard. By analyzing whether the feedback information represents that the user heard the test information just output, the smart television determines whether to continue outputting test information at a different sound intensity value: if the analysis result is yes, it outputs the test information "Can you hear it?" at a sound intensity value of 9 and again collects the user's feedback information; if the feedback information still represents that the user heard the test information just output, it continues with test information at a sound intensity value of 8, and so on, until the user's feedback information indicates that the test information just output cannot be heard. At that point the smart television stops outputting test information at further sound intensity values for the current sound frequency value, switches to the next sound frequency value, again outputs test information at a plurality of sound intensity values from high to low in sequence, and continues collecting feedback information according to the above steps, stopping only when all sound frequency values in the preset sound frequency range have been traversed. The smart television then determines, from the collected pieces of feedback information, the minimum sound intensity value at which the user gave positive feedback for each sound frequency value, that is, the minimum sound intensity value at which the user can make an auditory response at that sound frequency value, and determines the final auditory response information based on the determined sound frequency values and the corresponding minimum sound intensity values.
In addition, before deciding to switch to the next sound frequency value, the smart television may output the most recently output test information to the user once more, or output test information at an even lower sound intensity value; that is, it checks with the user again to confirm whether to stop collecting feedback information for the current sound frequency value.
Further, taking the receiving of the auditory response information as an example, as shown in fig. 4:
S110: The acquiring of the auditory response information in the preset sound frequency range specifically includes:
S114: Receiving biophysical characteristic information which is obtained by a hearing detection device and is respectively generated by the eardrum of a user in response to test information of different sound intensity values, wherein the test information of different sound intensities is output by the hearing detection device based on each sound frequency value in the sound frequency range;
S115: Determining the information characterizing the auditory response based on the sound frequency values, the test information of different sound intensity values, and the corresponding biophysical characteristic information.
For example, still taking the electronic device as a smart television as an example, the smart television is connected in advance, via a communication link, to a hearing detection device used to collect the user's auditory response information, which provides the basis for subsequently receiving the auditory response information collected and sent by the hearing detection device; of course, the hearing detection device may also perform the test first and only then establish communication with the smart television to send it the information. Specifically, the hearing detection device may be, for example, an eardrum sound reflector connected to the smart television through, for example, a USB interface. When the test starts, the hearing detection device outputs a plurality of pieces of test information with different sound intensities based on each sound frequency value in the sound frequency range; alternatively, the smart television may output the test information, in which case each sound frequency value needs to be input into the hearing detection device in advance so that the device can correspondingly record the biophysical characteristic information of the eardrum.
Furthermore, the pieces of test information with different sound intensities are output in order of sound intensity from large to small, and each time a piece of test information at one sound intensity is output, the hearing detection device detects whether the user's tympanic membrane vibrates in response to the test sound just output; if it does, the device determines that the user can hear the test information at that sound intensity. If the hearing detection device detects no eardrum vibration after outputting the test information at a given sound intensity value, it determines that the test information at that sound intensity cannot be heard, switches to the next sound frequency value, and repeats the above steps, until every sound frequency value in the sound frequency range has been traversed, at which point the detection ends. After the detection is finished, the hearing detection device transmits to the smart television, for each sound frequency value, the information about which sound intensity values caused the user's eardrum to vibrate, and the smart television determines the user's auditory response information based on the sound frequency values, the test information of different sound intensity values, and the corresponding biophysical characteristic information.
In the second embodiment shown in fig. 5:
S1: The processor 710 obtains the input information specifically as follows:
S120: Acquiring biometric information of a user;
S2: The processor 710 determines the second playing strategy based on the input information specifically as follows:
S220: Determining, from the candidate playing strategies and based on the biometric information, a third playing strategy matched with the biometric information, as the second playing strategy.
In a specific implementation, still taking the electronic device as a smart television as an example, a camera, a fingerprint collector or the like may be arranged on the smart television to obtain the biometric information of the current viewer, such as face information or fingerprint information. Of course, the input is not limited to biometric information; it may also be a particular voice command or gesture command of the user, or a signal emitted by a specific key of the remote controller that is associated with the user. The smart television then determines, from the candidate playing strategies and based on the acquired biometric information, a third playing strategy matched with the biometric information as the second playing strategy, and starts playing the audio content based on the second playing strategy.
A candidate playing strategy may be input into the smart television by the user in advance, or the user's biometric information (or other characteristic information) may be collected after the user's auditory response information has been acquired as described above, and the two may then be associated and stored in the smart television. In this way, the smart television can store playing strategies for a number of different users, so that when a different user watches television, the smart television can immediately call up the playing strategy matched with that user to play the audio content.
However, when the smart television is used in a home, family members often watch television together, and each family member may have stored a playing strategy adapted to himself or herself in the smart television in advance. For this situation, in order to satisfy the viewing experience of all viewers, this embodiment further includes the following method steps, as shown in fig. 6:
S120: The acquiring of the biometric information of the user specifically comprises the following steps:
S121: Obtaining biometric information of a plurality of users;
S2: Determining the second playing strategy based on the input information specifically comprises:
S211: Respectively determining, from the candidate playing strategies and based on the obtained pieces of biometric information, a third playing strategy matched with each piece of biometric information;
S212: Determining the second playing strategy based on the plurality of third playing strategies.
That is, if biometric information of multiple users is obtained at the same time, the smart television determines, from the candidate playing strategies and based on the obtained pieces of biometric information, a third playing strategy matched with each piece of biometric information, and then, based on the determined third playing strategies, derives a second playing strategy adapted to all of the current users at once. If, when obtaining the third playing strategies, some piece of biometric information has no matching playing strategy, a prompt may be sent to the user and the second playing strategy then determined according to the user's instruction; alternatively, the final second playing strategy may be determined based only on the third playing strategies that were obtained, or the unmatched biometric information may simply be ignored.
Further, when the smart television executes step S212 and determines the second playing strategy based on the plurality of third playing strategies, the guiding principle is that all current users (i.e., the users for whom a corresponding third playing strategy could be obtained) must be able to hear the audio content when it is played, avoiding the situation in which some users cannot hear the sound in at least a certain frequency band or at a certain frequency point. Accordingly, when executing step S212 the smart television may handle two situations: one is to directly select one of the third playing strategies as the second playing strategy, and the other is to generate a new playing strategy based on the third playing strategies and use it as the second playing strategy. As shown in fig. 7, the specific steps are as follows:
In the first case described above:
S212: Determining the second playing strategy based on the plurality of third playing strategies specifically includes:
S2120: Respectively determining, based on each third playing strategy, the frequency response range corresponding to that third playing strategy;
S2121: Respectively determining, based on the plurality of frequency response ranges, the lowest effective sound intensity range in each frequency response range;
S2122: Determining whether the lowest effective sound intensity ranges intersect, and if not, selecting the third playing strategy corresponding to the minimum among the lowest effective sound intensity ranges as the second playing strategy.
Specifically, the smart television first determines, based on each third playing strategy, the frequency response range corresponding to that strategy, that is, the curve of sound intensity over the sound frequency range (the frequency response curve). It then determines, based on the plurality of frequency response ranges, the lowest effective sound intensity range within each frequency response range, where a lowest effective sound intensity range contains, for each frequency point, the lowest sound intensity value to which the user can make an auditory response. Finally, the smart television determines whether these lowest effective sound intensity ranges intersect. Here "no intersection" means either that the lowest effective sound intensity ranges have no common intersection, or that none of them intersects the minimum among the lowest effective sound intensity ranges (that is, the range whose lowest effective sound intensities are lower than, or mostly lower than, those of the other ranges); an intersection whose numerical extent falls within a preset range may also be ignored, in which case the minimum is still treated as having no intersection. If neither kind of intersection occurs, the third playing strategy corresponding to that minimum can be used as the second playing strategy finally adapted to all current users.
For example, suppose the smart television obtains three third playing strategies in total and determines three lowest effective sound intensity ranges from them. As shown in fig. 8, the three curves a, b and c in the figure represent these three lowest effective sound intensity ranges. From the figure it can be seen that curves a and b intersect each other but neither intersects curve c, and since every point on curve c has a lower sound intensity value than curves a and b, curve c is the minimum among the three lowest effective sound intensity ranges. The smart television may therefore use the third playing strategy corresponding to curve c as the second playing strategy. If either curve a or curve b intersected curve c, the playing strategy corresponding to curve c could not be selected directly as the second playing strategy, and the final second playing strategy would have to be determined by the following method.
Continuing with fig. 7, in the second case described above:
S212: Determining the second playing strategy based on the plurality of third playing strategies specifically includes:
S2123: Respectively determining, based on each third playing strategy, the frequency response range corresponding to that third playing strategy;
S2124: Determining, based on the plurality of frequency response ranges, the minimum among the lowest effective sound intensity values corresponding to each sound frequency value;
S2125: Generating the second playing strategy based on the sound frequency values and the corresponding minimum values.
Specifically, as in the foregoing embodiment, the smart television first determines the frequency response range corresponding to each third playing strategy, and then determines, based on the plurality of frequency response ranges, the minimum among the lowest effective sound intensity values corresponding to each sound frequency value. For example, still taking the three users above as an example, as shown in fig. 9, the smart television determines three frequency response curves a, b and c from the obtained frequency response ranges; each user has a lowest effective sound intensity value at each sound frequency value, so every sound frequency value has three lowest effective sound intensity values. For instance, the three frequency response curves a, b and c in the figure each have a lowest effective sound intensity value at the point corresponding to the sound frequency value x, and these three values may be entirely the same, partially the same, or entirely different. The smart television determines the minimum of the three lowest effective sound intensity values corresponding to the same sound frequency value, then generates a new frequency response curve d based on each sound frequency value and its corresponding minimum, and determines the final second playing strategy based on the new curve d.
Further, once the smart television has determined, by either of the two methods above, a playing strategy adapted to the multiple users at the same time, it may associate that playing strategy with the biometric information of those users and store it, so that if the same group of users watches television together again, the smart television can quickly determine the playing strategy adapted to them.
It can be clearly understood by those skilled in the art that, for convenience and brevity of description, for the electronic device to which the data processing method described above is applied, reference may be made to the corresponding description in the foregoing product embodiments, and details are not repeated herein.
The above embodiments are only exemplary embodiments of the present application, and are not intended to limit the present application, and the protection scope of the present application is defined by the claims. Various modifications and equivalents may be made by those skilled in the art within the spirit and scope of the present application and such modifications and equivalents should also be considered to be within the scope of the present application.
Claims (6)
1. An information processing method comprising:
if the playing strategy of the audio content is the first playing strategy, obtaining input information;
determining a second playback strategy that is different from the first playback strategy based on the input information;
wherein, the minimum sound intensity value correspondingly output by the second playing strategy at least at a certain sound frequency value of the audio content is different from the minimum sound intensity value correspondingly output by the first playing strategy at the same sound frequency value;
wherein the obtaining input information comprises:
obtaining a plurality of biometric information;
the determining, based on the input information, a second playback policy that is different from the first playback policy comprises:
determining, from candidate playing strategies and respectively on the basis of the acquired pieces of biometric information, a third playing strategy matched with each piece of biometric information;
respectively determining a frequency response range corresponding to each third playing strategy based on each third playing strategy;
respectively determining a lowest effective sound intensity range in each frequency response range based on a plurality of the frequency response ranges;
determining whether the lowest effective sound intensity ranges have an intersection, and if not, selecting the third playing strategy corresponding to the minimum value among the lowest effective sound intensity ranges as the second playing strategy; or
respectively determining a frequency response range corresponding to each third playing strategy based on each third playing strategy;
determining a minimum value of a plurality of lowest effective sound intensity values corresponding to each sound frequency value based on a plurality of said frequency response ranges;
and generating the second playing strategy based on each sound frequency value and the minimum value.
2. The method of claim 1, the obtaining input information comprising:
acquiring information representing auditory response in a preset sound frequency range, wherein the information representing auditory response at least comprises a minimum sound intensity value which can receive response feedback corresponding to a plurality of sound frequency values, and the preset sound frequency range comprises each sound frequency value of the audio content;
determining a second playback policy that is different from the first playback policy based on the input information includes:
generating the second playback strategy based on the auditory response information.
3. The method according to claim 2, wherein the acquiring of the information characterizing the auditory response in the preset sound frequency range is specifically:
respectively outputting a plurality of pieces of test information with different sound intensity values based on each sound frequency value in the sound frequency range;
collecting one or more of action information, expression information and voice information of a feedback person to serve as the feedback person's feedback information for each piece of test information;
determining the information characterizing the auditory response based on a plurality of the feedback information.
4. The method according to claim 2, wherein the obtaining of the auditory response information in the preset sound frequency range includes:
receiving biophysical characteristic information, obtained by a hearing detection device, that is respectively generated by the eardrum of a feedback person in response to test information of different sound intensity values; the test information of different sound intensities is information which is respectively output by the hearing detection device based on each sound frequency value in the sound frequency range;
determining the information characterizing the auditory response based on the sound frequency value, the test information for different sound intensity values, and corresponding biophysical characteristic information.
5. The method of claim 1, the first and second playback strategies having a first difference in a first frequency band and a second difference in a second frequency band, and the first and second differences being different.
6. An electronic device, comprising:
processing means for obtaining input information when a playback policy of audio content is a first playback policy, and determining a second playback policy different from the first playback policy based on the input information;
wherein, the sound intensity value of the second playing strategy at least on a certain frequency band of the audio content is different from the sound intensity value of the first playing strategy on the same frequency band;
wherein the obtaining input information comprises:
obtaining a plurality of biometric information;
the determining, based on the input information, a second playback policy that is different from the first playback policy comprises:
determining, from candidate playing strategies and respectively on the basis of the acquired pieces of biometric information, a third playing strategy matched with each piece of biometric information;
respectively determining a frequency response range corresponding to each third playing strategy based on each third playing strategy;
respectively determining a lowest effective sound intensity range in each frequency response range based on a plurality of the frequency response ranges;
determining whether the lowest effective sound intensity ranges have an intersection, and if not, selecting the third playing strategy corresponding to the minimum value among the lowest effective sound intensity ranges as the second playing strategy; or
respectively determining a frequency response range corresponding to each third playing strategy based on each third playing strategy;
determining a minimum value of a plurality of lowest effective sound intensity values corresponding to each sound frequency value based on a plurality of said frequency response ranges;
and generating the second playing strategy based on each sound frequency value and the minimum value.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810556320.3A CN108769799B (en) | 2018-05-31 | 2018-05-31 | Information processing method and electronic equipment |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810556320.3A CN108769799B (en) | 2018-05-31 | 2018-05-31 | Information processing method and electronic equipment |
Publications (2)
Publication Number | Publication Date |
---|---|
CN108769799A CN108769799A (en) | 2018-11-06 |
CN108769799B true CN108769799B (en) | 2021-06-15 |
Family
ID=64002003
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810556320.3A Active CN108769799B (en) | 2018-05-31 | 2018-05-31 | Information processing method and electronic equipment |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108769799B (en) |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111061599B (en) * | 2019-12-06 | 2023-08-01 | 携程旅游网络技术(上海)有限公司 | Method for generating check point of interface test environment |
CN113556594A (en) * | 2020-04-26 | 2021-10-26 | 阿里巴巴集团控股有限公司 | Audio and video signal playing method, information display method, device and medium |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2010021953A (en) * | 2008-07-14 | 2010-01-28 | Sharp Corp | Video/audio output device and television receiver |
CN104144374A (en) * | 2013-05-06 | 2014-11-12 | 展讯通信(上海)有限公司 | Listening assisting method and system based on mobile device |
CN104365085A (en) * | 2012-06-12 | 2015-02-18 | 三星电子株式会社 | Method for processing audio signal and audio signal processing apparatus adopting the same |
Family Cites Families (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
AU1961801A (en) * | 1999-09-28 | 2001-05-10 | Sound Id | Internet based hearing assessment methods |
JP4131255B2 (en) * | 2004-07-05 | 2008-08-13 | ヤマハ株式会社 | Audio playback device |
US8031891B2 (en) * | 2005-06-30 | 2011-10-04 | Microsoft Corporation | Dynamic media rendering |
US20080129520A1 (en) * | 2006-12-01 | 2008-06-05 | Apple Computer, Inc. | Electronic device with enhanced audio feedback |
US9138178B2 (en) * | 2010-08-05 | 2015-09-22 | Ace Communications Limited | Method and system for self-managed sound enhancement |
CN102682761A (en) * | 2011-03-12 | 2012-09-19 | 谢津 | Personalized system and device for sound processing |
US9560445B2 (en) * | 2014-01-18 | 2017-01-31 | Microsoft Technology Licensing, Llc | Enhanced spatial impression for home audio |
CN105262887B (en) * | 2015-09-07 | 2020-05-05 | 惠州Tcl移动通信有限公司 | Mobile terminal and audio setting method thereof |
- 2018-05-31 CN CN201810556320.3A patent/CN108769799B/en active Active
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2010021953A (en) * | 2008-07-14 | 2010-01-28 | Sharp Corp | Video/audio output device and television receiver |
CN104365085A (en) * | 2012-06-12 | 2015-02-18 | 三星电子株式会社 | Method for processing audio signal and audio signal processing apparatus adopting the same |
CN104144374A (en) * | 2013-05-06 | 2014-11-12 | 展讯通信(上海)有限公司 | Listening assisting method and system based on mobile device |
Also Published As
Publication number | Publication date |
---|---|
CN108769799A (en) | 2018-11-06 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
TWI639114B (en) | Electronic device with a function of smart voice service and method of adjusting output sound | |
CN109803003B (en) | Control method, system and related equipment | |
WO2020192222A1 (en) | Method and device for intelligent analysis of user context and storage medium | |
US20160182944A1 (en) | Television volume control method and system | |
CN108156497B (en) | Control method, control equipment and control system | |
JP2004526374A (en) | Method and apparatus for controlling a media player based on user behavior | |
WO2015135365A1 (en) | Noise control method and device | |
CN109473095A (en) | A kind of intelligent home control system and control method | |
CN104759017A (en) | Sleep assistance system and method of operation thereof | |
CN107395873B (en) | Volume adjusting method and device, storage medium and terminal | |
CN108769799B (en) | Information processing method and electronic equipment | |
CN107547732A (en) | A kind of media play volume adjusting method, device, terminal and storage medium | |
CN103607641A (en) | Method and apparatus for user registration in intelligent television | |
US10219076B2 (en) | Audio signal processing device, audio signal processing method, and storage medium | |
CN111966321A (en) | Volume adjusting method, AR device and storage medium | |
CN110286771A (en) | Interaction method and device, intelligent robot, electronic equipment and storage medium | |
CN113709629A (en) | Frequency response parameter adjusting method, device, equipment and storage medium | |
US20180277136A1 (en) | Image-Based Techniques for Audio Content | |
US9626967B2 (en) | Information processing method and electronic device | |
CN108966002B (en) | Device and method for adjusting volume by using micro expression | |
CN117751585A (en) | Control method and device of intelligent earphone, electronic equipment and storage medium | |
CN105491246A (en) | Photographing processing method and device | |
CN106603850A (en) | Incoming call reminding method and device | |
CN111814695A (en) | Cleaning equipment and audio information playing method and device | |
CN113873325B (en) | Sound processing method, device, apparatus and computer-readable storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||