
WO2006028171A1 - Data presentation device, data presentation method, data presentation program, and recording medium containing the program - Google Patents


Info

Publication number
WO2006028171A1
WO2006028171A1 (PCT/JP2005/016515)
Authority
WO
WIPO (PCT)
Prior art keywords
keyword
data
speech
actual data
feature
Prior art date
Application number
PCT/JP2005/016515
Other languages
French (fr)
Japanese (ja)
Inventor
Shigeo Matsui
Original Assignee
Pioneer Corporation
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Pioneer Corporation filed Critical Pioneer Corporation
Priority to JP2006535815A priority Critical patent/JPWO2006028171A1/en
Publication of WO2006028171A1 publication Critical patent/WO2006028171A1/en

Links

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/26 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network
    • G01C21/34 Route searching; Route guidance
    • G01C21/36 Input/output arrangements for on-board computers
    • G01C21/3605 Destination input or retrieval
    • G01C21/3608 Destination input or retrieval using speech input, e.g. using speech recognition
    • G PHYSICS
    • G08 SIGNALLING
    • G08G TRAFFIC CONTROL SYSTEMS
    • G08G1/00 Traffic control systems for road vehicles
    • G08G1/09 Arrangements for giving variable traffic instructions
    • G08G1/0962 Arrangements for giving variable traffic instructions having an indicator mounted inside the vehicle, e.g. giving voice messages
    • G08G1/0968 Systems involving transmission of navigation instructions to the vehicle
    • G08G1/0969 Systems involving transmission of navigation instructions to the vehicle having a display in the form of a map

Definitions

  • The present invention relates to data presentation means, such as destination setting during navigation, and belongs in particular to the field of data presentation devices that search a plurality of pre-stored actual data and present the results.
  • Speech recognition devices that recognize speech uttered by humans are applied to various devices.
  • Such a speech recognition apparatus recognizes the uttered speech by sequentially matching the feature-quantity pattern of the uttered speech against feature-quantity patterns of speech representing recognition-candidate words and phrases (hereinafter referred to as keywords) prepared in advance.
  • In a navigation apparatus that guides the route of a moving body such as a vehicle based on map data, speech recognition is generally used to set a destination or a waypoint.
  • Such a navigation device sets the recognized keyword as the current location, destination, or waypoint (hereinafter referred to as a location), acquires data related to the set location, such as its latitude, longitude, and attributes (hereinafter referred to as actual data), from a database, and uses the acquired data for route setting and route guidance.
  • When the uttered speech does not match any of the plurality of keywords stored in the database, a navigation device is known in which the uttered speech is registered as a related keyword by a user operation (see, for example, Patent Document 1).
  • Patent Document 1: Japanese Unexamined Patent Publication No. 2003-323192
  • The present invention solves an example of the above problem by using a keyword recognized from the uttered speech as a search key for a data search in another database or the like, and thereby provides a data presentation device that improves the recognition rate of uttered speech without complicating operation or increasing the number of keywords.
  • The invention according to claim 1 has a configuration comprising: acquisition means for acquiring a speech component of uttered speech; extraction means for analyzing the speech component and extracting an uttered-speech feature quantity, which is the feature quantity of the speech component; first storage means in which a plurality of keyword feature quantity data indicating feature quantities relating to keyword speech are stored in advance; second storage means in which predetermined actual data are stored in advance in association with name information indicating the names of the actual data; specifying means for specifying at least one keyword based on the uttered-speech feature quantity and the keyword feature quantity data; search means for searching for actual data having the specified keyword in at least a part of the name information; and presentation means for presenting the detected actual data.
  • The invention according to claim 5 has a configuration comprising: an acquisition step of acquiring a speech component of uttered speech; an extraction step of analyzing the speech component and extracting an uttered-speech feature quantity, which is the feature quantity of the speech component; a specifying step of specifying at least one keyword based on the uttered-speech feature quantity and keyword feature quantity data indicating the feature quantities of keyword speech; a search step of searching for actual data having the specified keyword in at least a part of the name information, stored in second storage means, indicating the names of the actual data; and a presentation step of presenting the detected actual data.
  • A further aspect of the invention causes a computer to function as: acquisition means for acquiring a speech component of uttered speech; extraction means for analyzing the speech component and extracting an uttered-speech feature quantity, which is the feature quantity of the speech component; specifying means for specifying at least one keyword based on the uttered-speech feature quantity and keyword feature quantity data indicating feature quantities relating to the keyword speech; search means for searching for actual data having the specified keyword in at least a part of name information indicating the names of predetermined actual data; and presentation means for presenting the detected actual data.
  • The invention according to claim 8 has a configuration comprising: acquisition means for acquiring a speech component of uttered speech; extraction means for analyzing the speech component and extracting an uttered-speech feature quantity, which is the feature quantity of the speech component; first storage means in which a plurality of keyword feature quantity data indicating the feature quantities of keyword speech are stored in advance; second storage means in which predetermined actual data are stored in advance in association with name information indicating the names of the actual data; notification means for notifying an operator of the specified keyword when at least one keyword is specified based on the uttered-speech feature quantity and the keyword feature quantity data; correction means used to correct the notified keyword when the notified keyword does not match the keyword desired by the operator; search means for searching for actual data having the keyword corrected by the correction means in at least a part of the name information; and presentation means for presenting the detected actual data.
  • The invention according to claim 9 has a configuration comprising: an acquisition step of acquiring a speech component of uttered speech; an extraction step of analyzing the speech component and extracting an uttered-speech feature quantity, which is the feature quantity of the speech component; a specifying step of specifying at least one keyword based on the uttered-speech feature quantity and keyword feature quantity data indicating the feature quantities of keyword speech; a notification step of notifying the specified keyword; a correction step of correcting the notified keyword; a search step of searching for actual data having the corrected keyword in at least a part of the name information of predetermined actual data; and a presentation step of presenting the detected actual data.
  • A further aspect of the invention causes a computer to function as: acquisition means for acquiring a speech component of uttered speech; extraction means for analyzing the speech component and extracting an uttered-speech feature quantity, which is the feature quantity of the speech component; specifying means for specifying at least one keyword based on the uttered-speech feature quantity and keyword feature quantity data indicating feature quantities relating to the keyword speech; notification means for notifying the specified keyword; correction means for correcting the notified keyword; search means for searching for actual data having the corrected keyword in at least a part of the name information of predetermined actual data; and presentation means for presenting the detected actual data.
  • FIG. 1 is a block diagram showing a schematic configuration of a navigation device according to an embodiment of the present application.
  • FIG. 2 is a flowchart (I) showing an operation of a point data search process necessary for a route setting process or a route guidance process in the system control unit 250 of the embodiment.
  • FIG. 3 is a flowchart (II) showing an operation of a search process of point data required at the time of route setting processing or route guidance processing in the system control unit 250 of the embodiment.
  • FIG. 1 is a block diagram showing a schematic configuration of the navigation apparatus of the present embodiment according to the present application.
  • The navigation device 100 of the present embodiment includes: a GPS receiving unit 110 that is connected to an antenna AT and receives GPS (Global Positioning System) data; a sensor unit 120 that detects travel data of the vehicle; an interface 130 that calculates the vehicle position based on the GPS data and travel data; a VICS data receiving unit 140 that receives VICS (Vehicle Information Communication System) data; and an operation unit 150 used by the user for settings and for inputting commands to the system.
  • The navigation device 100 further includes: a microphone 160 that collects the speech uttered by the operator; a voice recognition circuit 170 that recognizes, from the uttered speech collected by the microphone 160, a command instructing the system (hereinafter simply referred to as a command); a database 180 storing data used when performing speech recognition; and a map data storage unit 190 in which various data, such as map data and point data described later, are recorded in advance.
  • The navigation device 100 also includes: a buffer memory 210; a display control unit 220 that controls the display unit 200 using the buffer memory 210; an audio processing circuit 230 that generates audio such as route guidance; a speaker 240 that outputs the amplified audio signal from the audio processing circuit 230; a system control unit 250 that controls the entire system as well as each process related to speech recognition; and a ROM/RAM 260. Each unit is connected by a bus B.
  • The operation unit 150 of the present embodiment constitutes the operation means, selection means, and correction means according to the present invention, while the voice recognition circuit 170 constitutes the acquisition means, extraction means, and specifying means according to the present invention.
  • The display unit 200 together with the display control unit 220, or the audio processing circuit 230 together with the speaker 240, constitutes the presentation means and notification means according to the present invention.
  • The GPS receiving unit 110 receives navigation radio waves from a plurality of satellites belonging to the GPS via an antenna (not shown), calculates pseudo-coordinate values of the current position of the moving body based on the received radio waves, and outputs the calculated pseudo-coordinate data to the interface 130 as GPS data.
  • The sensor unit 120 detects travel data including the travel speed, acceleration, and azimuth of the vehicle, and outputs the detected travel data to the interface 130.
  • Specifically, the sensor unit 120 detects the traveling speed of the vehicle, converts the detected speed into speed data in pulse or voltage form, and outputs it to the interface 130. The sensor unit 120 also detects the vertical movement of the vehicle by comparing the gravitational acceleration with the acceleration generated by the vehicle's movement, converts acceleration data indicating the detected movement into pulse or voltage form, and outputs it to the interface 130. Furthermore, the sensor unit 120 includes a so-called gyro sensor that detects the azimuth angle of the vehicle, that is, the traveling direction in which the vehicle is moving, and outputs the detected azimuth angle to the interface 130 as azimuth data in pulse or voltage form.
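  • The speed detection described above can be illustrated with a small sketch. The pulse-counting scheme, the pulses-per-revolution figure, and the tire circumference below are illustrative assumptions, not values from this document, which only says the speed is output in pulse or voltage form.

```python
# Hypothetical sketch of how the sensor unit (120) might turn wheel
# pulses into a speed value for the interface (130). The constants are
# illustrative assumptions, not values from the patent.

PULSES_PER_REVOLUTION = 4    # assumed pulses per wheel revolution
TIRE_CIRCUMFERENCE_M = 1.9   # assumed tire circumference in meters

def speed_from_pulses(pulse_count: int, interval_s: float) -> float:
    """Return speed in km/h from pulses counted over interval_s seconds."""
    revolutions = pulse_count / PULSES_PER_REVOLUTION
    meters = revolutions * TIRE_CIRCUMFERENCE_M
    return meters / interval_s * 3.6  # m/s -> km/h
```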
  • The interface 130 performs interface processing between the sensor unit 120 and GPS receiving unit 110 on one side and the system control unit 250 on the other; it calculates the vehicle position based on the GPS data and travel data and outputs the result to the system control unit 250 as vehicle position data.
  • This vehicle position data is collated with map data in the system control unit 250 and used for navigation-related processing such as map matching and route search.
  • the VICS data receiving unit 140 acquires VICS data by receiving radio waves such as FM multiplex broadcasting, and outputs the acquired VICS data to the system control unit 250.
  • VICS refers to a road traffic information communication system
  • VICS data refers to road traffic information such as traffic jams, accidents, and regulations.
  • The map data storage unit 190 is configured by, for example, a hard disk, and stores pre-recorded map data such as road maps. At the time of route setting or route guidance, the point data and other information necessary for driving guidance are read out, and the various read data are output to the system control unit 250.
  • In the map data storage unit 190, the entire map is divided into a plurality of blocks in a mesh shape, and the map data corresponding to each block is managed as block map data.
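  • The mesh-shaped block management described above can be sketched as follows. The block size and the simple row/column indexing are assumptions for illustration; the document does not specify how blocks are addressed.

```python
# A minimal sketch of mesh-style block management: the whole map is
# split into a grid of blocks, and each block's map data is addressed
# by its block index. The block size is an illustrative assumption.

BLOCK_DEG = 0.5  # assumed block size in degrees of latitude/longitude

def block_index(lat: float, lon: float) -> tuple[int, int]:
    """Return the (row, col) mesh block containing the given coordinate."""
    return int(lat // BLOCK_DEG), int(lon // BLOCK_DEG)
```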
  • The map data storage unit 190 stores, as point data, name data indicating the names of destinations such as parks and stores, location data indicating the positions of those destinations, and facility data such as addresses, in association with road shape data for each location.
  • Specifically, for facilities such as restaurants, department stores, play facilities, tourist attractions, and museums, the map data storage unit 190 stores the name of the facility, genre information indicating the genre (also referred to as an attribute) of the point, such as dining, sightseeing, or amusement, location information indicating the latitude and longitude of the point, and facility data such as the address, telephone number, business days, and business hours.
  • The point data stored in the map data storage unit 190 of the present embodiment is searched by the system control unit 250 and used to specify a point required for route setting or route guidance, such as a destination or the current location.
  • The operation unit 150 includes a remote control device having a number of keys such as various confirmation buttons, selection buttons, and numeric keys, together with a light receiving unit that receives signals transmitted from the remote control device, or an operation panel having similar confirmation buttons, selection buttons, and numeric keys.
  • The operation unit 150 is used to input driver commands such as a vehicle travel information display command and switching of the display on the display unit 200.
  • The operation unit 150 is also used, when the uttered speech recognized by the voice recognition circuit 170 is presented on the display unit 200 as a recognition result in keyword form, to select the displayed keyword, that is, to confirm the command or input value, to correct the presented keyword, and to determine the name of a point whose map or facility is to be displayed on the map.
  • The voice recognition circuit 170 receives the uttered speech input by the operator through the microphone 160, analyzes, using the database 180, the uttered speech input as an operation command of the navigation device 100 or as a point name when searching for point data described later, and outputs the analysis result for display on the display unit 200.
  • the speech recognition circuit 170 of the present embodiment is configured to analyze the input speech using the HMM (Hidden Markov Models) method.
  • Specifically, the speech recognition circuit 170 of the present embodiment extracts the feature quantities of the speech components from the input uttered speech and compares them with the feature quantity data, stored in the database 180, of the keywords to be recognized as commands; a keyword with a predetermined accuracy is thereby specified, in other words, the input utterance is specified as a keyword serving as a command with an instruction or as a point name.
  • More precisely, the speech recognition circuit 170 compares the HMM feature-quantity pattern representing an arbitrary state with the feature quantities of each speech section obtained by dividing the input uttered speech at fixed time intervals, and calculates a similarity indicating the degree of coincidence between the HMM feature-quantity pattern and the feature quantities of each speech section. Then, for each keyword HMM for which a similarity has been calculated, the speech recognition circuit 170 calculates, in a so-called matching process, a cumulative similarity indicating the probability of the HMM connection, that is, of the keyword, and the keyword indicated by the HMM connection having the highest cumulative similarity is recognized as the uttered language.
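  • The matching process described above can be illustrated, in greatly simplified form, as follows. Real HMM decoding aligns phoneme states over time (for example, with the Viterbi algorithm); this sketch only shows the idea of scoring each divided speech section against stored keyword feature patterns and choosing the keyword with the highest cumulative similarity. All feature vectors and keyword names below are illustrative assumptions.

```python
# Greatly simplified sketch of cumulative-similarity keyword matching.
# Not an HMM implementation: it only scores equal-length feature
# sequences section by section and picks the best-scoring keyword.
import math

def similarity(a: list[float], b: list[float]) -> float:
    """Per-section similarity: higher when feature vectors are closer."""
    return 1.0 / (1.0 + math.dist(a, b))

def recognize(sections: list[list[float]],
              keyword_patterns: dict[str, list[list[float]]]) -> str:
    """Return the keyword whose pattern best matches the sections in order."""
    best_kw, best_score = "", float("-inf")
    for kw, pattern in keyword_patterns.items():
        if len(pattern) != len(sections):
            continue  # a real matcher would time-align instead of skipping
        score = sum(similarity(s, p) for s, p in zip(sections, pattern))
        if score > best_score:
            best_kw, best_score = kw, score
    return best_kw
```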
  • In the database 180, a plurality of feature-quantity pattern data based on uttered speech of the keywords to be recognized as point names are stored for searching points necessary for route setting or route guidance. Specifically, voice data of each phoneme uttered by a plurality of humans is acquired in advance, a feature-quantity pattern is extracted for each phoneme, and an HMM for each keyword, generated by learning the feature-quantity pattern data of each phoneme, is stored in advance.
  • The database 180 also stores, for each keyword to be recognized as a point name, the related point data together with name data such as the facility name or place name.
  • The display unit 200 is composed of, for example, a CRT, a liquid crystal display element, or an organic EL (Electro Luminescence) element, and displays map data and point data in various modes under the control of the display control unit 220, with the various states necessary for route guidance, such as the vehicle position, superimposed on the map data or point data.
  • The display unit 200 also displays content information other than the map data or point data under the control of the display control unit 220; when speech recognition is performed, it displays the recognized keyword, and when the recognized keyword is corrected, it displays various information in conjunction with the operation unit 150.
  • Map data or point data is input to the display control unit 220 via the system control unit 250. Based on instructions from the system control unit 250, the display control unit 220 generates the display data to be shown on the display unit 200 as described above, temporarily stores it in the buffer memory 210, reads the display data from the buffer memory 210 at predetermined timing, and performs display control of the display unit 200.
  • In particular, the display control unit 220 of the present embodiment generates display data and performs display control in conjunction with the operation unit 150 when a command is determined from a speech-recognized keyword, described later, or when the recognized keyword is corrected.
  • The audio processing circuit 230 generates an audio signal based on instructions from the system control unit 250 and outputs the generated audio signal through the speaker 240 after amplification. For example, route guidance information, including traffic congestion information and traffic stop information that should be notified directly to the driver, such as the direction of travel at the next intersection, is output to the speaker 240 as an audio signal.
  • The system control unit 250 includes various input/output ports such as a GPS reception port, a key input port, and a display control port, and comprehensively controls the general functions for navigation processing.
  • The system control unit 250 reads a control program stored in the ROM/RAM 260, executes each process, and temporarily holds the data being processed in the ROM/RAM 260, thereby controlling each process of route setting and route guidance.
  • In particular, the system control unit 250 of the present embodiment controls each unit so as to specify a command based on the operator's uttered speech in each process of route setting or route guidance, and performs display processing of the recognized keyword, various processing when the operator confirms a command based on the displayed keyword, and correction operation processing when the operator corrects the displayed keyword.
  • When the system control unit 250 specifies a point necessary for route setting or route guidance, such as a destination or the current location, the keyword for specifying that point is recognized, the map data storage unit 190 is searched based on the recognized keyword, and a search process for detecting the corresponding point data (hereinafter referred to as the point data search process at the time of route setting processing or route guidance processing) is performed; the detected point data is then displayed on the display unit 200 via the display control unit 220.
  • FIG. 2 and FIG. 3 are flow charts showing the operation of the point data search process required during the route setting process or the route guidance process in the system control unit 250 of the present embodiment.
  • First, when an instruction to start the route setting process is input by an operation of the operator and the system control unit 250 receives the instruction (step S11), the system control unit 250 acquires information indicating the current position of the vehicle from the GPS receiving unit 110 and sets it as the starting point of the route (step S12).
  • Next, the system control unit 250 controls the display control unit 220 to cause the display unit 200 to display a prompt for inputting a destination or waypoint, and waits for the operator's destination input (step S13).
  • At this time, the system control unit 250 may control the audio processing circuit 230 so that the speaker 240 announces a prompt to input a destination or waypoint.
  • Next, when the speech recognition circuit 170 detects that the operator's uttered speech has been input via the microphone 160 (step S14), the system control unit 250 causes the speech recognition circuit 170 to execute, using the database 180, a process of specifying the corresponding keyword (hereinafter referred to as voice recognition processing) (step S15).
  • Specifically, the speech recognition circuit 170 extracts the feature quantities of the speech components of the input uttered speech, compares them sequentially with the feature quantity data of the keywords to be recognized that are stored in the database 180, and specifies a keyword having a predetermined accuracy as the point name.
  • Next, the system control unit 250 searches the map data storage unit 190 based on the specified point name and detects point data having at least a part of the point name as name information (step S16).
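  • The search in step S16, which detects point data whose name information contains at least a part of the specified point name, can be sketched as a simple substring match. The in-memory dictionary standing in for the map data storage unit 190 and its sample entries are purely illustrative.

```python
# Minimal sketch of the step S16 search: the recognized point name is
# used as a search key, and every stored point whose name information
# contains it is detected. The dictionary is an illustrative stand-in
# for the map data storage unit (190).

def search_points(keyword: str, point_db: dict[str, dict]) -> list[str]:
    """Return names of points whose name information contains the keyword."""
    return [name for name in point_db if keyword in name]
```

For example, a recognized name "Park" would detect both "Central Park" and "Park Hotel", which matches the behavior in step S18 of presenting several candidate point names for the operator to choose from.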
  • If no corresponding point data is detected, the system control unit 250 displays the point name specified by the voice recognition process on the display unit 200, prompts the operator to select a point name, waits for the operator's instruction input, and proceeds to step S19 (step S17).
  • On the other hand, when point data having name information that matches at least a part of the point name specified by the voice recognition process is detected, the system control unit 250 displays the point names of the detected point data on the display unit 200 together with the point name specified by the voice recognition circuit 170, prompts the operator to select one point name, and waits for the operator's input (step S18).
  • Next, the system control unit 250 causes the display unit 200 to display an inquiry as to whether the specified point name is the point name the operator wishes to set, that is, a display for confirming the suitability of the specified point name (step S19).
  • The system control unit 250 then determines which instruction has been input via the operation unit 150 (step S20). If the system control unit 250 determines that the specified point name is not the point name desired by the operator, it proceeds to step S21; if it determines that the specified point name is the point name desired by the operator, it proceeds to step S24.
  • Next, the system control unit 250 displays image data for correcting the point name on the display unit 200, and performs correction operation processing (hereinafter referred to as correction processing) of the point name by linking the display unit 200, the display control unit 220, and the operation unit 150 (step S21).
  • Specifically, the display control unit 220 generates image data for changing any character of the specified point name to another character, adding characters to the specified point name, and deleting characters of the specified point name, in conjunction with the operator's operation of the operation unit 150, and displays it on the display unit 200 as appropriate.
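  • The character-level correction operations of step S21 (changing, adding, and deleting characters of the specified point name) can be sketched as three small helpers. The function names and position-based interface are illustrative assumptions; the document describes these operations only in terms of image data shown to the operator.

```python
# Sketch of the three correction operations of step S21. The helpers
# are illustrative; the patent specifies only that characters of the
# specified point name can be changed, added, and deleted.

def replace_char(name: str, pos: int, ch: str) -> str:
    """Change the character at pos to ch."""
    return name[:pos] + ch + name[pos + 1:]

def insert_char(name: str, pos: int, ch: str) -> str:
    """Insert ch before position pos."""
    return name[:pos] + ch + name[pos:]

def delete_char(name: str, pos: int) -> str:
    """Delete the character at pos."""
    return name[:pos] + name[pos + 1:]
```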
  • Next, when the system control unit 250 detects via the operation unit 150 that the operator has finished correcting the point name (step S22), it searches the map data storage unit 190 based on the corrected point name, and detects and displays point data having at least a part of the corrected point name as name information (step S23).
  • Next, the system control unit 250 displays the detected point names or the specified point name on the display unit 200 and prompts the operator to select one point name (step S24).
  • Next, when the system control unit 250 detects that a point name has been selected (step S25), it sets the selected point data as the destination and displays the point indicated by the point data on the display unit 200 together with the map data (step S26).
  • Next, the system control unit 250 displays a message prompting the operator to indicate whether there is another destination or waypoint to be set, and determines, based on the operator's input, whether another destination or waypoint exists (step S27).
  • If the system control unit 250 determines, based on the input from the operation unit 150, that there is another destination to be set, it returns to step S13; if it determines that there is none, it proceeds to step S28.
  • Finally, the system control unit 250 sets the route on which the vehicle should travel based on the set departure point and destination, starts route guidance based on the set route, and ends this operation (step S28).
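  • The flow of steps S13 through S26 can be condensed into a control-flow sketch in which recognition, search, confirmation, correction, and selection are passed in as callables. All names are illustrative assumptions; the sketch only mirrors the branch structure of the flowchart, re-searching with the corrected name when the operator rejects the recognized one.

```python
# Condensed control-flow sketch of the destination-setting flow
# (steps S13-S26). The callables stand in for the voice recognition
# circuit, point data search, and operator interactions.

def set_destination(recognize, search, confirm, correct, select):
    """Run one destination-setting pass; return the chosen point."""
    name = recognize()             # S14-S15: voice recognition processing
    candidates = search(name)      # S16: partial-name search of point data
    if not confirm(name):          # S19-S20: operator confirms suitability
        name = correct(name)       # S21-S22: correction processing
        candidates = search(name)  # S23: re-search with the corrected name
    return select(candidates)      # S24-S25: operator selects one point
```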
  • As described above, the navigation apparatus 100 of the present embodiment includes: the voice recognition circuit 170, which acquires the speech component of the uttered speech, analyzes it, extracts the uttered-speech feature quantity that is the feature quantity of the speech component, and specifies at least one keyword based on the uttered-speech feature quantity and the keyword feature quantity data; the database 180, in which a plurality of keyword feature quantity data indicating the feature quantities of the keyword speech are stored in advance; the map data storage unit 190, in which predetermined point data is stored in advance in association with point name information indicating the point names of the point data; the system control unit 250, which searches the map data storage unit 190 for point data having the specified keyword in at least a part of the point name indicated by the point name information; and the display unit 200, which presents the detected point data.
  • With this configuration, the navigation device 100 searches, based on the specified keyword, for point data stored in the map data storage unit 190 that has the keyword as at least a part of its point name, and when such point data is detected, the detected point data is presented.
  • Accordingly, the navigation device 100 can search for point data using the specified point name as a search key against the point names contained in the map data. There is therefore no need to increase the number of keywords to be recognized, or to perform complicated operations such as pre-registering the names of points the operator wishes to have recognized by the voice recognition process. As a result, the navigation device 100 of the present embodiment can improve the recognition rate of the keyword desired by the operator, that is, of the point name, while eliminating operational complexity.
  • The navigation device 100 of the present embodiment further includes the operation unit 150, used to correct the specified keyword, and the system control unit 250 searches for point data having the corrected keyword in at least a part of the point name information.
  • the navigation device 100 of the present embodiment searches for point data having the corrected keyword in at least a part of the point name indicated by the point name information.
  • Accordingly, even when the result of speech recognition differs from the point name desired by the operator, the navigation device 100 of the present embodiment can correct the speech-recognized keyword. This eliminates the burden of re-entering the entire name, and allows point data to be searched through voice recognition and operator correction without pre-registering the point names as keywords to be recognized. As a result, the recognition rate of the point name desired by the operator can be improved, and operational complexity can be eliminated.
  • the database 180 stores point data related to a keyword in association with that keyword
  • the display unit 200 presents the point data detected by the search means together with the point data associated with the identified keyword.
  • the navigation device 100 of the present embodiment presents the detected point data and the point data associated with the identified keyword.
  • the navigation device 100 of the present embodiment can search for point data using the point names included in the map data, with the identified point name as the search key; it is therefore unnecessary to increase the number of keywords to be recognized through cumbersome operations such as pre-registering the names of points the operator wishes to have recognized by the speech recognition process. As a result, the navigation device 100 of the present embodiment can improve the recognition rate of the keyword, i.e., the point name, desired by the operator, and can eliminate the complexity of the operation.
  • the navigation device 100 of the present embodiment includes selection means used to select a single piece of point data when a plurality of pieces of point data are detected by the system control unit 250.
  • the display unit 200 presents the single selected piece of point data.
  • the navigation device 100 of the present embodiment presents the selected point data, so that even when a plurality of keywords are recognized or a plurality of search results are found, the point name desired by the user can be specified.
  • the navigation device 100 acquires the speech component of the uttered speech, analyzes the acquired speech component, and extracts the utterance feature amount, i.e., the feature amount of that speech component.
  • the device comprises: a map data storage unit 190 in which predetermined point data is stored in advance in association with point name information indicating the point name of that point data; a display unit 200 for notifying the operator of the identified keyword; an operation unit 150 used to correct the presented keyword when the notified keyword does not match the keyword desired by the operator; and a system control unit 250 that searches for point data having the corrected keyword in at least a part of the point name information; the display unit 200 presents the detected point data.
  • in the navigation device 100 of the present embodiment, when the notified keyword does not match the keyword desired by the operator and the presented keyword is corrected,
  • point data having the corrected keyword in at least a part of the point name indicated by the point name information is detected, and
  • the detected point data is presented.
  • the navigation device 100 searches for point data having the corrected keyword in at least a part of the point name indicated by the point name information; it is therefore unnecessary to increase the amount of data used for recognizing uttered speech, or to increase the number of keywords to be recognized through cumbersome operations such as pre-registering the names of points the operator wishes to have recognized by the speech recognition process.
  • the navigation device 100 of the present embodiment can correct a speech-recognized keyword even when the speech recognition result differs from the point name desired by the operator. This eliminates the complexity of entering entire names, and, without registering the point names in advance as keywords to be recognized,
  • the point data can be searched through speech recognition and operator correction.
  • the navigation device 100 of the present embodiment can improve the recognition rate of the point name desired by the operator, and can eliminate the complexity of the operation.
  • although the embodiment applies the invention to point data search processing, the invention is not limited to this; point data may also be searched for route guidance and other purposes.
  • a storage medium may store a data presentation program that causes a computer in the navigation device 100, which has the microphone 160 for inputting the uttered speech to be recognized, to search the point data as described above; the computer reads the program and performs the same point data search processing as described above.
  • when a keyword is identified, or when point data is detected based on the identified point name, the result is presented on the display unit 200 to notify the operator; of course, it may also be presented and announced to the operator by voice via the speaker 230.
  • the feature amount data of the keywords stored in the database is sequentially compared with the speech components of the input uttered speech to determine the point name.
  • although the HMM method is used, the invention is not limited to this, as long as the speech recognition processing is performed using the keyword feature amount data stored in the database.
  • although the embodiment applies the invention to point data search processing in the navigation device 100, as a data search process it can also be applied to name searches within arbitrary data on a personal computer or other apparatus.
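The partial-name search described in the points above — a recognized keyword used as a search key that may match any part of a stored point name — can be sketched minimally as follows. The point records and the function `search_points` are hypothetical illustrations, not taken from the patent:

```python
# Hypothetical point database: each entry pairs a point name with its point data.
POINT_DATA = [
    {"name": "Central Park", "lat": 35.68, "lon": 139.76, "genre": "park"},
    {"name": "Park Hotel Tokyo", "lat": 35.66, "lon": 139.75, "genre": "hotel"},
    {"name": "City Museum", "lat": 35.70, "lon": 139.77, "genre": "museum"},
]

def search_points(keyword: str) -> list:
    """Return every point whose name contains the keyword as a substring,
    mirroring 'having the keyword as at least a part of the point name'."""
    return [p for p in POINT_DATA if keyword.lower() in p["name"].lower()]

# A recognized keyword such as "park" matches every point name containing it.
print([p["name"] for p in search_points("park")])
```

When several points match, the selection means described above would let the operator choose one entry from the returned list.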

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Acoustics & Sound (AREA)
  • General Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Artificial Intelligence (AREA)
  • Automation & Control Theory (AREA)
  • Health & Medical Sciences (AREA)
  • Navigation (AREA)

Abstract

There is provided a data presentation device capable of eliminating complicated operation and improving the speech recognition rate without increasing the number of keywords. A navigation device (100) includes: a database (180) containing a plurality of keyword feature amount data indicating the feature amounts relating to the speech of the keywords to be recognized; and a map data storage unit (190) containing predetermined location point data correlated with location point name information indicating the location point name of the location point data. According to the keyword identified by speech recognition, a search is performed for location point data stored in the map data storage unit (190) having the keyword as at least a part of the location point name. As a result of the search, if location point data having the identified keyword as at least a part of the location point name is detected, the detected location point data is presented.

Description

Specification

DATA PRESENTATION DEVICE, DATA PRESENTATION METHOD, DATA PRESENTATION PROGRAM, AND RECORDING MEDIUM CONTAINING THE PROGRAM

Technical Field

[0001] The present invention relates to data presentation means such as the setting of a destination when performing navigation, and in particular belongs to the technical field of data presentation devices that search a plurality of actual data stored in advance and present the result.

Background Art
[0002] At present, so that various information can be entered safely, without manual keyboard or switch operation, even in work environments where a person's hands are occupied — such as using a navigation device while driving a car — speech recognition devices that recognize speech uttered by a person are applied to such devices. Such a speech recognition device performs recognition by sequentially matching the feature amount pattern of the uttered speech against speech feature amount patterns of recognition candidate words and phrases prepared in advance (hereinafter, keywords).

[0003] Conventionally, navigation devices that guide a moving body such as a vehicle along a route based on map data commonly use speech recognition for setting a destination or waypoint. For example, such a navigation device sets a recognized keyword as the current location, destination, or waypoint (hereinafter, a point), and performs route setting and route guidance by retrieving from a database the data concerning that point (hereinafter, actual data), such as its latitude, longitude, and attributes.

[0004] In particular, a navigation device has recently become known that, in order to improve the recognition rate of uttered speech without increasing the number of keywords, registers an utterance as a related keyword through a user operation when the utterance matches none of the keywords stored in the database (see, for example, Patent Document 1).

Patent Document 1: Japanese Unexamined Patent Publication No. 2003-323192
Disclosure of the Invention

Problems to Be Solved by the Invention

[0005] However, a data presentation device that presents predetermined data, such as the navigation device described above, presents the actual data recorded in association with a recognized keyword, and cannot present actual data that is not recorded in association with any keyword. Moreover, even with the navigation device of Patent Document 1, when an utterance does not match a keyword, the user must register it in advance through the operation unit; this operation is cumbersome, and unless the non-matching utterance is registered as a keyword beforehand, it cannot be recognized.

[0006] The present invention therefore solves an example of the above problems by providing a data presentation device that uses a keyword recognized from uttered speech as a further search key for data searches in another database or the like, thereby eliminating operational complexity and improving the recognition rate of uttered speech without increasing the number of keywords.
Means for Solving the Problem

[0007] To solve the above problems, the invention of claim 1 comprises: acquisition means for acquiring the speech component of uttered speech; extraction means for analyzing the speech component and extracting an utterance feature amount, i.e., the feature amount of that speech component; first storage means in which a plurality of keyword feature amount data indicating feature amounts of keyword speech are stored in advance; second storage means in which predetermined actual data is stored in advance in association with name information indicating the name of that actual data; specifying means for specifying at least one keyword based on the utterance feature amount and the keyword feature amount data; search means for searching for actual data having the specified keyword in at least a part of the name information; and presentation means for presenting the detected actual data.

[0008] The invention of claim 5 comprises: an acquisition step of acquiring the speech component of uttered speech; an extraction step of analyzing the speech component and extracting an utterance feature amount, i.e., the feature amount of that speech component; a specifying step of specifying at least one keyword based on the utterance feature amount and keyword feature amount data indicating feature amounts of keyword speech; a search step of searching the actual data stored in second storage means in association with name information indicating a name, for actual data having the specified keyword in at least a part of the name information; and a presentation step of presenting the detected actual data.

[0009] The invention of claim 6 or 7 causes a computer to function as: acquisition means for acquiring the speech component of uttered speech; extraction means for analyzing the speech component and extracting an utterance feature amount, i.e., the feature amount of that speech component; specifying means for specifying at least one keyword based on the utterance feature amount and keyword feature amount data indicating feature amounts of keyword speech; search means for searching for actual data having the specified keyword in at least a part of name information indicating the name of predetermined actual data; and presentation means for presenting the detected actual data.

[0010] The invention of claim 8 comprises: acquisition means for acquiring the speech component of uttered speech; extraction means for analyzing the speech component and extracting an utterance feature amount, i.e., the feature amount of that speech component; first storage means in which a plurality of keyword feature amount data indicating feature amounts of keyword speech are stored in advance; second storage means in which predetermined actual data is stored in advance in association with name information indicating the name of that actual data; notification means for notifying the operator of the specified keyword when at least one keyword is specified based on the utterance feature amount and the keyword feature amount data; correction means used to correct the notified keyword when the notified keyword does not match the keyword desired by the operator; search means for searching for actual data having the keyword corrected by the correction means in at least a part of the name information; and presentation means for presenting the detected actual data.

[0011] The invention of claim 9 comprises: an acquisition step of acquiring the speech component of uttered speech; an extraction step of analyzing the speech component of the uttered speech and extracting an utterance feature amount, i.e., the feature amount of that speech component; a specifying step of specifying at least one keyword based on the utterance feature amount and keyword feature amount data indicating feature amounts of keyword speech; a notification step of announcing the specified keyword through notification means; a search step of, when the notified keyword is corrected, searching for actual data having the corrected keyword in at least a part of the name information of predetermined actual data; and a presentation step of presenting the detected actual data.

[0012] The invention of claim 10 or 11 causes a computer to function as: acquisition means for acquiring the speech component of uttered speech; extraction means for analyzing the speech component of the uttered speech and extracting an utterance feature amount, i.e., the feature amount of that speech component; specifying means for specifying at least one keyword based on the utterance feature amount and keyword feature amount data indicating feature amounts of keyword speech; notification means for announcing the specified keyword; search means for, when the notified keyword is corrected, searching for actual data having the corrected keyword in at least a part of the name information of predetermined actual data; and presentation means for presenting the detected actual data.
Brief Description of the Drawings

[0013] [FIG. 1] A block diagram showing the schematic configuration of a navigation device according to one embodiment of the present application.
[FIG. 2] A flowchart (I) showing the operation of the point data search process required during route setting or route guidance in the system control unit 250 of the embodiment.
[FIG. 3] A flowchart (II) showing the operation of the point data search process required during route setting or route guidance in the system control unit 250 of the embodiment.

Explanation of Symbols
[0014]
100 … Navigation device
110 … GPS receiver
120 … Sensor unit
130 … Interface
140 … VICS data receiver
150 … Operation unit
160 … Microphone
170 … Speech recognition circuit
180 … Database
190 … Map data storage unit
200 … Display unit
210 … Buffer memory
220 … Display control unit
230 … Speaker
240 … Audio processing circuit
250 … System control unit
260 … ROM/RAM
BEST MODE FOR CARRYING OUT THE INVENTION
[0015] Next, an embodiment suitable for the present application will be described with reference to the drawings.

[0016] The embodiment described below applies the data presentation device or navigation device according to the present application to an in-vehicle navigation device.

[0017] First, the overall configuration and general operation of the navigation device of this embodiment will be described with reference to FIG. 1. FIG. 1 is a block diagram showing the schematic configuration of the navigation device of this embodiment according to the present application.

[0018] As shown in FIG. 1, the navigation device 100 of this embodiment comprises: a GPS receiver 110 connected to an antenna AT for receiving GPS (Global Positioning System) data; a sensor unit 120 for detecting travel data such as the travel speed of the vehicle; an interface 130 for calculating the vehicle position based on the GPS data and the travel data; a VICS data receiver 140 for receiving VICS (Vehicle Information Communication System) data; and an operation unit 150 used by the user to make settings and to input commands to the system.

[0019] The navigation device 100 of this embodiment further comprises: a microphone 160 that picks up the speech uttered by the operator; a speech recognition circuit 170 that recognizes, from the speech picked up by the microphone 160, a command directed to the system (hereinafter simply a command); a database 180 storing data used for speech recognition; a map data storage unit 190 in which various data such as the map data and the later-described point data are recorded in advance; a display unit 200 that displays the map data, the vehicle position, and the speech recognition result; a display control unit 220 that controls the display unit 200 using a buffer memory 210; an audio processing circuit 230 that generates audio such as route guidance; a speaker 240 that amplifies the audio signal output from the audio processing circuit 230; a system control unit 250 that controls the entire system and each process related to speech recognition; and a ROM/RAM 260; these units are connected by a bus B.
[0020] For example, the operation unit 150 of this embodiment constitutes the operation means, selection means, and correction means according to the present invention, and the speech recognition circuit 170 constitutes the acquisition means, extraction means, specifying means, and search means according to the present invention. Also, for example, the display unit 200 and display control unit 220, or the audio processing circuit 230 and speaker 240, of this embodiment constitute the presentation means and notification means according to the present invention.
[0021] The GPS receiver 110 receives navigation radio waves from a plurality of artificial satellites belonging to the GPS via an antenna (not shown), calculates pseudo-coordinate values of the current position of the moving body based on the received radio waves, and outputs the calculated pseudo-coordinate data to the interface 130 as GPS data.

[0022] The sensor unit 120 detects travel data — the travel speed, acceleration, and azimuth of the vehicle — and outputs the detected travel data to the interface 130.

[0023] For example, the sensor unit 120 detects the travel speed of the vehicle, converts the detected speed into speed data in pulse or voltage form, and outputs it to the interface 130. The sensor unit 120 also detects the vertical movement state of the vehicle by comparing gravitational acceleration with the acceleration generated by the vehicle's movement, converts acceleration data indicating the detected movement state into pulse or voltage form, and outputs it to the interface 130. Furthermore, the sensor unit 120 includes a so-called gyro sensor, detects the azimuth of the vehicle, i.e., the direction in which the vehicle is traveling, converts the detected azimuth into azimuth data in pulse or voltage form, and outputs it to the interface 130.

[0024] The interface 130 performs interface processing between the sensor unit 120 and GPS receiver 110 on the one hand and the system control unit 250 on the other; it calculates the vehicle position based on the input GPS data and travel data and outputs the vehicle position to the system control unit 250 as vehicle position data.
[0025] This vehicle position data is collated with the map data in the system control unit 250 and used for navigation-related processing such as map matching and route search.

[0026] The VICS data receiver 140 acquires VICS data by receiving radio waves such as FM multiplex broadcasts, and outputs the acquired VICS data to the system control unit 250. VICS denotes a road traffic information communication system, and VICS data denotes road traffic information such as congestion, accidents, and regulations.
[0027] The map data storage unit 190 is constituted by, for example, a hard disk; it reads out map data such as road maps recorded in advance, the point data that becomes necessary when setting a destination or the like for route setting or route guidance, and other information necessary for travel guidance, and outputs the various read data to the system control unit 250.

[0028] In particular, in the map data storage unit 190 the entire map is divided into a plurality of mesh-like blocks, and the map data corresponding to each block is managed as block map data. Besides the map data including the road shape data necessary for the navigation operation, the map data storage unit 190 stores, for each point and in association with the road shape data, point data comprising name data indicating the name of a destination such as a park or store, position data indicating the position of the destination on the map data, and facility data such as an address.

[0029] For example, as point data the map data storage unit 190 stores, together with name data indicating the names of facilities such as restaurants, department stores, amusement facilities, tourist attractions, and museums: genre information indicating the genre (also called an attribute) of the point as a visiting destination, such as a dining place, sightseeing spot, or amusement facility; position information indicating the latitude and longitude of the point; and facility data such as the address, telephone number, business days, and business hours.

[0030] As described later, the point data in the map data storage unit 190 of this embodiment is searched by the system control unit 250 when specifying a point, such as a destination or the current location, required for route setting or route guidance.
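Paragraph [0029] enumerates the fields held for each point: name data, genre information, position information, and facility data. A minimal sketch of one such record; the field names and sample values are illustrative assumptions, not taken from the patent:

```python
from dataclasses import dataclass

@dataclass
class PointData:
    """One entry of the point data described for the map data storage unit 190:
    name data, genre (attribute), latitude/longitude, and facility data."""
    name: str      # name data, e.g. a store or facility name
    genre: str     # visit-destination genre, e.g. "restaurant"
    lat: float     # position information: latitude
    lon: float     # position information: longitude
    address: str   # facility data: address
    phone: str     # facility data: telephone number

# An illustrative record of the kind the system control unit would search.
p = PointData("Sakura Restaurant", "restaurant", 35.68, 139.76,
              "1-2-3 Chiyoda, Tokyo", "03-0000-0000")
print(p.genre)
```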
[0031] The operation unit 150 is constituted by a remote control device having many keys such as various confirmation buttons, selection buttons, and numeric keys together with a light receiving unit that receives signals transmitted from the remote control device, or by an operation panel having many such keys. The operation unit 150 is used to input the driver's commands, such as a command to display vehicle travel information or to switch the display of the display unit 200.

[0032] In particular, in this embodiment, when the uttered speech recognized by the speech recognition circuit 170 is presented on the display unit 200 in keyword form as a recognition result, the operation unit 150, in conjunction with the display unit 200 and the display control unit 220, can select a displayed keyword — i.e., confirm a command or an input value — correct the presented keyword, and confirm the name of a point for displaying a map or a facility on the map.

[0033] Each operation of the operation unit 150 in conjunction with the display unit 200 and the display control unit 220 in this embodiment will be described later.
[0034] The utterance produced by the operator is input to the speech recognition circuit 170 through the microphone 160. Using the database 180, the speech recognition circuit 170 analyzes the utterance input as an operation command of the navigation device 100, or as a point name used when searching for the point data described later, and causes the display unit 200 to display the analysis result.
[0035] For example, the speech recognition circuit 170 of the present embodiment analyzes the input utterance using the HMM (Hidden Markov Model) method. That is, the speech recognition circuit 170 extracts the feature values of the speech components of the input utterance and sequentially compares them with the feature data, stored in the database 180, of the keywords to be recognized as commands. As a result of the comparison, the keyword with high certainty, in other words, the keyword that the input utterance most plausibly corresponds to, is identified as the instructed command or as the point name.
[0036] Specifically, in the present embodiment, the speech recognition circuit 170 compares the HMM feature patterns, each representing an arbitrary state, with the feature values of the speech segments obtained by dividing the input utterance into fixed time intervals, and calculates a similarity indicating the degree of coincidence between each HMM feature pattern and the feature values of each speech segment. Then, for each keyword HMM for which a similarity has been calculated, the speech recognition circuit 170 performs what is called a matching process: it calculates a cumulative similarity indicating the probability of every possible sequence of HMMs, that is, of keyword connections, and recognizes as the language of the utterance the keyword whose HMM sequence attains at least a fixed similarity.
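The matching process of [0035] and [0036] accumulates per-segment similarities over HMM state sequences and picks the keyword with the best cumulative score. As an illustration only (this code is not disclosed in the patent; the left-to-right state topology and the log-score inputs are assumptions), the accumulation can be sketched as a Viterbi-style dynamic program:

```python
import math

def viterbi_score(frame_scores):
    """Cumulative similarity of one keyword HMM against the utterance.

    frame_scores[t][s] is an assumed log-similarity between speech
    segment t and HMM state s; the path may stay in a state or advance
    to the next one (a left-to-right word HMM), and must start in the
    first state and end in the last.
    """
    n_states = len(frame_scores[0])
    best = [-math.inf] * n_states
    best[0] = frame_scores[0][0]
    for t in range(1, len(frame_scores)):
        new = [-math.inf] * n_states
        for s in range(n_states):
            stay = best[s]
            advance = best[s - 1] if s > 0 else -math.inf
            new[s] = max(stay, advance) + frame_scores[t][s]
        best = new
    return best[-1]

def recognize(keyword_scores):
    """Return the keyword whose HMM attains the highest cumulative similarity."""
    return max(keyword_scores, key=lambda kw: viterbi_score(keyword_scores[kw]))
```

A real recognizer would additionally threshold the winning score, matching the requirement in [0036] that the recognized sequence attain at least a fixed similarity.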
[0037] The database 180 stores in advance a plurality of feature pattern data, based on the utterance of each keyword, to be recognized as point names when searching for a point required for route setting or route guidance. Specifically, speech data of each phoneme uttered by a plurality of speakers is acquired in advance, a feature pattern is extracted for each phoneme, and the HMM for each keyword, generated by training on the feature pattern data of each phoneme, is stored in advance.
[0038] The database 180 also stores point data for the keywords to be recognized as point names. For example, like the map data storage unit 190, it stores name data such as facility names and place names together with genre information indicating the genre (also called an attribute) of the point as a destination, such as a dining place, a sightseeing spot, or an amusement facility, position information indicating the latitude and longitude of the point, and facility data such as the address and telephone number.
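The fields listed for an item of point data in [0038] can be pictured as one record; the following type is purely illustrative and not part of the patent:

```python
from dataclasses import dataclass

@dataclass
class PointData:
    """Hypothetical record combining the fields described in [0038]:
    name data, genre (attribute) information, position information,
    and facility data."""
    name: str        # facility or place name
    genre: str       # e.g. dining place, sightseeing spot, amusement facility
    latitude: float
    longitude: float
    address: str
    phone: str
```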
[0039] The display unit 200 is constituted by, for example, a CRT, a liquid crystal display element, or an organic EL (Electro Luminescence) display. Under the control of the display control unit 220, it displays map data and point data in various modes, and displays, superimposed on the map data or point data, the various states required for route guidance such as the vehicle position.
[0040] In accordance with the control of the display control unit 220, the display unit 200 also displays content information other than map data and point data, and, when speech recognition is performed, displays various information in conjunction with the operation unit 150 for presenting the recognized keyword or for correcting it.
[0041] Map data or point data input via the system control unit 250 is supplied to the display control unit 220. Based on instructions from the system control unit 250, the display control unit 220 generates the display data to be shown on the display unit 200 as described above, temporarily stores it in the buffer memory 210, and reads the display data out of the buffer memory 210 at predetermined timings to control the display on the display unit 200.
[0042] In particular, the display control unit 220 of the present embodiment generates display data in conjunction with the operation unit 150 when a command is confirmed from a recognized keyword, as described later, or when the recognized keyword is corrected, and performs display control when that display data is shown on the display unit 200.
[0043] The audio processing circuit 230 generates an audio signal based on instructions from the system control unit 250 and amplifies the generated audio signal through the speaker 240. For example, it outputs to the speaker 240, as an audio signal, route guidance information including the direction of travel of the vehicle at the next intersection, as well as congestion information or road-closure information that should be announced directly to the driver as driving guidance.
[0044] The system control unit 250 includes various input/output ports such as a GPS reception port, a key input port, and a display control port, and comprehensively controls the overall functions for navigation processing.
[0045] The system control unit 250 reads out a control program stored in the ROM/RAM 260 and executes each process, while temporarily holding the data being processed in the ROM/RAM 260, thereby controlling each process of route setting and route guidance.
[0046] In particular, the system control unit 250 of the present embodiment, while controlling each unit, performs in each process of route setting or route guidance the display processing of the keyword recognized from the operator's utterance when a command is specified based on that utterance, the various processing performed when the operator confirms a command based on the displayed keyword, and the correction-operation processing performed when the operator corrects the displayed keyword.
[0047] In addition, when specifying a point required for route setting or route guidance, such as a destination or the current location, if a keyword for specifying that point is recognized, the system control unit 250 searches the map data storage unit 190 based on the recognized keyword and performs a search process for detecting the corresponding point data (hereinafter referred to as the point data search process during route setting processing or route guidance processing), and causes the detected point data to be displayed on the display unit 200 via the display control unit 220.
[0048] Details of the point data search process during route setting processing or route guidance processing in the system control unit 250 of the present embodiment will be described later.
[0049] Next, the operation of the point data search process during route setting processing or route guidance processing by the system control unit 250 of the present embodiment will be described with reference to FIG. 2 and FIG. 3.
[0050] FIG. 2 and FIG. 3 are flowcharts showing the operation of the point data search process required during route setting processing or route guidance processing in the system control unit 250 of the present embodiment.
[0051] In the following description, the process is described as the search process performed when point data is set as the destination during the route setting process that sets a route from the current location to a destination, and the operator's input of point data in this operation is assumed to be performed via the speech recognition circuit 170.
[0052] Further, in the following description, it is assumed that exactly one keyword is always recognized in the speech recognition process described later.
[0053] First, when an instruction to start the route setting process is input by an operation of the operator and the system control unit 250 receives the instruction (step S11), the system control unit 250 acquires information indicating the current position of the vehicle from the GPS reception unit 110 and sets it as the starting point of the route (step S12).
[0054] Next, the system control unit 250 controls the display control unit 220 to cause the display unit 200 to display a prompt for input of a point to serve as the destination or a waypoint, and waits for the operator to input a destination (step S13).
[0055] At this time, the system control unit 250 may control the audio processing circuit 230 to announce through the speaker 240 a prompt for input of a point to serve as the destination or a waypoint.
[0056] Next, when the speech recognition circuit 170 detects that the operator's utterance has been input via the microphone 160 (step S14), the system control unit 250 causes the speech recognition circuit 170 to execute the process of identifying the corresponding keyword using the database 180 (hereinafter referred to as the speech recognition process) (step S15).
[0057] Specifically, as described above, the speech recognition circuit 170 extracts the feature values of the speech components of the input utterance, sequentially compares them with the feature data of the keywords to be recognized stored in the database 180, and, as a result of the comparison, identifies a keyword having a predetermined certainty as the point name.
[0058] Next, the system control unit 250 searches the map data storage unit 190 based on the identified point name, and detects point data having that point name as at least part of its name information (step S16).
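Step S16 accepts any point whose name contains the recognized point name as at least a part. A minimal sketch of that partial-name search follows; the record shape and function name are assumptions for illustration:

```python
def search_points(points, keyword):
    """Return every point record whose name contains the recognized
    keyword, mirroring the 'at least part of the name information'
    match of step S16."""
    return [p for p in points if keyword in p["name"]]
```

When this returns an empty list, the flow corresponds to step S17; otherwise the candidates are presented for selection as in step S18.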
[0059] At this time, if no point data having name information matching at least part of the recognized point name is detected, the system control unit 250 causes the display unit 200 to display the point name identified by the speech recognition process together with a prompt for the operator to select one point name, waits for the operator's instruction input, and proceeds to the process of step S19 (step S17).
[0060] On the other hand, if point data having name information matching at least part of the point name identified by the speech recognition process is detected, the system control unit 250 causes the display unit 200 to display the point names of the detected point data together with the point name identified by the speech recognition circuit 170 and a prompt for the operator to select one point name, and waits for the operator's instruction input (step S18).
[0061] Next, when one point name is selected by the operator and the system control unit 250 detects via the operation unit 150 that one point name has been selected, the system control unit 250 causes the display unit 200 to display an inquiry as to whether the identified point name is the point name the operator wishes to set, that is, a display for confirming the suitability of the identified point name (step S19).
[0062] Next, when an operation indicating the suitability of the identified point name is input through the operation unit 150, the system control unit 250 determines which operation the instruction input via the operation unit 150 corresponds to (step S20).
[0063] At this time, if the system control unit 250 determines that the identified point name is not the point name desired by the operator, the system control unit 250 proceeds to the process of step S21; if the system control unit 250 determines that the identified point name is the point name desired by the operator, the system control unit 250 proceeds to the process of step S24.
[0064] Next, the system control unit 250 causes the display unit 200 to display image data for correcting the point name, and causes the display unit 200, the display control unit 220, and the operation unit 150 to operate in conjunction to carry out the point name correction work (hereinafter referred to as the correction process) (step S21).
[0065] Specifically, in conjunction with the operator's operations on the operation unit 150, the display control unit 220 generates image data for converting any character of the identified point name into another character, for adding characters to the identified point name, and for deleting characters from the identified point name, and displays it on the display unit 200 as appropriate.
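The three correction operations of [0065], converting a character, adding characters, and deleting characters, amount to simple string edits; the function names below are illustrative only and not from the patent:

```python
def replace_char(name, index, new_char):
    """Convert one character of the presented point name into another."""
    return name[:index] + new_char + name[index + 1:]

def insert_char(name, index, new_char):
    """Add a character to the presented point name."""
    return name[:index] + new_char + name[index:]

def delete_char(name, index):
    """Delete a character from the presented point name."""
    return name[:index] + name[index + 1:]
```

Once the operator finishes editing (step S22), the corrected name is used as the new search key, as described in [0066].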
[0066] Next, when the system control unit 250 detects via the operation unit 150 that the operator has finished correcting the point name (step S22), it searches the map data storage unit 190 based on the corrected point name, and detects and displays point data having that point name as at least part of its name information (step S23).
[0067] Next, the system control unit 250 causes the display unit 200 to display the detected point names or the identified point name together with a prompt for the operator to select one point name, and waits for the operator's instruction input (step S24).
[0068] Next, when the system control unit 250 detects that one point name has been selected (step S25), it sets the selected point data as the destination and causes the display unit 200 to display the point indicated by that point data together with the map data (step S26).
[0069] Next, the system control unit 250 displays a prompt asking the operator whether there is another destination or waypoint to be set, and determines, based on the operator's input, whether there is another destination or waypoint to be set (step S27).
[0070] At this time, if the system control unit 250 determines based on the input from the operation unit 150 that there is another destination to be set, it proceeds to the process of step S13; if it determines based on the input from the operation unit 150 that there is no other destination to be set, it proceeds to the process of step S28.
[0071] Finally, the system control unit 250 sets the route the vehicle should travel based on the set starting point and destination, starts route guidance based on the set route, and ends this operation (step S28).
[0072] As described above, according to the present embodiment, the navigation device 100 includes: the speech recognition circuit 170, which acquires the speech components of the utterance, analyzes the acquired speech components, extracts the utterance feature values that are the feature values of those speech components, and identifies at least one keyword based on the utterance feature values and the keyword feature data; the database 180, in which a plurality of keyword feature data indicating the speech feature values of the keywords are stored in advance; the map data storage unit 190, in which predetermined point data is stored in advance in association with point name information indicating the point name of that point data; the system control unit 250, which searches the point data stored in the map data storage unit 190 for point data having the identified keyword as at least part of the point name indicated by the point name information; and the display unit 200, which presents the detected point data.
[0073] With this configuration, the navigation device 100 of the present embodiment searches, based on the identified keyword, the point data stored in the map data storage unit 190 for point data having the keyword as at least part of the point name, and, when such point data is detected as a result of the search, presents the detected point data.
[0074] Normally, when speech recognition is performed, the feature values of the keywords to be recognized are compared with the feature values of the speech components of the utterance, and a keyword with high certainty is identified as the recognized keyword. Therefore, to recognize an arbitrary keyword, the feature values relating to the speech components of that keyword must be stored as data in advance, so increasing the number of keywords to be recognized requires increasing the amount of data for speech recognition.
[0075] On the other hand, when speech recognition is performed, in principle the feature values relating to the speech components of all stored keywords must be compared with the feature values of the speech components of the utterance, so if the amount of keyword data to be recognized is large, the speech recognition process requires an enormous amount of time.
[0076] Therefore, since the navigation device 100 of the present embodiment can search for point data using the point names included in the map data, with the identified point name as a search key, it neither increases the amount of data used when recognizing the utterance nor requires the operator to increase the number of keywords to be recognized through troublesome operations such as registering in advance the point names desired to be recognized by the speech recognition process. As a result, the navigation device 100 of the present embodiment can improve the recognition rate of the keyword desired by the operator, that is, of the point name, and can eliminate the complexity of the operation.
[0077] The navigation device 100 of the present embodiment further includes the operation unit 150 used to correct the identified keyword, and the system control unit 250 searches for point data having the corrected keyword as at least part of the point name information.
[0078] With this configuration, since the navigation device 100 of the present embodiment searches for point data having the corrected keyword as at least part of the point name indicated by the point name information, it neither increases the amount of data used when recognizing the utterance nor requires the operator to increase the number of keywords to be recognized through troublesome operations such as registering in advance the point names desired to be recognized by the speech recognition process.
[0079] That is, since the navigation device 100 of the present embodiment can correct the recognized keyword even when the result of speech recognition differs from the point name desired by the operator, it can eliminate the complexity of inputting the entire point name, and, since point data can be searched for by speech recognition and operator correction without registering in advance the point names to be recognized as keywords, it can improve the recognition rate of the point name desired by the operator and eliminate the complexity of the operation.
[0080] In the navigation device 100 of the present embodiment, the database 180 stores point data relating to the keywords in association with those keywords, and the display unit 200 presents the point data detected by the search means together with the point data associated with the identified keyword.
[0081] With this configuration, the navigation device 100 of the present embodiment presents both the detected point data and the point data associated with the identified keyword.
[0082] Therefore, since the navigation device 100 of the present embodiment can search for point data using the point names included in the map data, with the identified point name as a search key, it neither increases the amount of data used when recognizing the utterance nor requires the operator to increase the number of keywords to be recognized through troublesome operations such as registering in advance the point names desired to be recognized by the speech recognition process. As a result, the navigation device 100 of the present embodiment can improve the recognition rate of the keyword desired by the operator, that is, of the point name, and can eliminate the complexity of the operation.
[0083] The navigation device 100 of the present embodiment includes selection means used to select one item of point data from a plurality of items of point data when a plurality of items of point data are detected by the system control unit 250, and the display unit 200 presents the selected item of point data.
[0084] With this configuration, since the navigation device 100 of the present embodiment presents the selected point data, the point name desired by the user can be identified even when a plurality of keywords are recognized or retrieved.
[0085] The navigation device 100 of the present embodiment includes: the speech recognition circuit 170, which acquires the speech components of the utterance, analyzes the acquired speech components, extracts the utterance feature values that are the feature values of those speech components, and identifies a keyword based on the utterance feature values and the stored keyword feature data; the database 180, in which a plurality of keyword feature data indicating the speech feature values of the keywords are stored in advance; the map data storage unit 190, in which predetermined point data is stored in advance in association with point name information indicating the point name of that point data; the display unit 200, which notifies the operator of the identified keyword; the operation unit 150, which is used to correct the presented keyword when the announced keyword does not match the keyword desired by the operator; and the system control unit 250, which searches for point data having the corrected keyword as at least part of the point name information; and the display unit 200 presents the detected point data.
[0086] With this configuration, when the announced keyword does not match the keyword desired by the operator and the presented keyword is corrected, the navigation device 100 of the present embodiment searches for point data having the corrected keyword as at least part of the point name indicated by the point name information, and, when point data having at least part of its point name corresponding to the corrected keyword is detected as a result of the search, presents the detected point data.
[0087] 通常、音声認識が行われる場合には、認識すべきキーワードの特徴量と発話音声 の音声成分における特徴量を比較して確度の高 、キーワードを、認識されたキーヮ ードとして特定する。このため、任意のキーワードを認識させる際には、当該認識す べきキーワードの音声成分に関する特徴量を予めデータとして格納させておく必要 があるので、当該認識すべきキーワードの数を増大させるためには、音声認識用の データ量を増大させる必要がある。  [0087] Normally, when speech recognition is performed, the feature amount of the keyword to be recognized and the feature amount in the speech component of the uttered speech are compared, and the keyword is specified with high accuracy as the recognized keyword. . For this reason, when recognizing an arbitrary keyword, it is necessary to store in advance the feature amount related to the speech component of the keyword to be recognized as data, so in order to increase the number of keywords to be recognized It is necessary to increase the amount of data for voice recognition.
[0088] On the other hand, when speech recognition is performed, in principle the feature quantities of all stored keywords must be compared with the feature quantity of the speech component of the uttered voice, so a large amount of keyword data makes the speech recognition process extremely time-consuming.
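The exhaustive comparison described in the two paragraphs above can be sketched as a nearest-match search over stored feature vectors. The sketch below uses cosine similarity over fixed-length vectors purely for illustration — the embodiment itself uses HMM-based matching, and the keywords and vectors here are hypothetical:

```python
import math

# Hypothetical keyword feature database (first storage means):
# keyword -> feature vector extracted from its reference speech.
KEYWORD_FEATURES = {
    "shibuya":   [0.9, 0.1, 0.3],
    "shinjuku":  [0.2, 0.8, 0.5],
    "shinagawa": [0.4, 0.7, 0.9],
}

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def identify_keyword(utterance_features):
    """Compare the utterance feature quantity against every stored keyword
    feature quantity and return the best-scoring keyword."""
    return max(
        KEYWORD_FEATURES,
        key=lambda k: cosine_similarity(KEYWORD_FEATURES[k], utterance_features),
    )
```

Because every stored vector must be scored, the cost grows linearly with the number of registered keywords — which is precisely why the embodiment avoids enlarging the recognition vocabulary and instead refines the result through operator correction.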
[0089] The navigation apparatus 100 of the present embodiment therefore searches for point data whose point name, as indicated by the point name information, contains the corrected keyword in at least a part thereof. This avoids increasing the amount of data used to recognize uttered speech, and it also eliminates the need for cumbersome operations, such as registering in advance every point name the operator may wish to have recognized, in order to expand the set of recognizable keywords.
[0090] In other words, even when the speech recognition result differs from the point name desired by the operator, the navigation apparatus 100 of the present embodiment allows the recognized keyword to be corrected. This removes the burden of entering the entire point name, and point data can be retrieved through speech recognition combined with the operator's correction, without registering every point name to be recognized as a keyword in advance.
[0091] As a result, the navigation apparatus 100 of the present embodiment can improve the recognition rate of the point name desired by the operator while eliminating cumbersome operations.
[0092] In the present embodiment, the point data search process is used when setting a destination or waypoint during route setting; however, the invention is not limited to this, and the point data search process may also be performed whenever point data needs to be retrieved, such as during route guidance.
[0093] In the present embodiment, the point data search process is performed by the navigation apparatus 100 described above. Alternatively, the navigation apparatus 100, which has a microphone 160 for inputting the uttered speech to be recognized, may be provided with a computer and a recording medium; the data presentation program for point data search described above may be stored on the recording medium, and the computer may read and execute the program to perform the same point data search process as described above.
[0094] In the present embodiment, the display unit 200 presents and notifies information to the operator when a keyword is identified or when point data is detected on the basis of the identified point name; of course, the presentation and notification may instead be made to the operator by voice via the speaker 230.
[0095] In the present embodiment, the speech recognition process uses the HMM method, which determines the point name by sequentially comparing the keyword feature data stored in the database with the speech component of the input utterance; however, the invention is not limited to this, and any speech recognition process that uses the keyword feature data stored in the database may be employed.
[0096] In the present embodiment, the data search process is applied to the point data search process in the navigation apparatus 100; of course, it can also be applied to name-based searches of arbitrary data in a personal computer or other apparatus.

Claims

[1] A data presentation device comprising:
acquisition means for acquiring a speech component of an uttered voice;
extraction means for analyzing the speech component and extracting an utterance feature quantity that characterizes the speech component;
first storage means in which a plurality of keyword feature data items, each indicating a feature quantity of the speech of a keyword, are stored in advance;
second storage means in which predetermined actual data is stored in advance in association with name information indicating the name of the actual data;
identification means for identifying at least one keyword on the basis of the utterance feature quantity and the keyword feature data;
search means for searching for actual data whose name information contains the identified keyword in at least a part thereof; and
presentation means for presenting the detected actual data.
[2] The data presentation device according to claim 1, further comprising operation means used to correct the identified keyword,
wherein the search means searches for actual data whose name information contains, in at least a part thereof, the keyword corrected by means of the operation means.
[3] The data presentation device according to claim 1 or 2,
wherein the first storage means further stores, in association with each keyword, actual data relating to that keyword, and
the presentation means presents both the actual data detected by the search means and the actual data associated with the identified keyword.
[4] The data presentation device according to any one of claims 1 to 3, further comprising selection means used, when a plurality of actual data items are detected by the search means, to select one actual data item from the plurality,
wherein the presentation means presents the selected actual data item.
[5] A data presentation method comprising:
an acquisition step of acquiring a speech component of an uttered voice;
an extraction step of analyzing the speech component and extracting an utterance feature quantity that characterizes the speech component;
an identification step of identifying at least one keyword on the basis of the utterance feature quantity and keyword feature data indicating feature quantities of the speech of keywords;
a search step of searching for actual data whose name information, indicating the name of predetermined actual data, contains the identified keyword in at least a part thereof; and
a presentation step of presenting the detected actual data.
[6] A data presentation program causing a computer to function as:
acquisition means for acquiring a speech component of an uttered voice;
extraction means for analyzing the speech component and extracting an utterance feature quantity that characterizes the speech component;
identification means for identifying at least one keyword on the basis of the utterance feature quantity and keyword feature data indicating feature quantities of the speech of keywords;
search means for searching for actual data whose name information, indicating the name of predetermined actual data, contains the identified keyword in at least a part thereof; and
presentation means for presenting the detected actual data.
[7] A computer-readable recording medium on which the data presentation program according to claim 6 is recorded.
[8] A data presentation device comprising:
acquisition means for acquiring a speech component of an uttered voice;
extraction means for analyzing the speech component and extracting an utterance feature quantity that characterizes the speech component;
first storage means in which a plurality of keyword feature data items, each indicating a feature quantity of the speech of a keyword, are stored in advance;
second storage means in which predetermined actual data is stored in advance in association with name information indicating the name of the actual data;
identification means for identifying at least one keyword on the basis of the utterance feature quantity and the keyword feature data;
notification means for notifying an operator of the identified keyword;
correction means used to correct the notified keyword when the notified keyword does not match the keyword desired by the operator;
search means for searching for actual data whose name information contains, in at least a part thereof, the keyword corrected by the correction means; and
presentation means for presenting the detected actual data.
[9] A data presentation method comprising:
an acquisition step of acquiring a speech component of an uttered voice;
an extraction step of analyzing the speech component of the uttered voice and extracting an utterance feature quantity that characterizes the speech component;
an identification step of identifying at least one keyword on the basis of the utterance feature quantity and keyword feature data indicating feature quantities of the speech of keywords;
a notification step of notifying the identified keyword;
a search step of, when the notified keyword is corrected, searching for actual data whose name information, indicating the name of predetermined actual data, contains the corrected keyword in at least a part thereof; and
a presentation step of presenting the detected actual data.
[10] A data presentation program causing a computer to function as:
acquisition means for acquiring a speech component of an uttered voice;
extraction means for analyzing the speech component of the uttered voice and extracting an utterance feature quantity that characterizes the speech component;
identification means for identifying at least one keyword on the basis of the utterance feature quantity and keyword feature data indicating feature quantities of the speech of keywords;
notification means for notifying the identified keyword;
search means for, when the notified keyword is corrected, searching for actual data whose name information, indicating the name of predetermined actual data, contains the corrected keyword in at least a part thereof; and
presentation means for presenting the detected actual data.
[11] A computer-readable recording medium on which the data presentation program according to claim 10 is recorded.
PCT/JP2005/016515 2004-09-09 2005-09-08 Data presentation device, data presentation method, data presentation program, and recording medium containing the program WO2006028171A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2006535815A JPWO2006028171A1 (en) 2004-09-09 2005-09-08 Data presentation apparatus, data presentation method, data presentation program, and recording medium recording the program

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2004-261819 2004-09-09
JP2004261819 2004-09-09

Publications (1)

Publication Number Publication Date
WO2006028171A1 true WO2006028171A1 (en) 2006-03-16

Family

ID=36036452

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2005/016515 WO2006028171A1 (en) 2004-09-09 2005-09-08 Data presentation device, data presentation method, data presentation program, and recording medium containing the program

Country Status (2)

Country Link
JP (1) JPWO2006028171A1 (en)
WO (1) WO2006028171A1 (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2008164975A (en) * 2006-12-28 2008-07-17 Nissan Motor Co Ltd Speech recognition device and speech recognition method
WO2009022446A1 (en) * 2007-08-10 2009-02-19 Mitsubishi Electric Corporation Navigation device
WO2010013369A1 (en) * 2008-07-30 2010-02-04 三菱電機株式会社 Voice recognition device

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH0981184A (en) * 1995-09-12 1997-03-28 Toshiba Corp Interlocution support device
JPH11161464A (en) * 1997-11-25 1999-06-18 Nec Corp Japanese sentence preparing device
JPH11183190A (en) * 1997-12-24 1999-07-09 Toyota Motor Corp Voice recognition unit for navigation and navigation unit with voice recognition function
JP2000278369A (en) * 1999-03-29 2000-10-06 Sony Corp Communication apparatus, data acquiring device, and data acquiring method
JP2003167600A (en) * 2001-12-04 2003-06-13 Canon Inc Voice recognition unit and its method, page description language display device and its control method, and computer program
JP2003295891A (en) * 2002-02-04 2003-10-15 Matsushita Electric Ind Co Ltd Interface apparatus, task control method, and screen display method
JP2004133796A (en) * 2002-10-11 2004-04-30 Mitsubishi Electric Corp Information retrieval device and information retrieval method


Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2008164975A (en) * 2006-12-28 2008-07-17 Nissan Motor Co Ltd Speech recognition device and speech recognition method
WO2009022446A1 (en) * 2007-08-10 2009-02-19 Mitsubishi Electric Corporation Navigation device
WO2010013369A1 (en) * 2008-07-30 2010-02-04 三菱電機株式会社 Voice recognition device
CN102105929A (en) * 2008-07-30 2011-06-22 三菱电机株式会社 Voice recognition device
JPWO2010013369A1 (en) * 2008-07-30 2012-01-05 三菱電機株式会社 Voice recognition device
US8818816B2 (en) 2008-07-30 2014-08-26 Mitsubishi Electric Corporation Voice recognition device

Also Published As

Publication number Publication date
JPWO2006028171A1 (en) 2008-07-31

Similar Documents

Publication Publication Date Title
US6067521A (en) Interrupt correction of speech recognition for a navigation device
US6064323A (en) Navigation apparatus, navigation method and automotive vehicles
JP2001296882A (en) Navigation system
EP1873491A1 (en) Navigation device
JP4642953B2 (en) Voice search device and voice recognition navigation device
US20060253251A1 (en) Method for street name destination address entry using voice
JP2000338993A (en) Voice recognition device and navigation system using this device
JP5455355B2 (en) Speech recognition apparatus and program
US6963801B2 (en) Vehicle navigation system having position correcting function and position correcting method
JP2005275228A (en) Navigation system
JP3818352B2 (en) Navigation device and storage medium
WO2006028171A1 (en) Data presentation device, data presentation method, data presentation program, and recording medium containing the program
JP4274913B2 (en) Destination search device
JP3579971B2 (en) In-vehicle map display device
JPH09114487A (en) Device and method for speech recognition, device and method for navigation, and automobile
US20150192425A1 (en) Facility search apparatus and facility search method
WO2019124142A1 (en) Navigation device, navigation method, and computer program
JP4705398B2 (en) Voice guidance device, control method and program for voice guidance device
JP2005234991A (en) Information retrieval apparatus, information retrieval method, and information retrieval program
JP4645708B2 (en) Code recognition device and route search device
JP2006090867A (en) Navigation system
JP4952379B2 (en) NAVIGATION DEVICE, NAVIGATION DEVICE SEARCH METHOD, AND SEARCH PROGRAM
JP2004069424A (en) Navigation apparatus
JP2007280104A (en) Information processor, information processing method, information processing program, and computer readable recording medium
JP2000250588A (en) Vehicle voice recognition device

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A1

Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BW BY BZ CA CH CN CO CR CU CZ DE DK DM DZ EC EE EG ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KM KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NA NG NI NO NZ OM PG PH PL PT RO RU SC SD SE SG SK SL SM SY TJ TM TN TR TT TZ UA UG US UZ VC VN YU ZA ZM ZW

AL Designated countries for regional patents

Kind code of ref document: A1

Designated state(s): BW GH GM KE LS MW MZ NA SD SL SZ TZ UG ZM ZW AM AZ BY KG KZ MD RU TJ TM AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IS IT LT LU LV MC NL PL PT RO SE SI SK TR BF BJ CF CG CI CM GA GN GQ GW ML MR NE SN TD TG

DPE1 Request for preliminary examination filed after expiration of 19th month from priority date (pct application filed from 20040101)
121 Ep: the epo has been informed by wipo that ep was designated in this application
WWE Wipo information: entry into national phase

Ref document number: 2006535815

Country of ref document: JP

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 05782288

Country of ref document: EP

Kind code of ref document: A1