WO2013069060A1 - Navigation Device and Method - Google Patents
- Publication number
- WO2013069060A1 (PCT application PCT/JP2011/006292)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- route guidance
- expression
- unit
- guidance expression
- presentation content
- Prior art date
Classifications
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C21/00—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
- G01C21/26—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network
- G01C21/34—Route searching; Route guidance
- G01C21/36—Input/output arrangements for on-board computers
- G01C21/3605—Destination input or retrieval
- G01C21/3608—Destination input or retrieval using speech input, e.g. using speech recognition
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C21/00—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
- G01C21/26—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network
- G01C21/34—Route searching; Route guidance
- G01C21/36—Input/output arrangements for on-board computers
- G01C21/3626—Details of the output of route guidance instructions
- G01C21/3632—Guidance using simplified or iconic instructions, e.g. using arrows
-
- G—PHYSICS
- G08—SIGNALLING
- G08G—TRAFFIC CONTROL SYSTEMS
- G08G1/00—Traffic control systems for road vehicles
- G08G1/09—Arrangements for giving variable traffic instructions
- G08G1/0962—Arrangements for giving variable traffic instructions having an indicator mounted inside the vehicle, e.g. giving voice messages
- G08G1/0968—Systems involving transmission of navigation instructions to the vehicle
- G08G1/096855—Systems involving transmission of navigation instructions to the vehicle where the output is provided in a suitable form to the driver
- G08G1/096872—Systems involving transmission of navigation instructions to the vehicle where the output is provided in a suitable form to the driver where instructions are given per voice
-
- G—PHYSICS
- G09—EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
- G09B—EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
- G09B29/00—Maps; Plans; Charts; Diagrams, e.g. route diagram
- G09B29/10—Map spot or coordinate position indicators; Map reading aids
- G09B29/106—Map spot or coordinate position indicators; Map reading aids using electronic means
Definitions
- the present invention relates to a navigation apparatus and method capable of recognizing a user's utterance content and performing navigation.
- In a conventional in-vehicle navigation device, when the vehicle approaches a predetermined point on a set route (for example, an intersection at which the traveling direction is to be changed), route guidance is provided to the driver by voice output or graphic display.
- However, while such a navigation device can provide route guidance at predetermined points set in advance, it cannot take the route guidance that a passenger gives to the driver while driving and present it as guidance content of the navigation device.
- Patent Document 1 describes a speech recognition apparatus that always recognizes speech and displays the recognition result on the screen as it is.
- In that apparatus, however, the result of speech recognition is simply displayed on the screen as characters, and there is no function for extracting the route guidance expression from the recognition result and displaying it.
- As a result, unrelated content may be displayed, and because the spoken content is simply displayed verbatim, it is difficult for the driver to grasp the content intuitively.
- Moreover, when the spoken content is an abstract and ambiguous expression, it is still only displayed as text as it is, so the driver must interpret the displayed content into a concrete expression, which is burdensome.
- The present invention has been made to solve the above-described problems, and an object thereof is to provide a navigation apparatus and method that extract only the route guidance expressions a passenger utters for the driver, interpret and concretize abstract route guidance expressions, and present the concretized content in a form that is intuitively easy to understand.
- To this end, the present invention provides a navigation device that includes a position acquisition unit for acquiring the position of a moving body and that provides route guidance based on that position and on map data. The device comprises:
- a voice acquisition unit that acquires input voice;
- a voice recognition unit that performs voice recognition processing on the voice data acquired by the voice acquisition unit;
- a route guidance expression storage unit that stores route guidance expressions;
- a route guidance expression extraction unit that refers to the route guidance expression storage unit and extracts route guidance expressions from the recognition result of the voice recognition unit;
- a route guidance expression interpretation unit that refers to the route guidance expression storage unit, interprets the route guidance expressions extracted by the route guidance expression extraction unit, and identifies a specific route guidance expression;
- a route guidance expression presentation content storage unit that stores presentation content in association with each specific route guidance expression;
- a route guidance expression presentation content acquisition unit that refers to the presentation content storage unit and acquires the corresponding presentation content based on the specific route guidance expression identified by the route guidance expression interpretation unit; and
- a presentation control output unit that outputs the acquired presentation content.
- According to the navigation device of the present invention, only the route guidance expressions uttered to the driver by a speaker such as a passenger are extracted, abstract route guidance expressions are interpreted and concretized, and the presentation content corresponding to the concretized route guidance expression is output. The driver can therefore grasp the content intuitively, mishearing by the driver is prevented, and the vehicle is prevented from proceeding in a direction the speaker did not intend.
- FIG. 1 is a block diagram illustrating an example of a navigation device according to Embodiment 1.
- FIG. 2 is a diagram illustrating an example of the route guidance expression storage unit 3.
- FIG. 3 is a diagram illustrating an example of the route guidance expression presentation content storage unit 8 when the presentation content is visual presentation content.
- FIG. 4 is a flowchart showing the operation of the navigation device according to Embodiment 1.
- FIG. 5 is a diagram showing, for Embodiment 1, a screen example of the route guidance information presented to the user when the presentation content is visual presentation content.
- FIG. 6 is a block diagram illustrating an example of a navigation device according to Embodiment 2.
- FIG. 7 is a flowchart showing the operation of the navigation device according to Embodiment 2.
- FIG. 8 is a diagram showing, for Embodiment 2, a screen example of the route guidance information presented to the user when the presentation content is visual presentation content.
- FIG. 9 is a diagram showing, for Embodiment 2, another screen example of the route guidance information presented to the user when the presentation content is visual presentation content.
- FIG. 10 is a flowchart showing the operation of the navigation device according to Embodiment 3.
- FIG. 11 is a diagram showing, for Embodiment 3, a screen example of the route guidance information presented to the user when the presentation content is visual presentation content and the route guidance expressions include one representing a place.
- FIG. 12 is a block diagram illustrating an example of a navigation device according to Embodiment 4.
- FIG. 13 is a flowchart showing the operation of the navigation device according to Embodiment 4.
- FIG. 14 is a block diagram illustrating an example of a navigation device according to Embodiment 5.
- FIG. 15 is a flowchart showing the operation of the navigation device according to Embodiment 5.
- FIG. 16 is a block diagram illustrating an example of a navigation device according to Embodiment 6.
- FIG. 17 is a flowchart showing the operation of the navigation device according to Embodiment 6.
- FIG. 18 is a block diagram illustrating an example of a navigation device according to Embodiment 7.
- FIG. 19 is a flowchart showing the operation of the navigation device according to Embodiment 7.
- FIG. 20 is a flowchart showing the operation of the navigation device according to Embodiment 8.
- FIG. 21 is a block diagram illustrating an example of a navigation device according to Embodiment 9.
- FIG. 22 is a flowchart showing the operation of the navigation device according to Embodiment 9.
- FIG. 23 is a block diagram illustrating an example of a navigation device according to Embodiment 10.
- FIG. 24 is a diagram illustrating an example of the cancellation/correction expression storage unit 16.
- FIG. 25 is a flowchart showing the operation of the navigation device according to Embodiment 10.
- FIG. 26 is a diagram showing, for Embodiment 10, an example of screen transitions when a cancellation expression is extracted.
- FIG. 27 is a diagram showing, for Embodiment 10, an example of screen transitions when a correction expression is extracted.
- FIG. 1 is a block diagram showing an example of a navigation apparatus according to Embodiment 1 of the present invention.
- As shown in FIG. 1, the navigation device includes a voice acquisition unit 1, a voice recognition unit 2, a route guidance expression storage unit 3, a route guidance expression extraction unit 4, a map data storage unit 5, a vehicle position acquisition unit (position acquisition unit) 6, a route guidance expression interpretation unit 7, a route guidance expression presentation content storage unit 8, a route guidance expression presentation content acquisition unit 9, and a presentation control output unit 20 made up of a presentation control unit 10, a display unit 21, and a voice output unit 22.
- the navigation device also includes a key input unit that acquires an input signal using a key, a touch panel, or the like.
- The voice acquisition unit 1 performs A/D conversion on a user utterance collected by a microphone or the like, that is, on the input voice, and acquires it in, for example, PCM (Pulse Code Modulation) format.
- The voice recognition unit 2 has a recognition dictionary (not shown). It detects, from the voice data acquired by the voice acquisition unit 1, a voice section corresponding to the content spoken by a speaker such as a passenger, extracts a feature amount, and performs speech recognition processing using the recognition dictionary based on that feature amount. Note that the voice recognition unit 2 may use a speech recognition server on a network.
- The route guidance expression storage unit 3 stores expressions that a person is ordinarily assumed to use when providing route guidance.
- FIG. 2 is a diagram illustrating an example of the route guidance expression storage unit 3.
- The route guidance expression storage unit 3 stores, for example: demonstratives such as “that”, “next”, and “at the end” that point to a landmark at the route guidance point where guidance should be performed; expressions representing landmarks, such as “intersection”, “family restaurant”, “car”, “signal”, and “destination”; distance expressions such as “100 meters” and “200 meters ahead”; direct expressions of the traveling direction such as “turn right” and “to the right”; and indirect expressions of the traveling direction such as “along the road” and “over here”.
- The route guidance expression extraction unit 4 performs morphological analysis on the character string of the speech recognition result of the voice recognition unit 2 and, referring to the route guidance expression storage unit 3, extracts route guidance expressions from it.
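As a rough illustration of this extraction step, one can scan the recognized character string for the stored expressions. This is a simplified sketch only: the patent specifies morphological analysis of the (Japanese) recognition result, and the expression list and function names here are invented for illustration.

```python
# Illustrative sketch of the route guidance expression extraction unit (unit 4).
# The patent performs morphological analysis on the Japanese recognition result;
# here it is approximated by longest-match scanning against stored expressions.
# The expression list is a toy stand-in for the storage unit (unit 3).

GUIDANCE_EXPRESSIONS = {
    "next", "at the end", "intersection", "family restaurant",
    "signal", "turn right", "turn left", "along the road",
    "100 meters ahead", "200 meters ahead",
}

def extract_guidance_expressions(recognition_result: str) -> list[str]:
    """Return the stored expressions found in the recognition result, in order."""
    text = recognition_result.lower()
    found, pos = [], 0
    while pos < len(text):
        # pick the longest stored expression that starts at this position
        match = max((e for e in GUIDANCE_EXPRESSIONS if text.startswith(e, pos)),
                    key=len, default=None)
        if match:
            found.append(match)
            pos += len(match)
        else:
            pos += 1
    return found

print(extract_guidance_expressions("turn right at the next intersection"))
# → ['turn right', 'next', 'intersection']
```

Longest-match scanning stands in for the morphological analyzer so that multi-word expressions such as “turn right” are not split apart.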
- the map data storage unit 5 stores map data such as road data, intersection data, facility data, and the like.
- The map data storage unit 5 may be a storage medium such as a DVD-ROM, a hard disk, or an SD card; alternatively, it may exist on a network and acquire information such as road data via a communication network (a map data acquisition unit).
- The vehicle position acquisition unit (position acquisition unit) 6 acquires the current position (longitude and latitude) and traveling direction of the vehicle (moving body) using information acquired from a GPS receiver, a gyroscope, or the like.
- the route guidance expression interpretation unit 7 acquires the route guidance expression extracted by the route guidance expression extraction unit 4, interprets each route guidance expression, and identifies a specific route guidance expression.
- First, route guidance expressions indicating the traveling direction will be described. If the extracted route guidance expressions include an expression indicating a direct direction, such as “turn right” or “to the right”, it is interpreted as directly indicating a specific direction, and “turn right” or “to the right” is identified as the specific route guidance expression as it is.
- If the extracted route guidance expressions include an expression indicating an indirect direction, the map data acquired from the map data storage unit 5 and the position and traveling direction of the vehicle (moving body) acquired from the vehicle position acquisition unit (position acquisition unit) 6 are used to interpret the specific direction that the expression indicates and to identify the specific route guidance expression. For example, if the road ahead gradually curves to the upper right (northeast), “along the road” is interpreted as meaning to proceed to the upper right, and “to the upper right” is identified as the specific route guidance expression.
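A minimal sketch of this interpretation step: given the bearing of the road ahead (obtainable from map data and the vehicle heading), an indirect expression such as “along the road” can be resolved to a concrete direction label. The 8-way quantization and the labels are assumptions for illustration, not taken from the patent.

```python
# Sketch: resolve the indirect expression "along the road" to a concrete
# direction from the bearing of the road ahead (0 deg = north, clockwise).
# The 8-way quantization and the labels are illustrative assumptions.

LABELS = ["straight ahead (north)", "to the upper right (northeast)",
          "to the right (east)", "to the lower right (southeast)",
          "backward (south)", "to the lower left (southwest)",
          "to the left (west)", "to the upper left (northwest)"]

def interpret_along_the_road(road_bearing_deg: float) -> str:
    """Map a road bearing to a specific route guidance expression."""
    sector = round((road_bearing_deg % 360) / 45) % 8
    return LABELS[sector]

print(interpret_along_the_road(45))  # road gradually curving northeast
# → to the upper right (northeast)
```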
- The route guidance expression presentation content storage unit 8 stores each specific route guidance expression indicating a traveling direction, as identified by the route guidance expression interpretation unit 7, in association with its corresponding presentation content (visual presentation content or auditory presentation content). Visual presentation content is content presented visually to the driver on the navigation display screen, the dashboard, or the like: for example, an arrow or other figure indicating the direction, characters indicating the direction, or the route to be taken on the map emphasized by a change of color or thickness.
- FIG. 3 is a diagram illustrating an example of the route guidance expression presentation content storage unit 8 when the presentation content is visual presentation content.
- In the example of FIG. 3, the road color and road thickness are set to the same values for “turn right”, “turn left”, and “to the upper right”, but a different color or thickness may be used for each route guidance expression.
- Although an illustration of the route guidance expression presentation content storage unit 8 for auditory presentation content is omitted, it stores, for example, voice data (synthesized speech) “turn right” for the specific route guidance expressions “turn right” and “to the right”, and voice data (synthesized speech) “turn left” for the specific route guidance expressions “turn left” and “to the left”.
- In this example, presentation content is stored only for specific route guidance expressions indicating the traveling direction, but voice data (synthesized speech) may also be stored for all other assumed specific route guidance expressions, such as intersection names and restaurant names.
- The route guidance expression presentation content acquisition unit 9 searches the route guidance expression presentation content storage unit 8 using the specific route guidance expression identified by the route guidance expression interpretation unit 7 as a search key, and acquires the presentation content (visual presentation content or auditory presentation content) associated with the specific route guidance expression that matches the key.
- When the presentation content is auditory, the route guidance expression presentation content acquisition unit 9 may also acquire the presentation content by creating synthesized speech. Since methods for generating synthesized speech from a character string are well known, their description is omitted here.
- The presentation control unit 10 outputs the presentation content acquired by the route guidance expression presentation content acquisition unit 9 to the display unit 21, the voice output unit 22, or both. That is, visual presentation content is output to the display unit 21 (for example, the navigation display screen, the dashboard, or the windshield), and auditory presentation content is output to the voice output unit 22 (a speaker or the like).
- FIG. 4 is a flowchart showing the operation of the navigation device according to the first embodiment.
- the voice acquisition unit 1 acquires the input voice, performs A / D conversion, and acquires it as, for example, PCM format voice data (step ST01).
- the voice recognition unit 2 recognizes the voice data acquired by the voice acquisition unit 1 (step ST02).
- Next, the route guidance expression extraction unit 4 extracts route guidance expressions from the recognition result of the voice recognition unit 2 while referring to the route guidance expression storage unit 3 (step ST03). Thereafter, the route guidance expression interpretation unit 7 interprets the extracted route guidance expressions to identify a specific route guidance expression (steps ST04 to ST11).
- In step ST04, it is determined whether the route guidance expressions extracted in step ST03 include a route guidance expression indicating the traveling direction. If not (NO in step ST04), the process ends. If they do (YES in step ST04), it is determined whether that route guidance expression directly indicates the traveling direction (step ST05).
- In step ST05, if the expression directly indicates the traveling direction, such as “turn right” (YES in step ST05), the route guidance expression “turn right” directly indicating the traveling direction is identified as the specific route guidance expression (step ST07).
- Next, the route guidance expression presentation content acquisition unit 9 searches the route guidance expression presentation content storage unit 8 using the specific route guidance expression identified by the route guidance expression interpretation unit 7 as a search key, and looks for a specific route guidance expression matching the key, thereby retrieving the presentation content (visual presentation content or auditory presentation content) corresponding to the specific route guidance expression (step ST08).
- If a specific route guidance expression matching the search key is found (YES in step ST09), the presentation content corresponding to it (visual presentation content, auditory presentation content, or both) is acquired (step ST10), and the presentation control output unit 20 outputs that presentation content by display, by voice output, or both (step ST11). On the other hand, if no specific route guidance expression matching the search key is found (NO in step ST09), the process ends.
- Assume, for example, that a passenger utters “go along the road”. The voice acquisition unit 1 acquires the voice data (step ST01), and the voice recognition unit 2 recognizes it and obtains a recognition result (step ST02).
- Next, the route guidance expression extraction unit 4 refers to the route guidance expression storage unit 3 shown in FIG. 2 and extracts the character string “along the road” as a route guidance expression (step ST03).
- The route guidance expression interpretation unit 7 determines that the character string “along the road” is a route guidance expression representing an indirect traveling direction (YES in step ST04 and NO in step ST05), and interprets the traveling direction.
- Since the road on which the vehicle (moving body) is traveling gradually curves to the upper right (northeast), “along the road” is interpreted as meaning to proceed to the upper right, and “to the upper right” is identified as the specific route guidance expression.
- The route guidance expression presentation content storage unit 8 shown in FIG. 3 is then searched using the character string “to the upper right” as a search key (step ST08). Since the specific route guidance expression “to the upper right” is found in FIG. 3 (YES in step ST09), the visual presentation content, the “diagonal upper-right arrow” graphic data, and the auditory presentation content, the voice data “to the upper right” or the voice data “to the northeast”, are acquired (step ST10). If the acquired presentation content is visual presentation content it is output on the display screen or the like, and if it is auditory presentation content it is output from the speaker (step ST11). When both are acquired, both display and audio output are performed.
- FIG. 5 is a diagram showing a screen example in which the presentation content is output on the display screen and presented to the user when the presentation content is visual presentation content.
- FIG. 5(a) shows a display screen in which the host vehicle 32 is displayed as a triangle on the navigation screen 31; the state in which the passenger has said “go along the road” is indicated by a balloon 33.
- FIG. 5(b) shows the result of the navigation device performing the processing of the flowchart shown in FIG. 4 at this time: the “diagonal upper-right arrow” graphic data 34 is displayed as the visual presentation content on the same navigation screen 31 as in FIG. 5(a).
- When the presentation content is visual presentation content, character information such as the direction name may be displayed at the same time as the figure, and whether it is displayed may be set by the user.
- FIG. 5(b) shows an arrow displayed on the road, but the display location may be anywhere on the screen: it may be a fixed location, or a location chosen so that the road is not hidden. The content may also be displayed on the windshield instead of on the display screen of the navigation device.
- A display device specifying unit may further be provided to determine on which output device the content is presented.
- the graphic or character to be displayed may be displayed in a manner that is easier for the user to recognize, such as blinking, moving from right to left, or displaying in a fade-in manner.
- the user may be able to set which method is used for display.
- the user may be able to set whether only the visual presentation contents are displayed, whether only the auditory presentation contents are output by voice, or both are output.
- The auditory presentation content need not be voice data corresponding to a specific route guidance expression; it may be a non-verbal sound that calls the driver's attention, for example a “pong” notification sound effect. The device may also be configured to output such non-verbal auditory presentation content in addition to the voice data.
- In the above description, the passenger's utterance content is always recognized, but a button or the like for instructing the start of voice recognition may be provided so that voice recognition is performed only while the button is pressed.
- the user may be able to set whether to always recognize or recognize only for a predetermined period.
- As described above, according to the navigation device of Embodiment 1, the route guidance expression can be presented visually as graphic data such as an arrow or as character data, making it easier for the driver to understand intuitively, preventing the driver from mishearing, and preventing the vehicle from proceeding in a direction the speaker did not intend.
- The content presented to the driver may be auditory presentation content instead of visual presentation content; in that case, the content of the route guidance expression spoken by a speaker such as the passenger is output by voice. This likewise prevents the driver from mishearing and prevents the vehicle from proceeding in a direction the speaker did not intend.
- Furthermore, by outputting both the visual presentation content and the auditory presentation content corresponding to the concretized route guidance expression, proceeding in an unintended direction due to the driver's mishearing can be prevented even more reliably.
- FIG. 6 is a block diagram showing an example of a navigation apparatus according to Embodiment 2 of the present invention.
- Components that are the same as those of Embodiment 1 are given the same reference numerals, and duplicate description is omitted.
- In Embodiment 2, the route guidance expression interpretation unit 7 not only identifies the specific route guidance expression but also outputs to the presentation control unit 10 the name of the landmark (route guidance point) for which route guidance should be performed, or identifies and outputs to the presentation control unit 10 the display position (the position of the route guidance point) to be used when the presentation content is visual presentation content.
- the route guidance expression interpretation unit 7 acquires the route guidance expression extracted by the route guidance expression extraction unit 4, interprets each route guidance expression, and identifies a specific route guidance expression.
- When the route guidance expressions include a demonstrative such as “that”, “next”, “at the end”, “100 meters ahead”, or “200 meters ahead”, the map data acquired from the map data storage unit 5 and the position and traveling direction of the vehicle (moving body) acquired from the vehicle position acquisition unit (position acquisition unit) 6 are used to interpret what the demonstrative refers to and to identify the name and position of the route guidance point.
- For example, for the expressions “next” and “intersection”, which intersection is meant is interpreted using the map data, the position of the vehicle (moving body), and the traveling direction, and the specific route guidance expression “Honmachi 1-chome intersection” and its position are identified. Likewise, for a “family restaurant”, which restaurant is meant is interpreted, and the specific route guidance expression “Restaurant XX” and its position are identified.
- For a route guidance expression such as “at the end” or “200 meters ahead” that does not include an expression representing a landmark, the referenced point is likewise interpreted using the map data, the position of the vehicle (moving body), and the traveling direction.
- The identified specific route guidance expression is output to the route guidance expression presentation content acquisition unit 9, and its position is output to the presentation control unit 10. Further, as with the route guidance expression interpretation unit 7 of Embodiment 1, when the extracted route guidance expressions include an expression indicating a direct direction, such as “turn right” or “to the right”, it is interpreted as it is and identified as the specific route guidance expression.
- If the extracted route guidance expressions include an expression indicating an indirect direction, the map data acquired from the map data storage unit 5 and the position and traveling direction of the vehicle (moving body) acquired from the vehicle position acquisition unit (position acquisition unit) 6 are used to interpret the specific direction that the expression indicates and to identify the specific route guidance expression. For example, if the road ahead gradually curves to the upper right (northeast), “along the road” is interpreted as meaning to proceed to the upper right, and “to the upper right” is identified as the specific route guidance expression.
- FIG. 3 also serves as the diagram showing an example of the route guidance expression presentation content storage unit 8 of Embodiment 2 when the presentation content is visual presentation content. However, while in the example shown in FIG. 3 presentation content is stored only for specific route guidance expressions representing the traveling direction, in Embodiment 2 voice data (synthesized speech) may also be stored for all possible specific route guidance expressions, such as intersection names and restaurant names.
- When the presentation control unit 10 outputs the presentation content acquired by the route guidance expression presentation content acquisition unit 9 to the display unit 21, the voice output unit 22, or both, and the presentation content is visual presentation content, the presentation content is displayed at the position of the route guidance point identified by the route guidance expression interpretation unit 7.
- FIG. 7 is a flowchart showing the operation of the navigation device according to the second embodiment.
- the processing from steps ST21 to ST26 is the same as steps ST01 to ST06 in the flowchart of FIG.
- In step ST25, if the expression is a route guidance expression directly indicating the traveling direction, such as “turn right” (YES in step ST25), it is further determined whether there is a route guidance expression that indicates or represents a landmark (step ST27).
- If there is no such expression (NO in step ST27), the route guidance expression “turn right” directly indicating the traveling direction is identified as the specific route guidance expression (step ST28).
- If a landmark is indicated, for example by the route guidance expressions “next” and “intersection” (YES in step ST27), the route guidance point serving as the landmark can be identified based on those expressions, the map data, and the position and traveling direction of the vehicle (moving body) (step ST29).
- Also, when there is a demonstrative but no route guidance expression representing a landmark, the route guidance point referred to by the demonstrative can be identified based on the map data, the position of the vehicle (moving body), and the traveling direction.
- If the route guidance point serving as the landmark can be identified (YES in step ST29), its name and position are acquired (step ST30), and the acquired name of the route guidance point, together with the route guidance expression determined in step ST25 to directly indicate the traveling direction, is identified as the specific route guidance expression (step ST31).
- The processing of steps ST32 to ST37 subsequent to step ST31 will be described later.
- If the route guidance point cannot be identified in step ST29 (NO in step ST29), the route guidance expression determined in step ST25 to directly indicate the traveling direction is identified as the specific route guidance expression (step ST28). The subsequent processing of steps ST32 to ST37 is described below.
- Next, the route guidance expression presentation content acquisition unit 9 searches the route guidance expression presentation content storage unit 8 using the specific route guidance expression identified by the route guidance expression interpretation unit 7 as a search key, and looks for a specific route guidance expression matching the key, thereby retrieving the presentation content (visual presentation content or auditory presentation content) corresponding to the specific route guidance expression (step ST32).
- when the presentation content corresponding to the specific route guidance expression is found (YES in step ST33), that presentation content (visual presentation content, auditory presentation content, or both) is acquired (step ST34). Furthermore, when the acquired presentation content is visual presentation content and the position of the road guidance point was acquired in step ST30 (YES in step ST35), the presentation control output unit 20 displays the visual presentation content at the acquired position (step ST36). On the other hand, when the presentation content acquired in step ST34 is auditory presentation content, or when the process did not go through the acquisition of the road guidance point position in step ST30, the acquired presentation content is simply output (by display and/or sound) (step ST37). When no specific route guidance expression matching the search key is found (NO in step ST33), the process ends.
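The search and output flow of steps ST32 to ST37 above can be sketched as follows. This is an illustrative Python sketch, not the patent's implementation: the storage layout, the function name `present`, and the output strings are assumptions introduced only to show the branching.

```python
# Hypothetical sketch of steps ST32-ST37: look up the presentation
# content for a specific route guidance expression, then decide how
# to output it depending on whether a landmark position is known.
PRESENTATION_STORE = {
    "turn right": {
        "visual": ["right-arrow graphic", 'text "turn right"'],
        "auditory": ['voice "turn right"'],
    },
}

def present(expression, landmark_position=None):
    entry = PRESENTATION_STORE.get(expression)   # ST32: search by key
    if entry is None:
        return None                              # ST33 NO: end processing
    outputs = []
    for visual in entry["visual"]:               # ST34: acquire content
        if landmark_position is not None:        # ST35 YES: position known
            outputs.append(f"display {visual} at {landmark_position}")  # ST36
        else:
            outputs.append(f"display {visual}")  # ST37 (display branch)
    for sound in entry["auditory"]:
        outputs.append(f"play {sound}")          # ST37 (audio branch)
    return outputs
```

For example, `present("turn right", (10, 20))` yields both visual outputs anchored at the landmark position plus the voice output, while an unknown expression returns `None`, matching the NO branch of step ST33.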
- the voice acquisition unit 1 acquires the voice data (step ST21), and the voice recognition unit 2 obtains the recognition result “turn right at the next intersection” (step ST22).
- the route guidance expression extraction unit 4 refers to the route guidance expression storage unit 3 as shown in FIG. 2 and extracts the character strings “next”, “intersection”, and “turn right” as route guidance expressions (step ST23).
- the route guidance expression interpretation unit 7 determines that the character string “turn right” is a route guidance expression directly indicating the traveling direction (YES in step ST24 and YES in step ST25), and further determines whether there is a route guidance expression indicating a landmark (step ST27).
- since there are route guidance expressions “next” and “intersection” representing a landmark (YES in step ST27), the road guidance point, that is, which intersection is specifically meant, can be identified based on these route guidance expressions, the map data, and the position and traveling direction of the own vehicle (moving body) (YES in step ST29). Here the road guidance point meant by “next” “intersection” is “Honmachi 1-chome intersection”, so its name “Honmachi 1-chome intersection” and its position are acquired (step ST30).
- the acquired name of the road guidance point, “Honmachi 1-chome intersection”, and the route guidance expression “turn right” indicating the traveling direction are specified as the specific route guidance expression (step ST31).
- the route guidance expression presentation content storage unit 8 as shown in FIG. 3 is searched (step ST32). Since the specific route guidance expression “turn right” is found in the route guidance expression presentation content storage unit 8 (YES in step ST33), the corresponding visual presentation content, namely the “right-arrow graphic data” and the character data “turn right”, and the auditory presentation content, the voice data “turn right”, are acquired. If voice data corresponding to the intersection name “Honmachi 1-chome intersection” is also stored, it can likewise be acquired (step ST34).
- since the position of the road guidance point “Honmachi 1-chome intersection” was also acquired in step ST30 (YES in step ST35), the visual presentation contents, the “right-arrow graphic data” and the character data “turn right”, are output at the position of “Honmachi 1-chome intersection” on the display screen (step ST36).
- when the acquired presentation content is auditory, the voice data is output from the speaker (step ST37); when both kinds of presentation content are acquired, both display and audio output are performed.
- FIG. 8 is a diagram illustrating an example of a screen that is presented to the user by outputting the presentation content on the display screen when the presentation content is visual presentation content.
- FIG. 8A shows a display screen in which the own vehicle 32 is displayed as a triangle on the navigation screen 31, in a state where the passenger has uttered “turn right at the next intersection”; the utterance content is indicated by a balloon 33.
- the names 35 of the two intersections are “Honmachi 1-chome intersection” and “Honmachi 2-chome intersection” from the bottom.
- FIG. 8B shows the result of the navigation device performing the processing of the flowchart shown in FIG. 7 at this time: on the same navigation screen 31 as FIG. 8A, the right-arrow graphic data 34 is displayed as the visual presentation content at the position of “Honmachi 1-chome intersection”.
- FIG. 9 is a diagram showing another example of a screen that is presented to the user by outputting the presentation content on the display screen when the presentation content is visual presentation content.
- FIG. 9A shows a display screen in which the own vehicle 32 is displayed as a triangle on the navigation screen 31, in a state where the passenger has uttered “turn right at the end”; the utterance content is indicated by a balloon 33.
- the names 35 of the two intersections are “Honmachi 1-chome intersection” and “Honmachi 2-chome intersection” from the bottom.
- FIG. 9B shows the result of the navigation device performing the processing of the flowchart shown in FIG. 7 at this time: on the same navigation screen 31 as FIG. 9A, the right-arrow graphic data 34 is displayed as the visual presentation content at the position of “Honmachi 2-chome intersection”, the intersection at the end.
- when the presentation content is visual presentation content, graphic data and character information may be displayed at the same time.
- which of these are displayed may be set by the user. Furthermore, the name of the specified road guidance point may also be displayed.
- the display location may be any location on the screen: it may be a fixed location, or a location chosen so that the road is not hidden. Further, the content may be displayed on the windshield instead of on the display screen of the navigation device.
- a display device specifying unit may further be provided to determine on which output device the content is presented.
- the graphic or character to be displayed may be displayed in a manner that is easier for the user to recognize, such as blinking, moving from right to left, or displaying in a fade-in manner.
- the user may be able to set which method is used for display.
- the user may be able to set whether only the visual presentation contents are displayed, whether only the auditory presentation contents are output by voice, or both are output.
- the auditory presentation content need not be voice data corresponding to a specific route guidance expression; it may instead call attention with a non-verbal sound such as a “pong” (a notification sound effect).
- the configuration may be such that auditory presentation content consisting of such non-verbal sounds is also output.
- as the auditory presentation content, the name of the road guidance point specified by the route guidance expression interpretation unit 7 and the specific traveling direction may both be output as continuous voice data, for example “Honmachi 1-chome intersection, turn right”.
- although the passenger's utterance content is always recognized here, for example the user may press a voice recognition button when starting route guidance, so that voice recognition is performed only while the button is pressed.
- the user may be able to set whether to always recognize or recognize only for a predetermined period.
- as described above, the abstract route guidance expression is interpreted and made concrete, and the concrete route guidance expression can be displayed visually at the location of the road guidance point using graphic data such as arrows and character data; this makes the guidance easier for the driver to understand intuitively and prevents the vehicle from moving in a direction the speaker did not intend.
- in addition, the content of the route guidance expression uttered by a speaker such as the passenger is specified as the name of a road guidance point.
- Embodiment 3. Since the block diagram of the navigation device according to the third embodiment of the present invention is the same as FIG. 6 in the second embodiment, its illustration and description are omitted.
- in the third embodiment, the landmark indicated by the demonstrative word included in the route guidance expression is specified in consideration of the currently set route information.
- FIG. 10 is a flowchart showing the operation of the navigation device according to the third embodiment.
- since the processes of steps ST41 to ST49 and steps ST51 to ST57, that is, all steps other than step ST50, are the same as steps ST21 to ST29 and steps ST31 to ST37 in the flowchart of FIG. 7, their description is omitted.
- when there is a route guidance expression indicating or representing a landmark, such as “next” “intersection”, in step ST47 (YES in step ST47), the road guidance point, that is, the specific intersection, can be identified based on these route guidance expressions “next” and “intersection”, the map data, and the position and traveling direction of the own vehicle (moving body) (YES in step ST49); the route guidance point is therefore interpreted and specified, and at this time its name and position are acquired in consideration of the currently set route (course) information (step ST50).
- when the road guidance point cannot be specified in step ST49 (NO in step ST49), the route guidance expression determined in step ST45 to directly indicate the traveling direction is specified as the specific route guidance expression (step ST48).
- FIG. 11A is the same diagram as FIG. 8A: a display screen in which the own vehicle 32 is displayed as a triangle on the navigation screen 31, in a state where the passenger has uttered “turn right at the next intersection”; the utterance content is indicated by a balloon 33.
- FIG. 11B shows the result of the navigation device performing the processing of the flowchart shown in FIG. 10 at this time: on the same navigation screen 31 as FIG. 11A, the right-arrow graphic data 34 is displayed as the visual presentation content at the position of “Honmachi 2-chome intersection”, determined in consideration of the set route information.
- that is, when the route guidance expression interpretation unit 7 obtains the expressions “next” and “XX” (for example, “intersection”), it interprets the “next” “XX” (intersection) not simply as the one ahead of the current position of the own vehicle (moving body), but as the next one along the set route (course). As a result, the route guidance expression interpretation unit 7 can accurately identify the “XX” (intersection) indicated by “next” as “Honmachi 2-chome intersection”.
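This route-aware resolution of “next” can be sketched as follows. The sketch is illustrative only; the data structures (an ordered list of intersections along the set route) and the function name are assumptions, not the patent's actual data format.

```python
# Illustrative sketch of the Embodiment 3 interpretation: resolve
# "next intersection" along the currently set route, so that an
# intersection the vehicle is merely near is not chosen by mistake.
def next_intersection_on_route(route_intersections, passed_count):
    """route_intersections: intersections in the order the set route
    visits them; passed_count: how many have already been passed."""
    remaining = route_intersections[passed_count:]
    return remaining[0] if remaining else None

# Although "Honmachi 1-chome intersection" may be geographically
# nearest, the set route visits "Honmachi 2-chome intersection" next,
# so "next" resolves along the route:
route = ["Honmachi 2-chome intersection", "Honmachi 3-chome intersection"]
print(next_intersection_on_route(route, 0))
```

The design point shown here is simply that the lookup key is the route order, not map proximity, which is why FIG. 11B places the arrow at “Honmachi 2-chome intersection”.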
- when the presentation content is visual presentation content, graphic data and character information may be displayed at the same time.
- which of these are displayed may be set by the user. Furthermore, the name of the specified road guidance point may also be displayed.
- the display location may be any location on the screen: it may be a fixed location, or a location chosen so that the road is not hidden. Further, the content may be displayed on the windshield instead of on the display screen of the navigation device.
- a display device specifying unit may further be provided to determine on which output device the content is presented.
- the graphic or character to be displayed may be displayed in a manner that is easier for the user to recognize, such as blinking, moving from right to left, or displaying in a fade-in manner.
- the user may be able to set which method is used for display.
- the user may be able to set whether only the visual presentation contents are displayed, whether only the auditory presentation contents are output by voice, or both are output.
- the auditory presentation content need not be voice data corresponding to a specific route guidance expression; it may instead call attention with a non-verbal sound such as a “pong” (a notification sound effect).
- the configuration may be such that auditory presentation content consisting of such non-verbal sounds is also output.
- as the auditory presentation content, the name of the road guidance point specified by the route guidance expression interpretation unit 7 and the specific traveling direction may both be output as continuous voice data, for example “Honmachi 2-chome intersection, turn right”.
- although the passenger's utterance content is always recognized here, for example the user may press a voice recognition button when starting route guidance, so that voice recognition is performed only while the button is pressed.
- the user may be able to set whether to always recognize or recognize only for a predetermined period.
- the user may be able to set whether or not to use the function that takes into account the set route information in the third embodiment.
- as described above, according to the third embodiment, the location of a specific road guidance point can be identified and displayed more accurately than from a simple route guidance expression alone, preventing mishearing and misunderstanding by the driver and further preventing the vehicle from proceeding in a direction the speaker did not intend.
- FIG. 12 is a block diagram showing an example of a navigation device according to Embodiment 4 of the present invention. Note that the same components as those described in the first to third embodiments are denoted by the same reference numerals, and redundant description is omitted.
- compared with Embodiment 3, Embodiment 4 shown below further includes an external object recognition unit 11, and the route guidance expression interpretation unit 7 interprets the content of the route guidance expression related to the landmark extracted by the route guidance expression extraction unit 4 using the information on the object output by the external object recognition unit 11.
- the external object recognition unit 11 analyzes information acquired by a sensor such as a camera, recognizes surrounding objects (for example, cars and landmarks), and outputs the features of each object and the distance to it.
- FIG. 13 is a diagram illustrating an example of the route guidance expression storage unit 3 according to the fourth embodiment. As shown in this figure, the route guidance expression storage unit 3 additionally holds expressions describing object attributes, such as “red”, “white”, “tall”, “large”, and “round”.
- FIG. 14 is a flowchart showing the operation of the navigation device according to the fourth embodiment.
- since the processing of steps ST61 to ST68 and steps ST72 to ST78, that is, all steps other than steps ST69 to ST71, is the same as steps ST21 to ST28 and steps ST31 to ST37 in the flowchart of FIG. 7, its description is omitted.
- the external object recognition unit 11 recognizes the surrounding (external) objects and outputs their features (step ST69). Further, the route guidance expression interpretation unit 7 determines whether the route guidance expression extracted by the route guidance expression extraction unit 4 in step ST63 matches an object recognized by the external object recognition unit 11 (step ST70).
- if they match (YES in step ST70), the distance to the object output by the external object recognition unit 11 is acquired by a known method, and based on this distance information, the map data, and the position and traveling direction of the own vehicle (moving body), the target object, that is, the road guidance point where route guidance should be performed, is specified and its name and position are acquired (step ST71).
- otherwise (NO in step ST70), the route guidance expression determined in step ST65 to directly indicate the traveling direction is specified as the specific route guidance expression (step ST68).
- the voice acquisition unit 1 acquires the voice data (step ST61), and the voice recognition unit 2 obtains the recognition result “turn right where that red car turned” (step ST62).
- the route guidance expression extraction unit 4 refers to the route guidance expression storage unit 3 as shown in FIG. 13 and extracts the character strings “that”, “red”, “car”, “turn right”, and the like as route guidance expressions (step ST63).
- in step ST69, surrounding objects are recognized by the external object recognition unit 11, such as a camera, and their features are output (step ST69). Further, it is determined whether the route guidance expression extracted in step ST63 matches an object recognized by the external object recognition unit 11 in step ST69 (step ST70).
- in step ST69, the image acquired by the camera is analyzed by a known technique, and the object ahead of the own vehicle (moving body) is recognized as a “car” whose color is “red”.
- a route guidance expression “car” representing a landmark and a route guidance expression “red” representing additional information are extracted.
- the “red car” in the utterance is identified as the car ahead of the own vehicle (moving body) captured by the camera; the distance to that red car (object) is acquired by a known method, and the position where the red car turned (the road guidance point where route guidance should be performed) is acquired (step ST71). Then, the name of the road guidance point and the route guidance expression “turn right” indicating the traveling direction are specified as the specific route guidance expression (step ST72), and visual presentation content such as the “right-arrow graphic data” is displayed (steps ST73 to ST78).
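The matching of steps ST69 to ST71 can be sketched as follows. This is an illustrative Python sketch under stated assumptions: the dictionary layout of the recognizer output, the field names (`kind`, `attributes`, `distance_m`), and the matching rule are all hypothetical, not the patent's actual interface.

```python
# Hedged sketch of steps ST69-ST71: check whether the route guidance
# expressions extracted from the utterance match an object recognized
# by the external object recognition unit (e.g. a camera).
def matches_recognized_object(expressions, recognized):
    """recognized: dict describing the object's kind and attributes."""
    kind_ok = recognized["kind"] in expressions           # e.g. "car"
    attrs_ok = all(a in expressions for a in recognized["attributes"])
    return kind_ok and attrs_ok

utterance_expressions = {"that", "red", "car", "turn right"}
camera_output = {"kind": "car", "attributes": ["red"], "distance_m": 35.0}

if matches_recognized_object(utterance_expressions, camera_output):
    # ST71: combine the recognized distance with map data and the
    # vehicle position to locate the road guidance point.
    print(f"guide point ~{camera_output['distance_m']} m ahead")
```

A white car would fail the attribute check, corresponding to the NO branch of step ST70, in which case only the direction expression is used (step ST68).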
- although the passenger's utterance content is always recognized here, for example the user may press a voice recognition button when starting route guidance, so that voice recognition is performed only while the button is pressed.
- the user may be able to set whether to always recognize or recognize only for a predetermined period.
- the user may be able to set whether to use the function of recognizing surrounding (external) objects in the fourth embodiment.
- as described above, according to the fourth embodiment, even when the route guidance expression spoken by a speaker such as a passenger relates to a surrounding (external) object, the instruction content is interpreted and presented; it is thus possible to cope with various forms of route guidance by the speaker and to prevent the vehicle from moving in a direction the speaker did not intend.
- FIG. 15 is a block diagram showing an example of a navigation apparatus according to Embodiment 5 of the present invention. Note that the same components as those described in the first to fourth embodiments are denoted by the same reference numerals, and redundant description is omitted.
- a gesture recognition unit 12 is further provided, and when the information given by a speaker such as a passenger includes a gesture, the route guidance expression interpretation unit 7 interprets the route guidance expression extracted by the route guidance expression extraction unit 4 based on the gesture recognition result of the gesture recognition unit 12, and specifies the specific route guidance expression indicating the traveling direction intended by the passenger.
- FIG. 16 is a flowchart showing the operation of the navigation device according to the fifth embodiment.
- since the processing of steps ST81 to ST83 and steps ST87 to ST90 is the same as steps ST01 to ST03 and steps ST08 to ST11 in the flowchart of FIG. 4 in the first embodiment, its description is omitted.
- the gesture recognition unit 12 recognizes, for example, a gesture indicating the direction spoken of by the passenger, and specifies and outputs that direction (step ST84). Since methods for recognizing a gesture and specifying the indicated direction are known, their description is omitted here.
- it is then determined whether the route guidance expression extracted in step ST83 includes an expression representing the traveling direction, or whether the gesture recognized in step ST84 represents the traveling direction (step ST85). If no route guidance expression represents the traveling direction and the gesture does not represent it either (NO in step ST85), the process ends.
- otherwise (YES in step ST85), the route guidance expression interpretation unit 7 interprets the route guidance expression representing the traveling direction extracted by the route guidance expression extraction unit 4 in step ST83, based on the gesture output from the gesture recognition unit 12 in step ST84 and the traveling direction of the own vehicle (moving body), and specifies a specific route guidance expression (step ST86).
- the voice acquisition unit 1 acquires the voice data (step ST81), and the voice recognition unit 2 obtains the recognition result “turn this way” (step ST82). Then, the route guidance expression extraction unit 4 refers to the route guidance expression storage unit 3 as shown in FIG. 2 and extracts the character string “this way” as a route guidance expression (step ST83). Moreover, the gesture recognition unit 12 recognizes the rightward pointing gesture made with a finger (step ST84).
- for example, when the azimuth obtained from the direction indicated by the gesture is 90 degrees and the azimuth obtained from the traveling direction of the own vehicle (moving body) acquired by the own vehicle position acquisition unit (position acquisition unit) 6 is 45 degrees, the indirect route guidance expression “this way” representing the traveling direction is identified as “right”. That is, the route guidance expression interpretation unit 7 specifies the specific route guidance expression “right” from the character string indicating the traveling direction, the rightward gesture, and the traveling direction of the own vehicle (moving body) (steps ST85 to ST86).
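The azimuth comparison in this example can be sketched as follows. This is an illustrative Python sketch: the 90-degree sector boundaries used to map a relative bearing onto a direction word are an assumption, not part of the patent.

```python
# Sketch of the Embodiment 5 interpretation (steps ST84-ST86):
# convert a pointing gesture's azimuth and the vehicle's heading into
# a concrete direction word for an abstract expression like "this way".
def resolve_direction(gesture_azimuth_deg, heading_deg):
    relative = (gesture_azimuth_deg - heading_deg) % 360
    if relative < 45 or relative >= 315:
        return "straight"
    if relative < 135:
        return "right"
    if relative < 225:
        return "back"
    return "left"

# The gesture points at azimuth 90 deg while the vehicle heads at
# 45 deg, so the relative bearing is 45 deg and the abstract
# "this way" is identified as the specific "right":
print(resolve_direction(90, 45))
```

The modulo keeps the relative bearing in [0, 360), so the comparison works even when the gesture azimuth is numerically smaller than the heading.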
- although the passenger's utterance content is always recognized here, for example the user may press a voice recognition button when starting route guidance, so that voice recognition is performed only while the button is pressed.
- the user may be able to set whether to always recognize or recognize only for a predetermined period. Further, the user may be able to set whether or not to use the gesture recognition function in the fifth embodiment.
- as described above, according to the fifth embodiment, even when the route guidance expression spoken by a speaker such as a passenger is abstract and its specific content cannot be interpreted from the map data and the position of the own vehicle (moving body) alone, a specific route guidance expression can be identified by associating it with the gesture recognition result; it is thus possible to prevent the vehicle from moving in a direction the speaker did not intend.
- FIG. 17 is a block diagram showing an example of a navigation apparatus according to Embodiment 6 of the present invention. Note that the same components as those described in the first to fifth embodiments are denoted by the same reference numerals, and redundant description is omitted.
- a contradiction determination unit 13 is further provided; when the route guidance expression extracted from the utterance content of a speaker such as the passenger and the gesture recognition result contradict each other, the content of the route guidance expression is specified based on the determination made by the route guidance expression interpretation unit 7.
- FIG. 18 is a flowchart showing the operation of the navigation device according to the sixth embodiment.
- since the processing of steps ST101 to ST105 and steps ST109 to ST112 is the same as steps ST81 to ST85 and steps ST87 to ST90 in the flowchart of FIG. 16 in the fifth embodiment, its description is omitted.
- it is determined whether the route guidance expression extracted in step ST103 matches the gesture recognition result of step ST104 (step ST106). If they match (YES in step ST106), the route guidance expression representing the traveling direction (equivalently, the gesture representing the traveling direction) is specified as the specific route guidance expression (step ST107).
- if they do not match (NO in step ST106), a specific route guidance expression is identified according to a predetermined rule in the route guidance expression interpretation unit 7 (step ST108).
- the predetermined rule in the route guidance expression interpretation unit 7 may be set in advance based, for example, on the statistic that a passenger misspeaking is more common than pointing in the wrong direction with a gesture: for instance, the rule “when the results contradict, adopt the gesture recognition result”, or the rule “if a route is set, adopt whichever result matches the set route (course)”.
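The contradiction handling of steps ST106 to ST108 can be sketched as follows. This is an illustrative Python sketch; the rule names (`prefer_gesture`, `prefer_route`) and the function signature are assumptions introduced only to show the predetermined-rule mechanism.

```python
# Illustrative sketch of steps ST106-ST108: when the spoken direction
# and the gesture direction disagree, pick one according to a
# predetermined rule configured in advance.
def resolve_conflict(spoken, gesture, rule="prefer_gesture", route_turn=None):
    if spoken == gesture:                  # ST106: no contradiction
        return spoken                      # ST107
    if rule == "prefer_gesture":           # misspeaking assumed more common
        return gesture                     # ST108
    if rule == "prefer_route" and route_turn in (spoken, gesture):
        return route_turn                  # adopt whichever matches the route
    return gesture

# The utterance says "right" but the finger points left, so the
# gesture is adopted under the default rule:
print(resolve_conflict("right", "left"))
```

With `rule="prefer_route"` and a set route turning right, the same inputs would instead resolve to “right”, matching the second example rule in the text.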
- the voice acquisition unit 1 acquires the voice data (step ST101), and the voice recognition unit 2 obtains the recognition result “turn right at the next intersection” (step ST102). Then, the route guidance expression extraction unit 4 refers to the route guidance expression storage unit 3 as shown in FIG. 2 and extracts the character strings “next”, “intersection”, and “right” as route guidance expressions (step ST103). In addition, the gesture recognition unit 12 recognizes a leftward pointing gesture made with a finger (step ST104). In this case, it is determined whether the route guidance expression “right” extracted in step ST103 matches the gesture “left” recognized in step ST104 (steps ST105 to ST106).
- since they do not match, the route guidance expression interpretation unit 7 specifies a specific route guidance expression according to a predetermined rule, such as adopting the gesture recognition result.
- as described above, according to the sixth embodiment, even when the guidance expression spoken by a speaker such as a passenger contradicts the recognition result of that passenger's gesture, one of the two is adopted according to the predetermined rule in the route guidance expression interpretation unit 7; it is thus possible to prevent the vehicle from going in the wrong direction due to the passenger misspeaking or gesturing incorrectly.
- here, the navigation device of the first embodiment has been described as further including the gesture recognition unit 12 and the contradiction determination unit 13, but it goes without saying that the navigation device of the second embodiment may likewise be provided with the gesture recognition unit 12 and the contradiction determination unit 13.
- FIG. 19 is a block diagram showing an example of a navigation apparatus according to Embodiment 7 of the present invention. Note that the same components as those described in the first to sixth embodiments are denoted by the same reference numerals, and redundant description is omitted.
- a route guidance expression eligibility determination unit 14 is further provided, and the route guidance expression is presented after determining whether its presentation content is eligible.
- the route guidance expression eligibility determination unit 14 determines eligibility as to whether or not to present the presentation content.
- the eligibility concerns, for example, whether the vehicle can pass in the direction designated by the speaker, and whether traveling in the designated direction would deviate from the set route (course).
- FIG. 20 is a flowchart showing the operation of the navigation device according to the seventh embodiment.
- since the processes of steps ST141 to ST147 are the same as steps ST01 to ST07 in the flowchart of FIG. 4 in the first embodiment, their description is omitted.
- the route guidance expression eligibility determination unit 14 determines whether it is appropriate to present the route guidance expression, based on the route guidance expression extracted by the route guidance expression extraction unit 4 in step ST143, the own vehicle position (position of the moving body) acquired by the own vehicle position acquisition unit (position acquisition unit) 6, and the map data stored in the map data storage unit 5 (step ST148).
- when it is determined in step ST148 that the presentation is eligible (YES in step ST148), the presentation content corresponding to the route guidance expression is retrieved in the same manner as in steps ST08 to ST11 of FIG. 4 in the first embodiment (step ST149); when the corresponding presentation content is found, it is acquired and output (steps ST150 to ST152). When it is determined in step ST148 that the presentation is not eligible (NO in step ST148), the process ends.
- the voice acquisition unit 1 acquires the voice data (step ST141), and the voice recognition unit 2 obtains the recognition result “turn right at the next intersection” (step ST142).
- the route guidance expression extraction unit 4 refers to the route guidance expression storage unit 3 as shown in FIG. 2 and extracts the character strings “next intersection” and “turn right” as route guidance expressions (step ST143).
- the route guidance expression interpretation unit 7 specifies the route guidance expression “turn right” as a specific route guidance expression (step ST147).
- the “next intersection” is then identified, and the road information for a right turn at that intersection is checked using the map data; if, for example, the road is one-way so that a right turn is impossible, it is determined that the route guidance expression is not eligible (NO in step ST148) and the process ends. Similarly, when a right turn at the intersection would deviate from the set route (course), the route guidance expression is likewise determined not to be eligible. On the other hand, if the route guidance expression is determined to be eligible (YES in step ST148), the same processing as steps ST08 to ST11 of the first embodiment is performed (steps ST149 to ST152): visual presentation content such as the “right-arrow graphic data”, the character string data “turn right”, “make the road color red”, or “make the road width XX dots”, or the auditory presentation content, the voice data “turn right”, is output.
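The eligibility check of step ST148 can be sketched as follows. This is an illustrative Python sketch under stated assumptions: the per-direction passability dictionary and the function name `is_eligible` are hypothetical stand-ins for the map-data lookup, not the patent's actual road-data format.

```python
# Hedged sketch of the step ST148 eligibility check: a guidance
# expression is presented only if the indicated turn is passable
# (e.g. not against a one-way street) and does not leave the set route.
def is_eligible(turn, road_info, set_route_turn=None):
    """road_info: per-direction passability at the guidance point."""
    if not road_info.get(turn, False):        # e.g. one-way street
        return False
    if set_route_turn is not None and turn != set_route_turn:
        return False                          # would deviate from the route
    return True

# A right turn is blocked by a one-way street at this intersection:
road = {"right": False, "left": True, "straight": True}
print(is_eligible("right", road))                         # not eligible
print(is_eligible("left", road, set_route_turn="left"))   # eligible
```

Both conditions mirror the two example criteria in the text: passability first, then consistency with the set route when one exists.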
- although the passenger's utterance content is always recognized here, for example the user may press a voice recognition button when starting route guidance, so that voice recognition is performed only while the button is pressed.
- the user may be able to set whether to always recognize or recognize only for a predetermined period.
- the user may be able to set whether or not to use the function for determining route guide expression eligibility in the seventh embodiment.
- as described above, according to the seventh embodiment, presentation content is not presented on the basis of an ineligible utterance by the speaker, so it is possible to prevent the driver from following an incorrect route or committing a traffic violation.
- Embodiment 8. Since the block diagram showing an example of the navigation device according to the eighth embodiment of the present invention is the same as the block diagram shown in FIG. 19 in the seventh embodiment, its illustration and description are omitted.
- when the route guidance expression eligibility determination unit 14 determines that the route guidance expression is not eligible, presentation content indicating that the route guidance expression is not eligible is presented.
- the road guide expression presentation content acquisition unit 9 acquires, from the road guide expression presentation content storage unit 8, presentation content indicating that the road guide expression is not eligible.
- the route guidance expression presentation content storage unit 8 stores, as the presentation content corresponding to the case where the route guidance expression eligibility determination unit 14 determines that the expression is not eligible, for example, graphic data “×”, the character string “cannot pass”, and the character string “departs from the route”.
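The Embodiment 8 behavior, where a failed check yields stored "not eligible" content instead of silence, can be sketched as follows. The sketch is illustrative: the dictionary keys (`impassable`, `off_route`), the function name, and the output strings are assumptions.

```python
# Sketch of Embodiment 8 (steps ST168, ST173, ST172): when the
# eligibility check fails, present stored "not eligible" content
# rather than ending the process silently as in Embodiment 7.
INELIGIBLE_CONTENT = {
    "impassable": ['graphic "x"', 'text "cannot pass"'],
    "off_route": ['text "departs from the route"'],
}

def present_guidance(turn, eligible, reason=None):
    if eligible:
        return [f'display right-arrow for "{turn}"']   # ST169-ST172
    return INELIGIBLE_CONTENT.get(reason, [])          # ST173, then ST172

# A right turn blocked by a one-way street yields warning content:
print(present_guidance("turn right", False, "impassable"))
```

The design difference from Embodiment 7 is only the NO branch: the same output step (ST172) is reused, with the warning content substituted for the guidance content.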
- FIG. 21 is a flowchart showing the operation of the navigation device according to the eighth embodiment. Since the processes from step ST161 to ST172 are substantially the same as steps ST141 to ST152 in the flowchart of FIG. 20 in the seventh embodiment, the description of the same processes is omitted.
- When the route guidance expression eligibility determination unit 14 determines in step ST168 whether it is eligible to present the route guidance expression and determines that it is eligible (YES in step ST168), the presentation content corresponding to the route guidance expression is searched for (step ST169), and when the corresponding presentation content is found, it is acquired and output (steps ST170 to ST172).
- When it is determined in step ST168 that the expression is not eligible (NO in step ST168), Embodiment 7 simply ended the processing; in Embodiment 8, however, the route guidance expression presentation content acquisition unit 9 acquires presentation content indicating that the route guidance expression is not eligible (step ST173) and outputs it (step ST172).
- For example, suppose the passenger utters "turn right at the next intersection" at a point where a right turn is not allowed. The voice acquisition unit 1 acquires the voice data (step ST161), and the speech recognition unit 2 obtains the recognition result "turn right at the next intersection" (step ST162). The route guidance expression extraction unit 4 refers to the route guidance expression storage unit 3 shown in FIG. 2 and extracts the character strings "next intersection" and "turn right" as route guidance expressions (step ST163). The route guidance expression interpretation unit 7 identifies the route guidance expression "turn right" as the specific route guidance expression (step ST167). Since the expression is determined not to be eligible, the route guidance expression presentation content acquisition unit 9 acquires, for example, the graphic data "×", the character string data "cannot pass", or the character string data "departs from the route" from the route guidance expression presentation content storage unit 8 (step ST173) and outputs it (step ST172).
- In this embodiment as well, the utterances of the passenger are recognized at all times. Alternatively, the user may press a voice recognition button, for example when beginning route guidance, so that speech recognition is performed only while the button is pressed; the user may also be allowed to set whether recognition is performed constantly or only for a predetermined period.
- The user may also be allowed to set whether or not to use the route guidance expression eligibility determination function in Embodiment 8.
- As described above, according to Embodiment 8, it is determined whether the route guidance expression uttered by a speaker such as a passenger is eligible, and when it is not, that fact is presented. Therefore, in addition to the effects of Embodiment 1, not only is presentation content not presented based on an ineligible utterance, but the driver is also informed that the recognized utterance was ineligible, so that following an incorrect route or committing a traffic violation can be prevented.
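Embodiment 8's behaviour, outputting an explanatory notice instead of silently dropping an ineligible instruction, can be sketched as follows. The reason keys and function names are hypothetical; only the notice content ("×", "cannot pass", "departs from the route") comes from the text above.

```python
# Hypothetical sketch of Embodiment 8's fallback (step ST173): present
# content saying why the instruction was rejected, rather than nothing.

NOTICES = {
    "impassable": ("×", "cannot pass"),
    "off_route":  ("×", "departs from the route"),
}

def presentation_for(expression, reason=None):
    """reason is None when the expression is eligible,
    otherwise a key describing why it is not."""
    if reason is None:
        return ("→", expression)  # normal presentation content (ST169-ST172)
    return NOTICES[reason]        # ineligibility notice (ST173, then ST172)
```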
- Embodiment 9. FIG. 22 is a block diagram showing an example of a navigation device according to Embodiment 9 of the present invention. The same components as those described in Embodiments 1 to 8 are denoted by the same reference numerals, and redundant description is omitted.
- In Embodiment 9, a route resetting unit 15 is further provided. When the route guidance expression eligibility determination unit 14 determines that the presentation content (route guidance expression) is not eligible because it deviates from the set route, the route to the destination is reset with the deviating route as a waypoint. That is, when the route guidance expression eligibility determination unit 14 determines that the route guidance expression is not eligible because it deviates from the set route, the route resetting unit 15 resets the route to the destination with the deviating route as a waypoint.
- FIG. 23 is a flowchart showing the operation of the navigation device according to Embodiment 9. Since the processing from steps ST181 to ST192 is almost the same as steps ST141 to ST152 in the flowchart of FIG. 20 in Embodiment 7, the description of the identical processing is omitted. In Embodiment 9, when the route guidance expression eligibility determination unit 14 determines in step ST188 that it is eligible to present the route guidance expression (YES in step ST188), as in steps ST149 to ST152 of FIG. 20 in Embodiment 7, the presentation content corresponding to the route guidance expression is searched for (step ST189), and when the corresponding presentation content is found, it is acquired and output (steps ST190 to ST192).
- When it is determined in step ST188 that the route guidance expression is not eligible (NO in step ST188), Embodiment 7 simply ended the processing; in Embodiment 9, it is further determined whether the expression was found ineligible because it deviates from the set route (step ST193). If so (YES in step ST193), the route resetting unit 15 resets the route to the destination so as to pass through the deviating route (step ST194); if the expression was found ineligible for another reason (NO in step ST193), the processing ends.
- For example, suppose the passenger utters "turn right at the next intersection". The voice acquisition unit 1 acquires the voice data (step ST181), and the speech recognition unit 2 obtains the recognition result "turn right at the next intersection" (step ST182). The route guidance expression extraction unit 4 refers to the route guidance expression storage unit 3 shown in FIG. 2 and extracts the character strings "next intersection" and "turn right" as route guidance expressions (step ST183). The route guidance expression interpretation unit 7 identifies "turn right" as the specific route guidance expression (step ST187).
- In step ST188, the "next intersection" is identified based on the position of the own vehicle and the map data, and the road information for a right turn at that intersection is checked using the map data. If, for example, the road is one-way and cannot be entered, the route guidance expression is determined not to be eligible (NO in step ST188). It is then determined whether the reason for ineligibility is deviation from the set route (step ST193); since it is not, the processing ends. On the other hand, if at the determination in step ST188 a right turn at the intersection would deviate from the set route, the route guidance expression is likewise determined not to be eligible (NO in step ST188); since the reason is deviation from the set route (YES in step ST193), the route resetting unit 15 resets the route to the destination so as to pass through the deviating route (step ST194).
- In this embodiment as well, the utterances of the passenger are recognized at all times. Alternatively, the user may press a voice recognition button, for example when beginning route guidance, so that speech recognition is performed only while the button is pressed; the user may also be allowed to set whether recognition is performed constantly or only for a predetermined period.
- The user may also be allowed to set whether or not to use the route resetting function in Embodiment 9.
- As described above, according to Embodiment 9, it is determined whether the route guidance expression uttered by a speaker such as a passenger is eligible, and when the expression is found ineligible because it deviates from the set route, the route to the destination is reset so as to pass through the deviating route. Therefore, in addition to the effects of Embodiment 1, when the speaker gives route guidance with the intention of changing the route, that intention is reflected rather than ignored.
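The waypoint-based route resetting of step ST194 can be sketched as follows, assuming a trivial stand-in `plan_leg` for a real route search; all names here are illustrative, not from the patent.

```python
# Hypothetical sketch of step ST194: the road the speaker indicated
# becomes a waypoint, and the route to the destination is recomputed
# through it.

def plan_leg(origin, target):
    return [origin, target]  # stand-in: a direct leg from origin to target

def reset_route(current_position, deviating_road, destination):
    to_waypoint = plan_leg(current_position, deviating_road)
    onward = plan_leg(deviating_road, destination)
    return to_waypoint + onward[1:]  # join legs without repeating the waypoint
```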
- In Embodiment 9, the navigation device of Embodiment 1 has been described as further including the route guidance expression eligibility determination unit 14 and the route resetting unit 15; however, the navigation device of Embodiment 2 may likewise include the route guidance expression eligibility determination unit 14 and the route resetting unit 15.
- Embodiment 10. FIG. 24 is a block diagram showing an example of a navigation device according to Embodiment 10 of the present invention. The same components as those described in Embodiments 1 to 9 are denoted by the same reference numerals, and redundant description is omitted.
- In Embodiment 10, a cancellation/correction expression storage unit 16 and a cancellation/correction expression extraction unit 17 are further provided. When an expression canceling the presentation of the presentation content is extracted, the presentation content is not output; when an expression correcting the presentation content is extracted, the corrected presentation content is output.
- The cancellation/correction expression storage unit 16 stores expressions that a person giving route guidance is assumed to use upon misspeaking or realizing a mistake.
- FIG. 25 is a diagram showing an example of the cancellation/correction expression storage unit 16. As shown in this figure, the cancellation/correction expression storage unit 16 stores, for example, cancellation expressions such as "that's wrong", "that was wrong", and "I made a mistake", and correction expressions such as "not ..." and "stop".
- The cancellation/correction expression extraction unit 17 performs morphological analysis with reference to the cancellation/correction expression storage unit 16 and extracts cancellation expressions and correction expressions from the character string of the speech recognition result of the speech recognition unit 2. When a correction expression is extracted, the unit also refers to the route guidance expression storage unit 3 and extracts the route guidance expression that follows the expression, that is, the corrected route guidance expression.
- FIG. 26 is a flowchart showing the operation of the navigation device according to the tenth embodiment.
- The processing in steps ST201 and ST202 is the same as steps ST01 and ST02 in the flowchart of FIG. 4 in Embodiment 1.
- In Embodiment 10, from the recognition result of the speech recognition unit 2 in step ST202, the route guidance expression extraction unit 4 extracts route guidance expressions while referring to the route guidance expression storage unit 3, and the cancellation/correction expression extraction unit 17 extracts cancellation/correction expressions while referring to the cancellation/correction expression storage unit 16.
- If no cancellation/correction expression is extracted (NO in step ST203), this flowchart ends, and the route guidance expression extraction unit 4 performs the processing of extracting and presenting the route guidance expression as in the other embodiments (illustration and description are omitted here).
- If a cancellation/correction expression stored in the cancellation/correction expression storage unit 16 is extracted by the cancellation/correction expression extraction unit 17 (YES in step ST203) and the extracted expression is a cancellation expression such as "I made a mistake" (YES in step ST204), the cancellation/correction expression extraction unit 17 notifies the presentation control unit 10 to cancel the display of the visual presentation content, if any. That is, if the visual presentation content has already been displayed (YES in step ST205), the presentation control unit 10 stops displaying it (step ST206); if it has not yet been displayed (NO in step ST205), the processing ends without displaying anything.
- If the extracted expression is a correction expression (NO in step ST204) and the visual presentation content has already been displayed (YES in step ST207), the presentation control unit 10 stops displaying it (step ST208); if it has not yet been displayed (NO in step ST207), the processing proceeds directly to step ST209.
- In step ST209, the cancellation/correction expression extraction unit 17 refers to the route guidance expression storage unit 3 and extracts the route guidance expression that follows the correction expression. Based on that route guidance expression, the route guidance expression presentation content acquisition unit 9 acquires the corresponding presentation content (step ST210), and the presentation control output unit 20 outputs it (step ST211).
- For example, suppose the passenger utters "turn right at the next intersection" and then "oh, that's wrong". The voice acquisition unit 1 acquires the voice data (step ST201), and the speech recognition unit 2 obtains the recognition results "turn right at the next intersection" and "oh, that's wrong" (step ST202). The cancellation/correction expression extraction unit 17 refers to the cancellation/correction expression storage unit 16 and extracts the character string "wrong" as a cancellation expression. That is, a cancellation/correction expression is extracted from the speech recognition result (YES in step ST203), and since the extracted expression is a cancellation expression (YES in step ST204), if the visual presentation content indicating "turn right" has already been displayed (YES in step ST205), its display is stopped (step ST206). If the visual presentation content has not yet been displayed (NO in step ST205), the processing ends without displaying anything.
- FIG. 27 is a diagram showing an example of screen transition when a cancellation expression is extracted.
- FIG. 27(a) shows a display screen in which the own vehicle 32 is displayed as a triangle on the navigation screen 31, together with the state in which the passenger has uttered "turn right at the next intersection"; the utterance content is indicated by a balloon 33.
- FIG. 27(b) shows, as a result of the processing by which the navigation device outputs the presentation content described in Embodiment 2, the visual presentation content on the same navigation screen 31 as FIG. 27(a): the "right-arrow graphic data" 34 is displayed on the road to be traveled after the right turn. Here, the passenger has then uttered "oh, that's wrong"; this utterance content is also indicated by a balloon 33.
- As a result, the processing of steps ST204 to ST206 is performed, the display of the "right-arrow graphic data" 34 is canceled (see FIG. 27(c)), and the state shown in FIG. 27(d) is obtained.
- If the utterance by the passenger is a continuous one such as "turn right at the next intersection. Oh, that was wrong" and the cancellation expression "wrong" is extracted before the visual presentation content shown in FIG. 27(b) is displayed, the processing ends without displaying anything. That is, the display transitions directly from FIG. 27(a) to the state of FIG. 27(d), and nothing changes on the screen.
- On the other hand, suppose the passenger utters "turn right at the next intersection... no, not right, turn left". The cancellation/correction expression extraction unit 17 extracts the correction expression "not". That is, since this is the case of NO in step ST204, "turn left" is extracted with reference to the route guidance expression storage unit 3 as the route guidance expression following "not" (step ST209). Then, the presentation content corresponding to "turn left" is acquired with reference to the route guidance expression presentation content storage unit 8 (step ST210) and is displayed or output by voice (step ST211).
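The cancellation/correction flow of steps ST203 to ST211 can be condensed into a sketch like the following; the word lists loosely follow FIG. 25, while the token scan and function names are simplifying assumptions in place of the morphological analysis the patent describes.

```python
# Hypothetical sketch of the cancellation/correction handling
# (steps ST203 to ST211), reduced to a single decision function.

CANCEL = {"wrong", "mistake"}
CORRECT = {"not"}
GUIDANCE = {"turn right", "turn left"}

def handle(tokens, displayed):
    """Return (action, payload): 'cancel' removes the current display,
    'replace' shows the corrected expression, (None, None) does nothing."""
    for i, tok in enumerate(tokens):
        if tok in CANCEL:                       # cancellation expression (ST204 YES)
            return ("cancel", None) if displayed else (None, None)
        if tok in CORRECT:                      # correction expression (ST204 NO)
            for follow in tokens[i + 1:]:       # ST209: expression after "not"
                if follow in GUIDANCE:
                    return ("replace", follow)  # ST210-ST211
    return (None, None)
```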
- FIG. 28 is a diagram showing an example of screen transition when a corrected expression is extracted.
- FIG. 28(a) shows a display screen in which the own vehicle 32 is displayed as a triangle on the navigation screen 31, together with the state in which the passenger has uttered "turn right at the next intersection... no, not right, left"; the utterance content is indicated by a balloon 33.
- FIG. 28(b) shows, as a result of the processing by which the navigation device outputs the presentation content described in Embodiment 2, the visual presentation content on the same navigation screen 31 as FIG. 28(a): the "right-arrow graphic data" 34 is displayed on the road to be traveled after the right turn.
- In this embodiment as well, the utterances of the passenger are recognized at all times. Alternatively, the user may press a voice recognition button, for example when beginning route guidance, so that speech recognition is performed only while the button is pressed; the user may also be allowed to set whether recognition is performed constantly or only for a predetermined period.
- In Embodiment 10, when the visual presentation content to be canceled has already been displayed, it is canceled (deleted); however, the user may be allowed to set whether the displayed visual presentation content is canceled or whether it is displayed together with visual presentation content indicating the cancellation.
- The user may also be allowed to set whether or not to use the cancellation/correction expression extraction function in Embodiment 10.
- As described above, according to Embodiment 10, when a cancellation/correction expression is included in the route guidance utterance of a speaker such as a passenger, it is also extracted. Therefore, in addition to the effects of Embodiment 1, by taking into account that a speaker such as a passenger has given mistaken guidance and canceling or correcting the visual presentation content, the driver can be prevented from following the wrong route.
- In Embodiment 10, the navigation device of Embodiment 1 has been described as further including the cancellation/correction expression storage unit 16 and the cancellation/correction expression extraction unit 17; however, the navigation device of Embodiment 2 may likewise include the cancellation/correction expression storage unit 16 and the cancellation/correction expression extraction unit 17.
- In the embodiments above, the navigation device has been described for a vehicle; however, the navigation device of the present invention is not limited to vehicles and may be a navigation device for a moving body including a person, a vehicle, a railway, a ship, or an aircraft. In particular, the present invention relates to a navigation device suitable for being carried into or mounted on a vehicle, and it can be applied to any device, such as a portable navigation device, that can perform navigation through voice interaction between a user and the device.
- the navigation device of the present invention can be applied to an in-vehicle navigation device or a portable navigation device capable of performing navigation by voice dialogue between a user and the device.
Description
However, even though a navigation device can provide guidance at predetermined points set in advance, it has not been able to present to the driver, as guidance content of the navigation device, the content of the route guidance that a passenger gives to the driver while driving.
To address this problem, for example, Patent Document 1 describes a speech recognition device that constantly recognizes speech and displays the recognition result on a screen as text, as it is.
Embodiment 1.
In a navigation device that provides route guidance based on map data and the position of the own vehicle (moving body), the present invention extracts only the route guidance expressions that a passenger utters to the driver, interprets abstract route guidance expressions to make them concrete, and presents the concretized content so that the driver can understand it intuitively.
FIG. 1 is a block diagram showing an example of a navigation device according to Embodiment 1 of the present invention. This navigation device comprises a voice acquisition unit 1, a speech recognition unit 2, a route guidance expression storage unit 3, a route guidance expression extraction unit 4, a map data storage unit 5, an own-vehicle position acquisition unit (position acquisition unit) 6, a route guidance expression interpretation unit 7, a route guidance expression presentation content storage unit 8, a route guidance expression presentation content acquisition unit 9, a presentation control unit 10, a display unit 21, and a voice output unit 22. The presentation control unit 10, the display unit 21, and the voice output unit 22 constitute a presentation control output unit 20. Although not illustrated, the navigation device also includes a key input unit that acquires input signals from keys, a touch panel, or the like.
The speech recognition unit 2 has a recognition dictionary (not shown); from the voice data acquired by the voice acquisition unit 1, it detects the speech section corresponding to the content uttered by a speaker such as a passenger, extracts features, and performs speech recognition processing using the recognition dictionary based on those features. The speech recognition unit 2 may use a speech recognition server on a network.
FIG. 2 is a diagram showing an example of the route guidance expression storage unit 3. As shown in this figure, the route guidance expression storage unit 3 stores route guidance expressions such as demonstratives indicating a landmark at a guidance point where guidance should be given, e.g. "that", "the", "next", "at the end of the road", "100 meters ahead", and "200 meters ahead"; words representing landmarks, e.g. "intersection", "family restaurant", "car", and "traffic light"; direct expressions of the traveling direction, e.g. "turn right", "right", "left", and "west"; and indirect expressions of the traveling direction, e.g. "along the road" and "this way".
The route guidance expression extraction unit 4 performs morphological analysis with reference to the route guidance expression storage unit 3 and extracts route guidance expressions from the character string of the speech recognition result of the speech recognition unit 2.
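The dictionary-based extraction performed by the route guidance expression extraction unit 4 can be sketched as follows; a longest-match substring search stands in for the morphological analysis, and the English word list merely mirrors the categories of FIG. 2. All names are illustrative assumptions.

```python
# Hypothetical sketch of the route guidance expression extraction unit 4.
# Longest expressions are matched first so that "next intersection" is not
# re-counted as the bare landmark "intersection".

EXPRESSIONS = [
    "next intersection", "100 meters ahead",     # demonstratives / distances
    "intersection", "traffic light",             # landmarks
    "turn right", "turn left", "right", "left",  # direction expressions
]

def extract(utterance):
    found, rest = [], utterance
    for expr in sorted(EXPRESSIONS, key=len, reverse=True):
        if expr in rest:
            found.append(expr)
            rest = rest.replace(expr, " ")  # avoid re-matching substrings
    return found
```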
The own-vehicle position acquisition unit (position acquisition unit) 6 acquires the current position (latitude and longitude) and traveling direction of the own vehicle (moving body) using information acquired from a GPS receiver, a gyroscope, or the like.
FIG. 3 is a diagram showing an example of the route guidance expression presentation content storage unit 8 when the presentation content is visual presentation content. As shown in this figure, for specific route guidance expressions such as "turn right", "to the right", and "rightward", for example, it stores right-arrow graphic data, character representations (character string data) such as "turn right" and "right", and information such as the color and thickness of the road on the map on which the right turn should be made. In FIG. 3, the road color and thickness are set the same for right turns, left turns, and the diagonally upper-right direction, but a different color or thickness may be used for each route guidance expression.
In the example shown in FIG. 3, corresponding presentation content is stored only for specific route guidance expressions representing the traveling direction; however, voice data (synthesized speech) may also be stored for every conceivable specific route guidance expression, such as intersection names and restaurant names.
Here, when the presentation content is auditory presentation content, synthesized speech is created in advance and stored in the route guidance expression presentation content storage unit 8; alternatively, the route guidance expression presentation content acquisition unit 9 may acquire the presentation content by creating synthesized speech based on the voice data stored in the route guidance expression presentation content storage unit 8. Since methods of generating synthesized speech from a character string are well known, their description is omitted here.
First, when there is some speech input, the voice acquisition unit 1 acquires the input voice, performs A/D conversion, and obtains it as voice data in, for example, PCM format (step ST01). Next, the speech recognition unit 2 recognizes the voice data acquired by the voice acquisition unit 1 (step ST02). Then, the route guidance expression extraction unit 4 extracts route guidance expressions from the recognition result of the speech recognition unit 2 with reference to the route guidance expression storage unit 3 (step ST03). Thereafter, the route guidance expression interpretation unit 7 interprets the extracted route guidance expressions and identifies a specific route guidance expression (steps ST04 to ST11).
On the other hand, if in step ST05 the route guidance expression directly represents the traveling direction, such as "turn right" (YES in step ST05), the route guidance expression "turn right" directly representing the traveling direction is identified as the specific route guidance expression (step ST07).
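The flow of steps ST01 to ST11 can be condensed into a short sketch, assuming the speech recognition of steps ST01 to ST02 has already produced text; every name here is illustrative, not from the patent.

```python
# Hypothetical condensation of the Embodiment 1 flow (steps ST01 to ST11),
# with each unit reduced to one small function.

def extract(text):                      # ST03: route guidance expressions
    return [w for w in ("turn right", "turn left") if w in text]

def interpret(expressions):             # ST04-ST07: pick the specific one
    return expressions[0] if expressions else None

CONTENTS = {"turn right": "→", "turn left": "←"}  # FIG. 3, much simplified

def pipeline(recognized_text):
    specific = interpret(extract(recognized_text))
    return CONTENTS.get(specific)       # ST08-ST11: acquire and output
```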
FIG. 5(b) shows a state in which the arrow is displayed exactly on the road; however, the display location may be anywhere on the screen, may be a fixed location, or may be a location where the road is not hidden. The display may also be on the windshield rather than on the display screen of the navigation device. Furthermore, when there are multiple output devices, such as a display screen and a windshield, a presentation device identification unit may further be provided to determine which output device performs the presentation.
The displayed graphics and characters may also be shown in a manner the user can recognize more easily, such as blinking, moving from right to left, or fading in, and the user may be allowed to set which display method is used.
When both visual and auditory presentation content are output, the auditory presentation content may be a non-verbal attention-calling sound (a sound effect for making the driver notice), such as a chime, instead of voice data corresponding to the specific route guidance expression. Also, when only visual presentation content is output, such non-verbal auditory presentation content may be output together with it.
FIG. 6 is a block diagram showing an example of a navigation device according to Embodiment 2 of the present invention. The same components as those described in Embodiment 1 are denoted by the same reference numerals, and redundant description is omitted. In Embodiment 2 described below, compared with Embodiment 1, the route guidance expression interpretation unit 7 not only identifies the specific route guidance expression, but also outputs the name of the landmark (guidance point) for which guidance is to be given to the presentation control unit 10, and, when the presentation content is visual presentation content, identifies the display position (position of the guidance point) and outputs it to the presentation control unit 10.
The processing of steps ST21 to ST26 is the same as steps ST01 to ST06 in the flowchart of FIG. 4 in Embodiment 1, and its description is omitted. In Embodiment 2, if in step ST25 the route guidance expression directly represents the traveling direction, such as "turn right" (YES in step ST25), it is further determined whether there is a route guidance expression indicating or representing a landmark (step ST27). If there is no such expression (NO in step ST27), the route guidance expression "turn right" directly representing the traveling direction is identified as the specific route guidance expression (step ST28). The subsequent processing of steps ST32 to ST37 will be described later.
Even when the presentation content is visual presentation content, if the processing of acquiring the position of the guidance point in step ST30 has not been performed, the display location may be anywhere on the screen, may be a fixed location, or may be a location where the road is not hidden. The display may also be on the windshield rather than on the display screen of the navigation device. Furthermore, when there are multiple output devices, such as a display screen and a windshield, a presentation device identification unit may further be provided to determine which output device performs the presentation.
The displayed graphics and characters may also be shown in a manner the user can recognize more easily, such as blinking, moving from right to left, or fading in, and the user may be allowed to set which display method is used.
When both visual and auditory presentation content are output, the auditory presentation content may be a non-verbal attention-calling sound (a sound effect for making the driver notice), such as a chime, instead of voice data corresponding to the specific route guidance expression. Also, when only visual presentation content is output, such non-verbal auditory presentation content may be output together with it.
When only auditory presentation content is output, the voice data of both the name of the guidance point identified by the route guidance expression interpretation unit 7 and the specific traveling direction may be output in succession, for example, "Honmachi 1-chome intersection, turn right".
Since the block diagram of the navigation device according to Embodiment 3 of the present invention is the same as FIG. 6 in Embodiment 2, its illustration and description are omitted. In Embodiment 3 described below, in the navigation device of Embodiment 2, the landmark indicated by a demonstrative contained in a route guidance expression is identified in consideration of the currently set route information.
The processing of steps ST41 to ST49 and ST51 to ST57, other than step ST50, is the same as steps ST21 to ST29 and ST31 to ST37 in the flowchart of FIG. 7 in Embodiment 2, and its description is omitted.
On the other hand, if the guidance point could not be identified in step ST49 (NO in step ST49), the route guidance expression determined in step ST45 to directly represent the traveling direction is identified as the specific route guidance expression (step ST48).
Even when the presentation content is visual presentation content, if the processing of acquiring the position of the guidance point in step ST50 has not been performed, the display location may be anywhere on the screen, may be a fixed location, or may be a location where the road is not hidden. The display may also be on the windshield rather than on the display screen of the navigation device. Furthermore, when there are multiple output devices, such as a display screen and a windshield, a presentation device identification unit may further be provided to determine which output device performs the presentation.
The displayed graphics and characters may also be shown in a manner the user can recognize more easily, such as blinking, moving from right to left, or fading in, and the user may be allowed to set which display method is used.
When both visual and auditory presentation content are output, the auditory presentation content may be a non-verbal attention-calling sound (a sound effect for making the driver notice), such as a chime, instead of voice data corresponding to the specific route guidance expression. Also, when only visual presentation content is output, such non-verbal auditory presentation content may be output together with it.
When only auditory presentation content is output, the voice data of both the name of the guidance point identified by the route guidance expression interpretation unit 7 and the specific traveling direction may be output in succession, for example, "Honmachi 2-chome intersection, turn right".
The user may also be allowed to set whether or not to use the function of considering the set route information in Embodiment 3.
FIG. 12 is a block diagram showing an example of a navigation device according to Embodiment 4 of the present invention. The same components as those described in Embodiments 1 to 3 are denoted by the same reference numerals, and redundant description is omitted. In Embodiment 4 described below, compared with Embodiment 3, an external object recognition unit 11 is shown, and the route guidance expression interpretation unit 7 interprets the content of route guidance expressions concerning landmarks extracted by the route guidance expression extraction unit 4, using information about objects output by the external object recognition unit 11.
FIG. 13 is a diagram showing an example of the route guidance expression storage unit 3 in Embodiment 4. As shown in this figure, this route guidance expression storage unit 3 holds additional information about surrounding objects, such as their color and appearance, e.g. "red", "white", "tall", "large", and "round".
The processing of steps ST61 to ST68 and ST72 to ST78, other than steps ST69 to ST71, is the same as steps ST21 to ST28 and ST31 to ST37 in the flowchart of FIG. 7 in Embodiment 2, and its description is omitted.
On the other hand, if it is determined in step ST70 that they do not match (NO in step ST70), the route guidance expression determined in step ST65 to directly represent the traveling direction is identified as the specific route guidance expression (step ST68).
The user may also be allowed to set whether or not to use the function of recognizing surrounding (external) objects in Embodiment 4.
FIG. 15 is a block diagram showing an example of a navigation device according to Embodiment 5 of the present invention. The same components as those described in Embodiments 1 to 4 are denoted by the same reference numerals, and redundant description is omitted. In Embodiment 5 described below, compared with Embodiment 1, a gesture recognition unit 12 is further provided; when the information from a speaker such as a passenger also includes a gesture, the route guidance expression interpretation unit 7 interprets the route guidance expression extracted by the route guidance expression extraction unit 4 based on the gesture recognition result of the gesture recognition unit 12 and identifies the specific route guidance expression representing the traveling direction intended by the passenger.
The processing of steps ST81 to ST83 and ST87 to ST90 is the same as steps ST01 to ST03 and ST08 to ST11 in the flowchart of FIG. 4 in Embodiment 1, and its description is omitted. In Embodiment 5, in parallel with the processing of steps ST81 to ST83, when there is a gesture input by the passenger, the gesture recognition unit 12 recognizes, for example, a gesture pointing in the direction the passenger spoke about, and identifies and outputs that direction (step ST84). Since methods of recognizing a gesture and identifying the indicated direction are well known, their description is omitted here.
The user may also be allowed to set whether or not to use the gesture recognition function in Embodiment 5.
FIG. 17 is a block diagram showing an example of a navigation device according to Embodiment 6 of the present invention. The same components as those described in Embodiments 1 to 5 are denoted by the same reference numerals, and redundant description is omitted. In Embodiment 6 described below, compared with Embodiment 5, a contradiction determination unit 13 is further provided; when the route guidance expression extracted from the utterance of a speaker such as a passenger contradicts the gesture recognition result, the content of the route guidance expression is identified based on the result determined by the route guidance expression interpretation unit 7.
The processing of steps ST101 to ST105 and ST109 to ST112 is the same as steps ST81 to ST85 and ST87 to ST90 in the flowchart of FIG. 16 in Embodiment 5, and its description is omitted. In Embodiment 6, it is determined whether the route guidance expression extracted in step ST103 matches the gesture recognition result obtained in step ST104 (step ST106). If they match (YES in step ST106), the route guidance expression representing the traveling direction (= the gesture representing the traveling direction) is identified as the specific route guidance expression (step ST107).
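The resolution rule of Embodiment 6 (cf. claims 4 to 6) can be sketched as follows, assuming the rule "prefer the direction matching the set route, otherwise adopt the gesture"; combining the rules of claims 5 and 6 this way is an illustrative choice, not the only rule the patent allows.

```python
# Hypothetical sketch of the contradiction determination and resolution
# performed when speech and gesture disagree.

def resolve(spoken, gestured, route_direction=None):
    if spoken == gestured:
        return spoken          # no contradiction (step ST107)
    if route_direction in (spoken, gestured):
        return route_direction # prefer the direction matching the set route
    return gestured            # otherwise adopt the gesture
```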
FIG. 19 is a block diagram showing an example of a navigation device according to Embodiment 7 of the present invention. The same components as those described in Embodiments 1 to 6 are denoted by the same reference numerals, and redundant description is omitted. In Embodiment 7 described below, compared with Embodiment 1, a route guidance expression eligibility determination unit 14 is further provided, and the route guidance expression is presented only after determining whether the presentation content is eligible.
The route guidance expression eligibility determination unit 14 determines the eligibility of whether the presentation content may be presented. Here, eligibility refers to, for example, whether the vehicle can travel in the direction indicated by the speaker, and whether traveling in that direction would deviate from the set route.
The processing of steps ST141 to ST147 is the same as steps ST01 to ST07 in the flowchart of FIG. 4 in Embodiment 1, and its description is omitted. In Embodiment 7, the route guidance expression eligibility determination unit 14 determines whether it is eligible to present the route guidance expression, based on the route guidance expression extracted by the route guidance expression extraction unit in step ST143, the own-vehicle position (position of the moving body) acquired by the own-vehicle position acquisition unit (position acquisition unit) 6, and the map data stored in the map data storage unit 5 (step ST148). If it is determined to be eligible (YES in step ST148), as in steps ST08 to ST11 of FIG. 4 in Embodiment 1, the presentation content corresponding to the route guidance expression is searched for (step ST149), and when the corresponding presentation content is found, it is acquired and output (steps ST150 to ST152).
On the other hand, if it is determined in step ST148 that there is no eligibility (NO in step ST148), the processing ends.
On the other hand, if the route guidance expression is determined to be eligible (YES in step ST148), the same processing as steps ST08 to ST11 of Embodiment 1 is performed (steps ST149 to ST152), and information such as the right-arrow graphic data, the character string data "turn right", "make the road color red", or "make the road thickness XX dots", or the auditory presentation content, namely the voice data "turn right", is output.
The user may also be allowed to set whether or not to use the route guidance expression eligibility determination function in Embodiment 7.
Since the block diagram showing an example of the navigation device according to Embodiment 8 of the present invention has the same configuration as the block diagram shown in FIG. 19 in Embodiment 7, its illustration and description are omitted. In Embodiment 8 described below, compared with Embodiment 7, when the route guidance expression eligibility determination unit 14 determines that the route guidance expression is not eligible, presentation content indicating that the route guidance expression is not eligible is presented.
In that case, the route guidance expression presentation content acquisition unit 9 acquires, from the route guidance expression presentation content storage unit 8, presentation content indicating that the route guidance expression is not eligible. Although illustration is omitted, the route guidance expression presentation content storage unit 8 stores, as presentation content corresponding to the case where the route guidance expression eligibility determination unit 14 determines that the route guidance expression is not eligible, for example, the graphic data "×", the character string "cannot pass", and the character string "departs from the route".
The processing of steps ST161 to ST172 is almost the same as steps ST141 to ST152 in the flowchart of FIG. 20 in Embodiment 7, and the description of the identical processing is omitted. In Embodiment 8, when the route guidance expression eligibility determination unit 14 determines in step ST168 that it is eligible to present the route guidance expression (YES in step ST168), as in steps ST149 to ST152 of FIG. 20 in Embodiment 7, the presentation content corresponding to the route guidance expression is searched for (step ST169), and when the corresponding presentation content is found, it is acquired and output (steps ST170 to ST172).
On the other hand, when it is determined in step ST168 that there is no eligibility (NO in step ST168), Embodiment 7 simply ended the processing; in Embodiment 8, the route guidance expression presentation content acquisition unit 9 acquires presentation content indicating that the route guidance expression is not eligible (step ST173) and outputs it (step ST172).
The user may also be allowed to set whether or not to use the route guidance expression eligibility determination function in Embodiment 8.
FIG. 22 is a block diagram showing an example of a navigation device according to Embodiment 9 of the present invention. The same components as those described in Embodiments 1 to 8 are denoted by the same reference numerals, and redundant description is omitted. In Embodiment 9 described below, compared with Embodiment 7, a route resetting unit 15 is further provided; when the route guidance expression eligibility determination unit 14 determines that the presentation content (route guidance expression) is not eligible because it deviates from the set route, the route to the destination is reset with the deviating route as a waypoint.
When the route guidance expression eligibility determination unit 14 determines that the route guidance expression is not eligible because it deviates from the set route, the route resetting unit 15 resets the route to the destination with the deviating route as a waypoint.
The processing of steps ST181 to ST192 is almost the same as steps ST141 to ST152 in the flowchart of FIG. 20 in Embodiment 7, and the description of the identical processing is omitted. In Embodiment 9, when the route guidance expression eligibility determination unit 14 determines in step ST188 that it is eligible to present the route guidance expression (YES in step ST188), as in steps ST149 to ST152 of FIG. 20 in Embodiment 7, the presentation content corresponding to the route guidance expression is searched for (step ST189), and when the corresponding presentation content is found, it is acquired and output (steps ST190 to ST192).
On the other hand, when it is determined in step ST188 that the route guidance expression is not eligible (NO in step ST188), Embodiment 7 simply ended the processing; in Embodiment 9, it is further determined whether the expression was found ineligible because it deviates from the set route (step ST193). If it was determined in step ST188 not to be eligible because it deviates from the set route (YES in step ST193), the route resetting unit 15 resets the route to the destination so as to pass through the deviating route (step ST194). If the expression was found ineligible for another reason (NO in step ST193), the processing ends.
The user may also be allowed to set whether or not to use the route resetting function in Embodiment 9.
FIG. 24 is a block diagram showing an example of a navigation device according to Embodiment 10 of the present invention. The same components as those described in Embodiments 1 to 9 are denoted by the same reference numerals, and redundant description is omitted. In Embodiment 10 described below, compared with Embodiment 1, a cancellation/correction expression storage unit 16 and a cancellation/correction expression extraction unit 17 are further provided; when an expression canceling the presentation of the presentation content is extracted, the presentation content is not output, and when an expression correcting the presentation content is extracted, the corrected presentation content is output.
FIG. 25 is a diagram showing an example of the cancellation/correction expression storage unit 16. As shown in this figure, the cancellation/correction expression storage unit 16 stores, for example, cancellation expressions such as "that's wrong", "that was wrong", and "I made a mistake", and correction expressions such as "not ... but" and "stop".
The cancellation/correction expression extraction unit 17 performs morphological analysis with reference to the cancellation/correction expression storage unit 16 and extracts cancellation expressions and correction expressions from the character string of the speech recognition result of the speech recognition unit 2. When a correction expression is extracted, the unit also refers to the route guidance expression storage unit 3 and extracts the route guidance expression that follows the expression, that is, the corrected route guidance expression.
The processing of steps ST201 and ST202 is the same as steps ST01 and ST02 in the flowchart of FIG. 4 in Embodiment 1, and its description is omitted. In Embodiment 10, from the recognition result of the speech recognition unit 2 in step ST202, the route guidance expression extraction unit 4 extracts route guidance expressions with reference to the route guidance expression storage unit 3, and the cancellation/correction expression extraction unit 17 extracts cancellation/correction expressions with reference to the cancellation/correction expression storage unit 16. If no cancellation/correction expression is extracted by the cancellation/correction expression extraction unit 17 (NO in step ST203), this flowchart ends, and, as in Embodiments 1 to 9, the route guidance expression extraction unit 4 performs the processing of extracting and presenting route guidance expressions (illustration and description are omitted here).
In Embodiment 10, when the visual presentation content to be canceled is already displayed, it is canceled (deleted); however, the user may be allowed to set whether to cancel the displayed visual presentation content or to display that presentation content together with visual presentation content indicating the cancellation. The user may also be allowed to set whether or not to use the cancellation/correction expression extraction function in Embodiment 10.
Claims (17)
- A navigation device comprising a position acquisition unit that acquires the position of a moving object, the navigation device performing route guidance based on the position of the moving object acquired by the position acquisition unit and on map data, the navigation device comprising:
a voice acquisition unit that acquires input voice;
a voice recognition unit that performs voice recognition processing on voice data acquired by the voice acquisition unit;
a route guidance expression storage unit that stores route guidance expressions;
a route guidance expression extraction unit that refers to the route guidance expression storage unit and extracts a route guidance expression from a recognition result of the voice recognition unit;
a route guidance expression interpretation unit that interprets the route guidance expression extracted by the route guidance expression extraction unit and identifies a specific route guidance expression;
a route guidance expression presentation content storage unit that stores presentation content corresponding to the specific route guidance expression in association with the specific route guidance expression;
a route guidance expression presentation content acquisition unit that refers to the route guidance expression presentation content storage unit and acquires the corresponding presentation content based on the specific route guidance expression identified by the route guidance expression interpretation unit; and
a presentation control output unit that outputs the presentation content acquired by the route guidance expression presentation content acquisition unit.
- The navigation device according to claim 1, wherein the route guidance expression interpretation unit interprets the route guidance expression extracted by the route guidance expression extraction unit based on the position of the moving object and the map data, to identify the specific route guidance expression.
- The navigation device according to claim 1, further comprising a gesture recognition unit that recognizes gestures,
wherein the route guidance expression interpretation unit interprets the route guidance expression extracted by the route guidance expression extraction unit based on a recognition result of the gesture recognition unit, to identify the specific route guidance expression.
- The navigation device according to claim 3, further comprising a contradiction determination unit that determines whether the route guidance expression extracted by the route guidance expression extraction unit and the recognition result of the gesture recognition unit contradict each other,
wherein, when the contradiction determination unit determines that they contradict each other, the route guidance expression interpretation unit adopts, in accordance with a predetermined rule, either the route guidance expression extracted by the route guidance expression extraction unit or the recognition result of the gesture recognition unit, to identify the specific route guidance expression.
- The navigation device according to claim 4, wherein, when the contradiction determination unit determines that they contradict each other, the route guidance expression interpretation unit adopts the recognition result of the gesture recognition unit to identify the specific route guidance expression.
- The navigation device according to claim 4, wherein, when the contradiction determination unit determines that they contradict each other and a route to a destination has been set, the route guidance expression interpretation unit adopts whichever of the route guidance expression extracted by the route guidance expression extraction unit and the recognition result of the gesture recognition unit matches the set route, to identify the specific route guidance expression.
- The navigation device according to claim 1, wherein the presentation control output unit outputs the presentation content acquired by the route guidance expression presentation content acquisition unit based on a result of interpretation by the route guidance expression interpretation unit.
- The navigation device according to claim 1, wherein the route guidance expression interpretation unit interprets the route guidance expression extracted by the route guidance expression extraction unit and also identifies a position at which the presentation content is to be presented, and
when the presentation content is visual presentation content, the presentation control output unit displays the visual presentation content at the position identified by the route guidance expression interpretation unit.
- The navigation device according to claim 8, wherein, when a route to a destination has been set, the route guidance expression interpretation unit identifies the position at which the presentation content is to be presented based on information on the set route.
- The navigation device according to claim 8, further comprising an external object recognition unit that recognizes surrounding objects,
wherein the route guidance expression interpretation unit identifies the position at which the presentation content is to be presented based on a recognition result of the external object recognition unit.
- The navigation device according to claim 1, further comprising a route guidance expression appropriateness determination unit that determines whether the specific route guidance expression identified by the route guidance expression interpretation unit is appropriate,
wherein the presentation control output unit does not output the presentation content when the route guidance expression appropriateness determination unit determines that the specific route guidance expression is not appropriate.
- The navigation device according to claim 1, further comprising a route guidance expression appropriateness determination unit that determines whether the specific route guidance expression identified by the route guidance expression interpretation unit is appropriate,
wherein the presentation control output unit outputs presentation content indicating that the route guidance expression is not appropriate when the route guidance expression appropriateness determination unit determines that the specific route guidance expression is not appropriate.
- The navigation device according to claim 11 or claim 12, further comprising a route resetting unit that resets the route when the route guidance expression appropriateness determination unit determines that the specific route guidance expression is not appropriate.
- The navigation device according to claim 1, further comprising: a cancellation/correction expression storage unit that stores expressions used for cancellation or correction; and
a cancellation/correction expression extraction unit that refers to the cancellation/correction expression storage unit and extracts an expression used for cancellation/correction from the recognition result of the voice recognition unit,
wherein the presentation control output unit does not output the presentation content when a cancellation/correction expression is extracted.
- The navigation device according to claim 1, further comprising: a cancellation/correction expression storage unit that stores expressions used for cancellation or correction; and
a cancellation/correction expression extraction unit that refers to the cancellation/correction expression storage unit and extracts an expression used for cancellation/correction from the recognition result of the voice recognition unit,
wherein the presentation control output unit cancels the output of visual presentation content when a cancellation/correction expression is extracted while the visual presentation content is being output as the presentation content.
- The navigation device according to claim 1, further comprising: a cancellation/correction expression storage unit that stores expressions used for cancellation or correction; and
a cancellation/correction expression extraction unit that refers to the cancellation/correction expression storage unit and extracts an expression used for cancellation/correction from the recognition result of the voice recognition unit,
wherein, when a cancellation/correction expression is extracted, the presentation control output unit outputs presentation content corresponding to a route guidance expression extracted following the cancellation/correction expression.
- A navigation method comprising a step in which a position acquisition unit acquires the position of a moving object, and a step of performing route guidance based on the position of the moving object acquired by the position acquisition unit and on map data, the navigation method further comprising:
a step in which a voice acquisition unit acquires input voice;
a step in which a voice recognition unit performs voice recognition processing on voice data acquired by the voice acquisition unit;
a step in which a route guidance expression storage unit stores route guidance expressions;
a step in which a route guidance expression extraction unit refers to the route guidance expression storage unit and extracts a route guidance expression from a recognition result of the voice recognition unit;
a step in which a route guidance expression interpretation unit interprets the route guidance expression extracted by the route guidance expression extraction unit and identifies a specific route guidance expression;
a step in which a route guidance expression presentation content storage unit stores presentation content corresponding to the specific route guidance expression in association with the specific route guidance expression;
a step in which a route guidance expression presentation content acquisition unit refers to the route guidance expression presentation content storage unit and acquires the corresponding presentation content based on the specific route guidance expression identified by the route guidance expression interpretation unit; and
a step in which a presentation control output unit outputs the presentation content acquired by the route guidance expression presentation content acquisition unit.
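The processing flow recited in the device and method claims, together with the gesture-contradiction rules of claims 4 to 6, can be sketched as follows. This is a toy illustration under assumed data, not the patented implementation; the dictionaries and function names (`GUIDANCE_EXPRESSIONS`, `PRESENTATION_CONTENT`, `extract_expression`, `interpret`, `guide`) are hypothetical stand-ins for the claimed storage and processing units.

```python
# Route guidance expression storage unit: spoken phrases treated as route
# guidance expressions, mapped to specific route guidance expressions.
GUIDANCE_EXPRESSIONS = {"turn right": "RIGHT_TURN", "turn left": "LEFT_TURN"}

# Route guidance expression presentation content storage unit.
PRESENTATION_CONTENT = {
    "RIGHT_TURN": "right-arrow graphic",
    "LEFT_TURN": "left-arrow graphic",
}

def extract_expression(recognition_result):
    """Route guidance expression extraction unit: find a stored phrase in
    the voice recognition result."""
    for phrase in GUIDANCE_EXPRESSIONS:
        if phrase in recognition_result:
            return phrase
    return None

def interpret(phrase, gesture=None, set_route=None):
    """Route guidance expression interpretation unit.

    When a gesture recognition result contradicts the spoken expression
    (claim 4), adopt whichever matches the set route if one is set
    (claim 6); otherwise adopt the gesture (claim 5)."""
    spoken = GUIDANCE_EXPRESSIONS[phrase]
    if gesture is not None and gesture != spoken:  # contradiction detected
        if set_route in (spoken, gesture):
            return set_route                       # claim 6: follow the route
        return gesture                             # claim 5: prefer the gesture
    return spoken

def guide(recognition_result, gesture=None, set_route=None):
    """End-to-end flow: extraction -> interpretation -> content output."""
    phrase = extract_expression(recognition_result)
    if phrase is None:
        return None              # utterance contains no guidance expression
    specific = interpret(phrase, gesture, set_route)
    # Presentation content acquisition + presentation control output:
    return PRESENTATION_CONTENT[specific]

print(guide("turn right at the next corner"))
print(guide("turn right there", gesture="LEFT_TURN", set_route="LEFT_TURN"))
```

In the claimed device the interpretation step may also draw on vehicle position, map data, and surrounding-object recognition to fix the display position; the lookup above only illustrates the data flow between the units.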
Priority Applications (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
DE112011105833.0T DE112011105833B4 (de) | 2011-11-10 | 2011-11-10 | Navigationsvorrichtung, Navigationsverfahren und Navigationsprogramm |
US14/130,417 US9341492B2 (en) | 2011-11-10 | 2011-11-10 | Navigation device, navigation method, and navigation program |
PCT/JP2011/006292 WO2013069060A1 (ja) | 2011-11-10 | 2011-11-10 | ナビゲーション装置および方法 |
CN201180074648.8A CN103917848B (zh) | 2011-11-10 | 2011-11-10 | 导航装置及方法 |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/JP2011/006292 WO2013069060A1 (ja) | 2011-11-10 | 2011-11-10 | ナビゲーション装置および方法 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2013069060A1 true WO2013069060A1 (ja) | 2013-05-16 |
Family
ID=48288649
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2011/006292 WO2013069060A1 (ja) | 2011-11-10 | 2011-11-10 | ナビゲーション装置および方法 |
Country Status (4)
Country | Link |
---|---|
US (1) | US9341492B2 (ja) |
CN (1) | CN103917848B (ja) |
DE (1) | DE112011105833B4 (ja) |
WO (1) | WO2013069060A1 (ja) |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108731699A (zh) * | 2018-05-09 | 2018-11-02 | 上海博泰悦臻网络技术服务有限公司 | 智能终端及其基于语音的导航路线重新规划方法、及车辆 |
WO2020028103A1 (en) * | 2018-08-03 | 2020-02-06 | Gracenote, Inc. | Vehicle-based media system with audio ad and navigation-related action synchronization feature |
WO2020065892A1 (ja) * | 2018-09-27 | 2020-04-02 | 日産自動車株式会社 | 車両の走行制御方法及び走行制御装置 |
JP7461770B2 (ja) | 2020-03-25 | 2024-04-04 | 古河電気工業株式会社 | 監視装置および監視装置の動作方法 |
Families Citing this family (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2014109017A1 (ja) * | 2013-01-09 | 2014-07-17 | 三菱電機株式会社 | 音声認識装置および表示方法 |
US20150142251A1 (en) * | 2013-11-21 | 2015-05-21 | International Business Machines Corporation | Vehicle control based on colors representative of navigation information |
US10203211B1 (en) * | 2015-12-18 | 2019-02-12 | Amazon Technologies, Inc. | Visual route book data sets |
JP6272594B1 (ja) * | 2016-03-29 | 2018-01-31 | 三菱電機株式会社 | 音声案内装置及び音声案内方法 |
CN106289304A (zh) * | 2016-09-30 | 2017-01-04 | 百度在线网络技术(北京)有限公司 | 导航信息展示方法和装置 |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH0791977A (ja) * | 1993-09-28 | 1995-04-07 | Mazda Motor Corp | 音声対話式ナビゲーション装置 |
JPH1151685A (ja) * | 1997-08-08 | 1999-02-26 | Aisin Aw Co Ltd | 車両用ナビゲーション装置及び記憶媒体 |
JP2001133283A (ja) * | 1999-11-08 | 2001-05-18 | Alpine Electronics Inc | ナビゲーション装置 |
JP2002221430A (ja) * | 2001-01-29 | 2002-08-09 | Sony Corp | ナビゲーション装置、ナビゲーション方法及びナビゲーション装置のプログラム |
JP2003329476A (ja) * | 2002-05-13 | 2003-11-19 | Denso Corp | 車両用案内装置 |
Family Cites Families (43)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH09206329A (ja) | 1996-01-31 | 1997-08-12 | Sony Corp | 聴力補助装置 |
JPH1151674A (ja) * | 1997-08-08 | 1999-02-26 | Aisin Aw Co Ltd | 車両用ナビゲーション装置及び記憶媒体 |
JP3619380B2 (ja) * | 1998-12-25 | 2005-02-09 | 富士通株式会社 | 車載入出力装置 |
JP3980791B2 (ja) * | 1999-05-03 | 2007-09-26 | パイオニア株式会社 | 音声認識装置を備えたマンマシンシステム |
JP2001056228A (ja) | 1999-08-18 | 2001-02-27 | Alpine Electronics Inc | ナビゲーション装置 |
JP2001324345A (ja) | 2000-05-15 | 2001-11-22 | Alpine Electronics Inc | ナビゲーション装置及びナビゲーション装置の経路案内方法 |
JP3567864B2 (ja) * | 2000-07-21 | 2004-09-22 | 株式会社デンソー | 音声認識装置及び記録媒体 |
US20020133353A1 (en) * | 2001-01-24 | 2002-09-19 | Kavita Gaitonde | System, method and computer program product for a voice-enabled search engine for business locations, searchable by category or brand name |
JP2003156340A (ja) * | 2001-09-10 | 2003-05-30 | Pioneer Electronic Corp | ナビゲーションシステム、ナビゲーションシステム用情報サーバ装置および通信端末装置、並びに、ナビゲーションシステムにおける移動体の変更方法および変更処理プログラム |
JP4104313B2 (ja) * | 2001-10-03 | 2008-06-18 | 株式会社デンソー | 音声認識装置、プログラム及びナビゲーションシステム |
JP3907994B2 (ja) | 2001-10-12 | 2007-04-18 | アルパイン株式会社 | 誘導経路探索方法及びナビゲーション装置 |
JPWO2003078930A1 (ja) * | 2002-03-15 | 2005-07-14 | 三菱電機株式会社 | 車両用ナビゲーション装置 |
JP3951954B2 (ja) | 2003-04-08 | 2007-08-01 | 株式会社デンソー | 経路案内装置 |
CN1898529A (zh) * | 2003-12-26 | 2007-01-17 | 松下电器产业株式会社 | 导航装置 |
JP2005201793A (ja) * | 2004-01-16 | 2005-07-28 | Xanavi Informatics Corp | ナビゲーション装置の経路探索方法 |
CN1934416A (zh) * | 2004-03-22 | 2007-03-21 | 日本先锋公司 | 导航装置、导航方法、导航程序和计算机可读取记录介质 |
EP1693830B1 (en) * | 2005-02-21 | 2017-12-20 | Harman Becker Automotive Systems GmbH | Voice-controlled data system |
US7826945B2 (en) * | 2005-07-01 | 2010-11-02 | You Zhang | Automobile speech-recognition interface |
JP4804052B2 (ja) * | 2005-07-08 | 2011-10-26 | アルパイン株式会社 | 音声認識装置、音声認識装置を備えたナビゲーション装置及び音声認識装置の音声認識方法 |
JP2007071581A (ja) * | 2005-09-05 | 2007-03-22 | Xanavi Informatics Corp | ナビゲーション装置 |
JP2007127419A (ja) * | 2005-10-31 | 2007-05-24 | Aisin Aw Co Ltd | 経路案内システム及び経路案内方法 |
JP2007132870A (ja) * | 2005-11-11 | 2007-05-31 | Pioneer Electronic Corp | ナビゲーション装置、コンピュータプログラム、画面制御方法及び測定間隔制御方法 |
CN101331036B (zh) * | 2005-12-16 | 2011-04-06 | 松下电器产业株式会社 | 移动体用输入装置及方法 |
JP4878160B2 (ja) * | 2006-01-04 | 2012-02-15 | クラリオン株式会社 | 交通情報表示方法及びナビゲーションシステム |
NL1031202C2 (nl) * | 2006-02-21 | 2007-08-22 | Tomtom Int Bv | Navigatieapparaat en werkwijze voor het ontvangen en afspelen van geluidsmonsters. |
JP2007302223A (ja) | 2006-04-12 | 2007-11-22 | Hitachi Ltd | 車載装置の非接触入力操作装置 |
US8688451B2 (en) * | 2006-05-11 | 2014-04-01 | General Motors Llc | Distinguishing out-of-vocabulary speech from in-vocabulary speech |
KR100819234B1 (ko) * | 2006-05-25 | 2008-04-02 | 삼성전자주식회사 | 네비게이션 단말의 목적지 설정 방법 및 장치 |
KR100810275B1 (ko) * | 2006-08-03 | 2008-03-06 | 삼성전자주식회사 | 차량용 음성인식 장치 및 방법 |
US8170798B2 (en) * | 2006-09-22 | 2012-05-01 | Mitsubishi Electric Corporation | Navigation system and operation guidance display method for use in this navigation system |
US7937667B2 (en) | 2006-09-27 | 2011-05-03 | Donnelly Corporation | Multimedia mirror assembly for vehicle |
DE602006005830D1 (de) * | 2006-11-30 | 2009-04-30 | Harman Becker Automotive Sys | Interaktives Spracherkennungssystem |
EP2102596B1 (en) * | 2007-01-10 | 2018-01-03 | TomTom Navigation B.V. | Method of indicating traffic delays, computer program and navigation system therefor |
JP4225356B2 (ja) | 2007-04-09 | 2009-02-18 | トヨタ自動車株式会社 | 車両用ナビゲーション装置 |
WO2009073806A2 (en) * | 2007-12-05 | 2009-06-11 | Johnson Controls Technology Company | Vehicle user interface systems and methods |
JP5068202B2 (ja) | 2008-03-14 | 2012-11-07 | インターナショナル・ビジネス・マシーンズ・コーポレーション | ナビゲーションシステムおよびプログラム。 |
DE112009000554B4 (de) * | 2008-04-28 | 2013-12-12 | Mitsubishi Electric Corp. | Navigationsgerät |
JP2010145262A (ja) | 2008-12-19 | 2010-07-01 | Pioneer Electronic Corp | ナビゲーション装置 |
JP4973722B2 (ja) * | 2009-02-03 | 2012-07-11 | 株式会社デンソー | 音声認識装置、音声認識方法、及びナビゲーション装置 |
US8788267B2 (en) * | 2009-09-10 | 2014-07-22 | Mitsubishi Electric Research Laboratories, Inc. | Multi-purpose contextual control |
US20110320114A1 (en) * | 2010-06-28 | 2011-12-29 | Microsoft Corporation | Map Annotation Messaging |
JP5414951B2 (ja) | 2011-10-12 | 2014-02-12 | 三菱電機株式会社 | ナビゲーション装置、方法およびプログラム |
US9689680B2 (en) * | 2013-06-04 | 2017-06-27 | Here Global B.V. | Method and apparatus for approaches to provide for combining contexts related to items of interest and navigation |
2011
- 2011-11-10 DE DE112011105833.0T patent/DE112011105833B4/de not_active Expired - Fee Related
- 2011-11-10 CN CN201180074648.8A patent/CN103917848B/zh not_active Expired - Fee Related
- 2011-11-10 WO PCT/JP2011/006292 patent/WO2013069060A1/ja active Application Filing
- 2011-11-10 US US14/130,417 patent/US9341492B2/en not_active Expired - Fee Related
Cited By (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108731699A (zh) * | 2018-05-09 | 2018-11-02 | 上海博泰悦臻网络技术服务有限公司 | 智能终端及其基于语音的导航路线重新规划方法、及车辆 |
WO2020028103A1 (en) * | 2018-08-03 | 2020-02-06 | Gracenote, Inc. | Vehicle-based media system with audio ad and navigation-related action synchronization feature |
US10880023B2 (en) | 2018-08-03 | 2020-12-29 | Gracenote, Inc. | Vehicle-based media system with audio advertisement and external-device action synchronization feature |
US10887031B2 (en) | 2018-08-03 | 2021-01-05 | Gracenote, Inc. | Vehicle-based media system with audio ad and navigation-related action synchronization feature |
US11444711B2 (en) | 2018-08-03 | 2022-09-13 | Gracenote, Inc. | Vehicle-based media system with audio ad and navigation-related action synchronization feature |
US11799574B2 (en) | 2018-08-03 | 2023-10-24 | Gracenote, Inc. | Vehicle-based media system with audio ad and navigation-related action synchronization feature |
US12237910B2 (en) | 2018-08-03 | 2025-02-25 | Gracenote, Inc. | Vehicle-based media system with audio ad and navigation-related action synchronization feature |
WO2020065892A1 (ja) * | 2018-09-27 | 2020-04-02 | 日産自動車株式会社 | 車両の走行制御方法及び走行制御装置 |
JP7461770B2 (ja) | 2020-03-25 | 2024-04-04 | 古河電気工業株式会社 | 監視装置および監視装置の動作方法 |
Also Published As
Publication number | Publication date |
---|---|
DE112011105833T5 (de) | 2014-08-28 |
US20140156181A1 (en) | 2014-06-05 |
CN103917848B (zh) | 2016-09-28 |
CN103917848A (zh) | 2014-07-09 |
DE112011105833B4 (de) | 2019-07-04 |
US9341492B2 (en) | 2016-05-17 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2013069060A1 (ja) | ナビゲーション装置および方法 | |
US8694323B2 (en) | In-vehicle apparatus | |
CN103917847B (zh) | 导航装置及方法 | |
JP6173477B2 (ja) | ナビゲーション用サーバ、ナビゲーションシステムおよびナビゲーション方法 | |
WO2014068788A1 (ja) | 音声認識装置 | |
JP4997796B2 (ja) | 音声認識装置、及びナビゲーションシステム | |
JP6214297B2 (ja) | ナビゲーション装置および方法 | |
JP2009251388A (ja) | 母国語発話装置 | |
US9476728B2 (en) | Navigation apparatus, method and program | |
JP4914632B2 (ja) | ナビゲーション装置 | |
JP2000338993A (ja) | 音声認識装置、その装置を用いたナビゲーションシステム | |
KR101063607B1 (ko) | 음성인식을 이용한 명칭 검색 기능을 가지는 네비게이션시스템 및 그 방법 | |
JPWO2013069060A1 (ja) | ナビゲーション装置、方法およびプログラム | |
JP2011038983A (ja) | 情報表示装置、経路設定方法およびプログラム | |
JP2000122685A (ja) | ナビゲーションシステム | |
JP2008164809A (ja) | 音声認識装置 | |
US20110218809A1 (en) | Voice synthesis device, navigation device having the same, and method for synthesizing voice message | |
WO2019124142A1 (ja) | ナビゲーション装置およびナビゲーション方法、ならびにコンピュータプログラム | |
JPH09114487A (ja) | 音声認識装置,音声認識方法,ナビゲーション装置,ナビゲート方法及び自動車 | |
JP2005114964A (ja) | 音声認識方法および音声認識処理装置 | |
WO2013051072A1 (ja) | ナビゲーション装置、方法およびプログラム | |
JP4645708B2 (ja) | コード認識装置および経路探索装置 | |
JP2007280104A (ja) | 情報処理装置、情報処理方法、情報処理プログラムおよびコンピュータに読み取り可能な記録媒体 | |
JP2877045B2 (ja) | 音声認識装置,音声認識方法,ナビゲーション装置,ナビゲート方法及び自動車 | |
JP2009086132A (ja) | 音声認識装置、音声認識装置を備えたナビゲーション装置、音声認識装置を備えた電子機器、音声認識方法、音声認識プログラム、および記録媒体 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 11875591 Country of ref document: EP Kind code of ref document: A1 |
| ENP | Entry into the national phase | Ref document number: 2013542690 Country of ref document: JP Kind code of ref document: A |
| WWE | Wipo information: entry into national phase | Ref document number: 14130417 Country of ref document: US |
| WWE | Wipo information: entry into national phase | Ref document number: 1120111058330 Country of ref document: DE Ref document number: 112011105833 Country of ref document: DE |
| 122 | Ep: pct application non-entry in european phase | Ref document number: 11875591 Country of ref document: EP Kind code of ref document: A1 |