
WO2018033424A1 - Method for recognizing at least one object in an environmental region of a motor vehicle based on object features from images, camera system as well as motor vehicle - Google Patents


Info

Publication number
WO2018033424A1
WO2018033424A1 PCT/EP2017/069997
Authority
WO
WIPO (PCT)
Prior art keywords
object features
list
features
image
feature
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Ceased
Application number
PCT/EP2017/069997
Other languages
French (fr)
Inventor
Pullarao MADDU
Senthil Kumar Yogamani
Sunil Chandra
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Connaught Electronics Ltd
Original Assignee
Connaught Electronics Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Connaught Electronics Ltd filed Critical Connaught Electronics Ltd
Publication of WO2018033424A1


Classifications

    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • G06V20/56Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/58Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads

Definitions

  • the present invention relates to a method for recognizing at least one object in an environmental region of a motor vehicle, in which a first image and a second image of the environmental region are provided by means of a camera of the motor vehicle, first object features are determined in the first image and second object features are determined in the second image, wherein the first and the second object features each describe the at least one object, and it is checked in a respective association step for each of the first object features if a second object feature is present, which has a predetermined similarity to the first object feature, and if the second object feature has the predetermined similarity to the first object feature, the first object feature is associated with the second object feature.
  • the present invention relates to a camera system for a motor vehicle.
  • the present invention relates to a motor vehicle with such a camera system.
  • the interest is directed to the recognition of objects in an environmental region of a motor vehicle.
  • the recognition is effected based on images provided by a camera of the motor vehicle.
  • a sequence of images is provided by the camera and object features are determined in each image, which describe objects in the environmental region.
  • object features are determined in the respective images and the object features of the images are associated with each other. This association can then be used to recognize the objects.
  • obstacles, further vehicles, pedestrians and/or roadway markings in the environmental region can for example be recognized.
  • the association of the object features can be used to recognize three-dimensional objects in the environmental region.
  • the object features which have been recognized in the respective images, are provided in a one-dimensional list. Therein, it is checked for each of the object features of an image if it has a predetermined similarity to the object features of a further image.
  • this brute-force approach entails high complexity.
  • a further approach consists in using corresponding search trees, such as for example the k-d tree, in a multi-dimensional feature space.
  • this approach entails high storage requirements and cannot be efficiently realized.
  • in recognizing objects in the environment of the motor vehicle, this usually results in only few object features of the consecutive images being associated with each other.
  • this object is solved by a method, by a camera system as well as a motor vehicle having the features according to the respective independent claims.
  • Advantageous developments of the present invention are the subject matter of the dependent claims.
  • a first image and a second image of the environmental region are provided by means of a camera of the motor vehicle.
  • First object features are preferably determined in the first image and second object features are preferably determined in the second image, wherein the first and the second object features in particular each describe the at least one object.
  • For each of the first object features it is in particular checked in a respective association step if a second object feature is present, which has a predetermined similarity to the first object feature. If the second object feature has the predetermined similarity to the first object feature, the first object feature is in particular associated with the second object feature.
  • a plurality of first list sections is in particular determined and the first object features are associated with the first list sections based on their position in the first image.
  • a plurality of second list sections corresponding to the first list sections is preferably determined and the second object features are associated with the second list sections based on their position in the second image. In the respective association step, it is then preferably checked if a second object feature in the second list section corresponding to the first list section of the first object feature has the predetermined similarity to the first object feature.
  • a method according to the invention serves for recognizing at least one object in an environmental region of a motor vehicle.
  • a first image and a second image of the environmental region are provided by means of a camera of the motor vehicle.
  • First object features are determined in the first image and second object features are determined in the second image, wherein the first and the second object features each describe the at least one object.
  • For each of the first object features it is checked in a respective association step if a second object feature is present, which has a predetermined similarity to the first object feature. If the second object feature has the predetermined similarity to the first object feature, the first object feature is associated with the second object feature.
  • a plurality of first list sections is determined and the first object features are associated with the first list sections based on their position in the first image.
  • a plurality of second list sections corresponding to the first list sections is determined and the second object features are associated with the second list sections based on their position in the second image. In the respective association step, it is checked if a second object feature in the second list section corresponding to the first list section of the first object feature has the predetermined similarity to the first object feature.
  • objects in the environmental region of the motor vehicle are to be recognized.
  • Such objects can for example be obstacles, further traffic participants or roadway markings.
  • a sequence of images can be captured by the camera of the motor vehicle.
  • a first image and at least a second image are captured by the camera, wherein first the first image and subsequently the second image are captured.
  • object features are determined, which describe the objects in the environmental region.
  • a corresponding object recognition algorithm such as for example SIFT (Scale-Invariant Feature Transform) or SURF (Speeded Up Robust Features) can be used.
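As an illustration only (not part of the patent), the notion of an object feature with an image position and a descriptor, together with the check for a predetermined similarity, could be sketched as follows; the sum-of-squared-differences measure and the threshold value are assumptions:

```python
from dataclasses import dataclass

@dataclass
class ObjectFeature:
    x: int             # column position of the feature in the image
    y: int             # line (row) position of the feature in the image
    descriptor: tuple  # descriptor vector, e.g. as produced by SIFT or SURF

def descriptor_distance(a: ObjectFeature, b: ObjectFeature) -> float:
    """Sum of squared differences between the two descriptor vectors."""
    return sum((p - q) ** 2 for p, q in zip(a.descriptor, b.descriptor))

def has_predetermined_similarity(a: ObjectFeature, b: ObjectFeature,
                                 threshold: float = 1.0) -> bool:
    """Two features are considered similar if their descriptor distance
    stays below a predetermined threshold."""
    return descriptor_distance(a, b) < threshold
```

Any matching method that yields a scalar dissimilarity between two descriptors could take the place of the sum of squared differences here.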
  • the second object features are determined in the second image.
  • a second object feature is present in the second image, which can be associated with the first object feature.
  • it is checked if a second object feature is present, which has a predetermined similarity to the first object feature.
  • it can be checked if the first object feature can be associated with a second object feature with the aid of a matching method.
  • a plurality of first list sections is determined and the first object features are associated with the first list sections based on their position in the first image.
  • the first object features are provided as a one-dimensional list.
  • this list is divided into first list sections. Therein, the division into the first list sections is effected depending on the position of the first object features in the first image.
  • a plurality of second list sections is present and the second object features are associated with the respective second list sections based on their position in the second image.
  • both the first object features and the second object features are not only stored in a list, but information about the position of the object features is present by the division of the object features into the respective list sections. It can be used in the association of the object features with each other.
  • the number of second object features, which are checked to the effect if they can be associated with the first object feature is thus limited. Thereby, the complexity in the respective association step can be considerably reduced.
  • the recognition of the objects in the environmental region can be more efficiently and reliably performed.
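A minimal sketch of this bucketing step (illustrative only; the tuple representation (x, y, descriptor) is an assumption, not the patent's data format): each image line yields one list section, and the features of a line are entered left to right within it.

```python
def build_list_sections(features, num_lines):
    """Associate each feature with the list section of its image line.

    `features` is an iterable of (x, y, descriptor) tuples; the result has
    one list section per image line, ordered left to right within the line.
    """
    sections = [[] for _ in range(num_lines)]
    for feat in features:
        sections[feat[1]].append(feat)      # bucket by line index y
    for section in sections:
        section.sort(key=lambda f: f[0])    # left-to-right within the line
    return sections
```

After this step, the position information is carried implicitly by the section a feature belongs to, which is what the association step later exploits.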
  • the first image and the second image are each divided into areas, wherein each of the areas includes at least one line of pixels or at least one column of pixels, the first object features of an area are associated with a respective first list section and the second object features of an area are associated with a respective second list section.
  • both the first image and the second image can be divided into areas.
  • the respective areas can for example include at least one column of pixels of the respective image.
  • the respective areas include at least one line of pixels of the respective image. This is in particular suitable if the object feature is associated with a pixel. If the respective object feature is associated with multiple pixels, it can also be the case that the respective area includes a plurality of lines. In the association, thus, the first object features and the second object features located in the same line in the images can be compared to each other. Thus, the area in which matching second object features for the first object feature are searched for can be restricted.
  • the first object features of an area are successively entered into the respective first list section depending on their position in the area and the second object features of an area are successively entered into the respective second list section depending on their position in the area.
  • the respective area can include at least one line of pixels of the respective image.
  • the first object features of a line of the first image are for example successively entered into one of the first list sections from left to right.
  • the second object features of a line of the second image are entered into the second list section from left to right.
  • the first object features can be compared to the second object features arranged in the corresponding lines in the images in simple manner.
  • the objects usually move along the direction of the lines in the respective images. This allows efficient association of the respective first object features with the second object features. In this manner, the number of object features, which are associated with each other, can be increased.
  • the first object features are stored in a two-dimensional data field, wherein the lines of the data field describe the respective first list sections, and the second object features are stored in a two-dimensional data field, wherein the lines of the data field describe the respective second list sections.
  • both the first object features and the second object features can be stored in a two-dimensional data field or array. This can already be effected in determining the object features with the aid of a corresponding object recognition algorithm.
  • information about the position and in particular the height of the object features in the image is additionally present.
  • the first object features are stored in a one-dimensional data field and an additional data field is determined, which associates the first object features with the first list sections.
  • the second object features are stored in a one-dimensional data field and an additional data field is determined, which associates the second object features with the second list sections. It can also be provided that both the first and the second object features are stored in a one-dimensional data field in known manner. In this case, an additional data field or an additional array can be determined, which describes which of the object features is to be associated with which of the list sections. In this manner, information about the position and height of the respective object features in the image can be provided.
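One way to realize the variant with a one-dimensional data field plus an additional index field is a prefix-sum layout (a sketch under assumed data shapes, similar in spirit to compressed sparse row storage; the function and variable names are illustrative):

```python
def build_section_index(features, num_lines):
    """Store features in one flat, line-sorted list and build an additional
    data field `starts` so that section r occupies flat[starts[r]:starts[r+1]]."""
    flat = sorted(features, key=lambda f: (f[1], f[0]))  # by line, then column
    starts = [0] * (num_lines + 1)
    for feat in flat:
        starts[feat[1] + 1] += 1      # count features per line
    for r in range(num_lines):
        starts[r + 1] += starts[r]    # prefix sums turn counts into offsets
    return flat, starts

def section(flat, starts, line):
    """Read one list section out of the flat list via the index field."""
    return flat[starts[line]:starts[line + 1]]
```

The flat list stays one-dimensional, while the offset array supplies the per-line position information that the plain list lacks.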
  • a first object feature is selected from the first object features of the first image, with which a second object feature from the second image is to be associated.
  • it is then determined in which first list section the selected first object feature is arranged.
  • the corresponding second list section is then determined in the data field of second object features.
  • the second object features in this second list section are checked to the effect if they have the predetermined similarity to the first object feature.
  • second object features in a predetermined number of adjacent second list sections are checked in the respective association step starting from the second list section corresponding to the first list section of the first object feature.
  • if a second object feature having the predetermined similarity to the first object feature was not found in the corresponding second list section, the adjacent second list sections are searched for second object features having the predetermined similarity to the first object feature.
  • the number of adjacent second list sections, in which it is searched for the second object feature with the predetermined similarity is also restricted. Within the adjacent second list sections, the number of second object features, which are checked for the predetermined similarity, can be limited. This allows considerable reduction of the computational effort in the respective association step.
  • the predetermined number of second object features and/or the predetermined number of adjacent second list sections is determined based on a predetermined distance value. If a first object feature is selected, with which a second object feature is to be associated, first, the corresponding second list section is determined. In this second list section, the predetermined number of second object features is checked to the effect if they have the predetermined similarity to the first object feature. In addition, the second object features in the adjacent list sections are examined to the effect if they have the similarity. By the distance value, the search area can be limited and thus computing time can be saved.
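How the distance value d could bound the search area can be sketched as follows (illustrative only; treating d both as the per-section candidate limit and as the neighbourhood radius follows the description above, but the exact rule is an assumption):

```python
def candidate_features(second_sections, line, d):
    """Collect match candidates for a first feature lying in list section
    `line`: at most d features from the corresponding second list section
    and from each of up to d adjacent second list sections on either side."""
    lo = max(0, line - d)
    hi = min(len(second_sections) - 1, line + d)
    candidates = []
    for r in range(lo, hi + 1):
        candidates.extend(second_sections[r][:d])  # per-section limit d
    return candidates
```

The candidate set, and with it the computing time of the association step, shrinks as d is chosen smaller.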
  • a position of the object and/or a movement of the object in the environmental region are determined based on the association of the respective first object features with the second object features. If the motor vehicle or the camera is moved relatively to the at least one object, a relative position of the object in the environmental region can be determined based on the association of the first object features with the second object features.
  • the association of the first object features with the second object features can be used to perform a three-dimensional object recognition.
  • a movement of the object can be determined based on the association of the object features with each other.
  • the object in the environmental region of the motor vehicle can be tracked.
  • the association of the object features can be used for simultaneous localization and map creation (SLAM).
  • a camera system according to the invention for a motor vehicle is adapted to perform a method according to the invention and the advantageous configuration thereof.
  • the camera system can include at least one camera.
  • the camera system can include a computing device, by means of which the first object features in the first image and the second object features in the second image can be determined.
  • the computing device can be adapted to divide the respective object features into the respective list sections depending on their position.
  • a motor vehicle according to the invention includes a camera system according to the invention.
  • the motor vehicle is in particular formed as a passenger car.
  • Fig. 1 a motor vehicle according to an embodiment of the invention, which has a camera system with a plurality of cameras;
  • Fig. 2 a first image and a second image, which are provided by one of the cameras;
  • Fig. 3 object features recognized in the first image, which describe objects in the environmental region of the motor vehicle;
  • Fig. 4 a schematic flow diagram of a method for recognizing objects in the environmental region according to the prior art;
  • Fig. 5 a list of first object features, which are associated with a list of second object features;
  • Fig. 6 a two-dimensional data field of first object features, which are associated with second object features in a further two-dimensional data field;
  • Fig. 7 a schematic flow diagram of a method for recognizing objects.
  • Fig. 1 shows a motor vehicle 1 according to an embodiment of the present invention in a plan view.
  • the motor vehicle 1 is formed as a passenger car.
  • the motor vehicle 1 includes a camera system 2, which serves for assisting a driver in driving the motor vehicle 1.
  • the camera system 2 includes a plurality of cameras 4, which are arranged distributed at the motor vehicle 1.
  • the camera system 2 includes four cameras 4, wherein one of the cameras 4 is arranged in a rear area 5 of the motor vehicle 1, one of the cameras 4 is arranged in a front area 7 of the motor vehicle 1 and two cameras 4 are arranged in respective lateral areas 6 of the motor vehicle 1.
  • Images 10, 11 of an environmental region 8 of the motor vehicle 1 can be provided with the aid of the cameras 4.
  • the images 10, 11 can describe objects 9 in the environmental region 8.
  • the camera system 2 includes a computing device 3, by means of which the images 10, 11 of the cameras 4 can be evaluated.
  • the cameras 4 are connected to the computing device 3 for data transmission.
  • the objects 9 in the images 10, 11 of the cameras 4 can be recognized by means of the computing device 3.
  • Fig. 2 shows a first image 10 and a second image 11, which are provided by one of the cameras 4.
  • the images 10, 11 show objects 9 in the form of parked vehicles and a pedestrian.
  • object features 12, 13 are recognized with the aid of a corresponding object recognition algorithm.
  • in the first image 10, first object features 12 are exemplarily shown, which describe the objects 9.
  • in the second image 11, second object features 13 are also exemplarily shown, which also describe the objects 9.
  • the first object features 12 from the first image 10 are associated with the second object features 13 in the second image 11.
  • Fig. 3 schematically shows the individual first object features 12, which have been recognized in a first image 10.
  • the first object features 12 can be recognized by means of a corresponding object recognition algorithm. Therein, it can be provided that the respective object features 12 are associated with a pixel or multiple pixels of the first image. Therein, the first image 10 is divided into individual areas 14. Therein, individual areas 14 correspond to the lines 15 of the first image 10.
  • the second object features 13 in the second image 11 can be determined in analogous manner.
  • Fig. 4 shows a schematic flow diagram of a method for recognizing objects 9 according to the prior art.
  • in a first step S1, the first image 10 is provided.
  • the first object features 12 are detected and extracted in the first image 10.
  • a list 16 with the first object features 12 is then determined.
  • the second image 11 is provided.
  • the second object features 13 are detected and extracted in the second image 11.
  • a list 17 with the second object features 13 is determined in a step S3'.
  • the first object features 12 are associated with the second object features 13.
  • pairs of matching object features 12, 13 are output in a step S5.
  • Fig. 5 schematically shows the list 16 with the first object features 12 and the list 17 with the second object features 13.
  • a method according to the prior art is illustrated, which describes, how first object features 12 are associated with the second object features 13.
  • each of the first object features 12 is compared to each of the second object features 13 and it is respectively checked if the second object feature 13 has a predetermined similarity to the first object feature 12.
  • a corresponding matching method can be used.
  • high computational effort arises since the complexity increases quadratically with the number of object features.
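The complexity difference can be made concrete with a rough operation count (illustrative numbers, not measurements from the patent):

```python
def brute_force_checks(n_first, n_second):
    """Every first feature compared to every second feature: quadratic growth."""
    return n_first * n_second

def sectioned_checks(n_first, per_section, d):
    """Upper bound with list sections: each first feature is compared to at
    most d candidates in each of at most 2*d + 1 list sections."""
    return n_first * min(per_section, d) * (2 * d + 1)

print(brute_force_checks(1000, 1000))  # 1000000 similarity checks
print(sectioned_checks(1000, 10, 3))   # 21000 checks: linear in n_first
```

With the list sections, the number of checks per first feature is a constant bounded by d, so the total effort grows linearly rather than quadratically with the number of features.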
  • both the list 16 with the first object features 12 and the list 17 with the second object features 13 are divided into respective list sections 18, 19. This is schematically shown in Fig. 6.
  • the first object features 12 are entered in a two-dimensional array.
  • the second object features 13 are also entered in a two-dimensional array.
  • the first object features 12 are divided into the first list sections 18 and the second object features 13 are divided into the second list sections 19.
  • the respective list sections 18, 19 correspond to the lines of the respective two-dimensional array.
  • the list sections 18, 19 are determined based on the lines 15 of the images 10, 11.
  • all of the object features 12, 13 of a line 15 are successively entered into a line of the two-dimensional array.
  • a second object feature 13 is present for one of the first object features 12, which has a predetermined similarity to the first object feature 12.
  • a first object feature 12 is selected.
  • it is indicated by the star 20.
  • it is determined in which first list section 18 the first object feature 12 is arranged.
  • the corresponding list section 19 is determined in the two-dimensional array of the second object features 13.
  • a predetermined number of second object features 13 are checked in this corresponding list section 19 to the effect if they have a predetermined similarity to the selected first object feature 12. Therein, the check is begun starting from the second object feature 13 at the beginning of the second list section 19. This second object feature 13 is presently indicated by the star 21.
  • in this second list section 19, only a predetermined number of second object features 13 are examined.
  • the predetermined number of second object features 13 is preset by a distance value d. If a second object feature 13 having a predetermined similarity to the first object feature 12 has not been found in this second list section 19, the adjacent second list sections 19 are examined. Therein, the number of the adjacent second list sections 19, in which the second object features 13 are examined, is also limited by the distance value d. Overall, the area in which matching second object features 13 are searched for can be restricted. Thereby, the complexity in the association of the object features 12, 13 can be considerably reduced. In this case, the complexity increases only linearly with the number of object features 12, 13.
  • Fig. 7 shows a schematic flow diagram of a method for recognizing the objects 9 in the environmental region 8 according to an embodiment of the invention.
  • a first object feature 12 is selected from the two-dimensional array.
  • a value k, which describes the columns in the two-dimensional array of second object features 13, is set to 0.
  • a value h which describes the lines in the two-dimensional array of the second object features 13, is set to the value of the line, in which the selected first object feature 12 is also located.
  • a threshold value is fixed, which describes the distance value d.
  • it is checked if the second object feature 13 at the beginning of the second list section 19 has a predetermined similarity to the selected first object feature 12.
  • a distance between the selected first object feature 12 and the second object feature 13 is determined, to which the comparison was performed.
  • in a step S9, it is checked if this distance is less than the threshold value. If this is the case, the threshold value is set to the distance in a step S10. If this is not the case, the method is continued in a step S11. Herein, it is checked if all of the second object features 13 in the list section 19 have already been checked. If this is not the case, the method is again continued in the step S8. If this is the case, the method is continued in a step S12. Herein, it is checked if the second object features 13 in the adjacent second list sections 19 have been checked.
  • if this is not the case, the method is continued with the step S8. If this is the case, the method is continued in a step S13. Herein, it is checked if all of the second object features 13 have been checked. If this has not been effected, the method is again continued in the step S6. If this is the case, the method is finally terminated in a step S14.
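The loop of steps S6 to S14 could be sketched as follows (a simplified, hypothetical reading: the descriptor distance stands in for the similarity check, and the shrinking threshold keeps the closest candidate found so far; all names are assumptions):

```python
def match_feature(first_feature, second_sections, d, initial_threshold):
    """Find the closest second object feature for one selected first feature,
    scanning only the corresponding second list section and up to d adjacent
    sections; the threshold is tightened whenever a closer candidate is found
    (steps S9/S10)."""
    _, fy, fdesc = first_feature           # (x, y, descriptor) tuple
    best = None
    threshold = initial_threshold
    lo = max(0, fy - d)
    hi = min(len(second_sections) - 1, fy + d)
    for line in range(lo, hi + 1):
        for candidate in second_sections[line]:
            sdesc = candidate[2]
            dist = sum((p - q) ** 2 for p, q in zip(fdesc, sdesc))
            if dist < threshold:   # step S9: closer than anything so far?
                threshold = dist   # step S10: tighten the threshold
                best = candidate
    return best
```

Running this once per first object feature yields the pairs of matched features while never looking outside the band of list sections allowed by d.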

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to a method for recognizing at least one object (9) in an environmental region (8) of a motor vehicle (1), in which a first image (10) and a second image (11) of the environmental region (8) are provided by means of a camera (4) of the motor vehicle (1), first object features (12) are determined in the first image (10) and second object features (13) are determined in the second image (11), wherein the first and the second object features (12, 13) each describe the at least one object (9), in a respective association step it is checked for each of the first object features (12) if a second object feature (13) is present, which has a predetermined similarity to the first object feature (12), and if the second object feature (13) has the predetermined similarity to the first object feature (12), the first object feature (12) is associated with the second object feature (13), wherein a plurality of first list sections (18) is determined and the first object features (12) are associated with the first list sections (18) based on their position in the first image (10), a plurality of second list sections (19), which correspond to the first list sections (18), is determined and the second object features (13) are associated with the second list sections (19) based on their position in the second image (11), and it is checked in the respective association step if a second object feature (13) in the second list section (19) corresponding to the first list section (18) of the first object feature (12) has the predetermined similarity to the first object feature (12).

Description

Method for recognizing at least one object in an environmental region of a motor vehicle based on object features from images, camera system as well as motor vehicle
The present invention relates to a method for recognizing at least one object in an environmental region of a motor vehicle, in which a first image and a second image of the environmental region are provided by means of a camera of the motor vehicle, first object features are determined in the first image and second object features are determined in the second image, wherein the first and the second object features each describe the at least one object, and it is checked in a respective association step for each of the first object features if a second object feature is present, which has a predetermined similarity to the first object feature, and if the second object feature has the predetermined similarity to the first object feature, the first object feature is associated with the second object feature. Moreover, the present invention relates to a camera system for a motor vehicle. Finally, the present invention relates to a motor vehicle with such a camera system.
Presently, the interest is directed to the recognition of objects in an environmental region of a motor vehicle. Therein, the recognition is effected based on images provided by a camera of the motor vehicle. Hereto, it is known from the prior art that a sequence of images is provided by the camera and object features are determined in each image, which describe objects in the environmental region. Therein, it is further known that object features are determined in the respective images and the object features of the images are associated with each other. This association can then be used to recognize the objects. Thus, obstacles, further vehicles, pedestrians and/or roadway markings in the environmental region can for example be recognized. Further, the association of the object features can be used to recognize three-dimensional objects in the environmental region.
Therein, it is usually provided that the object features, which have been recognized in the respective images, are provided in a one-dimensional list. Therein, it is checked for each of the object features of an image if it has a predetermined similarity to the object features of a further image. However, this brute-force approach entails high complexity. A further approach consists in using corresponding search trees, such as for example the k-d tree, in a multi-dimensional feature space. However, this approach entails high storage requirements and cannot be efficiently realized. In recognizing objects in the environment of the motor vehicle, this usually results in only few object features of the consecutive images being associated with each other. However, in order to perform a reliable recognition of objects and in particular a three-dimensional object recognition, it is desirable to associate as many object features with each other as possible.
It is the object of the present invention to demonstrate a solution, how objects in an environmental region of a motor vehicle can be more reliably recognized based on the association of object features in images.
According to the invention, this object is solved by a method, by a camera system as well as a motor vehicle having the features according to the respective independent claims. Advantageous developments of the present invention are the subject matter of the dependent claims.
In an embodiment of a method for recognizing at least one object in an environmental region of a motor vehicle, a first image and a second image of the environmental region are provided by means of a camera of the motor vehicle. First object features are preferably determined in the first image and second object features are preferably determined in the second image, wherein the first and the second object features in particular each describe the at least one object. For each of the first object features, it is in particular checked in a respective association step if a second object feature is present, which has a predetermined similarity to the first object feature. If the second object feature has the predetermined similarity to the first object feature, the first object feature is in particular associated with the second object feature. Furthermore, a plurality of first list sections is in particular determined and the first object features are associated with the first list sections based on their position in the first image. Moreover, a plurality of second list sections corresponding to the first list sections is preferably determined and the second object features are associated with the second list sections based on their position in the second image. In the respective association step, it is then preferably checked if a second object feature in the second list section corresponding to the first list section of the first object feature has the predetermined similarity to the first object feature.
A method according to the invention serves for recognizing at least one object in an environmental region of a motor vehicle. Herein, a first image and a second image of the environmental region are provided by means of a camera of the motor vehicle. First object features are determined in the first image and second object features are determined in the second image, wherein the first and the second object features each describe the at least one object. For each of the first object features, it is checked in a respective association step if a second object feature is present, which has a predetermined similarity to the first object feature. If the second object feature has the predetermined similarity to the first object feature, the first object feature is associated with the second object feature. Furthermore, a plurality of first list sections is determined and the first object features are associated with the first list sections based on their position in the first image. In addition, a plurality of second list sections corresponding to the first list sections is determined and the second object features are associated with the second list sections based on their position in the second image. In the respective association step, it is checked if a second object feature in the second list section corresponding to the first list section of the first object feature has the predetermined similarity to the first object feature.
With the aid of the method, objects in the environmental region of the motor vehicle are to be recognized. Such objects can for example be obstacles, further traffic participants or roadway markings. For recognizing the objects, a sequence of images can be captured by the camera of the motor vehicle. Preferably, it is provided that a first image and at least a second image are captured by the camera, wherein first the first image and subsequently the second image are captured. In each of the images, object features are determined, which describe the objects in the environmental region. For determining the object features, a corresponding object recognition algorithm such as for example SIFT (Scale- Invariant Feature Transform) or SURF (Speeded Up Robust Features) can be used. The first object features are determined in the first image. In the same manner, the second object features are determined in the second image. For each first object feature in the first image, it is now checked if a second object feature is present in the second image, which can be associated with the first object feature. Hereto, it is checked if a second object feature is present, which has a predetermined similarity to the first object feature. Thus, it can be checked if the first object feature can be associated with a second object feature with the aid of a matching method.
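As a rough illustration of the detection step, the following Python sketch marks pixels with a strong horizontal intensity gradient as features. This toy detector, the threshold, and the plain (x, y) feature tuples are stand-ins only; a real pipeline would use a proper detector/descriptor such as SIFT or SURF.

```python
# Toy stand-in for a feature detector such as SIFT or SURF: a pixel counts
# as a "feature" when the horizontal intensity jump to its right neighbour
# exceeds a threshold. A real detector would also attach a descriptor vector.
def detect_features(image, threshold):
    features = []
    for y, row in enumerate(image):
        for x in range(len(row) - 1):
            if abs(row[x + 1] - row[x]) >= threshold:
                features.append((x, y))
    return features

# A tiny 2x3 grey-value image with one strong vertical edge:
image = [[0, 0, 10],
         [0, 0, 0]]
print(detect_features(image, threshold=5))  # [(1, 0)]
```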
According to an essential aspect of the present invention, it is provided that a plurality of first list sections is determined and the first object features are associated with the first list sections based on their position in the first image. Usually, the first object features are provided as a one-dimensional list. Now, it is provided that this list is divided into first list sections. Therein, the division into the first list sections is effected depending on the position of the first object features in the first image. In the same manner, a plurality of second list sections is present and the second object features are associated with the respective second list sections based on their position in the second image. In the association step, it is then checked, in which first list section the first object feature is arranged. The corresponding second list section is determined to this first list section. In this second list section, it is now searched for second object features, which can be associated with the first object feature. Thereby, both the first object features and the second object features are not only stored in a list, but information about the position of the object features is present by the division of the object features into the respective list sections. It can be used in the association of the object features with each other. In particular, it is provided that the number of second object features, which are checked to the effect if they can be associated with the first object feature, is thus limited. Thereby, the complexity in the respective association step can be considerably reduced. In addition, the recognition of the objects in the environmental region can be more efficiently and reliably performed.
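The division of the one-dimensional feature list into position-based list sections can be sketched as follows; the (x, y, descriptor) tuple layout and the one-row-per-section granularity are assumptions for illustration, not part of the claimed method.

```python
# Sketch of the list-section idea: instead of one flat list, features are
# grouped into sections according to the image row band they were found in,
# so the matching step later only inspects features from corresponding rows.
def bin_features_by_row(features, image_height, rows_per_section=1):
    n_sections = (image_height + rows_per_section - 1) // rows_per_section
    sections = [[] for _ in range(n_sections)]
    for feat in features:
        x, y, descriptor = feat
        sections[y // rows_per_section].append(feat)
    return sections

features = [(5, 0, "a"), (2, 0, "b"), (7, 3, "c")]
print(bin_features_by_row(features, image_height=4))
# [[(5, 0, 'a'), (2, 0, 'b')], [], [], [(7, 3, 'c')]]
```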
Preferably, the first image and the second image are each divided into areas, wherein each of the areas includes at least one line of pixels or at least one column of pixels, the first object features of an area are associated with a respective first list section and the second object features of an area are associated with a respective second list section. For determining the respective list sections, both the first image and the second image can be divided into areas. The respective areas can for example include at least one column of pixels of the respective image. Preferably, it is provided that the respective areas include at least one line of pixels of the respective image. This is in particular suitable if the object feature is associated with a pixel. If the respective object feature is associated with multiple pixels, it can also be the case that the respective area includes a plurality of lines. In the association, thus, the first object features and the second object features located in the same line in the images can be compared to each other. Thus, the area can be restricted, in which it is searched for matching second object features for the first object feature.
In a further embodiment, the first object features of an area are successively entered into the respective first list section depending on their position in the area and the second object features of an area are successively entered into the respective second list section depending on their position in the area. As already explained, the respective area can include at least one line of pixels of the respective image. The first object features of a line of the first image are for example successively entered into one of the first list sections from left to right. In the same manner, the second object features of a line of the second image are entered into the second list section from left to right. Thus, the first object features can be compared to the second object features arranged in the corresponding lines in the images in simple manner. Herein, it is taken into account that the objects usually move along the direction of the lines in the respective images. This allows efficient association of the respective first object features with the second object features. In this manner, the number of object features, which are associated with each other, can be increased.
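The left-to-right entry within a row band can be reproduced by a simple sort, assuming the (x, y, descriptor) tuple layout used in the sketches above; if the detector already emits features in scan order, this step is a no-op.

```python
# Within one row band, features are entered in scan order; if the detector
# emits them unordered, sorting by the x coordinate reproduces the
# left-to-right entry described above.
def order_sections_left_to_right(sections):
    return [sorted(sec, key=lambda feat: feat[0]) for sec in sections]

print(order_sections_left_to_right([[(5, 0, "a"), (2, 0, "b")], []]))
# [[(2, 0, 'b'), (5, 0, 'a')], []]
```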
According to an embodiment, the first object features are stored in a two-dimensional data field, wherein the lines of the data field describe the respective first list sections, and the second object features are stored in a two-dimensional data field, wherein the lines of the data field describe the respective second list sections. In other words, both the first object features and the second object features can be stored in a two-dimensional data field or array. This can already be effected in determining the object features with the aid of a corresponding object recognition algorithm. Thus, compared to the known method, in which the object features are stored in a one-dimensional list, information about the position and in particular the height of the object features in the image is additionally present.
According to an alternative embodiment, the first object features are stored in a one-dimensional data field and an additional data field is determined, which associates the first object features with the first list sections. Further, the second object features are stored in a one-dimensional data field and an additional data field is determined, which associates the second object features with the second list sections. It can also be provided that both the first and the second object features are stored in a one-dimensional data field in known manner. In this case, an additional data field or an additional array can be determined, which describes which of the object features is to be associated with which of the list sections. In this manner, information about the position and height of the respective object features in the image, respectively, can be provided.
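A minimal sketch of this alternative layout, under the same assumed (x, y, descriptor) tuple format: the features stay in a flat one-dimensional list, and a parallel index field records which list section each feature belongs to.

```python
# Alternative layout: a flat 1-D feature list plus an additional data field
# that maps each feature to its list section (here one section per image row).
def build_section_index(features, rows_per_section=1):
    flat = list(features)
    section_of = [y // rows_per_section for (_x, y, _d) in flat]
    return flat, section_of

flat, section_of = build_section_index([(5, 0, "a"), (7, 3, "c")])
print(section_of)  # [0, 3]
```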
Furthermore, it is advantageous if in the respective association step it is checked for a predetermined number of second object features in the second list section corresponding to the first list section of the first object feature if they have the predetermined similarity to the first object feature. Thus, a first object feature is selected from the first object features of the first image, with which a second object feature from the second image is to be associated. Hereto, it is determined, in which first list section the selected first object feature is arranged. The corresponding second list section is then determined in the data field of second object features. In addition, the second object features in this second list section are checked to the effect if they have the predetermined similarity to the first object feature. Therein, it is in particular provided that a predetermined number of second object features are checked in the second list section to the effect if they have the predetermined similarity to the first object feature. In this manner, the area is limited, in which it is searched for similar second object features. Thus, the computational effort can be considerably reduced.
According to a further embodiment, second object features in a predetermined number of adjacent second list sections are checked in the respective association step starting from the second list section corresponding to the first list section of the first object feature. In particular, it is provided that if a second object feature was not found in the corresponding second list section, which has the predetermined similarity to the first object feature, it is searched for second object features in the adjacent second list sections, which have the predetermined similarity to the first object feature. The number of adjacent second list sections, in which it is searched for the second object feature with the predetermined similarity, is also restricted. Within the adjacent second list sections, the number of second object features, which are checked for the predetermined similarity, can be limited. This allows considerable reduction of the computational effort in the respective association step.
Further, it is in particular provided that the predetermined number of second object features and/or the predetermined number of adjacent second list sections is determined based on a predetermined distance value. If a first object feature is selected, with which a second object feature is to be associated, first, the corresponding second list section is determined. In this second list section, the predetermined number of second object features is checked to the effect if they have the predetermined similarity to the first object feature. In addition, the second object features in the adjacent list sections are examined to the effect if they have the similarity. By the distance value, the search area can be limited and thus computing time can be saved.
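A hedged sketch of the bounded association step follows: a first-image feature is compared only against candidates in the corresponding second-image list section and, if nothing similar is found there, in up to a limited number of adjacent sections, with at most a fixed number of candidates examined per section. The Hamming-style descriptor distance and the parameter names are illustrative choices, not taken from the source.

```python
# Bounded association: search the corresponding list section first, then up
# to `max_sections` adjacent sections, examining at most `max_candidates`
# candidates per section. `threshold` is the required similarity bound.
def descriptor_distance(a, b):
    return sum(c1 != c2 for c1, c2 in zip(a, b))

def match_feature(first_feat, second_sections, max_candidates,
                  max_sections, threshold):
    _x, y, desc = first_feat
    best, best_dist = None, threshold
    for offset in range(max_sections + 1):
        for row in {y - offset, y + offset}:        # corresponding row first
            if 0 <= row < len(second_sections):
                for cand in second_sections[row][:max_candidates]:
                    dist = descriptor_distance(desc, cand[2])
                    if dist < best_dist:
                        best, best_dist = cand, dist
        if best is not None:
            break   # a sufficiently similar feature was found; stop searching
    return best

second_sections = [[(4, 0, "abcd")], [], [(9, 2, "abzz")]]
print(match_feature((5, 0, "abcf"), second_sections,
                    max_candidates=3, max_sections=1, threshold=2))
# (4, 0, 'abcd')
```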
According to a further configuration, a position of the object and/or a movement of the object in the environmental region are determined based on the association of the respective first object features with the second object features. If the motor vehicle or the camera is moved relatively to the at least one object, a relative position of the object in the environmental region can be determined based on the association of the first object features with the second object features. Herein, the association of the first object features with the second object features can be used to perform a three-dimensional object recognition. In addition, a movement of the object can be determined based on the association of the object features with each other. Thus, the object in the environmental region of the motor vehicle can be tracked. Further, the association of the object features can be used for simultaneous localization and map creation (SLAM).
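As a simplified stand-in for the position/movement determination named above, the average pixel displacement of the associated feature pairs gives a coarse motion cue; a real system would feed such correspondences into triangulation or a SLAM back end, which is beyond this sketch.

```python
# Coarse motion estimate from associated feature pairs: average displacement
# between each first-image feature and its associated second-image feature.
def mean_displacement(pairs):
    dx = sum(b[0] - a[0] for a, b in pairs) / len(pairs)
    dy = sum(b[1] - a[1] for a, b in pairs) / len(pairs)
    return dx, dy

pairs = [((0, 0, "a"), (2, 0, "a")), ((1, 1, "b"), (3, 1, "b"))]
print(mean_displacement(pairs))  # (2.0, 0.0)
```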
A camera system according to the invention for a motor vehicle is adapted to perform a method according to the invention and the advantageous configuration thereof. The camera system can include at least one camera. Further, the camera system can include a computing device, by means of which the first object features in the first image and the second object features in the second image can be determined. In addition, the computing device can be adapted to divide the respective object features into the respective list sections depending on their position.
A motor vehicle according to the invention includes a camera system according to the invention. The motor vehicle is in particular formed as a passenger car.
The preferred embodiments presented with respect to the method according to the invention and the advantages thereof correspondingly apply to the camera system according to the invention as well as to the motor vehicle according to the invention.
Further features of the invention are apparent from the claims, the figures and the description of figures. The features and feature combinations mentioned above in the description as well as the features and feature combinations mentioned below in the description of figures and/or shown in the figures alone are usable not only in the respectively specified combination, but also in other combinations or alone without departing from the scope of the invention. Thus, implementations are also to be considered as encompassed and disclosed by the invention, which are not explicitly shown in the figures and explained, but arise from and can be generated by separated feature combinations from the explained implementations. Implementations and feature combinations are also to be considered as disclosed, which thus do not have all of the features of an originally formulated independent claim. Moreover, implementations and feature combinations are to be considered as disclosed, in particular by the implementations set out above, which extend beyond or deviate from the feature combinations set out in the relations of the claims.
Now, the invention is explained in more detail based on preferred embodiments as well as with reference to the attached drawings.
There show: Fig. 1 a motor vehicle according to an embodiment of the invention, which has a camera system with a plurality of cameras;
Fig. 2 a first image and a second image, which are provided by one of the cameras;
Fig. 3 object features recognized in the first image, which describe objects in the environmental region of the motor vehicle;
Fig. 4 a schematic flow diagram of a method for recognizing objects in the images;
Fig. 5 a list of first object features, which are associated with a list of second object features;
Fig. 6 a two-dimensional data field of first object features, which are associated with second object features in a further two-dimensional data field; and
Fig. 7 a schematic flow diagram of a method for recognizing objects.
In the figures, identical and functionally identical elements are provided with the same reference characters.
Fig. 1 shows a motor vehicle 1 according to an embodiment of the present invention in a plan view. Presently, the motor vehicle 1 is formed as a passenger car. The motor vehicle 1 includes a camera system 2, which serves for assisting a driver in driving the motor vehicle 1. The camera system 2 includes a plurality of cameras 4, which are arranged distributed at the motor vehicle 1. In the present example, the camera system 2 includes four cameras 4, wherein one of the cameras 4 is arranged in a rear area 5 of the motor vehicle 1, one of the cameras 4 is arranged in a front area 7 of the motor vehicle 1 and two cameras 4 are arranged in respective lateral areas 6 of the motor vehicle 1. Images 10, 11 of an environmental region 8 of the motor vehicle 1 can be provided with the aid of the cameras 4. In particular, the images 10, 11 can describe objects 9 in the environmental region 8. Moreover, the camera system 2 includes a computing device 3, by means of which the images 10, 11 of the cameras 4 can be evaluated. The cameras 4 are connected to the computing device 3 for data transmission. The objects 9 in the images 10, 11 of the cameras 4 can be recognized by means of the computing device 3.
Fig. 2 shows a first image 10 and a second image 11, which are provided by one of the cameras 4. The images 10, 11 show objects 9 in the form of parked vehicles and a pedestrian. In the images 10, 11, object features 12, 13 are recognized with the aid of a corresponding object recognition algorithm. Therein, first object features 12 are exemplarily shown in the first image 10, which describe the objects 9. In the second image 11, second object features 13 are also exemplarily shown, which also describe the objects 9. Therein, it is provided that the first object features 12 from the first image 10 are associated with the second object features 13 in the second image 11.
Fig. 3 schematically shows the individual first object features 12, which have been recognized in the first image 10. The first object features 12 can be recognized by means of a corresponding object recognition algorithm. Therein, it can be provided that the respective object features 12 are associated with a pixel or multiple pixels of the first image 10. The first image 10 is divided into individual areas 14, wherein the individual areas 14 correspond to the lines 15 of the first image 10. The second object features 13 in the second image 11 can be determined in an analogous manner.
Fig. 4 shows a schematic flow diagram of a method for recognizing objects 9 according to the prior art. In a first step S1, the first image 10 is provided. In a step S2, the first object features 12 are detected and extracted in the first image 10. In a step S3, a list 16 with the first object features 12 is then determined. In a step S1', the second image 11 is provided. In a step S2', the second object features 13 are detected and extracted in the second image 11. Further, a list 17 with the second object features 13 is determined in a step S3'. In a step S4, the first object features 12 are associated with the second object features 13. Finally, pairs of matching object features 12, 13 are output in a step S5.
Fig. 5 schematically shows the list 16 with the first object features 12 and the list 17 with the second object features 13. Presently, a method according to the prior art is illustrated, which describes how first object features 12 are associated with the second object features 13. Here, it is provided that each of the first object features 12 is compared to each of the second object features 13 and it is respectively checked if the second object feature 13 has a predetermined similarity to the first object feature 12. Hereto, a corresponding matching method can be used. In this method, high computational effort arises since the complexity increases quadratically with the number of object features. According to an embodiment of the invention, it is provided that both the list 16 with the first object features 12 and the list 17 with the second object features 13 are divided into respective list sections 18, 19. This is schematically shown in Fig. 6. Herein, it is apparent that the first object features 12 are entered in a two-dimensional array. The second object features 13 are also entered in a two-dimensional array. Therein, the first object features 12 are divided into the first list sections 18 and the second object features 13 are divided into the second list sections 19. The respective list sections 18, 19 correspond to the lines of the respective two-dimensional array. Therein, the list sections 18, 19 are determined based on the lines 15 of the images 10, 11. Thus, all of the object features 12, 13 of a line 15 are successively entered into a line of the two-dimensional array.
In the association steps, it is checked if a second object feature 13 is present for one of the first object features 12, which has a predetermined similarity to the first object feature 12. Presently, a first object feature 12 is selected; it is indicated by the star 20. Subsequently, it is determined, in which list section 18 the first object feature 12 is arranged. The corresponding list section 19 is determined in the two-dimensional array of the second object features 13. A predetermined number of second object features 13 are checked in this corresponding list section 19 to the effect if they have a predetermined similarity to the selected first object feature 12. Therein, the check is begun starting from the second object feature 13 at the beginning of the second list section 19. This second object feature 13 is presently indicated by the star 21.
In this second list section 19, only a predetermined number of second object features 13 are examined. The predetermined number of second object features 13 is preset by a distance value d. If a second object feature 13 has not been found in this second list section 19, which has a predetermined similarity to the first object feature 12, the adjacent second list sections 19 are examined. Therein, the number of the adjacent second list sections 19, in which the second object features 13 are examined, is also limited by the distance value d. Overall, the area, in which it is searched for matching second object features 13, can be restricted. Thereby, the complexity in the association of the object features 12, 13 can be considerably reduced. In this case, the complexity increases only linearly with the number of object features 12, 13.
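The complexity argument can be made concrete with a back-of-envelope comparison count; the formulas below are an illustration of the quadratic-versus-linear behaviour, not a statement from the source.

```python
# Brute force compares every first feature with every second feature (n * m),
# while the sectioned search caps the work per first feature by the distance
# value d, so the total grows linearly in n for a fixed d.
def brute_force_comparisons(n_first, n_second):
    return n_first * n_second

def sectioned_comparisons(n_first, features_per_row, d):
    rows_scanned = 2 * d + 1                 # own row plus d rows either side
    return n_first * rows_scanned * features_per_row

print(brute_force_comparisons(1000, 1000))   # 1000000
print(sectioned_comparisons(1000, 5, 2))     # 25000
```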
Fig. 7 shows a schematic flow diagram of a method for recognizing the objects 9 in the environmental region 8 according to an embodiment of the invention. In a step S6, a first object feature 12 is selected from the two-dimensional array. In a step S7, a value k, which describes the columns in the two-dimensional array of second object features 13, is set to 0. Further, a value h, which describes the lines in the two-dimensional array of the second object features 13, is set to the value of the line, in which the selected first object feature 12 is also located. Furthermore, a threshold value is fixed, which describes the distance value d. Moreover, it is checked if the second object feature 13 at the beginning of the corresponding second list section 19 has a predetermined similarity to the selected first object feature 12. In a step S8, a distance between the selected first object feature 12 and the second object feature 13, to which the comparison was performed, is determined. In a step S9, it is checked if this distance is less than the threshold value. If this is the case, the threshold value is set to the distance in a step S10. If this is not the case, the method is continued in a step S11. Herein, it is checked if all of the second object features 13 in the list section 19 have already been checked. If this is not the case, the method is again continued in the step S8. If this is the case, the method is continued in a step S12. Herein, it is checked if the second object features 13 in the adjacent second list sections 19 have been checked. If this is not the case, the method is continued with the step S8. If this is the case, the method is continued in a step S13. Herein, it is checked if all of the second object features 13 have been checked. If this has not been effected, the method is again continued in the step S6. If this is the case, the method is finally terminated in a step S14.
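The Fig. 7 loop for one selected first object feature can be rendered compactly as follows: the scanned line starts at the feature's own line, the threshold shrinks to every smaller distance found (step S10), and lines up to the distance value d away are scanned before the next feature is selected. The scalar "descriptor" and the absolute-difference distance are illustrative assumptions.

```python
# Running-minimum search over the corresponding and adjacent lines of the
# second feature array, mirroring steps S8-S12 of Fig. 7.
def fig7_best_match(first_feat, second_array, d, initial_threshold):
    _x, y, desc = first_feat
    best, threshold = None, initial_threshold
    for h in range(max(0, y - d), min(len(second_array), y + d + 1)):
        for cand in second_array[h]:
            dist = abs(desc - cand[2])
            if dist < threshold:     # steps S9/S10: keep the running minimum
                best, threshold = cand, dist
    return best

second_array = [[(0, 0, 10)], [(0, 1, 12)], [(0, 2, 99)]]
print(fig7_best_match((0, 1, 11), second_array, d=1, initial_threshold=5))
# (0, 0, 10)
```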

Claims

1. Method for recognizing at least one object (9) in an environmental region (8) of a motor vehicle (1), in which a first image (10) and a second image (11) of the environmental region (8) are provided by means of a camera (4) of the motor vehicle (1), first object features (12) are determined in the first image (10) and second object features (13) are determined in the second image (11), wherein the first and the second object features (12, 13) each describe the at least one object (9), for each of the first object features (12) it is checked in a respective association step if a second object feature (13) is present, which has a predetermined similarity to the first object feature (12), and if the second object feature (13) has the predetermined similarity to the first object feature (12), the first object feature (12) is associated with the second object feature (13),
characterized in that
a plurality of first list sections (18) is determined and the first object features (12) are associated with the first list sections (18) based on their position in the first image (10), a plurality of second list sections (19) corresponding to the first list sections (18) is determined and the second object features (13) are associated with the second list sections (19) based on their position in the second image (11), and it is checked in the respective association step if a second object feature (13) in the second list section (19) corresponding to the first list section (18) of the first object feature (12) has the predetermined similarity to the first object feature (12).
2. Method according to claim 1 ,
characterized in that
the first and the second image (10, 11) are each divided into areas (14), wherein each of the areas (14) includes at least one line (15) of pixels or at least one column of pixels, the first object features (12) of an area (14) are associated with a respective first list section (18) and the second object features (13) of an area (14) are associated with a respective second list section (19).
3. Method according to claim 2,
characterized in that
the first object features (12) of an area (14) are successively entered into the respective first list section (18) depending on their position in the area (14) and the second object features (13) of an area (14) are successively entered into the respective second list section (19) depending on their position in the area (14).
4. Method according to any one of the preceding claims,
characterized in that
the first object features (12) are stored in a two-dimensional data field, wherein the lines of the data field describe the respective first list sections (18), and the second object features (13) are stored in a two-dimensional data field, wherein the lines of the data field describe the respective second list sections (19).
5. Method according to any one of claims 1 to 3,
characterized in that
the first object features (12) are stored in a one-dimensional data field, an additional data field is determined, which associates the first object features (12) with the first list sections (18), the second object features (13) are stored in a one-dimensional data field and an additional data field is determined, which associates the second object features (13) with the second list sections (19).
6. Method according to any one of the preceding claims,
characterized in that
in the respective association step it is checked for a predetermined number of second object features (13) in the second list section (19) corresponding to the first list section (18) of the first object feature (12) if they have the predetermined similarity to the first object feature (12).
7. Method according to any one of the preceding claims,
characterized in that
in the respective association step, starting from the second list section (19), which corresponds to the first list section (18) of the first object feature (12), second object features (13) are checked in a predetermined number of adjacent second list sections (19).
8. Method according to claim 6 or 7,
characterized in that
the predetermined number of second object features (13) and/or the predetermined number of adjacent second list sections (19) are determined based on a predetermined distance value (d).
9. Method according to any one of the preceding claims,
characterized in that
a position of the object (9) and/or a movement of the object (9) in the environmental region (8) are determined based on the association of the respective first object features (12) with the second object features (13).
10. Camera system (2) for a motor vehicle (1), which is adapted to perform a method according to any one of the preceding claims.
11. Motor vehicle (1) with a camera system (2) according to claim 10.
PCT/EP2017/069997 2016-08-18 2017-08-08 Method for recognizing at least one object in an environmental region of a motor vehicle based on object features from images, camera system as well as motor vehicle Ceased WO2018033424A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
DE102016115327.3A DE102016115327A1 (en) 2016-08-18 2016-08-18 Method for detecting at least one object in an environmental region of a motor vehicle on the basis of object features from images, camera system and motor vehicle
DE102016115327.3 2016-08-18

Publications (1)

Publication Number Publication Date
WO2018033424A1 true WO2018033424A1 (en) 2018-02-22

Family

ID=59626604

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/EP2017/069997 Ceased WO2018033424A1 (en) 2016-08-18 2017-08-08 Method for recognizing at least one object in an environmental region of a motor vehicle based on object features from images, camera system as well as motor vehicle

Country Status (2)

Country Link
DE (1) DE102016115327A1 (en)
WO (1) WO2018033424A1 (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20210043456A (en) * 2019-10-11 2021-04-21 도요타지도샤가부시키가이샤 Vehicle parking assist apparatus
US11810368B2 (en) 2021-01-27 2023-11-07 Toyota Jidosha Kabushiki Kaisha Parking assist apparatus

Families Citing this family (1)

Publication number Priority date Publication date Assignee Title
CN109819207B (en) * 2018-12-25 2020-07-21 深圳市天彦通信股份有限公司 Target searching method and related equipment

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2023267A1 (en) * 2007-08-07 2009-02-11 Honda Motor Co., Ltd Object type determination apparatus, vehicle and object type determination method
US20100097456A1 (en) * 2008-04-24 2010-04-22 Gm Global Technology Operations, Inc. Clear path detection using a hierachical approach
EP2463843A2 (en) * 2010-12-07 2012-06-13 Mobileye Technologies Limited Method and system for forward collision warning
US20160063330A1 (en) * 2014-09-03 2016-03-03 Sharp Laboratories Of America, Inc. Methods and Systems for Vision-Based Motion Estimation
US20160093065A1 (en) * 2014-09-30 2016-03-31 Valeo Schalter Und Sensoren Gmbh Method for detecting an object in an environmental region of a motor vehicle, driver assistance system and motor vehicle

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE102008036219A1 (en) * 2008-08-02 2010-02-04 Bayerische Motoren Werke Aktiengesellschaft Method for identification of object i.e. traffic sign, in surrounding area of e.g. passenger car, involves determining similarity measure between multiple characteristics of image region and multiple characteristics of characteristic set
DE102014111948A1 (en) * 2014-08-21 2016-02-25 Connaught Electronics Ltd. Method for determining characteristic pixels, driver assistance system and motor vehicle


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
RICHARD SZELISKI: "Computer Vision: Algorithms and Applications", COMPUTER VISION : ALGORITHMS AND APPLICATIONS, 3 September 2010 (2010-09-03), pages 1 - 979, XP055317897, ISBN: 978-1-84882-935-0, Retrieved from the Internet <URL:http://szeliski.org/Book/drafts/SzeliskiBook_20100903_draft.pdf> [retrieved on 20161109] *

Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20210043456A (en) * 2019-10-11 2021-04-21 도요타지도샤가부시키가이샤 Vehicle parking assist apparatus
KR102387689B1 (en) * 2019-10-11 2022-04-18 도요타지도샤가부시키가이샤 Vehicle parking assist apparatus
KR20220052881A (en) * 2019-10-11 2022-04-28 도요타지도샤가부시키가이샤 Vehicle parking assist apparatus
US11458961B2 (en) 2019-10-11 2022-10-04 Toyota Jidosha Kabushiki Kaisha Vehicle parking assist apparatus
KR102502115B1 (en) 2019-10-11 2023-02-21 도요타지도샤가부시키가이샤 Vehicle parking assist apparatus
KR20230029726A (en) * 2019-10-11 2023-03-03 도요타지도샤가부시키가이샤 Vehicle parking assist apparatus
KR102560484B1 (en) 2019-10-11 2023-07-31 도요타지도샤가부시키가이샤 Vehicle parking assist apparatus
US11718343B2 (en) 2019-10-11 2023-08-08 Toyota Jidosha Kabushiki Kaisha Vehicle parking assist apparatus
US12071173B2 (en) 2019-10-11 2024-08-27 Toyota Jidosha Kabushiki Kaisha Vehicle parking assist apparatus
CN119078802A (en) * 2019-10-11 2024-12-06 丰田自动车株式会社 Vehicle parking assistance device
CN119116929A (en) * 2019-10-11 2024-12-13 丰田自动车株式会社 Vehicle parking assistance device
US11810368B2 (en) 2021-01-27 2023-11-07 Toyota Jidosha Kabushiki Kaisha Parking assist apparatus

Also Published As

Publication number Publication date
DE102016115327A1 (en) 2018-02-22

Similar Documents

Publication Publication Date Title
EP3007099B1 (en) Image recognition system for a vehicle and corresponding method
AU2017302833B2 (en) Database construction system for machine-learning
CN107273788B (en) Imaging systems that perform lane detection in vehicles vs. vehicle imaging systems
US9042639B2 (en) Method for representing surroundings
EP3690713B1 (en) Method and device for detecting vehicle occupancy using passengers keypoint detected through image analysis for humans status recognition
US20090060273A1 (en) System for evaluating an image
EP3699814A1 Method and device for adjusting driver assistance apparatus automatically for personalization and calibration according to driver's status
KR102031503B1 (en) Method and system for detecting multi-object
CN105006175B (en) The method and system of the movement of initiative recognition traffic participant and corresponding motor vehicle
CN113269163B (en) Stereo parking space detection method and device based on fisheye image
DE102009009047A1 (en) Method for object detection
WO2017042224A1 (en) Method for generating an environmental map of an environment of a motor vehicle based on an image of a camera, driver assistance system as well as motor vehicle
US10803605B2 (en) Vehicle exterior environment recognition apparatus
KR20090098167A (en) Lane Recognition Method and System Using Distance Sensor
WO2018033424A1 (en) Method for recognizing at least one object in an environmental region of a motor vehicle based on object features from images, camera system as well as motor vehicle
John et al. A reliable method for detecting road regions from a single image based on color distribution and vanishing point location
Yeol Baek et al. Scene understanding networks for autonomous driving based on around view monitoring system
EP3029602A1 (en) Method and apparatus for detecting a free driving space
DE102018114963A1 (en) Relocation process of a vehicle using different 3D clouds
US20150379372A1 (en) Object recognition apparatus
US10982967B2 (en) Method and device for fast detection of repetitive structures in the image of a road scene
JP6660367B2 (en) Outside environment recognition device
Neto et al. A simple and efficient road detection algorithm for real time autonomous navigation based on monocular vision
JP5195377B2 (en) Lane boundary deriving device
Lee et al. Stereo vision-based obstacle detection using dense disparity map

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17752091

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 17752091

Country of ref document: EP

Kind code of ref document: A1