GB2624653A - A system and method for object detection from a curved mirror - Google Patents
- Publication number: GB2624653A (application GB2217545.9A)
- Authority: GB (United Kingdom)
- Prior art keywords: image, subset, images, detection, detector module
- Legal status: Pending (an assumption, not a legal conclusion; no legal analysis has been performed)
Classifications
- G06T7/254: Image analysis; analysis of motion involving subtraction of images
- G06T7/194: Image analysis; segmentation and edge detection involving foreground-background segmentation
Abstract
A system and a computer-implemented method for object detection from a curved mirror are disclosed. The system comprises an image capturing device of a vehicle arranged to obtain a plurality of images as a set, and a processing unit of the vehicle coupled to the image capturing device and arranged to receive the plurality of images from the image capturing device. The processing unit obtains a first subset of images from the plurality of images, with the remainder forming a second subset of images. The processing unit includes a first object detector module arranged to process the first subset of images and to output, for each image of the first subset, whether the image has a labelled detection. The processing unit also includes a second object detector module arranged to process the second subset of images by comparing each image of the second subset against the preceding image and performing background subtraction to determine whether the image contains a difference from the preceding image, the processing unit being capable of performing the object detection based on the difference.
Description
A System and Method for Object Detection From A Curved Mirror
TECHNICAL FIELD
The present disclosure relates broadly to a system for object detection from a curved mirror and to a computer-implemented method of object detection from a curved mirror.
BACKGROUND
Curved mirrors or blind spot mirrors are typically installed at traffic intersections or at road positions such as those having sharp curves where there is poor visibility of traffic from different directions to a vehicle. Such curved mirrors may provide the vehicle with a view of blind spots and the like that exist at these road positions. As would be appreciated, the curved mirrors are external to the vehicles.
It has been proposed for image capturing devices, such as video cameras, to be provided on vehicles to detect objects that may appear in curved mirrors so as to provide additional information and/or warnings to users of the vehicles. With usage of such image capturing devices, it has been proposed for image processing methods to be used on images for object detection. For example, such image processing methods may be machine learning or artificial intelligence methods.
It has been recognised that for object detection from a curved mirror, an object size is typically calculated image by image or, from a video, frame by frame. The object size calculation may be inaccurate when the object is too small in the image, for example due to the object being at a distance away. However, to prevent a collision between a vehicle and the object, the object size needs to be calculated before the object moves close to the vehicle.
Furthermore, existing methods to detect objects from a curved mirror may take a substantial amount of time before an object is determined to be detected.
Hence, there exists a need for a system for object detection from a curved mirror and for a computer-implemented method of object detection from a curved mirror that seek to address at least one of the above problems.
SUMMARY
In accordance with an aspect of the present disclosure, there is provided a system for object detection from a curved mirror, the system comprising: an image capturing device of a vehicle, the image capturing device arranged to obtain a plurality of images as a set; a processing unit of the vehicle, the processing unit being coupled to the image capturing device and arranged to receive the plurality of images from the image capturing device; characterised in that the system further comprises the processing unit being further arranged to obtain a first subset of images from the plurality of images with a remainder of the plurality of images forming a second subset of images; the processing unit including a first object detector module, the first object detector module arranged to process the first subset of images, the first object detector module also arranged to output for each image of the first subset of images whether the each image of the first subset of images has a labelled detection; the processing unit including a second object detector module, the second object detector module arranged to process the second subset of images, the second object detector module arranged to compare each image of the second subset of images against a preceding image of the each image of the second subset of images and to perform background subtraction to determine whether the each image of the second subset of images contains a difference from the preceding image of the each image of the second subset; and wherein the processing unit is capable of performing the object detection based on the difference.
If it is determined by the second object detector module that the each image of the second subset of images contains a difference from the preceding image of the each image of the second subset, the system may further comprise the second object detector module being arranged to determine whether the preceding image of the each image of the second subset has a labelled detection.
If it is determined by the second object detector module that the preceding image of the each image of the second subset has a labelled detection, the system may further comprise the second object detector module being arranged to calculate the difference of the each image of the second subset of images and the preceding image of the each image of the second subset; wherein the second object detector module is further arranged to match the calculated difference to a pixel size rule; and if the calculated difference matches the pixel size rule, the second object detector module is arranged to output for the each image of the second subset of images that the each image of the second subset of images has a labelled detection; and wherein the labelled detection of the each image of the second subset of images is based on the labelled detection of the preceding image of the each image of the second subset.
If the calculated difference does not match the pixel size rule, the system may further comprise the second object detector module being arranged to output for the each image of the second subset of images that the each image of the second subset of images has an unlabelled detection.
If it is determined by the second object detector module that the preceding image of the each image of the second subset does not have a labelled detection, the system may further comprise the second object detector module being arranged to output for the each image of the second subset of images that the each image of the second subset of images has an unlabelled detection.
The first object detector module may be trained using machine learning to perform single shot detection to process the first subset of images.
The system may further comprise an action unit and wherein the processing of the first subset of images and the processing of the second subset of images are used to determine one or more instructions to be transmitted to the action unit.
In accordance with another aspect of the present disclosure, there is provided a computer-implemented method of object detection from a curved mirror, the method comprising: obtaining a plurality of images as a set using an image capturing device of a vehicle; obtaining a first subset of images from the plurality of images, a remainder of the plurality of images forming a second subset of images; characterised in that the method comprises processing the first subset of images and outputting for each image of the first subset of images whether the each image of the first subset of images has a labelled detection; processing the second subset of images and comparing each image of the second subset of images against a preceding image of the each image of the second subset of images; performing background subtraction to determine whether the each image of the second subset of images contains a difference from the preceding image of the each image of the second subset; and performing the object detection based on the difference.
If it is determined that the each image of the second subset of images contains a difference from the preceding image of the each image of the second subset, the method may further comprise determining whether the preceding image of the each image of the second subset has a labelled detection.
If it is determined that the preceding image of the each image of the second subset has a labelled detection, the method may further comprise calculating the difference of the each image of the second subset of images and the preceding image of the each image of the second subset; matching the calculated difference to a pixel size rule; if the calculated difference matches the pixel size rule, outputting for the each image of the second subset of images that the each image of the second subset of images has a labelled detection; and wherein the labelled detection of the each image of the second subset of images is based on the labelled detection of the preceding image of the each image of the second subset.
If the calculated difference does not match the pixel size rule, the method may further comprise outputting for the each image of the second subset of images that the each image of the second subset of images has an unlabelled detection.
If it is determined that the preceding image of the each image of the second subset does not have a labelled detection, the method may further comprise outputting for the each image of the second subset of images that the each image of the second subset of images has an unlabelled detection.
The first subset of images may be processed using single shot detection by a trained machine learning model.
The processing of the first subset of images may comprise using a first object detector module of the vehicle and the processing of the second subset of images may comprise using a second object detector module of the vehicle.
The method may further comprise using the processing of the first subset of images and the processing of the second subset of images to determine one or more instructions to be transmitted to an action unit of the vehicle.
In accordance with another aspect of the present disclosure, there is provided a computer readable storage medium having stored thereon software instructions that, when executed by a processing unit of a system for object detection from a curved mirror, cause the processing unit to perform a method of object detection from a curved mirror as disclosed herein.
BRIEF DESCRIPTION OF THE DRAWINGS
Exemplary embodiments of the present disclosure will be better understood and readily apparent to one of ordinary skill in the art from the following written description, by way of example only, and in conjunction with the drawings, in which:
FIG. 1 is a schematic block diagram for illustrating a system for object detection from a curved mirror in an exemplary embodiment.
FIG. 2 is a schematic flowchart for illustrating integration of Single Shot Detection (SSD) and Background Subtraction Method (BSM) for object detection from a curved mirror in an exemplary embodiment.
FIGs. 3A to 3C show a series of images for illustrating an exemplary implementation of a method for object detection from a curved mirror of FIG. 2.
FIG. 4 shows an example of a created dataset for evaluating performance of an exemplary embodiment.
FIG. 5 is an exemplary illustration of a basic unit for machine learning in relation to Single Shot Detection (SSD).
FIG. 6 is a schematic flowchart for illustrating an example of the Background Subtraction Method.
FIG. 7 is a schematic flowchart for illustrating an example for a calculation of a difference between a current image and a preceding image and a matching of the calculated difference to a pixel size rule.
FIG. 8 is a schematic flowchart for illustrating a computer-implemented method of object detection from a curved mirror in an exemplary embodiment.
DETAILED DESCRIPTION
Exemplary embodiments described herein may provide a system and a method for object detection from a curved or blind spot mirror.
FIG. 1 is a schematic block diagram for illustrating a system for object detection from a curved mirror in an exemplary embodiment. The system 102 is disposed onboard a vehicle. The system 102 comprises an image capturing device 104 and a processing unit 106 coupled to the image capturing device 104.
In the exemplary embodiment, the image capturing device 104 of the vehicle may be a camera that is disposed in a vicinity of a rear-view mirror at a front of the vehicle body or on a front grille of the vehicle. The image capturing device 104 is disposed so as to have an image capture area at a predetermined angle toward the front of the vehicle. In the exemplary embodiment, the processing unit 106 may be an electronic control unit (ECU) of the vehicle. The processing unit 106 is provided to control the functions of the components of the system 102.
In the exemplary embodiment, the processing unit 106 is coupled to, or comprises, a first object detector module 108 and a second object detector module 110. The respective outputs 112, 114 of the first object detector module 108 and the second object detector module 110 are coupled to an action unit 116.
In the exemplary embodiment, a cross road or a traffic intersection may be identified by the image capturing device 104. In some exemplary embodiments, map information may be provided to the processing unit 106 such that the processing unit 106 may use the map information, e.g. matching a map against Global Positioning System (GPS) coordinates, to identify a cross road or a traffic intersection. At such a cross road or traffic intersection, a curved mirror or its curved mirror area may be identified by the image capturing device 104. For example, the processing unit 106 may perform a curved mirror detection function to identify or seek the curved mirror area. For example, in a video captured by the image capturing device 104, image processing may be performed e.g. using a shape, color, etc. of stored curved mirror data to identify the curved mirror area.
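As a minimal sketch of one possible shape-based identification of the curved mirror area, the following Python snippet uses OpenCV's Hough circle transform to find a circular mirror outline in a frame. The disclosure does not prescribe this technique; the helper name and all parameter values are illustrative assumptions.

```python
import cv2
import numpy as np

def find_mirror_area(frame_bgr):
    """Locate a candidate curved-mirror region by its circular outline.

    Returns (x, y, r) of the strongest circle candidate, or None.
    Parameter values are illustrative, not taken from the disclosure.
    """
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    gray = cv2.medianBlur(gray, 5)  # suppress noise before edge detection
    circles = cv2.HoughCircles(
        gray, cv2.HOUGH_GRADIENT, dp=1.2, minDist=100,
        param1=100, param2=40, minRadius=20, maxRadius=150)
    if circles is None:
        return None
    x, y, r = np.round(circles[0, 0]).astype(int)  # strongest candidate first
    return x, y, r
```

In practice, matching against stored curved mirror data (shape, colour, etc.) as described above could refine or replace this purely geometric search.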
Thereafter, object detection from the curved mirror is performed. For example, for a vehicle, a safety function process may move from detection of a curved mirror to a phase of object detection on/from the curved mirror.
In the exemplary embodiment, the image capturing device 104 obtains a plurality of images as a set. For example, three frame images (e.g. frame by frame images from a video) are obtained. The processing unit 106 receives the plurality of images from the image capturing device 104. The processing unit 106 may then obtain a first subset of images from the plurality of images with a remainder of the plurality of images forming a second subset of images. For example, the first subset may comprise a first frame image from the three frame images.
In the exemplary embodiment, the processing unit 106 uses the first object detector module 108 to process the first subset of images and the first object detector module 108 may output for each image of the first subset of images whether the each image of the first subset of images has a labelled detection. For example, the first object detector module 108 may use Single Shot Detection (SSD) to process the first frame image from the three frame images and to label whether the first frame image has a labelled detection or whether there is no detection. The processing unit 106 uses the second object detector module 110 to process the second subset of images and the second object detector module 110 may compare each image of the second subset of images against a preceding image of the each image of the second subset of images (i.e. against a preceding image of this each image from the second subset of images) and to perform background subtraction to determine whether the each image of the second subset of images contains a difference from the preceding image (of the each image of the second subset). For example, the second object detector module 110 may use a Background Subtraction Method (BSM) to process the second frame image from the three frame images against the first frame image (processed by the first object detector module 108 using SSD) and to determine whether the second frame image contains a difference as compared to the first frame image (which is a preceding image to the second frame image). As another example, the second object detector module 110 may use the Background Subtraction Method (BSM) to process the third frame image from the three frame images against the second frame image (processed by the second object detector module 110 using BSM) and to determine whether the third frame image contains a difference as compared to the second frame image (which is a preceding image to the third frame image).
In the exemplary embodiment, the processing unit 106 then performs object detection from the curved mirror based on the difference (for example, determination of whether the second frame image contains a difference as compared to the first frame image and/or determination of whether the third frame image contains a difference as compared to the second frame image).
In the exemplary embodiment, the processing of the first subset of images and the processing of the second subset of images are used to determine one or more instructions to be transmitted from the processing unit 106 to the action unit 116. The one or more instructions to the action unit 116 may cause the action unit 116 to, for example but not limited to, activate a braking function of the vehicle, and/or activate a warning system to a user of the vehicle, and/or take over steering control, or take no action.
In an example, using an integration of SSD and BSM, the calculation speed for object detection from a curved mirror may be as fast as about 9.83 ms. Having a fast object detection calculation speed is better for predicting the risk of a collision (between a vehicle and an object detected in the curved mirror).
FIG. 2 is a schematic flowchart 200 for illustrating integration of Single Shot Detection (SSD) and Background Subtraction Method (BSM) for object detection from a curved mirror in an exemplary embodiment. The object detection may be implemented using the system 102 of FIG. 1. For example, the determinations performed in the flowchart may be performed by the processing unit 106.
At step 202, an object detection process from a curved mirror is started. For example, a trigger to begin the process may be when an image capturing device detects a curved mirror. A plurality of images of the curved mirror is obtained as a set. The object detection process cycles through each of the plurality of images.
At step 204, "method_choice_count = n" and "count" are variables. In the exemplary embodiment, detection by SSD is performed once or in one frame in n frames, n frames being the set. Thus, the first subset of images for SSD detection is one frame image. The second subset of images in this exemplary embodiment comprises the remaining two frame images of the set of three frame images. In the exemplary embodiment, then frames is three frames or three frame images. Thus, method_choice_count is 3. At step 204, at the start of the process for a set of images, count is initialised to 0.
At step 206, it is determined which frame image is being processed. For a first frame or frame image, it is determined whether count % method_choice_count == 0. In the exemplary embodiment, the operator % for modulo is used such that at a fourth image, i.e. count=3, the process moves from step 206 to step 208. For a first image (or the first subset of images), as the first image is at count=0, the above equation is satisfied. Thus, the first image is processed using SSD. With the system 102 of FIG. 1, the first object detector module 108 is used.
At step 208, SSD is used for object detection on the first image. At step 210, it is determined whether an object is detected from the first image using SSD.
If an object is detected at step 210, at step 212, a labelled detection is output for the first image. The labelled detection can identify the kind/type of the object. With the system 102 of FIG. 1, the first object detector module 108 is used to output for the first image (or the first subset of images) that the first image has a labelled detection.
If an object is not detected at step 210, at step 214, a no detection data ("no detection") is output for the first image. With the system 102 of FIG. 1, the first object detector module 108 is used to output for the first image (or the first subset of images) that the first image has no detection data.
At step 216, count is incremented by 1 and the process loops to step 206. At step 206, for the subsequent frame or frame image (e.g. the second image), it is determined whether count % method_choice_count == 0. As count=1, the above equation is not satisfied. Thus, the subsequent image (or the second subset of images) is processed using BSM. With the system 102 of FIG. 1, the second object detector module 110 is used.
At step 218, BSM (or difference image) is used for object detection on the second image (or the subsequent current image). At step 220, it is determined whether a difference between the second image (current image of the second subset) and the first image (the preceding image, can be from the first subset) is detected using BSM.
If no difference is detected at step 220, at step 222, a no detection data ("no detection") is output for the second image. With the system 102 of FIG. 1, the second object detector module 110 is used to output for the second image (or the second subset of images) that the second image has no detection data. The process proceeds to step 216 where count is incremented by 1 and the process loops to step 206.
If a difference is detected at step 220, at step 224, it is determined whether there is a detection in a preceding image or frame image (whether from the first subset or the second subset of images). If there is no detection in the preceding image, at step 226, an unlabelled detection ("unlabelled detection") is output for the second image. With the system 102 of FIG. 1, the second object detector module 110 is used to output for the second image (or the second subset of images) that the second image has an unlabelled detection. The process proceeds to step 216 where count is incremented by 1 and the process loops to step 206.
If at step 224, it is determined that there is a detection in a preceding image or frame image (whether from the first subset or the second subset of images), at step 228, it is determined whether there is a labelled detection in the preceding image. If there is no labelled detection in the preceding image, at step 226, an unlabelled detection ("unlabelled detection") is output for the second image. With the system 102 of FIG. 1, the second object detector module 110 is used to output for the second image (or the second subset of images) that the second image has an unlabelled detection. The process proceeds to step 216 where count is incremented by 1 and the process loops to step 206.
If at step 228, it is determined that there is a labelled detection in the preceding image, at step 230, a calculation is performed to calculate the difference detected in step 218. In the exemplary embodiment, a pixel distance from a predetermined maximum point between the current second image and the preceding first image is calculated for the difference detected. The calculated difference or the pixel distance is then used to match against a pixel size rule. With the system 102 of FIG. 1, the second object detector module 110 is used for the calculation of the difference and the matching to the pixel size rule.
In the exemplary embodiment, the pixel size rule is based on whether an object is moving towards or away from the vehicle, i.e. increasing or decreasing risk to the vehicle. In the exemplary embodiment, the pixel size rule is set as "less than 10 pixels". In the exemplary embodiment, a pixel size difference of more than 10 pixels is determined as an object bearing lesser risk to the vehicle while a pixel size difference of less than 10 pixels is determined as an object bearing increasing risk to the vehicle.
If the calculated difference does not match the pixel size rule at step 230, i.e. the difference is more than 10 pixels, at step 226, an unlabelled detection ("unlabelled detection") is output for the second image. With the system 102 of FIG. 1, the second object detector module 110 is used to output for the second image (or the second subset of images) that the second image has an unlabelled detection. The process proceeds to step 216 where count is incremented by 1 and the process loops to step 206.
If the calculated difference matches the pixel size rule at step 230, i.e. the difference is less than 10 pixels, at step 232, a heritage label, i.e. using the result from the preceding image, is applied to the current image. At step 234, a labelled detection is output for the second image. With the system 102 of FIG. 1, the second object detector module 110 is used to output for the second image (or the second subset of images) that the second image has a labelled detection. The process proceeds to step 216 where count is incremented by 1 and the process loops to step 206.
The process proceeds for the plurality of images and also onto a next set of images. As such, in the exemplary embodiment, there is an integration of using SSD and BSM. Thus, object detection may be performed based on a difference to be determined at step 220, e.g. eventually resulting in an output of no detection (at step 222), unlabelled detection (at step 226) or labelled detection (at step 234).
With the system 102 of FIG. 1, the processing unit 106 is capable of performing the object detection based on the difference.
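The control flow of FIG. 2 may be summarised in a minimal Python sketch, assuming callables `ssd_detect`, `frame_difference` and `pixel_shift` standing in for the SSD routine, the BSM difference test and the bounding-box movement calculation respectively (sketches of possible stand-ins for each appear later in this description). The function names are illustrative assumptions; the 10-pixel rule follows the exemplary embodiment above.

```python
METHOD_CHOICE_COUNT = 3   # n = 3: SSD runs on one frame per set of three
PIXEL_SIZE_RULE = 10      # heritage label applied if the box moved < 10 pixels

def detect_from_mirror(frames, ssd_detect, frame_difference, pixel_shift):
    """Integrated SSD + BSM loop following the flowchart of FIG. 2.

    Yields, per frame, one of ("labelled", label), ("unlabelled", None)
    or ("none", None). The three callables are assumed stand-ins for
    the detector modules; they are not defined by the disclosure.
    """
    previous = None                               # (frame, output) of preceding image
    for count, frame in enumerate(frames):
        if count % METHOD_CHOICE_COUNT == 0:      # step 206: first subset -> SSD
            label = ssd_detect(frame)             # step 208: e.g. "car", or None
            out = ("labelled", label) if label else ("none", None)
        else:                                     # second subset -> BSM (step 218)
            if not frame_difference(frame, previous[0]):
                out = ("none", None)              # step 222: no difference
            elif previous[1][0] != "labelled":    # steps 224/228: preceding image
                out = ("unlabelled", None)        # has no labelled detection -> 226
            elif pixel_shift(frame, previous[0]) < PIXEL_SIZE_RULE:
                out = ("labelled", previous[1][1])  # steps 230-234: heritage label
            else:
                out = ("unlabelled", None)        # rule not matched -> step 226
        previous = (frame, out)
        yield out                                 # count advances via the loop (step 216)
```

Note that steps 224 and 228 collapse into a single check here: whether the preceding image has no detection or only an unlabelled one, the output for the current image is the same unlabelled detection.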
In the exemplary embodiment, the outputs regarding the labels of each image of the plurality of images are sent to an action unit (compare action unit 116 of FIG. 1). At the action unit, depending on the outputs, an action may be taken. Thus, the outputs are used to determine one or more instructions to be transmitted to the action unit. For example, for images with labelled detection, it may be determined that an object is approaching the vehicle. Thus, the action unit may perform an action. For example, a braking action may be taken for the vehicle to avoid a collision. For example, a warning system may be activated to alert a user of the vehicle e.g. a warning light or audio signal may be output to the user. For example, if there is no detection, then no action is taken by the action unit.
In exemplary embodiments, a series of labelled detections may signify that more urgent action should be taken. In addition, a series of unlabelled detections may signify that warning signals/signs should be provided to the user, e.g. while the object may not be identified, a series of unlabelled detections may mean that an object is approaching the vehicle.
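As a minimal sketch of how such escalation could be expressed, the following Python class maps a sliding window of recent per-frame outputs to an action. The window size, escalation rules and action names are illustrative assumptions; the disclosure only states that a series of labelled detections may warrant more urgent action than unlabelled ones.

```python
from collections import deque

class ActionDecider:
    """Maps a sliding window of per-frame detection outputs to an action.

    Window size and escalation rules are illustrative assumptions.
    """
    def __init__(self, window=3):
        self.recent = deque(maxlen=window)

    def decide(self, output_kind):
        """output_kind is "labelled", "unlabelled" or "none"."""
        self.recent.append(output_kind)
        if list(self.recent) == ["labelled"] * self.recent.maxlen:
            return "brake"        # e.g. a full series of labelled detections: high risk
        if any(kind != "none" for kind in self.recent):
            return "warn"         # something approaching, possibly unidentified
        return "no_action"
```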
FIGs. 3A to 3C show a series of images for illustrating an exemplary implementation of a method for object detection from a curved mirror of FIG. 2.
As described with reference to FIG. 2, the method uses an integration of SSD and BSM. In the exemplary implementation, SSD is performed once in three frames.
At FIG. 3A, at a first frame or (n-1) frame, an object e.g. a car 302 is detected from a curved mirror using SSD. Compare step 212 of FIG. 2. Thus, there is a labelled detection. The labelled detection is that of a car (as the object detected) and for the detection, a bounding box is applied to the detected object (car 302).
At FIG. 3B, at a second frame or n frame, BSM is performed and it is determined that there is a detection in this frame, e.g. the car 302 has moved from FIG. 3A. It is also determined that there is a labelled detection in the preceding frame (i.e. (n-1) frame). Thus, the type/kind of object (car 302) detected in the previous frame is distinguished. Compare steps 220, 224 and 228 of FIG. 2. In the exemplary implementation, the movement of the bounding box between the n frame and (n-1) frame is calculated and matched to the pixel size rule. A calculation of the difference between n frame and (n-1) frame determines that there is a difference of less than 10 pixels, thus matching the pixel size rule of FIG. 2. Compare step 230 of FIG. 2. Based on steps 232 and 234 of FIG. 2, a heritage label from (n-1) frame is applied to n frame, e.g. car 302 detected, i.e. n frame takes over the preceding frame's result. Thereafter, there is a labelled detection, i.e. the car as the object detected, for the second frame or n frame.
At FIG. 3C, at a third frame or (n+1) frame, BSM is performed and it is determined that there is a detection in this frame, e.g. the car 302 has moved from FIG. 3B. It is also determined that there is a labelled detection in the preceding frame (i.e. n frame). Compare steps 220, 224 and 228 of FIG. 2. A calculation of the difference between n+1 frame and n frame determines that there is a difference of less than 10 pixels, thus matching the pixel size rule of FIG. 2. Compare step 230 of FIG. 2. Based on steps 232 and 234 of FIG. 2, a heritage label from n frame is applied to n+1 frame, e.g. car 302 detected, i.e. (n+1) frame takes over the preceding frame's result, and there is a labelled detection, i.e. the car as the object detected, for the third frame or (n+1) frame.
In the exemplary implementation, with the series of labelled detection in FIGs. 3A to 3C, one or more instructions may be transmitted to an action unit of a vehicle performing the object detection. For example, with three labelled detections, it may be determined that the risk of collision is high and the action unit may activate a braking function of the vehicle.
In the exemplary implementation, to perform fast calculation for object detection in a curved mirror, BSM (Background Subtraction Method) has been applied to complement a more definitive but relatively slower object detection technique (such as SSD).
To evaluate the above described integration method of SSD and BSM in detection of objects from a curved mirror, a self-made dataset is used. For creating the dataset, a camera was attached to a car and a series of videos (of a curved mirror) were captured on a public road while the car was stationary. Thereafter, the coordinates around the mirror in the frame images were specified, and images of the mirror and the vehicle(s) inside (i.e. reflected in) the mirror were obtained by continuously cutting out the areas outside the mirror (based on the coordinates). As a result, 2,722 mirror images and vehicle images in the mirrors were obtained from seven scenes. The dataset was created by randomly selecting 300 images from these images and manually annotating the vehicles in the mirror.
FIG. 4 shows an example 402 of the created dataset.
The evaluation was performed by comparing the detection results with the teaching data and calculating the F value. When calculating the F value, it is observed that there are cases where all the vehicles shown in an image are targeted, and cases where only the largest vehicle in an image is targeted. While it is recognised that an evaluation using the former cases may be ideal, it was decided that the closest object to one's own vehicle is of most interest in determining the approach of the object. In other words, it is more significant to be able to detect a vehicle that appears large in the mirror. Therefore, the latter evaluation method was adopted.
Table 1 below shows the results of the evaluation using SSD only, using difference image only (BSM) and the integrated method of the exemplary embodiment described with reference to FIGs. 2 and 3.
Table 1: Evaluation of three detection methods for detection/calculation speeds
The evaluation is performed using a confusion matrix method. In Table 1, TP refers to the number of detections detected correctly; TPR refers to the number of detections detected and distinguished (identified) correctly. FP refers to the number of false positives and FN refers to the number of false negatives. In other words, TPR is the number of those objects whose positions can be detected correctly and which can be labelled.
In Table 1, the result of vehicle detection by SSD only is shown in the first row and the result of vehicle detection by BSM (Background Subtraction Method) only is shown in the second row. By using SSD, it is possible to recognize/identify the kind of moving objects which are reflected from the curved mirror (compare identical TPR and TP), but SSD is relatively time consuming and has low recall. Recall is typically defined as TP/(TP+FN) and is used to focus on actual positives. For SSD, the recall value is 0.156 (the lowest of the three detection methods). On the other hand, BSM is not able to recognize/identify the kind of moving objects (compare TPR=0), but its detection time is relatively short and it has high recall. It is desirable to provide information on the kinds of objects since it is desired to realise a judgement of an object approaching from a blind spot of a vehicle (as reflected from a curved mirror).
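As a reference for the metrics used here, the following minimal Python sketch computes recall, precision and the F value from the confusion-matrix counts. Interpreting the "F value" as the standard F1 score is an assumption; the disclosure does not define it explicitly.

```python
def recall(tp, fn):
    """Recall = TP / (TP + FN), e.g. 0.156 reported for SSD only."""
    return tp / (tp + fn)

def precision(tp, fp):
    """Precision = TP / (TP + FP)."""
    return tp / (tp + fp)

def f_value(tp, fp, fn):
    """F value, assumed here to be the F1 score (harmonic mean)."""
    p, r = precision(tp, fp), recall(tp, fn)
    return 2 * p * r / (p + r)
```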
From Table 1, it is observed that the integration method of SSD and BSM (at the third row) as described with reference to FIGs. 2 and 3 can usefully provide fast detection time, high recall and comparable TPR.
[Table 1 is garbled in the source text. The recoverable structure is: columns Method, TP, TPR, FP, FN; one row each for SSD only, difference image only (BSM), and the integrated method.]
In view of the results obtained in Table 1, the integration method of SSD and BSM as described with reference to FIGs. 2 and 3 is useful. For example, since object detection by SSD is performed once in every 3 frames (frame images), if labels were not inherited between frames, the number of TPRs could be expected to be one-third (1/3) of that obtained using SSD alone for 3 frames. However, surprisingly, from Table 1, the TPR of the integration method of SSD and BSM as described with reference to FIGs. 2 and 3 is much higher than one-third (1/3) of that obtained using SSD alone for 3 frames. Therefore, the labelling made using SSD is effectively taken over when using the difference method (BSM).
Furthermore, based on the detection time, it is considered that the integration method of SSD and BSM as described with reference to FIGs. 2 and 3 takes about 21.5 + (9.83 x 2) = 41.16 [ms], when considering the mirror detection (about 21.5 ms) and the total process. Although processes such as pre-treatment and post-treatment are not taken into consideration, it can be observed that the detection time is well within the desirable target of 100 ms.
In an exemplary embodiment, it is described that SSD is used for object detection in a curved mirror. For example, SSD is used once in every 3 frames for object detection. It will be appreciated that SSD may be implemented in several ways. SSD is based on machine learning.
For example, the deep neural network Faster R-CNN may be used. For feature extraction in Faster R-CNN, ResNet 50 may be used; it is a deep residual network that handles the problem of vanishing/exploding gradients. A basic unit 502 for the machine learning is shown in FIG. 5, which has better performance in small object detection. For an example implementation, a Faster R-CNN with ResNet 50 model pre-trained on the Microsoft COCO dataset may be used.
The following 7 classes are targeted as road objects, being possibly the most-seen objects on roads: (1) pedestrian (person), (2) bicycle, (3) car, (4) motorcycle, (5) bus, (6) train, (7) truck. Upon completion of the machine learning, SSD may be implemented.
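A minimal sketch of such a detector using torchvision's pre-trained Faster R-CNN with ResNet-50 (trained on COCO) is given below; it could serve as the `ssd_detect` stand-in of the earlier loop sketch. The score threshold and the wrapper name are illustrative assumptions; the COCO category ids listed are those of the seven classes above.

```python
import torch
import torchvision

# COCO category ids for the seven road-object classes listed above.
ROAD_CLASSES = {1: "person", 2: "bicycle", 3: "car", 4: "motorcycle",
                6: "bus", 7: "train", 8: "truck"}

model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

@torch.no_grad()
def ssd_detect(frame_rgb_tensor, score_threshold=0.5):
    """Return the label of the best road-object detection, or None.

    `frame_rgb_tensor` is a float CHW tensor in [0, 1]; the score
    threshold is an illustrative assumption.
    """
    pred = model([frame_rgb_tensor])[0]           # boxes, labels, scores
    for label, score in zip(pred["labels"].tolist(), pred["scores"].tolist()):
        if score >= score_threshold and label in ROAD_CLASSES:
            return ROAD_CLASSES[label]            # predictions are score-ordered
    return None
```

Restricting the output to the single highest-scoring road object mirrors the evaluation choice above of targeting only the largest (closest) vehicle in the mirror.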
In an exemplary embodiment, it is described that BSM is used for detection for differences in images, e.g. between a current image and a preceding image. It will be appreciated that BSM may be implemented in several ways.
FIG. 6 is a schematic flowchart 600 for illustrating an example of the Background Subtraction Method.
In an example implementation, at step 602, a current image and a preceding image are obtained. At step 604, pixel information of both the current image and the preceding image are obtained. At step 606, the pixel information of the preceding image is deducted/subtracted from the pixel information of the current image and it is determined if there is a difference between the current image and the preceding image. At step 608, if desired, the difference may be calculated based on the pixel information difference between the current image and the preceding image.
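A minimal Python sketch of these steps using OpenCV's absolute-difference and thresholding operations follows; the intensity threshold and the minimum changed-pixel count used to declare a difference are illustrative assumptions.

```python
import cv2
import numpy as np

def frame_difference(current_gray, preceding_gray, threshold=25, min_pixels=50):
    """FIG. 6: subtract the preceding image's pixel information from the
    current image's (step 606) and decide whether a difference exists.

    Operates on single-channel (grayscale) frames of equal size.
    Returns True if a difference is detected; thresholds are assumptions.
    """
    diff = cv2.absdiff(current_gray, preceding_gray)        # per-pixel subtraction
    _, mask = cv2.threshold(diff, threshold, 255, cv2.THRESH_BINARY)
    return int(np.count_nonzero(mask)) >= min_pixels        # step 608: quantify
```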
In an exemplary embodiment, it is described that BSM is used for detection for differences in images, e.g. between a current image and a preceding image. It is also described that the difference may be calculated and then matched to a pixel size rule. Both the calculation and the pixel size rule may be implemented in several ways.
FIG. 7 is a schematic flowchart 700 for illustrating an example for a calculation of a difference between a current image and a preceding image and a matching of the calculated difference to a pixel size rule.
In an example implementation, at step 702, a difference between the current image and the preceding image is obtained. Compare e.g. step 606 of FIG. 6.
At step 704, the difference between the current image and the preceding image is calculated. For example, a bounding box around a detected object is identified from both the current image and the preceding image. The difference, in terms of pixels, by which the bounding box has moved between the current image and the preceding image is calculated. For example, a vertex of the bounding box (e.g. closest to the vehicle performing the object detection) may be designated as a maximum point. The difference, in terms of pixels, of the position of the vertex in the current image and the position of the vertex in the preceding image is calculated.
At step 706, the calculated difference is matched to a pixel size rule. For example, the pixel size rule is pre-determined and may be based on a direction of travel towards or away from the vehicle performing the object detection. For example, a pixel distance travelled towards the vehicle may signify a risk of collision as compared to a pixel distance travelled away from the vehicle. Compare e.g. FIGs. 3A and 3B. For example, the pixel size rule may be whether a calculated difference (compare step 704) is within 10 pixels or more than 10 pixels. As another example, the pixel size rule may also vary depending on the curved mirror detected, e.g. whether it is on the left of the vehicle or on the right of the vehicle.
Based on the pixel size rule, if it is determined that a detected object is at a risk of collision with the vehicle performing the object detection, a heritage label, i.e. a labelled detection from the preceding image, is outputted for the current image.
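A minimal sketch of steps 704 and 706, operating on bounding boxes already identified in each image, is given below. Treating the bottom-right vertex of the box as the "maximum point" closest to the own vehicle, and the Euclidean pixel distance as the measure of movement, are illustrative assumptions.

```python
import math

def box_shift(current_box, preceding_box):
    """FIG. 7, step 704: pixel distance moved by a designated vertex.

    Boxes are (x1, y1, x2, y2); the bottom-right vertex is assumed to
    be the designated maximum point.
    """
    dx = current_box[2] - preceding_box[2]
    dy = current_box[3] - preceding_box[3]
    return math.hypot(dx, dy)

def matches_pixel_size_rule(shift, rule_pixels=10):
    """FIG. 7, step 706: the heritage label is applied when the
    calculated difference is within the pixel size rule."""
    return shift < rule_pixels
```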
FIG. 8 is a schematic flowchart 800 for illustrating a computer-implemented method of object detection from a curved mirror in an exemplary embodiment. At step 802, a plurality of images as a set are obtained using an image capturing device of a vehicle. At step 804, a first subset of images is obtained from the plurality of images, with a remainder of the plurality of images forming a second subset of images. At step 806, the first subset of images is processed and for each image of the first subset of images, it is outputted whether the each image of the first subset of images has a labelled detection. At step 808, the second subset of images is processed and each image of the second subset of images is compared against a preceding image of the each image of the second subset of images. At step 810, background subtraction is performed to determine whether the each image of the second subset of images contains a difference from the preceding image of the each image of the second subset. At step 812, the object detection is performed based on the difference.
In the exemplary embodiment, a labelling for the each image of the second subset of images may be outputted depending at least on the determination on whether the each image of the second subset of images contains a difference from the preceding image of the each image of the second subset. The labelling may be a heritage label from the preceding image of the each image of the second subset.
In the exemplary embodiment, the processing of the first subset of images and the processing of the second subset of images may be performed broadly by a processing unit of the vehicle. In the exemplary embodiment, the object detection may be performed by the processing unit of the vehicle. As an example, the processing of the first subset of images comprises using a first object detector module of the vehicle and the processing of the second subset of images comprises using a second object detector module of the vehicle.
In the exemplary embodiment, the processing of the first subset of images comprises using single shot detection by a trained machine learning model.
In the exemplary embodiment, the processing of the first subset of images and the processing of the second subset of images are used to determine one or more instructions to be transmitted to an action unit of the vehicle.
The described exemplary embodiments, in comparison to an approach using only known deep learning models, can usefully provide faster calculation and can usefully provide time savings.
The described exemplary embodiments may be used for collision prediction and collision avoidance, for example at traffic intersections or blind spots.
The described exemplary embodiments may be usefully applied to Advanced Driver Assistance Systems (ADAS) and Automated Driving Systems (ADS). The described exemplary embodiments may also be usefully applied to other forms of vehicles such as unmanned ground vehicles (UGVs), automated guided vehicles (AGVs), autonomous vehicles, drones etc.
The terms "coupled" or "connected" as used in this description are intended to cover both directly connected or connected through one or more intermediate means, unless otherwise stated.
The terms "configured to (perform a task/action)", "configured for (performing a task/action)" and the like as used in this description include being programmable, programmed, connectable, wired or otherwise constructed to have the ability to perform the task/action when arranged or installed as described herein. The terms "configured to (perform a task/action)", "configured for (performing a task/action)" and the like are intended to cover "when in use, the task/action is performed", e.g. specifically to and/or specifically configured to and/or specifically arranged to and/or specifically adapted to do or perform a task/action.
The term "and/or", e.g., "X and/or Y" is understood to mean either "X and Y" or "X or Y" and should be taken to provide explicit support for both meanings or for either 25 meaning.
The terms "associated with", "related to" and the like used herein when referring to two elements refers to a broad relationship between the two elements. The relationship includes, but is not limited to, a physical, a chemical or a biological relationship. For example, when element A is associated with element B, elements A and B may be directly or indirectly attached to each other or element A may contain element B or vice versa.
The terms "exemplary embodiment", "example embodiment", "exemplary implementation", "exemplarily" and the like used herein are intended to indicate an example of matters described in the present disclosure. Such an example may relate to one or more features defined in the claims and is not necessarily intended to emphasise a best example or any essentialness of any features.
The description herein may be, in certain portions, explicitly or implicitly described as algorithms and/or functional operations that operate on data within a computer memory or an electronic circuit. These algorithmic descriptions and/or functional operations are usually used by those skilled in the information/data processing arts for efficient description. An algorithm is generally relating to a self-consistent sequence of steps leading to a desired result. The algorithmic steps can include physical manipulations of physical quantities, such as electrical, magnetic or optical signals capable of being stored, transmitted, transferred, combined, compared, and otherwise manipulated.
Further, unless specifically stated otherwise, and as would ordinarily be apparent from the following, a person skilled in the art will appreciate that throughout the present specification, discussions utilizing terms such as "scanning", "calculating", "determining", "replacing", "generating", "initializing", "outputting", and the like, refer to actions and processes of an instructing processor/computer system, or similar electronic circuit/device/component, that manipulates/processes and transforms data represented as physical quantities within the described system into other data similarly represented as physical quantities within the system or other information storage, transmission or display devices etc. The description also discloses relevant device/apparatus for performing the steps of the described methods. Such apparatus may be specifically constructed for the purposes of the methods, or may comprise a general purpose computer/processor or other device selectively activated or reconfigured by a computer program stored in a storage member. The algorithms and displays described herein are not inherently related to any particular computer or other apparatus. It is understood that general purpose devices/machines may be used in accordance with the teachings herein.
Alternatively, the construction of a specialized device/apparatus to perform the method steps may be desired.
In addition, it is submitted that the description also implicitly covers a computer program, in that it would be clear that the steps of the methods described herein may be put into effect by computer code. It will be appreciated that a large variety of programming languages and coding can be used to implement the teachings of the description herein. Moreover, the computer program if applicable is not limited to any particular control flow and can use different control flows without departing from the scope of the invention.
Furthermore, one or more of the steps of the computer program if applicable may be performed in parallel and/or sequentially. Such a computer program if applicable may be stored on any computer readable medium. The computer readable medium may include storage devices such as magnetic or optical disks, memory chips, or other storage devices suitable for interfacing with a suitable reader/general purpose computer. In such instances, the computer readable storage medium is non-transitory. Such storage medium also covers all computer-readable media e.g. medium that stores data only for short periods of time and/or only in the presence of power, such as register memory, processor cache and Random Access Memory (RAM) and the like. The computer readable medium may even include a wired medium such as exemplified in the Internet system, or wireless medium such as exemplified in Bluetooth technology. The computer readable medium may be, for example, cloud storage in the Internet or within an intranet. The computer program when loaded and executed on a suitable reader effectively results in an apparatus that can implement the steps of the described methods, e.g. in a physical embodiment. The computer readable medium is intended to be transferable and is reproducible in that the computer program if applicable is reproducible.
The exemplary embodiments may also be implemented as hardware modules. A module is a functional hardware unit designed for use with other components or modules. For example, a module may be implemented using digital or discrete electronic components, or it can form a portion of an entire electronic circuit such as an Application Specific Integrated Circuit (ASIC). A person skilled in the art will understand that the exemplary embodiments can also be implemented as a combination of hardware and software modules.
Additionally, when describing some embodiments, the disclosure may have disclosed a method and/or process as a particular sequence of steps. However, unless otherwise required, it will be appreciated the method or process should not be limited to the particular sequence of steps disclosed. Other sequences of steps may be possible. The particular order of the steps disclosed herein should not be construed as undue limitations. Unless otherwise required, a method and/or process disclosed herein should not be limited to the steps being carried out in the order written. The sequence of steps may be varied and still remain within the scope of the disclosure.
Further, in the description herein, the word "substantially" whenever used is understood to include, but not restricted to, "entirely" or "completely" and the like. In addition, terms such as "comprising", "comprise", and the like whenever used, are intended to be non-restricting descriptive language in that they broadly include elements/components recited after such terms, in addition to other components not explicitly recited. For an example, when "comprising" is used, reference to a "one" feature is also intended to be a reference to "at least one" of that feature. Terms such as "consisting", "consist", and the like, may, in the appropriate context, be considered as a subset of terms such as "comprising", "comprise", and the like. Therefore, in embodiments disclosed herein using the terms such as "comprising", "comprise", and the like, it will be appreciated that these embodiments provide teaching for corresponding embodiments using terms such as "consisting", "consist", and the like.
Further, terms such as "about', "approximately" and the like whenever used, typically means a reasonable variation, for example a variation of +/-5% of the disclosed value, or a variance of 4% of the disclosed value, or a variance of 3% of the disclosed value, a variance of 2% of the disclosed value or a variance of 1% of the disclosed value.
Furthermore, in the description herein, certain values may be disclosed in a range. The values showing the end points of a range are intended to illustrate a preferred range. Whenever a range has been described, it is intended that the range covers and teaches all possible sub-ranges as well as individual numerical values within that range. That is, the end points of a range should not be interpreted as inflexible limitations. For example, a description of a range of 1% to 5% is intended to have specifically disclosed sub-ranges 1% to 2%, 1% to 3%, 1% to 4%, 2% to 3% etc., as well as individually, values within that range such as 1%, 2%, 3%, 4% and 5%. It is to be appreciated that the individual numerical values within the range also include integers, fractions and decimals. Furthermore, whenever a range has been described, it is also intended that the range covers and teaches values of up to 2 additional decimal places or significant figures (where appropriate) from the shown numerical end points.
For example, a description of a range of 1% to 5% is intended to have specifically disclosed the ranges 1.00% to 5.00% and also 1.0% to 5.0% and all their intermediate values (such as 1.01%, 1.02% ... 4.98%, 4.99%, 5.00% and 1.1%, 1.2% ... 4.8%, 4.9%, 5.0% etc.,) spanning the ranges. The intention of the above specific disclosure is applicable to any depth/breadth of a range.
In the described exemplary embodiments, a first subset of images exemplarily includes a first frame image and a second subset of images exemplarily includes a second frame image and a third frame image. The plurality of images is exemplarily three frame images. It will be appreciated that the exemplary embodiments are not limited as such. For example, the plurality of images may be any number of images, the first subset of images may include more than one image and the second subset of images may include a different number of images than two.
In the described exemplary embodiments, a vehicle may have been described as a user-driven car. It will be appreciated that the exemplary embodiments are not limited as such. For example, the vehicle may comprise any movable object, including e.g. robots, that can detect a curved mirror and detect an object from the curved mirror.
It will be appreciated by a person skilled in the art that other variations and/or modifications may be made to the specific embodiments without departing from the scope of the claimed invention as broadly described. For example, in the description herein, features of different exemplary embodiments may be mixed, combined, interchanged, incorporated, adopted, modified, included etc. or the like across different exemplary embodiments. For example, exemplary embodiments are not necessarily mutually exclusive as some may be combined with one or more embodiments to form new exemplary embodiments. Furthermore, it will be appreciated that while the present disclosure provides embodiments having one or more of the features/characteristics discussed herein, one or more of these features/characteristics may also be disclaimed in other alternative embodiments and the present disclosure provides support for such disclaimers and these associated alternative embodiments. The present embodiments are, therefore, to be considered in all respects to be illustrative and not restrictive.
Reference Signs List
102 a system for object detection from a curved mirror
104 an image capturing device
106 a processing unit
108 a first object detector module
110 a second object detector module
112 output of the first object detector module
114 output of the second object detector module
116 an action unit
302 a detected car
402 an example dataset
502 a basic unit for machine learning
Claims (16)
- CLAIMS
- 1. A system for object detection from a curved mirror, the system comprising:
an image capturing device (104) of a vehicle, the image capturing device arranged to obtain a plurality of images as a set;
a processing unit (106) of the vehicle, the processing unit being coupled to the image capturing device and arranged to receive the plurality of images from the image capturing device;
characterised in that the system further comprises
the processing unit being further arranged to obtain a first subset of images from the plurality of images with a remainder of the plurality of images forming a second subset of images;
the processing unit including a first object detector module (108), the first object detector module arranged to process the first subset of images, the first object detector module also arranged to output for each image of the first subset of images whether the each image of the first subset of images has a labelled detection;
the processing unit including a second object detector module (110), the second object detector module arranged to process the second subset of images, the second object detector module arranged to compare each image of the second subset of images against a preceding image of the each image of the second subset of images and to perform background subtraction to determine whether the each image of the second subset of images contains a difference from the preceding image of the each image of the second subset; and
wherein the processing unit is capable of performing the object detection based on the difference.
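Purely as a non-limiting sketch of the background subtraction recited in claim 1, the second object detector module's comparison of an image against its preceding image could be realised with standard OpenCV operations; the function `frame_difference` and the grey-level threshold of 25 are illustrative assumptions, not part of the claim:

```python
import cv2
import numpy as np


def frame_difference(image: np.ndarray, preceding: np.ndarray,
                     threshold: int = 25) -> np.ndarray:
    """Background subtraction of an image against its preceding image.

    Returns a binary mask of pixels where the two frames differ; a non-empty
    mask indicates the image 'contains a difference' in the sense of claim 1.
    The threshold of 25 grey levels is an illustrative choice.
    """
    grey_a = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    grey_b = cv2.cvtColor(preceding, cv2.COLOR_BGR2GRAY)
    diff = cv2.absdiff(grey_a, grey_b)
    _, mask = cv2.threshold(diff, threshold, 255, cv2.THRESH_BINARY)
    return mask
```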
- 2. The system as claimed in claim 1, wherein if it is determined by the second object detector module that the each image of the second subset of images contains a difference from the preceding image of the each image of the second subset, the system further comprises, the second object detector module being arranged to determine whether the preceding image of the each image of the second subset has a labelled detection.
- 3. The system as claimed in claim 2, wherein if it is determined by the second object detector module that the preceding image of the each image of the second subset has a labelled detection, the system further comprises,
the second object detector module being arranged to calculate the difference of the each image of the second subset of images and the preceding image of the each image of the second subset;
wherein the second object detector module is further arranged to match the calculated difference to a pixel size rule; and
if the calculated difference matches the pixel size rule, the second object detector module is arranged to output for the each image of the second subset of images that the each image of the second subset of images has a labelled detection; and
wherein the labelled detection of the each image of the second subset of images is based on the labelled detection of the preceding image of the each image of the second subset.
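Claim 3 leaves the "pixel size rule" open; one plausible reading, sketched below under that assumption, is a band on the number of changed pixels within which the preceding image's label is carried forward. The helper `propagate_label` and the band limits `min_pixels`/`max_pixels` are hypothetical:

```python
import numpy as np


def propagate_label(mask: np.ndarray, preceding_label: str,
                    min_pixels: int = 50, max_pixels: int = 5000) -> dict:
    """Match the calculated difference to a pixel size rule (claims 3-5).

    If the count of changed pixels lies inside the band, the image is given
    a labelled detection based on the preceding image's label; otherwise it
    is reported as an unlabelled detection.
    """
    changed = int(np.count_nonzero(mask))
    if min_pixels <= changed <= max_pixels:
        return {"labelled": True, "label": preceding_label}
    return {"labelled": False, "label": None}
```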
- 4. The system as claimed in claim 3, wherein if the calculated difference does not match the pixel size rule, the system further comprises, the second object detector module being arranged to output for the each image of the second subset of images that the each image of the second subset of images has an unlabelled detection.
- 5. The system as claimed in claim 2, wherein if it is determined by the second object detector module that the preceding image of the each image of the second subset does not have a labelled detection, the system further comprises, the second object detector module being arranged to output for the each image of the second subset of images that the each image of the second subset of images has an unlabelled detection.
- 6. The system as claimed in any one of claims 1 to 5, wherein the first object detector module is trained using machine learning to perform single shot detection to process the first subset of images.
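Claim 6 does not tie the first object detector module to any particular model; purely for illustration, an off-the-shelf pretrained single shot detector such as torchvision's `ssd300_vgg16` could fill that role. The model choice and the score cut-off of 0.5 are assumptions, not part of the claim:

```python
import torch
from torchvision.models.detection import ssd300_vgg16, SSD300_VGG16_Weights

# Illustrative single shot detector for the first subset of images.
weights = SSD300_VGG16_Weights.DEFAULT
model = ssd300_vgg16(weights=weights).eval()
preprocess = weights.transforms()


def detect_first_subset(image: torch.Tensor, score_threshold: float = 0.5) -> dict:
    """Run single shot detection on one (C, H, W) image tensor and keep only
    detections (boxes, labels, scores) above the score cut-off."""
    with torch.no_grad():
        output = model([preprocess(image)])[0]
    keep = output["scores"] > score_threshold
    return {key: value[keep] for key, value in output.items()}
```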
- 7. The system as claimed in any one of claims 1 to 6, further comprising an action unit (116) and wherein the processing of the first subset of images and the processing of the second subset of images are used to determine one or more instructions to be transmitted to the action unit.
- 8. A computer-implemented method of object detection from a curved mirror, the method comprising:
obtaining a plurality of images as a set using an image capturing device (104) of a vehicle;
obtaining a first subset of images from the plurality of images, a remainder of the plurality of images forming a second subset of images;
characterised in that the method comprises
processing the first subset of images and outputting for each image of the first subset of images whether the each image of the first subset of images has a labelled detection;
processing the second subset of images and comparing each image of the second subset of images against a preceding image of the each image of the second subset of images;
performing background subtraction to determine whether the each image of the second subset of images contains a difference from the preceding image of the each image of the second subset; and
performing the object detection based on the difference.
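Drawing the steps of claim 8 together, a minimal end-to-end sketch, reusing the hypothetical `split_frames`, `frame_difference` and `propagate_label` helpers above, might look like the following; `first_detector` stands for any callable single shot detector returning a `{"labelled": ..., "label": ...}` result:

```python
# Hypothetical end-to-end driver for the method of claim 8, assuming the
# split_frames, frame_difference and propagate_label helpers sketched above.
def detect_from_mirror(frames, first_detector):
    first_subset, second_subset = split_frames(frames)

    # Single shot detection on the first subset (claim 13).
    results = [first_detector(image) for image in first_subset]

    # Background subtraction against the preceding image for the remainder
    # (claims 8-12): a label is carried forward only when the preceding image
    # was labelled and the difference matches the pixel size rule.
    preceding_image, preceding_result = first_subset[-1], results[-1]
    for image in second_subset:
        mask = frame_difference(image, preceding_image)
        if mask.any() and preceding_result["labelled"]:
            result = propagate_label(mask, preceding_result["label"])
        else:
            result = {"labelled": False, "label": None}
        results.append(result)
        preceding_image, preceding_result = image, result
    return results
```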
- 9. The method as claimed in claim 8, wherein if it is determined that the each image of the second subset of images contains a difference from the preceding image of the each image of the second subset, the method further comprises, determining whether the preceding image of the each image of the second subset has a labelled detection.
- 10. The method as claimed in claim 9, wherein if it is determined that the preceding image of the each image of the second subset has a labelled detection, the method further comprises,
calculating the difference of the each image of the second subset of images and the preceding image of the each image of the second subset;
matching the calculated difference to a pixel size rule;
if the calculated difference matches the pixel size rule, outputting for the each image of the second subset of images that the each image of the second subset of images has a labelled detection; and
wherein the labelled detection of the each image of the second subset of images is based on the labelled detection of the preceding image of the each image of the second subset.
- 11. The method as claimed in claim 10, wherein if the calculated difference does not match the pixel size rule, the method further comprises, outputting for the each image of the second subset of images that the each image of the second subset of images has an unlabelled detection.
- 12. The method as claimed in claim 9, wherein if it is determined that the preceding image of the each image of the second subset does not have a labelled detection, the method further comprises, outputting for the each image of the second subset of images that the each image of the second subset of images has an unlabelled detection.
- 13. The method as claimed in any one of claims 8 to 12, wherein the first subset of images is processed using single shot detection by a trained machine learning model.
- 14. The method as claimed in any one of claims 8 to 13, wherein the processing of the first subset of images comprises using a first object detector module (108) of the vehicle and the processing the second subset of images comprises using a second object detector module (110) of the vehicle.
- 15. The method as claimed in any one of claims 8 to 14, further comprising using the processing of the first subset of images and the processing of the second subset of images to determine one or more instructions to be transmitted to an action unit (116) of the vehicle.
- 16. A computer readable storage medium having stored thereon software instructions that, when executed by a processing unit (106) of a system (102) for object detection from a curved mirror, cause the processing unit to perform a method of object detection from a curved mirror as claimed in any one of claims 8 to 15.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
GB2217545.9A GB2624653A (en) | 2022-11-24 | 2022-11-24 | A system and method for object detection from a curved mirror |
Publications (2)
Publication Number | Publication Date |
---|---|
GB202217545D0 (en) | 2023-01-11 |
GB2624653A (en) | 2024-05-29 |
Family
ID=84889471
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
GB2217545.9A (GB2624653A, pending) | A system and method for object detection from a curved mirror | 2022-11-24 | 2022-11-24 |
Country Status (1)
Country | Link |
---|---|
GB (1) | GB2624653A (en) |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
NZ332051A (en) * | 1996-03-29 | 2000-05-26 | Commw Scient Ind Res Org | Detecting position of aircraft flying overhead for accurately recording registration markings |
CN103679690A (en) * | 2012-09-24 | 2014-03-26 | 中国航天科工集团第二研究院二O七所 | Object detection method based on segmentation background learning |
WO2016139868A1 (en) * | 2015-03-04 | 2016-09-09 | Noritsu Precision Co., Ltd. | Image analysis device, image analysis method, and image analysis program |
CN109325423A (en) * | 2018-08-29 | 2019-02-12 | 安徽超清科技股份有限公司 | A kind of optimization SSD algorithm for pedestrian detection |
CN109344749A (en) * | 2018-09-20 | 2019-02-15 | 北京理工大学 | Low-angle precision peripheral image acquisition device to realize moving target detection method |
CN112418330A (en) * | 2020-11-26 | 2021-02-26 | 河北工程大学 | Improved SSD (solid State drive) -based high-precision detection method for small target object |
US20220026917A1 (en) * | 2020-07-22 | 2022-01-27 | Motional Ad Llc | Monocular 3d object detection from image semantics network |