
WO2020137908A1 - Lighting fixture for vehicle, and vehicle - Google Patents

Lighting fixture for vehicle, and vehicle

Info

Publication number
WO2020137908A1
WO2020137908A1 (PCT/JP2019/050170)
Authority
WO
WIPO (PCT)
Prior art keywords
light
pixels
photodetector
image
intensity
Prior art date
Application number
PCT/JP2019/050170
Other languages
French (fr)
Japanese (ja)
Inventor
真太郎 杉本
祐介 笠羽
安男 中村
正人 五味
修己 山本
健人 新田
修 廣田
祐太 春瀬
輝明 鳥居
健佑 荒井
Original Assignee
株式会社小糸製作所
Priority date
Filing date
Publication date
Application filed by 株式会社小糸製作所
Priority to CN201980085931.7A (CN113227838B)
Priority to JP2020563214A (JP7408572B2)
Publication of WO2020137908A1

Classifications

    • B - PERFORMING OPERATIONS; TRANSPORTING
    • B60 - VEHICLES IN GENERAL
    • B60Q - ARRANGEMENT OF SIGNALLING OR LIGHTING DEVICES, THE MOUNTING OR SUPPORTING THEREOF OR CIRCUITS THEREFOR, FOR VEHICLES IN GENERAL
    • B60Q1/00 - Arrangement of optical signalling or lighting devices, the mounting or supporting thereof or circuits therefor
    • B60Q1/02 - Arrangement of optical signalling or lighting devices, the mounting or supporting thereof or circuits therefor, the devices being primarily intended to illuminate the way ahead or to illuminate other areas of way or environments
    • B60Q1/04 - Arrangement of optical signalling or lighting devices, the mounting or supporting thereof or circuits therefor, the devices being primarily intended to illuminate the way ahead or to illuminate other areas of way or environments, the devices being headlights
    • F - MECHANICAL ENGINEERING; LIGHTING; HEATING; WEAPONS; BLASTING
    • F21 - LIGHTING
    • F21S - NON-PORTABLE LIGHTING DEVICES; SYSTEMS THEREOF; VEHICLE LIGHTING DEVICES SPECIALLY ADAPTED FOR VEHICLE EXTERIORS
    • F21S41/00 - Illuminating devices specially adapted for vehicle exteriors, e.g. headlamps
    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01S - RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00 - Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/88 - Lidar systems specially adapted for specific applications
    • G01S17/89 - Lidar systems specially adapted for specific applications for mapping or imaging

Definitions

  • the present invention relates to a vehicle lamp.
  • An object identification system that senses the position and type of objects existing around the vehicle is used for automatic driving and automatic control of headlamp light distribution.
  • the object identification system includes a sensor and an arithmetic processing unit that analyzes the output of the sensor.
  • the sensor is selected from cameras, LiDAR (Light Detection and Ranging, Laser Imaging Detection and Ranging), millimeter wave radar, ultrasonic sonar, etc., in consideration of application, required accuracy and cost.
  • Ghost imaging irradiates an object while randomly switching the intensity distribution (pattern) of reference light, and measures the light detection intensity of reflected light for each pattern.
  • the photodetection intensity is an integrated value of energy or intensity over a certain plane, not an intensity distribution. Then, the image of the object is reconstructed by taking the correlation between the corresponding pattern and the light detection intensity.
  • One of the exemplary purposes of the first aspect of the present invention is to accurately irradiate a distant object with reference light.
  • The amount of correlation calculation increases explosively with the number of pixels of the restored image. Specifically, when the number of random reference-light irradiations is M and the number of pixels is X × Y, the number of calculations becomes M × (X × Y)².
  • One of the exemplary purposes of the second aspect of the present invention is to provide an imaging apparatus or an imaging method with a reduced amount of calculation.
  • Patterning devices such as DMD (Digital Micromirror Device) and liquid crystal are used for patterning the reference light.
  • the DMD and the liquid crystal have a plurality of pixels arranged in a matrix, and the reflectance and the transmittance can be controlled for each pixel.
  • the in-vehicle imaging device can detect various objects such as cars, people, motorcycles and bicycles, structures, and plants and animals.
  • the situation in which the imaging device is used also changes greatly depending on the traveling environment such as weather, time of day, traveling road, traveling speed, and the like.
  • the imaging device itself moves, the objects also move, and their relative movement directions are various.
  • One of the exemplary objects of the third aspect of the present invention is to provide an illuminating device suitable for imaging for a specific application.
  • One of the exemplary objects of the fourth aspect of the present invention is to provide an illumination device suitable for a distant imaging device.
  • One of the exemplary purposes of the fifth aspect of the present invention is to provide an imaging device capable of reducing the number of irradiations.
  • the first aspect of the present invention relates to a vehicle lamp.
  • the vehicular lamp includes a headlight and a pseudo heat light source.
  • the pseudo heat light source can irradiate the object while randomly changing the intensity distribution of the reference light.
  • The pseudo thermal light source constitutes an imaging device together with a photodetector that measures reflected light from an object and an arithmetic processing device that reconstructs a restored image of the object based on the output of the photodetector and the intensity distributions of the reference light. At least some components of the headlamp are shared with the pseudo heat light source.
  • The headlight is designed, in terms of its light source and optical system, to irradiate light up to several tens of meters ahead. Therefore, by incorporating the pseudo heat light source of the imaging device in the vehicle lamp and diverting some of the components of the headlight to the pseudo heat light source, the reference light can be accurately irradiated onto a distant object. The overall cost can also be reduced.
  • the pseudo heat light source may share the optical system with the headlight.
  • the headlamp optics may include a patterning device that controls the light distribution.
  • the pseudo heat source and the headlight may share a patterning device.
  • the headlight optical system may include a reflector that reflects the light emitted from the light source toward the front of the vehicle.
  • the pseudo heat source and the headlight may share a reflector.
  • the reference light may be infrared or ultraviolet.
  • the pseudo heat light source may share the light source with the headlight.
  • the reference light may be white light.
  • the entire headlamp may be operable as a pseudo heat source for the imaging device.
  • the second aspect of the present invention relates to an in-vehicle imaging device.
  • The in-vehicle imaging device includes an illumination device that divides the measurement range into a plurality of sections and irradiates reference light having a random intensity distribution while switching among the sections, a photodetector that measures reflected light from an object, and an arithmetic processing unit that, for each of the plurality of sections, reconstructs a restored image of the portion of the object included in that section based on detected intensities derived from the output of the photodetector and on the intensity distributions of the reference light.
  • With this configuration, the calculation time can be reduced.
  • The number of sections may be set so that the decrease in computation time due to the division is larger than the increase in measurement time. This enables high-speed sensing.
  • a third aspect of the present invention relates to an illumination device used for an imaging device based on ghost imaging.
  • the lighting device has a plurality of pixels arranged in a matrix, and is configured to be able to modulate the light intensity distribution based on a combination of ON and OFF of the plurality of pixels.
  • the intensity distribution is controlled in units of pixel blocks including at least one pixel, and the pixel blocks are variable.
  • the illumination device has a plurality of pixels arranged in a matrix, and is configured to be able to modulate the light intensity distribution based on a combination of ON and OFF of the plurality of pixels.
  • the intensity distribution is controlled by a combination of predetermined patterns including two or more ON pixels and OFF pixels.
  • a fourth aspect of the present invention is a lighting device.
  • the lighting device has a plurality of pixels arranged in a matrix, and is configured to be able to modulate the light intensity distribution based on a combination of ON and OFF of the plurality of pixels. On/off of a plurality of pixels is controlled under a predetermined constraint condition.
  • a fifth aspect of the invention relates to an imaging device.
  • The imaging device includes an illumination device that irradiates an object while changing the intensity distribution of the reference light in M ways, a photodetector that measures the reflected light from the object for each of the plurality of intensity distributions I_1 to I_M, and an arithmetic processing unit that reconstructs a restored image of the object based on the plurality of intensity distributions I_1 to I_M and a plurality of detected intensities b_1 to b_M derived from the output of the photodetector.
  • The plurality of intensity distributions I_1 to I_M are determined based on (i) a model of the transfer characteristics of the path from the illumination device through the object to the photodetector and (ii) a reference object and its corresponding reference image.
  • According to the first aspect of the present invention, the reference light can be accurately irradiated onto a distant object. According to the second aspect, the amount of calculation can be reduced while high resolution is obtained. According to the third aspect, an illumination device suitable for imaging for a specific application can be provided. According to the fourth aspect, an illumination device suitable for imaging a distant object can be provided. According to the fifth aspect, the number of irradiations can be reduced.
  • FIG. 1 is a diagram showing a vehicle lamp according to the first embodiment.
  • FIG. 4 is a diagram showing a vehicle lamp according to the second embodiment.
  • FIG. 5 is a diagram illustrating the first pattern control in the second embodiment.
  • FIG. 6 is a diagram illustrating the second pattern control in the second embodiment.
  • FIG. 7 is a diagram showing a vehicle lamp according to the third embodiment.
  • FIG. 8 is a diagram illustrating the first control in the third embodiment.
  • FIG. 9 is a diagram illustrating the third control in the third embodiment.
  • FIG. 10 is a block diagram of an object identification system.
  • FIG. 11 is a diagram showing an automobile.
  • FIG. 13 is a diagram showing an imaging device according to the second embodiment.
  • FIG. 14 is a diagram illustrating the intensity distribution of reference light according to the second embodiment.
  • FIG. 15 is a diagram explaining the trade-off between calculation time and measurement time.
  • FIGS. 16(a) and 16(b) are diagrams showing modified examples of the sections.
  • FIG. 17 is a diagram showing an imaging device according to the third embodiment.
  • FIGS. 18A to 18C are views for explaining the pixels of the DMD that is the patterning device.
  • FIGS. 19A to 19D are diagrams showing pixel blocks B having different sizes.
  • FIGS. 20A and 20B are diagrams showing examples of the pattern signal PTN based on pixel blocks B having different sizes.
  • FIGS. 22A and 22B are diagrams illustrating layouts of pixel blocks B having different sizes according to the running scene.
  • FIG. 23 is a diagram explaining a dynamic layout of pixel blocks B having different sizes.
  • FIGS. 24A to 24C are diagrams showing a pixel block B according to Modification 3.1.
  • FIGS. 25A to 25D are diagrams showing a pixel block B according to Modification 3.2.
  • FIGS. 26A to 26D are diagrams showing pixel blocks B having different shapes.
  • FIGS. 27A to 27C are diagrams showing examples of the pattern signal PTN based on pixel blocks having different shapes.
  • FIGS. 28A and 28B are views for explaining sensing based on a pattern signal PTN having pixel blocks B of a characteristic shape.
  • FIGS. 29A to 29D are diagrams for explaining the pattern block PB according to Example 3.5.
  • FIGS. 30A and 30B are diagrams showing examples of pattern signals based on combinations of pattern blocks.
  • FIGS. 31A and 31B are diagrams for explaining the improvement of the spatial incoherence of the reference light.
  • FIGS. 32A and 32B are diagrams showing examples of intensity distributions that can improve spatial incoherence.
  • FIGS. 33A to 33D are diagrams for explaining pattern control with the lighting rate as a constraint condition.
  • FIGS. 34(a) and 34(b) are diagrams for explaining control of the lighting rate according to a modification.
  • A diagram showing an imaging device according to a fourth embodiment.
  • A flowchart showing a method of determining a set of the plurality of intensity distributions I_1 to I_M.
  • A diagram explaining the relationship between a reference object and the reference image T(x, y).
  • A diagram showing a set consisting of 100 intensity distributions I_1 to I_100 obtained for M = 100.
  • A diagram showing the restored image obtained when using the optimized set of intensity distributions.
  • A diagram showing the restored image obtained when using a set of random intensity distributions.
  • The statement that the intensity distribution is “random” in the present specification does not mean that it is completely random; it suffices that the distribution is random enough for an image to be reconstructed by ghost imaging. Therefore, “random” in the present specification can include a certain degree of regularity. Moreover, “random” does not require the distribution to be unpredictable; it may be predictable and reproducible.
  • FIG. 1 is a diagram showing a vehicle lamp 400 according to the first embodiment.
  • the vehicle lamp (or lamp system) 400 includes a headlamp 410 and a pseudo heat light source 420.
  • the headlamp 410 and the pseudo heat light source 420 are housed in the housing 402.
  • the front surface of the housing 402 is covered with a transparent cover 404.
  • the headlight 410 includes a low beam, a high beam, or both, and emits a beam Sb for forming a light distribution in front of the vehicle.
  • the pseudo heat light source 420 constitutes the imaging device 100 together with the photodetector 120 and the arithmetic processing device 130.
  • the photodetector 120 and the arithmetic processing unit 130 may be built in the housing 402 or may be provided outside the housing 402.
  • The imaging apparatus 100 is a correlation function image sensor (also referred to as single-pixel imaging) that uses the principle of ghost imaging, and includes a pseudo thermal light source 110 (the pseudo thermal light source 420 in FIG. 1), a photodetector 120, and an arithmetic processing unit 130.
  • the imaging device 100 is also called a quantum radar camera.
  • the pseudo heat light source 110 generates the reference light S1 having the intensity distribution I(x, y) that can be regarded as substantially random, and irradiates the object OBJ. Irradiation of the reference light S1 onto the object OBJ is performed while changing its intensity distribution according to a plurality of M patterns.
  • the pseudo-thermal light source 110 may include, for example, a light source 112 that generates a light S0 having a uniform intensity distribution, and a patterning device 114 that can spatially modulate the intensity distribution I of the light S0.
  • the light source 112 may use a laser, a light emitting diode, or the like.
  • the wavelength and spectrum of the reference light S1 are not particularly limited, and may be white light having a plurality of or continuous spectra, or monochromatic light having a predetermined wavelength.
  • As the patterning device 114, a DMD (Digital Micromirror Device) or a liquid crystal device can be used.
  • The pattern signal PTN (image data) designating the intensity distribution I is given to the patterning device 114 from the arithmetic processing unit 130. Therefore, the arithmetic processing unit 130 knows the intensity distribution I_r of the reference light S1 currently irradiating the object OBJ.
  • the photodetector 120 measures the reflected light from the object OBJ and outputs a detection signal D r .
  • The detection signal D_r is the spatial integral of the light energy (or intensity) incident on the photodetector 120 when the object OBJ is irradiated with the reference light having the intensity distribution I_r. Therefore, a single-pixel photodetector can be used as the photodetector 120.
  • the photodetector 120 outputs a plurality of detection signals D 1 to D M corresponding to a plurality of M intensity distributions I 1 to I M, respectively.
  • the arithmetic processing unit 130 includes a pattern generator 132 and a reconstruction processing unit 134.
  • the arithmetic processing unit 130 can be implemented by a combination of a processor (hardware) such as a CPU (Central Processing Unit), an MPU (Micro Processing Unit), and a microcomputer, and a software program executed by the processor (hardware).
  • the arithmetic processing unit 130 may be a combination of a plurality of processors. Alternatively, the arithmetic processing unit 130 may be composed of only hardware.
  • the intensity distribution of the reference light S1 generated by the pseudo heat light source 110 may be randomly generated each time.
  • a set of a plurality of intensity distributions I 1 to I M may be defined in advance.
  • a set of the plurality of pattern signals PTN 1 to PTN M defining the plurality of intensity distributions I 1 to I M may be stored in advance in a memory (pattern memory) inside the pattern generator 132.
  • the reconstruction processing unit 134 reconstructs the restored image G(x, y) of the object OBJ by correlating the plurality of intensity distributions I 1 to I M and the plurality of detected intensities b 1 to b M.
  • the detection intensities b 1 to b M are based on the detection signals D 1 to D M. The relationship between the detection intensity and the detection signal may be determined in consideration of the type and method of the photodetector 120.
  • the detection signal D r is assumed to represent the amount of light received at a certain time (or a minute time), that is, an instantaneous value.
  • the detection signal D r may be sampled multiple times during the irradiation period, and the detection intensity b r may be an integrated value, an average value, or a maximum value of all sampling values of the detection signal D r .
  • some of all the sampling values may be selected, and the integrated value, average value, or maximum value of the selected sampling values may be used.
  • For example, sampling values ranked x-th to y-th from the maximum may be extracted, sampling values below an arbitrary threshold may be excluded, or sampling values within a range where the signal fluctuation is small may be extracted.
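  • The following is a minimal sketch of several of these options for converting the sampled detection signal D_r into the detection intensity b_r (the function name, parameters, and mode labels are illustrative assumptions, not taken from this document):

```python
import numpy as np

def detection_intensity(samples, mode="average", threshold=None):
    """Convert the samples of the detection signal D_r taken during one irradiation into b_r."""
    samples = np.asarray(samples, dtype=float)
    if threshold is not None:
        samples = samples[samples >= threshold]   # exclude sampling values below a threshold
    if mode == "integral":
        return samples.sum()                      # integrated value
    if mode == "average":
        return samples.mean()                     # average value
    if mode == "maximum":
        return samples.max()                      # maximum value
    raise ValueError(f"unknown mode: {mode}")
```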
  • the output D r of the photodetector 120 can be directly used as the detection intensity b r .
  • the conversion from the detection signal D r to the detection intensity b r may be performed by the arithmetic processing device 130 or may be performed outside the arithmetic processing device 130.
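  • The correlation expression itself does not appear in this text; a standard ghost-imaging formulation consistent with the symbols below (shown here as an illustrative assumption, not necessarily the exact expression of this document) is

$$ G(x, y) = \frac{1}{M} \sum_{r=1}^{M} \bigl( b_r - \langle b \rangle \bigr)\, I_r(x, y), \qquad \langle b \rangle = \frac{1}{M} \sum_{r=1}^{M} b_r $$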
  • Here, I_r is the r-th intensity distribution and b_r is the r-th detected intensity value.
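  • A minimal numerical sketch of this reconstruction under the formulation above (array names, shapes, and the simulated data are illustrative assumptions, not taken from this document):

```python
import numpy as np

def reconstruct(intensities, detections):
    """Ghost-imaging reconstruction: correlate M intensity patterns with M detected intensities.

    intensities: array of shape (M, Y, X) holding I_1 ... I_M
    detections:  array of shape (M,) holding b_1 ... b_M
    """
    fluctuations = detections - detections.mean()              # b_r - <b>
    # weight each pattern by its detected fluctuation and average over the M irradiations
    return np.tensordot(fluctuations, intensities, axes=1) / len(detections)

# toy usage with simulated data standing in for real measurements
rng = np.random.default_rng(0)
I = rng.integers(0, 2, size=(1000, 48, 64)).astype(float)      # 1000 random binary patterns
obj = np.zeros((48, 64)); obj[20:30, 25:40] = 1.0              # hypothetical object reflectivity
b = (I * obj).sum(axis=(1, 2))                                 # simulated single-pixel detector outputs
G = reconstruct(I, b)                                          # restored image G(x, y)
```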
  • the pseudo heat light source 420 is built in the vehicle lamp 400. At least a part of the components (common member) 430 of the headlamp 410 is shared with the pseudo heat light source 420.
  • the above is the configuration of the vehicle lamp 400.
  • The headlight 410 is designed, in terms of its light source and optical system, to irradiate light up to several tens of meters ahead. Therefore, by incorporating the pseudo heat light source 420 of the imaging device in the vehicle lamp 400 and diverting some of the components of the headlight 410 to the pseudo heat light source 420, the reference light S1 can be accurately irradiated onto a distant object. Moreover, since the number of duplicated members is reduced, the overall cost can be reduced.
  • the present invention extends to various devices and methods understood as the block diagram of FIG. 1 or derived from the above description, and is not limited to a specific configuration.
  • more specific configuration examples and examples will be described in order to help understanding of the essence and operation of the invention and to clarify them, not to narrow the scope of the invention.
  • FIG. 3 is a diagram showing a vehicular lamp 400A according to the first embodiment.
  • the pseudo heat light source 420 shares an optical system with the headlight 410.
  • the headlamp 410 includes a light source 412 and a reflector 414.
  • the light source 412 includes a white light emitting diode (or a semiconductor laser) and its lighting circuit.
  • the reflector 414 reflects the light emitted from the light source 412 toward the front of the vehicle.
  • the reflector 414 of the headlight 410 is the common member 430 of FIG. 1 and is shared with the pseudo heat light source 420.
  • The pseudo thermal light source 420 includes a light source 422 and a patterning device 424. The beam whose intensity distribution has been randomized by the patterning device 424 is reflected by the reflector 414 toward the front of the vehicle.
  • the light S0 generated by the light source 422 may be infrared light or ultraviolet light.
  • The photodetector 120 may be configured to be insensitive in the visible wavelength band and sensitive only to the wavelength of the light S0 (reference light S1). Thereby, the sensing by the imaging device 100 is not affected by the headlight.
  • the light S0 generated by the light source 422 may include a single wavelength in the visible range or may be white light.
  • the photodetector 120 is sensitive to both the beam Sb of the headlight 410 and the reference light S1.
  • the influence of the beam Sb on the output of the photodetector 120 may be reduced by arithmetic processing.
  • Since the beam Sb can be regarded as a direct-current component and the reference light S1 as an alternating-current component, the influence of the beam Sb may be reduced by a high-pass filter.
  • Alternatively, an offset process of subtracting an estimated value of the component caused by the beam Sb may be performed.
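  • A sketch of these two suppression approaches applied to a sampled detector output (the filter coefficient, function name, and signal layout are illustrative assumptions):

```python
import numpy as np

def suppress_beam_sb(d, alpha=0.99, sb_estimate=None):
    """Reduce the quasi-DC contribution of the headlamp beam Sb in the sampled detector output d."""
    d = np.asarray(d, dtype=float)
    if sb_estimate is not None:
        return d - sb_estimate                    # offset processing: subtract the estimated Sb component
    # first-order digital high-pass filter: passes the modulated reference-light component, blocks DC
    y = np.zeros_like(d)
    for i in range(1, len(d)):
        y[i] = alpha * (y[i - 1] + d[i] - d[i - 1])
    return y
```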
  • FIG. 4 is a diagram illustrating a vehicular lamp 400B according to the second embodiment. Also in the second embodiment, the pseudo heat light source 420 shares the optical system with the headlight 410.
  • the headlight 410 is a variable light distribution lamp (ADB: Adaptive Driving Beam), and includes a light source 412, a patterning device 416, and a reflector 414.
  • the light source 412 includes a white LED or LD and its lighting circuit.
  • the patterning device 416 is, for example, a DMD, and spatially modulates the intensity distribution of the light emitted from the light source 412 so as to obtain a desired light distribution pattern.
  • the reflector 414 reflects the light flux corresponding to an on-pixel in the reflected light of the patterning device 416 to the front of the vehicle.
  • the patterning device 416 and the reflector 414 are the common member 430 shared with the pseudo heat light source 420.
  • the emitted light S0 of the light source 422 is incident on the patterning device 416 and is randomly modulated, and the reference light S1 is generated.
  • Since the patterning device 416 is shared, it is necessary to suppress mutual influence between the pattern (intensity distribution) of the beam Sb and the pattern of the reference light S1.
  • the light source 422 and the light source 412 may be complementarily turned on.
  • For example, the light source 412 and the light source 422 are alternately turned on in a time-sharing manner; during the lighting period of the light source 412, image data PTNb (light distribution image data) corresponding to the light distribution pattern is given to the patterning device 416, and during the lighting period of the light source 422, random image data PTN_1 to PTN_M may be given to the patterning device 416 (first pattern control).
  • FIG. 5 is a diagram illustrating the first pattern control in the second embodiment.
  • FIG. 6 is a diagram illustrating the second pattern control in the second embodiment. ON indicates an irradiation area defined by the light distribution pattern, and OFF indicates a light shielding area defined by the light distribution pattern.
  • the random image data PTN i has pixel values in which 1 and 0 are randomly distributed.
  • the pattern of FIG. 6 can be generated by calculating the logical product of the random image data PTN i and the light distribution image data PTNb.
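  • A minimal sketch of this second pattern control (the device resolution and the irradiation area of the light distribution pattern are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
ptn_b = np.zeros((480, 640), dtype=np.uint8)                   # light distribution image data PTNb
ptn_b[200:, :] = 1                                             # hypothetical irradiation (ON) area of the beam
ptn_i = rng.integers(0, 2, size=(480, 640), dtype=np.uint8)    # random image data PTN_i

ptn = ptn_i & ptn_b   # logical product: the random pattern confined to the light distribution area
```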
  • FIG. 7 is a diagram illustrating a vehicle lamp 400C according to the third embodiment.
  • In the third embodiment, all the components of the headlight 410 are shared with the pseudo heat light source 420; that is, the headlight 410 itself has the function of the pseudo heat light source 420.
  • the reference light S1 naturally becomes white light.
  • FIG. 8 is a diagram illustrating the first control in the third embodiment. As shown in FIG. 8, the pattern of the reference light S1 may be switched a plurality of times during one sensing period Ts.
  • the second pattern control of the second embodiment may be performed.
  • FIG. 9 is a diagram illustrating the third control in the third embodiment.
  • Some patterning devices such as DMDs are capable of gradation control.
  • corresponding pixel values of the random image data PTN i and the light distribution image data PTNb may be added together and given to the patterning device 416.
  • the intensity distribution of the outgoing beam of the vehicular lamp 400 changes randomly with time with the base level determined by the light distribution image data PTNb as a reference.
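  • A sketch of this additive control, assuming the light distribution image data supplies a base gradation level and the random image data adds a smaller random increment (the specific levels and resolution are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
base = np.full((480, 640), 200, dtype=np.int32)      # base gradation level derived from PTNb
ptn_i = rng.integers(0, 2, size=(480, 640))          # random image data PTN_i (0 or 1)
ptn = np.clip(base + 55 * ptn_i, 0, 255).astype(np.uint8)   # summed pattern for the gradation-capable device
```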
  • the pseudo thermal light source 420 is composed of the combination of the light source 422 and the patterning device 424, but it is not limited thereto.
  • For example, the pseudo heat light source 420 may be configured as an array of a plurality of semiconductor light sources (LEDs (light emitting diodes) or LDs (laser diodes)) arranged in a matrix, in which the ON/OFF state (or brightness) of each semiconductor light source can be controlled.
  • the illumination device 110 is composed of the combination of the light source 112 and the patterning device 114, but it is not limited thereto.
  • For example, the illumination device 110 may be configured as an array of a plurality of semiconductor light sources (LEDs (light emitting diodes) or LDs (laser diodes)) arranged in a matrix, in which the ON/OFF state (or brightness) of each semiconductor light source can be controlled.
  • FIG. 10 is a block diagram of the object identification system 10.
  • the object identification system 10 is mounted on a vehicle such as an automobile or a motorcycle and determines the type (category) of an object OBJ existing around the vehicle.
  • the object identification system 10 includes an imaging device 100 and an arithmetic processing device 40. As described above, the imaging apparatus 100 irradiates the object OBJ with the reference light S1 and measures the reflected light S2 to generate the restored image G of the object OBJ.
  • the arithmetic processing device 40 processes the output image G of the imaging device 100 and determines the position and type (category) of the object OBJ.
  • the classifier 42 of the arithmetic processing device 40 receives the image G as an input and determines the position and type of the object OBJ included in the image G.
  • the classifier 42 is implemented based on the model generated by machine learning.
  • The algorithm of the classifier 42 is not particularly limited; YOLO (You Only Look Once), SSD (Single Shot MultiBox Detector), R-CNN (Region-based Convolutional Neural Network), SPPnet (Spatial Pyramid Pooling), Faster R-CNN, DSSD (Deconvolution-SSD), Mask R-CNN, etc. can be adopted, or an algorithm developed in the future can be adopted.
  • Information about the object OBJ detected by the arithmetic processing unit 40 may be used for light distribution control of the vehicular lamp 200. Specifically, an appropriate light distribution pattern can be generated based on the information on the type and position of the object OBJ generated by the arithmetic processing device 40.
  • Information regarding the object OBJ detected by the arithmetic processing unit 40 may be transmitted to the vehicle-side ECU.
  • the vehicle-side ECU may perform automatic driving based on this information.
  • the above is the configuration of the object identification system 10.
  • By using the imaging device 100, noise resistance is significantly improved. For example, when it is raining or snowing or when traveling in fog, it is difficult to recognize the object OBJ with the naked eye, but even in such situations a restored image G of the object OBJ that is not affected by rain, snow, or fog can be obtained.
  • FIG. 11 is a diagram showing an automobile.
  • the automobile 300 includes vehicle lamps 302L and 302R.
  • the pseudo heat light source 420 is built in at least one of the vehicular lamps 302L and 302R in a mode in which a part of the hardware is shared with the headlight.
  • FIG. 12 is a block diagram showing a vehicle lamp 200 including an object detection system 210.
  • the vehicle lamp 200 constitutes a lamp system 310 together with the vehicle-side ECU 304.
  • the vehicular lamp 200 includes a light source 202, a lighting circuit 204, and an optical system 206. Further, the vehicle lighting device 200 is provided with an object detection system 210.
  • the object detection system 210 corresponds to the object identification system 10 described above, and includes the imaging device 100 and the arithmetic processing device 40.
  • Information about the object OBJ detected by the arithmetic processing unit 40 may be used for light distribution control of the vehicular lamp 200.
  • the lamp-side ECU 208 generates an appropriate light distribution pattern based on the information regarding the type and the position of the object OBJ generated by the arithmetic processing device 40.
  • the lighting circuit 204 and the optical system 206 operate so that the light distribution pattern generated by the lamp-side ECU 208 is obtained.
  • Information regarding the object OBJ detected by the arithmetic processing unit 40 may be transmitted to the vehicle-side ECU 304.
  • the vehicle-side ECU may perform automatic driving based on this information.
  • FIG. 13 is a diagram showing the imaging apparatus 100 according to the second embodiment.
  • the imaging device 100 is a correlation function image sensor that uses the principle of ghost imaging, and includes an illumination device 110, a photodetector 120, and an arithmetic processing device 130.
  • the imaging device 100 is also called a quantum radar camera.
  • the lighting device 110 is a pseudo heat light source, and generates the reference light S1 having the intensity distribution I that can be regarded as substantially random, and irradiates the object OBJ.
  • FIG. 14 is a diagram illustrating the intensity distribution of the reference light S1 according to the second embodiment. In the figure, the part where the intensity is zero is shown in white, and the part where the intensity is not zero is shown in black.
  • the illumination device 110 irradiates the reference light S1 in which the intensity distribution I(x, y) in the irradiation section is substantially random, while switching the section (referred to as an irradiation section) 602_i that irradiates light.
  • the intensity in the sections other than the irradiation section (called non-irradiation section) is zero.
  • Each of the plurality of sections 602_1 to 602_N is irradiated with the reference light S1 having M random intensity distributions; therefore, the total number of irradiations per sensing is M × N.
  • When the i-th section (1 ≤ i ≤ N) is selected, the j-th intensity distribution is denoted I_i,j, and the reference light S1 at that time is denoted S1_i,j.
  • the photodetector 120 measures the reflected light from the object OBJ and outputs a detection signal D.
  • The detection signal D_i,j is the spatial integral of the light energy (or intensity) incident on the photodetector 120 when the object OBJ is irradiated with the reference light having the intensity distribution I_i,j. Therefore, a single-pixel photodetector can be used as the photodetector 120.
  • In response to the reference light S1_i,j (i = 1 to N, j = 1 to M) having the M × N intensity distributions I_1,1 to I_1,M, I_2,1 to I_2,M, ..., I_N,1 to I_N,M, the photodetector 120 outputs M × N detection signals D_i,j (i = 1 to N, j = 1 to M).
  • the order of irradiation is not particularly limited.
  • For example, after a certain irradiation section has been irradiated M times, the next irradiation section may be selected.
  • the order of selecting the irradiation sections is not particularly limited, and the irradiation sections can be selected according to a predetermined rule.
  • For example, the sections in the first row may be selected in order from left to right, and after reaching the rightmost section, the selection may move to the next row.
  • Alternatively, the sections in the first column may be selected in order from top to bottom, and after reaching the bottom, the selection may move to the next column.
  • the illuminator 110 may include, for example, a light source 112 that generates a light S0 having a uniform intensity distribution, and a patterning device 114 that can spatially modulate the intensity distribution of the light S0.
  • the light source 112 may use a laser, a light emitting diode, or the like.
  • the wavelength and spectrum of the reference light S1 are not particularly limited, and may be white light having a plurality of or continuous spectra, or monochromatic light having a predetermined wavelength.
  • As the patterning device 114, a DMD (Digital Micromirror Device) or a liquid crystal device can be used. In this embodiment, the patterning device 114 covers the entire measurement range 600 and has the ability to illuminate the entire measurement range 600 simultaneously; by turning off the pixels of the patterning device 114 that correspond to the non-irradiation sections, a random pattern can be given only to the irradiation section.
  • a pattern signal PTN i,j (image data) designating the intensity distribution I i,j is given to the patterning device 114 from the arithmetic processing unit 130. Therefore, the arithmetic processing unit 130 knows the current position of the irradiation section and the intensity distribution I i,j of the reference light S1.
  • the arithmetic processing unit 130 includes a pattern generator 132 and a reconstruction processing unit 134.
  • the pattern generator 132 may randomly generate the intensity distribution I i,j of the reference light S1 each time.
  • the pattern generator 132 may include a pseudo random signal generator.
  • a set of a plurality of intensity distributions I i,j may be defined in advance.
  • Alternatively, a set of a plurality (for example, M) of intensity distributions I_1 to I_M having the same size as the section 602 may be defined in advance.
  • I 1 to I M may be assigned to the irradiation section in order or randomly.
  • a set of a plurality of pattern signals defining a plurality of intensity distributions I 1 to I M may be held in advance in a memory (pattern memory) inside the pattern generator 132.
  • the arithmetic processing unit 130 can be implemented by a combination of a processor (hardware) such as a CPU (Central Processing Unit), an MPU (Micro Processing Unit), and a microcomputer, and a software program executed by the processor (hardware).
  • the arithmetic processing unit 130 may be a combination of a plurality of processors. Alternatively, the arithmetic processing unit 130 may be composed of only hardware.
  • The reconstruction processing unit 134 reconstructs, for each of the plurality of sections 602_1 to 602_N (602_i), a restored image G_i of the portion of the object included in that section by correlating the plurality of detected intensities b_i,1 to b_i,M with the intensity distributions I_i,1 to I_i,M of the reference light S1_i,1 to S1_i,M.
  • the detection intensities b i,1 to b i,M are based on the detection signals D i,1 to D i,M .
  • the relationship between the detection intensity b i,j and the detection signal D i,j may be determined in consideration of the type and method of the photodetector 120.
  • the detection signal D i,j is assumed to represent the amount of received light at a certain time (or a minute time), that is, an instantaneous value.
  • the detection signal D i,j may be sampled a plurality of times during the irradiation period, and the detection intensity b i,j may be an integral value, an average value, or a maximum value of all sampling values of the detection signal D i,j .
  • some of all the sampling values may be selected, and the integrated value, average value, or maximum value of the selected sampling values may be used.
  • For example, sampling values ranked x-th to y-th from the maximum may be extracted, sampling values below an arbitrary threshold may be excluded, or sampling values within a range where the signal fluctuation is small may be extracted.
  • the output D i,j of the photodetector 120 can be directly used as the detection intensity b i,j .
  • the conversion from the detection signal D i,j to the detection intensity b i,j may be performed by the arithmetic processing device 130 or may be performed outside the arithmetic processing device 130.
  • The correlation function of Expression (2) is used to restore the image G_i of the i-th (1 ≤ i ≤ N) section 602_i.
  • Here, I_i,j is the j-th (1 ≤ j ≤ M) intensity distribution and b_i,j is the j-th detected intensity value.
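  • A sketch of this per-section reconstruction, applying the same correlation to each section and tiling the partial images G_i into the full restored image (the array layout and the left-to-right, top-to-bottom section order are assumptions):

```python
import numpy as np

def reconstruct_section(I_sec, b_sec):
    """Correlation for one section: I_sec has shape (M, y, x), b_sec has shape (M,)."""
    return np.tensordot(b_sec - b_sec.mean(), I_sec, axes=1) / len(b_sec)

def reconstruct_full(I_all, b_all, rows, cols):
    """I_all: (N, M, y, x) patterns per section; b_all: (N, M) detected intensities; N = rows * cols."""
    N, M, y, x = I_all.shape
    G = np.zeros((rows * y, cols * x))
    for i in range(N):                              # restore each section independently
        r, c = divmod(i, cols)                      # assumed left-to-right, top-to-bottom section order
        G[r * y:(r + 1) * y, c * x:(c + 1) * x] = reconstruct_section(I_all[i], b_all[i])
    return G
```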
  • the number of calculations in this imaging apparatus 100 is as follows.
  • the number of pixels in the entire measurement range is X ⁇ Y, and the number of pixels in the horizontal and vertical directions in one section is x and y.
  • X × Y = (x × y) × N.
  • When the intensity distributions of the reference light are stored in the memory in the pattern generator 132, only the intensity distributions for one section (x × y) need to be held instead of those for the entire irradiation range (X × Y).
  • the memory capacity can be reduced.
  • A reduction in the number of calculations means that the calculation time can be shortened when a processor of the same speed is used.
  • a slower (and thus cheaper) processor can be employed to finish the process in the same amount of time.
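  • As an illustration of this saving (a derivation applying the calculation count M × (X × Y)² quoted earlier for the undivided case, together with X × Y = (x × y) × N): restoring each of the N sections independently gives a total of

$$ N \cdot M \cdot (x \cdot y)^2 = N \cdot M \cdot \left( \frac{X \cdot Y}{N} \right)^2 = \frac{M \cdot (X \cdot Y)^2}{N}, $$

i.e. the number of calculations is reduced by a factor of N, while the number of irradiations per sensing grows from M to M × N.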
  • FIG. 15 is a diagram for explaining the trade-off between calculation time and measurement time.
  • The number N of sections may be set so that the decrease Δ1 in the calculation time due to the division is larger than the increase Δ2 in the measurement time. As a result, the frame rate required for in-vehicle use can be realized.
  • the number of irradiations M is the same for each section, but the number of irradiations M may be different for each section.
  • The larger the irradiation number M_i, the more accurately the image can be restored, but depending on the position of the section, such accuracy may not be required. Therefore, by optimizing the irradiation number for each section, the number of calculations (calculation time) and the measurement time can be adjusted for each section.
  • FIGS. 16(a) and 16(b) are diagrams showing modified examples of the sections. As shown in FIG. 16A, the measurement range may be divided into horizontally long sections; alternatively, it may be divided into vertically long sections.
  • In the above description, the sizes (numbers of pixels) of the plurality of sections are the same, but this need not be the case. As shown in FIG. 16B, the number of pixels may differ from section to section.
  • the patterning device 114 that covers the entire measurement range 600 is used, but it is not limited thereto.
  • Alternatively, an illumination device 110 having the ability to irradiate one section may be provided, and its emitted light may be scanned in the horizontal direction or the vertical direction using a movable mirror.
  • Although the illumination device 110 is configured by the combination of the light source 112 and the patterning device 114 in the second embodiment, the present invention is not limited to this.
  • For example, the illumination device 110 may be configured as an array of a plurality of semiconductor light sources (LEDs (light emitting diodes) or LDs (laser diodes)) arranged in a matrix, in which the ON/OFF state (or brightness) of each semiconductor light source can be controlled.
  • The imaging device 100 can be used for the object identification system 10 of FIG. 10.
  • By using the imaging device 100 described in the second embodiment as the sensor of the object identification system 10, the following advantages can be obtained.
  • The use of the imaging device 100, that is, the quantum radar camera, significantly improves noise resistance. For example, when it is raining or snowing or when traveling in fog, it is difficult to recognize the object OBJ with the naked eye, but by using the imaging device 100, a restored image G of the object OBJ that is not affected by rain, snow, or fog can be obtained.
  • the calculation range can be reduced by dividing the measurement range into multiple sections and restoring the image for each section. This makes it possible to increase the frame rate or select an inexpensive processor as the arithmetic processing unit.
  • the number N of sections may be adaptively changed according to the traveling environment.
  • The imaging device 100 described in the second embodiment can be mounted on the vehicle shown in FIG. 11 and may be incorporated in the vehicle lamp 200 shown in FIG. 12.
  • FIG. 17 is a diagram showing the imaging apparatus 100 according to the third embodiment.
  • the imaging device 100 is a correlation function image sensor that uses the principle of ghost imaging, and includes an illumination device 110, a photodetector 120, and an arithmetic processing device 130.
  • the imaging device 100 is also called a quantum radar camera.
  • the lighting device 110 is a pseudo heat light source, and generates the reference light S1 having the intensity distribution I(x, y) that can be regarded as substantially random, and irradiates the object OBJ. Irradiation of the reference light S1 onto the object OBJ is performed while changing its intensity distribution according to a plurality of M patterns.
  • the lighting device 110 includes a light source 112 and a patterning device 114.
  • the light source 112 generates the light S0 having a uniform intensity distribution.
  • the light source 112 may use a laser, a light emitting diode, or the like.
  • the wavelength and spectrum of the reference light S1 are not particularly limited, and may be white light having a plurality of or continuous spectra, or monochromatic light having a predetermined wavelength.
  • the wavelength of the reference light S1 may be infrared or ultraviolet.
  • the patterning device 114 has a plurality of pixels arranged in a matrix, and the intensity distribution I of light can be spatially modulated based on the combination of ON and OFF of the plurality of pixels.
  • A pixel in the ON state is referred to as an ON pixel, and a pixel in the OFF state is referred to as an OFF pixel. Note that, in the following description, for ease of understanding, each pixel takes only the two values ON (1) and OFF (0), but the present invention is not limited to this and intermediate gradations may be used.
  • As the patterning device 114, a reflective DMD (Digital Micromirror Device) or a transmissive liquid crystal device can be used.
  • a pattern signal PTN (image data) generated by the pattern generator 116 is applied to the patterning device 114.
  • the patterning device 114 is assumed to be a DMD.
  • the photodetector 120 measures the reflected light from the object OBJ and outputs a detection signal D r .
  • The detection signal D_r is the spatial integral of the light energy (or intensity) incident on the photodetector 120 when the object OBJ is irradiated with the reference light having the intensity distribution I_r. Therefore, a single-pixel photodetector can be used as the photodetector 120.
  • the photodetector 120 outputs a plurality of detection signals D 1 to D M corresponding to a plurality of M intensity distributions I 1 to I M, respectively.
  • the arithmetic processing unit 130 includes a reconstruction processing unit 134.
  • the reconstruction processing unit 134 reconstructs the restored image G(x, y) of the object OBJ by correlating the plurality of intensity distributions I 1 to I M and the plurality of detected intensities b 1 to b M.
  • the detection intensities b 1 to b M are based on the detection signals D 1 to D M.
  • the relationship between the detection intensity and the detection signal may be determined in consideration of the type and method of the photodetector 120.
  • the detection signal D r is assumed to represent the amount of light received at a certain time (or a minute time), that is, an instantaneous value.
  • the detection signal D r may be sampled multiple times during the irradiation period, and the detection intensity b r may be an integrated value, an average value, or a maximum value of all sampling values of the detection signal D r .
  • some of all the sampling values may be selected, and the integrated value, average value, or maximum value of the selected sampling values may be used.
  • For example, sampling values ranked x-th to y-th from the maximum may be extracted, sampling values below an arbitrary threshold may be excluded, or sampling values within a range where the signal fluctuation is small may be extracted.
  • the output D r of the photodetector 120 can be directly used as the detection intensity b r .
  • the conversion from the detection signal D r to the detection intensity b r may be performed by the arithmetic processing device 130 or may be performed outside the arithmetic processing device 130.
  • the arithmetic processing unit 130 can be implemented by a combination of a processor (hardware) such as a CPU (Central Processing Unit), an MPU (Micro Processing Unit), and a microcomputer, and a software program executed by the processor (hardware).
  • the arithmetic processing unit 130 may be a combination of a plurality of processors. Alternatively, the arithmetic processing unit 130 may be composed of only hardware.
  • the pattern generator 116 may be mounted inside the arithmetic processing unit 130.
  • FIGS. 18A to 18C are views for explaining the pixels of the DMD that is the patterning device 114.
  • the DMD is an array of a plurality of pixels PIX arranged in a matrix of m rows and n columns.
  • each pixel PIX is a square mirror, and can be tilted in the ON direction and the OFF direction about a hinge provided diagonally as an axis.
  • the patterning device 114 is configured so that all pixels can be independently turned on and off.
  • In the following, the shape of the matrix is shown in a simplified form as in the figure.
  • the pattern generator 116 controls the intensity distribution of the reference light (that is, the pattern signal PTN) in units of the pixel block B including at least one pixel, and the pixel block B is variable.
  • the pixel block B can be understood as a set of continuous (adjacent) ON pixels (or a set of OFF pixels, or a set of ON pixels and OFF pixels). In Example 3.1, the size of the pixel block B is variable.
  • FIGS. 19A to 19D are diagrams showing pixel blocks B having different sizes. The size can be grasped as the number of pixels (that is, the area) included in the pixel block B.
  • FIGS. 19A to 19D show pixel blocks B_1×1 to B_4×4 of 1 × 1, 2 × 2, 3 × 3, and 4 × 4 pixels, respectively. Pixels included in the same pixel block B are in the same state (ON or OFF).
  • FIGS. 20A and 20B are diagrams showing examples of pattern signals (image data) PTN based on pixel blocks B having different sizes.
  • The pixel block B_2×2 of 2 × 2 pixels in FIG. 19B is applied, and ON/OFF is controlled for each pixel block B_2×2.
  • the pattern signal PTN changes in M ways per sensing.
  • the hatched pixel PIX is an ON pixel, and the pixel PIX that is not hatched is an OFF pixel.
  • The pixel block B_4×4 of 4 × 4 pixels in FIG. 19D is applied, and ON/OFF is controlled for each pixel block B_4×4.
  • The number M of pattern signals PTN may differ according to the size of the pixel block. Generally, by increasing the size of the pixel block and thereby decreasing the number of pixel blocks, the number M of patterns per sensing can be reduced.
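  • One way to generate such block-based random patterns is to draw a coarse random map and expand it so that every pixel in a k × k block shares one ON/OFF state (a sketch; the block size k and the device resolution are illustrative assumptions):

```python
import numpy as np

def block_pattern(rows, cols, k, rng):
    """Random ON/OFF pattern controlled in units of k x k pixel blocks B (rows and cols divisible by k)."""
    coarse = rng.integers(0, 2, size=(rows // k, cols // k), dtype=np.uint8)
    return np.kron(coarse, np.ones((k, k), dtype=np.uint8))   # every pixel in a block shares one state

rng = np.random.default_rng(0)
ptn_2x2 = block_pattern(480, 640, 2, rng)   # pattern based on pixel blocks B_2x2
ptn_4x4 = block_pattern(480, 640, 4, rng)   # coarser pattern based on pixel blocks B_4x4
```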
  • FIG. 21 is a diagram showing a modification of the pattern control.
  • The arrangement of the 4 × 4-pixel pixel blocks B_4×4 is not perfectly aligned in the horizontal direction; in some rows it is offset by two pixels in the horizontal direction.
  • The above is the pattern control according to Example 3.1. This pattern control can be understood as dynamically changing the effective resolution of the patterning device 114.
  • the calculation amount in the reconstruction processing unit 134 increases according to the resolution. However, in a situation where spatial resolution is not so required, the calculation amount can be reduced by increasing the size of the pixel block B.
  • Example 3.2 In Example 3.1, pixel blocks B of the same size were used within one pattern, but this is not a limitation.
  • the layout of pixel blocks having different sizes is defined in advance and selected according to the running scene.
  • 22A and 22B are diagrams illustrating the layout of the pixel blocks B having different sizes according to the running scene.
  • In FIG. 22A, pixel blocks B_S of smaller size are arranged in the lower region, and pixel blocks B_L of larger size are arranged in the upper region.
  • Since the upper side corresponds to the sky, a space in which no vehicle or pedestrian exists, the size of the pixel block B is increased there to reduce the resolution.
  • the lower side corresponds to the road surface and there is a high possibility that an important object (or road surface sign) is present. Therefore, the size of the pixel block B is reduced to improve the resolution.
  • In FIG. 22B, pixel blocks B_S of smaller size are arranged closer to the center, and pixel blocks B_L of larger size are arranged closer to the outer periphery.
  • The vanishing point is located near the center of the screen; a distant oncoming vehicle appears from the vanishing point, appears small at first, and becomes larger as it approaches.
  • A plurality of layouts are defined in advance, and one suitable for the running scene is adaptively selected from them; the layout may also be changed dynamically.
  • FIG. 23 is a diagram illustrating a dynamic layout of pixel blocks B having different sizes.
  • In the first frame, a pattern of pixel blocks B_2×2 of uniform size (2 × 2 pixels) is used.
  • The position of the object OBJ is estimated from the image restored in this frame 1.
  • In the following frame, pixel blocks B of smaller size may be arranged in the area where the object OBJ exists, and the size of the pixel block B may be increased with increasing distance from that area.
  • Modifications related to Examples 3.1 to 3.3
  • Modification 1: Although the pixel block B has a square shape in the above description, the shape is not limited thereto.
  • FIGS. 24A to 24C are diagrams showing pixel blocks B according to Modification 1. As shown in these figures, the pixel block B is a horizontally long rectangle, and its size changes dynamically.
  • Modification 2 In Examples 3.1 to 3.3, the vertical direction and the horizontal direction are changed on the same scale, but only the number of pixels in the vertical direction or only the number of pixels in the horizontal direction may be changed.
  • 25A to 25D are diagrams showing a pixel block B according to the second modification. In this example, the pixel block B has a variable number of pixels in the horizontal direction.
  • Example 3.4 In Embodiments 3.1 to 3.3, the case where the size of the pixel block B is changed has been described. In Example 3.4, the shape of the pixel block B is adaptively changed.
  • FIGS. 26(a) to 26(d) are diagrams showing pixel blocks B having different shapes.
  • FIG. 26(a) shows a pixel block B_X that is long in the horizontal direction (X direction), FIG. 26(b) shows a pixel block B_Y that is long in the vertical direction (Y direction), FIG. 26(c) shows a diagonally long pixel block B_XY, and FIG. 26(d) shows a basic square pixel block B_S.
  • The pixel blocks B in FIGS. 26(a) to 26(d) have the same size.
  • 27A to 27C are diagrams showing an example of the pattern signal PTN based on pixel blocks having different shapes. At least one of the length and the position of the pixel block B is randomly determined.
  • FIG. 27A shows an example of the pattern signal PTN_X using horizontally long pixel blocks B_X, FIG. 27B shows an example of the pattern signal PTN_Y using vertically long pixel blocks B_Y, and FIG. 27C shows an example of the pattern signal PTN_XY using diagonal pixel blocks B_XY.
  • For an object moving in the horizontal direction, when the horizontally long pixel block B_X is used instead of the square pixel block B_S, the effective resolution in the horizontal direction decreases, so the sharpness of the image in the horizontal direction and the detection accuracy of the horizontal position decrease; on the other hand, the capture time (in other words, the exposure time) becomes longer and the detected intensity D becomes larger, so the S/N ratio increases and detection becomes easier (that is, the sensitivity increases).
  • Conversely, for an object moving in the vertical direction, the capture time is shorter than when the square pixel block B_S is used and the detection sensitivity decreases, while the vertical resolution is improved and the detection accuracy of the vertical position increases.
  • the detection sensitivity can be increased for an object moving in the direction in which the pixel block B extends, and a sharp image can be obtained or the position detection accuracy can be increased for an object moving in a direction perpendicular to the detection direction.
  • FIGS. 28A and 28B are views for explaining sensing based on the pattern signal PTN having the pixel block B having a characteristic shape.
  • In the sensing based on the pattern signal PTN_S including pixel blocks B_S of a fixed square shape of 5 × 5 pixels, the position of the tip of the object OBJ is determined to be X′, which deviates from the actual position X of the tip of the object OBJ.
  • the shape of the pixel block B is switched in order to positively capture a certain object OBJ, but this is not the only option, and in order to erase a specific object OBJ
  • the shape of the block B may be used. For example, it can be said that rain and snow are noise for an in-vehicle sensing device and there is no need to restore an image. Since the moving directions of rain and snow are constant, optimizing the shape of the pixel block B makes it easier to eliminate the influence of rain and snow.
  • a pixel block B having a shape that is short in the vertical direction (vertical direction), in other words, long in the horizontal direction, is suitable.
  • the light of a specific pixel block reaches the object OBJ on the other side of the rain and returns to the photodetector 120 on the front side of the rain,
  • the probability that the light of a specific pixel block will be significantly affected (blocked) by the raindrops is reduced. That is, since the influence of rain on each pixel is made uniform, the process of removing the influence of rain (noise canceling) becomes easy.
(Example 3.5)
In Examples 3.1 to 3.4 described above, all the pixels included in the same pixel block B are ON (or OFF) together. In Example 3.5, by contrast, the same block contains both ON pixels and OFF pixels; that is, the block consists of two or more ON pixels and OFF pixels arranged in a predetermined manner. Such a block is called a pattern block. In Example 3.5, the illumination device 110 defines the intensity distribution by a combination of pattern blocks.
FIGS. 29A to 29D illustrate the pattern block PB according to Example 3.5, showing distributions (patterns) of ON pixels and OFF pixels in the pattern block PB. In FIG. 29A, the OFF pixels are arranged along the four sides of the pattern block PB, and in FIG. 29B, the OFF pixels are arranged along two adjacent sides of the pattern block PB. In FIG. 29C, the ON pixels are arranged so as to cross diagonally, and in FIG. 29D, the ON pixels are arranged so as to cross vertically and horizontally.
FIGS. 30A and 30B show examples of pattern signals based on combinations of pattern blocks: FIG. 30A is an example of a pattern signal formed with the pattern block of FIG. 29A, and FIG. 30B is an example formed with the pattern block of FIG. 29D. FIGS. 30A and 30B have the same arrangement of the pattern blocks that are turned on. A minimal sketch of assembling a pattern from such blocks is given after this passage.
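The following is a minimal sketch, under assumed block sizes and an assumed tiling rule, of how an intensity distribution could be assembled from pattern blocks in the spirit of Example 3.5; the 4×4 block with OFF pixels along its four sides loosely mirrors FIG. 29A, and the actual layouts are those of FIGS. 29 and 30.

```python
import numpy as np

def make_pattern_from_blocks(n_blocks_v, n_blocks_h, block, lighting_rate=0.5, rng=None):
    """Tile a fixed ON/OFF pattern block over randomly selected block positions.

    `block` is a small 0/1 array (the pattern block PB).  For each block position
    we randomly decide whether the block is used (its internal ON/OFF layout is
    copied in) or left entirely OFF, mirroring FIG. 30 where the arrangement of
    the active pattern blocks defines the pattern signal.
    """
    rng = np.random.default_rng() if rng is None else rng
    bh, bw = block.shape
    pattern = np.zeros((n_blocks_v * bh, n_blocks_h * bw), dtype=np.uint8)
    active = rng.random((n_blocks_v, n_blocks_h)) < lighting_rate
    for i in range(n_blocks_v):
        for j in range(n_blocks_h):
            if active[i, j]:
                pattern[i * bh:(i + 1) * bh, j * bw:(j + 1) * bw] = block
    return pattern

# A 4x4 pattern block whose OFF pixels run along the four sides (cf. FIG. 29A).
pb = np.zeros((4, 4), dtype=np.uint8)
pb[1:3, 1:3] = 1
ptn = make_pattern_from_blocks(16, 32, pb)
```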
By using pattern blocks, the spatial incoherence of the reference light can be improved, as described below. FIGS. 31A and 31B illustrate this improvement. As shown in FIG. 31A, a light flux emitted from a light source travels with a certain divergence angle. In a microscope that observes the near field, the spread of the luminous flux of each pixel does not matter. The vehicle-mounted imaging apparatus 100, however, needs to detect objects in the far field, so the spread of the light flux causes a problem: in FIG. 31A, the two light fluxes emitted from two adjacent ON regions (or pixels) A and B of the illumination device 110 overlap at the position of an object OBJ far from the illumination device 110, and such overlapping of the light fluxes reduces the spatial incoherence.
With pattern blocks such as those of FIGS. 29A and 29B, the number of consecutive ON pixels can be limited to two, which means that an OFF pixel is inserted between two adjacent ON regions. As a result, as shown in FIG. 31B, the two adjacent ON regions A and B are spatially separated, so the overlapping of the light fluxes is reduced even for the reference light that reaches the object OBJ, and the spatial incoherence is improved. When the pattern block B of FIG. 29C is used, the number of consecutive ON pixels can likewise be limited to one or two in the horizontal and vertical directions.
When the object OBJ is distant, the pattern blocks B shown in FIGS. 29A to 29C may be used, and when the object OBJ is close, a normal pattern block (or pixel block) may be used.
An intensity distribution that improves the spatial incoherence may also be generated on the basis of a predetermined constraint condition, without using the pattern block PB. FIGS. 32(a) and 32(b) show examples of such intensity distributions, in which the ON/OFF states of the pixels are determined randomly under the constraint that adjacent pixels are not both turned ON. In FIG. 32A, adjacency of ON pixels in the vertical, horizontal, and diagonal directions is prohibited; in FIG. 32B, adjacency of ON pixels in the diagonal direction is allowed, while adjacency in the vertical and horizontal directions is prohibited. A code sketch of this constrained generation follows.
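A sketch of the constraint-based generation of FIG. 32 follows; the greedy scan order, the nominal lighting rate, and NumPy are assumptions, and only the vertical/horizontal adjacency ban of FIG. 32B is implemented.

```python
import numpy as np

def constrained_random_pattern(height, width, lighting_rate=0.25, rng=None):
    """Randomly turn pixels ON such that no two ON pixels are vertically or
    horizontally adjacent (diagonal adjacency is allowed, as in FIG. 32B).

    Because candidate ON pixels next to an already-ON pixel are rejected, the
    achieved lighting rate ends up somewhat below the nominal lighting_rate.
    """
    rng = np.random.default_rng() if rng is None else rng
    ptn = np.zeros((height, width), dtype=np.uint8)
    for y in range(height):
        for x in range(width):
            if rng.random() >= lighting_rate:
                continue
            up = ptn[y - 1, x] if y > 0 else 0
            left = ptn[y, x - 1] if x > 0 else 0
            if up == 0 and left == 0:   # keep adjacent ON regions spatially separated
                ptn[y, x] = 1
    return ptn

ptn = constrained_random_pattern(64, 128)
```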
FIGS. 33A to 33D illustrate pattern control with the lighting rate as a constraint condition. The patterns in FIGS. 33A to 33D have different lighting rates (the ratio of the number of ON pixels to the total number of pixels) of 20%, 40%, 60%, and 80%, respectively. Each pattern can be generated as a pseudo-random signal (PRBS) whose mark ratio corresponds to the lighting rate.
Increasing the lighting rate increases the amount of light, so objects farther away can be sensed; alternatively, an object having a lower reflectance or a smaller reflecting area can be detected. By increasing the lighting rate, the amount of reflected light can be increased even in a dense-fog environment where the light attenuation is high, and the detection sensitivity can thus be raised. Conversely, the lighting rate may be reduced when detecting an object with high reflectance or a large object.
Furthermore, by dynamically controlling the lighting rate according to the traveling environment, the visibility for the driver can be improved, attention and warnings can be given to other traffic participants, and glare toward preceding vehicles, oncoming vehicles, and pedestrians can be reduced. By varying the lighting rate over time, the reference light S1 can be made to blink in a pseudo manner, which can serve as a caution or a warning to the driver of the own vehicle and to other traffic participants.
In the above description, the lighting rate is defined for all pixels, but the pixels may be divided into a plurality of areas and the lighting rate defined for each area. FIGS. 34(a) and 34(b) illustrate this control of the lighting rate. For example, if the target lighting rate is 50% and a PRBS is generated with a mark ratio of 50% over all pixels, the ON pixels may happen to concentrate in the upper half and the OFF pixels in the lower half, resulting in uneven brightness. As shown in FIG. 34(a), such unevenness can be reduced by generating a PRBS with a mark ratio of 50% for each of the upper half region and the lower half region and defining the intensity distribution accordingly. The lighting rate may also be designated independently for each area; for example, the lighting rate can be lowered in an area where an oncoming vehicle or a preceding vehicle exists. A sketch of such per-area control follows.
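The per-area lighting-rate control of FIG. 34 can be sketched as follows; the two-band split, the specific rates, and NumPy are assumptions for illustration.

```python
import numpy as np

def pattern_with_area_rates(height, width, area_rates, rng=None):
    """Generate a random ON/OFF pattern whose lighting rate is set per horizontal band.

    `area_rates` is a list of (row_start, row_end, rate) tuples.  Generating the
    pseudo-random bits band by band keeps the ON pixels evenly spread, and a low
    rate can be assigned to a band containing an oncoming or preceding vehicle.
    """
    rng = np.random.default_rng() if rng is None else rng
    ptn = np.zeros((height, width), dtype=np.uint8)
    for row_start, row_end, rate in area_rates:
        band = rng.random((row_end - row_start, width)) < rate
        ptn[row_start:row_end, :] = band.astype(np.uint8)
    return ptn

# 50% in the upper half, 20% in the lower half (e.g. where a preceding vehicle is).
ptn = pattern_with_area_rates(64, 128, [(0, 32, 0.5), (32, 64, 0.2)])
```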
In the embodiments described above, the illuminating device 110 is configured by the combination of the light source 112 and the patterning device 114, but the configuration is not limited thereto. For example, the illuminating device 110 may be configured as an array of semiconductor light sources (LEDs (light emitting diodes) or LDs (laser diodes)) arranged in a matrix, with the ON/OFF state (or the luminance) of each semiconductor light source individually controllable.
This imaging device 100 can be used for the object identification system 10 of FIG. 10. By using the imaging device 100 described in the third embodiment as the sensor of the object identification system 10, the following advantages are obtained. Using the imaging device 100, that is, the quantum radar camera, significantly increases the noise resistance. For example, when driving in rain, snow, or fog, it is difficult to recognize the object OBJ with the naked eye, but by using the imaging device 100, a restored image G of the object OBJ can be obtained without being affected by the rain, snow, or fog.
The detection targets of the vehicle-mounted imaging device 100 are various: cars, people, motorcycles and bicycles, structures, animals, and plants. The situation in which the imaging device is used also changes greatly depending on the traveling environment, such as the weather, the time of day, the road, and the traveling speed. Moreover, the imaging device itself moves, the objects also move, and their relative movement directions vary; the adaptive control of the pixel blocks described in this embodiment is suited to such conditions.
The imaging device 100 described in the third embodiment can be mounted on the automobile shown in FIG. 11 and may be built into the vehicle lamp shown in FIG. 12.
(Embodiment 4)
(Outline of Embodiment 4)
Embodiment 4 described below relates to an imaging apparatus using the principle of ghost imaging. The imaging apparatus includes an illumination that irradiates an object with reference light while changing its intensity distribution in M ways, a photodetector that measures the reflected light from the object for each of the plurality of intensity distributions I_1 to I_M, and an arithmetic processing unit that reconstructs a restored image of the object by taking the correlation between the plurality of intensity distributions I_1 to I_M and a plurality of detection intensities b_1 to b_M based on the output of the photodetector.
The plurality of intensity distributions I_1 to I_M can be determined by the following processing:
(i) model the transfer characteristics of the path from the illumination through the object to the photodetector;
(ii) define a reference object and a corresponding reference image;
(iii) give initial values to the plurality of intensity distributions I_1 to I_M;
(iv) based on the transfer characteristics, calculate estimated values b̂_1 to b̂_M of the detection intensities b_1 to b_M obtained when the reference object is irradiated with reference light having each of the intensity distributions I_1 to I_M;
(v) reconstruct a restored image of the reference object by correlating the plurality of intensity distributions I_1 to I_M with the plurality of estimated values b̂_1 to b̂_M;
(vi) modify each of the intensity distributions I_1 to I_M so that the error between the restored image and the reference image becomes small; and
(vii) repeat steps (iv) to (vi).
With this approach, the number of irradiations can be reduced by defining the reference image according to the assumed subject and optimizing the patterns. A plurality of N sets (N ≥ 2) of reference objects and reference images may be defined, in which case the versatility can be improved. The error may be represented by the objective function F(I) of Expression (4), where W is the width of the image, H is the height of the image, T_i(x, y) is the i-th reference image, and G_i(x, y, I) is the i-th restored image.
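Expression (4) itself is not reproduced in this excerpt. With the symbols defined above, one natural squared-error form that is consistent with the description (an assumed reconstruction, not necessarily the exact published expression) is:

F(I) = \sum_{i=1}^{N} \sum_{x=1}^{W} \sum_{y=1}^{H} \left( G_i(x, y, I) - T_i(x, y) \right)^2

Minimizing F(I) over the set I of intensity distributions then drives every restored image toward its reference image simultaneously.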
A plurality of sets of the intensity distributions I_1 to I_M may be prepared, and one set may be selectively used according to the traveling environment. As a result, the image quality can be improved compared with the case where the same set of intensity distributions is always used in various traveling environments.
FIG. 35 shows the imaging apparatus 100 according to the fourth embodiment. The imaging apparatus 100 is a correlation-function image sensor that uses the principle of ghost imaging, and includes an illumination 110, a photodetector 120, and an arithmetic processing unit 130. The imaging apparatus 100 is also called a quantum radar camera.
The illumination 110 is a pseudo-thermal light source; it generates reference light S1 having an intensity distribution I(x, y) that can be regarded as substantially random and irradiates the object OBJ with it. The object OBJ is irradiated with the reference light S1 while the intensity distribution is changed according to a plurality of M patterns. The illumination 110 may include, for example, a light source 112 that produces light S0 having a uniform intensity distribution, and a patterning device 114 capable of spatially modulating the intensity distribution I of this light S0. A laser, a light-emitting diode, or the like may be used as the light source 112. The wavelength and spectrum of the reference light S1 are not particularly limited; the light may be white light having a plurality of spectral components or a continuous spectrum, or monochromatic light having a predetermined wavelength. A DMD (Digital Micromirror Device) or a liquid crystal device can be used as the patterning device 114. The pattern signal PTN (image data) designating the intensity distribution I is given to the patterning device 114 from the arithmetic processing unit 130; the arithmetic processing unit 130 therefore knows the intensity distribution I_r of the reference light S1 currently irradiating the object OBJ.
The photodetector 120 measures the reflected light from the object OBJ and outputs a detection signal D_r. The detection signal D_r is the spatial integral of the light energy (or intensity) incident on the photodetector 120 when the object OBJ is irradiated with the reference light having the intensity distribution I_r, so a single-pixel photodetector can be used as the photodetector 120. The photodetector 120 outputs a plurality of detection signals D_1 to D_M corresponding to the M intensity distributions I_1 to I_M, respectively.
The arithmetic processing unit 130 includes a pattern generator 132 and a reconstruction processing unit 134. In general ghost imaging, the intensity distribution of the reference light S1 generated by the illumination 110 is generated randomly; in the present embodiment, however, a set of a plurality of predetermined intensity distributions I_1 to I_M is used. A set of pattern signals PTN_1 to PTN_M defining the intensity distributions I_1 to I_M is therefore held in advance in a memory (pattern memory) inside the pattern generator 132.
The reconstruction processing unit 134 reconstructs the restored image G(x, y) of the object OBJ by correlating the plurality of intensity distributions I_1 to I_M with the plurality of detection intensities b_1 to b_M. The detection intensities b_1 to b_M are based on the detection signals D_1 to D_M, and the relationship between the detection intensity b and the detection signal D may be determined in consideration of the type and method of the photodetector 120.
Suppose that reference light having a certain intensity distribution I_r is emitted over a certain irradiation period, and that the detection signal D_r represents the amount of light received at a certain time (or over a minute time), that is, an instantaneous value. In this case, the detection signal D_r may be sampled multiple times during the irradiation period, and the detection intensity b_r may be the integrated value, the average value, or the maximum value of all sampled values of D_r. Alternatively, some of the sampled values may be selected, and the integrated value, average value, or maximum value of the selected samples may be used; for the selection, the x-th to y-th values counted from the maximum may be extracted, sampled values lower than an arbitrary threshold may be excluded, or sampled values in a range where the signal fluctuation is small may be extracted.
When a device whose exposure time can be set, such as a camera, is used as the photodetector 120, the output D_r of the photodetector 120 can be used directly as the detection intensity b_r. The conversion from the detection signal D_r to the detection intensity b_r may be performed by the arithmetic processing unit 130 or outside the arithmetic processing unit 130. A code sketch of this conversion follows.
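The conversion from sampled detection signals to a single detection intensity described above can be sketched as follows; the array-based interface, the top-k selection, and NumPy are illustrative assumptions rather than details of the embodiment.

```python
import numpy as np

def detection_intensity(samples, mode="integral", top_k=None, threshold=None):
    """Collapse the samples of the detection signal D_r taken during one
    irradiation period into a single detection intensity b_r.

    mode selects the integral (sum), average, or maximum of the retained samples.
    Optionally keep only the top_k largest samples, or drop samples below a
    threshold, before collapsing, as described for the photodetector 120.
    """
    s = np.asarray(samples, dtype=float)
    if threshold is not None:
        s = s[s >= threshold]
    if top_k is not None:
        s = np.sort(s)[-top_k:]
    if s.size == 0:
        return 0.0
    if mode == "integral":
        return float(np.sum(s))
    if mode == "average":
        return float(np.mean(s))
    if mode == "maximum":
        return float(np.max(s))
    raise ValueError("mode must be 'integral', 'average', or 'maximum'")

# Example: average of the 8 largest samples taken during one pattern's irradiation.
b_r = detection_intensity(np.random.default_rng(0).random(64), mode="average", top_k=8)
```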
The above is the basic configuration of the imaging apparatus 100. In the present embodiment, the plurality of intensity distributions I_1 to I_M are determined in advance using a computer, as described next.
FIG. 36 is a flowchart showing a method of determining the set of the plurality of intensity distributions I_1 to I_M.
First, the transfer characteristics of the path from the illumination 110 via the object OBJ to the photodetector 120 are modeled (S100). The transfer characteristics include the propagation characteristics of light from the illumination 110 to the object OBJ, the reflection characteristics of the object OBJ, the propagation characteristics of light from the object OBJ to the photodetector 120, and the conversion characteristics of the photodetector 120.
Next, a reference object and the corresponding reference image T(x, y) are defined. FIG. 37 illustrates the relationship between the reference object and the reference image T(x, y). The pixel values of the reference image T(x, y) are normalized to the range 0 to 1, and the pixel value of each pixel p represents the reflectance of the corresponding portion of the reference object: a pixel value of 1 corresponds to a reflectance of 1 (that is, 100%), a pixel value of 0 to a reflectance of 0 (that is, 0%), and a pixel value of 0.5 to a reflectance of 0.5 (that is, 50%).
After initial values are given to the intensity distributions I_1 to I_M, the estimated values b̂_1 to b̂_M of the detection intensities b_1 to b_M obtained when the reference object OBJ is irradiated with the reference light S1 having the plurality of intensity distributions I_1 to I_M are calculated (S106). Here, it is assumed that the light is not attenuated on the optical path from the illumination 110 to the object OBJ, that the reference light S1 is emitted over the entire rectangle containing the reference object OBJ (the rectangle shown by the broken line on the right side of FIG. 37), that the light is not attenuated on the optical path from the object OBJ to the photodetector 120, and that all the reflected light from the object OBJ is incident on the photodetector 120.
Under these assumptions, the estimated value b̂_r of the detection intensity obtained when the reference object is irradiated with reference light whose intensity distribution is I_r(x, y) is represented by Expression (6), where W represents the width of the image and H represents the height of the image.
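Expression (6) itself is not reproduced in this excerpt. Under the lossless assumptions just stated, a form consistent with the symbols I_r, T, W, and H (an assumed reconstruction) is:

\hat{b}_r = \sum_{x=1}^{W} \sum_{y=1}^{H} I_r(x, y) \, T(x, y)

that is, the estimated single-pixel measurement is the overlap between the r-th illumination pattern and the reflectance map of the reference object.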
A combination (or state) of the current intensity distributions I_1(x, y) to I_M(x, y) is denoted by I. The restored image G(x, y, I) is reconstructed from the set I of intensity distributions on the basis of the correlation function of Expression (7) (S108); Expression (7) is obtained by replacing the detection intensity b_r in Expression (5) with the estimated value b̂_r.
The reference image T(x, y) corresponds to the correct answer for the restored image G(x, y, I). Therefore, the error Δ between the restored image G(x, y, I) and the reference image T(x, y) is calculated (S110), and each of the plurality of intensity distributions I_1 to I_M is corrected so as to reduce the error Δ (S114).
By defining the reference image according to the assumed subject and optimizing the patterns in this way, the number of irradiations can be reduced. A plurality of N sets of reference objects and reference images may also be defined; the error Δ in this case may be represented by the objective function F(I) of Expression (8), where T_i(x, y) represents the reference image of the i-th set.
The algorithm for minimizing the error Δ is not particularly limited, and a known algorithm can be used; for example, stochastic gradient descent can be used for the minimization. The problem can be formulated as Expression (9), where Î denotes the set of optimal intensity distributions I_1 to I_M. Since the pixel values of the intensity distributions I_1 to I_M do not take negative values, a non-negativity constraint can be imposed.
As an example, the number M of intensity distributions I_1 to I_M was set to 100, 500, and 1000, and the optimum intensity-distribution sets I_100^, I_500^, and I_1000^ were obtained. FIG. 38 shows the set I_100^ consisting of the 100 intensity distributions I_1 to I_100 obtained for M = 100.
FIG. 39 shows the restored images obtained when the optimized intensity-distribution sets are used. The leftmost column shows the correct-answer images; from top to bottom, photographs of the letter K, a frog, a train, and a truck are used. Below each restored image, the PSNR value indicating the error from the correct image is shown; the larger the PSNR, the smaller the error. The images of the frog, the train, and the truck are taken from CIFAR-10 (Alex Krizhevsky, "Learning Multiple Layers of Features from Tiny Images," 2009).
FIG. 40 shows the restored images obtained when a set of random intensity distributions is used. In this case, the PSNR is only about 9.578 even when the irradiation is performed 10,000 times.
A plurality of intensity-distribution sets may be prepared and switched according to the traveling environment. In the imaging device 100 described above, it is assumed that there is no light attenuation or the like between the imaging device 100 and the object OBJ; this corresponds to a situation with good visibility in fine weather. The set of intensity distributions obtained under this assumption is also effective when the vehicle is traveling in rainfall, snowfall, or thick fog, but the error of the restored image G can be further reduced by switching the set of intensity distributions according to such driving environments.
When rainfall, snowfall, or fog is assumed, the transfer characteristics may be modeled in consideration of their influence. In this case, the calculation formula for the estimated value b̂_r of the detection intensity b_r is modified from Expression (6), and the set of intensity distributions may be optimized based on the modified estimate b̂_r. Alternatively, a reference object may be photographed in each driving environment (that is, in rainfall, in snowfall, and in thick fog), and the obtained image may be used as the reference image for the machine learning described above; in this case, the modeling of the transfer characteristics (light-propagation characteristics) can be simplified.
In addition to, or instead of, the distinction between rain, snow, and fog, sets of intensity distributions suited to each driving environment may be prepared in consideration of daytime versus nighttime driving and low-speed versus high-speed driving.
In the embodiment described above, the illumination 110 is composed of the combination of the light source 112 and the patterning device 114, but the configuration is not limited thereto. For example, the illumination 110 may be composed of an array of semiconductor light sources (LEDs (light emitting diodes) or LDs (laser diodes)) arranged in a matrix, with the ON/OFF state (or the luminance) of each semiconductor light source individually controllable.
This imaging device 100 can be used for the object identification system 10 of FIG. 10. By using the imaging device 100 described in the fourth embodiment as the sensor of the object identification system 10, the following advantages are obtained. Using the imaging device 100, that is, the quantum radar camera, significantly improves the noise resistance. For example, when driving in rain, snow, or fog, it is difficult to recognize the object OBJ with the naked eye, but by using the imaging device 100, a restored image G of the object OBJ can be obtained without being affected by the rain, snow, or fog.
The imaging device 100 described in the fourth embodiment can be mounted on the automobile of FIG. 11 and may be incorporated in the vehicle lamp of FIG. 12.
In the embodiments, the method using correlation calculation has been described as the ghost-imaging (or single-pixel-imaging) method, but the image reconstruction method is not limited to this. The image may instead be reconstructed by an analytical method using a Fourier transform or an inverse Hadamard transform in place of the correlation calculation, by a method that solves an optimization problem such as sparse modeling, or by an algorithm using AI/machine learning.
The present invention relates to a vehicle lamp.
OBJ... Object, 10... Object identification system, 20... Imaging device, 40... Arithmetic processing device, 42... Classifier, 100... Imaging device, 110... Illumination, 120... Photodetector, 130... Arithmetic processing device, 132... Pattern generator, 134... Reconstruction processing unit, 200... Vehicle lamp, 202... Light source, 204... Lighting circuit, 206... Optical system, 300... Automobile, 302... Headlight, 304... Vehicle-side ECU, 310... Lamp system, 400... Vehicle lamp, 402... Housing, 404... Cover, 410... Headlight, 412... Light source, 414... Reflector, 416... Patterning device, 420... Pseudo-thermal light source, 422... Light source, 424... Patterning device, 430... Common member.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Mechanical Engineering (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Electromagnetism (AREA)
  • General Physics & Mathematics (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Lighting Device Outwards From Vehicle And Optical Signal (AREA)
  • Photometry And Measurement Of Optical Pulse Characteristics (AREA)
  • Optical Radar Systems And Details Thereof (AREA)

Abstract

This vehicle lighting fixture 400 comprises a headlamp 410 and a pseudo-heat light source 420. The pseudo-heat light source 420 projects reference light S1 onto an object while varying the intensity distribution of the light in M different ways. The pseudo-heat light source 420, together with a photodetector 120 and an arithmetic processing device 130, constitutes an imaging device 100. At least some constituent elements 430 of the headlamp 410 are shared with the pseudo-heat light source 420.

Description

Vehicle lamp and vehicle
The present invention relates to a vehicle lamp.
An object identification system that senses the position and type of objects existing around the vehicle is used for automatic driving and for automatic control of headlamp light distribution. The object identification system includes a sensor and an arithmetic processing unit that analyzes the output of the sensor. The sensor is selected from cameras, LiDAR (Light Detection and Ranging, Laser Imaging Detection and Ranging), millimeter-wave radar, ultrasonic sonar, and the like, in consideration of the application, the required accuracy, and the cost.
As one type of imaging device (sensor), a device using the principle of ghost imaging is known. Ghost imaging irradiates an object while randomly switching the intensity distribution (pattern) of reference light, and measures the light detection intensity of the reflected light for each pattern. The light detection intensity is an integrated value of energy or intensity over a certain plane, not an intensity distribution. The image of the object is then reconstructed by taking the correlation between the patterns and the corresponding light detection intensities.
Japanese Patent No. 6412673
Problem 1: In order to correctly restore the image of an object, it is necessary to control the intensity distribution of the reference light with high accuracy and to irradiate the object with it. When the imaging device is mounted on a vehicle, objects several tens of meters ahead must be detected, so the reference light must be projected over several tens of meters while its intensity distribution is maintained.
One of the exemplary objects of the first aspect of the present invention is to accurately irradiate a distant object with reference light.
Problem 2: The amount of correlation calculation increases explosively with the number of pixels of the restored image. Specifically, when the number of random reference-light irradiations is M and the number of pixels is (X×Y), the number of operations is M×(X×Y).
Assuming that one pattern irradiation is required to restore one pixel, M = (X×Y) irradiations are required to restore all (X×Y) pixels. Since the number of correlation operations per irradiation is (X×Y), the number of operations O per sensing becomes O = (X×Y)².
In the case of an imaging device installed in a laboratory, a high-speed computer or workstation can be used as the arithmetic processing unit, so the increase in the number of operations is not a serious problem. In an imaging device for in-vehicle use, however, the performance of the arithmetic processing unit is constrained by cost and space. It is therefore required to reduce the amount of correlation calculation.
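As an illustrative figure (the resolution here is an assumption, not a value taken from the publication), a restored image of X×Y = 320×240 = 76,800 pixels gives

O = (X \times Y)^2 = 76{,}800^2 \approx 5.9 \times 10^{9}

multiply-accumulate operations for a single sensing cycle, which is the scale of computation that the second aspect seeks to reduce.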
One of the exemplary objects of the second aspect of the present invention is to provide an imaging apparatus or an imaging method with a reduced amount of calculation.
Problem 3: Patterning devices such as a DMD (Digital Micromirror Device) or a liquid crystal device are used for patterning the reference light. A DMD or a liquid crystal device has a plurality of pixels arranged in a matrix, and the reflectance or transmittance can be controlled for each pixel.
The detection targets of an in-vehicle imaging device are various: cars, people, motorcycles and bicycles, structures, animals, and plants. The situation in which the imaging device is used also changes greatly depending on the traveling environment, such as the weather, the time of day, the road, and the traveling speed. Moreover, the imaging device itself moves, the objects also move, and their relative movement directions vary.
Thus, for imaging in a specific application, randomly controlling all the pixels of the patterning device is not always optimal.
One of the exemplary objects of the third aspect of the present invention is to provide an illumination device suitable for imaging in a specific application.
Problem 4: In a microscope that observes the near field, the spread of the luminous flux of each pixel does not matter. An in-vehicle imaging device 100, however, needs to detect objects in the far field, and for an imaging device that senses distant objects, randomly controlling all the pixels of the patterning device is not always optimal.
One of the exemplary objects of the fourth aspect of the present invention is to provide an illumination device suitable for imaging distant objects.
Problem 5: Conventionally, the number of measurements (irradiations) required to reconstruct an image of an object has reached several thousand, so data measurement takes time. In addition, the amount of calculation and the calculation time required to reconstruct the original image increase explosively as the number of irradiations increases.
In applications in which the object is substantially stationary, a large number of irradiations and a large amount of calculation are not a serious problem, but in applications in which the object moves, a high frame rate is required, so the number of irradiations must be reduced.
One of the exemplary objects of the fifth aspect of the present invention is to provide an imaging device capable of reducing the number of irradiations.
1. The first aspect of the present invention relates to a vehicle lamp. The vehicle lamp includes a headlamp and a pseudo-thermal light source. The pseudo-thermal light source can irradiate an object with reference light while randomly changing its intensity distribution, and constitutes an imaging device together with a photodetector that measures the reflected light from the object and an arithmetic processing unit that reconstructs a restored image of the object based on the output of the photodetector and the intensity distribution of the reference light. At least some components of the headlamp are shared with the pseudo-thermal light source.
A headlamp is designed, in its light source and optical system, to project light several tens of meters ahead. By building the pseudo-thermal light source of the imaging device into the vehicle lamp and diverting some of the components of the headlamp to the pseudo-thermal light source, the reference light can be projected accurately onto a distant object, and the overall cost can be reduced.
The pseudo-thermal light source may share the optical system with the headlamp. The optical system of the headlamp may include a patterning device that controls the light distribution, and the pseudo-thermal light source and the headlamp may share the patterning device. The optical system of the headlamp may include a reflector that reflects the light emitted from the light source toward the front of the vehicle, and the pseudo-thermal light source and the headlamp may share the reflector.
The reference light may be infrared or ultraviolet. The pseudo-thermal light source may share the light source with the headlamp; in this case, the reference light may be white light. Furthermore, the entire headlamp may be operable as the pseudo-thermal light source of the imaging device.
2. The second aspect of the present invention relates to an in-vehicle imaging apparatus. The in-vehicle imaging apparatus includes an illumination device that divides the measurement range into a plurality of sections and irradiates reference light having a random intensity distribution while switching among the sections, a photodetector that measures the reflected light from an object, and an arithmetic processing unit that, for each of the plurality of sections, reconstructs a restored image of the portion of the object contained in that section based on the detection intensity derived from the output of the photodetector and the intensity distribution of the reference light.
According to this aspect, the calculation time can be reduced. The number of sections may be set so that the decrease in calculation time brought by the division is larger than the accompanying increase in measurement time, which enables high-speed sensing, as illustrated below.
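To see the effect of the division on the arithmetic (an illustrative estimate under the same counting assumptions as in Problem 2 above, not a statement from the publication): if the (X×Y) pixels are split into K equal sections and each section is restored independently with one irradiation per pixel of that section, the per-section count O = (pixels)² gives a total of

K \times \left( \frac{X \times Y}{K} \right)^2 = \frac{(X \times Y)^2}{K}

operations, a K-fold reduction relative to O = (X×Y)², while the total number of irradiations remains (X×Y). The number of sections K is then chosen, as stated above, so that this saving outweighs the accompanying increase in measurement time.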
3. The third aspect of the present invention relates to an illumination device used for an imaging apparatus based on ghost imaging. The illumination device has a plurality of pixels arranged in a matrix and can modulate the light intensity distribution based on the ON/OFF combination of the pixels. The intensity distribution is controlled in units of pixel blocks each including at least one pixel, and the pixel block is variable.
Another aspect of the present invention is also an illumination device. This illumination device has a plurality of pixels arranged in a matrix and can modulate the light intensity distribution based on the ON/OFF combination of the pixels. The intensity distribution is controlled by a combination of predetermined patterns each including two or more ON pixels and OFF pixels.
4. The fourth aspect of the present invention is an illumination device. The illumination device has a plurality of pixels arranged in a matrix and can modulate the light intensity distribution based on the ON/OFF combination of the pixels. The ON/OFF states of the plurality of pixels are controlled under a predetermined constraint condition.
5. The fifth aspect of the present invention relates to an imaging apparatus. The imaging apparatus includes an illumination that irradiates an object with reference light while changing its intensity distribution in M ways, a photodetector that measures the reflected light from the object for each of the plurality of intensity distributions I_1 to I_M, and an arithmetic processing unit that reconstructs a restored image of the object by correlating the plurality of intensity distributions I_1 to I_M with a plurality of detection intensities b_1 to b_M based on the output of the photodetector. The plurality of intensity distributions I_1 to I_M are obtained by: (i) modeling the transfer characteristics of the path from the illumination through the object to the photodetector; (ii) defining a reference object and a corresponding reference image; (iii) giving initial values to the intensity distributions I_1 to I_M; (iv) calculating, based on the transfer characteristics, estimated values (or calculated values) b̂_1 to b̂_M of the detection intensities b_1 to b_M obtained when the reference object is irradiated with reference light having each of the intensity distributions I_1 to I_M; (v) reconstructing a restored image of the reference object by correlating the intensity distributions I_1 to I_M with the estimated values b̂_1 to b̂_M; (vi) modifying each of the intensity distributions I_1 to I_M so that the error between the restored image and the reference image becomes small; and (vii) determining the intensity distributions I_1 to I_M by repeating steps (iv) to (vi).
1. According to the first aspect of the present invention, the reference light can be projected accurately to a distant point.
2. According to the second aspect of the present invention, the amount of calculation can be reduced while high resolution is obtained.
3. According to the third aspect of the present invention, an illumination device suitable for imaging in a specific application can be provided.
4. According to the fourth aspect of the present invention, an illumination device suitable for an imaging apparatus for distant objects can be provided.
5. According to the fifth aspect of the present invention, the number of irradiations can be reduced.
FIG. 1 is a diagram showing a vehicle lamp according to Embodiment 1.
FIG. 2 is a diagram showing the imaging apparatus of FIG. 1.
FIG. 3 is a diagram showing a vehicle lamp according to Example 1.
FIG. 4 is a diagram showing a vehicle lamp according to Example 2.
FIG. 5 is a diagram illustrating the first pattern control in Example 2.
FIG. 6 is a diagram illustrating the second pattern control in Example 2.
FIG. 7 is a diagram showing a vehicle lamp according to Example 3.
FIG. 8 is a diagram illustrating the first control in Example 3.
FIG. 9 is a diagram illustrating the third control in Example 3.
FIG. 10 is a block diagram of an object identification system.
FIG. 11 is a diagram showing an automobile.
FIG. 12 is a block diagram showing a vehicle lamp provided with an object detection system.
FIG. 13 is a diagram showing an imaging apparatus according to Embodiment 2.
FIG. 14 is a diagram illustrating the intensity distribution of the reference light according to Embodiment 2.
FIG. 15 is a diagram illustrating the trade-off between calculation time and measurement time.
FIGS. 16(a) and 16(b) are diagrams showing a modification of the sections.
FIG. 17 is a diagram showing an imaging apparatus according to Embodiment 3.
FIGS. 18(a) to 18(c) are diagrams illustrating the pixels of a DMD serving as the patterning device.
FIGS. 19(a) to 19(d) are diagrams showing pixel blocks B having different sizes.
FIGS. 20(a) and 20(b) are diagrams showing examples of the pattern signal PTN based on pixel blocks B having different sizes.
FIG. 21 is a diagram showing a modification of the pattern control.
FIGS. 22(a) and 22(b) are diagrams illustrating layouts of pixel blocks B of different sizes according to the driving scene.
FIG. 23 is a diagram illustrating a dynamic layout of pixel blocks B of different sizes.
FIGS. 24(a) to 24(c) are diagrams showing a pixel block B according to Modification 3.1.
FIGS. 25(a) to 25(d) are diagrams showing a pixel block B according to Modification 3.2.
FIGS. 26(a) to 26(d) are diagrams showing pixel blocks B having different shapes.
FIGS. 27(a) to 27(c) are diagrams showing examples of the pattern signal PTN based on pixel blocks having different shapes.
FIGS. 28(a) and 28(b) are diagrams illustrating sensing based on a pattern signal PTN having pixel blocks B with a characteristic shape.
FIGS. 29(a) to 29(d) are diagrams illustrating the pattern block PB according to Example 3.5.
FIGS. 30(a) and 30(b) are diagrams showing examples of pattern signals based on combinations of pattern blocks.
FIGS. 31(a) and 31(b) are diagrams illustrating the improvement of the spatial incoherence of the reference light.
FIGS. 32(a) and 32(b) are diagrams showing examples of intensity distributions that can improve the spatial incoherence.
FIGS. 33(a) to 33(d) are diagrams illustrating pattern control with the lighting rate as a constraint condition.
FIGS. 34(a) and 34(b) are diagrams illustrating control of the lighting rate according to a modification.
FIG. 35 is a diagram showing an imaging apparatus according to Embodiment 4.
FIG. 36 is a flowchart showing a method of determining a set of a plurality of intensity distributions I_1 to I_M.
FIG. 37 is a diagram illustrating the relationship between the reference object and the reference image T(x, y).
FIG. 38 is a diagram showing the set I_100^ consisting of the 100 intensity distributions I_1 to I_100 obtained for M = 100.
FIG. 39 is a diagram showing restored images obtained when the optimized intensity-distribution sets are used.
FIG. 40 is a diagram showing restored images obtained when a set of random intensity distributions is used.
(Embodiment 1)
Hereinafter, the present invention will be described on the basis of preferred embodiments with reference to the drawings. The same or equivalent constituent elements, members, and processes shown in the drawings are denoted by the same reference numerals, and duplicated description is omitted as appropriate. The embodiments are examples and do not limit the invention; not all of the features described in the embodiments, or their combinations, are necessarily essential to the invention.
In this specification, "the intensity distribution is random" does not mean that it is completely random; it suffices that it is random enough for an image to be reconstructed by ghost imaging. Therefore, "random" in this specification can contain a certain degree of regularity, and does not require complete unpredictability: it may be predictable and reproducible.
FIG. 1 shows a vehicle lamp 400 according to Embodiment 1. The vehicle lamp (or lamp system) 400 includes a headlamp 410 and a pseudo-thermal light source 420. The headlamp 410 and the pseudo-thermal light source 420 are housed in a housing 402, the front surface of which is covered with a transparent cover 404. The headlamp 410 includes a low beam, a high beam, or both, and emits a beam Sb for forming the light distribution in front of the vehicle.
The pseudo-thermal light source 420 constitutes the imaging apparatus 100 together with the photodetector 120 and the arithmetic processing unit 130. The photodetector 120 and the arithmetic processing unit 130 may be built into the housing 402 or provided outside the housing 402.
FIG. 2 shows the imaging apparatus 100 of FIG. 1. The imaging apparatus 100 is a correlation-function image sensor (also referred to as single-pixel imaging) that uses the principle of ghost imaging, and includes a pseudo-thermal light source 110 (the pseudo-thermal light source 420 of FIG. 1), a photodetector 120, and an arithmetic processing unit 130. The imaging apparatus 100 is also called a quantum radar camera.
The pseudo-thermal light source 110 generates reference light S1 having an intensity distribution I(x, y) that can be regarded as substantially random and irradiates the object OBJ with it. The object OBJ is irradiated with the reference light S1 while the intensity distribution is changed according to a plurality of M patterns. The pseudo-thermal light source 110 may include, for example, a light source 112 that generates light S0 having a uniform intensity distribution, and a patterning device 114 capable of spatially modulating the intensity distribution I of the light S0. A laser, a light-emitting diode, or the like may be used as the light source 112. The wavelength and spectrum of the reference light S1 are not particularly limited; the light may be white light having a plurality of spectral components or a continuous spectrum, or monochromatic light having a predetermined wavelength.
A DMD (Digital Micromirror Device) or a liquid crystal device can be used as the patterning device 114. The pattern signal PTN (image data) designating the intensity distribution I is given to the patterning device 114 from the arithmetic processing unit 130; the arithmetic processing unit 130 therefore knows the intensity distribution I_r of the reference light S1 currently irradiating the object OBJ.
The photodetector 120 measures the reflected light from the object OBJ and outputs a detection signal D_r. The detection signal D_r is the spatial integral of the light energy (or intensity) incident on the photodetector 120 when the object OBJ is irradiated with the reference light having the intensity distribution I_r, so a single-pixel photodetector can be used as the photodetector 120. The photodetector 120 outputs a plurality of detection signals D_1 to D_M corresponding to the M intensity distributions I_1 to I_M, respectively.
The arithmetic processing unit 130 includes a pattern generator 132 and a reconstruction processing unit 134. The pattern generator 132 generates the pattern signal PTN_r designating the intensity distribution I of the reference light S1 and switches the pattern signal PTN_r over time (r = 1, 2, ..., M).
The arithmetic processing unit 130 can be implemented as a combination of a processor (hardware) such as a CPU (Central Processing Unit), an MPU (Micro Processing Unit), or a microcomputer, and a software program executed by the processor. The arithmetic processing unit 130 may be a combination of a plurality of processors, or may be composed of hardware only.
The intensity distribution of the reference light S1 generated by the pseudo-thermal light source 110 may be generated randomly each time, or a set of a plurality of intensity distributions I_1 to I_M may be defined in advance. In the latter case, a set of pattern signals PTN_1 to PTN_M defining the intensity distributions I_1 to I_M may be held in advance in a memory (pattern memory) inside the pattern generator 132.
The reconstruction processing unit 134 reconstructs the restored image G(x, y) of the object OBJ by correlating the plurality of intensity distributions I_1 to I_M with the plurality of detection intensities b_1 to b_M. The detection intensities b_1 to b_M are based on the detection signals D_1 to D_M, and the relationship between the detection intensity and the detection signal may be determined in consideration of the type and method of the photodetector 120.
Suppose that reference light having a certain intensity distribution I_r is emitted over a certain irradiation period, and that the detection signal D_r represents the amount of light received at a certain time (or over a minute time), that is, an instantaneous value. In this case, the detection signal D_r may be sampled multiple times during the irradiation period, and the detection intensity b_r may be the integrated value, the average value, or the maximum value of all sampled values of D_r. Alternatively, some of the sampled values may be selected, and the integrated value, average value, or maximum value of the selected samples may be used; for the selection, the x-th to y-th values counted from the maximum may be extracted, sampled values lower than an arbitrary threshold may be excluded, or sampled values in a range where the signal fluctuation is small may be extracted.
When a device whose exposure time can be set, such as a camera, is used as the photodetector 120, the output D_r of the photodetector 120 can be used directly as the detection intensity b_r. The conversion from the detection signal D_r to the detection intensity b_r may be performed by the arithmetic processing unit 130 or outside the arithmetic processing unit 130.
For the correlation, the correlation function of Expression (1) is used, where I_r is the r-th intensity distribution and b_r is the r-th detection intensity value:

G(x, y) = \frac{1}{M} \sum_{r=1}^{M} \left( b_r - \langle b \rangle \right) I_r(x, y)   ...(1)

where ⟨b⟩ denotes the average of the detection intensities b_1 to b_M.
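A minimal sketch of the correlation reconstruction follows, assuming the standard ghost-imaging estimator written above with the patterns stored as arrays; the shapes, the synthetic example, and NumPy are illustrative assumptions.

```python
import numpy as np

def reconstruct(intensity_dists, detected):
    """Reconstruct the restored image G(x, y) by correlating the M intensity
    distributions I_r(x, y) with the M detection intensities b_r.

    intensity_dists: array of shape (M, H, W), the patterns I_1 ... I_M.
    detected:        array of shape (M,), the detection intensities b_1 ... b_M.
    """
    I = np.asarray(intensity_dists, dtype=float)
    b = np.asarray(detected, dtype=float)
    fluct = b - b.mean()                      # b_r - <b>
    # G(x, y) = (1/M) * sum_r (b_r - <b>) * I_r(x, y)
    return np.tensordot(fluct, I, axes=(0, 0)) / len(b)

# Example with synthetic data: a bright 8x8 square restored from 2000 random patterns.
rng = np.random.default_rng(0)
M, H, W = 2000, 32, 32
target = np.zeros((H, W)); target[12:20, 12:20] = 1.0
patterns = (rng.random((M, H, W)) < 0.5).astype(float)
b = (patterns * target).sum(axis=(1, 2))      # ideal single-pixel measurements
G = reconstruct(patterns, b)
```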
The above is the basic configuration of the entire imaging apparatus 100. Returning to FIG. 1: in the present embodiment, the pseudo-thermal light source 420 is built into the vehicle lamp 400, and at least some components (a common member 430) of the headlamp 410 are shared with the pseudo-thermal light source 420. The above is the configuration of the vehicle lamp 400.
A headlamp is designed, in its light source and optical system, to project light several tens of meters ahead. By building the pseudo-thermal light source 420 of the imaging apparatus into the vehicle lamp 400 and diverting some components of the headlamp 410 to the pseudo-thermal light source 420, the reference light S1 can be projected accurately onto a distant object. In addition, since redundant members can be eliminated, the overall cost can be reduced.
The present invention extends to the various apparatuses and methods understood from the block diagram of FIG. 1 or derived from the above description, and is not limited to a specific configuration. Hereinafter, more specific configuration examples and working examples are described, not to narrow the scope of the invention, but to aid understanding of its essence and operation and to clarify them.
(Example 1)
 FIG. 3 shows a vehicle lamp 400A according to Example 1. The pseudo-thermal light source 420 shares an optical system with the headlamp 410. The headlamp 410 includes a light source 412 and a reflector 414. The light source 412 includes a white light-emitting diode (or semiconductor laser) and its lighting circuit. The reflector 414 reflects the light emitted from the light source 412 toward the front of the vehicle.
 In Example 1, the reflector 414 of the headlamp 410 is the common member 430 of FIG. 1 and is shared with the pseudo-thermal light source 420. The pseudo-thermal light source 420 includes a light source 422 and a patterning device 424. The beam whose intensity distribution has been randomized by the patterning device 424 is reflected by the reflector 414 toward the front of the vehicle.
 The light S0 generated by the light source 422 may be infrared or ultraviolet light. In this case, the photodetector 120 may be configured to be insensitive in the visible wavelength band and sensitive only to the wavelength of the light S0 (reference light S1). Sensing by the imaging device 100 is then unaffected by the headlamp.
 Alternatively, the light S0 generated by the light source 422 may contain a single visible wavelength, or may be white light. In that case the photodetector 120 is sensitive to both the beam Sb of the headlamp 410 and the reference light S1, and the influence of the beam Sb on the output of the photodetector 120 may be reduced by signal processing. For example, when the beam Sb can be regarded as a DC component while the reference light S1 can be regarded as an AC component, the influence of the beam Sb may be reduced with a high-pass filter. Alternatively, an offset process may simply subtract an estimate of the component attributable to the beam Sb.
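 The following is a minimal sketch of both approaches, under the assumption stated above (beam Sb ≈ DC, reference light S1 ≈ AC); the cutoff frequency, sampling rate, and function names are illustrative assumptions, not values from the specification:

```python
import numpy as np
from scipy.signal import butter, filtfilt

def remove_beam_dc(detector_samples, fs_hz, cutoff_hz=100.0):
    """High-pass filter: suppress the quasi-DC contribution of the headlamp beam Sb."""
    b, a = butter(2, cutoff_hz / (fs_hz / 2), btype="highpass")
    return filtfilt(b, a, detector_samples)

def remove_beam_offset(detector_samples, beam_estimate):
    """Offset process: subtract an estimate of the component caused by the beam Sb."""
    return np.asarray(detector_samples, dtype=float) - beam_estimate
```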
(Example 2)
 FIG. 4 shows a vehicle lamp 400B according to Example 2. In Example 2 as well, the pseudo-thermal light source 420 shares an optical system with the headlamp 410.
 In Example 2, the headlamp 410 is an adaptive driving beam (ADB) lamp with a variable light distribution, and includes a light source 412, a patterning device 416, and a reflector 414. The light source 412 includes a white LED or LD and its lighting circuit. The patterning device 416 is, for example, a DMD, and spatially modulates the intensity distribution of the light emitted from the light source 412 so that a desired light distribution pattern is obtained. The reflector 414 reflects, toward the front of the vehicle, the portion of the light reflected by the patterning device 416 that corresponds to on-pixels.
 In Example 2, the patterning device 416 and the reflector 414 are the common member 430 shared with the pseudo-thermal light source 420. The light S0 emitted from the light source 422 is incident on the patterning device 416 and randomly modulated, generating the reference light S1.
 Since the patterning device 416 is shared in Example 2, the mutual influence between the pattern (intensity distribution) of the beam Sb and the pattern of the reference light S1 must be suppressed. For that purpose, the light source 422 and the light source 412 may be lit complementarily. For example, the light source 412 and the light source 422 may be lit alternately in a time-division manner: during the lighting period of the light source 412, image data corresponding to the light distribution pattern (light distribution image data) PTNb is supplied to the patterning device 416, and during the lighting period of the light source 422, random image data PTN_1 to PTN_M is supplied to the patterning device 416 (first pattern control). FIG. 5 illustrates the first pattern control in Example 2.
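 A minimal sketch of the first pattern control, assuming a simple alternating schedule; the timing values and the driver interface (dmd.show, set_light_source) are hypothetical and serve only to illustrate the time-division idea:

```python
import itertools
import time

def first_pattern_control(dmd, set_light_source, ptn_b, random_patterns,
                          t_beam=0.010, t_sense=0.002):
    """Alternate the headlamp period (PTNb) and the sensing period (PTN_1..PTN_M)."""
    for ptn_i in itertools.cycle(random_patterns):
        set_light_source(white=True, ir=False)   # light source 412 on
        dmd.show(ptn_b)                           # light distribution pattern
        time.sleep(t_beam)
        set_light_source(white=False, ir=True)    # light source 422 on
        dmd.show(ptn_i)                           # random pattern for sensing
        time.sleep(t_sense)
```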
 Alternatively, there may be cases in which it is sufficient for the imaging device 100 to sense only within the range irradiated with the beam Sb (the irradiation region). In such cases, the light source 412 and the light source 422 may be lit simultaneously, and the light distribution image data PTNb and the random image data PTN_i (i = 1, 2, ... M) may be combined pixel by pixel and supplied to the patterning device 416 (second pattern control). FIG. 6 illustrates the second pattern control in Example 2. ON denotes the irradiation region defined by the light distribution pattern, and OFF denotes the shaded region defined by the light distribution pattern. If the pixel value corresponding to lighting is 1 and the pixel value corresponding to shading is 0, the pixel values of the light distribution image data are 1 inside the irradiation region ON and 0 inside the shaded region OFF, while the pixel values of the random image data PTN_i are a random distribution of 1s and 0s. The pattern of FIG. 6 can then be generated by taking the logical AND of the random image data PTN_i and the light distribution image data PTNb.
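 A minimal NumPy sketch of the second pattern control, assuming binary (0/1) image data; it simply gates each random pattern with the light distribution pattern:

```python
import numpy as np

def second_pattern_control(ptn_b, random_patterns):
    """Gate random patterns with the light distribution pattern (pixel-wise AND).

    ptn_b           : (H, W) binary array, 1 inside the irradiation region ON
    random_patterns : (M, H, W) binary arrays PTN_1..PTN_M
    returns         : (M, H, W) patterns to supply to the patterning device 416
    """
    gate = np.asarray(ptn_b, dtype=bool)
    return np.logical_and(np.asarray(random_patterns, dtype=bool), gate).astype(np.uint8)
```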
(Example 3)
 FIG. 7 shows a vehicle lamp 400C according to Example 3. In Example 3, all the components of the headlamp 410 are shared with the pseudo-thermal light source 420; in other words, the headlamp 410 itself has the function of the pseudo-thermal light source 420. In Example 3, the reference light S1 is therefore necessarily white light.
 Control of the patterning device 416 in Example 3 will now be described.
・First control
 In Example 3, a normal lighting period Tb and a sensing period Ts may be switched in a time-division manner. FIG. 8 illustrates the first control in Example 3. As shown in FIG. 8, the pattern of the reference light S1 may be switched multiple times within one sensing period Ts.
・Second control
 The second pattern control of Example 2 may also be used.
・Third control
 FIG. 9 illustrates the third control in Example 3. Some patterning devices, such as DMDs, are capable of gradation control. In this case, the corresponding pixel values of the random image data PTN_i and the light distribution image data PTNb may be added together and supplied to the patterning device 416. The intensity distribution of the beam emitted from the vehicle lamp 400 then varies randomly with time around a base level determined by the light distribution image data PTNb.
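 A minimal sketch of the third control, assuming 8-bit gradation data and a simple saturating addition; the bit depth is an assumption for illustration:

```python
import numpy as np

def third_control(ptn_b, ptn_i, max_level=255):
    """Add a random pattern on top of the light distribution pattern (gradation control).

    ptn_b : (H, W) base light distribution image data, e.g. values 0..255
    ptn_i : (H, W) random image data, e.g. values 0..255
    """
    combined = ptn_b.astype(np.int32) + ptn_i.astype(np.int32)
    return np.clip(combined, 0, max_level).astype(np.uint8)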
(Example 4)
 In Examples 1 to 3, the pseudo-thermal light source 420 is configured as a combination of the light source 422 and the patterning device 424, but this is not a limitation. The pseudo-thermal light source 420 may instead be configured as an array of semiconductor light sources (LEDs (light-emitting diodes) or LDs (laser diodes)) arranged in a matrix, with each semiconductor light source individually controllable in its on/off state (or brightness).
 Likewise, in Embodiment 1 the illumination device 110 is configured as a combination of the light source 112 and the patterning device 114, but this is not a limitation. The illumination device 110 may be configured as an array of semiconductor light sources (LEDs or LDs) arranged in a matrix, with each semiconductor light source individually controllable in its on/off state (or brightness).
 Next, applications of the imaging device 100 will be described.
 FIG. 10 is a block diagram of an object identification system 10. The object identification system 10 is mounted on a vehicle such as an automobile or motorcycle and determines the type (category) of an object OBJ present around the vehicle.
 The object identification system 10 includes the imaging device 100 and an arithmetic processing device 40. As described above, the imaging device 100 irradiates the object OBJ with the reference light S1 and measures the reflected light S2 to generate a restored image G of the object OBJ.
 The arithmetic processing device 40 processes the output image G of the imaging device 100 and determines the position and type (category) of the object OBJ.
 A classifier 42 of the arithmetic processing device 40 receives the image G as input and determines the position and type of the object OBJ contained in it. The classifier 42 is implemented on the basis of a model generated by machine learning. The algorithm of the classifier 42 is not particularly limited; YOLO (You Only Look Once), SSD (Single Shot MultiBox Detector), R-CNN (Region-based Convolutional Neural Network), SPPnet (Spatial Pyramid Pooling), Faster R-CNN, DSSD (Deconvolutional SSD), Mask R-CNN, or an algorithm developed in the future may be adopted.
 Information on the object OBJ detected by the arithmetic processing device 40 may be used for light distribution control of the vehicle lamp 200. Specifically, an appropriate light distribution pattern can be generated on the basis of the information on the type and position of the object OBJ generated by the arithmetic processing device 40.
 Information on the object OBJ detected by the arithmetic processing device 40 may also be transmitted to the vehicle-side ECU, which may perform automated driving on the basis of this information.
 The above is the configuration of the object identification system 10. Using the imaging device 100 as the sensor of the object identification system 10 greatly improves noise tolerance. For example, when driving in rain, snow, or fog, the object OBJ is difficult to recognize with the naked eye, but even in such conditions a restored image G of the object OBJ can be obtained without being affected by the rain, snow, or fog.
 FIG. 11 shows an automobile. The automobile 300 includes vehicle lamps 302L and 302R. As described above, the pseudo-thermal light source 420 is built into at least one of the vehicle lamps 302L and 302R in a form that shares part of its hardware with the headlamp.
 FIG. 12 is a block diagram showing a vehicle lamp 200 including an object detection system 210. The vehicle lamp 200, together with a vehicle-side ECU 304, constitutes a lamp system 310. The vehicle lamp 200 includes a light source 202, a lighting circuit 204, and an optical system 206. The vehicle lamp 200 is further provided with the object detection system 210, which corresponds to the object identification system 10 described above and includes the imaging device 100 and the arithmetic processing device 40.
 Information on the object OBJ detected by the arithmetic processing device 40 may be used for light distribution control of the vehicle lamp 200. Specifically, a lamp-side ECU 208 generates an appropriate light distribution pattern on the basis of the information on the type and position of the object OBJ generated by the arithmetic processing device 40, and the lighting circuit 204 and the optical system 206 operate so that the light distribution pattern generated by the lamp-side ECU 208 is obtained.
 Information on the object OBJ detected by the arithmetic processing device 40 may also be transmitted to the vehicle-side ECU 304, which may perform automated driving on the basis of this information.
(Embodiment 2)
 FIG. 13 shows an imaging device 100 according to Embodiment 2. The imaging device 100 is a correlation-function image sensor using the principle of ghost imaging, and includes an illumination device 110, a photodetector 120, and an arithmetic processing device 130. The imaging device 100 is also called a quantum radar camera.
 The illumination device 110 is a pseudo-thermal light source that generates reference light S1 having an intensity distribution I that can be regarded as substantially random, and irradiates the object OBJ with it. FIG. 14 illustrates the intensity distribution of the reference light S1 according to Embodiment 2. In the figure, portions with zero intensity are shown in white and portions with non-zero intensity in black.
 In the present embodiment, the measurement range 600 is divided into a plurality of sections 602_1 to 602_N. In this example it is divided into 4 sections vertically and 4 horizontally, so N = 16. The illumination device 110 irradiates the reference light S1, whose intensity distribution I(x, y) within the irradiated section is substantially random, while switching the section being irradiated (referred to as the irradiation section) 602_i. The intensity in the sections other than the irradiation section (non-irradiation sections) is zero.
 Each of the sections 602_1 to 602_N is irradiated with the reference light S1 in M random intensity distributions, so the total number of irradiations per sensing operation is M × N. The j-th intensity distribution when the i-th section (1 ≤ i ≤ N) is selected is written I_i,j, and the corresponding reference light is written S1_i,j.
 Returning to FIG. 13. The photodetector 120 measures the reflected light from the object OBJ and outputs a detection signal D. The detection signal D_i,j is the spatial integral of the light energy (or intensity) incident on the photodetector 120 when the object OBJ is irradiated with the reference light having the intensity distribution I_i,j. A single-pixel photodetector can therefore be used as the photodetector 120.
 When reference light S1_i,j (i ∈ 1 to N, j ∈ 1 to M) having the M × N intensity distributions I_1,1 to I_1,M, I_2,1 to I_2,M, ..., I_N,1 to I_N,M is irradiated, the photodetector 120 outputs M × N detection signals D_i,j (i ∈ 1 to N, j ∈ 1 to M). The order of irradiation is not particularly limited.
 For example, after the M intensity distributions have been set for one irradiation section, the next irradiation section may be selected. The order in which the irradiation sections are selected is not particularly limited and may follow a predetermined rule. For example, the sections in the first row may be selected from left to right, moving to the next row after reaching the rightmost section; or the sections in the first column may be selected from top to bottom, moving to the next column after reaching the bottom.
 The illumination device 110 may include, for example, a light source 112 that generates light S0 having a uniform intensity distribution, and a patterning device 114 capable of spatially modulating the intensity distribution of the light S0. A laser, a light-emitting diode, or the like may be used as the light source 112. The wavelength and spectrum of the reference light S1 are not particularly limited; it may be white light having multiple or continuous spectral components, or monochromatic light containing a predetermined wavelength.
 A DMD (Digital Micromirror Device) or a liquid crystal device can be used as the patterning device 114. In the present embodiment, the patterning device 114 covers the entire measurement range 600 and is capable of irradiating the entire measurement range 600 simultaneously; by turning off the pixels of the patterning device 114 that correspond to the non-irradiation sections, a random pattern can be given only to the irradiation section.
 A pattern signal PTN_i,j (image data) specifying the intensity distribution I_i,j is supplied to the patterning device 114 from the arithmetic processing device 130. The arithmetic processing device 130 therefore knows the current position of the irradiation section and the intensity distribution I_i,j of the reference light S1.
 The arithmetic processing device 130 includes a pattern generator 132 and a reconstruction processing unit 134.
 The pattern generator 132 may randomly generate the intensity distribution I_i,j of the reference light S1 each time. In that case, the pattern generator 132 may include a pseudo-random signal generator.
 Alternatively, a set of intensity distributions I_i,j may be defined in advance. For example, a set of multiple (for example, M) intensity distributions I_1 to I_M having the same size as a section 602 may be defined in advance. When the i-th section is the irradiation section, I_1 to I_M may be assigned to it in order or at random.
 In that case, the set of pattern signals defining the intensity distributions I_1 to I_M may be held in advance in a memory (pattern memory) inside the pattern generator 132.
 The arithmetic processing device 130 can be implemented as a combination of a processor (hardware) such as a CPU (Central Processing Unit), MPU (Micro Processing Unit), or microcontroller and a software program executed by the processor. The arithmetic processing device 130 may be a combination of multiple processors, or may be composed of hardware alone.
 For each of the sections 602_1 to 602_N (602_i), the reconstruction processing unit 134 reconstructs a restored image G_i of the portion of the object OBJ contained in that section 602_i by correlating the detection intensities b_i,1 to b_i,M with the intensity distributions I_i,1 to I_i,M of the reference light S1_i,1 to S1_i,M.
 The detection intensities b_i,1 to b_i,M are based on the detection signals D_i,1 to D_i,M. The relationship between the detection intensity b_i,j and the detection signal D_i,j may be determined in consideration of the type and operating principle of the photodetector 120.
 Assume that reference light S1 having a certain intensity distribution I_i,j is irradiated over a certain irradiation period, and that the detection signal D_i,j represents the amount of light received at a certain instant (or over a very short time), that is, an instantaneous value. In this case, the detection signal D_i,j may be sampled multiple times during the irradiation period, and the detection intensity b_i,j may be taken as the integral, the average, or the maximum of all sampled values of D_i,j. Alternatively, only some of the sampled values may be selected, and their integral, average, or maximum may be used. The selection may, for example, extract the x-th through y-th values counted down from the maximum, exclude values below a given threshold, or extract values within a range in which the signal fluctuation is small.
 When a device whose exposure time can be set, such as a camera, is used as the photodetector 120, the output D_i,j of the photodetector 120 can be used directly as the detection intensity b_i,j.
 The conversion from the detection signal D_i,j to the detection intensity b_i,j may be performed by the arithmetic processing device 130 or outside it.
 The correlation function of Expression (2) is used to restore the image G_i of the i-th (1 ≤ i ≤ N) section 602_i, where I_i,j(x, y) is the j-th (1 ≤ j ≤ M) intensity distribution and b_i,j is the j-th detection intensity value:

G_i(x,y) = \frac{1}{M} \sum_{j=1}^{M} \left( b_{i,j} - \langle b_i \rangle \right) I_{i,j}(x,y), \qquad \langle b_i \rangle = \frac{1}{M} \sum_{j=1}^{M} b_{i,j}    ...(2)
 An image of the entire measurement range can be obtained by combining the restored images G_1 to G_N of all N sections 602_1 to 602_N.
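 A minimal NumPy sketch of the per-section processing of Embodiment 2, under the simplifying assumptions that the N sections form a regular grid of equal x × y tiles and that Expression (2) takes the fluctuation-correlation form reconstructed above; function and variable names are illustrative:

```python
import numpy as np

def reconstruct_section(patterns_i, intensities_i):
    """Apply Expression (2) to one section: patterns_i is (M, y, x), intensities_i is (M,)."""
    b = np.asarray(intensities_i, dtype=float)
    I = np.asarray(patterns_i, dtype=float)
    return np.tensordot(b - b.mean(), I, axes=1) / len(b)

def combine_sections(section_images, rows, cols):
    """Tile the restored images G_1..G_N (row-major order) into the full image."""
    imgs = [np.asarray(g) for g in section_images]
    return np.block([[imgs[r * cols + c] for c in range(cols)] for r in range(rows)])
```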
 The above is the configuration of the imaging device 100. Its advantages will now be described.
 The number of computations in this imaging device 100 is as follows. Let the number of pixels of the entire measurement range be X × Y, and the numbers of pixels of one section in the horizontal and vertical directions be x and y, where X × Y = (x × y) × N.
 Assuming that one pattern irradiation is needed to restore one pixel, the number of irradiations M′ needed to restore x × y pixels is
 M′ = (x × y).
 The actual number of irradiations may be larger or smaller than M′, but note that it is roughly proportional to (x × y). The number of computations per section is therefore (x × y)², and the total number of computations O′ for all N sections is
 O′ = N × (x × y)².
 The number of computations O of the conventional approach, which irradiates the entire measurement range, was
 O = (X × Y)².
 Since X × Y = (x × y) × N, the present embodiment reduces the number of computations to O′/O = 1/N of the conventional value.
 For example, consider the case X = Y = 1024. Dividing the range into sections of x = y = 32 gives N = 32 × 32 = 1024, so the number of computations is reduced to 1/1024.
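 The following short calculation (an illustration only) reproduces the counts above for X = Y = 1024 and x = y = 32:

```python
X = Y = 1024            # pixels of the full measurement range
x = y = 32              # pixels of one section
N = (X * Y) // (x * y)  # number of sections -> 1024

O_conventional = (X * Y) ** 2      # irradiate the whole range at once
O_sectioned = N * (x * y) ** 2     # per-section reconstruction
print(N, O_conventional // O_sectioned)   # -> 1024 1024, i.e. a 1/N reduction
```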
 Furthermore, when the intensity distributions of the reference light are stored in a memory in the pattern generator 132, only the intensity distribution of one section (x × y) needs to be held rather than that of the entire irradiation range (X × Y), so the memory capacity can be reduced.
 The reduction in the number of computations means that, with an arithmetic processing device of the same speed, the computation time can be shortened. Alternatively, a slower (and therefore cheaper) arithmetic processing device can be used to finish the processing in the same time.
 For automotive use, the measurement range must be sensed at a reasonably high frame rate. In the present embodiment, in exchange for the reduction in the number of computations (and hence in computation time), the number of irradiations is increased by a factor of N (N being the number of sections), which increases the measurement time. FIG. 15 illustrates this trade-off between computation time and measurement time.
 The number of sections N is preferably chosen so that the reduction δ1 in computation time brought about by the division is larger than the increase δ2 in measurement time. This makes it possible to achieve the frame rate required for automotive use.
 Modifications related to Embodiment 2 will now be described.
(Modification 2.1)
 In the embodiment, the number of irradiations M is the same for every section, but the number of irradiations may differ from section to section. Writing M_i for the number of irradiations of the i-th section, the number of computations is
 O = Σ_{i=1}^{N} M_i × (x × y).
 The larger the number of irradiations M_i, the more accurately the image can be restored, but depending on the position of a section such accuracy may not be required. By optimizing the number of irradiations for each section, the number of computations (computation time) and the measurement time can therefore be adjusted section by section.
(Modification 2.2)
 The way the sections are divided is not limited to the above. FIGS. 16(a) and 16(b) show modified section layouts. As shown in FIG. 16(a), the range may be divided into horizontally long sections, or alternatively into vertically long sections.
 In the embodiment, the sections all have the same size (number of pixels), but this is not a limitation; as shown in FIG. 16(b), the number of pixels may differ from section to section.
(Modification 2.3)
 In the embodiment, a patterning device 114 covering the entire measurement range 600 is used, but this is not a limitation. An illumination device 110 with the capability to irradiate a single section may be provided, and its output light may be scanned in the horizontal or vertical direction using a movable mirror.
(Modification 2.4)
 In Embodiment 2, the illumination device 110 is configured as a combination of the light source 112 and the patterning device 114, but this is not a limitation. The illumination device 110 may be configured as an array of semiconductor light sources (LEDs (light-emitting diodes) or LDs (laser diodes)) arranged in a matrix, with each semiconductor light source individually controllable in its on/off state (or brightness).
(Applications)
 Next, applications of the imaging device 100 according to Embodiment 2 will be described. The imaging device 100 can be used in the object identification system 10 of FIG. 10. Using the imaging device 100 described in Embodiment 2 as the sensor of the object identification system 10 provides the following advantages.
 First, using the imaging device 100, that is, the quantum radar camera, greatly improves noise tolerance. For example, when driving in rain, snow, or fog, the object OBJ is difficult to recognize with the naked eye, but with the imaging device 100 a restored image G of the object OBJ can be obtained without being affected by the rain, snow, or fog.
 Second, dividing the measurement range into sections and restoring an image for each section reduces the amount of computation. This makes it possible to raise the frame rate or to select an inexpensive processor as the arithmetic processing device.
 In the imaging device 100, the number of sections N may be changed adaptively according to the driving environment.
 The imaging device 100 described in Embodiment 2 can be mounted on the automobile of FIG. 11 and may be built into the vehicle lamp of FIG. 12.
(Embodiment 3)
 FIG. 17 shows an imaging device 100 according to Embodiment 3. The imaging device 100 is a correlation-function image sensor using the principle of ghost imaging, and includes an illumination device 110, a photodetector 120, and an arithmetic processing device 130. The imaging device 100 is also called a quantum radar camera.
 The illumination device 110 is a pseudo-thermal light source that generates reference light S1 having an intensity distribution I(x, y) that can be regarded as substantially random, and irradiates the object OBJ with it. The object OBJ is irradiated with the reference light S1 while its intensity distribution is changed according to M different patterns.
 The illumination device 110 includes a light source 112 and a patterning device 114. The light source 112 generates light S0 having a uniform intensity distribution; a laser, a light-emitting diode, or the like may be used. The wavelength and spectrum of the reference light S1 are not particularly limited; it may be white light having multiple or continuous spectral components, or monochromatic light containing a predetermined wavelength. The wavelength of the reference light S1 may be infrared or ultraviolet.
 The patterning device 114 has a plurality of pixels arranged in a matrix and is configured so that the intensity distribution I of the light can be spatially modulated by the combination of on and off states of those pixels. In this specification, a pixel in the on state is called an on-pixel and a pixel in the off state an off-pixel. In the following description, for ease of understanding, each pixel is assumed to take only the two values on and off (1, 0), but this is not a limitation and intermediate gray levels may also be used.
 A reflective DMD (Digital Micromirror Device) or a transmissive liquid crystal device can be used as the patterning device 114. A pattern signal PTN (image data) generated by a pattern generator 116 is supplied to the patterning device 114. In the following, the patterning device 114 is assumed to be a DMD.
 The pattern generator 116 generates a pattern signal PTN_r specifying the intensity distribution I_r of the reference light S1 and switches the pattern signal PTN_r over time (r = 1, 2, ... M).
 The photodetector 120 measures the reflected light from the object OBJ and outputs a detection signal D_r. The detection signal D_r is the spatial integral of the light energy (or intensity) incident on the photodetector 120 when the object OBJ is irradiated with the reference light having the intensity distribution I_r. A single-pixel photodetector can therefore be used as the photodetector 120. The photodetector 120 outputs detection signals D_1 to D_M corresponding to the M intensity distributions I_1 to I_M.
 The arithmetic processing device 130 includes a reconstruction processing unit 134. The reconstruction processing unit 134 reconstructs a restored image G(x, y) of the object OBJ by correlating the intensity distributions I_1 to I_M with the detection intensities b_1 to b_M.
 The detection intensities b_1 to b_M are based on the detection signals D_1 to D_M. The relationship between the detection intensity and the detection signal may be determined in consideration of the type and operating principle of the photodetector 120.
 Assume that reference light S1 having a certain intensity distribution I_r is irradiated over a certain irradiation period, and that the detection signal D_r represents the amount of light received at a certain instant (or over a very short time), that is, an instantaneous value. In this case, the detection signal D_r may be sampled multiple times during the irradiation period, and the detection intensity b_r may be taken as the integral, the average, or the maximum of all sampled values of D_r. Alternatively, only some of the sampled values may be selected, and their integral, average, or maximum may be used. The selection may, for example, extract the x-th through y-th values counted down from the maximum, exclude values below a given threshold, or extract values within a range in which the signal fluctuation is small.
 When a device whose exposure time can be set, such as a camera, is used as the photodetector 120, the output D_r of the photodetector 120 can be used directly as the detection intensity b_r.
 The conversion from the detection signal D_r to the detection intensity b_r may be performed by the arithmetic processing device 130 or outside it.
 The correlation is computed with the correlation function of Expression (3), where I_r(x, y) is the r-th intensity distribution and b_r is the r-th detection intensity value:

G(x,y) = \frac{1}{M} \sum_{r=1}^{M} \left( b_r - \langle b \rangle \right) I_r(x,y), \qquad \langle b \rangle = \frac{1}{M} \sum_{r=1}^{M} b_r    ...(3)
 The arithmetic processing device 130 can be implemented as a combination of a processor (hardware) such as a CPU (Central Processing Unit), MPU (Micro Processing Unit), or microcontroller and a software program executed by the processor. The arithmetic processing device 130 may be a combination of multiple processors, or may be composed of hardware alone. The pattern generator 116 may be implemented inside the arithmetic processing device 130.
 The above is the overall basic configuration of the imaging device 100. Next, control of the patterning device 114 will be described on the basis of several examples.
(Example 3.1)
 FIGS. 18(a) to 18(c) illustrate the pixels of the DMD used as the patterning device 114. As shown in FIG. 18(a), the DMD is an array of pixels PIX arranged in a matrix of m rows and n columns. As shown in FIG. 18(b), each pixel PIX is a square mirror that can tilt in the ON direction and the OFF direction about a hinge provided along its diagonal. The patterning device 114 is configured so that all pixels can be turned on and off independently. In the following description, the matrix is drawn in the simplified form shown in FIG. 18(c).
 The pattern generator 116 controls the intensity distribution of the reference light (that is, the pattern signal PTN) in units of a pixel block B consisting of at least one pixel, and the pixel block B is variable. A pixel block B can be understood as a set of contiguous (adjacent) on-pixels (or a set of off-pixels, or a set of on-pixels and off-pixels). In Example 3.1, the size of the pixel block B is assumed to be variable.
 FIGS. 19(a) to 19(d) show pixel blocks B of different sizes. The size can be understood as the number of pixels (that is, the area) contained in the pixel block B. FIGS. 19(a) to 19(d) show pixel blocks B_1×1 to B_4×4 of 1 × 1, 2 × 2, 3 × 3, and 4 × 4 pixels, respectively. Pixels belonging to the same pixel block B are placed in the same state (on or off).
 FIGS. 20(a) and 20(b) show examples of pattern signals (image data) PTN based on pixel blocks B of different sizes. In the following, a patterning device 114 with m = n = 16 is assumed.
 In FIG. 20(a), the 2 × 2-pixel block B_2×2 of FIG. 19(b) is used, and on/off is controlled per block B_2×2. The pattern signal PTN is changed in M ways per sensing operation. The hatched pixels PIX are on-pixels, and the other pixels PIX are off-pixels.
 In FIG. 20(b), the 4 × 4-pixel block B_4×4 of FIG. 19(d) is used, and on/off is controlled per block B_4×4. The number M of pattern signals PTN may depend on the block size; in general, enlarging the pixel blocks and reducing their number makes it possible to reduce the number M of patterns per sensing operation.
 FIG. 21 shows a modification of the pattern control. The arrangement of the 4 × 4-pixel blocks B_4×4 is not perfectly aligned in the horizontal direction; in some rows it is offset horizontally by two pixels.
 The above is the pattern control according to Example 3.1. This pattern control can be understood as dynamically changing the effective resolution of the patterning device 114. The amount of computation in the reconstruction processing unit 134 increases with the resolution, so in situations where high spatial resolution is not required, the amount of computation can be reduced by enlarging the pixel block B.
 Conversely, when a fast-moving object OBJ is to be captured, enlarging the pixel block B and reducing the number of patterns shortens the irradiation time, so an image without motion blur can be restored. Reducing the amount of computation and the number of irradiations also raises the frame rate, making it possible to follow the movement of the object OBJ.
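 As an illustration of block-unit pattern generation (a sketch only; the DMD size and block sizes are example values, and np.kron-based upsampling is simply one convenient way to realize it), the following generates random patterns in which on/off is decided per pixel block:

```python
import numpy as np

def random_block_pattern(rows, cols, block_h, block_w, rng):
    """Random on/off pattern controlled in units of block_h x block_w pixel blocks."""
    assert rows % block_h == 0 and cols % block_w == 0
    blocks = rng.integers(0, 2, size=(rows // block_h, cols // block_w))
    # Expand each per-block decision to the full pixel resolution of the DMD
    return np.kron(blocks, np.ones((block_h, block_w), dtype=np.uint8))

rng = np.random.default_rng(0)
ptn_2x2 = random_block_pattern(16, 16, 2, 2, rng)   # as in FIG. 20(a)
ptn_4x4 = random_block_pattern(16, 16, 4, 4, rng)   # as in FIG. 20(b)
ptn_1x4 = random_block_pattern(16, 16, 1, 4, rng)   # horizontally long blocks (cf. Example 3.4)
```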
(Example 3.2)
 In Example 3.1, a single pattern contained pixel blocks B of one size, but this is not a limitation. In Example 3.2, layouts of pixel blocks of different sizes are defined in advance and selected according to the driving scene. FIGS. 22(a) and 22(b) illustrate layouts of pixel blocks B of different sizes corresponding to driving scenes.
 Referring to FIG. 22(a), pixel blocks B_S of small size are arranged in the lower region, and larger pixel blocks B_L are arranged toward the top. In a suburban driving scene, the sky (a space containing no vehicles or pedestrians) occupies the upper part of the view ahead of the vehicle, so the pixel blocks B there are enlarged to lower the resolution. Conversely, the lower part corresponds to the road surface, where important objects (or road markings) are likely to be present, so the pixel blocks B there are made small to raise the resolution.
 Referring to FIG. 22(b), pixel blocks B_S of small size are arranged near the center, and larger pixel blocks B_L are arranged toward the periphery. In a driving scene such as an expressway, the vanishing point lies near the center of the view; a distant oncoming vehicle appears from the vanishing point, is small at first, and grows larger as it approaches the own vehicle. With the layout of FIG. 22(b), distant objects OBJ that appear small are easier to detect.
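 The following is a minimal sketch of a layout of the kind shown in FIG. 22(a), assuming for simplicity just two regions (coarse blocks in the upper half, fine blocks in the lower half); the region split and block sizes are illustrative assumptions:

```python
import numpy as np

def scene_layout_pattern(rows, cols, rng, coarse=(4, 4), fine=(2, 2)):
    """Random pattern with coarse blocks in the upper half and fine blocks in the lower half."""
    def block_random(h, w, bh, bw):
        blocks = rng.integers(0, 2, size=(h // bh, w // bw))
        return np.kron(blocks, np.ones((bh, bw), dtype=np.uint8))

    top = block_random(rows // 2, cols, *coarse)    # sky region: lower resolution
    bottom = block_random(rows // 2, cols, *fine)   # road region: higher resolution
    return np.vstack([top, bottom])

rng = np.random.default_rng(1)
ptn = scene_layout_pattern(16, 16, rng)
```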
(Example 3.3)
 In Example 3.2, several layouts are defined in advance and the one suited to the driving scene is selected adaptively, but this is not a limitation; the layout of pixel blocks B of different sizes may also be changed dynamically.
 FIG. 23 illustrates a dynamic layout of pixel blocks B of different sizes. In a certain frame 1, a pattern consisting of pixel blocks B_2×2 of uniform size (2 × 2 pixels) is used, and the position of the object OBJ is estimated from the image restored in frame 1. In the next frame 2, small pixel blocks B may then be placed in the region where the object OBJ is present, with the block size increasing with distance from that region.
(Modifications related to Examples 3.1 to 3.3)
(Modification 1)
 In the description so far, the pixel block B has been square, but this is not a limitation. FIGS. 24(a) to 24(c) show pixel blocks B according to Modification 1. The pixel block B is a horizontally long rectangle whose size changes dynamically.
(Modification 2)
 In Examples 3.1 to 3.3, the vertical and horizontal dimensions are changed by the same scale, but only the number of pixels in the vertical direction, or only the number of pixels in the horizontal direction, may be changed. FIGS. 25(a) to 25(d) show pixel blocks B according to Modification 2. In this example, the number of pixels of the pixel block B in the horizontal direction is variable.
(Example 3.4)
 Examples 3.1 to 3.3 describe changing the size of the pixel block B. In Example 3.4, the shape of the pixel block B is changed adaptively.
 FIGS. 26(a) to 26(d) show pixel blocks B of different shapes: FIG. 26(a) shows a pixel block B_X elongated in the horizontal direction (X direction), FIG. 26(b) a pixel block B_Y elongated in the vertical direction (Y direction), and FIG. 26(c) a pixel block B_XY elongated in a diagonal direction. FIG. 26(d) shows the basic square pixel block B_S. The pixel blocks B of FIGS. 26(a) to 26(d) all have the same size.
 FIGS. 27(a) to 27(c) show examples of pattern signals PTN based on pixel blocks of different shapes. At least one of the length and the position of each pixel block B is determined at random. FIG. 27(a) shows an example of a pattern signal PTN_X using horizontally long pixel blocks B_X, FIG. 27(b) an example of a pattern signal PTN_Y using vertically long pixel blocks B_Y, and FIG. 27(c) an example of a pattern signal PTN_XY using diagonal pixel blocks B_XY.
 Consider the relative motion of the object OBJ and the imaging device 100.
 When the object OBJ moves horizontally relative to the imaging device 100 and a pattern signal PTN_X made of horizontally long pixel blocks B_X is used, the effective horizontal resolution is lower than with the square pixel blocks B_S, so the horizontal sharpness of the image, or the accuracy of horizontal position detection, decreases; however, the capture time (in other words, the exposure time) becomes longer and the detection intensity D increases, so the S/N ratio rises and detection becomes easier (that is, the sensitivity increases).
 When an object OBJ moving vertically is sensed with the same pattern signal PTN_X, the capture time is shorter than with the square pixel blocks B_S, so the detection intensity D falls and the detection sensitivity drops, but the vertical resolution is improved and the accuracy of vertical position detection increases.
 The same reasoning applies to the pattern signals PTN_Y and PTN_XY. In general, detection sensitivity is increased for an object moving in the direction in which the pixel block B is elongated, whereas for an object moving perpendicular to that direction a sharper image is obtained, or the position detection accuracy is increased.
 図28(a)、(b)は、形状に特徴を有するピクセルブロックBを有するパターン信号PTNにもとづくセンシングを説明する図である。はじめは図28(a)に示すように、とある形状(5×5ピクセルの正方形)のピクセルブロックBを含むパターン信号PTNを利用してセンシングしている。このセンシングにより、物体OBJは自動車であり、横方向に移動していることが検出される。パターン信号PTNにもとづくセンシングでは、物体OBJの先端の位置はX’と判定され、実際の物体OBJの先端の位置Xとは誤差がある。 FIGS. 28A and 28B are views for explaining sensing based on the pattern signal PTN having the pixel block B having a characteristic shape. First, as shown in FIG. 28A, the pattern signal PTN S including the pixel block B S of a certain shape (square of 5×5 pixels) is used for sensing. By this sensing, it is detected that the object OBJ is a car and is moving laterally. In the sensing based on the pattern signal PTN S , the position of the tip of the object OBJ is determined to be X′, and there is an error from the actual position X of the tip of the object OBJ.
 そこで、物体OBJの横方向の位置を正確に検出するために、縦長のピクセルブロックBを含むパターン信号PTNに切り替える。これにより、横方向の解像度が高まるため、物体OBJの先端の位置はX”と判定され、実際の物体OBJの先端の位置Xに近づけることができる。 Therefore, in order to accurately detect the lateral position of the object OBJ, switching to the pattern signal PTN Y comprising elongated pixel block B Y. As a result, the lateral resolution is increased, so that the position of the tip of the object OBJ is determined to be X″, and it is possible to approach the position X of the tip of the actual object OBJ.
 縦方向に関しては、物体OBJは移動しないため、縦長のピクセルブロックBを用いることのデメリットはほとんど存在しないと言える。 Since the object OBJ does not move in the vertical direction, it can be said that there is almost no demerit of using the vertically long pixel block BY .
 図28(a)、(b)の例では、ある物体OBJを積極的に捕捉するために、ピクセルブロックBの形状を切り替えたが、その限りでなく、特定の物体OBJを消すために、ピクセルブロックBの形状を利用してもよい。たとえば、車載のセンシング装置にとって、雨や雪はノイズであり、画像を復元する必要がないといえる。そして雨や雪の移動方向は一定であるから、ピクセルブロックBの形状を最適化すると、雨や雪の影響を排除しやすくなる。 In the examples of FIGS. 28A and 28B, the shape of the pixel block B is switched in order to positively capture a certain object OBJ, but this is not the only option, and in order to erase a specific object OBJ The shape of the block B may be used. For example, it can be said that rain and snow are noise for an in-vehicle sensing device and there is no need to restore an image. Since the moving directions of rain and snow are constant, optimizing the shape of the pixel block B makes it easier to eliminate the influence of rain and snow.
 たとえば雨が鉛直方向に降っている場合、縦方向(鉛直方向)に短い、言い換えれば横方向に長い形状のピクセルブロックBが好適である。複数のパターンを連続して照射する一連のセンシング動作において、特定のピクセルブロックの光が、雨の向こう側の物体OBJへ到達し、雨の手前側の光検出器120に戻ってくる過程で、特定のピクセルブロックの光が雨粒によって大きく影響を受ける(遮蔽される)確率が小さくなる。つまり各画素の雨の影響が均一化されることで、雨の影響を除去する処理(ノイズキャンセリング)が容易になる。 For example, when rain is falling vertically, a pixel block B that is short in the vertical direction, in other words long in the horizontal direction, is suitable. In a series of sensing operations in which a plurality of patterns are irradiated in succession, the probability that the light of a particular pixel block is strongly affected (blocked) by raindrops on its way to the object OBJ beyond the rain and back to the photodetector 120 on the near side of the rain becomes smaller. In other words, because the influence of the rain is made uniform across the pixels, the processing that removes the influence of rain (noise canceling) becomes easier.
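For illustration only (not part of the disclosed embodiment), the following Python sketch shows one way such block-based patterns could be generated: the on/off decision is made per pixel block and then expanded to DMD pixels, so that a horizontally long block such as 1×5 can be selected when vertically falling rain should be averaged out. The function name, array sizes, and random source are assumptions introduced here.

```python
import numpy as np

def make_block_pattern(height, width, block_h, block_w, rng):
    """Return a random on/off pattern whose unit of control is a pixel block
    of block_h x block_w pixels (e.g. 1 x 5 for horizontally long blocks).
    height and width are assumed to be multiples of the block size."""
    block_bits = rng.integers(0, 2, size=(height // block_h, width // block_w))
    # expand every block-level decision to the underlying DMD pixels
    return np.kron(block_bits, np.ones((block_h, block_w), dtype=int))

rng = np.random.default_rng(0)
square_blocks = make_block_pattern(60, 80, 5, 5, rng)   # square blocks B_S
wide_blocks   = make_block_pattern(60, 80, 1, 5, rng)   # horizontally long blocks
print(square_blocks.shape, wide_blocks.shape)           # (60, 80) (60, 80)
```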
(実施例3.5)
 これまでの実施例3.1~3.4では、同一のピクセルブロックBに含まれる画素はすべてオン(もしくはオフ)であった。これに対して、実施例3.5では、同一のピクセルブロックBは、オン画素とオフ画素の両方を含む。つまり、ピクセルブロックBは、所定配置された二以上のオン画素とオフ画素を含む。このようなピクセルブロックをパターンブロック(Patterned Block)と称する。照明装置110は、パターンブロックの組み合わせによって、強度分布を規定する。
(Example 3.5)
In Examples 3.1 to 3.4 described above, all the pixels included in the same pixel block B are on (or off). On the other hand, in Example 3.5, the same pixel block B includes both ON pixels and OFF pixels. That is, the pixel block B includes two or more ON pixels and OFF pixels which are arranged in a predetermined manner. Such a pixel block is called a pattern block. The illumination device 110 defines the intensity distribution by the combination of pattern blocks.
 図29(a)~(d)は、実施例3.5に係るパターンブロックPBを説明する図である。図29(a)~(d)には、パターンブロックPB内のオン画素とオフ画素の分布(パターン)が示される。 FIGS. 29A to 29D are diagrams for explaining the pattern block PB according to the embodiment 3.5. 29A to 29D show distributions (patterns) of ON pixels and OFF pixels in the pattern block PB.
 図29(a)では、パターンブロックPBの4辺に沿ってオフ画素が配置されており、図29(b)では、パターンブロックPBの隣接する2辺に沿ってオフ画素が配置されている。 In FIG. 29A, the off pixels are arranged along the four sides of the pattern block PB, and in FIG. 29B, the off pixels are arranged along the two adjacent sides of the pattern block PB.
 図29(c)では、オン画素が斜め方向にクロスするように配置される。図29(d)では、オン画素が縦横でクロスするように配置される。 In FIG. 29(c), the ON pixels are arranged so as to cross diagonally. In FIG. 29D, the ON pixels are arranged so as to cross each other vertically and horizontally.
 図30(a)、(b)は、パターンブロックの組み合わせにもとづくパターン信号の例を示す図である。図30(a)は、図29(a)のパターンブロックにより形成されるパターン信号の一例である。図30(b)は、図29(d)のパターンブロックにより形成されるパターン信号の一例である。図30(a)と図30(b)とは、オンであるパターンブロックの配置は同じである。 30A and 30B are diagrams showing examples of pattern signals based on the combination of pattern blocks. FIG. 30A is an example of the pattern signal formed by the pattern block of FIG. 29A. FIG. 30B is an example of the pattern signal formed by the pattern block of FIG. 29D. 30A and 30B have the same arrangement of the pattern blocks that are turned on.
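By way of a hypothetical illustration, a pattern signal like those of FIGS. 30A and 30B can be built in two steps: decide which blocks are lit, then stamp every lit block with a fixed template of ON/OFF pixels. In the Python sketch below, the 5×5 cross template only approximates FIG. 29(d), and the block-map size is an arbitrary assumption.

```python
import numpy as np

# 5x5 pattern block whose ON pixels form a vertical/horizontal cross
# (a hand-written approximation of FIG. 29(d))
CROSS = np.array([[0, 0, 1, 0, 0],
                  [0, 0, 1, 0, 0],
                  [1, 1, 1, 1, 1],
                  [0, 0, 1, 0, 0],
                  [0, 0, 1, 0, 0]])

def pattern_from_blocks(block_map, template):
    """block_map: 2-D 0/1 array deciding which blocks are lit.
    Every lit block is replaced by `template`; dark blocks stay all zero."""
    return np.kron(block_map, template)

rng = np.random.default_rng(1)
block_map = rng.integers(0, 2, size=(12, 16))   # 12 x 16 blocks
ptn = pattern_from_blocks(block_map, CROSS)     # 60 x 80 pixel pattern signal
print(ptn.shape)
```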
 パターンブロックの概念を導入し、オン画素とオフ画素の分布を最適化することで、ある特定のシーンや物体に対して適したセンシングを提供することができる。 By introducing the concept of pattern blocks and optimizing the distribution of on-pixels and off-pixels, it is possible to provide suitable sensing for a specific scene or object.
 さらに、パターンブロックを複数規定しておき、それらを、物体OBJの形状や動き、あるいは走行シーンに合わせて適応的に選択することにより、物体OBJを正確に検出できるようになり、あるいは、演算量やフレームレートを制御できるようになる。 Furthermore, by defining a plurality of pattern blocks and adaptively selecting among them according to the shape and movement of the object OBJ or the driving scene, the object OBJ can be detected accurately, and the amount of computation and the frame rate can be controlled.
 ゴーストイメージングによるセンシングでは、参照光S1のランダム性、空間インコヒーレント性が画質に大きな影響を与える。図29(a)や(b)のパターンブロックによれば、以下で説明するように、空間インコヒーレント性を改善できる。 In sensing by ghost imaging, the randomness and spatial incoherence of the reference beam S1 have a great influence on image quality. According to the pattern blocks of FIGS. 29A and 29B, the spatial incoherence can be improved as described below.
 図31(a)、(b)は、参照光の空間インコヒーレント性の改善を説明する図である。一般に、ある光源から出射する光束は、ある広がり角を持って進む。ニアフィールドを観察する顕微鏡では、画素ごとの光束の広がりは問題とならない。しかしながら車載用のイメージング装置100は、ファーフィールドの物体を検出する必要があるため、光束の広がりが問題を引き起こす。具体的には図31(a)に示すように、照明装置110の隣接する2つのオン領域(あるいは画素)A,Bから出射した2つの光束は、照明装置110から遠く離れた物体OBJの位置においてオーバーラップする。このような光束の重なりは、空間インコヒーレント性を低下させる。 FIGS. 31A and 31B are diagrams for explaining the improvement of the spatial incoherence of the reference light. In general, a light flux emitted from a light source travels with a certain divergence angle. In a microscope observing the near field, the spread of the light flux of each pixel does not matter. However, since the vehicle-mounted imaging apparatus 100 needs to detect objects in the far field, the spread of the light flux causes a problem. Specifically, as shown in FIG. 31A, two light fluxes emitted from two adjacent ON regions (or pixels) A and B of the illumination device 110 overlap at the position of an object OBJ far away from the illumination device 110. Such overlapping of light fluxes reduces the spatial incoherence.
 図29(a)や(b)のパターンブロックを利用すると、連続するオン画素の個数を2個に制限することができる。これは、隣接する2個のオン領域の間に、オフ画素が挿入されることを意味する。これにより、図31(b)に示すように、隣接する2つのオン領域A,Bを空間的に分離できるため、物体OBJに照射される参照光に関しても、光束の重なりを低減することができ、空間インコヒーレント性を改善できる。図29(c)のパターンブロックBを用いた場合も、横方向、縦方向に関しては、連続するオン画素の個数を1個または2個とすることができる。 By using the pattern blocks of FIGS. 29A and 29B, the number of consecutive ON pixels can be limited to two. This means that OFF pixels are inserted between two adjacent ON regions. As a result, as shown in FIG. 31B, the two adjacent ON regions A and B can be spatially separated, so the overlapping of light fluxes can be reduced for the reference light irradiating the object OBJ as well, and the spatial incoherence can be improved. Also when the pattern block B of FIG. 29C is used, the number of consecutive ON pixels in the horizontal and vertical directions can be kept to one or two.
 たとえば、遠方に物体OBJが存在する場合には、図29(a)~(c)のパターンブロックBを用い、物体OBJが近い場合には通常のパターンブロック(あるいはピクセルブロック)を用いてもよい。 For example, when the object OBJ is far away, the pattern blocks B of FIGS. 29A to 29C may be used, and when the object OBJ is close, a normal pattern block (or pixel block) may be used.
(実施例3.6)
 ピクセルブロックBあるいはパターンブロックPBにもとづく強度分布の制御のいくつかは、照明装置110の複数の画素のオン、オフ制御に、制約条件を課していると把握することができる。
(Example 3.6)
It can be understood that some of the intensity distribution control based on the pixel block B or the pattern block PB imposes a constraint condition on the on/off control of a plurality of pixels of the illumination device 110.
 たとえば、図29(a)のパターンブロックPBを利用した強度分布制御は、隣接する2個のオン画素の間には、2+4×n個(n≧0)のオフ画素が挿入されるという制約条件と把握できる。 For example, the intensity distribution control using the pattern block PB of FIG. 29A can be understood as the constraint that 2+4×n (n≧0) OFF pixels are inserted between two adjacent ON pixels.
 あるいは、図29(b)のパターンブロックPBを利用した強度分布制御は、隣接する3個のオン画素の間には、1+4×n個(n≧0)のオフ画素が挿入されるという制約条件と把握できる。 Alternatively, the intensity distribution control using the pattern block PB of FIG. 29B can be understood as the constraint that 1+4×n (n≧0) OFF pixels are inserted between three adjacent ON pixels.
 言い換えれば、空間インコヒーレント性を改善できる強度分布は、パターンブロックPBを用いずに、所定の制約条件にもとづいて生成してもよい。 In other words, the intensity distribution that can improve the spatial incoherence may be generated based on a predetermined constraint condition without using the pattern block PB.
 図32(a)、(b)は、空間インコヒーレント性を改善できる強度分布の例を示す図である。図32(a)、(b)では、隣接する画素がオンとならないという制約条件のもと、複数の画素のオン、オフがランダムに決定されている。図32(a)では、縦方向、横方向、斜め方向に関するオン画素の隣接が禁止される。図32(b)では、斜め方向のオン画素の隣接は許容され、縦方向と横方向に関するオン画素の隣接が禁止される。 FIGS. 32(a) and 32(b) are diagrams showing examples of intensity distributions that can improve spatial incoherence. In FIGS. 32(a) and 32(b), on/off of a plurality of pixels is randomly determined under the constraint that adjacent pixels are not turned on. In FIG. 32A, adjacency of ON pixels in the vertical, horizontal, and diagonal directions is prohibited. In FIG. 32B, adjacency of ON pixels in the diagonal direction is allowed, and adjacency of ON pixels in the vertical direction and the horizontal direction is prohibited.
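One conceivable way to realize the constraint of FIG. 32B, shown here purely as a sketch, is a raster-order random draw in which a pixel may be turned on only when its left and upper neighbours are off; the on-probability and the pattern size below are arbitrary assumptions, not values from the embodiment.

```python
import numpy as np

def constrained_pattern(h, w, p_on, rng):
    """Random on/off pattern in which no two ON pixels are adjacent
    horizontally or vertically (diagonal adjacency is allowed)."""
    ptn = np.zeros((h, w), dtype=int)
    for y in range(h):
        for x in range(w):
            left_off = (x == 0) or (ptn[y, x - 1] == 0)
            up_off = (y == 0) or (ptn[y - 1, x] == 0)
            if left_off and up_off and rng.random() < p_on:
                ptn[y, x] = 1
    return ptn

rng = np.random.default_rng(2)
ptn = constrained_pattern(60, 80, 0.4, rng)
# check that the constraint actually holds
assert not np.any(ptn[:, 1:] & ptn[:, :-1])   # no horizontally adjacent ON pair
assert not np.any(ptn[1:, :] & ptn[:-1, :])   # no vertically adjacent ON pair
```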
(実施例3.7)
 制約条件にもとづく強度分布の制御の別の例を説明する。図33(a)~(d)は、点灯率を制約条件としたパターン制御を説明する図である。図33(a)~(d)では、点灯率(全画素数に対するオン画素の個数の比率)が異なっており、それぞれ、点灯率は20%、40%、60%、80%である。パターン発生器を擬似ランダム信号(PRBS:Pseudo Random Bit Sequence)発生器で構成する場合、点灯率は、RPBSのマーク率に対応付けられる。
(Example 3.7)
Another example of controlling the intensity distribution based on a constraint condition will be described. FIGS. 33A to 33D are diagrams for explaining pattern control with the lighting rate as the constraint condition. In FIGS. 33A to 33D, the lighting rates (the ratio of the number of ON pixels to the total number of pixels) are different, namely 20%, 40%, 60%, and 80%, respectively. When the pattern generator is composed of a pseudo random bit sequence (PRBS) generator, the lighting rate corresponds to the mark rate of the PRBS.
 点灯率を高めると、光量が増加するため、より遠方の物体をセンシングすることが可能となる。あるいは、より反射率が低い物体や、反射面積が小さい物体の検出が可能となる。点灯率を高めることで、光の減衰率が高い濃霧環境下でも、反射光の量を増やすことができるため、検出感度を高めることができる。 Increasing the lighting rate increases the amount of light, which makes it possible to sense objects farther away, or to detect objects with lower reflectance or a smaller reflecting area. By increasing the lighting rate, the amount of reflected light can be increased even in a dense-fog environment where light is strongly attenuated, so the detection sensitivity can be raised.
 反対に、近い物体をセンシングする場合、反射率が高い物体、大きい物体を検出する際には、点灯率を低下させてもよい。 On the contrary, when sensing a close object, the lighting rate may be reduced when detecting an object with high reflectance or a large object.
 また参照光S1を白色光とする場合、走行環境に応じて点灯率を動的に制御することにより、運転者からの視認性を高めたり、他の交通参加者に注意や警報を与えたり、先行車や対向車、歩行者へのグレアを低減できる。 When the reference light S1 is white light, dynamically controlling the lighting rate according to the driving environment makes it possible to improve visibility for the driver, to give attention or warnings to other traffic participants, and to reduce glare to preceding vehicles, oncoming vehicles, and pedestrians.
 たとえば、点灯率を低下させれば、前方が暗くなるため、対向車や歩行者へのグレアを防止することができる。反対に点灯率を高めれば、前方が明るくなるため、運転者からの視認性を高めたりできる。また点灯率を時間的に増減させれば、参照光S1を擬似的に点滅させることができ、自車の運転者や、他の交通参加者に注意や警報を与えることができる。  For example, if the lighting rate is reduced, the area ahead becomes darker, which can prevent glare to oncoming vehicles and pedestrians. On the contrary, if the lighting rate is increased, the front becomes brighter, and the visibility for the driver can be improved. Further, if the lighting rate is increased or decreased over time, the reference light S1 can be made to blink in a pseudo manner, and the driver of the own vehicle and other traffic participants can be given a warning or an alarm.
 図33(a)~(d)の例では、全画素に関する点灯率を規定しているが、複数の領域に分割して、領域ごとに、点灯率を規定してもよい。図34(a)、(b)は、変形例に係る点灯率の制御を説明する図である。たとえば点灯率が50%とする場合に、全画素を通してのマーク率を50%としてPRBSを生成すると、上半分にオン画素が集中し、下半分にオフ画素が集中し、明るさにムラが生ずる。そこで図34(a)に示すように、上半分の領域と下半分の領域それぞれについて、マーク率が50%のPRBSを生成して強度分布を規定すれば、明るさのムラを低減できる。 In the examples of FIGS. 33A to 33D, the lighting rate is defined over all pixels, but the pixels may be divided into a plurality of regions and the lighting rate may be defined for each region. FIGS. 34A and 34B are diagrams for explaining lighting-rate control according to a modification. For example, when the lighting rate is to be 50% and a PRBS is generated with a mark rate of 50% over all pixels, the ON pixels may happen to concentrate in the upper half and the OFF pixels in the lower half, resulting in uneven brightness. Therefore, as shown in FIG. 34A, if a PRBS with a mark rate of 50% is generated separately for the upper-half region and the lower-half region to define the intensity distribution, the unevenness in brightness can be reduced.
 図34(b)に示すように、複数の領域に分割し、領域ごとに点灯率を独立に指定可能としてもよい。この場合、対向車や先行車が存在する領域については点灯率を下げるなどの制御を行うことができる。 As shown in FIG. 34(b), it may be divided into a plurality of areas, and the lighting rate may be independently designated for each area. In this case, control such as lowering the lighting rate can be performed in the area where the oncoming vehicle and the preceding vehicle exist.
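The region-wise lighting-rate control of FIG. 34 can be emulated, for example, by drawing each region independently with its own ON-pixel ratio. The Python fragment below is only a schematic stand-in for a PRBS generator with a programmable mark rate; the band sizes and rates are illustrative assumptions.

```python
import numpy as np

def banded_pattern(band_heights, width, lighting_rates, rng):
    """Build a pattern from horizontal bands; band i has band_heights[i] rows
    and an ON-pixel probability of lighting_rates[i]."""
    bands = [(rng.random((bh, width)) < rate).astype(int)
             for bh, rate in zip(band_heights, lighting_rates)]
    return np.vstack(bands)

rng = np.random.default_rng(3)
# upper and lower halves both at 50 % (cf. FIG. 34(a)); a dimmed band could be
# used instead where an oncoming vehicle was detected (cf. FIG. 34(b))
ptn = banded_pattern([30, 30], 80, [0.5, 0.5], rng)
print(round(ptn.mean(), 2))   # overall lighting rate close to 0.5
```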
 続いて実施の形態3に関連する変形例を説明する。 Next, a modified example related to the third embodiment will be described.
(変形例3.1)
 実施の形態では、照明装置110を、光源112とパターニングデバイス114の組み合わせで構成したがその限りでない。たとえば照明装置110は、マトリクス状に配置される複数の半導体光源(LED(発光ダイオード)やLD(レーザダイオード))のアレイで構成し、個々の半導体光源のオン、オフ(あるいは輝度)を制御可能に構成してもよい。
(Modification 3.1)
In the embodiment, the illumination device 110 is configured by the combination of the light source 112 and the patterning device 114, but the configuration is not limited thereto. For example, the illumination device 110 may be configured as an array of semiconductor light sources (LEDs (light emitting diodes) or LDs (laser diodes)) arranged in a matrix, with the ON/OFF state (or brightness) of each semiconductor light source individually controllable.
(用途)
 続いて実施の形態3に係るイメージング装置100の用途を説明する。このイメージング装置100は、図10の物体識別システム10に利用できる。物体識別システム10のセンサとして、実施の形態2で説明したイメージング装置100を用いることで、以下の利点を得ることができる。
(Use)
Next, the application of the imaging device 100 according to the third embodiment will be described. This imaging device 100 can be used for the object identification system 10 of FIG. By using the imaging device 100 described in the second embodiment as the sensor of the object identification system 10, the following advantages can be obtained.
 イメージング装置100すなわち量子レーダカメラを用いることで、ノイズ耐性が格段に高まる。たとえば、降雨時、降雪時、あるいは霧の中を走行する場合、肉眼では物体OBJを認識しにくいが、イメージング装置100を用いることで、雨、雪、霧の影響を受けずに、物体OBJの復元画像Gを得ることができる。 By using the imaging device 100, that is, the quantum radar camera, noise resistance is significantly increased. For example, when it is raining, snowing, or traveling in fog, it is difficult for the naked eye to recognize the object OBJ, but by using the imaging device 100, the object OBJ is not affected by rain, snow, or fog. The restored image G can be obtained.
 車載用のイメージング装置100の検出対象は、車、人、バイクや自転車、構造物や動植物とさまざまである。またイメージング装置が使用される状況も、天候、時間帯、走行道路、走行速度などの走行環境に応じて大きく変化する。また、イメージング装置自体が移動するとともに、対象物も移動し、それらの相対的な移動方向もさまざまである。参照光の強度分布を、ピクセルブロックやパターンブロックにもとづいて動的に制御することにより、車載用途に適したセンシングが可能となる。 The detection targets of the vehicle-mounted imaging device 100 are various, such as cars, people, motorcycles and bicycles, structures and animals and plants. The situation in which the imaging device is used also changes greatly depending on the traveling environment such as weather, time of day, traveling road, traveling speed, and the like. In addition, the imaging device itself moves, the objects also move, and their relative movement directions are various. By dynamically controlling the intensity distribution of the reference light based on the pixel block and the pattern block, it is possible to perform sensing suitable for in-vehicle use.
 実施の形態3で説明したイメージング装置100は、図11の自動車に搭載でき、図12の車両用灯具に内蔵してもよい。 The imaging device 100 described in the third embodiment can be mounted on the vehicle shown in FIG. 11 and may be built in the vehicle lighting device shown in FIG.
(実施の形態4)
(実施の形態4の概要)
 以下で説明する実施の形態4は、ゴーストイメージングの原理を用いたイメージング装置に関する。イメージング装置は、参照光の強度分布を複数M通りで変化させながら物体に照射する照明と、複数の強度分布I~Iそれぞれについて、物体からの反射光を測定する光検出器と、複数の強度分布I~Iと光検出器の出力にもとづく複数の検出強度b~bの相関をとることにより、物体の復元画像を再構成する演算処理装置と、を備える。
(Embodiment 4)
(Outline of Embodiment 4)
Embodiment 4 described below relates to an imaging apparatus using the principle of ghost imaging. The imaging apparatus includes an illumination that irradiates an object while changing the intensity distribution of reference light in a plurality of M ways, a photodetector that measures reflected light from the object for each of the plurality of intensity distributions I_1 to I_M, and an arithmetic processing unit that reconstructs a restored image of the object by taking the correlation between the plurality of intensity distributions I_1 to I_M and a plurality of detection intensities b_1 to b_M based on the output of the photodetector.
 複数の強度分布I~Iは以下の処理によって決定することができる。
 (i)照明から物体を経て光検出器に至る経路の伝達特性をモデル化する。
 (ii)基準物体およびそれに対応する基準画像を定義する。
 (iii)複数の強度分布I~Iに初期値を与える。
 (iv)モデル化した伝達特性にもとづいて、複数の強度分布I~Iそれぞれを有する参照光を基準物体に照射したときの、検出強度b~bの推定値b^~b^を計算する。
 (v)複数の強度分布I~Iと複数の推定値b^~b^の相関をとることにより、基準物体の復元画像を再構築する。
 (vi)復元画像と基準画像の誤差が小さくなるように、複数の強度分布I~Iそれぞれを修正する。処理(iv)~(vi)を繰り返すことにより、複数の強度分布I~Iを決定することができる。
The plurality of intensity distributions I 1 to I M can be determined by the following processing.
(I) Model the transfer characteristics of the path from the illumination to the photodetector through the object.
(Ii) Define a reference object and a corresponding reference image.
(Iii) Initial values are given to the plurality of intensity distributions I 1 to I M.
(Iv) Based on the modeled transfer characteristics, calculate the estimated values b^_1 to b^_M of the detected intensities b_1 to b_M obtained when the reference object is irradiated with the reference light having each of the plurality of intensity distributions I_1 to I_M.
(V) Reconstruct a restored image of the reference object by taking the correlation between the plurality of intensity distributions I_1 to I_M and the plurality of estimated values b^_1 to b^_M.
(Vi) Each of the plurality of intensity distributions I 1 to I M is modified so that the error between the restored image and the reference image becomes small. By repeating the processes (iv) to (vi), a plurality of intensity distributions I 1 to I M can be determined.
 この実施の形態によれば、想定される被写体などに応じて基準画像を定義して、パターンを最適化することにより、照射回数を減らすことができる。 According to this embodiment, the number of irradiations can be reduced by defining the reference image according to the assumed subject and optimizing the pattern.
 基準物体および基準画像は、複数Nセット(N≧2)、定義されてもよい。この場合、複数の被写体を想定できるため、より汎用性を高めることができる。 A plurality of N sets (N≧2) of reference objects and reference images may be defined. In this case, since a plurality of subjects can be assumed, versatility can be improved.
 誤差は、式(4)の目的関数F(I)で表されてもよい。
 但し、Wは画像の幅、Hは画像の高さ、T(x,y)はi番目の基準画像、G(x,y,I)はi番目の復元画像を表す。
The error may be represented by the objective function F(I) of Expression (4):

F(I) = \sum_{i=1}^{N} \sum_{x=1}^{W} \sum_{y=1}^{H} \{ T_i(x,y) - G_i(x,y,I) \}^2   …(4)

where W is the width of the image, H is the height of the image, T_i(x,y) is the i-th reference image, and G_i(x,y,I) is the i-th restored image.
 複数の強度分布I~Iを、複数セット用意しておき、走行環境に応じたひとつのセットを選択的に使用してもよい。これにより、さまざまな走行環境において常に同じ強度分布のセットを用いる場合に比べて、画質を改善できる。 A plurality of sets of a plurality of intensity distributions I 1 to I M may be prepared and one set according to the traveling environment may be selectively used. As a result, the image quality can be improved as compared with the case where the same set of intensity distributions is always used in various traveling environments.
(実施の形態4の詳細な説明)
 図35は、実施の形態4に係るイメージング装置100を示す図である。イメージング装置100はゴーストイメージングの原理を用いた相関関数イメージセンサであり、照明110、光検出器120、演算処理装置130を備える。イメージング装置100を、量子レーダカメラとも称する。
(Detailed Description of Embodiment 4)
FIG. 35 is a diagram showing the imaging apparatus 100 according to the fourth embodiment. The imaging device 100 is a correlation function image sensor that uses the principle of ghost imaging, and includes an illumination 110, a photodetector 120, and a calculation processing device 130. The imaging device 100 is also called a quantum radar camera.
 照明110は、疑似熱光源であり、実質的にランダムとみなしうる強度分布I(x,y)を有する参照光S1を生成し、物体OBJに照射する。物体OBJへの参照光S1の照射は、その強度分布を、複数のM通りのパターンに応じて変化させながら行われる。照明110は、たとえば均一な強度分布を有する光S0を生成する光源112と、この光S0の強度分布Iを空間的に変調可能なパターニングデバイス114を含みうる。光源112は、レーザや発光ダイオードなどを用いてもよい。参照光S1の波長やスペクトルは特に限定されず、複数のあるいは連続スペクトルを有する白色光であってもよいし、所定の波長を含む単色光であってもよい。 The illumination 110 is a pseudo heat light source and generates the reference light S1 having the intensity distribution I(x, y) that can be regarded as substantially random and irradiates the object OBJ. Irradiation of the reference light S1 onto the object OBJ is performed while changing its intensity distribution according to a plurality of M patterns. Illumination 110 may include, for example, a light source 112 that produces light S0 having a uniform intensity distribution, and a patterning device 114 that is capable of spatially modulating the intensity distribution I of this light S0. The light source 112 may use a laser, a light emitting diode, or the like. The wavelength and spectrum of the reference light S1 are not particularly limited, and may be white light having a plurality of or continuous spectra, or monochromatic light having a predetermined wavelength.
 パターニングデバイス114としては、DMD(Digital Micromirror Device)や液晶デバイスを用いることができる。パターニングデバイス114には、演算処理装置130から、強度分布Iを指定するパターン信号PTN(画像データ)が与えられており、したがって演算処理装置130は、現在、物体OBJに照射される参照光S1の強度分布Iを知っている。 A DMD (Digital Micromirror Device) or a liquid crystal device can be used as the patterning device 114. The pattern signal PTN (image data) designating the intensity distribution I is supplied to the patterning device 114 from the arithmetic processing unit 130; therefore, the arithmetic processing unit 130 knows the intensity distribution I_r of the reference light S1 currently irradiating the object OBJ.
 光検出器120は、物体OBJからの反射光を測定し、検出信号Dを出力する。検出信号Dは、強度分布Iを有する参照光を物体OBJに照射したときに、光検出器120に入射する光エネルギー(あるいは強度)の空間的な積分値である。したがって光検出器120は、シングルピクセルのデバイス(フォトディテクタ)を用いることができる。光検出器120からは、複数M通りの強度分布I~Iそれぞれに対応する複数の検出信号D~Dが出力される。 The photodetector 120 measures the reflected light from the object OBJ and outputs a detection signal D r . The detection signal D r is a spatial integral value of light energy (or intensity) incident on the photodetector 120 when the object OBJ is irradiated with the reference light having the intensity distribution I r . Therefore, the photodetector 120 can use a single pixel device (photodetector). The photodetector 120 outputs a plurality of detection signals D 1 to D M corresponding to a plurality of M intensity distributions I 1 to I M, respectively.
 演算処理装置130は、パターン発生器132および再構成処理部134を含む。パターン発生器132は、参照光S1の強度分布Iを指定するパターン信号PTNを発生し、時間とともにパターン信号PTNを切り替える(r=1,2,…M)。従来では、照明110が発生する参照光S1の強度分布は、ランダムに生成していたが、本実施の形態では、予め決められた複数の強度分布I~Iのセットが用いられる。したがって複数の強度分布I~Iを規定する複数のパターン信号PTN~PTNのセットは、パターン発生器132の内部のメモリ(パターンメモリ)に予め保持されている。 The arithmetic processing unit 130 includes a pattern generator 132 and a reconstruction processing unit 134. The pattern generator 132 generates a pattern signal PTN r designating the intensity distribution I of the reference light S1 and switches the pattern signal PTN r with time (r=1, 2,... M). Conventionally, the intensity distribution of the reference light S1 generated by the illumination 110 is randomly generated, but in the present embodiment, a set of a plurality of predetermined intensity distributions I 1 to I M is used. Therefore, a set of a plurality of pattern signals PTN 1 to PTN M defining a plurality of intensity distributions I 1 to I M is held in advance in a memory (pattern memory) inside the pattern generator 132.
 再構成処理部134は、複数の強度分布I~Iと、複数の検出強度b~bの相関をとることにより、物体OBJの復元画像G(x,y)を再構成する。検出強度b~bは、検出信号D~Dにもとづいている。検出強度bと検出信号Dの関係は、光検出器120の種類や方式などを考慮して定めればよい。 The reconstruction processing unit 134 reconstructs the restored image G(x, y) of the object OBJ by correlating the plurality of intensity distributions I 1 to I M and the plurality of detected intensities b 1 to b M. The detection intensities b 1 to b M are based on the detection signals D 1 to D M. The relationship between the detection intensity b and the detection signal D may be determined in consideration of the type and method of the photodetector 120.
 ある強度分布Iの参照光S1を、ある照射期間にわたり照射するとする。また検出信号Dは、ある時刻(あるいは微小時間)の受光量すなわち瞬時値を表すとする。この場合、照射期間において検出信号Dを複数回サンプリングし、検出強度bを、検出信号Dの全サンプリング値の積分値、平均値あるいは最大値としてもよい。あるいは、全サンプリング値のうちのいくつかを選別し、選別したサンプリング値の積分値や平均値、最大値を用いてもよい。複数のサンプリング値の選別は、たとえば最大値から数えて序列x番目からy番目を抽出してもよいし、任意のしきい値より低いサンプリング値を除外してもよいし、信号変動の大きさが小さい範囲のサンプリング値を抽出してもよい。 Suppose that the reference light S1 having a certain intensity distribution I_r is irradiated over a certain irradiation period, and that the detection signal D_r represents the amount of light received at a certain time (or during a minute time), that is, an instantaneous value. In this case, the detection signal D_r may be sampled a plurality of times during the irradiation period, and the detection intensity b_r may be taken as the integral, the average, or the maximum of all the sampled values of the detection signal D_r. Alternatively, some of the sampled values may be selected, and the integral, average, or maximum of the selected values may be used. The selection of sampled values may, for example, extract the x-th to y-th values ranked from the maximum, exclude sampled values lower than an arbitrary threshold, or extract sampled values in a range where the signal fluctuation is small.
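To make the relation between the sampled detection signal D_r and the detection intensity b_r concrete, the sketch below implements a few of the reductions mentioned above (integral, average, maximum, and a rank-window selection). The mode names, window bounds, and simulated samples are assumptions for illustration only.

```python
import numpy as np

def detection_intensity(samples, mode="mean", lo=None, hi=None):
    """Reduce the samples of the detection signal D_r taken during one
    irradiation period to a single detection intensity b_r."""
    s = np.asarray(samples, dtype=float)
    if mode == "integral":
        return s.sum()
    if mode == "mean":
        return s.mean()
    if mode == "max":
        return s.max()
    if mode == "rank_window":
        # keep only the lo-th to (hi-1)-th largest samples, then average them
        return np.sort(s)[::-1][lo:hi].mean()
    raise ValueError(mode)

rng = np.random.default_rng(4)
d_r = rng.normal(1.0, 0.1, size=256)                 # simulated samples of D_r
b_r = detection_intensity(d_r, mode="rank_window", lo=10, hi=50)
```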
 光検出器120として、カメラのように露光時間が設定可能なデバイスを用いる場合には、光検出器120の出力Dをそのまま、検出強度bとすることができる。 When a device such as a camera whose exposure time can be set is used as the photodetector 120, the output D r of the photodetector 120 can be directly used as the detection intensity b r .
 検出信号Dから検出強度bへの変換は、演算処理装置130が実行してもよいし、演算処理装置130の外部で行ってもよい。 The conversion from the detection signal D r to the detection intensity b r may be performed by the arithmetic processing device 130 or may be performed outside the arithmetic processing device 130.
 相関には、式(5)の相関関数が用いられる。Iは、r番目の強度分布であり、bはr番目の検出強度の値である。
The correlation function of Expression (5) is used for the correlation, where I_r is the r-th intensity distribution and b_r is the r-th detected intensity value:

G(x, y) = \frac{1}{M} \sum_{r=1}^{M} \left( b_r - \langle b \rangle \right) I_r(x, y), \qquad \langle b \rangle = \frac{1}{M} \sum_{r=1}^{M} b_r   …(5)
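Assuming the correlation of Expression (5) takes the form reproduced above, a direct NumPy transcription could look as follows; the toy pattern count, image size, and noise-free measurement model are assumptions for illustration.

```python
import numpy as np

def reconstruct(intensities, b):
    """intensities: (M, H, W) array of the patterns I_r(x, y);
    b: length-M vector of detection intensities b_r.
    Returns the correlation image G(x, y) of Expression (5)."""
    I = np.asarray(intensities, dtype=float)
    b = np.asarray(b, dtype=float)
    weights = b - b.mean()                       # b_r - <b>
    return np.tensordot(weights, I, axes=1) / len(b)

rng = np.random.default_rng(5)
I = rng.integers(0, 2, size=(1000, 32, 32)).astype(float)
target = np.zeros((32, 32)); target[8:24, 8:24] = 1.0
b = (I * target).sum(axis=(1, 2))                # ideal, noise-free measurements
G = reconstruct(I, b)                            # G should resemble `target`
```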
 以上がイメージング装置100の全体の基本構成である。以下、複数の強度分布I~Iの決め方を説明する。複数の強度I~Iは、予めコンピュータを用いて決定される。 The above is the basic configuration of the entire imaging apparatus 100. Hereinafter, how to determine a plurality of intensity distributions I 1 to I M will be described. The plurality of intensities I 1 to I M are determined in advance by using a computer.
 図36は、複数の強度分布I~Iのセットの決定方法を示すフローチャートである。照明110から物体OBJを経て光検出器120に至る経路の伝達特性をモデル化する(S100)。この伝達特性には、照明110から物体OBJまでの光の伝達特性と、物体OBJの反射特性と、物体OBJから光検出器120までの光の伝搬特性と、光検出器120の変換特性が含まれる。 FIG. 36 is a flowchart showing a method of determining a set of the plurality of intensity distributions I_1 to I_M. The transfer characteristic of the path from the illumination 110 through the object OBJ to the photodetector 120 is modeled (S100). This transfer characteristic includes the transfer characteristic of light from the illumination 110 to the object OBJ, the reflection characteristic of the object OBJ, the propagation characteristic of light from the object OBJ to the photodetector 120, and the conversion characteristic of the photodetector 120.
 基準物体およびそれに対応する基準画像T(x,y)を定義する(S102)。基準画像T(x,y)は、基準物体の反射特性を規定する。図37は、基準物体と基準画像T(x,y)の関係を説明する図である。 Define a reference object and a reference image T(x, y) corresponding to it (S102). The reference image T(x,y) defines the reflection characteristic of the reference object. FIG. 37 is a diagram illustrating the relationship between the reference object and the reference image T(x, y).
 ここでは説明の簡素化、理解の容易化のために、グレースケールとして考える。基準画像T(x,y)の画素値は0~1で正規化されるものとする。この場合、各画素pの画素値は、基準物体の対応する部分の反射率を表す。たとえばある画素pの画素値が1であるとき、それに対応する基準物体の反射率は1(すなわち100%)であり、画素値が0であるとき、それに対応する基準物体の反射率は0(すなわち0%)であり、画素値が0.5であるとき、それに対応する基準物体の反射率は0.5(すなわち50%)のように対応付けることができる。  In order to simplify the explanation and facilitate understanding, consider grayscale here. It is assumed that the pixel value of the reference image T(x,y) is normalized with 0 to 1. In this case, the pixel value of each pixel p represents the reflectance of the corresponding portion of the reference object. For example, when the pixel value of a certain pixel p is 1, the reflectance of the corresponding reference object is 1 (that is, 100%), and when the pixel value is 0, the reflectance of the corresponding reference object is 0 ( That is, 0%), and when the pixel value is 0.5, the reflectance of the corresponding reference object can be associated as 0.5 (that is, 50%).
 図36に戻る。複数のパターンに初期値を与えるステップと、(iv)伝達特性にもとづいて、複数の強度分布I~Iそれぞれを有する参照光S1を基準物体OBJに照射したときの、検出強度b~bの推定値b^~b^を計算する(S106)。 Returning to FIG. 36, initial values are given to the plurality of intensity distributions, and then, based on the transfer characteristics, the estimated values b^_1 to b^_M of the detected intensities b_1 to b_M obtained when the reference object OBJ is irradiated with the reference light S1 having each of the plurality of intensity distributions I_1 to I_M are calculated (S106).
 たとえば照明110から物体OBJまでの光路において光は減衰せず、参照光S1は、基準物体OBJを包含する矩形(図37の右の破線で示す矩形)の全体にわたって照射されるものとする。また物体OBJから光検出器120までの光路において光は減衰せず、物体OBJからの反射光はすべて光検出器120に入射するものと仮定する。この仮定のもとでは、強度分布がI(x,y)である参照光を基準物体に照射したときの検出強度の推定値b^は、式(6)で表される。但し、Wは画像の幅、Hは画像の高さを表す。
For example, it is assumed that light is not attenuated along the optical path from the illumination 110 to the object OBJ, and that the reference light S1 is irradiated over the entire rectangle including the reference object OBJ (the rectangle shown by the broken line on the right of FIG. 37). It is further assumed that light is not attenuated along the optical path from the object OBJ to the photodetector 120, and that all the light reflected from the object OBJ is incident on the photodetector 120. Under these assumptions, the estimated value b^_r of the detected intensity when the reference light having the intensity distribution I_r(x, y) is irradiated onto the reference object is represented by Expression (6), where W is the width and H is the height of the image:

\hat{b}_r = \sum_{x=1}^{W} \sum_{y=1}^{H} I_r(x, y)\, T(x, y)   …(6)
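Under the same no-attenuation assumption, the estimate of Expression (6) reduces to a pixel-wise product summed over the image, as in this short sketch (the reflectance map T and its values are illustrative).

```python
import numpy as np

def estimated_intensity(I_r, T):
    """Expression (6): b^_r = sum over (x, y) of I_r(x, y) * T(x, y),
    i.e. the pattern weighted by the reflectance map of the reference object."""
    return float((np.asarray(I_r, dtype=float) * np.asarray(T, dtype=float)).sum())

T = np.zeros((32, 32)); T[10:20, 10:20] = 0.5    # a square of 50 % reflectance
I_r = np.random.default_rng(6).integers(0, 2, size=(32, 32))
b_hat = estimated_intensity(I_r, T)
```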
 現在の強度分布I(x,y)~I(x,y)の組み合わせ(あるいは状態)をIと表記する。式(7)の相関関数にもとづいて、強度分布のセットIを用いて、復元画像G(x,y,I)を再構成する(S108)。式(7)は、式(5)の検出強度bを、推定値b^に置き換えたものである。
The combination (or state) of the current intensity distributions I_1(x, y) to I_M(x, y) is denoted by I. The restored image G(x, y, I) is reconstructed from the set I of intensity distributions based on the correlation function of Expression (7) (S108), which is obtained by replacing the detection intensities b_r in Expression (5) with the estimated values b^_r:

G(x, y, I) = \frac{1}{M} \sum_{r=1}^{M} \left( \hat{b}_r - \langle \hat{b} \rangle \right) I_r(x, y)   …(7)
 基準画像T(x,y)は、復元画像G(x,y,I)の正解に相当する。そこで、復元画像G(x,y,I)と、基準画像T(x,y)の誤差εを計算し(S110)、誤差ε小さくなるように、複数の強度分布I~Iそれぞれを修正する(S114)。 The reference image T(x,y) corresponds to the correct answer for the restored image G(x,y,I). Therefore, the error ε between the restored image G(x,y,I) and the reference image T(x,y) is calculated (S110), and each of the plurality of intensity distributions I_1 to I_M is corrected so that the error ε becomes smaller (S114).
 この処理は誤差εがその許容値εMAXより大きい間繰り返される(S112のY)。ε<εMAXとなると(S112のN)、そのときの複数の強度分布I~IのセットIを保存し(S116)、最適化処理が終了する。 This process is repeated while the error ε is larger than the allowable value ε MAX (Y of S112). When ε<ε MAX (N in S112), the set I of the plurality of intensity distributions I 1 to I M at that time is saved (S116), and the optimization process ends.
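Purely as a toy illustration of the loop of steps S106 to S114, the following Python sketch iterates the forward model, the reconstruction, and a pattern update; a random perturbation accepted only when it lowers the objective is used here in place of the stochastic gradient descent mentioned later, and all sizes, step widths, and reference images are arbitrary assumptions.

```python
import numpy as np

def reconstruct(I, b):
    w = b - b.mean()
    return np.tensordot(w, I, axes=1) / len(b)          # cf. Expression (7)

def forward(I, T):
    return (I * T).sum(axis=(1, 2))                     # cf. Expression (6), per pattern

def objective(I, refs):
    # summed squared error between each reference image and its reconstruction
    return sum(((T - reconstruct(I, forward(I, T))) ** 2).sum() for T in refs)

def optimise(refs, M=100, H=16, W=16, iters=200, step=0.05, seed=7):
    """Toy version of the loop S106-S114: simulate, reconstruct, compare with
    the reference images, and update the pattern set.  A random perturbation
    accepted only when it lowers the objective stands in for gradient descent."""
    rng = np.random.default_rng(seed)
    I = rng.random((M, H, W))                           # initial values (step (iii))
    best = objective(I, refs)
    for _ in range(iters):
        trial = np.clip(I + step * rng.normal(size=I.shape), 0.0, None)  # keep non-negative
        err = objective(trial, refs)
        if err < best:
            I, best = trial, err
    return I

refs = [np.zeros((16, 16)), np.zeros((16, 16))]         # two toy reference images
refs[0][4:12, 4:12] = 1.0
refs[1][:, 6:10] = 1.0
I_opt = optimise(refs)
```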
 この実施の形態によれば、想定される被写体などに応じて基準画像を定義して、パターンを最適化することにより、照射回数を減らすことができる。 According to this embodiment, the number of irradiations can be reduced by defining the reference image according to the assumed subject and optimizing the pattern.
 好ましくは、基準画像T(x,y)と基準物体を、複数Nセット用意し、それらについて総合的な誤差εが小さくなるように、複数の強度分布I~Iを最適化するとよい。この場合、複数の被写体を想定できるため、より汎用性を高めることができる。 It is preferable to prepare a plurality of N sets of the reference image T(x, y) and the reference object and optimize the plurality of intensity distributions I 1 to I M so that the total error ε for them becomes small. In this case, since a plurality of subjects can be assumed, versatility can be improved.
 この場合の誤差εは、式(8)の目的関数F(I)で表されてもよい。
 T(x,y)は、i番目のセットの基準画像を表す。
The error ε in this case may be represented by the objective function F(I) of Expression (8):

F(I) = \sum_{i=1}^{N} \sum_{x=1}^{W} \sum_{y=1}^{H} \{ T_i(x,y) - G_i(x,y,I) \}^2   …(8)

where T_i(x, y) represents the reference image of the i-th set.
 誤差εを最小化するアルゴリズムは特に限定されず、公知のものを用いることができる。たとえば最小化には、確率的勾配降下法を用いることができる。この問題は以下の式(9)で定式化できる。
 I^は、最適な強度分布I~Iのセットである。なお、強度分布I~Iの画素値は負をとらないから、非負の制約条件を設けることができる。
The algorithm for minimizing the error ε is not particularly limited, and a known algorithm can be used; for example, stochastic gradient descent can be used for the minimization. This problem can be formulated by the following Expression (9):

\hat{I} = \mathop{\mathrm{arg\,min}}_{I} F(I)   …(9)

\hat{I} is the optimal set of intensity distributions I_1 to I_M. Since the pixel values of the intensity distributions I_1 to I_M cannot take negative values, a non-negative constraint can be imposed.
(検証)
 以下、具体的な強度分布I~Iのセットの決定および検証について説明する。基準画像(基準物体)としては、notMNISTと呼ばれるさまざまなフォントで表されたアルファベット等の画像データのセットを利用した(https://kaggle.com/lubaroli/notmnist/home)。このデータセットに含まれる画像は529114枚である。機械学習において、データの多様性は性能向上につながる可能性が高いため、以下の処理を加えて擬似的にデータ数を増加させた(かさ増し)。これらのデータを用いて、バッチサイズ64で10エポック学習を行った。
 ・ランダムに上下方向、左右方向に10%シフト
 ・ランダムに10%ズーム
 ・ランダムに10%回転
 ・ランダムに上下、左右反転
(Verification)
Hereinafter, determination and verification of a specific set of intensity distributions I 1 to I M will be described. As the reference image (reference object), a set of image data such as alphabets represented by various fonts called notMNIST was used (https://kaggle.com/lubaroli/notmnist/home). The number of images included in this data set is 529114. In machine learning, since the diversity of data is likely to improve performance, the following processing was added to artificially increase the number of data (bulk). Using these data, 10 epochs were learned with a batch size of 64.
・Randomly shift by 10% in the vertical and horizontal directions
・Randomly zoom by 10%
・Randomly rotate by 10%
・Randomly flip vertically and horizontally
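The augmentation listed above can be reproduced in outline with standard image operations. The sketch below uses scipy.ndimage and interprets the 10% rotation as up to ±36 degrees (10% of a full turn) and the 10% zoom as ±10% scaling; these interpretations, the interpolation order, and the helper names are assumptions, since the exact conventions are not stated.

```python
import numpy as np
from scipy import ndimage

def fit(img, h, w):
    """Centre-crop or zero-pad img back to (h, w) after zooming."""
    out = np.zeros((h, w))
    y0 = max((img.shape[0] - h) // 2, 0)
    x0 = max((img.shape[1] - w) // 2, 0)
    crop = img[y0:y0 + h, x0:x0 + w]
    ch, cw = crop.shape
    out[(h - ch) // 2:(h - ch) // 2 + ch, (w - cw) // 2:(w - cw) // 2 + cw] = crop
    return out

def augment(img, rng):
    """One random pass mirroring the list above: shift, zoom, rotation, flips."""
    h, w = img.shape
    out = ndimage.shift(img, (rng.uniform(-0.1, 0.1) * h,
                              rng.uniform(-0.1, 0.1) * w), order=1)
    out = fit(ndimage.zoom(out, 1.0 + rng.uniform(-0.1, 0.1), order=1), h, w)
    out = ndimage.rotate(out, rng.uniform(-36.0, 36.0), reshape=False, order=1)
    if rng.random() < 0.5:
        out = np.flipud(out)
    if rng.random() < 0.5:
        out = np.fliplr(out)
    return out

rng = np.random.default_rng(8)
augmented = augment(rng.random((28, 28)), rng)
```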
 強度分布I~Iの数Mは、100、500,1000として、それぞれの最適な強度分布のセットI100^、I500^、I1000^を求めた。最適化アルゴリズムとしては、Adam(Kingma Diederik, Jimmy Ba, "Adam-: A Method for Stochastic Optimization", arXiv:1412.6980,2014)を用いた。パラメータは文献にしたがい、α=0.001、β=0.9、β=0.999とした。 The number M of the intensity distributions I 1 to I M is set to 100 , 500 , and 1000 , and optimum intensity distribution sets I 100 ^, I 500 ^, and I 1000 ^ are obtained. Adam (Kingma Diederik, Jimmy Ba, "Adam-: A Method for Stochastic Optimization", arXiv:1412.6980, 2014) was used as the optimization algorithm. Parameters were set to α=0.001, β 1 =0.9, and β 2 =0.999 according to the literature.
 図38は、M=100に対して得られる100通りの強度分布I~I100からなるセットI100^示す図である。各画素の値は、0~255の範囲に正規化されている。M=500,M=1000についても同様のセットが得られるが、スペースの関係で図示は省略する。 FIG. 38 is a diagram showing a set I 100 ^ composed of 100 intensity distributions I 1 to I 100 obtained for M=100. The value of each pixel is normalized in the range of 0 to 255. A similar set can be obtained for M=500 and M=1000, but the illustration is omitted due to space limitations.
 図39は、最適化された強度分布のセットを用いたときの復元画像を示す図である。一番左は正解画像であり、上から順に、アルファベットのK、カエル、電車、トラックの写真を用いている。復元画像の下には、正解画像との誤差を表すPSNRの数値を示す。カエル、電車、トラックの画像は、CIFAR-10から引用した(Alex Krizhevsky, "Learning multiple layers of features from tiny images" 2009)。PSNRは、数値が大きいほど、誤差が小さいことを示す。 FIG. 39 is a diagram showing a restored image when an optimized intensity distribution set is used. The leftmost image is a correct answer image, and photographs of the alphabet K, frog, train, and truck are used in order from the top. Below the restored image, the numerical value of PSNR indicating the error from the correct image is shown. Images of frogs, trains, and trucks are taken from CIFAR-10 (Alex Krizhevsky, “Learning multiple layers of features from tiny images” 2009). The larger the numerical value of PSNR, the smaller the error.
 最適化された強度分布のセットを用いることにより、M=100の場合であっても、元の物体をある程度復元できており、M=500ではさらに正確に復元でき、M=1000ではさらに正確に復元できることがわかる。 It can be seen that, by using the optimized set of intensity distributions, the original object can be restored to some extent even with M=100, more accurately with M=500, and still more accurately with M=1000.
 比較のために、従来のランダムな強度分布のセットを用いたときの復元画像を計算した。ここではアルファベットのKについての結果を示す。図40は、ランダムな強度分布のセットを用いたときの復元画像を示す図である。ランダムな強度分布を用いると、10000回照射を行う場合でも、PSNRは9.578程度である。これに対して、本実施の形態によれば、100回照射でPSNR=14.20、500回照射でPSNR=18.63、1000回照射でPSNR=22.13が得られており、従来に比べてPSNRは著しく改善されることがわかる。 For comparison, we calculated the restored image when using a conventional set of random intensity distributions. The results for the letter K are shown here. FIG. 40 is a diagram showing a restored image when a set of random intensity distributions is used. Using a random intensity distribution, the PSNR is about 9.578 even when irradiation is performed 10,000 times. On the other hand, according to the present embodiment, PSNR=14.20 after 100 irradiations, PSNR=18.63 after 500 irradiations, and PSNR=22.13 after 1000 irradiations are obtained. It can be seen that the PSNR is significantly improved by comparison.
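For reference, the PSNR used in the comparison above can be computed as follows, assuming images normalised so that the peak value is 1.0 (the peak convention is an assumption; the embodiment does not state it).

```python
import numpy as np

def psnr(reference, restored, peak=1.0):
    """Peak signal-to-noise ratio in dB between a correct image and a restored one."""
    mse = np.mean((np.asarray(reference, float) - np.asarray(restored, float)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

rng = np.random.default_rng(9)
ref = np.zeros((32, 32)); ref[8:24, 8:24] = 1.0
noisy = ref + rng.normal(0.0, 0.05, ref.shape)
print(round(psnr(ref, noisy), 2))   # roughly 26 dB for this noise level
```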
 続いて実施の形態4に関する変形例を説明する。 Next, a modified example of the fourth embodiment will be described.
(変形例4.1)
 イメージング装置100において、強度分布のセットを複数用意しておき、走行環境に応じてそれらを切り替えるようにしてもよい。
(Modification 4.1)
In the imaging apparatus 100, a plurality of intensity distribution sets may be prepared and switched according to the traveling environment.
 上述の説明では、イメージング装置100から物体OBJの間で、光の減衰等が存在しないとした。これは、晴天時の見通しがよい場合に対応付けることができる。もちろん、この仮定のもとで得られた強度分布のセットは、降雨、降雪あるいは濃霧の中を走行する状況でも有効であるが、降雨、降雪、あるいは濃霧などの走行環境に応じて、強度分布のセットを切り替えれば、さらに復元画像Gの誤差を小さくできる。 In the above description, it was assumed that there is no attenuation of light or the like between the imaging device 100 and the object OBJ. This corresponds to a clear day with good visibility. Of course, the set of intensity distributions obtained under this assumption is also effective when driving in rainfall, snowfall, or dense fog, but if the set of intensity distributions is switched according to the driving environment such as rainfall, snowfall, or dense fog, the error of the restored image G can be further reduced.
 たとえばイメージング装置100と物体OBJの間に、雨、雪、霧が存在する走行環境を想定する場合、それぞれ影響を考慮して、伝達特性(光の伝搬特性)をモデル化すればよい。この場合、検出強度bの推定値b^の計算式が式(6)から修正されることとなる。そして、修正された推定値b^にもとづいて、強度分布のセットを最適化すればよい。 For example, in the case of assuming a traveling environment in which rain, snow, and fog exist between the imaging device 100 and the object OBJ, the transfer characteristics (light propagation characteristics) may be modeled in consideration of their influences. In this case, the calculation formula of the estimated value b^ r of the detection intensity b r is modified from the formula (6). Then, the set of intensity distributions may be optimized based on the modified estimated value b^ r .
 複数の走行環境を想定し、各走行環境に適した強度分布のセットを最適化する際には、各走行環境(すなわち降雨、降雪、濃霧)において基準物体を撮影し、得られた画像を基準画像として、上述の機械学習を行ってもよい。この場合、伝達特性(光の伝搬特性)のモデル化を簡略化できる。 When assuming a plurality of driving environments and optimizing a set of intensity distributions suited to each of them, a reference object may be photographed in each driving environment (that is, rainfall, snowfall, and dense fog), and the above-described machine learning may be performed using the obtained images as the reference images. In this case, the modeling of the transfer characteristic (light propagation characteristic) can be simplified.
 あるいは、走行環境としては、雨、雪、霧などの違いに加えて、あるいはそれらに変えて、昼の走行と夜の走行、低速走行と高速走行などを考慮して、各走行環境に適した強度分布のセットを用意してもよい。 Alternatively, as the driving environments, in addition to or instead of the differences of rain, snow, fog, and the like, daytime versus nighttime driving, or low-speed versus high-speed driving, may be considered, and a set of intensity distributions suited to each driving environment may be prepared.
(変形例4.2)
 実施の形態4では、照明110を、光源112とパターニングデバイス114の組み合わせで構成したがその限りでない。たとえば照明110は、マトリクス状に配置される複数の半導体光源(LED(発光ダイオード)やLD(レーザダイオード))のアレイで構成し、個々の半導体光源のオン、オフ(あるいは輝度)を制御可能に構成してもよい。
(Modification 4.2)
In the fourth embodiment, the illumination 110 is configured by the combination of the light source 112 and the patterning device 114, but the configuration is not limited thereto. For example, the illumination 110 may be configured as an array of semiconductor light sources (LEDs (light emitting diodes) or LDs (laser diodes)) arranged in a matrix, with the ON/OFF state (or brightness) of each semiconductor light source individually controllable.
 続いて実施の形態4に係るイメージング装置100の用途を説明する。このイメージング装置100は、図10の物体識別システム10に利用できる。物体識別システム10のセンサとして、実施の形態4で説明したイメージング装置100を用いることで、以下の利点を得ることができる。 Next, applications of the imaging device 100 according to the fourth embodiment will be described. This imaging device 100 can be used for the object identification system 10 of FIG. By using the imaging device 100 described in the fourth embodiment as the sensor of the object identification system 10, the following advantages can be obtained.
 第1に、イメージング装置100すなわち量子レーダカメラを用いることで、ノイズ耐性が格段に高まる。たとえば、降雨時、降雪時、あるいは霧の中を走行する場合、肉眼では物体OBJを認識しにくいが、イメージング装置100を用いることで、雨、雪、霧の影響を受けずに、物体OBJの復元画像Gを得ることができる。 First, the use of the imaging device 100, that is, the quantum radar camera significantly improves noise resistance. For example, when it is raining, snowing, or traveling in fog, it is difficult for the naked eye to recognize the object OBJ, but by using the imaging device 100, the object OBJ is not affected by rain, snow, or fog. The restored image G can be obtained.
 第2に、参照光S1として、予め機械学習によって最適化した強度分布I~Iのセットを用いることで、少ない照射回数で、物体OBJの像を復元できる。上の対比では従来のランダムな強度分布では、数千回にも及ぶ照射回数が必要であったのに対して、本実施形態では、100~1000回程度の照射回数まで減らすことができる。これにより、1枚の復元画像を得るために要する時間、すなわちフレームレートを従来よりも増やすことができる。これにより、イメージング装置100と物体OBJが相対的に移動する車載用途において必要なフレームレートを達成することができる。 Secondly, by using, as the reference light S1, a set of intensity distributions I_1 to I_M optimized in advance by machine learning, the image of the object OBJ can be restored with a small number of irradiations. In the comparison above, the conventional random intensity distributions required several thousand irradiations, whereas in the present embodiment the number of irradiations can be reduced to about 100 to 1000. As a result, the time required to obtain one restored image can be shortened, that is, the frame rate can be increased compared with the conventional case. This makes it possible to achieve the frame rate required for in-vehicle use, in which the imaging apparatus 100 and the object OBJ move relative to each other.
 実施の形態4で説明したイメージング装置100は、図11の自動車に搭載でき、図12の車両用灯具に内蔵してもよい。 The imaging device 100 described in the fourth embodiment can be mounted on the automobile of FIG. 11 and may be incorporated in the vehicle lamp of FIG.
 実施の形態1~4において、ゴーストイメージング(あるいはシングルピクセルイメージング)の手法として、相関計算を用いた手法を説明したが、画像の再構築の手法はそれに限定されない。いくつかの実施の形態では、相関計算に変えて、フーリエ変換やアダマール逆変換を使用した解析的手法や、スパースモデリングなどの最適化問題を解く手法、およびAI・機械学習を利用したアルゴリズムによって、画像を再構築してもよい。 In the first to fourth embodiments, a method using correlation calculation has been described as the ghost imaging (or single-pixel imaging) technique, but the image reconstruction method is not limited to it. In some embodiments, the image may be reconstructed, instead of by correlation calculation, by an analytical method using the Fourier transform or the inverse Hadamard transform, by a method that solves an optimization problem such as sparse modeling, or by an algorithm using AI and machine learning.
 実施の形態にもとづき、具体的な語句を用いて本発明を説明したが、実施の形態は、本発明の原理、応用の一側面を示しているにすぎず、実施の形態には、請求の範囲に規定された本発明の思想を逸脱しない範囲において、多くの変形例や配置の変更が認められる。 Although the present invention has been described using specific terms based on the embodiments, the embodiments merely show one aspect of the principle and applications of the present invention, and many modifications and changes in arrangement are permitted in the embodiments without departing from the spirit of the present invention defined in the claims.
 本発明は、車両用灯具に関する。 The present invention relates to a vehicle lamp.
OBJ…物体、10…物体識別システム、20…イメージング装置、40…演算処理装置、42…分類器、100…イメージング装置、110…照明、120…光検出器、130…演算処理装置、132…パターン発生器、134…再構成処理部、200…車両用灯具、202…光源、204…点灯回路、206…光学系、300…自動車、302…前照灯、310…灯具システム、304…車両側ECU、400…車両用灯具、402…筐体、404…カバー、410…前照灯、412…光源、414…リフレクタ、416…パターニングデバイス、420…疑似熱光源、422…光源、424…パターニングデバイス、430…共通部材。 OBJ... Object, 10... Object identification system, 20... Imaging device, 40... Arithmetic processing device, 42... Classifier, 100... Imaging device, 110... Illumination, 120... Photodetector, 130... Arithmetic processing device, 132... Pattern Generator, 134... Reconstruction processing unit, 200... Vehicle lamp, 202... Light source, 204... Lighting circuit, 206... Optical system, 300... Automotive, 302... Headlight, 310... Lamp system, 304... Vehicle side ECU , 400... Vehicle lamp, 402... Housing, 404... Cover, 410... Headlight, 412... Light source, 414... Reflector, 416... Patterning device, 420... Pseudo thermal light source, 422... Light source, 424... Patterning device, 430... Common member.

Claims (51)

  1.  車両用灯具であって、
     前照灯と、
     参照光の強度分布をランダムに切り替えながら物体に照射する疑似熱光源と、
     を備え、
     前記疑似熱光源は、前記物体からの反射光を測定する光検出器、および、前記検出器の出力と前記参照光の強度分布にもとづいて、前記物体の復元画像を再構成する演算処理装置とともにイメージング装置を構成するものであり、
     前記前照灯の少なくとも一部の構成要素は、前記疑似熱光源と共有されることを特徴とする車両用灯具。
    A lamp for a vehicle,
    A headlight,
    A pseudo heat light source that irradiates an object while randomly switching the intensity distribution of the reference light,
    Equipped with
wherein the pseudo heat light source, together with a photodetector that measures reflected light from the object and an arithmetic processing unit that reconstructs a restored image of the object based on the output of the photodetector and the intensity distribution of the reference light, constitutes an imaging device, and
    At least a part of the components of the headlight is shared with the pseudo heat light source.
  2.  前記疑似熱光源は、前記前照灯と光学系を共有することを特徴とする請求項1に記載の車両用灯具。 The vehicular lamp according to claim 1, wherein the pseudo heat light source shares an optical system with the headlight.
  3.  前記前照灯の前記光学系は、配光を制御するパターニングデバイスを含み、前記疑似熱光源と前記前照灯は、前記パターニングデバイスを共有することを特徴とする請求項2に記載の車両用灯具。 The vehicle according to claim 2, wherein the optical system of the headlamp includes a patterning device that controls light distribution, and the pseudo-heat light source and the headlamp share the patterning device. Lamp.
  4.  前記前照灯の前記光学系は、光源の出射光を車両前方に反射するリフレクタを含み、前記疑似熱光源と前記前照灯は、前記リフレクタを共有することを特徴とする請求項2に記載の車両用灯具。 The optical system of the headlamp includes a reflector that reflects light emitted from a light source toward the front of the vehicle, and the pseudo-heat light source and the headlamp share the reflector. Vehicle lighting.
  5.  前記参照光は、赤外または紫外であることを特徴とする請求項2から4のいずれかに記載の車両用灯具。 The vehicular lamp according to any one of claims 2 to 4, wherein the reference light is infrared light or ultraviolet light.
  6.  前記疑似熱光源は、前記前照灯と光源を共有することを特徴とする請求項1から5のいずれかに記載の車両用灯具。 The vehicular lamp according to any one of claims 1 to 5, wherein the pseudo heat light source shares a light source with the headlight.
  7.  前記参照光は白色光であることを特徴とする請求項6に記載の車両用灯具。 The vehicular lamp according to claim 6, wherein the reference light is white light.
  8.  請求項1から7のいずれかに記載の車両用灯具を備えることを特徴とする車両。 A vehicle comprising the vehicle lamp according to any one of claims 1 to 7.
  9.  測定範囲を複数の区画に分割し、区画を切り替えながら強度分布がランダムな参照光を照射する照明装置と、
     物体からの反射光を測定する光検出器と、
     前記複数の区画それぞれについて、前記光検出器の出力にもとづく検出強度と前記参照光の強度分布にもとづいて、前記物体の当該区画に含まれる部分の復元画像を再構成する演算処理装置と、
     を備えることを特徴とする車載用イメージング装置。
    An illumination device that divides the measurement range into a plurality of sections and irradiates the reference light with a random intensity distribution while switching the sections,
    A photodetector that measures the reflected light from the object,
    For each of the plurality of sections, based on the detection intensity based on the output of the photodetector and the intensity distribution of the reference light, an arithmetic processing device that reconstructs a restored image of a portion of the object included in the section,
    An in-vehicle imaging device comprising:
  10.  前記複数の区画の個数は、分割にともなう演算時間の減少量が、測定時間の増加量より大きくなるように定められることを特徴とする請求項9に記載の車載用イメージング装置。 10. The vehicle-mounted imaging device according to claim 9, wherein the number of the plurality of sections is determined such that the amount of decrease in the calculation time due to the division is larger than the amount of increase in the measurement time.
  11.  請求項9または10に記載の車載用イメージング装置を備えることを特徴とする車両用灯具。 A vehicle lamp comprising the vehicle-mounted imaging device according to claim 9.
  12.  請求項9または10に記載の車載用イメージング装置を備えることを特徴とする車両。 A vehicle comprising the on-vehicle imaging device according to claim 9 or 10.
  13.  測定範囲を複数の区画に分割し、区画を切り替えながら強度分布がランダムな参照光を照射するステップと、
     光検出器によって物体からの反射光を測定するステップと、
     前記複数の区画それぞれについて、前記光検出器の出力にもとづく検出強度と前記参照光の強度分布にもとづいて前記物体の当該区画に含まれる部分の復元画像を再構成するステップと、
     を備えることを特徴とするイメージング方法。
    Dividing the measurement range into a plurality of sections, irradiating a reference light with a random intensity distribution while switching the sections,
    Measuring the reflected light from the object with a photodetector,
    For each of the plurality of sections, reconstructing a restored image of the portion of the object included in the section based on the detected intensity based on the output of the photodetector and the intensity distribution of the reference light,
    An imaging method comprising:
  14.  前記複数の区画の個数は、分割にともなう演算時間の減少量が、測定時間の増加量より大きくなるように定められることを特徴とする請求項13に記載のイメージング方法。 The imaging method according to claim 13, wherein the number of the plurality of sections is determined so that the amount of decrease in calculation time due to the division is larger than the amount of increase in measurement time.
  15.  ゴーストイメージングにもとづくイメージング装置に使用される照明装置であって、
     マトリクス状に配置された複数の画素を有し、前記複数の画素のオン、オフの組み合わせにもとづいて、光の強度分布を変調可能に構成され、
     少なくともひとつの画素を含むピクセルブロックを単位として前記強度分布が制御され、前記ピクセルブロックは可変であることを特徴とする照明装置。
    An illumination device used in an imaging device based on ghost imaging,
    It has a plurality of pixels arranged in a matrix and is configured to be able to modulate the light intensity distribution based on a combination of ON and OFF of the plurality of pixels,
    The illumination device is characterized in that the intensity distribution is controlled in units of a pixel block including at least one pixel, and the pixel block is variable.
  16.  前記ピクセルブロックのサイズが可変であることを特徴とする請求項15に記載の照明装置。 The lighting device according to claim 15, wherein the size of the pixel block is variable.
  17.  過去のセンシングにより物体が存在すると判定される領域において前記ピクセルブロックのサイズを小さくすることを特徴とする請求項16に記載の照明装置。 The lighting device according to claim 16, wherein the size of the pixel block is reduced in an area where it is determined that an object exists by past sensing.
  18.  前記ピクセルブロックの形状が可変であることを特徴とする請求項15から17のいずれかに記載の照明装置。 The lighting device according to any one of claims 15 to 17, wherein the shape of the pixel block is variable.
  19.  前記ピクセルブロックの形状は、横長の図形、縦長の図形、斜め方向に伸びる図形のうち少なくとも二種類で変化することを特徴とする請求項18に記載の照明装置。 The lighting device according to claim 18, wherein the shape of the pixel block changes in at least two types of a horizontally long figure, a vertically long figure, and a figure that extends in an oblique direction.
  20.  前記ピクセルブロックの形状は、物体の移動方向にもとづいて選択されることを特徴とする請求項18または19に記載の照明装置。 The lighting device according to claim 18 or 19, wherein the shape of the pixel block is selected based on a moving direction of an object.
  21.  前記ピクセルブロックは、所定のパターンにしたがって配置されるオン画素とオフ画素を含むことを特徴とする請求項15から20のいずれかに記載の照明装置。 The lighting device according to any one of claims 15 to 20, wherein the pixel block includes ON pixels and OFF pixels arranged according to a predetermined pattern.
  22.  前記ピクセルブロックの隣接する少なくとも2辺に沿ってオフ画素が配置されることを特徴とする請求項21に記載の照明装置。 22. The lighting device according to claim 21, wherein off-pixels are arranged along at least two adjacent sides of the pixel block.
  23.  ゴーストイメージングにもとづくイメージング装置に使用される照明装置であって、
     マトリクス状に配置された複数の画素を有し、前記複数の画素のオン、オフの組み合わせにもとづいて、光の強度分布を変調可能に構成され、
     を備え、
     二以上のオン画素とオフ画素を含む所定パターンを用いて、前記強度分布が制御されることを特徴とする照明装置。
    An illumination device used in an imaging device based on ghost imaging,
    It has a plurality of pixels arranged in a matrix and is configured to be able to modulate the light intensity distribution based on a combination of ON and OFF of the plurality of pixels,
    Equipped with
    An illumination device, wherein the intensity distribution is controlled using a predetermined pattern including two or more ON pixels and OFF pixels.
  24.  前記所定パターンが複数規定され、前記強度分布は、複数の所定パターンから動的に選択される少なくともひとつにもとづいて制御されることを特徴とする請求項23に記載の照明装置。 The lighting device according to claim 23, wherein a plurality of the predetermined patterns are defined, and the intensity distribution is controlled based on at least one dynamically selected from the plurality of predetermined patterns.
  25.  物体に参照光を照射する請求項15から24のいずれかに記載の照明装置と、
     前記物体からの反射光を測定する光検出器と、
     前記光検出器の出力にもとづく検出強度と前記参照光の強度分布にもとづいて前記物体の復元画像を再構成する演算処理装置と、
     を備えることを特徴とするイメージング装置。
    The illumination device according to any one of claims 15 to 24, which irradiates an object with reference light,
    A photodetector for measuring reflected light from the object,
    An arithmetic processing unit that reconstructs a restored image of the object based on the detected intensity based on the output of the photodetector and the intensity distribution of the reference light,
    An imaging device comprising:
  26.  請求項25に記載のイメージング装置を備えることを特徴とする車両用灯具。 A vehicle lighting device comprising the imaging device according to claim 25.
  27.  請求項25に記載のイメージング装置を備えることを特徴とする車両。 A vehicle comprising the imaging device according to claim 25.
  28.  ゴーストイメージングにもとづくイメージング装置に使用される照明装置であって、
     マトリクス状に配置された複数の画素を有し、前記複数の画素のオン、オフの組み合わせにもとづいて、光の強度分布を変調可能に構成され、
     所定の制約条件のもと、前記複数の画素のオン、オフが制御されることを特徴とする照明装置。
    An illumination device used in an imaging device based on ghost imaging,
    It has a plurality of pixels arranged in a matrix and is configured to be able to modulate the light intensity distribution based on a combination of ON and OFF of the plurality of pixels,
    An illumination device, wherein on/off of the plurality of pixels is controlled under a predetermined constraint condition.
  29.  前記所定の制約条件は、隣接する画素がオンとならないことを含むことを特徴とする請求項28に記載の照明装置。 29. The lighting device according to claim 28, wherein the predetermined constraint condition includes that adjacent pixels are not turned on.
  30.  前記所定の制約条件は、動的に変化することを特徴とする請求項28または29に記載の照明装置。 The lighting device according to claim 28 or 29, wherein the predetermined constraint condition is dynamically changed.
  31.  前記所定の制約条件は、オン画素とオフ画素の比率である点灯率を規定することを特徴とする請求項28から30のいずれかに記載の照明装置。 The lighting device according to any one of claims 28 to 30, wherein the predetermined constraint condition defines a lighting rate that is a ratio of ON pixels to OFF pixels.
  32.  前記複数の画素が複数の領域に分割され、領域それぞれにおいて、オンとオフの画素の割合が規定されることを特徴とする請求項28から31のいずれかに記載の照明装置。 The lighting device according to any one of claims 28 to 31, wherein the plurality of pixels are divided into a plurality of regions, and a ratio of ON and OFF pixels is defined in each region.
  33.  物体に参照光を照射する請求項28から32のいずれかに記載の照明装置と、
     前記物体からの反射光を測定する光検出器と、
     前記光検出器の出力にもとづく検出強度と前記参照光の強度分布にもとづいて前記物体の復元画像を再構成する演算処理装置と、
     を備えることを特徴とするイメージング装置。
    The illumination device according to any one of claims 28 to 32, which irradiates an object with reference light,
    A photodetector for measuring reflected light from the object,
    An arithmetic processing unit that reconstructs a restored image of the object based on the detected intensity based on the output of the photodetector and the intensity distribution of the reference light,
    An imaging device comprising:
  34.  請求項33に記載のイメージング装置を備えることを特徴とする車両用灯具。 A vehicle lamp comprising the imaging device according to claim 33.
  35.  請求項33に記載のイメージング装置を備えることを特徴とする車両。 A vehicle comprising the imaging device according to claim 33.
  36.  参照光の強度分布を複数M通りに変化させながら物体に照射する照明と、
     複数の強度分布I~Iそれぞれについて、前記物体からの反射光を測定する光検出器と、
     前記複数の強度分布I~Iと、前記光検出器の出力にもとづく複数の検出強度b~bの相関をとることにより、前記物体の復元画像を再構成する演算処理装置と、
     を備え、
     前記複数の強度分布I~Iは、機械学習によって予め生成されていることを特徴とするイメージング装置。
    Illumination for irradiating an object while changing the intensity distribution of reference light in a plurality of M ways;
    A photodetector that measures reflected light from the object for each of a plurality of intensity distributions I 1 to I M ;
    An arithmetic processing unit that reconstructs a restored image of the object by correlating the plurality of intensity distributions I 1 to I M and the plurality of detected intensities b 1 to b M based on the output of the photodetector;
    Equipped with
    The imaging apparatus, wherein the plurality of intensity distributions I 1 to I M are generated in advance by machine learning.
  37.  前記複数の強度分布I~Iが、複数の走行環境に対応して複数セット用意され、走行環境に応じたひとつのセットが選択的に使用されることを特徴とする請求項37に記載のイメージング装置。 38. The plurality of intensity distributions I 1 to I M are prepared in a plurality of sets corresponding to a plurality of traveling environments, and one set according to the traveling environments is selectively used. Imaging equipment.
  38.  An imaging device comprising:
     an illumination that irradiates an object with reference light while changing an intensity distribution of the reference light in a plurality of M ways;
     a photodetector that measures reflected light from the object for each of a plurality of intensity distributions I1 to IM; and
     an arithmetic processing unit that reconstructs a restored image of the object by correlating the plurality of intensity distributions I1 to IM with a plurality of detected intensities b1 to bM based on the output of the photodetector,
     wherein the plurality of intensity distributions I1 to IM are obtained by:
     (i) modeling a transfer characteristic of a path from the illumination through the object to the photodetector;
     (ii) defining a reference object and a reference image corresponding thereto;
     (iii) giving initial values to the plurality of intensity distributions I1 to IM;
     (iv) calculating, based on the transfer characteristic, estimated values b^1 to b^M of the detected intensities b1 to bM obtained when the reference object is irradiated with reference light having each of the plurality of intensity distributions I1 to IM;
     (v) reconstructing a restored image of the reference object by correlating the plurality of intensity distributions I1 to IM with the plurality of estimated values b^1 to b^M;
     (vi) modifying each of the plurality of intensity distributions I1 to IM so that an error between the restored image and the reference image becomes smaller; and
     (vii) determining the plurality of intensity distributions I1 to IM by repeating steps (iv) to (vi).
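Claim 38 (and the corresponding method of claim 47) specifies an offline optimization loop: simulate the detector readings for a reference object, reconstruct it, compare the result with the reference image, and update the patterns. The sketch below is one plausible reading of steps (i) to (vii), not the disclosed implementation: it assumes an idealized transfer model in which b^r is the sum over all pixels of I_r(x, y) times T(x, y) for step (i), uses the correlation reconstruction for step (v), a squared-error objective, and PyTorch autograd with plain (stochastic) gradient descent for step (vi), in line with claim 41. The sizes, learning rate, and iteration count are arbitrary.

    import torch

    M, N, H, W = 64, 26, 32, 32              # assumed: 64 patterns, 26 reference images, 32x32 pixels
    T = torch.rand(N, H, W)                  # step (ii): reference images (placeholder data)
    I = torch.rand(M, H, W, requires_grad=True)   # step (iii): initial intensity distributions
    opt = torch.optim.SGD([I], lr=1e-2)      # optimizer used for step (vi)

    for _ in range(1000):                    # step (vii): repeat steps (iv) to (vi)
        # step (iv): estimated detector outputs b^_{n,r} = sum_xy I_r(x, y) * T_n(x, y)
        b = torch.einsum('rxy,nxy->nr', I, T)
        # step (v): correlation reconstruction G_n = <(b - <b>) * I>
        fluct = b - b.mean(dim=1, keepdim=True)
        G = torch.einsum('nr,rxy->nxy', fluct, I) / M
        # objective: squared error between restored and reference images, cf. Expression (1)
        loss = ((G - T) ** 2).sum()
        opt.zero_grad()
        loss.backward()
        opt.step()                           # step (vi): modify I1..IM to reduce the error
        with torch.no_grad():
            I.clamp_(0.0, 1.0)               # keep the patterns within a physically realizable range

After convergence, I.detach() holds the pattern set I1 to IM that the illumination replays at run time.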
  39.  The imaging device according to claim 38, wherein a plurality of N sets (N≧2) of the reference object and the reference image are defined.
  40.  The imaging device according to claim 39, wherein the error is represented by an objective function F(I) of Expression (1).
     Figure JPOXMLDOC01-appb-M000001
     Here, W is the width of the image, H is the height of the image, Ti(x, y) is the i-th reference image, and Gi(x, y, I) is the i-th restored image.
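The equation image (JPOXMLDOC01-appb-M000001) is not reproduced in this text version. Based only on the variable definitions just given, with N pairs of reference and restored images of width W and height H, a sum-of-squared-errors objective of the following form would be consistent with the description; the actual Expression (1) in the application may differ.

    F(I) = \sum_{i=1}^{N} \sum_{x=1}^{W} \sum_{y=1}^{H} \left( T_i(x, y) - G_i(x, y, I) \right)^2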
  41.  The imaging device according to any one of claims 38 to 40, wherein stochastic gradient descent is used in step (vi).
  42.  The imaging device according to claim 39, wherein a set of alphabet character image data is used as the reference object and the reference image.
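Claim 42 uses alphabet character images as the reference objects and reference images for the optimization of claims 38 to 41. One simple way to build such a set is to rasterize the letters with Pillow, as sketched below; the font, image size, and normalization are illustrative assumptions.

    import numpy as np
    from PIL import Image, ImageDraw, ImageFont

    def alphabet_reference_images(h=32, w=32):
        """Rasterize A-Z into a (26, h, w) float array in [0, 1] for use as reference images Ti."""
        font = ImageFont.load_default()
        refs = []
        for ch in "ABCDEFGHIJKLMNOPQRSTUVWXYZ":
            img = Image.new("L", (w, h), color=0)
            ImageDraw.Draw(img).text((w // 4, h // 4), ch, fill=255, font=font)
            refs.append(np.asarray(img, dtype=np.float32) / 255.0)
        return np.stack(refs)

The resulting array can serve as the tensor T in the optimization sketch given after claim 38.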
  43.  The imaging device according to any one of claims 39 to 42, wherein a plurality of different environments are assumed, and the plurality of intensity distributions I1 to IM are prepared for each of the plurality of environments.
  44.  An object identification system comprising:
     the imaging device according to any one of claims 36 to 43; and
     an arithmetic processing device capable of identifying the type of an object based on an image obtained by the imaging device.
  45.  A vehicle lamp comprising the object identification system according to claim 44.
  46.  A vehicle comprising the object identification system according to claim 44.
  47.  A method of determining a set of a plurality of intensity distributions used in an imaging device,
     the imaging device comprising:
     an illumination that irradiates an object with reference light while changing an intensity distribution of the reference light in a plurality of M ways;
     a photodetector that measures reflected light from the object for each of the plurality of intensity distributions I1 to IM; and
     an arithmetic processing unit that reconstructs a restored image of the object by correlating the plurality of intensity distributions I1 to IM with a plurality of detected intensities b1 to bM based on the output of the photodetector,
     the method comprising:
     (i) modeling a transfer characteristic of a path from the illumination through the object to the photodetector;
     (ii) defining a reference object and a reference image corresponding thereto;
     (iii) giving initial values to the plurality of intensity distributions I1 to IM;
     (iv) calculating, based on the transfer characteristic, estimated values b^1 to b^M of the detected intensities b1 to bM obtained when the reference object is irradiated with reference light having each of the plurality of intensity distributions I1 to IM;
     (v) reconstructing a restored image of the reference object by correlating the plurality of intensity distributions I1 to IM with the plurality of estimated values b^1 to b^M; and
     (vi) modifying each of the plurality of intensity distributions I1 to IM so that an error between the restored image and the reference image becomes smaller,
     wherein the set of the plurality of intensity distributions I1 to IM is determined by repeating steps (iv) to (vi).
  48.  The method according to claim 47, wherein a plurality of N sets (N≧2) of the reference object and the reference image are defined.
  49.  The method according to claim 48, wherein the error is represented by an objective function F(I) of Expression (1).
     Figure JPOXMLDOC01-appb-M000002
     Here, W is the width of the image, H is the height of the image, Ti(x, y) is the i-th reference image, and Gi(x, y, I) is the i-th restored image.
  50.  The method according to any one of claims 47 to 49, wherein stochastic gradient descent is used in step (vi).
  51.  The method according to any one of claims 47 to 50, wherein a set of alphabet character image data is used as the reference object and the reference image.
PCT/JP2019/050170 2018-12-27 2019-12-20 Lighting fixture for vehicle, and vehicle WO2020137908A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201980085931.7A CN113227838B (en) 2018-12-27 2019-12-20 Vehicle lamp and vehicle
JP2020563214A JP7408572B2 (en) 2018-12-27 2019-12-20 Vehicle lights and vehicles

Applications Claiming Priority (10)

Application Number Priority Date Filing Date Title
JP2018-245517 2018-12-27
JP2018-244951 2018-12-27
JP2018245517 2018-12-27
JP2018244951 2018-12-27
JP2019002818 2019-01-10
JP2019002820 2019-01-10
JP2019-002818 2019-01-10
JP2019002819 2019-01-10
JP2019-002819 2019-01-10
JP2019-002820 2019-01-10

Publications (1)

Publication Number Publication Date
WO2020137908A1 (en) 2020-07-02

Family

ID=71127977

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2019/050170 WO2020137908A1 (en) 2018-12-27 2019-12-20 Lighting fixture for vehicle, and vehicle

Country Status (3)

Country Link
JP (1) JP7408572B2 (en)
CN (1) CN113227838B (en)
WO (1) WO2020137908A1 (en)


Family Cites Families (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7415126B2 (en) * 1992-05-05 2008-08-19 Automotive Technologies International Inc. Occupant sensing system
US6830189B2 (en) * 1995-12-18 2004-12-14 Metrologic Instruments, Inc. Method of and system for producing digital images of objects with subtantially reduced speckle-noise patterns by illuminating said objects with spatially and/or temporally coherent-reduced planar laser illumination
DE29825026U1 (en) * 1997-04-02 2004-07-01 Gentex Corp., Zeeland Control system for automatically dimming vehicle head lamp - includes microprocessor to discriminate between light received from head lamps and tail lamps
JP2012146621A (en) * 2010-12-20 2012-08-02 Stanley Electric Co Ltd Vehicular lamp
JP6296401B2 (en) * 2013-06-27 2018-03-20 パナソニックIpマネジメント株式会社 Distance measuring device and solid-state imaging device
CN103472456B (en) * 2013-09-13 2015-05-06 中国科学院空间科学与应用研究中心 Active imaging system and method based on sparse aperture compressing calculation correlation
US10473916B2 (en) * 2014-09-30 2019-11-12 Washington University Multiple-view compressed-sensing ultrafast photography (MV-CUP)
CN107107809B (en) * 2014-12-25 2019-10-01 株式会社小糸制作所 Lighting circuit and lamps apparatus for vehicle
DE102015120204A1 (en) * 2015-11-23 2017-05-24 Hella Kgaa Hueck & Co. Method for operating at least one headlight of a vehicle
CN106019307A (en) * 2016-05-18 2016-10-12 北京航空航天大学 Single-pixel imaging system and method based on array light source
JP2018036102A (en) * 2016-08-30 2018-03-08 ソニーセミコンダクタソリューションズ株式会社 Ranging device and control method of ranging device
US10671873B2 (en) * 2017-03-10 2020-06-02 Tusimple, Inc. System and method for vehicle wheel detection
JP6412673B1 (en) * 2017-07-21 2018-10-24 学校法人玉川学園 Image processing apparatus and method, and program
CN108710215A (en) * 2018-06-20 2018-10-26 深圳阜时科技有限公司 A kind of light source module group, 3D imaging devices, identity recognition device and electronic equipment

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH0112673B2 (en) * 1983-06-01 1989-03-01 Sumitomo Beekuraito Kk
US20140029850A1 (en) * 2011-09-28 2014-01-30 U.S. Army Research Laboratory ATTN:RDRL-LOC-I System and method for image improved image enhancement
WO2017073737A1 (en) * 2015-10-28 2017-05-04 国立大学法人東京大学 Analysis device
WO2018124285A1 (en) * 2016-12-29 2018-07-05 国立大学法人東京大学 Imaging device and imaging method
JP2018156862A (en) * 2017-03-17 2018-10-04 トヨタ自動車株式会社 Vehicle headlamp device
JP2018155658A (en) * 2017-03-17 2018-10-04 株式会社東芝 Object detector, method for object detection, and object detection program

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2020218282A1 (en) * 2019-04-22 2020-10-29 株式会社小糸製作所 Imaging device, vehicle light, automobile, and imaging method
US12003839B2 (en) 2019-04-22 2024-06-04 Koito Manufacturing Co., Ltd. Imaging apparatus using ghost imaging
WO2021193646A1 (en) * 2020-03-26 2021-09-30 株式会社小糸製作所 Imaging device, vehicle lighting, and vehicle
US12108164B2 (en) 2020-03-26 2024-10-01 Koito Manufacturing Co., Ltd. Imaging apparatus
JP7625582B2 (en) 2020-03-26 2025-02-03 株式会社小糸製作所 Imaging device and vehicle lighting device, vehicle
CN115989430A (en) * 2020-08-28 2023-04-18 株式会社小糸制作所 Imaging device, imaging method, vehicle lamp, and vehicle
WO2022270476A1 (en) * 2021-06-22 2022-12-29 株式会社小糸製作所 Imaging device, vehicle lamp, and vehicle
WO2023074759A1 (en) * 2021-10-27 2023-05-04 株式会社小糸製作所 Imaging apparatus, vehicle lamp fitting, and vehicle
WO2023171732A1 (en) * 2022-03-11 2023-09-14 スタンレー電気株式会社 Vehicle lamp
JP7490161B1 (en) 2023-11-09 2024-05-24 三菱電機株式会社 Image acquisition device and image acquisition method

Also Published As

Publication number Publication date
CN113227838B (en) 2024-07-12
JP7408572B2 (en) 2024-01-05
JPWO2020137908A1 (en) 2021-11-11
CN113227838A (en) 2021-08-06

Similar Documents

Publication Publication Date Title
WO2020137908A1 (en) Lighting fixture for vehicle, and vehicle
KR102472631B1 (en) Lidar systems and methods
US10558866B2 (en) System and method for light and image projection
US12047667B2 (en) Imaging device
CN113383280A (en) Ballistic light modulation for image enhancement via fog
US12108164B2 (en) Imaging apparatus
WO2021079810A1 (en) Imaging device, vehicle lamp, vehicle, and imaging method
WO2023085329A1 (en) Imaging system, sensor unit, vehicle lamp fitting, and vehicle
JP7656584B2 (en) Sensor, vehicle and surrounding environment sensing method
KR20250044785A (en) Lidar systems and methods

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19904219

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2020563214

Country of ref document: JP

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19904219

Country of ref document: EP

Kind code of ref document: A1