WO2023222219A1 - Measuring device for measuring particle sizes - Google Patents
Measuring device for measuring particle sizes
- Publication number
- WO2023222219A1 (PCT/EP2022/063486)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- particle
- measuring device
- measurement volume
- light
- image
- Prior art date
Classifications
- G—PHYSICS
- G01—MEASURING; TESTING
- G01N—INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
- G01N15/00—Investigating characteristics of particles; Investigating permeability, pore-volume or surface-area of porous materials
- G01N15/02—Investigating particle size or size distribution
- G01N15/0205—Investigating particle size or size distribution by optical means
- G01N15/0211—Investigating a scatter or diffraction pattern
- G01N15/0227—Investigating particle size or size distribution by optical means using imaging; using holography
- G01N2015/0222—Investigating a scatter or diffraction pattern from dynamic light scattering, e.g. photon correlation spectroscopy
- G01N2015/0238—Single particle scatter
- G01N2015/025—Methods for single or grouped particles
- G01N15/10—Investigating individual particles
- G01N2015/1029—Particle size
Definitions
- the present invention relates to the field of analysis of behavior of particles in a given measurement volume.
- the particle tracking technique relies on accurate determination of the position of particle images on several camera sensors, and the stereoscopic reconstruction of the particle position in the world coordinates from these image positions by means of a well-calibrated camera model. It is clear that the particle images potentially contain more useful information than just the particle position. It seems natural to expect that the overall brightness of the particle image should provide a hint of the particle size, as larger particles scatter more light due to their larger cross-sectional area. However, for a general particle the amount of light scattered in a particular direction is a hopelessly complicated function of the particle size, orientation, shape and variations of surface albedo and roughness. Even for a perfectly spherical and smooth particle such as a small droplet of pure water, the dependence of the scattered light intensity on particle size is complicated and does not offer much hope for accurate inversion (see Fig. 2A).
- the invention relates to a measuring device for measuring particle sizes.
- the measuring device comprises a control unit for controlling the measuring device, a light source for generating a beam of light for illuminating a measurement volume, and one or more image sensors.
- the control unit comprises a processor and a memory storing machine-executable program instructions, execution of the program instructions by the processor causes the processor to control the measuring device to illuminate the measurement volume with the beam of light and to acquire image data of the measurement volume using the one or more image sensors.
- Examples may have the beneficial effect that a measuring device is provided which is configured for accurately determining a scattering angle θ of light scattered by particles, a scattered light intensity I_out for these particles, and an incident light intensity I_in at the particle position. Based on these values, the size of the particle may be determined accurately using the ratio I_out/I_in as well as the scattering angle θ.
- the measuring device comprises a plurality of the image sensors arranged at different positions relative to the measurement volume.
- the acquired image data of the measurement volume may be stereoscopic image data acquired using the plurality of image sensors. Examples may have the beneficial effect that the stereoscopic data may be used for determining positions of individual particles within the measurement volume. Thus, position dependencies of the incident light intensity I_in and/or the scattering angle θ may be taken into account when determining particle sizes.
- the measuring device further comprises a diffuser for diffusing the beam of light. Examples may have the beneficial effect that the diffuser may support an averaging of Mie oscillations, i.e., the oscillation of the scattered light intensity I_out as predicted by Lorenz-Mie scattering theory.
- An averaging of the Mie oscillations may already be implemented due to the finite size of the apertures of the image sensors used to acquire image data.
- This averaging may be increased by using a diffuser.
- a diffuser with a diffusion angle of 1° full-width at half-maximum (FWHM) may be used.
- the measuring device comprises a chamber with an inlet for the particles and an outlet for the particles.
- the chamber comprises the measurement volume.
- a measuring device may be provided with all components mounted onto an encased frame.
- a measuring device may, e.g., also be factory-calibrated.
- the inlet and the outlet may be arranged on opposite sides of the chamber, e.g., as openings in opposite sidewalls of the chamber.
- particles may enter the chamber via the inlet, pass through the chamber with the measurement volume and leave the chamber via the outlet.
- the beam of light generated by the light source for illuminating the measurement volume may be oriented in a direction perpendicular to a straight line extending from the inlet to the outlet.
- the light source comprises a collimator for generating the beam of light as a collimated beam of light.
- a collimated beam of light may be provided enabling an accurate measurement of particle sizes.
- the light source is a pulsed laser or a flashlamp.
- the measuring device comprises a motion unit for moving the measuring device.
- Examples may have the beneficial effect that the position of the measuring device, and thus the position of the measurement volume, may be adjusted.
- the motion unit comprises a linear motor configured for moving the measuring device along a set of one or more rails.
- the set of one or more rails is pivotable around an axis extending perpendicular to the one or more rails in a common plane with the one or more rails implementing a seesaw mechanism for pivoting the measuring device.
- Examples may have the beneficial effect that an orientation of the measuring device, and thus the orientation of the measurement volume, may be adjusted.
- the image sensors each are configured for an off-optical-axis observation of the measurement volume via a mirror and a tube.
- An orientation of the tube defines a view axis under which the respective image sensor observes the measurement volume via the mirror and the tube.
- Examples may have the beneficial effect that by an off-optical-axis observation the image sensors may be protected from environmental influences, e.g., within a box.
- the tube comprises a first end and a second end.
- the first end is a distal end relative to the mirror.
- the second end is a proximal end relative to the mirror.
- the tube further comprises a window arranged at the second end. Examples may have the beneficial effect that using a mirror an off-optical-axis observation may be implemented.
- the tube further comprises a ventilation system configured for injecting a stream of air into the tube and onto the window and for sucking the stream of air out again, in order to remove with the stream of air any liquid that may have collected on the window. Examples may have the beneficial effect that liquid collected on the window may be removed efficiently.
- the tube further comprises a movable lid arranged at the first end of the tube and configured for opening and closing the tube at the first end.
- the lid may protect the tube from liquids entering via the first end, while the measuring device is not in use. While not in use, the movable lid may be closed.
- a relative angle α between a view axis under which the image sensor observes the measurement volume and a direction of illumination of the measurement volume by the light source is less than or equal to 45 degrees.
- the relative angle α lies within a range from 25 to 35 degrees. Examples may have the beneficial effect that for such a relative angle α a measurement of particle sizes may be highly accurate.
- the control unit is further configured as an analysis unit.
- the execution of the program instructions by the processor further causes the processor to control the measuring device to determine the size of one or more of the particles within the measurement volume using the acquired image data.
- the determining of the size of the one or more of the particles comprises, for each of the particles:
  - determining a scattered light intensity I_out of light of the light source scattered by the respective particle to an individual image sensor of the one or more image sensors, using image data from the image data acquired by the individual image sensor,
  - determining a scattering angle θ of the light of the light source being scattered by the respective particle to the individual image sensor, the scattering angle θ being determined relative to a direction of illumination of the measurement volume by the light source,
  - determining an incident light intensity I_in of the light of the light source at the position of the respective particle within the measurement volume, and
  - determining the size of the particle using the ratio I_out/I_in of the scattered light intensity I_out to the incident light intensity I_in and the scattering angle θ.
- Examples may have the beneficial effect that an analysis unit configured for an efficient and effective determination of particle sizes, e.g., of droplets, is provided.
- the particles, e.g., droplets, may, e.g., be particles in the µm range.
- the scattering angle θ may be assumed to be approximately the same for all particles within the measurement volume, i.e., the scattering angle θ may be assumed to be independent of the individual positions of the particles within the measurement volume.
- the scattering angle θ may be determined using a position of the measurement volume relative to the one or more image sensors. This relative position of the measurement volume may be defined by the setup of a measuring device used for measuring the particle sizes.
- the scattering angle θ may be provided as a constant.
- a position of the particle under consideration within the measurement volume may be determined and used for determining a position-dependent scattering angle θ.
- a stereoscopic setup may be used with a plurality of image sensors arranged at different positions relative to the measurement volume. This plurality of image sensors may enable an acquisition of stereoscopic image data for determining positions of individual particles within the measurement volume.
- positions of particles within the measurement volume, and thus the scattering angle θ of light scattered by these particles, may be determined in case the dependency of the scattering angle θ on the particle position is relevant.
- a scattered light intensity I_out for these particles and an incident light intensity I_in at the particle position may be determined.
- in case this incident light intensity I_in can be approximated as being constant within the measurement volume, i.e., as being independent of the position of the particle under consideration within the measurement volume, the constant incident light intensity I_in is determined.
- the constant incident light intensity I_in may be provided by measuring it for the measuring device being used.
- a variation of the incident light intensity I_in within the measurement volume, i.e., a dependence of the incident light intensity I_in on the position of the particle under consideration, may be taken into account.
- a distribution profile of the incident light intensity I_in may be determined, e.g., using variations of the scattered light intensity I_out of a particle under consideration along its trajectory through the measurement volume.
- a stereoscopic setup may be used with a plurality of image sensors arranged at different positions relative to the measurement volume for acquiring stereoscopic data in order to determine the positions of the particle along its trajectory through the measurement volume. Based on these values, the size of the particle may be determined using the ratio I_out/I_in as well as the scattering angle θ.
- the illumination beam may be approximated as being perfectly parallel, i.e., the incident light direction may be assumed to be perfectly uniform everywhere. It may be assumed that along the beam direction, the local light intensity does not change.
- the light intensity may be a function of two variables only, i.e., two coordinate values defining the position within a beam cross-section of the incident light beam.
- light at any point within the beam may be coming from a range of angles, with angular distribution dependent on the size and distance to the light source, e.g., an end of an optical fiber, and properties of a diffuser used, if any.
- a diffuser with a diffusion angle of 1° full-width at half-maximum (FWHM) may be used.
- the size of the one or more particles is quantified by a diameter d_p of the respective particle.
- the diameter d_p is determined using the following relation between the diameter d_p and the ratio I_out/I_in of the scattered light intensity I_out to the incident light intensity I_in as well as the scattering angle θ: d_p = √( (I_out/I_in) / (c·e^(q1 + q2·θ)) ), with q1 and q2 being constants and c being a calibration constant. Examples may have the beneficial effect that the analysis unit may be enabled to efficiently and effectively determine particle sizes using the relation above.
- the constants q1 and q2 may be predefined constants. These predefined constants may, e.g., be determined with an exponential fit using the Lorenz-Mie scattering theory.
- a prediction of the scattered light intensity I_out may be provided (cf., e.g., Fig. 9 and Fig. 10), for which the constants q1 and q2 may be determined as fitting parameters of an exponential fit.
- the calibration constant c may be determined by calibrating the measuring device used for measuring the particle sizes using particles of known sizes.
- the constants q1 and c may be determined together using a calibration of the measuring device, while the constant q2 is determined using Lorenz-Mie scattering theory.
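As a sketch, the inversion described above can be written out in a few lines. The exponential-fit relation is read here as I_out/I_in = c·d_p²·e^(q1 + q2·θ) — one plausible reading of the text, consistent with q1 and c merging into a single calibrated prefactor — and all constant values below are purely illustrative:

```python
import math

def particle_diameter(i_out, i_in, theta, c, q1, q2):
    """Invert the assumed relation I_out/I_in = c * d_p**2 * exp(q1 + q2*theta)
    for the particle diameter d_p (theta in radians)."""
    ratio = i_out / i_in
    return math.sqrt(ratio / (c * math.exp(q1 + q2 * theta)))

# Illustrative constants only: in the device, q1 and q2 would come from an
# exponential fit to Lorenz-Mie predictions and c from a calibration with
# particles of known size.
c, q1, q2 = 1.0e-3, 0.0, -3.0
d_p = particle_diameter(i_out=2.0e-4, i_in=1.0, theta=0.5, c=c, q1=q1, q2=q2)
```

Because c and e^(q1) multiply into one factor, determining q1 and c together in a single calibration, as the text describes, is consistent with this form.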
- the received image data of the measurement volume is stereoscopic image data acquired using a plurality of the image sensors arranged at different positions relative to the measurement volume.
- the execution of the program instructions by the processor may further cause the processor to control the analysis unit to determine a position of the respective particle within the measurement volume using the stereoscopic image data acquired by the plurality of image sensors.
- the position of the respective particle within the measurement volume is used for the determining of the scattering angle θ. Examples may have the beneficial effect that using stereoscopic image data a dependency of the scattering angle θ on the position of the scattering particle within the measurement volume may be taken into account.
- the scattered light intensity I_out of an individual particle is determined as an average over an aperture of the individual image sensor.
- Examples may have the beneficial effect that as the scattered light intensity I_out an average over the aperture of the individual image sensor may be used.
- the determining of the scattered light intensity I_out of the individual particle using image data from the image data acquired by the individual image sensor comprises an iterative fitting using a point-spread function.
- a point-spread function may be fitted in order to identify light intensities originating from individual particles within the image data acquired by the image sensors.
- I_B is a local particle-independent background intensity.
- I_A is a total particle-dependent intensity measured by the image sensor.
- the point-spread function ψ(r, φ) used for the iterative fitting is a canonical point-spread function of the form ψ(r, φ) ∝ |∫₀¹ J₀(r·ρ)·e^(i·φ·ρ²)·ρ dρ|², with J₀ being a Bessel function of the first kind and zero order, d_SF being a scaling factor from canonical coordinates to image sensor coordinates, and φ being a defocus parameter of the form φ = k·r_A²·(1/(2·z_D) + 1/(2·z_s)) + a·r_A², with k = 2π/λ being the wavenumber of light of the light source with wavelength λ.
- r_A is the radius of the aperture of the individual image sensor,
- a is a parameter quantifying an effect of a lens of the image sensor on the phase of light passing through the aperture of the image sensor,
- z_D is a coordinate value of the particle in the direction of illumination, and
- z_s is a coordinate value of the individual image sensor in the direction of illumination.
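A canonical point-spread function of this defocused-Airy type can be evaluated numerically straight from its integral definition. The sketch below assumes the unnormalised form ψ(r, φ) = |∫₀¹ J₀(rρ)·e^(iφρ²)·ρ dρ|² and checks the in-focus limit, where the integral reduces analytically to the Airy pattern J₁(r)/r:

```python
import numpy as np
from scipy.special import j0, j1
from scipy.integrate import quad

def canonical_psf(r, phi):
    """Unnormalised canonical PSF: |integral over rho in [0,1] of
    J0(r*rho) * exp(i*phi*rho**2) * rho|**2, split into real/imag parts."""
    re, _ = quad(lambda rho: j0(r * rho) * np.cos(phi * rho**2) * rho, 0.0, 1.0)
    im, _ = quad(lambda rho: j0(r * rho) * np.sin(phi * rho**2) * rho, 0.0, 1.0)
    return re**2 + im**2

# In focus (phi = 0) the integral reduces to J1(r)/r, i.e. the Airy pattern:
r = 2.0
in_focus = canonical_psf(r, 0.0)
airy = (j1(r) / r) ** 2
```

Increasing φ broadens the profile and fills in the Airy minima, which is exactly what the defocus parameter is meant to quantify.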
- Examples may have the beneficial effect that an efficient and effective approach is provided for assigning light intensities acquired by the image sensors to individual particles within the measurement volume.
- Examples may have the beneficial effect that an efficient and effective approach is provided for fitting light intensities to a circular distribution.
- a distortion of the ellipse is area-preserving.
- Examples may have the beneficial effect that an efficient and effective approach is provided for fitting light intensities to a distorted ellipse distribution.
- Examples may have the beneficial effect that this form may be appropriate for approximating the way that the point spread function tends to be brighter on one side of the image center than on the other, due to the effects of the finite particle size and/or interference.
- the iterative fitting comprises at least one fitting step in which fitting parameters of scattered light intensities I_out of a plurality of particles are adjusted simultaneously for all of the respective scattered light intensities I_out.
- Examples may have the beneficial effect that an improved fitting may be implemented, which takes into account dependences between the particles.
- the execution of the program instructions by the processor further causes the processor to control the analysis unit to perform an extracting of the image data of individual particles, from a video frame of image data acquired by the individual image sensor, from the image data of the measurement volume, for the determining of the scattered light intensities I_out of the individual particles.
- the extracting comprises:
  - selecting the video frame of image data acquired by the individual image sensor from the received image data,
  - selecting a set of decreasing particle image intensity thresholds,
  - for each of the selected particle image intensity thresholds, selecting a set of target defocus parameters,
  - for each of the selected target defocus parameters:
    - selecting a range of acceptable defocus parameters,
    - filtering the selected video frame of image data using a filter matching the point spread function with the current target defocus parameter,
    - finding local intensity maxima of the filtered video frame of image data, and
    - for each of the local intensity maxima found, in decreasing order with respect to the filtered intensity values of the local intensity maxima:
      - using the local maximum found as an initial estimate of a position of an individual particle image,
      - determining an initial estimate of a defocus parameter from the intensity distribution of the local maximum found,
      - refining the initial estimates using an optimization algorithm,
      - checking whether the resulting refined individual particle image satisfies the current constraints, i.e., whether its particle image intensity exceeds the current particle image intensity threshold and its defocus parameter lies within the current range of acceptable defocus parameters, and
      - if the refined individual particle image satisfies the current constraints, subtracting its intensity from the video frame and adding it to a set of extracted particle images.
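The nested extraction loop can be sketched as follows. This is a simplified illustration rather than the patented method: the matched filter is approximated by a Gaussian whose width grows with the target defocus, the Levenberg-Marquardt refinement step is omitted, and subtracting an accepted image is reduced to zeroing its peak pixel. All numeric values are illustrative.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, maximum_filter

def extract_particle_images(frame, intensity_thresholds=(50.0, 20.0, 8.0),
                            target_defocus=(0.0, 2.0, 4.0)):
    """Multi-pass extraction: loop over decreasing intensity thresholds and
    target defocus values, find local maxima of the matched-filtered frame,
    and accept candidates that satisfy the current constraints."""
    residual = frame.astype(float).copy()
    extracted = []
    for threshold in intensity_thresholds:        # decreasing thresholds
        for phi_target in target_defocus:         # target defocus values
            # Filter width chosen to roughly match the PSF at this defocus.
            sigma = 1.0 + 0.5 * phi_target
            filtered = gaussian_filter(residual, sigma)
            # Local maxima of the filtered frame above the current threshold.
            peaks = (filtered == maximum_filter(filtered, size=5)) \
                    & (filtered > threshold)
            ys, xs = np.nonzero(peaks)
            order = np.argsort(filtered[ys, xs])[::-1]   # brightest first
            for i in order:
                y, x = int(ys[i]), int(xs[i])
                intensity = float(residual[y, x])
                if intensity <= threshold:
                    continue                       # fails current constraint
                extracted.append({"x": x, "y": y, "intensity": intensity,
                                  "phi": phi_target})
                residual[y, x] = 0.0               # crude image subtraction
    return extracted

frame = np.zeros((32, 32))
frame[10, 12] = 100.0                              # one synthetic particle
images = extract_particle_images(frame)
```

In the full method, each accepted candidate would additionally be refined with a few Levenberg-Marquardt iterations before its modelled intensity is subtracted from the residual frame.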
- Examples may have the beneficial effect that an efficient and effective approach is provided for extracting image data of individual particles from the image data acquired by the image sensors, e.g., from a video frame.
- the range of acceptable defocus parameters selected for a selected target defocus parameter is the range of all defocus parameters being equal to or smaller than the target defocus parameter. Examples may have the beneficial effect that a suitable range of acceptable defocus parameters may be provided.
- the filter being used for filtering the selected frame of image data is a Gaussian filter.
- Examples may have the beneficial effect that a suitable filter matching the point spread function with the current target defocus parameter may be implemented.
- the optimization algorithm used for refining the initial estimates of the position and defocus parameter of the individual particle image is a Levenberg-Marquardt algorithm. Examples may have the beneficial effect that an efficient and effective optimization algorithm for refining the initial estimates of the position and defocus parameter of the individual particle image may be provided.
- an accurate model of the particle image shape, i.e., a suitable point spread function, may be required.
- the intensities of particle images may be defined in grayscale levels of the image sensor.
- an approach to obtain initial estimates of particle image location and defocus may be required.
- an approach to refine the initial estimates in order to arrive at an accurate final fit may be required.
- the refinement may be done using the Levenberg-Marquardt algorithm, trying to minimize the difference between the model of the particle image shape and the true distribution of pixel intensities.
- the Levenberg-Marquardt algorithm is an algorithm used to solve non-linear least squares problems.
- the Levenberg-Marquardt algorithm interpolates between the Gauss-Newton algorithm and the algorithm of gradient descent.
- the Levenberg-Marquardt algorithm is more robust than the Gauss-Newton algorithm.
- the Levenberg-Marquardt algorithm may be viewed as a Gauss-Newton approach using a trust region approach.
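As a minimal illustration of such a refinement step, the sketch below fits the centre position and amplitude of a synthetic particle image with SciPy's Levenberg-Marquardt implementation. A Gaussian spot stands in for the actual point-spread-function model, and the initial estimate is taken from the brightest pixel, mirroring the local-maximum initialisation described above:

```python
import numpy as np
from scipy.optimize import least_squares

# Synthetic 16x16 particle image: a Gaussian spot as a stand-in PSF model.
yy, xx = np.mgrid[0:16, 0:16].astype(float)

def model(params):
    x0, y0, amp = params
    return amp * np.exp(-((xx - x0) ** 2 + (yy - y0) ** 2) / 4.0)

true = np.array([7.3, 8.1, 120.0])
data = model(true)

def residuals(params):
    # Difference between model and measured pixel intensities, flattened.
    return (model(params) - data).ravel()

# Initial estimate from the brightest pixel, refined by Levenberg-Marquardt.
y_pk, x_pk = np.unravel_index(np.argmax(data), data.shape)
fit = least_squares(residuals,
                    x0=[float(x_pk), float(y_pk), float(data[y_pk, x_pk])],
                    method="lm")
```

On this noise-free example the fit recovers the sub-pixel centre position, which is the point of the refinement stage: the pixel-grid local maximum alone is accurate only to ±0.5 pixel.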
- An intensity profile of an image of a small particle on an image sensor may depend on several parameters. These parameters may comprise a particle radius r_D, a particle distance from the image sensor aperture L_D, an index of refraction n_R of the material from which the particle is made, a wavelength of light λ used to illuminate the particle, a relative angle θ between the direction of the incoming light hitting the particle and the direction of view of the image sensor (scattering angle), a radius of aperture r_A of an objective of the image sensor, a magnification M of the imaging system of the image sensor, and/or a sensor pixel size l_p of the image sensor.
- an approximate formula for the intensity profile of a particle image may be derived that is fast enough to produce synthetic videos comprising a plurality of image frames in a reasonable time, that is asymptotically correct in the limit of r_D → 0, r_A → 0, and that is yet reasonably accurate for typical parameter values.
- An exemplary approximate formula for the intensity profile may have the beneficial effect of being fast enough that the creation of a synthetic video does not take more than a few core-days of computation time.
- One synthetic video may, e.g., comprise 10⁴ image frames. Each image frame may, e.g., comprise up to 10⁴ particle images. This may, e.g., amount to a requirement of several ms of computation time per particle image.
- since a single particle image, e.g., consists of about 10³ pixels, each of which may require about 12 evaluations of the formula, as the pixel grayscale intensity value may be given by an integral over its light-collecting area, one formula evaluation may be required to take a small multiple of 1 × 10⁻⁷ s.
- an exemplary core speed of 2 GHz may, e.g., give an upper limit of about 1 × 10³ floating point operations per single call to the point spread function formula.
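The budget above can be checked with simple arithmetic; the figures are order-of-magnitude values as reconstructed here:

```python
# Back-of-envelope check of the synthetic-video computation budget.
frames_per_video = 10**4      # image frames per synthetic video
images_per_frame = 10**4      # particle images per frame (upper bound)
pixels_per_image = 10**3      # pixels covered by one particle image
evals_per_pixel = 12          # formula evaluations per pixel (area integral)
seconds_per_eval = 1e-7       # a small multiple of 1e-7 s per formula call

seconds_per_image = pixels_per_image * evals_per_pixel * seconds_per_eval
total_seconds = frames_per_video * images_per_frame * seconds_per_image
core_days = total_seconds / 86400   # about 1.4 core-days
```

With these numbers one particle image costs roughly a millisecond and a full video lands in the stated "few core-days" regime.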
- the limiting case of an infinitesimally small particle which scatters light uniformly in all directions, may be considered.
- the lens may be modelled as locally spherical, but infinitely thin.
- an effect of the objective may be condensed into the phase of the light passing through the aperture at [x, y, 0] as an additive factor a(x² + y²).
- the maximum tangent of the deviation of the particle position from the optical axis may be assumed to be small, i.e., tan θ_view ≪ 1.
- e.g., tan θ_view ≈ (1280/2) · (28 · 10⁻⁶ m) / L_D ≈ 0.021 for a typical particle distance L_D and a sensor with 1280 pixels of size 28 µm.
- the light has a wavelength λ and a wavenumber k = 2π/λ.
- the particle may, e.g., be assumed to be a point source of light.
- the Fresnel-Kirchhoff formula may be used to model a propagation of light in a wide range of configurations, either analytically or using numerical modelling.
- the Fresnel-Kirchhoff formula gives an expression for a wave disturbance when a monochromatic spherical wave is the incoming wave of the situation under consideration. This formula is derived by applying the Kirchhoff integral theorem, which uses Green's second identity to derive the solution to the homogeneous scalar wave equation, to a spherical wave with some approximations.
- the obliquity factor in this formula may, e.g., be approximated by simply 2. Focusing on the region of the image near the optical axis, this approximation incurs at most a small relative error.
- the Fresnel-Kirchhoff formula may then be simplified, and the exponential term inside the integral may be approximated using the approximation √(1 + x) ≈ 1 + x/2 to the square root. Exponential terms in this approximation may be replaced by ones, provided that the corresponding argument is sufficiently small.
- the magnitudes of the exponential arguments may be bounded from above.
- upper bounds on the magnitudes of the various parameters may be summarized accordingly.
- the Fresnel-Kirchhoff formula may be further simplified: collecting terms of the same power in r and transforming to polar coordinates allows rewriting the integral in terms of Bessel functions of the first kind, using the identity ∫₀^(2π) e^(i·c·cos φ) dφ = 2π·J₀(c), which holds for any real c.
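The Bessel-function identity invoked in this step, that the angular integral of e^(ic·cos φ) produces J₀, can be verified numerically. The sketch assumes only the standard integral representation J₀(c) = (1/2π)·∫₀^(2π) e^(ic·cos φ) dφ:

```python
import numpy as np
from scipy.special import j0
from scipy.integrate import quad

def angular_integral(c):
    """Evaluate (1/2pi) * integral over [0, 2pi] of exp(i*c*cos(phi)) dphi,
    returning (real part, imaginary part); the real part should equal J0(c)
    and the imaginary part should vanish for real c."""
    re, _ = quad(lambda p: np.cos(c * np.cos(p)), 0.0, 2.0 * np.pi)
    im, _ = quad(lambda p: np.sin(c * np.cos(p)), 0.0, 2.0 * np.pi)
    return re / (2.0 * np.pi), im / (2.0 * np.pi)
```

The vanishing imaginary part follows from the symmetry φ → π − φ, which is why the polar-coordinate rewrite of the Fresnel-Kirchhoff integral leaves only the real J₀ kernel.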
- the parameter φ may be referred to as the defocus parameter, since it encapsulates the effects of the particle's departure from the plane of focus on the shape of its sensor image.
- the grayscale intensity on the image sensor may then be proportional to the canonical point spread function.
- a canonical point spread function may be written as ψ(r, φ) ∝ |∫₀¹ J₀(r·ρ)·e^(i·φ·ρ²)·ρ dρ|².
- the pre-factor is chosen so that, when integrated over the whole sensor plane, the result gives a value of 1.
- A first quantity of interest is the radius ρ_min of the Airy disk, which is the distance between the center and the closest minimum of the point spread function of a point source on the focal plane.
- Setting φ = 0 in the canonical point spread function allows computing the integral analytically, giving ψ(r, 0) ∝ (J₁(r)/r)². Then ρ_min may be obtained by considering the first zero of J₁(r), from which ρ_min = j₁,₁·λ·z_s/(2π·r_A), where λ is the wavelength of light used and j₁,₁ ≈ 3.8317 is the first zero of J₁(x).
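The first zero j₁,₁ of J₁, which fixes the Airy-disk radius, can be found numerically as a quick check of the value quoted above:

```python
from scipy.special import j1, jn_zeros
from scipy.optimize import brentq

# First zero of J1 by root bracketing: J1 is positive at 3.0 and negative
# at 4.5, so a single sign change lies in between.
j_1_1 = brentq(j1, 3.0, 4.5)

# Cross-check against SciPy's tabulated Bessel-function zeros.
j_1_1_ref = jn_zeros(1, 1)[0]
```

In canonical coordinates the Airy minimum sits at r = j₁,₁; converting to sensor coordinates brings in the scaling (here via λ, z_s and r_A) discussed in the text.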
- a set of decreasing particle image intensity thresholds may be selected and it may be looped over them.
- a set of target defocus parameters may be selected and it may be looped over them.
- a range of defocus parameters may be picked that will be accepted in this step.
- the best-performing choice may, e.g., be to pick all below the target value.
- the image frame may be filtered using a filter, e.g., a Gaussian filter that may best match the point spread function with the target defocus parameter.
- the local maxima of the filtered intensity may be determined. It may be looped through the local maxima, e.g., in decreasing order of their filtered intensity value.
- Each local maximum may be considered as an initial estimate of a position of a new particle image, and its parameters may be refined, e.g., as follows: a) estimate the image defocus parameter from the local intensity distribution; b) optimize the image parameters using a small number of iterations of the Levenberg-Marquardt algorithm. It may then be decided whether each refined image obtained at the end of the previous step is real (or a noise artefact), and if so, whether it satisfies the current constraints, that is, whether its intensity lies above the current threshold and its defocus lies within the currently accepted range. If the image is deemed real and allowable, its intensity may be subtracted from the residual frame intensity and it may be added to a set of extracted images.
- after each loop or, e.g., when a sufficient number of new images have been added, all the images extracted so far may be taken, their intensities may be added back to the frame, and their parameters may be optimized simultaneously. Then their intensities may be subtracted again.
- the intensity profile may be a function of five parameters, since d_SF is constant for the given image sensor calibration: the background intensity I_B, i.e., a local light intensity that would be recorded were the particle not present; a total image intensity I_A, i.e., an integral of grayscale intensity over a given image sensor attributable solely to the presence of the particle; an x-image sensor coordinate x₀ of the particle image center position; a y-image sensor coordinate y₀ of the particle image center position; and the defocus parameter φ, which provides a quantitative measure of how out-of-focus the particle image appears.
- This form may, e.g., be used to approximate the way that a point spread function tends to be brighter on one side of the image center than on the other, due to the effects of the particle finite size or, alternatively, interference.
- the vector a determines the direction and magnitude of the resulting intensity asymmetry.
- yet more complex distortions may, e.g., be taken into account, such as area-preserving, tilted ellipsoidal distortions.
- Each particle image may technically cover an entire image sensor, as the intensity decrease away from an image center may be slow and the second moment of the intensity may be infinite. However, for reasons of computation speed, each particle image may be considered to be confined to a relatively small circular region centered at (x 0 , y 0 ) as defined above. The radius of this region may depend on the image defocus and, e.g., also on the intensity.
- a smaller fitting radius r FIT may be used for computing partial derivatives of the local intensity with respect to the particle image parameters. These derivatives may be used to construct a Jacobian matrix J and the product J T J may be used in the Levenberg-Marquardt algorithm during optimization.
- the Jacobian matrix of a vector-valued function of several variables is the matrix of all its first-order partial derivatives. When this matrix is square, that is, when the function takes the same number of variables as input as the number of vector components of its output, its determinant is referred to as the Jacobian determinant.
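One damped Gauss-Newton (Levenberg-Marquardt) step built from a finite-difference Jacobian and the product J^T J, as used in the optimization above, can be sketched as follows; the finite-difference step and damping value are illustrative assumptions.

```python
import numpy as np

def numerical_jacobian(residual, p, eps=1e-6):
    """Finite-difference Jacobian of a vector-valued residual function:
    all first-order partial derivatives with respect to the parameters p."""
    r0 = residual(p)
    J = np.zeros((r0.size, p.size))
    for k in range(p.size):
        dp = np.zeros_like(p)
        dp[k] = eps
        J[:, k] = (residual(p + dp) - r0) / eps
    return J

def lm_step(residual, p, lam=1e-3):
    """One Levenberg-Marquardt update using the damped normal equations
    built from J^T J (illustrative sketch)."""
    J = numerical_jacobian(residual, p)
    r = residual(p)
    A = J.T @ J + lam * np.eye(p.size)  # damping keeps A well-conditioned
    return p - np.linalg.solve(A, J.T @ r)
```

For a residual that is linear in the parameters, a single step essentially reaches the least-squares solution; for the nonlinear image model, a small number of such iterations would be used, as described above.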
- a larger manipulation radius r ADD may be used, when subtracting and/or adding back particle images, which may collectively be referred to as image manipulation, before and after each stage of extraction of new images.
- a larger radius may, e.g., be desirable during image manipulation, since it is not required to compute partial derivatives or a product of Jacobians, only the model intensities have to be computed. Thus, computation time penalty for doing so may only be mild.
- subtracting a particle image over a larger region may reduce a likelihood, that a false particle image, e.g., a noise artefact, would be extracted from its residual halo.
- the choice of a formula for r ADD may be guided by a desire to encompass all the pixels where the model intensity I(x) − I B is significant compared to a thermal noise level σ T of the image sensor; this requirement determines the resulting expression for r ADD .
- Both r FIT and r ADD may be required to be at least a given minimum radius and at most r MAX , which, e.g., may either be provided by the user or set to a maximum radius at which the point-source function was interpolated.
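The clamping of the manipulation radius can be sketched as follows. Since the patent's own formula for r ADD is not reproduced above, this sketch assumes a Gaussian intensity model and solves for the radius at which the model intensity falls to the thermal noise level; all names and the bound values are assumptions.

```python
import numpy as np

def manipulation_radius(I_A, sigma, sigma_T, r_min=2.0, r_max=32.0):
    """Radius beyond which a Gaussian-model particle image of total
    intensity I_A and width sigma sinks below the sensor's thermal
    noise level sigma_T, clamped to [r_min, r_max]. (Gaussian model
    and clamp bounds are illustrative assumptions.)"""
    peak = I_A / (2 * np.pi * sigma**2)
    if peak <= sigma_T:           # image is everywhere below the noise
        return r_min
    # solve peak * exp(-r^2 / (2 sigma^2)) = sigma_T for r
    r = sigma * np.sqrt(2 * np.log(peak / sigma_T))
    return min(max(r, r_min), r_max)
```

Brighter images thus receive a larger manipulation radius, until the user-provided or interpolation-limited maximum r MAX is reached.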
- a first estimate of the image defocus may be provided based on the available intensity profile.
- a better way may be to subdivide a disk centered at the estimated image center into rings and base the defocus parameter estimate on the relative magnitudes of the intensities summed over the individual rings.
- a choice of radii separating the rings may, e.g., be the zeros j 1,k of J 1 (r), which are the locations of local maxima and minima of intensity of the underlying, not integrated point-spread function for all φ, except a discrete set of values.
- the results may, e.g., be least sensitive to the particular choice of image center.
- the defocus may, e.g., be estimated in one of the following two ways:
- ratios may be computed progressively and, if they surpass a predefined threshold value q thr (k), the defocus parameter φ may be derived from a locally linear fit to q(k) as a function of φ obtained from the canonical form of the point-spread function.
- the second approach may be advantageous compared to the first approach, as it corrects for a potentially wrongly placed image center and thus may not tend to overestimate the defocus parameter, where that may be an issue, e.g., mostly for φ > 10.
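The ring construction for the defocus estimate can be sketched as follows, using the tabulated first zeros of the Bessel function J 1 as ring-separating radii; the frame layout, units and function names are illustrative assumptions.

```python
import numpy as np

# First zeros j_{1,k} of the Bessel function J1 (standard tabulated
# values), used here as ring-separating radii in pixel units.
J1_ZEROS = np.array([3.8317, 7.0156, 10.1735, 13.3237])

def ring_sums(frame, x0, y0, radii=J1_ZEROS):
    """Sum the intensity over concentric rings around an estimated image
    centre (x0, y0); the relative magnitudes of the ring sums feed the
    defocus estimate. (Illustrative sketch.)"""
    ii, jj = np.indices(frame.shape)
    r = np.hypot(ii - x0, jj - y0)
    edges = np.concatenate(([0.0], radii))
    return np.array([frame[(r >= lo) & (r < hi)].sum()
                     for lo, hi in zip(edges[:-1], edges[1:])])
```

The ratios of successive ring sums would then be compared against the thresholds q thr (k) described above.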
- the execution of the program instructions by the processor further causes the processor to control the analysis unit to filter out noise artefacts from the refined individual particle images by determining for each refined individual particle image a ratio of probabilities that the refined individual particle image is actually a noise artefact and that the refined individual particle image is actually an individual particle image. If the ratio exceeds a predefined threshold, the refined individual particle image is rejected as a noise artefact, else the refined individual particle image is accepted.
- Examples may have the beneficial effect, that noise artefacts may be identified and dismissed, such that only real particle images may be taken into account for determining the particle sizes.
- based on the extracted image parameters, it may be decided whether the image really is an image of a particle in view or whether it rather came about by chance fluctuations of intensity noise or by interaction of other particle image residuals. For example, a log likelihood ratio test may be used for differentiating between real images and noise artefacts.
- σ(I) is the standard deviation of grayscale intensity at a pixel with total intensity I.
- I B is a fitting constant, i.e., a hypothetical local background intensity. Under a null hypothesis that the image is actually a noise artefact, S 0 ~ χ²(N − 1) holds true.
- N is the number of pixels included in the sum above.
- S 1 ~ χ²(N − N P ) holds true.
- N P is the effective number of parameters of the image fit, e.g., N P ≈ 5.
- the image may be rejected as a noise artefact if the log likelihood ratio exceeds a given threshold.
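The chi-square-based likelihood ratio test described above can be sketched as follows. The exact statistic and threshold convention of the patent are not reproduced, so this is an illustrative variant comparing the chi-square likelihoods of S 0 (noise hypothesis, N − 1 degrees of freedom) and S 1 (real-image hypothesis, N − N P degrees of freedom).

```python
import math

def log_likelihood_ratio(S0, S1, N, N_P=5):
    """Log of the ratio of chi-square likelihoods for the two hypotheses:
    'noise artefact' (S0 ~ chi2(N-1)) versus 'real particle image'
    (S1 ~ chi2(N-N_P)). (Illustrative sketch.)"""
    def log_chi2_pdf(x, k):
        # log of the chi-square density with k degrees of freedom
        return ((k / 2 - 1) * math.log(x) - x / 2
                - (k / 2) * math.log(2) - math.lgamma(k / 2))
    return log_chi2_pdf(S0, N - 1) - log_chi2_pdf(S1, N - N_P)

def is_noise(S0, S1, N, N_P=5, log_threshold=0.0):
    """Reject the image as a noise artefact when the ratio exceeds
    the (assumed) threshold."""
    return log_likelihood_ratio(S0, S1, N, N_P) > log_threshold
```

A real particle image leaves a large background-only residual S 0 but a small fit residual S 1, driving the ratio strongly negative, so the image is kept.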
- the determining of the incident light intensity l in comprises taking into account variations of the incident light intensity l in by iteratively adjusting the determined incident light intensity l in in different sections of the measurement volume until an observed variation of particle image intensity over particle trajectories of particles passing through the measurement volume matches a variation of the local incident light intensity.
- Examples may have the beneficial effect, that variations of the incident light intensity l in within the measurement volume may be taken into account.
- the iteratively adjusting comprises starting with an initial uniform light intensity profile of the incident light intensity l in , dividing a beam cross-section of the incident light into sections, for each step of the iteration assembling for each of these sections of the beam cross-section statistics of a local relative particle intensity, and updating the light intensity profile using a mean local relative particle intensity determined from the determined statistics.
- the assembling of statistics comprises for each particle position along a trajectory through the measurement volume determining a local averaged image intensity over the image data acquired by more than one of the image sensors, determining a trajectory average of the image intensity using an average of the local averaged image intensities over the trajectory, and dividing each particle intensity by the trajectory average of the image intensity resulting in the mean local relative particle intensity.
- An illumination profile may, e.g., be obtained based on the following line of reasoning: If effects of Mie scattering are neglected and effects of distance to image sensor are corrected, i.e., closer particles appear brighter, for a given particle size a particle image intensity may be assumed to be proportional to an incident light intensity. Thus, if a mean particle image intensity over each individual particle trajectory of a plurality of particle trajectories is calculated, a ratio of an instantaneous particle image intensity and the mean particle image intensity may be used as an indication of a local incident light intensity of the beam of light at the instantaneous position. The only problem is that, to obtain the mean particle image intensity, the intensity profile of the beam of light, i.e., an illumination profile, should ideally already be known.
- the illumination profile may be determined using an iterative approach.
- the beam cross-section may be divided into small sections, e.g., squares.
- the squares may, e.g., have a side-length of 0.125 mm.
- statistics of the local relative particle intensity may be assembled. After processing a plurality of trajectories, e.g., all the available trajectories, the resulting mean local relative particle intensity may be used as a source value of an updated model of the light intensity profile.
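The iterative illumination-profile update can be sketched as follows for a synthetic data set. The binning scheme, the trajectory-mean normalisation and all names are assumptions for illustration; the patent additionally corrects for scattering angle and camera distance before this step.

```python
import numpy as np

def update_profile(positions, intensities, traj_ids, grid_shape=(8, 8),
                   extent=1.0, n_iter=5):
    """Iterative estimate of the beam intensity profile: start from a
    uniform profile, normalise each particle intensity by its trajectory
    average (corrected with the current profile), and bin the relative
    intensities over sections of the beam cross-section."""
    profile = np.ones(grid_shape)
    nx, ny = grid_shape
    ix = np.clip((positions[:, 0] / extent * nx).astype(int), 0, nx - 1)
    iy = np.clip((positions[:, 1] / extent * ny).astype(int), 0, ny - 1)
    for _ in range(n_iter):
        # trajectory mean of profile-corrected intensity (size proxy)
        corr = intensities / profile[ix, iy]
        traj_mean = {t: corr[traj_ids == t].mean() for t in set(traj_ids)}
        rel = intensities / np.array([traj_mean[t] for t in traj_ids])
        new = np.ones(grid_shape)
        for gx in range(nx):
            for gy in range(ny):
                m = (ix == gx) & (iy == gy)
                if m.any():
                    new[gx, gy] = rel[m].mean()  # mean local relative intensity
        profile = new
    return profile
```

For trajectories that sample all sections, the recovered profile is proportional to the true illumination profile after the first iteration and then remains stable.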
- Assembling the statistics may comprise, for each trajectory, adjusting the particle image intensities to account for the angular dependence of the scattered light intensity and the distance to the image sensor.
- an overall particle intensity may be computed.
- the overall particle intensity may be computed as a weighted geometric average of the image intensities, while taking into account a differential brightness of the different image sensors.
- a weighted geometric average of the overall particle intensity over the entire trajectory may be computed and each particle intensity may be divided by the resulting trajectory average to obtain relative intensities.
- the resulting relative intensities may be recorded by associating them with the corresponding section of the cross-section of the beam, based on the three-dimensional particle positions along the trajectory for which the relative intensities have been calculated.
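The per-trajectory normalisation with a weighted geometric average might look as follows; the uniform weighting and the form of the gain correction are assumptions.

```python
import numpy as np

def relative_intensities(image_intensities, camera_gains=None):
    """Per-position overall particle intensity as a geometric average
    over cameras (correcting differential camera brightness), then
    normalised by the geometric average over the whole trajectory.
    Rows: positions along one trajectory; columns: cameras."""
    I = np.asarray(image_intensities, dtype=float)
    if camera_gains is not None:
        I = I / np.asarray(camera_gains)       # remove differential brightness
    overall = np.exp(np.mean(np.log(I), axis=1))  # geometric mean per position
    traj_avg = np.exp(np.mean(np.log(overall)))   # geometric mean over trajectory
    return overall / traj_avg
```

The returned relative intensities would then be binned into the cross-section squares at the corresponding three-dimensional particle positions.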
- the particle trajectories may be used, e.g., as determined by particle tracking. If temporal linkage, i.e., tracking, is not available, e.g., a simplified, non-iterative approach may be used.
- the local relative light intensity may be set as a mean particle intensity over all particles at the same position within the laser beam cross-section, i.e., within the same section of the cross-section of the light beam.
- a raw intensity threshold may be used. For example, all particle images with a raw intensity below the raw intensity threshold may be excluded. Similarly, all particles with no particle images that pass the raw intensity threshold may be excluded.
- particle images may be excluded whose sensor position lies too close to the frame boundary of the image frame acquired by the image sensor.
- the intensity values there may be unreliable.
- particle images with sensor positions with a distance from the frame boundary below a minimum threshold may be excluded.
- a minimum threshold may, e.g., be two Airy radii.
- An Airy radius is a radius of an Airy disk.
- the Airy disk is a description of a best-focused spot of light that a perfect lens with a circular aperture can make, limited by the diffraction of light.
- an accurate knowledge of a direction of the illumination beam may be required, e.g., within 10⁻³ accuracy.
- An approach for obtaining the direction of the illumination beam may basically consist of computing the beam profile for two thin slices of the measurement volume.
- the slice parameters z low and z high may be selected such that the resulting slice is not too thin, in order to contain enough data to establish an accurate representation of the beam profile. Conversely, the slice parameters z low and z high may be selected such that the resulting slice is not too thick. In case a slice is too thick, fine beam profile features may be blurred in case the beam direction is not yet known with good accuracy.
- a rough estimate of the beam direction may, e.g., be a perfectly vertical direction; a refinement of this estimate is described below.
- the beam direction may be controlled by adjusting the beam direction in x-, y- and z-direction.
- two beam profiles for two horizontal slices through the measurement volume may be defined. For example, a correlation between the two profiles of the two slices may be computed, the resulting correlation profile may be fitted with a Gaussian, and the best Gaussian fit parameters may be output. Thus, x- and y-coordinates of a peak as close to zero as possible may be obtained.
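The correlation of two slice profiles can be sketched with an FFT-based cross-correlation. This sketch returns only the integer-pixel peak offset, whereas the description above additionally fits a Gaussian to obtain sub-pixel accuracy; the wrap-around convention is an implementation assumption.

```python
import numpy as np

def profile_offset(profile_low, profile_high):
    """Cross-correlate the beam profiles of two horizontal slices and
    return the (dx, dy) displacement of the correlation peak; a nonzero
    offset indicates a tilted beam."""
    A = np.fft.fft2(profile_low)
    B = np.fft.fft2(profile_high)
    corr = np.real(np.fft.ifft2(A * np.conj(B)))
    idx = np.unravel_index(np.argmax(corr), corr.shape)
    shift = np.array(idx)
    # map wrapped indices to signed shifts
    for k in range(2):
        if shift[k] > corr.shape[k] // 2:
            shift[k] -= corr.shape[k]
    return tuple(shift)
```

The beam direction would then be adjusted until the recovered peak offset is as close to zero as possible.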
- the execution of the program instructions by the processor further causes the processor to control the analysis unit to determine the calibration constant c.
- the determining of the calibration constant c comprises determining a scattered light intensity l out , a scattering angle θ, and an incident light intensity l in for particles of known diameter d p , with the calibration constant c being
- Examples may have the beneficial effect, that the calibration constant c may be determined accurately.
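Since the formula for c is given only implicitly above, the following sketch assumes the simplified scattering model l out / l in = c · d p² · exp(−q 2 · θ) and averages c over calibration particles of known diameter; the quadratic diameter dependence and the single exponential constant are assumptions, not the patent's own formula.

```python
import math

def calibration_constant(l_out, l_in, theta, d_p, q2):
    """Estimate of the calibration constant c from one particle of known
    diameter d_p, under the assumed simplified scattering model
    l_out / l_in = c * d_p**2 * exp(-q2 * theta)."""
    return (l_out / l_in) / (d_p ** 2 * math.exp(-q2 * theta))

def mean_calibration(samples, q2):
    """Average c over many calibration particles; each sample is a tuple
    (l_out, l_in, theta, d_p)."""
    cs = [calibration_constant(lo, li, th, dp, q2)
          for lo, li, th, dp in samples]
    return sum(cs) / len(cs)
```

In practice the samples would come from a calibrated aerosol generator producing particles of known size, as in Fig. 13.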
- the invention in another aspect relates to a computer program product comprising a non-volatile computer-readable storage medium having computer-readable program instructions embodied therewith.
- the program instructions are executable by a processor of a control unit configured for controlling a measuring device for measuring particle sizes.
- the measuring device comprises a light source for generating a beam of light for illuminating a measurement volume, and one or more image sensors. Execution of the program instructions by the processor causes the processor to control the measuring device to illuminate the measurement volume with the beam of light and to acquire image data of a measurement volume using the one or more image sensors.
- execution of the program instructions by the processor causes the processor for one or more of the particles within the measurement volume to determine a scattered light intensity l out of light of the light source scattered by the respective particle to an individual image sensor of the one or more image sensors using image data from the image data acquired by the individual image sensor, determine a scattering angle θ of the light of the light source being scattered by the respective particle to the individual image sensor with the scattering angle θ being determined relative to a direction of illumination of the measurement volume by the light source, determine an incident light intensity l in of the light of the light source at the position of the respective particle within the measurement volume, and determine the size of the particle using a ratio l out / l in of the scattered light intensity l out to the incident light intensity l in and the scattering angle θ.
- Fig. 1 illustrates a scattering of light by a spherical particle
- Fig. 2 shows an exemplary image of a measurement volume comprising a plurality of particle images
- Fig. 3A shows a calculated scattering matrix element S 11 as a function of scattering particle diameter
- Fig. 3B shows a calculated scattering matrix element S 11 as a function of scattering angle
- Fig. 4 shows a model drawing of an exemplary measuring device for measuring particle sizes
- Fig. 5 shows a model drawing illustrating an exemplary off-axis observation of a measurement volume containing particles
- Fig. 6 shows a model drawing of an exemplary measuring device for measuring particle sizes
- Fig. 7 shows a diagram of boundaries of an exemplary measurement volume
- Fig. 8 shows a model drawing of a further exemplary measuring device for measuring particle sizes
- Fig. 9 shows a comparison of an observed light intensity, both without and with diffuser, with Lorentz-Mie scattering.
- Fig. 10 shows a comparison of an observed light intensity, both without and with diffuser, with Lorentz-Mie scattering as well as a fit of an exponential function for different particle sizes
- Fig. 11 shows an exemplary droplet size distribution inferred with the intensity-based approach
- Fig. 12 shows exemplary relative uncertainties for various ranges of particle sizes
- Fig. 13 shows histograms of a square-root-particle-intensity for particles of different sizes produced with a calibrated aerosol generator in order to determine a calibration constant
- Fig. 14 shows a schematic drawing of an exemplary analysis unit
- Fig. 15 shows a schematic drawing of an exemplary analysis unit.
- Embodiments of the device may, e.g., be used for enhancing the functionality of standard particle tracking using high-speed video cameras by carefully deducing the brightness of each of the tracked particles' images.
- Embodiments of the device may be used to determine the size of spherical particles in a large volume.
- the device relies on multiple approaches:
- the combination of these technologies may allow to measure sizes of droplets or other nearly spherical particles spread throughout a volume simultaneously.
- Fig. 1 illustrates light scattering on a spherical particle 102 of diameter d p in the context of embodiments of the device.
- This particle 102 may, e.g., be a droplet.
- light of an incident light intensity l in (x) is scattered by an angle θ away from the direction the light would travel if the scattering particle 102 were not there.
- the outgoing light appears to an observer with an intensity l out that is proportional to the incident light intensity and varies with the particle diameter d p and the scattering angle θ:
- the image intensity is proportional to the integral of the scattered light intensity l out over the corresponding camera aperture.
- Knowledge of the camera aperture diameter may allow to determine the value of l out averaged over the solid angle corresponding to the camera aperture, up to a multiplicative constant.
- the incident light intensity l in may be constant within the measurement volume. In this case, no statistics of the particle image intensities may be required for determining the incident light intensity l in , which is rather the same for all particles within the measurement volume.
- the complicated dependence of l out / l in on d p and θ may be simplified by usage of a diffuser, a sufficiently large aperture and an appropriate placement of the cameras relative to the incident light direction, so that it is sufficiently well approximated by an exponential expression in which q 1 and q 2 are known constants. These constants may, e.g., be determined using a fitting of an exponential function to a theoretical prediction of the Mie scattering.
- the two unknown proportionality constants may be combined into a single one, c, which may be determined experimentally by a calibration using particles 102 with known size.
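The determination of the constants q 1 and q 2 by fitting an exponential function to a Mie scattering prediction can be sketched as a linear least-squares fit in log space; the fit form intensity ≈ exp(q 1 − q 2 · θ) over the working angular range (e.g. 25° to 35°) is an assumed but common parameterisation.

```python
import numpy as np

def fit_exponential(theta, intensity):
    """Fit intensity ~ exp(q1 - q2*theta) by linear least squares on
    log(intensity); theta and intensity are arrays over the working
    angular range."""
    A = np.vstack([np.ones_like(theta), -theta]).T
    (q1, q2), *_ = np.linalg.lstsq(A, np.log(intensity), rcond=None)
    return q1, q2
```

In practice, the intensity samples would be a theoretical Mie scattering prediction averaged over aperture, spectrum and diffuser, as described above.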
- the amount of light that a particle scatters onto a camera sensor can be determined from the camera images. Usually this is done by summing the intensity of the pixels that make up the particle's image. However, if the particle seeding density is sufficiently high to make particle image overlap likely, any given pixel cannot be uniquely ascribed to a single particle image, and this approach breaks down. If particles are far from the camera's plane of focus, their images become larger, which greatly increases the probability that they overlap. This approach therefore is not expected to work for volumetric measurements, even for moderate particle number densities.
- the approach employed by embodiments of the device uses a physics-based mathematical model of the particle image that would result from a point source observed by a camera fitted with an ideal lens and a finite aperture.
- the model has various parameters, one of which is the total intensity of the particle image.
- the model is instantiated once for each particle image, and the models' parameters are optimized simultaneously for all particle images on the sensor.
- the incident light intensity within the measurement volume can be derived from the statistics of the image intensities. It can be assumed that the incident light intensity does not vary over time or along the direction of the illumination. A further assumption is that the scattered light intensity is roughly proportional to the incident light intensity via the relationship expressed by the equation above. Finally, it may be assumed that the individual particle size does not change significantly during its traverse of the measurement volume. Then, the model relative incident light intensity in the various parts of the measurement volume is iteratively adjusted until the observed variation of particle image intensity over each particle trajectory best matches the modelled variation of the local incident light intensity.
- the size uncertainty is a function of the particle size that derives from Mie scattering theory and takes into account the light source spectrum and diffusivity, aperture sizes, and camera spectral sensitivity.
- Fig. 2 shows exemplary image data 112 of a measurement volume acquired with an imaging sensor.
- the image data 112 comprises a plurality of particle images, i.e., image data of particles 102 present in the measurement volume.
- This image data of the particles 102 is given in form of greyscale level intensity distributions acquired by pixels of the image sensor.
- the image data of the particles 102 results from a scattered light intensity l out of light of a light source scattered by the respective particles 102 in the measurement volume to the respective image sensor.
- Figs. 3A and 3B show the behavior of the scattering matrix element S 11 as a function of the particle diameter d p (Fig. 3A) and the scattering angle θ (Fig. 3B), respectively.
- the values have been computed for a spherical particle numerically using the popular BHMIE code for unpolarized light.
- S 11 grows generally in proportion to the square of the particle diameter, but also oscillates strongly and has a fine structure.
- the intensity ratio between peaks and valleys is about 2.5, as can be estimated from the inset.
- In Fig. 3B it can be seen that even though in general S 11 is a very irregular function of θ, in the range from 25° to 35° it is slightly better behaved: it decreases more or less exponentially, and "only" oscillates strongly.
- the device consists of a light source, a measurement volume, a set of cameras, and a control unit.
- the light source may be a pulsed laser or a flashlamp, the light of which is manipulated into a thick, collimated, slightly diffuse beam that passes through the measurement volume.
- the cameras are aimed at the measurement volume and are arranged such that the scattering angles with respect to the beam vary by as much as the apertures' subtended angles. Such an arrangement increases the total effective aperture radius, which helps smooth out the dependence of l out / l in on d p and θ.
- the device uses software that allows accurate tracking of individual particles' positions in three dimensions and, provided that the particles are sufficiently spherical and smooth, also allows to deduce each particle size.
- the device is conceived to simultaneously measure the size and position of small liquid droplets.
- it can be used also with other kinds of particles as long as they scatter light in such a way that knowing the scattering angle and the intensity of light scattered in that angle allows to infer the particle size with sufficient accuracy. This precludes particles for which the scattered light intensity heavily depends on their orientation, such as strongly elongated ellipsoidal particles or particles with significant surface irregularities or reflectivity variations.
- Fig. 4 shows an overview of a prototype of a measuring device 100 for measuring particle sizes that was built for operation at a research station at the Schneefernerhaus, Mount Zugspitze, Germany.
- the setup shown in Fig. 4 is an open-frame design that minimizes flow resistance under open-air conditions and may be mounted on rails and be equipped with a linear motor to allow for wind-synchronous observations of cloud-borne water droplets in situ.
- An alternative design is shown in Fig. 8, where all components are mounted in an encased frame. While the setup of Fig. 4 may be more suitable for separate assembly and calibration by the user or OEM, the setup of Fig. 8 may simplify factory calibration.
- the exemplary measuring device 100 comprises a light source 114 mounted on an open frame configured to illuminate a measurement volume 104 below the light source 114 with a beam of light 116.
- the light source 114 may, e.g., be a pulsed laser or a flashlamp.
- the light source may, e.g., be equipped with a collimator 124 and/or a diffuser 126. Using a collimator 124, the light source may be enabled to generate a collimated beam of light.
- the diffuser 126 may be used for diffusing the collimated beam of light.
- a plurality of two or more image sensors 128, in the case of Fig. 4 three image sensors 128 in the form of three cameras, may be arranged at different positions relative to the measurement volume 104.
- the image sensors 128 are arranged within a box 152. Furthermore, the measuring device comprises a control unit 11, e.g., arranged within the box 152.
- the control unit 11 comprises a processor and a memory storing machine-executable program instructions. Execution of the program instructions by the processor causes the processor to control the measuring device 100 to illuminate the measurement volume 104 with the beam of light 116 and to acquire stereoscopic image data of the measurement volume 104 using the image sensors 128.
- Fig. 4 illustrates a first exemplary implementation of the measuring device, which may in the following alternatively be referred to as "the box" for simplicity. It consists of a vibration-damped aluminum box housing the high-speed cameras used for Lagrangian particle tracking.
- the box has been designed in such a way to minimize its total weight and cross-sectional area exposed to the particle-bearing fluid while being extremely rigid and able to fit three cameras (an exemplary model being Vision Research v2511) with their corresponding optics. Further streamlining of the box would have added a lot of complexity in terms of fabrication, usability, and serviceability, but if the current shape of the box proves to be problematic, lightweight streamlining elements made out of, e.g., polystyrene foam can be added.
- the optical system of the measuring device shown in Fig. 4 has been designed using the optional feature of off-axis observation of the measurement volume.
- the box's main components are 24 aluminum parts providing a rigid skeleton, three window sub-assemblies through which the cameras observe the measurement volume off-axis, and six transparent polycarbonate plates that provide visual and manual access to the box in the case of malfunction.
- Its external dimensions are 930 x 720 x 360 mm³, and internally, the box is subdivided into three upper sections containing one camera each and three lower sections, which contain the camera power supplies, Ethernet and trigger cables, cooling hoses, a control unit (e.g., based on a PC), and temperature, humidity, and acceleration sensors.
- the maximum weight of the camera box is, in principle, limited by the maximum payload that can be moved by the motor of the seesaw unit used for mean wind compensation (see Fig. 6).
- more restrictive practical limits were imposed by the need to carry the box up to the roof manually through a narrow staircase, which allowed only two people to carry the box at a time. For this reason, the weight of the box without cameras or the beam expander (see below) was kept below 60 kg. Even so, should the movable table crash into the emergency shock absorbers, very large forces would be generated both at the braces keeping the box above the table and at the pillars fixing the seesaw to the roof.
- Fig. 5 shows an exemplary setup for an off-optical-axis observation implemented, e.g., by the measuring device 100 of Fig. 4.
- the off-optical-axis observation of the measurement volume may be implemented via a mirror 134 and through a tube 138.
- An orientation of the tube 138 defines a view axis under which the respective image sensor observes the measurement volume via the mirror 134 and the tube 138.
- the image sensor may be equipped with an objective 108 mounted on objective holder 110.
- An inclination of the mirror 134 may be adjustable using a mirror adjustment device 138. By adjusting the inclination of the mirror 134, the view axis may be adjusted.
- the tube 138 comprises a first end 140 and a second end 142.
- the first end 140 is a distal end relative to the mirror 134.
- the second end 142 is a proximal end relative to the mirror 134.
- the tube further may comprise a window 144 arranged at the second end 142 of the tube 138.
- the tube 138 may further comprise a ventilation system with an air inlet 146 and an air outlet 148 configured for injecting a stream of air via the air inlet 146 into the tube 138 and onto the window 144 and for sucking out the stream of air via the air outlet in order to remove, with the stream of air, liquid that may have collected on the window 144.
- the tube 138 may comprise a movable lid 150 arranged at the first end 140 of the tube 138 and configured for opening and closing the tube 138 at the first end 140.
- the first end 140 of the tube 138 may be cut vertically relative to a horizontal plane, such that, e.g., rain falling vertically is prevented from entering the tube 138.
- the second end 142 of the tube 138 may, e.g., be cut vertically relative to a longitudinal axis of the tube 138.
- Each window is mounted at the lower end of a short tube (see Fig. 5) with an inner diameter of 24 mm, just large enough to not interfere with any of the light rays between the camera apertures and the measurement volume.
- the tubes are oriented along the line of sight of the cameras, which is at 30 degrees with respect to the illumination light direction, which coincides with the vertical in Figs. 3 and 4.
- the upper ends of the tubes are cut vertically to decrease the chance of liquid entering the tube and are capped by movable lids.
- the lids may be triggered so as to open only for the duration of each video capture and close during the much longer duration of the data transfer. Moreover, a powerful stream of air may be injected at the higher end of each window to remove any droplets that might still make their way through the tube. On the lower end of each window, the air and any liquid that collects there may be sucked out and removed from the camera box. To aid the water removal process, the windows may be coated with a hydrophobic coating.
- the box may be supplied with a steady small stream of dry air to prevent water condensation on the underside of the windows. Moreover, the box may be temperature controlled. If the temperatures within the box stay within the operation range of the cameras, which may be typically in the range from 10 to 50 °C, no cooling or heating may be required for operation.
- cooling channels may run through the camera base plates, which can also contribute to bringing the inside temperature down.
- the temperature fluctuations may affect the camera calibration, but the changes are usually small enough to be removed by self-calibration without any adverse effects on the data quality.
- additional instruments may be mounted immediately next to or downstream of the measurement volume.
- Small instruments such as fast temperature and humidity probes, can be mounted in between the vertical supports shown in Fig. 4.
- Larger instruments can be mounted on any structure supporting the box, e.g., the seesaw's table. It must be noted that if additional instruments are mounted directly downstream of the measurement volume, they must be removed and remounted every time the fluid stream direction reverses; otherwise, they may disturb the flow into the measurement volume.
- the box may be suspended on four extension springs with spring constant 6.07 N/mm and unstressed length 149 mm (model example Z-1951 made by Gutekunst Federn).
- the height of the spring attachments to the box can be adjusted to be at the height of the box's center of mass, which prevents the box from tilting during acceleration to, or deceleration from, the fluid-stream-synchronous frame of reference.
- the box may be further constrained by six pairs of rubber buffers. The buffers' height can be adjusted so that they are level with the box's center of mass.
- a partially active approach may be useful where the box is clamped during the acceleration phase and then released to be held only by the springs during the constant velocity phase.
- the experiment may be controlled by a computer or computer cluster.
- a cluster is used that consists of a main node, six compute nodes, and two storage nodes, each of which is connected to 10 Gbit Ethernet switches and contains 35 TB of storage.
- the cameras are also connected to the 10 Gbit network, through which they are controlled and read out.
- the main node runs a Python code that controls and triggers the cameras, controls the box's window shutters, monitors the box's environmental parameters, and controls the light source.
- the compute nodes download videos from the cameras to their internal hard drives. This typically takes 40 s, which is the limiting factor for the repetition rate of the measurement. The videos are then copied to both storage nodes. After a measurement campaign, the disks from the second storage node are taken out and transported to the Max Planck Institute for Dynamics and Self-Organization in Göttingen, where data analysis takes place.
- the ideal particles for particle tracking are so small that the multiple glare points of a single particle overlap on the camera sensors.
- the distance between the glare points is proportional to the product of magnification and particle size, and since it may be impossible to control the size of the particles to be observed, it may be feasible to use an optical system with a small magnification.
- a large magnification is required. In the example of the Schneefernerhaus experiment, a magnification of 28 μm per pixel was chosen, which means that if diffraction effects are ignored, a typical cloud droplet is of the order of 1 pixel on the sensor.
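The relation between particle size and geometric image size at the quoted magnification can be sketched as follows; diffraction, which in practice dominates the image of a cloud droplet at this scale, is deliberately ignored here, as in the estimate above:

```python
PIXEL_PITCH_UM = 28.0  # magnification of the example setup: 28 micrometers per pixel

def image_extent_px(particle_diameter_um: float) -> float:
    """Geometric image size of a particle in pixels, ignoring diffraction."""
    return particle_diameter_um / PIXEL_PITCH_UM

# A cloud droplet of roughly the pixel pitch covers on the order of one pixel.
print(image_extent_px(28.0))  # 1.0
```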
- each camera may be equipped with two 2× teleconverters and a 200 mm objective.
- each objective is supported on one end by a finely adjustable clamp ring and on the other end by three plastic adjustment screws. The cameras are also firmly bolted to the base plates underneath them.
- each upper section also contains an 85 x 60 mm² mirror housed within a custom-built mirror holder (see Fig. 5).
- the holder can be rotated along a vertical axis, and the mirror tilt can be finely adjusted using a fine-threaded adjustment screw. Once the desired rotation and tilt of the mirror are reached, this orientation can be kept in place by tightening a set of fixing bolts.
- the mirror adjustment mechanism can likewise be implemented for manual or automated adjustment of the screws.
- the resulting measurement volume is defined as the set of points visible by all cameras in the particle-tracking setup.
- the measurement volume is diamond-shaped and measures about 16.6 cm³; see Fig. 7. If one allows particles to be triangulated by using fewer than three cameras, the usable volume may be much larger; then, however, in a large portion of it the particles may be so much out of focus that they would be hard to detect and their positions would be obtained with large uncertainty.
- the exemplary Schneefernerhaus experiment uses a Trumpf TruMicro 7240 laser with a wavelength of 515 nm, a maximum pulse energy of 7.5 mJ, and a pulse length of 30 ns. Although at higher repetition rates the laser can achieve a light power output of 300 W, at lower repetition rates it is limited by the maximum pulse energy. For an exemplary sampling rate of 10 kHz, the light power output is effectively 75 W. It should be noted that other light sources, including non-coherent light sources, may be used depending on the deployment objective of the measurement device.
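The effective light power in the pulse-energy-limited regime is simply pulse energy times repetition rate; a minimal sketch (the power cap used below is an assumed maximum of the laser, not a measured value):

```python
def average_power(pulse_energy_j: float, rep_rate_hz: float, max_power_w: float = 300.0) -> float:
    """Average optical power of a pulsed laser: pulse energy times repetition
    rate, limited by the laser's maximum power output."""
    return min(pulse_energy_j * rep_rate_hz, max_power_w)

# 7.5 mJ pulses at a 10 kHz sampling rate give an effective light power of 75 W.
print(round(average_power(7.5e-3, 10e3), 3))  # 75.0
```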
- the beam coming out of the laser fiber first passes through a diverging lens in order to reach the desired beam diameter within the small amount of space available in the beam expander. It then passes through a converging lens, which makes it nearly parallel with a diameter of 39 mm. This diameter fits the measurement volume well, i.e., only little light is wasted illuminating particles outside the measurement volume.
- the lenses were chosen such that their spherical aberrations expand the beam's center, while compressing its edges, which may help to illuminate the measurement volume more uniformly.
- the beam finally passes through an optical diffuser, the function of which is to smooth the dependence of the intensity of scattered light on the scattering angle and thus simplify the process by which the particle size is deduced. As the beam leaves the beam expander, it is clipped from both sides, so it becomes narrower in the x-direction. By doing so, it fits the shape of the measurement volume better; in particular, the volume that is illuminated but not seen by all cameras is reduced.
- Intensity data describing the dependence of the illumination beam intensity on the location within its cross section inside of the measurement volume may be obtained from the tracking data by comparing instantaneous particle intensity with its mean value over its whole trajectory.
- data are available only for positions within view of at least two cameras, where triangulation is possible.
- beam profile is nearly flat, with a slight asymmetry most likely due to a slight offset of the laser head from the optical axis of the other optics and a smooth decrease in intensity at the edges due to the diffuser.
- the number of cameras onboard the measuring device does not necessarily have to be three; rather, a different multiplicity of cameras such as two, four, or yet a larger number of cameras may be deployed as well.
- One parameter that sets limits on the maximum tractable seeding density and the uncertainty of the triangulated particles' three-dimensional position is the mean variance of the positioning error of particle images on the sensor. For a given image, this variance scales as σ_N²/I², where σ_N² is the total variance of camera thermal and shot noise and I is the total intensity of the given particle image on the sensor.
- the dominant source of noise is usually the shot noise, so that σ_N² ∝ I and the positioning error variance scales as 1/I.
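The inverse scaling of positioning error variance with image intensity in the shot-noise-dominated regime can be illustrated with a small Monte-Carlo sketch of a one-dimensional centroid estimate. The Gaussian spot profile, the pixel strip, and the Gaussian approximation of shot noise are modelling assumptions of this sketch, not parameters of the described setup:

```python
import math
import random

random.seed(0)

def centroid_error_std(total_intensity: float, width_px: float = 2.0, n_trials: int = 4000) -> float:
    """Std. dev. of the centroid estimate of a 1D Gaussian spot under shot-like
    noise (per-pixel noise variance equal to the per-pixel intensity)."""
    pixels = range(-6, 7)  # short 1D sensor strip, particle image centred at 0
    profile = [math.exp(-0.5 * (x / width_px) ** 2) for x in pixels]
    norm = sum(profile)
    intensities = [total_intensity * p / norm for p in profile]

    centroids = []
    for _ in range(n_trials):
        noisy = [i + random.gauss(0.0, math.sqrt(i)) for i in intensities]
        total = sum(noisy)
        centroids.append(sum(x * v for x, v in zip(pixels, noisy)) / total)
    mean = sum(centroids) / n_trials
    return math.sqrt(sum((c - mean) ** 2 for c in centroids) / n_trials)

# Quadrupling the image intensity roughly halves the positioning error,
# i.e., the error variance is roughly proportional to 1/I.
ratio = centroid_error_std(400.0) / centroid_error_std(1600.0)
print(round(ratio, 2))
```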
- the image intensity is sensitively dependent on the scattering angle, which is the angle between the illumination beam direction and the camera viewing axis (see Fig. 3B).
- the viewing axis of any camera toward the measurement volume may be different from the actual optical axis of the camera sensor (e.g., defined by the optical axis of the objective and / or a normal vector of the camera sensor plane), which is true especially for any off-axis observations, such as that shown in Fig. 5 where the light emerging from the measurement volume, and traveling along said camera viewing axis defined by the center of the feed-through tube, is reflected toward the camera objective by a mirror.
- the components of the three-dimensional particle position error variance follow from the on-sensor positioning error variance and the viewing geometry: the components perpendicular to the illumination beam scale approximately as 1/cos²θ, while the component along the beam scales as 1/sin²θ, where θ is the angle between the camera viewing axis and the beam. Here, the z direction points along the illumination light beam.
- a small θ decreases the variance of the position error components perpendicular to the laser beam; in the direction along the beam, however, a small θ leads to a catastrophic amplification of the error.
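The angular trade-off can be sketched numerically. The 1/cos(θ) and 1/sin(θ) error scalings used below are an assumption of this sketch (common for shallow-angle stereo triangulation); the exact prefactors of the described setup are not reproduced:

```python
import math

def position_error_scaling(theta_deg: float) -> tuple:
    """Relative growth of the triangulation error perpendicular to and along
    the illumination beam for cameras viewing at angle theta to the beam.
    Assumes sigma_perp ~ 1/cos(theta) and sigma_along ~ 1/sin(theta)."""
    t = math.radians(theta_deg)
    return 1.0 / math.cos(t), 1.0 / math.sin(t)

for theta in (5.0, 30.0, 54.7):
    perp, along = position_error_scaling(theta)
    print(f"theta={theta:5.1f} deg  perp={perp:5.2f}  along={along:6.2f}")
# At very small theta the along-beam error blows up (1/sin of 5 deg is about 11.5),
# while at 54.7 deg (perpendicular camera axes) both components stay moderate.
```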
- the considerations of position error variance apply only to the particles with image intensity low enough to not saturate the sensor pixels.
- the smallest particle that can possibly lead to saturation (grayscale level near 4096) when perfectly in focus has a diameter of about 23 μm. This number was obtained using particle sizing as described herein, combined with a model for a particle's point spread function. For larger particles, the optimum angle would be closer to the one obtained by geometric considerations only, which is 54.7° (camera view axes perpendicular to each other).
- the particle tracking algorithm was inspired by the Shake-the-Box (STB) algorithm developed by Schanz, Gesemann, and Schroder and heavily adapted to the specific needs and challenges of the present measuring device. A comprehensive description of the algorithm is beyond the scope of this work and is presented in a different publication. Here, its main features and differences to the standard version of STB are only briefly outlined.
- the total area of all the tracked images may be several times higher than the total sensor area. Since the images heavily overlap and do not necessarily achieve their peak intensity at their center like well-focused images do, finding and fitting the images is a difficult and computationally expensive task. Without having a good model for the point spread function, subtracting the intensity profiles of already tracked particles, and performing several iterations of fitting the image parameters, it seems impossible to achieve the high yield necessary to study particle or droplet clustering.
- the suggested tracking algorithm may be implemented by the analysis unit described herein and comprises the following items that may be performed in the stated order at each time instance:
- Each video frame is pre-processed in order to most closely correspond to the light intensity that an ideal sensor would read out.
- This item consists of subtracting background intensity, correcting hardware artifacts such as image lag (where intensity values on the current frame are affected by their values on the previous frame), and correcting for smudges or scratches on the sensor and optical elements.
- the optimized particle images obtained in the previous item are subtracted, and each sensor is searched for new images.
- This item proceeds in several rounds, where in each round, a search for images within a certain narrow interval of defocus is conducted, starting with the well-focused ones.
- the frame is first filtered using a Gaussian filter of standard deviation matching best the images within this range of defocus. Local maxima of the filtered intensity are used as the initial guesses for new image locations.
- the image properties are optimized in the same manner as in the previous item.
- triangulation takes into account not only the image locations but also their level of defocus and their intensity. This makes it possible to reliably triangulate using just two cameras and to compensate for the lower position accuracy of the defocused images.
- Each triangulated point is assigned a likelihood of being "real", i.e., being triangulated from images that indeed correspond to the same particle.
- the existing 2D/3D trajectories are extended to the current time instance by the addition of optimized images / triangulated points that lie close to their extrapolated position.
- Each of the trajectories consists of a single tail and possibly multiple heads.
- the tail is made out of objects from the distant past for which confidence is given that they belong to this trajectory.
- Each head is made out of recent objects, for which the certainty level is much lower, as they could be ghosts or belong to another trajectory.
- trajectory heads are trimmed by calculating a likelihood of each trajectory head being real, based on its smoothness and on the product of the likelihoods of its objects being real, as calculated above. Heads with low likelihood are deleted. Further trimming is achieved by ensuring that older objects are not used in more than one trajectory. Tracks with no remaining heads are deleted, but those of sufficient length are saved first.
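The trajectory-extension step of the items above (extrapolate each trajectory, then link nearby new objects to it) can be sketched as a toy. The gating radius, the greedy one-to-one matching, and the 2D coordinates are simplifications for illustration; the actual algorithm keeps multiple candidate heads per trajectory instead of committing to a single match:

```python
import math

def extrapolate(track):
    """Predict the next position of a track by linear extrapolation of its last two points."""
    (x0, y0), (x1, y1) = track[-2], track[-1]
    return (2 * x1 - x0, 2 * y1 - y0)

def extend_tracks(tracks, detections, gate=1.0):
    """Greedily extend each track with the nearest unused detection that lies
    within the search gate around the extrapolated position."""
    unused = list(detections)
    for track in tracks:
        px, py = extrapolate(track)
        best, best_d = None, gate
        for (x, y) in unused:
            d = math.hypot(x - px, y - py)
            if d < best_d:
                best, best_d = (x, y), d
        if best is not None:
            track.append(best)
            unused.remove(best)
    return tracks, unused

tracks = [[(0.0, 0.0), (1.0, 0.0)], [(0.0, 2.0), (0.0, 3.0)]]
detections = [(2.1, 0.0), (0.0, 4.2), (5.0, 5.0)]
tracks, leftovers = extend_tracks(tracks, detections)
# Each track gains one point; (5.0, 5.0) remains unmatched and could seed a new track.
print(len(tracks[0]), leftovers)
```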
- the particle tracking algorithm described above may address some or all of the challenges posed by a harsh work environment such as the weather exposure at the exemplary Schneefernerhaus experiment referred to above:
- the particles may stay within the measurement volume for only a few Kolmogorov times or less, which poses an additional challenge when it comes to choosing between alternative triangulations or temporal links. It may be important to make the right choice from the beginning as there may not be enough time left for initiating a new correct track before the particle leaves the measurement volume again. Using the strategy of having multiple trajectory heads, this risk may be minimized. As an additional benefit, this way it may be possible to reduce the sampling rate to as low as 5 kHz without adverse effects on the temporal-linkage reliability.
- the used point spread function model is based on the assumption that the particles can be approximated as point sources of light. This assumption works well for water droplets and ice particles up to a diameter of about 30 μm for the setup shown in Figs. 3 and 4 (in general, this limit depends on the wavelength of light used, aperture size, object distance, and scattering angle). For larger particles, their images become increasingly different from the point-source model and more likely to be interpreted as a close cluster of individual particle images rather than as a single entity. It may be possible to detect such occurrences and join these clusters into single images again, but the detection of another particle image nearby becomes increasingly unlikely, with potential impacts on the clustering analysis.
- Droplets and ice crystals larger than about 70 μm may saturate the pixels (with the exemplary aperture size given above) to such a degree that their image subtraction becomes impossible. While it may still be possible to track their movement, and presumably also their orientation in case their shape significantly differs from spherical, the position accuracy may be diminished compared to that of the smaller particles. However, if tracking of large particles were the aim, one could change the magnification, aperture size and beam width accordingly to achieve better results.
- the camera model may be self-calibrated at each frame to account for apparent motion, for which five possible causes may be assumed in the example of the Schneefernerhaus experiment: (1) thermal gradients due to thermal convection within and above the camera box; (2) thermal expansion of the camera box itself; (3) vibrations caused by the camera fans; (4) vibrations caused by the linear motor; and (5) permanent displacement of the optics due to mechanical shocks.
- the first three effects may be present regularly and are discussed here. The last two only may occur if the camera box is moving to compensate for the mean wind. Best results in terms of compromise between robustness and precision may be achieved by calibrating five parameters of the model per camera at each frame, corresponding to a small shift of the apparent position of the world coordinate origin (two parameters) and a small change in the view angle (three parameters). Often, the shift of the apparent position of the world coordinate origin is the most sensitive to changes in calibration. In the following, this will be used as an indicator of the severity of the previously mentioned effects.
- the largest component of the camera shifts may be caused by changes in the camera box's temperature during the course of the day. Changes in temperature cause the box to slightly expand and/or contract, which, in turn, slightly displaces all the optics and hence influences the calibration. Observations of the dependence of the world coordinate origin's apparent sensor positions on camera box temperature over the span of two months at the exemplary Schneefernerhaus experiment show that even the largest apparent shifts of about 7 pixels (about 200 μm) were well within the limits of self-calibration and small relative to the size of the sensor (1280 x 800 pixels). The second largest contributor to changes in the calibration may be thermal convection inside and above the camera box.
- a further contributor to changes in the calibration are the vibrations caused by the cooling fans built into the cameras in the exemplary Schneefernerhaus experiment. These result in very small high-frequency oscillations, with shifts on the order of only 0.02 pixel, but due to their high frequency of 340 Hz, they may create an uncertainty in the acceleration of the world coordinate frame of up to 2 m s⁻². The contribution from the thermal convection is of similar magnitude.
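The quoted acceleration uncertainty follows from treating the fan-induced shift as a sinusoidal oscillation of the apparent world frame; a quick check, using the 28 μm-per-pixel scale of the example setup as the conversion from pixels to meters:

```python
import math

def peak_acceleration(amplitude_m: float, frequency_hz: float) -> float:
    """Peak acceleration of a sinusoidal oscillation x(t) = A*sin(2*pi*f*t),
    i.e. A*(2*pi*f)^2."""
    return (2.0 * math.pi * frequency_hz) ** 2 * amplitude_m

# A shift of 0.02 pixel at 28 micrometers per pixel, oscillating at 340 Hz:
amplitude = 0.02 * 28e-6  # meters
print(round(peak_acceleration(amplitude, 340.0), 1))  # about 2.6 m/s^2
```

This reproduces the order of magnitude of the "up to 2 m s⁻²" figure stated above.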
- the tolerable amount of change due to self-calibration depends on the quantity of interest. If one is interested in quantities that do not depend on an inertial frame of reference, such as the radial distribution function, relative particle velocities, or velocity structure functions, then successful self-calibration may be sufficient. However, if one is interested, for example, in particle accelerations, then successful self-calibration may be no longer sufficient, as it is insensitive to solid-body translations and rotations of the world coordinate system. In this case, additional constraints may be imposed to uniquely determine the self-calibration parameters, such as zero mean acceleration over all particles in view, which might affect the overall statistics.
- Particle sizes may be inferred by relating them to the observed brightness of each particle: d_i = c·√(I_i), with d_i being the diameter of particle i and I_i being its observed brightness.
- the particle size uncertainty may be computed from the brightness standard deviation using standard uncertainty propagation, which, in turn, is computed as the standard deviation of the particle's brightness over all frames in its track. Typical relative particle size uncertainties are less than 10%.
- the calibration constant is dimensional with the unit μm / (grayscale value)^(1/2) and usually depends on the power of the light source, the scattering angle, the lens used, the aperture, and the camera type and settings.
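A minimal sketch of the square-root sizing relation and its standard uncertainty propagation; the numbers used (c = 1 μm per square root of grayscale value, brightness 900 ± 90) are hypothetical, chosen only to illustrate the arithmetic:

```python
import math

def droplet_diameter_um(intensity: float, c_um_per_sqrt_gray: float) -> float:
    """Particle diameter from observed brightness: d = c * sqrt(I)."""
    return c_um_per_sqrt_gray * math.sqrt(intensity)

def diameter_uncertainty_um(intensity: float, intensity_std: float, c: float) -> float:
    """Standard uncertainty propagation for d = c*sqrt(I):
    sigma_d = c * sigma_I / (2 * sqrt(I))."""
    return c * intensity_std / (2.0 * math.sqrt(intensity))

d = droplet_diameter_um(900.0, 1.0)                      # 30.0 um
sigma_d = diameter_uncertainty_um(900.0, 90.0, 1.0)      # 1.5 um
# The relative size uncertainty is half the relative brightness uncertainty,
# here 0.05, comfortably below the <10% typical value quoted above.
print(d, round(sigma_d / d, 2))
```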
- the particle sizing approach may be verified in situ by comparing the size distributions thus obtained with those measured using a commercial interferometer (Artium PDI-FPDR).
- An exemplary test run showed that the particle sizing approach disclosed herein performs well but shows some oscillations below 35 μm. These oscillations may be caused by the scattering pattern, which is well-described by Mie scattering theory and is known to have strong oscillations as a function of both particle size and scattering angle.
- Fig. 8 shows a schematic drawing of an alternative setup of a measuring device 100 for measuring particle sizes where the measurement volume 104 is surrounded by walls of a chamber 118, i.e., a housing, with a particle inlet opening 120 and a larger particle outlet opening 122 at the opposite side.
- the exemplary measuring device 100 of Fig. 8 comprises a light source 114 configured to illuminate the measurement volume 104 within the chamber 118 with a beam of light 116.
- the light source 114 may, e.g., be a pulsed laser or a flash light.
- the light source may, e.g., be equipped with a collimator 124 and/or a diffuser 126.
- the light source may be enabled to for generate a collimated beam of light.
- the diffuser 126 may be used for diffusing the collimated beam of light.
- a plurality of two or more image sensors 138 (in the case of Fig. 3, three image sensors 138 in the form of three cameras) may be arranged at different positions relative to the measurement volume 104.
- the image sensors 138 are arranged within a box 152.
- the measuring device comprises a control unit 11, e.g., arranged within the box 152.
- the control unit 11 comprises a processor and a memory storing machine-executable program instructions. Execution of the program instructions by the processor causes the processor to control the measuring device 100 to illuminate the measurement volume 104 with the beam of light 116 and to acquire stereoscopic image data of the measurement volume 104 using the image sensors 138.
- An illumination beam from a light source as described herein illuminates the measurement volume that is located between the inlet and outlet openings.
- the particles in the measurement volume are observed by three cameras at a flat angle of up to 40 degrees, and preferably within a range from 25 to 35 degrees, with respect to the direction of the essentially parallel illumination light beam. Alternatively, two or more than three cameras may be used as well, and likewise, the cameras may be mounted in an on-axis setup where the optical axis of the sensor and / or the objective of each camera crosses the measurement volume, or alternatively in an off-axis setup similar to that shown in Fig. 5.
- the measuring device shown also comprises a control panel allowing a user to interact with the control unit and / or the analysis unit. Both units may be installed in the left side of the housing or, alternatively, one or both of them may be located elsewhere and communicatively coupled via an appropriate communications interface as disclosed herein.
- the setup shown in Fig. 8 may be especially suitable as an out-of-the-box device that is pre-calibrated by a manufacturer. Apart from that, the design principles and considerations given above for the setup shown in Figs. 4 and 5 can be applied straightforwardly to the alternative setup of Fig. 8.
- the measuring device and its control unit and / or analysis unit as disclosed herein may make it possible to perform in situ particle tracking on droplets of a liquid or other nearly spherical particles.
- the whole of the particle tracking setup may be built into a stiff, weather-proof box called the camera box.
- the volume probed by the exemplary particle tracking setup is 16.6 cm³; however, due to its vertical extension, many particles are out of focus and therefore difficult to locate accurately, if at all.
- By limiting the vertical size of the measurement volume, its shape can be approximated by a cuboid of 40 x 20 x 12 mm³; particle positions may then be measured with an uncertainty of 5 μm, and their accelerations may be measured with an uncertainty of 0.1 m s⁻².
- the smallest particles that may be accurately tracked within this volume are about 5 μm in diameter.
- the experimental setup may be typically operated at 5 or 10 kHz; however, frame rates of up to 25 kHz may be possible without losing accuracy.
- Fig. 6 shows the measuring device 100 of Fig. 4 mounted on a pair of rails 130 and equipped with a linear motor 128 enabling the measuring device 100 to move along the rails 130.
- This may allow for wind-synchronous observations of cloud-borne water droplets in situ by the measuring device 100.
- the movement of the measuring device 100 along the rails 130 is limited by shock absorbers 160 arranged on both ends of the rails 130.
- the rails 130 are arranged on a table 162 pivotable around a pivot 156 implementing a seesaw 132.
- the table 162 is pivotable around an axis extending perpendicular to the rails 130 and lying in a common plane with the rails 130.
- the table 162 is supported at the pivot 156 by a support structure 158.
- the table 162 is supported by telescopic cylinders 154, which may be configured to tilt the table 162.
- the camera box is mounted on a table that can be moved over a set of rails by using an electromotor, which allows the mean wind to be compensated.
- the wind at the Schneefernerhaus is predominantly east-west, so the rails are aligned in this direction.
- the rails are bolted onto a structure dubbed the "seesaw" which can be tipped up to 15° east or west.
- the system has been designed for fluid velocities of up to 7.5 m s⁻¹; in practice, the maximum operating velocity is set by the travel of the table (5.3 m), the maximum acceleration (10 m s⁻²), and the desired duration of the constant velocity phase.
- the cameras can record for up to 1.6 s; if the constant velocity phase is set to be at least this long, the velocity is limited to at most 2.8 m s⁻¹.
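The 2.8 m s⁻¹ figure follows from fitting symmetric acceleration and deceleration ramps plus the constant-velocity recording phase into the available travel; a quick check of the arithmetic:

```python
import math

def max_velocity(travel_m: float, accel_m_s2: float, const_phase_s: float) -> float:
    """Largest constant velocity v that fits acceleration, constant-velocity,
    and deceleration phases into the available travel, assuming symmetric ramps:
        v^2 / a + v * t_const <= travel
    Solving the quadratic for v gives the limit."""
    a, t, s = accel_m_s2, const_phase_s, travel_m
    return (-t + math.sqrt(t * t + 4.0 * s / a)) * a / 2.0

# 5.3 m of travel, 10 m/s^2 acceleration, and a 1.6 s recording window:
print(round(max_velocity(5.3, 10.0, 1.6), 1))  # 2.8
```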
- the electromotor introduces extra vibrations into the camera box, which are dampened passively by decoupling the box from the table with springs and rubber buffers. This way, the rms accelerations of the camera box may be reduced to 0.1 m s⁻². It may also be possible to implement an actively damped approach, where the camera box is stiffly coupled to the table during the acceleration and deceleration phases but almost completely decoupled otherwise. By using this mean wind compensation, one may expect to see fewer particles, but their average residence time in the measurement volume should increase.
- the number of tracks longer than 10 ms may increase; the number of tracks of 10 ms may double. This may be beneficial for particle tracking in general, since longer particle tracks allow better filtering of the tracks, which helps to increase accuracy, in particular of higher-order derivatives such as the particle acceleration.
- particle tracking is done with an in-house code, with which it may be possible to accurately track particles that are badly illuminated and/or strongly out of focus. It is inspired by the Shake-The-Box algorithm, but instead of trying to use the particles to resolve the underlying flow, it puts emphasis on finding and tracking the largest possible number of particles in view and accurately determining the amount of light associated with each particle image. It may be possible to track up to 10⁴ particles in simultaneous view of all cameras and up to 8 x 10⁴ particles in total (due to the shallow view angles). By performing a self-calibration at every frame, it may be possible to deal with thermal convection and expansion and vibrations caused by the camera box movement.
- By extracting not only particle positions as a function of time but also their brightness as registered by each individual camera, it may further be possible to deduce each individual particle size. For particles larger than 35 μm in diameter, one can estimate the size of each particle accurately by a simple quadratic fit to the dependence of scattered light on particle size. For smaller particles, the effects of Mie scattering should be taken into account for better accuracy.
- embodiments disclosed herein demonstrate the ability to measure the radial distribution function (RDF) and the longitudinal relative velocity distribution within the limits set by the effect of particle shadowing.
- Other quantities that may be measurable include, e.g., the particle acceleration statistics and various measures for particle clustering, such as the fractal dimension and the size distribution of Voronoi cells.
- Fig. 9 shows an exemplary comparison of an observed light intensity Io, both without and with diffuser, with Lorentz-Mie scattering.
- the observed intensity is calculated by integrating Lorentz-Mie scattering over the diffuser and over the camera's aperture. The curves are offset, so they do not overlap.
- the scattering angle in this case is 30°.
- the aperture of the objective used subtends an angle of 1°, and the beam expander is equipped with a 1° FWHM diffuser.
- the approximated equation is evaluated for many droplet sizes, both without and with the diffuser, and is compared with Lorentz-Mie scattering. The result is shown in Fig. 9.
- the aperture by itself reduces the oscillations to some extent: they practically disappear at 55 μm, but come back for larger sizes, although there they are not as strong. Adding the diffuser reduces the oscillations even further: they disappear at 35 μm and do not return. It is likely that the diffuser is more effective at reducing oscillations than the aperture is, because the diffuser has smooth edges due to its Gaussian character, whereas the aperture has sharp edges. So, it is possible to compute the size of particles from their observed intensity as d_i = c·√(I_i).
- Fig. 10 shows another exemplary comparison of an observed light intensity Io, both without and with diffuser, with Lorentz-Mie scattering as well as a fit of an exponential function for different particle sizes.
- In Fig. 10, the same quantities as in Fig. 9 are plotted, but now as a function of the (central) scattering angle θ.
- the aperture and diffuser together are effective at averaging out the oscillation for particles, i.e., droplets.
- the oscillation for a particle of 30 μm is very effectively averaged out.
- An analogous exponential fit may be used for determining the constants
- Fig. 11 shows exemplary droplet size distributions inferred with the intensity-based approach described herein for different time intervals ΔT1, ΔT2, ΔT3, and ΔT4. These time intervals are, e.g., half-hour-long periods.
- Fig. 12 shows exemplary relative uncertainties for various ranges of particle sizes. Cumulative distributions of particle size and relative uncertainty σ_p/d_p are shown. These distributions show P(σ_p/d_p > X), i.e., the probability that the relative uncertainty is larger than a certain value, and therefore decrease instead of increase. In general, the relative uncertainty may decrease with increasing drop size. The median relative uncertainty is approximately 0.05. For particles larger than 10 μm in diameter, the relative uncertainty may tend to exceed 0.1 in no more than 1% of cases.
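The decreasing curves of Fig. 12 are complementary cumulative distributions; a minimal sketch with hypothetical relative-uncertainty samples (the values below are invented for illustration, not data from the experiment):

```python
def exceedance(values, x):
    """P(v > x): fraction of samples exceeding x, i.e., the complementary
    cumulative distribution evaluated at x. Decreases as x grows."""
    return sum(v > x for v in values) / len(values)

# Hypothetical sigma_p / d_p samples:
rel_unc = [0.02, 0.03, 0.05, 0.05, 0.07, 0.12]
# Fraction of particles whose relative size uncertainty exceeds 10%
# (here one sample out of six):
print(exceedance(rel_unc, 0.1))
```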
- Fig. 13 shows exemplary histograms of the square root of the particle intensity for particles of different sizes produced with a calibrated aerosol generator in order to determine the calibration constant c of the particle sizing relation.
- particles in the form of droplets with known diameter d_p are generated using the calibrated aerosol generator.
- a scattered light intensity I_out, a scattering angle θ, and an incident light intensity L are determined.
- Fig. 14 shows a schematic diagram of an exemplary analysis unit 10 for determining particle sizes.
- the analysis unit 10 may, e.g., be a computational device and may be operational with numerous other general-purpose or special-purpose computing system environments or configurations.
- Analysis unit 10 may be described in the general context of computer device executable instructions, such as program modules comprising executable program instructions, being executable by the analysis unit 10.
- program modules may include routines, programs, objects, components, logic, data structures, and so on that perform particular tasks or implement particular abstract data types.
- Analysis unit 10 may be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed computing environment, program modules may be located in both local and remote computer device storage media including memory storage devices.
- Analysis unit 10 may, e.g., be comprised by a control unit configured for controlling a measuring device for measuring particle sizes.
- analysis unit 10 is shown in the form of a general-purpose computing device.
- the components of analysis unit 10 may include, but are not limited to, one or more processors or processing units 16, a system memory 28, and a bus 18 that couples various system components including system memory 28 to processor 16.
- Bus 18 represents one or more of any of several types of bus structures, including a memory bus or memory controller, a peripheral bus, an accelerated graphics port, and a processor or local bus using any of a variety of bus architectures.
- bus architectures include Industry Standard Architecture (ISA) bus, Micro Channel Architecture (MCA) bus, Enhanced ISA (EISA) bus, Video Electronics Standards Association (VESA) local bus, and Peripheral Component Interconnect (PCI) bus.
- Analysis unit 10 may comprise a variety of computer device readable storage media. Such media may be any available storage media accessible by analysis unit 10, and include both volatile and non-volatile storage media, removable and non-removable storage media.
- a system memory 28 may include computer device readable storage media in the form of volatile memory, such as random-access memory (RAM) 30 and/or cache memory 32.
- Analysis unit 10 may further include other removable/non-removable, volatile/non-volatile computer device storage media.
- storage system 34 may be provided for reading from and writing to a non-removable, non-volatile magnetic medium, also referred to as a hard drive.
- a magnetic disk drive for reading from and writing to a removable, non-volatile magnetic disk, e.g., a floppy disk, may be provided.
- an optical disk drive for reading from or writing to a removable, non-volatile optical disk such as a CD-ROM, DVD-ROM or other optical storage media may be provided.
- each storage medium may be connected to bus 18 by one or more data media interfaces.
- Memory 28 may, e.g., comprise machine-executable program instructions configured for controlling the analysis unit 10 to determine particle sizes.
- Memory 28 may, e.g., comprise machine-executable program instructions configured for controlling a measuring device to measure particle sizes.
- Program 40, having a set of one or more program modules 42, may, by way of example, be stored in memory 28.
- the program modules 42 may comprise an operating system, one or more application programs, other program modules, and/or program data. Each of the operating system, one or more application programs, other program modules, and program data or some combination thereof, may include an implementation of a networking environment.
- One or more of the program modules 42 may be configured for controlling the analysis unit 10 to determine particle sizes.
- One or more of the program modules 42 may be configured for controlling a measuring device to measure particle sizes.
- Analysis unit 10 may further communicate with one or more external devices 14 such as a keyboard, a pointing device, like a mouse, and a display 24 enabling a user to interact with analysis unit 10. Such communication can occur via input/output (I/O) interfaces 22. Analysis unit 10 may further communicate with one or more networks such as a local area network (LAN), a general wide area network (WAN), and/or a public network, like the Internet, via network adapter 20. Network adapter 20 may communicate with other components of analysis unit 10 via bus 18. It should be understood that although not shown, other hardware and/or software components could be used in conjunction with analysis unit 10.
- Fig. 15 shows an exemplary analysis unit 10 comprised by a control unit 11 of a measuring device.
- the analysis unit 10 and the control unit 11 may be provided as separated computational devices.
- the analysis unit 10 may, e.g., be configured as shown in Fig. 14.
- the analysis unit 10 may comprise a hardware component 54 comprising one or more processors as well as a memory storing machine-executable program instructions. Execution of the program instructions by the one or more processors may cause the one or more processors to control the analysis unit 10 to determine particle sizes. Additionally or alternatively, execution of the program instructions by the one or more processors may cause the one or more processors to control a measuring device to measure particle sizes.
- the analysis unit 10 may further comprise one or more input devices, like a keyboard 58 and a mouse 56, enabling a user to interact with the analysis unit 10. Furthermore, the analysis unit 10 may comprise one or more output devices, like a display 24 providing a graphical user interface 50 with control elements 52, e.g., GUI elements, enabling the user to control the determining of particle sizes and/or the measuring of particle sizes. On display 24 image data 112, like the image data 112 of Fig. 2, may be shown.
- a single processor or other unit may fulfill the functions of several items recited in the claims.
- a computer program may be stored/distributed on a suitable medium, such as an optical storage medium or a solid-state medium supplied together with or as part of other hardware, but may also be distributed in other forms, such as via the Internet or other wired or wireless telecommunication systems.
- aspects of the present invention may be embodied as an apparatus, computer program or computer program product. Accordingly, aspects of the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment (including firmware, resident software, micro-code, etc.) or an embodiment combining software and hardware aspects that may all generally be referred to herein as a "circuit," "module" or "system." Furthermore, aspects of the present invention may take the form of a computer program product embodied in one or more computer readable medium(s) having computer executable code embodied thereon. A computer program comprises the computer executable code or "program instructions".
- the computer readable medium may be a computer readable signal medium or a computer readable storage medium.
- a "computer-readable storage medium" as used herein encompasses any tangible storage medium which may store instructions which are executable by a processor of a computing device.
- the computer-readable storage medium may be referred to as a computer-readable non-transitory storage medium.
- the computer-readable storage medium may also be referred to as a tangible computer readable medium.
- a computer-readable storage medium may also be able to store data which is able to be accessed by the processor of the computing device.
- Examples of computer-readable storage media include, but are not limited to: a floppy disk, a magnetic hard disk drive, a solid-state hard disk, flash memory, a USB thumb drive, Random Access Memory (RAM), Read Only Memory (ROM), an optical disk, a magneto-optical disk, and the register file of the processor.
- Examples of optical disks include Compact Disks (CD) and Digital Versatile Disks (DVD), for example CD-ROM, CD-RW, CD-R, DVD-ROM, DVD-RW, or DVD-R disks.
- a further example of an optical disk may be a Blu-ray disk.
- the term computer-readable storage medium also refers to various types of recording media capable of being accessed by the computer device via a network or communication link.
- data may be retrieved over a modem, over the Internet, or over a local area network.
- Computer executable code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
- a computer readable signal medium may include a propagated data signal with computer executable code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated signal may take any of a variety of forms, including, but not limited to, electromagnetic, optical, or any suitable combination thereof.
- a computer readable signal medium may be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
- Computer memory or “memory” is an example of a computer-readable storage medium.
- Computer memory is any memory which is directly accessible to a processor.
- Computer storage or “storage” is a further example of a computer-readable storage medium.
- Computer storage is any non-volatile computer-readable storage medium. In some embodiments, computer storage may also be computer memory or vice versa.
- a "processor" as used herein encompasses an electronic component which is able to execute a program or machine executable instruction or computer executable code.
- References to the computing device comprising "a processor” should be interpreted as possibly containing more than one processor or processing core.
- the processor may for instance be a multi-core processor.
- a processor may also refer to a collection of processors within a single computer device or distributed amongst multiple computer devices.
- the term computing device should also be interpreted to possibly refer to a collection or network of computing devices each comprising a processor or processors.
- the computer executable code may be executed by multiple processors that may be within the same computing device or which may even be distributed across multiple computing devices.
- Computer executable code may comprise machine executable instructions or a program which causes a processor to perform an aspect of the present invention.
- Computer executable code for carrying out operations for aspects of the present invention may be written in any combination of one or more programming languages, including an object-oriented programming language such as Java, Smalltalk, C++ or the like and conventional procedural programming languages, such as the "C" programming language or similar programming languages and compiled into machine executable instructions.
- the computer executable code may be in the form of a high-level language or in a pre-compiled form and be used in conjunction with an interpreter which generates the machine executable instructions on the fly.
- the computer executable code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server.
- the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider).
- the program instructions can be executed on one processor or on several processors. In the case of multiple processors, they can be distributed over several different entities, like clients, servers, etc. Each processor could execute the portion of the instructions intended for that entity.
- the computer program or program instructions are understood to be adapted to be executed by a processor associated or related to the respective entity.
- a "user interface” as used herein is an interface which allows a user or operator to interact with a computer or computer device.
- a 'user interface' may also be referred to as a 'human interface device'.
- a user interface may provide information or data to the operator and/or receive information or data from the operator.
- a user interface may enable input from an operator to be received by the computer and may provide output to the user from the computer.
- the user interface may allow an operator to control or manipulate a computer, and the interface may allow the computer to indicate the effects of the operator's control or manipulation.
- the display of data or information on a display or a graphical user interface is an example of providing information to an operator.
- the receiving of data through a keyboard, mouse, trackball, touchpad, pointing stick, graphics tablet, joystick, gamepad, webcam, headset, gear sticks, steering wheel, pedals, wired glove, dance pad, remote control, one or more switches, one or more buttons, and an accelerometer are all examples of user interface components which enable the receiving of information or data from an operator.
- a GUI element is a data object whose attributes specify the shape, layout and/or behavior of an area displayed on a graphical user interface, e.g., a screen.
- a GUI element can be a standard GUI element such as a button, a text box, a tab, an icon, a text field, a pane, a check-box item or item group or the like.
- a GUI element can likewise be an image, an alphanumeric character or any combination thereof. At least some of the properties of a displayed GUI element depend on the data values aggregated over the group of data objects said GUI element represents.
- These computer program instructions may be provided to a processor of a general- purpose computer, special-purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks.
- These computer program instructions may also be stored in a computer readable medium that can direct a computer, other programmable data processing apparatus, or other devices to function in a particular manner, such that the instructions stored in the computer readable medium produce an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks.
- the computer program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer-implemented process such that the instructions which execute on the computer or other programmable apparatus provide processes for implementing the functions specified in the block diagram block or blocks.
Abstract
The invention relates to a measuring device (100) for measuring particle sizes. The measuring device (100) comprises a control unit (11) for controlling the measuring device (100), a light source (114) for generating a beam of light (116) for illuminating a measurement volume (104), and one or more image sensors (106) arranged at different positions relative to the measurement volume (104). The control unit (11) comprises a processor (16) and a memory (28) storing machine-executable program instructions (40). Execution of the program instructions (40) by the processor (16) causes the processor (16) to control the measuring device (100) to illuminate the measurement volume (104) with the beam of light (116) and to acquire image data (112) of the measurement volume (104) using the one or more image sensors (106).
Description
DESCRIPTION
MEASURING DEVICE FOR MEASURING PARTICLE SIZES
Technical Field
The present invention relates to the field of analysis of behavior of particles in a given measurement volume.
Background
In many scientific experiments and industrial applications, one studies behavior of particles in a given measurement volume and would like to know as much about them as possible. Specifically, in the approximate order of measurement difficulty, the most common properties of interest are: the particle position, velocity, size, shape, orientation, rotation rate, electric charge, temperature and material composition. The different quantities, when known simultaneously for each particle individually, can allow novel insights into the particle behavior that would otherwise not be possible. For example, simultaneous measurement of particle position and velocity allows one to estimate the properties of the underlying fluid flow based on the derived spatial velocity gradients.
For each of the quantities mentioned above, there exists a technique to measure that quantity accurately, but often in a way that makes measurement of the other quantities difficult or meaningless. Even though it is in principle possible to bring several devices together in such a way that they share a common measurement volume, through their physical presence and proximity requirements they would often influence each other's measurements, defeating the purpose of such a combination. It is therefore highly desirable to have a single device that can measure several particle properties simultaneously.
In the last two decades, thanks to the development of commercially available high-speed video cameras, the technique of particle tracking has gained increasing popularity for its ability to produce accurate simultaneous measurements of particle positions and velocity. However,
on many occasions, the usefulness of such results would be greatly enhanced if the particle size was known, too. Some possible scenarios where this may be beneficial are:
1. Measurement of fluid properties such as dissipation rate. Often, particles are used as tracers to visualize the fluid velocity, but the degree of correlation between the particle and fluid velocity is greater for smaller particles. Being able to exclude large particles from the analysis would thus increase accuracy.
2. Often one wants to measure behavior of particles with a particular size, but producing only that size at a rapid rate is difficult to achieve in practice, with many particle generators producing either a broad range or a number of well-defined sizes that arise either through the generation mechanism or subsequent aggregation. This issue would go away if one could isolate just the size of interest in the data.
The particle tracking technique relies on accurate determination of the position of particle images on several camera sensors, and the stereoscopic reconstruction of the particle position in the world coordinates from these image positions by means of a well-calibrated camera model. It is clear that the particle images potentially contain more useful information than just the particle position. It seems natural to expect that the overall brightness of the particle image should provide a hint of the particle size, as larger particles scatter more light due to their larger cross-sectional area. However, for a general particle the amount of light scattered in a particular direction is a hopelessly complicated function of the particle size, orientation, shape and variations of surface albedo and roughness. Even for a perfectly spherical and smooth particle such as a small droplet of pure water, the dependence of the scattered light intensity on particle size is complicated and does not offer much hope for accurate inversion (see Fig. 2A).
There are techniques to estimate a droplet diameter from counting interference fringes on their defocused sensor image. However, these require a coherent, powerful source of light. Furthermore, the defocusing has an unacceptable detrimental effect on the position (and thus velocity) measurement accuracy, and limits the maximum acceptable particle seeding density.
Another approach is required which would work for focused particle images and would not have too strict illumination requirements.
Summary
Various embodiments provide a measuring device and a computer program product as described by the subject matter of the independent claims. Advantageous embodiments are described in the dependent claims. Embodiments of the present invention can be freely combined with each other if they are not mutually exclusive.
In one aspect, the invention relates to a measuring device for measuring particle sizes. The measuring device comprises a control unit for controlling the measuring device, a light source for generating a beam of light for illuminating a measurement volume, and one or more image sensors. The control unit comprises a processor and a memory storing machine-executable program instructions. Execution of the program instructions by the processor causes the processor to control the measuring device to illuminate the measurement volume with the beam of light and to acquire image data of the measurement volume using the one or more image sensors.
Examples may have the beneficial effect that a measuring device is provided which is configured for accurately determining a scattering angle θ of light scattered by particles, a scattered light intensity lout for these particles, and an incident light intensity lin at the particle position. Based on these values, the size of the particle may be determined accurately using the ratio lout/lin as well as the scattering angle θ.
For example, the measuring device comprises a plurality of the image sensors arranged at different positions relative to the measurement volume. The acquired image data of the measurement volume may be stereoscopic image data acquired using the plurality of image sensors. Examples may have the beneficial effect that the stereoscopic data may be used for determining positions of individual particles within the measurement volume. Thus, position dependencies of the incident light intensity lin and/or the scattering angle θ may be taken into account for determining particle sizes.
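To illustrate how particle positions may be recovered from such stereoscopic image data, the following sketch triangulates one particle from two views by linear least squares. It assumes each image sensor has been reduced to a 3×4 pinhole projection matrix by a prior calibration; the function name and matrix convention are illustrative assumptions, not taken from this disclosure.

```python
import numpy as np

def triangulate(P1, P2, uv1, uv2):
    """Least-squares triangulation of one particle from two calibrated views.

    P1, P2 : (3, 4) camera projection matrices obtained from a calibration.
    uv1, uv2 : (u, v) image coordinates of the same particle on each sensor.
    Returns the particle position (x, y, z) in world coordinates.
    """
    # Each view contributes two linear constraints on the homogeneous world
    # point X: u * (P row 3) - (P row 1) = 0, and analogously for v.
    A = np.vstack([
        uv1[0] * P1[2] - P1[0],
        uv1[1] * P1[2] - P1[1],
        uv2[0] * P2[2] - P2[0],
        uv2[1] * P2[2] - P2[1],
    ])
    # The homogeneous solution is the right singular vector belonging to the
    # smallest singular value of A.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]
```

With more than two sensors, the same matrix simply gains two rows per additional view, which also averages out localization noise.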
For example, the measuring device further comprises a diffuser for diffusing the beam of light. Examples may have the beneficial effect that the diffuser may support an averaging of Mie oscillations, i.e., the oscillation of the scattered light intensity lout as predicted by Lorenz-Mie scattering theory. An averaging of the Mie oscillations may already be implemented due to the finite size of the apertures of the image sensors used to acquire image data. This averaging may be increased by using a diffuser. For example, a diffuser with a diffusion angle of 1° full-width at half-maximum (FWHM) may be used.
For example, the measuring device comprises a chamber with an inlet for the particles and an outlet for the particles. The chamber comprises the measurement volume.
Examples may have the beneficial effect that a measuring device may be provided with all components mounted onto an encased frame. Such a measuring device may, e.g., also be factory-calibrated. For example, the inlet and the outlet may be arranged on opposite sides of the chamber, e.g., as openings in opposite sidewalls of the chamber. Thus, particles may enter the chamber via the inlet, pass through the chamber with the measurement volume and leave the chamber via the outlet. The beam of light generated by the light source for illuminating the measurement volume may be oriented in a direction perpendicular to a straight line extending from the inlet to the outlet.
For example, the light source comprises a collimator for generating the beam of light as a collimated beam of light. Examples may have the beneficial effect that a collimated beam of light may be provided, enabling an accurate measurement of particle sizes.
For example, the light source is a pulsed laser or a flashlamp.
For example, the measuring device comprises a motion unit for moving the measuring device. Examples may have the beneficial effect that the position of the measuring device and thus the position of the measurement volume may be adjusted.
For example, the motion unit comprises a linear motor configured for moving the measuring device along a set of one or more rails. Examples may have the beneficial effect that the position of the measuring device and thus the position of the measurement volume may be effectively and precisely adjusted.
For example, the set of one or more rails is pivotable around an axis extending perpendicular to the one or more rails in a common plane with the one or more rails, implementing a seesaw mechanism for pivoting the measuring device.
Examples may have the beneficial effect that an orientation of the measuring device and thus the orientation of the measurement volume may be adjusted.
For example, the image sensors are each configured for an off-optical-axis observation of the measurement volume via a mirror and a tube. An orientation of the tube defines a view axis under which the respective image sensor observes the measurement volume via the mirror and the tube.
Examples may have the beneficial effect that, by an off-optical-axis observation, the image sensors may be protected from environmental influences, e.g., within a box.
For example, the tube comprises a first end and a second end. The first end is a distal end relative to the mirror. The second end is a proximal end relative to the mirror. The tube further comprises a window arranged at the second end. Examples may have the beneficial effect that, using a mirror, an off-optical-axis observation may be implemented.
For example, the tube further comprises a ventilation system configured for injecting a stream of air into the tube and onto the window and for sucking out the stream of air in order to remove, with the stream of air, liquid that may have collected on the window. Examples may have the beneficial effect that liquid collected on the window may be efficiently removed.
For example, the tube further comprises a movable lid arranged at the first end of the tube and configured for opening and closing the tube at the first end. Examples may have the beneficial effect that the lid may protect the tube from liquids entering via the first end while the measuring device is not in use. While not in use, the movable lid may be closed.
For example, for each of the image sensors a relative angle α between a view axis under which the image sensor observes the measurement volume and a direction of illumination of the measurement volume by the light source is less than or equal to 45 degrees. Preferably, the relative angle α lies within a range from 25 to 35 degrees. Examples may have the beneficial effect that for such a relative angle α a measurement of particle sizes may be highly accurate.
For example, the control unit is further configured as an analysis unit. The execution of the program instructions by the processor further causes the processor to control the measuring device to determine the size of one or more of the particles within the measurement volume using the acquired image data. The determining of the size of the one or more particles comprises, for each of the particles: determining a scattered light intensity lout of light of the light source scattered by the respective particle to an individual image sensor of the one or more image sensors, using image data from the image data acquired by the individual image sensor; determining a scattering angle θ of the light of the light source being scattered by the respective particle to the individual image sensor, the scattering angle θ being determined relative to a direction of illumination of the measurement volume by the light source; determining an incident light intensity lin of the light of the light source at the position of the respective particle within the measurement volume; and determining the size of the particle using a ratio lout/lin of the scattered light intensity lout to the incident light intensity lin and the scattering angle θ.
Examples may have the beneficial effect that an analysis unit configured for an efficient and effective determination of particle sizes, e.g., of droplets, is provided. The particles, e.g., droplets, may, e.g., be particles in the µm range. In case the measurement volume is sufficiently small compared to the distance from the one or more image sensors, the scattering angle θ may be assumed to be approximately the same for all particles within the measurement volume, i.e., the scattering angle θ may be assumed to be independent of the individual positions of the particles within the measurement volume. Thus, the scattering angle θ may be determined using a position of the measurement volume relative to the one or more image sensors. This relative position of the measurement volume may be defined by the setup of a measuring device used for measuring the particle sizes. In this case, the scattering angle θ may be provided as a constant. Alternatively, in case a dependency of the scattering angle θ on the individual positions of the particles within the measurement volume is to be taken into account, a position of the particle under consideration within the measurement volume may be determined and used for determining a position-dependent scattering angle θ. For determining the position of the particle under consideration within the measurement volume, a stereoscopic setup may be used with a plurality of image sensors arranged at different positions relative to the measurement volume. This plurality of image sensors may enable an acquisition of stereoscopic image data for determining positions of individual particles within the measurement volume. Using a stereoscopic setup, positions of particles within the measurement volume and thus the scattering angle θ of light scattered by these particles may be determined, in case the dependency of the scattering angle θ on the particle position is relevant. In addition, a scattered light intensity lout for these particles and an incident light intensity lin at the particle position may be determined. In case this incident light intensity lin can be approximated as being constant within the measurement volume, i.e., is independent of the position of the particle under consideration within the measurement volume, the constant incident light intensity lin is determined. The constant incident light intensity lin may be provided by measuring the constant incident light intensity lin for the measuring device being used. Alternatively, a variation of the incident light intensity lin within the measurement volume, i.e., a dependence of the incident light intensity lin on the position of the particle under consideration, may be taken into account. In this case, a distribution profile of the incident light intensity lin may be determined, e.g., using variations of the scattered light intensity lout of a particle under consideration along its trajectory through the measurement volume. For this purpose, a stereoscopic setup may be used with a plurality of image sensors arranged at different positions relative to the measurement volume for acquiring stereoscopic data in order to determine the positions of the particle along its trajectory through the measurement volume.
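The distribution profile of the incident light intensity lin mentioned above can be estimated from the brightness variations of a single particle along its trajectory, since the size factor of one particle is constant and its scattered intensity is therefore proportional to the local incident intensity. The sketch below assumes a Gaussian beam cross-section, which makes the fit linear in the logarithm of the measured intensities; the Gaussian form and the coordinate convention are illustrative assumptions.

```python
import numpy as np

def fit_beam_profile(xy, i_out):
    """Fit I_in(x, y) = a * exp(-2*((x-x0)**2 + (y-y0)**2) / w**2) to
    scattered intensities observed along one particle's trajectory.

    xy : (N, 2) transverse beam coordinates of the trajectory points.
    i_out : (N,) scattered intensities measured at those points, in arbitrary
            units; the amplitude a then absorbs the unknown size factor.
    Returns (a, x0, y0, w) of the fitted Gaussian profile.
    """
    x, y = xy[:, 0], xy[:, 1]
    # log I = c0 + c1*x + c2*y + c3*(x**2 + y**2) is linear in c0..c3.
    A = np.column_stack([np.ones_like(x), x, y, x**2 + y**2])
    c0, c1, c2, c3 = np.linalg.lstsq(A, np.log(i_out), rcond=None)[0]
    x0, y0 = -c1 / (2.0 * c3), -c2 / (2.0 * c3)
    w = np.sqrt(-2.0 / c3)  # beam waist; c3 < 0 for a physical profile
    a = np.exp(c0 - c3 * (x0**2 + y0**2))
    return a, x0, y0, w
```

Trajectories of several particles crossing different parts of the beam can be combined in the same least-squares system, up to one unknown amplitude factor per particle.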
Based on these values, the size of the particle may be determined using the ratio lout/lin as well as the scattering angle θ.
In order to accurately deduce a proportionality between the local incident light intensity and the amount of light scattered by the particle onto the image sensor, knowledge of the local incident light intensity, the scattering angle with which the light is scattered and the effective radius of an aperture of the image sensor used to acquire image data of the particle may be required. The effective radius of the aperture may, e.g., be derived from properties of a point-spread function used to approximate the scattering by the particle or may be measured experimentally.
In order to simplify the analysis, some simplifying approximations may be made: The illumination beam may be approximated as being perfectly parallel, i.e., the incident light direction may be assumed to be perfectly uniform everywhere. It may be assumed that, along the beam direction, the local light intensity does not change. In this case, the light intensity may be a function of two variables only, i.e., two coordinate values defining the position within a beam cross-section of the incident light beam. In reality, light at any point within the beam may be coming from a range of angles, with an angular distribution dependent on the size of and distance to the light source, e.g., an end of an optical fiber, and the properties of a diffuser used, if any. For example, a diffuser with a diffusion angle of 1° full-width at half-maximum (FWHM) may be used.
For example, the size of the one or more particles is quantified by a diameter dp of the respective particle. The diameter dp is determined using the following relation between the diameter dp and the ratio lout/lin of the scattered light intensity lout to the incident light intensity lin as well as the scattering angle θ:
with q1 and q2 being constants and c being a calibration constant. Examples may have the beneficial effect that the analysis unit may be enabled to efficiently and effectively determine particle sizes using the relation above. The constants q1 and q2 may be predefined constants. These predefined constants may, e.g., be determined with an exponential fit using the Lorenz-Mie scattering theory. Using Lorenz-Mie scattering theory, a prediction of the scattered light intensity Iout may be provided (cf., e.g., Fig. 9 and Fig. 10), for which the constants q1 and q2 may be determined as fitting parameters of an exponential fit. The calibration constant c may be determined by calibrating the measuring device used for measuring the particle sizes using particles of known sizes. Alternatively, the constants q1 and c may be determined together using a calibration of the measuring device, while the constant q2 is determined using Lorenz-Mie scattering theory.
For example, the received image data of the measurement volume is stereoscopic image data acquired using a plurality of the image sensors arranged at different positions relative to the
measurement volume. The execution of the program instructions by the processor may further cause the processor to control the analysis unit to determine a position of the respective particle within the measurement volume using the stereoscopic image data acquired by the plurality of image sensors. The position of the respective particle within the measurement volume is used for the determining of the scattering angle θ. Examples may have the beneficial effect, that using stereoscopic image data a dependency of the scattering angle θ on the position of the scattering particle within the measurement volume may be taken into account.
For example, the scattered light intensity Iout of an individual particle is determined as an average over an aperture of the individual image sensor. Examples may have the beneficial effect that an average over the aperture of the individual image sensor may be used as the scattered light intensity Iout.
For example, the determining of the scattered light intensity Iout of the individual particle using image data from the image data acquired by the individual image sensor comprises an iterative fitting using a point-spread function. Examples may have the beneficial effect that a point-spread function may be fitted in order to identify light intensities originating from individual particles within the image data acquired by the image sensors.
For example, the iterative fitting uses a model of a distribution of a light intensity I(x) of the individual particle at a position x on the individual image sensor, which is centered at a position x0 and is to be fitted to the image data, of the form
with r² = cxx(x − x0)² + 2cxy(x − x0)(y − y0) + cyy(y − y0)² describing an ellipse parametrized by the parameters cxx, cxy, cyy around the center position x0, x and y being coordinate values of the position x, and x0 and y0 being coordinate values of the center position x0. IB is a local particle-independent background intensity. IA is a total particle-dependent intensity measured by the image sensor. The point-spread function Φ(r, φ) used for the iterative fitting is a canonical point-spread function of the form
with J0 being a Bessel function of the first kind and zero order, dSF being a scaling factor from canonical coordinates to image sensor coordinates, and φ being a defocus parameter of the form
with k = 2π/λ being a wavenumber of light of the light source with wavelength λ.
Furthermore, rA is a radius of the aperture of the individual image sensor, α is a parameter quantifying an effect of a lens of the image sensor on the phase of light passing through the aperture of the image sensor, zD is a coordinate value of the particle in the direction of illumination and zs is a coordinate value of the individual image sensor in the direction of illumination.
Examples may have the beneficial effect that an efficient and effective approach is provided for assigning light intensities acquired by the image sensors to individual particles within the measurement volume.
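The defocus behaviour of such a canonical point-spread function can be evaluated numerically. The following sketch assumes the standard defocused-aperture form Φ(r, φ) = |2∫₀¹ u·J0(ru)·exp(iφu²) du|², which is one common normalization consistent with the Airy pattern for φ = 0; the exact pre-factors in the formula above may differ. Only the Python standard library is used, with J0 evaluated via its integral representation.

```python
import cmath
import math

def bessel_j0(x, n=400):
    # J0(x) = (1/pi) * integral over [0, pi] of cos(x*sin(t)) dt, midpoint rule.
    h = math.pi / n
    return sum(math.cos(x * math.sin((k + 0.5) * h)) for k in range(n)) * h / math.pi

def psf(r, defocus, n=1000):
    # Assumed canonical form: |2 * integral_0^1 u*J0(r*u)*exp(i*defocus*u^2) du|^2.
    h = 1.0 / n
    acc = 0j
    for k in range(n):
        u = (k + 0.5) * h
        acc += u * bessel_j0(r * u) * cmath.exp(1j * defocus * u * u) * h
    return abs(2.0 * acc) ** 2

# In focus (defocus = 0) the profile is the Airy pattern: unity at the
# center and a first minimum at the first zero of J1, j_{1,1} ~ 3.8317.
print(psf(0.0, 0.0))      # ~1.0
print(psf(3.8317, 0.0))   # ~0, the Airy minimum
# Defocus spreads the light over a larger area: the central value drops.
print(psf(0.0, 5.0))
```

For φ = 0 the integral reduces to J1(r)/r, so the first zero of J1 marks the Airy minimum, matching the role of j1,1 further below in the text.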
For example, for modelling a circular distribution of the light intensity I(x) of the individual particle on the individual image sensor, the ellipse is a circle with the constraints cxx = cyy = 1 and cxy = 0.
Examples may have the beneficial effect, that an efficient and effective approach is provided for fitting light intensities to a circular distribution.
For example, for modelling a distorted elliptic distribution of the light intensity I(x) of the individual particle on the individual image sensor, a distortion of the ellipse is area preserving with cxxcyy − cxy² = 1.
Examples may have the beneficial effect, that an efficient and effective approach is provided for fitting light intensities to a distorted ellipse distribution.
For example, an orientation of a semi-major axis of the ellipse and a semi-minor axis of the ellipse is fixed with cxx = b cos²β + b⁻¹ sin²β, cxy = (b⁻¹ − b) cosβ sinβ, cyy = b sin²β + b⁻¹ cos²β, and β being an angle of rotation.
Examples may have the beneficial effect, that an efficient and effective approach is provided for fitting light intensities to a distorted ellipse distribution.
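The rotated-ellipse parametrization above can be checked directly: for any aspect parameter b and rotation angle β, the coefficients satisfy the area-preservation constraint cxxcyy − cxy² = 1, and b = 1 recovers the circular special case. A minimal sketch:

```python
import math

def ellipse_coeffs(b, beta):
    """Coefficients of r^2 = cxx*dx^2 + 2*cxy*dx*dy + cyy*dy^2 for an
    area-preserving ellipse with aspect parameter b rotated by beta."""
    c, s = math.cos(beta), math.sin(beta)
    cxx = b * c * c + s * s / b
    cxy = (1.0 / b - b) * c * s
    cyy = b * s * s + c * c / b
    return cxx, cxy, cyy

def r_squared(x, y, x0, y0, coeffs):
    # Elliptic radius entering the point-spread function argument.
    cxx, cxy, cyy = coeffs
    dx, dy = x - x0, y - y0
    return cxx * dx * dx + 2.0 * cxy * dx * dy + cyy * dy * dy

# b = 1 gives the circular special case cxx = cyy = 1, cxy = 0.
print(ellipse_coeffs(1.0, 0.7))
# Any (b, beta) preserves area: cxx*cyy - cxy^2 == 1.
cxx, cxy, cyy = ellipse_coeffs(1.6, 0.35)
print(cxx * cyy - cxy * cxy)  # ~1.0
```

The identity cxxcyy − cxy² = (cos²β + sin²β)² = 1 holds for every b and β, so the single free parameter b indeed describes the distortion.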
For example, the model of the distribution of the light intensity I(x) of the individual particle at a position x on the individual image sensor used for the iterative fitting has the form
with r = ||x − x0|| and a being a vector defining a direction and magnitude of an asymmetry of the light intensity.
Examples may have the beneficial effect that this form may be appropriate for approximating the way that the point spread function tends to be brighter on one side of the image center than on the other, due to the effects of the particle's finite size and/or interference.
For example, the iterative fitting comprises at least one fitting step, in which fitting parameters of scattered light intensities Iout of a plurality of particles are adjusted simultaneously for all of the respective scattered light intensities Iout.
Examples may have the beneficial effect, that an improved fitting may be implemented, which takes into account dependences between the particles.
For example, the execution of the program instructions by the processor further causes the processor to control the analysis unit to perform an extracting of the image data of individual particles from a video frame of image data acquired by the individual image sensor from the image data of the measurement volume for the determining of the scattered light intensities Iout of the individual particles. The extracting comprises selecting the video frame of image data acquired by the individual image sensor from the received image data, selecting a set of decreasing particle image intensity thresholds, for each of the selected particle image intensity thresholds selecting a set of target defocus parameters, for each of the selected target defocus parameters selecting a range of acceptable defocus parameters, filtering the selected video frame of image data using a filter matching the point spread function with the current target defocus parameter, finding local intensity maxima of the filtered video frame of image data, for each of the local intensity maxima found in decreasing order with respect to filtered intensity values of the local intensity maxima, using the local maximum found as an initial estimate of a position of an individual particle image, determining an initial estimate of a defocus parameter from the intensity distribution of the local maximum found, refining the initial estimates using an optimization algorithm, checking whether a resulting refined individual particle image satisfies current constraints with its particle image intensity exceeding the current particle image intensity threshold and its defocus parameter lying within the current range of acceptable defocus parameters, and, if the refined individual particle image satisfies the current constraints, subtracting the image intensity of the refined individual particle image from the selected video frame of image data and adding the refined individual particle image to a set of extracted images.
Examples may have the beneficial effect that an efficient and effective approach is provided for extracting image data of individual particles from the image data acquired by the image sensors, e.g., from a video frame.
For example, the range of acceptable defocus parameters selected for a selected target defocus parameter is the range of all defocus parameters being equal to or smaller than the target defocus parameter. Examples may have the beneficial effect, that a suitable range of acceptable defocus parameters may be provided.
For example, the filter being used for filtering the selected frame of image data is a Gaussian filter. Examples may have the beneficial effect, that a suitable filter matching the point spread function with the current target defocus parameter may be implemented.
For example, the optimization algorithm used for refining the initial estimates of the position and defocus parameter of the individual particle image is a Levenberg-Marquardt algorithm. Examples may have the beneficial effect, that an efficient and effective optimization algorithm for refining the initial estimates of the position and defocus parameter of the individual particle image may be provided.
In order to obtain accurate intensities of particle images, an accurate model of the particle image shape, i.e., a suitable point spread function, may be required. The intensities of particle images may be defined in grayscale levels of the image sensor. Furthermore, an approach to obtain initial estimates of particle image location and defocus may be required. Furthermore, an approach to refine the initial estimates in order to arrive at an accurate final fit may be required. The refinement may be done using the Levenberg-Marquardt algorithm, trying to minimize a difference between the model of the particle image shape and the true distribution of pixel intensities. For this purpose, a constant damping factor, e.g., of λ = 0.125, may be used.
The Levenberg-Marquardt algorithm is an algorithm used to solve non-linear least squares problems. The Levenberg-Marquardt algorithm interpolates between the Gauss-Newton algorithm and the algorithm of gradient descent. The Levenberg-Marquardt algorithm is more robust than the Gauss-Newton algorithm. The Levenberg-Marquardt algorithm may be viewed as a Gauss-Newton approach using a trust region approach.
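Purely as an illustration of the Levenberg-Marquardt idea, and not of any specific implementation in the measuring device, the following sketch fits a two-parameter exponential model y = a·exp(b·x) to noise-free data using the damped normal equations (JᵀJ + λ·diag(JᵀJ))δ = Jᵀr with a constant damping factor λ = 0.125, as mentioned above; the model and the data are invented for the example.

```python
import math

def levenberg_marquardt(xs, ys, a, b, damping=0.125, iterations=500):
    """Fit y = a*exp(b*x) by Levenberg-Marquardt with constant damping."""
    for _ in range(iterations):
        # Accumulate J^T r and J^T J for the current parameters.
        r0 = r1 = j00 = j01 = j11 = 0.0
        for x, y in zip(xs, ys):
            m = math.exp(b * x)
            res = y - a * m
            da, db = m, a * x * m          # partial derivatives of the model
            r0 += da * res
            r1 += db * res
            j00 += da * da
            j01 += da * db
            j11 += db * db
        # Damped normal equations (JtJ + damping*diag(JtJ)) delta = Jt r,
        # solved for the 2x2 case by Cramer's rule.
        a00 = j00 * (1.0 + damping)
        a11 = j11 * (1.0 + damping)
        det = a00 * a11 - j01 * j01
        if det == 0.0:
            break
        a += (r0 * a11 - r1 * j01) / det
        b += (r1 * a00 - r0 * j01) / det
    return a, b

xs = [i / 10.0 for i in range(11)]
ys = [2.0 * math.exp(-1.3 * x) for x in xs]   # ground truth a=2, b=-1.3
a_fit, b_fit = levenberg_marquardt(xs, ys, a=1.0, b=-1.0)
print(a_fit, b_fit)  # converges towards (2.0, -1.3)
```

The damping term interpolates between a Gauss-Newton step (damping near 0) and a short gradient-descent-like step (large damping), which is the trust-region behaviour described above.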
An intensity profile of an image of a small particle on an image sensor, i.e., a dependence of a grayscale intensity of an individual sensor pixel on the pixel location on the image sensor, may depend on several parameters. These parameters may comprise a particle radius rD, a particle distance from the image sensor aperture LD, an index of refraction nR of the material from which the particle is made, a wavelength of light λ used to illuminate the particle, a relative angle θ between the direction of the incoming light hitting the particle and the direction of view of the image sensor (scattering angle), a radius of aperture rA of an objective of the image sensor, a magnification M of the imaging system of the image sensor, and/or a sensor pixel size lp of the image sensor.
For example, the following values of these parameters may be used: a particle radius rD of 5 μm ≤ rD ≤ 30 μm, a particle distance from the image sensor aperture LD of LD ≈ 1 m, an index of refraction nR for pure water of nR = 1.337 at λ = 515 nm, depending sensitively on the wavelength λ and weakly on the temperature (dT nR = 2 × 10⁻⁵ K⁻¹), while an imaginary part of nR may be negligible with |Im(nR)| < 10⁻⁸, a wavelength of light λ of λ = 515 nm used to illuminate the particle, a scattering angle θ ≈ 30°, a radius of aperture rA of rA = 6.25 mm of the objective of the image sensor, a magnification M of M ≈ 1 of the imaging system of the image sensor, and a sensor pixel size lp of lp = 28 μm.
According to an example, an approximate formula for the intensity profile of a particle image may be derived that is fast enough to produce synthetic videos comprising a plurality of image frames in a reasonable time, that is asymptotically correct in the limit of rD → 0, rA → 0, and that is yet reasonably accurate for typical parameter values.
An exemplary approximate formula for the intensity profile may have the beneficial effect of being fast enough that a creation of a synthetic video does not take more than a few core-days of computation time. One synthetic video may, e.g., comprise 10⁴ image frames. Each image frame may, e.g., comprise up to 10⁴ particle images. This may, e.g., amount to a requirement of several ms of computation time per particle image. Given that a single particle image, e.g., consists of about 10³ pixels, each of which may require 12 evaluations of the formula, as the pixel grayscale intensity value may be given by an integral over its light-collecting area, one formula evaluation may be required to take a small multiple of 1 × 10⁻⁷ s. With an exemplary core speed of 2 GHz, that may, e.g., give an upper limit of about 1 × 10³ floating point operations per single call to the point spread function formula.
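This computation-time budget can be checked with a few lines of arithmetic. The figures 10⁴ frames, 10⁴ images per frame, 10³ pixels and 12 evaluations per pixel are taken from the text; the 3 ms per particle image is an assumption within the stated "several ms" range.

```python
frames = 10**4            # image frames per synthetic video
images_per_frame = 10**4  # particle images per frame
pixels_per_image = 10**3  # pixels per particle image
evals_per_pixel = 12      # PSF evaluations per pixel (light-collecting integral)
time_per_image = 3e-3     # seconds, assumed within "several ms"
core_speed = 2e9          # cycles per second (2 GHz core)

total_images = frames * images_per_frame
total_days = total_images * time_per_image / 86400.0
time_per_eval = time_per_image / (pixels_per_image * evals_per_pixel)
ops_per_eval = time_per_eval * core_speed

print(total_days)     # a few core-days
print(time_per_eval)  # a small multiple of 1e-7 s
print(ops_per_eval)   # within the ~1e3 floating point operation budget
```

With these numbers, 10⁸ particle images at 3 ms each come to roughly 3.5 core-days, and one formula evaluation must finish in about 2.5 × 10⁻⁷ s, i.e., a few hundred floating point operations at 2 GHz.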
For example, the limiting case of an infinitesimally small particle, which scatters light uniformly in all directions, may be considered. A simplified situation may be assumed, in which the image sensor aperture of radius rA is at z = 0, so that the aperture lets light through a circle x² + y² ≤ rA², and is combined with a simple convex lens to model the complex lens system of the objective of the image sensor. The lens may be modelled as locally spherical, but infinitely thin. Thus, an effect of the objective may be condensed on the phase of the light passing through the aperture at [x, y, 0] as an additive factor α(x² + y²).
The particle may be located at z = zD on or close to the optical axis, with αview being the maximum tangent of a deviation of the particle position from the optical axis. The αview may be assumed to be small, i.e., αview ≪ 1, e.g., αview ≈ 0.021. The image sensor may be arranged at z = zs with zD < 0 < zs. The light has a wavelength λ and wavenumber k = 2π/λ.
The particle may, e.g., be assumed to be a point source of light. An electric field at a point xs on the sensor may be computed using the Fresnel-Kirchhoff formula
which incorporates effects of the lens and where x are points within the aperture, i.e., z = 0 and x² + y² ≤ rA².
The Fresnel-Kirchhoff formula may be used to model a propagation of light in a wide range of configurations, either analytically or using numerical modelling. The Fresnel-Kirchhoff formula gives an expression for a wave disturbance when a monochromatic spherical wave is the incoming wave of the situation under consideration. This formula is derived by applying the Kirchhoff integral theorem, which uses Green's second identity to derive the solution to the homogeneous scalar wave equation, to a spherical wave with some approximations.
For simplifying the terms outside the exponential of the Fresnel-Kirchhoff formula, these terms may be rewritten and approximated, e.g., by simply 2. This approximation may at most incur a small relative error. Focusing on the region of the image sensor where the particle image appears, which is near rs = zsrD/zD, and assuming |zD| = 1 m, this may result in a maximum relative error of 1.2 × 10⁻³. The resulting relative error in the value of E(xs) is expected to be a small fraction of this maximum value and therefore negligible.
Using this approximation, the Fresnel-Kirchhoff formula may be simplified to
Furthermore, the exponential term inside the integral may be approximated as follows:
using the following approximation to a square root:
Exponential terms in the approximation of the exponential term inside the integral of the Fresnel-Kirchhoff formula may be replaced by ones, provided that the corresponding argument is sufficiently small. For example, the magnitudes of the exponential arguments may be bounded from above by 2k|zs|¹⁻²ⁿ(rA + |rs|)²ⁿ. For the sensor location near the particle image, i.e., rs ≈ rDzs/zD, upper bounds on the magnitudes of the various parameters may be summarized as follows:
In the above table, maximum values of the various terms in the Taylor expansion are listed for a distance of 1.0 m and rs = −rD for several values of |rD|. For the parameter range considered above, the terms proportional to k|r − rs|⁶|zs|⁻⁵ may be negligible as well, while the terms proportional to k|r − rs|⁴|zs|⁻³ may be negligible only near the optical axis. Therefore, it may be assumed that the aperture radius rA and the distance of the particle from the optical axis rD are small enough that k|r − rs|⁴|zs|⁻³ ≪ 1. Given the above-mentioned approximations, the Fresnel-Kirchhoff formula may be further simplified to
Collecting terms of the same power in r and transforming to polar coordinates allows for rewriting the Fresnel-Kirchhoff formula as
From the definition of the Bessel functions of the first kind, the following identity follows for any real c: 2πJ0(c) = ∫ exp(ic cos φ′) dφ′, with the integral taken over φ′ from 0 to 2π.
Using this function and a change of variables r = rAu, the Fresnel-Kirchhoff formula may be rewritten as
From these equations, it may be seen that the particle image is centered around rs = −(|zs|/|zD|)rD. The parameter φ may be referred to as the defocus parameter, since it encapsulates the effects of the particle's departure from the plane of focus on the shape of its sensor image. The grayscale intensity on the image sensor may then be proportional to |E|².
The pre-factor is chosen so that, when integrated over the whole sensor plane, the result gives a value of 1. Combining the rewritten Fresnel-Kirchhoff formula with this canonical point spread function and integrating over the whole sensor gives the total intensity of the image. This is consistent with the assumption that the total intensity is proportional to the solid angle of the aperture as viewed from the point source. For particles in perfect focus, the well-known formula for the Airy disk is recovered.
Considering a simple lens with focal length f positioned at z = 0, an object at z = zD and an image sensor at z = zs, the lens equation gives the relation 1/zs − 1/zD = 1/f and the magnification factor M = zs/|zD|.
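With the thin-lens sign convention assumed here (object at zD < 0, image at zs > 0, so that 1/zs − 1/zD = 1/f and M = zs/|zD|), the configuration mentioned earlier in the text, LD ≈ 1 m and M ≈ 1, corresponds to a focal length near 0.5 m; the numbers below are illustrative only.

```python
def image_distance(f, z_d):
    """Thin-lens equation 1/zs - 1/zD = 1/f solved for zs (with zD < 0)."""
    return 1.0 / (1.0 / f + 1.0 / z_d)

def magnification(z_s, z_d):
    # Magnification factor M = zs/|zD|.
    return z_s / abs(z_d)

f = 0.5      # focal length in metres (illustrative)
z_d = -1.0   # particle 1 m in front of the lens
z_s = image_distance(f, z_d)
print(z_s)                      # sensor plane 1 m behind the lens
print(magnification(z_s, z_d))  # unit magnification, as in the text
```

Moving the object closer to the focal plane pushes zs out and increases M, which is why the defocus parameter below depends on the departure of zD from its in-focus value.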
Given the aperture radius rA, the following two quantities of interest may be determined: The first quantity of interest is the radius pmin of the Airy disk, which is the distance between the center and the closest minimum of the point spread function of a point source on the focal plane. Setting φ = 0 in the canonical point spread function allows for computing the integral analytically. Then pmin may be obtained by considering the first zero of J1, from which pmin = j1,1λ|zs|/(2πrA) follows, where λ is the wavelength of light used and j1,1 ≈ 3.8317 is the first zero of J1(x).
The second quantity of interest is the rate φ z at which the defocus parameter increases as the source moves away from the focal plane z = zF. It is derived by differentiating the expression for φ with respect to zD, which yields
In the following, an approach is provided for extracting new particle images, i.e., those which could not be extrapolated based on past information, from image sensors. This task may be complicated primarily by the constraint of computation time, which may leave very few possible filters. The approach for extracting new particle images may comprise the following:
A set of decreasing particle image intensity thresholds may be selected and it may be looped over them. For each intensity threshold, a set of target defocus parameters may be selected
and it may be looped over them. For each target defocus parameter, a range of defocus parameters may be picked that will be accepted in this step. The best-performing choice may, e.g., be to pick all below the target value. The image frame may be filtered using a filter, e.g., a Gaussian filter that may best match the point spread function with the target defocus parameter. The local maxima of the filtered intensity may be determined. It may be looped through the local maxima, e.g., in decreasing order of their filtered intensity value. Each local maximum may be considered as an initial estimate of a position of a new particle image, and its parameters may be refined, e.g., as follows: a) estimate the image defocus parameter from the local intensity distribution; b) optimize the image parameters using a small number of iterations of the Levenberg-Marquardt algorithm. It may be decided, whether each refined image obtained at the end of the previous step is real (or a noise artefact), and if so, if it satisfies current constraints, that is, if its intensity lies above the current threshold, and its defocus lies within the currently accepted range. If the image is deemed real and allowable, its intensity may be subtracted from the residual frame intensity and it may be added to a set of extracted images. For example, at the end of each loop, or, e.g., when a sufficient number of new images have been added, all the images extracted so far may be taken, their intensities may be added back to the frame and their parameters may be optimized simultaneously. Then their intensities may be subtracted again.
Proceeding in several stages with respect to the intensity threshold and defocus may have the advantage, that it may be avoided mistakenly identifying the inner-most peak of a defocused image intensity profile as a focused image with much lower intensity.
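The staged extraction loop can be illustrated on a one-dimensional toy signal: Gaussian peaks stand in for particle images, a matched Gaussian filter locates candidates, amplitudes are refined by a local least-squares fit, and accepted images are subtracted before the next, lower threshold is tried. All signal parameters below are invented for the illustration; the real algorithm additionally loops over target defocus parameters.

```python
import math

def gaussian(x, x0, width):
    return math.exp(-((x - x0) ** 2) / (2.0 * width ** 2))

def matched_filter(signal, width, radius=10):
    """Correlate with a normalized Gaussian kernel of the target width."""
    kernel = [gaussian(j, 0.0, width) for j in range(-radius, radius + 1)]
    norm = sum(kernel)
    out = [0.0] * len(signal)
    for i in range(len(signal)):
        acc = 0.0
        for j, k in enumerate(kernel):
            idx = i + j - radius
            if 0 <= idx < len(signal):
                acc += k * signal[idx]
        out[i] = acc / norm
    return out

def extract_images(signal, thresholds, width):
    residual = list(signal)
    extracted = []
    for threshold in thresholds:             # decreasing intensity thresholds
        filtered = matched_filter(residual, width)
        maxima = [i for i in range(1, len(filtered) - 1)
                  if filtered[i] > filtered[i - 1]
                  and filtered[i] >= filtered[i + 1]
                  and filtered[i] > 1e-6]
        for i in sorted(maxima, key=lambda m: -filtered[m]):
            # Refine the amplitude by least squares against the residual.
            profile = [(x, gaussian(x, i, width)) for x in range(len(residual))]
            amp = (sum(residual[x] * g for x, g in profile)
                   / sum(g * g for _, g in profile))
            if amp > threshold:              # accept, subtract, record
                for x, g in profile:
                    residual[x] -= amp * g
                extracted.append((i, amp))
    return extracted

# Toy frame: two "particle images" of width 3 at positions 60 and 140.
signal = [5.0 * gaussian(x, 60, 3.0) + 2.0 * gaussian(x, 140, 3.0)
          for x in range(200)]
print(extract_images(signal, thresholds=[4.0, 1.0], width=3.0))
```

The bright peak passes the first, high threshold and is subtracted; the faint peak is only accepted in the second, lower-threshold pass, mirroring the staged scheme described above.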
The particle images may approximately follow the theoretical intensity profiles derived above, such that for an isolated particle image centered at x0 the light intensity I(x) at x is given by
with r = ||x − x0|| and Φ(r, φ) being the canonical point-spread function of the form
with J0 being a Bessel function of the first kind and zero order. Further, φ is the defocus parameter and dSF is a scaling factor from canonical coordinates to image sensor coordinates. In its simplest form, the intensity profile may be a function of five parameters, since dSF is constant for the given image sensor calibration, namely the background intensity IB, i.e., a local light intensity that would be recorded were the particle not present, a total image intensity IA, i.e., an integral of grayscale intensity over a given image sensor attributable solely to the presence of the particle, an x-image sensor coordinate x0 of a particle image center position, a y-image sensor coordinate y0 of the particle image center position, and the defocus parameter φ, which provides a quantitative measure of how out-of-focus the particle image appears.
However, e.g., thermal gradients, imperfections in the optical system, like a curved mirror surface, or the finite size of the particles may distort the particle image to such a degree that a more complex intensity profile may be required. For example, the following parameter values may be considered in order to take such effects into account: For describing a circular form of the particle image, r = ||x − x0|| may be used. For describing particle images getting elongated and/or squashed along a particular direction, e.g., as a result of a slight curvature of an optical mirror, I(x) = IB + IAΦ(r/dSF, φ) with r² = cxx(x − x0)² + 2cxy(x − x0)(y − y0) + cyy(y − y0)² may be used. The circular image type is a special instance of this elliptic one for cxx = cyy = 1 and cxy = 0. For the elliptic image type, the following two further constraints may be imposed: the distortion may be area preserving, such that cxxcyy − cxy² = 1 holds true. Further, an orientation of an ellipse semi-major axis and an ellipse semi-minor axis may be fixed, such that the ellipse, when rotated by an angle β, which is constant for all particle images on that individual sensor, has a fixed orientation. This is equivalent to cxx = b cos²β + b⁻¹ sin²β, cxy = (b⁻¹ − b) cosβ sinβ, and cyy = b sin²β + b⁻¹ cos²β, where b is a single free parameter describing the ellipse. Alternatively, for particle images getting elongated and/or squashed without a preferred direction, only a single constraint, i.e., area conservation with cxxcyy − cxy² = 1, may be taken into account. In this case, cxx and cxy are the free parameters and cyy = (1 + cxy²)/cxx follows. The most general elliptic form may be provided by I(x) = IB + IAΦ(r/dSF, φ) with r² = cxx(x − x0)² + 2cxy(x − x0)(y − y0) + cyy(y − y0)², in case no additional constraints are taken into account.
For example, the following intensity profile may be used
with r = ||x − x0|| and a being a vector defining a direction and magnitude of an intensity asymmetry. This form may, e.g., be used to approximate the way that a point spread function tends to be brighter on one side of the image center than on the other, due to the effects of the particle's finite size or, alternatively, interference. The vector a determines the direction and magnitude of the resulting intensity asymmetry. Further, yet more complex distortions may, e.g., be taken into account, such as area-preserving, tilted ellipsoidal distortions.
Each particle image may technically cover an entire image sensor, as the intensity decrease away from an image center may be slow and the second moment of the intensity may be infinite. However, for reasons of computation speed, each particle image may be considered to be confined to a relatively small circular region centered at (x0, y0) as defined above. The radius of this region may depend on the image defocus and, e.g., also on the intensity.
For example, the following two kinds of radii may be used: A smaller fitting radius rFIT may be used for computing partial derivatives of the local intensity with respect to the particle image parameters. These derivatives may be used to construct a Jacobian matrix J and the product JTJ may be used in the Levenberg-Marquardt algorithm during optimization. In vector calculus, the Jacobian matrix of a vector-valued function of several variables is the matrix of all its first-order partial derivatives. When this matrix is square, that is, when the function takes the same number of variables as input as the number of vector components of its output, its determinant is referred to as the Jacobian determinant.
As the computation time of the product and the solution of the resulting system of linear equations may, e.g., make up a major fraction of the overall computation time of a tracking algorithm, and since this computation time is in turn dominated by the contributions from the image overlaps, the fitting radius may be required to be as small as possible without having a negative effect on the position accuracy. Therefore, rFIT may be chosen to contain a large proportion of a square intensity gradient of the particle image, which is a determining factor for the theoretical position accuracy. It may, e.g., be set
with rAiry = j1,1dSF ≈ 3.8317dSF being the Airy disk radius and the middle two terms in the above equation being there to account for the finite pixel size. With this rFIT, the fitting area may, e.g., contain at least 80 percent of the total image intensity and at least 92 percent of the total image square intensity gradient.
A larger manipulation radius rADD may be used when subtracting and/or adding back particle images, which may collectively be referred to as image manipulation, before and after each stage of extraction of new images. A larger radius may, e.g., be desirable during image manipulation, since it is not required to compute partial derivatives or a product of Jacobians; only the model intensities have to be computed. Thus, the computation time penalty for doing so may only be mild. On the other hand, subtracting a particle image over a larger region may reduce the likelihood that a false particle image, e.g., a noise artefact, would be extracted from its residual halo. The choice of a formula for rADD may be guided by a desire to encompass all the pixels where the model intensity I(x) − IB is significant compared to a thermal noise level σT of the image sensor. This requirement may result in
Both rFIT and rADD may be required to be at least a given minimum radius and at most rMAX, which, e.g., may either be provided by the user or set to a maximum radius at which the point-source function was interpolated.
Given a particle image center estimate, a first estimate of the image defocus may be provided based on the available intensity profile. A choice may, e.g., be to use some higher moments of the image intensity. However, for m > 1 this integral may diverge, resulting in a limit of m ∈ (0, 1]. For m ∈ (0, 1] convergence may be slow and not sufficiently distinct from the case of m = 0, i.e., the total intensity. A better way may be to subdivide a disk centered at the estimated image center into rings and base the defocus parameter estimate on the relative magnitudes of the intensities summed over the individual rings. A choice of radii separating the rings may, e.g., be the zeros j1,k of J1(r), which are the locations of local maxima and minima of intensity of the underlying, not integrated point-spread function for all φ, except a discrete set of values. For this choice of the ring radii, the results may, e.g., be least sensitive to the particular choice of image center.
The defocus may, e.g., be estimated in one of the following two ways: In a first approach, the intensity summed over the n-th ring Rn may be computed, with the radii separating the rings given by the zeros j1,n of J1, i.e., j1,1 ≈ 3.8317 and so on. Then, ratios q(k) of the contributions from the first k rings may be computed progressively and, if they surpass a predefined threshold value qthr(k), φ may be derived from a locally linear fit to q(k) as a function of φ obtained from the canonical form of the point-spread function Φ. Thresholds qthr(k) = q(k, φk) may be chosen in such a way that the function q(k, φ) is monotonically decreasing as a function of φ for φ > φk by a significant margin. For example, suitable values of the φk may be used.
Alternatively, the ring intensities may be computed and contributions from the first k rings may again be added up progressively. A barycenter xB of the intensity and a 2nd moment of the intensity about the barycenter may be computed. This intensity moment may then be used in a similar fashion to the ratio q(k) in the first approach. It may be computed as a function of φ for the canonical point spread function Φ, linear fits on the relevant parts may be performed, which may then be inverted to obtain a map from the moments to φ. Again, linear fits may be performed on the relevant intervals. The second approach may be advantageous compared to the first approach, as it corrects for a potentially wrongly placed image center and thus may not tend to overestimate the defocus parameter where that may be an issue, e.g., mostly for φ > 10.
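The barycenter-plus-second-moment estimate of the second approach can be sketched on a synthetic, symmetric particle image. For a two-dimensional Gaussian blob of width s (used here merely as a stand-in for a defocused image), the second moment about the barycenter is 2s², and the barycenter recovers the true center even when the initial center guess is off; all numbers are illustrative.

```python
import math

def blob(cx, cy, s, size):
    """Synthetic symmetric particle image (Gaussian of width s)."""
    return [[math.exp(-((x - cx) ** 2 + (y - cy) ** 2) / (2.0 * s * s))
             for x in range(size)] for y in range(size)]

def barycenter_and_moment(image, guess_x, guess_y, radius):
    """Barycenter and 2nd intensity moment within a disk around a guess."""
    total = bx = by = 0.0
    pixels = []
    for y, row in enumerate(image):
        for x, intensity in enumerate(row):
            if (x - guess_x) ** 2 + (y - guess_y) ** 2 <= radius ** 2:
                total += intensity
                bx += intensity * x
                by += intensity * y
                pixels.append((x, y, intensity))
    bx, by = bx / total, by / total
    # Second moment about the barycenter, not about the (possibly wrong) guess.
    m2 = sum(i * ((x - bx) ** 2 + (y - by) ** 2) for x, y, i in pixels) / total
    return (bx, by), m2

image = blob(20.0, 20.0, 3.0, 41)
# Start from a deliberately wrong center guess at (21, 20).
(bx, by), m2 = barycenter_and_moment(image, 21, 20, radius=15)
print(bx, by)  # close to the true center (20, 20)
print(m2)      # close to 2*s^2 = 18 for this synthetic blob
```

Because the moment is taken about the recomputed barycenter rather than the initial guess, a misplaced center does not inflate the moment, which is the advantage of the second approach noted above.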
For example, the execution of the program instructions by the processor further causes the processor to control the analysis unit to filter out noise artefacts from the refined individual particle images by determining for each refined individual particle image a ratio of probabilities that the refined individual particle image is actually a noise artefact and that the refined individual particle image is actually an individual particle image. If the ratio exceeds a predefined threshold, the refined individual particle image is rejected as a noise artefact, else the refined individual particle image is accepted.
Examples may have the beneficial effect, that noise artefacts may be identified and dismissed, such that only real particle images may be taken into account for determining the particle sizes.
After the extracted image parameters have been optimized, it may be decided whether the image really is an image of a particle in view or whether it rather came about by chance fluctuations of intensity noise or interaction of other particle image residuals. For example, a log likelihood ratio test may be used for differentiating between real images and noise artefacts.
Suppose that the standard deviation of grayscale intensity at a pixel with total intensity I is o(I). For example, o2 (I) =
+ cPI may be used. Assuming that the noise is uncorrelated between the different pixels, and that it is normally distributed, a chi-square distribution may be used. It may be assumed that the intensities of all already-extracted images have been subtracted from the image frame intensity profile, and that the current intensity at xn is In = I(xn). If the current fitted image has intensity
may be considered. IB is a fitting constant, i.e., a hypothetical local background intensity. Under the null hypothesis that the image is actually a noise artefact, S0 ~ χ²(N − 1) holds true. N is the number of pixels included in the sum above. Under the alternative hypothesis that the image is real, S1 ~ χ²(N − NP) holds true. NP is the effective number of parameters of the image fit, e.g., NP ≈ 5. However, when the null hypothesis is true, then S1 ~ χ²(N − 2), as there may effectively be only two fitting parameters, IB and IA defined above. Since the probability of observing the intensities In under the null hypothesis is equal to
and under the alternative hypothesis the probability is
the log likelihood ratio between the two probabilities
is (S1 − S0)/2 + const. Given the statistics of S0 and S1, when the null hypothesis is true, ⟨S1 − S0⟩ = (N − 2) − (N − 1) = −1. Thus, the log likelihood ratio is, on average, close to −1/2 under the null hypothesis. The null hypothesis may be rejected, i.e., the particle image accepted as real, if (S0 − S1)/2 > σ0,
where σ0 is a given threshold. For example, the threshold may be selected to be σ0 = 10, which corresponds to a likelihood ratio of exp(−10) ≈ 4.5 × 10⁻⁵.
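The test described above can be sketched in a few lines. This is a minimal illustration rather than the device's implementation: the chi-square sums, the noise model σ(I), and the threshold σ0 = 10 follow the description, while the function name and array-based interface are hypothetical.

```python
import numpy as np

def is_real_particle_image(residual_null, residual_fit, sigma_noise, sigma0=10.0):
    """Log likelihood ratio test for a fitted particle image.

    residual_null : per-pixel residuals I_n - I_B under the null
                    (noise-artefact) hypothesis
    residual_fit  : per-pixel residuals under the particle-image fit
    sigma_noise   : per-pixel noise standard deviations sigma(I_n)
    sigma0        : threshold; sigma0 = 10 corresponds to a likelihood
                    ratio of exp(-10)
    """
    # Chi-square sums S0 (null hypothesis) and S1 (particle-image fit)
    s0 = np.sum((np.asarray(residual_null) / sigma_noise) ** 2)
    s1 = np.sum((np.asarray(residual_fit) / sigma_noise) ** 2)
    # Accept the image as real only if the particle-image fit improves
    # the likelihood by more than the threshold: (S0 - S1)/2 > sigma0
    return (s0 - s1) / 2.0 > sigma0
```

For a pure-noise image the particle-image fit barely improves on the background-only model, so S0 − S1 stays small and the image is rejected as a noise artefact.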
For example, the determining of the incident light intensity lin comprises taking into account variations of the incident light intensity lin by iteratively adjusting the determined incident light intensity lin in different sections of the measurement volume until an observed variation of particle image intensity over particle trajectories of particles passing through the measurement volume matches a variation of the local incident light intensity.
Examples may have the beneficial effect that variations of the incident light intensity lin within the measurement volume may be taken into account.
For example, the iteratively adjusting comprises starting with an initial uniform light intensity profile of the incident light intensity lin, dividing a beam cross-section of the incident light into sections, for each step of the iteration assembling for each of these sections of the beam cross-section statistics of a local relative particle intensity, and updating the light intensity profile using a mean local relative particle intensity determined from the determined statistics.
For example, the assembling of statistics comprises for each particle position along a trajectory through the measurement volume determining a local averaged image intensity over the image data acquired by more than one of the image sensors, determining a trajectory average of the image intensity using an average of the local averaged image intensities over the trajectory, and dividing each particle intensity by the trajectory average of the image intensity resulting in the mean local relative particle intensity.
An illumination profile may, e.g., be obtained based on the following line of reasoning: If effects of Mie scattering are neglected and effects of distance to image sensor are corrected, i.e., closer particles appear brighter, for a given particle size a particle image intensity may be assumed to be proportional to an incident light intensity. Thus, if a mean particle image intensity over each individual particle trajectory of a plurality of particle trajectories is calculated, a ratio of an instantaneous particle image intensity and the mean particle image intensity may be used as an indication of a local incident light intensity of the beam of light at the instantaneous position. The only problem is that, to obtain the mean particle image intensity, the intensity profile of the beam of light, i.e., an illumination profile, should ideally already be known.
In order to solve this problem, the illumination profile may be determined using an iterative approach. Such an iterative approach may assume an initial uniform light intensity profile IL(x) = 1. At an n-th step, a model of the light intensity profile may be described as IL(x) ≈ IL(n)(x⊥), where x = {x, y, z}, x⊥ is the projection of x onto the beam cross-section, and the beam direction is along the vector b = {xb, yb, zb}.
The beam cross-section may be divided into small sections, e.g., squares. The squares may, e.g., have a side length of 0.125 mm. For each section, statistics of the local relative particle intensity may be assembled. After processing a plurality of trajectories, e.g., all the available trajectories, the resulting mean local relative particle intensity may be used as a source value of an updated model of the light intensity profile.
Assembling the statistics may comprise, for each trajectory, adjusting the particle image intensities to account for the angular dependence of the scattered light intensity and the distance to the image sensor. For each particle position along a trajectory, an overall particle intensity may be computed. The overall particle intensity may be computed as a weighted geometric average of the image intensities, while taking into account a differential brightness of the different image sensors. Further, a weighted geometric average of the overall particle intensity over the entire trajectory may be computed and each particle intensity may be divided by the resulting trajectory average to obtain relative intensities. The resulting relative intensities may be recorded by associating them with the corresponding section of the cross-section of the beam based on a three-dimensional position of the particle positions along a trajectory, for which the relative intensities have been calculated.
For this purpose, the determined particle trajectories may be used. If temporal linkage, i.e., tracking, is not available, a simplified, non-iterative approach may, e.g., be used. In this case, the local relative light intensity may be set as a mean particle intensity over all particles at the same position within the laser beam cross-section, i.e., within the same section of the cross-section of the light beam.
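One iteration of the profile update may be sketched as follows. The sketch assumes intensities already corrected for scattering angle and sensor distance, uses arithmetic rather than the weighted geometric averages described above for brevity, and the function names and grid interface are hypothetical.

```python
import numpy as np

def update_profile(trajectories, grid_shape, cell_of):
    """One iteration of the light-intensity-profile estimate.

    trajectories: list of (positions, intensities) pairs, one per particle,
    with intensities corrected for scattering angle and sensor distance.
    grid_shape: shape of the beam cross-section grid.
    cell_of: maps a particle position to its grid-cell index tuple.
    """
    sums = np.zeros(grid_shape)
    counts = np.zeros(grid_shape)
    for positions, intensities in trajectories:
        # Particle size is assumed constant along the trajectory, so the
        # trajectory-mean intensity serves as a per-particle normalization.
        rel = np.asarray(intensities, dtype=float)
        rel = rel / rel.mean()
        for p, r in zip(positions, rel):
            idx = cell_of(p)
            sums[idx] += r
            counts[idx] += 1
    # Mean local relative intensity per cell; unseen cells keep value 1
    return np.where(counts > 0, sums / np.maximum(counts, 1), 1.0)
```

Because each trajectory is normalized by its own mean, particles of different (unknown) sizes contribute consistently to the same relative profile.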
For example, not all particle images may be used, since, e.g., dim images may be unreliable. If they are true images of small particles, they may show a strong dependence of scattered light on the scattering angle. Therefore, a raw intensity threshold may be used. For example, all particle images with a raw intensity below the raw intensity threshold may be excluded. Similarly, all particles with no particle images that pass the raw intensity threshold may be excluded.
Moreover, particle images may be excluded whose sensor position lies too close to the frame boundary of the image frame acquired by the image sensor. The intensity values there may be unreliable. Thus, particle images with sensor positions with a distance from the frame boundary below a minimum threshold may be excluded. Such a minimum threshold may, e.g., be two Airy radii. An Airy radius is a radius of an Airy disk. In optics, the Airy disk is a description of a best-focused spot of light that a perfect lens with a circular aperture can make, limited by the diffraction of light.
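For reference, the Airy radius in the image plane may be estimated as r ≈ 1.22·λ·N·(1 + M), with wavelength λ, f-number N, and magnification M. The formula and the example values below are standard diffraction-limit estimates, not parameters of the device.

```python
def airy_radius(wavelength, f_number, magnification=0.0):
    """Airy-disk radius in the image plane of a diffraction-limited lens."""
    return 1.22 * wavelength * f_number * (1.0 + magnification)
```

For example, airy_radius(532e-9, 8.0) is about 5.2 µm, so a two-Airy-radii exclusion margin would be on the order of ten microns on the sensor.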
To obtain an accurate representation of the illumination profile, an accurate knowledge of the direction of the illumination beam may be required, e.g., within 10⁻³ accuracy. An approach for obtaining the direction of the illumination beam may basically consist of computing the beam profile for two thin slices of the measurement volume.
For determining the direction of the illumination beam, two beam profiles may be computed, each one using only particles in a relatively narrow horizontal slice zlow ≤ z ≤ zhigh of the measurement volume. The slice parameters zlow and zhigh may be selected such that the resulting slice is not too thin, in order to contain enough data to establish an accurate representation of the beam profile. Conversely, the slice parameters zlow and zhigh may be selected such that the resulting slice is not too thick. In case a slice is too thick, fine beam profile features may be blurred in case the beam direction is not yet known with good accuracy.
For example, an iterative approach may be used. As an initial beam direction, a rough estimate of the beam direction may be used, e.g., a perfectly vertical direction; obtaining such a rough estimate is described below. The beam direction may be controlled by adjusting the beam direction in x-, y- and z-direction.
Using a current best estimate of the beam direction, two beam profiles for two horizontal slices through the measurement volume may be defined. For example, a correlation between the two profiles of the two slices may be computed, the resulting correlation profile may be fitted with a Gaussian, and the best Gaussian fit parameters may be output. Thus, x- and y-coordinates of a peak as close to zero as possible may be obtained.
From a distance zdiff between the mid-points of the two slices, a resolution of the beam profiles dx, and a relative shift xs, ys between the two profiles compared to the reference beam direction, as indicated by the Gaussian peak position, a change to the beam direction may be calculated that should improve the beam direction estimate: Δbx = −xs·dx/zdiff and Δby = −ys·dx/zdiff.
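The direction update may be sketched as follows. For brevity, the sketch locates the relative shift between the two slice profiles via intensity centroids instead of the Gaussian-fitted correlation peak described above; function names are hypothetical.

```python
import numpy as np

def beam_direction_correction(profile_low, profile_high, dx, z_diff):
    """Beam-direction update from two horizontal-slice beam profiles.

    dx is the grid resolution of the profiles, z_diff the distance
    between the mid-points of the two slices.
    """
    def centroid(p):
        p = np.asarray(p, dtype=float)
        rows, cols = np.indices(p.shape)
        return (cols * p).sum() / p.sum(), (rows * p).sum() / p.sum()

    x_lo, y_lo = centroid(profile_low)
    x_hi, y_hi = centroid(profile_high)
    xs, ys = x_hi - x_lo, y_hi - y_lo  # relative shift in grid cells
    # Update rule from the text: delta_b = -shift * dx / z_diff
    return -xs * dx / z_diff, -ys * dx / z_diff
```

Iterating this update until the shift vanishes drives the estimated beam direction toward the true one.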
For example, the execution of the program instructions by the processor further causes the processor to control the analysis unit to determine the calibration constant c. The determining of the calibration constant c comprises determining a scattered light intensity lout, a scattering angle θ, and an incident light intensity lin for particles of known diameter dp with the calibration constant c being
Examples may have the beneficial effect, that the calibration constant c may be determined accurately.
In another aspect the invention relates to a computer program product comprising a non-volatile computer-readable storage medium having computer-readable program instructions embodied therewith. The program instructions are executable by a processor of a control unit configured for controlling a measuring device for measuring particle sizes. The measuring device comprises a light source for generating a beam of light for illuminating a measurement volume, and one or more image sensors. Execution of the program instructions by the processor causes the processor to control the measuring device to illuminate the measurement volume with the beam of light and to acquire image data of a measurement volume using the one or more image sensors.
For example, execution of the program instructions by the processor causes the processor for one or more of the particles within the measurement volume to determine a scattered light intensity lout of light of the light source scattered by the respective particle to an individual image sensor of the one or more image sensors using image data from the image data acquired by the individual image sensor, determine a scattering angle θ of the light of the light source being scattered by the respective particle to the individual image sensor with the scattering angle θ being determined relative to a direction of illumination of the measurement volume by the light source, determine an incident light intensity lin of the light of the light source at the position of the respective particle within the measurement volume, determine the size of the particle using a ratio lout/lin of the scattered light intensity lout to the incident light intensity lin and the scattering angle θ.
Brief Description of the Drawings
In the following, embodiments of the invention are described in greater detail in which:
Fig. 1 illustrates a scattering of light by a spherical particle;
Fig. 2 shows an exemplary image of a measurement volume comprising a plurality of particle images;
Fig. 3A shows a calculated scattering matrix element S11 as a function of scattering particle diameter;
Fig. 3B shows a calculated scattering matrix element S11 as a function of scattering angle;
Fig. 4 shows a model drawing of an exemplary measuring device for measuring particle sizes;
Fig. 5 shows a model drawing illustrating an exemplary off-axis observation of a measurement volume containing particles;
Fig. 6 shows a model drawing of an exemplary measuring device for measuring particle sizes;
Fig. 7 shows a diagram of boundaries of an exemplary measurement volume;
Fig. 8 shows a model drawing of a further exemplary measuring device for measuring particle sizes;
Fig. 9 shows a comparison of an observed light intensity, both without and with diffuser, with Lorentz-Mie scattering;
Fig. 10 shows a comparison of an observed light intensity, both without and with diffuser, with Lorentz-Mie scattering as well as a fit of an exponential function for different particle sizes;
Fig. 11 shows an exemplary droplet size distribution inferred with the intensity-based approach;
Fig. 12 shows exemplary relative uncertainties for various ranges of particle sizes;
Fig. 13 shows histograms of a square-root-particle-intensity for particles of different sizes produced with a calibrated aerosol generator in order to determine a calibration constant;
Fig. 14 shows a schematic drawing of an exemplary analysis unit; and
Fig. 15 shows a schematic drawing of an exemplary analysis unit.
In the following, similar elements are denoted by the same reference numerals.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth such as particular architectures, interfaces, techniques, etc., in order to provide a thorough understanding of the examples. However, it will be apparent to those skilled in the art that the disclosed subject matter may be practiced in other illustrative examples that depart from these specific details. In some instances, detailed descriptions of well-known devices are omitted so as not to obscure the description with unnecessary detail.
Embodiments of the device may, e.g., be used for enhancing the functionality of standard particle tracking using high-speed video cameras by carefully deducing the brightness of each of the tracked particles' images. By selecting the relative angle between each camera optical axis and the direction of illumination, and by selecting the camera apertures appropriately, it may be possible to obtain an accurate size estimate of each tracked particle from the brightness of its images.
Embodiments of the device may be used to determine the size of spherical particles in a large volume. The device relies on multiple approaches:
- Given a camera image, reliably determining the amount of light scattered by a particle into said camera, in a way that works for defocused, overlapping particle images.
- Reliably determining a particle's size and size uncertainty based on a measurement of the amount of light scattered.
- Establishing the incident light intensity within the measurement volume from observed particle trajectories.
The combination of these technologies may allow to measure sizes of droplets or other nearly spherical particles spread throughout a volume simultaneously.
Referring now to the drawings, Fig. 1 illustrates light scattering on a spherical particle 102 of diameter dp in the context of embodiments of the device. This particle 102 may, e.g., be a droplet. At a given position x, light of an incident light intensity lin(x) is scattered by an angle θ away from the direction the light would travel if the scattering particle 102 was not there. Thus, the outgoing light appears to an observer with an intensity lout that is proportional to the incident light intensity and varies with the particle diameter dp and the scattering angle θ: With these definitions in mind, embodiments of the device may be useful
for implementing the following approach for determining the particle size dp:
1. Given particle images on several image sensors, e.g., camera sensors, estimate the total contribution of each particle image to the total grayscale level intensity ("image intensity" for short) on the sensor by iterative fitting using a theoretically derived point-spread function. In case differences between the particles arising from their different positions within the measurement volume can be neglected, e.g., for a measurement volume sufficiently small compared to the distance between the measurement volume and the sensors, a single camera sensor may be sufficient. Alternatively, several camera sensors, i.e., two or more camera sensors, may enable a determination of the positions of the particles within the measurement volume via triangulation, e.g., using stereoscopic image data acquired by the two or more camera sensors.
2. The image intensity is proportional to the integral of the scattered light intensity lout over the corresponding camera aperture. Knowledge of the camera aperture diameter may allow to determine the value of lout averaged over the solid angle corresponding to the camera aperture, up to a multiplicative constant.
3. From the particle image positions on several (at least two) camera sensors, deduce the particle position x in the world coordinates using stereoscopic reconstruction, and thus also obtain the angle θ.
4. From the statistics of the particle image intensities conditioned on their position in the world coordinates, deduce the incident light intensity lin(x), again up to a proportionality constant. Alternatively, the incident light intensity lin may be constant within the measurement volume. In this case, no statistics of the particle image intensities may be required for determining the incident light intensity lin, which is rather the same for all particles within the measurement volume.
5. The complicated dependence of lout / lin on dp and θ may be simplified by usage of a diffuser, a sufficiently large aperture and an appropriate placement of the cameras relative to the incident light direction, so that it is sufficiently well approximated by:
where q1 and q2 are known constants. These constants may, e.g., be determined using a fitting of an exponential function to a theoretical prediction of the Mie scattering.
6. The two unknown proportionality constants may be combined into a single one, c, which may be determined experimentally by a calibration using particles 102 with known size.
7. Combining all of the above, knowing the particle position (if necessary) and the intensity of its images, one obtains an expression for the measured image intensity in terms of dp and θ, from which the diameter dp may be obtained.
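The constants q1 and q2 of step 5 may, e.g., be obtained by fitting an exponential function to the theoretical Mie prediction over the relevant angular range. A minimal sketch, under the assumption that the approximation has the form lout/lin ≈ q1·dp²·exp(−q2·θ) — this functional form is an assumption consistent with dp = c·sqrt(I) and the exponential decrease noted for the 25° to 35° range, not stated explicitly in the source:

```python
import numpy as np

def fit_exponential(theta, intensity):
    """Least-squares fit of log(intensity) = log(q1) - q2 * theta.

    theta: scattering angles (radians) in the well-behaved range,
    e.g., corresponding to 25 deg to 35 deg; intensity: Mie-predicted
    (aperture-averaged) scattered intensities for a fixed dp.
    Returns (q1, q2) of the model intensity ~ q1 * exp(-q2 * theta).
    """
    theta = np.asarray(theta, dtype=float)
    # Linear system: [1, -theta] @ [log(q1), q2] = log(intensity)
    A = np.vstack([np.ones_like(theta), -theta]).T
    (log_q1, q2), *_ = np.linalg.lstsq(A, np.log(intensity), rcond=None)
    return np.exp(log_q1), q2
```

Fitting in log space turns the exponential model into a linear least-squares problem, which is robust and has a closed-form solution.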
The amount of light that a particle scatters onto a camera sensor can be determined from the camera images. Usually this is done by summing the intensity of the pixels that make up the particle's image. However, if the particle seeding density is sufficiently high to make particle images' overlap likely, any given pixel cannot be uniquely ascribed to a single particle image,
and this approach breaks down. If particles are far from the camera's plane of focus, their images become larger, which greatly increases the probability that they overlap. This approach therefore is not expected to work for volumetric measurements, even for moderate particle number densities.
The approach employed by embodiments of the device uses a physics-based mathematical model of the particle image that would result from a point source observed by a camera fitted with an ideal lens and a finite aperture. The model has various parameters, one of which is the total intensity of the particle image. For each particle image the model is instantiated once, and the models' parameters are optimized simultaneously for all particle images on the sensor.
The incident light intensity within the measurement volume can be derived from the statistics of the image intensities. It can be assumed that the incident light intensity does not vary over time or along the direction of the illumination. A further assumption is that the scattered light intensity is roughly proportional to the incident light intensity via the relationship expressed by the equation above. Finally, it may be assumed that the individual particle size
does not change significantly during its traverse of the measurement volume. Then, the model relative incident light intensity in the various parts of the measurement volume is iteratively adjusted until the observed variation of particle image intensity over each particle trajectory best matches the modelled variation of the local incident light intensity.
The particle diameter dp may be determined as dp = c · sqrt(I), with I being a weighted average of the particle's intensities as seen by the cameras, and c a calibration constant. The size uncertainty is a function of the particle size that derives from Mie scattering theory and takes into account the light source spectrum and diffusivity, aperture sizes, and camera spectral sensitivity.
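A minimal sketch of this sizing step, using a plain weighted average of the per-camera intensities (the description elsewhere also mentions weighted geometric averaging; the interface here is hypothetical):

```python
import numpy as np

def particle_diameter(intensities, weights, c):
    """dp = c * sqrt(I), with I a weighted average of the particle's
    image intensities as seen by the individual cameras."""
    I = np.average(np.asarray(intensities, dtype=float), weights=weights)
    return c * np.sqrt(I)
```

The weights may, e.g., reflect the differential brightness of the individual image sensors, and c is the experimentally determined calibration constant.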
Fig. 2 shows exemplary image data 112 of a measurement volume acquired with an imaging sensor. The image data 112 comprises a plurality of particle images, i.e., image data of particles 102 present in the measurement volume. This image data of the particles 102 is given in form of greyscale level intensity distributions acquired by pixels of the image sensor.
The image data of the particles 102 results from a scattered light intensity lout of light of a light source scattered by the respective particles 102 in the measurement volume to the respective image sensor.
Figs. 3A and 3B show the behavior of the scattering matrix element S11 as a function of the particle diameter dp (Fig. 3A) and the scattering angle θ (Fig. 3B), respectively. The values have been computed for a spherical particle numerically using the popular BHMIE code for unpolarized light. As can be seen in Fig. 3A, S11 grows generally proportional to dp², but also oscillates strongly and has a fine structure. The intensity ratio between peaks and valleys is about 2.5, as can be estimated from the inset. In Fig. 3B, it can be seen that even though in general S11 is a very irregular function of θ, in the range from 25° to 35° it is slightly better behaved: it decreases more or less exponentially, and "only" oscillates strongly.
Figs. 3 to 6 illustrate the design and functioning of possible embodiments of the device. The device consists of a light source, a measurement volume, a set of cameras, and a control unit. The light source may be a pulsed laser or a flashlamp, the light of which is manipulated into a thick, collimated, slightly diffuse beam that passes through the measurement volume. The cameras are aimed at the measurement volume and are arranged such that the scattering angles with respect to the beam vary by as much as the apertures' subtended angles. Such an arrangement increases the total effective aperture radius, which helps smooth out the dependence of lout / lin on dp and θ.
The device uses software that allows accurate tracking of individual particles' positions in three dimensions and, provided that the particles are sufficiently spherical and smooth, also allows to deduce each particle size. Primarily, the device is conceived to simultaneously measure the size and position of small liquid droplets. However, it can be used also with other kinds of particles as long as they scatter light in such a way that knowing the scattering angle and the intensity of light scattered in that angle allows to infer the particle size with sufficient accuracy. This precludes particles for which the scattered light intensity heavily depends on their orientation, such as strongly elongated ellipsoidal particles or particles with significant surface irregularities or reflectivity variations.
Fig. 4 shows an overview of a prototype of a measuring device 100 for measuring particle sizes that is built for operation at a research station at the Schneefernerhaus, Mount Zugspitze, Germany. The setup shown in Fig. 4 is an open-frame design that minimizes flow resistance under open-air conditions and may be mounted on rails and be equipped with a linear motor to allow for wind-synchronous observations of cloud-borne water droplets in situ. An alternative design is shown in Fig. 8, where all components are mounted in an encased frame. While the setup of Fig. 4 may be more suitable for separate assembly and calibration by the user or OEM, the setup of Fig. 8 may simplify factory calibration.
The exemplary measuring device 100 comprises a light source 114 mounted on an open frame configured to illuminate a measurement volume 104 below the light source 114 with a beam of light 116. The light source 114 may, e.g., be a pulsed laser or a flashlamp. The light source may, e.g., be equipped with a collimator 124 and/or a diffuser 126. Using a collimator 124, the light source may be enabled to generate a collimated beam of light. The diffuser 126 may be used for diffusing the collimated beam of light. A plurality of two or more image sensors 128, in case of Fig. 4 there are three image sensors 128 in form of three cameras, may be arranged at different positions relative to the measurement volume 104. The image sensors 128 are arranged within a box 152. Furthermore, the measuring device comprises a control unit 11, e.g., arranged within the box 152. The control unit 11 comprises a processor and a memory storing machine-executable program instructions. Execution of the program instructions by the processor causes the processor to control the measuring device 100 to illuminate the measurement volume 104 with the beam of light 116 and to acquire stereoscopic image data of the measurement volume 104 using the image sensors 128.
Camera box
Fig. 4 illustrates a first exemplary implementation of the measuring device, which may in the following alternatively be referred to as "the box" for simplicity. It consists of a vibration-damped aluminum box housing the high-speed cameras used for Lagrangian particle tracking. The box has been designed in such a way as to minimize its total weight and cross-sectional area exposed to the particle-bearing fluid while being extremely rigid and able to fit three cameras (an exemplary model being Vision Research v2511) with their corresponding optics. Further streamlining of the box would have added a lot of complexity in terms of fabrication, usability, and serviceability, but if the current shape of the box proves to be problematic, lightweight streamlining elements made out of, e.g., polystyrene foam can be added.
The optical system of the measuring device shown in Fig. 4 has been designed using the optional feature of off-axis observation of the measurement volume. The box's main components are 24 aluminum parts providing a rigid skeleton, three window sub-assemblies through which the cameras observe the measurement volume off-axis, and six transparent polycarbonate plates that provide visual and manual access to the box in the case of malfunction. Its external dimensions are 930 x 720 x 360 mm³, and internally, the box is subdivided into three upper sections containing one camera each and three lower sections, which contain the camera power supplies, Ethernet and trigger cables, cooling hoses, a control unit (e.g., based on Arduino), and temperature, humidity, and acceleration sensors.
Originally conceived for the Schneefernerhaus cloud droplet observing experiment referred to above, the maximum weight of the camera box is, in principle, limited by the maximum payload that can be moved by the motor of the seesaw unit used for mean wind compensation (see Fig. 6). However, more restrictive practical limits were imposed by the need to carry the box up to the roof manually through a narrow staircase, which allowed only two people to carry the box at a time. For this reason, the weight of the box without cameras or the beam expander (see below) was kept below 60 kg. Even so, should the movable table crash into the emergency shock absorbers, very large forces would be generated both at the braces keeping the box above the table and at the pillars fixing the seesaw to the roof. This leads to the current orientation of the box with its longest side parallel to the direction of motion. Having the box stand upright with its longest side vertical was also considered; however, in such an orientation, the torque applied on the table during a crash into the emergency shock absorbers would put too large a strain on the system.
Fig. 5 shows an exemplary setup for an off-optical-axis observation implemented, e.g., by the measuring device 100 of Fig. 4. The off-optical-axis observation of the measurement volume may be implemented via a mirror 134 and through a tube 138. An orientation of the tube 138 defines a view axis under which the respective image sensor observes the measurement volume via the mirror 134 and the tube 138. The image sensor may be equipped with an objective 108 mounted on an objective holder 110. An inclination of the mirror 134 may be adjustable using a mirror adjustment device 136. By adjusting the inclination of the mirror 134, the view axis may be adjusted. The tube 138 comprises a first end 140 and a second end 142. The first end 140 is a distal end relative to the mirror 134. The second end 142 is a proximal end relative to the mirror 134. The tube further may comprise a window 144 arranged at the second end 142 of the tube 138. For example, the tube 138 may further comprise a ventilation system with an air inlet 146 and an air outlet 148 configured for injecting a stream of air via the air inlet 146 into the tube 138 and onto the window 144 and for sucking out the stream of air via the air outlet 148 in order to remove with the stream of air liquid that may have collected on the window 144. Furthermore, the tube 138 may comprise a movable lid 150 arranged at the first end 140 of the tube 138 and configured for opening and closing the tube 138 at the first end 140. As illustrated in Fig. 5, the first end 140 of the tube 138 may be cut vertically relative to a horizontal plane, such that, e.g., rain falling vertically is prevented from entering the tube 138. The second end 142 of the tube 138 may, e.g., be cut perpendicularly to a longitudinal axis of the tube 138.
One may apply a two-tier mechanism designed to lower the risk of a water droplet or any other liquid residing in the path of optical access, thus deteriorating the quality of the images obtained. Each window is mounted at the lower end of a short tube (see Fig. 5) with an inner diameter of 24 mm, just large enough not to interfere with any of the light rays between the camera apertures and the measurement volume. The tubes are oriented along the line of sight of the cameras, which is at 30 degrees with respect to the illumination light direction, which coincides with the vertical in Figs. 4 and 5. The upper ends of the tubes are cut vertically to decrease the chance of liquid entering the tube and are capped by movable lids. The lids may be triggered so as to open only for the duration of each video capture and close during the much longer duration of the data transfer. Moreover, a powerful stream of air may be injected at the higher end of each window to remove any droplets that might still make their way through the tube. At the lower end of each window, the air and any liquid that collects there may be sucked out and removed from the camera box. To aid the water removal process, the windows may be coated with a hydrophobic coating.
The box may be supplied with a steady small stream of dry air to prevent water condensation on the underside of the windows. Moreover, the box may be temperature controlled. If the temperatures within the box stay within the operation range of the cameras, which may be typically in the range from 10 to 50 °C, no cooling or heating may be required for operation. In order to prevent exceeding the upper operational temperature limit of the cameras, there may be cooling channels running through the camera base plates, which can also contribute to bringing the inside temperature down. The temperature fluctuations may affect the camera calibration, but the changes are usually small enough to be removed by self-calibration without any adverse effects on the data quality.
If desired, additional instruments may be mounted immediately next to or downstream of the measurement volume. Small instruments, such as fast temperature and humidity probes, can be mounted in between the vertical supports shown in Fig. 4. Larger instruments can be mounted on any structure supporting the box, e.g., the seesaw's table. It must be noted that if additional instruments are mounted directly downstream of the measurement volume, they must be removed and remounted every time the fluid stream direction reverses; otherwise, they may disturb the flow into the measurement volume.
Vibration damping
Moving the LPT setup over rails, as is done in the Schneefernerhaus experiment, causes vibrations that can be detrimental to particle tracking. To prevent these vibrations from reaching the camera box, the box may be suspended on four extension springs with a spring constant of 6.07 N/mm and an unstressed length of 149 mm (model example Z-1951 made by Gutekunst Federn). The height of the spring attachments to the box can be adjusted to be at the height of the box's center of mass, which prevents the box from tilting while it accelerates into, or decelerates out of, the frame of reference moving synchronously with the fluid stream. To limit the amplitude of the swing of the camera box during the acceleration and deceleration phases, the box may be further constrained by six pairs of rubber buffers. The buffers' height can be adjusted so that they are level with the box's center of mass.
Alternatively or additionally, a partially active approach may be useful where the box is clamped during the acceleration phase and then released to be held only by the springs during the constant velocity phase.
Control and data acquisition
The experiment may be controlled by a computer or computer cluster. In the example of the Schneefernerhaus experiment, a cluster is used that consists of a main node, six compute nodes, and two storage nodes, each of which is connected to 10 Gbit Ethernet switches and contains 35 TB of storage. The cameras are also connected to the 10 Gbit network, through which they are controlled and read out. The main node runs a Python code that controls and triggers the cameras, controls the box's window shutters, monitors the box's environmental parameters, and controls the light source. After an experiment, the compute nodes download videos from the cameras to their internal hard drives. This typically takes 40 s, which is the limiting factor for the repetition rate of the measurement. The videos are then copied to both storage nodes. After a measurement campaign, the disks from the second storage node are taken out and transported to the Max Planck Institute for Dynamics and Self-Organization, Göttingen, where data analysis takes place.
Optics
The ideal particles for particle tracking are so small that the multiple glare points of a single particle overlap on the camera sensors. The distance between the glare points is proportional to the magnification and the particle size, and since it may be impossible to control the size of the particles to be observed, it may be advisable to use an optical system with a small magnification. On the other hand, if one is interested in seeing, e.g., droplet pairs at distances where they nearly touch each other, a large magnification is required. In the example of the Schneefernerhaus experiment, a magnification of 28 μm per pixel was chosen, which means that, if diffraction effects are ignored, a typical cloud droplet is of the order of 1 pixel on the sensor. The Airy disk at the laser's wavelength is calculated to be 3.2 pixel in diameter, so the particle tracking data do not suffer from peak locking. To provide the desired level of magnification, each camera may be equipped with two 2× teleconverters and a 200 mm objective. To prevent sagging and any lateral motion of the objectives during box movement,
each objective is supported on one end by a finely adjustable clamp ring and on the other end by three plastic adjustment screws. The cameras are also firmly bolted to the base plates underneath them.
To adjust the viewing angle of the cameras, each upper section also contains an 85 x 60 mm2 mirror housed within a custom-built mirror holder (see Fig. 5). The holder can be rotated along a vertical axis, and the mirror tilt can be finely adjusted using a fine-threaded adjustment screw. Once the desired rotation and tilt of the mirror are reached, this orientation can be kept in place by tightening a set of fixing bolts. The mirror adjustment mechanism can likewise be implemented for manual or automated adjustment of the screws.
The resulting measurement volume is defined as the set of points visible by all cameras in the particle-tracking setup. In the example of the Schneefernerhaus experiment, the measurement volume is diamond-shaped and measures about 16.6 cm3; see Fig. 7. If one allows particles to be triangulated by using fewer than three cameras, the usable volume may be much larger; then, however, in a large portion of it the particles may be so much out of focus that they would be hard to detect and their positions would be obtained with large uncertainty.
Illumination
The requirement that a typical particle residence time be on the order of a Kolmogorov time, whose most likely value is 30 ms, together with the fluctuating velocity of roughly 2 meters per second typical for cloud droplet observations in the exemplary Schneefernerhaus experiment, led to the choice of a rather large measurement volume with a largest horizontal diameter of around 35 mm.
Achieving a depth of field comparable with the dimensions of the measurement volume, while simultaneously maintaining acceptable levels of the signal-to-noise ratio across the measurement volume even at the lower end of the particle size range of interest, requires a powerful source of light. The exemplary Schneefernerhaus experiment uses a Trumpf TruMicro 7240 laser with a wavelength of 515 nm, a maximum pulse energy of 7.5 mJ, and a pulse length of 300 ns. Although at higher repetition rates the laser can achieve a light power output of 300
W, at lower repetition rates it is limited by the maximum pulse energy. For an exemplary sampling rate of 10 kHz, the light power output is effectively 75 W. It should be noted that other light sources, including non-coherent light sources, may be used depending on the deployment objective of the measuring device.
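The interplay between pulse-energy and average-power limits quoted above can be checked with a short sketch (a minimal illustration using only the example values from the text; the function name is an assumption, not any laser API):

```python
# Average optical power of a pulsed laser limited either by per-pulse energy
# (at low repetition rates) or by the maximum average power (at high rates).
MAX_PULSE_ENERGY_J = 7.5e-3   # 7.5 mJ, example value from the text
MAX_AVG_POWER_W = 300.0       # reachable only at high repetition rates

def avg_power(rep_rate_hz: float) -> float:
    """Effective average power at a given repetition rate."""
    return min(rep_rate_hz * MAX_PULSE_ENERGY_J, MAX_AVG_POWER_W)

print(avg_power(10e3))  # 10 kHz: pulse-energy limited, about 75 W
print(avg_power(50e3))  # 50 kHz: average-power limited, 300 W
```

At the 10 kHz sampling rate of the example, the device thus operates far below the laser's maximum average power.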
In the example of the Schneefernerhaus experiment, the beam coming out of the laser fiber first passes through a diverging lens in order to reach the desired beam diameter within the small amount of space available in the beam expander. It then passes through a converging lens, which makes it nearly parallel with a diameter of 39 mm. This diameter fits the measurement volume well, i.e., only little light is wasted illuminating particles outside the measurement volume. The lenses were chosen such that their spherical aberrations expand the beam's center, while compressing its edges, which may help to illuminate the measurement volume more uniformly. The beam finally passes through an optical diffuser, the function of which is to smooth the dependence of the intensity of scattered light on the scattering angle and thus simplify the process by which the particle size is deduced. As the beam leaves the beam expander, it is clipped from both sides, so it becomes narrower in the x-direction. By doing so, it fits the shape of the measurement volume better; in particular, the volume that is illuminated but not seen by all cameras is reduced.
Intensity data describing the dependence of the illumination beam intensity on the location within its cross section inside the measurement volume may be obtained from the tracking data by comparing the instantaneous particle intensity with its mean value over the whole trajectory. In this case, data are available only for positions within view of at least two cameras, where triangulation is possible. In the example of the Schneefernerhaus experiment, the beam profile is nearly flat, with a slight asymmetry most likely due to a slight offset of the laser head from the optical axis of the other optics, and a smooth decrease in intensity at the edges due to the diffuser.
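The described profile-recovery procedure can be sketched as follows. This is a hedged illustration only: the trajectory container format `(x_mm, y_mm, intensity)` and the function name are assumptions for the sketch, not part of the disclosed device. Dividing each sample's brightness by its trajectory mean removes the unknown per-particle scattering cross section, leaving the positional dependence of the illumination.

```python
import math
from collections import defaultdict

def beam_profile(tracks, bin_mm=1.0):
    """Estimate the relative illumination intensity across the beam cross section.

    tracks: list of trajectories, each a list of (x_mm, y_mm, intensity)
    samples (hypothetical format). Returns mean normalized intensity per
    spatial bin of side bin_mm.
    """
    sums = defaultdict(float)
    counts = defaultdict(int)
    for track in tracks:
        mean_i = sum(s[2] for s in track) / len(track)  # per-trajectory mean
        for x, y, inten in track:
            key = (math.floor(x / bin_mm), math.floor(y / bin_mm))
            sums[key] += inten / mean_i                 # normalize out particle size
            counts[key] += 1
    return {k: sums[k] / counts[k] for k in sums}

# A particle crossing a perfectly flat beam yields ~1.0 in every bin:
flat = beam_profile([[(0.2, 0.0, 80.0), (1.4, 0.0, 80.0), (2.6, 0.0, 80.0)]])
print(flat)  # {(0, 0): 1.0, (1, 0): 1.0, (2, 0): 1.0}
```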
Camera positions and viewing angles
If the amount of light is a limiting factor, it is advisable to choose the geometry of the camera orientations with respect to the laser beam in order to obtain the most favorable image quality while keeping the camera aperture diameter constant. It has to be noted that the number of
cameras onboard the measuring device does not necessarily have to be three; rather, a different multiplicity of cameras such as two, four, or yet a larger number of cameras may be deployed as well. One parameter that sets limits on the maximum tractable seeding density and the uncertainty of the triangulated particles' three-dimensional position is the mean variance of the positioning error of particle images on the sensor,
σ_px² ∝ σ_n² / I² for a given image, where σ_n² is the total variance of camera thermal and shot noise and I is the total intensity of the given particle image on the sensor. In the present case, the dominant source of noise is usually the shot noise, so σ_n² ∝ I and hence σ_px² ∝ 1 / I. Thus, for fixed aperture size and camera orientation, σ_px² ∝ 1 / I(θ), where I(θ) is the image intensity at scattering angle θ discussed next.
The image intensity is sensitively dependent on the scattering angle, which is the angle between the illumination beam direction and the camera viewing axis (see Fig. 3B). It should be noted that the viewing axis of any camera toward the measurement volume may be different from the actual optical axis of the camera sensor (e.g., defined by the optical axis of the objective and / or a normal vector of the camera sensor plane), which is true especially for any off-axis observations, such as that shown in Fig. 5, where the light emerging from the measurement volume, traveling along said camera viewing axis defined by the center of the feed-through tube, is reflected toward the camera objective by a mirror. Although the dependence of the image intensity on the scattering angle is non-monotonic for water droplets, due to constructive and destructive interference between the refracted and reflected beams that reach the aperture and due to the use of a diffuser and a finite aperture size, these variations are mostly smoothed out. The dependence of the scattered light intensity on the scattering angle θ can then roughly be modelled as exponential: I(θ) ~ exp(−c_θ θ) with c_θ ≈ 3.55 rad⁻¹, so that the intensity roughly halves with an increase in θ by 11°. The rate constant c_θ was obtained from a fit to Mie scattering curves.
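The exponential model can be checked numerically; the sketch below uses only the rate constant c_θ ≈ 3.55 rad⁻¹ quoted above and confirms that the intensity roughly halves for every 11° increase in scattering angle:

```python
import math

C_THETA = 3.55  # rad^-1, fitted to Mie scattering curves (value from the text)

def relative_intensity(theta_deg: float) -> float:
    """Scattered intensity relative to theta = 0: I(theta) ~ exp(-c_theta * theta)."""
    return math.exp(-C_THETA * math.radians(theta_deg))

# Ratio of intensities 11 degrees apart should be close to 2:
ratio = relative_intensity(30.0) / relative_intensity(41.0)
print(round(ratio, 2))  # 1.98
```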
While making the scattering angle as small as possible will generally keep the particle image intensities large, it is not optimal for minimizing the triangulation error due to the resulting geometry of the camera orientations: when all camera view axes are at a small angle with respect to the beam, they are also necessarily at a small angle to each other, and a small change in the particle image position on a given sensor can lead to a large shift in the component of the triangulated 3D position along the laser beam direction. This geometric
effect can be modelled by assuming that the cameras are placed symmetrically with respect to the laser beam, at equal distances from the measurement volume, so that the angle between the view axes of any camera pair is the same and each camera view axis is at an angle of θ to the laser beam. It can be shown that the components of the three-dimensional particle position error variance are σ_x² = σ_y² = 2 σ_px² / [3 (1 + cos²θ)] and σ_z² = σ_px² / (3 sin²θ). Here, the z direction points along the illumination light beam. Thus, while small θ decreases the variance of the position error components perpendicular to the laser beam, in the direction along the beam, small θ leads to a catastrophic amplification of the error.
Combining the effects of the camera orientation geometry and the scattered light intensity, one obtains σ_z² ∝ exp(c_θ θ) / sin²θ, and analogously for the components perpendicular to the beam. Optimizing the total variance σ_x² + σ_y² + σ_z² then yields the best compromise between image brightness and triangulation geometry. In the case of the exemplary Schneefernerhaus experiment, there is a particular interest in high accuracy in the vertical direction in order to accurately measure settling effects and the vertically-conditioned radial distribution function. As, there, the laser beam points nearly vertically, one can optimize just for σ_z², which leads to an optimal angle of θ = 29.4°. In the setup shown in Figs.
3 and 4, the geometry of the camera box was designed for intended camera view axes at 30° to the vertical. In reality, due to a slightly asymmetric placement of the measurement volume above the camera box, the actual view axes are at (30.4 ± 0.4)° to the vertical. It should be noted that the considerations of position error variance apply only to particles with image intensity low enough to not saturate the sensor pixels. Again in the example, the smallest particle that can possibly lead to saturation (grey scale level of near 4096) when perfectly in focus has a diameter of ~23 μm. This number was obtained using particle sizing as described herein, combined with a model for a particle's point spread function. For larger particles, the optimum angle would be closer to the one obtained by geometric considerations only, which is 54.7° (camera view axes perpendicular to each other).
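Under the symmetry assumptions stated above, both quoted optimal angles can be reproduced by a brute-force scan. The variance expressions below follow the scalings discussed in the text, i.e., a shot-noise factor exp(c_θ θ) for the intensity-limited case and the geometric factors for three symmetric cameras; the constant prefactors are immaterial for locating the minima:

```python
import math

C_THETA = 3.55  # rad^-1, scattering-intensity rate constant from the text

def var_z(theta):
    """Along-beam position error variance, up to a constant:
    shot-noise factor exp(c*theta) times the geometric factor 1/sin^2(theta)."""
    return math.exp(C_THETA * theta) / math.sin(theta) ** 2

def var_total_geometric(theta):
    """Geometry-only total variance for three symmetric cameras, up to a
    constant: sum of the two in-plane components and the along-beam one."""
    return 4.0 / (1.0 + math.cos(theta) ** 2) + 1.0 / math.sin(theta) ** 2

def argmin(f, lo=0.01, hi=1.55, n=20000):
    """Brute-force minimizer over [lo, hi] radians (about 0.6 to 89 degrees)."""
    best = min(range(n + 1), key=lambda i: f(lo + (hi - lo) * i / n))
    return lo + (hi - lo) * best / n

print(round(math.degrees(argmin(var_z)), 1))                # 29.4
print(round(math.degrees(argmin(var_total_geometric)), 1))  # 54.7
```

The first minimum also follows analytically from d/dθ [exp(c_θ θ)/sin²θ] = 0, i.e., tan θ = 2/c_θ ≈ 0.563, giving θ ≈ 29.4°.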
Particle tracking algorithm
The particle tracking algorithm was inspired by the Shake-the-Box (STB) algorithm developed by Schanz, Gesemann, and Schroder and heavily adapted to the specific needs and challenges of the present measuring device. A comprehensive description of the algorithm is beyond the
scope of this work and is presented in a different publication. Here, its main features and differences to the standard version of STB are only briefly outlined.
The most salient features of STB that may be useful are the progressive subtraction of already tracked objects and iterative improvements of their fitted parameters. These are required for successful tracking of particles in the setup of Fig. 4 for reasons explained below. Due to the large measurement volume chosen for the aforementioned reasons, one can have up to 10⁴ tracked particles in the measurement volume at a single time. Although 10⁴ particles, which (in the example of the Schneefernerhaus experiment) translates to 0.01 particles per pixel, is not very high by modern standards, the actual number of tracked images on each sensor is up to 2.5 times as high due to the shallow view angle with respect to the illumination beam as shown in Fig. 4. Moreover, due to the short depth of field compared to the size of the measurement volume, an inevitable consequence of working with the given amount of illumination, many of the images are strongly out of focus. Thus, the total area of all the tracked images may be several times higher than the total sensor area. Since the images heavily overlap and do not necessarily achieve their peak intensity at their center like well-focused images do, finding and fitting the images is a difficult and computationally expensive task. Without having a good model for the point spread function, subtracting the intensity profiles of already tracked particles, and performing several iterations of fitting the image parameters, it seems impossible to achieve the high yield necessary to study particle or droplet clustering.
The suggested tracking algorithm may be implemented by the analysis unit described herein and comprises the following items that may be performed in the stated order at each time instance:
1. Each video frame is pre-processed in order to most closely correspond to the light intensity that an ideal sensor would read out. This item consists of subtracting background intensity, correcting hardware artifacts such as image lag (where intensity values on the current frame are affected by their values on the previous frame), and correcting for smudges or scratches on the sensor and optical elements.
2. Existing 2D and 3D trajectories are extrapolated to the current time and their expected intensity profiles created on each sensor and optimized using the Levenberg-Marquardt
algorithm with a fixed damping factor. The goal of the optimization is to minimize the difference between the fitted and actual intensity profiles on each sensor. Unlike the standard STB, where each object is optimized separately while the others are kept fixed, here all particle images on each single sensor may be optimized simultaneously. This is only possible to do in reasonable time when the properties of the particle images are independent across the sensors, another major difference to standard STB where the image locations are projections of a common 3D position. An additional reason not to tie the image locations to the particle position in the world coordinates is the detrimental effect of thermal gradients on the accuracy of the projection map, as explained further below.
3. The optimized particle images obtained in the previous item are subtracted, and each sensor is searched for new images. This item proceeds in several rounds, where in each round, a search for images within a certain narrow interval of defocus is conducted, starting with the well-focused ones. In each round, the frame is first filtered using a Gaussian filter of standard deviation matching best the images within this range of defocus. Local maxima of the filtered intensity are used as the initial guesses for new image locations. The image properties are optimized in the same manner as in the previous item.
4. Both the extrapolated and the new images are combined into one set and again optimized several times in the same manner as above.
5. Using the images available at this time, one obtains candidate 3D positions of the particles using triangulation. The triangulation takes into account not only the image locations but also their level of defocus and their intensity. This makes it possible to reliably triangulate using just two cameras and to compensate for the lower position accuracy of the defocused images. Each triangulated point is assigned a likelihood of being "real," i.e., being triangulated from images that indeed correspond to the same particle.
6. The existing 2D/3D trajectories are extended to the current time instance by the addition of optimized images / triangulated points that lie close to their extrapolated position. Each of the trajectories consists of a single tail and possibly multiple heads. The tail is made out of objects from the distant past for which confidence is given that they belong to this trajectory.
Each head is made out of recent objects, for which the certainty level is much lower, as they could be ghosts or belong to another trajectory.
7. Trim trajectory heads, calculating a likelihood of each trajectory head being real based on its smoothness and on the product of likelihoods of each object being real, as calculated above. Heads with low likelihood are deleted. Further trimming is achieved by ensuring that older objects are not used in more than one trajectory. Tracks with no remaining heads are deleted, but those of sufficient length are saved first.
8. Initiate new trajectories using those objects from the two most recent time instances which are not already part of a sufficiently long trajectory. Only such pairs of objects are used, which could possibly correspond to a particle moving at a velocity similar to the mean velocity of all particles in the measurement volume.
9. Update (self-calibrate) the camera models, defocus map (map from world coordinates to the level of image defocus), mean velocity, and other auxiliary variables using the current ends of sufficiently long trajectories.
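For orientation, the nine items above can be summarized as a structural sketch of the per-frame loop. All class and method names below are hypothetical stand-ins and the method bodies are stubs; the sketch only fixes the order and data flow of the processing items, not any actual implementation:

```python
class FrameTracker:
    """Structural sketch of the per-frame tracking loop (items 1-9 above)."""

    def __init__(self):
        self.trajectories = []  # each with one tail and possibly several heads

    def preprocess(self, frame):                       # item 1: background, lag, smudges
        return frame

    def extrapolate_and_optimize(self, frames):        # item 2: Levenberg-Marquardt fit
        return []                                      # of extrapolated image profiles

    def subtract_and_search(self, frames, predicted):  # item 3: rounds by defocus level
        return []

    def joint_optimize(self, frames, images):          # item 4: re-optimize combined set
        return images

    def triangulate(self, images):                     # item 5: defocus-aware, >= 2 cameras
        return []

    def extend_and_trim(self, images, points):         # items 6-7: heads/tails bookkeeping
        pass

    def initiate_and_calibrate(self, points):          # items 8-9: new tracks, self-calib.
        pass

    def process_frame(self, raw_frames):
        frames = [self.preprocess(f) for f in raw_frames]
        predicted = self.extrapolate_and_optimize(frames)
        new = self.subtract_and_search(frames, predicted)
        images = self.joint_optimize(frames, predicted + new)
        points = self.triangulate(images)
        self.extend_and_trim(images, points)
        self.initiate_and_calibrate(points)
        return images, points
```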
The particle tracking algorithm described above may address some or all of the challenges posed by a harsh work environment such as the weather exposure at the exemplary Schneefernerhaus experiment referred to above:
- Large measurement volume: The fact that most particle images are not in focus may have a detrimental effect on the position accuracy of the tracking. However, an accuracy of only a few micrometers may still be achieved, which may be sufficient for many practical purposes, and thanks to the way of extracting particle images disclosed herein, it may still be possible to achieve very high yield throughout the measurement volume. By using image defocus as an additional parameter, it may be possible to triangulate using only two cameras without having many ghosts.
- Large mean flow: The particles may stay within the measurement volume for only a few Kolmogorov times or less, which poses an additional challenge when it comes to choosing between alternative triangulations or temporal links. It may be important to make the right choice from the beginning as there may not be enough time left for
initiating a new correct track before the particle leaves the measurement volume again. Using the strategy of having multiple trajectory heads, this risk may be minimized. As an additional benefit, this way it may be possible to reduce the sampling rate to as low as 5 kHz without adverse effects on the temporal-linkage reliability. Doing so may increase the amount of trajectory statistics collected within the limited measurement time provided by the fluid stream conditions since a primary limitation may be given by the data transfer rate between the cameras and the control unit and / or the analysis unit (e.g., an appropriately programmed computer or computing cluster).
- Vibrations and thermal gradients: By self-calibrating at each time instance, it may be possible to effectively cancel the effects of vibrations caused by the movement of the camera box or camera fans. Thermal gradients, which may arise due to thermal convection within and above the camera box, may be partially compensated for in this manner, as they cause apparent particle image shifts that are non-uniform over the sensors. As a consequence, standard STB loses a lot of its power as the projections of three-dimensional positions cannot be made sufficiently accurate on all cameras simultaneously to allow for effective image subtraction. By fitting the image parameters on each sensor separately, it may be possible to subtract their intensities better and to optimize all images simultaneously, achieving a faster convergence rate than the standard STB approach.
- Diversity of tracked particles: The used point spread function model is based on the assumption that the particles can be approximated as point sources of light. This assumption works well for water droplets and ice particles up to a diameter of about 30 μm for the setup shown in Figs. 3 and 4 (in general, this limit depends on the wavelength of light used, aperture size, object distance, and scattering angle). For larger particles, their images become increasingly different from the point-source model and more likely to be interpreted as a close cluster of individual particle images rather than as a single entity. It may be possible to detect such occurrences and join these clusters into single images again, but the detection of another particle image nearby becomes increasingly unlikely, with potential impacts on the clustering
analysis. Droplets and ice crystals larger than about 70 μm may saturate the pixels (with the exemplary aperture size given above) to such a degree that their image subtraction becomes impossible. While it may still be possible to track their movement, and presumably also their orientation in case their shape significantly differs from spherical, the position accuracy may be diminished compared to that of the smaller particles. However, if tracking of large particles were the aim, one could change the magnification, aperture size, and beam width accordingly to achieve better results.
Self-calibration
The camera model may be self-calibrated at each frame to account for apparent motion, for which five possible causes may be assumed in the example of the Schneefernerhaus experiment: (1) thermal gradients due to thermal convection within and above the camera box; (2) thermal expansion of the camera box itself; (3) vibrations caused by the camera fans; (4) vibrations caused by the linear motor; and (5) permanent displacement of the optics due to mechanical shocks. The first three effects may be present regularly and are discussed here. The last two may occur only if the camera box is moving to compensate for the mean wind. Best results in terms of a compromise between robustness and precision may be achieved by calibrating five parameters of the model per camera at each frame, corresponding to a small shift of the apparent position of the world coordinate origin (two parameters) and a small change in the view angle (three parameters). Often, the shift of the apparent position of the world coordinate origin is the most sensitive to changes in calibration. In the following, this will be used as an indicator of the severity of the previously mentioned effects.
The largest component of the camera shifts may be caused by changes in the camera box's temperature during the course of the day. Changes in temperature cause the box to slightly expand and/or contract, which, in turn, slightly displaces all the optics and hence influences the calibration. Observations of the dependence of the apparent shifts of the world coordinate origin's sensor positions on the camera box temperature over the span of two months at the exemplary Schneefernerhaus experiment show that even the largest apparent shifts of about 7 pixel ≈ 200 μm were well within the limits of self-calibration and small relative to the size of the sensor (1280 x 800 pixel²).
The second largest contributor to changes in the calibration may be thermal convection inside and above the camera box. Again, a plot of the apparent shifts of the world coordinate origin's sensor position, this time over the course of the first 1000 frames of a single high-speed video taken at the sampling rate of 5 kHz, shows that these shifts are relatively small at ~0.2 pixel ≈ 5 μm in amplitude.
A further contributor to changes in the calibration are the vibrations caused by the cooling fans built into the cameras in the exemplary Schneefernerhaus experiment. These result in very small high-frequency oscillations with shifts on the order of only 0.02 pixel, but due to their high frequency of 340 Hz, they may create an uncertainty in the acceleration of the world coordinate frame of up to 2 m s⁻². The contribution from the thermal convection is of similar magnitude.
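The quoted acceleration uncertainty follows from treating the fan-induced shifts as a sinusoidal apparent motion: for x(t) = A sin(2πft), the peak acceleration is (2πf)² A. A short check with the example numbers from the text (28 μm per pixel magnification, 0.02 pixel shift amplitude, 340 Hz):

```python
import math

PIXEL_PITCH_UM = 28.0  # object-side magnification from the text: 28 um per pixel
amplitude_px = 0.02    # apparent shift amplitude caused by the camera fans
freq_hz = 340.0

amplitude_m = amplitude_px * PIXEL_PITCH_UM * 1e-6       # shift amplitude in meters
peak_accel = (2 * math.pi * freq_hz) ** 2 * amplitude_m  # peak acceleration, m/s^2
print(round(peak_accel, 1))  # ~2.6
```

The result is of the same order as the value of up to 2 m s⁻² stated above.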
It must be noted that the tolerable amount of change due to self-calibration depends on the quantity of interest. If one is interested in quantities that do not depend on an inertial frame of reference, such as the radial distribution function, relative particle velocities, or velocity structure functions, then successful self-calibration may be sufficient. However, if one is interested, for example, in particle accelerations, then successful self-calibration may be no longer sufficient, as it is insensitive to solid-body translations and rotations of the world coordinate system. In this case, additional constraints may be imposed to uniquely determine the self-calibration parameters, such as zero mean acceleration over all particles in view, which might affect the overall statistics.
Particle sizing
Particle sizes may be inferred by relating them to the observed brightness of each particle as d_i = c √I_i, with d_i being the diameter of particle i and I_i being its observed brightness. The particle size uncertainty may be computed from the brightness standard deviation using standard uncertainty propagation, which, in turn, is computed as the standard deviation of the particle's brightness over all frames in its track. Typical relative particle size uncertainties are less than 10%. The calibration constant was determined as c = 0.768 in the laboratory using a calibrated droplet generator [TSI flow-focusing monodisperse aerosol generator (FMAG) model 1520]. The droplet generator was set to generate droplets of diameter d = 20,
30, 120 μm, which were then observed by the particle tracking experiment. For each reference droplet size, a histogram of the droplets' square-root intensities was created,
secondary peaks due to satellite droplets and merged droplets were masked, and the mean and standard deviation of the main peak were computed. Finally, the means were fitted with a straight line, the slope of which is the calibration constant. Note that the calibration constant is dimensional with the unit μm / (grayscale value)^(1/2) and that it usually depends on the power of the light source, the scattering angle, the lens used, the aperture, and the camera type and settings.
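The sizing relation d = c √I and its uncertainty propagation can be sketched as follows (a hedged illustration: the helper name is hypothetical, and the calibration constant is the example value c = 0.768 μm per square-root grayscale unit from the text):

```python
import math

C_UM_PER_SQRT_GRAY = 0.768  # example calibration constant from the text

def particle_diameter(intensities):
    """Diameter estimate d = c * sqrt(mean I) with uncertainty propagation.

    intensities: per-frame brightness values of one particle along its track
    (grayscale units). Propagating d = c*sqrt(I) gives
    sigma_d = c * sigma_I / (2 * sqrt(mean I)).
    Returns (d, sigma_d) in micrometers.
    """
    n = len(intensities)
    mean_i = sum(intensities) / n
    var_i = sum((i - mean_i) ** 2 for i in intensities) / (n - 1)
    d = C_UM_PER_SQRT_GRAY * math.sqrt(mean_i)
    sigma_d = C_UM_PER_SQRT_GRAY * math.sqrt(var_i) / (2 * math.sqrt(mean_i))
    return d, sigma_d

d, sd = particle_diameter([680.0, 700.0, 720.0])
print(round(d, 2), round(sd, 2))  # 20.32 0.29
```

For this synthetic track, the relative uncertainty is about 1.4%, comfortably below the typical 10% quoted above.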
The particle sizing approach may be verified in situ by comparing the size distributions thus obtained with those measured using a commercial interferometer (Artium PDI-FPDR). An exemplary test run showed that the particle sizing approach disclosed herein performs well but shows some oscillations below 35 μm. These oscillations may be caused by the scattering pattern, which is well-described by Mie scattering theory and is known to have strong oscillations as a function of both particle size and scattering angle.
Fig. 8 shows a schematic drawing of an alternative setup of a measuring device 100 for measuring particle sizes where the measurement volume 104 is surrounded by the walls of a chamber 118, i.e., a housing, with a particle inlet opening 120 and a larger particle outlet opening 122 at the opposite side. Like the exemplary measuring device 100 of Fig. 4, the exemplary measuring device 100 of Fig. 8 comprises a light source 114 configured to illuminate the measurement volume 104 within the chamber 118 with a beam of light 116. The light source 114 may, e.g., be a pulsed laser or a flash light. The light source may, e.g., be equipped with a collimator 124 and/or a diffuser 126. Using a collimator 124, the light source may be enabled to generate a collimated beam of light. The diffuser 126 may be used for diffusing the collimated beam of light. A plurality of two or more image sensors 138, in the case of Fig. 8 three image sensors 138 in the form of three cameras, may be arranged at different positions relative to the measurement volume 104. The image sensors 138 are arranged within a box 152. Furthermore, the measuring device comprises a control unit 11, e.g., arranged within the box 152. The control unit 11 comprises a processor and a memory storing machine-executable program instructions. Execution of the program instructions by the processor causes the processor to control the measuring device 100 to
illuminate the measurement volume 104 with the beam of light 116 and to acquire stereoscopic image data of the measurement volume 104 using the image sensors 138.
An illumination beam from a light source as described herein illuminates the measurement volume that is located between the inlet and outlet openings. Again, the particles in the measurement volume are observed by three cameras at a flat angle of up to 40 degrees, and preferably within a range from 25 to 35 degrees, with respect to the direction of the essentially parallel illumination light beam. Alternatively, two or more than three cameras may be used as well. Likewise, the cameras may be mounted in an on-axis setup, where the optical axis of the sensor and / or the objective of each camera crosses the measurement volume, or alternatively in an off-axis setup similar to that shown in Fig. 5, where each camera is assigned a view axis crossing the measurement volume and light scattered by particles inside the measurement volume travels along that view axis before being diverted by a mirror toward the camera objective and sensor, whose optical axis is not parallel to the respective view axis of the camera. The measuring device shown also comprises a control panel allowing a user to interact with the control unit and / or the analysis unit, which may both be installed in the left side of the housing or, alternatively, one or both of which may be located elsewhere and communicatively coupled to an appropriate communications interface, as disclosed herein, in the left part of the housing. The setup shown in Fig. 8 may be especially suitable as an out-of-the-box device that is pre-calibrated by a manufacturer. Apart from that, the design principles and considerations given above for the setup shown in Figs. 4 and 5 can be applied straightforwardly to the alternative setup of Fig. 8.
The measuring device and its control unit and / or analysis unit as disclosed herein may allow performing in situ particle tracking on droplets of a liquid or other nearly spherical particles. For illustration purposes, referring below once more to the exemplary cloud droplet tracking experiment at the Schneefernerhaus, mount Zugspitze, Germany, the whole of the particle tracking setup may be built into a stiff, weather-proof box called the camera box. The volume probed by the exemplary particle tracking setup is 16.6 cm3; however, due to its vertical extension, many particles are out of focus and therefore difficult to locate accurately, if at all. By limiting the vertical size of the measurement volume, its shape can then be approximated by a cuboid of 40 x 20 x 12 mm3, particle positions may be measured with an uncertainty of 5
pm, and their accelerations may be measured with an uncertainty of 0.1 m s-2. The smallest particles that may be accurately tracked within this volume are about 5 pm in diameter. The experimental setup may be typically operated at 5 or 10 kHz; however, frame rates of up to 25 kHz may be possible without losing accuracy.
Fig. 6 shows the measuring device 100 of Fig. 4 mounted on a pair of rails 130 and equipped with a linear motor 128 enabling the measuring device 100 to move along the rails 130. This may allow for wind-synchronous observations of cloud-borne water droplets in situ by the measuring device 100. The movement of the measuring device 100 along the rails 130 is limited by shock absorbers 160 arranged on both ends of the rails 130. The rails 130 are arranged on a table 162 pivotable around a pivot 156, implementing a seesaw 132. The table 162 is pivotable around an axis extending perpendicular to the rails 130 in a common plane with the rails 130. The table 162 is supported at the pivot 156 by a support structure 158. Furthermore, the table 162 is supported by telescopic cylinders 154, which may be configured to tilt the table 162.
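For such a rail-mounted arrangement, the attainable constant observation velocity is limited by the travel of the rails, the maximum acceleration, and the desired duration of the constant-velocity recording phase. The following is a minimal kinematic sketch using the numerical values quoted for the exemplary experiment described below; the variable names are illustrative only:

```python
import math

# Travel budget: accelerate over v^2/(2a), cruise for T seconds at constant
# velocity v, then decelerate over v^2/(2a); everything must fit in L:
#   v^2/a + v*T <= L
L = 5.3    # m, travel of the table
a = 10.0   # m s^-2, maximum acceleration
T = 1.6    # s, recording time of the cameras at 10 kHz

# Positive root of v^2 + a*T*v - a*L = 0
v_max = (-a * T + math.sqrt((a * T) ** 2 + 4.0 * a * L)) / 2.0
print(round(v_max, 2))  # 2.82
```

This reproduces the limit of "at most 2.8 m s-1" stated below for a constant-velocity phase matching the 1.6 s recording time.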
In the example of the Schneefernerhaus experiment, the camera box is mounted on a table that can be moved over a set of rails by an electromotor, which allows the mean wind to be compensated. The wind at the Schneefernerhaus is predominantly east-west, so the rails are aligned in this direction. In order to compensate for the horizontal component of the mean wind, the rails are bolted onto a structure dubbed the "seesaw", which can be tipped up to 15° east or west. Although the system has been designed for fluid velocities of up to 7.5 m s-1, in practice, the maximum velocity at which it can be run is set by the travel of the table (5.3 m), the maximum acceleration (10 m s-2), and the desired duration of the constant velocity phase. At 10 kHz, the cameras can record for up to 1.6 s; if the constant velocity phase is set to be at least this long, the velocity is limited to at most 2.8 m s-1. The electromotor introduces extra vibrations into the camera box, which are dampened passively by decoupling the box from the table with springs and rubber buffers. In this way, the rms accelerations of the camera box may be reduced to 0.1 m s-2. It may also be possible to implement an actively damped approach, where the camera box is stiffly coupled to the table during the acceleration and deceleration phases but almost completely decoupled otherwise. By using this mean wind compensation, one may expect to see fewer particles, but their average residence time in the
measurement volume should increase. In particular, the number of tracks longer than 10 ms may increase; the number of tracks of 100 ms may double. This may be beneficial for particle tracking in general, since longer particle tracks allow better filtering to be applied to the tracks, which helps to increase accuracy, in particular of higher-order derivatives such as the particle acceleration.
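The benefit of filtering tracks for higher-order derivatives can be illustrated with a standard polynomial (Savitzky-Golay) filter. The sketch below is illustrative only and is not the in-house tracking code; note that with 5 μm position noise at a 10 kHz frame rate, a 10 ms quadratic fit window recovers accelerations with an uncertainty of roughly 0.1 m s-2, consistent with the figures given above:

```python
import numpy as np
from scipy.signal import savgol_filter

rng = np.random.default_rng(0)

dt = 1e-4                          # s, one frame at 10 kHz
t = np.arange(1600) * dt           # a 0.16 s particle track
a_true = 9.81                      # m s^-2, e.g. a droplet in free fall
x = 0.5 * a_true * t**2 + rng.normal(0.0, 5e-6, t.size)  # 5 um position noise

# Second time derivative from a quadratic fit over a sliding 10 ms window
acc = savgol_filter(x, window_length=101, polyorder=2, deriv=2, delta=dt)

a_est = acc[150:-150].mean()       # discard the window-length edges
```

Longer tracks allow longer fit windows, which suppress the noise amplification inherent in differentiating position data twice.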
In the example of the Schneefernerhaus experiment, particle tracking is done with an in-house code, with which it may be possible to accurately track particles that are badly illuminated and/or strongly out of focus. It is inspired by the Shake-The-Box algorithm, but instead of trying to use the particles to resolve the underlying flow, it puts emphasis on finding and tracking the largest possible number of particles in view and on accurately determining the amount of light associated with each particle image. It may be possible to track up to 10⁴ particles in simultaneous view of all cameras and up to 8 × 10⁴ particles in total (due to the shallow view angles). By performing a self-calibration at every frame, it may be possible to deal with thermal convection and expansion as well as with vibrations caused by the camera box movement. By extracting not only particle positions as a function of time but also their brightness as registered by each individual camera, it may further be possible to deduce each individual particle size. For particles larger than 35 μm in diameter, one can estimate the size of each particle accurately by a simple quadratic fit to the dependence of scattered light on particle size. For smaller particles, the effects of Mie scattering should be taken into account for better accuracy.
Moreover, embodiments disclosed herein demonstrate the ability to measure the radial distribution function (RDF) and the longitudinal relative velocity distribution within the limits set by the effect of particle shadowing. Other quantities that may be measurable include, e.g., the particle acceleration statistics and various measures for particle clustering, such as the fractal dimension and the size distribution of Voronoi cells.
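As an illustration of one such quantity, the radial distribution function g(r) can be estimated from measured particle positions by comparing pair counts in spherical shells against the expectation for uncorrelated particles. The sketch below ignores edge corrections, so r should stay small compared with the dimensions of the measurement volume; it is a simplified estimator, not the analysis code of the exemplary experiment:

```python
import numpy as np
from scipy.spatial import cKDTree

def rdf(positions, volume, r_edges):
    """Estimate g(r) for particle positions in a homogeneous volume."""
    n = len(positions)
    tree = cKDTree(positions)
    # cumulative ordered pair counts (include self-pairs and both orderings)
    cum = tree.count_neighbors(tree, r_edges).astype(float)
    pairs = np.diff((cum - n) / 2.0)                 # unique pairs per shell
    shell_vol = 4.0 / 3.0 * np.pi * np.diff(r_edges**3)
    expected = 0.5 * n * (n / volume) * shell_vol    # uncorrelated expectation
    return pairs / expected

# Uniformly random (uncorrelated) particles should give g(r) close to 1;
# clustered particles would give g(r) > 1 at small r.
rng = np.random.default_rng(2)
pos = rng.random((5000, 3))                          # unit box
g = rdf(pos, 1.0, np.array([0.01, 0.02, 0.03, 0.04, 0.05]))
```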
Fig. 9 shows an exemplary comparison of an observed light intensity Io, both without and with a diffuser, with Lorentz-Mie scattering. The observed intensity is calculated by integrating Lorentz-Mie scattering over the diffuser and over the camera's aperture. The curves are offset so that they do not overlap. The scattering angle in this case is 30°. The aperture of the objective used subtends an angle of 1°, and the beam expander is equipped with a 1° FWHM diffuser. To illustrate the effect of the aperture and the diffuser on the observed intensity Io, the approximated equation is evaluated for many droplet sizes, both
without and with the diffuser, and is compared with Lorentz-Mie scattering. The result is shown in Fig. 9. As can be seen, the aperture by itself reduces the oscillations to some extent: they practically disappear at 55 μm, but come back for larger sizes, although there they are not as strong. Adding the diffuser reduces the oscillations even further: they disappear at 35 μm and do not return. The diffuser is likely more effective at reducing oscillations than the aperture because the diffuser has smooth edges due to its Gaussian character, whereas the aperture has sharp edges. It is thus possible to compute the size of particles from their observed intensity as dp = sqrt(c·Iout/(Iin·q1·exp(−q2·θ))), with constants q1 and q2 and a calibration constant c as detailed below.
Fig. 10 shows another exemplary comparison of an observed light intensity Io, both without and with a diffuser, with Lorentz-Mie scattering, as well as a fit of an exponential function for different particle sizes. In Fig. 10 the same quantities as in Fig. 9 are plotted, but now as a function of the (central) scattering angle θ. As can be seen, the aperture and diffuser together are effective at averaging out the oscillations for particles, i.e., droplets. The larger the droplets, the stronger the effect. For example, the oscillation for a particle of 30 μm is very effectively averaged out. Also shown is an exponential fit of the observed intensity with diffuser, Io ~ exp(θ/cθ), with, e.g., cθ = −15.7°. An analogous exponential fit may be used for determining the constants q1 and q2.
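The exponential fit mentioned above is a linear fit in log-space; a minimal sketch (the intensity values are synthetic and purely illustrative):

```python
import numpy as np

theta = np.array([20.0, 25.0, 30.0, 35.0, 40.0])  # central scattering angles, degrees
c_theta = -15.7                                    # degrees, as in Fig. 10
I_o = 3.0e4 * np.exp(theta / c_theta)              # synthetic observed intensities

# ln(I_o) = ln(A) + theta / c_theta  ->  the slope gives 1/c_theta
slope, intercept = np.polyfit(theta, np.log(I_o), 1)
print(round(1.0 / slope, 1))  # -15.7
```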
Fig. 11 shows exemplary droplet size distributions inferred with the intensity-based approach described herein for different time intervals ΔT1, ΔT2, ΔT3, and ΔT4. These time intervals are, e.g., half-hour-long periods. Fig. 12 shows exemplary relative uncertainties for various ranges of particle sizes. Cumulative distributions of particle size and relative uncertainty σp/dp are shown. These distributions show P(σp/dp > X), i.e., the probability that the relative uncertainty is larger than a certain value, and therefore decrease instead of increase. In general, the relative uncertainty may decrease with increasing drop size. The median relative uncertainty is approximately 0.05. For particles larger than 10 μm in diameter, the relative uncertainty may exceed 0.1 in no more than 1% of cases.
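Complementary cumulative distributions such as P(σp/dp > X) can be computed directly from per-particle uncertainty estimates. A brief sketch with synthetic values (the lognormal sample is an assumption for illustration only, chosen to have a median near the 0.05 quoted above):

```python
import numpy as np

def ccdf(samples, x):
    """Estimate P(S > x) from a set of samples."""
    s = np.sort(np.asarray(samples))
    # fraction of samples strictly greater than x
    return 1.0 - np.searchsorted(s, x, side="right") / s.size

rng = np.random.default_rng(3)
rel_unc = rng.lognormal(mean=np.log(0.05), sigma=0.5, size=10_000)

p_at_median = ccdf(rel_unc, np.median(rel_unc))   # 0.5 by construction
```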
Fig. 13 shows exemplary histograms of a square-root particle intensity for particles of different sizes produced with a calibrated aerosol generator in order to determine a calibration constant c. For this purpose, particles in the form
of droplets with known diameter dp are generated using the calibrated aerosol generator. For these droplets of known diameter dp, a scattered light intensity Iout, a scattering angle θ, and an incident light intensity Iin are determined. With these values known, the constant c may be computed as c = dp²·(Iin·q1·exp(−q2·θ))/Iout. The dashed line in Fig. 13 is based on c = dp²·(Iin·q1·exp(−q2·θ))/Iout with, e.g., c = 0.774.
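A numerical sketch of this calibration and of the subsequent sizing step is given below; the constants q1 and q2 and the noise level are assumptions for illustration, not values from the experiment:

```python
import numpy as np

rng = np.random.default_rng(1)

q1, q2 = 1.0, 1.0 / 15.7       # assumed fit constants (q2 per degree, cf. Fig. 10)
c_true = 0.774                 # calibration constant to be recovered (Fig. 13)
theta = 30.0                   # degrees
I_in = 1.0                     # incident intensity, arbitrary units

# Calibration droplets of known diameter (um) from the aerosol generator
d_known = np.array([10.0, 15.0, 20.0, 25.0])
I_out = d_known**2 * I_in * q1 * np.exp(-q2 * theta) / c_true
I_out = I_out * rng.normal(1.0, 0.02, I_out.shape)   # 2 % intensity noise

# c = dp^2 * (I_in * q1 * exp(-q2*theta)) / I_out, robustly via the median
c = np.median(d_known**2 * I_in * q1 * np.exp(-q2 * theta) / I_out)

def diameter(I_out_u, I_in_u, theta_u):
    """Invert the relation: dp = sqrt(c * I_out / (I_in * q1 * exp(-q2*theta)))."""
    return np.sqrt(c * I_out_u / (I_in_u * q1 * np.exp(-q2 * theta_u)))

# An unknown droplet of 18 um produces, by the same relation:
I_unknown = 18.0**2 * I_in * q1 * np.exp(-q2 * theta) / c_true
d_est = float(diameter(I_unknown, I_in, theta))      # close to 18.0
```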
Fig. 14 shows a schematic diagram of an exemplary analysis unit 10 for determining particle sizes. The analysis unit 10 may, e.g., be a computational device and may be operational with numerous other general-purpose or special-purpose computing system environments or configurations. Analysis unit 10 may be described in the general context of computer device executable instructions, such as program modules comprising executable program instructions, being executable by the analysis unit 10. Generally, program modules may include routines, programs, objects, components, logic, data structures, and so on that perform particular tasks or implement particular abstract data types. Analysis unit 10 may be operated in distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed computing environment, program modules may be located in both local and remote computer device storage media including memory storage devices. Analysis unit 10 may, e.g., be comprised by a control unit configured for controlling a measuring device for measuring particle sizes.
In Fig. 14, analysis unit 10 is shown in the form of a general-purpose computing device. The components of analysis unit 10 may include, but are not limited to, one or more processors or processing units 16, a system memory 28, and a bus 18 that couples various system components including system memory 28 to processor 16. Bus 18 represents one or more of any of several types of bus structures, including a memory bus or memory controller, a peripheral bus, an accelerated graphics port, and a processor or local bus using any of a variety of bus architectures. By way of example, and not limitation, such architectures include
Industry Standard Architecture (ISA) bus, Micro Channel Architecture (MCA) bus, Enhanced ISA (EISA) bus, Video Electronics Standards Association (VESA) local bus, and Peripheral Component Interconnect (PCI) bus.
Analysis unit 10 may comprise a variety of computer device readable storage media. Such media may be any available storage media accessible by analysis unit 10, and include both volatile and non-volatile storage media, removable and non-removable storage media.
A system memory 28 may include computer device readable storage media in the form of volatile memory, such as random-access memory (RAM) 30 and/or cache memory 32. Analysis unit 10 may further include other removable/non-removable, volatile/non-volatile computer device storage media. For example, storage system 34 may be provided for reading from and writing to a non-removable, non-volatile magnetic medium, also referred to as a hard drive. For example, a magnetic disk drive for reading from and writing to a removable, non-volatile magnetic disk, e.g., a floppy disk, and an optical disk drive for reading from or writing to a removable, non-volatile optical disk such as a CD-ROM, DVD-ROM or other optical storage media may be provided. In such instances, each storage medium may be connected to bus 18 by one or more data media interfaces. Memory 28 may, e.g., comprise machine-executable program instructions configured for controlling the analysis unit 10 to determine particle sizes. Memory 28 may, e.g., comprise machine-executable program instructions configured for controlling a measuring device to measure particle sizes.
Program 40 may have a set of one or more program modules 42 and may, by way of example, be stored in memory 28. The program modules 42 may comprise an operating system, one or more application programs, other program modules, and/or program data. Each of the operating system, one or more application programs, other program modules, and program data, or some combination thereof, may include an implementation of a networking environment. One or more of the program modules 42 may be configured for controlling the analysis unit 10 to determine particle sizes. One or more of the program modules 42 may be configured for controlling a measuring device to measure particle sizes.
Analysis unit 10 may further communicate with one or more external devices 14 such as a keyboard, a pointing device, like a mouse, and a display 24 enabling a user to interact with
analysis unit 10. Such communication can occur via input/output (I/O) interfaces 22. Analysis unit 10 may further communicate with one or more networks such as a local area network (LAN), a general wide area network (WAN), and/or a public network, like the Internet, via network adapter 20. Network adapter 20 may communicate with other components of analysis unit 10 via bus 18. It should be understood that, although not shown, other hardware and/or software components could be used in conjunction with analysis unit 10.
Fig. 15 shows an exemplary analysis unit 10 comprised by a control unit 11 of a measuring device. Alternatively, the analysis unit 10 and the control unit 11 may be provided as separate computational devices. The analysis unit 10 may, e.g., be configured as shown in Fig. 14. The analysis unit 10 may comprise a hardware component 54 comprising one or more processors as well as a memory storing machine-executable program instructions. Execution of the program instructions by the one or more processors may cause the one or more processors to control the analysis unit 10 to determine particle sizes. Additionally or alternatively, execution of the program instructions by the one or more processors may cause the one or more processors to control a measuring device to measure particle sizes.
The analysis unit 10 may further comprise one or more input devices, like a keyboard 58 and a mouse 56, enabling a user to interact with the analysis unit 10. Furthermore, the analysis unit 10 may comprise one or more output devices, like a display 24 providing a graphical user interface 50 with control elements 52, e.g., GUI elements, enabling the user to control the determining of particle sizes and/or the measuring of particle sizes. On display 24 image data 112, like the image data 112 of Fig. 2, may be shown.
While the invention has been illustrated and described in detail in the drawings and foregoing description, such illustration and description are to be considered illustrative or exemplary and not restrictive; the invention is not limited to the disclosed embodiments.
Other variations to the disclosed embodiments can be understood and effected by those skilled in the art in practicing the claimed invention, from a study of the drawings, the disclosure, and the appended claims. In the claims, the word "comprising" does not exclude other elements or functions, and the indefinite article "a" or "an" does not exclude a plurality. The mere fact that certain measures are recited in mutually different dependent claims does
not indicate that a combination of these measures cannot be used to advantage. Any reference signs in the claims should not be construed as limiting the scope.
A single processor or other unit may fulfill the functions of several items recited in the claims. A computer program may be stored/distributed on a suitable medium, such as an optical storage medium or a solid-state medium supplied together with or as part of other hardware, but may also be distributed in other forms, such as via the Internet or other wired or wireless telecommunication systems.
As will be appreciated by one skilled in the art, aspects of the present invention may be embodied as an apparatus, computer program or computer program product. Accordingly, aspects of the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment (including firmware, resident software, micro-code, etc.) or an embodiment combining software and hardware aspects that may all generally be referred to herein as a "circuit," "module" or "system." Furthermore, aspects of the present invention may take the form of a computer program product embodied in one or more computer readable medium(s) having computer executable code embodied thereon. A computer program comprises the computer executable code or "program instructions".
Any combination of one or more computer readable medium(s) may be utilized. The computer readable medium may be a computer readable signal medium or a computer readable storage medium. A "computer-readable storage medium" as used herein encompasses any tangible storage medium which may store instructions which are executable by a processor of a computing device. The computer-readable storage medium may be referred to as a computer-readable non-transitory storage medium. The computer-readable storage medium may also be referred to as a tangible computer readable medium. In some embodiments, a computer-readable storage medium may also be able to store data which is able to be accessed by the processor of the computing device. Examples of computer-readable storage media include, but are not limited to: a floppy disk, a magnetic hard disk drive, a solid-state hard disk, flash memory, a USB thumb drive, Random Access Memory (RAM), Read Only Memory (ROM), an optical disk, a magneto-optical disk, and the register file of the processor. Examples of optical disks include Compact Disks (CD) and Digital Versatile Disks (DVD), for example CD-ROM, CD-
RW, CD-R, DVD-ROM, DVD-RW, or DVD-R disks. A further example of an optical disk may be a Blu-ray disk. The term computer-readable storage medium also refers to various types of recording media capable of being accessed by the computer device via a network or communication link. For example, data may be retrieved over a modem, over the internet, or over a local area network. Computer executable code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
A computer readable signal medium may include a propagated data signal with computer executable code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated signal may take any of a variety of forms, including, but not limited to, electromagnetic, optical, or any suitable combination thereof. A computer readable signal medium may be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
"Computer memory" or "memory" is an example of a computer-readable storage medium. Computer memory is any memory which is directly accessible to a processor. "Computer storage" or "storage" is a further example of a computer-readable storage medium. Computer storage is any non-volatile computer-readable storage medium. In some embodiments, computer storage may also be computer memory or vice versa.
A "processor" as used herein encompasses an electronic component which is able to execute a program or machine executable instruction or computer executable code. References to the computing device comprising "a processor" should be interpreted as possibly containing more than one processor or processing core. The processor may for instance be a multi-core processor. A processor may also refer to a collection of processors within a single computer device or distributed amongst multiple computer devices. The term computing device should also be interpreted to possibly refer to a collection or network of computing devices each comprising a processor or processors. The computer executable code may be executed by multiple processors that may be within the same computing device or which may even be distributed across multiple computing devices.
Computer executable code may comprise machine executable instructions or a program which causes a processor to perform an aspect of the present invention. Computer executable code for carrying out operations for aspects of the present invention may be written in any combination of one or more programming languages, including an object-oriented programming language such as Java, Smalltalk, C++ or the like and conventional procedural programming languages, such as the "C" programming language or similar programming languages, and compiled into machine executable instructions. In some instances, the computer executable code may be in the form of a high-level language or in a pre-compiled form and be used in conjunction with an interpreter which generates the machine executable instructions on the fly.
The computer executable code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server. In the latter scenario, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider).
Generally, the program instructions can be executed on one processor or on several processors. In the case of multiple processors, they can be distributed over several different entities like clients, servers etc. Each processor could execute a portion of the instructions intended for that entity. Thus, when referring to a system or process involving multiple entities, the computer program or program instructions are understood to be adapted to be executed by a processor associated or related to the respective entity.
A "user interface" as used herein is an interface which allows a user or operator to interact with a computer or computer device. A 'user interface' may also be referred to as a 'human interface device.' A user interface may provide information or data to the operator and/or receive information or data from the operator. A user interface may enable input from an operator to be received by the computer and may provide output to the user from the computer. In other words, the user interface may allow an operator to control or manipulate
a computer and the interface may allow the computer to indicate the effects of the operator's control or manipulation. The display of data or information on a display or a graphical user interface is an example of providing information to an operator. Keyboards, mice, trackballs, touchpads, pointing sticks, graphics tablets, joysticks, gamepads, webcams, headsets, gear sticks, steering wheels, pedals, wired gloves, dance pads, remote controls, switches, buttons, and accelerometers are all examples of user interface components which enable the receiving of information or data from an operator.
A GUI element is a data object, some attributes of which specify the shape, layout, and/or behavior of an area displayed on a graphical user interface, e.g., a screen. A GUI element can be a standard GUI element such as a button, a text box, a tab, an icon, a text field, a pane, a check-box item or item group, or the like. A GUI element can likewise be an image, an alphanumeric character, or any combination thereof. At least some of the properties of the displayed GUI elements depend on the data values aggregated over the group of data objects said GUI element represents.
Aspects of the present invention are described with reference to block diagrams of apparatus (systems) and computer program products according to embodiments of the invention. It will be understood that each block, or a portion of the blocks, of the flowcharts, illustrations, and/or block diagrams can be implemented by computer program instructions in the form of computer executable code where applicable. It is further understood that, when not mutually exclusive, combinations of blocks in different flowcharts, illustrations, and/or block diagrams may be combined. These computer program instructions may be provided to a processor of a general-purpose computer, special-purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer readable medium that can direct a computer, other programmable data processing apparatus, or other devices to function in a particular manner, such that the instructions stored in the computer readable
medium produce an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks.
The computer program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer-implemented process such that the instructions which execute on the computer or other programmable apparatus provide processes for implementing the functions specified in the block diagram block or blocks.
LIST OF REFERENCE NUMERALS
10 analysis unit
11 control unit
14 external device
16 processing unit
18 bus
20 network adapter
22 I/O interface
24 display
28 memory
30 RAM
32 cache
34 storage system
40 program
42 program module
50 user interface
52 control elements
54 hardware device
56 keyboard
58 mouse
100 measuring device
102 particle
104 measurement volume
106 image sensor
108 objective
110 objective holder
112 image data
114 light source
116 beam of light
118 chamber
120 inlet
122 outlet
124 collimator
126 diffuser
128 linear motor
130 rail
132 seesaw
134 mirror
136 mirror adjustment
138 tube
140 first end
142 second end
144 window
146 air inlet
148 air outlet
150 lid
152 box
154 telescopic cylinder
156 pivot
158 support structure
160 shock absorber
Claims
1. A measuring device (100) for measuring particle sizes, the measuring device (100) comprising a control unit (11) for controlling the measuring device (100), a light source (114) for generating a beam of light (116) for illuminating a measurement volume (104), and one or more image sensors (106), the control unit (11) comprising a processor (16) and a memory (28) storing machine-executable program instructions (40), execution of the program instructions (40) by the processor (16) causing the processor (16) to control the measuring device (100) to illuminate the measurement volume (104) with the beam of light (116) and to acquire image data (112) of the measurement volume (104) using the one or more image sensors (106).
2. The measuring device (100) of claim 1, comprising a plurality of the image sensors (106) arranged at different positions relative to the measurement volume (104), the acquired image data (112) of the measurement volume (104) being stereoscopic image data (112) acquired using the plurality of image sensors (106).
3. The measuring device (100) of any of the preceding claims, further comprising a diffuser (126) for diffusing the beam of light (116).
4. The measuring device (100) of any of the preceding claims, comprising a chamber (118) with an inlet (120) for the particles (102) and an outlet (122) for the particles (102), the chamber (118) comprising the measurement volume (104).
5. The measuring device (100) of any of the preceding claims, the light source (114) comprising a collimator (124) for generating the beam of light (116) as a collimated beam of light (116).
6. The measuring device (100) of any of the preceding claims, the light source (114) being a pulsed laser or a flashlamp.
7. The measuring device (100) of any of the preceding claims, comprising a motion unit for moving the measuring device (100).
8. The measuring device (100) of claim 7, the motion unit comprising a linear motor (128) configured for moving the measuring device (100) along a set of one or more rails (130).
9. The measuring device (100) of claim 8, the set of one or more rails (130) being pivotable around an axis extending perpendicular to the one or more rails (130) in a common plane with the one or more rails (130) implementing a seesaw mechanism (132) for pivoting the measuring device (100).
10. The measuring device (100) of any of the preceding claims, the image sensors (106) each being configured for an off-optical-axis observation of the measurement volume (104) via a mirror (134) and a tube (138), an orientation of the tube (138) defining a view axis under which the respective image sensor (106) observes the measurement volume (104) via the mirror (134) and the tube (138).
11. The measuring device (100) of claim 10, the tube (138) comprising a first end (140) and a second end (142), the first end (140) being a distal end relative to the mirror (134), the second end (142) being a proximal end relative to the mirror (134), the tube (138) further comprising a window (144) arranged at the second end (142).
12. The measuring device (100) of claim 11, the tube (138) further comprising a ventilation system (146, 148) configured for injecting a stream of air into the tube (138) and onto the window (144) and for sucking out the stream of air in order to remove, with the stream of air, liquid that may have collected on the window (144).
13. The measuring device (100) of any of claims 10 to 12, the tube (138) further comprising a movable lid (150) arranged at the first end (140) of the tube (138) and configured for opening and closing the tube (138) at the first end (140).
14. The measuring device (100) of any of the preceding claims, wherein for each of the image sensors (106) a relative angle α between a view axis under which the image sensor (106) observes the measurement volume (104) and a direction of illumination of the measurement volume (104) by the light source (114) is less than or equal to 45 degrees, preferably the relative angle α lies within a range from 25 to 35 degrees.
15. The measuring device (100) of any of the preceding claims, the control unit (11) further being configured as an analysis unit (10), the execution of the program instructions (40) by the processor (16) further causing the processor (16) to control the measuring device (100) to determine the size of one or more of the particles (102) within the measurement volume (104) using the acquired stereoscopic image data (112), the determining of the size of the one or more of the particles (102) comprising for each of the particles (102): determining a scattered light intensity Iout of light of the light source (114) scattered by the respective particle (102) to an individual image sensor (106) of the one or more image sensors (106) using image data (112) from the image data (112) acquired by the individual image sensor (106), determining a scattering angle θ of the light of the light source (114) being scattered by the respective particle (102) to the individual image sensor (106), the scattering angle θ being determined relative to a direction of illumination of the measurement volume (104) by the light source (114), determining an incident light intensity Iin of the light of the light source (114) at the position of the respective particle (102) within the measurement volume (104), determining the size of the particle (102) using a ratio Iout/Iin of the scattered light intensity Iout to the incident light intensity Iin and the scattering angle θ.
16. A computer program product comprising a non-volatile computer-readable storage medium having computer-readable program instructions (40) embodied therewith, the program instructions (40) being executable by a processor (16) of a control unit (11) configured for controlling a measuring device (100) for measuring particle sizes, the measuring device (100) comprising a light source (114) for generating a beam of light (116) for illuminating a measurement volume (104), and one or more image sensors (106),
execution of the program instructions (40) by the processor (16) causing the processor (16) to control the measuring device (100) to illuminate the measurement volume (104) with the beam of light (116) and to acquire image data (112) of a measurement volume (104) using the one or more image sensors (106).
17. The computer program product of claim 16, execution of the program instructions (40) by the processor (16) causing the processor (16) for one or more of the particles (102) within the measurement volume (104) to:
determine a scattered light intensity Iout of light of the light source (114) scattered by the respective particle (102) to an individual image sensor (106) of the one or more image sensors (106) using image data (112) from the image data (112) acquired by the individual image sensor (106),
determine a scattering angle θ of the light of the light source (114) being scattered by the respective particle (102) to the individual image sensor (106), the scattering angle θ being determined relative to a direction of illumination of the measurement volume (104) by the light source (114),
determine an incident light intensity Iin of the light of the light source (114) at the position of the respective particle (102) within the measurement volume (104),
determine the size of the particle (102) using a ratio Iout/Iin of the scattered light intensity Iout to the incident light intensity Iin and the scattering angle θ.
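The sizing procedure recited in claims 15 and 17 (scattering angle relative to the illumination direction, incident intensity at the particle position, size from the ratio Iout/Iin) can be sketched as follows. This is an illustration only, not the claimed implementation: the TEM00 Gaussian beam model, the toy calibration curve `forward_ratio`, and all function names are assumptions not taken from the application.

```python
import numpy as np

def scattering_angle(view_axis, illumination_dir):
    """Scattering angle theta between the direction of illumination and the
    direction from the particle to the sensor, via the normalized dot product."""
    v = np.asarray(view_axis, float) / np.linalg.norm(view_axis)
    d = np.asarray(illumination_dir, float) / np.linalg.norm(illumination_dir)
    return float(np.arccos(np.clip(np.dot(d, v), -1.0, 1.0)))

def incident_intensity(r, z, i0, w0, wavelength):
    """Incident intensity Iin at radial offset r and axial position z inside
    the beam, assuming a TEM00 Gaussian profile (a common model, not one
    mandated by the claims)."""
    z_r = np.pi * w0 ** 2 / wavelength          # Rayleigh range
    w_z = w0 * np.sqrt(1.0 + (z / z_r) ** 2)    # beam radius at z
    return i0 * (w0 / w_z) ** 2 * np.exp(-2.0 * r ** 2 / w_z ** 2)

# Hypothetical calibration nodes: diameters (micrometres) at which the
# ratio Iout/Iin has been tabulated for a given scattering angle.
DIAMETERS_UM = np.array([1.0, 2.0, 5.0, 10.0, 20.0, 50.0])

def forward_ratio(d_um, theta):
    # Illustrative forward model only: ratio grows with d^2, modulated by
    # a smooth angular term. A real device would use Mie theory or an
    # empirically measured calibration curve here.
    return (d_um ** 2) * 1e-6 * (1.0 + np.cos(theta)) / 2.0

def particle_diameter(i_out, i_in, theta):
    """Invert the monotone calibration curve: map the measured ratio
    Iout/Iin at scattering angle theta back to a diameter by interpolation."""
    return float(np.interp(i_out / i_in, forward_ratio(DIAMETERS_UM, theta),
                           DIAMETERS_UM))
```

Round-tripping the forward model recovers the diameter at a calibration node, which is a useful sanity check; `np.interp` requires the tabulated ratios to be increasing, which holds here because the toy model is monotone in diameter.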
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/EP2022/063486 WO2023222219A1 (en) | 2022-05-18 | 2022-05-18 | Measuring device for measuring particle sizes |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2023222219A1 true WO2023222219A1 (en) | 2023-11-23 |
Family
ID=82020145
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/EP2022/063486 WO2023222219A1 (en) | 2022-05-18 | 2022-05-18 | Measuring device for measuring particle sizes |
Country Status (1)
Country | Link |
---|---|
WO (1) | WO2023222219A1 (en) |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20170010197A1 (en) * | 2011-08-17 | 2017-01-12 | Technische Universitaet Darmstadt | Method and device for determining characteristic properties of a transparent particle |
US20180275038A1 (en) * | 2015-10-02 | 2018-09-27 | Institut National D'optique | System and method for individual particle sizing using light scattering techniques |
2022-05-18: WO PCT/EP2022/063486 patent/WO2023222219A1/en, active Application Filing
Non-Patent Citations (1)
Title |
---|
BERTENS G ET AL: "In situ cloud particle tracking experiment", REVIEW OF SCIENTIFIC INSTRUMENTS, AMERICAN INSTITUTE OF PHYSICS, 2 HUNTINGTON QUADRANGLE, MELVILLE, NY 11747, vol. 92, no. 12, 9 December 2021 (2021-12-09), XP012261901, ISSN: 0034-6748, [retrieved on 20211209], DOI: 10.1063/5.0065806 * |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US12316995B2 (en) | Mobile gas and chemical imaging camera | |
CN110073185B (en) | Mobile gas and chemical imaging camera | |
CN108780223B (en) | Corneal sphere tracking for generating eye models | |
JP7076368B2 (en) | Range gate type depth camera parts | |
WO2023222218A1 (en) | Analysis unit for determining particle sizes | |
CN104567738B (en) | Parallelism of optical axis accurate measuring systems and method | |
CN104316443B (en) | A PM 2.5 Concentration Monitoring Method Based on CCD Backscattering | |
US10778952B2 (en) | Depth camera light leakage avoidance | |
CN113014906A (en) | Daily scene reconstruction engine | |
CA3170689A1 (en) | Mobile gas and chemical imaging camera | |
Berujon et al. | X-ray pulse wavefront metrology using speckle tracking | |
EP1511978A1 (en) | Methhod and system for sensing and analyzing a wavefront of an optically transmissive system | |
CN112508903B (en) | A method for detecting contours of surface defects of satellite telescope lenses | |
Thomason et al. | Calibration of a microlens array for a plenoptic camera | |
US6088098A (en) | Calibration method for a laser-based split-beam method | |
WO2023222219A1 (en) | Measuring device for measuring particle sizes | |
CN110108302B (en) | Method for improving atom group polishing precision | |
CN112907526B (en) | Detection method for surface defects of satellite telescope lens based on LBF | |
Tokovinin et al. | FADE, an instrument to measure the atmospheric coherence time | |
CN110501063B (en) | A high-precision measurement method of high-frequency standing wave amplitude distribution | |
Elliot et al. | Image Quality on the Kuiper Airborne Observatory. I. Results of the first flight series | |
Machicoane et al. | Recent developments in particle tracking diagnostics for turbulence research | |
Liu et al. | Laser spot center location algorithm based on sub-pixel interpolation | |
US12117384B1 (en) | Utilizing highly scattered light for intelligence through aerosols | |
Bolbasova et al. | Measurements of atmospheric turbulence from image motion of laser beam by Shack-Hartmann wavefront sensor |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 22729614 Country of ref document: EP Kind code of ref document: A1 |
NENP | Non-entry into the national phase |
Ref country code: DE |
122 | Ep: pct application non-entry in european phase |
Ref document number: 22729614 Country of ref document: EP Kind code of ref document: A1 |