
CN110084127B - A Vision-Based Vibration Measurement Method for Magnetic Suspension Rotors - Google Patents


Info

Publication number
CN110084127B
CN110084127B
Authority
CN
China
Prior art keywords
image
domain information
phase
frequency domain
vibration
Prior art date
Legal status
Active
Application number
CN201910246595.1A
Other languages
Chinese (zh)
Other versions
CN110084127A (en)
Inventor
彭聪
曾聪
江驹
王雁刚
Current Assignee
Nanjing University of Aeronautics and Astronautics
Original Assignee
Nanjing University of Aeronautics and Astronautics
Priority date
Filing date
Publication date
Application filed by Nanjing University of Aeronautics and Astronautics
Priority to CN201910246595.1A
Publication of CN110084127A
Application granted
Publication of CN110084127B
Legal status: Active
Anticipated expiration

Classifications

    • G — PHYSICS
        • G01 — MEASURING; TESTING
            • G01H 9/00 — Measuring mechanical vibrations or ultrasonic, sonic or infrasonic waves by using radiation-sensitive means, e.g. optical means
        • G06 — COMPUTING OR CALCULATING; COUNTING
            • G06T 7/00 — Image analysis
                • G06T 7/20 — Analysis of motion
                    • G06T 7/269 — Analysis of motion using gradient-based methods
                • G06T 7/40 — Analysis of texture
                    • G06T 7/42 — Analysis of texture based on statistical description of texture using transform domain methods
            • G06V 10/44 — Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; connectivity analysis, e.g. of connected components
                • G06V 10/443 — Local feature extraction by matching or filtering
            • G06V 20/40 — Scenes; scene-specific elements in video content
                • G06V 20/46 — Extracting features or characteristics from the video content, e.g. video fingerprints, representative shots or key frames


Abstract



The invention discloses a vision-based vibration measurement method for magnetic suspension rotors. First, a high-speed camera fitted with an optical lens captures vibration video of the magnetic suspension rotor under different working states. A direction-controllable pyramid constructed on the basis of Gabor filters then converts the spatial-domain image information into frequency-domain image information at different scales and in different directions, and the vibration displacement signal of the magnetic suspension motor is extracted from the local phase changes in that frequency-domain information. A LoG operator next performs second-order derivation on the displacement signal to calculate the acceleration signal. Finally, a fast Fourier transform of the acceleration signal yields the target spectrogram. The invention fully exploits the advantages of visual measurement: it supports full-field measurement and, without altering the configuration or installation of existing equipment, can obtain information of different dimensions simply by changing the algorithm, giving it strong adaptability.


Description

Vision-based magnetic suspension rotor vibration measurement method
Technical Field
The invention relates to a magnetic suspension rotor vibration measurement method based on vision, and belongs to the technical field of vision measurement.
Background
With the increasing maturity of magnetic suspension technology, magnetic suspension rotors are widely used in high-speed rotating machinery such as magnetic suspension fans and compressors. The rotor is one of the most important components of a magnetic levitation apparatus, and its complicated electromechanical structure generates mechanical vibration. Vibration is an important factor affecting the operating condition and service life of the equipment; if not effectively suppressed, it may cause serious, even destructive, damage. Effective measurement of vibration is a prerequisite for its effective suppression. Vibration measurement of rotating equipment can identify system parameters, monitor the operating state of the equipment, and diagnose equipment faults. Vibration testing and analysis therefore play an increasingly important role in engineering; nevertheless, developing a vibration measurement system with strong adaptability, a high degree of automation and high precision remains a great challenge.
Existing vibration measurement techniques fall into two categories: contact measurement and non-contact measurement. Contact measurement requires arranging sensors on the measured object according to a certain rule and connecting them to matching host-computer software. However, contact measurement introduces mass-loading effects and, lacking full-field spatial resolution, can only measure the signal at a single point. Furthermore, for some large structures, handling all of the wiring and instrumentation is time-consuming and labor-intensive. Non-contact measurement typically relies on some type of electromagnetic radiation to transmit information and, unlike conventional contact methods, can yield information of different dimensions without changing the existing equipment configuration and installation. For example, laser vibrometers require no sensors mounted on the structure and introduce no mass-loading effect; however, they are relatively expensive and less efficient at handling large low-frequency vibrations. A digital camera combined with image processing algorithms is another non-contact approach, also known as vision-based vibration measurement. Compared with a laser vibrometer, a digital camera has the advantages of low cost and suitability for full-field measurement. In the prior art, however, three-dimensional digital image correlation (DIC) and intensity-based optical flow are commonly used to measure large-amplitude vibration, whereas rotor vibration is sub-pixel micro-motion that is difficult to distinguish from noise, so the accuracy of these methods is low.
Therefore, how to extract the rotor vibration signal with high precision is a key and difficult research problem.
Disclosure of Invention
To solve these problems, the invention provides a vision-based magnetic suspension rotor vibration measurement method grounded in phase-based computer-vision theory, building on existing vision measurement methods. From a motor vibration video sequence acquired by a high-speed camera, a direction-controllable pyramid is designed to convert the images from the spatial domain to the frequency domain; inter-frame displacement is obtained from the phase-change relation between a reference frame and a motion frame; the vibration acceleration signal is then derived and converted into a spectrogram, from which the frequency signal of the magnetic suspension rotor is measured.
The specific technical scheme is as follows:
a magnetic suspension rotor vibration measurement method based on vision comprises the following steps:
s1, acquiring vibration videos of a magnetic suspension rotor in different working states through a high-speed camera provided with an optical lens;
s2, converting image space domain information contained in the vibration video by adopting a direction controllable pyramid constructed on the basis of a Gabor filter to obtain image frequency domain information with different scales and different directions, and extracting a vibration displacement signal of the magnetic suspension motor according to local phase change in the image frequency domain information;
s3, performing second-order derivation on the vibration displacement signal by adopting an LOG operator, and calculating to obtain an acceleration signal;
and S4, carrying out fast Fourier transform on the acceleration signal to obtain a target spectrogram.
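Taken together, steps S1–S4 form a short processing pipeline. The following is a minimal, hypothetical Python sketch of that pipeline on a stack of grayscale frames — not the patent's implementation: a single horizontally tuned complex Gabor response sampled at the image centre stands in for the full direction-controllable pyramid, and a discrete second difference stands in for the LoG derivation.

```python
import numpy as np

def measure_vibration(frames, fps, lam=8.0, sigma=4.0, half=15):
    """Sketch of S1-S4: Gabor-phase displacement, second difference,
    FFT spectrum. A single horizontal Gabor tap at the image centre
    stands in for the full direction-controllable pyramid."""
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    # complex Gabor kernel, theta = 0 (horizontal motion), gamma = 0.5
    g = (np.exp(-(x**2 + 0.25 * y**2) / (2 * sigma**2))
         * np.exp(1j * 2 * np.pi * x / lam))
    cy, cx = frames[0].shape[0] // 2, frames[0].shape[1] // 2
    resp = [np.sum(f[cy - half:cy + half + 1, cx - half:cx + half + 1] * g)
            for f in frames]
    phase = np.unwrap(np.angle(np.asarray(resp)))
    disp = lam * (phase - phase[0]) / (2 * np.pi)   # S2: displacement (px)
    acc = np.gradient(np.gradient(disp)) * fps**2   # S3: acceleration
    freqs = np.fft.rfftfreq(len(acc), 1.0 / fps)    # S4: frequency axis (Hz)
    return freqs, np.abs(np.fft.rfft(acc))
```

On synthetic frames whose texture oscillates sub-pixel at a known frequency, the spectrum peaks at that frequency, which is the behaviour steps S1–S4 rely on.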
Preferably, the step S2 specifically includes:
s21, designing a Gabor filter by using a two-dimensional Gabor wavelet as a convolution kernel function, and converting video image space domain information contained in the vibration video into frequency domain information through the Gabor filter; a low-pass sub-band part in the frequency domain information contains global information of the video image, and a high-pass sub-band part contains detail information of the video image; the two-dimensional Gabor function expression corresponding to the Gabor filter is as follows:
G(x, y) = exp(−(x_θ² + γ²·y_θ²)/(2σ²)) · exp(i(2π·x_θ/λ + ψ))
wherein x and y represent spatial pixel coordinates; θ ∈ (0°, 360°) represents the orientation of the parallel stripes of the Gabor filter kernel; λ is the wavelength of the sinusoidal factor; ψ is the phase offset of the tuning function; γ is the spatial aspect ratio; σ represents the standard deviation of the Gaussian function; x_θ and y_θ each represent a spatial variable containing the orientation information, expressed as:
x_θ = x·cosθ + y·sinθ,  y_θ = −x·sinθ + y·cosθ
s22, linearly combining a plurality of Gabor filters in different directions, and simultaneously carrying out scale transformation on the video image so as to construct a direction-controllable pyramid;
s23, decomposing the video image into a sub-band series with different scales and multiple directions containing the structure information and the edge information of the video image through the direction controllable pyramid, thereby completing the conversion of the video image information into frequency domain information with different scales and different directions; and calculating the phase difference of the two frames according to the phase angles of the two frames in the frequency domain information, thereby obtaining the displacement signals in the corresponding directions.
Preferably, the implementation process of the direction controllable pyramid is as follows:
the video image is first passed through a high pass filter H0(ω) and a low-pass filter L0(omega) is decomposed into two sub-bands of high pass and low pass; the low-pass subband image is then decomposed into K different directional band-pass subbands Bk-1(omega) and the low-pass sub-band L1(ω) for low-pass sub-band L simultaneously1The rows and columns of (omega) are respectively sampled again; and repeating the decomposition process after two samples, and circularly iterating until one dimension of the row and the column cannot be subjected to down sampling.
Preferably, the calculation method for calculating the phase difference between two frames according to the phase angles of the two frames in the frequency domain information to obtain the displacement signal in the corresponding direction is as follows:
assuming that the time interval between two successive frames of the video is Δt and that a local motion (Δx, Δy) occurs at spatial position (x, y), the first frame is defined as the reference frame with image intensity I(x, y, t₀) and the second frame as the motion frame with image intensity I(x + Δx, y + Δy, t₀ + Δt);
a convolution with the two-dimensional Gabor function converts the image intensity I(x, y, t) into frequency-domain information F(x, y, t) as follows:
F(x, y, t) = I(x, y, t) ∗ G(x, y)
expressed in the form of an integral:
F(x, y, t) = ∬ I(u, v, t) · exp(−((x−u)² + γ²·(y−v)²)/(2σ²)) · exp(i(2π(x−u)/λ + ψ)) du dv
taking the extraction of horizontal motion as an example, let the spatial variables be represented as x_θ = x, y_θ = y; the reference frame is converted into:
F(x, y, t₀) = ∬ I(u, v, t₀) · exp(−((x−u)² + γ²·(y−v)²)/(2σ²)) · exp(i(2π(x−u)/λ + ψ)) du dv
and the motion frame into:
F(x+Δx, y+Δy, t₀+Δt) = ∬ I(u, v, t₀+Δt) · exp(−((x+Δx−u)² + γ²·(y+Δy−v)²)/(2σ²)) · exp(i(2π(x+Δx−u)/λ + ψ)) du dv
rearranging the above reference-frame and motion-frame conversion formulas, placing the phase term independent of the integration variables outside the integral, simplifies the equations to:
F(x, y, t₀) = exp(i(2πx/λ + ψ)) · ∬ I(u, v, t₀) · exp(−((x−u)² + γ²·(y−v)²)/(2σ²)) · exp(−i·2πu/λ) du dv
F(x+Δx, y+Δy, t₀+Δt) = exp(i(2π(x+Δx)/λ + ψ)) · ∬ I(u, v, t₀+Δt) · exp(−((x+Δx−u)² + γ²·(y+Δy−v)²)/(2σ²)) · exp(−i·2πu/λ) du dv
the phase terms of the simplified reference-frame and motion-frame conversion formulas are exp(i(2πx/λ + ψ)) and exp(i(2π(x+Δx)/λ + ψ)) respectively; since the motion frame is the reference frame translated by (Δx, Δy), the final definite integral results are the same, denoted as
C(x, y) = ∬ I(u, v, t₀) · exp(−((x−u)² + γ²·(y−v)²)/(2σ²)) · exp(−i·2πu/λ) du dv
the phase angles of the two frames are calculated as follows:
φ₁ = 2πx/λ + ψ + arg C(x, y)
φ₂ = 2π(x+Δx)/λ + ψ + arg C(x, y)
and the phase difference of the two frames in the horizontal direction is calculated as:
Δφ = φ₂ − φ₁ = 2π·Δx/λ
thereby obtaining the displacement signal in the corresponding direction, Δx = λ·Δφ/(2π).
Preferably, step S23 further includes performing amplitude-weighted spatial Gaussian blurring on the phase angle to improve the signal-to-noise ratio, specifically comprising:
for the N-th frame, the weighted phase signal φ̂_N(x, y) is calculated as:
φ̂_N(x, y) = [ (A_N(x, y) · φ_N(x, y)) ∗ h(x, y) ] / [ A_N(x, y) ∗ h(x, y) ]
wherein A_N represents the amplitude of the N-th frame and φ_N represents the phase of the N-th frame; h(x, y) is a two-dimensional Gaussian function, expressed as:
h(x, y) = (1/(2πρ²)) · exp(−(x² + y²)/(2ρ²))
wherein the standard deviation ρ of the Gaussian filter represents the width of the spatial-domain filter.
Preferably, in step S1, the high-speed camera has a maximum resolution of 4096 × 3076, its pixel range and exposure time are continuously adjustable within their respective ranges, and an LED lamp is used as the light source.
Preferably, the step S1 specifically includes: setting the rotation speed of the magnetic suspension rotor at 6000rpm, 9000rpm, 12000rpm and 15000rpm, setting the frame rate of the high-speed camera at 300fps, 500fps, 600fps and 800fps, and recording and storing corresponding vibration video sequences.
Compared with the prior art, the magnetic suspension rotor vibration measuring method based on vision provided by the invention has the following beneficial effects:
(1) the invention adopts an optical flow method based on phase, which is not based on the original pixel intensity value, but extracts the motion by analyzing the phase change of the image; the image phase information is more robust than the image intensity with respect to image variations due to contrast and scale.
(2) The invention is different from the traditional contact measurement and laser interference measurement methods, can obtain information with different dimensions by only changing the algorithm without changing the configuration and installation conditions of the existing equipment, and has stronger adaptability.
Drawings
FIG. 1 is a diagram of a vision measurement experiment platform;
FIG. 2 is a graph of the real and imaginary parts of a Gabor filter;
FIG. 3 is a diagram of a direction controllable pyramid structure;
Detailed Description
The invention provides a vision-based magnetic suspension rotor vibration measurement method. From motor vibration video acquired by a high-speed camera, a phase-based optical flow method extracts the vibration displacement signal of the magnetic suspension motor by computing local phase changes in the frequency domain; the displacement signal is then differentiated twice to obtain an acceleration signal; finally, a fast Fourier transform (FFT) yields the spectrogram. The invention is described in detail below with reference to the figures and the specific implementation steps.
The invention discloses a method for measuring the vibration of a magnetic suspension rotor based on vision, which comprises the following specific implementation methods:
step one, collecting vibration videos of the magnetic suspension motor in different working states by using a high-speed camera.
As shown in fig. 1, a vision measurement platform must be built before acquisition. It mainly comprises the magnetic suspension rotor, a high-speed camera (fitted with a high-quality optical lens), a light source, a tripod, an acceleration sensor, a magnetic suspension rotor control platform and a computer for data storage. The high-speed camera used in the invention is a high-speed video camera made by IO industrial company, with a maximum resolution of 4096 × 3076 and an arbitrarily adjustable pixel range. When the resolution is reduced, the camera's maximum frame rate increases and the available exposure-time range is recalculated. The camera is fixed and adjusted to a suitable position on a tripod, and an LED lamp serves as the light source. In this embodiment, the vertical distance between the camera and the magnetic suspension rotor is about 0.8 m, and the rotor is illuminated by the LED lamp to provide sufficient brightness and improve the quality of the captured images. The probe of the acceleration sensor is mounted vertically on the motor surface according to the marked direction.
And controlling the rotation speed of the magnetic suspension rotor by using a magnetic suspension rotor control platform, wherein the rotation speed is set to be 6000rpm, 9000rpm, 12000rpm and 15000rpm (revolutions per minute) in sequence, correspondingly, the frame rate of the high-speed camera is set to be 300fps, 500fps, 600fps and 800fps (frames per second), and the vibration video sequence is recorded and stored in the computer in sequence. All videos were taken at a resolution of 1024 × 1024 pixels.
And secondly, extracting a vibration displacement signal of the magnetic suspension motor by adopting a phase-based optical flow method and calculating the local phase change of a frequency domain according to the acquired video.
The phase-based optical flow method is: and obtaining image frequency domain information by adopting a direction controllable pyramid constructed on the basis of a Gabor filter, and calculating corresponding motion information according to the phase change in the image frequency domain information. The specific process is as follows:
(1) designing a Gabor filter
The Gabor filter can convert the image space domain information into frequency domain information, wherein a low-pass sub-band part in the frequency domain information contains the global information of the image, and the detail information of the image can be embodied in a high-pass sub-band part. A Gabor filter is designed by adopting a two-dimensional Gabor wavelet as a convolution kernel function, and the corresponding two-dimensional Gabor function expression is as follows:
G(x, y) = exp(−(x_θ² + γ²·y_θ²)/(2σ²)) · exp(i(2π·x_θ/λ + ψ)) (1)
wherein the spatial variables x_θ and y_θ contain the orientation information, expressed as follows:
x_θ = x·cosθ + y·sinθ,  y_θ = −x·sinθ + y·cosθ (2)
wherein x and y represent spatial pixel coordinates; θ ∈ (0°, 360°) represents the orientation of the parallel stripes of the Gabor filter kernel; λ is the wavelength of the sinusoidal factor; ψ is the phase offset of the tuning function; γ is the spatial aspect ratio, which determines the shape of the Gabor function; σ denotes the standard deviation of the Gaussian function, which determines the size of the receptive region of the Gabor filter kernel.
Fig. 2(a) and (b) show the real and imaginary parts of the two-dimensional Gabor filter in the 0°, 45°, 90° and 135° directions, respectively. Using the direction and frequency selectivity of the Gabor filter, θ can be varied to extract displacement information in different directions.
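As a concrete illustration (a sketch, not taken from the patent), the two-dimensional complex Gabor kernel of equations (1)–(2) can be sampled directly on a pixel grid; the parameter names below mirror the symbols in the text.

```python
import numpy as np

def gabor_kernel(size=31, theta=0.0, lam=8.0, psi=0.0, gamma=0.5, sigma=4.0):
    """Complex 2-D Gabor kernel: Gaussian envelope times a complex
    sinusoid along the rotated x-axis, per Eqs. (1)-(2)."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    x_t = x * np.cos(theta) + y * np.sin(theta)    # x_theta
    y_t = -x * np.sin(theta) + y * np.cos(theta)   # y_theta
    envelope = np.exp(-(x_t**2 + gamma**2 * y_t**2) / (2 * sigma**2))
    carrier = np.exp(1j * (2 * np.pi * x_t / lam + psi))
    return envelope * carrier
```

The real and imaginary parts of the returned kernel correspond to the even and odd filters plotted in fig. 2; stepping theta in 45° increments gives the four orientations shown there.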
(2) Constructing a directionally controllable pyramid
A plurality of Gabor filters of different directions are linearly combined while the image is simultaneously scale-transformed, thereby constructing the direction-controllable pyramid. The direction-controllable pyramid decomposes a video frame into a series of sub-bands of different scales and multiple directions; the directional sub-bands are free of aliasing, possess translation and rotation invariance, can flexibly extract the structure and edge information of the image, and convert the image information into frequency-domain information of different scales and different directions.
The implementation of the direction-controllable pyramid is shown in fig. 3. The left part of fig. 3 represents the decomposition process of the pyramid and the right part the reconstruction process. The image is first decomposed by a high-pass filter H₀(ω) and a low-pass filter L₀(ω) into a high-pass and a low-pass sub-band; this is the preprocessing step. The low-pass sub-band image is then decomposed into K directional band-pass sub-bands B_{k−1}(ω) (k = 1, …, K) and a low-pass sub-band L₁(ω), while the rows and columns of L₁(ω) are each down-sampled by a factor of two. The decomposition is repeated on the down-sampled result, iterating until either the row or the column dimension can no longer be down-sampled. The direction-controllable pyramid thus converts the vibration-video image information into frequency-domain information of different scales and different directions.
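The decomposition loop can be sketched as follows. This is a simplified stand-in, not the patent's filter bank: ideal annular and wedge masks in the FFT domain play the roles of H₀/L₀/B_k, and the low-pass residual is down-sampled by two in rows and columns at each iteration, as described above.

```python
import numpy as np

def build_pyramid(img, n_orient=4):
    """Sketch of the fig. 3 decomposition loop: per level, oriented
    band-pass sub-bands are split off with ideal FFT-domain wedge masks
    (a stand-in for the Gabor band-pass filters), then the low-pass
    residual is down-sampled by 2 in rows and columns and the process
    repeats until a dimension can no longer be halved."""
    levels = []
    low = np.asarray(img, dtype=float)
    while low.shape[0] >= 2 and low.shape[1] >= 2:
        F = np.fft.fftshift(np.fft.fft2(low))
        h, w = F.shape
        yy, xx = np.mgrid[0:h, 0:w]
        r = np.hypot(yy - h // 2, xx - w // 2)
        ang = np.arctan2(yy - h // 2, xx - w // 2)
        band = (r > min(h, w) / 8.0) & (r <= min(h, w) / 4.0)  # one octave
        subbands = []
        for k in range(n_orient):
            theta = k * np.pi / n_orient
            # angular distance to the wedge centre, wrapped to (-pi, pi]
            d = np.abs(np.angle(np.exp(1j * (ang - theta))))
            subbands.append(np.fft.ifft2(np.fft.ifftshift(
                F * (band & (d < np.pi / (2 * n_orient))))))
        levels.append(subbands)
        low_f = F * (r <= min(h, w) / 8.0)
        low = np.real(np.fft.ifft2(np.fft.ifftshift(low_f)))[::2, ::2]
    return levels, low
```

A real steerable pyramid would use smooth, self-inverting filters so the right-hand reconstruction path of fig. 3 is exact; the ideal masks here only illustrate the multi-scale, multi-direction structure.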
(3) Extracting a displacement signal
And frequency domain information of different scales and different directions is obtained through the direction controllable pyramid, and displacement signals of corresponding directions can be extracted according to phase information in the frequency domain.
The following takes the horizontal movement as an example to describe a specific method for extracting the displacement signal:
assuming that the time interval between two successive frame images of the video is Δ t, the position where local motion occurs at the spatial position (x, y) is changed to (Δ x, Δ y). Define the first frame as image intensity I (x, y, t)0) The second frame is defined as the image intensity I (x + Deltax, y + Deltay, t)0+. at). The convolution operation is performed by using a two-dimensional Gabor function to convert the image intensity I (x, y, t) into frequency domain information F (x, y, t) as follows:
F(x, y, t) = I(x, y, t) ∗ G(x, y) (3)
Substituting u and v for the independent variables x and y in the primitive function, equation (3) is expressed in integral form as follows:
F(x, y, t) = ∬ I(u, v, t) · exp(−((x−u)² + γ²·(y−v)²)/(2σ²)) · exp(i(2π(x−u)/λ + ψ)) du dv (4)
Taking the extraction of horizontal motion as an example, the spatial variables can be represented as x_θ = x, y_θ = y, so the reference frame can be converted to:
F(x, y, t₀) = ∬ I(u, v, t₀) · exp(−((x−u)² + γ²·(y−v)²)/(2σ²)) · exp(i(2π(x−u)/λ + ψ)) du dv (5)
and the motion frame can be converted to:
F(x+Δx, y+Δy, t₀+Δt) = ∬ I(u, v, t₀+Δt) · exp(−((x+Δx−u)² + γ²·(y+Δy−v)²)/(2σ²)) · exp(i(2π(x+Δx−u)/λ + ψ)) du dv (6)
Rearranging the above equations, putting the phase term independent of the integration variables outside the integral, and then simplifying, we get:
F(x, y, t₀) = exp(i(2πx/λ + ψ)) · ∬ I(u, v, t₀) · exp(−((x−u)² + γ²·(y−v)²)/(2σ²)) · exp(−i·2πu/λ) du dv (7)
F(x+Δx, y+Δy, t₀+Δt) = exp(i(2π(x+Δx)/λ + ψ)) · ∬ I(u, v, t₀+Δt) · exp(−((x+Δx−u)² + γ²·(y+Δy−v)²)/(2σ²)) · exp(−i·2πu/λ) du dv (8)
The phase parts of the above expressions are exp(i(2πx/λ + ψ)) and exp(i(2π(x+Δx)/λ + ψ)) respectively. Since the motion frame is the reference frame translated by (Δx, Δy), the final definite integral results are the same, denoted as
C(x, y) = ∬ I(u, v, t₀) · exp(−((x−u)² + γ²·(y−v)²)/(2σ²)) · exp(−i·2πu/λ) du dv
The phase angles of the two frames are calculated:
φ₁ = 2πx/λ + ψ + arg C(x, y)
φ₂ = 2π(x+Δx)/λ + ψ + arg C(x, y)
and the phase difference of the two frames is obtained:
Δφ = φ₂ − φ₁ = 2π·Δx/λ (9)
the horizontal direction motion is proportional to the phase difference.
Similarly, changing the wavelet direction θ by π/2 measures the motion in the y direction; more generally, changing the wavelet direction estimates the motion along the corresponding direction. The phase difference between two frames thus directly yields the displacement signal between them.
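As an illustration of the relation Δx = λ·Δφ/(2π) (a minimal sketch under the small-motion assumption, not the patent's code): filter the same window of two frames with a horizontally tuned complex Gabor kernel and convert the phase change of the response into sub-pixel displacement.

```python
import numpy as np

def horizontal_displacement(frame0, frame1, lam=8.0, sigma=4.0, half=15):
    """Sub-pixel horizontal motion from the local phase change of a
    horizontally tuned (theta = 0) complex Gabor response:
    delta_x = lam * delta_phi / (2 * pi)."""
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    g = (np.exp(-(x**2 + 0.25 * y**2) / (2 * sigma**2))
         * np.exp(1j * 2 * np.pi * x / lam))
    cy, cx = frame0.shape[0] // 2, frame0.shape[1] // 2
    win = np.s_[cy - half:cy + half + 1, cx - half:cx + half + 1]
    r0 = np.sum(frame0[win] * g)       # one sample of the Gabor response
    r1 = np.sum(frame1[win] * g)
    dphi = np.angle(r1 * np.conj(r0))  # phase difference of the frames
    return lam * dphi / (2 * np.pi)
```

On a sinusoidal texture shifted by a known sub-pixel amount, the estimate recovers the shift to high accuracy, which is exactly why phase-based flow suits sub-pixel rotor micro-motion.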
(4) Noise processing
Noise in the input image sequence produces noise in the phase signal and degrades the final displacement extraction. Since noise manifests mainly as low-amplitude phase signal, the phase is weighted by the local amplitude and spatially Gaussian-blurred to reduce these meaningless signals and lower the noise floor. Before the phase difference is calculated, the signal-to-noise ratio is therefore improved as follows:
for the N-th frame, the weighted phase signal φ̂_N(x, y) can be calculated as:
φ̂_N(x, y) = [ (A_N(x, y) · φ_N(x, y)) ∗ h(x, y) ] / [ A_N(x, y) ∗ h(x, y) ] (10)
wherein A_N and φ_N respectively represent the amplitude and phase signals of the N-th frame; h(x, y) is a two-dimensional Gaussian function, which can be expressed as:
h(x, y) = (1/(2πρ²)) · exp(−(x² + y²)/(2ρ²)) (11)
the standard deviation rho of the Gaussian filter represents the width of the spatial domain filter, and the larger the standard deviation rho is, the wider the two-dimensional Gaussian image is, and the better the filtering effect is. The method has small calculation amount, improves the signal-to-noise ratio, reduces the lower limit of noise and better reflects actual signals.
And step three, carrying out second-order derivation on the displacement signal to obtain an acceleration signal.
Second-order derivation is performed on the displacement signal with the LoG operator to calculate the acceleration signal. The LoG operator, i.e. the Laplacian of Gaussian, combines a Gaussian filter with the Laplacian; its kernel function is as follows:
LoG(x, y) = −(1/(πσ⁴)) · [1 − (x² + y²)/(2σ²)] · exp(−(x² + y²)/(2σ²)) (12)
and filtering the displacement signal through a Gaussian function, and performing second-order derivation on the filtered displacement signal through a Laplace function. And the second derivative result of the displacement signal is an acceleration signal.
And step four, obtaining a spectrogram by utilizing Fast Fourier Transform (FFT), and comparing the spectrogram with the measurement result of the accelerometer.
And performing Fast Fourier Transform (FFT) on the acceleration signal obtained in the step three to obtain a spectrogram of the acceleration signal, wherein the transform process is the prior art and is not an innovation point of the invention, and a specific transform process is not repeated herein.
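Step four is a standard one-sided FFT; a minimal sketch follows (the window choice is illustrative, not specified by the patent):

```python
import numpy as np

def vibration_spectrum(acc, fps):
    """One-sided amplitude spectrum of the acceleration signal; the
    dominant peak marks the rotor vibration frequency."""
    n = len(acc)
    windowed = acc * np.hanning(n)            # reduce spectral leakage
    spec = np.abs(np.fft.rfft(windowed)) * 2.0 / n
    freqs = np.fft.rfftfreq(n, d=1.0 / fps)   # frequency axis in Hz
    return freqs, spec
```

Reading off the frequency at the spectrum's peak gives the quantity compared against the accelerometer measurement.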
In summary, guided by the idea of vision-based measurement, the invention applies a phase-based optical flow method to extract displacement from the motor vibration images collected by a high-speed camera, obtains the acceleration signal by second-order derivation, and finally obtains the required spectrogram through a fast Fourier transform.
The above description is only a preferred embodiment of the present invention and is not intended to limit the present invention, and various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (5)

1.一种基于视觉的磁悬浮转子振动测量方法,其特征在于:包括以下步骤:1. a vision-based magnetic levitation rotor vibration measurement method, is characterized in that: comprise the following steps: S1.通过安装有光学透镜的高速相机采集磁悬浮转子在不同工作状态下的振动视频;S1. Collect the vibration video of the magnetic levitation rotor under different working states through a high-speed camera installed with an optical lens; S2.采用以Gabor滤波器为基础构建的方向可控金字塔将所述振动视频包含的图像空间域信息转换得到不同尺度、不同方向的图像频域信息,根据图像频域信息中的局部相位变化提取磁悬浮电机的振动位移信号;S2. Using a direction-controllable pyramid constructed on the basis of a Gabor filter to convert the image space domain information contained in the vibration video to obtain image frequency domain information of different scales and directions, and extract the image frequency domain information according to the local phase change in the image frequency domain information. Vibration displacement signal of magnetic levitation motor; S3.采用LOG算子对所述振动位移信号进行二阶求导,计算得到加速度信号;S3. Use the LOG operator to carry out second-order derivation to the vibration displacement signal, and calculate the acceleration signal; S4.对所述加速度信号进行快速傅立叶变换得到目标频谱图;S4. Fast Fourier transform is performed on the acceleration signal to obtain a target spectrogram; 所述步骤S2具体包括:The step S2 specifically includes: S21.采用二维Gabor小波作为卷积核函数设计Gabor滤波器,通过所述Gabor滤波器将振动视频包含的视频图像空间域信息转换成频域信息;所述频域信息中的低通子带部分包含视频图像的全局信息,高通子带部分包含视频图像的细节信息;所述Gabor滤波器对应的二维Gabor函数表达式如下:S21. use two-dimensional Gabor wavelet as the convolution kernel function to design Gabor filter, convert the video image space domain information contained in the vibration video into frequency domain information by the Gabor filter; the low-pass subband in the frequency domain information The part contains the global information of the video image, and the high-pass subband part contains the detailed information of the video image; The two-dimensional Gabor function expression corresponding to the Gabor filter is as follows:
Figure FDA0002944132390000011
Figure FDA0002944132390000011
其中,x和y代表空间像素坐标;θ∈(0°,360°),表示Gabor滤波核中平行条带的方向;λ是正弦函数的波长;ψ是调谐函数的相位偏移;γ是空间纵横比;σ表示高斯函数的标准偏差;xθ和yθ均表示包含方向信息的空间变量,其表达式如下:where x and y represent spatial pixel coordinates; θ∈(0°, 360°), represents the direction of parallel strips in the Gabor filter kernel; λ is the wavelength of the sine function; ψ is the phase offset of the tuning function; γ is the spatial Aspect ratio; σ represents the standard deviation of the Gaussian function; x θ and y θ both represent spatial variables containing directional information, and their expressions are as follows: xθ=xcosθ+ysinθ,yθ=-xsinθ+ycosθx θ =xcosθ+ysinθ,y θ =-xsinθ+ycosθ S22.将用多个不同方向的Gabor滤波器进行线性组合,同时对视频图像进行尺度变换,从而构建方向可控金字塔;S22. Linearly combine multiple Gabor filters in different directions, and perform scale transformation on the video image at the same time, so as to construct a direction-controllable pyramid; S23.通过所述方向可控金字塔将视频图像分解成不同尺度、多个方向包含视频图像的结构信息和边缘信息的子带系列,由此完成将所述视频图像信息转换到不同尺度、不同方向的频域信息;根据频域信息中两帧的相角计算两帧的相位差,由此得到对应方向的位移信号;S23. The video image is decomposed into subband series of different scales and multiple directions including structural information and edge information of the video image through the direction-controllable pyramid, thereby completing the conversion of the video image information to different scales and different directions. 
In step S23, the phase difference between the two frames is computed from their phase angles in the frequency-domain information, and the displacement signal in the corresponding direction is obtained, as follows:

Assume the time interval between two consecutive video frames is Δt and a local motion (Δx, Δy) occurs at spatial position (x, y). Define the first frame, with image intensity I(x, y, t₀), as the reference frame, and the second frame, with image intensity I(x+Δx, y+Δy, t₀+Δt), as the motion frame.

Convolving with the two-dimensional Gabor function converts the image intensity I(x, y, t) into the frequency-domain information F(x, y, t):
$$F(x,y,t)=I(x,y,t)*g(x,y)$$
In integral form this reads:
$$F(x,y,t)=\iint I(u,v,t)\,g(x-u,y-v)\,\mathrm{d}u\,\mathrm{d}v$$
Taking the extraction of horizontal motion as an example, let the spatial variables reduce to x_θ = x and y_θ = y. The reference frame then transforms to:
$$F(x,y,t_0)=\iint I(u,v,t_0)\exp\left(-\frac{(x-u)^{2}+\gamma^{2}(y-v)^{2}}{2\sigma^{2}}\right)\exp\left(i\left(2\pi\frac{x-u}{\lambda}+\psi\right)\right)\mathrm{d}u\,\mathrm{d}v$$
and the motion frame transforms to:
$$F(x+\Delta x,y,t_0+\Delta t)=\iint I(u,v,t_0+\Delta t)\exp\left(-\frac{(x+\Delta x-u)^{2}+\gamma^{2}(y-v)^{2}}{2\sigma^{2}}\right)\exp\left(i\left(2\pi\frac{x+\Delta x-u}{\lambda}+\psi\right)\right)\mathrm{d}u\,\mathrm{d}v$$

Rearranging the reference-frame and motion-frame expressions above, moving the phase term that is independent of the integration variables outside the integral, and simplifying gives:

$$F(x,y,t_0)=e^{\,i\,2\pi x/\lambda}\iint I(u,v,t_0)\exp\left(-\frac{(x-u)^{2}+\gamma^{2}(y-v)^{2}}{2\sigma^{2}}\right)e^{\,i\left(\psi-2\pi u/\lambda\right)}\,\mathrm{d}u\,\mathrm{d}v$$

$$F(x+\Delta x,y,t_0+\Delta t)=e^{\,i\,2\pi(x+\Delta x)/\lambda}\iint I(u,v,t_0+\Delta t)\exp\left(-\frac{(x+\Delta x-u)^{2}+\gamma^{2}(y-v)^{2}}{2\sigma^{2}}\right)e^{\,i\left(\psi-2\pi u/\lambda\right)}\,\mathrm{d}u\,\mathrm{d}v$$
Denote the phase part of the above integrands as φ(u) = ψ − 2πu/λ. Since the simplified reference-frame and motion-frame formulas share this same phase term inside the integral, their definite integrals take the same value, denoted C. The phase angles of the two frames are then computed as:
$$\angle F(x,y,t_0)=\frac{2\pi x}{\lambda}+\angle C$$

$$\angle F(x+\Delta x,y,t_0+\Delta t)=\frac{2\pi(x+\Delta x)}{\lambda}+\angle C$$
The phase difference between the two frames in the horizontal direction is therefore:
$$\Delta\varphi=\angle F(x+\Delta x,y,t_0+\Delta t)-\angle F(x,y,t_0)=\frac{2\pi\,\Delta x}{\lambda}$$
Thereby the displacement signal in the corresponding direction is obtained as Δx = λΔφ/(2π).
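The phase-to-displacement relationship of claim 1 can be sketched numerically. The following Python fragment is an illustrative sketch only, not the patented implementation: the kernel size, σ, γ, λ, and the test pattern are arbitrary choices, and the pattern period is deliberately matched to λ so that the local phase tracks the motion. It builds a complex Gabor kernel from the claimed formula, filters a reference frame and a horizontally shifted motion frame, and recovers the shift from the local phase difference Δx = λΔφ/(2π):

```python
import numpy as np

def gabor_kernel(size=31, lam=8.0, theta=0.0, psi=0.0, sigma=4.0, gamma=0.5):
    """Complex 2-D Gabor kernel g(x, y) per the claimed formula."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    x_t = x * np.cos(theta) + y * np.sin(theta)
    y_t = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(x_t ** 2 + gamma ** 2 * y_t ** 2) / (2 * sigma ** 2))
    carrier = np.exp(1j * (2 * np.pi * x_t / lam + psi))
    return envelope * carrier

lam = 8.0
dx = 0.5                                    # sub-pixel horizontal shift to recover
xx = np.arange(128)
ref = np.cos(2 * np.pi * xx / lam)          # reference frame row, period matched to lam
mov = np.cos(2 * np.pi * (xx - dx) / lam)   # motion frame row, shifted right by dx

# The pattern varies only horizontally, so the 2-D convolution collapses to a
# 1-D convolution with the row-integrated kernel profile.
g1d = gabor_kernel(lam=lam).sum(axis=0)
F0 = np.convolve(ref, g1d, mode="same")     # frequency-domain info of reference frame
F1 = np.convolve(mov, g1d, mode="same")     # frequency-domain info of motion frame

# At a fixed pixel the shifted frame's phase lags, so take reference minus motion;
# np.angle of the conjugate product wraps the difference into (-pi, pi].
dphi = np.angle(F0[64] * np.conj(F1[64]))
dx_est = lam * dphi / (2 * np.pi)
print(f"recovered shift: {dx_est:.3f} px")  # close to dx = 0.5
```

With the pattern period equal to λ the recovered shift matches the imposed one to sub-pixel accuracy; with mismatched periods the phase gradient no longer equals 2π/λ and the estimate degrades, which is the usual caveat of phase-based motion extraction.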
2. The vision-based vibration measurement method for a magnetic suspension rotor according to claim 1, characterized in that the direction-controllable pyramid is realized as follows: the video image is first decomposed by a high-pass filter H₀(ω) and a low-pass filter L₀(ω) into a high-pass and a low-pass subband; the low-pass subband image is then decomposed into K band-pass subbands B_{k−1}(ω) in different orientations and a low-pass subband L₁(ω), whose rows and columns are each subsampled by a factor of two; after subsampling, the above decomposition is repeated iteratively until either the row or the column dimension can no longer be downsampled.

3. The vision-based vibration measurement method for a magnetic suspension rotor according to claim 1, characterized in that step S23 further comprises applying a weighted spatial Gaussian blur to the phase angle to improve the signal-to-noise ratio, specifically: for the N-th frame, the weighted phase signal
$$\hat{\varphi}_N(x,y)$$ is calculated as:

$$\hat{\varphi}_N(x,y)=\frac{\left(A_N^{2}\,\varphi_N\right)*h(x,y)}{A_N^{2}*h(x,y)}$$
where A_N denotes the amplitude of the N-th frame and φ_N its phase; h(x, y) is a two-dimensional Gaussian function, expressed as:
$$h(x,y)=\frac{1}{2\pi\rho^{2}}\exp\left(-\frac{x^{2}+y^{2}}{2\rho^{2}}\right)$$
where the standard deviation ρ of the Gaussian filter sets the width of the spatial-domain filter.
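The amplitude-weighted Gaussian blur of claim 3 can be sketched as follows. This Python fragment is illustrative only: the kernel size, ρ, and the choice of squared-amplitude weighting follow common phase-based video processing practice and are assumptions, not a statement of the patented formula.

```python
import numpy as np

def gaussian_kernel(rho=2.0, size=11):
    """2-D Gaussian h(x, y) with standard deviation rho, normalized to unit sum."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    h = np.exp(-(x ** 2 + y ** 2) / (2 * rho ** 2)) / (2 * np.pi * rho ** 2)
    return h / h.sum()

def conv2_same(img, ker):
    """Minimal 2-D 'same' convolution via zero-padded FFT."""
    H, W = img.shape
    kh, kw = ker.shape
    out = np.fft.ifft2(np.fft.fft2(img, (H + kh - 1, W + kw - 1)) *
                       np.fft.fft2(ker, (H + kh - 1, W + kw - 1)))
    return out[kh // 2:kh // 2 + H, kw // 2:kw // 2 + W]

def weighted_phase_blur(amplitude, phase, rho=2.0):
    """Amplitude-weighted Gaussian blur of the local phase (SNR boost)."""
    h = gaussian_kernel(rho)
    w = amplitude ** 2                      # assumed weighting; plain A is also used
    num = conv2_same(w * phase, h).real
    den = conv2_same(w, h).real
    return num / np.maximum(den, 1e-12)    # weighted local average of the phase

rng = np.random.default_rng(0)
A = np.ones((32, 32))
phi = 0.3 + 0.05 * rng.standard_normal((32, 32))   # noisy phase around 0.3 rad
phi_s = weighted_phase_blur(A, phi)
print(float(phi_s.std()) < float(phi.std()))       # smoothing reduces phase noise
```

Dividing by the blurred weight rather than blurring the phase alone keeps low-amplitude (phase-unreliable) pixels from contaminating their neighbours, which is the point of the weighting.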
4. The vision-based vibration measurement method for a magnetic suspension rotor according to any one of claims 1 to 3, characterized in that in step S1 the high-speed camera has a maximum resolution of 4096×3076, its pixel size and exposure time are continuously adjustable within their respective ranges, and an LED lamp is used as the light source.

5. The vision-based vibration measurement method for a magnetic suspension rotor according to any one of claims 1 to 3, characterized in that step S1 specifically comprises: setting the rotational speed of the magnetic suspension rotor to 6000 rpm, 9000 rpm, 12000 rpm and 15000 rpm, setting the frame rate of the high-speed camera to 300 fps, 500 fps, 600 fps and 800 fps, and recording and saving the corresponding vibration video sequences.
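The decompose-then-downsample recursion of claim 2 can be sketched as follows. This Python fragment is a structural sketch only: the 1-2-1 binomial low-pass and the single high-pass residual stand in for the patent's L₀(ω)/L₁(ω) filters and oriented band-pass bank B_{k−1}(ω), which would require the full steerable-filter design.

```python
import numpy as np

def lowpass(img):
    """Cheap separable 1-2-1 binomial low-pass (stand-in for L1(w))."""
    k = np.array([0.25, 0.5, 0.25])
    img = np.apply_along_axis(np.convolve, 0, img, k, mode="same")
    img = np.apply_along_axis(np.convolve, 1, img, k, mode="same")
    return img

def pyramid_levels(img, min_size=2):
    """Repeat: low-pass, keep a (stand-in) band-pass residual, subsample rows
    and columns by two; stop when either dimension can no longer be halved."""
    levels = []
    while min(img.shape) >= 2 * min_size:
        low = lowpass(img)
        levels.append(img - low)     # band-pass residual (stand-in for Bk(w))
        img = low[::2, ::2]          # two-fold subsampling of rows and columns
    levels.append(img)               # final low-pass residue
    return levels

img = np.random.default_rng(1).standard_normal((64, 48))
pyr = pyramid_levels(img)
print([lvl.shape for lvl in pyr])    # 5 levels, down to (4, 3)
```

The stopping rule mirrors the claim: iteration halts as soon as the shorter image dimension can no longer be subsampled by two, so a 64×48 input yields four band-pass levels plus the final low-pass residue.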
CN201910246595.1A 2019-03-29 2019-03-29 A Vision-Based Vibration Measurement Method for Magnetic Suspension Rotors Active CN110084127B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910246595.1A CN110084127B (en) 2019-03-29 2019-03-29 A Vision-Based Vibration Measurement Method for Magnetic Suspension Rotors

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910246595.1A CN110084127B (en) 2019-03-29 2019-03-29 A Vision-Based Vibration Measurement Method for Magnetic Suspension Rotors

Publications (2)

Publication Number Publication Date
CN110084127A CN110084127A (en) 2019-08-02
CN110084127B true CN110084127B (en) 2021-06-22

Family

ID=67413727

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910246595.1A Active CN110084127B (en) 2019-03-29 2019-03-29 A Vision-Based Vibration Measurement Method for Magnetic Suspension Rotors

Country Status (1)

Country Link
CN (1) CN110084127B (en)

Families Citing this family (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110440902B (en) * 2019-08-29 2021-05-14 合肥工业大学 Non-contact micro-vibration vision measurement method
CN110866892B (en) * 2019-09-30 2022-04-08 南京航空航天大学 A vision-based offline vibration measurement and analysis method and system
CN112001361B (en) * 2019-12-26 2022-06-07 合肥工业大学 A multi-target micro-vibration frequency measurement method based on Euler perspective
CN111353400B (en) * 2020-02-24 2024-04-02 南京航空航天大学 Full-scene vibration intensity spectrum analysis method based on visual vibration measurement
CN112348782B (en) * 2020-10-27 2022-03-29 天津大学 Change detection method based on plurality of controllable pyramids
CN112254801B (en) * 2020-12-21 2021-04-02 浙江中自庆安新能源技术有限公司 Micro-vibration vision measurement method and system
CN113409274B (en) * 2021-06-21 2024-11-08 珠海格力电器股份有限公司 Magnetic bearing rotor position detection method, device, medium, controller and system
CN115479556B (en) * 2021-07-15 2025-01-14 四川大学 Binary defocusing three-dimensional measurement method and device for subtracting phase error mean value
CN113724334B (en) * 2021-07-19 2023-09-15 广东工业大学 An elevator vibration detection method and system based on machine vision
CN113627325B (en) * 2021-08-10 2025-02-14 上海其高电子科技有限公司 Video-based vibration location and analysis method
CN114049310B (en) * 2021-10-26 2024-03-08 西北工业大学 An image analysis method for the relative position of the magnetic bearing rotor and its protective bearing
CN113837150B (en) * 2021-11-25 2022-02-11 湖南大学 Non-contact tire pressure acquisition method and related device based on computer vision
CN120252523B (en) * 2025-04-02 2025-09-12 天津大学 A vibration displacement measurement method and measurement system based on multi-source data fusion
CN120222852B (en) * 2025-05-28 2025-08-22 宁波码实智能科技有限公司 Multi-degree-of-freedom motion control system and method for magnetic levitation motor

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120041695A1 (en) * 2010-08-16 2012-02-16 Csi Technology, Inc. Integrated vibration measurement and analysis system
CN104089697B (en) * 2014-07-08 2017-02-15 安徽常春藤光电智能科技有限公司 Real-time online visual vibration measurement method based on thread pool concurrent technology
CN104048744B (en) * 2014-07-08 2017-03-08 安徽常春藤光电智能科技有限公司 A kind of contactless real-time online vibration measurement method based on image
JP6277147B2 (en) * 2015-03-04 2018-02-07 日本電信電話株式会社 Optical fiber vibration measurement method and system
TWI586943B (en) * 2016-03-11 2017-06-11 國立勤益科技大學 Enhanced-fft online machine vibration measurement system and method
CN106885622B (en) * 2017-02-07 2019-08-30 上海理工大学 A large field of view multi-point three-dimensional vibration measurement method
CN108414240B (en) * 2018-03-15 2020-08-11 广东工业大学 Method and device for detecting abnormal vibration of machine
CN108593087A (en) * 2018-03-29 2018-09-28 湖南科技大学 A kind of thin-wall part operational modal parameter determines method and system

Also Published As

Publication number Publication date
CN110084127A (en) 2019-08-02

Similar Documents

Publication Publication Date Title
CN110084127B (en) A Vision-Based Vibration Measurement Method for Magnetic Suspension Rotors
CN112254801B (en) Micro-vibration vision measurement method and system
Chen et al. Modal identification of simple structures with high-speed video using motion magnification
US11631432B1 (en) Apparatus and method for visualizing periodic motions in mechanical components
Bhowmick et al. Measurement of full-field displacement time history of a vibrating continuous edge from video
CN110866892B (en) A vision-based offline vibration measurement and analysis method and system
Chen et al. Developments with motion magnification for structural modal identification through camera video
CN113421224A (en) Cable structure health monitoring method and system based on vision
CN112001361B (en) A multi-target micro-vibration frequency measurement method based on Euler perspective
US8995793B1 (en) Moving object super-resolution systems and methods
CN114528887A (en) Bridge monitoring method, system and device based on micro-vibration amplification technology
CN110440902B (en) Non-contact micro-vibration vision measurement method
Guo et al. Structural vibration measurement based on improved phase-based motion magnification and deep learning
CN111353400B (en) Full-scene vibration intensity spectrum analysis method based on visual vibration measurement
Wang et al. Motion estimation from noisy data with unknown distributions using multi-frame phase-preserving denoising
He et al. Enhancing measurement precision for rotor vibration displacement via a progressive video super resolution network
CN114862809A (en) Vibration monitoring method and device based on mobile terminal and image processing
Yang et al. Sparse representation of complex steerable pyramid for machine fault diagnosis by using non-contact video motion to replace conventional accelerometers
CN116295790B (en) Frequency detection method and system based on inter-frame phase difference of bridge inhaul cable characteristic region
CN115982526B (en) Structural mode recognition method based on image phase in video stream
CN113393392A (en) Dynamic target ghost imaging system and method based on neural network
CN116183226B (en) Bearing test bed vibration displacement measurement and modal analysis algorithm based on phase
Duan et al. Video Motion Magnification and Subpixel Edge Detection‐Based Full‐Field Dynamic Displacement Measurement
CN119122821A (en) Centrifugal pump fault diagnosis method and system based on visual sensing technology
CN116152716A (en) Identification method for lost mode in binocular vision dynamics mode parameter identification

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant