Hilbert transform-based phase evaluation method for dynamic monitoring of civil engineering structures
Technical Field
The invention relates to the technical field of dynamic monitoring of civil engineering structures, and in particular to a phase evaluation method for dynamic monitoring of a civil engineering structure based on the Hilbert transform.
Background
Many civil engineering structures, including bridges, dams, highways and buildings, were built decades ago and have exceeded their design life. Structural health monitoring (SHM for short) enables lower-cost condition-based maintenance, supports prediction of the structural state, and helps prevent unexpected catastrophic failures.
Instrumenting a structure with standard wired accelerometers for SHM is labor intensive, because wiring and physical placement of the sensors are cumbersome. It is also difficult and inefficient to measure a structure densely. Furthermore, when the structure is small relative to the accelerometer, the additional mass of the accelerometer can affect the accuracy of the monitoring results.
Accordingly, non-contact measurement methods have been developed. While these overcome most of the above disadvantages, inspecting a large structure by ultrasonic testing or similar non-destructive testing (NDT) methods remains very time consuming. Other non-contact vibration measurement techniques, such as displacement measurement with scanning laser vibrometers, provide high-spatial-resolution sensing without mounting sensors on the structure or introducing mass-loading effects; however, these devices are relatively expensive and require sequential measurements, which is time consuming and labor intensive when the required sensing area is large.
As a non-contact method, digital cameras are relatively low cost, flexible to operate, and provide simultaneous measurements with very high spatial resolution. Video-based measurements combined with image processing algorithms have been used successfully for vibration measurements of various types of structures, and recently digital image correlation (abbreviated as DIC) and three-dimensional point tracking (abbreviated as 3DPT) techniques have been introduced into SHM. However, these methods typically require a speckle pattern or high-contrast markers to be placed on the structure surface, after which the motion field is estimated from the image intensity correlation. On the one hand, these requirements create surface-preparation and target-mounting problems, especially when the measurement area is large or difficult to reach; since targets must be placed on the structure, this negates the ease of measurement offered by the camera. On the other hand, image-intensity correlation methods are very sensitive to noise and disturbances, such as sudden changes in scene illumination, which can cause errors in the estimated motion field. Therefore, a phase-based motion estimation algorithm (abbreviated PME) requiring neither targets nor any surface preparation has been introduced into SHM. The phase behavior is relatively insensitive to photometric distortions (e.g., shadows and highlights), and the phase profile provides a good approximation of the motion field. Under normal lighting conditions, phase-based computer vision algorithms can use natural features of the structure or of the image for motion estimation, thereby avoiding the surface preparation and target mounting described above.
PME is based on phase analysis with the Gabor wavelet, which is essentially a sinusoidal function modulated by a Gaussian envelope, so how well the Gaussian envelope matches the image information region directly affects the performance of the algorithm. In practical applications, the PME algorithm depends strongly on prior knowledge of the measured object and usually requires the signal frequency to be predicted in advance; otherwise, the phase of the image and the motion field is identified incorrectly and the phase estimation result is inaccurate.
There is therefore an urgent need to solve the above problems.
Disclosure of Invention
In order to solve the above technical problems, the invention provides a phase evaluation method for dynamic monitoring of a civil engineering structure based on the Hilbert transform.
The technical scheme is as follows:
A phase evaluation method for dynamic monitoring of a civil engineering structure based on the Hilbert transform, characterized by comprising the following steps:
S1, calculating the resonant frequency f_s of an image sequence I_n(x, y, t);
S2, processing the image sequence I_n(x, y, t) by a two-dimensional fast Fourier transform to transfer it from the spatial domain to the frequency domain, obtaining a frequency matrix F(u, v, t);
S3, filtering the frequency matrix F(u, v, t) over the frequency interval [f_s − 0.5, f_s + 0.5] with a Butterworth ideal band-pass filter whose center frequency is f_s and whose bandwidth is 1 Hz, obtaining a filtered frequency matrix F_B(u, v, t);
S4, applying a two-dimensional inverse Fourier transform to the filtered frequency matrix F_B(u, v, t) to obtain a single-period image sequence I_s(x, y, t);
S5, processing the single-period image sequence I_s(x, y, t) based on the Hilbert transform, and calculating and evaluating the motion of the image.
Preferably, the step S1 is performed according to the following steps:
S11, selecting the gray-scale time series I_n(a, b, t) of an arbitrary position (a, b) in the image sequence I_n(x, y, t);
S12, processing the gray-scale time series I_n(a, b, t) by a fast Fourier transform to transfer it from the time domain to the frequency domain, obtaining a frequency sequence X(f);
S13, processing the frequency sequence X(f) by a power spectral density method to obtain the resonant frequency f_s.
By adopting these steps, the resonant frequency f_s can be calculated simply and conveniently.
Preferably: in step S12, the gray-scale time series I_n(a, b, t) is subjected to a fast Fourier transform to obtain the frequency sequence X(f), whose expression is:
X(f) = ∫ I_n(a, b, t) e^(−j2πft) dt    (1)
in formula (1), a and b represent the coordinates of the selected position in the image, t represents time, f represents frequency, and j represents the imaginary unit;
in step S13, the frequency sequence X(f) is processed by the power spectral density method to obtain the following expression:
PSD(f) = |X(f)|²    (2)
the frequency f corresponding to the maximum value of PSD(f) is calculated and taken as f_s.
By adopting these expressions, the resonant frequency f_s can be calculated accurately.
Preferably: in step S2, the frequency matrix F(u, v, t) obtained by performing the two-dimensional fast Fourier transform on the image sequence I_n(x, y, t) has the expression:
F(u, v, t) = ∬ I_n(x, y, t) e^(−j2π(ux+vy)) dx dy    (3)
in formula (3), x and y represent the spatial coordinates in the image domain, u and v represent the corresponding coordinates of x and y in the frequency domain, and j represents the imaginary unit;
in step S3, the relationship between the filtered frequency matrix F_B(u, v, t) and the frequency matrix F(u, v, t) is:
F_B(u, v, t) = F(u, v, t)·BH(u, v)    (4)
in formula (4), BH(u, v) represents the Butterworth ideal band-pass filter in the frequency domain, where the expression of BH(u, v) is:
BH(u, v) = 1 when D_2 ≤ D(u, v) ≤ D_1, and BH(u, v) = 0 otherwise    (5)
in formula (5), D(u, v) = [(u − M/2)² + (v − N/2)²]^(1/2), D(u, v) represents the distance from the point (u, v) to the center of the frequency plane, M and N represent the length and width of the image, respectively, D_1 = f_s + 0.5, and D_2 = f_s − 0.5;
in step S4, the filtered frequency matrix F_B(u, v, t) is subjected to a two-dimensional inverse Fourier transform to obtain the single-period image sequence I_s(x, y, t), whose expression is:
I_s(x, y, t) = ∬ F_B(u, v, t) e^(j2π(ux+vy)) du dv    (6)
With the above method, not only can the frequency matrix F(u, v, t) be calculated simply, but the Butterworth ideal band-pass filter can also be set according to the resonant frequency f_s, ensuring the filtering effect and finally yielding the single-period image sequence I_s(x, y, t).
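A minimal sketch of the band-pass mask BH(u, v) of formula (5), written in Python with NumPy; the function name and the assumption of a centered (fftshift-ed) spectrum, so that the zero frequency lies at (M/2, N/2), are illustrative only:

```python
import numpy as np

def bandpass_mask(M, N, fs):
    """Ideal ring band-pass mask BH(u, v): 1 where D2 <= D(u, v) <= D1, else 0."""
    D1, D2 = fs + 0.5, fs - 0.5
    u = np.arange(M)[:, None]                          # row index
    v = np.arange(N)[None, :]                          # column index
    D = np.sqrt((u - M / 2) ** 2 + (v - N / 2) ** 2)   # distance to the spectrum centre
    return ((D >= D2) & (D <= D1)).astype(float)
```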
Preferably, the step S5 includes:
S51, regarding each frame of the single-period image sequence I_s(x, y, t) as a matrix;
S52, calculating the movement of the single-period image sequence I_s(x, y, t) in the x direction:
for time t, the m-th row of the frame image matrix is represented by the following one-dimensional sinusoidal function:
I_sx(x, y_m, t) = L_x cos(2πf_0x·x + ψ_x)    (7)
in formula (7), L_x represents the local amplitude in the x direction, f_0x represents the frequency of the sinusoidal signal in the x direction, and ψ_x represents the phase shift in the x direction;
performing the Hilbert transform on I_sx(x, y_m, t) gives the following relation:
H[I_sx(x, y_m, t)] = (1/π) ∫ I_sx(τ_1, y_m, t) / (x − τ_1) dτ_1    (8)
in formula (8), τ_1 represents the variable of integration;
the analytic signal of I_sx(x, y_m, t) is:
I_Asx(x, y_m, t) = I_sx(x, y_m, t) + i·H[I_sx(x, y_m, t)] = L_x e^(i(2πf_0x·x + ψ_x))    (9)
in formula (9), i represents the imaginary unit;
therefore, the phase of I_Asx(x, y_m, t) is:
φ_x(x, y_m, t) = 2πf_0x·x + ψ_x    (10)
similarly, the phase of the analytic signal I_Asx(x + δ_x, y_m, t + Δt_1) of the next frame image matrix can be derived as:
φ_x(x + δ_x, y_m, t + Δt_1) = 2πf_0x·(x + δ_x) + ψ_x    (11)
in formula (11), δ_x represents the displacement of the image in the x direction over the time increment Δt_1;
therefore, in the x direction, the relationship between the phase change of the analytic signals of two consecutive frames and the motion information is:
Δφ_x = φ_x(x + δ_x, y_m, t + Δt_1) − φ_x(x, y_m, t) = 2πf_0x·δ_x    (12)
the motion in the x direction can thus be evaluated by formula (12);
S53, calculating the movement of the single-period image sequence I_s(x, y, t) in the y direction:
for time t, the n-th column of the frame image matrix is represented by the following one-dimensional sinusoidal function:
I_sy(x_n, y, t) = L_y cos(2πf_0y·y + ψ_y)    (13)
in formula (13), L_y represents the local amplitude in the y direction, f_0y represents the frequency of the sinusoidal signal in the y direction, and ψ_y represents the phase shift in the y direction;
performing the Hilbert transform on I_sy(x_n, y, t) gives the following relation:
H[I_sy(x_n, y, t)] = (1/π) ∫ I_sy(x_n, τ_2, t) / (y − τ_2) dτ_2    (14)
in formula (14), τ_2 represents the variable of integration;
the analytic signal of I_sy(x_n, y, t) is:
I_Asy(x_n, y, t) = I_sy(x_n, y, t) + i·H[I_sy(x_n, y, t)] = L_y e^(i(2πf_0y·y + ψ_y))    (15)
therefore, the phase of I_Asy(x_n, y, t) is:
φ_y(x_n, y, t) = 2πf_0y·y + ψ_y    (16)
similarly, the phase of the analytic signal I_Asy(x_n, y + δ_y, t + Δt_2) of the next frame image matrix can be derived as:
φ_y(x_n, y + δ_y, t + Δt_2) = 2πf_0y·(y + δ_y) + ψ_y    (17)
in formula (17), δ_y represents the displacement of the image in the y direction over the time increment Δt_2;
therefore, in the y direction, the relationship between the phase change of the analytic signals of two consecutive frames and the motion information is:
Δφ_y = φ_y(x_n, y + δ_y, t + Δt_2) − φ_y(x_n, y, t) = 2πf_0y·δ_y    (18)
the motion in the y direction can thus be evaluated by formula (18);
S54, the motion of the civil engineering structure is evaluated comprehensively on the basis of formulas (12) and (18).
By adopting this method, the single-period image sequence I_s(x, y, t) can be processed simply and quickly based on the Hilbert transform, the motion in the x direction and in the y direction can be evaluated separately, and the motion of the image can finally be evaluated comprehensively.
Compared with the prior art, the invention has the beneficial effects that:
1. The method can obtain an effective phase difference even when the measured signal is unknown, and relates this phase difference to the motion of the signal to realize monitoring of the civil engineering structure; it thus solves the problem that the traditional PME method depends strongly on prior knowledge of the measured object, and is simpler and easier to use;
2. The evaluation is based on the Fourier transform and the Hilbert transform and is carried out mainly in the frequency domain, so its computational complexity is far lower than that of the Gabor wavelet transform and spatial-domain image convolution used by the traditional PME; the computational efficiency is therefore greatly improved, which makes the method better suited to dynamic structural monitoring and provides a simpler and more practical new approach to the dynamic monitoring of civil infrastructure.
Drawings
FIG. 1 is a flow chart of the present invention;
FIG. 2 is a schematic diagram of the gray-scale time series I_n(a, b, t);
FIG. 3 is a schematic diagram of the resonant frequency f_s calculated using the power spectral density;
FIG. 4 is a schematic diagram of filtering using a Butterworth ideal band pass filter;
FIG. 5 is a comparison graph of PME and HPME recognition results and real motion under different Gabor parameters;
FIG. 6 is a bar graph of the correlation between the HPME method and the PME method;
FIG. 7 is a schematic diagram of a cantilever experiment;
FIG. 8 is a graph comparing displacement measured by the laser sensor with recognition results obtained by the HPME method and the PME method;
FIG. 9 is a graph showing the correlation between the identification results of the HPME method and the PME method and the displacement measured by the laser sensor under different conditions.
Detailed Description
The present invention will be further described with reference to the following examples and the accompanying drawings.
As shown in FIG. 1, a phase evaluation method for dynamic monitoring of a civil engineering structure based on the Hilbert transform is performed according to the following steps:
S1, calculating the resonant frequency f_s of an image sequence I_n(x, y, t).
Specifically, step S1 is performed according to the following steps:
S11, referring to FIG. 2, selecting the gray-scale time series I_n(a, b, t) of an arbitrary position (a, b) in the image sequence I_n(x, y, t).
S12, processing the gray-scale time series I_n(a, b, t) by a fast Fourier transform to transfer it from the time domain to the frequency domain, obtaining the frequency sequence X(f), whose expression is:
X(f) = ∫ I_n(a, b, t) e^(−j2πft) dt    (1)
in formula (1), a and b represent the coordinates of the selected position in the image, t represents time, f represents frequency, and j represents the imaginary unit.
S13, referring to FIG. 3, the frequency sequence X(f) is processed by the power spectral density method to obtain the following expression:
PSD(f) = |X(f)|²    (2)
The frequency f corresponding to the maximum value of PSD(f) is calculated and taken as the resonant frequency f_s.
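As an illustrative sketch of steps S11 to S13 in Python with NumPy, assuming a gray-scale video array, a known frame rate fps, and a simple periodogram as the power spectral density estimate:

```python
import numpy as np

def resonant_frequency(images, a, b, fps):
    """Estimate f_s from the gray-scale time series I_n(a, b, t).

    images : array of shape (T, M, N), gray-scale frames
    a, b   : pixel coordinates of the selected position
    fps    : frame rate of the video (Hz)
    """
    series = images[:, a, b].astype(float)            # I_n(a, b, t)
    series -= series.mean()                           # remove the DC component
    X = np.fft.rfft(series)                           # frequency sequence X(f)
    f = np.fft.rfftfreq(series.size, d=1.0 / fps)     # frequency axis in Hz
    psd = np.abs(X) ** 2                              # periodogram-type PSD (up to a constant)
    return f[np.argmax(psd)]                          # f_s = frequency at max PSD(f)
```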
S2, processing the image sequence I_n(x, y, t) by a two-dimensional fast Fourier transform to transfer it from the spatial domain to the frequency domain, obtaining the frequency matrix F(u, v, t), whose expression is:
F(u, v, t) = ∬ I_n(x, y, t) e^(−j2π(ux+vy)) dx dy    (3)
in formula (3), x and y represent the spatial coordinates in the image domain, u and v represent the corresponding coordinates of x and y in the frequency domain, and j represents the imaginary unit.
S3, referring to FIG. 4, the frequency matrix F(u, v, t) is filtered over the frequency interval [f_s − 0.5, f_s + 0.5] with a Butterworth ideal band-pass filter whose center frequency is f_s and whose bandwidth is 1 Hz, obtaining the filtered frequency matrix F_B(u, v, t).
Specifically, in step S3, the relationship between the filtered frequency matrix F_B(u, v, t) and the frequency matrix F(u, v, t) is:
F_B(u, v, t) = F(u, v, t)·BH(u, v)    (4)
in formula (4), BH(u, v) represents the Butterworth ideal band-pass filter in the frequency domain, where the expression of BH(u, v) is:
BH(u, v) = 1 when D_2 ≤ D(u, v) ≤ D_1, and BH(u, v) = 0 otherwise    (5)
in formula (5), D(u, v) = [(u − M/2)² + (v − N/2)²]^(1/2), D(u, v) represents the distance from the point (u, v) to the center of the frequency plane, M and N represent the length and width of the image, respectively, D_1 = f_s + 0.5, and D_2 = f_s − 0.5.
S4, applying a two-dimensional inverse Fourier transform to the filtered frequency matrix F_B(u, v, t) to obtain the single-period image sequence I_s(x, y, t), expressed as:
I_s(x, y, t) = ∬ F_B(u, v, t) e^(j2π(ux+vy)) du dv    (6)
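Steps S2 to S4 can likewise be sketched as follows, reusing the bandpass_mask function sketched earlier; centering the spectrum with fftshift before applying the mask is an assumption of this illustration:

```python
import numpy as np

def single_period_sequence(images, fs):
    """Apply steps S2-S4: 2-D FFT per frame, ideal ring band-pass, inverse 2-D FFT.

    images : array of shape (T, M, N)
    fs     : resonant frequency estimated in step S1
    """
    T, M, N = images.shape
    BH = bandpass_mask(M, N, fs)                          # formula (5), sketched earlier
    Is = np.empty_like(images, dtype=float)
    for t in range(T):
        F = np.fft.fftshift(np.fft.fft2(images[t]))       # F(u, v, t), centred spectrum
        FB = F * BH                                       # formula (4): F_B = F * BH
        Is[t] = np.real(np.fft.ifft2(np.fft.ifftshift(FB)))  # formula (6)
    return Is
```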
S5, processing the single-period image sequence I_s(x, y, t) based on the Hilbert transform, and calculating and evaluating the motion of the image.
Specifically, step S5 includes:
S51, each frame of the single-period image sequence I_s(x, y, t) is regarded as a matrix; each element of the matrix is called a pixel, and x and y represent the position of the pixel in the camera sensor plane.
S52, calculating the movement of the single-period image sequence I_s(x, y, t) in the x direction:
For time t, the m-th row of the frame image matrix is represented by the following one-dimensional sinusoidal function:
I_sx(x, y_m, t) = L_x cos(2πf_0x·x + ψ_x)    (7)
in formula (7), L_x represents the local amplitude in the x direction, f_0x represents the frequency of the sinusoidal signal in the x direction, and ψ_x represents the phase shift in the x direction;
performing the Hilbert transform on I_sx(x, y_m, t) gives the following relation:
H[I_sx(x, y_m, t)] = (1/π) ∫ I_sx(τ_1, y_m, t) / (x − τ_1) dτ_1    (8)
in formula (8), τ_1 represents the variable of integration;
the analytic signal of I_sx(x, y_m, t) is:
I_Asx(x, y_m, t) = I_sx(x, y_m, t) + i·H[I_sx(x, y_m, t)] = L_x e^(i(2πf_0x·x + ψ_x))    (9)
in formula (9), i represents the imaginary unit;
therefore, the phase of I_Asx(x, y_m, t) is:
φ_x(x, y_m, t) = 2πf_0x·x + ψ_x    (10)
similarly, the phase of the analytic signal I_Asx(x + δ_x, y_m, t + Δt_1) of the next frame image matrix can be derived as:
φ_x(x + δ_x, y_m, t + Δt_1) = 2πf_0x·(x + δ_x) + ψ_x    (11)
in formula (11), δ_x represents the displacement of the image in the x direction over the time increment Δt_1;
therefore, in the x direction, the relationship between the phase change of the analytic signals of two consecutive frames and the motion information is:
Δφ_x = φ_x(x + δ_x, y_m, t + Δt_1) − φ_x(x, y_m, t) = 2πf_0x·δ_x    (12)
Formula (12) shows that the motion between two consecutive images in the x direction can be estimated from the phase change of their analytic signals; the motion of the civil engineering structure in the x direction can therefore be evaluated by formula (12).
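A minimal sketch of the x-direction evaluation of step S52, using scipy.signal.hilbert to form the analytic signal of formula (9); treating the whole m-th row as a single-frequency sinusoid and estimating f_0x from the slope of the unwrapped phase are simplifying assumptions of this illustration. The y-direction evaluation of step S53 below is obtained in the same way by operating on the n-th column instead of the m-th row.

```python
import numpy as np
from scipy.signal import hilbert

def x_motion_between_frames(frame_a, frame_b, m):
    """Estimate the x-displacement delta_x of row m between two consecutive frames.

    frame_a, frame_b : consecutive frames of the single-period sequence I_s
    m                : row index (y_m)
    """
    row_a = frame_a[m, :] - frame_a[m, :].mean()
    row_b = frame_b[m, :] - frame_b[m, :].mean()
    phi_a = np.unwrap(np.angle(hilbert(row_a)))        # phase of the analytic signal, formula (10)
    phi_b = np.unwrap(np.angle(hilbert(row_b)))        # phase at t + delta t, formula (11)
    f0x = np.polyfit(np.arange(row_a.size), phi_a, 1)[0] / (2 * np.pi)  # phase slope ~ 2*pi*f_0x
    dphi = np.mean(phi_b - phi_a)                      # average phase change, formula (12)
    return dphi / (2 * np.pi * f0x)                    # delta_x = dphi / (2*pi*f_0x)
```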
S53, calculating the movement of the single-period image sequence I_s(x, y, t) in the y direction:
for time t, the n-th column of the frame image matrix is represented by the following one-dimensional sinusoidal function:
I_sy(x_n, y, t) = L_y cos(2πf_0y·y + ψ_y)    (13)
in formula (13), L_y represents the local amplitude in the y direction, f_0y represents the frequency of the sinusoidal signal in the y direction, and ψ_y represents the phase shift in the y direction;
performing the Hilbert transform on I_sy(x_n, y, t) gives the following relation:
H[I_sy(x_n, y, t)] = (1/π) ∫ I_sy(x_n, τ_2, t) / (y − τ_2) dτ_2    (14)
in formula (14), τ_2 represents the variable of integration;
the analytic signal of I_sy(x_n, y, t) is:
I_Asy(x_n, y, t) = I_sy(x_n, y, t) + i·H[I_sy(x_n, y, t)] = L_y e^(i(2πf_0y·y + ψ_y))    (15)
therefore, the phase of I_Asy(x_n, y, t) is:
φ_y(x_n, y, t) = 2πf_0y·y + ψ_y    (16)
similarly, the phase of the analytic signal I_Asy(x_n, y + δ_y, t + Δt_2) of the next frame image matrix can be derived as:
φ_y(x_n, y + δ_y, t + Δt_2) = 2πf_0y·(y + δ_y) + ψ_y    (17)
in formula (17), δ_y represents the displacement of the image in the y direction over the time increment Δt_2;
therefore, in the y direction, the relationship between the phase change of the analytic signals of two consecutive frames and the motion information is:
Δφ_y = φ_y(x_n, y + δ_y, t + Δt_2) − φ_y(x_n, y, t) = 2πf_0y·δ_y    (18)
Formula (18) shows that the motion between two consecutive images in the y direction can be estimated from the phase change of their analytic signals; the motion of the civil engineering structure in the y direction can therefore be evaluated by formula (18).
S54, the motion of the civil engineering structure is evaluated comprehensively on the basis of the phase changes in formulas (12) and (18).
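Putting the above sketches together, a hypothetical end-to-end use might look as follows; the variable video, the frame rate fps, and the pixel and row indices are placeholders chosen only for illustration:

```python
import numpy as np

# `video` is assumed to be a gray-scale array of shape (T, M, N) recorded at 60 frames/s.
fps = 60
fs = resonant_frequency(video, a=50, b=80, fps=fps)     # step S1 (pixel (50, 80) is arbitrary)
Is = single_period_sequence(video, fs)                  # steps S2-S4
dx = [x_motion_between_frames(Is[t], Is[t + 1], 50)     # step S5, x direction, row 50
      for t in range(len(Is) - 1)]
dx = np.cumsum(dx)                                      # cumulative x-displacement history (illustrative)
```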
The following are two comparative examples of the present method and the traditional PME method:
Comparative example one: finite element experiment
A set of images (a video) is simulated using a two-dimensional Gaussian surface moving along the x direction:
I(x, y, t) = B·exp{−[(x − δ_x(t))² + y²] / (2s²)}    (19)
in formula (19), B is the amplitude, s is the standard deviation of the Gaussian surface, and δ_x(t) is an arbitrary motion displacement; using the motion δ_x(t) = e^(−ξω_n·t)·sin(ω_n·t), the Gaussian surface is made to move in the x direction so as to simulate a viscously damped oscillation.
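For illustration, such a simulated image sequence could be generated as in the following sketch; the grid size, the values of B, s, the damping ratio ξ and the natural frequency ω_n are arbitrary assumptions (ω_n is chosen here to correspond to f_s = 1.2 Hz):

```python
import numpy as np

# Illustrative parameters (not taken from the comparative example itself)
B, s = 1.0, 5.0                                   # amplitude and standard deviation of the Gaussian surface
xi, wn = 0.02, 2 * np.pi * 1.2                    # damping ratio and natural frequency (rad/s)
fps, T = 60, 600                                  # frame rate and number of frames
x = np.arange(128)[None, :]                       # x grid (columns)
y = np.arange(128)[:, None] - 64                  # y grid (rows), centred
t = np.arange(T) / fps
delta_x = np.exp(-xi * wn * t) * np.sin(wn * t)   # viscously damped oscillation delta_x(t)
frames = np.stack([
    B * np.exp(-((x - 64 - d) ** 2 + y ** 2) / (2 * s ** 2))   # formula (19), shifted by delta_x(t)
    for d in delta_x
])
```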
The video is processed with the present method (HPME for short) and with the traditional PME method, respectively, and the motion information is extracted. The Butterworth ideal band-pass filter of the present method decomposes the original video into single-period images, corresponding to f_s = 1.2. The HPME and PME recognition results are shown in FIG. 5; the Gabor wavelet frequency parameter is selected arbitrarily as f_0 = 0.0167 in FIG. 5(a), f_0 = 0.025 in FIG. 5(b), f_0 = 0.0286 in FIG. 5(c), and f_0 = 0.0333 in FIG. 5(d). It is noted that f_0 represents the frequency of the sinusoidal signal and that the standard deviation of the Gaussian function is σ = 1. The performance of the recognition results is evaluated with the correlation coefficient, as shown in FIG. 6. As can be seen from FIG. 5 and FIG. 6: the correlation between the PME recognition result and the actual displacement changes with the choice of the Gabor parameters, reaching at best 96.4% and at worst 83.53%, so the recognition results obtained with PME are unstable; the correlation of HPME reaches 99.55%, which shows that the present method has higher accuracy and effectively avoids the parameter-selection problem; moreover, the displacement time-history curve obtained with PME suffers from end effects at both ends, which reduces the identification accuracy, whereas the time-history curve obtained with HPME has no such problem.
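The correlation coefficient used above to score the recognition results can be computed, for example, as the Pearson correlation between an identified displacement time history and the reference displacement; a minimal sketch with hypothetical array names:

```python
import numpy as np

def correlation_percent(identified, reference):
    """Pearson correlation (in percent) between an identified displacement
    time history and the reference (true or laser-measured) displacement."""
    identified = np.asarray(identified, dtype=float)
    reference = np.asarray(reference, dtype=float)
    return 100.0 * np.corrcoef(identified, reference)[0, 1]
```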
Comparative example two: cantilever beam experiment
As shown in FIG. 7, a cantilever beam 1 with a length of 1200 mm, a width of 50 mm and a height of 10 mm is photographed. The displacement of the cantilever beam 1 at one point is measured with a laser displacement sensor 2 and used as reference data for the measurement results, while the motion of the cantilever beam 1 is simultaneously recorded with an ordinary mobile phone 3. In addition, reference numeral 4 denotes a computer, reference numeral 5 a cantilever-beam fixture, and reference numeral 6 an LED lamp.
The distance between the mobile phone 3 and the plane of the cantilever beam 1 is 2.1 m. The resolution of the mobile phone 3 is 1920 × 1048 and the frame rate is 60 frames per second. The width of the video frame in the plane of the laser sensor 2 is 1068 mm. Two sets of random vibrations with different initial amplitudes (case 1 and case 2, respectively) are generated by an impact hammer acting on the cantilever beam 1, and the responses are then identified and compared using PME and HPME. Since time synchronization between the mobile phone 3 and the laser sensor 2 data sets is not possible, the time series are aligned manually.
A comparison of the recognition results is shown in FIG. 8, and a comparison of the correlations is shown in FIG. 9. It should be noted that in the PME the Gabor parameters are selected arbitrarily as σ = 1 and f_0 = 0.01. As can be seen from FIG. 8 and FIG. 9, the correlations between the displacement measured by the laser sensor 2 and the two curves of random initial amplitude (case 1 and case 2, respectively) identified by PME are 81.27% and 82.7%, while the corresponding correlations for HPME reach 95.67% and 96.89%. Here again, the HPME method demonstrates higher accuracy than the PME method. First, HPME identifies the instantaneous phase, whereas PME obtains the phase by convolution over a certain region. Second, the arbitrarily selected parameters in PME are another reason why its accuracy is not as high as that of HPME. Finally, the possible presence of end effects can also reduce the accuracy. It can therefore be concluded that the accuracy of motion evaluation using the HPME method is about 14% higher than that of the PME method.
Finally, it should be noted that the above description is only a preferred embodiment of the present invention, and those skilled in the art can make various similar modifications without departing from the spirit and scope of the present invention.