Sound source distribution visualization method and computer program product
Technical Field
The present invention relates to visualization technology, and more particularly to a sound source distribution visualization method and a computer program product.
Background
In the field of noise control, correct identification of a noise source is the basis of effective noise improvement, so the accuracy of sound source identification and localization directly affects the result of noise control. Only by truly mastering the position, intensity distribution, velocity distribution, density distribution, and the like of the noise source can the influence of the noise be effectively controlled or correctly evaluated, and the noise caused by structural vibration be reduced so that the noise of the structure is optimized. For example, when noise control techniques are applied in the power machinery diagnosis industry, they not only assist engineers in determining the fault point of a power machine and evaluating the influence of the noise source, but also improve the accuracy of the engineers' judgment.
In the prior art, one sound source identification technique searches for a sound source by the sound intensity method: the detection target space is divided into a plurality of grids, the sound intensity value of each region is measured grid by grid with a sound intensity meter (sound intensity probe), and the current sound intensity distribution is then reconstructed by interpolation, thereby achieving the purpose of localizing the sound source.
In addition, U.S. Pat. No. 20050225497 discloses a method for identifying a sound source using a beam forming array technique. However, the beam forming array technique can only identify a far-field sound field and performs poorly on unsteady sound sources; its disadvantages are that it cannot operate instantaneously, cannot identify sound fields at different coordinates synchronously, and requires changing the shape of the microphone array to prevent spatial distortion.
Disclosure of Invention
In order to solve the above problems, the present invention provides a sound source distribution visualization method that, by matching visual features with an analysis operation and a neural network operation, can obtain a visualized distribution of the sound source instantly, quickly, and accurately.
An embodiment of the present invention provides a sound source distribution visualization method, including: an image creating step: reading a target image of a detection target; a marking step: marking a detection boundary on the target image and setting a plurality of detection points on the detection boundary, wherein each detection point has a dedicated code; a signal acquiring step: inputting, corresponding to each detection point, a physical signal generated during operation of the detection target; an operation processing step: calculating the spectrum distribution of each physical signal through spectrum superposition to analyze the bandwidth range of each physical signal, and obtaining a time waveform within the bandwidth range of each physical signal through an analysis operation to generate a feature signal of each physical signal; and a visualization step: processing each feature signal through a neural network operation to form an image sound source distribution of visual features, wherein the image sound source distribution is presented within the detection boundary in cooperation with the target image.
In one embodiment, each feature signal is processed through the neural network operation to obtain the intensity variation of each feature signal produced by the distance differences between the detection points, so as to form the image sound source distribution of the visual features.
In one embodiment, a continuous and smooth image sound source distribution is formed between the detection points by a bi-harmonic spline interpolation method.
In one embodiment, the distribution of the image sound source of the visual features exhibits color variations according to the intensity of each feature signal.
In one embodiment, the analysis operation is a time-frequency analysis; each physical signal obtains a time waveform within its bandwidth range through the analysis operation, and the feature signal of each physical signal is selectively generated as the root mean square value or the maximum value of the waveform.
In one embodiment, the neural network operation is a generalized regression neural network (GRNN) method or a supervised learning network method.
In one embodiment, when the detection target is a constant-speed device, each physical signal is input corresponding to each detection point in a step-by-step manner, and each physical signal is stored in correspondence with the dedicated code of its detection point.
In one embodiment, when the detection target is a variable-speed device, each physical signal is input corresponding to each detection point in a synchronous manner, and each physical signal is stored in correspondence with the dedicated code of its detection point.
In one embodiment, the physical signal is a sound signal or a vibration signal.
One embodiment of the present invention provides a computer program product comprising a non-transitory computer readable medium having instructions recorded thereon, which when executed by a computer implement the method of any of the above embodiments.
Through the above, by matching the physical signals generated during operation of the detection target with visual features through the analysis operation and the neural network operation, the invention can obtain the visualized image distribution of the sound source instantly, quickly, and accurately, thereby solving the problem that the prior art cannot obtain the sound source distribution instantly and accurately.
Furthermore, the invention can form the image sound source distribution with visual features through the analysis operation and the neural network operation without considering whether the sound source transmission path is linear or nonlinear.
In addition, the invention can be applied to constant-speed or variable-speed equipment, thereby broadening its range of application.
Drawings
FIG. 1 is a schematic diagram of the method steps of an embodiment of the present invention;
FIG. 2 is a schematic diagram illustrating a detection boundary marked on the target image according to the present invention;
FIG. 3 is a schematic diagram of a plurality of detection points arranged on the detection boundary according to the present invention;
FIG. 4 is a schematic diagram of inputting physical signals corresponding to various detection points according to the present invention;
FIG. 5 is a schematic diagram of calculating the spectral distribution of each physical signal by spectral superposition according to the present invention;
FIG. 6 is a schematic diagram illustrating the distribution of the image sound source in cooperation with the presentation of the target image within the detection boundary according to the present invention.
Description of the reference numerals
I target image
B detecting the boundary
P detection point
C dedicated code
SI image sound source distribution
S1 image creating step
S2 marking step
S3 Signal acquisition step
S4 arithmetic processing step
S5 visualization step.
Detailed Description
In order to illustrate the central concepts of the present invention as expressed in the above summary, specific embodiments are described below. Various objects in the embodiments are depicted with scales, dimensions, deformations, or displacements suitable for illustration rather than with the proportions of the actual components.
Referring to fig. 1 to 6, the present invention provides a method for visualizing sound source distribution, including:
An image creating step S1: reading a target image I of a detection target, wherein the detection target can be a constant-speed device or a variable-speed device, and the target image I of the detection target is captured by a camera device (e.g., a camera, a video camera, or a smart mobile device) or is generated by drawing.
A marking step S2: marking a detection boundary B on the target image I obtained in the image creating step S1, and setting a plurality of detection points P on the detection boundary B, each detection point P having a dedicated code C. The detection boundary B is a closed range and can be a closed rectangle, polygon, or circular curve; in the embodiment of the present invention, the detection boundary B is a closed rectangle, as shown in FIG. 2. Furthermore, the number of detection points P can be set according to the needs of the user, and the dedicated code C of each detection point P, which can be a number or an English letter, is used to identify that detection point P. In the embodiment of the present invention, the number of detection points P is 12, the dedicated code C of each detection point P is a number, and the dedicated codes C of the detection points P are 1 to 12, respectively, as shown in FIGS. 3 and 4.
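The marking step above amounts to associating each dedicated code C with a position on the closed detection boundary B. A minimal sketch in Python follows; the rectangle coordinates, the helper name `boundary_points`, and the even perimeter spacing are illustrative assumptions, not details disclosed by the invention:

```python
# Sketch: place 12 detection points, with dedicated codes 1-12, evenly
# along a closed rectangular detection boundary B marked on the target
# image I. All coordinates and counts here are illustrative assumptions.

def boundary_points(x0, y0, x1, y1, n):
    """Return {dedicated_code: (x, y)} for n points spaced evenly along
    the perimeter of the rectangle (x0, y0)-(x1, y1)."""
    w, h = x1 - x0, y1 - y0
    perimeter = 2 * (w + h)
    points = {}
    for code in range(1, n + 1):
        s = (code - 1) * perimeter / n   # arc length along the boundary
        if s < w:                        # top edge
            points[code] = (x0 + s, y0)
        elif s < w + h:                  # right edge
            points[code] = (x1, y0 + (s - w))
        elif s < 2 * w + h:              # bottom edge
            points[code] = (x1 - (s - w - h), y1)
        else:                            # left edge
            points[code] = (x0, y1 - (s - 2 * w - h))
    return points

detection_points = boundary_points(0.0, 0.0, 4.0, 2.0, 12)
```

Each physical signal read in the signal acquiring step can then be stored under the dedicated code of its detection point, as the embodiments above describe.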
A signal acquiring step S3: after the marking step S2, a physical signal generated during operation of the detection target is input corresponding to each detection point P, as shown in FIG. 4. The physical signal is a sound signal or a vibration signal. When the physical signal is a sound signal, it can be read by a sound reading device (e.g., a stand-alone microphone, the built-in microphone of a smart mobile device, or a digital voice recorder, but the invention is not limited thereto); when the physical signal is a vibration signal, it can be read by a vibration sensor (e.g., a displacement sensor, a velocity sensor, an acceleration sensor, or an accelerometer, but the invention is not limited thereto).
Furthermore, when the detection target is a constant-speed device, a single sound reading device or vibration sensor, chosen according to the type of physical signal to be read, is used to read the physical signal corresponding to each detection point P step by step, and each physical signal read is stored in correspondence with the dedicated code C of its detection point P.
In addition, when the detection target is a variable-speed device, a plurality of sound reading devices or a plurality of vibration sensors are used according to the type of physical signal to be read. Each sound reading device or vibration sensor is placed at the actual position on the detection target corresponding to a detection point P, the physical signals are input to the detection points P through the sound reading devices or vibration sensors in a synchronous manner, and each physical signal read is stored in correspondence with the dedicated code C of its detection point P.
An operation processing step S4: calculating the spectrum distribution of each physical signal obtained in the signal acquiring step S3 through spectrum superposition, so as to analyze the bandwidth range of each physical signal, and obtaining a time waveform within the bandwidth range of each physical signal through an analysis operation to generate a feature signal of each physical signal, as shown in FIG. 5. In the embodiment of the invention, the analysis operation is a time-frequency analysis, and the feature signal of each physical signal can be a root mean square value or a waveform maximum value. After the bandwidth range of each physical signal is analyzed, each physical signal obtains a time waveform within its bandwidth range through the analysis operation, and the feature signal of each physical signal is selectively generated as the root mean square value or the maximum value of that waveform, wherein the bandwidth range of each physical signal can be set or preset by the user.
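The operation processing step can be sketched in Python as follows: the magnitude spectra of all detection points are averaged (spectrum superposition) to expose the dominant bandwidth, each signal is band-limited to that range to obtain the time waveform within the bandwidth, and the RMS value (or the waveform maximum) of that waveform becomes the feature signal. The FFT-based band limiting and the ±20% bandwidth choice are illustrative assumptions rather than parameters disclosed by the invention:

```python
import numpy as np

def bandpass_fft(x, fs, band):
    """Keep only spectral content within band = (f_lo, f_hi); returns
    the time waveform within that bandwidth range."""
    X = np.fft.rfft(x)
    f = np.fft.rfftfreq(len(x), 1 / fs)
    X[(f < band[0]) | (f > band[1])] = 0.0
    return np.fft.irfft(X, n=len(x))

def feature_signals(signals, fs, band=None, use_rms=True):
    """signals: {dedicated_code: 1-D time waveform}, all sampled at fs.
    Returns {dedicated_code: feature signal value}."""
    sigs = np.array(list(signals.values()), dtype=float)
    # Spectrum superposition: average the magnitude spectra of all
    # detection points to expose the shared dominant bandwidth.
    mean_spec = np.abs(np.fft.rfft(sigs, axis=1)).mean(axis=0)
    freqs = np.fft.rfftfreq(sigs.shape[1], 1 / fs)
    if band is None:
        # Illustrative choice: +/-20% around the dominant peak (DC skipped).
        f0 = freqs[np.argmax(mean_spec[1:]) + 1]
        band = (0.8 * f0, 1.2 * f0)
    feats = {}
    for code, x in signals.items():
        y = bandpass_fft(np.asarray(x, dtype=float), fs, band)
        # Feature signal: RMS value or waveform maximum, per the embodiment.
        feats[code] = float(np.sqrt(np.mean(y ** 2))) if use_rms else float(np.max(np.abs(y)))
    return feats
```

For a pure tone of amplitude A inside the bandwidth, the resulting feature signal is the familiar RMS value A/√2, so detection points nearer the source yield larger feature signals.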
A visualization step S5: processing the feature signal of each detection point P obtained in the operation processing step S4 through a neural network operation to form an image sound source distribution SI of visual features, and presenting the image sound source distribution SI within the detection boundary B in cooperation with the target image I, wherein the image sound source distribution SI is superimposed on the target image I without displaying the detection points P, as shown in FIG. 6.
In the embodiment of the present invention, the neural network operation is a generalized regression neural network (GRNN) method or a supervised learning network method, and the image sound source distribution of the visual features presents color changes according to the intensity of each feature signal. Further explanation is as follows: each feature signal is processed through the neural network operation to obtain the intensity variation of each feature signal produced by the distance differences between the detection points P. For example, when there are 12 detection points P, the distance from each detection point P to each of the remaining detection points P differs, so different feature signal intensity variations are produced between the detection points P. A continuous and smooth image sound source distribution SI is then formed between the detection points P by the biharmonic spline interpolation method, as shown in FIG. 6.
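The two operations named above can be sketched compactly. The GRNN below is the standard kernel-weighted-average formulation, and the biharmonic spline uses the two-dimensional Green's function g(r) = r²(ln r − 1); the smoothing parameter σ and the sample points are illustrative assumptions, not values disclosed by the invention:

```python
import numpy as np

def grnn(train_pts, train_vals, query_pts, sigma=0.5):
    """Generalized regression neural network (GRNN): each prediction is a
    Gaussian-kernel-weighted average of the training feature values, so
    intensity varies smoothly with distance from the detection points."""
    d2 = ((query_pts[:, None, :] - train_pts[None, :, :]) ** 2).sum(-1)
    w = np.exp(-d2 / (2.0 * sigma ** 2))
    return (w @ train_vals) / w.sum(axis=1)

def biharmonic_spline(train_pts, train_vals, query_pts):
    """Biharmonic spline interpolation with the 2-D Green's function
    g(r) = r^2 (ln r - 1); the surface passes exactly through the data,
    giving a continuous and smooth distribution between the points."""
    def g(r):
        out = np.zeros_like(r)
        nz = r > 0
        out[nz] = r[nz] ** 2 * (np.log(r[nz]) - 1.0)
        return out
    rij = np.linalg.norm(train_pts[:, None, :] - train_pts[None, :, :], axis=-1)
    weights = np.linalg.solve(g(rij), train_vals)   # fit spline weights
    rq = np.linalg.norm(query_pts[:, None, :] - train_pts[None, :, :], axis=-1)
    return g(rq) @ weights
```

Evaluating either function on a dense grid inside the detection boundary B and mapping the result to a colormap yields the kind of color-coded image sound source distribution SI described above.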
Some embodiments according to the invention comprise a data carrier with electronically readable control signals capable of cooperating with a programmable computer system such that one of the methods described herein is performed. Generally, embodiments of the present invention can be implemented as a computer program product having program code operative to perform one of the above methods when the computer program product is executed on a terminal device, wherein the program code may, for example, be stored on a machine-readable carrier.
Other embodiments of the invention include a computer program product, stored on a machine-readable carrier, for performing the methods described herein. In other words, an embodiment of the inventive method is a computer program having program code for performing one of the methods described herein when the computer program product is executed on a terminal device, such as a computer or a smart mobile device.
Therefore, when an embodiment of the invention is a computer program product with program code, the target image I of the detection target can be obtained through a signal connection between the terminal device and a camera device, or the target image I of the detection target can be generated on the terminal device by drawing. Furthermore, the terminal device can be connected by signal to a sound reading device or a vibration sensor to acquire the physical signals.
In summary, the present invention has the following effects:
By matching the physical signals generated during operation of the detection target with visual features through the analysis operation and the neural network operation, the invention can obtain the visualized image distribution of the sound source instantly, quickly, and accurately.
The invention can form the image sound source distribution with visual features through the analysis operation and the neural network operation without considering whether the sound source transmission path is linear or nonlinear.
The invention can be applied to constant-speed or variable-speed equipment, thereby broadening its range of application.
The above examples are provided only for illustrating the present invention and are not intended to limit the scope of the present invention. All such modifications and variations are within the scope of the invention as determined by the appended claims.