
CN107886049A - Visibility recognition early warning method based on camera probe - Google Patents

Visibility recognition early warning method based on camera probe Download PDF

Info

Publication number
CN107886049A
CN107886049A
Authority
CN
China
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201710959010.1A
Other languages
Chinese (zh)
Other versions
CN107886049B (en
Inventor
单婵
罗晓春
任冉
谢小萍
杭鑫
孙明
史潇
张岚
徐敏
魏晓奕
王珂清
孙玉宝
王素娟
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Jiangsu Meteorological Service Center
Original Assignee
Jiangsu Meteorological Service Center
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Jiangsu Meteorological Service Center filed Critical Jiangsu Meteorological Service Center
Priority to CN201710959010.1A priority Critical patent/CN107886049B/en
Publication of CN107886049A publication Critical patent/CN107886049A/en
Application granted granted Critical
Publication of CN107886049B publication Critical patent/CN107886049B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G06N3/084 Backpropagation, e.g. using gradient descent
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02A TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A90/00 Technologies having an indirect contribution to adaptation to climate change
    • Y02A90/10 Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Biomedical Technology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a visibility recognition and early warning method based on a camera probe, comprising the following steps: 1. shoot multiple groups of surrounding scene pictures with a camera probe and divide them into training samples and test samples; 2. preprocess the training and test samples; 3. build a CaffeNet convolutional neural network model; 4. train the convolutional neural network model in two stages, forward propagation and backward propagation, using the preprocessed training samples; training ends when the error computed in backward propagation reaches the expected value, yielding the parameters of the model; 5. randomly crop multiple blocks from each test sample, feed them to the trained convolutional neural network model, obtain the final visibility classification result by "majority voting", and issue an early warning when low visibility falls within the set threshold. The invention uses a CaffeNet convolutional neural network model to classify and recognize the visibility of pictures, with high classification accuracy.

Description

Visibility recognition early warning method based on camera probe
Technical Field
The invention particularly relates to a visibility recognition early warning method based on a camera probe.
Background
The scattering visibility meters used in modern meteorological observation are limited by their measuring principle and by station density, so it is difficult for them to accurately describe the regional character of low-visibility weather phenomena. According to basic meteorological principles and forecasting practice, low-visibility weather that appears locally in the early morning hours easily develops into regional heavy fog. Observation and early warning of low-visibility phenomena over small areas is therefore necessary.
Because its sampling volume is less than one cubic meter, the scattering visibility instrument commonly used at meteorological stations has low observation accuracy in the 0-1000 m range and cannot reflect fog, which is a large-scale weather phenomenon.
Disclosure of Invention
The invention aims to overcome the defects in the prior art and provides a visibility recognition early warning method based on a camera probe.
In order to solve the technical problem, the invention provides a visibility recognition early warning method based on a camera probe, which is characterized by comprising the following steps of:
step S1, shooting a plurality of groups of surrounding scene pictures by using a camera probe, and dividing the scene pictures into training samples and testing samples;
step S2, preprocessing the training samples and test samples, and cropping them into image blocks suited to the CaffeNet convolutional neural network;
step S3, constructing a CaffeNet convolutional neural network model, the network comprising 5 convolutional layers, 3 downsampling layers and 3 fully connected layers;
step S4, training the CaffeNet convolutional neural network model constructed in step S3 in two stages, forward propagation and backward propagation, using the training samples preprocessed in step S2; training ends when the error computed in backward propagation reaches the expected value, yielding the parameters of the convolutional neural network model;
and step S5, testing the test samples preprocessed in step S2 with the CaffeNet convolutional neural network model trained in step S4 to obtain visibility classification results, and issuing an early warning when the visibility falls below a set threshold.
Preferably, the image block size is 227 × 227 pixels.
Preferably, the image blocks are saved in .bmp format.
Preferably, the output of the CaffeNet model is the probability that a picture block belongs to each visibility category, a picture belonging to the category with the maximum probability; the visibility categories are divided into the following five classes by visibility interval: first class: 0-750 m; second class: 751-1000 m; third class: 1001-2250 m; fourth class: 2251-3000 m; fifth class: 3001 m and above.
Preferably, the calculation formula of the convolutional layer in step S3 is:

$$x_j^{l_c} = f\Big(\sum_{i \in M_j} x_i^{l_c-1} * k_{ij}^{l_c} + b_j^{l_c}\Big)$$

where $x_j^{l_c}$ is the $j$-th output map of convolutional layer $l_c$, $f$ is the activation function, $M_j$ is the set of input feature maps, $*$ is the convolution operation, $k_{ij}^{l_c}$ is the convolution kernel between the $j$-th output map of layer $l_c$ and the $i$-th input map of the previous layer, $1 \le i \le \max(l_{c,\mathrm{in}})$ with $\max(l_{c,\mathrm{in}})$ the maximum number of input maps of layer $l_c$, $1 \le j \le \max(l_{c,\mathrm{out}})$ with $\max(l_{c,\mathrm{out}})$ the maximum number of output maps, $b_j^{l_c}$ is the additive bias of the $j$-th output map of layer $l_c$, and $l_c = 1,\dots,5$.
Preferably, the formula of the downsampling layer in step S3 is:

$$x_j^{l_s} = f\Big(\beta_j^{l_s}\, S\big(x_j^{l_s-1}\big) + b_j^{l_s}\Big)$$

where $x_j^{l_s}$ is the $j$-th output map of downsampling layer $l_s$, $f$ is the activation function, $S$ is the downsampling function, and $\beta_j^{l_s}$, $b_j^{l_s}$ are respectively the multiplicative bias and additive bias of the $j$-th output map of layer $l_s$, $l_s = 1,\dots,3$.
Preferably, the hypothesis function for the classification result in step S4 is:

$$h_\theta\big(x^{(i)}\big) = \begin{bmatrix} p\big(y^{(i)}=1 \mid x^{(i)};\theta\big) \\ p\big(y^{(i)}=2 \mid x^{(i)};\theta\big) \\ \vdots \\ p\big(y^{(i)}=k \mid x^{(i)};\theta\big) \end{bmatrix} = \frac{1}{\sum_{j=1}^{k} e^{\theta_j^T x^{(i)}}} \begin{bmatrix} e^{\theta_1^T x^{(i)}} \\ e^{\theta_2^T x^{(i)}} \\ \vdots \\ e^{\theta_k^T x^{(i)}} \end{bmatrix}$$

where $k$ is the number of classes and $x^{(i)}$ is the response of the $i$-th sample in the last fully connected layer of CaffeNet. The loss function of the classification result is:

$$J(\theta) = -\frac{1}{m}\left[\sum_{i=1}^{m}\sum_{j=1}^{k} 1\{y^{(i)}=j\}\log \frac{e^{\theta_j^T x^{(i)}}}{\sum_{l=1}^{k} e^{\theta_l^T x^{(i)}}}\right] + \frac{\lambda}{2}\sum_{i=1}^{k}\sum_{j=0}^{n}\theta_{ij}^2$$

where $m$ is the total number of training samples, $k$ is the number of classes, the second term is the weight-decay term and $\lambda$ is the weight decay; the network model is trained by batch gradient descent, and the partial derivative of the loss function with respect to $\theta_j$ is:

$$\nabla_{\theta_j} J(\theta) = -\frac{1}{m}\sum_{i=1}^{m}\left[x^{(i)}\Big(1\{y^{(i)}=j\} - \frac{e^{\theta_j^T x^{(i)}}}{\sum_{l=1}^{k} e^{\theta_l^T x^{(i)}}}\Big)\right] + \lambda\,\theta_j$$
preferably, the classification result of the final test sample is obtained by a "majority voting" method.
Preferably, the visibility threshold is set to 1000 meters or 750 meters.
Compared with the prior art, the invention has the following beneficial effects: it uses a CaffeNet model to classify and recognize the visibility of pictures, offers high classification accuracy, a simple method and fast recognition, and provides a new technical scheme for visibility early warning.
Drawings
FIG. 1 is a view of a scene photographed in an embodiment of the present invention;
FIG. 2 shows the results of the tests performed in the examples of the present invention;
FIG. 3 is a diagram illustrating the test accuracy in an embodiment of the present invention.
Detailed Description
The invention is further described below with reference to the accompanying drawings. The following examples are only for illustrating the technical solutions of the present invention more clearly, and the protection scope of the present invention is not limited thereby.
In the description of the present patent application, it is noted that the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, including not only those elements listed, but also other elements not expressly listed.
The invention relates to a visibility recognition early warning method based on a camera probe, which comprises the following steps:
step S1, shooting a plurality of groups of surrounding scene pictures by using a camera probe, and dividing the scene pictures into training samples and testing samples;
step S2, preprocessing the training samples and test samples by removing scene pictures taken by a damaged camera or under blurred conditions, and randomly cropping the training samples to the input size of the CaffeNet convolutional neural network to form the actual training set;
in the embodiment of the invention, the scene pictures come from two scenes, namely Yangzhong and Zhanghong, the size of the Yangzhong picture is 704 pixels x576 pixels, and the size of the Zhanghong picture is 960 pixels x576 pixels, while the size of the input image of the depth network framework adopted by the invention is 227 pixels x227 pixels, and the format is a.bmp picture. Therefore, each shot scene picture is randomly cropped to a picture block of 227 pixels x227 pixels of the original image to be used as a training sample.
The scene picture in the embodiment is from pictures collected from 6 am to 17 am, the pictures at night and with damaged or blurred cameras are removed, and the color is a three-channel color image.
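The random-cropping step can be sketched as follows; this is a minimal numpy sketch under our own assumptions (function names, array layout), not the patent's actual code:

```python
import numpy as np

CROP = 227  # CaffeNet input size stated in the patent

def random_crop(img: np.ndarray, size: int = CROP, rng=None) -> np.ndarray:
    """Return a random size x size patch of an (H, W, 3) image array."""
    if rng is None:
        rng = np.random.default_rng()
    h, w = img.shape[:2]
    top = int(rng.integers(0, h - size + 1))
    left = int(rng.integers(0, w - size + 1))
    return img[top:top + size, left:left + size]

# A 704 x 576 "Yangzhong" frame yields a 227 x 227 x 3 training patch.
frame = np.zeros((576, 704, 3), dtype=np.uint8)
patch = random_crop(frame)
print(patch.shape)  # (227, 227, 3)
```

In practice each frame would be cropped several times to multiply the training set.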
Step S3, constructing a CaffeNet convolutional neural network model, the network comprising 5 convolutional layers, 3 downsampling layers and 3 fully connected layers.
The calculation formula of the convolutional layer is:

$$x_j^{l_c} = f\Big(\sum_{i \in M_j} x_i^{l_c-1} * k_{ij}^{l_c} + b_j^{l_c}\Big)$$

where $x_j^{l_c}$ is the $j$-th output map of convolutional layer $l_c$, $f$ is the activation function, $M_j$ is the set of input feature maps, $*$ is the convolution operation, $k_{ij}^{l_c}$ is the convolution kernel between the $j$-th output map of layer $l_c$ and the $i$-th input map of the previous layer, $1 \le i \le \max(l_{c,\mathrm{in}})$ with $\max(l_{c,\mathrm{in}})$ the maximum number of input maps of layer $l_c$, $1 \le j \le \max(l_{c,\mathrm{out}})$ with $\max(l_{c,\mathrm{out}})$ the maximum number of output maps, $b_j^{l_c}$ is the additive bias of the $j$-th output map of layer $l_c$, and $l_c = 1,\dots,5$.
The calculation formula of the downsampling layer is:

$$x_j^{l_s} = f\Big(\beta_j^{l_s}\, S\big(x_j^{l_s-1}\big) + b_j^{l_s}\Big)$$

where $x_j^{l_s}$ is the $j$-th output map of downsampling layer $l_s$, $f$ is the activation function, $S$ is the downsampling function, and $\beta_j^{l_s}$, $b_j^{l_s}$ are respectively the multiplicative bias and additive bias of the $j$-th output map of layer $l_s$, $l_s = 1,\dots,3$.
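Under the assumption that $f$ is a ReLU and $S$ is 2 × 2 mean pooling (the patent does not fix either choice), the two layer equations can be sketched in plain numpy; this is an illustrative forward pass, not the patent's Caffe implementation:

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def conv_layer(inputs, kernels, bias):
    """Naive version of the convolution-layer equation.
    inputs:  list of 2-D input maps x_i
    kernels: kernels[j][i] links input map i to output map j
    bias:    additive bias b_j per output map
    """
    outputs = []
    for j, k_row in enumerate(kernels):
        kh, kw = k_row[0].shape
        h, w = inputs[0].shape
        acc = np.zeros((h - kh + 1, w - kw + 1))
        for x, k in zip(inputs, k_row):  # sum over the input-map set M_j
            for r in range(acc.shape[0]):
                for c in range(acc.shape[1]):
                    acc[r, c] += np.sum(x[r:r + kh, c:c + kw] * k)
        outputs.append(relu(acc + bias[j]))  # x_j = f(sum + b_j)
    return outputs

def downsample_layer(inputs, beta, bias, s=2):
    """Downsampling equation with S = s x s mean pooling,
    multiplicative bias beta_j and additive bias b_j."""
    outs = []
    for j, x in enumerate(inputs):
        h, w = x.shape[0] // s * s, x.shape[1] // s * s
        pooled = x[:h, :w].reshape(h // s, s, w // s, s).mean(axis=(1, 3))
        outs.append(relu(beta[j] * pooled + bias[j]))
    return outs
```

A 4 × 4 all-ones map convolved with a 2 × 2 all-ones kernel gives a 3 × 3 map of 4s, which the pooling layer reduces to a single value.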
Step S4, training the cafenet convolutional neural network model constructed in the step S3 in two stages of forward propagation and backward propagation by using the training sample preprocessed in the step S2, finishing the training when the error calculated by the backward propagation training reaches an expected value, and obtaining the parameters of the convolutional neural network model;
the invention is divided into the following five categories according to visibility intervals:
categories Visibility interval (Unit: meter)
First kind 0-750
Second class 751-1000
Class III 1001-2250
Class IV 2251-3000
Fifth class 3001 and above
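The interval-to-class mapping above can be expressed as a small helper; the function name and structure are our own illustration, not from the patent:

```python
def visibility_class(meters: float) -> int:
    """Map a visibility value in meters to one of the five classes (1-5)."""
    bounds = [750, 1000, 2250, 3000]  # upper edges of classes 1-4
    for cls, upper in enumerate(bounds, start=1):
        if meters <= upper:
            return cls
    return 5  # 3001 m and above

print(visibility_class(500))   # 1 (0-750 m)
print(visibility_class(2500))  # 4 (2251-3000 m)
```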
The model adopted by the method is an image classification model, CaffeNet, from deep learning. Its input is the training samples obtained in the previous step, picture blocks of 227 × 227 pixels in .bmp format; its output is the probability that a picture belongs to each visibility category, the picture belonging to the category with the maximum probability. By comparing the five class probabilities, the visibility category of the input image can be determined.
In this embodiment, the training parameters are: initial learning rate 0.001, maximum number of iterations 10000, learning rate reduced every 2000 iterations, momentum 0.95, and weight decay 0.0005. The number of outputs is set to the number of visibility classes to be distinguished, i.e. 5.
The hypothesis function for the classification result is:

$$h_\theta\big(x^{(i)}\big) = \begin{bmatrix} p\big(y^{(i)}=1 \mid x^{(i)};\theta\big) \\ p\big(y^{(i)}=2 \mid x^{(i)};\theta\big) \\ \vdots \\ p\big(y^{(i)}=k \mid x^{(i)};\theta\big) \end{bmatrix} = \frac{1}{\sum_{j=1}^{k} e^{\theta_j^T x^{(i)}}} \begin{bmatrix} e^{\theta_1^T x^{(i)}} \\ e^{\theta_2^T x^{(i)}} \\ \vdots \\ e^{\theta_k^T x^{(i)}} \end{bmatrix}$$

where $k$ is the number of classes and $x^{(i)}$ is the response of the $i$-th sample in the last fully connected layer of CaffeNet. The loss function of the classification result is:

$$J(\theta) = -\frac{1}{m}\left[\sum_{i=1}^{m}\sum_{j=1}^{k} 1\{y^{(i)}=j\}\log \frac{e^{\theta_j^T x^{(i)}}}{\sum_{l=1}^{k} e^{\theta_l^T x^{(i)}}}\right] + \frac{\lambda}{2}\sum_{i=1}^{k}\sum_{j=0}^{n}\theta_{ij}^2$$

where $m$ is the total number of training samples, $k$ is the number of classes, the second term is the weight-decay term and $\lambda$ is the weight decay. The network model is trained by batch gradient descent; the partial derivative of the loss function with respect to $\theta_j$ is:

$$\nabla_{\theta_j} J(\theta) = -\frac{1}{m}\sum_{i=1}^{m}\left[x^{(i)}\Big(1\{y^{(i)}=j\} - \frac{e^{\theta_j^T x^{(i)}}}{\sum_{l=1}^{k} e^{\theta_l^T x^{(i)}}}\Big)\right] + \lambda\,\theta_j$$
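A toy numpy version of the softmax hypothesis, the weight-decayed loss and its batch gradient might look as follows; this is a stand-in for the final CaffeNet layer, with illustrative names and toy data, not the patent's training code:

```python
import numpy as np

def softmax(theta, X):
    """Rows of the hypothesis h_theta for a batch X of shape (m, d)."""
    z = X @ theta.T                      # (m, k) scores theta_j^T x
    z -= z.max(axis=1, keepdims=True)    # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def loss_and_grad(theta, X, y, lam):
    """J(theta) with weight decay lam, and its batch gradient."""
    m, k = X.shape[0], theta.shape[0]
    p = softmax(theta, X)
    onehot = np.eye(k)[y]
    J = -np.log(p[np.arange(m), y]).mean() + 0.5 * lam * np.sum(theta ** 2)
    grad = -(onehot - p).T @ X / m + lam * theta
    return J, grad

# one batch-gradient-descent step on toy data
rng = np.random.default_rng(0)
theta = np.zeros((5, 3))                 # k = 5 visibility classes, 3 features
X = rng.normal(size=(8, 3))
y = rng.integers(0, 5, size=8)
J, g = loss_and_grad(theta, X, y, lam=0.0005)
theta -= 0.001 * g                       # learning rate 0.001 as in the patent
```

With all-zero parameters every class gets probability 1/5, so the initial loss is log 5, a quick sanity check on the implementation.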
the training platform was configured as an ubuntu system, Intel i7-4790 processor, a NVIDIA TITAN GPU with 32G video memory. The number of training samples is: 3463 pictures, 866 pictures as test sample images, and 0.9076 as test accuracy, see FIG. 3.
And step S5, testing the test samples preprocessed in step S2 with the CaffeNet convolutional neural network model trained in step S4: several image blocks of the network input size are randomly cropped from each test sample and fed to the network, the final visibility classification result is obtained by majority voting, and an early warning is issued when the visibility falls below the set threshold.
For example, with the Yangzhong scene picture of FIG. 1 as input, the output shown in FIG. 2 gives the probabilities of belonging to each visibility category as 9.99984622e-1, 1.53363544e-5, 2.95128366e-9, 1.67594187e-17 and 1.19709305e-17. Comparing the five probabilities, the input image belongs to the first visibility class, i.e. visibility below 750 meters.
The visible region of the picture is also the region near the camera, so the visibility value identified from the picture represents the visibility of the region captured by the camera probe.
Once the visibility value of a picture falls below a threshold (such as 1000 meters or 750 meters), an alarm is raised to alert the forecaster.
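Step S5, majority voting over the crops of one test picture followed by the threshold check, can be sketched as below; class bounds follow the five-class table, and all names and the example probabilities are illustrative:

```python
import numpy as np
from collections import Counter

# upper visibility bound in meters for each class (class 5 is open-ended)
CLASS_UPPER = {1: 750, 2: 1000, 3: 2250, 4: 3000, 5: float("inf")}

def majority_vote(prob_rows):
    """prob_rows: (n_crops, 5) class probabilities, one row per crop."""
    votes = [int(np.argmax(row)) + 1 for row in prob_rows]
    return Counter(votes).most_common(1)[0][0]

def needs_warning(visibility_class, threshold_m=1000):
    """Alarm when the voted class lies at or below the threshold."""
    return CLASS_UPPER[visibility_class] <= threshold_m

probs = np.array([
    [0.9, 0.05, 0.03, 0.01, 0.01],   # crop 1 -> class 1
    [0.7, 0.2, 0.05, 0.03, 0.02],    # crop 2 -> class 1
    [0.1, 0.6, 0.2, 0.05, 0.05],     # crop 3 -> class 2
])
cls = majority_vote(probs)           # two of three crops vote class 1
print(cls, needs_warning(cls))       # 1 True
```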
The above description is only a preferred embodiment of the present invention, and it should be noted that, for those skilled in the art, several modifications and variations can be made without departing from the technical principle of the present invention, and these modifications and variations should also be regarded as the protection scope of the present invention.

Claims (9)

1. A visibility recognition early warning method based on a camera probe is characterized by comprising the following steps:
step S1, shooting a plurality of groups of surrounding scene pictures by using a camera probe, and dividing the scene pictures into training samples and testing samples;
step S2, preprocessing the training samples and test samples, and cropping them into image blocks suited to the CaffeNet convolutional neural network;
step S3, constructing a CaffeNet convolutional neural network model, the network comprising 5 convolutional layers, 3 downsampling layers and 3 fully connected layers;
step S4, training the CaffeNet convolutional neural network model constructed in step S3 in two stages, forward propagation and backward propagation, using the training samples preprocessed in step S2; training ends when the error computed in backward propagation reaches the expected value, yielding the parameters of the convolutional neural network model;
and step S5, testing the test samples preprocessed in step S2 with the CaffeNet convolutional neural network model trained in step S4 to obtain visibility classification results, and issuing an early warning when the visibility falls below a set threshold.
2. The visibility recognition and early warning method based on the camera probe as claimed in claim 1, wherein the image block size is 227 × 227 pixels.
3. The visibility recognition and early warning method based on the camera probe as claimed in claim 2, wherein the image blocks are stored in .bmp format.
4. The visibility recognition and early warning method based on the camera probe as claimed in claim 1, wherein the output of the CaffeNet model is the probability that the picture block belongs to each visibility category, the picture belonging to the category corresponding to the maximum probability, and the visibility categories are divided into the following five classes by visibility interval: first class: 0-750 m; second class: 751-1000 m; third class: 1001-2250 m; fourth class: 2251-3000 m; fifth class: 3001 m and above.
5. The visibility recognition and early warning method based on the camera probe as claimed in claim 1, wherein the calculation formula of the convolutional layer in step S3 is:

$$x_j^{l_c} = f\Big(\sum_{i \in M_j} x_i^{l_c-1} * k_{ij}^{l_c} + b_j^{l_c}\Big)$$

where $x_j^{l_c}$ is the $j$-th output map of convolutional layer $l_c$, $f$ is the activation function, $M_j$ is the set of input feature maps, $*$ is the convolution operation, $k_{ij}^{l_c}$ is the convolution kernel between the $j$-th output map of layer $l_c$ and the $i$-th input map of the previous layer, $1 \le i \le \max(l_{c,\mathrm{in}})$ with $\max(l_{c,\mathrm{in}})$ the maximum number of input maps of layer $l_c$, $1 \le j \le \max(l_{c,\mathrm{out}})$ with $\max(l_{c,\mathrm{out}})$ the maximum number of output maps, $b_j^{l_c}$ is the additive bias of the $j$-th output map of layer $l_c$, and $l_c = 1,\dots,5$.
6. The visibility recognition and early warning method based on the camera probe as claimed in claim 1, wherein the formula of the downsampling layer in step S3 is:

$$x_j^{l_s} = f\Big(\beta_j^{l_s}\, S\big(x_j^{l_s-1}\big) + b_j^{l_s}\Big)$$

where $x_j^{l_s}$ is the $j$-th output map of downsampling layer $l_s$, $f$ is the activation function, $S$ is the downsampling function, and $\beta_j^{l_s}$, $b_j^{l_s}$ are respectively the multiplicative bias and additive bias of the $j$-th output map of layer $l_s$, $l_s = 1,\dots,3$.
7. The visibility recognition and early warning method based on the camera probe as claimed in claim 1, wherein the hypothesis function of the classification result in step S4 is:

$$h_\theta\big(x^{(i)}\big) = \begin{bmatrix} p\big(y^{(i)}=1 \mid x^{(i)};\theta\big) \\ p\big(y^{(i)}=2 \mid x^{(i)};\theta\big) \\ \vdots \\ p\big(y^{(i)}=k \mid x^{(i)};\theta\big) \end{bmatrix} = \frac{1}{\sum_{j=1}^{k} e^{\theta_j^T x^{(i)}}} \begin{bmatrix} e^{\theta_1^T x^{(i)}} \\ e^{\theta_2^T x^{(i)}} \\ \vdots \\ e^{\theta_k^T x^{(i)}} \end{bmatrix}$$

where $k$ is the number of classes and $x^{(i)}$ is the response of the $i$-th sample in the last fully connected layer of CaffeNet; the loss function of the classification result is:

$$J(\theta) = -\frac{1}{m}\left[\sum_{i=1}^{m}\sum_{j=1}^{k} 1\{y^{(i)}=j\}\log \frac{e^{\theta_j^T x^{(i)}}}{\sum_{l=1}^{k} e^{\theta_l^T x^{(i)}}}\right] + \frac{\lambda}{2}\sum_{i=1}^{k}\sum_{j=0}^{n}\theta_{ij}^2$$

where $m$ is the total number of training samples, $k$ is the number of classes, the second term is the weight-decay term and $\lambda$ is the weight decay; the network model is trained by batch gradient descent, and the partial derivative of the loss function with respect to $\theta_j$ is:

$$\nabla_{\theta_j} J(\theta) = -\frac{1}{m}\sum_{i=1}^{m}\left[x^{(i)}\Big(1\{y^{(i)}=j\} - \frac{e^{\theta_j^T x^{(i)}}}{\sum_{l=1}^{k} e^{\theta_l^T x^{(i)}}}\Big)\right] + \lambda\,\theta_j$$
8. the camera probe-based visibility recognition early warning method as claimed in claim 1, wherein a 'majority voting' method is adopted to obtain the classification result of the final test sample.
9. The visibility recognition and early warning method based on the camera probe as claimed in claim 1, wherein the visibility threshold is set to 1000 m or 750 m.
CN201710959010.1A 2017-10-16 2017-10-16 Visibility recognition early warning method based on camera probe Active CN107886049B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710959010.1A CN107886049B (en) 2017-10-16 2017-10-16 Visibility recognition early warning method based on camera probe


Publications (2)

Publication Number Publication Date
CN107886049A true CN107886049A (en) 2018-04-06
CN107886049B CN107886049B (en) 2022-08-26

Family

ID=61781462

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710959010.1A Active CN107886049B (en) 2017-10-16 2017-10-16 Visibility recognition early warning method based on camera probe

Country Status (1)

Country Link
CN (1) CN107886049B (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112014393A (en) * 2020-08-26 2020-12-01 大连信维科技有限公司 Medium visibility identification method based on target visual effect
CN114565795A (en) * 2022-03-02 2022-05-31 河北雄安京德高速公路有限公司 Method and system for identifying visibility grade of image monitoring adverse weather
US11961001B2 (en) 2017-12-15 2024-04-16 Nvidia Corporation Parallel forward and backward propagation

Citations (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101957309A (en) * 2010-08-17 2011-01-26 招商局重庆交通科研设计院有限公司 All-weather video measurement method for visibility
CN102012977A (en) * 2010-12-21 2011-04-13 福建师范大学 Signal peptide prediction method based on probabilistic neural network ensemble
CN102509102A (en) * 2011-09-28 2012-06-20 郝红卫 Visibility measuring method based on image study
CN102855640A (en) * 2012-08-10 2013-01-02 上海电机学院 Fruit grading system based on neural network
CN105117739A (en) * 2015-07-29 2015-12-02 南京信息工程大学 Clothes classifying method based on convolutional neural network
CN106096654A (en) * 2016-06-13 2016-11-09 南京信息工程大学 A kind of cell atypia automatic grading method tactful based on degree of depth study and combination
JP2016194815A (en) * 2015-03-31 2016-11-17 アイシン・エィ・ダブリュ株式会社 Feature image recognition system, feature image recognition method, and computer program
CN106408526A (en) * 2016-08-25 2017-02-15 南京邮电大学 Visibility detection method based on multilayer vectogram
CN106488559A (en) * 2016-11-22 2017-03-08 上海斐讯数据通信技术有限公司 A kind of outdoor positioning method based on visibility and server
CN106548645A (en) * 2016-11-03 2017-03-29 济南博图信息技术有限公司 Vehicle route optimization method and system based on deep learning
CN106682704A (en) * 2017-01-20 2017-05-17 中国科学院合肥物质科学研究院 Method of disease image identification based on hybrid convolutional neural network fused with context information
CN106778657A (en) * 2016-12-28 2017-05-31 南京邮电大学 Neonatal pain expression classification method based on convolutional neural networks
CN106845529A (en) * 2016-12-30 2017-06-13 北京柏惠维康科技有限公司 Image feature recognition methods based on many visual field convolutional neural networks
WO2017133009A1 (en) * 2016-02-04 2017-08-10 广州新节奏智能科技有限公司 Method for positioning human joint using depth image of convolutional neural network
CN107194924A (en) * 2017-05-23 2017-09-22 重庆大学 Expressway foggy-dog visibility detecting method based on dark channel prior and deep learning

Patent Citations (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101957309A (en) * 2010-08-17 2011-01-26 招商局重庆交通科研设计院有限公司 All-weather video measurement method for visibility
CN102012977A (en) * 2010-12-21 2011-04-13 福建师范大学 Signal peptide prediction method based on probabilistic neural network ensemble
CN102509102A (en) * 2011-09-28 2012-06-20 郝红卫 Visibility measuring method based on image study
CN102855640A (en) * 2012-08-10 2013-01-02 上海电机学院 Fruit grading system based on neural network
JP2016194815A (en) * 2015-03-31 2016-11-17 Aisin AW Co., Ltd. Feature image recognition system, feature image recognition method, and computer program
CN105117739A (en) * 2015-07-29 2015-12-02 南京信息工程大学 Clothes classifying method based on convolutional neural network
WO2017133009A1 (en) * 2016-02-04 2017-08-10 广州新节奏智能科技有限公司 Method for positioning human joint using depth image of convolutional neural network
CN106096654A (en) * 2016-06-13 2016-11-09 南京信息工程大学 Automatic cell atypia grading method based on deep learning and a combination strategy
CN106408526A (en) * 2016-08-25 2017-02-15 南京邮电大学 Visibility detection method based on multilayer vectogram
CN106548645A (en) * 2016-11-03 2017-03-29 济南博图信息技术有限公司 Vehicle route optimization method and system based on deep learning
CN106488559A (en) * 2016-11-22 2017-03-08 上海斐讯数据通信技术有限公司 Outdoor positioning method and server based on visibility
CN106778657A (en) * 2016-12-28 2017-05-31 南京邮电大学 Neonatal pain expression classification method based on convolutional neural networks
CN106845529A (en) * 2016-12-30 2017-06-13 北京柏惠维康科技有限公司 Image feature recognition method based on multi-view-field convolutional neural networks
CN106682704A (en) * 2017-01-20 2017-05-17 中国科学院合肥物质科学研究院 Method of disease image identification based on hybrid convolutional neural network fused with context information
CN107194924A (en) * 2017-05-23 2017-09-22 重庆大学 Expressway foggy-dog visibility detecting method based on dark channel prior and deep learning

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
DEAR F et al.: "Application of artificial neural network forecasts to predict fog at Canberra International Airport", Weather and Forecasting *
LI Pei et al.: "Visibility forecasting for the Beijing area based on stepwise neural network classification modeling", Journal of Lanzhou University (Natural Sciences) *
MIAO Miao: "A survey of video-based visibility detection algorithms", Modern Electronics Technique *
DENG Xiaohua et al.: "Preliminary interpretive application of support vector machines to numerical simulation results", Marine Forecasts *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11961001B2 (en) 2017-12-15 2024-04-16 Nvidia Corporation Parallel forward and backward propagation
CN112014393A (en) * 2020-08-26 2020-12-01 大连信维科技有限公司 Medium visibility identification method based on target visual effect
CN112014393B (en) * 2020-08-26 2023-12-19 大连信维科技有限公司 A media visibility recognition method based on target visual effects
CN114565795A (en) * 2022-03-02 2022-05-31 河北雄安京德高速公路有限公司 Method and system for identifying visibility grade of image monitoring adverse weather

Also Published As

Publication number Publication date
CN107886049B (en) 2022-08-26

Similar Documents

Publication Publication Date Title
CN110348376B (en) Pedestrian real-time detection method based on neural network
CN113469278B (en) Strong weather target identification method based on deep convolutional neural network
CN110223341B (en) Intelligent water level monitoring method based on image recognition
CN116503318B (en) Aerial insulator multi-defect detection method, system and equipment integrating CAT-BiFPN and attention mechanism
CN109559302A (en) Pipe video defect inspection method based on convolutional neural networks
CN107742099A (en) Crowd density estimation and people counting method based on a fully convolutional network
CN111738114B (en) Vehicle target detection method based on accurate sampling of remote sensing images without anchor points
CN116665153B (en) A road scene segmentation method based on improved Deeplabv3+ network model
CN116740528B (en) A Target Detection Method and System Based on Shadow Features in Side-Scan Sonar Images
CN111062950A (en) Method, storage medium and equipment for multi-class forest scene image segmentation
CN113469097B (en) A multi-camera real-time detection method of floating objects on water surface based on SSD network
CN108052929A (en) Parking space state detection method, system, readable storage medium, and computer equipment
CN114140428A (en) Method and system for detection and identification of larch caterpillar pests based on YOLOv5
CN119027676B (en) Segmentation algorithm and quantitative calculation of various defects in bridge images based on perceptual analysis
CN118941507A (en) Defect detection method, device, electronic device and storage medium for unmanned aerial vehicle inspection of photovoltaic arrays
CN114078209A (en) Lightweight target detection method for improving small target detection precision
CN114862812A (en) Two-stage rail transit vehicle defect detection method and system based on priori knowledge
CN107886049B (en) Visibility recognition early warning method based on camera probe
CN109376580A (en) A deep learning-based identification method for power tower components
CN110796360A (en) Fixed traffic detection source multi-scale data fusion method
CN117274822A (en) Processing methods, devices and electronic equipment for soil and water loss monitoring models
CN112906795A (en) Whistle vehicle judgment method based on convolutional neural network
CN116994161A (en) An insulator defect detection method based on improved YOLOv5
CN119963686B (en) A data visualization processing method for intelligent transportation
CN114529815A (en) Deep learning-based traffic detection method, device, medium and terminal

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant