Temperature Prediction of Heating Furnace Based on Deep Transfer Learning
Figure 1. (a) Architectural elements in a temporal convolutional network (TCN); (b) residual structure of the TCN.
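The dilated causal convolution underlying the TCN architecture sketched above can be illustrated in plain NumPy; the kernel weights and dilation factor below are illustrative choices, not the paper's model parameters.

```python
import numpy as np

def dilated_causal_conv1d(x, w, dilation):
    """Causal 1D convolution with the given dilation factor.

    x: input sequence, shape (T,); w: kernel weights, shape (k,).
    Causality: the output at step t depends only on x[t], x[t-d], x[t-2d], ...
    (implicit zero padding to the left of the sequence).
    """
    k = len(w)
    y = np.zeros(len(x), dtype=float)
    for t in range(len(x)):
        s = 0.0
        for i in range(k):
            j = t - i * dilation      # look back i*dilation steps
            if j >= 0:
                s += w[i] * x[j]
        y[t] = s
    return y

# Stacking layers with dilations 1, 2, 4, ... grows the receptive field
# exponentially: kernel size k over L layers covers 1 + (k-1)*(2**L - 1) steps.
x = np.arange(8, dtype=float)
y = dilated_causal_conv1d(x, np.array([1.0, 1.0]), dilation=2)  # y[t] = x[t] + x[t-2]
```

In a full TCN, each such layer sits inside the residual block of Figure 1b, i.e., the block output is the sum of the convolution output and a (possibly 1×1-projected) copy of its input.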
Figure 2. Heating process of the heating furnace. The furnace is a three-stage heating furnace: heating zones 1 to 4 are located in the preheating section, zones 5 to 8 in the heating section, and zones 9 and 10 in the soaking section.
Figure 3. The overall framework. The first part preprocesses the collected raw data, including the selection of relevant variables, the conversion of the collected data into time series samples, the determination of the sliding window size, and the determination of the source and target domains of this case. The second part trains and optimizes the structure and parameters of the TCN model on the source domain data and then refines the base model with the target domain data. In the third part, the prediction error and accuracy of the target domain model are evaluated on the target domain test set, and the target model is finally output.
Figure 4. Training and test set distribution of the target variable in heating zone 1.
Figure 5. We choose two heating zones to show that the neural network has no extrapolation capability. (a) Table 2; (b) training and test set distribution of the target variable in heating zone 6.
Figure 6. An example of constructing time series samples with a sliding window.
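The sliding-window sample construction can be sketched as follows; the window size, one-step horizon, and the assumption that column 0 holds the target temperature are illustrative, not the paper's exact settings.

```python
import numpy as np

def make_windows(series, window, horizon=1):
    """Convert a multivariate series of shape (T, n_vars) into supervised samples.

    Each sample X[i] holds `window` consecutive time steps; the label y[i] is
    the target variable `horizon` steps after the window ends.
    """
    series = np.asarray(series, dtype=float)
    T = len(series)
    X, y = [], []
    for start in range(T - window - horizon + 1):
        X.append(series[start:start + window])
        # column 0 taken as the target temperature (illustrative assumption)
        y.append(series[start + window + horizon - 1, 0])
    return np.array(X), np.array(y)

data = np.arange(20, dtype=float).reshape(10, 2)  # 10 time steps, 2 variables
X, y = make_windows(data, window=4)               # X: (6, 4, 2), y: (6,)
```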
Figure 7. Auto-correlation within different sliding window sizes.
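A minimal sketch of how the sample auto-correlation can guide window size selection; the 0.5 threshold and the helper `max_correlated_lag` are our own illustrative choices, not the paper's exact procedure.

```python
import numpy as np

def autocorr(x, lag):
    """Sample auto-correlation of a series at the given lag."""
    x = np.asarray(x, dtype=float)
    x = x - x.mean()
    if lag == 0:
        return 1.0
    return float(np.dot(x[:-lag], x[lag:]) / np.dot(x, x))

def max_correlated_lag(x, threshold=0.5, max_lag=50):
    """Largest lag whose auto-correlation magnitude stays above the threshold,
    used here as a candidate sliding window size."""
    x = np.asarray(x, dtype=float)
    max_lag = min(max_lag, len(x) - 1)
    lags = [L for L in range(1, max_lag + 1) if abs(autocorr(x, L)) >= threshold]
    return max(lags) if lags else 1
```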
Figure 8. Maximum mean discrepancy (MMD) scores between the source domain and target domains under different sliding window sizes. The score measures the similarity between each of the remaining zones and zone 1; the inset in the upper right corner shows the MMD scores of zone 2 versus zone 1, zone 3 versus zone 1, and so on through zone 10.
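The MMD score between source and target samples can be computed with a Gaussian RBF kernel, as in the kernel two-sample test [35]; this is a minimal biased-estimator sketch, and the `gamma` value is an assumption.

```python
import numpy as np

def mmd_rbf(X, Y, gamma=1.0):
    """Biased estimate of the squared MMD between samples X and Y under a
    Gaussian RBF kernel k(a, b) = exp(-gamma * ||a - b||**2)."""
    X, Y = np.asarray(X, dtype=float), np.asarray(Y, dtype=float)

    def kernel(A, B):
        # pairwise squared distances via ||a-b||^2 = ||a||^2 + ||b||^2 - 2 a.b
        d2 = np.sum(A**2, axis=1)[:, None] + np.sum(B**2, axis=1)[None, :] - 2.0 * A @ B.T
        return np.exp(-gamma * d2)

    return float(kernel(X, X).mean() + kernel(Y, Y).mean() - 2.0 * kernel(X, Y).mean())

same = mmd_rbf(np.zeros((5, 2)), np.zeros((5, 2)))  # identical samples -> 0
far = mmd_rbf(np.zeros((5, 2)), np.ones((5, 2)))    # distant samples -> large
```

A lower score between two zones' windowed samples indicates more similar distributions, which is how the window size and the most transferable source zone can be compared.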
Figure 9. The TCN structure proposed in this paper and the initial TCN structure proposed in [19]: (a) the TCN structure used in this paper, with hidden layer C added; (b) the structure of hidden layer C; (c) the initial TCN structure proposed in [19].
Figure 10. Using self-transfer learning to initialize a neural network. Some hidden layers of the pre-trained model are frozen, and the unfrozen layers are then updated with the model's own training set.
Figure 11. Performance of the TCN under different numbers of frozen layers.
Figure 12. Prediction results before and after TCN optimization.
Figure 13. Comparison of the TCN network after the two improvements with the TCN proposed in [19]: (a) RMSE scores under different numbers of frozen layers; (b) MAE scores under different numbers of frozen layers.
Figure 14. Using the target domain training set to fine-tune the unfrozen layers of the source model to obtain a new target model (TL-TCN), which is then evaluated on the target domain test set.
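The freeze-then-fine-tune idea can be illustrated framework-independently on a toy two-layer linear model; the weights, learning rate, and target relation below are invented for illustration and stand in for the frozen TCN layers and the fine-tuned head.

```python
import numpy as np

rng = np.random.default_rng(0)

# Pretrained "source model" weights of a toy two-layer linear model y = w2 * (w1 * x).
w1, w2 = 2.0, 3.0

# Target domain data, where only the top layer's relation differs (illustrative).
x = rng.normal(size=200)
y = 2.0 * 4.0 * x            # the fine-tuned w2 should approach 4.0

# Fine-tuning: freeze w1 (the lower "feature extractor") and update only w2.
lr = 0.01
w1_frozen = w1
for _ in range(500):
    h = w1 * x                               # frozen layer: no gradient update
    pred = w2 * h
    grad_w2 = 2.0 * np.mean((pred - y) * h)  # d/dw2 of mean squared error
    w2 -= lr * grad_w2                       # only the unfrozen layer moves
```

The frozen layer keeps the knowledge transferred from the source domain, while the small amount of target data only has to adapt the unfrozen part.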
Figure 15. Domain adaptation is achieved by a generative adversarial network, and the target model is then fine-tuned with the target variable of the target domain.
Figure 16. RMSE and MAE scores of the different models applied to the nine target zones.
Figure 17. "No transfer learning" means that the prediction is completed without transfer learning, although self-transfer learning is still used to set the initial weights of the network. TL-TCN refers to whichever of fine-tuning and GAN-TCN gives the better result. (a–i) Comparison of the prediction results for heating zones 2–10.
Figure A1. Temperature distribution of the target variables in each heating zone except those shown in the manuscript. (a–g) Temperature distribution of heating zones 3–10, excluding heating zone 6.
Abstract
1. Introduction
- Parameter complexity. Many kinds of parameters are involved in the production process of a heating furnace, including structural parameters (heat exchanger model, thermocouple model), control parameters (furnace temperature of each heating section, fan opening setting, gas valve opening), and thermal parameters (gas flow, oxygen content, nitrogen flow, steam flow). These parameters are strongly coupled and influence one another.
- Temperature hysteresis. The heating process is nonlinear, time-varying, and lagging: after a control action is applied, its effect is delayed for a period of time. If countermeasures are taken only after an alarm is triggered, production equipment is damaged and energy consumption increases. It is therefore necessary to build a time series prediction model that forecasts the temperature trend in advance, so that the control strategy can be adjusted in time.
- Multi-objective prediction. A billet passes through multiple heating zones in the furnace. Since different heating zones have independently adjustable control variables, each zone corresponds to a different prediction task; at the same time, parameters are shared between zones. Achieving efficient and accurate prediction across multiple zones is therefore a difficult problem.
- To the best of our knowledge, this is the first time transfer learning has been used to solve the problem of temperature prediction in multiple heating zones of the same furnace. We first use a generative adversarial loss for domain adaptation and then use fine-tuning to complete the target domain task. The proposed framework clearly improves prediction accuracy.
- Combining the auto-correlation function with the maximum mean discrepancy (MMD), we propose a sliding window size selection method for the transfer learning setting, which provides a novel approach to choosing the window size.
- We propose a transfer-learning-based weight initialization method for neural networks.
- To the best of our knowledge, this is the first time transfer learning has been used to address the inability of neural networks to extrapolate.
- We optimize the structure and parameters of the TCN to improve its prediction performance on time series.
- Through extensive experiments, we provide consistent results for 10 different heating zones, which show that the TCN optimization method and the transfer learning framework proposed in this paper achieve state-of-the-art performance for multiple-heating-zone prediction.
2. Related Work
2.1. Temporal Convolutional Network
2.2. Transfer Learning for Time Series
2.3. Generative Adversarial Networks
3. Data Processing and Analysis
3.1. Heating Process of Heating Furnace
3.2. The Overall Framework
3.3. Data Collection and Processing
3.4. Source Domain and Target Domain
3.5. Sliding Window Model
4. Methodology and Results Analysis
4.1. Technical Details
4.2. TCN for Furnace Temperature Prediction
4.3. Transfer Learning for Furnace Temperature Prediction
- The TCN model outperforms the RNN variants in almost all zones; only in heating zone 9 does the GRU outperform the TCN.
- The proposed self-TL-TCN performs better than the plain TCN model, and the self-TL-GRU likewise outperforms the plain GRU. This shows that network performance can be improved by setting the initial weights of the network according to the transfer learning idea.
- The two transfer learning frameworks we propose effectively solve the problem of large prediction errors in some heating zones. Zones 10 and 9 show the largest error reductions, with RMSE reduced by 57.38% and 43.97% and MAE reduced by 51.63% and 50.59%, respectively. TL-BiLSTM also performs better than the models without knowledge transfer in every zone, but worse than our models.
4.4. Discussion on Whether the Framework Is Overfitted
5. Conclusions
Author Contributions
Funding
Conflicts of Interest
Appendix A
Representative Brand | Soaking Section (°C), Zones 09, 10 | Heating Section (°C), Zones 07, 08, 05, 06 | Preheating Section (°C), Zones 04, 03, 01, 02
---|---|---|---
Q *** | 1300~1190 | 1310~1180 | 1250~1000
S *** | 1310~1180 | 1300~1170 | 1250~1000
Q *** | 1300~1180 | 1310~1170 | 1240~1000
X *** | 1300~1180 | 1310~1170 | 1240~1000
Representative Brand | Soaking Section (°C), Zones 09, 10 | Heating Section (°C), Zones 07, 08, 05, 06 | Preheating Section (°C), Zones 04, 03, 01, 02
---|---|---|---
Q *** | 1290~1190 | 1300~1180 | 1250~1000
S *** | 1300~1180 | 1290~1170 | 1250~1000
Q *** | 1290~1180 | 1300~1170 | 1240~1000
X *** | 1290~1180 | 1300~1170 | 1240~1000
References
- Ko, H.S.; Kim, J.-S.; Yoon, T.W.; Lim, M.; Yang, D.R.; Jun, I.S. Modeling and predictive control of a reheating furnace. In Proceedings of the 2000 American Control Conference (ACC), Chicago, IL, USA, 28–30 June 2000; Volume 4, pp. 2725–2729.
- Liao, Y.; She, J.; Wu, M. Integrated Hybrid-PSO and Fuzzy-NN decoupling control for temperature of reheating furnace. IEEE Trans. Ind. Electron. 2009, 56, 2704–2714.
- Wang, J.; Wang, J.; Xiao, Q.; Ma, S.; Rao, W.; Zhang, Y. Data-driven thermal efficiency modeling and optimization for co-firing boiler. In Proceedings of the 26th Chinese Control and Decision Conference (2014 CCDC), Changsha, China, 31 May–2 June 2014; pp. 3608–3611.
- Zhou, P.; Guo, D.; Wang, H.; Chai, T. Data-driven robust M-LS-SVR-based NARX modeling for estimation and control of molten iron quality indices in blast furnace ironmaking. IEEE Trans. Neural Netw. Learn. Syst. 2018, 29, 4007–4021.
- Wang, X. Ladle furnace temperature prediction model based on large-scale data with random forest. IEEE/CAA J. Autom. Sin. 2017, 4, 770–774.
- Gao, C.; Ge, Q.; Jian, L. Rule extraction from fuzzy-based blast furnace SVM multiclassifier for decision-making. IEEE Trans. Fuzzy Syst. 2014, 22, 586–596.
- Xiaoke, F.; Liye, Y.; Qi, W.; Jianhui, W. Establishment and optimization of heating furnace billet temperature model. In Proceedings of the 2012 24th Chinese Control and Decision Conference (CCDC), Taiyuan, China, 23–25 May 2012; pp. 2366–2370.
- Tunckaya, Y.; Köklükaya, E. Comparative performance evaluation of blast furnace flame temperature prediction using artificial intelligence and statistical methods. Turk. J. Electr. Eng. Comput. Sci. 2016, 24, 1163–1175.
- Liu, X.; Liu, L.; Wang, L.; Guo, Q.; Peng, X. Performance sensing data prediction for an aircraft auxiliary power unit using the optimized extreme learning machine. Sensors 2019, 19, 3935.
- Chen, Y.; Chai, T. A model for steel billet temperature prediction of heating furnace. In Proceedings of the 29th Chinese Control Conference, Beijing, China, 29–31 July 2010; pp. 1299–1302.
- Shao, S.; McAleer, S.; Yan, R.; Baldi, P. Highly accurate machine fault diagnosis using deep transfer learning. IEEE Trans. Ind. Inform. 2019, 15, 2446–2455.
- Zhou, H.; Zhang, H.; Yang, C. Hybrid-model-based intelligent optimization of ironmaking process. IEEE Trans. Ind. Electron. 2020, 67, 2469–2479.
- Ding, S.; Wang, Z.; Peng, Y.; Yang, H.; Song, G.; Peng, X. Dynamic prediction of the silicon content in the blast furnace using LSTM-RNN based models. In Proceedings of the 2017 International Conference on Computer Technology, Electronics and Communication (ICCTEC), Dalian, China, 19–21 December 2017; pp. 227–231.
- Gao, H.; Cheng, B.; Wang, J.; Li, K.; Zhao, J.; Li, D. Object classification using CNN-based fusion of vision and LIDAR in autonomous vehicle environment. IEEE Trans. Ind. Inform. 2018, 14, 4224–4231.
- Liu, Z.; Wu, Z.; Li, T.; Li, J.; Shen, C. GMM and CNN hybrid method for short utterance speaker recognition. IEEE Trans. Ind. Inform. 2018, 14, 3244–3252.
- Cruz, G.; Bernardino, A. Learning temporal features for detection on maritime airborne video sequences using convolutional LSTM. IEEE Trans. Geosci. Remote Sens. 2019, 57, 6565–6576.
- Pascanu, R.; Mikolov, T.; Bengio, Y. On the difficulty of training recurrent neural networks. In Proceedings of the International Conference on Machine Learning, Atlanta, GA, USA, 16–24 June 2013; pp. 1310–1318.
- Jozefowicz, R.; Zaremba, W.; Sutskever, I. An empirical exploration of recurrent network architectures. In Proceedings of the 32nd International Conference on Machine Learning, Lille, France, 6–11 July 2015; pp. 2342–2350.
- Bai, S.; Kolter, J.Z.; Koltun, V. An empirical evaluation of generic convolutional and recurrent networks for sequence modeling. arXiv 2018, arXiv:1803.01271.
- Chang, S.-Y.; Li, B.; Simko, G.; Sainath, T.N.; Tripathi, A.; van den Oord, A.; Vinyals, O. Temporal modeling using dilated convolution and gating for voice-activity-detection. In Proceedings of the 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Calgary, AB, Canada, 15–20 April 2018; pp. 5549–5553.
- He, K.; Zhang, X.; Ren, S.; Sun, J. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778.
- Torrey, L.; Shavlik, J. Transfer Learning. In Handbook of Research on Machine Learning Applications and Trends: Algorithms, Methods, and Techniques; IGI Global: Hershey, PA, USA, 2010; pp. 242–264.
- Do, C.B.; Ng, A.Y. Transfer Learning for Text Classification. In Advances in Neural Information Processing Systems; MIT Press: Cambridge, MA, USA, 2006; pp. 299–306.
- Wang, H.; Yu, Y.; Cai, Y.; Chen, L.; Chen, X. A vehicle recognition algorithm based on deep transfer learning with a multiple feature subspace distribution. Sensors 2018, 18, 4109.
- Pan, S.J.; Yang, Q. A survey on transfer learning. IEEE Trans. Knowl. Data Eng. 2010, 22, 1345–1359.
- Long, J.; Shelhamer, E.; Darrell, T. Fully convolutional networks for semantic segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Boston, MA, USA, 8–12 June 2015; pp. 3431–3440.
- Fawaz, H.I.; Forestier, G.; Weber, J.; Idoumghar, L.; Muller, P. Transfer learning for time series classification. In Proceedings of the 2018 IEEE International Conference on Big Data (Big Data), Seattle, WA, USA, 10–13 December 2018; pp. 1367–1376.
- Qureshi, A.S.; Khan, A.; Zameer, A.; Usman, A. Wind power prediction using deep neural network based meta regression and transfer learning. Appl. Soft Comput. 2017, 58, 742–755.
- Sun, C.; Ma, M.; Zhao, Z.; Tian, S.; Yan, R.; Chen, X. Deep transfer learning based on sparse autoencoder for remaining useful life prediction of tool in manufacturing. IEEE Trans. Ind. Inform. 2019, 15, 2416–2425.
- Ma, J.; Cheng, J.C.P.; Lin, C.; Tan, Y.; Zhang, J. Improving air quality prediction accuracy at larger temporal resolutions using deep learning and transfer learning techniques. Atmos. Environ. 2019, 214, 116885.
- Goodfellow, I.; Pouget-Abadie, J.; Mirza, M.; Xu, B.; Warde-Farley, D.; Ozair, S.; Courville, A.; Bengio, Y. Generative Adversarial Nets. In Advances in Neural Information Processing Systems; MIT Press: Cambridge, MA, USA, 2014; pp. 2672–2680.
- Arjovsky, M.; Chintala, S.; Bottou, L. Wasserstein GAN. arXiv 2017, arXiv:1701.07875.
- Basrak, B. The Sample Autocorrelation Function of Non-Linear Time Series; Rijksuniversiteit Groningen: Groningen, The Netherlands, 2000.
- Dziugaite, G.K.; Roy, D.M.; Ghahramani, Z. Training generative neural networks via maximum mean discrepancy optimization. arXiv 2015, arXiv:1505.03906.
- Gretton, A.; Borgwardt, K.M.; Rasch, M.J.; Schölkopf, B.; Smola, A. A kernel two-sample test. J. Mach. Learn. Res. 2012, 13, 723–773.
- Steinwart, I.; Hush, D.; Scovel, C. An explicit description of the reproducing kernel Hilbert spaces of Gaussian RBF kernels. IEEE Trans. Inf. Theory 2006, 52, 4635–4643.
- Gulli, A.; Pal, S. Deep Learning with Keras; Packt Publishing Ltd.: Birmingham, UK, 2017.
- He, K.; Zhang, X.; Ren, S.; Sun, J. Delving deep into rectifiers: Surpassing human-level performance on ImageNet classification. In Proceedings of the IEEE International Conference on Computer Vision (ICCV), Santiago, Chile, 7–13 December 2015; pp. 1026–1034.
- Kingma, D.P.; Ba, J. Adam: A method for stochastic optimization. arXiv 2014, arXiv:1412.6980.
- Cho, K.; Van Merriënboer, B.; Gulcehre, C.; Bahdanau, D.; Bougares, F.; Schwenk, H.; Bengio, Y. Learning phrase representations using RNN encoder-decoder for statistical machine translation. arXiv 2014, arXiv:1406.1078.
- Huang, Z.; Xu, W.; Yu, K. Bidirectional LSTM-CRF models for sequence tagging. arXiv 2015, arXiv:1508.01991.
Heating zone | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10
---|---|---|---|---|---|---|---|---|---
Number of different variables | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9
Parameters | CNN-LSTM | LSTM | BiLSTM | GRU |
---|---|---|---|---|
Neurons | 364 | 128 | 256 | 256 |
Batch Size | 64 | 64 | 64 | 72 |
Epochs | 100 | 100 | 100 | 70 |
Metric | CNN-LSTM | LSTM | BiLSTM | GRU | TCN
---|---|---|---|---|---
RMSE | 18.067 | 18.318 | 17.684 | 13.428 | 11.362
MAE | 14.715 | 13.938 | 13.807 | 10.218 | 8.797
Dataset Characteristics | Number of Instances | Attribute Characteristics | Number of Attributes |
---|---|---|---|
Multivariate, Time Series | 43,824 | Integer, Real | 13 1 |
Dataset | Metric | TCN | Improved TCN
---|---|---|---
Heating furnace | RMSE | 12.509 | 11.362
Heating furnace | MAE | 9.596 | 8.797
Beijing PM 2.5 | RMSE | 23.065 | 22.399
Beijing PM 2.5 | MAE | 12.194 | 11.738
Dataset | Metric | TCN | Self-TL-TCN | Improvement
---|---|---|---|---
Heating furnace | RMSE | 12.509 | 8.870 | 29.09%
Heating furnace | MAE | 9.596 | 6.781 | 29.33%
Beijing PM 2.5 | RMSE | 23.065 | 22.279 | 3.408%
Beijing PM 2.5 | MAE | 12.194 | 11.589 | 4.958%
Heating zone | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10
---|---|---|---|---|---|---|---|---|---
Number of frozen layers | 2 | 28 | 2 | 2 | 2 | 2 | 2 | 4 | 24
Zone | GAN-TL | Fine-Tune | TL-BiLSTM | Self-TL-TCN | TCN | LSTM | BiLSTM | GRU | CNN-LSTM | Improvement
---|---|---|---|---|---|---|---|---|---|---
Zone 2 | 16.376 | 16.622 | 26.737 | 29.923 | 45.937 | 46.683 | 50.034 | 49.908 | 57.514 | 45.27%
Zone 3 | 7.803 | 9.137 | 10.058 | 12.504 | 14.817 | 15.410 | 14.855 | 14.968 | 16.321 | 37.60%
Zone 4 | 1.644 | 1.567 | 1.744 | 3.049 | 3.759 | 4.365 | 5.244 | 4.275 | 5.074 | 48.61%
Zone 5 | 6.200 | 6.214 | 8.688 | 10.048 | 13.310 | 11.598 | 14.776 | 17.558 | 14.085 | 38.30%
Zone 6 | 3.062 | 3.204 | 3.793 | 5.064 | 7.017 | 7.935 | 7.838 | 7.804 | 7.534 | 39.53%
Zone 7 | 2.272 | 2.143 | 3.211 | 2.987 | 6.751 | 9.271 | 10.486 | 9.756 | 11.129 | 28.26%
Zone 8 | 3.560 | 4.072 | 5.427 | 7.096 | 10.944 | 12.563 | 11.771 | 11.104 | 11.929 | 49.83%
Zone 10 | 3.309 | 3.037 | 5.790 | 7.125 | 9.850 | 9.948 | 9.861 | 10.113 | 12.734 | 57.38%

Zone | GAN-TL | Fine-Tune | TL-BiLSTM | Self-TL-GRU | TCN | LSTM | BiLSTM | GRU | CNN-LSTM | Improvement
---|---|---|---|---|---|---|---|---|---|---
Zone 9 | 2.866 | 2.124 | 4.063 | 3.791 | 6.360 | 7.106 | 6.727 | 6.206 | 7.064 | 43.97%
Zone | GAN-TL | Fine-Tune | TL-BiLSTM | Self-TL-TCN | TCN | LSTM | BiLSTM | GRU | CNN-LSTM | Improvement
---|---|---|---|---|---|---|---|---|---|---
Zone 2 | 12.006 | 12.595 | 21.536 | 24.407 | 38.162 | 40.790 | 43.751 | 42.851 | 47.471 | 50.81%
Zone 3 | 6.116 | 6.936 | 8.163 | 9.092 | 10.964 | 12.215 | 11.420 | 11.567 | 12.712 | 32.73%
Zone 4 | 1.318 | 1.295 | 1.433 | 2.334 | 2.996 | 3.400 | 4.366 | 3.440 | 3.971 | 44.52%
Zone 5 | 4.683 | 4.792 | 6.988 | 7.848 | 10.219 | 9.640 | 11.834 | 14.846 | 11.427 | 40.33%
Zone 6 | 2.418 | 2.463 | 2.955 | 4.080 | 5.642 | 6.280 | 6.173 | 6.135 | 6.011 | 40.73%
Zone 7 | 1.696 | 1.680 | 2.731 | 2.400 | 5.354 | 7.438 | 8.924 | 8.573 | 9.201 | 30.00%
Zone 8 | 2.862 | 3.361 | 4.979 | 5.985 | 9.150 | 10.171 | 9.521 | 9.306 | 10.091 | 52.18%
Zone 10 | 2.448 | 2.285 | 3.935 | 4.724 | 5.456 | 6.194 | 6.261 | 6.541 | 7.020 | 51.63%

Zone | GAN-TL | Fine-Tune | TL-BiLSTM | Self-TL-GRU | TCN | LSTM | BiLSTM | GRU | CNN-LSTM | Improvement
---|---|---|---|---|---|---|---|---|---|---
Zone 9 | 2.157 | 1.631 | 3.092 | 3.301 | 5.298 | 6.009 | 5.729 | 5.071 | 5.889 | 50.59%
Data | Zone 2 | Zone 3 | Zone 4 | Zone 5 | Zone 6 | Zone 7 | Zone 8 | Zone 9 | Zone 10 |
---|---|---|---|---|---|---|---|---|---|
Original | −0.19 | −0.15 | 0.84 | 0.72 | 0.58 | 0.82 | 0.18 | 0.72 | 0.72 |
Generated | 0.90 | 0.37 | 0.71 | 0.88 | 0.82 | 0.74 | 0.80 | 0.60 | 0.60 |
Model | Zone 2 | Zone 3 | Zone 4 | Zone 5 | Zone 6 | Zone 7 | Zone 8 | Zone 9 | Zone 10
---|---|---|---|---|---|---|---|---|---
TL-TCN | 119,937 | 57,921 | 119,937 | 119,937 | 119,937 | 119,937 | 119,937 | 111,681 | 62,081
Self-TCN | 181,890 | 181,890 | 247,938 | 132,244 | 181,890 | 243,906 | 223,234 | 173,634 | 247,938
TCN | 123,969 | 123,969 | 123,969 | 123,969 | 123,969 | 123,969 | 123,969 | 123,969 | 123,969
LSTM | 326,913 | 97,921 | 97,921 | 326,913 | 97,921 | 326,913 | 97,921 | 97,921 | 97,921
GRU | 245,249 | 73,473 | 245,249 | 245,249 | 73,473 | 245,249 | 245,249 | 73,473 | 245,249
BiLSTM | 653,825 | 195,841 | 195,841 | 653,825 | 195,841 | 653,825 | 195,841 | 195,841 | 195,841
CNN-LSTM | 549,221 | 155,649 | 155,649 | 275,877 | 155,649 | 275,877 | 155,649 | 155,649 | 231,141
Model | Zone 2 | Zone 3 | Zone 4 | Zone 5 | Zone 6 | Zone 7 | Zone 8 | Zone 9 | Zone 10
---|---|---|---|---|---|---|---|---|---
TL-TCN | 51 | 28 | 50 | 51 | 51 | 52 | 51 | 45 | 31
Self-TCN | 125 | 125 | 145 | 101 | 124 | 142 | 138 | 119 | 143
TCN | 90 | 91 | 90 | 92 | 90 | 90 | 91 | 92 | 91
LSTM | 703 | 366 | 365 | 700 | 366 | 703 | 365 | 365 | 366
GRU | 504 | 360 | 500 | 502 | 358 | 502 | 503 | 348 | 500
BiLSTM | 1413 | 733 | 735 | 1415 | 733 | 1413 | 730 | 732 | 732
CNN-LSTM | 302 | 255 | 256 | 283 | 255 | 285 | 254 | 255 | 280
Metric | TL-TCN | TL-BiLSTM | Self-TL-TCN | TCN | LSTM | BiLSTM | GRU | CNN-LSTM
---|---|---|---|---|---|---|---|---
RMSE | 92.8767 | 93.6835 | 93.9478 | 94.7354 | 96.4152 | 95.8696 | 96.2931 | 105.0256
MAE | 65.3171 | 65.4864 | 65.4517 | 66.2132 | 66.5143 | 66.8429 | 67.4102 | 75.2448
Zone | TL-TCN | TL-BiLSTM | Self-TL-TCN | TCN | LSTM | BiLSTM | GRU | CNN-LSTM | Improvement
---|---|---|---|---|---|---|---|---|---
Zone 2 | 16.244 | 26.737 | 29.923 | 45.937 | 46.683 | 50.034 | 49.908 | 57.514 | 45.71%
Zone 4 | 2.609 | 1.744 | 3.049 | 3.759 | 4.365 | 5.244 | 4.275 | 5.074 | 14.43%
Zone 5 | 8.528 | 8.688 | 10.048 | 13.310 | 11.598 | 14.776 | 17.558 | 14.085 | 15.13%
Zone 6 | 3.363 | 3.793 | 5.064 | 7.017 | 7.935 | 7.838 | 7.804 | 7.534 | 33.59%
Zone 7 | 2.984 | 3.211 | 2.987 | 6.751 | 9.271 | 10.486 | 9.756 | 11.129 | 0.1%
Zone 8 | 4.767 | 5.427 | 7.096 | 10.944 | 12.563 | 11.771 | 11.104 | 11.929 | 32.82%
Zone 10 | 3.574 | 5.790 | 7.125 | 9.850 | 9.948 | 9.861 | 10.113 | 12.734 | 49.84%

Zone | TL-TCN | TL-BiLSTM | Self-TL-GRU | TCN | LSTM | BiLSTM | GRU | CNN-LSTM | Improvement
---|---|---|---|---|---|---|---|---|---
Zone 9 | 3.924 | 4.063 | 3.791 | 6.360 | 7.106 | 6.727 | 6.206 | 7.064 | −3.51%
Zone | TL-TCN | TL-BiLSTM | Self-TL-TCN | TCN | LSTM | BiLSTM | GRU | CNN-LSTM | Improvement
---|---|---|---|---|---|---|---|---|---
Zone 2 | 14.266 | 21.536 | 24.407 | 38.162 | 40.790 | 43.751 | 42.851 | 47.471 | 41.55%
Zone 4 | 2.179 | 1.433 | 2.334 | 2.996 | 3.400 | 4.366 | 3.440 | 3.971 | 6.64%
Zone 5 | 6.882 | 6.988 | 7.848 | 10.219 | 9.640 | 11.834 | 14.846 | 11.427 | 12.31%
Zone 6 | 2.853 | 2.955 | 4.080 | 5.642 | 6.280 | 6.173 | 6.135 | 6.011 | 30.07%
Zone 7 | 2.234 | 2.731 | 2.400 | 5.354 | 7.438 | 8.924 | 8.573 | 9.201 | 18.20%
Zone 8 | 3.904 | 4.979 | 5.985 | 9.150 | 10.171 | 9.521 | 9.306 | 10.091 | 34.77%
Zone 10 | 2.899 | 3.935 | 4.724 | 5.456 | 6.194 | 6.261 | 6.541 | 7.020 | 38.63%

Zone | TL-TCN | TL-BiLSTM | Self-TL-GRU | TCN | LSTM | BiLSTM | GRU | CNN-LSTM | Improvement
---|---|---|---|---|---|---|---|---|---
Zone 9 | 3.477 | 3.092 | 3.301 | 5.298 | 6.009 | 5.729 | 5.071 | 5.889 | −5.33%
Metric | Source | Zone 2 | Zone 4 | Zone 5 | Zone 6 | Zone 7 | Zone 8 | Zone 9 | Zone 10
---|---|---|---|---|---|---|---|---|---
RMSE | Zone 1 | 16.622 | 1.567 | 6.214 | 3.204 | 2.143 | 4.072 | 2.124 | 3.037
RMSE | Zone 3 | 16.244 | 2.609 | 8.528 | 3.363 | 2.984 | 4.767 | 3.924 | 3.574
MAE | Zone 1 | 12.595 | 1.295 | 4.792 | 2.463 | 1.680 | 3.361 | 1.631 | 2.285
MAE | Zone 3 | 14.266 | 2.179 | 6.882 | 2.853 | 2.234 | 3.904 | 3.477 | 2.899
Furnace 2 | Metric | TL-TCN | Self-TL-TCN | TCN | LSTM | BiLSTM | GRU | CNN-LSTM
---|---|---|---|---|---|---|---|---
Zone 1 | RMSE | 4.173 | 6.912 | 9.188 | 9.177 | 10.197 | 9.227 | 12.208
Zone 1 | MAE | 3.209 | 5.677 | 7.718 | 7.454 | 8.147 | 7.303 | 10.287
Zone 5 | RMSE | 3.668 | 4.009 | 8.143 | 9.205 | 9.923 | 8.199 | 12.070
Zone 5 | MAE | 2.837 | 3.300 | 6.739 | 8.309 | 8.822 | 7.307 | 10.728
Zone 10 | RMSE | 4.168 | 9.697 | 9.725 | 10.070 | 10.188 | 9.879 | 11.350
Zone 10 | MAE | 3.407 | 8.681 | 7.889 | 8.946 | 8.651 | 8.976 | 10.200
Furnace 1 | Metric | TL-TCN | Self-TL-TCN | TCN | LSTM | BiLSTM | GRU | CNN-LSTM
---|---|---|---|---|---|---|---|---
Zone 1 | RMSE | 5.499 | 16.417 | 17.243 | 21.082 | 17.368 | 14.591 | 20.085
Zone 1 | MAE | 4.214 | 11.442 | 12.874 | 18.903 | 14.805 | 13.368 | 17.703
Zone 5 | RMSE | 5.097 | 6.196 | 8.344 | 9.191 | 13.456 | 9.159 | 9.353
Zone 5 | MAE | 4.196 | 5.327 | 6.945 | 7.251 | 10.175 | 7.916 | 6.784
Zone 10 | RMSE | 2.084 | 2.264 | 3.306 | 3.565 | 3.458 | 4.093 | 4.644
Zone 10 | MAE | 1.793 | 1.903 | 2.479 | 2.899 | 2.855 | 3.432 | 3.586
Furnace 2 | Metric | TL-TCN | Self-TL-TCN | TCN | LSTM | BiLSTM | GRU | CNN-LSTM
---|---|---|---|---|---|---|---|---
Zone 1 | RMSE | 7.138 | 9.898 | 11.162 | 10.835 | 11.418 | 9.593 | 11.608
Zone 1 | MAE | 5.242 | 8.268 | 8.979 | 9.485 | 8.947 | 8.404 | 9.496
Zone 5 | RMSE | 11.632 | 14.522 | 14.694 | 14.714 | 17.596 | 12.288 | 15.439
Zone 5 | MAE | 8.721 | 10.711 | 11.228 | 10.284 | 12.971 | 10.298 | 11.281
Zone 10 | RMSE | 4.716 | 5.446 | 5.598 | 7.858 | 5.681 | 6.239 | 8.225
Zone 10 | MAE | 3.748 | 4.245 | 4.520 | 6.582 | 4.626 | 5.242 | 6.321
© 2020 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).
Zhai, N.; Zhou, X. Temperature Prediction of Heating Furnace Based on Deep Transfer Learning. Sensors 2020, 20, 4676. https://doi.org/10.3390/s20174676