Automatic cooling method of die casting system based on deep learning
Technical Field
The invention relates to the field of die casting technology, in particular to an automatic cooling method of a die casting system based on deep learning.
Background
The die casting process is a precision casting method in which molten metal is forced under high pressure into a metal mold of complex shape, with pressure, speed and time coordinated through the three major elements of machine, mold and alloy.
Die casting offers high production precision, high material utilization and high production efficiency, and can produce metal parts with complex shapes, clear outlines and thin-walled deep cavities, so it is increasingly widely applied.
During die casting the temperature of the die rises; the melt then solidifies to form a product and the die temperature falls back. Because the die cavity is complex in shape, the temperature difference between different positions of the die is large, so the die readily undergoes microscopic deformation and initiates microcracks, leading to die failure and product rejection. Although an existing mold is generally provided with a cooling system consisting of a plurality of cooling paths so that different positions of the mold can be cooled uniformly, for lack of effective control, low product qualification rates and short die service life still occur.
In a conventional cooling system, an empirical method is adopted to approach an ideal cooling effect: the water flow and the water-through time are adjusted gradually according to the cooling effect observed in each actual production run. The empirical method often cannot quantitatively determine a scientific water flow and water-through time, and it can damage the die during trial runs, thereby affecting the processing of products.
Disclosure of Invention
The invention aims to provide an automatic cooling method of a die casting system based on deep learning, which can solve one or more of the above technical problems.
In order to achieve the above purpose, the technical scheme provided by the invention is as follows:
An automatic cooling method of a die casting system based on deep learning comprises the following steps:
(1) The water valves in the cooling system of the die casting machine are marked as water valve No. 1 to water valve No. N respectively, the water flow of each water valve is set as a1...aN, and the water-through time of each water valve is set as b1...bN;
(2) Establishing a CNN convolutional neural network training database;
(21) T die casting runs are executed with randomly varied settings; for the t-th run the water flow is set as a1(t)...aN(t) and the water-through time as b1(t)...bN(t);
(22) Recording an input image X1...XT and an output image Y1...YT of each die casting run with a thermal imager;
(23) The water flow a1(t)...aN(t) and water-through time b1(t)...bN(t) set for each run, together with the input images X1...XT and output images Y1...YT, are collected to form a die thermal-image database;
(24) Training data are screened from the die thermal-image database of step (23) according to the cooling effect shown in the output images, forming a training database comprising mutually corresponding water flows, water-through times, input images X1...XM and output images Y1...YM;
(3) Training the CNN convolutional neural network;
(31) Performing feature extraction on the input images X1...XM and output images Y1...YM in the training database of step (24) with the CNN convolutional neural network to generate the training data and corresponding labels required for deep learning;
(32) Setting a first Loss function Loss_1 relating the input image, the output image, the water flow and the water-through time, and training the CNN convolutional neural network on the data of step (24) until the first Loss function Loss_1 reaches its minimum, obtaining the weights and biases of the CNN;
(4) Performing secondary training on the CNN1 convolutional neural network obtained in step (3);
(41) Building a secondary training database:
(411) The die casting system is run for L further die casting productions to obtain new input images X'1...X'L;
(412) An average output image Ȳ is obtained from the output images Y1...YM of step (2), and the average output image Ȳ is used as the output image for secondary training;
(42) Taking the water flow and the water-through time as unknowns, and the input images X'1...X'L of step (41) and the average output image Ȳ as known quantities, setting a second Loss function Loss_2 relating the input image, the output image, the water flow and the water-through time;
(43) Performing second training on the CNN1 convolutional neural network trained in step (3) until the second Loss function Loss_2 reaches its minimum, obtaining a CNN2 convolutional neural network together with an ideal value of the water flow a1...aN and an ideal value of the water-through time b1...bN;
(5) The water flow a1...aN and the water-through time b1...bN obtained in step (43) are applied to formal production.
Further, the training data and the corresponding labels required for deep learning in step (31) are generated by extracting the die surface temperature distribution information contained in the input images X1...XM and output images Y1...YM through the convolutional neural network and converting it into one-dimensional array vectors.
Further, a VGGNet network is adopted as the backbone network for feature extraction, and the input/output images are preprocessed to a size of 224×224 pixels. The Layer1 convolution layer consists of 64 convolution kernels of size 3×3, with BN batch normalization and a 'relu' activation function; the Layer2 convolution layer consists of 64 convolution kernels of size 3×3, with BN batch normalization, a 'relu' activation function, and max pooling with stride 2; the Layer3 convolution layer consists of 128 convolution kernels of size 3×3, with BN batch normalization and a 'relu' activation function; the Layer4 convolution layer consists of 128 convolution kernels of size 3×3, with BN batch normalization, a 'relu' activation function, and max pooling with stride 2; the Layer5 convolution layer consists of 256 convolution kernels of size 3×3, with BN batch normalization and a 'relu' activation function; the Layer6 convolution layer consists of 256 convolution kernels of size 3×3, with BN batch normalization and a 'relu' activation function; the Layer7 convolution layer consists of 256 convolution kernels of size 1×1, with BN batch normalization, a 'relu' activation function, and max pooling with stride 2; the Layer8 convolution layer consists of 512 convolution kernels of size 3×3, with BN batch normalization and a 'relu' activation function; the Layer9 convolution layer consists of 512 convolution kernels of size 3×3, with BN batch normalization and a 'relu' activation function; the Layer10 convolution layer consists of 512 convolution kernels of size 1×1, with BN batch normalization, a 'relu' activation function, and max pooling with stride 2; the Layer11 convolution layer consists of 512 convolution kernels of size 3×3, with BN batch normalization and a 'relu' activation function; the Layer12 convolution layer consists of 512 convolution kernels of size 3×3, with BN batch normalization and a 'relu' activation function; the Layer13 convolution layer consists of 512 convolution kernels of size 1×1, with BN batch normalization, a 'relu' activation function, and max pooling with stride 2; the Layer14 fully connected layer consists of 512 neurons with a 'relu' activation function; and the Layer15 fully connected layer consists of 512 neurons with a 'relu' activation function.
Further, the training algorithm is a stochastic gradient descent algorithm, and the Adam algorithm is selected as the optimizer.
Further, the first Loss function in step (32) is Loss_1 = ||A·CNN(X) − B·CNN(Y)||^2, wherein A = [a1...aN] is the water flow, B = [b1...bN] is the water-through time, X = [X1...XM], Y = [Y1...YM], and CNN is the convolutional neural network.
Further, the second Loss function in step (42) is Loss_2 = ||A·CNN1(X') − B·CNN1(Ȳ)||^2, wherein A is the water flow, B is the water-through time, X' = [X'1...X'L] are the input images obtained in step (411), Ȳ is the average output image obtained in step (412), and CNN1 is the convolutional neural network obtained in step (3).
The technical effects of the invention are as follows:
Compared with the empirical method, the invention is more flexible in the adjustment process of actual production, achieves scientific control of the cooling system, prolongs the service life of the die, and reduces machining errors.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this specification, illustrate embodiments of the application and together with the description serve to explain the application.
In the drawings:
FIG. 1 is a schematic diagram of the general construction of the present invention;
Fig. 2 is a flow chart of the operation of the present invention.
Detailed Description
The present invention will be described in detail below with reference to the drawings and the specific embodiments thereof, wherein the exemplary embodiments and the description are for the purpose of illustrating the invention only and are not to be construed as unduly limiting the invention.
It should be noted that, without conflict, the embodiments of the present application and features of the embodiments may be combined with each other. The application will be described in detail below with reference to the drawings in connection with embodiments.
It is noted that the terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of exemplary embodiments according to the present application. As used herein, the singular is also intended to include the plural unless the context clearly indicates otherwise, and furthermore, it is to be understood that the terms "comprises" and/or "comprising" when used in this specification are taken to specify the presence of stated features, steps, operations, devices, components, and/or combinations thereof.
It should be noted that the terms "first," "second," and the like in the description and the claims of the present application and the above figures are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that embodiments of the application described herein may be capable of being practiced otherwise than as specifically illustrated and described. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
Spatially relative terms, such as "above," "upper," "upper surface," and the like, may be used herein for ease of description to describe one device's or feature's spatial relationship to another device or feature as illustrated in the figures. It will be understood that the spatially relative terms are intended to encompass different orientations in use or operation in addition to the orientation depicted in the figures. For example, if the device in the figures is turned over, elements described as "above" or "over" other devices or structures would then be oriented "below" or "beneath" the other devices or structures; thus, the exemplary term "above" can encompass both an orientation of "above" and one of "below." The device may also be positioned in other ways (rotated 90 degrees or at other orientations), and the spatially relative descriptors used herein are interpreted accordingly.
An automatic cooling method of a die casting system based on deep learning comprises the following steps:
(1) The water valves in the cooling system of the die casting machine are marked as water valve No. 1 to water valve No. N respectively, the water flow of each water valve is set as a1...aN, and the water-through time of each water valve is set as b1...bN;
(2) Establishing a CNN convolutional neural network training database;
(21) T die casting runs are executed with randomly varied settings; for the t-th run the water flow is set as a1(t)...aN(t) and the water-through time as b1(t)...bN(t);
(22) Recording an input image X1...XT and an output image Y1...YT of each die casting run with a thermal imager;
(23) The water flow a1(t)...aN(t) and water-through time b1(t)...bN(t) set for each run, together with the input images X1...XT and output images Y1...YT, are collected to form a die thermal-image database;
(24) Training data are screened from the die thermal-image database of step (23) according to the cooling effect shown in the output images, forming a training database comprising mutually corresponding water flows, water-through times, input images X1...XM and output images Y1...YM.
(3) Training the CNN convolutional neural network;
(31) Performing feature extraction on the input images X1...XM and output images Y1...YM in the training database of step (24) with the CNN convolutional neural network to generate the training data and corresponding labels required for deep learning;
The die surface temperature distribution information contained in the input images X1...XM and output images Y1...YM is extracted by the convolutional neural network and converted into one-dimensional array vectors.
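The conversion of a two-dimensional die-surface temperature map into a one-dimensional array vector can be sketched in plain Python. Row-major flattening is an assumption here; in practice the vector would come from the CNN feature extractor rather than a raw flatten.

```python
def thermal_map_to_vector(temp_map):
    """Flatten a 2-D die-surface temperature map (a list of rows, e.g. in
    degrees C from the thermal imager) into a 1-D array vector.
    Row-major order is an assumption; the text does not specify it."""
    return [t for row in temp_map for t in row]

# Hypothetical 2x2 thermograph:
vec = thermal_map_to_vector([[200.0, 210.0], [190.0, 205.0]])
```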
The method adopts a VGGNet network as the backbone network for feature extraction, and the input/output images are preprocessed to a size of 224×224 pixels. The Layer1 convolution layer consists of 64 convolution kernels of size 3×3, with BN batch normalization and a 'relu' activation function; the Layer2 convolution layer consists of 64 convolution kernels of size 3×3, with BN batch normalization, a 'relu' activation function, and max pooling with stride 2; the Layer3 convolution layer consists of 128 convolution kernels of size 3×3, with BN batch normalization and a 'relu' activation function; the Layer4 convolution layer consists of 128 convolution kernels of size 3×3, with BN batch normalization, a 'relu' activation function, and max pooling with stride 2; the Layer5 convolution layer consists of 256 convolution kernels of size 3×3, with BN batch normalization and a 'relu' activation function; the Layer6 convolution layer consists of 256 convolution kernels of size 3×3, with BN batch normalization and a 'relu' activation function; the Layer7 convolution layer consists of 256 convolution kernels of size 1×1, with BN batch normalization, a 'relu' activation function, and max pooling with stride 2; the Layer8 convolution layer consists of 512 convolution kernels of size 3×3, with BN batch normalization and a 'relu' activation function; the Layer9 convolution layer consists of 512 convolution kernels of size 3×3, with BN batch normalization and a 'relu' activation function; the Layer10 convolution layer consists of 512 convolution kernels of size 1×1, with BN batch normalization, a 'relu' activation function, and max pooling with stride 2; the Layer11 convolution layer consists of 512 convolution kernels of size 3×3, with BN batch normalization and a 'relu' activation function; the Layer12 convolution layer consists of 512 convolution kernels of size 3×3, with BN batch normalization and a 'relu' activation function; the Layer13 convolution layer consists of 512 convolution kernels of size 1×1, with BN batch normalization, a 'relu' activation function, and max pooling with stride 2; the Layer14 fully connected layer consists of 512 neurons with a 'relu' activation function; and the Layer15 fully connected layer consists of 512 neurons with a 'relu' activation function.
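The Layer1-Layer15 stack can be summarized as a small spec table, and the spatial size of the feature map traced through it. This is only a sketch: 'same' padding on every convolution and a 3-channel input are assumptions not stated in the text.

```python
# (kind, units, kernel_size, max_pool_after): one entry per layer above.
LAYERS = [
    ("conv", 64, 3, False), ("conv", 64, 3, True),        # Layer1-2
    ("conv", 128, 3, False), ("conv", 128, 3, True),      # Layer3-4
    ("conv", 256, 3, False), ("conv", 256, 3, False),
    ("conv", 256, 1, True),                               # Layer5-7
    ("conv", 512, 3, False), ("conv", 512, 3, False),
    ("conv", 512, 1, True),                               # Layer8-10
    ("conv", 512, 3, False), ("conv", 512, 3, False),
    ("conv", 512, 1, True),                               # Layer11-13
    ("fc", 512, None, False), ("fc", 512, None, False),   # Layer14-15
]

def feature_map_shape(h=224, w=224):
    """Trace the spatial size through the convolutional stack:
    'same'-padded convolutions keep h x w, and each stride-2 max pool
    halves it, so 224x224 input reaches the fully connected layers
    as a 7x7x512 feature map."""
    channels = 3  # hypothetical input channels
    for kind, units, kernel, pool_after in LAYERS:
        if kind != "conv":
            break
        channels = units
        if pool_after:
            h, w = h // 2, w // 2
    return h, w, channels
```

Tracing the five stride-2 pools gives a 32-fold spatial reduction, which is the usual VGG-style layout.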
The training algorithm is a stochastic gradient descent algorithm, and the Adam algorithm is selected as the optimizer.
(32) Setting a first Loss function Loss_1 relating the input image, the output image, the water flow and the water-through time: Loss_1 = ||A·CNN(X) − B·CNN(Y)||^2, where A = [a1...aN] is the water flow, B = [b1...bN] is the water-through time, X = [X1...XM], Y = [Y1...YM], and CNN is the convolutional neural network.
(33) Training the CNN convolutional neural network on the training data of step (24) until the first Loss function Loss_1 reaches its minimum, obtaining the weights and biases of the CNN; the trained network is denoted CNN1;
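With the CNN feature vectors in hand, the first loss of step (32) reduces to a sum of squared differences. The sketch below treats CNN(X) and CNN(Y) as precomputed per-valve feature values paired element-wise with A and B; this pairing is one plausible reading, since the text leaves the broadcasting of A and B unspecified.

```python
def loss_1(A, B, feat_x, feat_y):
    """Loss_1 = ||A*CNN(X) - B*CNN(Y)||^2, read as an element-wise product
    of the water flows A (resp. water-through times B) with the
    corresponding feature entries, summed as a squared L2 norm."""
    return sum((a * fx - b * fy) ** 2
               for a, fx, b, fy in zip(A, feat_x, B, feat_y))

# Hypothetical two-valve example with illustrative feature values:
value = loss_1([1.0, 2.0], [1.0, 1.0], [3.0, 4.0], [3.0, 4.0])
```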
(4) Performing secondary training on the CNN1 convolutional neural network obtained in step (3);
(41) Building a secondary training database:
(411) The die casting system is run for L further die casting productions to obtain new input images X'1...X'L;
(412) An average output image Ȳ is obtained from the output images Y1...YM of step (2), and the average output image Ȳ is used as the output image for secondary training;
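The averaging in step (412) is a pixel-wise mean over the screened output thermographs; a minimal sketch, with images represented as nested lists of temperatures:

```python
def average_output_image(images):
    """Pixel-wise mean of the output images Y1...YM (each a list of rows),
    giving the single target image for secondary training."""
    m = len(images)
    rows, cols = len(images[0]), len(images[0][0])
    return [[sum(img[r][c] for img in images) / m for c in range(cols)]
            for r in range(rows)]

# Two hypothetical 1x2 thermographs averaged pixel by pixel:
y_bar = average_output_image([[[1.0, 3.0]], [[3.0, 5.0]]])
```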
(42) Taking the water flow and the water-through time as unknowns, and the input images X'1...X'L of step (41) and the average output image Ȳ as known quantities, a second Loss function Loss_2 relating the input image, the output image, the water flow and the water-through time is set as follows:
Loss_2 = ||A·CNN1(X') − B·CNN1(Ȳ)||^2, wherein Ȳ is the average output image obtained in step (412), CNN1 is the convolutional neural network obtained in step (3), A is the water flow, and B is the water-through time.
(43) Performing second training on the CNN1 convolutional neural network trained in step (3) until the second Loss function Loss_2 reaches its minimum, obtaining a CNN2 convolutional neural network together with an ideal value of the water flow a1...aN and an ideal value of the water-through time b1...bN;
(5) The water flow a1...aN and the water-through time b1...bN obtained in step (43) are applied to formal production.
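Steps (42)-(43) amount to holding the trained network fixed and searching for the water flow A and water-through time B that minimize Loss_2. A minimal sketch using plain gradient descent (in place of the Adam optimizer named above), with scalar per-valve features as an assumption:

```python
def fit_valve_settings(feat_xp, feat_ybar, steps=2000, lr=0.01):
    """Minimise Loss_2 = sum_i (A_i*f'_i - B_i*g_i)^2 over A and B, with
    the CNN1 features f' (new input images) and g (average output image)
    held fixed. Returns candidate ideal values for water flow A and
    water-through time B. The fixed learning rate is only suitable for
    small feature magnitudes; this is a sketch, not a tuned optimiser."""
    n = len(feat_xp)
    A, B = [1.0] * n, [1.0] * n
    for _ in range(steps):
        for i in range(n):
            d = A[i] * feat_xp[i] - B[i] * feat_ybar[i]
            A[i] -= lr * 2.0 * d * feat_xp[i]    # dLoss_2/dA_i
            B[i] += lr * 2.0 * d * feat_ybar[i]  # dLoss_2/dB_i
    return A, B

# Hypothetical single-valve features: the fit drives A*f' toward B*g.
A, B = fit_valve_settings([2.0], [1.0])
```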
According to the invention, by adjusting the water flow and the water-through time of the cooling system, an ideal cooling effect is achieved that can be analyzed quantitatively and scientifically; uniform cooling of different positions of the die is ensured, localized damage to the die is reduced, and the processing quality of the product is improved.
The above description is only of the preferred embodiments of the present invention and is not intended to limit the present invention; various modifications and variations may be made by those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principles of the present invention shall be included in the protection scope of the present invention.