1. Introduction
Recently, with the development of the Internet of Things (IoT), the requirements for high spectral efficiency (SE) and energy efficiency (EE) have become increasingly stringent [1]. In communication systems, popular technologies are designed either for sending information over the communication channels of mobile devices (including cellular networks, wireless LANs, and Bluetooth) or for power transfer (such as Qi and WiTricity) [2]. Moreover, radio frequency (RF) signals have been regarded as a potential resource for energy harvesting in wireless systems. Since RF signals carry information and energy simultaneously, simultaneous wireless information and power transfer (SWIPT) was first presented in [3], and is regarded as a promising technology in which both the information and the energy of a common transmit signal are delivered to the receivers [4,5]. SWIPT has therefore attracted considerable attention in the research community. SWIPT schemes for single-input single-output (SISO) systems have been investigated in [3,6,7], where time-switching (TS) and power-splitting (PS) schemes were considered. The study of SWIPT in interference channels is of more practical interest than in interference-free channels; therefore, many studies on SWIPT have addressed the interference channel (IFC) [8,9,10,11,12,13,14,15,16].
Two practical receiver architectures in SWIPT systems are the time-switching (TS) receiver and the power-splitting (PS) receiver [5,16]. The TS scheme switches the receiver between information decoding (ID) and energy harvesting (EH) over time, while the PS scheme splits the received signal into two parts, one used for ID and the other for EH. SWIPT systems with a PS scheme have been investigated for the minimum-transmit-power problem [10,11]. In this paper, we investigate a SWIPT system with multiple single-antenna transmitters and multiple single-antenna receivers. The studied scenario is based on the PS structure. By optimizing the transmit powers and PS ratios, we study the power minimization problem under SINR and energy harvesting requirements. The system with the PS scheme is similar to the SWIPT system studied in [10], but is considered here in a single-input single-output (SISO) scenario with an IFC. Moreover, our purpose is to open up a new research direction that combines deep learning and optimization, by exploiting deep learning to solve optimization problems.
In wireless communication systems (WCSs), wireless resources such as transmit power must be properly managed and allocated in order to obtain high network performance. For example, if user equipments (UEs) manage their transmit power inefficiently, they can cause a large amount of mutual interference, which in turn degrades network performance. Proper allocation of wireless resources therefore becomes more important as UE density in WCSs grows. In this work, we also study the power control problem in WCSs, but in a SWIPT system, and our focus is not on the optimization algorithms themselves.
In previous research, transmit power was obtained by solving iterative optimization problems that take a specified set of network condition parameters, such as channel gains and SINR requirements, as input. Such algorithms run a substantial number of iterations before converging and produce the optimized resource allocation strategy as output. Iterative optimization algorithms have efficiently solved the associated resource allocation problems and achieved relatively high performance. However, their iterative nature increases the computation time, which may lead to long latency and high computation costs in real-time operation: whenever some network parameters change significantly, the entire iterative procedure has to be re-executed. Therefore, real-time transmit power control still faces serious difficulties in practical use. The problem becomes more severe because more iterations are needed as the number of users increases [17]. For instance, the weighted minimum mean square error (WMMSE) algorithm requires complicated operations in each iteration, such as matrix inversion [18,19], singular value decomposition [20,21], and semidefinite relaxation [22,23].
Recently, deep learning technology has been investigated in many domains, e.g., image processing, and has shown superior performance compared to conventional schemes [24,25,26,27]. In particular, deep learning has begun to be applied to wireless communication systems, e.g., for communication signal classification [25], channel estimation and signal detection [27], indoor localization [28], sparse optimization [29], and the optimization of constellation mapping [30]. Moreover, through deep neural networks (DNNs) trained with back-propagation algorithms [17], deep learning can efficiently solve sophisticated nonlinear problems with low computation time [27] and without the need to derive complex mathematical models. The authors in [17] approximated the transmit power of the WMMSE algorithm studied in [18] using a dense neural network, which addresses the main shortcoming of the WMMSE-based algorithm, namely the long computation time caused by a huge number of iterations. Moreover, the validity of using a DNN in practical wireless communication systems was confirmed in [31] using a testbed.
Unlike conventional resource management schemes, in which sophisticated optimization problems have to be dealt with in an iterative manner, we propose a deep learning-based approach to wireless resource allocation (in particular, transmit power management) that operates with low computation time. The basic idea is to take a set of wireless resource allocations produced by an optimization algorithm and to learn its input/output mapping using deep learning technologies based on DNNs or recurrent neural networks (RNNs) [32]. If a network efficiently approximates the wireless resource allocation optimization algorithm, then the input of the optimization algorithm can be passed into the trained network to obtain the output with higher computational efficiency than the optimization algorithm itself. This is because the testing stage does not involve iterative optimization; it only requires a few layers and simple operations, such as matrix-vector multiplications/additions and simple nonlinear transformations. Therefore, if the training stage approximates the optimization algorithm efficiently, the computation time for real-time resource allocation can be reduced significantly. Overall, our approach can be regarded as using deep learning, based on DNN and RNN models, to approximate an iterative optimization algorithm for given network parameters.
The main contributions of this paper are summarized as follows.
We investigate a SWIPT system with multiple single-antenna transmitters and receivers using a PS structure, in which information and energy are transmitted simultaneously from the transmitters to the receivers. The transmit power optimization problem, subject to required SINR and harvested energy constraints, is solved by jointly optimizing the transmit powers and the PS ratios at the receivers.
We propose a deep learning-based approach to the transmit power optimization problem in SISO interference channels with SWIPT. The proposed approach opens up a new research direction combining the two fields of deep learning and wireless resource management (specifically, combining deep learning architectures with the transmit power optimization problem over interference channels). In the proposed approach, we use deep learning architectures comprising one type of DNN, the Feed-Forward Neural Network (FFNN), and three types of Recurrent Neural Network (RNN): the Layer Recurrent Network (LRN), the Nonlinear AutoRegressive network with eXogenous inputs (NARX), and Long Short-Term Memory (LSTM). To the best of our knowledge, this is the first attempt to apply RNN-based deep learning to transmit power control in SWIPT systems.
Through the proposed approach, the transmit powers of the transmitters and the PS ratios of the receivers can be obtained with lower complexity and less computation time than with conventional iterative approaches. Simulation results show that the deep learning-based approach is a highly promising tool for approximating iterative optimization algorithms.
The rest of this paper is organized as follows. Section 2 presents the system model, the problem formulation, and the solution for the SWIPT system. The proposed deep learning-based approaches are described in Section 3. Numerical results and discussions are provided in Section 4. Finally, Section 5 concludes the paper.
3. The Deep Learning-Based Approaches
Although optimization algorithms can achieve relatively high performance in solving the associated resource management problems, they entail a huge number of iterations, which increases the computation time; their real-time implementation thus remains a challenging issue. In this section, we propose a deep learning-based approach to wireless resource allocation, in which the allocated resources are approximated by deep learning architectures such as DNNs and RNNs. The proposed deep learning-based approach is shown in Figure 2. The basic idea is to take a set of wireless resource allocations produced by an optimization algorithm and to learn its input/output mapping in the training stage. If a network efficiently approximates the wireless resource allocation optimization algorithm, then the input of the optimization algorithm can be passed into the trained network to obtain the output with higher computational efficiency.
3.1. Network Structure
This section describes the deep learning-based approaches used to predict the minimum transmit power and PS ratio: one kind of DNN, the FFNN, and three kinds of RNN, namely the LRN, NARX, and LSTM.
An FFNN is an artificial neural network in which the connections between units do not form directed cycles or loops, in contrast to an RNN. Information passes through the network in one direction, from the input layer through the hidden layers to the output layer [33].
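As a compact summary of this one-directional computation (standard FFNN notation, assumed here rather than taken from the paper), each layer l applies an affine map followed by an activation function f:

```latex
\mathbf{a}^{(0)} = \mathbf{x}, \qquad
\mathbf{a}^{(l)} = f\!\left( \mathbf{W}^{(l)} \mathbf{a}^{(l-1)} + \mathbf{b}^{(l)} \right),
\quad l = 1, \dots, L, \qquad
\hat{\mathbf{y}} = \mathbf{a}^{(L)}
```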
The RNN is a deep learning architecture that has attracted attention in recent years. An RNN implements a memory mechanism through its recurrent structure: neurons can use past information to affect the output at the current moment, which makes RNNs suitable for predicting time-series data. In this paper, three kinds of RNN are used: the LRN, NARX, and LSTM.
An earlier, simplified version of the LRN was introduced by Elman [34]. The LRN is a type of RNN that has a single delay and a feedback loop at each hidden layer, but not at the output layer. The basic LRN architecture consists of three layers, namely an input layer, a hidden layer, and an output layer, as presented in [35].
NARX is a dynamic RNN whose feedback loop connects the last layer back to the input layer. The NARX model is based on the linear autoregressive exogenous model used in time-series modeling. The formulation and the basic architecture of NARX are given in [36,37].
Unfortunately, the training process of traditional RNNs suffers from the exploding gradient problem, which can cause learning to diverge, and the vanishing gradient problem, where learning either becomes very slow or stops working altogether [38,39], preventing complete learning of the time series. One solution is to utilize LSTM networks. Therefore, we also investigate LSTM RNNs, which introduce a new structure called a memory cell, as presented in [40,41].
3.2. Optimization Stage
In the optimization stage, we treat the required SINR and the required harvested energy as fixed constants, and generate a large number of channel realizations h^(n), n = 1, ..., N, following certain distributions (specified in Section 4). Each tuple consisting of a channel realization, the required SINR, and the required harvested energy is then fed to the optimization algorithm, i.e., problem Equation (5), which yields the corresponding optimized power and PS ratio vectors (p^(n), ρ^(n)). The objective and the constraint functions of problem Equation (5) are convex in the optimization variables (the transmit powers and PS ratios), so Equations (5a)-(5c) describe a convex problem. Therefore, the minimum power problem is a convex optimization problem, and Matlab's CVX [42] can solve it efficiently.
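Since problem Equation (5) is not reproduced in this excerpt, the following CVX sketch assumes the standard PS-SWIPT minimum-power form, with the SINR and harvested-energy constraints rewritten in DCP-compliant form via inv_pos (inv_pos(x) = 1/x for x > 0, which is convex). All names (K, h, gamma_req, E_req, eta, sigma_a2, sigma_c2) are illustrative, not the paper's own symbols:

```matlab
% Hedged sketch of the per-sample optimization stage. h is an assumed K-by-K
% channel-gain matrix for one realization, with h(j,k) the gain from
% transmitter j to receiver k; noise powers and E_req are in linear scale.
cvx_begin quiet
    variables p(K) rho_ps(K)               % transmit powers and PS ratios
    minimize( sum(p) )
    subject to
        for k = 1:K
            intf = sum(h(:,k).*p) - h(k,k)*p(k);   % cross-link interference
            % SINR constraint at the ID branch, in DCP form
            h(k,k)*p(k) >= gamma_req*(intf + sigma_a2) ...
                + gamma_req*sigma_c2*inv_pos(rho_ps(k));
            % harvested-energy constraint at the EH branch, in DCP form
            sum(h(:,k).*p) + sigma_a2 >= (E_req/eta)*inv_pos(1 - rho_ps(k));
        end
        p >= 0;
        rho_ps <= 1;
cvx_end
% p and rho_ps now hold the optimized vectors for this channel sample.
```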
3.3. Training Stage
After solving the minimum transmit power problem in the optimization stage, we obtain the optimal power and PS ratio vectors (p^(n), ρ^(n)) corresponding to the channel vectors h^(n). Then, we treat (h^(n); p^(n), ρ^(n)) as an input/output pair, and try to learn the input/output relation through the training process of the deep learning-based approaches. Assuming that N samples are used as training data, we have the input/output pairs {(h^(n); p^(n), ρ^(n))}, where n = 1, ..., N.
With the simple structure of an FFNN, we can use the input/output pairs above, i.e., {(h^(n); p^(n), ρ^(n))}, for the training stage of the DNN. The network is trained by back propagation, and the training stage minimizes the mean squared error (MSE) using the scaled conjugate gradient algorithm, which is both memory- and computationally efficient [43].
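A minimal sketch of this training stage in Matlab's toolbox is given below, assuming the pairs from the optimization stage are stacked column-wise into matrices X (vectorized channel gains) and Y (optimal powers and PS ratios); these names are assumptions for illustration:

```matlab
% FFNN with two hidden layers, trained by back propagation with the
% scaled conjugate gradient algorithm (trainscg), minimizing the MSE.
net = feedforwardnet([20 20], 'trainscg');
net.performFcn = 'mse';             % performance measure: mean squared error
net = train(net, X, Y);             % back-propagation training
trainMSE = perform(net, Y, net(X)); % MSE on the training data
```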
In the LRN and NARX, when the data are concurrent (matrix format), we need to convert them into sequential data (cell-array format) before setting the network parameters, because the input appears in time order. The LRN and NARX have delays in their feedback loops, and these delays affect the ordering of the input and output. Therefore, we need to add data corresponding to the number of delays used, i.e., N + T samples, where T is the number of delays in each feedback loop. We then define the input and output pairs for the training stages of the LRN and NARX as {(h^(n); p^(n), ρ^(n))}, where n = 1, ..., N + T.
The LRN and NARX are trained using the scaled conjugate gradient algorithm, and the MSE is calculated after the training stage. For the NARX neural network, the network is first trained in open-loop form, like the FFNN, with the scaled conjugate gradient algorithm. The trained open-loop network is then converted to a closed loop, in which the target values are replaced by feedback signals from the output. Finally, the network is retrained in closed-loop form.
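The following is a hedged sketch of these two training stages with Matlab's toolbox functions, again with X and Y as the assumed concurrent data matrices; con2seq performs the matrix-to-sequence conversion described above, and preparets aligns the data with the network delays:

```matlab
T = 1;                                  % one delay per feedback loop (Section 4.1)
Xs = con2seq(X);  Ys = con2seq(Y);      % concurrent (matrix) -> sequential (cell)

% Layer Recurrent Network: delay of T in each hidden-layer feedback loop
lrn = layrecnet(1:T, [20 20], 'trainscg');
[Xp, Xi, Ai, Tp] = preparets(lrn, Xs, Ys);
lrn = train(lrn, Xp, Tp, Xi, Ai);

% NARX: zero input delays, T feedback delays; trained open loop first
narx = narxnet(0, 1:T, [20 20], 'open', 'trainscg');
[Xp, Xi, Ai, Tp] = preparets(narx, Xs, {}, Ys);
narx = train(narx, Xp, Tp, Xi, Ai);
narx = closeloop(narx);                 % replace targets by output feedback
[Xp, Xi, Ai, Tp] = preparets(narx, Xs, {}, Ys);
narx = train(narx, Xp, Tp, Xi, Ai);     % retrain in closed-loop form
```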
Although the LSTM works with sequence and time-series input data for prediction, like the LRN and NARX, this network does not have delays on the input or in each feedback loop. Therefore, we use the same pairs {(h^(n); p^(n), ρ^(n))}, n = 1, ..., N, as input/output data for training. The LSTM network is trained using the Adam optimization algorithm to update the network weights, instead of the classical stochastic gradient descent method.
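A minimal sketch of the LSTM training stage follows, using the options listed in Section 4.1 (Adam, gradient threshold 0.01, 1000 epochs, mini-batch size 20, CPU execution, learning rate 0.001); XCell and YCell are assumed names for the training sequences in cell-array format:

```matlab
numFeatures  = size(XCell{1}, 1);       % input dimension per time step
numResponses = size(YCell{1}, 1);       % output dimension per time step
layers = [ ...
    sequenceInputLayer(numFeatures)
    lstmLayer(20)                       % two stacked LSTM layers of 20 units
    lstmLayer(20)
    fullyConnectedLayer(numResponses)
    regressionLayer];
options = trainingOptions('adam', ...
    'GradientThreshold', 0.01, ...
    'MaxEpochs', 1000, ...
    'MiniBatchSize', 20, ...
    'InitialLearnRate', 0.001, ...
    'ExecutionEnvironment', 'cpu');
lstmNet = trainNetwork(XCell, YCell, layers, options);
```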
Since the DNN and RNN architectures use backpropagation, we need activation functions whose derivatives can be computed from the function values themselves. In this paper, the activation functions used for the hidden layers and the output layer are given by Equations (6) and (7), respectively.
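Equations (6) and (7) are not reproduced in this excerpt. A common choice consistent with the description (differentiable activations whose derivatives are expressible through the function value itself) is the hyperbolic tangent for the hidden layers and the logistic sigmoid for the output layer; the following is therefore an assumed example, not the paper's exact definition:

```latex
f_{\mathrm{hid}}(x) = \tanh(x), \qquad
f_{\mathrm{hid}}'(x) = 1 - \tanh^{2}(x), \\[4pt]
f_{\mathrm{out}}(x) = \frac{1}{1 + e^{-x}}, \qquad
f_{\mathrm{out}}'(x) = f_{\mathrm{out}}(x)\bigl(1 - f_{\mathrm{out}}(x)\bigr)
```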
3.4. Testing Stage
In this stage, we also generate channels h^(m), where m = 1, ..., M, with M the number of samples for testing. The channels follow the same distribution as in the training stage. Data preparation for the testing stage follows the same process as in the training stage for the FFNN, LRN, NARX, and LSTM networks. The testing data with channel vectors h^(m) are then run through the trained network.
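A minimal sketch of the testing stage for the FFNN case follows (Xtest and Ytest are assumed names for the M test channel vectors and the CVX reference outputs):

```matlab
% One forward pass through the trained network: no iterative optimization,
% only matrix-vector operations and nonlinear transformations.
Yhat = net(Xtest);
testMSE = mean(mean((Ytest - Yhat).^2));   % prediction MSE over the M samples
```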
4. Numerical Results and Discussions
In this section, we provide numerical results to demonstrate the effectiveness of the proposed approach. We first describe the simulation setup and the neural network parameter selection, and then show that the proposed deep learning-based approaches can produce responses similar to those of the optimization algorithm as the required SINR and required harvested energy values change. Neural network parameter selection is based on evaluating the system performance in terms of the MSE and the efficiency of the computation time. To calculate the network parameters, such as the weights and biases of each layer, we use stochastic gradient descent, which trains the neural network on randomly chosen subsets of the data with gradient approximations. Gradients are computed with the back-propagation (BP) algorithm, and the optimal weights and biases are obtained by minimizing the MSE between the actual and desired output values [44]. Moreover, to quantitatively compare the model fitting error, the MSE is a typical metric for evaluating the accuracy of deep learning-based models [45,46,47]. The MSE measures the prediction accuracy between the observed values, taken from the optimization algorithm, and the predicted values produced by the deep learning methods in the testing stage. The smaller the MSE value, the higher the prediction performance. The computation time of the deep learning-based approaches includes two parts: training and testing times. Meanwhile, the computation time of the optimization scheme is measured over the number of samples of the testing stage.
4.1. The Simulation Setup and Neural Network Parameter Selection
In our simulation, the proposed approaches are implemented in Matlab R2018b on a computer with 16 GB of memory and a 3.40 GHz CPU. For the parameters of the optimization problem, we set the Gaussian antenna noise power to −90 dBW and the circuit noise power to −60 dBW. We assume that the required harvested energy and the required SINR are equal for all receivers, and that the energy harvesting efficiency at the receiver takes its highest value, i.e., 1. Channel coefficients are generated from a standard normal distribution, i.e., a Gaussian distribution with zero mean and unit variance. The SWIPT system operates with two transceiver pairs. To gain further insight into the impact of the deep learning-based approach, and to take real-time computational efficiency into account, we use a simple FFNN and three types of RNN: the LRN, NARX, and LSTM. We use a relatively large training data set of N = 10,000 samples and a testing data set of M = 1000 samples. The proposed deep learning-based approaches include an input layer, two hidden layers, and an output layer. In the neural network parameter selection process, the numbers of neurons considered for the hidden layers are 20, 40, and 60. For the LRN and NARX, the number of delays on the input is set to zero, and the number of delays in each feedback loop is set to one (T = 1). The initial values of the required SINR and harvested energy are set to 6 dB and −20 dBm, respectively. The network parameter selection process can strike a good balance between the MSE and the computation time. For the LSTM network, we set several training options: the gradient threshold is 0.01, and the maximum number of epochs is 1000. To reduce the amount of padding in the mini-batches, we chose a mini-batch size of 20. Since the mini-batches are small, with short sequences, training is better suited to the CPU, so we specified the execution environment as CPU. The learning rate is set to 0.001.
Figure 3 shows the network performance in terms of the MSE in the testing stage. As the hidden layer size increases from 20 to 60 neurons, the complexity of the network increases and the MSE may decrease; however, the MSE varies only slightly. The FFNN shows poor performance in terms of MSE. This is because the FFNN often suffers from issues like overfitting or underfitting, which makes its predictions for new data inaccurate; therefore, the FFNN does not fit the data efficiently. In the case of the LRN approach, the MSE tends to be higher than that of the NARX and LSTM networks. This is due to the feedback loop structure in each hidden layer of the LRN (i.e., the more hidden layers are used, the more loops exist in the network); as a result, the MSE of the LRN may increase. Since the LSTM network can overcome the vanishing and exploding gradient problems of traditional RNNs, it provides the lowest MSE, as shown in Figure 3.
Figure 4 shows the computation time of both the training and testing stages, where 10,000 samples are used for training the FFNN, NARX, LRN, and LSTM, and 1000 samples are used for testing, as the hidden layer size varies from 20 to 60 neurons. For all deep learning-based approaches, the training and testing times are lowest when 20 neurons are used in each hidden layer. In Figure 4a, the computation time of the LSTM approach increases dramatically compared to the other deep learning-based approaches. This is because the LSTM contains more backpropagation modules, and the more memory information is carried over time, the larger the cell state becomes. Figure 4a also shows that the computation time increases with the hidden layer size. In Figure 4b, the testing time of the deep learning-based approaches is very small, whereas the computation time of the optimization-based scheme is significantly larger; we therefore report the computation time of the optimization scheme in minutes to facilitate observation. Overall, the computation time of the deep learning-based approaches is much lower than that of the optimization algorithm-based approach. This is easily explained: the testing stage does not involve iterative optimization, but only a few layers and simple operations such as matrix-vector multiplications/additions and simple nonlinear transformations. Therefore, if the training stage approximates the optimization algorithm well enough, the computation time for real-time resource allocation can be reduced significantly.
In addition, we also tested the network performance in terms of MSE and computation time by increasing the number of hidden layers during the neural network parameter selection process, with 20 neurons per hidden layer, as shown in Figure 5 and Figure 6. Figure 5 shows that the MSE changes only slightly among the different schemes when the number of hidden layers is increased from 2 to 6. Figure 6 shows the computation time of both the training and testing stages, where 10,000 samples are used for training the FFNN, NARX, LRN, and LSTM, and 1000 samples are used for testing. In the testing stage, we measured the computation time required by the optimization scheme to produce 1000 samples, in order to compare the computation times of the deep learning and optimization approaches. Figure 6a shows that the computation time of all deep learning networks increases very slightly in the training stage as the number of hidden layers increases; among the deep learning networks, the LSTM has the largest computation time. Figure 6b shows the same trend in the testing stage. However, it is noteworthy that all deep learning networks have very small computation times compared to the optimization scheme.
Based on the simulation results in Figure 3, Figure 4, Figure 5 and Figure 6, we chose 20 neurons and two hidden layers for the subsequent simulations, which gives the best balance in the tradeoff between MSE and computation time. The next simulation results provide more insight into the capability of the proposed deep learning-based approaches to capture the responses produced by the optimization algorithm as the required SINR and the required harvested energy change.
4.2. Network Performance under Changing Required SINR and Required Harvested Energy Values
In this section, we provide simulation results and discussions to verify the capability and efficiency of the proposed deep learning-based approaches in scenarios where the required SINR and the required harvested energy change. We evaluate two network performance metrics (transmit power and receiver PS ratio) under various network conditions, i.e., different required SINR and required harvested energy values. We use two hidden layers with 20 neurons each, as discussed in Section 4.1. The network condition parameters are set to change either the required SINR or the required harvested energy. When changing the required SINR, we fix the required harvested energy at −20 dBm and vary the required SINR from 2 dB to 6 dB, as shown in Figure 7 and Figure 8 for the sum of the transmit powers and the average PS ratios, respectively. In the other case, i.e., when changing the required harvested energy, we fix the required SINR at 2 dB and vary the required harvested energy from −20 dBm to −12 dBm, as shown in Figure 9 and Figure 10.
In Figure 7 and Figure 8, we show the sum of the transmit powers and the average PS ratios, respectively, when changing the required SINR in the training and testing stages. Overall, the schemes mostly show an upward trend in both transmit power and PS ratio as the required SINR increases. This is because when the required SINR increases, the transmit power must increase in order to guarantee the information service at the receivers; moreover, the PS ratio increases to make the transmission more effective, so that constraint Equation (4b) can be satisfied. When the required SINR is small, the transmitters use only a small amount of power for transmission. This makes the fluctuation of the transmit power among samples insignificant, and thus the training and testing stages perform effectively, especially for the sum of transmit powers (from 2 dB to 5 dB of required SINR) and the average PS ratios (from 2 dB to 4 dB of required SINR), as shown in Figure 7 and Figure 8, respectively. As a result, the deep learning-based approaches can capture responses similar to those of the optimization algorithm in both the training and testing stages. Conversely, due to the relatively large fluctuation of the transmit power in the training data at the highest required SINR value (6 dB), the training stage is less effective, and the testing stage inevitably becomes less effective as well, as shown in Figure 7b (at a required SINR of 6 dB) and Figure 8b (at required SINRs of 5 dB and 6 dB). Noteworthy is Figure 8 at a required SINR of 6 dB, where the results of the training and testing stages are poor, making the PS ratio estimates of the FFNN and LRN approaches no longer desirable when a relatively large fluctuation of transmit power exists in the considered data. Unfortunately, the training process of traditional RNNs is affected by an issue in backpropagation through time: the exploding gradient, which can cause learning to diverge, or the vanishing gradient, where learning either becomes very slow or stops working altogether, preventing complete learning of the time series. One solution is to utilize LSTM networks. Accordingly, in the comparison of the simulation results in Figure 7 and Figure 8, the LSTM shows superior performance compared to the other deep learning-based approaches (in particular, in the average PS ratio for both the training and testing stages, as shown in Figure 8). Moreover, we use the relative error, a measure of the uncertainty of a measurement, to evaluate the deviations between the results of the deep learning-based approaches and those of the optimization algorithm. For example, in Figure 7a, at a required SINR of 5 dB, the relative errors of the FFNN, NARX, LRN, and LSTM with respect to the optimization algorithm are 0.2%, 0.58%, 2.1%, and 0.69%, respectively. In Figure 7b, at a required SINR of 5 dB, the relative errors of the FFNN, NARX, LRN, and LSTM are 5.6%, 5%, 6.1%, and 0.35%, respectively. The simulation results in Figure 7a,b thus show that the LSTM provides the best performance in terms of transmit power.
Figure 9 shows the sum of the transmit powers while changing the required harvested energy in the training and testing stages. In Figure 9, we can see that changing the required harvested energy threshold of the receiver affects the minimum transmit power of the transmitter: the higher the required harvested energy, the more power the transmitter needs to transmit, which prevents the receivers from interrupting communication due to a lack of power. In Figure 9a, the deep learning-based approaches capture responses similar to those of the optimization algorithm in the training stage, which uses relatively many samples. However, with the smaller number of samples in the testing stage and the relatively large fluctuation of the transmit powers in the testing data, the deep learning-based approaches cannot effectively capture the response of the optimization algorithm, as shown in Figure 9b. We also calculate the relative errors of the FFNN, NARX, LRN, and LSTM with respect to the optimization algorithm in Figure 9a at a required harvested energy of −14 dBm, which are 0.05%, 0.3%, 1.83%, and 0.06%, respectively. In this case, the LSTM again achieves one of the lowest relative errors among the deep learning-based approaches.
Figure 10 shows the average PS ratios according to the required harvested energy in the training and testing stages. Overall, the schemes mostly show a downward trend in the PS ratio as the required harvested energy increases; that is, the PS ratio should be decreased so that more of the received signal power is devoted to EH, allowing the harvested-energy requirement to be met. In Figure 10a, the FFNN, LRN, and NARX cannot approximate the optimization algorithm accurately; consequently, these approaches cannot capture the response of the optimization algorithm in the testing stage, as shown in Figure 10b. Although the LSTM cannot capture the response of the optimization algorithm well in Figure 9b, it gives the best performance in Figure 10b, where it captures a response similar to that of the optimization algorithm.
5. Conclusions
In this work, we investigated the transmit power optimization problem in SISO interference channels with SWIPT, where a PS scheme is used at each receiver. The transmit power optimization problem, subject to the required SINR and required harvested energy, was solved by optimizing the transmit powers and the PS ratios. We then exploited the ability of deep learning to improve the computation time in comparison with the traditional optimization algorithm. Several deep learning-based approaches were proposed, namely the FFNN, LRN, NARX, and LSTM networks, and their performance was evaluated and compared to that of the traditional optimization algorithm. The experimental results showed that the LSTM network provides the best balance in the tradeoff between solution quality (i.e., MSE) and solution efficiency (i.e., computation time) among the deep learning-based approaches. Overall, the experimental results showed that deep learning models can effectively predict the output of the optimization problem without prior knowledge of the system's state. Most of all, the deep learning-based approaches provide low computation times compared to the traditional optimization algorithm, which is very useful for real-time resource allocation.
From the simulation results, the FFNN is relatively simple to implement and therefore has the lowest computation time; however, it provides inaccurate estimates of both the transmit power and the PS ratio. Due to its limited control over the learning process, the FFNN may also overfit or underfit the training set, which makes its predictions for new data inaccurate. Traditional RNNs such as the LRN and NARX have a “memory” that captures information about what has been computed so far; in some of the simulated cases, they improve the network performance compared to the FFNN, but at the cost of longer computation times. However, traditional RNNs suffer from the vanishing and exploding gradient problems, which make their training difficult in two ways: (1) the RNN cannot process very long sequences if the hyperbolic tangent is used as its activation function, and (2) the RNN is very unstable if the rectified linear unit (ReLU) is used instead. In addition, traditional RNNs cannot be stacked into very deep models, mostly because the saturating activation functions used in RNN models make the gradient decay over layers. The LSTM network resolves these issues through its cell state (the LSTM's memory) and its gates. The simulation results in this paper also show that the LSTM network provides the best balance in the tradeoff between solution quality (i.e., MSE) and solution efficiency (i.e., computation time) compared to the other deep learning-based approaches.