CN111709553B - Subway flow prediction method based on tensor GRU neural network - Google Patents
- Publication number: CN111709553B (application CN202010419486.8A)
- Authority: CN (China)
- Prior art keywords: tensor, order, data, network, output
- Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G06Q10/04: Forecasting or optimisation specially adapted for administrative or management purposes, e.g. linear programming or "cutting stock problem"
- G06N3/045: Combinations of networks (under G06N3/04, Architecture, e.g. interconnection topology)
- G06N3/08: Learning methods
- G06Q50/40: Business processes related to the transportation industry
Abstract
Description
Technical Field
The invention relates to a subway flow prediction method, and in particular to a subway flow prediction method based on a tensor GRU neural network.
Background
Urban traffic-monitoring systems aim to obtain accurate, real-time flow figures for every district, and then to use the accumulated flow data for inter-district traffic analysis; this helps authorities grasp the traffic situation of the whole city and enriches the decision-making capability of the "city brain". The data generated in a city at every moment are diverse: crowd density, through traffic, even individual attributes such as gender. For the subway-monitoring scenario, the present invention proposes a method that predicts subway flow with a tensor GRU network (the GRU is an evolution of the recurrent neural network, RNN).
Subway flow is a combined measure of the number of passengers entering and exiting each station, per unit of time, along every subway line in the city. To predict the trend of subway flow over a period, other attributes of the flow must usually be considered as well, such as the station's location and the time period concerned. Such information is naturally structured data. Traditional methods typically split the data along each dimension for separate feature extraction and then fuse the features, which directly destroys the structural relationships within the data. The present method therefore adopts tensor arithmetic, replacing the neural network's matrix operations with full tensor operations. Matching the characteristics of subway data, a sequence network accepts inputs from different time periods, and a tensor distance accounts for the positional relationships among the individual elements of the data, so that the data structure is exploited directly. The same high-order idea is extended to the output side: the output is no longer restricted to vector-level judgments, and by letting the tensor distance operate on element indices the structural features of the tensor are captured, so the output side also admits high-order expansion. This is important for predicting different outcomes under different future trends.
Summary of the Invention
To address the deficiencies of the prior art, the present invention proposes a subway flow prediction method based on a tensor GRU neural network.
The technical problem to be solved by the invention is how to apply a tensor GRU to subway passenger-flow prediction in the setting of high-order data sets.
The invention solves this problem through the following technical scheme:
The tensor-GRU-based subway flow prediction method comprises the following steps:
1) Take the detailed card-swipe records of every subway station over a chosen period as the data set. Clean and filter the data set, and use subway station, subway line and holiday type as the dimensions of the respective modes to assemble a high-order tensor.
2) Before the data enter the GRU neural network, normalize each input mode by mode, ensuring that the label data share the same set of means and variances as the input data. Apply z-score standardization to the input data according to the dimensionality of the network output, obtaining a mean tensor M and a standard-deviation tensor St; an N-order tensor of shape K1×K2×…×KN is unfolded mode by mode.
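A minimal numpy sketch of this per-mode z-score step (the function names and the `keep_axis` convention are illustrative assumptions; the idea is to keep the output mode and merge all others, retaining M and St so the output can later be de-normalized as in step 5):

```python
import numpy as np

def zscore_by_mode(x, keep_axis=0):
    """Z-score-normalize a tensor over all modes except `keep_axis`,
    returning the mean tensor M and standard-deviation tensor St."""
    axes = tuple(i for i in range(x.ndim) if i != keep_axis)
    M = x.mean(axis=axes, keepdims=True)
    St = x.std(axis=axes, keepdims=True) + 1e-8   # guard against zero variance
    return (x - M) / St, M, St

def denormalize(y, M, St):
    # inverse transform, applied to the network output in step 5
    return y * St + M
```

Labels would be normalized with the same M and St returned here, matching the requirement that labels share the input's statistics.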
3) Split the data processed in step 2) into individual time steps X and feed them into the GRU neural network. In each tensor-GRU unit, the hidden-layer tensor state is obtained by taking the Einstein product of the weight tensor W with X. The Einstein product is

(A *_N B)_{j1…jM} = Σ_{k1=1..K1} … Σ_{kN=1..KN} A_{k1…kN} · B_{k1…kN j1…jM},

where A and B are an N-order tensor and an (N+M)-order tensor respectively, and kN denotes the size of the N-th mode of the tensor. Elements with matching indices are multiplied mode by mode, from k1 through kN, and summed, contracting the N shared modes so that the result is an M-order tensor. The update gate and reset gate inside the tensor GRU then produce the final hidden-layer state H, which serves as the output or as the input to the next time step. After the prescribed number of time steps has been computed, the resulting tensor H is the hidden-layer state.
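This contraction of the leading N modes maps directly onto `numpy.tensordot`; the sketch below is an illustrative implementation of the Einstein product as defined above (the function name is my own):

```python
import numpy as np

def einstein_product(A, B):
    """Einstein product A *_N B: contract all N modes of A with the
    first N modes of B, leaving an M-order result."""
    n = A.ndim
    assert A.shape == B.shape[:n]
    return np.tensordot(A, B, axes=(list(range(n)), list(range(n))))

A = np.random.rand(4, 3, 2)            # N = 3 order
B = np.random.rand(4, 3, 2, 5, 6)      # N + M = 5 order
C = einstein_product(A, B)
print(C.shape)  # (5, 6)
```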
4) Compute the loss between the output H and the labels, which were normalized in the same way before input. The tensor distance is used as a loss function that captures the structural characteristics of high-order tensors:

d(X, Y) = sqrt( Σ_{l,m} g_{lm} (x_l − y_l)(x_m − y_m) ),  with g_{lm} = exp( −‖p_l − p_m‖² / (2σ²) ),

where l and m index the elements of the two tensors, x_l, y_l, x_m, y_m are the element values at those indices, p_l and p_m are the index (position) vectors of elements l and m, and ‖p_l − p_m‖ is the Euclidean distance between those positions. Through the back-propagation mechanism the weight tensor W_i of every moment is updated, where i denotes a time step of the output.
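A concrete sketch of this tensor distance, with metric coefficients that decay with the Euclidean distance between element positions (the Gaussian kernel and the bandwidth `sigma` are assumptions consistent with the tensor-distance formulation above; the O(n²) metric matrix limits this to small tensors):

```python
import numpy as np

def tensor_distance(X, Y, sigma=1.0):
    """Mahalanobis-like distance between two same-shape tensors whose
    coefficients g_lm decay with the distance between element positions,
    so the loss is aware of where elements sit in the tensor structure."""
    pos = np.argwhere(np.ones(X.shape))              # index vector p_l of every element
    d2 = ((pos[:, None, :] - pos[None, :, :]) ** 2).sum(-1)
    G = np.exp(-d2 / (2 * sigma ** 2))               # metric coefficients g_lm
    diff = (X - Y).ravel()
    return float(np.sqrt(diff @ G @ diff))
```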
Steps 3) and 4) are repeated for each epoch; only after the final epoch does the method proceed to the next step.
5) After the last epoch, take the result of step 3), de-normalize the network output using M and St, and take the effective dimensions of the de-normalized tensor as the network's final prediction, i.e. the final flow forecast.
Compared with the prior art, the invention has the following effects:
1. Neural networks built on vector and matrix operations can now operate on tensor structures, raising the dimensionality of the network. Applied to subway flow, high-order information can be fed in and fused at high order, with a representational capacity that vectors cannot match.
2. With the tensor distance as loss function, structural characteristics are learned during training. For the flow output this means multi-modal scenario predictions become possible, widening the dimensional boundaries of network prediction and regression.
Brief Description of the Drawings
Fig. 1 is a structural diagram of the interior of the GRU network;
Fig. 2 is the overall flow chart of the invention.
Detailed Description of the Embodiments
The invention is described further below with reference to the drawings and a specific embodiment:
1) Obtain and aggregate the detailed card-swipe records of every subway station for each time period of the month. Using ten minutes as one time unit, divide a day into 144 units and count the passengers entering and exiting each station in each unit, yielding daily flow data for every station from the 1st to the 26th of the month. From the subway line each station belongs to and the holiday status of each date, construct a fifth-order tensor X with X ∈ R^{26×144×80×3×2}, where 26 is the number of days of data, 144 the number of time periods per day, 80 the number of subway stations, 3 the number of subway lines, and 2 the workday/non-workday distinction.
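Building that fifth-order flow tensor from card-swipe records can be sketched as follows (the `add_record` helper and its field encoding are hypothetical; only the tensor shape comes from the embodiment):

```python
import numpy as np

# Fifth-order flow tensor from the embodiment:
# 26 days x 144 ten-minute slots x 80 stations x 3 lines x 2 (workday / non-workday)
X = np.zeros((26, 144, 80, 3, 2))

def add_record(X, day, minute_of_day, station, line, is_workday, count=1):
    """Accumulate one card-swipe count into the flow tensor.
    (Hypothetical helper; field names and encodings are illustrative.)"""
    slot = minute_of_day // 10           # map minutes to one of the 144 slots
    X[day, slot, station, line, int(is_workday)] += count

add_record(X, day=0, minute_of_day=485, station=12, line=1, is_workday=True)
print(X[0, 48, 12, 1, 1])  # 1.0
```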
2) Use days 1 to 25 as training data and day 26 as test data. A batch size of 144 was chosen for training, which helps the network learn the patterns of the individual time periods in random order. To predict the data of a given time period, the flow of the 50 minutes before it and the 40 minutes after it, plus its own ten minutes, 100 minutes of flow data in total, is taken as input; that is, at each GRU time step the features are the 100 minutes of flow data, i.e. 10 numbers.
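The 100-minute input window (5 slots before the target slot, the slot itself, and 4 slots after) can be sketched as:

```python
import numpy as np

def window_features(day_series, t):
    """Features for target slot t: the 5 slots before it, the slot itself,
    and the 4 slots after it (50 + 10 + 40 = 100 minutes -> 10 values)."""
    assert 5 <= t <= len(day_series) - 5
    return day_series[t - 5 : t + 5]

series = np.arange(144)                  # one day's 144 ten-minute flow counts
feats = window_features(series, 50)
print(len(feats))  # 10
```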
3) In a single training pass, one of the 144 time periods is selected, and the data of all 25 days (days 1 to 25) are unfolded and fed into the GRU day by day, so the input tensor of each GRU time step lies in R^{80×3×2×10}.
4) Before the network computation, normalize the inputs and labels: apply z-score standardization to the input data according to the dimensionality of the network output. If the output dimension is R^{80}, the three modes R^{3×2×10} of the input data are merged to obtain a mean tensor M and a standard-deviation tensor St in R^{80}, which makes de-normalizing the output straightforward. The labels are normalized with the mean and standard deviation obtained from the input, without regard to the labels' own distribution.
5) The normalized data are fed, along the 25-day time dimension, successively into the GRU cells for training, according to the formulas:

Z_t = σ(U^(Z) *_N X_t + W^(Z) *_M S_{t−1}),

R_t = σ(U^(R) *_N X_t + W^(R) *_M S_{t−1}),

H_t = tanh(U^(H) *_N X_t + W^(H) *_M (S_{t−1} ⊙ R_t)),

S_t = (1 − Z_t) ⊙ H_t + Z_t ⊙ S_{t−1},

where U and W are the weight tensors applied, respectively, to the input tensor and to the hidden-layer tensor of the previous moment; Z_t is the update-gate tensor obtained after the computation, R_t the reset-gate tensor, and H_t the candidate hidden-layer tensor obtained after the reset gate has reset the previous time step's state S_{t−1}. The state of the current time step, S_t, is the sum of the part carried over through the update gate, Z_t ⊙ S_{t−1}, and the part not participating in the update, (1 − Z_t) ⊙ H_t. In the forward pass, the 4-order input tensor at time t is first combined with the (4+n)-order weight tensor via the Einstein product, where the hyperparameter n is the order of the hidden-layer tensor; the data then pass through the update gate and the reset gate, and the retention weights computed by each gate decide which element values of the previous state and the current state are kept or discarded.
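A toy forward step of the tensor GRU cell, with the Einstein products realized by `numpy.tensordot`, might look like this (shapes are reduced from the embodiment's R^{80×3×2×10} for illustration; the weight initialization and a first-order hidden state are arbitrary assumptions):

```python
import numpy as np

def ein(A, B):
    """Einstein product: contract all modes of A with the leading modes of B."""
    n = A.ndim
    return np.tensordot(A, B, axes=(list(range(n)), list(range(n))))

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def tensor_gru_cell(X, S, U, W):
    """One tensor-GRU step per the equations above. X: input tensor,
    S: previous hidden state; U[k], W[k] for k in 'ZRH' are weight tensors."""
    Z = sigmoid(ein(X, U['Z']) + ein(S, W['Z']))          # update gate
    R = sigmoid(ein(X, U['R']) + ein(S, W['R']))          # reset gate
    H = np.tanh(ein(X, U['H']) + ein(S * R, W['H']))      # candidate state
    return (1 - Z) * H + Z * S                            # new state S_t

# toy shapes: 4th-order input (2,3,2,10), 1st-order hidden state of size 8
rng = np.random.default_rng(0)
X = rng.normal(size=(2, 3, 2, 10))
S = np.zeros(8)
U = {k: rng.normal(size=(2, 3, 2, 10, 8)) * 0.1 for k in 'ZRH'}
W = {k: rng.normal(size=(8, 8)) * 0.1 for k in 'ZRH'}
S = tensor_gru_cell(X, S, U, W)
print(S.shape)  # (8,)
```

Here `⊙` from the equations is element-wise multiplication (`*` on same-shape arrays).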
6) Once forward propagation yields the network output Y, the normalized label values and the output are used for back-propagation based on the tensor-distance loss, chiefly to update the weight tensors in each gate of the network: U^(Z), W^(Z), U^(R), W^(R), U^(H), W^(H).
7) Steps 4) to 6) are repeated until the network converges; the final output, de-normalized with M and St, is the final flow forecast.
Claims (1)
Priority Applications (1)
- CN202010419486.8A (CN111709553B), priority date 2020-05-18, filing date 2020-05-18: Subway flow prediction method based on tensor GRU neural network
Publications (2)
- CN111709553A, published 2020-09-25
- CN111709553B, published 2023-05-23
Family
ID=72538039
Family Applications (1)
- CN202010419486.8A (CN111709553B, Active), priority date 2020-05-18, filing date 2020-05-18
Country Status (1)
- CN: CN111709553B
Families Citing this family (1)
- CN117726183A (priority date 2024-02-07, published 2024-03-19, 天津生联智慧科技发展有限公司): Gas operation data prediction method based on space high-order convolution
Citations (2)
- CN109492814A (priority date 2018-11-15, published 2019-03-19, 中国科学院深圳先进技术研究院): Urban traffic flow prediction method, system and electronic equipment
- CN110310479A (priority date 2019-06-20, published 2019-10-08, 云南大学): Urban traffic flow forecasting system and method
Family Cites Families (5)
- WO2019084254A1 (priority date 2017-10-27, published 2019-05-02, Wave Computing, Inc.): Tensor manipulation within a neural network
- CN107798385B (priority date 2017-12-08, published 2020-03-17, 电子科技大学): Sparse connection method of recurrent neural network based on block tensor decomposition
- GB2574224B (priority date 2018-05-31, published 2022-06-29, Vivacity Labs Ltd): Traffic management system
- CN109325585A (priority date 2018-10-10, published 2019-02-12, 电子科技大学): A sparse connection method for long short-term memory networks based on tensor ring decomposition
- CN111046740B (priority date 2019-11-17, published 2023-05-19, 杭州电子科技大学): A classification method based on full-frame quantization recurrent neural network for human action video
2020-05-18: application CN202010419486.8A filed; granted as CN111709553B (Active)
Also Published As
- CN111709553A, published 2020-09-25
Similar Documents
- Wu et al.: Daily urban air quality index forecasting based on variational mode decomposition, sample entropy and LSTM neural network
- CN111210633B: Short-term traffic flow prediction method based on deep learning
- Salloom et al.: A novel deep neural network architecture for real-time water demand forecasting
- CN112001548B: OD passenger flow prediction method based on deep learning
- Ko et al.: Short-term load forecasting using SVR (support vector regression)-based radial basis function neural network with dual extended Kalman filter
- CN109583565B: Flood prediction method based on attention-model long short-term memory network
- CN110473592B: A multi-view human synergistic lethal gene prediction method
- CN108346293B: A method for short-term forecasting of real-time traffic flow
- CN110263280A: A dynamic link prediction depth model based on multiple views, and its application
- CN111754025A: Prediction method of short-term bus passenger flow based on CNN+GRU
- CN112182423B: Internet public opinion event evolution trend prediction method based on attention mechanism
- Haghighat: Predicting the trend of indicators related to Covid-19 using the combined MLP-MC model
- CN112330028A: Electric bus charging load prediction method based on spectral clustering and LSTM neural network
- CN110223785A: An infectious disease transmission network reconstruction method based on deep learning
- CN113362637B: A method and system for predicting vacant parking spaces at multiple sites in a region
- CN112784479A: Flood flow prediction method
- CN110210656B: Method and system of shared-bicycle traffic flow prediction based on site behavior analysis
- CN115759413B: Meteorological prediction method and device, storage medium and electronic equipment
- Awad et al.: Prediction of water demand using artificial neural network models and a statistical model
- CN111709553B: Subway flow prediction method based on tensor GRU neural network
- CN110390419A: Traffic forecast method for expressway toll stations based on a PSO-LSSVM model
- CN107704944A: A stock market fluctuation interval prediction method based on information-theoretic learning
- CN112488397A: Load prediction method under extreme scenarios based on modal decomposition and transfer learning
- Rafik et al.: Learning and predictive energy consumption model based on LSTM recursive neural networks
- CN118070952A: Multi-head attention traffic flow prediction method based on a spatio-temporal graph ordinary differential equation
Legal Events
- PB01: Publication
- SE01: Entry into force of request for substantive examination
- GR01: Patent grant