CN117346285B - Indoor heating and ventilation control method, system and medium - Google Patents
- Publication number
- CN117346285B CN117346285B CN202311644580.3A CN202311644580A CN117346285B CN 117346285 B CN117346285 B CN 117346285B CN 202311644580 A CN202311644580 A CN 202311644580A CN 117346285 B CN117346285 B CN 117346285B
- Authority
- CN
- China
- Prior art keywords
- indoor
- window
- angle
- module
- video data
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Links
- 238000000034 method Methods 0.000 title claims abstract description 45
- 238000010438 heat treatment Methods 0.000 title claims description 8
- 238000009423 ventilation Methods 0.000 title claims description 8
- 230000011218 segmentation Effects 0.000 claims abstract description 54
- 230000007613 environmental effect Effects 0.000 claims abstract description 16
- 230000003287 optical effect Effects 0.000 claims description 52
- 238000012549 training Methods 0.000 claims description 31
- 238000000605 extraction Methods 0.000 claims description 27
- 238000004422 calculation algorithm Methods 0.000 claims description 26
- 230000008569 process Effects 0.000 claims description 16
- CURLTUGMZLYLDI-UHFFFAOYSA-N Carbon dioxide Chemical compound O=C=O CURLTUGMZLYLDI-UHFFFAOYSA-N 0.000 claims description 14
- 238000004364 calculation method Methods 0.000 claims description 14
- 238000001514 detection method Methods 0.000 claims description 13
- 238000004590 computer program Methods 0.000 claims description 12
- 238000011217 control strategy Methods 0.000 claims description 11
- 229910002092 carbon dioxide Inorganic materials 0.000 claims description 7
- 239000001569 carbon dioxide Substances 0.000 claims description 7
- 238000004378 air conditioning Methods 0.000 claims description 6
- 238000003062 neural network model Methods 0.000 claims description 5
- 238000001914 filtration Methods 0.000 claims description 4
- 229910052799 carbon Inorganic materials 0.000 claims 1
- 230000009467 reduction Effects 0.000 abstract description 2
- 230000006399 behavior Effects 0.000 description 56
- 238000010586 diagram Methods 0.000 description 15
- 230000006870 function Effects 0.000 description 14
- 230000008859 change Effects 0.000 description 7
- 238000005265 energy consumption Methods 0.000 description 6
- 230000007246 mechanism Effects 0.000 description 6
- 238000011176 pooling Methods 0.000 description 6
- 238000012545 processing Methods 0.000 description 5
- 230000009466 transformation Effects 0.000 description 5
- 125000004122 cyclic group Chemical group 0.000 description 4
- 238000012360 testing method Methods 0.000 description 4
- 238000004134 energy conservation Methods 0.000 description 2
- 238000012423 maintenance Methods 0.000 description 2
- 239000000203 mixture Substances 0.000 description 2
- 210000002569 neuron Anatomy 0.000 description 2
- 238000007781 pre-processing Methods 0.000 description 2
- 238000013473 artificial intelligence Methods 0.000 description 1
- 230000009286 beneficial effect Effects 0.000 description 1
- 210000000988 bone and bone Anatomy 0.000 description 1
- 238000013136 deep learning model Methods 0.000 description 1
- 230000007812 deficiency Effects 0.000 description 1
- 238000005516 engineering process Methods 0.000 description 1
- 230000004927 fusion Effects 0.000 description 1
- 238000004519 manufacturing process Methods 0.000 description 1
- 238000013507 mapping Methods 0.000 description 1
- 239000011159 matrix material Substances 0.000 description 1
- 238000012986 modification Methods 0.000 description 1
- 230000004048 modification Effects 0.000 description 1
- 230000007935 neutral effect Effects 0.000 description 1
- 238000010606 normalization Methods 0.000 description 1
- 230000001105 regulatory effect Effects 0.000 description 1
- 238000005070 sampling Methods 0.000 description 1
- 239000002699 waste material Substances 0.000 description 1
Classifications
-
- F—MECHANICAL ENGINEERING; LIGHTING; HEATING; WEAPONS; BLASTING
- F24—HEATING; RANGES; VENTILATING
- F24F—AIR-CONDITIONING; AIR-HUMIDIFICATION; VENTILATION; USE OF AIR CURRENTS FOR SCREENING
- F24F11/00—Control or safety arrangements
- F24F11/30—Control or safety arrangements for purposes related to the operation of the system, e.g. for safety or monitoring
- F24F11/46—Improving electric energy efficiency or saving
-
- F—MECHANICAL ENGINEERING; LIGHTING; HEATING; WEAPONS; BLASTING
- F24—HEATING; RANGES; VENTILATING
- F24F—AIR-CONDITIONING; AIR-HUMIDIFICATION; VENTILATION; USE OF AIR CURRENTS FOR SCREENING
- F24F11/00—Control or safety arrangements
- F24F11/62—Control or safety arrangements characterised by the type of control or by internal processing, e.g. using fuzzy logic, adaptive control or estimation of values
- F24F11/63—Electronic processing
- F24F11/64—Electronic processing using pre-stored data
-
- F—MECHANICAL ENGINEERING; LIGHTING; HEATING; WEAPONS; BLASTING
- F24—HEATING; RANGES; VENTILATING
- F24F—AIR-CONDITIONING; AIR-HUMIDIFICATION; VENTILATION; USE OF AIR CURRENTS FOR SCREENING
- F24F11/00—Control or safety arrangements
- F24F11/70—Control systems characterised by their outputs; Constructional details thereof
-
- F—MECHANICAL ENGINEERING; LIGHTING; HEATING; WEAPONS; BLASTING
- F24—HEATING; RANGES; VENTILATING
- F24F—AIR-CONDITIONING; AIR-HUMIDIFICATION; VENTILATION; USE OF AIR CURRENTS FOR SCREENING
- F24F11/00—Control or safety arrangements
- F24F11/88—Electrical aspects, e.g. circuits
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/25—Determination of region of interest [ROI] or a volume of interest [VOI]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/26—Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
- G06V10/267—Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion by performing operations on regions, e.g. growing, shrinking or watersheds
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/82—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/40—Scenes; Scene-specific elements in video content
- G06V20/41—Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
-
- F—MECHANICAL ENGINEERING; LIGHTING; HEATING; WEAPONS; BLASTING
- F24—HEATING; RANGES; VENTILATING
- F24F—AIR-CONDITIONING; AIR-HUMIDIFICATION; VENTILATION; USE OF AIR CURRENTS FOR SCREENING
- F24F2110/00—Control inputs relating to air properties
- F24F2110/10—Temperature
-
- F—MECHANICAL ENGINEERING; LIGHTING; HEATING; WEAPONS; BLASTING
- F24—HEATING; RANGES; VENTILATING
- F24F—AIR-CONDITIONING; AIR-HUMIDIFICATION; VENTILATION; USE OF AIR CURRENTS FOR SCREENING
- F24F2110/00—Control inputs relating to air properties
- F24F2110/20—Humidity
-
- F—MECHANICAL ENGINEERING; LIGHTING; HEATING; WEAPONS; BLASTING
- F24—HEATING; RANGES; VENTILATING
- F24F—AIR-CONDITIONING; AIR-HUMIDIFICATION; VENTILATION; USE OF AIR CURRENTS FOR SCREENING
- F24F2110/00—Control inputs relating to air properties
- F24F2110/30—Velocity
-
- F—MECHANICAL ENGINEERING; LIGHTING; HEATING; WEAPONS; BLASTING
- F24—HEATING; RANGES; VENTILATING
- F24F—AIR-CONDITIONING; AIR-HUMIDIFICATION; VENTILATION; USE OF AIR CURRENTS FOR SCREENING
- F24F2110/00—Control inputs relating to air properties
- F24F2110/50—Air quality properties
- F24F2110/65—Concentration of specific substances or contaminants
- F24F2110/70—Carbon dioxide
-
- F—MECHANICAL ENGINEERING; LIGHTING; HEATING; WEAPONS; BLASTING
- F24—HEATING; RANGES; VENTILATING
- F24F—AIR-CONDITIONING; AIR-HUMIDIFICATION; VENTILATION; USE OF AIR CURRENTS FOR SCREENING
- F24F2120/00—Control inputs relating to users or occupants
- F24F2120/20—Feedback from users
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02B—CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO BUILDINGS, e.g. HOUSING, HOUSE APPLIANCES OR RELATED END-USER APPLICATIONS
- Y02B30/00—Energy efficient heating, ventilation or air conditioning [HVAC]
- Y02B30/70—Efficient control or regulation technologies, e.g. for control of refrigerant flow, motor or heating
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- General Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- Combustion & Propulsion (AREA)
- Mechanical Engineering (AREA)
- Evolutionary Computation (AREA)
- Chemical & Material Sciences (AREA)
- Multimedia (AREA)
- Software Systems (AREA)
- Health & Medical Sciences (AREA)
- Computational Linguistics (AREA)
- Mathematical Physics (AREA)
- Artificial Intelligence (AREA)
- Computing Systems (AREA)
- General Health & Medical Sciences (AREA)
- Signal Processing (AREA)
- Biomedical Technology (AREA)
- Molecular Biology (AREA)
- Data Mining & Analysis (AREA)
- Biophysics (AREA)
- Life Sciences & Earth Sciences (AREA)
- Fuzzy Systems (AREA)
- Medical Informatics (AREA)
- Databases & Information Systems (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Image Processing (AREA)
- Image Analysis (AREA)
Abstract
本发明公开一种室内暖通控制方法、系统及介质,方法包括:获取室内视频数据及环境参数;将所述视频数据输入训练好的室内窗户分割模型IWS‑Net,得到窗户分割掩膜图像,并根据所述窗户分割掩膜图像,计算得到室内窗户打开程度;计算与所述视频数据对应的逆光流序列及人体骨骼关键点序列,并将所述视频数据、逆光流序列及人体骨骼关键点序列输入训练好的室内人员行为识别模型IDARM,得到室内人员热舒适行为;根据得到的室内窗户打开程度、室内人员热舒适行为及所述环境参数,计算室内暖通控制调整策略。本发明根据室内环境参数、窗户打开程度和室内人员的行为对暖通系统进行控制调节,实现节能减排,提高室内热舒适度。
The invention discloses an indoor HVAC control method, system and medium. The method includes: acquiring indoor video data and environmental parameters; inputting the video data into a trained indoor window segmentation model, IWS-Net, to obtain a window segmentation mask image, and computing the indoor window opening degree from that mask image; computing the inverse optical flow sequence and the human skeleton key-point sequence corresponding to the video data, and feeding the video data, the inverse optical flow sequence and the key-point sequence into a trained indoor occupant behavior recognition model, IDARM, to obtain the occupants' thermal comfort behavior; and, based on the obtained window opening degree, the thermal comfort behavior and the environmental parameters, computing an indoor HVAC control adjustment strategy. The invention regulates the HVAC system according to indoor environmental parameters, the window opening degree and occupant behavior, saving energy, reducing emissions and improving indoor thermal comfort.
Description
技术领域Technical Field
本发明涉及一种室内暖通控制方法、系统及介质，具体涉及计算机视觉和暖通空调控制，属于人工智能及图像处理技术领域。The present invention relates to an indoor HVAC control method, system and medium, and specifically to computer vision and HVAC control, belonging to the technical fields of artificial intelligence and image processing.
背景技术Background Art
全球建筑领域能耗占到总能耗的40%左右，其中约有一半用于建筑空调。必须减少建筑领域能耗。而人员开窗行为是影响暖通控制的最主要的因素，在不使用智能化控制的情况下，建筑内人员频繁地开窗行为，会急剧增加能源消耗和运行成本。The building sector accounts for about 40% of total global energy consumption, roughly half of which goes to building air conditioning, so energy use in buildings must be reduced. Occupants' window-opening behavior is the most important factor affecting HVAC control: without intelligent control, frequent window opening by building occupants sharply increases energy consumption and operating costs.
发明内容Summary of the invention
为克服上述现有技术的不足,本发明提供一种室内暖通控制方法、系统及介质,用以根据室内环境参数、窗户打开程度和室内人员的行为对暖通系统进行控制调节,实现节能减排,提高室内热舒适度。In order to overcome the deficiencies of the above-mentioned prior art, the present invention provides an indoor HVAC control method, system and medium, which are used to control and adjust the HVAC system according to indoor environmental parameters, window opening degree and indoor occupant behavior, so as to achieve energy conservation and emission reduction and improve indoor thermal comfort.
一方面,本发明提供一种室内暖通控制方法,包括:In one aspect, the present invention provides an indoor HVAC control method, comprising:
获取室内视频数据及环境参数;Obtain indoor video data and environmental parameters;
将所述视频数据输入训练好的室内窗户分割模型IWS-Net,得到窗户分割掩膜图像,并根据所述窗户分割掩膜图像,计算得到室内窗户打开程度;Input the video data into the trained indoor window segmentation model IWS-Net to obtain a window segmentation mask image, and calculate the degree of opening of the indoor window according to the window segmentation mask image;
计算与所述视频数据对应的逆光流序列及人体骨骼关键点序列,并将所述视频数据、逆光流序列及人体骨骼关键点序列输入训练好的室内人员行为识别模型IDARM,得到室内人员热舒适行为;Calculating the inverse optical flow sequence and the human skeleton key point sequence corresponding to the video data, and inputting the video data, the inverse optical flow sequence and the human skeleton key point sequence into the trained indoor personnel behavior recognition model IDARM to obtain the thermal comfort behavior of the indoor personnel;
根据得到的室内窗户打开程度、室内人员热舒适行为及所述环境参数,确定室内暖通控制调整策略。The indoor HVAC control adjustment strategy is determined based on the obtained indoor window opening degree, indoor occupant thermal comfort behavior and the environmental parameters.
进一步地，所述视频数据通过摄像机获取。摄像机可放置于室内的顶部，确保其能够清晰地拍摄到室内全局和窗户的画面，并测量其与检测窗户的距离、角度以及自身的俯仰角、偏航角和滚转角。Furthermore, the video data is acquired by a camera. The camera can be mounted near the ceiling of the room so that it clearly captures both the whole room and the windows; its distance and angle to the monitored window are measured, along with the camera's own pitch, yaw and roll angles.
具体地,按照一定频率对摄像机获取的视频进行抽样,获取相应的视频帧,再将所述视频帧输入室内窗户分割模型IWS-Net进行窗户分割。Specifically, the video acquired by the camera is sampled at a certain frequency to obtain corresponding video frames, and then the video frames are input into the indoor window segmentation model IWS-Net for window segmentation.
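The sampling step above can be sketched in Python. The interval and the frame source are assumptions, since the text only specifies sampling "at a certain frequency":

```python
def sample_frames(frames, interval):
    """Keep every `interval`-th frame of a decoded video (assumed policy)."""
    return frames[::interval]

# Stand-in for 90 decoded frames (3 s of video at 30 FPS); sample ~1 frame/s.
frames = list(range(90))
sampled = sample_frames(frames, 30)
```

Each sampled frame would then be passed to IWS-Net for window segmentation.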
进一步地,所述环境参数通过室内热舒适度测量仪获取。室内热舒适度测量仪主要获取室内的空气温度、相对湿度、空气流速以及二氧化碳浓度。Furthermore, the environmental parameters are obtained by an indoor thermal comfort measuring instrument, which mainly obtains indoor air temperature, relative humidity, air flow rate and carbon dioxide concentration.
可选地，使用循环全对域转换器RAFT从摄像机获取的RGB视频帧序列计算出相应的逆光流序列，同时使用Mediapipe库对视频中的每一帧图像进行骨骼关键点检测，形成人体骨骼关键点序列。Optionally, the Recurrent All-pairs Field Transforms model (RAFT) is used to compute the corresponding inverse optical flow sequence from the RGB video frame sequence captured by the camera, while the Mediapipe library detects skeleton key points in every frame of the video to form the human skeleton key-point sequence.
进一步地,计算逆光流序列的操作如下:Furthermore, the operation of calculating the inverse optical flow sequence is as follows:
(1)选取视频帧序列相邻两帧，记为I1和I2，并将其宽和高填充至8的倍数；(1) Take two adjacent frames of the video sequence, denoted I1 and I2, and pad their width and height up to multiples of 8;
(2)将I1和I2进行交换后，作为预训练RAFT模型的输入，得到逆光流图；(2) After swapping I1 and I2, feed them to the pretrained RAFT model to obtain the inverse optical flow map;
(3)将逆光流图中数值的方向取反,并重复(1),直至视频结束。(3) Reverse the direction of the values in the inverse optical flow map and repeat (1) until the video ends.
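The three steps above can be sketched as follows. The pretrained RAFT network itself is replaced by a stand-in callable (`dummy_raft`); only the padding, frame-swapping and flow-negation logic described in the text is reproduced:

```python
import numpy as np

def pad_to_multiple_of_8(img):
    """Step (1): pad H and W up to the next multiple of 8, as RAFT requires."""
    h, w = img.shape[:2]
    ph, pw = (-h) % 8, (-w) % 8
    return np.pad(img, ((0, ph), (0, pw)) + ((0, 0),) * (img.ndim - 2))

def dummy_raft(img_a, img_b):
    """Stand-in for the pretrained RAFT network; returns an (H, W, 2) flow."""
    return np.ones(img_a.shape[:2] + (2,), dtype=np.float32)

def inverse_flow_sequence(frames, raft=dummy_raft):
    """Steps (1)-(3): pad, swap the adjacent pair, run RAFT, negate the flow."""
    flows = []
    for a, b in zip(frames, frames[1:]):
        a8, b8 = pad_to_multiple_of_8(a), pad_to_multiple_of_8(b)
        flow = raft(b8, a8)      # step (2): frames swapped before the forward pass
        flows.append(-flow)      # step (3): reverse the direction of the values
    return flows
```

In practice `raft` would be a real RAFT implementation (e.g. the one in torchvision) rather than the dummy above.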
进一步地,计算每一帧中的人体骨骼关键点的操作如下:Furthermore, the operation of calculating the key points of the human skeleton in each frame is as follows:
(1)将视频帧序列逐帧分离;(1) Separate the video frame sequence frame by frame;
(2)使用Python中Mediapipe,对每一帧图像检测人体骨骼关键点;(2) Use Mediapipe in Python to detect the key points of the human skeleton for each frame of the image;
(3)将(2)中获得的33个人体骨骼关键点进行存储,并重复(1),直至结束。(3) Store the 33 human skeleton key points obtained in (2) and repeat (1) until the end.
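A minimal sketch of this per-frame loop, assuming a detector with Mediapipe Pose's 33-landmark output shape; `stub_detector` stands in for `mediapipe.solutions.pose`, which is not reproduced here:

```python
import numpy as np

N_KEYPOINTS = 33  # Mediapipe Pose reports 33 body landmarks per frame

def stub_detector(frame):
    """Stand-in for Mediapipe: returns (33, 3) x / y / visibility per frame."""
    return np.zeros((N_KEYPOINTS, 3), dtype=np.float32)

def keypoint_sequence(frames, detect=stub_detector):
    """Steps (1)-(3): split frame by frame, detect, store until the video ends."""
    return np.stack([detect(f) for f in frames])  # shape: (T, 33, 3)
```

The resulting (T, 33, 3) array is the human skeleton key-point sequence fed to IDARM.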
作为进一步的技术方案,所述室内窗户分割模型IWS-Net的训练进一步包括:As a further technical solution, the training of the indoor window segmentation model IWS-Net further includes:
采集室内窗户的图像,对所述图像进行预处理并形成训练集;Collecting images of indoor windows, preprocessing the images and forming a training set;
将所述训练集输入构建的室内窗户分割模型IWS-Net进行训练,得到满足精度要求的模型参数;Inputting the training set into the constructed indoor window segmentation model IWS-Net for training to obtain model parameters that meet the accuracy requirements;
将所述模型参数加载到室内窗户分割模型IWS-Net,得到训练好的室内窗户分割模型IWS-Net。The model parameters are loaded into the indoor window segmentation model IWS-Net to obtain a trained indoor window segmentation model IWS-Net.
进一步地，采集室内窗户的图像后，对所述图像使用labelme进行标注，将图像中的墙壁、窗户以及窗沿进行分割，其余部分划分为背景类。将所有数据按照一定的比例分为测试集和训练集。Furthermore, after the indoor window images are collected, they are annotated with labelme: the walls, windows and window sills in each image are segmented, and everything else is labelled as background. All data are then split into a test set and a training set in a set ratio.
作为进一步的技术方案,所述室内窗户分割模型IWS-Net的运算过程包括:As a further technical solution, the operation process of the indoor window segmentation model IWS-Net includes:
利用由5个特征提取模块顺序组成的主干网络,对输入的室内窗户图像进行特征提取,并保存每个特征提取模块的输出;A backbone network consisting of five feature extraction modules is used to extract features from the input indoor window image, and the output of each feature extraction module is saved;
利用顺序连接的3个注意力模块,对最后一个特征提取模块的输出进行背景特征抑制和所需分割部分特征增强;Using three attention modules connected sequentially, the output of the last feature extraction module is used to suppress background features and enhance the features of the required segmented parts;
利用重建模块对最后一个注意力模块的输出及每个特征提取模块的输出进行重建,得到墙壁、窗户、窗沿以及背景的掩膜图像。The reconstruction module is used to reconstruct the output of the last attention module and the output of each feature extraction module to obtain mask images of walls, windows, window sills and background.
上述技术方案采用IWS-Net深度学习模型进行室内窗户分割，具有如下特点：(1)采用深度可分离卷积和空洞卷积计算方式，在扩大了感受野的同时减少运算参数，提高计算速度与分割精度；(2)采用多通道卷积的方式，利用不同的卷积核提取图像特征，增加了特征的多样性，提高了模型的表达能力，减少了模型过拟合的风险，加速模型训练和推理过程；(3)采用了注意力机制，抑制图像中的背景信息，增强所需分割部分的特征信息，提高IWS-Net对感兴趣区域的关注程度，从而提高分割的准确性和鲁棒性。The above technical solution uses the IWS-Net deep learning model for indoor window segmentation, with the following features: (1) depthwise separable convolution and dilated convolution enlarge the receptive field while reducing the number of parameters, improving both computation speed and segmentation accuracy; (2) multi-channel convolution extracts image features with different kernels, increasing feature diversity, strengthening the model's representational capacity, lowering the risk of overfitting, and accelerating training and inference; (3) an attention mechanism suppresses background information and enhances the features of the regions to be segmented, focusing IWS-Net on the regions of interest and thereby improving segmentation accuracy and robustness.
进一步地，所述重建模块包括5个上采样模块，其中每个上采样模块的输入由前一个模块的输出和对应特征提取模块的输出组成。Furthermore, the reconstruction module comprises five upsampling modules, where the input of each upsampling module combines the output of the preceding module with the output of the corresponding feature extraction module.
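A hedged PyTorch sketch of the topology described above (five feature-extraction blocks, three attention blocks, five upsampling blocks with skip connections). The channel widths, the block internals and the use of multi-head self-attention are assumptions; the patent's figures define the actual structure:

```python
import torch
import torch.nn as nn

class ExtractBlock(nn.Module):
    """Assumed encoder block: depthwise dilated conv, pointwise conv, downsample."""
    def __init__(self, cin, cout):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(cin, cin, 3, padding=2, dilation=2, groups=cin),  # depthwise, dilated
            nn.Conv2d(cin, cout, 1),                                    # pointwise
            nn.ReLU(inplace=True),
            nn.MaxPool2d(2),                                            # halve H and W
        )

    def forward(self, x):
        return self.net(x)

class UpBlock(nn.Module):
    """Assumed upsampling block: concatenate the skip feature, then upsample x2."""
    def __init__(self, cin, cout):
        super().__init__()
        self.up = nn.ConvTranspose2d(cin, cout, 2, stride=2)

    def forward(self, x, skip):
        return self.up(torch.cat([x, skip], dim=1))

class IWSNetSketch(nn.Module):
    """5 extraction blocks -> 3 self-attention blocks -> 5 upsampling blocks."""
    def __init__(self, n_classes=4):  # wall / window / window sill / background
        super().__init__()
        c = [3, 16, 32, 64, 128, 256]                       # assumed channel widths
        self.enc = nn.ModuleList(ExtractBlock(a, b) for a, b in zip(c, c[1:]))
        self.attn = nn.ModuleList(
            nn.MultiheadAttention(256, 4, batch_first=True) for _ in range(3))
        self.dec = nn.ModuleList([
            UpBlock(512, 128), UpBlock(256, 64), UpBlock(128, 32),
            UpBlock(64, 16), UpBlock(32, n_classes)])

    def forward(self, x):
        skips = []
        for blk in self.enc:                 # save every extraction block's output
            x = blk(x)
            skips.append(x)
        b, ch, h, w = x.shape
        seq = x.flatten(2).transpose(1, 2)   # (B, H*W, 256) spatial tokens
        for attn in self.attn:               # suppress background, enhance targets
            seq, _ = attn(seq, seq, seq)
        x = seq.transpose(1, 2).reshape(b, ch, h, w)
        for up, skip in zip(self.dec, reversed(skips)):
            x = up(x, skip)                  # fuse the skip feature, upsample x2
        return x                             # (B, n_classes, H, W) mask logits
```

The four output channels correspond to the wall, window, window-sill and background masks.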
作为进一步的技术方案,所述方法还包括:As a further technical solution, the method further includes:
在得到墙壁、窗户、窗沿以及背景的掩膜图像后,提取窗户图像,并对提取的窗户图像依次进行滤波、阈值分割和开运算处理;After obtaining the mask images of the wall, window, window edge and background, the window image is extracted, and the extracted window image is filtered, threshold segmented and opened in sequence;
获取处理后窗户图像上的若干最小内接矩形及每一矩形的顶点坐标及面积;Obtaining several minimum inscribed rectangles on the processed window image and the vertex coordinates and area of each rectangle;
根据获取的矩形数量、每一矩形的顶点坐标及面积,计算推拉窗开口比例。The sliding window opening ratio is calculated based on the number of rectangles obtained, the vertex coordinates and the area of each rectangle.
具体地，利用窗户的掩膜将图像中的窗户提取出来，并进行高斯滤波，去除噪声；再利用阈值分割，生成二值图像；接着将二值图像进行开运算，去除二值图像上的毛点。在获得的图像上寻找最小内接矩形，统计矩形数量，并获取其四个顶点的坐标。最后根据矩形数量、四个顶点坐标以及面积，计算推拉窗开口比例。Specifically, the window mask is used to extract the window region from the image, and Gaussian filtering removes noise; threshold segmentation then produces a binary image, and a morphological opening removes speckle from it. The minimum inscribed rectangles are found in the resulting image, their number is counted, and the coordinates of their four vertices are obtained. Finally, the opening ratio of the sliding window is computed from the number of rectangles, the vertex coordinates and the areas.
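The final ratio step can be sketched as follows, assuming the rectangles' vertex coordinates have already been extracted (the filtering, thresholding and opening would typically use OpenCV). Taking the opening ratio as the summed rectangle area over the full window-mask area is one plausible reading of the described calculation:

```python
def quad_area(pts):
    """Shoelace area of a quadrilateral given 4 (x, y) vertices in order."""
    s = 0.0
    for (x1, y1), (x2, y2) in zip(pts, pts[1:] + pts[:1]):
        s += x1 * y2 - x2 * y1
    return abs(s) / 2.0

def opening_ratio(rects, window_area):
    """Assumed ratio: summed area of the opening rectangles over the
    area of the whole window mask."""
    return sum(quad_area(r) for r in rects) / window_area
```

For example, one 10 x 20 opening rectangle in a 400-pixel window mask gives a ratio of 0.5.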
作为进一步的技术方案,所述方法还包括:As a further technical solution, the method further includes:
在得到墙壁、窗户、窗沿以及背景的掩膜图像后,利用霍夫线检测,获取窗沿上端的直线和窗沿下端的直线;After obtaining the mask images of the wall, window, window edge and background, the straight line at the upper end of the window edge and the straight line at the lower end of the window edge are obtained by using Hough line detection;
计算两条直线的斜率,并根据斜率计算出两条直线的夹角;Calculate the slope of two straight lines, and calculate the angle between the two straight lines based on the slope;
利用深度神经网络模型DNN，结合拍摄装置与窗户的相对距离和角度，以及拍摄装置的俯仰角、偏航角和滚转角，得到外平开窗打开的实际角度。A deep neural network (DNN) model combines the camera-to-window distance and angle with the camera's own pitch, yaw and roll angles to obtain the actual opening angle of the outward-opening casement window.
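The in-image angle between the two sill lines follows from the standard two-slope formula; the DNN correction for camera pose is not reproduced here:

```python
import math

def casement_angle_deg(k1, k2):
    """In-image angle (degrees) between the upper and lower sill lines,
    computed from their slopes k1 and k2."""
    if abs(1.0 + k1 * k2) < 1e-12:   # perpendicular lines
        return 90.0
    return math.degrees(math.atan(abs((k1 - k2) / (1.0 + k1 * k2))))
```

This in-image angle, together with the camera's measured pose, would then be mapped by the DNN to the window's actual opening angle.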
作为进一步的技术方案,所述室内人员行为识别模型IDARM的训练进一步包括:As a further technical solution, the training of the indoor personnel behavior recognition model IDARM further includes:
获取室内人员热舒适行为的视频数据,并构建训练集;Obtain video data of indoor occupants’ thermal comfort behavior and construct a training set;
将所述训练集输入构建的室内人员行为识别模型IDARM进行训练,得到满足精度要求的模型参数;Input the training set into the constructed indoor personnel behavior recognition model IDARM for training to obtain model parameters that meet the accuracy requirements;
将所述模型参数加载到室内人员行为识别模型IDARM,得到训练好的室内人员行为识别模型IDARM。The model parameters are loaded into the indoor person behavior recognition model IDARM to obtain a trained indoor person behavior recognition model IDARM.
进一步地,构建训练集包括对多名受试者进行以下6种动作的视频采集:①坐;②走;③用手扇风;④抖衣服;⑤搓手;⑥抱肩,其中,每段视频长3至5秒,帧率为30FPS;将收集到的视频按照1:1的比例分为训练集和测试集。Furthermore, constructing the training set includes collecting videos of multiple subjects performing the following six actions: ① sitting; ② walking; ③ fanning with hands; ④ shaking clothes; ⑤ rubbing hands; ⑥ hugging shoulders, where each video is 3 to 5 seconds long and has a frame rate of 30FPS; the collected videos are divided into training set and test set in a 1:1 ratio.
作为进一步的技术方案,所述室内人员行为识别模型IDARM的运算过程包括:As a further technical solution, the operation process of the indoor personnel behavior recognition model IDARM includes:
将视频帧序列和逆光流序列分别进行特征提取,得到视频特征和逆光流特征;Extract features from the video frame sequence and the inverse optical flow sequence respectively to obtain video features and inverse optical flow features;
将视频特征、逆光流特征和位置编码通过编码模块Encoder,进行数据增强,得到增强后的视频特征;The video features, inverse optical flow features and position encoding are passed through the encoding module Encoder to perform data enhancement to obtain enhanced video features;
将增强后的视频特征、逆光流特征和人体骨骼关键点序列作为解码模块Decoder的输入,输出光流查询和内容查询;The enhanced video features, inverse optical flow features and human skeleton key point sequence are used as the input of the decoding module Decoder, and the optical flow query and content query are output;
将光流查询和内容查询进行多头线性变化,并通过全连接网络和Softmax函数获得与热舒适行为相关动作对应的置信度。The optical flow query and content query are subjected to multi-head linear transformation, and the confidence corresponding to the actions related to thermal comfort behavior is obtained through a fully connected network and a Softmax function.
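The last step, mapping the fused query features to per-action confidences, can be sketched with an assumed feature dimension; `W` and `b` stand for the learned fully connected layer:

```python
import numpy as np

# The six thermal-comfort actions named in the training-set description.
ACTIONS = ["sit", "walk", "fan with hand", "shake clothes",
           "rub hands", "hug shoulders"]

def softmax(z):
    """Numerically stable Softmax over a 1-D logit vector."""
    e = np.exp(z - z.max())
    return e / e.sum()

def action_confidences(query_feat, W, b):
    """Final head: fully connected layer, then Softmax over the 6 actions."""
    return softmax(W @ query_feat + b)
```

The index of the highest confidence selects the recognized thermal-comfort behavior.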
作为进一步的技术方案,所述方法还包括:As a further technical solution, the method further includes:
构建数据张量(T, H, V, C, L, A)，其中，T是空气温度，H是相对湿度，V是空气流速，C是二氧化碳浓度，L是窗户打开程度，A是室内人员做出的行为；Construct the data tensor (T, H, V, C, L, A), where T is the air temperature, H the relative humidity, V the air velocity, C the carbon dioxide concentration, L the window opening degree, and A the behavior of the indoor occupants;
根据所述数据张量,利用室内温湿度调节控制算法获取最佳调控策略;According to the data tensor, an optimal control strategy is obtained by using an indoor temperature and humidity control algorithm;
根据所述最佳调控策略,调节室内供暖、通风及空气调节控制系统。According to the optimal control strategy, the indoor heating, ventilation and air conditioning control systems are adjusted.
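The patent does not disclose the control algorithm's internals; the following is an illustrative rule-based sketch in which every threshold and rule is an assumption:

```python
def control_strategy(T, H, V, C, L, A):
    """Illustrative rule set (all thresholds are assumed, not from the patent).
    T air temperature (C), H relative humidity (%), V air velocity (m/s),
    C CO2 concentration (ppm), L window opening degree (0-1), A occupant action."""
    actions = []
    if A in ("fan with hand", "shake clothes"):     # occupant signals feeling hot
        actions.append("lower heating setpoint")
    elif A in ("rub hands", "hug shoulders"):       # occupant signals feeling cold
        actions.append("raise heating setpoint")
    if C > 1000:                                    # assumed CO2 comfort limit
        actions.append("increase fresh-air ventilation")
    if L > 0.5:                                     # window more than half open
        actions.append("reduce conditioning output")
    return actions
```

An optimizing controller, as the patent intends, would trade these adjustments off against energy consumption rather than apply fixed rules.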
另一方面,本发明提供一种室内暖通控制系统,包括:In another aspect, the present invention provides an indoor HVAC control system, comprising:
获取模块,用于获取室内视频数据及环境参数;An acquisition module is used to acquire indoor video data and environmental parameters;
窗户打开程度计算模块,用于将所述视频数据输入训练好的室内窗户分割模型IWS-Net,得到窗户分割掩膜图像,并根据所述窗户分割掩膜图像,计算得到室内窗户打开程度;A window opening degree calculation module is used to input the video data into a trained indoor window segmentation model IWS-Net to obtain a window segmentation mask image, and calculate the indoor window opening degree according to the window segmentation mask image;
室内人员热舒适行为识别模块,用于计算与所述视频数据对应的逆光流序列及人体骨骼关键点序列,并将所述视频数据、逆光流序列及人体骨骼关键点序列输入训练好的室内人员行为识别模型IDARM,得到室内人员热舒适行为;An indoor occupant thermal comfort behavior recognition module is used to calculate the inverse optical flow sequence and the human skeleton key point sequence corresponding to the video data, and input the video data, the inverse optical flow sequence and the human skeleton key point sequence into the trained indoor occupant behavior recognition model IDARM to obtain the indoor occupant thermal comfort behavior;
室内暖通控制模块,用于根据得到的室内窗户打开程度、室内人员热舒适行为及所述环境参数,确定室内暖通控制调整策略。The indoor HVAC control module is used to determine the indoor HVAC control adjustment strategy according to the obtained indoor window opening degree, indoor occupant thermal comfort behavior and the environmental parameters.
本发明还提供一种计算机可读存储介质,所述计算机可读存储介质上存储有计算机程序,其中所述计算机程序被处理器执行时,实现所述室内暖通控制方法的步骤。The present invention also provides a computer-readable storage medium, on which a computer program is stored, wherein when the computer program is executed by a processor, the steps of the indoor HVAC control method are implemented.
与现有技术相比,本发明的有益效果在于:Compared with the prior art, the present invention has the following beneficial effects:
(1)本发明利用室内窗户图像数据训练IWS-Net模型,实现对室内窗户图像中墙壁、窗户、窗沿和背景的分割,通过推拉窗开口比例算法和外平开窗打开角度算法计算窗户打开程度。(1) The present invention uses indoor window image data to train the IWS-Net model to achieve the segmentation of walls, windows, window edges and backgrounds in indoor window images, and calculates the window opening degree through the sliding window opening ratio algorithm and the casement window opening angle algorithm.
(2)本发明通过视频帧序列,计算逆光流序列和人体骨骼关键点序列,并将视频帧序列、逆光流序列和人体骨骼关键点序列进行整合,通过室内人员行为识别模型IDARM,实现端到端地识别室内人员做出的与热舒适相关的6种行为。(2) The present invention calculates the inverse optical flow sequence and the human skeleton key point sequence through the video frame sequence, and integrates the video frame sequence, the inverse optical flow sequence and the human skeleton key point sequence. Through the indoor occupant behavior recognition model IDARM, the end-to-end recognition of six behaviors related to thermal comfort performed by indoor occupants is achieved.
(3)本发明通过室内温、湿度调节控制算法,根据最佳调控策略控制暖通系统,平衡人员的热舒适度和建筑能源消耗,避免不必要的能源浪费,从而实现能源节约,同时提高了居住者的生产和生活质量,提高了建筑设备的可靠性和使用效率,减少了设备的故障率和维护成本,降低设备维护的工作量和费用。(3) The present invention controls the HVAC system according to the optimal control strategy through the indoor temperature and humidity adjustment control algorithm, balances the thermal comfort of personnel and the energy consumption of the building, avoids unnecessary energy waste, and thus achieves energy conservation. At the same time, it improves the production and living quality of the residents, improves the reliability and utilization efficiency of building equipment, reduces the failure rate and maintenance cost of equipment, and reduces the workload and cost of equipment maintenance.
附图说明BRIEF DESCRIPTION OF THE DRAWINGS
图1为本发明实施例中一种室内暖通控制方法的流程示意图;FIG1 is a schematic flow chart of an indoor HVAC control method according to an embodiment of the present invention;
图2为本发明实施例中窗户打开检测算法的流程图;FIG2 is a flow chart of a window opening detection algorithm according to an embodiment of the present invention;
图3为本发明实施例中室内窗户分割模型IWS-Net的整体网络结构图;FIG3 is an overall network structure diagram of an indoor window segmentation model IWS-Net according to an embodiment of the present invention;
图4为本发明实施例中室内窗户分割模型IWS-Net中特征提取模块结构图;FIG4 is a structural diagram of a feature extraction module in an indoor window segmentation model IWS-Net according to an embodiment of the present invention;
图5为本发明实施例中室内窗户分割模型IWS-Net中注意力模块结构图;FIG5 is a structural diagram of an attention module in an indoor window segmentation model IWS-Net according to an embodiment of the present invention;
图6为本发明实施例中室内窗户分割模型IWS-Net中上采样模块结构图;FIG6 is a structural diagram of an upsampling module in an indoor window segmentation model IWS-Net according to an embodiment of the present invention;
图7为本发明实施例中室内人员行为识别模型IDARM的整体网络结构图。FIG. 7 is an overall network structure diagram of an indoor personnel behavior recognition model IDARM according to an embodiment of the present invention.
实施方式Detailed Description of the Embodiments
以下将结合附图对本发明各实施例的技术方案进行清楚、完整的描述,显然,所描述的实施例仅仅是本发明的一部分实施例,而不是全部的实施例。基于本发明的实施例,本领域普通技术人员在没有做出创造性劳动的前提下所得到的所有其它实施例,都属于本发明所保护的范围。The following will clearly and completely describe the technical solutions of various embodiments of the present invention in conjunction with the accompanying drawings. Obviously, the described embodiments are only part of the embodiments of the present invention, rather than all of the embodiments. Based on the embodiments of the present invention, all other embodiments obtained by ordinary technicians in this field without making creative work are within the scope of protection of the present invention.
如图1所示,为本发明提供的一种室内暖通控制方法的流程示意图,其通过室内热舒适度测量仪采集室内热环境参数,同时使用摄像机采集室内全局图像;利用窗户打开检测算法和室内人员行为识别算法分别获得窗户打开的程度和室内人员做出的与热舒适有关的热行为;最后通过室内温、湿度调节控制算法,计算最佳调控策略,对室内供暖、通风、空气调节系统进行控制,平衡室内人员的热舒适度与建筑能源消耗。As shown in Figure 1, it is a flow chart of an indoor HVAC control method provided by the present invention, which collects indoor thermal environment parameters through an indoor thermal comfort measuring instrument, and uses a camera to collect the global indoor image; uses a window opening detection algorithm and an indoor occupant behavior recognition algorithm to respectively obtain the degree of window opening and the thermal behavior related to thermal comfort performed by indoor occupants; finally, through the indoor temperature and humidity adjustment control algorithm, the optimal control strategy is calculated to control the indoor heating, ventilation, and air conditioning systems to balance the thermal comfort of indoor occupants and the energy consumption of the building.
本发明中的核心技术有窗户打开检测算法、室内人员行为识别算法和室内温、湿度调节控制算法,接下来将结合图例具体阐述工作原理及模块功能。The core technologies in the present invention include window opening detection algorithm, indoor personnel behavior recognition algorithm and indoor temperature and humidity adjustment control algorithm. The working principle and module functions will be explained in detail with reference to the illustrations.
一、窗户打开检测算法1. Window Opening Detection Algorithm
窗户打开检测算法分为四个部分:(1)室内窗户分割模型IWS-Net;(2)推拉窗开口比例算法;(3)外平开窗打开角度算法;(4)窗户打开程度算法。窗户打开检测算法的流程如图2所示,利用室内窗户分割模型IWS-Net,生成所需的墙体、窗户、窗沿以及背景的掩膜图像;通过推拉窗开口比例算法计算出推拉窗的开口面积及所占比例;通过外平开窗打开角度算法计算出外平开窗实际打开的角度;最后计算出两类窗户实际的打开程度。The window opening detection algorithm is divided into four parts: (1) indoor window segmentation model IWS-Net; (2) sliding window opening ratio algorithm; (3) external casement window opening angle algorithm; (4) window opening degree algorithm. The process of the window opening detection algorithm is shown in Figure 2. The indoor window segmentation model IWS-Net is used to generate the required mask images of the wall, window, window edge and background; the sliding window opening ratio algorithm is used to calculate the opening area and proportion of the sliding window; the external casement window opening angle algorithm is used to calculate the actual opening angle of the external casement window; and finally, the actual opening degree of the two types of windows is calculated.
(1)室内窗户分割模型IWS-Net(1) Indoor window segmentation model IWS-Net
1)工作原理1) Working principle
室内窗户分割模型IWS-Net的整体结构如图3所示,其输入图像为固定尺寸(本方法采用输入大小),IWS-Net由Backbone、注意力模块以及重建模块组成。The overall structure of the indoor window segmentation model IWS-Net is shown in Figure 3. Its input image is of fixed size (this method uses input size), IWS-Net consists of Backbone, attention module and reconstruction module.
网络输入的RGB图像首先通过Backbone进行特征提取,Backbone由5个特征提取模块组成,在进行特征提取的过程中,对每一个模块的输出进行保存,并命名为。接着将Backbone的最终输出通过3个注意力模块,注意力模块的作用是抑制特征图中的背景信息,增强所需分割部分的特征信息。最后通过重建模块将经过注意力模块后的特征张量进行重建,最终生成所需的墙体、窗户、窗沿以及背景的掩膜图像。重建模块由5个上采样模块和一个深度可分离卷积组成,每一个上采样模块的输入是上一个上采样模块的输出以及backbone对应特征提取模块的输出,深度可分离卷积则是将最后一层上采样模块进行通道重整,使通道数等于4。The RGB image input by the network is first subjected to feature extraction by Backbone, which consists of five feature extraction modules. During the feature extraction process, the output of each module is saved and named . Then the final output of Backbone is passed through three attention modules. The function of the attention module is to suppress the background information in the feature map and enhance the feature information of the required segmented part. Finally, the feature tensor after the attention module is reconstructed through the reconstruction module to finally generate the required mask images of the wall, window, window edge and background. The reconstruction module consists of 5 upsampling modules and a depthwise separable convolution. The input of each upsampling module is the output of the previous upsampling module and the output of the corresponding feature extraction module of the backbone. The depthwise separable convolution reorganizes the channels of the last layer of upsampling modules so that the number of channels is equal to 4.
2)特征提取模块2) Feature extraction module
特征提取模块的具体结构如图4所示,假设输入的特征张量为,深度可分离卷积1的作用是将输入特征张量的通道进行重整,经过深度可分离卷积1后特征张量的通道数保持不变,将深度可分离卷积记为,则可以表示为以下公式:The specific structure of the feature extraction module is shown in Figure 4. Assume that the input feature tensor is The function of depthwise separable convolution 1 is to reorganize the channels of the input feature tensor. After depthwise separable convolution 1, the number of channels of the feature tensor remains unchanged. The depthwise separable convolution is recorded as , it can be expressed as the following formula:
其中为卷积核大小;为步长;为填充像素大小;为卷积输出后通道数;为特征图输入的通道数。为深度可分离卷积计算后的特征张量,为特征张量的高,为特征张量的宽。in is the convolution kernel size; is the step length; is the filling pixel size; The number of channels after convolution output; The number of channels input for the feature map. is the feature tensor calculated by depth-wise separable convolution, is the height of the feature tensor, is the width of the feature tensor.
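The shape bookkeeping described above (kernel size, stride, padding; the concrete symbols are not legible in this extraction, so generic parameter names are used) can be sketched with the standard convolution output-size rule:

```python
def conv_out_shape(h, w, k, s=1, p=0):
    """Output height/width of a convolution: floor((H + 2p - k) / s) + 1.

    A depthwise separable convolution that keeps the channel count, as
    described for depthwise separable convolution 1, changes only H and W
    by this rule (and not at all when k=3, s=1, p=1).
    """
    return (h + 2 * p - k) // s + 1, (w + 2 * p - k) // s + 1
```

For example, a 3x3 convolution with stride 1 and padding 1 preserves the spatial size, while stride 2 halves it, matching the downsampling step described later in this module.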
然后将通过不同感受野的卷积计算,提取不同的特征,由于空洞卷积会使特征图的尺寸发生改变,因此需要上采样操作,使三个特征图的尺寸相同,最后在通道维度上将三个特征图进行拼接。Then Different features are extracted through convolution calculations of different receptive fields. Since the hole convolution will change the size of the feature map, an upsampling operation is required to make the three feature maps of the same size. Finally, the three feature maps are spliced in the channel dimension.
其中,为空洞数占卷积核尺寸的比例,为三个特征张量,在通道维度上拼接后的输出张量;为采样函数,为拼接函数,,。深度可分离卷积1的感受野大小为3,空洞卷积1的感受野大小为5,空洞卷积2的感受野大小为7。in, is the ratio of the number of holes to the size of the convolution kernel, It is the output tensor of three feature tensors concatenated in the channel dimension; is the sampling function, is the concatenation function, , The receptive field size of depthwise separable convolution 1 is 3, the receptive field size of atrous convolution 1 is 5, and the receptive field size of atrous convolution 2 is 7.
最后,将进行降采样操作,特征图的尺寸变为原来的一半,并通过深度可分离卷积3将通道数改变成原来的2倍,即。Finally, The downsampling operation is performed, the size of the feature map becomes half of the original size, and the number of channels is changed to twice the original size through depthwise separable convolution 3, that is, .
其中,为降采样后的特征张量,。in, is the feature tensor after downsampling, .
3)Backbone3) Backbone
Backbone的结构图如图3所示,Backbone是由5个特征提取模块组成,假设输入的RGB图像为,则,那么第个特征提取模块输出的特征图记为,其中,,,为输入图像的高,为输入图像的宽。将每一个特征提取模块的输出进行保存,记为,同时Backbone的最终输出为,为Backbone各层提取的特征张量的列表,用于存储。The structure diagram of Backbone is shown in Figure 3. Backbone is composed of five feature extraction modules. Assume that the input RGB image is ,but , then The feature map output by the feature extraction module is denoted as ,in , , , is the height of the input image, is the width of the input image. The output of each feature extraction module is saved and recorded as , and the final output of Backbone is , A list of feature tensors extracted from each Backbone layer, used to store .
4)注意力模块4) Attention Module
注意力模块的结构图如图5所示，假设输入的特征图为，。首先将其展平为一维张量，接着对其进行线性变换，记为，则可以表示为：The structure diagram of the attention module is shown in Figure 5. Assume that the input feature map is , . First, flatten it into a one-dimensional tensor , then apply a linear transformation to it, denoted as , which can be expressed as:
其中,为经过线性变换的输出特征张量,。in, is the output feature tensor after linear transformation, .
对特征图的坐标进行编码,特征图的每一个点的坐标可以表示为,位置编码则是将坐标转换为通道数为的一维张量,具体公式如下所示:For feature maps The coordinates are encoded, and the feature map The coordinates of each point can be expressed as , position encoding is to convert the coordinates into the number of channels The one-dimensional tensor of is as follows:
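The exact coordinate-encoding formula is not legible in this extraction; the standard sinusoidal positional encoding (as in the Transformer) is a common choice for mapping a coordinate to a channel vector, and is sketched here as an assumption:

```python
import math

def positional_encoding(pos, d):
    """Standard sinusoidal encoding: even channels use sin, odd channels cos.

    The patent's exact formula is not shown in this text, so this follows
    the usual Transformer convention for encoding a scalar coordinate
    `pos` into a vector with `d` channels.
    """
    pe = []
    for i in range(d):
        freq = 10000 ** ((i - i % 2) / d)
        pe.append(math.sin(pos / freq) if i % 2 == 0 else math.cos(pos / freq))
    return pe
```

A 2D point would be encoded by applying this to each coordinate and combining the results, e.g. by concatenation.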
多头自注意力机制中的原理是将输入数据分为多个部分,每个部分都有一个独立的注意力头。每个注意力头都会计算输入数据中与其相关的权重,然后将这些权重加权求和,得到最终的输出结果。注意力机制所需三个输入,分别为,,,以下简写为,,。The principle of the multi-head self-attention mechanism is to divide the input data into multiple parts, each of which has an independent attention head. Each attention head calculates the weight associated with it in the input data, and then sums these weights to get the final output result. The attention mechanism requires three inputs, namely , , , which is abbreviated as , , .
以下为注意力机制的计算公式:The following is the calculation formula of the attention mechanism:
其中,为特征张量的通道数。in, for The number of channels of the feature tensor.
本模块中为;,均为与在通道维度上的和,可以表示为以下形式:In this module for ; , Both and The sum in the channel dimension can be expressed as follows:
多头自注意力机制的公式可以表达为以下形式:The formula of the multi-head self-attention mechanism can be expressed as follows:
其中,均为将张量投影到另一个空间的变换矩阵,将经过多头自注意力机制后的输出记为。in, Both are transformation matrices that project tensors into another space. The output after the multi-head self-attention mechanism is recorded as .
最后将和的和经过线性全连接层后，作为整个注意力模块的最终输出，可以表示为以下形式：Finally, the sum of and is passed through a linear fully connected layer as the final output of the entire attention module, which can be expressed as follows:
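The scaled dot-product attention that this module builds on is the standard formula softmax(QKᵀ/√d)V; a minimal single-head sketch in pure Python is shown below (the patent's own multi-head projection matrices are omitted, so this is an illustration, not the exact module):

```python
import math

def softmax(xs):
    # Numerically stable softmax over a list of scores.
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def attention(Q, K, V):
    """Single-head scaled dot-product attention on plain lists.

    Implements softmax(Q K^T / sqrt(d)) V; the multi-head variant in the
    text runs several of these in parallel on projected inputs and
    concatenates the results.
    """
    d = len(K[0])
    out = []
    for q in Q:
        scores = [sum(a * b for a, b in zip(q, k)) / math.sqrt(d) for k in K]
        w = softmax(scores)
        out.append([sum(wi * v[j] for wi, v in zip(w, V))
                    for j in range(len(V[0]))])
    return out
```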
5)上采样模块5) Upsampling module
上采样模块的结构图如图6所示,上采样模块的输入是上一层模块的输出和Backbone中相应特征提取模块的输出特征张量。记上一层模块的输出为,Backbone对应特征提取模块输出的特征张量记为。The structure diagram of the upsampling module is shown in Figure 6. The input of the upsampling module is the output of the previous layer module and the output feature tensor of the corresponding feature extraction module in Backbone. The output of the previous layer module is , the feature tensor output by the Backbone corresponding feature extraction module is recorded as .
假设,,在将其进行拼接后进行上采样操作,可以表示为以下形式:Assumptions , , after splicing them and performing upsampling, it can be expressed as follows:
其中,,。张量拼接是将两个特征张量在通道维度上进行拼接,所以拼接后的特征图尺寸不变,但通道数变为之前的两倍。上采样操作的目的是将输入特征图的尺寸变为原来的两倍。in, , Tensor concatenation is to concatenate two feature tensors in the channel dimension, so the size of the concatenated feature map remains unchanged, but the number of channels is doubled. The purpose of the upsampling operation is to double the size of the input feature map.
深度可分离卷积1的作用是将通道数减少一半,即The role of depthwise separable convolution 1 is to reduce the number of channels by half, that is
通道注意力是由全局平均池化和线性全连接层组成：Channel attention is composed of global average pooling and linear fully connected layers:
其中,代表全局平均池化,将原先在整个特征图上进行全局平均池化,,所以。in, Represents global average pooling, which converts the original Perform global average pooling on the entire feature map. ,so .
空间注意力是由通道平均池化和深度可分离卷积组成:Spatial attention is composed of channel average pooling and depth-wise separable convolution:
其中,代表通道平均池化,将原先的在通道上进行平均池化,,所以。in, Represents channel average pooling, which converts the original Perform average pooling on the channels, ,so .
深度可分离卷积2的作用与1相同,用代表深度可分离卷积1和2的计算,因此在上采样操作后,整个模块的输出是:The function of depthwise separable convolution 2 is the same as 1, using represents the computation of depthwise separable convolutions 1 and 2, so after the upsampling operation, the output of the entire module is:
6)重建模块6) Reconstruction module
重建模块的具体结构如图3中所示,重建模块由5个上采样模块和1个深度可分离卷积组成。每一个上采样模块的输入是上一个上采样模块的输出以及backbone对应特征提取模块的输出,深度可分离卷积则是将最后一层上采样模块进行通道重整,使通道数等于4。每一个通道与图像中的墙壁、窗户、窗沿和背景类相对应。The specific structure of the reconstruction module is shown in Figure 3. The reconstruction module consists of 5 upsampling modules and 1 depthwise separable convolution. The input of each upsampling module is the output of the previous upsampling module and the output of the backbone corresponding feature extraction module. The depthwise separable convolution reorganizes the channels of the last upsampling module to make the number of channels equal to 4. Each channel corresponds to the wall, window, window edge and background class in the image.
(2)推拉窗开口比例算法(2) Sliding window opening ratio algorithm
将室内窗户图像作为IWS-Net模型输入，生成窗户和窗沿的掩膜图像。利用窗户的掩膜将图像中的窗户提取出来，并进行高斯滤波，去除噪声，使用阈值分割，生成二值图像。将二值图像进行开运算，去除二值图像上的毛点。The indoor window image is fed into the IWS-Net model to generate mask images of the window and the window edge. The window mask is used to extract the window region from the image, Gaussian filtering is applied to remove noise, and threshold segmentation produces a binary image. An opening operation is then applied to the binary image to remove speckle noise.
在去噪后的图像中寻找最小内接矩形，统计矩形数量，并获取其四个顶点坐标，分别代表了矩形左上、右上、左下、右下角的坐标，以下将矩形数量分为三种情况，分别讨论：Find the minimum inscribed rectangles in the denoised image, count the rectangles, and obtain the four vertex coordinates of each, representing the upper-left, upper-right, lower-left and lower-right corners of the rectangle. The rectangle count falls into three cases, discussed separately below:
1)矩形数量为2，对于每一个矩形计算和，，分别为第个矩形的顶部、底部线段中点坐标的高度值，其中代表不同的矩形，在这种情况下，。若或者则判断窗户是关闭状态，否则判断为窗户完全打开，窗户的开口比例为100%。其中为人工设置的阈值。1) If the number of rectangles is 2, compute and for each rectangle, where , are the height values of the midpoints of the top and bottom segments of the -th rectangle and the index distinguishes the rectangles; in this case, . If or , the window is judged to be closed; otherwise the window is judged to be fully open, with an opening ratio of 100%. Here is a manually set threshold.
2)矩形数量为3,则判断窗户完全打开,窗户的开口比例为100%。2) If the number of rectangles is 3, the window is considered to be fully open and the window opening ratio is 100%.
3)矩形数量为4，对于处于左右两侧的两个矩形计算和。如果，则判断窗户开口靠左，窗户的开口比例为，反之，则判断窗户开口靠右，窗户的开口比例为。其中，，分别为最左边和最右边矩形的顶部线段中点坐标的高度；，分别为最左边和最右边矩形的面积；为第个矩形的面积，在这种情况下，。3) If the number of rectangles is 4, compute and for the two rectangles on the left and right sides. If , the window opening is judged to be on the left, with an opening ratio of ; otherwise, the opening is judged to be on the right, with an opening ratio of . Here , are the heights of the midpoints of the top segments of the leftmost and rightmost rectangles; , are the areas of the leftmost and rightmost rectangles; is the area of the -th rectangle; in this case, .
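The three cases above can be sketched as follows. The exact comparisons are not fully legible in this extraction, so the case logic below is one plausible reading; `tau` is the manually set height threshold, and the corner ordering matches the text (top-left, top-right, bottom-left, bottom-right):

```python
def sliding_window_ratio(rects, tau=5.0):
    """Opening ratio of a sliding window from its detected sash rectangles.

    rects: list of rectangles, each given as four (x, y) corners in the
    order top-left, top-right, bottom-left, bottom-right.
    """
    def top_mid_y(r):
        # Height of the midpoint of the rectangle's top edge.
        return (r[0][1] + r[1][1]) / 2.0

    def area(r):
        return abs(r[1][0] - r[0][0]) * abs(r[2][1] - r[0][1])

    n = len(rects)
    if n == 2:
        # Two sashes at the same height fully overlap: window closed;
        # otherwise treat the window as fully open.
        return 0.0 if abs(top_mid_y(rects[0]) - top_mid_y(rects[1])) < tau else 1.0
    if n == 3:
        return 1.0  # per the text, three rectangles mean fully open
    if n == 4:
        rs = sorted(rects, key=lambda r: r[0][0])  # left to right
        left, right = rs[0], rs[-1]
        total = sum(area(r) for r in rs)
        # The opening sits on the side whose top edge is detected higher
        # (smaller y in image coordinates) -- an assumption.
        gap = area(left) if top_mid_y(left) < top_mid_y(right) else area(right)
        return gap / total
    raise ValueError("unexpected number of rectangles: %d" % n)
```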
(3)外平开窗打开角度算法(3) External casement window opening angle algorithm
将室内窗户图像作为IWS-Net模型输入,生成窗户和窗沿的掩膜图像。分别提取窗户和窗沿的掩膜图像,使用图像学开运算,减少图像的噪点和毛刺,之后进行阈值分割,同时对两幅图像使用霍夫线检测,获取窗沿上端的直线,和窗户下端的斜线。分别获取两条直线的斜率,和截距,。The indoor window image is used as the input of the IWS-Net model to generate mask images of the window and window edge. The mask images of the window and window edge are extracted respectively, and the image opening operation is used to reduce the noise and burrs of the image. Then the threshold segmentation is performed, and the Hough line detection is used on the two images at the same time to obtain the straight line at the upper end of the window edge and the oblique line at the lower end of the window. The slopes of the two straight lines are obtained respectively. , and the intercept , .
通过两条直线的斜率计算两条直线的夹角,计算公式如下:The angle between two straight lines is calculated by the slope of the two straight lines. The calculation formula is as follows:
tanθ = |(k1 − k2) / (1 + k1·k2)|，即 θ = arctan|(k1 − k2) / (1 + k1·k2)|，其中k1、k2为两条直线的斜率。tan θ = |(k1 − k2) / (1 + k1·k2)|, i.e., θ = arctan|(k1 − k2) / (1 + k1·k2)|, where k1 and k2 are the slopes of the two lines.
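The angle between the two Hough lines follows directly from their slopes; a minimal helper (with perpendicular lines, where 1 + k1·k2 = 0, handled as 90°):

```python
import math

def angle_between_lines(k1, k2):
    """Angle in degrees between two lines with slopes k1 and k2,
    using tan(theta) = |(k1 - k2) / (1 + k1 * k2)|."""
    denom = 1.0 + k1 * k2
    if denom == 0.0:
        return 90.0  # perpendicular lines
    return math.degrees(math.atan(abs((k1 - k2) / denom)))
```

For example, a line of slope 1 and a horizontal line (slope 0) meet at 45°.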
建立DNN模型,设置其输入张量长度是5,隐藏层数量为3,隐藏层神经元数量为10,输出层神经元数量为1。Establish a DNN model, set its input tensor length to 5, the number of hidden layers to 3, the number of hidden layer neurons to 10, and the number of output layer neurons to 1.
将图像中窗户打开角度,相机与窗户的相对距离,相机与窗户正面的夹角,相机的俯仰角、偏航角和滚转角,进行拼接,形成张量。Open the windows in the image , the relative distance between the camera and the window , the angle between the camera and the front of the window , the camera's pitch angle , yaw angle and roll angle , concatenate to form a tensor .
通过DNN从中预测实际中窗户打开夹角。Through DNN from Predict the actual window opening angle .
(4)窗户打开程度算法(4) Window opening degree algorithm
(2)中计算获得的推拉窗开口比例能够代表窗户打开的程度,(3)中计算获得的外平开窗实际打开角度,并不能代表窗户打开的程度,因此要得到外平开窗打开程度需要将实际打开角度除以最大打开角度。The sliding window opening ratio calculated in (2) can represent the degree of window opening. The actual opening angle of the casement window calculated in (3) cannot represent the degree of window opening. Therefore, to obtain the degree of window opening, the actual opening angle needs to be divided by the maximum opening angle.
窗户打开程度 = 外平开窗实际打开角度 ÷ 最大打开角度，Window opening degree = actual opening angle of the casement window ÷ maximum opening angle,
其中,是窗户打开程度,其取值范围为。in, is the window opening degree, and its value range is .
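The division above can be written as a one-line helper; the default maximum of 90° is an assumption, since the true maximum depends on the window hardware:

```python
def casement_opening_degree(actual_angle, max_angle=90.0):
    # Opening degree = actual opening angle / maximum opening angle,
    # clamped to the [0, 1] range stated in the text.
    return max(0.0, min(1.0, actual_angle / max_angle))
```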
二、室内人员行为识别算法2. Indoor Person Behavior Recognition Algorithm
室内人员行为识别算法通过输入的视频帧序列，计算逆光流序列和人体骨骼关键点序列，并将其作为室内人员行为识别模型IDARM的输入，进行端到端的室内人员行为识别。其主要能够识别出室内人员与热舒适相关的最常做的6种动作，分别是①坐；②走；③用手扇风；④抖衣服；⑤搓手；⑥抱肩。其中①、②代表热中性的行为，而其余4种代表热不适的行为；更具体地，③、④代表感觉热的行为，⑤、⑥代表感觉冷的行为。The indoor occupant behavior recognition algorithm computes the inverse optical flow sequence and the human skeleton key point sequence from the input video frame sequence, and feeds them into the indoor occupant behavior recognition model IDARM for end-to-end behavior recognition. It mainly recognizes the six most common actions of indoor occupants related to thermal comfort: ① sitting; ② walking; ③ fanning with the hands; ④ shaking clothes; ⑤ rubbing hands; ⑥ hugging shoulders. Among them, ① and ② represent thermally neutral behaviors, while the other four represent thermal discomfort; more specifically, ③ and ④ indicate feeling hot, and ⑤ and ⑥ indicate feeling cold.
接下来将详细介绍逆光流序列和人体骨骼关键点序列的生成和室内人员行为识别模型IDARM。Next, the generation of inverse optical flow sequences and human skeleton key point sequences and the indoor person behavior recognition model IDARM will be introduced in detail.
(1)逆光流序列和人体骨骼关键点序列(1) Inverse optical flow sequence and human skeleton key point sequence
假设输入的视频帧序列为,共帧图像。为了获取人体骨骼关键点序列,使用Python中的Mediapipe库对每一帧进行人体骨骼关键点检测,每帧图像可以检测出33个骨骼关键点,每个点的坐标,其中。Assume that the input video frame sequence is ,common In order to obtain the human skeleton key point sequence, the Mediapipe library in Python is used to detect the human skeleton key points for each frame. 33 skeleton key points can be detected for each frame, and the coordinates of each point are ,in .
因此,人体骨骼关键点序列可以表示为,其中代表了第帧中所有骨骼关键点的坐标。Therefore, the human skeleton key point sequence can be expressed as ,in Represents the The coordinates of all bone key points in the frame.
为了获取逆光流序列,使用了预训练RAFT模型计算逆光流。选取视频帧序列中相邻的两帧和。正常的光流计算如下式所示:In order to obtain the inverse optical flow sequence, the pre-trained RAFT model is used to calculate the inverse optical flow. Select two adjacent frames in the video frame sequence and The normal optical flow calculation is as follows:
为了获取逆光流,则应该将输入的相邻两帧进行交换,同时将光流的速度方向取反,具体公式如下所示:In order to obtain the inverse optical flow, the two adjacent input frames should be swapped and the speed direction of the optical flow should be reversed. The specific formula is as follows:
所以,逆光流序列可以表示为。Therefore, the inverse optical flow sequence can be expressed as .
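The swap-and-negate construction above can be sketched as a thin wrapper; `flow_fn` stands in for the pretrained RAFT model (an assumption for illustration), taking two frames and returning a grid of (u, v) motion vectors:

```python
def inverse_optical_flow(flow_fn, frame_t, frame_t1):
    """Inverse optical flow between consecutive frames.

    Per the text: swap the order of the two input frames, then negate
    the direction of the resulting flow vectors.
    """
    flow = flow_fn(frame_t1, frame_t)        # swapped input order
    return [[(-u, -v) for (u, v) in row] for row in flow]
```

In practice `flow_fn` would wrap the pretrained RAFT network; here any callable with the same shape of output works.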
(2)室内人员行为识别模型IDARM(2) Indoor Person Behavior Identification Model IDARM
为了使逆光流序列、视频帧序列以及骨骼关键点序列的长度相同,去除第一帧视频图像及其检测到的骨骼关键点坐标。室内人员行为识别模型IDARM的整体网络结构图如图7所示。In order to make the length of the inverse optical flow sequence, video frame sequence and skeleton key point sequence the same, the first frame of video image and its detected skeleton key point coordinates are removed. The overall network structure diagram of the indoor person behavior recognition model IDARM is shown in Figure 7.
首先,将视频帧序列和逆光流序列分别通过两个Backbone进行特征提取,得到视频特征和逆光流特征。接下来使用Encoder模块和Decoder模块对人员行为进行识别。First, the video frame sequence and the inverse optical flow sequence are extracted through two backbones to obtain the video features. and backlight flow features Next, the Encoder module and the Decoder module are used to identify human behavior.
Encoder模块将,和位置编码通过自注意力模块,增强视频特征。之后对增强后的视频特征进行标准化,同时加上未增强的,最后使用FFN进行维度映射。堆叠N个Encoder模块,其中N为6。其中第i个Encoder模块可以表示为以下形式:The Encoder module will , And position encoding through the self-attention module to enhance video features After that, the enhanced video features Standardize and add unenhanced , and finally use FFN for dimension mapping. Stack N Encoder modules, where N is 6. The i-th Encoder module can be expressed as follows:
其中,是位置编码,是自注意力模块,是标准化函数,是全连接层,将N个Encoder层堆叠的最终输出记为。in, is the positional encoding, is the self-attention module, is the normalization function, is a fully connected layer, and the final output of the N Encoder layers stacked is recorded as .
Decoder模块将一般的查询张量进一步细化,分为光流查询和内容查询,其中。利用骨骼关键点序列提供对应坐标,首先对其进行坐标编码,其编码原理与位置编码原理相同获得。The Decoder module further refines the general query tensor into optical flow queries and content query ,in . Use the skeleton key point sequence to provide the corresponding coordinates First, coordinate encoding is performed, and the encoding principle is the same as the position encoding principle to obtain .
首先，对光流查询通过自注意力模块，接着使用可形变注意力模块更新光流查询，因此更新过程可以表示为以下形式：First, the optical flow query is passed through the self-attention module, and then the deformable attention module is used to update the optical flow query. The update process can therefore be expressed as follows:
其中,代表可形变注意力模块。in, Represents deformable attention module.
对更新后的光流查询和内容查询的和通过自注意力模块进行更新,获得混合查询,最后使用可形变注意力模块更新内容查询。因此更新过程可以表示为以下形式:,同样,与Encoder相同,Decoder也需要堆叠N个,N在本发明中设置为6。记Decoder的最终输出的光流查询和内容查询为和。The sum of the updated optical flow query and content query is updated through the self-attention module to obtain a mixed query, and finally the deformable attention module is used to update the content query. Therefore, the update process can be expressed as follows: , similarly, like the Encoder, the Decoder also needs to be stacked N times, and N is set to 6 in the present invention. The optical flow query and content query of the final output of the Decoder are recorded as and .
最后通过多头线性变换将和进行融合，计算6种行为的置信度分数，可以表示为以下形式：Finally, and are fused through a multi-head linear transformation to compute the confidence scores of the six behaviors, which can be expressed as follows:
其中,代表的是帧中每个行为的置信度分数,为了确定输入的视频序列所代表的行为,因此需要对沿第一维度进行相加,再将其映射至0到1之间,因此需要通过Softmax函数计算每种行为的置信度分数,其具体可以表示为以下形式:in, Represents The confidence score of each behavior in the frame. In order to determine the behavior represented by the input video sequence, it is necessary to Add along the first dimension and map it to between 0 and 1. Therefore, the confidence score of each behavior needs to be calculated through the Softmax function, which can be specifically expressed as follows:
其中,为最终输出的置信度分数。选取置信度分数最大的行为作为模型输出。in, is the confidence score of the final output. Select the behavior with the largest confidence score as model output.
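The sum-then-Softmax step described above can be sketched as follows; `frame_scores` has one row per frame and one column per behavior (six here):

```python
import math

def classify_behavior(frame_scores):
    """Pick the output behavior from per-frame confidence scores.

    Scores are summed over frames (the first dimension), mapped into
    (0, 1) with Softmax, and the behavior with the largest confidence
    score is returned as the model output.
    """
    totals = [sum(col) for col in zip(*frame_scores)]
    m = max(totals)                       # subtract max for stability
    exps = [math.exp(t - m) for t in totals]
    z = sum(exps)
    probs = [e / z for e in exps]
    return probs.index(max(probs)), probs
```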
三、室内温、湿度调节控制算法3. Indoor temperature and humidity control algorithm
假设是空气温度，是相对湿度，是空气流速，是二氧化碳浓度，是窗户打开程度，是室内人员做出的行为。室内人员做出的行为分为三类：①、②记为第一类，输出分数为0；③、④记为第二类，输出分数为-1；⑤、⑥记为第三类，输出分数为1。将输出分数记为。Assume that is the air temperature, is the relative humidity, is the air velocity, is the carbon dioxide concentration, is the window opening degree, and is the behavior of the indoor occupants. The behaviors of indoor occupants fall into three categories: ① and ② form the first category, with an output score of 0; ③ and ④ form the second category, with an output score of -1; ⑤ and ⑥ form the third category, with an output score of 1. The output score is denoted as .
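The class-to-score mapping can be written directly as a table (behavior numbering follows the list in the recognition section above):

```python
# Behavior classes 1-6 mapped to the output score: sitting/walking are
# thermally neutral (0), fanning/shaking clothes mean feeling hot (-1),
# rubbing hands/hugging shoulders mean feeling cold (+1).
BEHAVIOR_SCORE = {
    1: 0,   # sitting
    2: 0,   # walking
    3: -1,  # fanning with the hands
    4: -1,  # shaking clothes
    5: 1,   # rubbing hands
    6: 1,   # hugging shoulders
}
```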
当室内有人员活动时，首先将室内温度保持在25℃，湿度保持在60%，同时空气流速保持在0.25m/s。对于温、湿度和空气流速的调控，将使用额定值，并根据影响因素进行调整。When there is occupant activity in the room, the indoor temperature is first kept at 25℃, the humidity at 60%, and the air flow rate at 0.25 m/s. Temperature, humidity and air flow rate are regulated using rated values, adjusted according to the influencing factors.
首先,将开窗程度分为三类:(1)认为是窗户关闭状态;(2)认为是需要增加室内空气流速,但不想改变室内温度;(3)认为是将室内温、湿度调节为与室外相同。First, the degree of window opening is divided into three categories: (1) Assuming that the window is closed; (2) Think that the indoor air flow rate needs to be increased, but the indoor temperature does not want to change; (3) It is considered to be adjusting the indoor temperature and humidity to the same as the outdoor temperature and humidity.
因此，特殊状况为开窗程度属于第(3)类时，将关闭温、湿度和空气调控系统。Therefore, in the special case where the window opening degree falls into category (3), the temperature, humidity and air regulation systems are turned off.
室内温度的调控策略如下所示:The indoor temperature control strategy is as follows:
其中,为温度调节的额定值,即对室内温度调控的改变量,本发明中设置为2;为策略输出当前室内温度应该调整为的数值。in, is the rated value of temperature regulation, i.e., the change in indoor temperature regulation, which is set to 2 in the present invention; Output the value to which the current indoor temperature should be adjusted for the strategy.
该策略的意义是,当窗户关闭时,根据人员行为的类别进行温度调节。而当窗户开启时,则需要根据人员行为以及窗户打开程度进行调控,窗户打开程度越大,为了维持室内温度,则需要增加该变量。The significance of this strategy is that when the window is closed, the temperature is adjusted according to the type of human behavior. When the window is open, it needs to be regulated according to the human behavior and the degree of window opening. The greater the degree of window opening, the more this variable needs to be increased in order to maintain the indoor temperature.
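The patent's exact formula is not legible in this extraction, so the sketch below encodes only the prose: with the window closed, shift the 25℃ setpoint by the rated value (2) according to the behavior score (-1 hot, 0 neutral, +1 cold); with the window open, scale the shift up with the opening degree. The `(1 + opening)` factor is an assumption:

```python
def target_temperature(score, opening, base=25.0, rated=2.0):
    """Target indoor temperature from the behavior score and window opening.

    score: -1 (feels hot), 0 (neutral), +1 (feels cold).
    opening: window opening degree in [0, 1].
    """
    if opening <= 0.0:
        return base + score * rated
    # Window open: the wider the opening, the larger the correction
    # needed to hold the indoor temperature (assumed scaling).
    return base + score * rated * (1.0 + opening)
```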
室内空气流速的调控策略如下所示:The control strategy of indoor air velocity is as follows:
其中,为空气流速调节的额定值,即对室内空气流速调控的改变量,本发明中设置为0.005;为策略输出当前室内空气流速应该调整为的数值;是逻辑函数,当括号里的内容为真,则输出1,反之,则输出0。in, is the rated value of air flow rate regulation, i.e., the change in indoor air flow rate regulation, which is set to 0.005 in the present invention; Output the value to which the current indoor air flow rate should be adjusted for the strategy; It is a logical function. When the content in the brackets is true, it outputs 1, otherwise it outputs 0.
该策略的意义是:当窗户关闭时,二氧化碳浓度过高时增加空气流速。而当窗户开启时,窗户打开程度越大,其能够带来的空气流速也越大,因此可以将空气流速的改变量减小。The significance of this strategy is that when the window is closed, the air flow rate is increased when the carbon dioxide concentration is too high. When the window is open, the greater the window opening, the greater the air flow rate it can bring, so the change in air flow rate can be reduced.
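Again the original formula is not legible here, so this sketch follows the prose: with the window closed, raise the flow by the rated value (0.005) only when CO2 is too high (the logical indicator function in the text); with the window open, shrink the adjustment as the opening grows. The CO2 limit and the `(1 - opening)` factor are assumptions:

```python
def target_airflow(co2, opening, base=0.25, rated=0.005, co2_limit=1000.0):
    """Target indoor air speed (m/s) from CO2 level and window opening.

    co2: carbon dioxide concentration (ppm assumed); opening in [0, 1].
    """
    high_co2 = 1 if co2 > co2_limit else 0  # logical indicator function
    if opening <= 0.0:
        return base + rated * high_co2
    # An open window already supplies airflow, so reduce the adjustment
    # as the opening degree grows (assumed scaling).
    return base + rated * high_co2 * (1.0 - opening)
```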
室内湿度的调控策略如下所示:The indoor humidity control strategy is as follows:
其中，为室内湿度调节的额定值，即对室内湿度调控的改变量，本发明设置为0.05；为策略输出当前室内湿度应该调整为的数值；为室内湿度的改变量，即前一时刻测量的湿度数值减去当前时刻测量的湿度数值。Here, is the rated value of indoor humidity regulation, i.e., the adjustment step for indoor humidity, set to 0.05 in the present invention; is the value to which the strategy outputs that the current indoor humidity should be adjusted; is the change in indoor humidity, i.e., the humidity measured at the previous moment minus the humidity measured at the current moment.
该策略的意义是:当窗户关闭时,保持室内湿度为0.6,当窗户打开后,如果室内湿度下降,则在0.6的基础上进行升高,保持室内湿度在一段时间内的均值为0.6。反之,则在0.6的基础上进行减少。The significance of this strategy is: when the window is closed, the indoor humidity is maintained at 0.6. When the window is opened, if the indoor humidity drops, it is increased on the basis of 0.6 to maintain the average indoor humidity at 0.6 over a period of time. Otherwise, it is reduced on the basis of 0.6.
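As with the other two strategies, the original formula is not legible here; the sketch below follows the prose, with the sign-based rule an assumed reading: hold 0.6 when the window is closed, and when open, raise the setpoint by the rated value (0.05) if humidity fell (so the running mean stays at 0.6) and lower it if humidity rose:

```python
def target_humidity(delta_h, opening, base=0.6, rated=0.05):
    """Target relative humidity from its recent change and window opening.

    delta_h: previous humidity reading minus current reading (positive
    when indoor humidity has dropped); opening in [0, 1].
    """
    if opening <= 0.0:
        return base          # window closed: hold 0.6
    if delta_h > 0:
        return base + rated  # humidity fell: compensate upward
    if delta_h < 0:
        return base - rated  # humidity rose: compensate downward
    return base
```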
本发明还提供一种室内暖通控制系统,包括:The present invention also provides an indoor HVAC control system, comprising:
获取模块,用于获取室内视频数据及环境参数;An acquisition module is used to acquire indoor video data and environmental parameters;
窗户打开程度计算模块,用于将所述视频数据输入训练好的室内窗户分割模型IWS-Net,得到窗户分割掩膜图像,并根据所述窗户分割掩膜图像,计算得到室内窗户打开程度;A window opening degree calculation module is used to input the video data into a trained indoor window segmentation model IWS-Net to obtain a window segmentation mask image, and calculate the indoor window opening degree according to the window segmentation mask image;
室内人员热舒适行为识别模块,用于计算与所述视频数据对应的逆光流序列及人体骨骼关键点序列,并将所述视频数据、逆光流序列及人体骨骼关键点序列输入训练好的室内人员行为识别模型IDARM,得到室内人员热舒适行为;An indoor occupant thermal comfort behavior recognition module is used to calculate the inverse optical flow sequence and the human skeleton key point sequence corresponding to the video data, and input the video data, the inverse optical flow sequence and the human skeleton key point sequence into the trained indoor occupant behavior recognition model IDARM to obtain the indoor occupant thermal comfort behavior;
室内暖通控制模块,用于根据得到的室内窗户打开程度、室内人员热舒适行为及所述环境参数,计算室内暖通控制调整策略。The indoor HVAC control module is used to calculate the indoor HVAC control adjustment strategy according to the obtained indoor window opening degree, indoor occupant thermal comfort behavior and the environmental parameters.
所述系统可参照前述方法的实施方式来实现。The system can be implemented with reference to the implementation of the aforementioned method.
进一步地,所述视频数据通过摄像机获取。摄像机可放置于室内的顶部,确保其能够清晰地拍摄到室内全局和窗户的画面,并测量其与检测窗户的距离、角度以及自身的俯仰角、偏航角和滚转角。Furthermore, the video data is obtained by a camera. The camera can be placed on the top of the room to ensure that it can clearly capture the overall indoor and window images, and measure the distance and angle between it and the detection window, as well as its own pitch angle, yaw angle and roll angle.
具体地,按照一定频率对摄像机获取的视频进行抽样,获取相应的视频帧,再将所述视频帧输入室内窗户分割模型IWS-Net进行窗户分割。Specifically, the video acquired by the camera is sampled at a certain frequency to obtain corresponding video frames, and then the video frames are input into the indoor window segmentation model IWS-Net for window segmentation.
进一步地,所述环境参数通过室内热舒适度测量仪获取。室内热舒适度测量仪主要获取室内的空气温度、相对湿度、空气流速以及二氧化碳浓度。Furthermore, the environmental parameters are obtained by an indoor thermal comfort measuring instrument, which mainly obtains indoor air temperature, relative humidity, air flow rate and carbon dioxide concentration.
可选地，使用循环全对域变换器RAFT从摄像机获取的RGB视频帧序列计算出相应的逆光流序列，同时使用Mediapipe库对视频中的每一帧图像进行骨骼关键点检测，形成人体骨骼关键点序列。Optionally, the recurrent all-pairs field transforms (RAFT) model is used to compute the corresponding inverse optical flow sequence from the RGB video frame sequence acquired by the camera, and the Mediapipe library is used to perform skeleton key point detection on each frame of the video to form a human skeleton key point sequence.
进一步地,计算逆光流序列的操作如下:Furthermore, the operation of calculating the inverse optical flow sequence is as follows:
(1)选取视频帧序列相邻两帧记为和,并将其宽和高填充至8的倍数;(1) Select two adjacent frames in the video frame sequence and record them as and , and fill its width and height to multiples of 8;
(2)在将和进行交换后，作为预训练循环全对域变换器RAFT的输入，得到逆光流图；(2) After swapping and , they are fed into the pretrained recurrent all-pairs field transforms (RAFT) model to obtain the inverse optical flow map;
(3)将逆光流图中数值的方向取反,并重复(1),直至视频结束。(3) Reverse the direction of the values in the inverse optical flow map and repeat (1) until the video ends.
进一步地,计算每一帧中的人体骨骼关键点的操作如下:Furthermore, the operation of calculating the key points of the human skeleton in each frame is as follows:
(1)将视频帧序列逐帧分离;(1) Separate the video frame sequence frame by frame;
(2)使用Python中Mediapipe,对每一帧图像检测人体骨骼关键点;(2) Use Mediapipe in Python to detect the key points of the human skeleton for each frame of the image;
(3)将(2)中获得的33个人体骨骼关键点进行存储,并重复(1),直至结束。(3) Store the 33 human skeleton key points obtained in (2) and repeat (1) until the end.
所述室内窗户分割模型IWS-Net的训练进一步包括:The training of the indoor window segmentation model IWS-Net further includes:
采集室内窗户的图像,对所述图像进行预处理并形成训练集;Collecting images of indoor windows, preprocessing the images and forming a training set;
将所述训练集输入构建的室内窗户分割模型IWS-Net进行训练,得到满足精度要求的模型参数;Inputting the training set into the constructed indoor window segmentation model IWS-Net for training to obtain model parameters that meet the accuracy requirements;
将所述模型参数加载到室内窗户分割模型IWS-Net,得到训练好的室内窗户分割模型IWS-Net。The model parameters are loaded into the indoor window segmentation model IWS-Net to obtain a trained indoor window segmentation model IWS-Net.
进一步地,采集室内窗户的图像后,对所述图像使用labelme进行标注,将图像中的墙壁、窗户以及窗沿进行分割,其余部分划分为背景类。将所有数据按照的比例分为测试集和训练集。Furthermore, after collecting the images of indoor windows, the images are annotated using labelme, the walls, windows and window edges in the images are segmented, and the rest are classified as background. All data are divided into a test set and a training set according to the ratio.
The operation of the indoor window segmentation model IWS-Net includes:
using a backbone network composed of five sequential feature extraction modules to extract features from the input indoor window image, and saving the output of each feature extraction module;
using three sequentially connected attention modules to suppress background features and enhance the features of the regions to be segmented in the output of the last feature extraction module;
using a reconstruction module to reconstruct the output of the last attention module together with the outputs of the feature extraction modules, yielding mask images of the walls, windows, window sills, and background.
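The data flow described above (a five-stage backbone with saved intermediate outputs, three shape-preserving attention modules, and an upsampling reconstruction stage) can be sketched at the level of tensor shapes. The module internals and the downsampling factor are placeholders; only the wiring follows the text.

```python
# Shape-level sketch of the IWS-Net data flow described above.
# Each backbone stage is assumed to halve resolution; internals are omitted.

def iws_net_shapes(h, w, in_ch=3):
    saved = []
    ch = in_ch
    for _ in range(5):              # backbone: 5 feature extraction modules
        h, w, ch = h // 2, w // 2, ch * 2
        saved.append((h, w, ch))    # save every stage's output for skips
    deep = saved[-1]
    for _ in range(3):              # 3 attention modules re-weight the deepest
        pass                        # features without changing their shape
    rh, rw, _ = deep
    for _skip in reversed(saved):   # reconstruction: 5 upsampling modules,
        rh, rw = rh * 2, rw * 2     # each fusing a saved backbone output
    return (rh, rw, 4)              # one mask per class: wall, window,
                                    # sill, background

print(iws_net_shapes(256, 256))     # → (256, 256, 4)
```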
Further, the reconstruction module includes five upsampling modules, where the input of each upsampling module is composed of the output of the preceding module and the output of the corresponding feature extraction module.
The window opening degree calculation module is further configured to perform the following operations:
after obtaining the mask images of the walls, windows, window sills, and background, extract the window image, and apply filtering, threshold segmentation, and a morphological opening operation to it in sequence;
obtain the minimum inscribed rectangles in the processed window image and the vertex coordinates of each rectangle;
compute the opening ratio of a sliding window from the number of rectangles obtained and the vertex coordinates and area of each rectangle.
Specifically, the window mask is used to extract the window region from the image, and Gaussian filtering is applied to remove noise; threshold segmentation then produces a binary image, and a morphological opening operation removes small speckles from it. The minimum inscribed rectangles are located in the resulting image, their number is counted, and the coordinates of their four vertices are obtained. Finally, the opening ratio of the sliding window is computed from the number of rectangles, the four vertex coordinates, and the areas.
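The final step above can be sketched as follows. The source does not give the exact formula, so as an assumption the open fraction is taken here as the gap area relative to the full frame spanned by the detected rectangles; the rectangle coordinates are made up for illustration.

```python
# Illustrative sketch only: estimate the sliding-window opening ratio from
# the inscribed rectangles found on the binary window mask. The formula
# (gap area / frame area) is an assumption, not the patent's own.

def rect_area(verts):
    (x0, y0), (x1, y1) = verts[0], verts[2]  # opposite corners of the rectangle
    return abs(x1 - x0) * abs(y1 - y0)

def opening_ratio(rectangles):
    """rectangles: list of 4-vertex tuples, one per detected pane region."""
    xs = [x for r in rectangles for x, _ in r]
    ys = [y for r in rectangles for _, y in r]
    frame_area = (max(xs) - min(xs)) * (max(ys) - min(ys))
    pane_area = sum(rect_area(r) for r in rectangles)
    return max(0.0, (frame_area - pane_area) / frame_area)

# two closed panes each covering half the frame → nothing is open
panes = [((0, 0), (50, 0), (50, 100), (0, 100)),
         ((50, 0), (100, 0), (100, 100), (50, 100))]
print(opening_ratio(panes))  # → 0.0
```

In an OpenCV pipeline the rectangles would come from contour analysis of the opened binary image; only the geometry step is shown here.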
The window opening degree calculation module is further configured to perform the following operations:
after obtaining the mask images of the walls, windows, window sills, and background, use Hough line detection to obtain the line along the upper edge of the window sill and the line along its lower edge;
compute the slopes of the two lines and, from the slopes, the angle between them;
use a deep neural network model (DNN), together with the relative distance and angle between the camera and the window and the camera's pitch, yaw, and roll angles, to obtain the actual opening angle of the outward-opening casement window.
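The slope-to-angle step above follows the standard angle-between-lines formula tan(θ) = |(k1 − k2) / (1 + k1·k2)|. A minimal sketch, with illustrative slope values:

```python
# Sketch of the angle computation described above: the two Hough-detected
# sill lines give slopes k1 and k2, and the apparent angle between them
# follows from tan(theta) = |(k1 - k2) / (1 + k1 * k2)|.
import math

def angle_between(k1, k2):
    """Angle in degrees between two lines with slopes k1 and k2."""
    if 1 + k1 * k2 == 0:  # perpendicular lines
        return 90.0
    return math.degrees(math.atan(abs((k1 - k2) / (1 + k1 * k2))))

# lower sill edge horizontal, upper edge tilted by the opening casement
print(round(angle_between(0.0, math.tan(math.radians(30))), 1))  # → 30.0
```

This gives only the apparent angle in the image plane; as the text notes, the DNN then corrects it to the actual opening angle using the camera's distance, viewing angle, and pitch/yaw/roll.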
The training of the indoor personnel behavior recognition model IDARM further includes:
acquiring video data of indoor personnel's thermal comfort behaviors and constructing a training set;
feeding the training set into the constructed indoor personnel behavior recognition model IDARM for training, to obtain model parameters that meet the accuracy requirement;
loading the model parameters into IDARM, to obtain the trained indoor personnel behavior recognition model IDARM.
Further, constructing the training set includes recording videos of multiple subjects performing the following six actions: ① sitting; ② walking; ③ fanning with a hand; ④ shaking clothes; ⑤ rubbing hands; ⑥ hugging the shoulders. Each clip is 3 to 5 seconds long at a frame rate of 30 FPS; the collected videos are divided into a training set and a test set in a 1:1 ratio.
The operation of the indoor personnel behavior recognition model IDARM includes:
extracting features from the video frame sequence and the reverse optical flow sequence separately, to obtain video features and reverse optical flow features;
passing the video features, reverse optical flow features, and positional encoding through the Encoder module for data enhancement, to obtain enhanced video features;
feeding the enhanced video features, reverse optical flow features, and the human skeleton keypoint sequence into the Decoder module, which outputs optical flow queries and content queries;
applying a multi-head linear transformation to the optical flow queries and content queries, then obtaining, through a fully connected network and a Softmax function, the confidence of each action associated with thermal comfort behavior.
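The final classification step above turns the logits of the fully connected head into per-action confidences via Softmax. A minimal sketch; the six labels follow the training-set description, and the logit values are made up for illustration:

```python
# Sketch of the Softmax confidence step described above.
# The logits are hypothetical outputs of the fully connected head.
import math

ACTIONS = ["sit", "walk", "fan", "shake_clothes", "rub_hands", "hug_shoulders"]

def softmax(logits):
    m = max(logits)                        # subtract the max for stability
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

logits = [0.2, 0.1, 2.5, 0.3, 0.4, 0.1]   # hypothetical head output
conf = softmax(logits)
best = ACTIONS[conf.index(max(conf))]
print(best)  # → fan
```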
The indoor personnel thermal comfort behavior recognition module is further configured to perform the following operations:
construct a data tensor (T, H, V, C, L, A), where T is the air temperature, H is the relative humidity, V is the air velocity, C is the carbon dioxide concentration, L is the window opening degree, and A is the behavior exhibited by the indoor personnel;
obtain the optimal control strategy from the data tensor using an indoor temperature and humidity regulation control algorithm;
adjust the indoor heating, ventilation, and air conditioning control system according to the optimal control strategy.
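The patent does not specify the regulation control algorithm itself, so the sketch below stands in with a simple rule-based mapping from the (T, H, V, C, L, A) tensor to a strategy label; the thresholds and strategy names are illustrative assumptions only.

```python
# Illustrative sketch only: a hypothetical rule-based stand-in for the
# indoor temperature and humidity regulation control algorithm.
from collections import namedtuple

State = namedtuple("State", "T H V C L A")  # temperature, humidity, airflow,
                                            # CO2, window opening, action

def control_strategy(s):
    if s.A in ("fan", "shake_clothes"):        # occupant signals "too warm"
        return "lower_setpoint"
    if s.A in ("rub_hands", "hug_shoulders"):  # occupant signals "too cold"
        return "raise_setpoint"
    if s.C > 1000 and s.L < 0.2:               # stale air, window nearly shut
        return "increase_ventilation"
    return "hold"

print(control_strategy(State(T=26.0, H=0.5, V=0.1, C=600, L=0.0, A="fan")))
# → lower_setpoint
```

A learned policy (e.g. the reinforcement-learning approaches cited in the references) would replace these hand-written rules while keeping the same state-in, strategy-out interface.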
According to one aspect of the present specification, a computer-readable storage medium is provided, on which a computer program is stored; when the computer program is executed by a processor, the steps of the indoor HVAC control method are implemented.
The present invention is described with reference to the flowcharts and/or block diagrams of the methods, devices (systems), and computer program products according to its embodiments. It should be understood that each process and/or block in the flowcharts and/or block diagrams, and any combination thereof, can be implemented by computer program instructions. These instructions may be provided to the processor of a general-purpose computer, special-purpose computer, embedded processor, or other programmable data processing device to produce a machine, such that the instructions executed by that processor produce an apparatus for implementing the functions specified in one or more flowchart processes and/or one or more block-diagram blocks.
These computer program instructions may also be stored in a computer-readable memory capable of directing a computer or other programmable data processing device to operate in a particular manner, such that the instructions stored in that memory produce an article of manufacture including an instruction apparatus that implements the functions specified in one or more flowchart processes and/or one or more block-diagram blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing device, so that a series of operational steps is executed on the device to produce a computer-implemented process, whereby the instructions executed on the device provide steps for implementing the functions specified in one or more flowchart processes and/or one or more block-diagram blocks.
Finally, it should be noted that the above embodiments are intended only to illustrate the technical solutions of the present invention, not to limit it. Although the present invention has been described in detail with reference to the foregoing embodiments, those skilled in the art should understand that the technical solutions described in those embodiments may still be modified, or some or all of their technical features replaced by equivalents, without such modifications or replacements departing from the essence of the technical solutions of the embodiments of the present invention.
Claims (10)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202311644580.3A CN117346285B (en) | 2023-12-04 | 2023-12-04 | Indoor heating and ventilation control method, system and medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN117346285A CN117346285A (en) | 2024-01-05 |
CN117346285B true CN117346285B (en) | 2024-03-26 |
Family
ID=89367016
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202311644580.3A Active CN117346285B (en) | 2023-12-04 | 2023-12-04 | Indoor heating and ventilation control method, system and medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN117346285B (en) |
Citations (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP1319900A1 (en) * | 2001-12-13 | 2003-06-18 | Lg Electronics Inc. | Air conditioner and method for controlling the same |
JP2007024416A (en) * | 2005-07-19 | 2007-02-01 | Daikin Ind Ltd | Air conditioner |
CN107102022A (en) * | 2017-03-07 | 2017-08-29 | 青岛海尔空调器有限总公司 | Thermal environment Comfort Evaluation method based on thermal manikin |
CN109489226A (en) * | 2018-12-27 | 2019-03-19 | 厦门天翔园软件科技有限公司 | A kind of air-conditioning indoor energy-saving policy management system and air conditioning control method |
CN109948472A (en) * | 2019-03-04 | 2019-06-28 | 南京邮电大学 | A non-invasive human thermal comfort detection method and system based on attitude estimation |
DE102018204789A1 (en) * | 2018-03-28 | 2019-10-02 | Robert Bosch Gmbh | Method for climate control in rooms |
CN112303861A (en) * | 2020-09-28 | 2021-02-02 | 山东师范大学 | Air conditioner temperature adjusting method and system based on human body thermal adaptability behavior |
CN113435508A (en) * | 2021-06-28 | 2021-09-24 | 中冶建筑研究总院(深圳)有限公司 | Method, device, equipment and medium for detecting opening state of glass curtain wall opening window |
CN115457056A (en) * | 2022-09-20 | 2022-12-09 | 北京威高智慧科技有限公司 | Skeleton image segmentation method, device, equipment and storage medium |
CN115540286A (en) * | 2022-08-16 | 2022-12-30 | 青岛海尔空调器有限总公司 | Air-conditioning system control method, device, air-conditioning system and storage medium |
CN115682368A (en) * | 2022-10-31 | 2023-02-03 | 西安建筑科技大学 | Non-contact indoor thermal environment control system and method based on reinforcement learning |
CN116258705A (en) * | 2023-03-16 | 2023-06-13 | 湖南大学 | Window opening detection method based on image processing |
CN117053378A (en) * | 2023-05-18 | 2023-11-14 | 苏州科技大学 | Intelligent heating ventilation air conditioner regulating and controlling method based on user portrait |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP5208785B2 (en) * | 2009-01-28 | 2013-06-12 | 株式会社東芝 | VIDEO DISPLAY DEVICE, VIDEO DISPLAY DEVICE CONTROL METHOD, AND CONTROL PROGRAM |
Non-Patent Citations (2)
Title |
---|
Research on the Control Strategy of Intelligent Energy-Saving Windows Based on Thermal Comfort Theory; Sun Xucan, Pan Yuqin, Chang Jianguo, Wang Fang; Bulletin of Science and Technology, 2020(08), pp. 53-58 *
Also Published As
Publication number | Publication date |
---|---|
CN117346285A (en) | 2024-01-05 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111695469B (en) | Hyperspectral image classification method of light-weight depth separable convolution feature fusion network | |
CN116058195B (en) | Illumination regulation and control method, system and device for leaf vegetable growth environment | |
CN105512624A (en) | Smile face recognition method and device for human face image | |
CN111368846A (en) | Road ponding identification method based on boundary semantic segmentation | |
CN114882351B (en) | Multi-target detection and tracking method based on improved YOLO-V5s | |
CN114782311B (en) | CENTERNET improvement-based multi-scale defect target detection method and system | |
CN111639577A (en) | Method for detecting human faces of multiple persons and recognizing expressions of multiple persons through monitoring video | |
CN110287777A (en) | A Body Segmentation Algorithm for Golden Monkey in Natural Scenes | |
CN113689382B (en) | Tumor postoperative survival prediction method and system based on medical images and pathological images | |
CN112434723B (en) | Day/night image classification and object detection method based on attention network | |
CN116311483A (en) | Micro-expression Recognition Method Based on Partial Facial Region Reconstruction and Memory Contrastive Learning | |
CN117333753A (en) | Fire detection method based on PD-YOLO | |
CN112800988A (en) | C3D behavior identification method based on feature fusion | |
CN114170671B (en) | Massage manipulation recognition method based on deep learning | |
CN117612025B (en) | Remote sensing image roof recognition method based on diffusion model | |
CN118154984B (en) | Method and system for generating non-supervision neighborhood classification superpixels by fusing guided filtering | |
CN117557857B (en) | Detection network light weight method combining progressive guided distillation and structural reconstruction | |
CN113409290B (en) | Method and device for detecting appearance defects of liquid crystal display, and storage medium | |
CN111274895A (en) | CNN micro-expression identification method based on cavity convolution | |
CN118430054A (en) | A face recognition method and system based on AI intelligence | |
CN118072090A (en) | Dermatological image detection method based on U2-Net and ResNeXt-50 models | |
CN116309228A (en) | Method for converting visible light image into infrared image based on generation of countermeasure network | |
CN115902806A (en) | Multi-mode-based radar echo extrapolation method | |
CN118674749A (en) | Mask comparison learning pre-training-based visual target tracking method | |
CN114550270A (en) | A Micro-expression Recognition Method Based on Dual Attention Mechanism |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||