US20210381712A1 - Determining demand curves from comfort curves - Google Patents
Determining demand curves from comfort curves
- Publication number
- US20210381712A1 (US application US17/177,391)
- Authority
- US
- United States
- Prior art keywords
- curve
- neural network
- comfort
- simulated
- demand curve
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q10/00—Administration; Management
- G06Q10/06—Resources, workflows, human or project management; Enterprise or organisation planning; Enterprise or organisation modelling
- G06Q10/063—Operations research, analysis or management
- G06Q10/0631—Resource planning, allocation, distributing or scheduling for enterprises or organisations
- G06Q10/06313—Resource planning in a project environment
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60H—ARRANGEMENTS OF HEATING, COOLING, VENTILATING OR OTHER AIR-TREATING DEVICES SPECIALLY ADAPTED FOR PASSENGER OR GOODS SPACES OF VEHICLES
- B60H1/00—Heating, cooling or ventilating [HVAC] devices
- B60H1/00271—HVAC devices specially adapted for particular vehicle parts or components and being connected to the vehicle HVAC unit
- B60H1/00285—HVAC devices specially adapted for particular vehicle parts or components and being connected to the vehicle HVAC unit for vehicle seats
-
- F—MECHANICAL ENGINEERING; LIGHTING; HEATING; WEAPONS; BLASTING
- F24—HEATING; RANGES; VENTILATING
- F24F—AIR-CONDITIONING; AIR-HUMIDIFICATION; VENTILATION; USE OF AIR CURRENTS FOR SCREENING
- F24F11/00—Control or safety arrangements
- F24F11/62—Control or safety arrangements characterised by the type of control or by internal processing, e.g. using fuzzy logic, adaptive control or estimation of values
- F24F11/63—Electronic processing
- F24F11/64—Electronic processing using pre-stored data
-
- F—MECHANICAL ENGINEERING; LIGHTING; HEATING; WEAPONS; BLASTING
- F24—HEATING; RANGES; VENTILATING
- F24F—AIR-CONDITIONING; AIR-HUMIDIFICATION; VENTILATION; USE OF AIR CURRENTS FOR SCREENING
- F24F11/00—Control or safety arrangements
- F24F11/62—Control or safety arrangements characterised by the type of control or by internal processing, e.g. using fuzzy logic, adaptive control or estimation of values
- F24F11/63—Electronic processing
- F24F11/65—Electronic processing for selecting an operating mode
-
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05B—CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
- G05B13/00—Adaptive control systems, i.e. systems automatically adjusting themselves to have a performance which is optimum according to some preassigned criterion
- G05B13/02—Adaptive control systems, i.e. systems automatically adjusting themselves to have a performance which is optimum according to some preassigned criterion electric
- G05B13/0265—Adaptive control systems, i.e. systems automatically adjusting themselves to have a performance which is optimum according to some preassigned criterion electric the criterion being a learning criterion
- G05B13/027—Adaptive control systems, i.e. systems automatically adjusting themselves to have a performance which is optimum according to some preassigned criterion electric the criterion being a learning criterion using neural networks only
-
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05B—CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
- G05B13/00—Adaptive control systems, i.e. systems automatically adjusting themselves to have a performance which is optimum according to some preassigned criterion
- G05B13/02—Adaptive control systems, i.e. systems automatically adjusting themselves to have a performance which is optimum according to some preassigned criterion electric
- G05B13/04—Adaptive control systems, i.e. systems automatically adjusting themselves to have a performance which is optimum according to some preassigned criterion electric involving the use of models or simulators
-
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05B—CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
- G05B15/00—Systems controlled by a computer
- G05B15/02—Systems controlled by a computer electric
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F17/00—Digital computing or data processing equipment or methods, specially adapted for specific functions
- G06F17/10—Complex mathematical operations
- G06F17/16—Matrix or vector computation, e.g. matrix-matrix or matrix-vector multiplication, matrix factorization
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F30/00—Computer-aided design [CAD]
- G06F30/10—Geometric CAD
- G06F30/18—Network design, e.g. design based on topological or interconnect aspects of utility systems, piping, heating ventilation air conditioning [HVAC] or cabling
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F30/00—Computer-aided design [CAD]
- G06F30/20—Design optimisation, verification or simulation
- G06F30/27—Design optimisation, verification or simulation using machine learning, e.g. artificial intelligence, neural networks, support vector machines [SVM] or training a model
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/047—Probabilistic or stochastic networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/048—Activation functions
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/06—Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons
- G06N3/063—Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons using electronic means
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q10/00—Administration; Management
- G06Q10/06—Resources, workflows, human or project management; Enterprise or organisation planning; Enterprise or organisation modelling
- G06Q10/067—Enterprise or organisation modelling
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q50/00—Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
- G06Q50/06—Energy or water supply
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q50/00—Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
- G06Q50/10—Services
- G06Q50/16—Real estate
- G06Q50/163—Real estate management
-
- F—MECHANICAL ENGINEERING; LIGHTING; HEATING; WEAPONS; BLASTING
- F24—HEATING; RANGES; VENTILATING
- F24F—AIR-CONDITIONING; AIR-HUMIDIFICATION; VENTILATION; USE OF AIR CURRENTS FOR SCREENING
- F24F2120/00—Control inputs relating to users or occupants
- F24F2120/10—Occupancy
-
- F—MECHANICAL ENGINEERING; LIGHTING; HEATING; WEAPONS; BLASTING
- F24—HEATING; RANGES; VENTILATING
- F24F—AIR-CONDITIONING; AIR-HUMIDIFICATION; VENTILATION; USE OF AIR CURRENTS FOR SCREENING
- F24F2120/00—Control inputs relating to users or occupants
- F24F2120/20—Feedback from users
-
- F—MECHANICAL ENGINEERING; LIGHTING; HEATING; WEAPONS; BLASTING
- F24—HEATING; RANGES; VENTILATING
- F24F—AIR-CONDITIONING; AIR-HUMIDIFICATION; VENTILATION; USE OF AIR CURRENTS FOR SCREENING
- F24F2140/00—Control inputs relating to system states
- F24F2140/50—Load
-
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05B—CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
- G05B19/00—Programme-control systems
- G05B19/02—Programme-control systems electric
- G05B19/04—Programme control other than numerical control, i.e. in sequence controllers or logic controllers
- G05B19/042—Programme control other than numerical control, i.e. in sequence controllers or logic controllers using digital processors
-
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05B—CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
- G05B2219/00—Program-control systems
- G05B2219/20—Pc systems
- G05B2219/26—Pc applications
- G05B2219/2614—HVAC, heating, ventilation, climate control
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2119/00—Details relating to the type or aim of the analysis or the optimisation
- G06F2119/06—Power analysis or power optimisation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2119/00—Details relating to the type or aim of the analysis or the optimisation
- G06F2119/08—Thermal analysis or thermal optimisation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N20/00—Machine learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
- G06N3/084—Backpropagation, e.g. using gradient descent
Definitions
- the present disclosure relates to neural network methods for creating demand curves from comfort curves. More specifically, the present disclosure relates to receiving a time curve of desired state values and outputting a time curve of energy amounts that may be input into a structure to achieve the desired state values.
- Proportional-Integral-Derivative (PID) controllers
- a method of determining a demand curve implemented by one or more computers, the method comprising: receiving a neural network of a plurality of controlled building zones; receiving a desired comfort curve for at least one of the plurality of controlled building zones; performing a machine learning process to run the neural network using a simulated demand curve as input and receiving a simulated comfort curve as output; computing a cost function using the simulated comfort curve and the desired comfort curve; using the cost function to determine a new simulated demand curve; iteratively performing the using and computing steps until a goal state is reached; and determining that the new simulated demand curve is the demand curve upon the goal state being reached.
- the new simulated demand curve is a time series of zone energy inputs and the simulated comfort curve is a time series of zone state values.
- computing the cost function further comprises determining a difference between the desired comfort curve and the simulated comfort curve.
- performing the machine learning process further comprises performing automatic differentiation recursively through the neural network producing a new simulated demand curve.
- the goal state comprises the cost function being minimized, the model running for a specific time, or the model running a specific number of cycles.
- the neural network comprises multiple activation functions.
- performing the machine learning process comprises computing a gradient of the neural network by taking a reverse gradient of the cost function result forward using automatic differentiation.
- performing the machine learning process further comprises taking a gradient of the neural network backward using automatic differentiation.
- the neural network is a heterogeneous neural network.
- any neuron may be an output neuron.
- a demand curve creation system comprising: at least one processor, a memory in operable communication with the processor, and demand curve creation code residing in the memory, the code comprising: receiving a heterogeneous neural network of a plurality of controlled building zones; receiving a ground truth comfort state zone curve for at least one of the plurality of controlled building zones; performing a machine learning process to run the heterogeneous neural network using a simulated demand curve as input and receiving a simulated comfort curve as output; computing a cost function using the simulated comfort curve and the ground truth comfort state zone curve; using the cost function to determine a new simulated demand curve; iteratively performing the using and computing steps until a goal state is reached; and determining that the current simulated demand curve is the demand curve upon the goal state being reached.
- the machine learning process comprises a backward pass that computes the gradient with respect to the cost function, and then uses an optimizer to update the demand curves.
- the backward pass that computes the gradient uses automatic differentiation.
- the optimizer uses stochastic gradient descent or mini-batch gradient descent to minimize the cost function.
- the neural network is a heterogeneous neural network.
- the heterogeneous neural network comprises neurons, any of which may be inputs.
- any of the neurons may be outputs.
- a computer-readable storage medium configured with executable instructions to perform a method for creation of a demand curve upon receipt of a comfort curve.
- the method comprising: receiving a ground truth comfort curve for at least one of a plurality of controlled building zones; performing a machine learning process to run a heterogeneous neural network using a simulated demand curve as input and receiving a simulated comfort curve as output; computing a cost function using the simulated comfort curve and the ground truth comfort curve; using the cost function to determine a new simulated demand curve; iteratively performing the using and computing steps until a goal state is reached; and determining that the current simulated demand curve is the demand curve upon the goal state being reached.
- the machine learning process comprises using automatic differentiation to perform backpropagation.
- the machine learning process comprises using a gradient descent method to perform incremental optimization of the simulated demand curve.
- FIG. 1 depicts a computing system in conjunction with which described embodiments can be implemented.
- FIG. 2 is a flow diagram showing an exemplary embodiment of a method to determine demand curves from comfort curves.
- FIG. 3 is a functional block diagram showing an exemplary embodiment of the input and output of a model with which described embodiments can be implemented.
- FIG. 4 is a functional block diagram showing different machine learning functions with which described embodiments can be implemented.
- FIG. 5 depicts a physical system whose behavior can be determined by using a neural network.
- FIG. 6 depicts a simplified neural network that may be used to model behaviors of the physical system of FIG. 5 .
- FIG. 7 is a block diagram describing the nature of exemplary neurons.
- Disclosed below are representative embodiments of methods, computer-readable media, and systems having particular applicability to systems and methods for building neural networks that describe physical structures. Described embodiments implement one or more of the described technologies.
- Embodiments comprise using a heterogeneous neural model of a defined space that models the various materials in the space as connected nodes, and using machine learning techniques to train the model.
- “Defined Space” should be understood broadly—it can be a building, several buildings, buildings and grounds around it, a defined outside space, such as a garden or an irrigated field, etc. A portion of a building may be used as well. For example, a floor of a building may be used, a random section of a building, a room in a building, etc. This may be a space that currently exists, or may be a space that exists only as a design. Other choices are possible as well.
- the defined space may be divided into zones. Each zone may have a different set of requirements for state values during a time period. For example, for the state "temperature," a user Chris may like their office at 72° from 8 am-5 pm, while a user Avery may prefer their office at 77° from 6 am-4 pm. These preferences can be turned into comfort curves, which are chronological (time-based) state curves. Chris's office comfort curve may be 68° from midnight to 8 am, 72° from 8 am to 5 pm, then 68° from 5 pm to midnight. The comfort curves (for a designated space, such as Chris's office) are then used to calculate demand curves, which are the amount of state that may be input into the associated zones to achieve the state desired over time.
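The schedule-to-curve step described above can be sketched as follows. This is an illustrative example only; the function name and the `(start_hour, end_hour, temp)` schedule format are assumptions, not part of the disclosure.

```python
def comfort_curve(schedule, default_temp, hours=24):
    """Return a list of hourly temperature setpoints (a comfort curve).

    schedule: list of (start_hour, end_hour, temp) tuples; any hour not
    covered by a tuple falls back to default_temp.
    """
    curve = [default_temp] * hours
    for start, end, temp in schedule:
        for h in range(start, end):
            curve[h] = temp
    return curve

# Chris's office: 72 degrees from 8 am to 5 pm, 68 degrees otherwise.
chris = comfort_curve([(8, 17, 72)], default_temp=68)
```

The same representation extends to any measurable state (CO2 concentration, light intensity, etc.) by swapping the setpoint units.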
- we start with comfort curve(s) we want zones (e.g., areas) to conform to, such as Chris's office, as described above, and we wish to find the amount of state necessary over time to meet the temperature (e.g., state) indicated by the comfort curve.
- we call the amount of state over time a demand curve.
- we use demand curves as input into a heterogeneous model, such as a heterogeneous neural network that represents the zones within a structure.
- We then run the model forward with the demand curves as input to determine comfort curve output for that demand curve. That is, when we pump such an amount of state into a structure, the structure, in turn, has some amount of state over time.
- the state propagates through both zones in the neural network, which includes the walls, the air, etc.
- the model outputs the state from time T to time T+240 in both zones, giving us two comfort curves, e.g., what temperature the two zones were from time T to time T+240.
- a gradient of the cost function is calculated through backpropagation to the input, and then optimized by, e.g., a type of gradient descent, giving us a new demand curve to try. This is repeated until a goal state is reached.
- the last demand curve run is the demand curve that is then used to determine state needed over time in one or more spaces.
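The iterative loop described above can be sketched end to end. In this sketch a toy one-zone linear thermal model stands in for the heterogeneous neural network, and a finite-difference gradient stands in for automatic differentiation; all function names and constants are illustrative assumptions, not the patent's implementation.

```python
def run_model(demand, t0=60.0, outside=50.0, loss_coeff=0.1):
    """Toy one-zone model: each hour the zone gains the demanded energy
    (expressed in degrees) and leaks heat toward the outside temperature."""
    temps, t = [], t0
    for q in demand:
        t = t + q - loss_coeff * (t - outside)
        temps.append(t)
    return temps

def cost(simulated, desired):
    """Mean squared error between simulated and desired comfort curves."""
    return sum((s - d) ** 2 for s, d in zip(simulated, desired)) / len(desired)

def fit_demand_curve(desired, iters=500, lr=0.1, eps=1e-4):
    """Iteratively improve a simulated demand curve by gradient descent."""
    demand = [0.0] * len(desired)            # initial simulated demand curve
    for _ in range(iters):
        base = cost(run_model(demand), desired)
        if base < 1e-6:                      # goal state: cost minimized
            break
        grad = []
        for i in range(len(demand)):         # finite-difference gradient
            demand[i] += eps
            grad.append((cost(run_model(demand), desired) - base) / eps)
            demand[i] -= eps
        demand = [q - lr * g for q, g in zip(demand, grad)]
    return demand
```

Running `fit_demand_curve([65.0] * 6)` converges toward a curve with a large first-hour input (to raise the zone from 60° to 65°) followed by smaller inputs that only offset heat loss, mirroring how the last demand curve run becomes the demand curve used for the space.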
- Optimize means to improve, not necessarily to perfect. For example, it may be possible to make further improvements in a value or an algorithm which has been optimized.
- Determine means to get a good idea of, not necessarily to achieve the exact value. For example, it may be possible to make further improvements in a value or algorithm which has already been determined.
- the cost function may use a least squares function, a Mean Error (ME), Mean Squared Error (MSE), Mean Absolute Error (MAE), a Categorical Cross Entropy Cost Function, a Binary Cross Entropy Cost Function, and so on, to arrive at the answer.
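A few of the listed cost functions, written out for two equal-length curves (a sketch; the disclosure does not prescribe particular implementations):

```python
def mean_error(sim, truth):
    """ME: signed average difference between curves."""
    return sum(s - t for s, t in zip(sim, truth)) / len(truth)

def mean_squared_error(sim, truth):
    """MSE: penalizes large deviations quadratically."""
    return sum((s - t) ** 2 for s, t in zip(sim, truth)) / len(truth)

def mean_absolute_error(sim, truth):
    """MAE: average magnitude of deviation."""
    return sum(abs(s - t) for s, t in zip(sim, truth)) / len(truth)
```

MSE is the common default for curve-fitting because its gradient is proportional to the error, while MAE is less sensitive to outlier hours.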
- the cost function is a loss function.
- the cost function is a threshold, which may be a single number that indicates the simulated truth curve is close enough to the ground truth.
- the cost function may be a slope.
- the slope may also indicate that the simulated truth curve and the ground truth are of sufficient closeness.
- a cost function may be time variant. It also may be linked to factors such as user preference, or changes in the physical model.
- the cost function applied to the simulation engine may comprise models of any one or more of the following: energy use, primary energy use, energy monetary cost, human comfort, the safety of building or building contents, the durability of building or building contents, microorganism growth potential, system equipment durability, system equipment longevity, environmental impact, and/or energy use CO2 potential.
- the cost function may utilize a discount function based on discounted future value of a cost.
- the discount function may devalue future energy as compared to current energy such that future uncertainty is accounted for, to ensure optimized operation over time.
- the discount function may devalue the future cost function of the control regimes, based on the accuracy or probability of the predicted weather data and/or on the value of the energy source on a utility pricing schedule, or the like.
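One simple form of such a discount function devalues each future hour geometrically, as with a discounted-cash-flow calculation. The function name and the exponential form are illustrative assumptions:

```python
def discounted_cost(hourly_costs, rate=0.05):
    """Sum hourly costs, devaluing the cost at hour k by 1/(1+rate)**k,
    so that uncertain future energy counts less than current energy."""
    return sum(c / (1 + rate) ** k for k, c in enumerate(hourly_costs))
```

A higher rate would reflect, e.g., lower confidence in the predicted weather data further into the forecast horizon.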
- a “goal state” may read in a cost (a value from a cost function) and determine if that cost meets criteria such that a goal has been reached. Such criteria may be the cost reaching a certain value, being higher or lower than a certain value, being between two values, etc.
- a goal state may also look at the time spent running the simulation model overall, if a specific running time has been reached, the neural network running a specific number of iterations, and so on.
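The goal-state criteria above can be combined in a single check. This is a sketch with assumed names and default thresholds:

```python
def goal_reached(cost, iteration, elapsed_s,
                 cost_threshold=0.01, max_iters=10_000, max_seconds=60.0):
    """True when the cost meets its criterion, or when a running-time
    or iteration-count budget has been exhausted."""
    return (cost <= cost_threshold
            or iteration >= max_iters
            or elapsed_s >= max_seconds)
```

The time and iteration caps guarantee termination even when the cost criterion cannot be met.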
- a machine learning process is one of a variety of computer algorithms that improve automatically through experience. Common machine learning processes are Linear Regression, Logistic Regression, Decision Tree, Support Vector Machine (SVM), Naive Bayes, K-Nearest Neighbors (kNN), K-Means Clustering, Random Forest, Backpropagation with optimization, etc.
- An “optimization method” may include Gradient Descent, stochastic gradient descent, min-batch gradient descent, methods based on Newton's method, inversions of the Hessian using conjugate gradient techniques, Evolutionary computation such as Swarm Intelligence, Bee Colony optimization; SOMA, and Particle Swarm, etc.
- Non-linear optimization techniques, and other methods known by those of skill in the art may also be used.
- Backpropagation may be performed by automatic differentiation, or by a different method to determine partial derivatives.
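To illustrate what automatic differentiation is, here is a minimal forward-mode variant using dual numbers, which yields exact derivatives without symbolic math or finite differences. This is a teaching sketch, not the patent's implementation; production backpropagation systems use reverse mode, which is more efficient for many-input functions.

```python
class Dual:
    """A value paired with its derivative; arithmetic propagates both."""
    def __init__(self, val, der=0.0):
        self.val, self.der = val, der

    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val + other.val, self.der + other.der)
    __radd__ = __add__

    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        # Product rule: (uv)' = u'v + uv'
        return Dual(self.val * other.val,
                    self.val * other.der + self.der * other.val)
    __rmul__ = __mul__

def derivative(f, x):
    """Evaluate df/dx at x by seeding the derivative slot with 1."""
    return f(Dual(x, 1.0)).der

# d/dx (3x^2 + 2x) at x = 2 is 6x + 2 = 14
```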
- a “state” as used herein may be Air Temperature, Radiant Temperature, Atmospheric Pressure, Sound Pressure, Occupancy Amount, Indoor Air Quality, CO2 concentration, Light Intensity, or another state that can be measured and controlled.
- Some structures comprise multiple zones (such as rooms or specific areas monitored by a sensor). Each separate zone may be modeled by its own neural model.
- the collection of neural models can comprise the heterogeneous model of the structure.
- when zones share a surface, such as (in a building implementation) a wall, a floor, or a ceiling, the outside neuron of one neural model may be used as the inner neuron of the next.
- Some zones may overlap with other zones, while some zones do not.
- the structure may be covered in zones, or some locations within a structure may have no explicit zone. Defined spaces may be partitioned into multiple subsystems. Any of these partitioned defined spaces may be used as the subsystems.
- FIG. 1 illustrates a generalized example of a suitable computing environment 100 in which described embodiments may be implemented.
- the computing environment 100 is not intended to suggest any limitation as to scope of use or functionality of the disclosure, as the present disclosure may be implemented in diverse general-purpose or special-purpose computing environments.
- the computing environment 100 includes at least one central processing unit 110 and memory 120.
- the central processing unit 110 executes computer-executable instructions and may be a real or a virtual processor. It may also comprise a vector processor 112, which allows same-length neuron strings to be processed rapidly. In a multi-processing system, multiple processing units execute computer-executable instructions to increase processing power; as such, the vector processor 112, GPU 115, and CPU can run simultaneously.
- the memory 120 may be volatile memory (e.g., registers, cache, RAM), non-volatile memory (e.g., ROM, EEPROM, flash memory, etc.), or some combination of the two.
- the memory 120 stores software 185 implementing the described methods of using a neural net to derive a demand curve from a comfort curve.
- a computing environment may have additional features.
- the computing environment 100 includes storage 140, one or more input devices 150, one or more output devices 155, one or more network connections (e.g., wired, wireless, etc.) 160, as well as other communication connections 170.
- An interconnection mechanism such as a bus, controller, or network interconnects the components of the computing environment 100 .
- operating system software provides an operating environment for other software executing in the computing environment 100 , and coordinates activities of the components of the computing environment 100 .
- the computing system may also be distributed, running portions of the software 185 on different CPUs.
- the storage 140 may be removable or non-removable, and includes magnetic disks, magnetic tapes or cassettes, CD-ROMs, CD-RWs, DVDs, flash drives, or any other medium which can be used to store information and which can be accessed within the computing environment 100.
- the storage 140 stores instructions for the software, such as demand curve creation software 185, to implement methods of neuron discretization and creation.
- the input device(s) 150 may be a device that allows a user or another device to communicate with the computing environment 100, such as a keyboard, video camera, microphone, mouse, pen, trackball, scanning device, touchscreen, or another device that provides input to the computing environment 100.
- the input device(s) 150 may be a sound card or similar device that accepts audio input in analog or digital form, or a CD-ROM reader that provides audio samples to the computing environment.
- the output device(s) 155 may be a display, printer, speaker, CD-writer, or another device that provides output from the computing environment 100 .
- the communication connection(s) 170 enable communication over a communication medium to another computing entity.
- the communication medium conveys information such as computer-executable instructions, compressed graphics information, or other data in a modulated data signal.
- Communication connections 170 may comprise input devices 150 , output devices 155 , and input/output devices that allow a client device to communicate with another device over network 160 .
- a communication device may include one or more wireless transceivers for performing wireless communication and/or one or more communication ports for performing wired communication. These connections may include network connections, which may be a wired or wireless network such as the Internet, an intranet, a LAN, a WAN, a cellular network or another type of network. It will be understood that network 160 may be a combination of multiple different kinds of wired or wireless networks.
- the network 160 may be a distributed network, with multiple computers, which might be building controllers, acting in tandem.
- a communication connection 170 may be to a portable communications device such as a wireless handheld device, a cell phone device, and so on.
- Computer-readable media are any available non-transient tangible media that can be accessed within a computing environment.
- computer-readable media include memory 120 , storage 140 , communication media, and combinations of any of the above.
- Computer-readable storage media 165 may be used to store computer-readable media comprising instructions 175 and data 180 .
- Data Sources may be computing devices, such as general hardware platform servers configured to receive and transmit information over the communications connections 170 .
- the computing environment 100 may be an electrical controller that is directly connected to various resources, such as HVAC resources, and which has CPU 110 , a GPU 115 , Memory, 120 , input devices 150 , communication connections 170 , and/or other features shown in the computing environment 100 .
- the computing environment 100 may be a series of distributed computers. These distributed computers may comprise a series of connected electrical controllers.
- data produced from any of the disclosed methods can be created, updated, or stored on tangible computer-readable media (e.g., one or more CDs, volatile memory components (such as DRAM or SRAM), or nonvolatile memory components (such as hard drives)) using a variety of different data structures or formats.
- Such data can be created or updated at a local computer or over a network (e.g., by a server computer), or stored and accessed in a cloud computing environment.
- FIG. 2 illustrates a method 200 that determines a demand curve from a comfort curve using a heterogenous neural network.
- the operations of method 200 presented below are intended to be illustrative. In some embodiments, method 200 may be accomplished with one or more additional operations not described, and/or without one or more of the operations discussed. Additionally, the order in which the operations of method 200 are illustrated in FIG. 2 and described below is not intended to be limiting.
- method 200 may be implemented in one or more processing devices (e.g., a digital or analog processor or a combination of both, a series of networked computer controllers each with at least one processor, and/or other mechanisms for electronically processing information).
- the one or more processing devices may include one or more devices executing some or all of the operations of method 200 in response to instructions stored electronically on an electronic storage medium.
- the one or more processing devices may include one or more devices configured through hardware, firmware, and/or software to be specifically designed for execution of one or more of the operations of method 200 .
- a neural network model 205 is received.
- the neural network model may have been stored in memory, and so may be received from the processing device that the model is being run on.
- the neural network model may be stored within a distributed system, and received from more than one processor within the distributed system, etc.
- in a heterogenous neural network, the fundamentals of physics are utilized to model single components or pieces of equipment on a one-to-one basis with neural net neurons.
- some neurons use physics equations as activation functions. Different types of neurons may have different equations for their activation functions, such that a neural network may have multiple activation functions within its neurons.
- a neural net is created that models the components as neurons. The values between the objects flow between the neurons as weights of connected edges.
- the neurons are arranged in order of an actual system (or set of equations) and because the neurons themselves comprise an equation or a series of equations that describe the function of their associated object, and certain relationships between them are determined by their location in the neural net. Therefore, a huge portion of training is no longer necessary, as the neural net itself comprises location information, behavior information, and interaction information between the different objects represented by the neurons. Further, the values held by neurons in the neural net at given times represent real-world behavior of the objects so represented. The neural net is no longer a black box but itself contains important information. This neural net structure also provides much deeper information about the systems and objects being described. Since the neural network is physics- and location-based, unlike the conventional AI structures, it is not limited to a specific model, but can run multiple models for the system that the neural network represents without requiring separate creation or training.
- the neural network that is described herein chooses the location of the neurons to tell you something about the physical nature of the system.
- the neurons are arranged in a way that references the locations of actual objects in the real world.
- the neural network also places actual equations that can be used to determine object behavior into the activation function of the neuron.
- the weights that move between neurons are equation variables. Different neurons may have unrelated activation functions, depending on the nature of the model being represented. In an exemplary embodiment, each activation function in a neural network may be different.
- a pump could be represented in a neural network as a series of network neurons, some that represent efficiency, energy consumption, pressure, etc.
- the neurons will be placed such that one set of weights (variables) feeds into the next neuron (e.g., with an equation as its activation function) that uses those weights (variables).
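- As an illustrative sketch of the above (the class names and the specific pump and zone equations are assumptions chosen for the example, not taken from this disclosure), two neurons in the same network can carry entirely different physics equations as their activation functions, with the incoming weights acting as equation variables:

```python
class PumpPowerNeuron:
    """Neuron whose activation function is a physics equation:
    hydraulic power P = dp * Q / eta, with pressure rise dp [Pa],
    volume flow Q [m^3/s], and efficiency eta (an internal parameter)."""

    def __init__(self, efficiency):
        self.efficiency = efficiency  # internal parameter of the modeled pump

    def activate(self, pressure_rise, volume_flow):
        # the incoming "weights" are physical variables, not abstract signals
        return pressure_rise * volume_flow / self.efficiency


class ZoneTemperatureNeuron:
    """Neuron in the same network with a different activation function:
    a lumped-capacitance temperature update T' = T + dt * q / C."""

    def __init__(self, heat_capacity, temperature):
        self.heat_capacity = heat_capacity  # J/K, internal parameter
        self.temperature = temperature      # meaningful, readable internal state

    def activate(self, heat_input, dt):
        self.temperature += dt * heat_input / self.heat_capacity
        return self.temperature
```

Because each neuron's internal state (such as the zone temperature above) corresponds to a physical quantity, the values in the network remain meaningful outside the network itself.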
- the neural net model need not be trained on information that is already known.
- the individual neurons represent physical representations. These individual neurons may hold parameter values that help define the physical representation. As such, when the neural net is run, the parameters helping define the physical representation can be tweaked to more accurately represent the given physical representation.
- a simulated demand curve is received.
- the values of the demand curve may be random, may be a demand curve from another similar model run, etc.
- zones (e.g., areas)
- the demand curve (the amount of state needed over time)
- FIG. 3 is a functional block diagram showing an exemplary embodiment of the input and output of a model.
- this entails using a demand curve 302 (e.g., an amount of state over time, e.g., zone energy inputs 310 ) as input into the heterogenous model 305 .
- the demand curves 302 are injected into a defined space as described by the demand curve at times T(0) through T(n); in the case of a heterogenous model modeling multiple zones, multiple demand curves are injected in corresponding zones over time.
- outside influences such as weather 320 , are also used as inputs.
- the backpropagation may not be backpropagated to such outside influence inputs, as, e.g., the weather inputs are not optimized, and so are not changed.
- Running the model may entail feedforward—running the demand curve state though the model to the outputs over time T(0)-T(n), capturing internal state values within neurons over the same time T(0)-T(n).
- These internal state values (such as a temperature variable in a neuron that models Chris's office) may be the outputs that define the simulated comfort curves 225 .
- simulated comfort curve(s) are output. In other embodiments, the comfort curve is output 225 successively in timesteps during the model run, or other methods are used.
- a demand curve 210 may be supplied. This initial demand curve may be determined randomly, or another method may be used, such as a demand curve stored previously that was used as the solution to a similar comfort curve problem.
- the desired comfort curve(s) are received. These are the curves that describe the state the structure being modeled is to be in; e.g., the temperature Chris's office should be for some time period. These may also be called ground truth comfort curves. Ground truth is the information provided by direct evidence or is the desired output from the neural network.
- a cost function is computed using the time series of desired comfort curve(s) and the model output—a simulated comfort curve.
- the cost function measures the difference between the time series of desired comfort curve(s) (e.g., the desired temperature of Chris's office over 24 hours) and the comfort curve(s) output 304 from the neural network 220 . Details of the cost function are described elsewhere, such as with reference to FIG. 7 .
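- As a minimal sketch of such a cost function (mean squared error is one common choice; the disclosure does not mandate this particular form):

```python
def cost(desired_curve, simulated_curve):
    """Mean squared error between the desired comfort curve and the
    simulated comfort curve, both equal-length time series of zone
    state values (e.g., hourly temperatures for an office)."""
    assert len(desired_curve) == len(simulated_curve)
    return sum((d - s) ** 2
               for d, s in zip(desired_curve, simulated_curve)) / len(desired_curve)
```

For example, a desired curve of [70, 70] against a simulated curve of [68, 71] yields a cost of (4 + 1) / 2 = 2.5.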
- a goal state is checked to determine if a stopping state has been reached.
- the goal state may be that the cost from the cost function is within a certain value, that the program has run for a given time or a given number of iterations, or that a threshold value has been reached (e.g., the cost function is equal to or lower than the threshold value); a different criterion may also be used. If the goal state has not been reached, then a new set of inputs needs to be determined that is incrementally closer to an eventual answer: a lowest (or highest, or otherwise determined) value for the cost function, as described elsewhere. This can be thought of as iteratively running the neural network 220 , outputting the simulated comfort curve 225 , computing the cost function 230 , and determining a new simulated demand curve 240 until the goal state 235 is reached.
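- The iterate-until-goal-state loop can be sketched as follows; the one-line "model" and the step size here are stand-in assumptions so the loop is runnable, not the heterogenous model itself:

```python
BASE_TEMP, GAIN = 60.0, 0.5  # assumed toy physics: temperature = base + gain * energy

def toy_model(demand):
    """Stand-in for the heterogenous model run (step 220): maps a demand
    curve (energy per timestep) to a simulated comfort curve."""
    return [BASE_TEMP + GAIN * u for u in demand]

def solve_demand_curve(desired, demand, lr=0.5, tol=1e-8, max_iters=10000):
    n = len(desired)
    for _ in range(max_iters):
        simulated = toy_model(demand)                # run model, output curve (220/225)
        residual = [s - d for s, d in zip(simulated, desired)]
        if sum(r * r for r in residual) / n < tol:   # goal state check (235)
            break
        # gradient of the MSE cost with respect to each demand value,
        # followed by a descent step toward a new simulated demand curve (240)
        demand = [u - lr * (2.0 / n) * r * GAIN
                  for u, r in zip(demand, residual)]
    return demand
```

The demand curve used on the final iteration is the solution: it produces a simulated comfort curve within the goal-state tolerance of the desired one.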
- new simulated demand curve(s) are determined for the next run of the neural network. This may be performed using the cost function, machine learning algorithms, etc.
- backpropagation is used to determine a gradient of the cost function in relation to the various values in the neural network (such as the inputs shown in FIG. 7 ); that gradient is then backpropagated through the neural network to obtain an updated set of demand curves.
- the demand curve that was used for the last heterogenous model run is set as the solved demand curve; that is, the demand curve that will meet the requirements for the desired comfort curve, within some range.
- the input that is given to the model is used as the eventual output.
- the demand curve can then be used to determine equipment behavior 250 for controlled equipment in the building (i.e., when the equipment should turn on and off/hold intermediate values), which can then be used to control equipment in a building.
- Controlling equipment in such a predetermined fashion can greatly reduce energy costs, while more accurately controlling state in a defined space for the people and objects therein.
- FIG. 3 depicts some potential inputs and outputs of the heterogenous model.
- the end result we want is a demand curve 302 .
- the model then outputs a comfort curve 304 .
- This comfort curve can be thought of as a time series of zone state values 315 , where a zone is a location whose state values are to be measured.
- the zone state values may be the simulated temperature taken in a zone for each model time step, here, from T(0) to T(n). The corresponding location in the physical space need not be directly measurable.
- the initial demand curve 302 may be chosen randomly from among feasible values, or may be chosen by some other method.
- the demand curve 302 zone energy inputs 310 feeds state into the model at each model step.
- the state may be fed in at various locations, such as at neurons that represent HVAC equipment, etc.
- the state then diffuses through the model through the time steps.
- the model outputs a comfort curve of the state that has diffused at specific locations, such as at Chris's office—the comfort curve may be a time curve of the temperature at a neuron representing Chris's office.
- This comfort curve is then compared with the desired state over the model running time.
- the desired state may be the desired temperature in Chris's office over the time that the model was run for. This pattern iterates until the modeled comfort curve is close enough to the desired comfort curve.
- the last demand curve used becomes the solution, that is, the amount of state needed to be injected into areas in the model over time.
- the neural network will be comprised of many neurons that represent zones. The state values of these neuron zones affect each other; for example, a room with a heater in it will warm up the room next door. Therefore, the calculations in a neural network that models such a system are far from straightforward.
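- A two-zone sketch makes this coupling concrete (the wall conductance, heat capacity, and timestep below are illustrative assumptions): heat injected into the zone with the heater also raises the neighboring zone through their shared wall, so the two neurons' state values cannot be computed independently:

```python
U, CAP, DT = 300.0, 2.0e5, 60.0  # wall conductance W/K, zone heat capacity J/K, timestep s

def step(t1, t2, q2):
    """One timestep: heater power q2 enters Zone 2; the shared wall
    moves heat between the zones, coupling their states."""
    flow = U * (t2 - t1)          # watts through the shared wall neuron
    t1 += DT * flow / CAP
    t2 += DT * (q2 - flow) / CAP
    return t1, t2

t1, t2 = 18.0, 18.0
for _ in range(10):
    t1, t2 = step(t1, t2, q2=2000.0)
# after ten minutes both zones are warmer, with t2 still ahead of t1
```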
- a heterogenous model 305 may take as input a demand curve 302 (e.g., an amount of state over time 310 , in this example, from T(0) to T(n), with the state value fluctuating during that time).
- These comfort curves 215 can be areas whose state needs can be quantified, such as the amount of humidity that should be in an area, how loud a sound should be, the amount of CO2 in a space allowed, temperature, etc.
- FIG. 3 only explicitly shows one set of demand curves 302 and a weather curve 320 as input, but many heterogenous model runs will be run for multiple zones, and will include multiple demand curve 302 /zone energy inputs 310 .
- other values that may affect the state of a designated space during the time the model is being run may also be used as input into the neural network 205 .
- the desired comfort curves are for a known period of time.
- weather 320 affecting the building such as one or more of the temperatures, wind speed, rainfall, etc. in a weather forecast
- FIG. 4 is a functional block diagram 400 showing different machine learning functions.
- new demand curves are determined. Ways of determining new demand curves 240 are shown with reference to FIG. 4 .
- These demand curves may be determined by using machine learning 405 .
- These machine learning 405 techniques may comprise determining gradients 415 of the various variables within the neural network with respect to the cost function. This will provide a space which allows one to incrementally optimize the inputs 430 using the gradients. This shows which way to step to minimize the cost function with respect to the inputs.
- gradients of the internal variables with respect to the cost function are determined 415 .
- with reference to FIG. 7 , the internal parameters, e.g., 707 - 737 , of each neuron have their partial derivatives calculated.
- Different neurons may have different parameters.
- a neuron modeling a pump may have parameters such as density, shaft speed, volume flow ratio, hydraulic power, etc.
- backpropagation 420 can be used to determine the partial derivatives. Backpropagation finds the derivative of the error (given by the cost function) for the parameters in the neural network, that is, backpropagation computes the gradient of the cost function with respect to the parameters within the network.
- Backpropagation 420 calculates the derivative between the cost function and parameters by using the chain rule from the last neurons calculated during the feedforward propagation (a backward pass), through the internal neurons, to the first neurons calculated.
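- For a single parameter, the backward pass reduces to one application of the chain rule; this toy example (one linear neuron with gain a and a squared-error cost, an assumption for illustration) traces it explicitly:

```python
def backward(p, a, d):
    """Gradient of the cost C(p) = (a*p - d)**2 with respect to p,
    computed by the chain rule from the cost back to the parameter."""
    y = a * p          # forward pass through the neuron
    r = y - d          # residual against the desired value d
    dC_dy = 2 * r      # derivative of the cost w.r.t. the neuron output
    dC_dp = dC_dy * a  # chain rule back to the parameter
    return dC_dp
```

For p = 3, a = 2, d = 4, this returns 2 * (2*3 - 4) * 2 = 8.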
- backpropagation will be performed by automatic differentiation 425 .
- automatic differentiation is accomplished by augmenting the algebra of real numbers and obtaining a new arithmetic: an additional component is added to every number to represent the derivative of a function at the number, and all arithmetic operators are extended for the augmented algebra. Other methods may be used to determine the parameter partial derivatives, including Particle Swarm, SOMA (Self-Organizing Migrating Algorithm), etc.
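- The augmented-arithmetic approach described above can be sketched with dual numbers (a minimal forward-mode implementation supporting only addition and multiplication):

```python
class Dual:
    """A number augmented with a derivative component; arithmetic is
    extended so the component propagates by the usual calculus rules."""

    def __init__(self, value, deriv=0.0):
        self.value, self.deriv = value, deriv

    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.value + other.value, self.deriv + other.deriv)
    __radd__ = __add__

    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        # product rule: (uv)' = u'v + uv'
        return Dual(self.value * other.value,
                    self.deriv * other.value + self.value * other.deriv)
    __rmul__ = __mul__

# derivative of f(x) = 3*x*x + 2*x at x = 4: seed the derivative with 1
x = Dual(4.0, 1.0)
y = 3 * x * x + 2 * x
# y.value is 56.0 and y.deriv is 26.0 (= 6*4 + 2)
```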
- the backpropagation may work on a negative gradient of the cost function, as the negative gradient points in the direction of smaller values.
- the demand curve 302 is optimized to lower the value of the cost function with respect to the inputs. This process is repeated incrementally.
- Many different optimizers may be used, which can be roughly grouped into 1) gradient descent methods 435 and 2) other methods 440 .
- the gradient descent methods 435 are standard gradient descent, stochastic gradient descent, and mini-batch gradient descent.
- the other methods 440 are Momentum, Adagrad, AdaDelta, ADAM (adaptive movement estimation), and so on.
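- Single-parameter update rules for one method from each group can be sketched as follows (the hyperparameter defaults are typical values, assumed rather than specified by this disclosure):

```python
def sgd_step(u, g, lr=0.01):
    """Standard gradient descent: step against the gradient g."""
    return u - lr * g

def adam_step(u, g, m, v, t, lr=0.01, b1=0.9, b2=0.999, eps=1e-8):
    """ADAM: keep running first and second moments of the gradient
    and rescale the step; t is the 1-based iteration count."""
    m = b1 * m + (1 - b1) * g
    v = b2 * v + (1 - b2) * g * g
    m_hat = m / (1 - b1 ** t)  # bias correction, first moment
    v_hat = v / (1 - b2 ** t)  # bias correction, second moment
    return u - lr * m_hat / (v_hat ** 0.5 + eps), m, v
```

Either step function can serve as the optimizer in the determine-new-simulated-demand-curve step, applied elementwise to the demand curve values.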
- the heterogenous model 305 is run again. If the goal state is reached 235 , then the last demand curve that was determined is the demand curve that satisfies the original comfort curve requirements. Once the demand curve has been determined, it can be used to determine how much state needs to be input into different zones in a structure, and at what times, to meet the comfort curve needs. This method can save as much as 30% of energy costs over adjusting the state only when the need arises. If the goal state has not been reached, then the determine new simulated demand curve step 240 , the run neural network step 220 , the output simulated comfort curve step 225 , and the compute cost function step 230 are iteratively performed, which incrementally optimizes the demand curve until the goal state 235 is reached.
- FIG. 5 depicts a physical system 500 whose behavior can be determined by using a neural network, which may be a heterogenous neural network.
- a portion of a structure 500 is shown which comprises a Wall 1 505 .
- This Wall 1 505 is connected to a room which comprises Zone 1 525 .
- This zone also comprises a sensor 545 which can determine state of the zone.
- Wall 2 510 is between Zone 1 525 and Zone 2 530 .
- Zone 2 does not have a sensor.
- Wall 3 515 is between the two zones 1 525 and 2 530 and the two zones Zone 3 535 and Zone 4 540 .
- Zone 3 and Zone 4 do not have a wall between them.
- Zone 4 has a sensor 550 that can determine state in Zone 4 .
- Zones 3 535 and Zone 4 540 are bounded on the right side by Wall 4 520 .
- Zone 2 530 has a heater 555 , which disseminates heat over the entire structure.
- the zones 1 - 4 are controlled building zones, as their state (in this case heat) can be controlled by the heater 555 .
- FIG. 6 depicts a simplified heterogenous neural network 600 that may be used to model behaviors of the simplified physical system of FIG. 5 .
- areas of the structure are represented by neurons that are connected with respect to the location of the represented physical structure.
- the neurons are not put in layers, as in other types of neural networks.
- the neural network configuration is, in some embodiments, determined by a physical layout; that is, the neurons are arranged topologically similar to a physical structure that the neural net is simulating.
- Wall 1 505 is represented by neuron 605 .
- This neuron 605 is connected by edges 660 to neurons representing Zone 1 620 , Wall 2 610 , and Zone 2 630 .
- the neurons for Zone 1 620 , Wall 2 610 , and Zone 2 630 are connected by edges to the neuron representing Wall 3 615 .
- the neuron representing Wall 3 515 is connected by edges to the neurons representing Zone 3 635 and Zone 4 640 .
- Those two neurons 635 , 640 are connected by edges to the neuron representing Wall 4 520 .
- a neuron may have multiple edges leading to another neuron, as will be discussed later.
- Neurons may have edges that reference each other.
- edge 660 may be two-way.
- the edges have inputs that are adjusted by activation functions within neurons.
- Some inputs may be considered temporary properties that are associated with the physical system, such as temperature.
- a temperature input represented in a neural network 600 may represent temperature in the corresponding location in the physical system 500 , such that a temperature input in Neuron Zone 1 620 can represent the temperature at the sensor 545 in Zone 1 525 .
- the body of the neural net is not a black box, but rather contains information that is meaningful (in this case, a neuron input represents a temperature within a structure) and that can be used.
- inputs may enter and exit from various places in the neural network, not just from an input and an output layer.
- inputs of type 1 are the dashed lines entering each neuron; inputs of type 2 are the straight lines.
- each neuron has at least one input. For purposes of clarity, not all inputs are included. Of those that are, inputs of type 2 are marked with a straight line, while inputs of type 1 are marked with a dashed line.
- Input 665 is associated with the neuron that represents Wall 1 605
- input 652 is associated with the neuron that represents Wall 3 615 .
- Signals (or weights), passed from edge to edge and transformed by the activation functions, can travel not just from one layer to the next in lock-step fashion, but back and forth between layers, such as signals that travel along edges from Zone 1 620 to Wall 2 610 , and from there to Zone 2 630 .
- a system that represents a building may have several inputs that represent different states, such as temperature, humidity, atmospheric pressure, wind, dew point, time of day, time of year, etc. These inputs may be time curves that define the state over time.
- a system may have different inputs for different neurons.
- inputs may be time curves, which define the state at particular times over a period of time.
- outputs are not found in a traditional output layer, but rather are values within a neuron within the neural network. These may be located in multiple neurons. These outputs for a run may be time curves.
- the neuron associated with Zone 1 620 may have a temperature value that can be looked at each timestep of a model run, creating temperature time curves that represent the temperature of the corresponding physical Zone 1 525 .
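- A sketch of that readout (the thermal constants are illustrative assumptions): the zone neuron's temperature variable is captured at every timestep, and the accumulated values form the zone's temperature time curve:

```python
OUTSIDE, U_WALL, CAP, DT = 50.0, 200.0, 1.0e5, 60.0  # deg, W/K, J/K, s

def run_model(heat_inputs, t0=68.0):
    """Steps one zone neuron: equipment heat (the demand curve) flows in,
    heat leaks out through a wall neuron, and the zone's internal
    temperature value is read out each step as the output curve."""
    temp, curve = t0, []
    for q in heat_inputs:
        leak = U_WALL * (temp - OUTSIDE)  # wall neuron's conduction equation
        temp += DT * (q - leak) / CAP     # zone neuron's update equation
        curve.append(temp)                # captured internal state value
    return curve
```

With no heat input the captured curve decays toward the outside temperature; with heat input matching the leak, it holds steady.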
- activation functions in a neuron transform the weights on the upstream edges, and then send none, some, or all of the transformed weights to the next neuron(s). Not every activation function transforms every weight. Some activation functions may not transform any weights. In some embodiments, each neuron may have a different activation function. In some embodiments, some neurons may have similar functions.
- a heterogenous neural network may have many inputs and outputs. Each output may have its own comfort curve associated with it, so a heterogenous neural network may input many demand curves 302 and output many comfort curves 304 .
- FIG. 7 is a block diagram 700 describing possible inputs and outputs of neurons.
- Neural networks described herein may not have traditional input and output layers. Rather, neurons may have internal values that can be captured as output. Similarly, a wide variety of neurons, even those deep within a neural net can be used for input.
- Chris's office may be in Zone 4 540 .
- This zone may be represented by a neuron 640 that is somewhere in the middle of a neural network 600 .
- a zone neuron 715 may have an activation function that is comprised of several equations that model state moving through the space.
- the space itself may have inputs associated with it, e.g., Layer Mass 732 , Layer Heat Capacity 735 , and Heat Transfer Rate 737 , to name a few.
- the neuron may also have temporary values that flow through the neural network, that may be changed by the neuron's activation function.
- These type 2 inputs 707 , 717 may be qualities such as Temperature 719 , Mass Flow Rate 721 , Pressure 723 , etc.
- Different neurons may have different values.
- a Wall Neuron 705 may have Type 1 inputs 725 such as Surface Area 727 , Layer Heat Capacity 728 , and Thermal Resistance 729 , as well as Type 2 inputs 707 .
- a comfort curve output 304 of the neural network 600 may comprise a value gathered from among the variables in a neuron.
- Chris's office may be zone 4 540 , which is represented by the neuron Zone 4 640 .
- the output of the heterogenous model 305 may be a time series of the zone neuron temperature.
- a cost function can be calculated using these internal neural net values.
- a cost function (also sometimes called a loss function) is a performance metric on how well the neural network is reaching its goal of generating outputs as close as possible to the desired values.
- Zone 1 525 has a sensor 545 which can record state within the zone.
- Zone 4 540 has a sensor 550 which can also record state values.
- desired values may be synthetic, that is, they are the values that are hoped to be reached.
- the desired values may be derived from actual measurements.
- the desired values are time series of the actual temperatures from the sensors.
- the network prediction values are not determined from a specific output layer of the neural network, as the data we want is held within neurons within the network.
- the zone neurons 715 in our sample model hold a temperature value 719 .
- the network prediction values to be used for the cost function are, in this case, the values (temperature 719 ) within the neuron 620 that corresponds to Zone 1 525 (where we have data from sensor 545 ) and the values (temperature 719 ) within the neuron 640 that corresponds to Zone 4 540 , with sensor 550 .
- a record of the temperature values from neuron 620 can be accumulated from time t0 to tn, and likewise from neuron 640 , which may hold another internal temperature from time t0 to tn or a different value.
- These are our network prediction values.
- the desired values are data from the sensors 545 and 550 .
- heterogenous neural networks comprise neural networks that have neurons with different activation functions. These neurons may comprise virtual replicas of actual or theoretical physical locations. The activation functions of the neurons may comprise multiple equations that describe state moving through a location associated with the neuron.
- heterogenous neural networks also have neurons that comprise multiple variables that hold values that are meaningful outside of the neural network itself. For example, a value, such as a temperature value (e.g., 719 ) may be held within a neuron (e.g., 640 ) which can be associated with an actual location (e.g., 540 ).
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Business, Economics & Management (AREA)
- General Engineering & Computer Science (AREA)
- Mathematical Physics (AREA)
- Health & Medical Sciences (AREA)
- Evolutionary Computation (AREA)
- Artificial Intelligence (AREA)
- Human Resources & Organizations (AREA)
- Software Systems (AREA)
- Strategic Management (AREA)
- General Health & Medical Sciences (AREA)
- Data Mining & Analysis (AREA)
- Economics (AREA)
- Life Sciences & Earth Sciences (AREA)
- Biomedical Technology (AREA)
- Biophysics (AREA)
- Computing Systems (AREA)
- Signal Processing (AREA)
- Tourism & Hospitality (AREA)
- Molecular Biology (AREA)
- Computational Linguistics (AREA)
- Entrepreneurship & Innovation (AREA)
- General Business, Economics & Management (AREA)
- Marketing (AREA)
- Mechanical Engineering (AREA)
- Automation & Control Theory (AREA)
- Combustion & Propulsion (AREA)
- Chemical & Material Sciences (AREA)
- Fuzzy Systems (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Medical Informatics (AREA)
- Geometry (AREA)
- Pure & Applied Mathematics (AREA)
- Mathematical Optimization (AREA)
- Mathematical Analysis (AREA)
- Computational Mathematics (AREA)
- Quality & Reliability (AREA)
Abstract
Description
- The present application hereby incorporates by reference the entirety of, and claims priority to, U.S. provisional patent application Ser. No. 62/704,976 filed 5 Jun. 2020.
- The present application hereby incorporates by reference U.S. utility patent application Ser. No. 17/009,713, filed Sep. 1, 2020.
- The present disclosure relates to neural network methods for creating demand curves from comfort curves. More specifically, the present disclosure relates to receiving a time curve of desired state values and outputting a time curve of energy amounts that may be input into a structure to achieve the desired state values.
- Current building automation systems rely on the current state of the building and simple requirements to determine building behavior. For example, a building may know that at 7:00 am it should be at 70°. At 7:00 am it will then check the current temperature and turn on the heater to warm the building up to the desired temperature, or turn on the air conditioner to cool it down. Not only is this simplistic, but it leaves buildings unable to adapt to predictable events, such as when weather reports predict changes, or when the number of people in the building is known to change at a certain time. Trying to model buildings quickly runs into problems, as even simple buildings are very complex in terms of the current controllers that are used to manage their systems. Proportional-Integral-Derivative controllers (PID controllers), originally designed for ship steering in 1922, are widely used to control HVAC and other systems in buildings, but fit very poorly into creating models that have more than a single setpoint. To model a room heterogeneously, roughly 50 PID controllers would be needed. Why so many? The walls are made of multiple materials that transfer state differently, and there are typically four walls; the ceiling and floor are made of different layers of materials; forces act on the outside of the walls; and there are heat sources, such as people and lights, in the room. All of these together make up the building.
- This summary is provided to introduce a selection of concepts in a simplified form that are further described below in the Detailed Description section. This summary does not identify required or essential features of the claimed subject matter.
- In an embodiment, a method of determining a demand curve implemented by one or more computers is disclosed, comprising: receiving a neural network of a plurality of controlled building zones; receiving a desired comfort curve for at least one of the plurality of controlled building zones; performing a machine learning process to run the neural network using a simulated demand curve as input and receiving a simulated comfort curve as output; computing a cost function using the simulated comfort curve and the desired comfort curve; using the cost function to determine a new simulated demand curve; iteratively performing the using and computing steps until a goal state is reached; and determining that the new simulated demand curve is the demand curve upon the goal state being reached.
- In an embodiment, the new simulated demand curve is a time series of zone energy inputs and the simulated comfort curve is a time series of zone state values.
- In an embodiment, computing the cost function further comprises determining a difference between the desired comfort curve and the simulated comfort curve.
- In an embodiment, performing the machine learning process further comprises performing automatic differentiation recursively through the neural network, producing a new simulated demand curve.
- In an embodiment, the goal state comprises the cost function being minimized, the model running for a specific time, or the model running a specific number of cycles.
- In an embodiment, the neural network comprises multiple activation functions.
- In an embodiment, performing the machine learning process comprises computing a gradient of the neural network by taking a reverse gradient of the cost function result forward using automatic differentiation.
- In an embodiment, performing the machine learning process further comprises taking a gradient of the neural network backward using automatic differentiation.
- In an embodiment, the neural network is a heterogeneous neural network.
- In an embodiment, any neuron may be an output neuron.
- In an embodiment, a demand curve creation system is disclosed, the system comprising: at least one processor; a memory in operable communication with the processor; and demand curve creation code residing in the memory, the code comprising: receiving a heterogeneous neural network of a plurality of controlled building zones; receiving a ground truth comfort state zone curve for at least one of the plurality of controlled building zones; performing a machine learning process to run the heterogeneous neural network using a simulated demand curve as input and receiving a simulated comfort curve as output; computing a cost function using the simulated comfort curve and the ground truth comfort state zone curve; using the cost function to determine a new simulated demand curve; iteratively performing the using and computing steps until a goal state is reached; and determining that the current simulated demand curve is the demand curve upon the goal state being reached.
- In an embodiment, the machine learning process comprises a backward pass that computes the gradient with respect to the cost function, and then uses an optimizer to update the demand curves.
- In an embodiment, the backward pass that computes the gradient uses automatic differentiation.
- In an embodiment, the optimizer uses stochastic gradient descent or mini-batch gradient descent to minimize the cost function.
- In an embodiment, the neural network is a heterogeneous neural network.
- In an embodiment, the heterogenous neural network comprises neurons, any of which may be inputs.
- In an embodiment, any of the neurons may be outputs.
- In an embodiment, a computer-readable storage medium configured with executable instructions to perform a method for creation of a demand curve upon receipt of a comfort curve is disclosed, the method comprising: receiving a ground truth comfort curve for at least one of a plurality of controlled building zones; performing a machine learning process to run a heterogeneous neural network using a simulated demand curve as input and receiving a simulated comfort curve as output; computing a cost function using the simulated comfort curve and the ground truth comfort curve; using the cost function to determine a new simulated demand curve; iteratively performing the using and computing steps until a goal state is reached; and determining that the current simulated demand curve is the demand curve upon the goal state being reached.
- In an embodiment, the machine learning process comprises using automatic differentiation to perform backpropagation.
- In an embodiment, the machine learning process comprises using a gradient descent method to perform incremental optimization of the simulated demand curve.
- Additional features and advantages will become apparent from the following detailed description of illustrated embodiments, which proceeds with reference to accompanying drawings.
-
FIG. 1 depicts a computing system in conjunction with which described embodiments can be implemented. -
FIG. 2 is a flow diagram showing an exemplary embodiment of a method to determine demand curves from comfort curves. -
FIG. 3 is a functional block diagram showing an exemplary embodiment of the input and output of a model with which described embodiments can be implemented. -
FIG. 4 is a functional block diagram showing different machine learning functions with which described embodiments can be implemented. -
FIG. 5 depicts a physical system whose behavior can be determined by using a neural network. -
FIG. 6 depicts a simplified neural network that may be used to model behaviors of the physical system of FIG. 5 . -
FIG. 7 is a block diagram describing the nature of exemplary neurons. - Corresponding reference characters indicate corresponding components throughout the several views of the drawings. Skilled artisans will appreciate that elements in the FIGURES are illustrated for simplicity and clarity and have not necessarily been drawn to scale. For example, the dimensions of some of the elements in the figures may be exaggerated relative to other elements to help to improve understanding of various embodiments. Also, common but well-understood elements that are useful or necessary in a commercially feasible embodiment are often not depicted in order to facilitate a less obstructed view of these various embodiments.
- Disclosed below are representative embodiments of methods, computer-readable media, and systems having particular applicability to systems and methods for building neural networks that describe physical structures. Described embodiments implement one or more of the described technologies.
- Various alternatives to the implementations described herein are possible. For example, embodiments described with reference to flowchart diagrams can be altered, such as, for example, by changing the ordering of stages shown in the flowcharts, or by repeating or omitting certain stages.
- Embodiments comprise using a heterogeneous neural model of a defined space that models the various materials in the space as connected nodes, and uses machine learning techniques to train the model. “Defined Space” should be understood broadly—it can be a building, several buildings, buildings and grounds around it, a defined outside space, such as a garden or an irrigated field, etc. A portion of a building may be used as well. For example, a floor of a building may be used, a random section of a building, a room in a building, etc. This may be a space that currently exists, or may be a space that exists only as a design. Other choices are possible as well.
- The defined space may be divided into zones. Each zone may have a different set of requirements for state values during a time period. For example, for the state "temperature," a user Chris may like their office at 72° from 8 am-5 pm, while a user Avery may prefer their office at 77° from 6 am-4 pm. These preferences can be turned into comfort curves, each of which is a chronological (time-based) state curve. Chris's office comfort curve may be 68° from midnight to 8 am, 72° from 8 am to 5 pm, then 68° from 5 pm to midnight. The comfort curves (for a designated space, such as Chris's office) are then used to calculate demand curves, which are the amount of state that may be input into the associated zones to achieve the desired state over time. For Chris's office, that is the amount of heat (or cold) that may be pumped into their office for the 24-hour time period covered by the comfort curve. These zones are controlled by some sort of equipment, allowing their state to be changed. Such zones may be referred to as controlled building zones.
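A comfort curve of this kind is simply a time series of setpoints. The sketch below builds the hourly comfort curve for the hypothetical "Chris's office" example; the function name and the hourly resolution are illustrative assumptions, not part of the disclosure.

```python
# A minimal sketch: expand a schedule of (start_hour, temperature)
# segments into an hourly comfort curve (a chronological state curve).
# The hours and temperatures follow the Chris's-office example above.

def comfort_curve(schedule, hours=24):
    """Return a list of `hours` temperatures; each hour takes the
    temperature of the most recent segment that has started."""
    curve = []
    for hour in range(hours):
        temp = next(t for start, t in reversed(schedule) if start <= hour)
        curve.append(temp)
    return curve

# 68 degrees overnight, 72 degrees from 8 am to 5 pm, 68 degrees after.
chris_office = comfort_curve([(0, 68), (8, 72), (17, 68)])
```

The resulting list is the ground truth curve a cost function would later compare against.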
- As a brief overview, in an illustrative embodiment, we have the comfort curve(s) we want zones (e.g., areas) to conform to, such as Chris's office, as described above, and we wish to find the amount of state necessary over time to meet the temperature (e.g., state) indicated by the comfort curve. We call the amount of state over time a demand curve. To determine comfort curves, we use demand curves as input into a heterogeneous model, such as a heterogeneous neural network that represents the zones within a structure. We then run the model forward with the demand curves as input to determine the comfort curve output for that demand curve. That is, when we pump such an amount of state into a structure, the structure, in turn, has some amount of state over time. This can be thought of as running a furnace from time T to time T+20, and from time T+120 to time T+145, in a structure with two zones. The state propagates through both zones in the neural network, which includes the walls, the air, etc. The model outputs the state from time T to time T+240 in both zones, giving us two comfort curves, e.g., what temperature the two zones were from time T to time T+240. We then check the comfort curve output against the desired comfort curve using a cost function, and then machine learning techniques are used to tune the input values to create a new demand curve. In some embodiments, a gradient of the cost function is calculated through backpropagation to the input, and then optimized by, e.g., a type of gradient descent, giving us a new demand curve to try. This is repeated until a goal state is reached. The last demand curve run is the demand curve that is then used to determine the state needed over time in one or more spaces.
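The overall loop can be sketched as follows. This is a toy illustration, not the disclosed implementation: a single-zone linear thermal recurrence (`run_model`, with made-up `gain` and `leak` parameters) stands in for the heterogeneous neural network, and a finite-difference gradient stands in for automatic differentiation. The key point it demonstrates is that the demand curve itself, not the model weights, is the quantity being optimized.

```python
# Toy forward model: each unit of demand adds heat; the zone leaks heat
# toward the outside temperature. Returns the simulated comfort curve.
def run_model(demand, t0=65.0, gain=1.0, leak=0.5, outside=50.0):
    temps, t = [], t0
    for d in demand:
        t = t + gain * d - leak * (t - outside)
        temps.append(t)
    return temps

# Cost function: mean squared error between simulated and desired curves.
def cost(simulated, desired):
    return sum((s - d) ** 2 for s, d in zip(simulated, desired)) / len(desired)

def solve_demand(desired, steps=400, lr=0.3, eps=1e-4):
    demand = [0.0] * len(desired)          # initial guess (could be random)
    for _ in range(steps):
        base = cost(run_model(demand), desired)
        if base < 0.001:                   # goal state: cost below threshold
            break
        # Finite-difference gradient of cost w.r.t. each demand value
        # (a real embodiment would use automatic differentiation).
        grad = []
        for i in range(len(demand)):
            demand[i] += eps
            grad.append((cost(run_model(demand), desired) - base) / eps)
            demand[i] -= eps
        # Gradient descent step on the *input* demand curve.
        demand = [d - lr * g for d, g in zip(demand, grad)]
    return demand

desired = [70.0] * 8                       # hold the zone at 70 degrees
demand = solve_demand(desired)
```

The last demand curve tried when the goal state is reached is returned as the answer, mirroring the flow described above.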
- “Optimize” means to improve, not necessarily to perfect. For example, it may be possible to make further improvements in a value or an algorithm which has been optimized.
- “Determine” means to get a good idea of, not necessarily to achieve the exact value. For example, it may be possible to make further improvements in a value or algorithm which has already been determined.
- A “cost function,” generally, compares the output of a simulation model with the ground truth—a time curve that represents the answer the model is attempting to match. This gives us the cost—the difference between simulated truth curve values and the expected values (the ground truth). The cost function may use a least squares function, a Mean Error (ME), Mean Squared Error (MSE), Mean Absolute Error (MAE), a Categorical Cross Entropy Cost Function, a Binary Cross Entropy Cost Function, and so on, to arrive at the answer. In some implementations, the cost function is a loss function. In some implementations, the cost function is a threshold, which may be a single number that indicates the simulated truth curve is close enough to the ground truth. In other implementations, the cost function may be a slope. The slope may also indicate that the simulated truth curve and the ground truth are of sufficient closeness. When a cost function is used, it may be time variant. It also may be linked to factors such as user preference, or changes in the physical model. The cost function applied to the simulation engine may comprise models of any one or more of the following: energy use, primary energy use, energy monetary cost, human comfort, the safety of building or building contents, the durability of building or building contents, microorganism growth potential, system equipment durability, system equipment longevity, environmental impact, and/or energy use CO2 potential. The cost function may utilize a discount function based on discounted future value of a cost. In some embodiments, the discount function may devalue future energy as compared to current energy such that future uncertainty is accounted for, to ensure optimized operation over time. 
The discount function may devalue the future cost function of the control regimes, based on the accuracy or probability of the predicted weather data and/or on the value of the energy source on a utility pricing schedule, or the like.
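Several of the cost functions named above are one-line comparisons of the simulated curve against the ground truth. A minimal sketch of three of them (the function names are ordinary statistics terms, not identifiers from the disclosure):

```python
# Candidate cost functions comparing a simulated comfort curve (sim)
# against the ground truth curve (truth), per the list above.

def mean_error(sim, truth):
    """ME: average signed difference (can cancel out)."""
    return sum(s - t for s, t in zip(sim, truth)) / len(truth)

def mean_squared_error(sim, truth):
    """MSE: penalizes large deviations quadratically."""
    return sum((s - t) ** 2 for s, t in zip(sim, truth)) / len(truth)

def mean_absolute_error(sim, truth):
    """MAE: average magnitude of the deviation."""
    return sum(abs(s - t) for s, t in zip(sim, truth)) / len(truth)
```

A threshold-style cost, as also described, would simply compare one of these values against a single number indicating "close enough."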
- A “goal state” may read in a cost (a value from a cost function) and determine if that cost meets criteria such that a goal has been reached. Such criteria may be the cost reaching a certain value, being higher or lower than a certain value, being between two values, etc. A goal state may also look at the time spent running the simulation model overall, if a specific running time has been reached, the neural network running a specific number of iterations, and so on.
- A machine learning process is one of a variety of computer algorithms that improve automatically through experience. Common machine learning processes are Linear Regression, Logistic Regression, Decision Tree, Support Vector Machine (SVM), Naive Bayes, K-Nearest Neighbors (kNN), K-Means Clustering, Random Forest, Backpropagation with optimization, etc.
- An “optimization method” may include Gradient Descent, stochastic gradient descent, min-batch gradient descent, methods based on Newton's method, inversions of the Hessian using conjugate gradient techniques, Evolutionary computation such as Swarm Intelligence, Bee Colony optimization; SOMA, and Particle Swarm, etc. Non-linear optimization techniques, and other methods known by those of skill in the art may also be used.
- In some machine learning techniques, backpropagation may be performed by automatic differentiation, or by a different method to determine partial derivatives.
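For a simple recurrence, the partial derivatives that backpropagation needs can be written out by hand as a reverse (adjoint) pass; an automatic-differentiation library would produce the same values. The sketch below assumes a toy single-zone thermal recurrence (not the disclosed network) and computes the exact gradient of an MSE cost with respect to each demand value.

```python
# Forward pass for the toy recurrence
#   t[k] = (1 - leak) * t[k-1] + gain * demand[k] + leak * outside
# returning (cost, stored temperatures).
def model_cost(demand, desired, t0=65.0, gain=1.0, leak=0.5, outside=50.0):
    temps, t = [], t0
    for d in demand:
        t = t + gain * d - leak * (t - outside)
        temps.append(t)
    c = sum((a - b) ** 2 for a, b in zip(temps, desired)) / len(desired)
    return c, temps

# Reverse pass: accumulate the adjoint lam = d(cost)/d(t[k]) backward
# in time, then d(cost)/d(demand[k]) = gain * lam.
def reverse_gradient(demand, desired, gain=1.0, leak=0.5):
    n = len(demand)
    _, temps = model_cost(demand, desired)
    grad = [0.0] * n
    lam = 0.0
    for k in reversed(range(n)):
        lam = 2.0 * (temps[k] - desired[k]) / n + (1.0 - leak) * lam
        grad[k] = gain * lam
    return grad
```

The reverse pass costs one sweep over the timesteps, versus one forward run per input for finite differences.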
- A “state” as used herein may be Air Temperature, Radiant Temperature, Atmospheric Pressure, Sound Pressure, Occupancy Amount, Indoor Air Quality, CO2 concentration, Light Intensity, or another state that can be measured and controlled.
- Some structures comprise multiple zones (such as rooms or specific areas monitored by a sensor). Each separate zone may be modeled by its own neural model. The collection of neural models can comprise the heterogeneous model of the structure. In such a multiple-zone model, when zones share a surface, such as (in a building implementation) a wall, a floor, or a ceiling, the outside neuron of one neural model may be used as the inner neuron of the next. Some zones may overlap with other zones, while some zones do not. The structure may be covered in zones, or some locations within a structure may have no explicit zone. Defined spaces may be partitioned into multiple subsystems. Any of these partitioned defined spaces may be used as the subsystems.
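The shared-surface idea can be sketched by letting two zone models reference the same neuron object; the class and attribute names below are illustrative assumptions, not identifiers from the disclosure.

```python
# A shared surface (e.g., the wall between two zones) is one object
# referenced by both zone models, so the "outside" neuron of one zone
# model is the "inner" neuron of the next.

class SurfaceNeuron:
    def __init__(self, name):
        self.name = name
        self.state = 0.0      # e.g., surface temperature

class ZoneModel:
    def __init__(self, name, surfaces):
        self.name = name
        self.surfaces = surfaces   # list of SurfaceNeuron objects

shared_wall = SurfaceNeuron("wall_A_B")
zone_a = ZoneModel("A", [SurfaceNeuron("wall_A_out"), shared_wall])
zone_b = ZoneModel("B", [shared_wall, SurfaceNeuron("wall_B_out")])

# A state change on zone A's side of the shared wall is immediately
# visible to zone B, because both models hold the same neuron.
zone_a.surfaces[1].state = 71.5
```

This is what lets a collection of per-zone models compose into one heterogeneous model of the structure without duplicating shared boundaries.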
-
FIG. 1 illustrates a generalized example of a suitable computing environment 100 in which described embodiments may be implemented. The computing environment 100 is not intended to suggest any limitation as to scope of use or functionality of the disclosure, as the present disclosure may be implemented in diverse general-purpose or special-purpose computing environments. - With reference to
FIG. 1 , the core processing is indicated by the core processing 130 box. The computing environment 100 includes at least one central processing unit 110 and memory 120. The central processing unit 110 executes computer-executable instructions and may be a real or a virtual processor. It may also comprise a vector processor 112, which allows same-length neuron strings to be processed rapidly. In a multi-processing system, multiple processing units execute computer-executable instructions to increase processing power, and as such the vector processor 112, GPU 115, and CPU can be running simultaneously. The memory 120 may be volatile memory (e.g., registers, cache, RAM), non-volatile memory (e.g., ROM, EEPROM, flash memory, etc.), or some combination of the two. The memory 120 stores software 185 implementing the described methods of using a neural net to derive a demand curve from a comfort curve. - A computing environment may have additional features. For example, the
computing environment 100 includes storage 140, one or more input devices 150, one or more output devices 155, one or more network connections (e.g., wired, wireless, etc.) 160, as well as other communication connections 170. An interconnection mechanism (not shown) such as a bus, controller, or network interconnects the components of the computing environment 100. Typically, operating system software (not shown) provides an operating environment for other software executing in the computing environment 100, and coordinates activities of the components of the computing environment 100. The computing system may also be distributed, running portions of the software 185 on different CPUs. - The
storage 140 may be removable or non-removable, and includes magnetic disks, magnetic tapes or cassettes, CD-ROMs, CD-RWs, DVDs, flash drives, or any other medium which can be used to store information and which can be accessed within the computing environment 100. The storage 140 stores instructions for the software, such as demand curve creation software 185, to implement methods of neuron discretization and creation. - The input device(s) 150 may be a device that allows a user or another device to communicate with the
computing environment 100, such as a touch input device (a keyboard, video camera, microphone, mouse, pen, or trackball), a scanning device, a touchscreen, or another device that provides input to the computing environment 100. For audio, the input device(s) 150 may be a sound card or similar device that accepts audio input in analog or digital form, or a CD-ROM reader that provides audio samples to the computing environment. The output device(s) 155 may be a display, printer, speaker, CD-writer, or another device that provides output from the computing environment 100. - The communication connection(s) 170 enable communication over a communication medium to another computing entity. The communication medium conveys information such as computer-executable instructions, compressed graphics information, or other data in a modulated data signal.
Communication connections 170 may comprise input devices 150, output devices 155, and input/output devices that allow a client device to communicate with another device over network 160. A communication device may include one or more wireless transceivers for performing wireless communication and/or one or more communication ports for performing wired communication. These connections may include network connections, which may be a wired or wireless network such as the Internet, an intranet, a LAN, a WAN, a cellular network, or another type of network. It will be understood that network 160 may be a combination of multiple different kinds of wired or wireless networks. The network 160 may be a distributed network, with multiple computers, which might be building controllers, acting in tandem. A communication connection 170 may be a portable communications device such as a wireless handheld device, a cell phone device, and so on. - Computer-readable media are any available non-transient tangible media that can be accessed within a computing environment. By way of example, and not limitation, with the
computing environment 100, computer-readable media include memory 120, storage 140, communication media, and combinations of any of the above. Computer-readable storage media 165, which may be used to store computer-readable media, comprise instructions 175 and data 180. Data sources may be computing devices, such as general hardware platform servers configured to receive and transmit information over the communication connections 170. The computing environment 100 may be an electrical controller that is directly connected to various resources, such as HVAC resources, and which has a CPU 110, a GPU 115, memory 120, input devices 150, communication connections 170, and/or other features shown in the computing environment 100. The computing environment 100 may be a series of distributed computers. These distributed computers may comprise a series of connected electrical controllers. - Although the operations of some of the disclosed methods are described in a particular, sequential order for convenient presentation, it should be understood that this manner of description encompasses rearrangement, unless a particular ordering is required by specific language set forth below. For example, operations described sequentially can be rearranged or performed concurrently. Moreover, for the sake of simplicity, the attached figures may not show the various ways in which the disclosed methods, apparatus, and systems can be used in conjunction with other methods, apparatus, and systems. Additionally, the description sometimes uses terms like "determine," "build," and "identify" to describe the disclosed technology. These terms are high-level abstractions of the actual operations that are performed. The actual operations that correspond to these terms will vary depending on the particular implementation and are readily discernible by one of ordinary skill in the art.
- Further, data produced from any of the disclosed methods can be created, updated, or stored on tangible computer-readable media (e.g., tangible computer-readable media such as one or more CDs, volatile memory components (such as DRAM or SRAM), or nonvolatile memory components (such as hard drives)) using a variety of different data structures or formats. Such data can be created or updated at a local computer or over a network (e.g., by a server computer), or stored and accessed in a cloud computing environment.
-
FIG. 2 illustrates a method 200 that determines a demand curve from a comfort curve using a heterogeneous neural network. The operations of method 200 presented below are intended to be illustrative. In some embodiments, method 200 may be accomplished with one or more additional operations not described, and/or without one or more of the operations discussed. Additionally, the order in which the operations of method 200 are illustrated in FIG. 2 and described below is not intended to be limiting. - In some embodiments,
method 200 may be implemented in one or more processing devices (e.g., a digital or analog processor, or a combination of both; a series of computer controllers, each with at least one processor, networked together; and/or other mechanisms for electronically processing information, etc.). The one or more processing devices may include one or more devices executing some or all of the operations of method 200 in response to instructions stored electronically on an electronic storage medium. The one or more processing devices may include one or more devices configured through hardware, firmware, and/or software to be specifically designed for execution of one or more of the operations of method 200. - At
operation 205, a neural network model 205 is received. The neural network model may have been stored in memory, and so may be received from the processing device that the model is being run on. In some implementations, the neural network model may be stored within a distributed system, and received from more than one processor within the distributed system, etc. - In some embodiments described herein, in a heterogeneous neural network, the fundamentals of physics are utilized to model single components or pieces of equipment on a one-to-one basis with neural net neurons. In some embodiments, some neurons use physics equations as activation functions. Different types of neurons may have different equations for their activation functions, such that a neural network may have multiple activation functions within its neurons. When multiple components are linked to each other in a schematic diagram, a neural net is created that models the components as neurons. The values between the objects flow between the neurons as weights of connected edges. These neural networks model not only the real complexities of systems but also their emergent behavior and the system semantics. This bypasses two major steps of the conventional AI modeling approaches: determining the shape of the neural net, and training the neural net from scratch.
- The neurons are arranged in the order of an actual system (or set of equations), the neurons themselves comprise an equation or a series of equations that describe the function of their associated object, and certain relationships between them are determined by their location in the neural net. Therefore, a huge portion of training is no longer necessary, as the neural net itself comprises location information, behavior information, and interaction information between the different objects represented by the neurons. Further, the values held by neurons in the neural net at given times represent real-world behavior of the objects so represented. The neural net is no longer a black box but itself contains important information. This neural net structure also provides much deeper information about the systems and objects being described. Since the neural network is physics- and location-based, unlike conventional AI structures, it is not limited to a specific model, but can run multiple models for the system that the neural network represents without requiring separate creation or training.
- In some embodiments, the neural network described herein chooses the location of the neurons to tell you something about the physical nature of the system. The neurons are arranged in a way that references the locations of actual objects in the real world. The neural network also places actual equations that can be used to determine object behavior into the activation function of the neuron. The weights that move between neurons are equation variables. Different neurons may have unrelated activation functions, depending on the nature of the model being represented. In an exemplary embodiment, each activation function in a neural network may be different.
- As an exemplary embodiment, a pump could be represented in a neural network as a series of network neurons, some of which represent efficiency, energy consumption, pressure, etc. The neurons will be placed such that one set of weights (variables) feeds into the next neuron (e.g., with an equation as its activation function) that uses those weights (variables). Now, two previously required steps, shaping the neural net and training the model, may already be performed, at least in large part. Using embodiments discussed here, the neural net model need not be trained on information that is already known. In some embodiments, the individual neurons represent physical representations. These individual neurons may hold parameter values that help define the physical representation. As such, when the neural net is run, the parameters helping define the physical representation can be tweaked to more accurately represent the given physical representation.
- This has the effect of pre-training the model with a qualitative set of guarantees, as the physics equations that describe the objects being modeled are true, which saves having to find training sets and use huge amounts of computational time to run the training sets through the models to train them. A model does not need to be trained with information about the world that is already known. With objects connected in the neural net as they are connected in the real world, emergent behavior arises in the model that, in certain cases, maps to the real world. This model behavior that is uncovered is often otherwise too computationally complex to determine. Further, the neurons represent actual objects, not just black boxes. The behavior of the neurons themselves can be examined to determine the behavior of the object, and can also be used to refine the understanding of the object behavior. One example of heterogeneous models is described in U.S. patent application Ser. No. 17/143,796, filed on Jan. 7, 2021, which is incorporated herein in its entirety by reference.
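A neuron whose activation function is a physics equation can be sketched as follows. The class name, parameters, and the choice of Fourier's conduction law are illustrative assumptions standing in for whatever equation describes the modeled object.

```python
# A "physics neuron": the activation function is a physical equation
# (here, one-dimensional heat conduction through a wall) rather than a
# generic nonlinearity, and the neuron's parameters describe the
# physical object it represents.

class WallConductionNeuron:
    def __init__(self, conductivity, area, thickness):
        self.conductivity = conductivity   # W/(m*K), material property
        self.area = area                   # m^2, wall area
        self.thickness = thickness         # m, wall thickness

    def activate(self, t_inside, t_outside):
        """Fourier's law of conduction: heat flow in watts."""
        return (self.conductivity * self.area *
                (t_inside - t_outside) / self.thickness)

# Example: an insulated wall losing heat to a colder exterior.
wall = WallConductionNeuron(conductivity=0.04, area=10.0, thickness=0.1)
heat_loss = wall.activate(t_inside=70.0, t_outside=50.0)
```

Because the activation function is a known physical law, the neuron needs no training to produce physically meaningful values, which is the pre-training effect described above.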
- At
operation 210, a simulated demand curve is received. Initially, the values of the demand curve may be random, or may be a demand curve from another similar model run, etc. As a brief overview, in an illustrative embodiment, we have the comfort curve(s) we want zones (e.g., areas) to conform to, such as Chris's office, as described above, and we wish to find the demand curve (the amount of state needed over time) necessary for the structure modeled by the heterogeneous model to meet the state (e.g., temperature) indicated by the comfort curve. To do so, we use simulated demand curves as input to the model, and run the model, which outputs the simulated comfort curve for the given demand curve. -
FIG. 3 is a functional block diagram showing an exemplary embodiment of the input and output of a model. With reference to FIG. 3 , this entails using a demand curve 302 (e.g., an amount of state over time, e.g., zone energy inputs 310) as input into the heterogeneous model 305. In some embodiments, specifically, the demand curves 302 are injected into a defined space as described by the demand curve at times T(0) through T(n); in the case of a heterogeneous model modeling multiple zones, multiple demand curves are injected into corresponding zones over time. In some embodiments, outside influences, such as weather 320, are also used as inputs. In some cases, if the model is optimized using backpropagation, the backpropagation may not be backpropagated to such outside influence inputs, as, e.g., the weather inputs are not optimized, and so are not changed. - At
operation 220, the neural network is run. Running the model may entail feedforward—running the demand curve state through the model to the outputs over time T(0)-T(n), capturing internal state values within neurons over the same time T(0)-T(n). These internal state values (such as a temperature variable in a neuron that models Chris's office) may be the outputs that define the simulated comfort curves 225. At operation 225, simulated comfort curve(s) are output. In other embodiments, the comfort curve is output 225 successively in timesteps during the model run, or other methods are used. The first time the heterogeneous model is run for a given set of comfort curves, a demand curve 210 may be supplied. This initial demand curve may be determined randomly, or another method may be used, such as a previously stored demand curve that was used as the solution to a similar comfort curve problem. - At
operation 215, the desired comfort curve(s) are received. These are the curves that describe the state the structure being modeled is to be in, e.g., the temperature Chris's office should be at for some time period. These may also be called ground truth comfort curves. Ground truth is the information provided by direct evidence, or is the desired output from the neural network. - At
operation 230, a cost function is computed using the time series of desired comfort curve(s) and the model output—a simulated comfort curve. The cost function measures the difference between the time series of desired comfort curve(s) (e.g., the desired temperature of Chris's office over 24 hours) and the comfort curve(s) output 304 from the neural network 220. Details of the cost function are described elsewhere, such as with reference to FIG. 7 . - At operation 235, a goal state is checked to determine if a stopping state has been reached. The goal state may be that the cost from the cost function is within a certain value, that the program has run for a given time, that the model has run for a given number of iterations, that a threshold value has been reached (such as the cost function being equal to or lower than the threshold value), or a different criterion may be used. If the goal state has not been reached, then a new set of inputs needs to be determined that are incrementally closer to an eventual answer—a lowest (or highest or otherwise determined) value for the cost function, as described elsewhere. This can be thought of as iteratively executing the running
neural network 220, outputting the simulated comfort curve 225, computing the cost function 230, and determining a new simulated demand curve 240 until the goal state 235 is reached. - At
operation 240, new simulated demand curve(s) are determined for the next run of the neural network. This may be performed using the cost function, machine learning algorithms, etc. In some embodiments, backpropagation is used: a gradient of the cost function with respect to the various values in the neural network (such as the inputs shown in FIG. 7) is determined, and that gradient is then backpropagated through the neural network to obtain an updated set of demand curves. - If the
goal state check 235 determines that a stopping state has been reached, then the demand curve used for the last heterogenous model run is set as the solved demand curve; that is, the demand curve that will meet the requirements of the desired comfort curve, within some range. The input given to the model becomes the eventual output. - In some implementations, once the demand curve has been determined 245, it can then be used to determine
equipment behavior 250 for controlled equipment in the building (i.e., when the equipment should turn on and off, or hold intermediate values), which can then be used to control equipment in a building. Controlling equipment in such a predetermined fashion can greatly reduce energy costs, as well as more accurately control state in a defined space for the people and objects therein. -
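The solve loop of operations 210 through 245 can be sketched in a few lines. This is a minimal illustration only: the real heterogenous model is replaced here by an assumed toy linear response (the `RESPONSE` gain, function names, and learning rate are all invented for the sketch, not taken from the patent).

```python
import numpy as np

RESPONSE = 0.8  # assumed toy gain: simulated comfort = RESPONSE * demand

def run_model(demand):
    """Stand-in for running the neural network (operation 220)."""
    return RESPONSE * demand

def cost(simulated, desired):
    """MSE between simulated and desired comfort curves (operation 230)."""
    return float(np.mean((simulated - desired) ** 2))

def solve_demand(desired, steps=1000, lr=0.5, tol=1e-8):
    # Initial demand curve chosen randomly (operation 210).
    demand = np.random.default_rng(0).uniform(0.0, 1.0, desired.shape)
    for _ in range(steps):
        simulated = run_model(demand)        # operations 220/225
        if cost(simulated, desired) < tol:   # goal state check (operation 235)
            break
        # Gradient of the MSE cost with respect to the demand inputs,
        # analytic here because the toy model is linear (operation 240).
        grad = 2.0 * RESPONSE * (simulated - desired) / desired.size
        demand = demand - lr * grad
    return demand

# Desired comfort curve: e.g., hourly temperatures for an office.
desired = np.linspace(20.0, 22.0, 24)
solved = solve_demand(desired)               # solved demand curve (245)
```

The key inversion is that gradient descent updates the *inputs* (the demand curve) rather than the network's internal weights.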
FIG. 3 depicts some potential inputs and outputs of the heterogenous model. The end result we want is a demand curve 302. To create the demand curves 302, we use a demand curve 302 as input into the model; the model then outputs a comfort curve 304. This comfort curve can be thought of as a time series of zone state values 315, where a zone is a location whose state values are to be measured. For example, the zone state values may be the simulated temperature taken in a zone at each model time step, here from T(0) to T(n). There need not be a direct way to measure the corresponding location in the physical space. We iteratively modify the demand curve using machine learning techniques so that the resulting comfort curve successively more closely matches the desired comfort curve. The initial demand curve 302 may be chosen randomly from among feasible values, or may be chosen by some other method. The demand curve 302 (zone energy inputs 310) feeds state into the model at each model step. The state may be fed in at various locations, such as at neurons that represent HVAC equipment. The state then diffuses through the model over the time steps. The model outputs a comfort curve of the state that has diffused to specific locations, such as Chris's office; the comfort curve may be a time curve of the temperature at a neuron representing Chris's office. This comfort curve is then compared with the desired state over the model running time. The desired state may be the desired temperature in Chris's office over the time that the model was run for. This pattern iterates until the modeled comfort curve is close enough to the desired comfort curve. Then, the last demand curve used becomes the solution; that is, the amount of state that needs to be injected into areas in the model over time. Very often the neural network will comprise many neurons that represent zones. The state values of these neuron zones affect each other.
For example, a room with a heater in it will warm up the room next door. Therefore, the calculations in a neural network that models such a system are far from straightforward. - A heterogenous model 305 may take as input a demand curve 302 (e.g., an amount of state over time 310, in this example from T(0) to T(n), with the state value fluctuating during that time). These comfort curves 215 can describe areas whose state needs can be quantified, such as the amount of humidity that should be in an area, how loud a sound should be, the amount of CO2 allowed in a space, temperature, etc.
FIG. 3 only explicitly shows one set of demand curves 302 and a weather curve 320 as input, but many heterogenous model runs will cover multiple zones, and will include multiple demand curve 302/zone energy input 310 pairs. - In some implementations, other values that may affect the state of a designated space during the time the model is being run may also be used as input into the
neural network 205. In an exemplary embodiment, the desired comfort curves are for a known period of time. In such instances, weather 320 affecting the building (such as one or more of the temperatures, wind speed, rainfall, etc. in a weather forecast) can be used as a further input into the heterogenous model, expressed as weather value curve(s) for the same time period, in this case T(0) to T(n). -
FIG. 4 is a functional block diagram 400 showing different machine learning functions. At operation 240, new demand curves are determined; ways of determining new demand curves 240 are shown with reference to FIG. 4. These demand curves may be determined by using machine learning 405. These machine learning 405 techniques may comprise determining gradients 415 of the various variables within the neural network with respect to the cost function. This provides a space in which the inputs 430 can be incrementally optimized using the gradients, which indicate which way to step to minimize the cost function with respect to the inputs. In some embodiments, gradients of the internal variables with respect to the cost function are determined 415. With reference to FIG. 7, in some embodiments the internal parameters, e.g., 707-737, of each neuron have their partial derivatives calculated. Different neurons may have different parameters. For example, a neuron modeling a pump may have parameters such as density, shaft speed, volume flow ratio, hydraulic power, etc. If the functions are differentiable, then backpropagation 420 can be used to determine the partial derivatives. Backpropagation finds the derivative of the error (given by the cost function) with respect to the parameters in the neural network; that is, backpropagation computes the gradient of the cost function with respect to the parameters within the network. -
Backpropagation 420 calculates the derivative between the cost function and the parameters by using the chain rule, working from the last neurons calculated during feedforward propagation (a backward pass), through the internal neurons, to the first neurons calculated. In some embodiments, backpropagation is performed by automatic differentiation 425. According to Wikipedia, “automatic differentiation is accomplished by augmenting the algebra of real numbers and obtaining a new arithmetic. An additional component is added to every number to represent the derivative of a function at the number, and all arithmetic operators are extended for the augmented algebra.” Other methods may be used to determine the parameter partial derivatives, including Particle Swarm Optimization, SOMA (Self-Organizing Migrating Algorithm), etc. The backpropagation may work on a negative gradient of the cost function, as the negative gradient points in the direction of smaller values. - After the partial derivatives are determined, the
demand curve 302 is optimized to lower the value of the cost function with respect to the inputs. This process is repeated incrementally. Many different optimizers may be used, which can be roughly grouped into 1) gradient descent methods 435 and 2) other methods 440. Among the gradient descent methods 435 are standard gradient descent, stochastic gradient descent, and mini-batch gradient descent. Among the other methods 440 are Momentum, Adagrad, AdaDelta, ADAM (adaptive moment estimation), and so on. - Once a new demand curve is determined, the heterogenous model 305 is run again. If the goal state is reached 235, then the last demand curve that was determined 145 is determined to be the demand curve that satisfies the original comfort curve requirements. Once the demand curve has been determined, it can be used to determine how much state needs to be input into different zones in a structure, and at what times, to meet the comfort curve needs. This method can save as much as 30% of energy costs over adjusting the state only when the need arises. If the goal state has not been reached, then the determine new simulation
demand curve step 240, the run neural network step 220, the output simulation comfort curve step 225, and the compute cost function step 230 are iteratively performed, which incrementally optimizes the demand curve until the goal state 235 is reached. -
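The augmented arithmetic described in the quoted passage above can be sketched with a minimal dual-number class. This is a toy illustration of forward-mode automatic differentiation only (class and function names invented here), not the machinery of any particular autodiff library:

```python
class Dual:
    """A real value augmented with a derivative component; arithmetic
    operators are extended to carry the derivative along."""
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot

    def _lift(self, other):
        # Plain numbers become duals with zero derivative component.
        return other if isinstance(other, Dual) else Dual(float(other))

    def __add__(self, other):
        other = self._lift(other)
        return Dual(self.val + other.val, self.dot + other.dot)
    __radd__ = __add__

    def __mul__(self, other):
        other = self._lift(other)
        # The extra component carries the product rule.
        return Dual(self.val * other.val,
                    self.val * other.dot + self.dot * other.val)
    __rmul__ = __mul__

def derivative(f, x):
    """Seed the derivative component with 1.0 and read it off the result."""
    return f(Dual(float(x), 1.0)).dot

# d/dx of 3x^2 + 2x at x = 4 is 6x + 2 = 26.
slope = derivative(lambda x: 3 * x * x + 2 * x, 4.0)
```

Reverse-mode automatic differentiation (the form used by backpropagation 420) instead records the computation and replays it backward, which is cheaper when one cost depends on many parameters.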
FIG. 5 depicts a physical system 500 whose behavior can be determined by using a neural network, which may be a heterogenous neural network. A portion of a structure 500 is shown which comprises a Wall 1 505. This Wall 1 505 is connected to a room which comprises Zone 1 525. This zone also comprises a sensor 545 which can determine state of the zone. Wall 2 510 is between Zone 1 525 and Zone 2 530. Zone 2 does not have a sensor. Wall 3 515 is between the two zones Zone 1 525 and Zone 2 530 and the two zones Zone 3 535 and Zone 4 540. Zone 3 and Zone 4 do not have a wall between them. Zone 4 has a sensor 550 that can determine state in Zone 4. Zone 3 535 and Zone 4 540 are bounded on the right side by Wall 4 520. Zone 2 530 has a heater 555, which disseminates heat over the entire structure. Zones 1-4 are controlled building zones, as their state (in this case heat) can be controlled by the heater 555. -
FIG. 6 depicts a simplified heterogenous neural network 600 that may be used to model behaviors of the simplified physical system of FIG. 5. In some embodiments, areas of the structure are represented by neurons that are connected according to the locations of the physical structures they represent. The neurons are not arranged in layers, as in other types of neural networks. Further, rather than one being required to determine what shape of neural network best fits the problem at hand, the neural network configuration is, in some embodiments, determined by a physical layout; that is, the neurons are arranged topologically similar to the physical structure that the neural net is simulating. - For example,
Wall 1 505 is represented by neuron 605. This neuron 605 is connected by edges 660 to neurons representing Zone 1 620, Wall 2 610, and Zone 2 630. This mirrors the physical connections between Wall 1 505, Zone 1 525, Wall 2 510, and Zone 2 530. Similarly, the neurons for Zone 1 620, Wall 2 610, and Zone 2 630 are connected by edges to the neuron representing Wall 3 615. The neuron representing Wall 3 515 is connected by edges to the neurons representing Zone 3 635 and Zone 4 640. Those two neurons 635, 640 are connected by edges to the neuron representing Wall 4. Even though, for clarity in this specific figure, only one edge is shown going from one neuron to another, a neuron may have multiple edges leading to another neuron, as will be discussed later. Neurons may have edges that reference each other; for example, edge 660 may be two-way. -
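The wall/zone topology above, and state diffusing along its edges, can be sketched as a plain graph. The adjacency list is inferred from FIGS. 5 and 6, and the diffusion rate and heater gain are invented illustrative values, not the patent's equations:

```python
# Neurons keyed by the physical element they represent; edges mirror
# physical adjacency (symmetric by construction).
edges = {
    "Wall1": ["Zone1", "Wall2", "Zone2"],
    "Wall2": ["Wall1", "Zone1", "Zone2", "Wall3"],
    "Zone1": ["Wall1", "Wall2", "Wall3"],
    "Zone2": ["Wall1", "Wall2", "Wall3"],
    "Wall3": ["Wall2", "Zone1", "Zone2", "Zone3", "Zone4"],
    "Zone3": ["Wall3", "Zone4", "Wall4"],
    "Zone4": ["Wall3", "Zone3", "Wall4"],
    "Wall4": ["Zone3", "Zone4"],
}

def step(temps, rate=0.1, heater=("Zone2", 0.5)):
    """One timestep: each neuron's state relaxes toward the mean of its
    neighbors, and the heater injects state into Zone 2 (555)."""
    new = {}
    for node, nbrs in edges.items():
        mean_nbr = sum(temps[n] for n in nbrs) / len(nbrs)
        new[node] = temps[node] + rate * (mean_nbr - temps[node])
    zone, gain = heater
    new[zone] += gain
    return new

temps = {node: 20.0 for node in edges}   # uniform starting state
for _ in range(50):
    temps = step(temps)
```

After a few steps, heat from the Zone 2 neuron has diffused through the wall neurons into every zone, which is exactly why zones "affect each other" in this kind of model.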
neural network 600 may represent the temperature at the corresponding location in the physical system 500, such that a temperature input in the Zone 1 neuron 620 can represent the temperature at the sensor 545 in Zone 1 525. In this way, the body of the neural net is not a black box, but rather contains information that is meaningful (in this case, a neuron input represents a temperature within a structure) and that can be used. -
type 1, which are the dashed lines entering each neuron. Inputs of type 2 are the straight lines. In the illustrative example, each neuron has at least one input. For purposes of clarity, not all inputs are included. Of those that are, inputs of type 2 are marked with a straight line, while inputs of type 1 are marked with a dashed line. Input 665 is associated with the neuron that represents Wall 1 605, while input 652 is associated with the neuron that represents Wall 3 615. Signals (or weights), passed from edge to edge and transformed by the activation functions, can travel not just from one layer to the next in lock-step fashion, but back and forth between layers, such as signals that travel along edges from Zone 1 620 to Wall 2 610, and from there to Zone 2 630. Further, there may be multiple inputs into a single neuron, and multiple outputs from a single neuron. For example, a system that represents a building may have several inputs that represent different states, such as temperature, humidity, atmospheric pressure, wind, dew point, time of day, time of year, etc. These inputs may be time curves that define the state over time. A system may have different inputs for different neurons. In some embodiments, inputs may be time curves, which define the state at particular times over a period of time. - In some implementations, outputs are not found in a traditional output layer, but rather are values within neurons inside the neural network. These may be located in multiple neurons. These outputs for a run may be time curves. For example, the neuron associated with
Zone 1 620 may have a temperature value that can be read at each timestep of a model run, creating temperature time curves that represent the temperature of the corresponding physical Zone 1 525. - In some embodiments, activation functions in a neuron transform the weights on the upstream edges, and then send none, some, or all of the transformed weights to the next neuron(s). Not every activation function transforms every weight. Some activation functions may not transform any weights. In some embodiments, each neuron may have a different activation function. In some embodiments, some neurons may have similar functions.
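A heterogenous set of activation functions, each transforming the weight it receives differently, might be sketched as follows. The neuron names and the transformation formulas here are invented purely for illustration; the patent's activation functions are physics equations, not these toy lambdas:

```python
# Each neuron carries its own activation function (a heterogenous
# network), rather than every neuron sharing one nonlinearity.
activations = {
    "Heater": lambda w: w + 1.5,   # injects state into the signal
    "Wall1": lambda w: 0.3 * w,    # wall attenuates the heat signal
    "Zone1": lambda w: w,          # zone passes the state through unchanged
}

def propagate(weight, path):
    """Carry a weight along a path of neurons, letting each neuron's
    activation function transform (or not transform) it."""
    for neuron in path:
        weight = activations[neuron](weight)
    return weight

# A signal leaving the heater, crossing Wall 1, arriving in Zone 1:
# (10.0 + 1.5) * 0.3 = 3.45
out = propagate(10.0, ["Heater", "Wall1", "Zone1"])
```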
- A heterogenous neural network may have many inputs and outputs. Each output may have its own comfort curve associated with it, so a heterogenous neural network may input
many demand curves 302 and output many comfort curves 304. -
FIG. 7 is a block diagram 700 describing possible inputs and outputs of neurons. Neural networks described herein may not have traditional input and output layers. Rather, neurons may have internal values that can be captured as output. Similarly, a wide variety of neurons, even those deep within a neural net, can be used for input. For example, Chris's office may be in Zone 4 540. This zone may be represented by a neuron 640 that is somewhere in the middle of a neural network 600. A zone neuron 715 may have an activation function that is comprised of several equations that model state moving through the space. The space itself may have inputs associated with it, e.g., Layer Mass 732, Layer Heat Capacity 735, and Heat Transfer Rate 737, to name a few. For the purposes of this disclosure, we are calling these type 1 inputs. Type 2 inputs may include Temperature 719, Mass Flow Rate 721, Pressure 723, etc. Different neurons may have different values. For example, a Wall Neuron 705 may have Type 1 inputs 725 such as Surface Area 727, Layer Heat Capacity 728, and Thermal Resistance 729, as well as Type 2 inputs 707. A comfort curve output 304 of the neural network 600 may comprise a value gathered from among the variables in a neuron. For example, Chris's office may be Zone 4 540, which is represented by the neuron Zone 4 640. The output of the heterogenous model 305 may be a time series of the zone neuron temperature. - A cost function can be calculated using these internal neural net values. A cost function (also sometimes called a loss function) is a performance metric on how well the neural network is reaching its goal of generating outputs as close as possible to the desired values. To create the cost function, we determine the values we want from inside the neural network, retrieve them, and then make a vector with the desired values; viz., a cost C = C(y, o), where y = desired values and o = network prediction values. These desired values are sometimes called the “ground truth.” With reference to
FIG. 5, Zone 1 525 has a sensor 545 which can record state within the zone. Similarly, Zone 4 540 has a sensor 550 which can also record state values. In some embodiments, desired values may be synthetic; that is, they are the values that are hoped to be reached. In some embodiments, the desired values may be derived from actual measurements. - Continuing the example from
FIG. 5, there are two sensors that gather sensor data. The desired values are time series of the actual temperatures from the sensors. The network prediction values are not determined from a specific output layer of the neural network, as the data we want is held within neurons inside the network. The zone neurons 715 in our sample model hold a temperature value 719. The network prediction values to be used for the cost function are, in this case, the values (temperature 719) within the neuron 620 that corresponds to Zone 1 525 (where we have data from sensor 545) and the values (temperature 719) within the neuron 640 that corresponds to Zone 4 540, with sensor 550. -
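Assembling a cost from internal neuron values rather than from an output layer might look like the following sketch. The temperature series are made-up illustrative numbers standing in for captured neuron values and sensor recordings:

```python
import numpy as np

# Temperatures captured from inside the two sensed zone neurons over a
# short run (illustrative values, not real model output).
prediction = {"Zone1": np.array([20.1, 20.6, 21.2]),
              "Zone4": np.array([19.8, 20.0, 20.3])}
# Ground truth: matching time series as a sensor might record them.
ground_truth = {"Zone1": np.array([20.0, 20.5, 21.0]),
                "Zone4": np.array([20.0, 20.2, 20.4])}

def cost(pred, truth):
    """Stack the internal neuron values and the sensor data into
    vectors y and o, then take the mean squared error C(y, o)."""
    y = np.concatenate([truth[z] for z in sorted(truth)])
    o = np.concatenate([pred[z] for z in sorted(pred)])
    return float(np.mean((y - o) ** 2))
```

Only the sensed zones contribute to the cost; unsensed zones (Zones 2 and 3 in FIG. 5) influence the result indirectly, through the model dynamics.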
neuron 640 which may be another internal temperature from time t0 to tn or a different value. These are our network prediction values. In the instant example, the desired values are data from thesensors - The networks described herein may be heterogenous neural networks. Heterogenous neural networks comprise neural networks that have neurons with different activation functions. These neurons may comprise virtual replicas of actual or theoretical physical locations. The activation functions of the neurons may comprise multiple equations that describe state moving through a location associated with the neuron. In some embodiments, heterogenous neural networks also have neurons that comprise multiple variables that hold values that are meaningful outside of the neural network itself. For example, a value, such as a temperature value (e.g., 719) may be held within a neuron (e.g., 640) which can be associated with an actual location (e.g., 540).
- In view of the many possible embodiments to which the principles of the disclosed invention may be applied, it should be recognized that the illustrated embodiments are only preferred examples of the invention and should not be taken as limiting the scope of the invention. Rather, the scope of the invention is defined by the following claims. We therefore claim as our invention all that comes within the scope and spirit of these claims.
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20190360711A1 (en) * | 2018-05-22 | 2019-11-28 | Seokyoung Systems | Method and device for controlling power supply to heating, ventilating, and air-conditioning (hvac) system for building based on target temperature |
US20200080744A1 (en) * | 2018-09-12 | 2020-03-12 | Seokyoung Systems | Method for creating demand response determination model for hvac system and method for implementing demand response |
Family Cites Families (59)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
ATE163777T1 (en) * | 1990-10-10 | 1998-03-15 | Honeywell Inc | IDENTIFICATION OF A PROCESS SYSTEM |
US5361326A (en) * | 1991-12-31 | 1994-11-01 | International Business Machines Corporation | Enhanced interface for a neural network engine |
US5224648A (en) | 1992-03-27 | 1993-07-06 | American Standard Inc. | Two-way wireless HVAC system and thermostat |
JPH07200512A (en) | 1993-09-13 | 1995-08-04 | Ezel Inc | Optimization problems solving device |
US6128609A (en) * | 1997-10-14 | 2000-10-03 | Ralph E. Rose | Training a neural network using differential input |
US6119125A (en) | 1998-04-03 | 2000-09-12 | Johnson Controls Technology Company | Software components for a building automation system based on a standard object superclass |
IL134943A0 (en) * | 2000-03-08 | 2001-05-20 | Better T V Technologies Ltd | Method for personalizing information and services from various media sources |
CA2432440C (en) | 2001-01-12 | 2007-03-27 | Novar Controls Corporation | Small building automation control system |
US7756804B2 (en) * | 2002-05-10 | 2010-07-13 | Oracle International Corporation | Automated model building and evaluation for data mining system |
US6967565B2 (en) | 2003-06-27 | 2005-11-22 | Hx Lifespace, Inc. | Building automation system |
US7447664B2 (en) | 2003-08-28 | 2008-11-04 | Boeing Co | Neural network predictive control cost function designer |
US7620613B1 (en) * | 2006-07-28 | 2009-11-17 | Hewlett-Packard Development Company, L.P. | Thermal management of data centers |
US20080082183A1 (en) | 2006-09-29 | 2008-04-03 | Johnson Controls Technology Company | Building automation system with automated component selection for minimum energy consumption |
US20080277486A1 (en) | 2007-05-09 | 2008-11-13 | Johnson Controls Technology Company | HVAC control system and method |
US20100025483A1 (en) | 2008-07-31 | 2010-02-04 | Michael Hoeynck | Sensor-Based Occupancy and Behavior Prediction Method for Intelligently Controlling Energy Consumption Within a Building |
US9020647B2 (en) | 2009-03-27 | 2015-04-28 | Siemens Industry, Inc. | System and method for climate control set-point optimization based on individual comfort |
US8812418B2 (en) * | 2009-06-22 | 2014-08-19 | Hewlett-Packard Development Company, L.P. | Memristive adaptive resonance networks |
US9258201B2 (en) | 2010-02-23 | 2016-02-09 | Trane International Inc. | Active device management for use in a building automation system |
US8626700B1 (en) * | 2010-04-30 | 2014-01-07 | The Intellisis Corporation | Context aware device execution for simulating neural networks in compute unified device architecture |
CA3044757C (en) * | 2011-10-21 | 2021-11-09 | Google Llc | User-friendly, network connected learning thermostat and related systems and methods |
WO2013075080A1 (en) | 2011-11-17 | 2013-05-23 | Trustees Of Boston University | Automated technique of measuring room air change rates in hvac system |
US9557750B2 (en) | 2012-05-15 | 2017-01-31 | Daikin Applied Americas Inc. | Cloud based building automation systems |
US9791872B2 (en) | 2013-03-14 | 2017-10-17 | Pelco, Inc. | Method and apparatus for an energy saving heating, ventilation, and air conditioning (HVAC) control system |
US9298197B2 (en) | 2013-04-19 | 2016-03-29 | Google Inc. | Automated adjustment of an HVAC schedule for resource conservation |
US9910449B2 (en) * | 2013-04-19 | 2018-03-06 | Google Llc | Generating and implementing thermodynamic models of a structure |
US10222277B2 (en) * | 2013-12-08 | 2019-03-05 | Google Llc | Methods and systems for generating virtual smart-meter data |
US9857238B2 (en) | 2014-04-18 | 2018-01-02 | Google Inc. | Thermodynamic model generation and implementation using observed HVAC and/or enclosure characteristics |
US9092741B1 (en) | 2014-04-21 | 2015-07-28 | Amber Flux Private Limited | Cognitive platform and method for energy management for enterprises |
US9869484B2 (en) * | 2015-01-14 | 2018-01-16 | Google Inc. | Predictively controlling an environmental control system |
US10094586B2 (en) | 2015-04-20 | 2018-10-09 | Green Power Labs Inc. | Predictive building control system and method for optimizing energy use and thermal comfort for a building or network of buildings |
US9798336B2 (en) | 2015-04-23 | 2017-10-24 | Johnson Controls Technology Company | Building management system with linked thermodynamic models for HVAC equipment |
US20160328432A1 (en) * | 2015-05-06 | 2016-11-10 | Squigglee LLC | System and method for management of time series data sets |
US20170091622A1 (en) * | 2015-09-24 | 2017-03-30 | Facebook, Inc. | Systems and methods for generating forecasting models |
US20170091615A1 (en) * | 2015-09-28 | 2017-03-30 | Siemens Aktiengesellschaft | System and method for predicting power plant operational parameters utilizing artificial neural network deep learning methodologies |
KR102042077B1 (en) | 2016-09-26 | 2019-11-07 | 주식회사 엘지화학 | Intelligent fuel cell system |
US10013644B2 (en) | 2016-11-08 | 2018-07-03 | International Business Machines Corporation | Statistical max pooling with deep learning |
WO2018106969A1 (en) * | 2016-12-09 | 2018-06-14 | Hsu Fu Chang | Three-dimensional neural network array |
US10571143B2 (en) | 2017-01-17 | 2020-02-25 | International Business Machines Corporation | Regulating environmental conditions within an event venue |
US11994833B2 (en) * | 2017-02-10 | 2024-05-28 | Johnson Controls Technology Company | Building smart entity system with agent based data ingestion and entity creation using time series data |
US11574164B2 (en) * | 2017-03-20 | 2023-02-07 | International Business Machines Corporation | Neural network cooperation |
US10247438B2 (en) | 2017-03-20 | 2019-04-02 | International Business Machines Corporation | Cognitive climate control based on individual thermal-comfort-related data |
US11675322B2 (en) * | 2017-04-25 | 2023-06-13 | Johnson Controls Technology Company | Predictive building control system with discomfort threshold adjustment |
US11371739B2 (en) * | 2017-04-25 | 2022-06-28 | Johnson Controls Technology Company | Predictive building control system with neural network based comfort prediction |
US10984315B2 (en) * | 2017-04-28 | 2021-04-20 | Microsoft Technology Licensing, Llc | Learning-based noise reduction in data produced by a network of sensors, such as one incorporated into loose-fitting clothing worn by a person |
JP6688763B2 (en) * | 2017-05-30 | 2020-04-28 | 東京エレクトロン株式会社 | Plasma processing method |
US11003982B2 (en) * | 2017-06-27 | 2021-05-11 | D5Ai Llc | Aligned training of deep networks |
US11209184B2 (en) | 2018-01-12 | 2021-12-28 | Johnson Controls Tyco IP Holdings LLP | Control system for central energy facility with distributed energy storage |
US10140544B1 (en) | 2018-04-02 | 2018-11-27 | 12 Sigma Technologies | Enhanced convolutional neural network for image segmentation |
US10921760B2 (en) * | 2018-06-12 | 2021-02-16 | PassiveLogic, Inc. | Predictive control loops using time-based simulation and building-automation systems thereof |
US10845815B2 (en) | 2018-07-27 | 2020-11-24 | GM Global Technology Operations LLC | Systems, methods and controllers for an autonomous vehicle that implement autonomous driver agents and driving policy learners for generating and improving policies based on collective driving experiences of the autonomous driver agents |
US11908573B1 (en) * | 2020-02-18 | 2024-02-20 | C/Hca, Inc. | Predictive resource management |
US11170314B2 (en) * | 2018-10-22 | 2021-11-09 | General Electric Company | Detection and protection against mode switching attacks in cyber-physical systems |
US10896679B1 (en) * | 2019-03-26 | 2021-01-19 | Amazon Technologies, Inc. | Ambient device state content display |
CN112085186B (en) * | 2019-06-12 | 2024-03-05 | 上海寒武纪信息科技有限公司 | Method for determining quantization parameter of neural network and related product |
WO2020254857A1 (en) * | 2019-06-18 | 2020-12-24 | Uab Neurotechnology | Fast and robust friction ridge impression minutiae extraction using feed-forward convolutional neural network |
US20210182660A1 (en) | 2019-12-16 | 2021-06-17 | Soundhound, Inc. | Distributed training of neural network models |
US11573540B2 (en) * | 2019-12-23 | 2023-02-07 | Johnson Controls Tyco IP Holdings LLP | Methods and systems for training HVAC control using surrogate model |
US11525596B2 (en) * | 2019-12-23 | 2022-12-13 | Johnson Controls Tyco IP Holdings LLP | Methods and systems for training HVAC control using simulated and real experience data |
CN111708922A (en) * | 2020-06-19 | 2020-09-25 | 北京百度网讯科技有限公司 | Model generation method and device for representing heterogeneous graph nodes |
- 2020
  - 2020-09-01 US US17/009,713 patent/US20210383200A1/en active Pending
- 2021
  - 2021-02-17 US US17/177,391 patent/US20210381712A1/en active Pending
  - 2021-02-17 US US17/177,285 patent/US20210383235A1/en active Pending
  - 2021-03-05 US US17/193,179 patent/US11861502B2/en active Active
  - 2021-03-22 US US17/208,036 patent/US20210383041A1/en active Pending
  - 2021-04-12 US US17/228,119 patent/US11915142B2/en active Active
  - 2021-05-05 US US17/308,294 patent/US20210383219A1/en active Pending
  - 2021-06-02 US US17/336,779 patent/US20210381711A1/en not_active Abandoned
  - 2021-06-02 US US17/336,640 patent/US20210383236A1/en active Pending
- 2023
  - 2023-09-14 US US18/467,627 patent/US20240005168A1/en active Pending
- 2024
  - 2024-01-03 US US18/403,542 patent/US20240160936A1/en active Pending
Non-Patent Citations (2)
Title |
---|
Biswas et al. ("Prediction of residential building energy consumption: A neural network approach" Energy 117 (2016) 84-92.) (Year: 2016) * |
Huang et al. ("Reachnn: Reachability analysis of neural-network controlled systems." ACM Transactions on Embedded Computing Systems (TECS) 18.5s (2019): 1-22) (Year: 2019) * |
Cited By (92)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US11754982B2 (en) | 2012-08-27 | 2023-09-12 | Johnson Controls Tyco IP Holdings LLP | Syntax translation from first syntax to second syntax based on string analysis |
US12105484B2 (en) | 2015-10-21 | 2024-10-01 | Johnson Controls Technology Company | Building automation system with integrated building information model |
US11899413B2 (en) | 2015-10-21 | 2024-02-13 | Johnson Controls Technology Company | Building automation system with integrated building information model |
US11874635B2 (en) | 2015-10-21 | 2024-01-16 | Johnson Controls Technology Company | Building automation system with integrated building information model |
US12196437B2 (en) | 2016-01-22 | 2025-01-14 | Tyco Fire & Security Gmbh | Systems and methods for monitoring and controlling an energy plant |
US11894676B2 (en) | 2016-01-22 | 2024-02-06 | Johnson Controls Technology Company | Building energy management system with energy analytics |
US11770020B2 (en) | 2016-01-22 | 2023-09-26 | Johnson Controls Technology Company | Building system with timeseries synchronization |
US11947785B2 (en) | 2016-01-22 | 2024-04-02 | Johnson Controls Technology Company | Building system with a building graph |
US11768004B2 (en) | 2016-03-31 | 2023-09-26 | Johnson Controls Tyco IP Holdings LLP | HVAC device registration in a distributed building management system |
US11774920B2 (en) | 2016-05-04 | 2023-10-03 | Johnson Controls Technology Company | Building system with user presentation composition based on building context |
US11927924B2 (en) | 2016-05-04 | 2024-03-12 | Johnson Controls Technology Company | Building system with user presentation composition based on building context |
US12210324B2 (en) | 2016-05-04 | 2025-01-28 | Johnson Controls Technology Company | Building system with user presentation composition based on building context |
US11892180B2 (en) | 2017-01-06 | 2024-02-06 | Johnson Controls Tyco IP Holdings LLP | HVAC system with automated device pairing |
US11994833B2 (en) | 2017-02-10 | 2024-05-28 | Johnson Controls Technology Company | Building smart entity system with agent based data ingestion and entity creation using time series data |
US12055908B2 (en) | 2017-02-10 | 2024-08-06 | Johnson Controls Technology Company | Building management system with nested stream generation |
US11809461B2 (en) | 2017-02-10 | 2023-11-07 | Johnson Controls Technology Company | Building system with an entity graph storing software logic |
US11792039B2 (en) | 2017-02-10 | 2023-10-17 | Johnson Controls Technology Company | Building management system with space graphs including software components |
US12019437B2 (en) | 2017-02-10 | 2024-06-25 | Johnson Controls Technology Company | Web services platform with cloud-based feedback control |
US11778030B2 (en) | 2017-02-10 | 2023-10-03 | Johnson Controls Technology Company | Building smart entity system with agent based communication and control |
US11762886B2 (en) | 2017-02-10 | 2023-09-19 | Johnson Controls Technology Company | Building system with entity graph commands |
US11774930B2 (en) | 2017-02-10 | 2023-10-03 | Johnson Controls Technology Company | Building system with digital twin based agent processing |
US12184444B2 (en) | 2017-02-10 | 2024-12-31 | Johnson Controls Technology Company | Space graph based dynamic control for buildings |
US11764991B2 (en) | 2017-02-10 | 2023-09-19 | Johnson Controls Technology Company | Building management system with identity management |
US11755604B2 (en) | 2017-02-10 | 2023-09-12 | Johnson Controls Technology Company | Building management system with declarative views of timeseries data |
US11762362B2 (en) | 2017-03-24 | 2023-09-19 | Johnson Controls Tyco IP Holdings LLP | Building management system with dynamic channel communication |
US11954478B2 (en) | 2017-04-21 | 2024-04-09 | Tyco Fire & Security Gmbh | Building management system with cloud management of gateway configurations |
US11761653B2 (en) | 2017-05-10 | 2023-09-19 | Johnson Controls Tyco IP Holdings LLP | Building management system with a distributed blockchain database |
US11900287B2 (en) | 2017-05-25 | 2024-02-13 | Johnson Controls Tyco IP Holdings LLP | Model predictive maintenance system with budgetary constraints |
US11699903B2 (en) | 2017-06-07 | 2023-07-11 | Johnson Controls Tyco IP Holdings LLP | Building energy optimization system with economic load demand response (ELDR) optimization and ELDR user interfaces |
US12061446B2 (en) | 2017-06-15 | 2024-08-13 | Johnson Controls Technology Company | Building management system with artificial intelligence for unified agent based control of building subsystems |
US11774922B2 (en) | 2017-06-15 | 2023-10-03 | Johnson Controls Technology Company | Building management system with artificial intelligence for unified agent based control of building subsystems |
US11920810B2 (en) | 2017-07-17 | 2024-03-05 | Johnson Controls Technology Company | Systems and methods for agent based building simulation for optimal control |
US11733663B2 (en) | 2017-07-21 | 2023-08-22 | Johnson Controls Tyco IP Holdings LLP | Building management system with dynamic work order generation with adaptive diagnostic task details |
US11726632B2 (en) | 2017-07-27 | 2023-08-15 | Johnson Controls Technology Company | Building management system with global rule library and crowdsourcing framework |
US11709965B2 (en) | 2017-09-27 | 2023-07-25 | Johnson Controls Technology Company | Building system with smart entity personal identifying information (PII) masking |
US12056999B2 (en) | 2017-09-27 | 2024-08-06 | Tyco Fire & Security Gmbh | Building risk analysis system with natural language processing for threat ingestion |
US20220138183A1 (en) | 2017-09-27 | 2022-05-05 | Johnson Controls Tyco IP Holdings LLP | Web services platform with integration and interface of smart entities with enterprise applications |
US12013842B2 (en) | 2017-09-27 | 2024-06-18 | Johnson Controls Tyco IP Holdings LLP | Web services platform with integration and interface of smart entities with enterprise applications |
US11768826B2 (en) | 2017-09-27 | 2023-09-26 | Johnson Controls Tyco IP Holdings LLP | Web services for creation and maintenance of smart entities for connected devices |
US11762353B2 (en) | 2017-09-27 | 2023-09-19 | Johnson Controls Technology Company | Building system with a digital twin based on information technology (IT) data and operational technology (OT) data |
US11762356B2 (en) | 2017-09-27 | 2023-09-19 | Johnson Controls Technology Company | Building management system with integration of data into smart entities |
US11735021B2 (en) | 2017-09-27 | 2023-08-22 | Johnson Controls Tyco IP Holdings LLP | Building risk analysis system with risk decay |
US11741812B2 (en) | 2017-09-27 | 2023-08-29 | Johnson Controls Tyco IP Holdings LLP | Building risk analysis system with dynamic modification of asset-threat weights |
US11782407B2 (en) | 2017-11-15 | 2023-10-10 | Johnson Controls Tyco IP Holdings LLP | Building management system with optimized processing of building system data |
US11762351B2 (en) | 2017-11-15 | 2023-09-19 | Johnson Controls Tyco IP Holdings LLP | Building management system with point virtualization for online meters |
US11727738B2 (en) | 2017-11-22 | 2023-08-15 | Johnson Controls Tyco IP Holdings LLP | Building campus with integrated smart environment |
US11954713B2 (en) | 2018-03-13 | 2024-04-09 | Johnson Controls Tyco IP Holdings LLP | Variable refrigerant flow system with electricity consumption apportionment |
US11941238B2 (en) | 2018-10-30 | 2024-03-26 | Johnson Controls Technology Company | Systems and methods for entity visualization and management with an entity node editor |
US11927925B2 (en) | 2018-11-19 | 2024-03-12 | Johnson Controls Tyco IP Holdings LLP | Building system with a time correlated reliability data stream |
US11775938B2 (en) | 2019-01-18 | 2023-10-03 | Johnson Controls Tyco IP Holdings LLP | Lobby management system |
US11763266B2 (en) | 2019-01-18 | 2023-09-19 | Johnson Controls Tyco IP Holdings LLP | Smart parking lot system |
US11769117B2 (en) | 2019-01-18 | 2023-09-26 | Johnson Controls Tyco IP Holdings LLP | Building automation system with fault analysis and component procurement |
US11762343B2 (en) | 2019-01-28 | 2023-09-19 | Johnson Controls Tyco IP Holdings LLP | Building management system with hybrid edge-cloud processing |
US20240073093A1 (en) * | 2019-09-20 | 2024-02-29 | Sonatus, Inc. | System, method, and apparatus to execute vehicle communications using a zonal architecture |
US12197299B2 (en) | 2019-12-20 | 2025-01-14 | Tyco Fire & Security Gmbh | Building system with ledger based software gateways |
US11894944B2 (en) | 2019-12-31 | 2024-02-06 | Johnson Controls Tyco IP Holdings LLP | Building data platform with an enrichment loop |
US12099334B2 (en) | 2019-12-31 | 2024-09-24 | Tyco Fire & Security Gmbh | Systems and methods for presenting multiple BIM files in a single interface |
US20220376944A1 (en) | 2019-12-31 | 2022-11-24 | Johnson Controls Tyco IP Holdings LLP | Building data platform with graph based capabilities |
US12143237B2 (en) | 2019-12-31 | 2024-11-12 | Tyco Fire & Security Gmbh | Building data platform with graph based permissions |
US11770269B2 (en) | 2019-12-31 | 2023-09-26 | Johnson Controls Tyco IP Holdings LLP | Building data platform with event enrichment with contextual information |
US12063126B2 (en) | 2019-12-31 | 2024-08-13 | Tyco Fire & Security Gmbh | Building data graph including application programming interface calls |
US11777758B2 (en) | 2019-12-31 | 2023-10-03 | Johnson Controls Tyco IP Holdings LLP | Building data platform with external twin synchronization |
US11777757B2 (en) | 2019-12-31 | 2023-10-03 | Johnson Controls Tyco IP Holdings LLP | Building data platform with event based graph queries |
US11824680B2 (en) | 2019-12-31 | 2023-11-21 | Johnson Controls Tyco IP Holdings LLP | Building data platform with a tenant entitlement model |
US11968059B2 (en) | 2019-12-31 | 2024-04-23 | Johnson Controls Tyco IP Holdings LLP | Building data platform with graph based capabilities |
US11991019B2 (en) | 2019-12-31 | 2024-05-21 | Johnson Controls Tyco IP Holdings LLP | Building data platform with event queries |
US11991018B2 (en) | 2019-12-31 | 2024-05-21 | Tyco Fire & Security Gmbh | Building data platform with edge based event enrichment |
US12040911B2 (en) | 2019-12-31 | 2024-07-16 | Tyco Fire & Security Gmbh | Building data platform with a graph change feed |
US11777756B2 (en) | 2019-12-31 | 2023-10-03 | Johnson Controls Tyco IP Holdings LLP | Building data platform with graph based communication actions |
US11777759B2 (en) | 2019-12-31 | 2023-10-03 | Johnson Controls Tyco IP Holdings LLP | Building data platform with graph based permissions |
US12021650B2 (en) | 2019-12-31 | 2024-06-25 | Tyco Fire & Security Gmbh | Building data platform with event subscriptions |
US12100280B2 (en) | 2020-02-04 | 2024-09-24 | Tyco Fire & Security Gmbh | Systems and methods for software defined fire detection and risk assessment |
US11880677B2 (en) | 2020-04-06 | 2024-01-23 | Johnson Controls Tyco IP Holdings LLP | Building system with digital network twin |
US11874809B2 (en) | 2020-06-08 | 2024-01-16 | Johnson Controls Tyco IP Holdings LLP | Building system with naming schema encoding entity type and entity relationships |
US11954154B2 (en) | 2020-09-30 | 2024-04-09 | Johnson Controls Tyco IP Holdings LLP | Building management system with semantic model integration |
US11741165B2 (en) | 2020-09-30 | 2023-08-29 | Johnson Controls Tyco IP Holdings LLP | Building management system with semantic model integration |
US12063274B2 (en) | 2020-10-30 | 2024-08-13 | Tyco Fire & Security Gmbh | Self-configuring building management system |
US11902375B2 (en) | 2020-10-30 | 2024-02-13 | Johnson Controls Tyco IP Holdings LLP | Systems and methods of configuring a building management system |
US12058212B2 (en) | 2020-10-30 | 2024-08-06 | Tyco Fire & Security Gmbh | Building management system with auto-configuration using existing points |
US12061453B2 (en) | 2020-12-18 | 2024-08-13 | Tyco Fire & Security Gmbh | Building management system performance index |
US11921481B2 (en) | 2021-03-17 | 2024-03-05 | Johnson Controls Tyco IP Holdings LLP | Systems and methods for determining equipment energy waste |
US11899723B2 (en) | 2021-06-22 | 2024-02-13 | Johnson Controls Tyco IP Holdings LLP | Building data platform with context based twin function processing |
US12197508B2 (en) | 2021-06-22 | 2025-01-14 | Tyco Fire & Security Gmbh | Building data platform with context based twin function processing |
US12055907B2 (en) | 2021-11-16 | 2024-08-06 | Tyco Fire & Security Gmbh | Building data platform with schema extensibility for properties and tags of a digital twin |
US11796974B2 (en) | 2021-11-16 | 2023-10-24 | Johnson Controls Tyco IP Holdings LLP | Building data platform with schema extensibility for properties and tags of a digital twin |
US11769066B2 (en) | 2021-11-17 | 2023-09-26 | Johnson Controls Tyco IP Holdings LLP | Building data platform with digital twin triggers and actions |
US11934966B2 (en) | 2021-11-17 | 2024-03-19 | Johnson Controls Tyco IP Holdings LLP | Building data platform with digital twin inferences |
US11704311B2 (en) | 2021-11-24 | 2023-07-18 | Johnson Controls Tyco IP Holdings LLP | Building data platform with a distributed digital twin |
US12013673B2 (en) | 2021-11-29 | 2024-06-18 | Tyco Fire & Security Gmbh | Building control system using reinforcement learning |
US11714930B2 (en) | 2021-11-29 | 2023-08-01 | Johnson Controls Tyco IP Holdings LLP | Building data platform with digital twin based inferences and predictions for a graphical building model |
US12013823B2 (en) | 2022-09-08 | 2024-06-18 | Tyco Fire & Security Gmbh | Gateway system that maps points into a graph schema |
US12061633B2 (en) | 2022-09-08 | 2024-08-13 | Tyco Fire & Security Gmbh | Building system that maps points into a graph schema |
Also Published As
Publication number | Publication date |
---|---|
US20210382445A1 (en) | 2021-12-09 |
US20210383235A1 (en) | 2021-12-09 |
US20210383236A1 (en) | 2021-12-09 |
US20210383200A1 (en) | 2021-12-09 |
US20210383219A1 (en) | 2021-12-09 |
US20210383042A1 (en) | 2021-12-09 |
US20210383041A1 (en) | 2021-12-09 |
US11861502B2 (en) | 2024-01-02 |
US20210381711A1 (en) | 2021-12-09 |
US20240005168A1 (en) | 2024-01-04 |
US11915142B2 (en) | 2024-02-27 |
US20240160936A1 (en) | 2024-05-16 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US20210381712A1 (en) | Determining demand curves from comfort curves | |
US20230214555A1 (en) | Simulation Training | |
Chen et al. | Modeling and optimization of complex building energy systems with deep neural networks | |
US20230252205A1 (en) | Simulation Warmup | |
CN110223517A (en) | Short-term traffic flow forecast method based on temporal correlation | |
CN111027772A (en) | Multi-factor short-term load forecasting method based on PCA-DBILSTM | |
Cheng et al. | Multi-task and multi-view learning based on particle swarm optimization for short-term traffic forecasting | |
CN116596044A (en) | Power generation load prediction model training method and device based on multi-source data | |
Rani R et al. | Training radial basis function networks for wind speed prediction using PSO enhanced differential search optimizer | |
CN117277279A (en) | A deep learning short-term load forecasting method based on particle swarm optimization | |
CN116401941B (en) | Prediction method for evacuation capacity of subway station gate | |
US20230059708A1 (en) | Generation of Optimized Hyperparameter Values for Application to Machine Learning Tasks | |
US20240070449A1 (en) | Systems and methods for expert guided semi-supervision with contrastive loss for machine learning models | |
Gunasekar et al. | Sustainable optimized LSTM-based intelligent system for air quality prediction in Chennai | |
CN114582131A (en) | Monitoring method and system based on intelligent ramp flow control algorithm | |
CN117458481B (en) | A method and device for power system load forecasting based on XGBoost | |
CN116151478A (en) | Short-time traffic flow prediction method, device and medium for improving sparrow search algorithm | |
Zhao et al. | Multi-point temperature or humidity prediction for office building indoor environment based on CGC-BiLSTM deep neural network | |
Xu | Learning efficient dynamic controller for HVAC system | |
US20240338602A1 (en) | Training a Learning Model using a Digital Twin | |
US20240412109A1 (en) | Multi-Agent Generative Adversarial Imitative Superlearning | |
Sha et al. | Construction Cost Estimation in Data-Poor Areas Using Grasshopper Optimization Algorithm-Guided Multi-Layer Perceptron and Transfer Learning | |
CN118625696B (en) | A UAV multi-machine simulation training control system and control method | |
Li et al. | A novel paradigm for neural computation: X-net with learnable neurons and adaptable structure | |
Zhang et al. | Intelligent HVAC System Control Method Based on Digital Twin |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AS | Assignment |
Owner name: PASSIVELOGIC, INC., UTAH
Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:HARVEY, TROY AARON;FILLINGIM, JEREMY DAVID;REEL/FRAME:055292/0849
Effective date: 20210216
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: NON FINAL ACTION MAILED |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: FINAL REJECTION MAILED |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: ADVISORY ACTION MAILED |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: NON FINAL ACTION MAILED |