HK1251701A1 - Local analytics at an asset - Google Patents
- Publication number
- HK1251701A1 (Application No. HK18111155.8A)
- Authority
- HK
- Hong Kong
- Prior art keywords
- asset
- predictive model
- data
- workflow
- model
Classifications
-
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05B—CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
- G05B23/00—Testing or monitoring of control systems or parts thereof
- G05B23/02—Electric testing or monitoring
- G05B23/0205—Electric testing or monitoring by means of a monitoring system capable of detecting and responding to faults
- G05B23/0208—Electric testing or monitoring by means of a monitoring system capable of detecting and responding to faults characterized by the configuration of the monitoring system
- G05B23/0213—Modular or universal configuration of the monitoring system, e.g. monitoring system having modules that may be combined to build monitoring program; monitoring system that can be applied to legacy systems; adaptable monitoring system; using different communication protocols
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q10/00—Administration; Management
- G06Q10/06—Resources, workflows, human or project management; Enterprise or organisation planning; Enterprise or organisation modelling
-
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05B—CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
- G05B23/00—Testing or monitoring of control systems or parts thereof
- G05B23/02—Electric testing or monitoring
- G05B23/0205—Electric testing or monitoring by means of a monitoring system capable of detecting and responding to faults
- G05B23/0259—Electric testing or monitoring by means of a monitoring system capable of detecting and responding to faults characterized by the response to fault detection
- G05B23/0286—Modifications to the monitored process, e.g. stopping operation or adapting control
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q10/00—Administration; Management
- G06Q10/06—Resources, workflows, human or project management; Enterprise or organisation planning; Enterprise or organisation modelling
- G06Q10/063—Operations research, analysis or management
- G06Q10/0633—Workflow analysis
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q10/00—Administration; Management
- G06Q10/06—Resources, workflows, human or project management; Enterprise or organisation planning; Enterprise or organisation modelling
- G06Q10/063—Operations research, analysis or management
- G06Q10/0635—Risk analysis of enterprise or organisation activities
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q50/00—Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
- G06Q50/04—Manufacturing
Landscapes
- Business, Economics & Management (AREA)
- Engineering & Computer Science (AREA)
- Human Resources & Organizations (AREA)
- Economics (AREA)
- Strategic Management (AREA)
- Entrepreneurship & Innovation (AREA)
- General Physics & Mathematics (AREA)
- Physics & Mathematics (AREA)
- Tourism & Hospitality (AREA)
- Theoretical Computer Science (AREA)
- Marketing (AREA)
- General Business, Economics & Management (AREA)
- Game Theory and Decision Science (AREA)
- Quality & Reliability (AREA)
- Operations Research (AREA)
- Educational Administration (AREA)
- Development Economics (AREA)
- Automation & Control Theory (AREA)
- Manufacturing & Machinery (AREA)
- Health & Medical Sciences (AREA)
- General Health & Medical Sciences (AREA)
- Primary Health Care (AREA)
- Management, Administration, Business Operations System, And Electronic Commerce (AREA)
- Information Transfer Between Computers (AREA)
Abstract
Disclosed herein are systems, devices, and methods related to assets, predictive models, and corresponding workflows that are related to the operation of assets. In particular, examples involve defining and deploying aggregate predictive models and corresponding workflows, defining and deploying individualized predictive models and/or corresponding workflows, and dynamically adjusting the execution of model-workflow pairs. Additionally, examples involve assets configured to receive and locally execute predictive models, locally individualize predictive models, and/or locally execute workflows or portions thereof.
Description
CROSS-REFERENCE TO RELATED APPLICATIONS
The present application claims priority to: (i) U.S. non-provisional patent application No. 14/744,352, filed on June 19, 2015 and entitled "Aggregate Predictive Model and Workflow for Local Execution"; (ii) U.S. non-provisional patent application No. 14/744,369, filed on June 19, 2015 and entitled "Individualized Predictive Model & Workflow for an Asset"; and (iii) U.S. non-provisional patent application No. 14/963,207, filed on December 8, 2015 and entitled "Local Analytics at an Asset", the entire contents of each of which are incorporated herein by reference. This application also incorporates by reference the entire contents of U.S. non-provisional patent application No. 14/732,258, filed on June 5, 2015 and entitled "Asset Health Score".
Background
Today, machines (also referred to herein as "assets") are ubiquitous in many industries. Assets play an important role in everyday life, from locomotives that transport cargo across countries to medical equipment that helps nurses and doctors save lives. The complexity and cost of an asset may vary depending on the role it plays. For example, some assets may include multiple subsystems (e.g., the engine, transmission, etc. of a locomotive) that must operate in coordination for the asset to function properly.
Because assets play such an important role in everyday life, it is desirable that assets be repairable with limited downtime. Accordingly, some have developed mechanisms to monitor and detect abnormal conditions within an asset to facilitate maintaining the asset with minimal downtime.
Disclosure of Invention
Current methods for monitoring assets typically involve an on-asset computer that receives signals from various sensors and/or actuators distributed throughout the asset that monitor the operating conditions of the asset. As one representative example, if the asset is a locomotive, the sensors and/or actuators may monitor parameters such as temperature, voltage, and speed, among other parameters. If the sensor and/or actuator signals from one or more of these devices reach certain values, the on-asset computer may generate an abnormal condition indicator, such as a "fault code," which is an indication that an abnormal condition has occurred within the asset.
Typically, an abnormal condition is a defect at an asset or a component thereof that may lead to a failure of the asset and/or component. As such, an abnormal condition may be associated with a given fault, or possibly multiple faults, in that the abnormal condition is a symptom of the fault or faults. In practice, a user typically defines the sensors and corresponding sensor values associated with each abnormal-condition indicator. That is, the user defines the "normal" operating conditions of the asset (e.g., those that do not trigger a fault code) and the "abnormal" operating conditions of the asset (e.g., those that trigger a fault code).
After the on-asset computer generates the abnormal condition indicator, the indicator and/or sensor signal may be communicated to a remote location where a user may receive some indication of the abnormal condition and/or sensor signal and decide whether to take action. One action that a user may take is to assign a mechanic or the like to evaluate and possibly repair the asset. Once at the asset, the mechanic may connect the computing device to the asset and operate the computing device to cause the asset to utilize one or more local diagnostic tools to facilitate diagnosing the cause of the generated indicator.
While current asset monitoring systems are generally effective at triggering abnormal-condition indicators, such systems are generally conservative. That is, by the time the asset monitoring system triggers an indicator, a fault within the asset may have already occurred (or may be imminent), which may result in costly downtime, among other drawbacks. Additionally, due to the simple nature of the on-asset abnormality detection mechanisms in such systems, current asset monitoring methods tend to involve a remote computing system performing monitoring calculations for the asset and transmitting instructions to the asset if a problem is detected. This may be disadvantageous because of network latency, and it may become infeasible altogether when the asset moves outside the coverage of the communication network. Additionally, due to the nature of the local diagnostic tools stored on assets, current diagnostic procedures tend to be inefficient and cumbersome, because a mechanic is needed to cause the asset to utilize such tools.
The example systems, devices, and methods disclosed herein seek to help address one or more of these issues. In an example implementation, the network configuration may include a communication network that facilitates communication between the asset and the remote computing system. In some cases, the communication network may facilitate secure communication (e.g., via encryption or other security measures) between the asset and the remote computing system.
As described above, each asset may include a plurality of sensors and/or actuators distributed throughout the asset that facilitate monitoring an operating condition of the asset. A plurality of assets may provide respective data indicative of an operating condition of each asset to a remote computing system, which may be configured to perform one or more operations based on the provided data. Typically, sensor and/or actuator data may be used for general asset monitoring operations. However, as described herein, the remote computing system and/or asset may utilize this data to facilitate performing more complex operations.
In an example implementation, the remote computing system may be configured to define and deploy a predictive model and corresponding workflow (referred to herein as a "model-workflow pair") related to the operation of the asset. The asset may be configured to receive the model-workflow pair and to operate in accordance with it via a local analytics device.
In general, model-workflow pairs can cause assets to monitor certain operating conditions, and when certain conditions exist, modify behavior that may help prevent certain events from occurring. In particular, the predictive model may receive data from a particular set of asset sensors and/or actuators as inputs and output a likelihood that one or more particular events may occur at the asset within a particular time period in the future. The workflow may involve one or more operations performed based on the likelihood of one or more particular events output by the model.
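To make this pairing concrete, the following minimal sketch shows one way a model-workflow pair could be represented and evaluated in software. It is illustrative only: the names (ModelWorkflowPair, run_diagnostic), the toy temperature-based model, and the trigger range are assumptions, not elements taken from the disclosure.

```python
# Illustrative sketch of a model-workflow pair: a predictive model maps
# sensor readings to an event likelihood, and outputs inside a trigger
# range cause the corresponding workflow to execute.
from dataclasses import dataclass
from typing import Callable, Dict, Tuple

SensorReadings = Dict[str, float]  # e.g. {"temperature_c": 61.0, "rpm": 120.0}

@dataclass
class ModelWorkflowPair:
    # Maps recent sensor/actuator data to the likelihood of an event
    # occurring within some future window (e.g. the next two weeks).
    predict: Callable[[SensorReadings], float]
    # Operations to perform when the model output warrants it.
    workflow: Callable[[SensorReadings], None]
    # Model outputs in [low, high) trigger the workflow.
    trigger_range: Tuple[float, float]

    def step(self, readings: SensorReadings) -> float:
        likelihood = self.predict(readings)
        low, high = self.trigger_range
        if low <= likelihood < high:
            self.workflow(readings)
        return likelihood

def run_diagnostic(readings: SensorReadings) -> None:
    print(f"Running local diagnostic tool; inputs were {readings}")

pair = ModelWorkflowPair(
    predict=lambda r: min(1.0, r["temperature_c"] / 100.0),  # toy model
    workflow=run_diagnostic,
    trigger_range=(0.6, 1.0),
)
pair.step({"temperature_c": 72.0, "rpm": 110.0})  # triggers the workflow
```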
In practice, the remote computing system may define an aggregate predictive model and corresponding workflow, an individualized predictive model and corresponding workflow, or some combination thereof. An "aggregate" model/workflow may refer to a model/workflow that is generic to a group of assets, while an "individualized" model/workflow may refer to a model/workflow that is customized for a single asset or a subgroup of assets from the group of assets.
In an example embodiment, the remote computing system may begin by defining an aggregate predictive model based on historical data for a plurality of assets. Utilizing data from multiple assets can facilitate defining a more accurate predictive model than utilizing operational data from a single asset.
The historical data forming the basis of the aggregate model may include at least operational data indicative of the operating conditions of a given asset. In particular, the operational data may include abnormal-condition data identifying the conditions present when a fault occurred at the asset and/or data indicative of one or more physical properties measured at the asset when those conditions occurred. The data may also include environmental data indicating the environments in which the asset has been operated and scheduling data indicating the dates and times when the asset was utilized, among other asset-related data that may be used to define the aggregate model-workflow pair.
Based on the historical data, the remote computing system may define an aggregate model that predicts the occurrence of particular events. In a particular example embodiment, the aggregate model may output a probability that a failure will occur at an asset within a particular period of time in the future. Such a model may be referred to herein as a "failure model". Other aggregate models may predict the likelihood that an asset will complete a task within a particular period of time in the future, among other example predictive models.
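As one hedged illustration of how an aggregate failure model might be defined from pooled historical data, the sketch below fits a logistic-regression classifier with scikit-learn. The disclosure does not prescribe any particular model family, feature set, or library; the features, labels, and the choice of logistic regression here are all assumptions.

```python
# A minimal sketch of defining an "aggregate" failure model from pooled
# historical data across many assets. Logistic regression is just one
# plausible choice of model family.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical historical rows: [temperature_c, rpm, voltage_v] per asset
# observation, labeled 1 if a failure occurred within the following
# two weeks and 0 otherwise.
X = np.array([
    [62.0, 130.0, 900.0],
    [48.0,  95.0, 620.0],
    [70.0, 140.0, 1010.0],
    [55.0, 100.0, 700.0],
])
y = np.array([1, 0, 1, 0])

aggregate_model = LogisticRegression().fit(X, y)

# Probability that a failure occurs in the window, for fresh operating data.
new_reading = np.array([[66.0, 125.0, 980.0]])
print(aggregate_model.predict_proba(new_reading)[0, 1])
```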
After defining the aggregate model, the remote computing system may then define an aggregate workflow corresponding to the defined aggregate model. In general, a workflow may include one or more operations that an asset may perform based on the corresponding model. That is, the output of the corresponding model may cause the asset to perform the workflow operations. For example, an aggregate model-workflow pair may be defined such that when the aggregate model outputs a probability within a particular range, the asset will perform a particular workflow operation (e.g., execute a local diagnostic tool).
After defining the aggregate model-workflow pair, the remote computing system may transmit the pair to one or more assets. One or more assets may then operate according to the aggregate model-workflow pair.
In an example implementation, the remote computing system may be configured to further define an individualized predictive model and/or corresponding workflow for one or more assets. It may do so based on certain characteristics of each given asset, among other considerations. For instance, the remote computing system may begin with the aggregate model-workflow pair as a baseline and individualize one or both of the aggregate model and workflow for a given asset based on that asset's characteristics.
In practice, the remote computing system may be configured to determine asset characteristics (e.g., characteristics of interest) related to the aggregate model-workflow pair. Examples of such characteristics may include asset age, asset usage, asset class (e.g., brand and/or model), asset health, and operating environment of the asset, among other characteristics.
The remote computing system may then determine characteristics of the given asset that correspond to the characteristics of interest. Based on at least some of those characteristics, the remote computing system may be configured to individualize the aggregate model and/or the corresponding workflow.
Defining the individualized model and/or workflow may involve the remote computing system making certain modifications to the aggregate model and/or workflow. For example, individualizing the aggregate model may involve changing the model's inputs, its calculations, and/or the weights given to calculated variables or outputs. Individualizing the aggregate workflow may involve changing one or more operations of the workflow and/or changing the model output value or range of values that triggers the workflow, among other examples.
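The sketch below illustrates one possible form such individualization could take, adjusting input weights and the workflow trigger threshold based on asset characteristics. The characteristics chosen and the adjustment rules are invented for illustration and do not come from the disclosure.

```python
# Illustrative sketch: individualizing an aggregate model-workflow pair for
# one asset. The characteristics and adjustment rules here are assumptions.
from dataclasses import dataclass
from typing import Dict

@dataclass
class PairConfig:
    feature_weights: Dict[str, float]  # per-input weights used by the model
    trigger_low: float                 # outputs >= trigger_low run the workflow
    transmit_rate_hz: float            # how often the workflow reports upstream

def individualize(aggregate: PairConfig, asset_age_years: float,
                  harsh_environment: bool) -> PairConfig:
    weights = dict(aggregate.feature_weights)
    trigger_low = aggregate.trigger_low
    # Older assets: weight vibration more heavily and trigger earlier.
    if asset_age_years > 10:
        weights["vibration"] = weights.get("vibration", 1.0) * 1.5
        trigger_low -= 0.05
    # Harsh operating environments: lower the trigger threshold further.
    if harsh_environment:
        trigger_low -= 0.05
    return PairConfig(weights, trigger_low, aggregate.transmit_rate_hz)

aggregate = PairConfig({"temperature": 1.0, "vibration": 1.0}, 0.70, 1.0)
print(individualize(aggregate, asset_age_years=12, harsh_environment=True))
```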
After defining the individualized model and/or workflow for the given asset, the remote computing system may then transmit the individualized model and/or workflow to the given asset. Where only one of the model or the workflow is individualized, the given asset may utilize the aggregate version of the other. The given asset may then operate in accordance with its individualized model-workflow pair.
In an example implementation, a given asset may include a local analytics device, which may be configured to cause the given asset to operate in accordance with model-workflow pairs provided by a remote computing system. The local analytics device may be configured to run the predictive model utilizing operational data from the asset sensors and/or actuators (e.g., data typically used for other asset-related purposes). When a local analytics device receives certain operational data, it may execute the model, and depending on the output of the model, may execute a corresponding workflow.
Executing the corresponding workflow may help facilitate preventing undesirable events from occurring at a given asset. In this way, a given asset may locally determine that a particular event may occur, and then may execute a particular workflow to help prevent the occurrence of the event. This may be particularly useful if communication between a given asset and a remote computing system is blocked. For example, in some cases, a failure may occur before a command to take preventative action arrives at a given asset from a remote computing system. In such cases, a local analytics device may be advantageous because it may generate commands locally, thereby avoiding any network delays or any problems due to a given asset being "offline". Thus, the local analytics device executing the model-workflow pair may facilitate causing the asset to adapt to its condition.
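A minimal sketch of such a local execution loop appears below. The function names and the toy model are assumptions; the point is only that the model is evaluated and the workflow executed entirely on the asset, with no network round trip.

```python
# Sketch of the loop a local analytics device might run: read operational
# data, evaluate the predictive model, and execute the workflow locally.
import random
import time

def read_operational_data():
    # Stand-in for data arriving from sensors/actuators via the asset interface.
    return {"temperature_c": random.uniform(40.0, 80.0)}

def predict_fault_probability(readings):
    return min(1.0, readings["temperature_c"] / 100.0)  # toy model

def execute_workflow(readings):
    # e.g. run a local diagnostic tool or command a subsystem to derate.
    print(f"Workflow triggered locally on readings {readings}")

TRIGGER = 0.7
for _ in range(5):                 # in practice this loop runs continuously
    data = read_operational_data()
    if predict_fault_probability(data) >= TRIGGER:
        execute_workflow(data)     # no network round trip required
    time.sleep(0.1)
```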
In some example implementations, the local analytics device itself may individualize the model-workflow pair it receives from the remote computing system, before or when it first executes the pair. In general, the local analytics device may do so by evaluating some or all of the predictions, assumptions, and/or generalizations that were made in defining the model-workflow pair and that are relevant to the given asset. Based on that evaluation, the local analytics device may modify the model-workflow pair so that its underlying predictions, assumptions, and/or generalizations more accurately reflect the actual state of the given asset. The local analytics device may then execute the individualized model-workflow pair instead of the pair it originally received from the remote computing system, which may result in more accurate monitoring of the asset.
While operating in accordance with the model-workflow pair, the given asset may continue to provide operational data to the remote computing system. Based at least on this data, the remote computing system may modify the aggregate model-workflow pair and/or one or more individualized model-workflow pairs. The remote computing system may make such modifications for a variety of reasons.
In one example, the remote computing system may modify the model and/or workflow if a new event occurs at the asset that the model has not previously considered. For example, in a failure model, a new event may be a new failure that has not occurred at any of the assets whose data was used to define the aggregate model.
In another example, the remote computing system may modify the model and/or workflow if an event occurs at the asset under operating conditions that do not normally result in that event. For example, returning to the failure model, if a failure occurs under operating conditions that have not caused failures in the past, the failure model and/or the corresponding workflow may be modified.
In yet another example, the remote computing system may modify the model and/or workflow if the executed workflow fails to prevent the occurrence of an event. In particular, if the output of the model causes the asset to execute a workflow that is intended to prevent the occurrence of an event, but nevertheless the event occurs at the asset, the remote computing system may modify the model and/or workflow. Other examples of reasons for modifying the model and/or workflow are possible.
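These three modification triggers could be expressed as a simple decision function, as in the hedged sketch below. The event and record structures are hypothetical; a real system would derive them from the asset's operational data.

```python
# Sketch of the three modification triggers described above, expressed as a
# simple decision function. The event/record structure is an assumption.
def needs_modification(event, known_event_types, expected_conditions,
                       workflow_was_executed):
    # 1. A new event type the model has never considered.
    if event["type"] not in known_event_types:
        return "new event type"
    # 2. The event occurred under conditions that normally do not cause it.
    if event["conditions"] not in expected_conditions[event["type"]]:
        return "unexpected operating conditions"
    # 3. The workflow ran but failed to prevent the event.
    if workflow_was_executed:
        return "workflow failed to prevent event"
    return None

reason = needs_modification(
    event={"type": "bearing_fault", "conditions": "low_temp"},
    known_event_types={"bearing_fault"},
    expected_conditions={"bearing_fault": {"high_temp"}},
    workflow_was_executed=False,
)
print(reason)  # -> "unexpected operating conditions"
```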
The remote computing system may then distribute any such modifications to the asset whose data caused the modification and/or to other assets in communication with the remote computing system. In this manner, the remote computing system may dynamically modify models and/or workflows, and distribute those modifications across the whole population of assets, based on the operating conditions of individual assets.
In some example implementations, the asset and/or the remote computing system may be configured to dynamically adjust the execution of the predictive model and/or workflow. In particular, the asset and/or the remote computing system may be configured to detect certain events that trigger a change in responsibility between the asset and the remote computing system for executing the predictive model and/or workflow.
For example, in some cases, after the asset receives the model-workflow pair from the remote computing system, the asset may store the model-workflow pair in a data store and rely on the remote computing system to centrally execute some or all of the model-workflow pair. In other cases, the remote computing system may rely on the asset to locally execute some or all of the model-workflow pair. In still other cases, the remote computing system and the asset may share responsibility for executing the model-workflow pair.
Regardless, at some point in time, certain events may occur that trigger the asset and/or the remote computing system to adjust execution of the predictive model and/or workflow. For example, the asset and/or the remote computing system may detect certain characteristics of the communication network coupling the asset to the remote computing system. Based on those characteristics, the asset may adjust whether it executes the predictive model and/or workflow locally, and the remote computing system may correspondingly adjust whether it executes the model and/or workflow centrally. In this manner, the asset and/or the remote computing system may adapt to the asset's conditions.
In a particular example, the asset may detect an indication that the signal strength of the communication link between the asset and the remote computing system is relatively weak (e.g., the asset may determine that it is "offline"), that network latency is relatively high, and/or that network bandwidth is relatively low. In response, the asset may be programmed to assume responsibility for executing the model-workflow pair that was previously handled by the remote computing system, and the remote computing system may stop centrally executing some or all of the model-workflow pair. In this way, the asset may locally execute the predictive model and then, based on the model's output, execute the corresponding workflow to potentially help prevent a failure at the asset.
Additionally, in some implementations, the asset and/or remote computing system may similarly adjust the execution of (or possibly modify) the predictive model and/or workflow based on various other considerations. For example, based on the processing capacity of the asset, the asset may execute the model-workflow pair locally and the remote computing system may adjust accordingly. In another example, the asset may execute a modified workflow (e.g., transmit data to the remote computing system at a reduced rate according to a data transmission scheme) based on the bandwidth of the communication network coupling the asset to the remote computing system. Other examples are possible.
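The following sketch illustrates one way an asset might choose an execution site from observed network health, in the spirit of the examples above. The thresholds, field names, and category labels are assumptions made for illustration.

```python
# Sketch of how an asset might decide where the model-workflow pair executes,
# based on observed network health. All thresholds are invented.
from dataclasses import dataclass

@dataclass
class NetworkHealth:
    signal_dbm: float      # received signal strength
    latency_ms: float      # round-trip latency to the remote system
    bandwidth_kbps: float  # available uplink bandwidth

def choose_execution_site(net: NetworkHealth) -> str:
    offline_or_weak = net.signal_dbm < -100   # effectively "offline"
    high_latency = net.latency_ms > 500
    low_bandwidth = net.bandwidth_kbps < 64
    if offline_or_weak or high_latency:
        return "local"    # asset assumes responsibility for the pair
    if low_bandwidth:
        return "local-with-reduced-reporting"  # throttled uplink workflow
    return "central"      # remote computing system executes the pair

print(choose_execution_site(NetworkHealth(-105, 80, 500)))  # -> local
print(choose_execution_site(NetworkHealth(-70, 40, 2000)))  # -> central
```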
As discussed above, examples provided herein relate to deployment and execution of predictive models. In one aspect, a computing system is provided. The computing system comprises at least one processor, a non-transitory computer-readable medium, and program instructions stored on the non-transitory computer-readable medium that are executable by the at least one processor to cause the computing system to: (a) receiving respective operational data for a plurality of assets; (b) defining a predictive model and corresponding workflow related to the operation of the plurality of assets based on the received operational data; and (c) transmitting the predictive model and corresponding workflow to at least one asset of the plurality of assets for local execution by the at least one asset.
In another aspect, a non-transitory computer-readable medium having instructions stored thereon is provided that are executable to cause a computing system to: (a) receiving respective operational data for a plurality of assets; (b) defining a predictive model and corresponding workflow related to the operation of the plurality of assets based on the received operational data; and (c) transmitting the predictive model and corresponding workflow to at least one asset of the plurality of assets for local execution by the at least one asset.
In yet another aspect, a computer-implemented method is provided. The method comprises the following steps: (a) receiving respective operational data for a plurality of assets; (b) defining a predictive model and corresponding workflow related to the operation of the plurality of assets based on the received operational data; and (c) transmitting the predictive model and corresponding workflow to at least one asset of the plurality of assets for local execution by the at least one asset.
As discussed above, examples provided herein relate to deployment and execution of predictive models. In one aspect, a computing system is provided. The computing system comprises at least one processor, a non-transitory computer-readable medium, and program instructions stored on the non-transitory computer-readable medium that are executable by the at least one processor to cause the computing system to: (a) receiving operational data for a plurality of assets, wherein the plurality of assets includes a first asset; (b) defining an aggregate predictive model and an aggregate corresponding workflow related to the operation of the plurality of assets based on the received operational data; (c) determining one or more characteristics of the first asset; (d) defining at least one of an individualized predictive model or an individualized corresponding workflow related to the operation of the first asset based on the one or more characteristics of the first asset and the aggregate predictive model and the aggregate corresponding workflow; and (e) transmitting the defined at least one individualized predictive model or individualized corresponding workflow to the first asset for local execution by the first asset.
In another aspect, a non-transitory computer-readable medium having instructions stored thereon is provided that are executable to cause a computing system to: (a) receiving operational data for a plurality of assets, wherein the plurality of assets includes a first asset; (b) defining an aggregate predictive model and an aggregate corresponding workflow related to the operation of the plurality of assets based on the received operational data; (c) determining one or more characteristics of the first asset; (d) defining at least one of an individualized predictive model or an individualized corresponding workflow related to the operation of the first asset based on the one or more characteristics of the first asset and the aggregate predictive model and the aggregate corresponding workflow; and (e) transmitting the defined at least one individualized predictive model or individualized corresponding workflow to the first asset for local execution by the first asset.
In yet another aspect, a computer-implemented method is provided. The method comprises the following steps: (a) receiving operational data for a plurality of assets, wherein the plurality of assets includes a first asset; (b) defining an aggregate predictive model and an aggregate corresponding workflow related to the operation of the plurality of assets based on the received operational data; (c) determining one or more characteristics of the first asset; (d) defining at least one of an individualized predictive model or an individualized corresponding workflow related to the operation of the first asset based on the one or more characteristics of the first asset and the aggregate predictive model and the aggregate corresponding workflow; and (e) transmitting the defined at least one individualized predictive model or individualized corresponding workflow to the first asset for local execution by the first asset.
As discussed above, examples provided herein relate to receiving and executing a predictive model and/or workflow at an asset. In one aspect, a computing device is provided. The computing device includes: (i) an asset interface configured to couple the computing device to an asset; (ii) a network interface configured to facilitate communication between the computing device and a computing system located remotely from the computing device; (iii) at least one processor; (iv) a non-transitory computer-readable medium; and (v) program instructions stored on the non-transitory computer-readable medium that are executable by the at least one processor to cause the computing device to: (a) receiving, via the network interface, a predictive model related to operation of the asset, wherein the predictive model is defined by the computing system based on operational data of a plurality of assets; (b) receiving, via the asset interface, operational data for the asset; (c) executing the predictive model based on at least a portion of the received operational data of the asset; and (d) based on executing the predictive model, executing a workflow corresponding to the predictive model, wherein executing the workflow comprises causing the asset to perform an operation via the asset interface.
In another aspect, a non-transitory computer-readable medium having instructions stored thereon is provided that are executable to cause a computing device coupled to an asset via an asset interface of the computing device to: (a) receiving, via a network interface of the computing device, a predictive model related to the operation of the asset, the network interface of the computing device configured to facilitate communication between the computing device and a computing system located remotely from the computing device, wherein the predictive model is defined by the computing system based on operational data of a plurality of assets; (b) receiving, via the asset interface, operational data for the asset; (c) executing the predictive model based on at least a portion of the received operational data of the asset; and (d) based on executing the predictive model, executing a workflow corresponding to the predictive model, wherein executing the workflow comprises causing the asset to perform an operation via the asset interface.
In yet another aspect, a computer-implemented method is provided. The method comprises the following steps: (a) receiving, via a network interface of a computing device, a predictive model related to the operation of an asset, wherein the computing device is coupled to the asset via an asset interface of the computing device, and wherein the predictive model is defined by a computing system located remotely from the computing device based on operational data of a plurality of assets; (b) receiving, by the computing device, operational data for the asset via the asset interface; (c) executing, by the computing device, the predictive model based on at least a portion of the received operational data of the asset; and (d) based on executing the predictive model, executing, by the computing device, a workflow corresponding to the predictive model, wherein executing the workflow comprises causing the asset to perform an operation via the asset interface.
These and many other aspects will be apparent to one of ordinary skill in the art upon reading the following disclosure.
Drawings
FIG. 1 depicts an example network configuration in which example embodiments may be implemented.
FIG. 2 depicts a simplified block diagram of an example asset.
FIG. 3 depicts a conceptual illustration of example abnormal-condition indicators and triggering criteria.
FIG. 4 depicts a simplified block diagram of an example analytics system.
FIG. 5 depicts an example flow diagram of a definition phase that can be used to define a model-workflow pair.
FIG. 6A depicts a conceptual illustration of an aggregate model-workflow pair.
FIG. 6B depicts a conceptual illustration of an individualized model-workflow pair.
FIG. 6C depicts a conceptual illustration of another individualized model-workflow pair.
FIG. 6D depicts a conceptual illustration of a modified model-workflow pair.
FIG. 7 depicts an example flow diagram of a modeling phase that may be used to define a predictive model that outputs a health indicator.
FIG. 8 depicts a conceptual illustration of data used to define a model.
FIG. 9 depicts an example flow diagram of a local execution phase that may be used to locally execute a predictive model.
FIG. 10 depicts an example flow diagram of a modification phase that can be used to modify a model-workflow pair.
FIG. 11 depicts an example flow diagram of an adjustment phase that may be used to adjust the execution of a model-workflow pair.
FIG. 12 depicts a flow diagram of an example method for defining and deploying an aggregate predictive model and corresponding workflow.
FIG. 13 depicts a flow diagram of an example method for defining and deploying an individualized predictive model and/or corresponding workflow.
FIG. 14 depicts a flow diagram of an example method for dynamically modifying execution of a model-workflow pair.
FIG. 15 depicts a flow diagram of an example method for receiving and locally executing a model-workflow pair.
Detailed Description
The following disclosure makes reference to the accompanying drawings and several exemplary scenarios. It will be understood by those of ordinary skill in the art that such references are for illustrative purposes only and are therefore not meant to be limiting. Some or all of the disclosed systems, devices, and methods may be rearranged, combined, added, and/or removed in various ways, each of which is contemplated herein.
I. Example network configuration
Turning now to the drawings, FIG. 1 depicts an example network configuration 100 in which example embodiments may be implemented. As shown, the network configuration 100 includes assets 102, assets 104, a communication network 106, a remote computing system 108, which may take the form of an analytics system, an output system 110, and a data source 112.
A communication network 106 communicatively connects each of the components in the network configuration 100. For example, the assets 102 and 104 may communicate with the analytics system 108 via the communication network 106. In some cases, the assets 102 and 104 may communicate with one or more intermediate systems, such as an asset gateway (not depicted), which in turn communicates with the analytics system 108. Similarly, the analytics system 108 may communicate with the output system 110 via the communication network 106. In some cases, the analytics system 108 may communicate with one or more intermediate systems, such as a host server (not depicted), which in turn communicates with the output system 110. Many other configurations are possible. In an example case, the communication network 106 may facilitate secure communications (e.g., via encryption or other security measures) between network components.
In general, assets 102 and 104 may take the form of any device configured to perform one or more operations (which may be defined based on a domain), and may also include apparatus configured to transmit data indicative of one or more operating conditions of a given asset. In some examples, an asset may include one or more subsystems configured to perform one or more respective operations. In practice, multiple subsystems may operate in parallel or in sequence to operate the asset.
Example assets can include transportation machines (e.g., locomotives, airplanes, passenger vehicles, semi-trucks, ships, etc.), industrial machines (e.g., mining equipment, construction equipment, factory automation equipment, etc.), medical machines (e.g., medical imaging equipment, surgical equipment, medical monitoring systems, medical laboratory equipment, etc.), and utility machines (e.g., turbines, solar farms, etc.), among others. Those of ordinary skill in the art will appreciate that these are just a few examples of assets, and that several other assets are possible and contemplated herein.
In an example implementation, assets 102 and 104 may each be of the same type (e.g., a fleet of locomotives or aircraft, a group of wind turbines, or a set of MRI machines) and possibly of the same class (e.g., the same make and/or model). In other examples, the assets 102 and 104 may differ in type, make, model, and so on. Assets are discussed in further detail below with reference to FIG. 2.
As shown, the assets 102 and 104 and possibly the data source 112 may be in communication with the analytics system 108 via the communication network 106. In general, the communication network 106 may include one or more computing systems and network infrastructure configured to facilitate the transfer of data between network components. The communication network 106 may be or may include one or more Wide Area Networks (WANs) and/or Local Area Networks (LANs) that may be wired and/or wireless and support secure communications. In some examples, communication network 106 may include one or more cellular networks and/or networks such as the internet. The communication network 106 may operate according to one or more communication protocols such as LTE, CDMA, GSM, LPWAN, WiFi, Bluetooth, Ethernet, HTTP/S, TCP, CoAP/DTLS, and so forth. While the communication network 106 is illustrated as a single network, it should be understood that the communication network 106 may include a plurality of different networks that are themselves communicatively linked. The communication network 106 may take other forms as well.
As described above, the analytics system 108 may be configured to receive data from the assets 102 and 104 and the data source 112. In general, the analytics system 108 may include one or more computing systems, such as servers and databases, configured to receive, process, analyze, and output data. The analytics system 108 may be configured according to a given dataflow technology (e.g., TPL Dataflow or NiFi, among others). The analytics system 108 is discussed in further detail below with reference to FIG. 4.
As shown, the analytics system 108 may be configured to transmit data to the assets 102 and 104 and/or the output system 110. The particular data transmitted may take a variety of forms and will be described in further detail below.
In general, the output system 110 may take the form of a computing system or device configured to receive data and provide some form of output. The output system 110 may take various forms. In one example, the output system 110 may be or include an output device configured to receive data and provide audible, visual, and/or tactile output in response to the data. In general, the output device may include one or more input interfaces configured to receive user input, and the output device may be configured to transmit data over the communication network 106 based on such user input. Examples of output devices include tablet computers, smart phones, laptop computers, other mobile computing devices, desktop computers, smart televisions, and the like.
Another example of the output system 110 may take the form of a work order system configured to output a request for a mechanic or the like to repair an asset. Yet another example of the output system 110 may take the form of a part ordering system configured to place an order for a part of an asset and output a receipt thereof. Many other output systems are possible.
The data source 112 may be configured to communicate with the analytics system 108. In general, the data source 112 may be or include one or more computing systems configured to collect, store, and/or provide data to other systems (e.g., the analytics system 108) that may be related to functions performed by the analytics system 108. The data source 112 may be configured to generate and/or obtain data independent of the assets 102 and 104. Thus, the data provided by the data source 112 may be referred to herein as "external data". The data source 112 may be configured to provide current and/or historical data. In practice, the analytics system 108 may receive data from the data sources 112 by "subscribing" to services provided by the data sources. However, the analytics system 108 may receive data from the data source 112 in other manners as well.
Examples of data sources 112 include environmental data sources, asset management data sources, and other data sources. Typically, the environmental data source provides data indicative of some characteristic of the operating environment of the asset. Examples of environmental data sources include meteorological data servers, Global Navigation Satellite System (GNSS) servers, map data servers, and topology data servers, among others, that provide information about the natural and man-made features of a given area.
Typically, an asset management data source provides data indicative of events or states of entities (e.g., other assets) that may affect the operation or maintenance of an asset (e.g., when and where an asset may operate or receive maintenance). Examples of asset management data sources include: a traffic data server providing information about air, water and/or ground traffic; an asset scheduling server that provides information regarding an expected route and/or location of an asset on a particular date and/or at a particular time; a defect detector system (also referred to as a "hot box" detector) that provides information about one or more operating conditions of an asset passing in the vicinity of the defect detector system; a part supplier server that provides information about parts in inventory of a particular supplier and their prices; and a maintenance workshop server which provides information on the productivity of the maintenance workshop and the like; and so on.
Examples of other data sources include a grid server that provides information about power consumption and an external database that stores historical operating data for assets, among others. Those of ordinary skill in the art will appreciate that these are just a few examples of data sources and that several other examples are possible.
It should be understood that network configuration 100 is one example of a network in which embodiments described herein may be implemented. Several other arrangements are possible and are contemplated herein. For example, other network configurations may include additional components not depicted and/or more or fewer components depicted.
Example assets
Turning to FIG. 2, a simplified block diagram of an example asset 200 is depicted. Either or both of the assets 102 and 104 from fig. 1 may be configured as the asset 200. As shown, the asset 200 may include one or more subsystems 202, one or more sensors 204, one or more actuators 205, a central processing unit 206, a data storage 208, a network interface 210, a user interface 212, and a local analytics device 220, all of which may be communicatively linked (directly or indirectly) through a system bus, network, or other connection mechanism. It will be apparent to one of ordinary skill in the art that the asset 200 may include additional components not shown and/or more or fewer of the depicted components.
In general, the asset 200 may include one or more electrical, mechanical, and/or electromechanical components configured to perform one or more operations. In some cases, one or more components may be grouped into a given subsystem 202.
In general, the subsystem 202 may include groups of related components that are part of the asset 200. A single subsystem 202 may perform one or more operations independently, or a single subsystem 202 may operate with one or more other subsystems to perform one or more operations. Typically, different types of assets, and even different classes of the same type of asset, may include different subsystems.
For example, in the context of transportation assets, examples of subsystems 202 may include engines, transmissions, drivetrains, fuel systems, battery systems, exhaust systems, brake systems, electrical systems, signal processing systems, generators, gearboxes, rotors, and hydraulic systems, among several other subsystems. In the context of a medical machine, examples of subsystems 202 may include a scanning system, a motor, a coil and/or magnet system, a signal processing system, a rotor, and an electrical system, among several other subsystems.
As indicated above, the asset 200 may be equipped with: various sensors 204 configured to monitor operating conditions of the asset 200; and various actuators 205 configured to interact with the asset 200 or components thereof and monitor the operating conditions of the asset 200. In some cases, some of the sensors 204 and/or actuators 205 may be grouped based on a particular subsystem 202. In this manner, a group of sensors 204 and/or actuators 205 may be configured to monitor operating conditions of a particular subsystem 202, and actuators from the group may be configured to interact with a particular subsystem 202 in a manner that may alter the behavior of the subsystem based on those operating conditions.
In general, the sensors 204 may be configured to detect physical properties that may be indicative of one or more operating conditions of the asset 200, and to provide indications of the detected physical properties, such as electrical signals. In operation, the sensors 204 may be configured to obtain measurements continuously, periodically (e.g., based on a sampling frequency), and/or in response to some triggering event. In some examples, the sensors 204 may be preconfigured with operating parameters for performing measurements, and/or may perform measurements according to operating parameters provided by the central processing unit 206 (e.g., a sampling signal that causes a sensor 204 to obtain measurements). In examples, different sensors 204 may have different operating parameters (e.g., some sensors may sample based on a first frequency, while other sensors sample based on a second, different frequency). Regardless, the sensors 204 may be configured to transmit electrical signals indicative of the measured physical properties to the central processing unit 206, and may provide such signals continuously or periodically.
For example, the sensors 204 may be configured to measure physical properties of the asset 200, such as the location and/or movement of the asset 200, in which case the sensors may take the form of GNSS sensors, dead reckoning based sensors, accelerometers, gyroscopes, pedometers, magnetometers, and the like.
Additionally, various sensors 204 may be configured to measure other operating conditions of the asset 200, examples of which may include temperature, pressure, velocity, acceleration or deceleration rates, friction, power usage, fuel usage, fluid level, runtime, voltage and current, magnetic fields, electric fields, presence or absence of objects, location of components and power generation, and so forth. Those of ordinary skill in the art will appreciate that these are merely some example operating conditions that a sensor may be configured to measure. More or fewer sensors may be used depending on the industry application or particular asset.
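As a small illustration of the idea that different sensors may carry different operating parameters, the sketch below models per-sensor sampling frequencies. The configuration structure is hypothetical; a real asset would implement this in embedded firmware.

```python
# Illustrative sketch of per-sensor operating parameters such as sampling
# frequency. The structure and names are assumptions for illustration.
SENSOR_CONFIG = {
    "temperature": {"sample_hz": 1.0},    # sampled once per second
    "vibration":   {"sample_hz": 100.0},  # sampled at a higher frequency
}

def due_samples(config, elapsed_s):
    # Returns how many samples each sensor should have produced so far.
    return {name: int(params["sample_hz"] * elapsed_s)
            for name, params in config.items()}

print(due_samples(SENSOR_CONFIG, elapsed_s=2.0))
# -> {'temperature': 2, 'vibration': 200}
```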
As indicated above, the configuration of the actuators 205 may be similar in some respects to that of the sensors 204. Specifically, the actuators 205 may be configured to detect physical properties indicative of the operating conditions of the asset 200 and to provide indications thereof in a similar manner as the sensors 204.
Additionally, the actuator 205 may be configured to interact with the asset 200, one or more subsystems 202, and/or some components thereof. As such, the actuator 205 may include a motor or the like configured to perform a mechanical operation (e.g., move) or otherwise control a component, subsystem, or system. In particular examples, the actuator may be configured to measure fuel flow and change fuel flow (e.g., restrict fuel flow), or the actuator may be configured to measure hydraulic pressure and change hydraulic pressure (e.g., increase or decrease hydraulic pressure). Several other example interactions of the actuator are also possible and contemplated herein.
Generally, the central processing unit 206 may include one or more processors and/or controllers, which may take the form of general or special purpose processors or controllers. Specifically, in an example implementation, the central processing unit 206 may be or include a microprocessor, microcontroller, application specific integrated circuit, digital signal processor, or the like. Further, data storage device 208 may be or include one or more non-transitory computer-readable storage media, such as optical, magnetic, organic, or flash memory, among others.
The central processing unit 206 may be configured to store, access, and execute computer-readable program instructions stored in the data storage 208 to perform the operations of the assets described herein. For example, as indicated above, the central processing unit 206 may be configured to receive respective sensor signals from the sensors 204 and/or actuators 205. The central processing unit 206 may be configured to store sensor and/or actuator data in the data storage 208 and subsequently access the data from the data storage 208.
The central processing unit 206 may also be configured to determine whether the received sensor and/or actuator signals trigger any abnormal-condition indicators, such as fault codes. For example, the central processing unit 206 may be configured to store abnormal-condition rules in the data storage 208, each of which includes a given abnormal-condition indicator representing a particular abnormal condition and corresponding triggering criteria that trigger the indicator. That is, each abnormal-condition indicator corresponds to one or more sensor and/or actuator measurement values that must be satisfied before the indicator is triggered. In practice, the asset 200 may be preprogrammed with the abnormal-condition rules and/or may receive new abnormal-condition rules, or updates to existing rules, from a computing system (e.g., the analytics system 108).
Regardless, the central processing unit 206 may be configured to determine whether the received sensor and/or actuator signals trigger any abnormal-condition indicators. That is, the central processing unit 206 may determine whether the received sensor and/or actuator signals satisfy any triggering criteria. When this determination is affirmative, the central processing unit 206 may generate abnormal status data, and may also cause the asset's user interface 212 to output an indication of an abnormal condition, such as a visual and/or audible alert. Additionally, the central processing unit 206 may record the occurrence of the triggered abnormal condition indicator in the data store 208, possibly with a time stamp.
FIG. 3 depicts a conceptual illustration of example abnormal-condition indicators and corresponding triggering criteria for an asset. In particular, FIG. 3 depicts a conceptual illustration of example fault codes. As shown, table 300 includes columns 302, 304, and 306 corresponding to sensor A, actuator B, and sensor C, respectively, and rows 308, 310, and 312 corresponding to fault codes 1, 2, and 3, respectively. The entries 314 then specify the sensor criteria (e.g., sensor value thresholds) corresponding to the given fault codes.
For example, fault code 1 will be triggered when sensor A detects a rotation measurement greater than 135 revolutions per minute (RPM) and sensor C detects a temperature measurement greater than 65 degrees Celsius (°C). Fault code 2 will be triggered when actuator B detects a voltage measurement greater than 1000 volts (V) and sensor C detects a temperature measurement less than 55°C. Fault code 3 will be triggered when sensor A detects a rotation measurement greater than 100 RPM, actuator B detects a voltage measurement greater than 750 V, and sensor C detects a temperature measurement greater than 60°C. It will be appreciated by those of ordinary skill in the art that FIG. 3 is provided for purposes of example and explanation only, and that numerous other fault codes and/or triggering criteria are possible and contemplated herein.
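For illustration, the triggering logic of table 300 could be evaluated as in the following sketch. Only the thresholds come from FIG. 3; the rule encoding and signal names are assumptions.

```python
# Sketch of evaluating the abnormal-condition rules of FIG. 3. Each rule maps
# a fault code to criteria over the latest sensor/actuator measurements.
RULES = {
    "fault_code_1": lambda s: s["sensor_a_rpm"] > 135 and s["sensor_c_temp_c"] > 65,
    "fault_code_2": lambda s: s["actuator_b_volts"] > 1000 and s["sensor_c_temp_c"] < 55,
    "fault_code_3": lambda s: (s["sensor_a_rpm"] > 100
                               and s["actuator_b_volts"] > 750
                               and s["sensor_c_temp_c"] > 60),
}

def triggered_codes(signals):
    # Returns every fault code whose triggering criteria are satisfied.
    return [code for code, criteria in RULES.items() if criteria(signals)]

signals = {"sensor_a_rpm": 140, "actuator_b_volts": 800, "sensor_c_temp_c": 66}
print(triggered_codes(signals))  # -> ['fault_code_1', 'fault_code_3']
```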
Referring back to FIG. 2, the central processing unit 206 may also be configured to perform various additional functions for managing and/or controlling the operation of the asset 200. For example, the central processing unit 206 may be configured to provide command signals to the subsystem 202 and/or the actuator 205 that cause the subsystem 202 and/or the actuator 205 to perform an operation (e.g., modify a throttle position). Additionally, the central processing unit 206 may be configured to modify the rate at which it processes data from the sensors 204 and/or actuators 205, or the central processing unit 206 may be configured to provide instruction signals to the sensors 204 and/or actuators 205 that cause the sensors 204 and/or actuators 205 to, for example, modify the sampling rate. Additionally, the central processing unit 206 may be configured to receive signals from the subsystems 202, sensors 204, actuators 205, network interface 210, and/or user interface 212, and cause operations to occur based on such signals. Additionally, the central processing unit 206 may be configured to receive signals from a computing device, such as a diagnostic device, that cause the central processing unit 206 to execute one or more diagnostic tools according to diagnostic rules stored in the data storage 208. Other functionalities of the central processing unit 206 are discussed below.
The network interface 210 may be configured to provide communication between the asset 200 and various network components connected to the communication network 106. For example, the network interface 210 may be configured to facilitate wireless communication to and from the communication network 106, and thus may take the form of antenna structures and associated devices for transmitting and receiving various wireless signals (over-the-air signals). Other examples are possible. In practice, the network interface 210 may be configured according to a communication protocol (such as, but not limited to, any of the communication protocols described above).
The user interface 212 may be configured to facilitate user interaction with the asset 200, and may also be configured to facilitate causing the asset 200 to perform an operation in response to the user interaction. Examples of user interface 212 include touch-sensitive interfaces, mechanical interfaces (e.g., levers, buttons, scroll wheels, dials, keyboards, etc.), and other input interfaces (e.g., microphones), among others. In some cases, the user interface 212 may include or provide connectivity to output components (e.g., a display screen, speakers, a headphone jack, etc.).
The local analytics device 220 may generally be configured to receive and analyze data related to the asset 200, and based on this analysis may cause one or more operations to occur at the asset 200. For example, the local analytics device 220 may receive operational data (e.g., data generated by the sensors 204 and/or actuators 205) of the asset 200 and, based on this data, may provide instructions to the central processing unit 206, the sensors 204, and/or actuators 205 that cause the asset 200 to perform operations.
To facilitate this operation, the local analytics device 220 may include one or more asset interfaces configured to couple the local analytics device 220 to one or more of the asset's on-board systems. For example, as shown in FIG. 2, the local analytics device 220 may have an interface to the asset's central processing unit 206, which may enable the local analytics device 220 to receive operational data from the central processing unit 206 (e.g., operational data generated by the sensors 204 and/or actuators 205 and sent to the central processing unit 206) and then provide instructions to the central processing unit 206. In this manner, the local analytics device 220 may indirectly interface with, and receive data from, other on-board systems of the asset 200 (e.g., the sensors 204 and/or actuators 205) via the central processing unit 206. Additionally or alternatively, as shown in FIG. 2, the local analytics device 220 may have an interface to one or more sensors 204 and/or actuators 205, which may enable the local analytics device 220 to communicate directly with the sensors 204 and/or actuators 205. The local analytics device 220 may interface with the on-board systems of the asset 200 in other manners as well, including the possibility that the interfaces illustrated in FIG. 2 are facilitated by one or more intermediate systems that are not shown.
In practice, the local analytics device 220 may enable the asset 200 to locally perform advanced analytics and associated operations, such as executing a predictive model and corresponding workflow, that could not otherwise be performed with the asset's other on-board components. As such, the local analytics device 220 may help provide additional processing power and/or intelligence to the asset 200.
It should be understood that the local analytics device 220 may also be configured to cause the asset 200 to perform operations that are independent of the predictive model. For example, the local analytics device 220 may receive data from a remote source (e.g., the analytics system 108 or the output system 110) and cause the asset 200 to perform one or more operations based on the received data. One particular example may involve the local analytics device 220 receiving a firmware update for the asset 200 from a remote source and then causing the asset 200 to update its firmware. Another particular example may involve the local analytics device 220 receiving diagnostic instructions from a remote source, and then causing the asset 200 to execute a local diagnostic tool in accordance with the received instructions. Several other examples are possible.
As shown, in addition to the one or more asset interfaces discussed above, the local analytics device 220 may also include a processing unit 222, a data storage device 224, and a network interface 226, all of which may be communicatively linked through a system bus, network, or other connection mechanism. The processing unit 222 may include any of the components discussed above with respect to the central processing unit 206. In turn, the data storage 224 may be or include one or more non-transitory computer-readable storage media, which may take any of the forms of computer-readable storage media discussed above.
The processing unit 222 may be configured to store, access, and execute computer-readable program instructions stored in the data storage 224 to perform the operations of the local analytics device described herein. For example, the processing unit 222 may be configured to receive respective sensor and/or actuator signals generated by the sensors 204 and/or actuators 205, and may execute a predictive model-workflow pair based on such signals. Other functions are described below.
The network interface 226 may be the same as or similar to the network interfaces described above. Indeed, the network interface 226 may facilitate communication between the local analytics device 220 and the analytics system 108.
In some example implementations, the local analytics device 220 may include and/or communicate with a user interface that may be similar to the user interface 212. In practice, the user interface may be located remotely from the local analytics device 220 (and the asset 200). Other examples are possible.
While FIG. 2 shows the local analytics device 220 physically and communicatively coupled to its associated asset (e.g., the asset 200) via one or more asset interfaces, it should also be understood that this might not always be the case. For example, in some implementations, the local analytics device 220 may not be physically coupled to its associated asset and instead may be located remotely from the asset 200. In an example of such an implementation, the local analytics device 220 may be wirelessly, communicatively coupled to the asset 200. Other arrangements and configurations are also possible.
Those of ordinary skill in the art will appreciate that the asset 200 shown in FIG. 2 is but one example of a simplified representation of an asset, and that numerous others are possible. For instance, other assets may include additional components not depicted and/or may omit some of the components that are depicted. Moreover, a given asset may include multiple, individual assets that operate in unison to perform the operations of the given asset. Other examples are also possible.
Example analytics system
Turning now to FIG. 4, a simplified block diagram of an example analytics system 400 is depicted. As indicated above, the analytics system 400 may include one or more computing systems communicatively linked and arranged to carry out the various operations described herein. Specifically, as shown, the analytics system 400 may include a data intake system 402, a data science system 404, and one or more databases 406. These system components may be communicatively coupled via one or more wireless and/or wired connections, which may be configured to facilitate secure communications.
The data intake system 402 is generally operable to receive and process data and output data to the data science system 404. As such, the data intake system 402 may include one or more network interfaces configured to receive data from various network components of the network configuration 100 (e.g., the assets 102 and 104, the output system 110, and/or the data source 112). Specifically, the data intake system 402 may be configured to receive analog signals, data streams, and/or network packets, among others. As such, the network interface may include one or more wired network interfaces (e.g., ports, etc.) and/or wireless network interfaces, similar to the wireless network interfaces described above. In some examples, the data intake system 402 may be or include components configured according to a given data flow technology, such as a NiFi receiver or the like.
The data intake system 402 may include one or more processing components configured to perform one or more operations. Example operations may include compression and/or decompression, encryption and/or decryption, analog-to-digital and/or digital-to-analog conversion, filtering, and amplification, among other operations. Additionally, the data intake system 402 may be configured to parse, classify, organize, and/or route data based on the data type and/or characteristics of the data. In some examples, the data intake system 402 may also be configured to format, package, and/or route data based on one or more characteristics or operating parameters of the data science system 404.
In general, the data received by the data intake system 402 may take various forms. For example, the payload of the data may include a single sensor or actuator measurement, multiple sensor and/or actuator measurements, and/or one or more pieces of abnormal-condition data. Other examples are possible as well.
In addition, the received data may include certain characteristics, such as a source identifier and a timestamp (e.g., a date and/or time at which the information was obtained). For instance, a unique identifier (e.g., a computer-generated alphabetic, numeric, or alphanumeric identifier) may be assigned to each asset, and perhaps to each sensor and actuator. Such identifiers may be operable to identify the asset, sensor, or actuator from which the data originated. In some cases, another characteristic may include the location (e.g., GPS coordinates) at which the information was obtained. Data characteristics may come in the form of signal signatures or metadata, among other examples.
The data science system 404 is generally operable to receive data (e.g., from the data intake system 402), analyze the data, and cause one or more operations to occur based on this analysis. To this end, the data science system 404 may include a network interface 408, a processing unit 410, and data storage 412, all of which may be communicatively linked by a system bus, network, or other connection mechanism. In some cases, the data science system 404 may be configured to store and/or access one or more application programming interfaces (APIs) that facilitate carrying out some of the functionality disclosed herein.
The network interface 408 may be the same as or similar to any of the network interfaces described above. In practice, the network interface 408 may facilitate communication (e.g., with some level of security) between the data science system 404 and various other entities (e.g., the data intake system 402, the database 406, the assets 102, the output system 110, etc.).
The processing unit 410 may include one or more processors, which may take any of the processor forms described above. In turn, the data storage device 412 may be or include one or more non-transitory computer-readable storage media, which may take any of the forms of computer-readable storage media discussed above. The processing unit 410 may be configured to store, access, and execute computer-readable program instructions stored in the data storage 412 to perform the operations of the analysis system described herein.
In general, the processing unit 410 may be configured to perform analyses on data received from the data intake system 402. To this end, the processing unit 410 may be configured to execute one or more modules, each of which may take the form of one or more sets of program instructions stored in the data storage 412. The modules may be configured to facilitate causing an outcome to occur based on the execution of their respective program instructions. Example outcomes from a given module may include outputting data to another module, updating the given module and/or another module, and outputting data to the network interface 408 for transmission to the assets and/or the output system 110, among other examples.
The database 406 is generally operable to receive data (e.g., from the data science system 404) and store the data. As such, each database 406 may include one or more non-transitory computer-readable storage media, such as any of the examples provided above. In practice, database 406 may be separate from data storage 412 or integrated with data storage 412.
Database 406 may be configured to store several types of data, some of which are discussed below. Indeed, some of the data stored in database 406 may include a time stamp indicating the date and time the data was generated or added to the database. Additionally, data may be stored in database 406 in several ways. For example, the data may be stored in a chronological, tabular manner, and/or organized based on data source type (e.g., based on asset, asset type, sensor type, actuator, or actuator type), or abnormal condition indicator, among other examples.
Example operation
The operations of the example network configuration 100 depicted in FIG. 1 will now be discussed in further detail below. To help describe some of these operations, flow diagrams may be referenced to describe combinations of operations that may be performed. In some cases, each block may represent a module or portion of program code that includes instructions executable by a processor to implement specific logical functions or steps in a process. The program code may be stored on any type of computer-readable medium, such as a non-transitory computer-readable medium. In other cases, each block may represent circuitry that is wired to perform specific logical functions or steps in a process. Moreover, the blocks shown in the flow diagrams may be rearranged into different orders, combined into fewer blocks, separated into additional blocks, and/or removed, depending upon the particular embodiment.
The following description may refer to an example in which a single data source, such as an asset 102, provides data to an analytics system 108 that then performs one or more functions. It is to be understood that this is done for clarity and explanation only and is not meant to be limiting. In practice, the analytics system 108 typically receives data from multiple sources simultaneously and performs operations based on this aggregated received data.
A. Collection of operational data
As mentioned above, a typical asset 102 may take various forms and may be configured to perform a number of operations. In a non-limiting example, the asset 102 may take the form of a locomotive that is operable to transport cargo across the United States. While in transit, the sensors and/or actuators of the asset 102 may obtain data that reflects one or more operating conditions of the asset 102. The sensors and/or actuators may transmit the data to a processing unit of the asset 102.
The processing unit may be configured to receive the data from the sensors and/or actuators. In practice, the processing unit may receive sensor data from multiple sensors and/or actuator data from multiple actuators simultaneously or sequentially. As discussed above, upon receiving such data, the processing unit may also be configured to determine whether the data satisfies triggering criteria that trigger any abnormal-condition indicators, such as fault codes. In the event that the processing unit determines that one or more abnormal-condition indicators are triggered, the processing unit may be configured to perform one or more local operations, such as outputting an indication of the triggered indicator via a user interface.
The asset 102 may then transmit the operational data to the analytics system 108 via the network interface of the asset 102 and the communication network 106. In operation, the asset 102 may transmit operational data to the analytics system 108 continuously, periodically, and/or in response to triggering events (e.g., abnormal conditions). Specifically, the asset 102 may transmit operational data periodically based on a particular frequency (e.g., daily, hourly, every fifteen minutes, once per minute, once per second, etc.), or the asset 102 may be configured to transmit a continuous, real-time feed of operational data. Additionally or alternatively, the asset 102 may be configured to transmit operational data based on certain triggers, such as when sensor and/or actuator measurements satisfy triggering criteria for any abnormal-condition indicators. The asset 102 may transmit operational data in other manners as well.
In practice, the operational data of the asset 102 may include sensor data, actuator data, and/or abnormal-condition data. In some implementations, the asset 102 may be configured to provide the operational data in a single data stream, while in other implementations, the asset 102 may be configured to provide the operational data in a plurality of different data streams. For example, the asset 102 may provide a first data stream of sensor and/or actuator data and a second data stream of abnormal condition data to the analytics system 108. Other possibilities also exist.
The sensor and actuator data may take various forms. For example, at times, the sensor data (or actuator data) may include measurements obtained by each of the sensors (or actuators) of the asset 102, while at other times, the sensor data (or actuator data) may include measurements obtained by a subset of the sensors (or actuators) of the asset 102.
Specifically, the sensor and/or actuator data may include measurements obtained by the sensors and/or actuators associated with a given triggered abnormal-condition indicator. For example, if the triggered fault code is fault code 1 from FIG. 3, then the sensor data may include raw measurements obtained by sensors A and C. Additionally or alternatively, the data may include measurements obtained by one or more sensors or actuators not directly associated with the triggered fault code. Continuing with the last example, the data may additionally include measurements obtained by actuator B and/or other sensors or actuators. In some examples, the asset 102 may include particular sensor data in the operational data based on fault-code rules or instructions provided by the analytics system 108, which may have determined, for example, that there is a correlation between the measurements obtained by actuator B and the measurements that initially caused fault code 1 to be triggered. Other examples are possible as well.
Additionally, the data may include one or more sensor and/or actuator measurements from each sensor and/or actuator of interest based on a particular time of interest, which may be selected based on a number of factors. In some examples, the particular time of interest may be based on a sampling rate. In other examples, the particular time of interest may be based on the time that the abnormal condition indicator was triggered.
In particular, based on the time at which the abnormal-condition indicator is triggered, the data may include one or more respective sensor and/or actuator measurements from each sensor and/or actuator of interest (e.g., the sensor and/or actuator directly and indirectly associated with the triggered indicator). The one or more measurements may be based on a particular number of measurements or a particular duration of time around the time of the triggered abnormal condition indicator.
For example, if the triggered fault code is fault code 2 from FIG. 3, then the sensors and actuators of interest might include actuator B and sensor C. The one or more measurements may include the most recent respective measurements obtained by actuator B and sensor C before the fault code was triggered (e.g., the triggering measurements), or a respective set of measurements before, after, or around the triggering measurements. For example, a set of five measurements may include five measurements before or after the triggering measurement (e.g., excluding the triggering measurement), four measurements before or after the triggering measurement plus the triggering measurement, or two measurements before and two after the triggering measurement plus the triggering measurement, among other possibilities.
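To make the windowing concrete, the following sketch (illustrative only; the function, parameters, and readings are hypothetical) selects a set of measurements around a triggering measurement:

```python
# Hypothetical sketch of selecting a window of measurements around the
# triggering measurement, per the examples above.

def measurement_window(series, trigger_idx, before=2, after=2,
                       include_trigger=True):
    """Return `before` measurements preceding the trigger, optionally the
    triggering measurement itself, and `after` measurements following it."""
    start = max(0, trigger_idx - before)
    window = series[start:trigger_idx]
    if include_trigger:
        window.append(series[trigger_idx])
    window.extend(series[trigger_idx + 1:trigger_idx + 1 + after])
    return window

readings = [52, 54, 53, 58, 61, 56, 57, 59]   # e.g., sensor C temperatures
# Two measurements before and two after the trigger, plus the trigger itself:
print(measurement_window(readings, trigger_idx=4))  # -> [53, 58, 61, 56, 57]
```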
Similar to the sensor and actuator data, the abnormal-condition data may take various forms. In general, the abnormal-condition data may include or take the form of an indicator that is operable to uniquely identify the particular abnormal condition that occurred at the asset 102 from all other abnormal conditions that may occur at the asset 102. The abnormal-condition indicator may take the form of an alphabetic, numeric, or alphanumeric identifier, among other examples. Moreover, the abnormal-condition indicator may take the form of a string of words that is descriptive of the abnormal condition, such as "engine over-heated" or "fuel under-heated", among other examples.
The analytics system 108, and in particular the data intake system of the analytics system 108, may be configured to receive operational data from one or more assets and/or data sources. The data intake system may be configured to perform one or more operations on the received data and then relay the data to a data science system of the analysis system 108. Further, the data science system may analyze the received data and perform one or more operations based on this analysis.
B. Defining a predictive model and workflow
As one example, the analytics system 108 may be configured to define a predictive model and corresponding workflow based on received operational data of one or more assets and/or received external data related to one or more assets. The analytics system 108 may also define model-workflow pairs based on various other data.
In general, a model-workflow pair may include a set of program instructions that cause an asset to monitor certain operating conditions and carry out certain operations that help facilitate preventing the occurrence of a particular event suggested by the monitored operating conditions. Specifically, a predictive model may include one or more algorithms whose inputs are sensor and/or actuator data from one or more sensors and/or actuators of an asset and whose outputs are utilized to determine a probability that a particular event may occur at the asset within a particular period of time in the future. In turn, a workflow may include one or more triggers (e.g., model output values) and corresponding operations that the asset carries out based on those triggers.
As indicated above, the analytics system 108 may be configured to define aggregate and/or personalized predictive models and/or workflows. An "aggregate" model/workflow may refer to a model/workflow that is generic to a group of assets and defined without regard to the particular characteristics of the assets on which the model/workflow is deployed. On the other hand, a "personalized" model/workflow may refer to a model/workflow that is specifically tailored to a single asset or a subgroup of assets from the group and defined based on the particular characteristics of the single asset or subgroup of assets on which the model/workflow is deployed. These different types of models/workflows, and the operations performed by the analytics system 108 to define them, are discussed in further detail below.
1. Aggregate models and workflows
In an example implementation, the analytics system 108 may be configured to define an aggregate model-workflow pair based on aggregate data for a plurality of assets. Defining an aggregate model-workflow pair may be performed in various ways.
FIG. 5 is a flow diagram 500 depicting one possible example of a definition phase that may be used for defining model-workflow pairs. For purposes of illustration, the example definition phase is described as being carried out by the analytics system 108, but this definition phase may be carried out by other systems as well. One of ordinary skill in the art will appreciate that the flow diagram 500 is provided for sake of clarity and explanation and that numerous other combinations of operations may be utilized to define a model-workflow pair.
As shown in FIG. 5, at block 502, the analytics system 108 may begin by defining a set of data (e.g., data of interest) that forms the basis of a given predictive model. The data of interest may originate from several sources, such as assets 102 and 104 and data sources 112, and may be stored in a database of the analytics system 108.
The data of interest may include historical data from a particular set of assets in the group of assets or from all assets in the group of assets (e.g., the asset of interest). Additionally, the data of interest may include measurements from a particular set of sensors and/or actuators for each of the assets of interest or from all of the sensors and/or actuators for each of the assets of interest. Additionally, the data of interest may include data from a particular period of time in the past, such as two weeks of historical data.
The data of interest may include various types of data, which may depend on the given predictive model. In some cases, the data of interest may include at least operational data indicative of the operating conditions of assets, as discussed above in the section on the collection of operational data. Additionally, the data of interest may include environmental data indicative of the environments in which assets are typically operated and/or scheduling data indicative of the dates and times during which assets are scheduled to perform certain tasks. Other types of data may be included in the data of interest as well.
In practice, the data of interest may be defined in a number of ways. In one example, the data of interest may be user-defined. In particular, a user may operate the output system 110, which receives user inputs indicating a selection of certain data of interest, and the output system 110 may provide data indicating such selections to the analytics system 108. Based on the received data, the analytics system 108 may then define the data of interest.

In another example, the data of interest may be machine-defined. In particular, the analytics system 108 may perform various operations, such as simulations, to determine the data of interest that generates the most accurate predictive model. Other examples are possible as well.
Returning to FIG. 5, at block 504, the analytics system 108 may be configured to define an aggregate predictive model related to the operation of assets based on the data of interest. In general, an aggregate predictive model may define a relationship between the operating conditions of an asset and the likelihood of an event occurring at the asset. Specifically, the aggregate predictive model may receive as inputs sensor data from sensors of an asset and/or actuator data from actuators of the asset, and output a probability that an event will occur at the asset within a certain amount of time into the future.
The events predicted by a predictive model may vary depending on the particular implementation. For example, an event may be a fault, and so the predictive model may be a fault model that predicts whether a fault will occur within a certain period of time in the future (fault models are discussed in detail below in the section on health-score models and workflows). In another example, an event may be an asset completing a task, and so the predictive model may predict the likelihood that the asset will complete the task on time. In other examples, an event may be a fluid or component replacement, and so the predictive model may predict an amount of time before a particular asset fluid or component will need to be replaced. In yet other examples, an event may be a change in asset productivity, and so the predictive model may predict the productivity of an asset during a particular period of time in the future. In one other example, an event may be the occurrence of a "leading indicator" event, which may indicate asset behavior that differs from expected asset behavior, and so the predictive model may predict the likelihood of one or more leading-indicator events occurring in the future. Other examples of predictive models are possible as well.
Regardless, the analytics system 108 may define the aggregate predictive model in various ways. In general, this operation may involve utilizing one or more modeling techniques that generate a model that returns a probability between 0 and 1, such as a random forest technique, logistic regression technique, or other regression technique, among other modeling techniques. In a specific example implementation, the analytics system 108 may define the aggregate predictive model in line with the discussion below with reference to FIG. 7. The analytics system 108 may define the aggregate model in other manners as well.
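As a purely illustrative sketch of this operation, a logistic-regression model returning a probability between 0 and 1 might be fit as follows, assuming the scikit-learn library is available; the feature columns and labels here are fabricated for illustration only.

```python
# Illustrative sketch: fitting an aggregate predictive model that outputs a
# probability between 0 and 1 via logistic regression. The data is fabricated;
# in practice it would come from the defined data of interest.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Each row: [sensor A (RPM), actuator B (V), sensor C (C)] at some time; the
# label is 1 if the event occurred within the chosen future window, else 0.
X = np.array([[120, 700, 58], [140, 820, 68], [95, 640, 50],
              [150, 900, 71], [110, 680, 55], [145, 860, 66]])
y = np.array([0, 1, 0, 1, 0, 1])

model = LogisticRegression(max_iter=1000).fit(X, y)

# Probability that the event occurs for a new set of measurements:
p_event = model.predict_proba([[138, 810, 64]])[0, 1]
print(f"probability of event: {p_event:.2f}")
```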
At block 506, the analytics system 108 may be configured to define an aggregate workflow that corresponds to the model defined at block 504. In general, a workflow may take the form of an action that is carried out based on a particular output of the predictive model. In example implementations, a workflow may include one or more operations that an asset performs based on the output of the defined predictive model. Examples of operations that may be part of a workflow include an asset acquiring data according to a particular data-acquisition scheme, transmitting data to the analytics system 108 according to a particular data-transmission scheme, executing a local diagnostic tool, and/or modifying an operating condition of the asset, among other example operations.
A particular data-acquisition scheme may indicate how an asset acquires data. Specifically, a data-acquisition scheme may indicate certain sensors and/or actuators from which the asset obtains data, such as a subset of sensors and/or actuators from among the asset's multiple sensors and actuators (e.g., sensors/actuators of interest). Further, a data-acquisition scheme may indicate an amount of data that the asset obtains from the sensors/actuators of interest and/or a sampling frequency at which the asset acquires such data. Data-acquisition schemes may include various other attributes as well. In a particular example implementation, a particular data-acquisition scheme may correspond to a predictive model for asset health and may be adjusted to acquire more data and/or particular data (e.g., from particular sensors) based on a decline in asset health. Alternatively, a particular data-acquisition scheme may correspond to a leading-indicator predictive model and may be adjusted to modify the data acquired by the asset's sensors and/or actuators based on an increased likelihood of a leading-indicator event occurring, which may indicate a potential failure of a subsystem.
A particular data-transmission scheme may indicate how an asset transmits data to the analytics system 108. In particular, a data-transmission scheme may indicate the type of data (and perhaps the format and/or structure of the data) that the asset should transmit, such as data from certain sensors or actuators, a number of data samples that the asset should transmit, a transmission frequency, and/or a priority scheme for the data that the asset should include in its data transmissions. In some cases, a particular data-acquisition scheme may include or be paired with a data-transmission scheme. In a particular example implementation, a particular data-transmission scheme may correspond to a predictive model for asset health and may be adjusted to transmit data less frequently based on the asset's health being above a threshold value. Other examples are possible as well.
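One way to picture such schemes is as small configuration objects that a workflow operation can swap out at runtime. The sketch below is hypothetical; its field names are illustrative and not drawn from the example implementations above.

```python
# Hypothetical representation of data-acquisition and data-transmission
# schemes as configuration objects that a workflow operation can swap out.
from dataclasses import dataclass

@dataclass
class DataAcquisitionScheme:
    sensors_of_interest: list[str]   # subset of on-board sensors/actuators
    sampling_rate_hz: float          # how often measurements are obtained

@dataclass
class DataTransmissionScheme:
    data_types: list[str]            # which data the asset should transmit
    samples_per_transmission: int
    transmit_interval_s: float       # how often transmissions occur

# E.g., on a decline in asset health, a workflow might switch from a normal
# scheme to one that acquires more data from particular sensors:
normal = DataAcquisitionScheme(["sensor_a"], sampling_rate_hz=0.1)
heightened = DataAcquisitionScheme(["sensor_a", "sensor_c"], sampling_rate_hz=1.0)
```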
As indicated above, the local diagnostic tool may be a collection of programs stored locally at the asset, or the like. Local diagnostic tools may generally facilitate diagnosing the cause of an error or fault at an asset. In some cases, when executed, the local diagnostic tool may pass test inputs into a subsystem of the asset or a portion thereof to obtain test results, which may facilitate diagnosing the cause of the error or failure. These local diagnostic tools are typically dormant on the asset and will not be executed unless the asset receives a particular diagnostic instruction. Other local diagnostic tools are also possible. In one example implementation, a particular local diagnostic tool may correspond to a predictive model of the health of a subsystem of an asset, and may be executed based on the subsystem health being at or below a threshold.
Finally, the workflow may involve modifying the operating conditions of the asset. For example, one or more actuators of the asset may be controlled to facilitate modifying an operating condition of the asset. Various operating conditions may be modified, such as speed, temperature, pressure, fluid level, current consumption, and power distribution, among others. In a particular example implementation, the operating condition modification workflow may correspond to a predictive model for predicting whether the asset will complete the task on time, and may cause the asset to increase its travel speed based on a predicted completion percentage below a threshold.
Regardless, the aggregate workflow may be defined in various ways. In one example, the aggregate workflow may be user-defined. In particular, a user may operate a computing device that receives user inputs indicating a selection of certain workflow operations, and the computing device may provide data indicating such selections to the analytics system 108. Based on this data, the analytics system 108 may then define the aggregate workflow.

In another example, the aggregate workflow may be machine-defined. In particular, the analytics system 108 may perform various operations (e.g., simulations) to determine a workflow that may facilitate determining the cause of the probabilities output by the predictive model and/or preventing the occurrence of the event predicted by the model. Other examples of defining the aggregate workflow are possible as well.
In defining a workflow corresponding to the predictive model, the analytics system 108 may define triggers for the workflow. In example implementations, the workflow trigger may be a value of a probability output by the predictive model or a range of values output by the predictive model. In some cases, a workflow may have multiple triggers, each of which may cause a different operation or operations to occur.
To illustrate, FIG. 6A is a conceptual illustration of an aggregate model-workflow pair 600. As shown, the aggregate model-workflow pair specification 600 includes columns for model inputs 602, model calculations 604, model output ranges 606, and corresponding workflow operations 608. In this example, the predictive model has a single input, data from sensor A, and two calculated values: calculated values I and II. The output of the predictive model affects which workflow operation is performed. If the output probability is less than or equal to 80%, workflow operation 1 is performed; otherwise, workflow operation 2 is performed. Other example model-workflow pairs are possible and contemplated herein.
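A minimal sketch of how the pair of FIG. 6A might be represented and evaluated in software follows; the operation bodies are placeholders, and the names are illustrative only.

```python
# Illustrative sketch of the aggregate model-workflow pair of FIG. 6A: the
# model's output probability selects which workflow operation to carry out.

def workflow_operation_1():
    print("executing workflow operation 1")

def workflow_operation_2():
    print("executing workflow operation 2")

# Output ranges and corresponding operations, mirroring FIG. 6A.
WORKFLOW = [
    (lambda p: p <= 0.80, workflow_operation_1),
    (lambda p: p > 0.80, workflow_operation_2),
]

def execute_workflow(model_output):
    for in_range, operation in WORKFLOW:
        if in_range(model_output):
            operation()
            return

execute_workflow(0.65)   # -> executing workflow operation 1
execute_workflow(0.92)   # -> executing workflow operation 2
```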
2. Personalized models and workflows
In another aspect, the analytics system 108 may be configured to define a personalized predictive model and/or workflow for an asset, which may involve utilizing the aggregate model-workflow pair as a baseline. The personalization may be based on certain characteristics of the asset. In this way, the analytics system 108 may provide a given asset a more accurate and robust model-workflow pair than the aggregate model-workflow pair.
In particular, returning to FIG. 5, at block 508, the analytics system 108 may be configured to decide whether to personalize the aggregate model defined at block 504 for a given asset (e.g., the asset 102). The analytics system 108 may carry out this decision in a number of ways.
In some cases, the analytics system 108 may be configured to define personalized predictive models by default. In other cases, the analytics system 108 may be configured to decide whether to define a personalized predictive model based on certain characteristics of the asset 102. For example, in some cases, only assets of certain types or categories, or those operating in certain environments, or those having certain health scores, may receive personalized predictive models. In yet other cases, a user may define whether a personalized model is defined for the asset 102. Other examples are possible as well.
In any event, if the analytics system 108 decides to define a personalized predictive model for the asset 102, the analytics system 108 may do so at block 510. Otherwise, the analytics system 108 may proceed to block 512.
At block 510, the analytics system 108 may be configured to define the personalized predictive model in a variety of ways. In an example implementation, the analytics system 108 may define the personalized predictive model based at least in part on one or more characteristics of the asset 102.
Prior to defining the personalized predictive model for the asset 102, the analytics system 108 may have determined one or more asset characteristics of interest that form the basis of the personalized model. In practice, different predictive models may have different corresponding characteristics of interest.
In general, the characteristics of interest may be characteristics that are related to the aggregate model-workflow pair. For example, the characteristics of interest may be characteristics that the analytics system 108 has determined to influence the accuracy of the aggregate model-workflow pair. Examples of such characteristics may include asset age, asset usage, asset capabilities, asset load, asset health (perhaps as indicated by the asset health indicator discussed below), asset class (e.g., brand and/or model), and the environment in which an asset is operated, among other characteristics.
The analytics system 108 may have determined the characteristics of interest in a number of ways. In one example, the analytics system 108 may have done so by performing one or more modeling simulations that facilitate identifying the characteristics of interest. In another example, the characteristics of interest may have been predefined and stored in the data storage of the analytics system 108. In yet another example, the characteristics of interest may have been defined by a user and provided to the analytics system 108 via the output system 110. Other examples are possible as well.
Regardless, after determining the characteristics of interest, the analytics system 108 may determine the characteristics of the asset 102 that correspond to the determined characteristics of interest. That is, the analytics system 108 may determine the type, value, existence or lack thereof, etc. of the asset 102's characteristics that correspond to the characteristics of interest. The analytics system 108 may perform this operation in a number of ways.
For example, the analytics system 108 may be configured to perform this operation based on data originating from the assets 102 and/or the data sources 112. In particular, the analytics system 108 may utilize operational data of the asset 102 and/or external data from the data source 112 to determine one or more characteristics of the asset 102. Other examples are possible.
Based on the determined one or more characteristics of the asset 102, the analytics system 108 may define a personalized predictive model by modifying the aggregate model. The aggregate model may be modified in a number of ways. For example, the aggregate model may be modified by changing (e.g., adding, removing, re-ordering, etc.) one or more model inputs, changing one or more sensor and/or actuator measurement ranges that correspond to asset-operating limits (e.g., changing the operating limits that correspond to "leading indicator" events), changing one or more model calculations, weighting (or changing the weights of) calculated variables or outputs, utilizing a modeling technique that differs from the one utilized to define the aggregate model, and/or utilizing a response variable that differs from the one utilized to define the aggregate model, among other examples.
To illustrate, FIG. 6B is a conceptual illustration of a personalized model-workflow pair 610. Specifically, the personalized model-workflow pair specification 610 is a modified version of the aggregate model-workflow pair specification from FIG. 6A. As shown, the personalized model-workflow pair specification 610 includes modified columns for model inputs 612 and model calculations 614, and includes the original columns for model output ranges 606 and workflow operations 608 from FIG. 6A. In this example, the personalized model has two inputs, data from sensor A and actuator B, and two calculated values: calculated values II and III. The output ranges and corresponding workflow operations are the same as those of FIG. 6A. The analytics system 108 may have defined the personalized model in this manner based on determining that the asset 102 is, for example, relatively old and in relatively poor health.
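As a toy sketch of this kind of modification, the following mirrors FIG. 6B's swap of model inputs and calculated values based on asset characteristics; all identifiers, and the particular rule used, are hypothetical.

```python
# Toy sketch mirroring FIG. 6B: personalizing the aggregate model by changing
# its inputs and calculated values based on asset characteristics. All names
# are hypothetical; real modifications could also change weights, ranges, etc.

aggregate_model = {
    "inputs": ["sensor_a"],
    "calculated_values": ["value_I", "value_II"],
}

def personalize(model, characteristics):
    personalized = {key: list(value) for key, value in model.items()}
    if characteristics.get("age_years", 0) > 10 and \
            characteristics.get("health") == "poor":
        personalized["inputs"] = ["sensor_a", "actuator_b"]
        personalized["calculated_values"] = ["value_II", "value_III"]
    return personalized

print(personalize(aggregate_model, {"age_years": 12, "health": "poor"}))
# -> {'inputs': ['sensor_a', 'actuator_b'],
#     'calculated_values': ['value_II', 'value_III']}
```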
In practice, the personalization of the aggregate model may depend on the one or more characteristics of the given asset. In particular, certain characteristics may affect the modification of the aggregate model differently than other characteristics. Further, the type, value, existence, etc. of a characteristic may affect the modification as well. For example, asset age may affect a first part of the aggregate model, while asset class may affect a second, different part of the aggregate model. Moreover, an asset age within a first range of ages may affect the first part of the aggregate model in a first manner, while an asset age within a second range of ages, different from the first range, may affect the first part of the aggregate model in a second, different manner. Other examples are possible as well.
In some implementations, the personalization of the aggregate model may depend on considerations in addition to or instead of asset characteristics. For instance, the aggregate model may be personalized based on sensor and/or actuator readings of an asset while the asset is known to be in a relatively good operating state (e.g., as defined by a mechanic or the like). More specifically, in the example of a leading-indicator predictive model, the analytics system 108 may be configured to receive an indication that the asset is in a good operating state (e.g., from a computing device operated by a mechanic) along with operational data from the asset. Based at least on the operational data, the analytics system 108 may then personalize the leading-indicator predictive model for the asset by modifying the respective operating limits that correspond to "leading indicator" events. Other examples are possible as well.
Returning to FIG. 5, at block 512, the analytics system 108 may also be configured to decide whether to personalize the workflow for the asset 102. The analytics system 108 may carry out this decision in a number of ways. In some implementations, the analytics system 108 may perform this operation in line with block 508. In other implementations, the analytics system 108 may decide whether to define a personalized workflow based on the personalized predictive model. In yet other implementations, the analytics system 108 may decide to define a personalized workflow whenever a personalized predictive model has been defined. Other examples are possible as well.

In any event, if the analytics system 108 decides to define a personalized workflow for the asset 102, the analytics system 108 may do so at block 514. Otherwise, the analytics system 108 may end the definition phase.
At block 514, the analytics system 108 may be configured to define the personalized workflow in a variety of ways. In an example implementation, the analytics system 108 may define a personalized workflow based at least in part on one or more characteristics of the asset 102.
Similar to defining the personalized predictive model, prior to defining the personalized workflow for the asset 102, the analytics system 108 may have determined one or more asset characteristics of interest that form the basis for workflow personalization, which may have been determined in line with the discussion of block 510. In general, these characteristics of interest may be characteristics that affect the effectiveness of the aggregate workflow. Such characteristics may include any of the example characteristics discussed above, among other characteristics.
Again similar to block 510, the analytics system 108 may determine the characteristics of the asset 102 that correspond to the determined characteristics of interest for workflow personalization. In example implementations, the analytics system 108 may determine the characteristics of the asset 102 in a manner similar to the characteristic determination discussed with reference to block 510, and indeed may utilize some or all of those determinations.
Regardless, based on the determined one or more characteristics of the asset 102, the analytics system 108 may personalize the workflow for the asset 102 by modifying the aggregate workflow. The aggregate workflow may be modified in a number of ways. For example, the aggregate workflow may be modified by changing (e.g., adding, removing, re-ordering, replacing, etc.) one or more workflow operations (e.g., changing from a first data-acquisition scheme to a second scheme, or from a particular data-acquisition scheme to a particular local diagnostic tool) and/or changing (e.g., increasing, decreasing, adding to, removing from, etc.) the corresponding model output values or ranges of values that trigger particular workflow operations, among other examples. In practice, the modification of the aggregate workflow may depend on the one or more characteristics of the asset 102 in a manner similar to the modification of the aggregate model.
To illustrate, FIG. 6C is a conceptual illustration of a personalized model-workflow pair 620. Specifically, the personalized model-workflow pair specification 620 is a modified version of the aggregate model-workflow pair specification from FIG. 6A. As shown, the personalized model-workflow pair specification 620 includes the original columns for model inputs 602, model calculations 604, and model output ranges 606 from FIG. 6A, but includes a modified column for workflow operations 628. In this example, the personalized model-workflow pair is similar to the aggregate model-workflow pair of FIG. 6A, except that workflow operation 3 is triggered instead of workflow operation 1 when the output of the model is less than or equal to 80%. The analytics system 108 may have defined this personalized workflow in this manner based on, for example, determining that the asset 102 operates in an environment that historically increases the occurrence of asset failures, among other reasons.
After defining the personalized workflow, the analytics system 108 may end the definition phase. At that point, the analytics system 108 may then have a personalized model-workflow pair for the asset 102.
In some example implementations, the analytics system 108 may be configured to define an individualized predictive model and/or corresponding workflow for a given asset without first defining an aggregate predictive model and/or corresponding workflow. Other examples are possible.
While the analytics system 108 is discussed above as personalizing predictive models and/or workflows, other devices and/or systems may perform the personalization as well. For example, a local analytics device of the asset 102 may personalize a predictive model and/or workflow, or may work with the analytics system 108 to perform such operations. A local analytics device performing such operations is discussed in further detail below.
3. Health score model and workflow
In a particular implementation, as mentioned above, the analytics system 108 may be configured to define predictive models and corresponding workflows associated with the health of assets. In an example implementation, one or more predictive models for monitoring the health of an asset may be utilized to output a health indicator (e.g., a "health score") for the asset, which is a single, aggregated indicator of whether a failure will occur at the given asset within a given timeframe into the future (e.g., the next two weeks). In particular, the health indicator may indicate a likelihood that no failures from a group of failures will occur at the asset within the given timeframe into the future, or the health indicator may indicate a likelihood that at least one failure from the group of failures will occur at the asset within the given timeframe into the future.
In practice, the predictive model and corresponding workflow for outputting the health indicator may be defined as an aggregate or personalized model and/or workflow according to the above discussion.
Additionally, depending on the desired granularity of the health indicator, the analytics system 108 may be configured to define different predictive models that output health indicators at different levels, and to define different corresponding workflows. For example, the analytics system 108 may define a predictive model that outputs a health indicator for an asset as a whole (i.e., an asset-level health indicator). As another example, the analytics system 108 may define respective predictive models that output respective health indicators for one or more subsystems of an asset (i.e., subsystem-level health indicators). In some cases, the outputs of the subsystem-level predictive models may be combined to generate an asset-level health indicator. Other examples are possible as well.
In general, defining a predictive model that outputs a health indicator may be performed in a variety of ways. FIG. 7 is a flow diagram 700 depicting one possible example of a modeling phase that may be used for defining a model that outputs a health indicator. For purposes of illustration, the example modeling phase is described as being carried out by the analytics system 108, but this modeling phase may be carried out by other systems as well. One of ordinary skill in the art will appreciate that the flow diagram 700 is provided for sake of clarity and explanation and that numerous other combinations of operations may be utilized to determine a health indicator.
As shown in FIG. 7, at block 702, the analytics system 108 may begin by defining a set of one or more faults (i.e., the faults of interest) that form the basis for the health indicator. In practice, the one or more faults may be those faults that, if they were to occur, could render the asset (or one of its subsystems) inoperable. Based on the defined set of faults, the analytics system 108 may take steps to define a model for predicting the likelihood of any of the faults occurring within a given timeframe into the future (e.g., the next two weeks).
In particular, at block 704, the analytics system 108 may analyze historical operational data for one or more groups of assets to identify past occurrences of a given fault from the set of faults. At block 706, the analytics system 108 may identify a respective set of operational data that is associated with each identified past occurrence of the given fault (e.g., sensor and/or actuator data from a given time range prior to the occurrence of the given fault). At block 708, the analytics system 108 may analyze the identified sets of operational data associated with the past occurrences of the given fault to define a relationship (e.g., a fault model) between (1) the values of a given set of operational metrics and (2) the likelihood of the given fault occurring within a given timeframe into the future (e.g., the next two weeks). Lastly, at block 710, the defined relationships for each fault in the defined set (e.g., the individual fault models) may then be combined into a model for predicting the overall likelihood of any of the faults occurring.
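To illustrate block 710, if each individual fault model outputs a probability that its fault will occur within the timeframe, the individual outputs might be combined as sketched below, under the simplifying assumption that faults occur independently (an assumption made here for illustration only). The complement of the combined likelihood could then serve as the health indicator discussed above.

```python
# Illustrative sketch for block 710: combining individual fault models'
# outputs into an overall likelihood that at least one fault from the set
# occurs, assuming independence between faults for simplicity.

def overall_fault_likelihood(fault_probs):
    """fault_probs: each individual fault model's output probability."""
    p_no_fault = 1.0
    for p in fault_probs:
        p_no_fault *= (1.0 - p)
    return 1.0 - p_no_fault

probs = [0.05, 0.10, 0.02]                           # three fault models
print(round(overall_fault_likelihood(probs), 3))     # -> 0.162
print(round(1 - overall_fault_likelihood(probs), 3)) # health indicator: 0.838
```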
As the analytics system 108 continues to receive updated operational data for the one or more groups of assets, the analytics system 108 may also continue to refine the predictive model for the defined set of one or more faults by repeating the operations of blocks 704-710 on the updated operational data.
The functions of the example modeling phase illustrated in FIG. 7 will now be described in further detail. Starting with block 702, as noted above, the analytics system 108 may begin by defining a set of one or more faults that form the basis for the health indicator. The analytics system 108 may perform this function in a number of ways.
In one example, the set of one or more faults may be based on one or more user inputs. Specifically, the analytics system 108 may receive, from a computing system operated by a user (e.g., the output system 110), input data indicating a user selection of the one or more faults. As such, the set of one or more faults may be user-defined.
In other examples, the set of one or more faults may be based on a determination made by the analytics system 108 (e.g., machine-defined). In particular, the analytics system 108 may be configured to define the set of one or more faults in a variety of ways.
For example, the analytics system 108 may be configured to define a set of faults based on one or more characteristics of the assets 102. That is, certain faults may correspond to certain characteristics of an asset, such as asset type, category, and so forth. For example, each type and/or category of asset may have a corresponding fault of interest.
In another example, the analytics system 108 may be configured to define the set of faults based on historical data stored in the databases of the analytics system 108 and/or external data provided by the data source 112. For instance, the analytics system 108 may utilize such data to determine which faults have historically resulted in the longest repair times and/or which faults have historically been followed by additional faults, among other examples.

In still other examples, the set of one or more faults may be defined based on a combination of user inputs and determinations made by the analytics system 108. Other examples are possible as well.
At block 704, for each of the faults from the set of faults, the analytics system 108 may analyze historical operational data (e.g., abnormal-condition data) for one or more groups of assets to identify past occurrences of the given fault. The one or more groups of assets may include a single asset (e.g., the asset 102) or multiple assets of a same or similar type, such as a fleet of assets that includes the assets 102 and 104. The analytics system 108 may analyze a particular amount of historical operational data, such as a certain amount of time's worth of data (e.g., a month's worth) or a certain number of data points (e.g., the most recent thousand data points), among other examples.
In practice, identifying past occurrences of the given fault may involve the analytics system 108 identifying the type of operational data, such as abnormal-condition data, that indicates the given fault. In general, a given fault may be associated with one or more abnormal-condition indicators, such as fault codes. That is, when the given fault occurs, one or more abnormal-condition indicators may be triggered. As such, abnormal-condition indicators may be reflective of the underlying symptoms of a given fault.
After identifying the type of operational data that indicates the given fault, the analytics system 108 may identify past occurrences of the given fault in a number of ways. For instance, the analytics system 108 may locate, from the historical operational data stored in the databases of the analytics system 108, abnormal-condition data corresponding to the abnormal-condition indicators associated with the given fault. Each located instance of abnormal-condition data would indicate an occurrence of the given fault. Based on this located abnormal-condition data, the analytics system 108 may identify the times at which the past faults occurred.
At block 706, the analytics system 108 may identify a respective set of operational data that is associated with each identified past occurrence of the given fault. In particular, the analytics system 108 may identify a set of sensor and/or actuator data from a certain time range around the time of the given occurrence of the given fault. For example, the set of data may be from a particular time range (e.g., two weeks) before, after, or around the given occurrence of the fault. In other cases, the set of data may be identified from a certain number of data points before, after, or around the given occurrence of the fault.
In an example implementation, the set of operational data may include sensor and/or actuator data from some or all of the sensors and actuators of the asset 102. For example, the set of operational data may include data from sensors and/or actuators associated with abnormal-condition indicators corresponding to a given fault.
To illustrate, FIG. 8 depicts a conceptual illustration of historical operational data that the analytics system 108 may analyze to facilitate defining a model. The graph 800 may correspond to a segment of historical data that originated from some (e.g., sensor A and actuator B) or all of the sensors and actuators of the asset 102. As shown, the graph 800 includes time on an x-axis 802, measurement values on a y-axis 804, and sensor data 806 corresponding to sensor A and actuator data 808 corresponding to actuator B, each of which includes various data points representing measurement values at particular points in time, Ti. Moreover, the graph 800 includes an indication of an occurrence of a fault 810 that occurred at a past time, Tf (e.g., "time of failure"), and an indication of an amount of time 812, ΔT, before the occurrence of the fault, from which the set of operational data is identified. As such, Tf − ΔT defines a time range 814 of the data points of interest.
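A sketch of identifying the data points that fall within such a time range, assuming simple timestamped measurements, might look as follows (the function and data are illustrative only):

```python
# Illustrative sketch of block 706 / FIG. 8: selecting the operational data
# that falls within the time range [Tf - dT, Tf] preceding a fault occurrence.
# Timestamps are plain numbers here for simplicity.

def data_points_of_interest(data, t_fault, delta_t):
    """data: list of (timestamp, measurement) tuples; returns those within
    the time range 814, i.e., between Tf - dT and Tf."""
    return [(t, m) for (t, m) in data if t_fault - delta_t <= t <= t_fault]

series = [(0, 52), (1, 54), (2, 53), (3, 58), (4, 61), (5, 64), (6, 66)]
print(data_points_of_interest(series, t_fault=6, delta_t=3))
# -> [(3, 58), (4, 61), (5, 64), (6, 66)]
```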
Returning to FIG. 7, after the analytics system 108 identifies the set of operational data for a given occurrence of the given fault (e.g., the occurrence at Tf), the analytics system 108 may determine whether there are any remaining occurrences for which a set of operational data should be identified. In the event that there are remaining occurrences, block 706 is repeated for each remaining occurrence.
Thereafter, at block 708, the analytics system 108 may analyze the identified sets of operational data associated with the past occurrences of the given fault to define a relationship (e.g., a fault model) between (1) a given set of operational metrics (e.g., a given set of sensor and/or actuator measurements) and (2) the likelihood of the given fault occurring within a given time frame in the future (e.g., the next two weeks). That is, a given fault model may take as input sensor and/or actuator measurements from one or more sensors and/or actuators and output a probability that the given fault will occur within the given time frame in the future.
In general, a fault model may define a relationship between the operating conditions of the asset 102 and the likelihood of a fault occurring. In some implementations, in addition to raw data signals from sensors and/or actuators of the asset 102, the fault model may receive a plurality of other data inputs, also referred to as features, that are derived from the sensor and/or actuator signals. Such features may include an average or range of values historically measured at the time of the fault, an average or range of value gradients (e.g., rates of change in measured values) historically measured before the fault occurred, a duration between faults (e.g., an amount of time or a number of data points between a first occurrence and a second occurrence of the fault), and/or one or more fault patterns indicative of sensor and/or actuator measurement trends around the occurrence of the fault. Those of ordinary skill in the art will appreciate that these are but a few example features that may be derived from sensor and/or actuator signals, and that many other features are possible.
In practice, the fault model may be defined in a number of ways. In example implementations, the analytics system 108 may define the fault model by utilizing one or more modeling techniques that return a probability between 0 and 1, which may take the form of any of the modeling techniques described above.
In a particular example, defining the fault model may involve the analytics system 108 generating a response variable based on the historical operational data identified at block 706. Specifically, the analytics system 108 may determine an associated response variable for each set of sensor and/or actuator measurements received at a particular point in time. Thus, the response variables may take the form of a data set associated with the fault model.
The response variable may indicate whether the given set of measurement values is within any of the time ranges determined at block 706. That is, the response variable may reflect whether a given data set came from a time of interest around the occurrence of the fault. The response variable may be a binary-valued response variable, such that if a given set of measurement values is within any of the determined time ranges, the associated response variable is assigned a value of 1, otherwise the associated response variable is assigned a value of 0.
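As a hedged sketch of how such a binary response variable might be generated, continuing the illustrative pandas layout assumed earlier:

```python
import pandas as pd

def make_response_variable(timestamps, fault_times,
                           window=pd.Timedelta(weeks=2)):
    """Assign 1 to each measurement time that falls within `window`
    before (and up to) any fault occurrence, and 0 otherwise."""
    y = pd.Series(0, index=timestamps)
    for t_f in fault_times:
        in_range = (timestamps >= t_f - window) & (timestamps <= t_f)
        y[in_range] = 1
    return y
```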
Returning to FIG. 8, a conceptual illustration of the response variable vector Yres is shown on the graph 800. As shown, the response variables associated with the sets of measurement values within time range 814 have a value of 1 (e.g., Yres at times Ti+3 through Ti+8), while the response variables associated with the sets of measurement values outside of time range 814 have a value of 0 (e.g., Yres at times Ti through Ti+2 and Ti+9 through Ti+10). Other response variables are also possible.
Continuing with the particular example of defining a fault model based on response variables, the analytics system 108 may train the fault model utilizing the historical operational data identified at block 706 and the generated response variables. Based on this training process, the analytics system 108 may then define a fault model that receives various sensor and/or actuator data as inputs and outputs a probability between 0 and 1 that a fault will occur within a period of time equivalent to that used to generate the response variables.
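One concrete way to realize such a trained fault model, offered purely as an illustration rather than as the patented technique, is an off-the-shelf classifier that outputs probabilities, e.g., scikit-learn's logistic regression:

```python
from sklearn.linear_model import LogisticRegression

def train_fault_model(X, y):
    """X: one row of sensor/actuator measurements per point in time.
    y: the 0/1 response variable built from the fault time ranges."""
    model = LogisticRegression(max_iter=1000)
    model.fit(X, y)
    return model

# The trained model then maps new measurements to a probability between
# 0 and 1 that the given fault occurs within the same future horizon
# used to generate the response variable:
#   p_fault = model.predict_proba(X_new)[:, 1]
```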
In some cases, training utilizing the historical operating data identified at block 706 and the generated response variables may result in variable importance statistics for each sensor and/or actuator. A given variable importance statistic may indicate the relative impact of a sensor or actuator on the probability that a given fault will occur within a future period of time.
Additionally or alternatively, the analytics system 108 may be configured to define a fault model based on one or more survival analysis techniques (e.g., a Cox proportional hazards technique). The analytics system 108 may utilize a survival analysis technique in a manner similar in some respects to the modeling techniques discussed above, but the analytics system 108 may determine a survival response variable that indicates the amount of time from the last failure to the next expected event. The next expected event may be the receipt of a sensor and/or actuator measurement or the occurrence of a fault, whichever occurs first. Such a response variable may include a pair of values associated with each particular point in time at which measurements are received. The response variable may then be utilized to determine the probability that a fault will occur within a given time frame in the future.
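As a sketch of this survival-analysis variant, a Cox proportional-hazards model can be fit with the third-party lifelines package; the package choice, column names, and data below are assumptions for illustration, not something the disclosure prescribes.

```python
import pandas as pd
from lifelines import CoxPHFitter  # third-party survival-analysis package

# One row per measurement time: covariates plus the pair of values the
# text describes, i.e., the time until the next expected event and a flag
# for whether that event was a fault (1) or just the next measurement (0).
df = pd.DataFrame({
    "sensor_a":       [0.4, 0.9, 1.3, 0.2, 0.7, 1.1],
    "actuator_b":     [10.0, 11.2, 14.8, 9.5, 10.4, 13.9],
    "time_to_event":  [120.0, 30.0, 5.0, 200.0, 90.0, 12.0],
    "fault_observed": [0, 1, 1, 0, 0, 1],
})

cph = CoxPHFitter()
cph.fit(df, duration_col="time_to_event", event_col="fault_observed")

# Survival function S(t) per row; 1 - S(t) approximates the probability
# that a fault occurs within a future horizon t.
survival = cph.predict_survival_function(df[["sensor_a", "actuator_b"]])
print(survival.head())
```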
In some example implementations, the fault model may be defined based in part on external data, such as weather data and "hot box" data, as well as other data. For example, based on this data, the fault model may increase or decrease the output fault probability.
In practice, the external data may be observed at points in time that do not coincide with the times at which the asset's sensors and/or actuators obtain measurements. For example, the time at which "hot box" data is collected (e.g., the time at which a locomotive travels along a section of railroad track that is equipped with hot-box sensors) may not coincide with the sensor and/or actuator measurement times. In such cases, the analytics system 108 may be configured to perform one or more operations to estimate the external data values that would have been observed at times corresponding to the sensor measurement times.
Specifically, the analytics system 108 may use the times of the external data observations and the times of the measurements to interpolate the external data observations, thereby generating external data values for the times corresponding to the measurement times. Interpolation of the external data may allow external data observations, or features derived therefrom, to be included as inputs in the fault model. Indeed, various techniques may be used to interpolate the external data against the sensor and/or actuator data, such as nearest-neighbor interpolation, linear interpolation, polynomial interpolation, and spline interpolation, among other examples.
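A minimal sketch of this alignment step, using linear interpolation (one of the techniques listed) via NumPy, with invented numbers:

```python
import numpy as np

# Times (epoch seconds, illustrative) of "hot box" observations and the
# observed values themselves.
external_times = np.array([0.0, 600.0, 1800.0])
external_values = np.array([35.2, 36.1, 40.7])

# Times at which the asset's sensors/actuators produced measurements.
measurement_times = np.array([300.0, 900.0, 1200.0])

# Linearly interpolate the external observations onto the measurement
# times so that they (or features derived from them) can serve as
# fault-model inputs.
aligned = np.interp(measurement_times, external_times, external_values)
print(aligned)  # -> [35.65, 37.25, 38.4]
```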
Returning to FIG. 7, after the analytics system 108 determines a fault model for a given fault in the set of faults defined at block 702, the analytics system 108 may determine whether there are any remaining faults for which a fault model should be determined. In the event that there remains a fault for which a fault model should be determined, the analytics system 108 may repeat the loop of blocks 704 through 708. In some implementations, the analytics system 108 may determine a single fault model that encompasses all of the faults defined at block 702. In other implementations, the analytics system 108 may determine a fault model for each subsystem of the asset 102, which may then be utilized to determine an asset-level fault model. Other examples are possible.
Finally, at block 710, the defined relationships (e.g., individual fault models) for each fault in the defined set may then be combined into a model (e.g., a health indicator model) for predicting an overall likelihood of a fault occurring at a given time horizon in the future (e.g., the next two weeks). That is, the model may receive sensor and/or actuator measurements from one or more sensors and/or actuators as inputs and output a single probability that at least one fault of a set of faults will occur within a given time frame in the future.
The analytics system 108 may define the health indicator model in a variety of ways, which may depend on the desired granularity of the health indicator. That is, where there are multiple fault models, the outputs of the fault models may be utilized in a variety of ways to obtain the output of the health indicator model. For example, the analytics system 108 may determine the maximum, median, or average of the values output by the multiple fault models and use that determined value as the output of the health indicator model.
In other examples, determining the health indicator model may involve the analytics system 108 assigning weights to the individual probabilities output by the individual fault models. For example, each fault from the set of faults may be considered equally undesirable, and so each probability may be weighted equally in determining the health indicator model. In other cases, some faults may be considered more undesirable than others (e.g., more catastrophic, or requiring longer repair times, etc.), and so those corresponding probabilities may be weighted more heavily than the others.
In still other examples, determining the health indicator model may involve the analytics system 108 utilizing one or more modeling techniques, such as a regression technique. In such a case, the analytics system 108 may utilize an aggregate response variable that combines the response variables from each of the individual fault models (e.g., Yres in FIG. 8) in a logical disjunction (logical-or) form. For example, the aggregate response variable associated with any set of measurement values that occurs within any of the time ranges determined at block 706 (e.g., time range 814 of FIG. 8) may have a value of 1, while the aggregate response variable associated with a set of measurement values that occurs outside all of the time ranges may have a value of zero. Other ways of defining the health indicator model are also possible.
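By way of a hedged illustration of the combination strategies above, with the weights, probabilities, and function names invented for the example:

```python
import numpy as np

def overall_fault_probability(fault_probs, weights=None):
    """Combine individual fault-model outputs into one probability.
    With no weights, take the maximum (the most pessimistic fault model
    governs); with weights, take a weighted average so that more
    undesirable faults count more heavily."""
    p = np.asarray(fault_probs, dtype=float)
    if weights is None:
        return float(p.max())
    w = np.asarray(weights, dtype=float)
    return float(np.dot(p, w) / w.sum())

def aggregate_response(individual_responses):
    """Logical-or across the individual fault models' response variables
    (rows: faults; columns: points in time)."""
    return np.any(np.asarray(individual_responses, dtype=bool),
                  axis=0).astype(int)

# e.g., overall_fault_probability([0.10, 0.35, 0.05], weights=[1, 3, 1])
# -> 0.24
```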
In some embodiments, block 710 may not be necessary. For example, as discussed above, the analytics system 108 may determine a single fault model, in which case the health indicator model may be that single fault model.
Indeed, the analytics system 108 may be configured to update the individual fault models and/or the overall health indicator model. The analytics system 108 may update a model daily, weekly, monthly, and so on, perhaps based on new portions of historical operational data from the asset 102 or from other assets (e.g., other assets in the same fleet as the asset 102). Other examples are possible.
C. Deployment of models and workflows
After the analytics system 108 defines the model-workflow pair, the analytics system 108 may deploy the defined model-workflow pair to one or more assets. Specifically, the analytics system 108 may transmit the defined predictive model and/or the corresponding workflow to at least one asset (e.g., the asset 102). The analytics system 108 may transmit the model-workflow pair periodically or based on a triggering event (e.g., any modification or update to a given model-workflow pair).
In some cases, the analytics system 108 may transmit only one of the personalized model or the personalized workflow. For example, where the analytics system 108 defines only personalized models or workflows, the analytics system 108 may transmit an aggregated version of the workflows or models along with the personalized models or workflows, or if the asset 102 already stores the aggregated version in a data store, the analytics system 108 may not need to transmit the aggregated version. In summary, the analytics system 108 may transmit (1) personalized models and/or personalized workflows, (2) personalized models and aggregated workflows, (3) aggregated models and personalized workflows, or (4) aggregated models and aggregated workflows.
Indeed, the analytics system 108 may have performed some or all of the operations of blocks 702 through 710 of FIG. 7 for multiple assets to define model-workflow pairs for each asset. For example, the analytics system 108 may additionally define model-workflow pairs for the assets 104. The analytics system 108 may be configured to transmit respective model-workflow pairs to the assets 102 and 104 simultaneously or sequentially.
D. Local execution by assets
A given asset, such as asset 102, may be configured to receive a model-workflow pair, or portion thereof, and operate in accordance with the received model-workflow pair. That is, the asset 102 may store model-workflow pairs in a data storage device and input data obtained by sensors and/or actuators of the asset 102 into a predictive model, and sometimes execute a corresponding workflow based on the output of the predictive model.
In practice, various components of the asset 102 may execute the predictive model and/or corresponding workflow. For example, as discussed above, each asset may include a local analytics device configured to store and run model-workflow pairs provided by the analytics system 108. When a local analytics device receives particular sensor and/or actuator data, it may input the received data into a predictive model, and depending on the output of the model, may perform one or more operations of a corresponding workflow.
In another example, a central processing unit of the asset 102, separate from the local analytics device, may execute the predictive model and/or the corresponding workflow. In still other examples, the local analytics device and the central processing unit of the asset 102 may cooperatively execute a model-workflow pair. For example, the local analytics device may execute the predictive model and the central processing unit may execute the workflow, or vice versa.
In an example implementation, prior to locally executing the model-workflow pair (or perhaps when the model-workflow pair is first executed locally), the local analytics device may personalize the predictive model of the asset 102 and/or the corresponding workflow. This may occur regardless of whether the model-workflow pair takes the form of an aggregate model-workflow pair or an individualized model-workflow pair.
As indicated above, the analytics system 108 may define model-workflow pairs based on certain predictions, assumptions, and/or generalizations regarding groups of assets or specific assets. For example, in defining model-workflow pairs, the analytics system 108 may predict, assume, and/or generalize relevant characteristics of the asset and/or operating conditions of the asset, among other considerations.
Regardless, personalizing the predictive model and/or the corresponding workflow at the local analytics device may involve the local analytics device confirming or invalidating one or more of the predictions, assumptions, and/or generalizations that the analytics system 108 made in defining the model-workflow pair. Based on its evaluation of the predictions, assumptions, and/or generalizations, the local analytics device may thereafter modify (or further modify, in the case of an already personalized model and/or workflow) the predictive model and/or workflow. In this way, the local analytics device may help define more realistic and/or accurate model-workflow pairs, which may result in more effective asset monitoring.
In practice, the local analytics device may personalize the predictive model and/or workflow based on a number of considerations. For example, the local analytics device may do so based on operational data generated by one or more sensors and/or actuators of the asset 102. In particular, the local analytics device may personalize the model-workflow pair by: (1) obtaining operational data generated by a particular group of one or more sensors and/or actuators (e.g., indirectly via a central processing unit of the asset, or perhaps directly from certain of the sensors and/or actuators themselves), (2) evaluating one or more of the predictions, assumptions, and/or generalizations associated with the model-workflow pair based on the obtained operational data, and (3) if the evaluation indicates that any of the predictions, assumptions, and/or generalizations are incorrect, modifying the model and/or workflow accordingly.
In one example, the local analytics device obtaining operational data generated by a particular group of sensors and/or actuators (e.g., via a central processing unit of the asset) may be based on instructions included as part of, or along with, the model-workflow pair. In particular, the instructions may identify one or more tests to be performed by the local analytics device that evaluate some or all of the predictions, assumptions, and/or generalizations involved in defining the model-workflow pair. Each test may identify one or more sensors and/or actuators of interest from which the local analytics device is to obtain operational data, an amount of operational data to obtain, and/or other test considerations. Thus, the local analytics device obtaining operational data generated by a particular group of sensors and/or actuators may involve the local analytics device obtaining such operational data according to the test instructions. Other examples of the local analytics device obtaining operational data to personalize a model-workflow pair are also possible.
As described above, after obtaining the operational data, the local analytics device may utilize the data to evaluate some or all of the predictions, assumptions, and/or generalizations involved in defining the model-workflow pair. This operation may be performed in various ways. In one example, the local analytics device may compare the obtained operational data to one or more thresholds (e.g., threshold values and/or threshold ranges of values). In general, a given threshold or range may correspond to one or more of the predictions, assumptions, and/or generalizations used to define the model-workflow pair. Specifically, each sensor or actuator (or combination of sensors and/or actuators) identified in the test instructions may have a corresponding threshold or range. The local analytics device may then determine whether the operational data generated by a given sensor or actuator is above or below the corresponding threshold or range. Other examples of the local analytics device evaluating predictions, assumptions, and/or generalizations are also possible.
Thereafter, the local analytics device may modify (or not modify) the predictive model and/or the workflow based on the evaluation. That is, if the evaluation indicates that any predictions, assumptions, and/or generalizations are incorrect, the local analytics device may modify the predictive model and/or workflow accordingly. Otherwise, the local analytics device may execute the model-workflow pair without modification.
Indeed, the local analytics device may modify the predictive model and/or the workflow in a variety of ways. For example, the local analytics device may modify one or more parameters of the predictive model and/or workflow (e.g., by modifying a value or range of values) and/or trigger points of the predictive model and/or workflow, and so forth.
As a non-limiting example, the analytics system 108 may have defined a model-workflow pair for the asset 102 assuming that the engine operating temperature of the asset 102 does not exceed a particular temperature. As a result, a portion of the predictive model of the asset 102 may involve determining a first calculated value and then determining a second calculated value only when the first calculated value exceeds a threshold determined based on the assumed engine operating temperature. When personalizing the model-workflow pair, the local analytics device may obtain data generated by one or more sensors and/or actuators that measure operating conditions of the engine of the asset 102. The local analytics device may then use this data to determine whether the assumption about the engine operating temperature actually holds (e.g., whether the engine operating temperature exceeds the particular temperature). If the data indicates that the engine operating temperature exceeds the assumed particular temperature, or exceeds it by a threshold amount, the local analytics device may, for example, modify the threshold that triggers the determination of the second calculated value. Other examples of the local analytics device personalizing predictive models and/or workflows are also possible.
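A rough sketch of this engine-temperature example follows; the function name, thresholds, and proportional adjustment rule are all invented for illustration.

```python
def personalize_trigger_threshold(engine_temps, assumed_max_temp,
                                  original_threshold):
    """If observed engine operating temperatures invalidate the assumed
    maximum, scale the threshold that triggers the second calculated
    value; otherwise leave the model as received."""
    observed_max = max(engine_temps)
    if observed_max <= assumed_max_temp:
        return original_threshold  # assumption holds; no modification
    # Assumption was wrong: adjust the trigger point in proportion to
    # how far actual operation exceeds the assumed temperature.
    return original_threshold * (observed_max / assumed_max_temp)

# e.g., personalize_trigger_threshold([198, 221, 240], 210, 0.8) -> ~0.914
```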
The local analytics device may personalize the model-workflow pair based on additional or alternative considerations. For example, the local analytics device may do so based on one or more asset characteristics (such as any of the asset characteristics discussed above), which may be determined by or provided to the local analytics device. Other examples are possible.
In an example implementation, after the local analytics device personalizes the predictive model and/or workflow, the local analytics device may provide an indication to the analytics system 108 that the predictive model and/or workflow has been personalized. This indication may take various forms. For example, the indication may identify the aspect or portion of the predictive model and/or workflow that the local analytics device modified (e.g., the parameters that were modified) and/or may identify the reason for the modification (e.g., a description of the underlying operational data or other asset data that caused the local analytics device to make the modification). Other examples are possible.
In some example implementations, both the local analytics device and the analytics system 108 may be involved in personalizing a model-workflow pair, which may be performed in various ways. For example, the analytics system 108 may provide instructions to the local analytics device to test certain conditions and/or characteristics of the asset 102. Based on the instructions, the local analytics device may perform the tests at the asset 102. For example, the local analytics device may obtain operational data generated by particular asset sensors and/or actuators. Thereafter, the local analytics device may provide the test results to the analytics system 108. Based on such results, the analytics system 108 may accordingly define a predictive model and/or workflow for the asset 102 and transmit it to the local analytics device for local execution.
In other examples, the local analytics device may perform the same or similar testing operations as part of executing the workflow. That is, a particular workflow corresponding to a predictive model may cause the local analytics device to perform certain tests and transmit the results to the analytics system 108.
In an example implementation, after the local analytics device personalizes the predictive models and/or workflows (or works with the analytics system 108 to personalize the predictive models and/or workflows), the local analytics device may execute the personalized predictive models and/or workflows instead of the original models and/or workflows (e.g., the models and/or workflows that the local analytics device originally received from the analytics system 108). In some cases, while the local analytics device executes the personalized version, the local analytics device may retain the original version of the model and/or workflow in the data storage.
In general, an asset executing a predictive model and executing a workflow based on the resulting output may facilitate determining the cause of the likelihood, output by the model, that a particular event will occur, and/or may facilitate preventing the particular event from occurring in the future. In executing the workflow, the asset may determine and take actions locally to help prevent the occurrence of an event, which may be beneficial in situations where relying on the analytics system 108 to make such determinations and provide recommended actions is not effective or feasible (e.g., when there is network latency, when the network connection is poor, when the asset moves out of coverage of the communication network 106, etc.).
In fact, the asset may execute the predictive model in various ways, which may depend on the particular predictive model. FIG. 9 is a flow diagram 900 depicting one possible example of a local execution phase that may be used to locally execute a predictive model. The example local execution phase will be discussed in the context of a health indicator model that outputs a health indicator for an asset, but it should be understood that the same or similar local execution phase may be used for other types of predictive models. Additionally, for purposes of illustration, the example local execution phase is described as being carried out by a local analytics device of the asset 102, although this phase may also be carried out by other devices and/or systems. It will be appreciated by those of ordinary skill in the art that the flowchart 900 is provided for purposes of clarity and explanation, and that several other combinations of operations and functions may be utilized to locally execute the predictive model.
As shown in FIG. 9, at block 902, the local analytics device may receive data reflecting the current operating conditions of the asset 102. At block 904, the local analytics device may identify, from the received data, a set of operational data to be input into a model provided by the analytics system 108. At block 906, the local analytics device may then input the identified set of operational data into the model and run the model to obtain a health indicator for the asset 102.
As the local analytics device continues to receive updated operational data for the asset 102, the local analytics device may also continue to update the health indicator for the asset 102 by repeating the operations of blocks 902-906 based on the updated operational data. In some cases, the operations of blocks 902-906 may be repeated each time the local analytics device receives new data from sensors and/or actuators of the asset 102 or periodically (e.g., hourly, daily, weekly, monthly, etc.). In this way, when the asset is used in operation, the local analytics device may be configured to dynamically update the health indicator, possibly in real-time.
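As a hedged sketch of this loop (blocks 902 through 906 repeated), where the callables stand in for asset-specific details the text does not specify:

```python
import time

def run_local_execution_phase(receive_operational_data, select_model_inputs,
                              health_model, publish_health, period_s=3600):
    """Repeat blocks 902-906: receive data, identify the model's inputs,
    run the model, and publish the refreshed health indicator."""
    while True:
        raw = receive_operational_data()      # block 902
        inputs = select_model_inputs(raw)     # block 904
        health = health_model(inputs)         # block 906
        publish_health(health)
        time.sleep(period_s)  # or trigger on each newly received sample
```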
The functions of the example local execution phase illustrated in FIG. 9 will now be described in further detail. At block 902, the local analytics device may receive data reflecting the current operating conditions of the asset 102. This data may include sensor data from one or more sensors of the asset 102, actuator data from one or more actuators of the asset 102, and/or abnormal-condition data, among other types of data.
At block 904, the local analytics device may identify, from the received data, a set of operational data to be input into a health indicator model provided by the analytics system 108. This operation may be performed in a variety of ways.
In one example, the local analytics device may identify a set of operational data inputs (e.g., data from a particular sensor and/or actuator of interest) for the model based on characteristics of the asset 102, such as the asset type or asset class for which a health indicator is being determined. In some cases, the identified set of operational data inputs may be sensor data from some or all of the sensors of the asset 102 and/or actuator data from some or all of the actuators of the asset 102.
In another example, the local analytics device may identify the set of operational data inputs based on a predictive model provided by the analytics system 108. That is, the analytics system 108 may provide some indication to the asset 102 of the particular inputs for the model (e.g., in a predictive model or in a separate data transmission). Other examples of identifying sets of operational data inputs are possible.
At block 906, the local analytics device may then run the health indicator model. In particular, the local analytics device may input the identified set of operational data into the model, which in turn determines and outputs an overall likelihood of at least one fault occurring within a given time frame in the future (e.g., the next two weeks).
In some implementations, such operation may involve the local analytics device inputting specific operational data (e.g., sensor and/or actuator data) into one or more individual fault models of the health indicator model, each of which may output an individual probability. The local analytics device may then use these individual probabilities, perhaps weighting some more than others according to the health indicator model, to determine an overall likelihood of failure within a given time frame in the future.
After determining the overall likelihood of failure, the local analytics device may convert the probability of failure into a health indicator, which may take the form of a single aggregated parameter reflecting the likelihood that a failure will not occur at the asset 102 within the future time frame (e.g., two weeks). In an example implementation, converting the failure probability into the health indicator may involve the local analytics device determining the complement of the failure probability. In particular, the overall failure probability may take the form of a value from 0 to 1; the health indicator may be determined by subtracting this value from 1. Other examples of converting the failure probability into a health indicator are possible.
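Numerically, this conversion is just the complement of the overall failure probability; a one-line sketch:

```python
def health_indicator(overall_failure_probability):
    """Complement of the probability (in [0, 1]) that at least one fault
    occurs within the future time frame."""
    return 1.0 - overall_failure_probability

# e.g., an 18% chance of some fault in the next two weeks gives
# health_indicator(0.18) -> 0.82
```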
After the asset locally executes the predictive model, the asset may then execute the corresponding workflow based on the resulting output of the executed predictive model. In general, an asset execution workflow may involve a local analytics device causing operations to be performed at an asset (e.g., by sending instructions to one or more of the asset's on-board systems) and/or a local analytics device causing a computing system, such as analytics system 108 and/or output system 110, to perform operations remote from an asset. As mentioned above, workflows can take various forms, and thus workflows can be executed in various ways.
For example, the asset 102 may be caused to internally perform one or more operations that modify some behavior of the asset 102, such as modifying a data-collection and/or data-transmission scheme, executing a local diagnostic tool, modifying an operating condition of the asset 102 (e.g., modifying speed, acceleration, fan speed, propeller angle, air intake, etc., or performing other mechanical operations via one or more actuators of the asset 102), or outputting, at a user interface on the asset 102 or at an external computing system, an indication of a relatively low health indicator or a recommendation of a preventative action that should be performed at the asset 102.
In another example, the asset 102 may transmit instructions to a system (e.g., output system 110) on the communication network 106 that cause the system to perform an operation, such as generating a work order or ordering a particular part for use in repairing the asset 102. In yet another example, the asset 102 may be in communication with a remote system (e.g., the analytics system 108) that then facilitates causing operations to occur remotely from the asset 102. Other instances of the asset 102 executing the workflow locally are also possible.
E. Model/workflow modification phase
In another aspect, the analytics system 108 may carry out a modification phase during which the analytics system 108 modifies a deployed model and/or workflow based on new asset data. This phase may be performed for both aggregate and personalized models and workflows.
In particular, when a given asset (e.g., asset 102) operates according to a model-workflow pair, the asset 102 may provide operational data to the analytics system 108, and/or the data source 112 may provide external data related to the asset 102 to the analytics system 108. Based at least on this data, the analytics system 108 may modify the model and/or workflow of the asset 102 and/or the models and/or workflows of other assets (e.g., the asset 104). In this way, when modifying the models and/or workflows of other assets, the analytics system 108 may share information derived from the behavior of the asset 102.
Indeed, the analytics system 108 may modify a model-workflow pair in a number of ways. FIG. 10 is a flow diagram 1000 depicting one possible example of a modification phase that may be used to modify a model-workflow pair. For purposes of illustration, the example modification phase is described as being carried out by the analytics system 108, but this modification phase may also be carried out by other systems. It will be appreciated by those of ordinary skill in the art that the flowchart 1000 is provided for purposes of clarity and explanation, and that several other combinations of operations may be utilized to modify a model-workflow pair.
As shown in FIG. 10, at block 1002, the analytics system 108 may receive data from which the analytics system 108 identifies the occurrence of a particular event. The data may be operational data originating from the asset 102 or external data related to the asset 102 from the data source 112. The event may take the form of any of the events discussed above, such as a failure at the asset 102.
In other example implementations, the event may take the form of a new component or subsystem being added to the asset 102. Another event may take the form of a "leading indicator" event, which may involve sensors and/or actuators of the asset 102 generating data that differs (perhaps by at least a threshold amount) from the data identified at block 706 of FIG. 7 during the model-definition phase. This difference may indicate that the asset 102 is operating above or below the normal operating conditions of assets similar to the asset 102. Yet another event may take the form of an event that follows one or more leading-indicator events.
Based on the identified occurrence of the particular event and/or the underlying data (e.g., operational data and/or external data related to the asset 102), the analytics system 108 may then modify the aggregate predictive model and/or workflow and/or one or more personalized predictive models and/or workflows. In particular, at block 1004, the analytics system 108 may determine whether to modify the aggregate predictive model. The analytics system 108 may determine to modify the aggregate predictive model for a number of reasons.
For example, the analytics system 108 may modify the aggregate predictive model if the identified occurrence of the particular event is the first occurrence of that event for a plurality of assets that includes the asset 102 (e.g., the first occurrence of a particular failure at an asset from a fleet of assets, or the first addition of a particular new component to an asset from the fleet).
In another example, the analytics system 108 may make the modification if the data associated with the identified occurrence of the particular event differs from the data used to initially define the aggregate model. For example, the identified occurrence of the particular event may have occurred under operating conditions not previously associated with that event (e.g., a particular fault may have occurred in conjunction with sensor values that had not previously been measured with that fault). Other reasons for modifying the aggregate model are also possible.
If the analytics system 108 determines to modify the aggregate predictive model, the analytics system 108 may do so at block 1006. Otherwise, the analysis system 108 may proceed to block 1008.
At block 1006, the analytics system 108 may modify the aggregate model based at least in part on the data related to the asset 102 received at block 1002. In example embodiments, the aggregate model may be modified in various ways (e.g., any of the ways discussed above with reference to block 510 of FIG. 5). In other embodiments, the aggregate model may be modified in other ways as well.
At block 1008, the analytics system 108 may then determine whether to modify the aggregated workflow. The analytics system 108 may modify the aggregated workflow for a number of reasons.
For example, the analytics system 108 may modify the aggregated workflow based on whether the aggregate model was modified at block 1004 and/or whether there were other changes at the analytics system 108. In other examples, the analytics system 108 may modify the aggregated workflow if the event occurrence identified at block 1002 occurred while the asset 102 was executing the aggregated workflow. For example, if the workflow is intended to help prevent the occurrence of an event (e.g., a failure) and the workflow was executed correctly but the event still occurred, the analytics system 108 may modify the aggregated workflow. Other reasons for modifying the aggregated workflow are also possible.
If the analytics system 108 determines to modify the aggregated workflow, the analytics system 108 may do so at block 1010. Otherwise, the analysis system 108 may proceed to block 1012.
At block 1010, the analytics system 108 may modify the aggregated workflow based at least in part on the data related to the asset 102 received at block 1002. In example embodiments, the aggregated workflow may be modified in various ways (e.g., any of the ways discussed above with reference to block 514 of FIG. 5). In other embodiments, the aggregated workflow may be modified in other ways as well.
At blocks 1012 through 1018, the analytics system 108 may be configured to modify one or more personalized models and/or one or more personalized workflows (e.g., for one or each of the assets 102 and 104) based at least in part on the data related to the asset 102 received at block 1002. The analytics system 108 may do so in a manner similar to blocks 1004 through 1010.
However, the reasons for modifying a personalized model or workflow may differ from those in the aggregate case. For example, the analytics system 108 may further consider the underlying asset characteristics that were originally used to define the personalized model and/or workflow. In a particular example, the analytics system 108 may modify the personalized model and/or workflow if the identified occurrence of the particular event is the first occurrence of this particular event for an asset having the asset characteristics of the asset 102. Other reasons for modifying the personalized model and/or workflow are also possible.
For illustration, FIG. 6D is a conceptual illustration of a modified model-workflow pair 630. In particular, the modified model-workflow pair 630 is a modified version of the aggregate model-workflow pair from FIG. 6A. As shown, the modified model-workflow pair 630 includes the original column of model inputs 602 from FIG. 6A and includes modified columns for model calculations 634, model output ranges 636, and workflow operations 638. In this example, the modified predictive model has a single data input, from sensor A, and has two calculated values: calculated values I and III. If the output probability of the modified model is less than 75%, workflow operation 1 is performed. If the output probability is between 75% and 85%, workflow operation 2 is performed. And if the output probability is greater than 85%, workflow operation 3 is performed. Other examples of modified model-workflow pairs are possible and contemplated herein.
Returning to FIG. 10, at block 1020, the analytics system 108 may then transmit any model and/or workflow modifications to one or more assets. For example, the analytics system 108 may transmit the modified personalized model-workflow pair to the asset 102 (e.g., the asset whose data caused the modification) and transmit the modified aggregate model to the asset 104. In this manner, the analytics system 108 may dynamically modify models and/or workflows based on data associated with the operation of the asset 102 and distribute such modifications to multiple assets, such as the fleet to which the asset 102 belongs. Thus, other assets may benefit from data originating from the asset 102, as the local model-workflow pairs of other assets may be improved based on this data, thereby helping to create more accurate and robust model-workflow pairs.
While the modification phase described above is discussed as being performed by the analytics system 108, in example embodiments the local analytics device of the asset 102 may additionally or alternatively perform the modification phase in a manner similar to that discussed above. For example, the local analytics device may modify the model-workflow pair while the asset 102 operates, by utilizing operational data generated by one or more sensors and/or actuators. Accordingly, the local analytics device of the asset 102, the analytics system 108, or some combination thereof may modify the predictive model and/or the workflow as conditions related to the asset change. In this way, the local analytics device and/or the analytics system 108 may continuously adjust the model-workflow pair based on the most recent data available to them.
F. Dynamic execution of models/workflows
In another aspect, the asset 102 and/or the analytics system 108 may be configured to dynamically adjust the execution model-workflow pair. In particular, the asset 102 and/or the analytics system 108 may be configured to detect certain events that trigger a change in responsibility regarding whether the asset 102 and/or analytics system 108 should execute a predictive model and/or workflow.
In operation, both the asset 102 and the analytics system 108 may execute all or part of the model-workflow pair for the asset 102. For example, after the asset 102 receives the model-workflow pair from the analytics system 108, the asset 102 may store the model-workflow pair in a data store, but may then rely on the analytics system 108 to centrally execute some or all of the model-workflow pair. In particular, the asset 102 may provide at least sensor and/or actuator data to the analytics system 108, which the analytics system 108 may then use to centrally execute the predictive model of the asset 102. Based on the output of the model, the analytics system 108 may then execute the corresponding workflow, or the analytics system 108 may transmit to the asset 102 the output of the model, or instructions, that cause the asset 102 to execute the workflow locally.
In other instances, the analytics system 108 may rely on the assets 102 to locally execute some or all of the model-workflow pairs. In particular, the asset 102 may locally execute some or all of the predictive models and transmit results to the analytics system 108, which may then cause the analytics system 108 to centrally execute the corresponding workflow. Or the asset 102 may also execute the corresponding workflow locally.
In still other examples, the analytics system 108 and the assets 102 may share responsibility for executing the model-workflow pair. For example, the analytics system 108 may centrally execute portions of the model and/or workflow, while the assets 102 locally execute other portions of the model and/or workflow. The assets 102 and analytics system 108 may transmit results from their respective executed responsibilities. Other examples are possible.
At some point in time, the asset 102 and/or the analytics system 108 may determine that the execution of the model-workflow pair should be adjusted. That is, one or both may determine that execution responsibility should be modified. This operation may occur in various ways.
FIG. 11 is a flow diagram 1100 depicting one possible example of an adjustment phase that may be used to adjust the execution of a model-workflow pair. For illustration purposes, the example adjustment phase is described as being carried out by the asset 102 and/or the analytics system 108, but this adjustment phase may also be carried out by other devices and/or systems. It will be appreciated by one of ordinary skill in the art that the flowchart 1100 is provided for purposes of clarity and explanation, and that several other combinations of operations may be utilized to adjust the execution of a model-workflow pair.
At block 1102, the asset 102 and/or the analytics system 108 may detect a tuning factor (or potentially multiple tuning factors) that indicates a condition that requires tuning of the execution of the model-workflow pair. Examples of such conditions include network conditions of the communication network 106 or processing conditions of the asset 102 and/or the analytics system 108, among others. Example network conditions may include network delay, network bandwidth, signal strength of a link between the asset 102 and the communication network 106, or some other indication of network performance, among others. An example processing condition may include processing capacity (e.g., available processing power), an amount of processing usage (e.g., an amount of processing power consumed), or some other indication of processing power, among others.
In practice, detecting the adjustment factor may be performed in various ways. For example, such operation may involve determining whether network (or processing) conditions reach one or more thresholds or whether conditions change in some way. Other examples of detecting the adjustment factor are possible.
In particular, in some cases, detecting an adjustment factor may involve the asset 102 and/or the analytics system 108 detecting an indication that the signal strength of a communication link between the asset 102 and the analytics system 108 is below a threshold signal strength or decreasing at some rate of change. In this example, the adjustment factor may indicate that the asset 102 is about to be "offline".
In another case, detecting the adjustment factor may additionally or alternatively involve the asset 102 and/or the analytics system 108 detecting an indication that the network delay is above a threshold delay or increasing at some rate of change. Or the indication may be that the network bandwidth is below a threshold bandwidth or decreases at some rate of change. In these examples, the adjustment factor may indicate that the communication network 106 is lagging.
In still other cases, detecting an adjustment factor may additionally or alternatively involve the asset 102 and/or the analytics system 108 detecting an indication that processing capacity is below a particular threshold or decreases at some rate of change and/or that processing usage is above a threshold or increases at some rate of change. In such examples, the adjustment factor may indicate that the processing power of the asset 102 (and/or the analytics system 108) is low. Other examples of detecting the adjustment factor are possible.
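For illustration only, such detection might reduce to threshold checks like the following, with every threshold and name invented:

```python
def detect_adjustment_factors(signal_strength_dbm, latency_ms,
                              bandwidth_kbps, processing_headroom_pct):
    """Return the conditions that warrant adjusting where the
    model-workflow pair is executed."""
    factors = []
    if signal_strength_dbm < -95:                # asset about to go "offline"
        factors.append("weak_link")
    if latency_ms > 500 or bandwidth_kbps < 64:  # communication network lagging
        factors.append("degraded_network")
    if processing_headroom_pct < 10:             # local processing capacity low
        factors.append("low_processing_capacity")
    return factors

# e.g., detect_adjustment_factors(-98, 120, 512, 40) -> ["weak_link"]
```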
At block 1104, based on the detected adjustment factor, local execution responsibilities may be adjusted, which may occur in a variety of ways. For example, the asset 102 may have detected the adjustment factor and then determined to locally execute the model-workflow pair, or portion thereof. In some cases, the asset 102 may then transmit a notification to the analytics system 108 that the asset 102 is locally executing the predictive model and/or workflow.
In another example, the analytics system 108 may have detected the adjustment factor and then transmitted instructions to the asset 102 to cause the asset 102 to locally execute the model-workflow pair, or portions thereof. Based on the instructions, the asset 102 may then locally execute the model-workflow pair.
At block 1106, centralized execution responsibilities may be adjusted, which may occur in a variety of ways. For example, the centralized execution responsibilities may be adjusted based on an indication that the analytics system 108 detects that the asset 102 is locally executing a predictive model and/or workflow. The analysis system 108 may detect this indication in a variety of ways.
In some examples, the analytics system 108 may detect the indication by receiving a notification from the asset 102 that the asset 102 is executing the predictive model and/or workflow locally. The notification may take various forms, such as binary or text, and may identify the particular predictive model and/or workflow that the asset executes locally.
In other examples, the analytics system 108 may detect the indication based on received operational data of the asset 102. In particular, detecting the indication may involve the analytics system 108 receiving operational data for the asset 102, and then detecting one or more characteristics of the received data. Based on one or more detected characteristics of the received data, the analytics system 108 may infer that the asset 102 is locally executing a predictive model and/or workflow.
Indeed, detecting one or more characteristics of the received data may be performed in various ways. For example, the analytics system 108 may detect the type of data received. In particular, the analytics system 108 may detect a data source, such as a particular sensor or actuator that generated the sensor or actuator data. Based on the type of data received, the analytics system 108 may infer that the asset 102 is locally executing a predictive model and/or workflow. For example, based on detecting a sensor identifier of a particular sensor, the analytics system 108 may infer that the asset 102 is locally executing a predictive model and corresponding workflow, which results in the asset 102 collecting data from the particular sensor and transmitting the data to the analytics system 108.
In another case, the analytics system 108 may detect the amount of data received. The analysis system 108 may compare the amount to some threshold amount of data. Based on the amount reaching a threshold amount, the analytics system 108 may infer that the asset 102 is executing locally a predictive model and/or workflow that causes the asset 102 to collect an amount of data equal to or greater than the threshold amount. Other examples are possible.
In an example implementation, detecting one or more characteristics of the received data may involve the analysis system 108 detecting a particular change in one or more characteristics of the received data, such as a change in the type of data received, a change in the amount of data received, or a change in the frequency of the received data. In a particular example, a change in the type of data received may involve the analytics system 108 detecting a change in the sensor data source it is receiving (e.g., a change in the sensors and/or actuators that are generating the data provided to the analytics system 108).
In some cases, detecting a change in the received data may involve the analytics system 108 comparing recently received data with data received in the past (e.g., an hour, a day, a week, etc. prior to the current time). Regardless, based on detecting a change in one or more characteristics of the received data, the analytics system 108 may infer that the asset 102 is locally executing a predictive model and/or workflow that results in such a change to the data provided by the asset 102 to the analytics system 108.
Additionally, the analytics system 108 may detect an indication that the asset 102 is locally executing a predictive model and/or workflow based on detecting the adjustment factor at block 1102. For example, if the analytics system 108 detects an adjustment factor at block 1102, the analytics system 108 may transmit instructions to the asset 102 that cause the asset 102 to adjust its local execution responsibilities, and accordingly, the analytics system 108 may adjust its own centralized execution responsibilities. Other examples of detection indications are possible.
In an example implementation, the centralized execution responsibilities may be adjusted according to adjustments to the local execution responsibilities. For example, if the asset 102 is now executing the predictive model locally, the analytics system 108 may stop centrally executing the predictive model accordingly (and may or may not stop executing the corresponding workflow). Additionally, if the asset 102 executes the corresponding workflow locally, the analytics system 108 may stop executing the workflow accordingly (and may or may not stop centrally executing the predictive model). Other examples are possible.
In practice, the asset 102 and/or the analytics system 108 may continuously perform the operations of blocks 1102 through 1106. In this way, local and centralized execution responsibilities may be adjusted over time to facilitate optimized execution of the model-workflow pair.
Additionally, in some implementations, the asset 102 and/or the analytics system 108 may perform other operations based on detecting an adjustment factor. For example, based on a condition of the communication network 106 (e.g., bandwidth, latency, signal strength, or another indication of network quality), the asset 102 may locally execute a particular workflow. The analytics system 108 may provide the particular workflow based on the analytics system 108 detecting the condition of the communication network, or the particular workflow may already be stored on the asset 102, or it may be a modified version of a workflow already stored on the asset 102 (e.g., the asset 102 may modify the workflow locally). In some cases, the particular workflow may include a data-acquisition scheme that increases or decreases the sampling rate and/or a data-transmission scheme that increases or decreases the transmission rate or amount of data transmitted to the analytics system 108, among other possible workflow operations.
In a particular example, the asset 102 may determine that one or more detected conditions of the communication network have reached respective thresholds (e.g., indicating poor network quality). Based on this determination, the asset 102 may locally execute a workflow that includes transmitting data according to a data transmission scheme that reduces the amount and/or frequency of data transmitted by the asset 102 to the analytics system 108. Other examples are possible.
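A sketch of such a reduced transmission scheme, with invented thresholds and rates:

```python
def choose_transmission_scheme(latency_ms, bandwidth_kbps):
    """Pick a data-transmission scheme from detected network conditions,
    degrading the sampling and transmission rates when quality is poor."""
    if latency_ms > 500 or bandwidth_kbps < 64:
        # Poor network: sample once a minute, transmit every 10 minutes.
        return {"sample_period_s": 60, "transmit_period_s": 600}
    # Healthy network: sample once a second, transmit every 30 seconds.
    return {"sample_period_s": 1, "transmit_period_s": 30}
```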
V. Example methods
Turning now to FIG. 12, a flow diagram is depicted illustrating an example method 1200 for defining and deploying an aggregated predictive model and corresponding workflow that may be executed by the analytics system 108. For the method 1200 and other methods discussed below, the operations illustrated by the blocks in the flow diagrams may be performed in accordance with the discussion above. Additionally, one or more of the operations discussed above may be added to a given flowchart.
At block 1202, the method 1200 may involve the analytics system 108 receiving respective operational data for a plurality of assets (e.g., the assets 102 and 104). At block 1204, the method 1200 may involve the analytics system 108 defining a predictive model and a corresponding workflow (e.g., a fault model and a corresponding workflow) related to the operation of the plurality of assets based on the received operational data. At block 1206, the method 1200 may involve the analytics system 108 transmitting the predictive model and the corresponding workflow to at least one asset of the plurality of assets (e.g., the asset 102) for local execution by the at least one asset.
FIG. 13 depicts a flow diagram of an example method 1300 for defining and deploying an individualized predictive model and/or corresponding workflow that may be executed by the analytics system 108. At block 1302, the method 1300 may involve the analytics system 108 receiving operational data for a plurality of assets, wherein the plurality of assets includes at least a first asset (e.g., asset 102). At block 1304, the method 1300 may involve the analytics system 108 defining an aggregate predictive model and an aggregate corresponding workflow related to the operation of the plurality of assets based on the received operation data. At block 1306, the method 1300 may involve the analytics system 108 determining one or more characteristics of the first asset. At block 1308, the method 1300 may involve the analytics system 108 defining at least one of a personalized predictive model or a personalized corresponding workflow related to the operation of the first asset based on the one or more characteristics of the first asset and the aggregated predictive model and the aggregated corresponding workflow. At block 1310, the method 1300 may involve the analytics system 108 transmitting the defined at least one personalized predictive model or personalized corresponding workflow to the first asset for local execution by the first asset.
FIG. 14 depicts a flow diagram of an example method 1400 for dynamically modifying the execution of a model-workflow pair that may be executed by the analytics system 108. At block 1402, the method 1400 may involve the analytics system 108 transmitting to an asset (e.g., the asset 102) a predictive model and a corresponding workflow related to the operation of the asset for local execution by the asset. At block 1404, the method 1400 may involve the analytics system 108 detecting an indication that the asset is locally executing at least one of the predictive model or the corresponding workflow. At block 1406, the method 1400 may involve the analytics system 108 modifying its centralized execution of at least one of the predictive model or the corresponding workflow based on the detected indication.
Similar to the method 1400, another method for dynamically modifying the execution of a model-workflow pair may be performed by an asset (e.g., the asset 102). For example, such a method may involve the asset 102 receiving, from a central computing system (e.g., the analytics system 108), a predictive model and corresponding workflow related to the operation of the asset 102. The method may also involve the asset 102 detecting an adjustment factor indicative of one or more conditions associated with adjusting the execution of the predictive model and the corresponding workflow. The method may involve, based on the detected adjustment factor, (i) modifying the asset 102's local execution of at least one of the predictive model or the corresponding workflow, and (ii) transmitting, to the central computing system, an indication that the asset 102 is locally executing the at least one of the predictive model or the corresponding workflow, thereby facilitating the central computing system modifying its own centralized execution of the at least one of the predictive model or the corresponding workflow.
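Taken together, the method 1400 and its asset-side counterpart describe a handoff between local and centralized execution. The following sketch models that exchange as two objects passing a simple message; the message format and the choice to suspend centralized execution outright are assumptions made for illustration.

```python
# Illustrative handoff between an asset and the central computing system.
class CentralSystem:
    def __init__(self):
        self.centrally_executing = True

    def on_indication(self, msg: dict) -> None:
        # Blocks 1404-1406: on detecting that the asset is executing
        # locally, modify (here: suspend) the centralized execution.
        if msg.get("locally_executing"):
            self.centrally_executing = False
            print("central: suspending centralized execution of", msg["what"])

class Asset:
    def __init__(self, central: CentralSystem):
        self.central = central
        self.locally_executing = False

    def on_adjustment_factor(self, network_poor: bool) -> None:
        # Asset-side method: an adjustment factor (e.g., poor network
        # quality) triggers local execution plus an indication upstream.
        if network_poor and not self.locally_executing:
            self.locally_executing = True
            self.central.on_indication(
                {"locally_executing": True, "what": "model-workflow pair"})

central = CentralSystem()
asset = Asset(central)
asset.on_adjustment_factor(network_poor=True)
assert not central.centrally_executing  # execution has moved to the asset
```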
FIG. 15 depicts a flow diagram of an example method 1500 of locally executing a model-workflow pair, e.g., by a local analytics device of the asset 102. At block 1502, the method 1500 may involve the local analytics device receiving, via a network interface, a predictive model related to the operation of an asset (e.g., the asset 102) coupled to the local analytics device via an asset interface of the local analytics device, wherein the predictive model is defined by a computing system (e.g., the analytics system 108) located remotely from the local analytics device based on operational data of a plurality of assets. At block 1504, the method 1500 may involve the local analytics device receiving, via the asset interface, operational data for the asset 102 (e.g., operational data generated by one or more sensors and/or actuators and receivable either directly or indirectly via a central processing unit of the asset). At block 1506, the method 1500 may involve the local analytics device executing the predictive model based on at least a portion of the received operational data for the asset 102. At block 1508, the method 1500 may involve the local analytics device executing a workflow corresponding to the predictive model based on executing the predictive model, wherein executing the workflow includes causing the asset 102 to perform an operation via the asset interface.
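As a final illustration, the sketch below arranges blocks 1502 through 1508 into a single pass of a local analytics device's processing loop. The interface stubs, the callable model, and the workflow format are hypothetical stand-ins; the patent does not define these APIs.

```python
# Hypothetical single pass of a local analytics device; all APIs assumed.
class StubNetworkInterface:
    """Stand-in for the link to the remote analytics system."""
    def receive(self):
        model = lambda data: min(1.0, data[0] / 100.0)  # toy fault model
        workflow = {"if_probability_above": 0.8,
                    "operations": ["run_diagnostic_tool"]}
        return {"model": model, "workflow": workflow}

class StubAssetInterface:
    """Stand-in for the coupling to the asset's sensors and actuators."""
    def read_operational_data(self):
        return [92.0, 3.1]  # e.g., temperature, vibration
    def perform(self, operation):
        print("asset performing:", operation)

class LocalAnalyticsDevice:
    def __init__(self, network_interface, asset_interface):
        self.net, self.asset = network_interface, asset_interface
        self.model = self.workflow = None

    def run_once(self):
        # Block 1502: receive the centrally defined model-workflow pair.
        if self.model is None:
            pair = self.net.receive()
            self.model, self.workflow = pair["model"], pair["workflow"]
        # Block 1504: read operational data via the asset interface.
        data = self.asset.read_operational_data()
        # Block 1506: execute the predictive model on the received data.
        probability = self.model(data)
        # Block 1508: execute the corresponding workflow, causing the
        # asset to perform an operation via the asset interface.
        if probability > self.workflow["if_probability_above"]:
            for op in self.workflow["operations"]:
                self.asset.perform(op)

LocalAnalyticsDevice(StubNetworkInterface(), StubAssetInterface()).run_once()
```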
VI. Conclusion
Example embodiments of the disclosed innovations have been described above. However, it will be understood by those skilled in the art that changes and modifications may be made to the described embodiments without departing from the true scope and spirit of the invention as defined by the claims.
Further, to the extent that examples described herein refer to operations performed or initiated by actors such as "people," "operators," "users," or other entities, this is for purposes of example and explanation only. The claims should not be construed to require such an actor to take action unless explicitly recited in the claim language.
Claims (60)
1. A computing system, comprising:
at least one processor;
a non-transitory computer-readable medium; and
program instructions stored on the non-transitory computer-readable medium that are executable by the at least one processor to cause the computing system to:
receive respective operational data for a plurality of assets;
define a predictive model and corresponding workflow related to the operation of the plurality of assets based on the received operational data; and
transmit the predictive model and the corresponding workflow to at least one asset of the plurality of assets for local execution by the at least one asset.
2. The computing system of claim 1, wherein the respective operational data includes (i) abnormal-condition data associated with a fault occurring at a given asset at a particular time, and (ii) at least one of sensor or actuator data indicative of at least one operational condition of the given asset at the particular time.
3. The computing system of claim 1, wherein the predictive model is defined to output a probability that a particular event will occur at a given asset within a future time period.
4. The computing system of claim 3, wherein the corresponding workflow comprises one or more operations performed based on the determined probability.
5. The computing system of claim 1, wherein the corresponding workflow comprises a given asset controlling one or more actuators of the given asset to facilitate modifying an operating condition of the given asset.
6. The computing system of claim 1, wherein the corresponding workflow comprises one or more diagnostic tools to be executed locally by a given asset.
7. The computing system of claim 1, wherein the corresponding workflow comprises collecting sensor data according to a data collection scheme.
8. The computing system of claim 7, wherein the data collection scheme indicates one or more sensors of a given asset from which data is collected.
9. The computing system of claim 8, wherein the data collection scheme further indicates an amount of data that the given asset is to collect from each of the one or more sensors.
10. The computing system of claim 1, wherein the corresponding workflow comprises transmitting data to the computing system according to a data transmission scheme.
11. The computing system of claim 10, wherein the data transmission scheme indicates a frequency at which a given asset transmits data to the computing system.
12. The computing system of claim 1, wherein the computing system is a first computing system, and wherein the corresponding workflow comprises a given asset transmitting instructions to a second computing system to facilitate causing the second computing system to carry out operations related to the given asset.
13. The computing system of claim 1, wherein the at least one asset of the plurality of assets comprises a first asset and a second asset, and wherein transmitting the predictive model and the corresponding workflow comprises transmitting the predictive model and the corresponding workflow to the first asset and the second asset.
14. A non-transitory computer-readable medium having instructions stored thereon that are executable to cause a computing system to:
receive respective operational data for a plurality of assets;
define a predictive model and corresponding workflow related to the operation of the plurality of assets based on the received operational data; and
transmit the predictive model and corresponding workflow to at least one asset of the plurality of assets for local execution by the at least one asset.
15. The non-transitory computer-readable medium of claim 14, wherein the predictive model is defined to output a probability that a particular event will occur at a given asset within a future time period.
16. The non-transitory computer-readable medium of claim 14, wherein the corresponding workflow comprises a given asset controlling one or more actuators of the given asset to facilitate modifying an operating condition of the given asset.
17. The non-transitory computer-readable medium of claim 14, wherein the corresponding workflow comprises one or more diagnostic tools to be executed locally by a given asset.
18. The non-transitory computer-readable medium of claim 14, wherein the computing system is a first computing system, and wherein the corresponding workflow comprises a given asset transmitting instructions to a second computing system to facilitate causing the second computing system to perform operations related to the given asset.
19. A computer-implemented method, comprising:
receiving respective operational data for a plurality of assets;
defining a predictive model and corresponding workflow related to the operation of the plurality of assets based on the received operational data; and
transmitting the predictive model and corresponding workflow to at least one asset of the plurality of assets for local execution by the at least one asset.
20. The computer-implemented method of claim 19, wherein the corresponding workflow comprises collecting sensor data according to a data collection scheme, wherein the data collection scheme indicates one or more sensors of a given asset from which data is collected.
21. A computing system, comprising:
at least one processor;
a non-transitory computer-readable medium; and
program instructions stored on the non-transitory computer-readable medium that are executable by the at least one processor to cause the computing system to:
receive operational data for a plurality of assets, wherein the plurality of assets includes a first asset;
define an aggregate predictive model and an aggregate corresponding workflow related to the operation of the plurality of assets based on the received operational data;
determine one or more characteristics of the first asset;
define at least one of an individualized predictive model or an individualized corresponding workflow related to the operation of the first asset based on the one or more characteristics of the first asset and the aggregate predictive model and the aggregate corresponding workflow; and
transmit the defined at least one individualized predictive model or individualized corresponding workflow to the first asset for local execution by the first asset.
22. The computing system of claim 21, wherein the one or more characteristics of the first asset comprise at least one of an asset age or an asset health.
23. The computing system of claim 21, wherein determining the one or more characteristics of the first asset comprises determining the one or more characteristics of the first asset based on received operational data of the first asset.
24. The computing system of claim 21, wherein defining at least one of an individualized predictive model or an individualized corresponding workflow comprises defining the individualized predictive model and the individualized corresponding workflow, and wherein transmitting the at least one individualized predictive model or individualized corresponding workflow comprises transmitting the individualized predictive model and the individualized corresponding workflow.
25. The computing system of claim 21, wherein defining at least one of an individualized predictive model or an individualized corresponding workflow comprises defining the individualized corresponding workflow, and wherein transmitting the at least one individualized predictive model or individualized corresponding workflow comprises transmitting the aggregate predictive model and the individualized corresponding workflow.
26. The computing system of claim 25, wherein the aggregate corresponding workflow comprises a first operation, and wherein the individualized corresponding workflow comprises a second operation different from the first operation.
27. The computing system of claim 26, wherein the first operation comprises acquiring data according to a first acquisition scheme, and wherein the second operation comprises acquiring data according to a second acquisition scheme.
28. The computing system of claim 26, wherein the first operation comprises acquiring data according to an acquisition scheme, and wherein the second operation comprises executing one or more diagnostic tools.
29. The computing system of claim 21, wherein the plurality of assets further comprises a second asset, and wherein the program instructions further comprise instructions executable to cause the computing system to:
after transmitting the at least one individualized predictive model or individualized corresponding workflow, receive operational data of the second asset indicating an occurrence of an event at the second asset;
modify the at least one individualized predictive model or individualized corresponding workflow based on the received operational data of the second asset; and
transmit the modified at least one individualized predictive model or individualized corresponding workflow to the first asset.
30. A non-transitory computer-readable medium having instructions stored thereon that are executable to cause a computing system to:
receive operational data for a plurality of assets, wherein the plurality of assets includes a first asset;
define an aggregate predictive model and an aggregate corresponding workflow related to the operation of the plurality of assets based on the received operational data;
determine one or more characteristics of the first asset;
define at least one of an individualized predictive model or an individualized corresponding workflow related to the operation of the first asset based on the one or more characteristics of the first asset and the aggregate predictive model and the aggregate corresponding workflow; and
transmit the defined at least one individualized predictive model or individualized corresponding workflow to the first asset for local execution by the first asset.
31. The non-transitory computer-readable medium of claim 30, wherein defining at least one of an individualized predictive model or an individualized corresponding workflow comprises defining the individualized predictive model and the individualized corresponding workflow, and wherein transmitting the at least one individualized predictive model or individualized corresponding workflow comprises transmitting the individualized predictive model and the individualized corresponding workflow.
32. The non-transitory computer-readable medium of claim 30, wherein defining at least one of an individualized predictive model or an individualized corresponding workflow comprises defining the individualized corresponding workflow, and wherein transmitting the at least one individualized predictive model or individualized corresponding workflow comprises transmitting the aggregate predictive model and the individualized corresponding workflow.
33. The non-transitory computer-readable medium of claim 32, wherein the aggregate corresponding workflow comprises a first operation, and wherein the individualized corresponding workflow comprises a second operation different from the first operation.
34. The non-transitory computer-readable medium of claim 33, wherein the first operation comprises acquiring data according to a first acquisition scheme, and wherein the second operation comprises acquiring data according to a second acquisition scheme.
35. The non-transitory computer-readable medium of claim 33, wherein the first operation comprises acquiring data according to an acquisition scheme, and wherein the second operation comprises executing one or more diagnostic tools.
36. The non-transitory computer-readable medium of claim 30, wherein the plurality of assets further comprises a second asset, and wherein the instructions further comprise instructions executable to cause the computing system to:
after transmitting the at least one individualized predictive model or individualized corresponding workflow, receive operational data of the second asset indicating an occurrence of an event at the second asset;
modify the at least one individualized predictive model or individualized corresponding workflow based on the received operational data of the second asset; and
transmit the modified at least one individualized predictive model or individualized corresponding workflow to the first asset.
37. A computer-implemented method, comprising:
receiving operational data for a plurality of assets, wherein the plurality of assets includes a first asset;
defining an aggregate predictive model and an aggregate corresponding workflow related to the operation of the plurality of assets based on the received operational data;
determining one or more characteristics of the first asset;
defining at least one of an individualized predictive model or an individualized corresponding workflow related to the operation of the first asset based on the one or more characteristics of the first asset and the aggregate predictive model and the aggregate corresponding workflow; and
transmitting the defined at least one individualized predictive model or individualized corresponding workflow to the first asset for local execution by the first asset.
38. The computer-implemented method of claim 37, wherein defining at least one of an individualized predictive model or an individualized corresponding workflow comprises defining the individualized corresponding workflow, and wherein transmitting the at least one individualized predictive model or individualized corresponding workflow comprises transmitting the aggregate predictive model and the individualized corresponding workflow.
39. The computer-implemented method of claim 38, wherein the aggregate corresponding workflow comprises a first operation, and wherein the individualized corresponding workflow comprises a second operation different from the first operation.
40. The computer-implemented method of claim 39, wherein one of the first operation or the second operation comprises executing one or more diagnostic tools.
41. A computing device, comprising:
an asset interface configured to couple the computing device to an asset;
a network interface configured to facilitate communication between the computing device and a computing system located remotely from the computing device;
at least one processor;
a non-transitory computer-readable medium; and
program instructions stored on the non-transitory computer-readable medium that are executable by the at least one processor to cause the computing device to:
receive, via the network interface, a predictive model related to operation of the asset, wherein the predictive model is defined by the computing system based on operational data of a plurality of assets;
receive, via the asset interface, operational data for the asset;
execute the predictive model based on at least a portion of the received operational data of the asset; and
based on executing the predictive model, execute a workflow corresponding to the predictive model, wherein executing the workflow comprises causing the asset to perform an operation via the asset interface.
42. The computing device of claim 41, wherein the asset interface communicatively couples the computing device to an on-asset computer of the asset.
43. The computing device of claim 41, wherein the asset comprises an actuator, and wherein executing the workflow comprises causing the actuator to perform a mechanical operation.
44. The computing device of claim 41, wherein executing the workflow comprises causing the asset to execute a diagnostic tool.
45. The computing device of claim 41, wherein executing the workflow further includes causing execution of operations remote from the asset via the network interface.
46. The computing device of claim 45, wherein causing execution of the operations remote from the asset comprises instructing the computing system to perform the operations remote from the asset.
47. The computing device of claim 41, wherein the program instructions stored on the non-transitory computer-readable medium are further executable by the at least one processor to cause the computing device to:
personalize the predictive model prior to executing the predictive model.
48. The computing device of claim 47, wherein personalizing the predictive model comprises modifying one or more parameters of the predictive model based at least on received operational data of the asset.
49. The computing device of claim 47, wherein the program instructions stored on the non-transitory computer-readable medium are further executable by the at least one processor to cause the computing device to:
after personalizing the predictive model, transmit, via the network interface to the computing system, an indication that the predictive model has been personalized.
50. The computing device of claim 41, wherein the predictive model is a first predictive model, and wherein the program instructions stored on the non-transitory computer-readable medium are further executable by the at least one processor to cause the computing device to:
prior to executing the first predictive model, transmit, to the computing system via the network interface, a given subset of the received operational data of the asset, wherein the given subset of received operational data comprises operational data generated by a given group of one or more sensors.
51. The computing device of claim 50, wherein the program instructions stored on the non-transitory computer-readable medium are further executable by the at least one processor to cause the computing device to:
after transmitting the given subset of the received operational data of the asset, receive a second predictive model related to the operation of the asset, wherein the second predictive model is defined by the computing system based on the given subset of the received operational data of the asset; and
execute the second predictive model instead of the first predictive model.
52. A non-transitory computer-readable medium having instructions stored thereon that are executable to cause a computing device coupled to an asset via an asset interface of the computing device to:
receive, via a network interface of the computing device, a predictive model related to operation of the asset, the network interface of the computing device configured to facilitate communication between the computing device and a computing system located remotely from the computing device, wherein the predictive model is defined by the computing system based on operational data of a plurality of assets;
receive, via the asset interface, operational data for the asset;
execute the predictive model based on at least a portion of the received operational data of the asset; and
based on executing the predictive model, execute a workflow corresponding to the predictive model, wherein executing the workflow comprises causing the asset to perform an operation via the asset interface.
53. The non-transitory computer-readable medium of claim 52, wherein the instructions are further executable to cause the computing device to:
personalize the predictive model prior to executing the predictive model.
54. The non-transitory computer-readable medium of claim 53, wherein personalizing the predictive model comprises modifying one or more parameters of the predictive model based at least on received operational data of the asset.
55. The non-transitory computer-readable medium of claim 52, wherein the predictive model is a first predictive model, and wherein the instructions are further executable to cause the computing device to:
prior to executing the first predictive model, transmit, to the computing system via the network interface, a given subset of the received operational data of the asset, wherein the given subset of received operational data comprises operational data generated by a given group of one or more sensors.
56. The non-transitory computer-readable medium of claim 55, wherein the instructions are further executable to cause the computing device to:
after transmitting the given subset of the received operational data of the asset, receive a second predictive model related to the operation of the asset, wherein the second predictive model is defined by the computing system based on the given subset of the received operational data of the asset; and
execute the second predictive model instead of the first predictive model.
57. A computer-implemented method, the method comprising:
receiving, via a network interface of a computing device, a predictive model related to operation of an asset, the computing device being coupled to the asset via an asset interface of the computing device, wherein the predictive model is defined by a computing system located remotely from the computing device based on operational data of a plurality of assets;
receiving, by the computing device via the asset interface, operational data for the asset;
executing, by the computing device, the predictive model based on at least a portion of the received operational data of the asset; and
based on executing the predictive model, executing, by the computing device, a workflow corresponding to the predictive model, wherein executing the workflow comprises causing the asset to perform an operation via the asset interface.
58. The computer-implemented method of claim 57, the method further comprising:
personalizing, by the computing device, the predictive model prior to executing the predictive model.
59. The computer-implemented method of claim 58, wherein personalizing the predictive model comprises modifying one or more parameters of the predictive model based at least on received operational data of the asset.
60. The computer-implemented method of claim 57, wherein executing the workflow further comprises causing execution of an operation remote from the asset via the network interface.
Applications Claiming Priority (6)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US14/744,352 US10261850B2 (en) | 2014-12-01 | 2015-06-19 | Aggregate predictive model and workflow for local execution |
US14/744,352 | 2015-06-19 | ||
US14/744,369 US20160371616A1 (en) | 2014-12-01 | 2015-06-19 | Individualized Predictive Model & Workflow for an Asset |
US14/744,369 | 2015-06-19 | ||
US14/963,207 US10254751B2 (en) | 2015-06-05 | 2015-12-08 | Local analytics at an asset |
US14/963,207 | 2015-12-08 |
Publications (1)
Publication Number | Publication Date |
---|---|
HK1251701A1 (en) | 2019-02-01 |
Family
ID=60989383
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
HK18111155.8A HK1251701A1 (en) | 2015-06-19 | 2016-06-13 | Local analytics at an asset |
Country Status (7)
Country | Link |
---|---|
EP (1) | EP3311345A4 (en) |
JP (1) | JP2018519594A (en) |
KR (1) | KR20180011333A (en) |
CN (1) | CN107851233A (en) |
AU (1) | AU2016277850A1 (en) |
CA (1) | CA2989806A1 (en) |
HK (1) | HK1251701A1 (en) |
Families Citing this family (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP7105213B2 (en) * | 2019-04-09 | 2022-07-22 | ソフトバンク株式会社 | Communication terminal and program |
JP7096201B2 (en) | 2019-05-29 | 2022-07-05 | 株式会社 ミックウェア | Seat terminals, programs, and information processing methods |
CN110427992A (en) * | 2019-07-23 | 2019-11-08 | 杭州城市大数据运营有限公司 | Data matching method, device, computer equipment and storage medium |
US20210124618A1 (en) * | 2019-10-24 | 2021-04-29 | Honeywell International Inc. | Methods, apparatuses and systems for integrating and managing automated dataflow systems |
CN114821856B (en) * | 2022-04-18 | 2023-04-07 | 大连理工大学 | Intelligent auxiliary device connected in parallel to automobile traveling computer for rapid automobile maintenance |
Family Cites Families (17)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP3636883B2 (en) * | 1998-03-20 | 2005-04-06 | 富士通株式会社 | Simulation device, simulation method, and computer-readable recording medium on which simulation program is recorded |
AU2002211793A1 (en) * | 2000-10-17 | 2002-04-29 | Pdf Solutions, Incorporated | Method for optimizing the characteristics of integrated circuits components from circuit specifications |
JP2004214785A (en) * | 2002-12-27 | 2004-07-29 | Matsushita Electric Ind Co Ltd | Device management system, device management method, management device, managed device, device management program for management device, and device management program for managed device |
JP2007164724A (en) * | 2005-12-16 | 2007-06-28 | Ricoh Co Ltd | System, method and program for supporting delivery of required component |
JP2007172131A (en) * | 2005-12-20 | 2007-07-05 | Nec Fielding Ltd | Failure prediction system, failure prediction method and failure prediction program |
JP2009206850A (en) * | 2008-02-28 | 2009-09-10 | Fuji Xerox Co Ltd | Failure diagnosis device and program |
US8724636B2 (en) * | 2008-03-31 | 2014-05-13 | Qualcomm Incorporated | Methods of reliably sending control signal |
US20090327325A1 (en) * | 2008-06-30 | 2009-12-31 | Honeywell International Inc., | Meta modeling in decision support system |
JP5129725B2 (en) * | 2008-11-19 | 2013-01-30 | 株式会社日立製作所 | Device abnormality diagnosis method and system |
US8306778B2 (en) * | 2008-12-23 | 2012-11-06 | Embraer S.A. | Prognostics and health monitoring for electro-mechanical systems and components |
SG192002A1 (en) * | 2011-01-26 | 2013-08-30 | Google Inc | Dynamic predictive modeling platform |
US20140188778A1 (en) * | 2012-12-27 | 2014-07-03 | General Electric Company | Computer-Implemented System for Detecting Anomaly Conditions in a Fleet of Assets and Method of Using the Same |
US9217999B2 (en) * | 2013-01-22 | 2015-12-22 | General Electric Company | Systems and methods for analyzing data in a non-destructive testing system |
WO2014145977A1 (en) * | 2013-03-15 | 2014-09-18 | Bates Alexander B | System and methods for automated plant asset failure detection |
CN103532760B (en) * | 2013-10-18 | 2018-11-09 | 北京奇安信科技有限公司 | Analytical equipment, system and method for analyzing the commands executed on each host |
CN103516563A (en) * | 2013-10-18 | 2014-01-15 | 北京奇虎科技有限公司 | Equipment and method for monitoring abnormal or normal command |
CN104392752B (en) * | 2014-10-13 | 2016-11-30 | 中国科学院合肥物质科学研究院 | Real-time online nuclear reactor fault diagnosis and monitoring system |
- 2016
- 2016-06-13 KR KR1020187001578A patent/KR20180011333A/en not_active Withdrawn
- 2016-06-13 HK HK18111155.8A patent/HK1251701A1/en unknown
- 2016-06-13 CN CN201680043854.5A patent/CN107851233A/en active Pending
- 2016-06-13 EP EP16812206.7A patent/EP3311345A4/en not_active Withdrawn
- 2016-06-13 JP JP2017565106A patent/JP2018519594A/en active Pending
- 2016-06-13 CA CA2989806A patent/CA2989806A1/en not_active Abandoned
- 2016-06-13 AU AU2016277850A patent/AU2016277850A1/en not_active Abandoned
Also Published As
Publication number | Publication date |
---|---|
AU2016277850A1 (en) | 2018-02-15 |
EP3311345A4 (en) | 2019-03-20 |
CN107851233A (en) | 2018-03-27 |
CA2989806A1 (en) | 2016-12-22 |
JP2018519594A (en) | 2018-07-19 |
KR20180011333A (en) | 2018-01-31 |
EP3311345A1 (en) | 2018-04-25 |
Similar Documents
Publication | Title |
---|---|
US10261850B2 (en) | Aggregate predictive model and workflow for local execution |
US10878385B2 (en) | Computer system and method for distributing execution of a predictive model |
US10579750B2 (en) | Dynamic execution of predictive models |
US11036902B2 (en) | Dynamic execution of predictive models and workflows |
US10254751B2 (en) | Local analytics at an asset |
US20180247239A1 (en) | Computing System and Method for Compressing Time-Series Values |
CN108780526B (en) | Disposal of asset localization-based predictive models |
JP2019527897A (en) | Computer architecture and method for recommending asset repair |
HK1251701A1 (en) | Local analytics at an asset |
WO2018213617A1 (en) | Computing system and method for approximating predictive models and time-series values |
HK40000092A (en) | Handling of predictive models based on asset location |
HK40000092B (en) | Handling of predictive models based on asset location |
HK40007180A (en) | Computer architecture and method for recommending asset repairs |
HK1259725A1 (en) | Computer architecture and method for modifying data intake parameters based on a predictive model |