WO2025125944A1 - Delivering therapy based on machine learning model classification of health events - Google Patents
- Publication number
- WO2025125944A1 (PCT/IB2024/061512)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- data
- machine learning
- classification
- processing circuitry
- patient
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G16—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
- G16H—HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
- G16H50/00—ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
- G16H50/20—ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for computer-aided diagnosis, e.g. based on medical expert systems
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/72—Signal processing specially adapted for physiological signals or for diagnostic purposes
- A61B5/7235—Details of waveform analysis
- A61B5/7264—Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61N—ELECTROTHERAPY; MAGNETOTHERAPY; RADIATION THERAPY; ULTRASOUND THERAPY
- A61N1/00—Electrotherapy; Circuits therefor
- A61N1/18—Applying electric currents by contact electrodes
- A61N1/32—Applying electric currents by contact electrodes alternating or intermittent currents
- A61N1/38—Applying electric currents by contact electrodes alternating or intermittent currents for producing shock effects
- A61N1/39—Heart defibrillators
- A61N1/3956—Implantable devices for applying electric shocks to the heart, e.g. for cardioversion
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/72—Signal processing specially adapted for physiological signals or for diagnostic purposes
- A61B5/7235—Details of waveform analysis
- A61B5/7264—Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems
- A61B5/7267—Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems involving training the classification device
Definitions
- This disclosure generally relates to systems including medical devices and, more particularly, to monitoring of patient health using such systems.
- A variety of devices are configured to monitor physiological signals of a patient.
- Such devices include implantable or wearable medical devices, as well as a variety of wearable health or fitness tracking devices.
- The physiological signals sensed by such devices include, as examples, electrocardiogram (ECG) signals, respiration signals, perfusion signals, activity and/or posture signals, pressure signals, blood oxygen saturation signals, body composition, and blood glucose or other blood constituent signals.
- In general, using these signals, such devices facilitate monitoring and evaluating patient health over a number of months or years, outside of a clinic setting.
- Such devices are configured to detect acute health events based on the physiological signals, such as episodes of cardiac arrhythmia, myocardial infarction, stroke, or seizure.
- Example arrhythmia types include cardiac arrest (e.g., asystole), ventricular tachycardia (VT), and ventricular fibrillation (VF).
- The devices may store ECG and other physiological signal data collected during a time period including an episode as episode data.
- Such acute health events are associated with significant rates of death, particularly if not treated quickly.
- VF and other malignant tachyarrhythmias are the most commonly identified arrhythmias in sudden cardiac arrest (SCA) patients. If the arrhythmia continues for more than a few seconds, it may result in cardiogenic shock and cessation of effective blood circulation.
- The survival rate from SCA decreases by between 7 and 10 percent for every minute that the patient waits for defibrillation. Consequently, sudden cardiac death (SCD) may result in a matter of minutes.
- The disclosure describes techniques for detection of acute health events, such as VT, VF, and/or SCA, by monitoring patient parameter data, such as ECG data. More particularly, the disclosure describes techniques for applying rules, which may include one or more machine learning models, to patient parameter data to detect acute health events.
- The techniques include configuring rules and/or the application of the rules to the patient parameter data in order to improve the efficiency and effectiveness of the detection of acute health events.
- The techniques may include applying one or more machine learning (ML) models to each of a plurality of segments of patient parameter data (e.g., episode data) received from a sensor device in response to the sensor device detecting an acute health event, to determine a classification of the episode from a plurality of predetermined classifications.
- One or more of the possible classifications are acute health event(s) of interest, such as potentially lethal tachyarrhythmias that may result in SCA.
- The techniques and systems of this disclosure may use one or more classifiers to more accurately classify the acute health event as one of a plurality of classifications that are clinically relevant to the actions taken or not taken by a system on behalf of the patient and the patient's caregiving team.
- The classifications may include ventricular tachyarrhythmias of different severities, such as VF and polymorphic VT, or monomorphic VT, as well as classifications for which no action, or cancelation of action, may be appropriate, such as supraventricular tachycardia, oversensing, or other noise.
- The system may thereby avoid costly medical-system and user responses, and/or delivery of unnecessary therapy, in response to likely erroneous determinations regarding the health of the patient.
- The machine learning model is trained with a set of training instances, where one or more of the training instances comprise data that indicate relationships between patient parameter data and classifications related to the acute health event, e.g., related to potentially lethal cardiac arrhythmias. Because the machine learning model is trained with potentially thousands or millions of training instances, the machine learning model may, for example, reduce the amount of classification error in classifying ECG data as different arrhythmia classifications when compared to conventional detection systems.
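The training setup described above can be sketched minimally as follows. This is an illustrative assumption, not the disclosure's implementation: the class names, the toy features, and the nearest-centroid model stand in for the neural networks and large adjudicated datasets the disclosure contemplates.

```python
# Sketch: train a classifier from labelled episode segments.
# Classes, features, and model are illustrative assumptions only.
import math

CLASSES = ["VF/polymorphic VT", "monomorphic VT", "SVT", "noise/oversensing"]

def extract_features(segment):
    """Toy feature vector: mean absolute amplitude and mean sample-to-sample change."""
    mean_abs = sum(abs(s) for s in segment) / len(segment)
    mean_diff = sum(abs(b - a) for a, b in zip(segment, segment[1:])) / (len(segment) - 1)
    return (mean_abs, mean_diff)

def train(instances):
    """Each training instance pairs an ECG segment with an adjudicated label."""
    centroids = {}
    for label in CLASSES:
        feats = [extract_features(seg) for seg, lab in instances if lab == label]
        if feats:
            centroids[label] = tuple(sum(f[i] for f in feats) / len(feats) for i in range(2))
    return centroids

def classify(centroids, segment):
    """Assign the label whose training centroid is nearest in feature space."""
    f = extract_features(segment)
    return min(centroids, key=lambda lab: math.dist(centroids[lab], f))
```

In practice the disclosure's models would be trained offline on many labelled episodes and deployed to the computing device or monitoring system; this sketch only illustrates the instance-label training relationship.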
- The techniques and systems of this disclosure may be implemented with an implantable medical device (IMD) that can continuously (e.g., on a periodic or triggered basis without human intervention) sense the ECG and/or other patient parameter data while subcutaneously implanted in a patient over months or years, and perform numerous operations per second on patient parameter data to enable the systems herein to detect acute health events.
- Using the techniques of this disclosure with an IMD may be advantageous where a physician cannot be continuously present with the patient over weeks or months to evaluate the patient parameter data, and/or where performing the operations described herein (e.g., application of a machine learning model) on weeks or months of ECG and/or other patient parameter data could not practically be performed in the mind of a physician.
- Processing circuitry of a computing device configured to wirelessly communicate with an IMD or other medical device applies a machine learning model to patient parameter data as a second set of rules, to confirm or reject detection of an acute health event by the medical device using a first set of rules.
- Reducing classification errors for acute health events with a machine learning model implementing techniques of this disclosure may provide one or more technical and clinical advantages. For example, improved specificity and sensitivity may increase the ability of another device, user, and/or clinician to rely on the accuracy of the system’s assessment of the patient’s condition and improve resulting treatment of the patient and patient outcomes.
- Processing circuitry may operate on different segments of data, such as ECG data. Segments of data may include a segment from a period of time at the onset of arrhythmia, another segment when the episode reaches sustained detection, and multiple ongoing segments thereafter. The segments may be contiguous, separated in time, and/or overlapping. Segment-based classification of episode data according to the techniques described herein may improve the accuracy of classification/detection of health events, particularly in situations where shorter segments of continuous episode data are available to train the one or more ML models.
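Cutting a continuous strip of samples into such segments can be sketched as below. The window and step lengths are illustrative assumptions; the disclosure does not fix particular values here.

```python
# Sketch: split a continuous strip of samples into fixed-length segments.
# Depending on the step chosen, segments are overlapping (step < window),
# contiguous (step == window), or separated in time (step > window).
def segment(signal, fs_hz, window_s, step_s):
    """Return fixed-length windows of `window_s` seconds, advancing by `step_s` seconds."""
    n = int(window_s * fs_hz)      # samples per segment
    step = int(step_s * fs_hz)     # samples between segment starts
    return [signal[i:i + n] for i in range(0, len(signal) - n + 1, step)]
```

For example, a 4-second strip sampled at 250 Hz yields seven overlapping 1-second segments with a 0.5-second step, or four contiguous segments with a 1-second step.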
- Segment-based classification of episode data may improve the accuracy of classification/detection of health events where the patient condition may change during an episode, e.g., where a tachyarrhythmia may spontaneously terminate or change during an episode.
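One plausible way to roll per-segment classifications up into an episode-level classification is to keep the most severe label observed, which also accommodates episodes whose rhythm changes mid-episode. The severity ordering and the rule itself are assumptions for illustration; the disclosure's actual segment-to-episode mappings are given in its tables (FIGS. 14-17), which are not reproduced on this page.

```python
# Sketch: aggregate per-segment labels into one episode-level label by
# severity. Ordering and rule are illustrative assumptions.
SEVERITY_ORDER = ["VF/polymorphic VT", "monomorphic VT", "SVT", "noise/oversensing"]

def classify_episode(segment_labels):
    """Return the most severe per-segment label seen, or None if no segments."""
    present = [lab for lab in SEVERITY_ORDER if lab in segment_labels]
    return present[0] if present else None
```

Under this rule, an episode whose tachyarrhythmia degenerates (e.g., segments labelled SVT then monomorphic VT) is classified at the more severe label.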
- A medical device system includes an implantable medical device configured to: detect an acute health event; collect episode data associated with the acute health event; transmit the episode data; and deliver therapy to a patient; and processing circuitry configured to: receive episode data for the acute health event detected by the implantable medical device, the episode data transmitted by the implantable medical device in response to detecting the acute health event; apply a first one or more machine learning models and first classification logic to the episode data to determine a first respective classification of a plurality of predetermined classifications for the episode data; based on a probability of the first respective classification determined by the first one or more machine learning models, determine whether to: retrieve first additional data from the implantable medical device and apply a second one or more machine learning models and second classification logic to the first additional data, or retrieve second additional data from the implantable medical device and apply a third one or more machine learning models and third classification logic to the second additional data; and determine a classification of the acute health event from the plurality of predetermined classifications based on the application of the second one or more machine learning models
- A computing device includes communication circuitry configured to wirelessly communicate with a sensor device on a patient or implanted within the patient; one or more output devices; and processing circuitry configured to: determine a first classification of an acute health event based on applying a first one or more machine learning models and first classification logic to episode data; and based on a probability of the first classification, apply at least one of a second one or more machine learning models or a second classification logic to additional data to determine a final classification of the acute health event.
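The staged evaluation described in these examples can be sketched as follows: a first model runs on the initial episode data, and its output probability decides which additional data to retrieve and which follow-on model to apply. The thresholds, the 15- and 60-second request lengths (echoing FIGS. 26-27), and the model/callback interfaces are all hypothetical here.

```python
# Sketch: probability-gated staged classification. Thresholds and the
# request_data callback are illustrative assumptions.
def staged_classification(first_model, followup_models, episode_data,
                          request_data, confident=0.9, uncertain=0.5):
    """Each model maps data to a (label, probability) pair."""
    label, prob = first_model(episode_data)
    if prob >= confident:
        return label                          # high confidence: no more data needed
    if prob >= uncertain:
        extra = request_data(seconds=15)      # borderline: short additional strip
        return followup_models["short"](extra)[0]
    extra = request_data(seconds=60)          # low confidence: longer strip
    return followup_models["long"](extra)[0]
```

Gating the more expensive data retrieval on the first-stage probability reflects the document's emphasis on efficiency: additional wireless transfers and model applications happen only when the initial classification is not already confident.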
- FIG. 1 is a block diagram illustrating an example system configured to detect acute health events of a patient, and to respond to such detections, in accordance with one or more techniques of this disclosure.
- FIG. 2A is a block diagram illustrating an example configuration of a patient sensing device that operates in accordance with one or more techniques of the present disclosure.
- FIG. 2B is a block diagram illustrating an example configuration of a patient sensing device, including therapy delivery circuitry, that operates in accordance with one or more techniques of the present disclosure.
- FIG. 3 is a block diagram illustrating an example configuration of a computing device that operates in accordance with one or more techniques of the present disclosure.
- FIG. 4 is a block diagram illustrating an example configuration of a health monitoring system that operates in accordance with one or more techniques of the present disclosure.
- FIG. 5 is a flow diagram illustrating an example operation for applying rules to patient parameter data to determine whether an acute health event is detected.
- FIG. 6 is a flow diagram illustrating another example operation for applying rules to patient parameter data to determine whether an acute health event is detected.
- FIG. 7 is a flow diagram illustrating an example operation for configuring rules applied to patient parameter data to determine whether an acute health event is detected for a patient.
- FIG. 8 is a flow diagram illustrating another example operation for configuring rules applied to patient parameter data to determine whether an acute health event is detected for a patient.
- FIG. 9 is a block diagram illustrating an example of an ensemble of neural networks configured to classify ventricular tachyarrhythmias.
- FIG. 10 is a block diagram illustrating an example of a single classifier utilizing raw signals and derived features.
- FIG. 11 is a block diagram illustrating a staged approach for classifying a ventricular tachyarrhythmia episode.
- FIGS. 12A and 12B illustrate frequency decompositions of a monomorphic ventricular tachycardia episode and a supraventricular tachycardia episode, respectively.
- FIG. 13 is a block diagram illustrating an example configuration of a classifier configured to classify episode data.
- FIGS. 14-17 are tables illustrating example segment classifications and associated episode classifications.
- FIG. 18 is a flow diagram illustrating an example operation of the classifier of FIG. 13.
- FIG. 19 is a block diagram illustrating an example configuration of a classifier configured to classify episode data.
- FIG. 20 is a flow diagram illustrating an example operation of the classifier of FIG. 19.
- FIG. 21 is a conceptual diagram illustrating an example machine learning model configured to determine an extent to which data of a patient indicates an acute health event.
- FIG. 22 is a conceptual diagram illustrating an example training process for a machine learning model, in accordance with examples of the current disclosure.
- FIG. 23A is a perspective drawing illustrating an insertable cardiac monitor.
- FIG. 23B is a perspective drawing illustrating another insertable cardiac monitor.
- FIG. 24 is a conceptual flow diagram illustrating an example operation of one or more machine learning models, in accordance with examples of the current disclosure.
- FIG. 25 is a conceptual flow diagram illustrating example classification logic by a system, in accordance with examples of the current disclosure.
- FIG. 26 is a table showing one example of logic for initial and subsequent evaluation following a 60-second data request.
- FIG. 27 is a table showing one example of subsequent evaluation logic following a 15-second data request.
- FIG. 28 is a flowchart illustrating one example method of the disclosure.
- FIG. 29 is a process diagram illustrating an example implementation of detecting an acute health emergency in accordance with the techniques of this disclosure.
- FIG. 30 is a process diagram illustrating another example implementation of detecting an acute health emergency in accordance with the techniques of this disclosure.
- FIG. 31 is a process diagram illustrating another example implementation of detecting an acute health emergency in accordance with the techniques of this disclosure.
- A variety of types of implantable and external devices are configured to detect arrhythmia episodes and other acute health events based on sensed ECGs and, in some cases, other physiological signals.
- External devices that may be used to non-invasively sense and monitor ECGs and other physiological signals include wearable devices with electrodes configured to contact the skin of the patient, such as patches, watches, rings, necklaces, hearing aids, a wearable cardiac monitor or automated external defibrillator (AED), clothing, car seats, or bed linens.
- Such external devices may facilitate relatively longer-term monitoring of patient health during normal daily activities.
- Implantable medical devices also sense and monitor ECGs and other physiological signals, and detect acute health events such as episodes of arrhythmia, cardiac arrest, myocardial infarction, stroke, and seizure.
- Example IMDs include pacemakers and implantable cardioverter-defibrillators, which may be coupled to intravascular or extravascular leads, as well as pacemakers with housings configured for implantation within the heart, which may be leadless. Some IMDs do not provide therapy, such as implantable patient monitors.
- One example of such an IMD is the Reveal LINQ™ and LINQ II™ Insertable Cardiac Monitors (ICMs), available from Medtronic, Inc., which may be inserted subcutaneously.
- Such IMDs may facilitate relatively longer-term monitoring of patients during normal daily activities, and may periodically transmit collected data, e.g., episode data for detected arrhythmia episodes, to a remote patient monitoring system, such as the Medtronic CareLink™ Network.
- FIG. 1 is a block diagram illustrating an example system 2 configured to detect acute health events of a patient 4, and to respond to such detection, in accordance with one or more techniques of this disclosure.
- The terms “detect,” “detection,” and the like may refer to detection of an acute health event presently (at the time the data is collected) being experienced by patient 4, as well as detection based on the data that the condition of patient 4 is such that they have a suprathreshold likelihood of experiencing the event within a particular timeframe, e.g., prediction of the acute health event.
- System 2 may include one or more patient sensing devices, e.g., IMD 10, which may be in wireless communication with one or more patient computing devices, e.g., patient computing devices 12A and 12B (collectively, “patient computing devices 12”).
- IMD 10 includes electrodes and other sensors to sense physiological signals of patient 4, and may collect and store sensed physiological data based on the signals and detect episodes based on the data.
- IMD 10 may be implanted outside of a thoracic cavity of patient 4 (e.g., subcutaneously in the pectoral location illustrated in FIG. 1). IMD 10 may be positioned near the sternum near or just below the level of the heart of patient 4, e.g., at least partially within the cardiac silhouette. In some examples, IMD 10 takes the form of a LINQ ICM.
- The techniques of this disclosure may be implemented in systems including any one or more implantable or external medical devices, including monitors, pacemakers, defibrillators (e.g., subcutaneous or substernal), wearable external defibrillators (WAEDs), neurostimulators, or drug pumps.
- A system includes one or more patient sensing devices, which may be implanted within patient 4 or external to (e.g., worn by) patient 4.
- A system with two IMDs 10 may capture different values of a common patient parameter with different resolution/accuracy based on their respective locations.
- System 2 may include a ventricular assist device or WAED in addition to IMD 10.
- Patient computing devices 12 are configured for wireless communication with IMD 10.
- Computing devices 12 retrieve event data and other sensed physiological data from IMD 10 that was collected and stored by the IMD.
- Computing devices 12 take the form of personal computing devices of patient 4.
- Computing device 12A may take the form of a smartphone of patient 4.
- Computing device 12B may take the form of a smartwatch or other smart apparel of patient 4.
- Computing devices 12 may be any computing device configured for wireless communication with IMD 10, such as a desktop, laptop, or tablet computer.
- Computing devices 12 may communicate with IMD 10 and each other according to the Bluetooth® or Bluetooth® Low Energy (BLE) protocols, as examples.
- Only one of computing devices 12, e.g., computing device 12A, may be configured for communication with IMD 10, e.g., due to execution of software (e.g., part of a health monitoring application as described herein) enabling communication and interaction with an IMD.
- Computing device(s) 12, e.g., wearable computing device 12B in the example illustrated by FIG. 1, may include electrodes and other sensors to sense physiological signals of patient 4, and may collect and store physiological data and detect episodes based on such signals.
- Computing device 12B may be incorporated into the apparel of patient 4, such as within clothing, shoes, eyeglasses, a watch or wristband, a hat, etc.
- Computing device 12B may be a smartwatch or other accessory or peripheral for a smartphone computing device 12A.
- One or more of computing devices 12 may be configured to communicate with a variety of other devices or systems via a network 16.
- One or more of computing devices 12 may be configured to communicate with one or more computing systems, e.g., computing systems 20A and 20B (collectively, “computing systems 20”), via network 16.
- Computing systems 20A and 20B may be respectively managed by manufacturers of IMD 10 and computing devices 12 to, for example, provide cloud storage and analysis of collected data, maintenance and software services, or other networked functionality for their respective devices and users thereof.
- Computing system 20A may comprise, or may be implemented by, the Medtronic CareLink™ Network, in some examples.
- In the example illustrated by FIG. 1, computing system 20A implements a health monitoring system (HMS) 22, although in other examples, either or both of computing systems 20 may implement HMS 22.
- HMS 22 facilitates detection of acute health events of patient 4 by system 2, and the responses of system 2 to such acute health events.
- Computing device(s) 12 may transmit data, including data retrieved from IMD 10, to computing system(s) 20 via network 16.
- The data may include sensed data, e.g., values of physiological parameters measured by IMD 10 and, in some cases, one or more of computing devices 12, data regarding episodes of arrhythmia or other acute health events detected by IMD 10 and computing device(s) 12, and other physiological signals or data recorded by IMD 10 and/or computing device(s) 12.
- HMS 22 may also retrieve data regarding patient 4 from one or more sources of electronic health records (EHR) 24 via network 16.
- EHR 24 may include data regarding historical (e.g., baseline) physiological parameter values, previous health events and treatments, disease states, comorbidities, demographics, height, weight, and body mass index (BMI), as examples, of patients including patient 4.
- HMS 22 may use data from EHR 24 to configure algorithms implemented by IMD 10 and/or computing devices 12 to detect acute health events for patient 4.
- HMS 22 provides data from EHR 24 to computing device(s) 12 and/or IMD 10 for storage therein and use as part of their algorithms for detecting acute health events.
- Network 16 may include one or more computing devices, such as one or more non-edge switches, routers, hubs, gateways, security devices such as firewalls, intrusion detection, and/or intrusion prevention devices, servers, cellular base stations and nodes, wireless access points, bridges, cable modems, application accelerators, or other network devices.
- Network 16 may include one or more networks administered by service providers, and may thus form part of a large-scale public network infrastructure, e.g., the Internet.
- Network 16 may provide computing devices and systems, such as those illustrated in FIG. 1, access to the Internet, and may provide a communication framework that allows the computing devices and systems to communicate with one another.
- Network 16 may include a private network that provides a communication framework allowing the computing devices and systems illustrated in FIG. 1 to communicate with each other, but isolating some of the data flows from devices external to the private network for security purposes.
- The communications between the computing devices and systems illustrated in FIG. 1 are encrypted.
- IMD 10 may be configured to detect acute health events of patient 4, such as SCA, based on data sensed by IMD 10 and, in some cases, other data, such as data sensed by computing devices 12A and/or 12B, and data from EHR 24. To detect acute health events, IMD 10 may apply rules to the data, which may be referred to as patient parameter data. In response to detection of an acute health event, IMD 10 may wirelessly transmit a message to one or both of computing devices 12A and 12B. The message may indicate that IMD 10 detected an acute health event of the patient. The message may indicate a time that IMD 10 detected the acute health event.
- The message may include physiological data collected by IMD 10, e.g., data which led to detection of the acute health event, data from prior to detection of the acute health event, and/or real-time or more recent data collected after detection of the acute health event.
- The physiological data may include values of one or more physiological parameters and/or digitized physiological signals. Examples of acute health events include SCA, ventricular fibrillation, ventricular tachycardia, myocardial infarction, a pause in heart rhythm (asystole), pulseless electrical activity (PEA), acute respiratory distress syndrome (ARDS), stroke, seizure, or a fall.
- The detection of the acute health event by IMD 10 may include multiple phases. For example, IMD 10 may complete an initial detection of the acute health event, e.g., SCA or tachyarrhythmia, and initiate wireless communication, e.g., Bluetooth® or Bluetooth® Low Energy, with computing device(s) 12 in response to the initial detection. The initial detection may occur five to ten seconds after onset of the acute health event, for example. IMD 10 may continue monitoring to determine whether the acute health event is sustained, e.g., a sustained detection of SCA or tachyarrhythmia. In some examples, IMD 10 may use more patient parameters and/or different rules to determine whether the event is sustained or otherwise confirm detection.
- Initiating communication with computing device(s) 12 in response to an initial detection may facilitate the communication being established at the time the acute health event is confirmed as sustained.
- IMD 10 may wait to send the message, e.g., including sensed data associated with the acute health event, until it is confirmed as sustained, which may be determined about thirty seconds after onset of the event, or after a longer period of time.
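The two-phase flow above can be sketched as follows: initial detection opens the wireless link early, so that if the event is later confirmed as sustained, the alert message can be sent without waiting for connection setup. The detector and link interfaces, and the window-stream shape, are hypothetical; the five-to-ten-second and roughly thirty-second timings come from the text.

```python
# Sketch: two-phase detection with early link setup. Detector and link
# interfaces are illustrative assumptions.
def monitor(windows, initial_detector, sustained_detector, link):
    """`windows` yields (time_s, samples) pairs of recent patient parameter data."""
    connected = False
    for t, samples in windows:
        if not connected and initial_detector(samples):
            link.connect()                    # begin BLE setup at initial detection
            connected = True
        if connected and sustained_detector(samples):
            link.send({"event": "sustained", "time_s": t})
            return t
    return None
```

Because `link.connect()` runs at initial detection (about five to ten seconds after onset) rather than at sustained confirmation (about thirty seconds after onset), the connection-establishment latency overlaps the confirmation phase instead of adding to it.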
- Less urgent events may have longer confirmation phases and may be alerted with less urgency, such as being alerted as health care events rather than acute health events.
- The initiation of communication after initial detection may still benefit less urgent events.
- Conserving power may be significant in the case of non-rechargeable IMDs, to prolong their life prior to needing surgery for replacement, as well as for rechargeable IMDs or external devices, to reduce recharge frequency.
- Computing device(s) 12 may output an alarm that may be visual and/or audible, and configured to immediately attract the attention of patient 4 or any person in environment 28 with patient 4. Additionally or alternatively, computing device(s) 12 may transmit an alert or alarm message to devices and users outside the visible/audio range of computing device(s) 12, e.g., to IoT devices 30 or HMS 22. Environment 28 may be a home, office, place of business, or public venue, as examples.
- An alert or alarm message sent to HMS 22 via network 16, or other messages sent by computing device(s) 12 may include the data received from IMD 10 and, in some cases, additional data collected by computing device(s) 12 or other devices in response to the detection of the acute health event by IMD 10.
- IoT devices 30 may provide audible and/or visual alarms when configured with output devices to do so. As other examples, IoT devices 30 may cause smart lights throughout environment 28 to flash or blink and unlock doors. In some examples, IoT devices 30 that include cameras, microphones, or other sensors may activate those sensors to collect data regarding patient 4, e.g., for evaluation of the condition of patient 4.
- Computing device(s) 12 may be configured to wirelessly communicate with IoT devices 30 to cause IoT devices 30 to take the actions described herein.
- HMS 22 communicates with IoT devices 30 via network 16 to cause IoT devices 30 to take the actions described herein, e.g., in response to receiving the alert message from computing device(s) 12 as described above.
- IMD 10 is configured to communicate wirelessly with one or more of IoT devices 30, e.g., in response to detection of an acute health event when communication with computing devices 12 is unavailable.
- IoT device(s) 30 may be configured to provide some or all of the functionality ascribed to computing devices 12 herein.
- Environment 28 includes computing facilities, e.g., a local network 32, by which computing devices 12, IoT devices 30, and other devices within environment 28 may communicate via network 16, e.g., with HMS 22.
- Environment 28 may be configured with wireless technology, such as IEEE 802.11 wireless networks, IEEE 802.15 ZigBee networks, an ultra-wideband protocol, near-field communication, or the like.
- Environment 28 may include one or more wireless access points, e.g., wireless access points 34A and 34B (collectively, “wireless access points 34”) that provide support for wireless communications throughout environment 28.
- computing devices 12, IoT devices 30, and other devices within environment 28 may be configured to communicate with network 16, e.g., with HMS 22, via a cellular base station 36 and a cellular network.
- Computing device(s) 12, and in some examples IoT device(s) 30, may include input devices and interfaces to allow a user to override the alarm in the event the detection of the acute health event by IMD 10 was false.
- one or more of computing device(s) 12 and IoT device(s) 30 may implement an event assistant.
- the event assistant may provide a conversational interface for patient 4 to exchange information with the computing device or IoT device.
- the event assistant may query the user regarding the condition of patient 4 in response to receiving the alert message from IMD 10. Responses from the user may be used to confirm or override detection of the acute health event by IMD 10, or to provide additional information about the acute health event or the condition of patient 4 more generally that may improve the efficacy of the treatment of patient 4.
- information received by the event assistant may be used to provide an indication of severity or type (differential diagnosis) for the acute health event.
- the event assistant may use natural language processing and context data to interpret utterances by the user.
- In addition to receiving responses to queries posed by the assistant, the event assistant may be configured to respond to queries posed by the user. For example, patient 4 may indicate that they feel dizzy and ask the event assistant, “how am I doing?”.
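The confirm/override exchange described above can be sketched as follows. This is a minimal illustration of mapping patient utterances to a decision, assuming simple keyword matching; the keyword sets, function name, and labels are illustrative assumptions, not the claimed natural language processing implementation.

```python
# Hypothetical keyword sets; a real assistant would use natural language
# processing and context data, per the description above.
CONFIRM_WORDS = {"yes", "dizzy", "faint", "pain"}
OVERRIDE_WORDS = {"no", "fine", "okay"}

def interpret_response(utterance: str) -> str:
    """Return 'override', 'confirm', or 'unknown' for a patient utterance."""
    tokens = set(utterance.lower().replace(",", " ").replace(".", " ").split())
    if tokens & OVERRIDE_WORDS:
        return "override"   # user indicates the detection was false
    if tokens & CONFIRM_WORDS:
        return "confirm"    # user corroborates the detected event
    return "unknown"        # re-query the user or escalate unchanged
```

An "unknown" result might prompt a follow-up query rather than a decision, consistent with the conversational interface described above.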
- computing device(s) 12 and/or HMS 22 may implement one or more techniques to evaluate the sensed physiological data received from IMD 10, and in some cases additional physiological or other patient parameter data sensed or otherwise collected by the computing device(s) or IoT devices 30, to confirm or override the detection of the acute health event by IMD 10.
- computing device(s) 12 and/or computing system(s) 20 may have greater processing capacity than IMD 10, enabling more complex analysis of the data.
- the computing device(s) 12 and/or HMS 22 may apply the data to one or more machine learning models or other artificial intelligence developed algorithms, e.g., to determine whether the data is sufficiently indicative of the acute health event.
- computing device(s) 12 may output alert messages and/or transmit alert messages to HMS 22 and/or IoT devices 30 in response to confirming the acute health event.
- computing device(s) 12 may be configured to output/transmit the alert messages prior to completing the confirmation analysis, and output/transmit cancellation messages in response to the analysis overriding the detection of the acute health event by IMD 10.
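The "alert first, cancel on override" sequence above can be sketched as a simple control flow: the alert is transmitted before the slower confirmation analysis completes, and a cancellation follows only if the analysis overrides the device's detection. The callable names here are illustrative assumptions.

```python
def handle_detection(confirm, send_alert, send_cancel):
    """Alert immediately; cancel only if the confirmation analysis overrides."""
    send_alert()        # transmitted prior to completing the analysis
    if not confirm():   # e.g., apply machine learning models to the data
        send_cancel()   # analysis overrode the IMD's detection
```

This ordering trades occasional cancellations for minimal delay in notifying caregivers of a possible acute health event.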
- HMS 22 may be configured to perform a number of operations in response to receiving an alert message from computing device(s) 12 and/or IoT device(s) 30.
- HMS 22 may be configured to cancel such operations in response to receiving a cancellation message from computing device(s) 12 and/or IoT device(s) 30.
- Any of IMD 10, computing device(s) 12, IoT device(s) 30, computing device(s) 38, or HMS 22 may, individually or in any combination, perform the operations described herein for detection of acute health events, such as SCA, by applying rules, which may include one or more machine learning models, to patient parameter data.
- one of these devices, or more than one of them in cooperation, may apply a first set of rules to patient parameter data for a first determination of whether an acute health event is detected and, based on whether one or more context criteria associated with the first determination are satisfied, determine whether to apply a second set of rules to patient parameter data to determine whether the acute health event is detected.
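The two-tier rule application above can be sketched as follows: a first rule set makes an initial determination, and a context criterion decides whether a second rule set is also applied. The heart-rate threshold, borderline margin, and activity corroboration are illustrative assumptions, not programmed device values.

```python
def detect_event(params, threshold=180, margin=10):
    """First rules, then (if context criteria are met) second rules."""
    hr = params["heart_rate"]
    first = hr > threshold                       # first set of rules
    borderline = abs(hr - threshold) <= margin   # context criterion
    if borderline:
        # second set of rules: corroborate with another patient parameter
        return first and params.get("activity", 0) < 2
    return first
```

Applying the more expensive second rule set only in borderline cases is one way such a design could conserve device resources.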
- FIG. 2A is a block diagram illustrating an example configuration of IMD 10 of FIG. 1.
- IMD 10A includes processing circuitry 50A, memory 52A, sensing circuitry 54A coupled to electrodes 56A and 56B (hereinafter, “electrodes 56”) and one or more sensor(s) 58A, and communication circuitry 60A.
- Processing circuitry 50A may include fixed function circuitry and/or programmable processing circuitry.
- Processing circuitry 50A may include any one or more of a microprocessor, a controller, a graphics processing unit (GPU), a tensor processing unit (TPU), a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field-programmable gate array (FPGA), or equivalent discrete or analog logic circuitry.
- processing circuitry 50A may include multiple components, such as any combination of one or more microprocessors, one or more controllers, one or more GPUs, one or more TPUs, one or more DSPs, one or more ASICs, or one or more FPGAs, as well as other discrete or integrated logic circuitry.
- memory 52A includes computer-readable instructions that, when executed by processing circuitry 50A, cause IMD 10A and processing circuitry 50A to perform various functions attributed herein to IMD 10A and processing circuitry 50A.
- Memory 52A may include any volatile, non-volatile, magnetic, optical, or electrical media, such as a random-access memory (RAM), read-only memory (ROM), non-volatile RAM (NVRAM), electrically-erasable programmable ROM (EEPROM), flash memory, or any other digital media.
- Sensing circuitry 54A may monitor signals from electrodes 56 in order to, for example, monitor electrical activity of a heart of patient 4 and produce ECG data for patient 4.
- processing circuitry 50A may identify features of the sensed ECG, such as heart rate, heart rate variability, T-wave alternans, intra-beat intervals (e.g., QT intervals), and/or ECG morphologic features, to detect an episode of cardiac arrhythmia of patient 4. Processing circuitry 50A may store the digitized ECG and features of the ECG used to detect the arrhythmia episode in memory 52A as episode data for the detected arrhythmia episode.
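Two of the ECG features named above can be illustrated with a short derivation from R-R intervals. Choosing SDNN as the heart rate variability metric is an assumption for illustration; the document does not specify which variability measure is used.

```python
import statistics

def ecg_features(rr_intervals_ms):
    """Derive heart rate and a simple HRV measure from R-R intervals (ms)."""
    mean_rr = statistics.mean(rr_intervals_ms)
    return {
        "heart_rate_bpm": 60000.0 / mean_rr,                 # beats per minute
        "hrv_sdnn_ms": statistics.pstdev(rr_intervals_ms),   # SDNN variability
    }
```

For a steady 1000 ms R-R series this yields 60 bpm with zero variability, matching the intuition that a perfectly regular rhythm has no HRV.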
- sensing circuitry 54A measures impedance, e.g., of tissue proximate to IMD 10A, via electrodes 56.
- the measured impedance may vary based on respiration, cardiac pulse or flow, and a degree of perfusion or edema.
- Processing circuitry 50A may determine physiological data relating to respiration, cardiac pulse or flow, perfusion, and/or edema based on the measured impedance.
- IMD 10A includes one or more sensors 58A, such as one or more accelerometers, gyroscopes, microphones, optical sensors, temperature sensors, pressure sensors, and/or chemical sensors.
- sensing circuitry 54A may include one or more filters and amplifiers for filtering and amplifying signals received from one or more of electrodes 56 and/or sensors 58A.
- sensing circuitry 54A and/or processing circuitry 50A may include a rectifier, filter and/or amplifier, a sense amplifier, comparator, and/or analog-to-digital converter.
- Processing circuitry 50A may determine physiological data, e.g., values of physiological parameters of patient 4, based on signals from sensors 58A, which may be stored in memory 52A.
- Patient parameters determined from signals from sensors 58A may include oxygen saturation, glucose level, stress hormone level, heart sounds, body motion, body posture, or blood pressure.
- Memory 52A may store applications 70A executable by processing circuitry 50A, and data 80A.
- Applications 70A may include an acute health event surveillance application 72A.
- Processing circuitry 50A may execute event surveillance application 72A to detect an acute health event of patient 4 based on a combination of one or more of the types of physiological data described herein, which may be stored as sensed data 82A.
- sensed data 82A may additionally include patient parameter data sensed by other devices, e.g., computing device(s) 12 or IoT device(s) 30, and received via communication circuitry 60A.
- Event surveillance application 72A may be configured with a rules engine 74A.
- Rules engine 74A may apply rules 84A to sensed data 82A.
- Rules 84A may include one or more models, algorithms, decision trees, and/or thresholds. In some cases, rules 84A may be developed based on machine learning, e.g., may include one or more machine learning models.
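The rules engine's dispatch over rules 84A can be sketched as applying each rule, whether a threshold check, decision tree, or machine learning model wrapper, to the sensed data and flagging an event if any rule fires. The example rules and parameter names are assumptions for illustration.

```python
def apply_rules(sensed, rules):
    """Flag an acute health event if any rule fires on the sensed data."""
    return any(rule(sensed) for rule in rules)

# Hypothetical threshold rules; a model-based rule would be another callable.
example_rules = [
    lambda d: d.get("heart_rate", 0) > 180,   # tachyarrhythmia threshold
    lambda d: d.get("spo2", 1.0) < 0.85,      # low oxygen saturation
]
```

Representing every rule as a callable lets thresholds and machine learning models coexist in one rule set, consistent with the mixed composition of rules 84A described above.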
- event surveillance application 72A may detect SCA, a ventricular fibrillation, a ventricular tachycardia, supraventricular tachycardia (e.g., conducted atrial fibrillation), ventricular asystole, or a myocardial infarction based on an ECG and/or other patient parameter data indicating the electrical or mechanical activity of the heart of patient 4.
- event surveillance application 72A may detect stroke based on such cardiac activity data.
- sensing circuitry 54A may detect brain activity data, e.g., an electroencephalogram (EEG), via electrodes 56, and event surveillance application 72A may detect stroke or a seizure based on the brain activity alone, or in combination with cardiac activity data or other physiological data.
- event surveillance application 72A detects whether the patient has fallen based on data from an accelerometer alone, or in combination with other physiological data.
- event surveillance application 72A may store the sensed data 82A that led to the detection (and in some cases a window of data preceding and/or following the detection) as event data 86A, also referred to herein as episode data.
- processing circuitry 50A transmits, via communication circuitry 60A, event data 86A for the event to computing device(s) 12 (FIG. 1). This transmission may be included in a message indicating the acute health event, as described herein. Transmission of the message may occur on an ad hoc basis and as quickly as possible.
- Communication circuitry 60A may include any suitable hardware, firmware, software, or any combination thereof for wirelessly communicating with another device, such as computing devices 12 and/or IoT devices 30.
- IMD 10A is one example of a device configured to detect an acute health event, collect episode data associated with the acute health event, and transmit the episode data.
- Processing circuitry 50A of IMD 10A may be further configured to receive the episode data associated with the acute health event detected by IMD 10A, apply a first one or more machine learning models and first classification logic to the episode data to determine a first respective classification of a plurality of predetermined classifications for the episode data.
- processing circuitry 50A may further determine whether to retrieve first additional data from the IMD 10A and apply a second one or more machine learning models and second classification logic to the first additional data, or retrieve second additional data from the IMD 10A and apply a third one or more machine learning models and third classification logic to the second additional data.
- Processing circuitry 50A may further determine a classification of the acute health event from the plurality of predetermined classifications based on the application of the second one or more machine learning models to the first additional data or the third one or more machine learning models to the second additional data.
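The cascade described in the preceding bullets can be sketched as follows: a first model and classification logic score the episode data, and the resulting probability selects which additional data is retrieved and which further model produces the final classification. The model stubs, callable names, and the 0.8 branch threshold are assumptions for illustration, not the claimed implementation.

```python
def classify_episode(episode, model1, fetch_first, model2,
                     fetch_second, model3, threshold=0.8):
    """Branch between two refinement paths on the first model's probability."""
    label, prob = model1(episode)        # first model + classification logic
    if prob >= threshold:
        return model2(fetch_first())     # first additional data, second model
    return model3(fetch_second())        # second additional data, third model
```

The branch lets a confident initial classification be refined cheaply, while an uncertain one triggers retrieval of different data and a different model.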
- memory 52B includes computer-readable instructions that, when executed by processing circuitry 50B, cause IMD 10B and processing circuitry 50B to perform various functions attributed herein to IMD 10B and processing circuitry 50B.
- Memory 52B may include any volatile, non-volatile, magnetic, optical, or electrical media, such as a RAM, ROM, NVRAM, EEPROM, flash memory, or any other digital media.
- Sensing circuitry 54B and therapy delivery circuitry 57B are coupled to electrodes 55B. Sensing circuitry 54B may monitor signals from electrodes 55B in order to, for example, monitor electrical activity of a heart of patient 4 and produce ECG data for patient 4. In some examples, processing circuitry 50B may identify features of the sensed ECG, such as heart rate, heart rate variability, T-wave alternans, intra-beat intervals (e.g., QT intervals), and/or ECG morphologic features, to detect an episode of cardiac arrhythmia of patient 4. Processing circuitry 50B may store the digitized ECG and features of the ECG used to detect the arrhythmia episode in memory 52B as episode data for the detected arrhythmia episode.
- sensing circuitry 54B measures impedance, e.g., of tissue proximate to IMD 10B, via electrodes 55B.
- the measured impedance may vary based on respiration, cardiac pulse or flow, and a degree of perfusion or edema.
- Processing circuitry 50B may determine physiological data relating to respiration, cardiac pulse or flow, perfusion, and/or edema based on the measured impedance.
- IMD 10B includes one or more sensors 58B, such as one or more accelerometers, gyroscopes, microphones, optical sensors, temperature sensors, pressure sensors, and/or chemical sensors.
- sensing circuitry 54B may include one or more filters and amplifiers for filtering and amplifying signals received from one or more of electrodes 55B and/or sensors 58B.
- sensing circuitry 54B and/or processing circuitry 50B may include a rectifier, filter and/or amplifier, a sense amplifier, comparator, and/or analog-to-digital converter.
- Processing circuitry 50B may determine physiological data, e.g., values of physiological parameters of patient 4, based on signals from sensors 58B, which may be stored in memory 52B.
- Patient parameters determined from signals from sensors 58B may include oxygen saturation, glucose level, stress hormone level, heart sounds, body motion, body posture, or blood pressure.
- Memory 52B may store applications 70B executable by processing circuitry 50B, and data 80B.
- Applications 70B may include an acute health event surveillance application 72B.
- Processing circuitry 50B may execute event surveillance application 72B to detect an acute health event of patient 4 based on a combination of one or more of the types of physiological data described herein, which may be stored as sensed data 82B.
- sensed data 82B may additionally include patient parameter data sensed by other devices, e.g., computing device(s) 12 or IoT device(s) 30, and received via communication circuitry 60B.
- Event surveillance application 72B may be configured with a rules engine 74B.
- Rules engine 74B may apply rules 84B to sensed data 82B.
- sensing circuitry 54B may detect brain activity data, e.g., an electroencephalogram (EEG) via electrodes 55B, and event surveillance application 72B may detect stroke or a seizure based on the brain activity alone, or in combination with cardiac activity data or other physiological data.
- event surveillance application 72B detects whether the patient has fallen based on data from an accelerometer alone, or in combination with other physiological data.
- event surveillance application 72B may store the sensed data 82B that led to the detection (and in some cases a window of data preceding and/or following the detection) as event data 86B, also referred to herein as episode data.
- processing circuitry 50B transmits, via communication circuitry 60B, event data 86B for the event to computing device(s) 12 (FIG. 1). This transmission may be included in a message indicating the acute health event, as described herein. Transmission of the message may occur on an ad hoc basis and as quickly as possible.
- Communication circuitry 60B may include any suitable hardware, firmware, software, or any combination thereof for wirelessly communicating with another device, such as computing devices 12 and/or IoT devices 30.
- Therapy delivery circuitry 57B may be configured to generate and deliver electrical therapy to the heart, brain, spinal cord, nerves, or other part of the body of patient 4.
- therapy delivery circuitry 57B may include one or more pulse generators, capacitors, and/or other components capable of generating and/or storing energy to deliver as pacing therapy, defibrillation therapy, cardioversion therapy, other therapy, or a combination of therapies.
- therapy delivery circuitry 57B may include a first set of components configured to provide pacing therapy and a second set of components configured to provide anti-tachyarrhythmia shock therapy.
- therapy delivery circuitry 57B may utilize the same set of components to provide both pacing and anti-tachyarrhythmia shock therapy. In still other instances, therapy delivery circuitry 57B may share some of the pacing and shock therapy components while using other components solely for pacing or shock delivery.
- Therapy delivery circuitry 57B may include charging circuitry, one or more charge storage devices, such as one or more capacitors, and switching circuitry that controls when the capacitor(s) are discharged to electrodes 55B and the widths of pulses. Charging of capacitors to a programmed pulse amplitude and discharging of the capacitors for a programmed pulse width may be performed by therapy delivery circuitry 57B according to control signals received from processing circuitry 50B, which are provided by processing circuitry 50B according to parameters stored in memory 52B. Processing circuitry 50B controls therapy delivery circuitry 57B to deliver the generated therapy to patient 4 via one or more combinations of electrodes 55B, e.g., according to parameters stored in memory 52B. Therapy delivery circuitry 57B may include switch circuitry to select which of the available electrodes 55B are used to deliver the therapy, e.g., as controlled by processing circuitry 50B.
- IMD 10B may additionally or alternatively be configured to deliver other therapies configured to prevent the predicted acute cardiac event.
- processing circuitry 50B may control therapy delivery circuitry 57B to deliver cardiac pacing therapy configured to prevent a ventricular tachyarrhythmia, such as overdrive pacing therapy when one or more of the patient parameters indicate that the heart rate is not fast, or down-drive pacing therapy when one or more of the patient parameters indicate that the heart rate is too fast.
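The selection between the two preventive pacing strategies above can be sketched as a single rate comparison. The 100 bpm threshold and the function name are illustrative assumptions, not programmed device values.

```python
def select_pacing(heart_rate_bpm, fast_threshold=100):
    """Pick a preventive pacing strategy from the sensed heart rate."""
    if heart_rate_bpm > fast_threshold:
        return "down-drive"   # heart rate is too fast
    return "overdrive"        # heart rate is not fast
```

In practice this decision would draw on multiple patient parameters, as the description states, rather than heart rate alone.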
- IMD 10B may additionally or alternatively be configured to deliver neuromodulation therapy to prevent an acute cardiac event, such as ventricular tachyarrhythmia, heart failure decompensation, or ischemia.
- processing circuitry 50B may be programmed, and therapy delivery circuitry 57B and electrodes 55B configured and placed, to generate and deliver the neuromodulation therapy.
- Example neuromodulation therapies include vagal nerve stimulation, spinal cord stimulation, peripheral nerve stimulation, cardiac intrinsic nerve modulation, and cardiac stellate ganglion stimulation.
- IMD 10B may additionally or alternatively be configured to deliver a therapeutic substance, e.g., infuse a drug.
- IMD 10B may include a pump to deliver the substance, and processing circuitry 50B may be configured to control the pump according to therapy parameters stored in memory 52B.
- Examples of delivery of therapy substances to prevent an acute cardiac event include delivery of substances that modulate the cardiovascular or neurological systems of the patient.
- IMD 10B is one example of a device configured to detect an acute health event, collect episode data associated with the acute health event, transmit the episode data, and deliver therapy to a patient.
- Processing circuitry 50B of IMD 10B may be further configured to receive the episode data associated with the acute health event detected by IMD 10B, apply a first one or more machine learning models and first classification logic to the episode data to determine a first respective classification of a plurality of predetermined classifications for the episode data.
- processing circuitry 50B may further determine whether to retrieve first additional data from the IMD 10B and apply a second one or more machine learning models and second classification logic to the first additional data, or retrieve second additional data from the IMD 10B and apply a third one or more machine learning models and third classification logic to the second additional data.
- Processing circuitry 50B may further determine a classification of the acute health event from the plurality of predetermined classifications based on the application of the second one or more machine learning models to the first additional data or the third one or more machine learning models to the second additional data.
- Processing circuitry 50B may also determine whether to control IMD 10B (e.g., therapy delivery circuitry 57B) to deliver therapy based on the classification.
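The final step, deciding whether to deliver therapy based on the classification, can be sketched as a lookup from classification to therapy. The class names and therapy mapping are assumptions for illustration; the document does not specify them.

```python
# Hypothetical mapping from the cascade's final classification to a therapy
# for therapy delivery circuitry 57B; None means withhold therapy.
THERAPY_FOR_CLASS = {
    "ventricular_fibrillation": "defibrillation_shock",
    "ventricular_tachycardia": "anti-tachycardia_pacing",
    "normal_sinus_rhythm": None,
}

def therapy_decision(classification):
    """Return the therapy to deliver, or None to withhold therapy."""
    return THERAPY_FOR_CLASS.get(classification)
```

Unrecognized classifications also map to None here, i.e., withholding therapy is the conservative default in this sketch.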
- FIG. 3 is a block diagram illustrating an example configuration of a computing device 12 of patient 4, which may correspond to either (or both operating in coordination) of computing devices 12A and 12B illustrated in FIG. 1.
- computing device 12 takes the form of a smartphone, a laptop, a tablet computer, a personal digital assistant (PDA), a smartwatch, or other wearable computing device.
- IoT devices 30 and/or computing device 38 may be configured similarly to the configuration of computing device 12 illustrated in FIG. 3.
- computing device 12 may be logically divided into user space 102, kernel space 104, and hardware 106.
- Hardware 106 may include one or more hardware components that provide an operating environment for components executing in user space 102 and kernel space 104.
- User space 102 and kernel space 104 may represent different sections or segmentations of memory, where kernel space 104 provides higher privileges to processes and threads than user space 102.
- kernel space 104 may include operating system 120, which operates with higher privileges than components executing in user space 102.
- hardware 106 includes processing circuitry 130, memory 132, one or more input devices 134, one or more output devices 136, one or more sensors 138, and communication circuitry 140.
- computing device 12 may be any component or system that includes processing circuitry or other suitable computing environment for executing software instructions and, for example, need not include every element shown in FIG. 3.
- Processing circuitry 130 is configured to implement functionality and/or process instructions for execution within computing device 12.
- processing circuitry 130 may be configured to receive and process instructions stored in memory 132 that provide functionality of components included in kernel space 104 and user space 102 to perform one or more operations in accordance with techniques of this disclosure.
- processing circuitry 130 may include any one or more microprocessors, controllers, GPUs, TPUs, DSPs, ASICs, FPGAs, or equivalent discrete or integrated logic circuitry.
- Memory 132 may be configured to store information within computing device 12, for processing during operation of computing device 12.
- Memory 132 in some examples, is described as a computer-readable storage medium.
- memory 132 includes a temporary memory or a volatile memory. Examples of volatile memories include random access memories (RAM), dynamic random access memories (DRAM), static random access memories (SRAM), and other forms of volatile memories known in the art.
- Memory 132 in some examples, also includes one or more memories configured for long-term storage of information, e.g., including non-volatile storage elements.
- non-volatile storage elements include magnetic hard discs, optical discs, floppy discs, flash memories, or forms of electrically programmable memories (EPROM) or electrically erasable and programmable (EEPROM) memories.
- memory 132 includes cloud-associated storage.
- One or more input devices 134 of computing device 12 may receive input, e.g., from patient 4 or another user. Examples of input are tactile, audio, kinetic, and optical input. Input devices 134 may include, as examples, a mouse, keyboard, voice responsive system, camera, buttons, control pad, microphone, presence-sensitive or touch-sensitive component (e.g., screen), or any other device for detecting input from a user or a machine.
- One or more output devices 136 of computing device 12 may generate output, e.g., to patient 4 or another user. Examples of output are tactile, haptic, audio, and visual output.
- Output devices 136 of computing device 12 may include a presence-sensitive screen, sound card, video graphics adapter card, speaker, cathode ray tube (CRT) monitor, liquid crystal display (LCD), light emitting diodes (LEDs), or any type of device for generating tactile, audio, and/or visual output.
- One or more sensors 138 of computing device 12 may sense physiological parameters or signals of patient 4.
- Sensor(s) 138 may include electrodes, accelerometers (e.g., 3-axis accelerometers), an optical sensor, impedance sensors, temperature sensors, pressure sensors, heart sound sensors (e.g., microphones), and other sensors, and sensing circuitry (e.g., including an ADC), similar to those described above with respect to IMD 10A and IMD 10B of FIG. 2A and FIG. 2B.
- Communication circuitry 140 of computing device 12 may communicate with other devices by transmitting and receiving data.
- Communication circuitry 140 may include a network interface card, such as an Ethernet card, an optical transceiver, a radio frequency transceiver, or any other type of device that can send and receive information.
- communication circuitry 140 may include a radio transceiver configured for communication according to standards or protocols, such as 3G, 4G, 5G, Wi-Fi (e.g., IEEE 802.11), IEEE 802.15 (ZigBee), Bluetooth®, or Bluetooth® Low Energy (BLE).
- health monitoring application 150 executes in user space 102 of computing device 12.
- Health monitoring application 150 may be logically divided into presentation layer 152, application layer 154, and data layer 156.
- Presentation layer 152 may include a user interface (UI) component 160, which generates and renders user interfaces of health monitoring application 150.
- Application layer 154 may include, but is not limited to, an event engine 170, rules engine 172, rules configuration component 174, and event assistant 176.
- Event engine 170 may be responsive to receipt of a transmission from IMD 10 indicating that IMD 10 detected an acute health event.
- Event engine 170 may control performance of any of the operations ascribed herein to computing device 12 in response to detection of an acute health event, such as transmitting messages to HMS 22, controlling IoT devices 30, and analyzing data to confirm or override the detection of the acute health event by IMD 10.
- Rules engine 172 analyzes sensed data 190, and in some examples, patient input 192 and/or EHR data 194, to determine whether there is a sufficient likelihood that patient 4 is experiencing the acute health event detected by IMD 10.
- Sensed data 190 may include data received from IMD 10 as part of the alert transmission, additional data transmitted from IMD 10, e.g., in “real-time,” and physiological and other data related to the condition of patient 4 collected by, for example, computing device(s) 12 and/or IoT devices 30.
- sensed data 190 from computing device(s) 12 may include one or more of: activity levels, walking/running distance, resting energy, active energy, exercise minutes, quantifications of standing, body mass, body mass index, heart rate, low, high, and/or irregular heart rate events, heart rate variability, walking heart rate, heart beat series, digitized ECG, blood oxygen saturation, blood pressure (systolic and/or diastolic), respiratory rate, maximum volume of oxygen, blood glucose, peripheral perfusion, and sleep patterns.
- Patient input 192 may include responses to queries posed by health monitoring application 150 regarding the condition of patient 4, input by patient 4 or another user, such as bystander 26.
- the queries and responses may occur responsive to the detection of the event by IMD 10, or may have occurred prior to the detection, e.g., as part of long-term monitoring of the health of patient 4.
- User recorded health data may include one or more of: exercise and activity data, sleep data, symptom data, medical history data, quality of life data, nutrition data, medication taking or compliance data, allergy data, demographic data, weight, and height.
- EHR data 194 may include any of the information regarding the historical condition or treatments of patient 4 described above.
- EHR data 194 may relate to history of SCA, tachyarrhythmia, myocardial infarction, stroke, seizure, one or more disease states, such as status of heart failure, chronic obstructive pulmonary disease (COPD), renal dysfunction, or hypertension, aspects of disease state, such as ECG characteristics, cardiac ischemia, oxygen saturation, lung fluid, activity, or metabolite level, genetic conditions, congenital anomalies, history of procedures, such as ablation or cardioversion, and healthcare utilization.
- EHR data 194 may also include cardiac indicators, such as ejection fraction and left-ventricular wall thickness.
- EHR data 194 may also include demographic and other information of patient 4, such as age, gender, race, height, weight, and BMI.
- Rules engine 172 may apply rules 196 to the data.
- Rules 196 may include one or more models, algorithms, decision trees, and/or thresholds. In some cases, rules 196 may be developed based on machine learning, e.g., may include one or more machine learning models. In some examples, rules 196 and the operation of rules engine 172 may provide a more complex analysis of the patient parameter data, e.g., the data received from IMD 10A or IMD 10B, than is provided by rules engine 74A or 74B and rules 84A or 84B. In examples in which rules 196 include one or more machine learning models, rules engine 172 may apply feature vectors derived from the data to the model(s).
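Deriving a feature vector from the patient parameter data, as described above for the machine learning models in rules 196, can be sketched as follows. The parameter names and the NaN handling for missing parameters are assumptions for illustration.

```python
def feature_vector(sensed, keys=("heart_rate", "spo2", "respiratory_rate")):
    """Build an ordered numeric feature vector from a sensed-data dict."""
    # Fixed key ordering so the model always sees features in the same
    # positions; missing parameters become NaN for the model to handle.
    return [float(sensed.get(k, float("nan"))) for k in keys]
```

A fixed ordering and explicit missing-value marker keep the vector shape stable even when different devices contribute different subsets of the parameters.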
- Rules configuration component 174 may be configured to modify rules 196 (and in some examples rules 84A or 84B) based on feedback indicating whether the detections and confirmations of acute health events by IMD 10 and computing device 12 were accurate. The feedback may be received from patient 4 and/or EHR 24 via HMS 22. In some examples, rules configuration component 174 may utilize the data sets from true and false detections and confirmations for supervised machine learning to further train models included as part of rules 196.
- Rules configuration component 174 may select a configuration of rules 196 based on etiological data for patient 4, e.g., any combination of one or more of the examples of sensed data 190, patient input 192, and EHR data 194 discussed above. In some examples, different sets of rules 196 tailored to different cohorts of patients may be available for selection for patient 4 based on such etiological data.
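Selecting a rules configuration by patient cohort from etiological data, as described above, can be sketched as an ordered match against cohort criteria. The cohort criteria, rule-set names, and default are assumptions for illustration.

```python
def select_rule_set(patient, cohorts, default="general_rules"):
    """Return the first rule set whose cohort criteria the patient satisfies."""
    for criteria, rule_set in cohorts:       # checked in priority order
        if all(patient.get(k) == v for k, v in criteria.items()):
            return rule_set
    return default                           # no cohort matched

# Hypothetical cohorts keyed on etiological data.
cohorts = [
    ({"history_of_sca": True}, "post_sca_rules"),
    ({"heart_failure": True}, "hf_rules"),
]
```

Checking cohorts in priority order gives a deterministic choice when a patient's etiological data satisfies more than one cohort's criteria.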
- event assistant 176 may provide a conversational interface for patient 4 to exchange information with computing device 12.
- Event assistant 176 may query the user regarding the condition of patient 4 in response to receiving the alert message from IMD 10. Responses from the user may be included as patient input 192.
- Event assistant 176 may use natural language processing and context data to interpret utterances by the user.
- In addition to receiving responses to queries posed by the assistant, event assistant 176 may be configured to respond to queries posed by the user.
- Event assistant 176 may provide directions to patient 4 and respond to queries from patient 4 regarding treatment.
- Computing device 12 may be configured to receive the episode data associated with the acute health event detected by IMD 10 (e.g., IMD 10A or IMD 10B), and apply a first one or more machine learning models and first classification logic to the episode data to determine a first respective classification of a plurality of predetermined classifications for the episode data. Based on a probability of the first respective classification determined by the first one or more machine learning models, computing device 12 may further determine whether to retrieve first additional data from IMD 10 and apply a second one or more machine learning models and second classification logic to the first additional data, or retrieve second additional data from IMD 10 and apply a third one or more machine learning models and third classification logic to the second additional data. Computing device 12 may further determine a classification of the acute health event from the plurality of predetermined classifications based on the application of the second one or more machine learning models to the first additional data or the third one or more machine learning models to the second additional data.
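The tiered classification described above can be sketched in a few lines. In this hedged illustration, the model callables, the retrieval callables, and the 0.5 gating threshold are assumptions; the disclosure does not specify particular models or threshold values:

```python
# Illustrative sketch of tiered classification: a first model classifies the
# episode data, and the probability of that first classification determines
# which additional data is retrieved from the device and which further model
# classifies it. All callables and the threshold are placeholder assumptions.

def classify_episode(episode_data, first_model, second_model, third_model,
                     retrieve_first_additional, retrieve_second_additional,
                     probability_threshold=0.5):
    """Return a classification from the plurality of predetermined classes."""
    _label, probability = first_model(episode_data)
    if probability >= probability_threshold:
        # Probability supports the first classification: refine it with the
        # first additional data and the second model.
        return second_model(retrieve_first_additional())
    # Otherwise fall back to different additional data and a third model.
    return third_model(retrieve_second_additional())
```

The branch taken controls both which data is pulled from the implanted device (a power and bandwidth consideration) and which model makes the final determination.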
- FIG. 4 is a block diagram illustrating an operating perspective of HMS 22.
- HMS 22 may be implemented in a computing system 20, which may include hardware components such as those of computing device 12, e.g., processing circuitry, memory, and communication circuitry, embodied in one or more physical devices.
- FIG. 4 provides an operating perspective of HMS 22 when hosted as a cloud-based platform.
- Components of HMS 22 are arranged according to multiple logical layers that implement the techniques of this disclosure. Each layer may be implemented by one or more modules comprising hardware, software, or a combination of hardware and software.
- Computing devices such as computing devices 12, IoT devices 30, and computing devices 38 operate as clients that communicate with HMS 22 via interface layer 200.
- The computing devices typically execute client software applications, such as desktop applications, mobile applications, and web applications.
- Interface layer 200 represents a set of application programming interfaces (APIs) or protocol interfaces presented and supported by HMS 22 for the client software applications.
- Interface layer 200 may be implemented with one or more web servers.
- HMS 22 also includes an application layer 202 that represents a collection of services 210 for implementing the functionality ascribed to HMS herein.
- Application layer 202 receives information from client applications, e.g., a message indicating detection of an acute health event from a computing device 12 or IoT device 30, and further processes the information according to one or more of the services 210 to respond to the information.
- Application layer 202 may be implemented as one or more discrete software services 210 executing on one or more application servers, e.g., physical or virtual machines. That is, the application servers provide runtime environments for execution of services 210.
- The functionality of interface layer 200 as described above and the functionality of application layer 202 may be implemented at the same server.
- Services 210 may communicate via a logical service bus 212.
- Service bus 212 generally represents a logical interconnection or set of interfaces that allows different services 210 to send messages to other services, such as by a publish/subscribe communication model.
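A minimal publish/subscribe bus in the spirit of logical service bus 212 can be sketched as follows. This is an illustrative sketch under the assumption of simple in-process topic handlers, not the disclosed implementation:

```python
from collections import defaultdict

# Minimal publish/subscribe bus: services subscribe to topics and receive
# messages that other services publish. Topic names and the in-process
# delivery model are illustrative assumptions.

class ServiceBus:
    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, topic, handler):
        """Register a service's handler for messages on a topic."""
        self._subscribers[topic].append(handler)

    def publish(self, topic, message):
        """Deliver a message to every handler subscribed to the topic."""
        for handler in self._subscribers[topic]:
            handler(message)
```

In a deployed system the bus would typically be a networked message broker, but the subscribe/publish contract between services is the same.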
- Data layer 204 of HMS 22 provides persistence for information using one or more data repositories 220.
- a data repository 220 may be any data structure or software that stores and/or manages data. Examples of data repositories 220 include but are not limited to relational databases, multi-dimensional databases, maps, and hash tables, to name only a few examples.
- Each of services 230-238 is implemented in a modular form within HMS 22. Although shown as separate modules for each service, in some examples the functionality of two or more services may be combined into a single module or component.
- Each of services 230-238 may be implemented in software, hardware, or a combination of hardware and software.
- Services 230-238 may be implemented as standalone devices, separate virtual machines or containers, or as processes, threads, or software instructions generally for execution on one or more physical processors.
- Event processor service 230 may be responsive to receipt of a transmission from computing device(s) 12 and/or IoT device(s) 30 indicating that IMD 10 detected an acute health event of patient 4 and, in some examples, that the transmitting device confirmed the detection. Event processor service 230 may initiate performance of any of the operations ascribed herein to HMS 22 in response to detection of an acute health event, such as communicating with patient 4 and, in some cases, analyzing data to confirm or override the detection of the acute health event by IMD 10. Record management service 238 may store the patient data included in a received alert message within event records 252.
- Event processor service 230 may apply one or more rules 250 to the data received in the message, e.g., to feature vectors derived by event processor service 230 from the data, or to raw data, e.g., digitized ECG or other waveforms.
- Rules 250 may include one or more models, algorithms, decision trees, and/or thresholds, which may be developed by rules configuration service 234 based on machine learning.
- Example machine learning techniques that may be employed to generate rules 250 (as well as rules 84A or 84B and/or 196) can include various learning styles, such as supervised learning, unsupervised learning, and semi-supervised learning.
- Example types of algorithms include Bayesian algorithms, clustering algorithms, decision-tree algorithms, regularization algorithms, regression algorithms, instance-based algorithms, artificial neural network algorithms, deep learning algorithms, dimensionality reduction algorithms, and the like.
- Various examples of specific algorithms include Bayesian Linear Regression, Boosted Decision Tree Regression, Neural Network Regression, Back Propagation Neural Networks, Convolutional Neural Networks (CNN), Long Short-Term Memory networks (LSTM), the Apriori algorithm, K-Means Clustering, k-Nearest Neighbour (kNN), Learning Vector Quantization (LVQ), Self-Organizing Map (SOM), Locally Weighted Learning (LWL), Ridge Regression, Least Absolute Shrinkage and Selection Operator (LASSO), Elastic Net, Least-Angle Regression (LARS), Principal Component Analysis (PCA), and Principal Component Regression (PCR).
- Rules 250 maintained by HMS 22 may include rules 196 utilized by computing devices 12 and rules 84A or 84B used by IMD 10A or IMD 10B.
- Rules configuration service 234 may be configured to develop and maintain rules 196 and rules 84A or 84B.
- Rules configuration service 234 may be configured to develop different sets of rules 84A or 84B, 196, 250, e.g., different machine learning models, for different cohorts of patients.
- Rules configuration service 234 may be configured to modify these rules based on event feedback data 254 that indicates whether the detections and confirmations of acute health events by IMD 10, computing device 12, and/or HMS 22 were accurate.
- Event feedback data 254 may be received from patient 4, e.g., via computing device(s) 12, and/or EHR 24.
- Rules configuration service 234 may utilize event records from true and false detections (as indicated by event feedback data 254) and confirmations for supervised machine learning to further train models included as part of rules 250.
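Using true/false detection feedback as supervised labels can be sketched as follows. A one-weight, logistic-regression-style update is used purely for illustration; the disclosure does not specify a particular training algorithm, and real models would use many features:

```python
import math

# Hedged sketch of supervised retraining from feedback: event feature values
# labeled by the feedback (true detection = 1, false detection = 0) update a
# single model weight. The algorithm and hyperparameters are illustrative
# assumptions, not the disclosed training method.

def train_weight(examples, weight=0.0, learning_rate=0.1, epochs=200):
    """examples: list of (feature, label) pairs, where label 1 marks a
    detection confirmed true by feedback and 0 marks a false detection."""
    for _ in range(epochs):
        for feature, label in examples:
            prediction = 1.0 / (1.0 + math.exp(-weight * feature))
            weight += learning_rate * (label - prediction) * feature
    return weight
```

The point is the data flow: event records labeled by feedback become the training set, and the retrained model replaces or augments the deployed rules.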
- Services 210 may also include an assistant configuration service 236 for configuring and interacting with event assistant 176 implemented in computing device 12 or other computing devices.
- Assistant configuration service 236 may provide event assistants updates to their natural language processing and context analyses to improve their operation over time.
- Assistant configuration service 236 may apply machine learning techniques to analyze sensed data and event assistant interactions stored in event records 252, as well as the ultimate disposition of the event, e.g., indicated by EHR 24, to modify the operation of event assistants, e.g., for patient 4, a class of patients, all patients, or for particular users or devices.
- FIG. 5 is a flow diagram illustrating an example operation for applying rules to patient parameter data to determine whether an acute health event is detected.
- The example operation of FIG. 5 may be performed by processing circuitry of any one of IMD 10, computing device(s) 12, 38, IoT devices 30, or HMS 22 (e.g., by processing circuitry 50A or 50B or 130 implementing rules engine 74A, 74B or 172 and applying rules 84A, 84B or 196), or by processing circuitry of two or more of these devices respectively performing portions of the example operation.
- The processing circuitry applies a first set of rules to first patient parameter data for a first determination of whether an acute health event, e.g., SCA, is detected (300).
- The processing circuitry determines whether one or more context criteria associated with the first determination are satisfied (302). If the one or more context criteria are not satisfied (NO of 302), the processing circuitry may determine whether the acute health event is detected based on the first determination (304). If the acute health event is detected (YES of 304), the processing circuitry may generate an alert, e.g., a message to another device and/or a user-perceptible alert as described herein (306). If the acute health event is not detected (NO of 304) or the alert has been generated, the example operation of FIG. 5 may end.
- If the one or more context criteria are satisfied (YES of 302), the processing circuitry may apply a second set of rules to second patient parameter data for a second determination of whether the acute health event, e.g., SCA, is detected (308), and determine whether the acute health event is detected based on the second determination (304).
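The FIG. 5 flow can be sketched as a single function. The rule and criteria callables below are placeholders supplied by the caller (assumptions for illustration), not the disclosed rules:

```python
# Sketch of the FIG. 5 flow: apply a first set of rules (300); if the context
# criteria are satisfied (302), apply a second set of rules to second patient
# parameter data (308); either way, alert if the event is detected (304, 306).

def detect_acute_health_event(first_rules, first_data, context_satisfied,
                              second_rules, second_data, generate_alert):
    detected = first_rules(first_data)             # (300)
    if context_satisfied(detected, first_data):    # (302)
        detected = second_rules(second_data)       # (308)
    if detected:                                   # (304)
        generate_alert()                           # (306)
    return detected
```

Separating the context check from the rules themselves is what lets the second, possibly more expensive, determination run only when the first one is uncertain.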
- In some examples, the first and second sets of rules are different in at least one aspect.
- In some examples, the second set of rules comprises at least one machine learning model.
- In some examples, both the first and second sets of rules comprise at least one machine learning model.
- In some examples, the processing circuitry determines a risk score of the acute health event, e.g., SCA, based on the application of the first set of rules to the first patient parameter data, and compares the risk score to a threshold to determine whether the one or more context criteria are satisfied.
- The context indicating that the second set of rules should be applied to the second patient parameter data may be that the risk score produced by the first determination does not meet a threshold indicating a sufficient certainty that the acute health event is occurring.
- The risk score may be a percentage likelihood of the acute health event.
- In some examples, the processing circuitry determines a confidence level of the first determination of whether the acute health event is detected, and compares the confidence level to a threshold.
- The one or more context criteria may be satisfied where the first determination does not have a threshold degree of confidence, or where the first determination is associated with a likelihood of being a false positive that exceeds a threshold.
- Application of the second set of rules to the second patient parameter data may act as a “tie-breaker” when the first determination is not confident.
- In some examples, the processing circuitry determines that the one or more context criteria are satisfied when input from a user, e.g., the patient, contradicts the first determination (e.g., that the acute health event was detected or not detected), indicating that the likelihood that the first determination is false may be relatively high.
- The processing circuitry may determine a confidence level of the first determination of whether the acute health event is present using a variety of techniques. For example, the application of the first set of rules to the first patient parameter data may produce a level of confidence through its output, e.g., a risk score. In such examples, a higher output indicating a higher likelihood of the acute health event may indicate a higher level of confidence. Examples of rules that may produce such outputs include machine learning models and time-domain signal processing algorithms.
- The processing circuitry may determine a noise level of one or more signals from which the first patient parameter data is determined. In such examples, the processing circuitry may determine a confidence level of the first determination of whether the acute health event is present based on a noise level. In general, confidence level and noise level may be inversely related. In some examples, the processing circuitry may determine the confidence level based on health record data for patient 4. For example, if a clinician has indicated in a health record or via programming IMD 10 that patient 4 has experienced a myocardial infarction or has heart failure, confidence levels may be increased and/or thresholds included in the rules applied to the first patient parameter data may be lowered.
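The noise-based confidence and health-record threshold adjustments just described can be sketched briefly. The linear formula and the 0.8/0.1 values are illustrative assumptions, not values from the disclosure:

```python
# Hedged sketch: confidence is inversely related to the measured noise level,
# and health record data (e.g., prior myocardial infarction or heart failure)
# lowers a detection threshold. Formula and constants are assumptions.

def confidence_from_noise(noise_level, max_noise=1.0):
    """Confidence inversely related to noise, clamped to [0.0, 1.0]."""
    noise = min(max(noise_level, 0.0), max_noise)
    return 1.0 - noise / max_noise

def detection_threshold(base_threshold=0.8, has_mi_or_heart_failure=False):
    """Lower the threshold for higher-risk patients per the health record."""
    return base_threshold - 0.1 if has_mi_or_heart_failure else base_threshold
```

A real system would likely derive noise from signal quality metrics (e.g., baseline wander or signal-to-noise estimates) rather than a single scalar, but the inverse relation is the key idea.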
- A context criterion may be satisfied when a component of system 2, e.g., IMD 10 or computing devices 12, has sufficient power to enable the application of the second set of rules to the second patient parameter data.
- The processing circuitry may determine a power level of system 2, e.g., of the relevant device, and compare the power level to a threshold.
- The second patient parameter data includes data of at least one patient parameter that is not included in the first patient parameter data.
- The processing circuitry activates a sensor to sense this patient parameter, e.g., when the device including the sensor has sufficient power for the measurement.
- In some examples, the first patient parameter data and the second patient parameter data are both sensed by an implantable medical device.
- In other examples, the at least one patient parameter that is included in the second patient parameter data but not included in the first patient parameter data is sensed by an external device.
- In some examples, processing circuitry 50A or 50B of IMD 10A or IMD 10B or processing circuitry 130 of computing device(s) 12 performs each of sub-operations 300-308.
- In other examples, processing circuitry 50A or 50B of IMD 10A or IMD 10B performs the first determination of whether the acute health event, e.g., SCA, is detected (300), and processing circuitry 130 of computing device(s) 12 (or IoT devices 30 or the other devices discussed herein) performs each of sub-operations 302-308.
- In some examples, the first patient parameter data includes at least one patient parameter determined from ECG data, and the second patient parameter data comprises a patient parameter determined from at least one of heart sounds of the patient, an impedance of the patient, motion of the patient (e.g., whether a fall occurred or is suspected), respiration of the patient, posture of the patient, blood pressure of the patient, a chemical detected in the patient, or an optical signal from the patient.
- The first patient parameter data and second patient parameter data may be determined using different combinations of sensors, e.g., internal and/or external sensors. The first and second determinations may be considered different tiers, with the second determination utilizing additional sensor(s), data, and/or power if the context suggests it would be desirable to supplement the first determination.
- The processing circuitry selects at least one of the second set of rules or the parameters used for the second patient parameter data based on at least one of user (e.g., patient, caregiver, or bystander) input or medical record information of the patient.
- The user input and/or medical history information may include information entered by a clinician when programming IMD 10.
- The processing circuitry may select at least one of the second set of rules or the parameters used for the second patient parameter data based on user input or medical record information indicating a particular symptom or condition of the patient.
- In some examples, the first patient parameter data comprises data for a first set of patient parameters, and the processing circuitry may select at least one of the second set of rules or a second patient parameter for the second patient parameter data based on a level of at least one of the first set of patient parameters.
- A level for a particular parameter that is clinically significant but contrary to the first determination may suggest that the second determination should be performed, and should be performed with a particular parallel (but different) or orthogonal patient parameter to resolve the uncertainty about whether the acute health event is detected.
- In some examples, the first patient parameter data includes at least one patient parameter determined from ECG data of the patient, and the second patient parameter data comprises at least one of a morphological change or a frequency shift of the ECG data over time.
- The processing circuitry may analyze ECG data for timing or morphology changes. For example, morphological or frequency changes as a ventricular fibrillation persists may indicate an increased lethality of the ventricular fibrillation.
- The rules applied by the processing circuitry may determine a higher likelihood of the acute health event, e.g., lethal ventricular fibrillation or SCA, in the presence of such morphological or frequency shifts.
- The example operation of FIG. 5 may result in a hierarchy of rules or even sensor measurements.
- One or more sensors may be activated in certain contexts, and may be inactive for first determinations of whether the acute health event is detected, e.g., to conserve power of IMD 10. For example, if in a first determination ECG data indicates ventricular fibrillation and other sensor data indicates no pulse and no heart sounds, there may be no need for the second determination. But if those levels of evidence are not high, e.g., it is uncertain whether the rhythm is ventricular fibrillation, or there are faint heart sounds, faint pulses, a fall, or a gait change, then a second determination could be employed.
- The rules and sensors used in either or both of the first and second determinations can be configured/personalized for each patient based on their medical history from EHR 24, their history of previous events, or by their physicians/caregivers depending on the situation. For example, if a caregiver has to leave town for a few days, the processing circuitry could configure the rules to be satisfied by lower levels of evidence, e.g., automatically, which may advantageously tailor the monitoring provided by system 2 to the context of patient 4 and caregivers of the patient.
- FIG. 6 is a flow diagram illustrating another example operation for applying rules to patient parameter data to determine whether an acute health event is detected. The example operation of FIG. 6 may be performed by processing circuitry of any one of IMD 10, computing device(s) 12, 38, IoT devices 30, or HMS 22 (e.g., by processing circuitry 50A or 50B or 130 implementing rules engine 74A, 74B or 172 and applying rules 84A, 84B, or 196), or by processing circuitry of two or more of these devices respectively performing portions of the example operation.
- The processing circuitry applies a set of rules to patient parameter data to determine whether an acute health event, e.g., SCA, is detected (320).
- The processing circuitry determines whether one or more context criteria associated with the determination are satisfied (322). If the one or more context criteria are not satisfied (NO of 322), the processing circuitry may determine whether the acute health event is detected based on the determination (324). If the acute health event is detected (YES of 324), the processing circuitry may generate an alert, e.g., a message to another device and/or a user-perceptible alert as described herein (326). If the acute health event is not detected (NO of 324) or the alert has been generated, the example operation of FIG. 6 may end.
- If the one or more context criteria are satisfied (YES of 322), the processing circuitry may modify the set of rules (328), apply second patient parameter data to the modified set of rules (330), and determine whether the acute health event is detected based on that application (324).
- The processing circuitry may determine whether the one or more context criteria are satisfied in the manner described with respect to FIG. 5.
- The first and second patient parameter data may be determined from the same patient parameters or (with respect to at least one parameter) different patient parameters.
- The first patient parameter data and the second patient parameter data include at least one common patient parameter, and the processing circuitry may change a mode of sensing for the common patient parameter between the first patient parameter data and the second patient parameter data in response to satisfaction of the one or more context criteria. For example, the processing circuitry may change a sampling frequency for the common patient parameter.
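Changing the mode of sensing for a common parameter can be sketched as a simple configuration switch. The 128 Hz and 512 Hz sampling frequencies are illustrative assumptions, not device specifications:

```python
# Sketch of changing a sensing mode when context criteria are satisfied, here
# by raising the ECG sampling frequency for the second determination. The
# frequency values are illustrative assumptions.

def sensing_config(context_criteria_satisfied, baseline_hz=128, enhanced_hz=512):
    """Return the sampling frequency to use for the common parameter."""
    if context_criteria_satisfied:
        return {"ecg_sampling_hz": enhanced_hz}
    return {"ecg_sampling_hz": baseline_hz}
```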
- The processing circuitry may determine that a context criterion is satisfied by detecting that IMD 10 has flipped or otherwise migrated within patient 4. Such migration may lead to significant changes in patient parameter data, e.g., ECG data, impedance data, or heart sound data. Changing a mode employed by IMD 10 to sense one or more patient parameters, or changing rules to account for changes in patient parameter data resulting from device migration, may help ameliorate the effects of the device migration and maintain effective acute health event detection. In addition to the mode of sensing and/or rules, the processing circuitry may adjust other aspects of system 2, such as the mode of wireless communication between the IMD and other devices.
- In some examples, the processing circuitry determines that the one or more context criteria are satisfied when the processing circuitry determines that the acute health event, e.g., ventricular tachyarrhythmia or SCA, is detected, but the patient or another user cancels the alarm or otherwise provides user input contradicting the determination.
- In response, the processing circuitry may modify one or both of the sensed patient parameters or the rules applied to the patient parameter data.
- For example, the patient may have tolerated a rapid ventricular tachycardia that lasted for a sustained period (e.g., a programmed 10 or 20 seconds), but could soon experience another arrhythmia or syncope even though the patient believes they are OK.
- The modification may include adapting the rules based on the rhythm. Sometimes a long duration episode accelerates to ventricular fibrillation or more rapid ventricular tachycardia. Sometimes ventricular fibrillation slows down. In either case, the modification could include changing a heart rate threshold, e.g., applying hysteresis to the heart rate threshold. In some examples, ventricular fibrillation becomes difficult to sense.
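Hysteresis applied to a heart rate threshold, one of the rule modifications mentioned above, can be sketched as follows. The 180/150 bpm onset and release values are illustrative assumptions:

```python
# Sketch of heart rate threshold hysteresis: once a tachyarrhythmia is
# detected at an onset threshold, detection persists until the rate falls
# below a lower release threshold. Threshold values are assumptions.

class HeartRateHysteresis:
    def __init__(self, onset_bpm=180.0, release_bpm=150.0):
        self.onset_bpm = onset_bpm
        self.release_bpm = release_bpm
        self.detected = False

    def update(self, heart_rate_bpm):
        """Update detection state with a new heart rate measurement."""
        if self.detected:
            # Already detected: only release below the lower threshold.
            self.detected = heart_rate_bpm >= self.release_bpm
        else:
            # Not yet detected: require the higher onset threshold.
            self.detected = heart_rate_bpm >= self.onset_bpm
        return self.detected
```

The gap between onset and release prevents a rhythm hovering near a single threshold from repeatedly toggling detection on and off.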
- In such cases, the modification may include changing a ventricular depolarization detection threshold to allow more undersensing of depolarizations.
- In some examples, the processing circuitry determines that the one or more context criteria are satisfied based on a recent history of high arrhythmia burden. Some patients have electrical storms. Their electrolytes may be imbalanced, and they may experience a cluster of ventricular arrhythmias, but the patient parameter data may not satisfy the rules for detection of the acute health event. In such cases, the processing circuitry may adapt a tachyarrhythmia duration threshold, may alert patient 4 and caregivers and inform them to seek care as soon as possible, and/or may alert a clinician and send patient parameter data, e.g., ECG data, so the clinician can review.
- FIG. 7 is a flow diagram illustrating an example operation for configuring rules applied to patient parameter data to determine whether an acute health event is detected for a patient.
- The example operation of FIG. 7 may be performed by processing circuitry that implements HMS 22, e.g., that implements rules configuration service 234.
- In other examples, the operation of FIG. 7 may be performed by processing circuitry of any one of IMD 10, computing device(s) 12, 38, IoT devices 30, or HMS 22, e.g., implementing a rules configuration module, or by processing circuitry of two or more of these devices respectively performing portions of the example operation.
- The processing circuitry determines whether an acute health event, e.g., SCA, is detected (340).
- The processing circuitry receives feedback for the event (342).
- The feedback indicates whether the detection is a true or false positive, or whether the non-detection is a true or false negative.
- The processing circuitry may receive the feedback from patient 4 or EHR 24.
- The processing circuitry updates rules (e.g., rules 84A or 84B, rules 196, and/or rules 250) based on the feedback and event data, e.g., event data 86A or 86B or event records 252.
- For example, the processing circuitry uses the event data as a training set for one or more machine learning models based on the feedback.
- The operation of a system used to detect, confirm, and respond to acute health events can be improved.
- The information used to improve the performance could include physiologic sensor data that may indicate an SCA event is likely (QT prolongation, T-wave alternans, changes in respiration rate or thoracic impedance, history of PVCs or non-sustained VT, reduction in O2 saturation and/or perfusion, etc.).
- The information used to improve the performance could also include information indicating whether the prior SCA event was alerted appropriately and accurately, clinical or physiologic characteristics of the patient (disease state, weight, gender, etc.), data from EHR 24, and data input from the patient (e.g., symptom logging, confirmation that he/she is OK and not suffering from SCA, etc.).
- The processing circuitry may personalize the rules for patient 4 over time. If patient 4 has a lot of false positives, the example operation of FIG. 7 may modify the rules to be less sensitive; conversely, if patient 4 has a lot of false negatives, it may modify the rules to be more sensitive.
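This sensitivity personalization can be sketched as a threshold adjustment driven by feedback counts. The step size and clamping bounds below are assumptions for illustration:

```python
# Illustrative sketch of feedback-driven personalization: many false
# positives raise the detection threshold (less sensitive), many false
# negatives lower it (more sensitive). Step and bounds are assumptions.

def adjust_threshold(threshold, false_positives, false_negatives,
                     step=0.05, low=0.1, high=0.95):
    if false_positives > false_negatives:
        threshold += step   # fewer detections: less sensitive
    elif false_negatives > false_positives:
        threshold -= step   # more detections: more sensitive
    return min(max(threshold, low), high)
```

Clamping keeps the rule from drifting into a regime where it never detects, or always detects, the event.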
- The processing circuitry may also use the feedback and event data to update rules, e.g., machine learning models, for other patients, such as all patients whose IMDs are served by HMS 22, or a particular population or cohort of patients.
- The processing circuitry may use data from a number of sources (e.g., computing devices 12, IoT devices 30, etc.) to modify the rules or the collection of patient parameter data. Data used by the processing circuitry to update rules may include data collected using an accelerometer, speaker, light detector, or camera on a computing device or IoT device.
- FIG. 8 is a flow diagram illustrating another example operation for configuring rules applied to patient parameter data to determine whether an acute health event is detected for a patient.
- The example operation of FIG. 8 may be performed by processing circuitry that implements HMS 22, e.g., that implements rules configuration service 234.
- In other examples, the operation of FIG. 8 may be performed by processing circuitry of any one of IMD 10, computing device(s) 12, 38, IoT devices 30, or HMS 22, e.g., implementing a rules configuration module, or by processing circuitry of two or more of these devices respectively performing portions of the example operation.
- The processing circuitry determines an etiology or risk stratification of patient 4 (360).
- The processing circuitry selects a set of rules (e.g., a set of rules 84A or 84B, rules 196, and/or rules 250), which may be a first set of rules and/or a second set of rules, for acute health event, e.g., SCA, detection for patient 4 based on the patient etiology (362).
- In some examples, rules 250 include different sets of rules for different patient cohorts having different etiologies, and the processing circuitry may select different rule sets to implement as rules 84A or 84B in IMD 10A or IMD 10B and rules 196 in computing device(s) 12 for a given patient based on the etiology of that patient.
- The processing circuitry may apply the selected set of rules to patient parameter data to determine whether the acute health event is detected using any of the techniques described herein (364).
- Detection of SCA can be achieved by looking at a number of possible markers that occur prior to and during the event. The best markers to detect an impending or ongoing event are likely to be different based on an etiology of the patient.
- An SCA detection algorithm which uses a generic algorithm designed for a broad population may not achieve satisfactory sensitivity and specificity.
- The etiology of patient 4 may include baseline characteristics, medical history, or disease state.
- The etiology of patient 4 may include any EHR data 194 described herein, as well as patient activity level or metabolite level. With such possible inputs, the rules could look for certain markers to exhibit certain trends or threshold crossings to detect an impending or ongoing acute health event, e.g., SCA.
- Selection of a set of rules may include modification of a universal rule set to turn certain rules (or markers of the acute health event) on or off, or to change the weight of certain rules or markers.
- A family of devices could be designed such that individual models have sensors or calculations for only a limited set of inputs, motivated by a need to reduce manufacturing costs or energy consumption.
- Rules related to other patient parameter data may be set to a heightened alert based on patient etiology. For example, a patient with a prior myocardial infarction may have rules that weigh ischemia factors such as ST segment elevation more heavily than for patients lacking this etiology. As another example, a patient with long QT syndrome may have rules that more heavily weight QT interval and activity. As another example, rules for a heart failure patient may apply greater weight to patient parameter data related to lung fluid and QRS duration.
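The etiology-based weighting just described can be sketched as adjustments layered over a universal rule set. The marker names and weight values below are illustrative assumptions, not clinical parameters from the disclosure:

```python
# Sketch of etiology-based rule weighting: start from a universal marker
# weight set, then emphasize markers for a given etiology (e.g., ST elevation
# after myocardial infarction, lung fluid and QRS duration for heart failure).
# All names and values are illustrative assumptions.

UNIVERSAL_WEIGHTS = {
    "st_elevation": 1.0,
    "qt_interval": 1.0,
    "lung_fluid": 1.0,
    "qrs_duration": 1.0,
}

ETIOLOGY_ADJUSTMENTS = {
    "prior_mi": {"st_elevation": 2.0},
    "long_qt": {"qt_interval": 2.0},
    "heart_failure": {"lung_fluid": 2.0, "qrs_duration": 1.5},
}

def weights_for_etiology(etiology):
    """Modify the universal rule set for a patient's etiology."""
    weights = dict(UNIVERSAL_WEIGHTS)
    weights.update(ETIOLOGY_ADJUSTMENTS.get(etiology, {}))
    return weights

def risk_score(marker_levels, weights):
    """Weighted sum of marker levels under the selected rule set."""
    return sum(weights.get(m, 0.0) * level for m, level in marker_levels.items())
```

Setting a weight to zero would turn a marker off entirely, matching the on/off modification of the universal rule set described above.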
- Processing circuitry of system 2 may use patient etiology to “personalize” other aspects of the operation of system 2 for patient 4 or a cohort including patient 4.
- The processing circuitry may provide alerts and user interfaces that guide patient 4 or others based on the etiology.
- The processing circuitry can provide patient-specific care recommendations (e.g., delivery of an antitachyarrhythmia shock or pacing, neurostimulation, or potential drug therapy for prevention or termination of SCA).
- the etiology may indicate patient 4 is more at risk for pulseless electrical activity vs. ventricular fibrillation/ventricular tachycardia.
- the processing circuitry of system 2 may control IMD 10 to deliver therapy to the patient, such as defibrillation, to treat the patient’s condition.
- the processing circuitry of system 2 may recommend or otherwise cause the delivery of patient-specific care actions (e.g., by IMD 10) based on the etiology.
- system 2 may be used to detect any of a number of acute health events of patient 4.
- system 2 may be used to detect stroke.
- Stroke can often present in the form of facial droop.
- This change in facial tone could be identified using facial image processing on a computing device 12, e.g., a smartphone, or IoT devices 30.
- image processing could be a primary indicator of possible stroke or part of a confirmation after another device indicates changes related to stroke.
- the processing circuitry could detect possible stroke, and various devices of system 2 could provide alerts.
- in response to detection based on the camera images, the device could output a series of prompts (audible and/or visual) to assess a current state of patient 4.
- Patient 4 could be prompted to repeat a phrase or answer audible questions to assess cognitive ability.
- the device could use additional motion processing to further verify the state of patient 4, e.g., using an accelerometer of computing device 12A and/or 12B. Changes in body motion and asymmetry, e.g., of the face and/or body motion, are indicative of stroke.
- the device may ask patient 4 questions.
- Processing circuitry may analyze the response to detect speech difficulties associated with stroke.
- processing circuitry of one or more devices of system 2 may be configured to analyze episode data associated with an acute health event, such as a ventricular tachyarrhythmia or SCA, detected by IMD 10.
- the episode data may include ECG and other physiological parameter data collected by IMD 10 for the event, e.g., leading up to, during, and/or after the event.
- polymorphic VT and VF are life threatening
- monomorphic VT is life threatening if sustained for durations on the order of minutes
- SVTs are generally not life threatening unless sustained for greater than 1 hour.
- the techniques of this disclosure may include use of a second set of rules that includes machine learning models or other AI algorithms to improve accuracy of classification of these different forms of ventricular tachyarrhythmia that may be detected by IMDs.
- An ensemble of neural networks may improve sensitivity and specificity of the overall analysis by allowing for different inputs to have respective networks of different forms, e.g., one can use recurrent neural networks for one or more specific inputs and CNNs for one or more other inputs.
- the output of each network may be concatenated and flattened, and then fed as input into the final stages of the ensemble network which may have fully connected layers and classification layers.
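The flatten/concatenate/fully-connected/softmax stages described above can be sketched with a few lines of numpy. This is a minimal illustration only, not the patented implementation: the branch outputs, their shapes, the random weights, and the five-class output are all placeholder assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    # Numerically stable softmax producing a probability vector.
    e = np.exp(z - z.max())
    return e / e.sum()

# Hypothetical per-branch outputs; shapes are illustrative placeholders.
cnn_out = rng.standard_normal((4, 8))   # e.g., a CNN branch applied to a raw ECG segment
rnn_out = rng.standard_normal(16)       # e.g., an RNN branch's final hidden state

# Flatten and concatenate the per-branch outputs into one feature vector.
features = np.concatenate([cnn_out.ravel(), rnn_out])  # length 48

# Final stages of the ensemble: a fully connected layer, then a softmax
# over five classes (e.g., PVT, MVT, SVT, noise, oversensing).
W = rng.standard_normal((5, features.size)) * 0.1
b = np.zeros(5)
probs = softmax(W @ features + b)
print(probs.shape)  # (5,)
```

In a trained system the weights would come from training rather than a random generator; the point here is only the data flow from per-input branches to a single classification head.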
- FIG. 9 is a block diagram illustrating an example of an ensemble classifier 400 of neural networks configured to classify ventricular tachyarrhythmias.
- Processing circuitry, e.g., processing circuitry 130 of computing device 12 or IoT device 30, may apply a plurality of inputs 402 to a plurality of neural networks 404 of ensemble classifier 400.
- Inputs 402 include raw signal inputs 406A or other raw parameter data of patient 4, e.g., from IMD 10 or other devices as described herein, and inputs 406B comprising features derived from the raw data.
- Inputs 406A may include a raw ECG segment sensed by IMD 10 including a ventricular tachyarrhythmia onset detected by IMD 10 based on the ECG, and a raw ECG segment sensed by IMD 10 including a portion of the ECG by which IMD 10 determined the ventricular tachyarrhythmia was sustained.
- inputs 406B may include a feature determined by the processing circuitry based on a temporal history of other sensed parameters of patient 4.
- one or more inputs 402 or portions thereof may be fed into separate individual neural networks 404, which may include 1- or 2-dimensional CNNs, RNNs, or long short-term memory (LSTM) networks (which may be a type of RNN).
- the processing circuitry may flatten 408 and concatenate 410 the outputs from the plurality of neural networks to provide ensemble classifier 400.
- the processing circuitry may apply the flattened and concatenated outputs to a fully connected layer 412, and the outputs of the fully connected layer to one or more SoftMax functions 414.
- the outputs of the one or more SoftMax functions 414 are probabilities 416, e.g., respective probabilities of different classifications of the data for the episode of ventricular tachyarrhythmia detected by IMD 10.
- the different classifications are PVT, MVT, SVT, noise, and oversensing.
- the processing circuitry may combine the raw signals and derived features in a 2D array format (to form an input ensemble) for a single CNN or other neural network.
- FIG. 10 is a block diagram illustrating an example of a single classifier 430 utilizing raw signals and derived features as inputs 432. Inputs 432 of FIG. 10 may be substantially similar to inputs 406B of FIG. 9.
- the processing circuitry may concatenate 434 inputs 432.
- the processing circuitry may concatenate 434 inputs 432 to form a concatenated 2D array 436 of input values to be applied to a neural network 438 including one or more of an LSTM/RNN, rectifier function, and/or max pooling layers.
- the processing circuitry may concatenate 440 the output of neural network 438 for application to a fully connected layer 442 and SoftMax function 444 to produce probabilities 446 in the manner described above with respect to FIG. 9.
- Classifier 430 may be an example of a second set of rules as described above.
- the processing circuitry uses different segments of ECG, such as a segment from a period of time at onset of the arrhythmia, another segment when the episode reaches sustained detection, and multiple ongoing segments thereafter, as respective inputs to the one or more neural networks, e.g., of ensemble classifier 400 or classifier 430.
- the processing circuitry uses features derived from different segments of the ECG in the episode data as respective inputs to the one or more neural networks, such as RR intervals during the episode and prior to start of episode, RR interval stability or variability, or short term HRV prior to onset of the episode.
- the segments may be timewise, e.g., respective periods of the ECG.
- the segments may be contiguous, separated by time, and/or overlapping.
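The contiguous, separated, and overlapping segmentations described above reduce to a choice of segment length and step size. A minimal sketch, with an assumed sampling rate and placeholder signal:

```python
import numpy as np

def segment_ecg(ecg, seg_len, step):
    """Split an ECG sample array into fixed-length segments.
    step == seg_len gives contiguous segments, step < seg_len gives
    overlapping segments, and step > seg_len leaves gaps between them."""
    return [ecg[i:i + seg_len] for i in range(0, len(ecg) - seg_len + 1, step)]

fs = 256                         # assumed sampling rate in Hz (illustrative)
ecg = np.zeros(30 * fs)          # 30 s of placeholder episode data
contiguous = segment_ecg(ecg, 5 * fs, 5 * fs)   # six 5-second segments
overlapping = segment_ecg(ecg, 5 * fs, 2 * fs)  # 5-second segments every 2 s
print(len(contiguous), len(overlapping))
```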
- the processing circuitry uses data from other sensors, e.g., of IMD 10, computing devices 12, and/or IoT devices 30.
- the additional data may include patient motion (e.g., gait) or posture, e.g., from an accelerometer, which may indicate activity level during arrhythmia or gait/posture during arrhythmia or if patient 4 had a fall during the detected episode.
- other data, e.g., historical data, may be obtained from IMD 10, computing devices 12, HMS 22, and/or EHR 24.
- the other data may include, as examples, ventricular tachyarrhythmia episode detection history, Al based episode classification history, AF burden history, or clinical history.
- the processing circuitry may derive features from sensor signals using signal processing techniques such as autocorrelation, Short Time Fourier transforms, Continuous Wavelet transforms, principal component analysis, independent component analysis, etc.
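As one example of the signal-processing feature derivation listed above, a normalized autocorrelation can expose the dominant periodicity of a segment. The sampling rate and test signal below are illustrative assumptions:

```python
import numpy as np

def autocorr_feature(x, max_lag):
    """Normalized autocorrelation up to max_lag; the lag of the first
    strong peak is a crude periodicity (rate) feature."""
    x = x - x.mean()
    ac = np.correlate(x, x, mode="full")[len(x) - 1:]  # lags 0..N-1
    return ac[:max_lag + 1] / ac[0]

fs = 128                                   # assumed sampling rate (Hz)
t = np.arange(0, 4, 1 / fs)
x = np.sin(2 * np.pi * 2.0 * t)            # placeholder 2 Hz "rhythm"
ac = autocorr_feature(x, fs)
period_lag = int(np.argmax(ac[10:])) + 10  # skip the zero-lag region
print(period_lag / fs)                     # dominant period in seconds
```

Short Time Fourier transforms, wavelet transforms, PCA, or ICA would each replace the autocorrelation step here with a different decomposition of the same raw samples.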
- FIG. 11 is a block diagram illustrating a staged classifier 460 for classifying a ventricular tachyarrhythmia episode.
- the processing circuitry, e.g., processing circuitry 130 of computing device 12 or IoT device 30, may first apply a 5-class classifier 462, e.g., similar to ensemble classifier 400 or classifier 430, and the most dominant classes, such as inappropriate detections, noise, and oversensing episodes, are removed.
- the processing circuitry classifies episodes that are classified as appropriate tachycardia (PVT, MVT, and SVT) using a 3-class classifier 464.
- the next dominant class (SVT) is removed.
- the processing circuitry then classifies the remaining episodes using a 2-class classifier 466 to classify PVT vs. MVT episodes.
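The three-stage narrowing (5-class, then 3-class, then 2-class) reduces to simple control flow. The classifier callables and class labels below are hypothetical stand-ins, not the trained models of the disclosure:

```python
def staged_classify(episode, clf5, clf3, clf2):
    """Stage 1 removes the dominant non-arrhythmia classes, stage 2
    removes SVT, and stage 3 separates PVT from MVT."""
    c5 = clf5(episode)
    if c5 in ("noise", "oversensing", "inappropriate"):
        return c5
    c3 = clf3(episode)
    if c3 == "SVT":
        return c3
    return clf2(episode)  # "PVT" or "MVT"

# Illustrative stand-in classifiers (lambdas in place of neural networks).
print(staged_classify({"id": 1},
                      lambda e: "tachy", lambda e: "VT", lambda e: "MVT"))
```

Each stage only sees episodes that survived the previous stage, which lets each classifier specialize on a progressively harder discrimination.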
- the processing circuitry may discriminate SVT from other ventricular tachyarrhythmia classifications based on a comparison of ECG data for the episode to a historical ECG segment.
- the episode ECG data may be received from IMD 10 as described herein, and the historical ECG segment may be retrieved from HMS 22.
- the historical ECG segment may be from a previous transmission from IMD 10 to HMS 22, e.g., a daily transmission, such as the most recent transmission.
- the historical ECG segment may be a prior segment, e.g., the segment most recently prior to a fast heart rate associated with the detected ventricular tachyarrhythmia, or a most recent periodically collected ECG, e.g., collected every hour.
- the historical ECG segment may be a segment of normal sinus rhythm ECG collected when the device was not currently detecting any cardiac events or arrhythmias, or may be a segment previously verified as SVT, e.g., based on a user or algorithmic analysis of the segment.
- the processing circuitry may apply a convolutional filter and/or bank of convolutional filters to the ECG data for an episode to discriminate SVT from other classifications.
- the processing circuitry may generate the convolutional filter based on the historical ECG segment, which may be about 8 seconds in length.
- the processing circuitry may generate the bank of convolutional filters based on a wavelet or other decomposition of the historical ECG segment.
- the processing circuitry may classify the episode as SVT based on a suprathreshold output of the convolutional filter(s).
- an additional classifier may further classify SVT as one of sinus tachycardia, atrial arrhythmia, SVT with aberrancy, junctional rhythms, atrioventricular nodal reentry tachycardia, or others.
- the processing circuitry may discriminate SVT from other ventricular tachyarrhythmia classifications based on a feature indicative of the presence or absence of high frequency harmonics in the episode ECG data.
- FIGS. 12A and 12B illustrate frequency decompositions of ECG 470 and ECG 480 of a MVT episode and an SVT episode, respectively. As illustrated by FIGS. 12A and 12B, the magnitude at certain higher frequency harmonics is greater in the decomposed ECG 480 for the SVT episode (FIG. 12B) than the decomposed ECG 470 for the MVT episode (FIG. 12A).
- the processing circuitry applies a bank of complex exponential functions as convolutional filters to the ECG data for the episode.
- the frequency range of the bank may be configured to span frequencies of interest, which may be integer multiples of a lowest frequency in the decomposed ECG data for the episode.
- the lowest frequency may be about 60 Hertz (Hz)
- the bank may span a range from 100 Hz to 500 Hz, continuously across the range or via bands centered on respective integer multiples of 60 Hz.
- the processing circuitry may classify the episode as SVT based on a suprathreshold output of the convolutional filter(s).
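One way to realize the bank of complex exponential filters described above is to project the episode ECG onto exponentials at integer multiples of the base frequency and sum the resulting magnitudes. The sampling rate, frequencies, and test signals below are illustrative assumptions, not values from the disclosure:

```python
import numpy as np

def harmonic_energy(x, fs, f0, n_harmonics):
    """Project the signal onto complex exponentials at integer multiples
    of f0 and return the summed normalized magnitude (a crude bank of
    complex exponential convolutional filters)."""
    n = np.arange(len(x))
    total = 0.0
    for k in range(1, n_harmonics + 1):
        basis = np.exp(-2j * np.pi * k * f0 * n / fs)
        total += abs(np.vdot(basis, x)) / len(x)
    return total

fs = 1000
t = np.arange(0, 1, 1 / fs)
narrow = np.sin(2 * np.pi * 60 * t)                 # little harmonic content
rich = narrow + 0.5 * np.sin(2 * np.pi * 120 * t) \
              + 0.3 * np.sin(2 * np.pi * 180 * t)   # higher harmonics present
print(harmonic_energy(rich, fs, 120, 3) > harmonic_energy(narrow, fs, 120, 3))
```

A suprathreshold summed magnitude would indicate the higher-frequency harmonic content that, per FIGS. 12A and 12B, distinguishes the SVT episode from the MVT episode.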
- the processing circuitry may apply a beat-wise morphological comparator to discriminate PVT from MVT.
- the processing circuitry may generate a convolutional filter from a selected beat, e.g., the first beat, in the ECG stored by IMD 10 for the episode.
- the processing circuitry may generate a plurality of convolutional filters based on a decomposition, e.g., Walsh, Fourier, or wavelet, of the selected beat.
- the processing circuitry may apply the filter(s) to some or all of the other beats in the ECG stored by IMD 10 for the episode, e.g., sequentially.
- the processing circuitry may classify the episode as PVT based on a suprathreshold variability in the output of the convolutional filter(s).
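The beat-wise comparator above can be sketched by correlating each later beat against a filter built from the first beat and measuring variability of the peak responses. Beat lengths, noise levels, and the variability metric here are illustrative assumptions:

```python
import numpy as np

def beat_variability(beats):
    """Correlate each beat against the first beat (used as the filter);
    high variability of the normalized peak responses suggests changing
    beat morphology, i.e., polymorphic VT."""
    ref = beats[0] / (np.linalg.norm(beats[0]) + 1e-12)
    peaks = [float(np.max(np.correlate(b, ref, mode="full"))) /
             (np.linalg.norm(b) + 1e-12) for b in beats[1:]]
    return float(np.std(peaks))

rng = np.random.default_rng(2)
shape = rng.standard_normal(40)
mono = [shape + 0.02 * rng.standard_normal(40) for _ in range(8)]  # stable morphology
poly = [rng.standard_normal(40) for _ in range(8)]                 # changing morphology
print(beat_variability(mono) < beat_variability(poly))
```

A suprathreshold variability would classify the episode as PVT; the decomposition-based variant would apply several filters (e.g., Walsh or wavelet components of the selected beat) instead of the single reference beat.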
- the processing circuitry applies a classifier to event or episode data collected by IMD 10 for a suspected acute health event to determine one of a plurality of possible classifications.
- the possible classifications may include one or more acute health events of interest, including the one suspected by the IMD.
- the event data may include ECG data, and the classifications may include the classifications discussed above with respect to FIGS. 9-11.
- the classifier may be implemented by a rules engine, such as rules engine 172, and may be an example of application of a second set of rules to patient parameter data.
- FIG. 13 is a block diagram illustrating an example configuration of a classifier 490 configured to classify episode data collected and transmitted by IMD 10 in response to detecting an acute health event, e.g., transmitted by the IMD based on application of a first set of rules, as described herein.
- Classifier 490 respectively analyzes timewise segments 492 of the episode data, e.g., M second segments of N seconds of episode data transmitted by IMD 10, to determine a classification 494.
- the episode data comprises ECG data transmitted by IMD 10 in response to detecting a sustained ventricular tachyarrhythmia, and possible classifications include the classifications discussed above with respect to FIGS. 9-11.
- Classifier 490 may be implemented by processing circuitry 130 of computing device 12, and/or processing circuitry of any one or more devices described herein.
- Classifier 490 may analyze all available segments of the episode data, or selected segments of the episode data, which may be consecutive or non-consecutive. For example, classifier 490 may analyze a plurality of consecutive segments at the end of the episode and, in some cases, additionally analyze one or more non-consecutive segments preceding the plurality of segments. The segments may be adjacent in time, overlap in time, or be spaced apart in time. In some examples, segments 492 include a historical or baseline segment, from the beginning of the episode data or from another transmission from IMD 10, as described above. Additionally, in some examples, classifier 490 may analyze later segments, after the end of the episode data, when computing device 12 and/or any one or more devices described herein requests additional data from IMD 10 based on an uncertain (e.g., lower confidence level) classification.
- classifier 490 includes one or more machine learning models 496.
- One or more machine learning models 496 may be configured and operate as illustrated and described with respect to FIGS. 9-11.
- One or more machine learning models 496 may output, for each of one or more segments 492, respective classifications, probabilities, decisions, or other outputs 499 to classification logic 498.
- one or more machine learning models 496 may output, for each of segments 492, a respective classification (e.g., tachyarrhythmia type as described above) and, in some cases, an associated probability or confidence level.
- one or more machine learning models 496 may output, for each of segments 492, a respective probability for each possible classification (e.g., each tachyarrhythmia type).
- Classification logic 498 determines a classification 494 of the episode data based on the classifications of segments 492 of episode data by machine learning model(s) 496. Based on the classification of the episode data, e.g., based on the classification being certain tachyarrhythmias such as VF or PVT, processing circuitry 130 may control output as described herein. In some examples, processing circuitry 130 requests additional patient parameter data from IMD 10 based on the classification, e.g., if the classification being certain tachyarrhythmias such as VF or PVT, but with a relatively lower probability and/or duration.
- Segment-based classification of episode data according to the techniques described herein may improve the accuracy of classification/detection of health events, particularly in situations where only shorter segments of continuous episode data are available to train the one or more machine learning models, or where the patient condition may change during an episode, e.g., where a tachyarrhythmia may spontaneously terminate or change during an episode.
- classification logic 498 determines the classification of the episode based on a number of the segments determined to have the classification, or a total duration of segments having the classification, satisfying a threshold. In some examples, classification logic 498 additionally or alternatively determines the classification of the episode based on a time location of one or more segments determined to have the classification within the episode data. For example, classification logic 498 may require that the last N segments, where N is an integer greater than or equal to 1, have the classification in order for the episode data as a whole to have the same classification.
- classification logic 498 additionally or alternatively determines the classification of the episode based on respective probabilities associated with the classifications of the segments, e.g., probabilities output by machine learning model(s) 496. In some examples, classification logic 498 compares the respective probabilities to one or more thresholds. In some examples, classification logic 498 compares a number or duration of segments having a common classification to a threshold as described above, but does not include segments for which the probability of the classification does not satisfy a threshold.
- classification logic 498 additionally or alternatively determines the classification of the episode based on a comparison of a combination, e.g., sum or average, of the probabilities associated with segments having the classification to a threshold.
- the combination is weighted, with one or more segments being weighted differently than one or more other segments.
- one or more segments later in the episode are weighted more heavily than one or more segments earlier in the episode.
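The weighted combination described above, with later segments weighted more heavily, can be sketched as follows. The linear weighting scheme, threshold, and example probabilities are illustrative assumptions:

```python
def weighted_episode_score(seg_probs, threshold=0.5):
    """Combine per-segment probabilities for one classification using
    linearly increasing weights, so that segments later in the episode
    count more toward the episode-level decision."""
    weights = [i + 1 for i in range(len(seg_probs))]
    score = sum(w * p for w, p in zip(weights, seg_probs)) / sum(weights)
    return score, score >= threshold

# Hypothetical episode where the arrhythmia appears in later segments.
score, positive = weighted_episode_score([0.1, 0.2, 0.9, 0.95])
print(score, positive)
```

With equal weights the same episode would score 0.5375; the increasing weights raise the score to 0.7 because the high-probability segments fall at the end of the episode.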
- FIGS. 14-17 are tables 500-800 illustrating example segment classifications, and associated episode classifications that may be determined by classification logic 498 based on the segment classifications.
- classification logic 498 may determine a classification PVT/VF or MVT in response to each of the four segments W5-W8 (at the end of the episode) being classified as PVT/VF or MVT.
- processing circuitry 130 may cause the delivery of therapy (e.g., by therapy delivery circuitry 57B of IMD 10B of FIG. 2B).
- classification logic 498 may determine a classification of semi-sustained or non-sustained PVT/VF or MVT based on the number/location of segments classified as PVT/VF or MVT not satisfying a threshold or criterion.
- processing circuitry 130 may control communication circuitry 140 to communicate with IMD 10 to retrieve additional ECG data and/or other patient parameter data.
- classification logic 498 may also determine a classification of semi-sustained or non-sustained PVT/VF or MVT based on a certain number of segments, e.g., 2 of 4, being classified as PVT/VF or MVT where the N most recent segments did not have that classification. Where one or more of the N most recent segments did have that classification, classification logic 498 may determine a classification of PVT/VF or MVT, or non-sustained PVT/VF or MVT, based on the probabilities associated with the segments classified as PVT/VF or MVT, e.g., based on comparison of the probabilities to a threshold.
- Example probability criteria include: 2 of 4 segments having a classification with a probability being greater than 0.98; 3 of 4 segments having a classification with a probability greater than 0.9; and/or an average probability of a classification across segments greater than 0.5.
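The example probability criteria listed above translate directly into a small predicate. This sketch hard-codes the values from the text for a four-segment episode; a real implementation would make them configurable:

```python
def meets_probability_criteria(probs):
    """True if the example criteria are met: 2 of 4 segments with
    probability > 0.98, 3 of 4 with probability > 0.9, or an average
    probability across segments > 0.5."""
    assert len(probs) == 4
    return (sum(p > 0.98 for p in probs) >= 2 or
            sum(p > 0.9 for p in probs) >= 3 or
            sum(probs) / 4 > 0.5)

print(meets_probability_criteria([0.99, 0.99, 0.1, 0.1]))  # 2 of 4 exceed 0.98
print(meets_probability_criteria([0.4, 0.4, 0.4, 0.4]))    # no criterion met
```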
- classification logic 498 may also determine a classification of semisustained or non-sustained PVT/VF or MVT based on the presence of normal sinus rhythm (NSR) classifications for N latest segments of episode data.
- classification logic 498 may apply a second one or more machine learning models to the classifications and, in some examples, probabilities, determined for each segment by one or more machine learning models 496.
- the second one or more machine learning models implemented by classification logic 498 may include one or more convolutional neural networks or recurrent neural networks, such as long short-term memory (LSTM) networks that encode changes over time.
- Other examples of machine learning methods to combine classifications from individual segments include state space machines, Bayesian belief networks or fuzzy logic, or other data fusion techniques.
- classification logic 498 includes one or more machine learning models that receive as input features identified automatically by a deep learning model, e.g., convolutional neural network, of one or more machine learning models 496 and/or output from non-machine learning rules 497 (FIG. 19).
- Non-machine learning rules 497 may provide outputs to classification logic 498 based on morphological features, such as morphological features determined using wavelets or cross-correlation, or RR interval features, such as metrics of regularity, irregularity or entropy, or presence of rate onset or irregularity onset.
- FIG. 18 is a flow diagram illustrating an example operation of classifier 490 of FIG. 13.
- processing circuitry, e.g., processing circuitry 130 of computing device 12, receives episode data (also referred to as event data) from IMD 10 (900).
- IMD 10 may have transmitted the episode data to computing device 12 in response to detecting a tachyarrhythmia or other health event based on application of a first set of rules as described herein.
- Processing circuitry 130 may implement classifier 490, which may apply one or more machine learning models 496 to each segment of a plurality of segments 492 of the episode data received from IMD 10 (902). Based on the respective segment classifications, classification logic 498 may output a classification 494 of the episode (904).
- FIG. 19 is a block diagram illustrating another example configuration of a classifier 1000 configured to classify episode data collected and transmitted by IMD 10 in response to detecting an acute health event, e.g., transmitted by the IMD based on application of a first set of rules, as described herein.
- Classifier 1000 may be configured similarly to classifier 490 of FIG. 13 except as noted herein.
- Classifier 1000 may be implemented by processing circuitry 130 of computing device 12, and/or processing circuitry of any one or more devices described herein.
- classifier 1000 includes one or more non-machine learning rules 497.
- One or more non-machine learning rules 497 may include rules applied to morphological stability or variability of the electrocardiogram data, frequency content of the electrocardiogram data, and/or heart rate stability or variability.
- One or more non-machine learning rules 497 may include template matching or RR interval modesum.
- One or more non-machine learning rules 497 may output, for each of one or more segments 492, respective classifications, probabilities, decisions, parameter values, or other outputs 495 to classification logic 498.
- one or more non-machine learning rules 497 may output, for each of one or more segments 492, a classification, binary decision (e.g., between classifications), or parameter value indicative of one or more classifications (e.g., of different types of tachyarrhythmia as described above).
- classification logic 498 determines a classification for the episode or, in some cases, whether to request additional data from IMD 10 for making the classification.
- classification logic 498 may require a threshold level of agreement, e.g., complete, majority, or other voting threshold, between the classifications of segments 492 in order to output the predominant classification as classification 494.
- classification logic 498 determines classification 494 based on a weighted combination of outputs 499, e.g., in comparison to a threshold.
- Classification logic 498 may weight outputs 499 based on respective probabilities and/or the time sequence position of segments, e.g., with one or more segments 492 later in the episode data being weighted more than one or more segments 492 earlier in the episode.
- classification logic 498 may adopt the output 495 of non-machine learning rules 497, ignore the output 499 from machine learning models 496, or decrease a weight applied to the output 499 from machine learning models 496 for the segment 492.
- classification logic 498 only considers outputs 495 (and/or classifier 1000 only applies non-machine learning rules 497) for a subset of segments 492 to which machine learning models 496 are applied, such as segments 492 for which a probability/confidence of a classification output 499 is less than (or equal to) a threshold, or for which classification output 499 is a predetermined classification.
- non-machine learning rules 497 may provide independent assessment of a key classification (e.g., VT vs. VF or VT vs. PVT discrimination).
- classifier 1000 that applies both machine learning models 496 and non-machine learning rules 497 to segments 492 of episode data as described herein may improve the accuracy of classification/detection of health events, such as tachyarrhythmias, particularly in situations where availability of training data may limit the accuracy of one or more machine learning models 496 in isolation.
- FIG. 20 is a flow diagram illustrating an example operation of classifier 1000 of FIG. 19.
- processing circuitry, e.g., processing circuitry 130 of computing device 12, receives episode data (also referred to as event data) from IMD 10 (1100).
- IMD 10 may have transmitted the episode data to computing device 12 in response to detecting a tachyarrhythmia or other health event based on application of a first set of rules as described herein.
- Processing circuitry 130 may implement classifier 1000, which may apply one or more machine learning models 496 to each segment of a plurality of segments 492 of the episode data received from IMD 10 (1102). Classifier 1000 may also apply one or more non-machine learning rules 497 to one or more segments of the plurality of segments 492 (1104). Based on resulting outputs 499 and 495 of one or more machine learning models 496 and one or more non-machine learning rules 497, classifier 1000 may output a classification 494 of the episode (1106).
- one or more non-machine learning rules 497 may be configured to discriminate MVT and PVT.
- one or more non-machine learning rules 497 may include one or more rules applied to a metric of regularity /variability of heart rate (e.g., RR intervals).
- one or more non-machine learning rules 497 may include one or more rules, e.g., thresholds, applied to a modesum of RR intervals.
- one or more non-machine learning rules 497 may include a linear modesum threshold (LMS), which is a modesum threshold that linearly decreases with increasing cycle length (RR interval length).
- An LMS may advantageously account for a phenomenon in which cycle length variability for faster MVTs is less than that for slower MVTs.
- a metric value to which classifier 1000 may apply one or more non-machine learning rules 497 includes a sum of standard deviations of cycle lengths.
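A linear modesum threshold as described above can be sketched as a piecewise-linear function of cycle length. All numeric values here (threshold endpoints and cycle-length range) are illustrative assumptions, not values from the disclosure:

```python
def linear_modesum_threshold(cycle_len_ms, t_fast=0.75, t_slow=0.45,
                             cl_fast=240, cl_slow=400):
    """Modesum threshold that decreases linearly with increasing cycle
    length: faster MVTs (shorter cycle lengths) tend to be more regular,
    so a higher modesum is required to call the rhythm monomorphic."""
    if cycle_len_ms <= cl_fast:
        return t_fast
    if cycle_len_ms >= cl_slow:
        return t_slow
    frac = (cycle_len_ms - cl_fast) / (cl_slow - cl_fast)
    return t_fast - frac * (t_fast - t_slow)

print(linear_modesum_threshold(240),
      linear_modesum_threshold(320),
      linear_modesum_threshold(400))
```

An episode's RR-interval modesum would then be compared against the threshold returned for its mean cycle length.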
- the beat (e.g., R-wave) morphology of MVTs is more stable than that of PVTs over an episode.
- one or more non-machine learning rules 497 may include one or more rules applied to a metric of stability/variability or instability of beat morphology.
- the metric may be a degree of similarity of morphology of different beats during the episode.
- Morphology of beats may be compared using any known techniques, e.g., cross-correlation, point-by-point differences, or comparison of wavelet decompositions. In some examples, selective wavelet coefficients may be compared.
- morphology of beats may be compared by comparing features of beats, such as peak-to-peak amplitude, maximum amplitude, minimum amplitude, slope or slew rate, or relative timing or values of the maximum and minimum.
- morphology of beats may be compared by comparing normalized energy distributions or imprints for the beats, e.g., comparing histograms for each beat with bins corresponding to different energy levels.
- one or more non-machine learning rules 497 may be configured to discriminate VF and rapidly conducting SVT, such as AF. Beat morphology of rapidly conducting SVTs generally is distinct from VF due to conduction of SVTs through the His-Purkinje system.
- a weighted zero crossing sum (WZCS) technique uses baseline information and frequency content information for discrimination between VF and SVT.
- the WZCS technique may include determining zero crossings of a filtered ECG signal and weighting each zero crossing point by consecutive sample difference or slope at that point.
- the WZCS technique may include summing absolute values of the weighted zero-crossings within a window and comparing the sum to a sum for a baseline window.
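The zero-crossing-and-slope steps of the WZCS technique can be sketched directly in numpy. The test signals, sampling rate, and frequencies below are illustrative placeholders (real use would apply this to filtered ECG and compare against a baseline window as described above):

```python
import numpy as np

def wzcs(x):
    """Weighted zero-crossing sum: locate sign changes of the (assumed
    pre-filtered) signal and weight each crossing by the local
    consecutive-sample difference (slope), then sum absolute weights."""
    signs = np.sign(x)
    crossings = np.nonzero(signs[:-1] * signs[1:] < 0)[0]
    slopes = np.abs(np.diff(x))[crossings]
    return float(np.sum(slopes))

fs = 250
t = np.arange(0, 2, 1 / fs)
coarse = np.sin(2 * np.pi * 3 * t)   # fewer crossings with shallower slopes
fine = np.sin(2 * np.pi * 9 * t)     # more crossings with steeper slopes
print(wzcs(fine) > wzcs(coarse))
```

Because both the crossing count and the per-crossing slope grow with frequency content, the sum separates signals by their high-frequency content, which is the property used for VF vs. SVT discrimination.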
- a slope metric is a metric of comparison of slopes within a window for a beat to slopes within a baseline window and/or a previous beat window.
- Metrics to which one or more non-machine learning rules 497 are applied may be designed such that the values show distinctly different distribution depending on the tachyarrhythmia type. Based on the distribution of metric values, a threshold can be set to provide a desired sensitivity and specificity.
- non-machine learning rules 497 may be applied to data from other sensors indicative of other physiological signals or parameters, e.g., respiration, perfusion, activity and/or posture, heart sounds, blood pressure, blood oxygen saturation signals, or other data orthogonal to ECG features but indicative of the presence of or classification of tachyarrhythmia. Based on such data, non-machine learning rules 497 may provide inputs to classification logic 498 indicating falls, respiration changes, lack of tissue perfusion, or lack of pulsatile flow, the presence of which may indicate that ventricular tachyarrhythmia, e.g., PVT or VF, is more likely.
- FIG. 21 is a conceptual diagram illustrating an example machine learning model 1200 configured to determine an extent to which patient parameter data is indicative of an acute health event, such as a ventricular tachyarrhythmia or SCA.
- Machine learning model 1200 is an example of a set of rules implemented by any rules engine described herein, neural networks 404 and 438 described with respect to FIGS. 9 and 10, or machine learning model(s) 496 of FIGS. 13 and 19, any of which may be implemented by processing circuitry 130 and/or rules engine 172 of computing device 12 in wireless communication with IMD 10, as discussed above.
- Machine learning model 1200 is an example of a deep learning model, or deep learning algorithm, trained to determine whether a particular set of patient parameter data indicates the presence of an acute health event, e.g., whether a particular segment of ECG signal data indicates SCA, or a certain classification related to ventricular tachyarrhythmia, as described herein.
- One or more of IMD 10, computing device 12, an IoT device 30, or a computing system 20 may train, store, and/or utilize machine learning model 1200, but other devices may apply inputs associated with a particular patient to machine learning model 1200 in other examples.
- machine learning and deep learning models or algorithms may be utilized in other examples.
- a ResNet-18 CNN model may be used.
- Some non-limiting examples of models that may be used for transfer learning include AlexNet, VGGNet, GoogLeNet, ResNet50, and DenseNet.
- Some non-limiting examples of machine learning techniques include Support Vector Machines, K-Nearest Neighbor algorithm, and Multi-layer Perceptron.
- machine learning model 1200 may include three layers. These three layers include input layer 1202, hidden layer 1204, and output layer 1206. Output layer 1206 comprises the output of transfer function 1205. Input layer 1202 represents each of the input values X1 through X4 provided to machine learning model 1200. The number of inputs may be equal to, less than, or greater than 4, including much greater than 4, e.g., hundreds or thousands. In some examples, the input values may be any of the values input into a machine learning model, as described above. In some examples, input values may include samples of an ECG signal. In addition, in some examples input values of machine learning model 1200 may include additional data, such as R-wave data, R-R interval data, or other data relating to one or more additional parameters of patient 4, as described herein.
- Each of the input values for each node in the input layer 1202 is provided to each node of hidden layer 1204.
- hidden layers 1204 include two layers, one layer having four nodes and the other layer having three nodes, but a fewer or greater number of nodes may be used in other examples.
- Each input from input layer 1202 is multiplied by a weight and then summed at each node of hidden layers 1204.
- the weights for each input are adjusted to establish the relationship between the inputs, e.g., an input ECG segment, and a determination of whether a particular set of inputs represents an acute health event and/or a score indicative of whether a set of inputs may be representative of SCA, MVT, PVT, VR, or another acute health event.
- one hidden layer may be incorporated into machine learning model 1200, or three or more hidden layers may be incorporated into machine learning model 1200, where each layer includes the same or different number of nodes.
- the result of each node within hidden layers 1204 is applied to the transfer function of output layer 1206.
- the transfer function may be linear or non-linear, depending on the number of layers within machine learning model 1200.
- Example non-linear transfer functions may be a sigmoid function or a rectifier function.
- the output 1207 of the transfer function may be a classification that indicates whether the particular ECG segment or other input set represents an acute health event, e.g., ventricular tachyarrhythmia, and/or a score indicative of an extent to which the input data set represents an acute health event.
- output 1207 may include respective probabilities for a plurality of classifications, e.g., as discussed herein with respect to FIGS. 9-11, 13, and 20.
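- The structure described above (inputs multiplied by weights and summed at each hidden node, the results feeding a transfer function at the output layer) can be sketched as follows; the layer sizes and the sigmoid transfer function are one illustrative configuration, not the only one contemplated:

```python
import math

def forward(x, hidden_layers, output_weights):
    """Minimal feedforward pass mirroring the described structure.

    `hidden_layers` is a list of layers, each layer a list of per-node
    weight lists; each input is multiplied by a weight and summed at
    each node, and the final weighted sum passes through a sigmoid
    transfer function to produce the output score.
    """
    activ = x
    for layer in hidden_layers:
        activ = [sum(w * a for w, a in zip(node_w, activ))
                 for node_w in layer]
    z = sum(w * a for w, a in zip(output_weights, activ))
    return 1.0 / (1.0 + math.exp(-z))  # sigmoid transfer function
```

For example, with four inputs, a four-node hidden layer, and a three-node hidden layer (as in FIG. 21), `forward([1.0, 2.0, 3.0, 4.0], hidden, [1.0, 1.0, 1.0])` yields a score between 0 and 1 indicative of the extent to which the input set represents an acute health event.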
- By applying the ECG signal data and/or other patient parameter data to a machine learning model, such as machine learning model 1200, processing circuitry, such as processing circuitry 130 of computing device 12, is able to determine that a patient is experiencing or will soon experience an acute health event with great accuracy, specificity, and sensitivity. This may facilitate determinations of risk of sudden cardiac death and may lead to therapy delivery and other interventions as described herein.
- Machine learning model 1200 may correspond to any one or more of rules 84A or 84B, rules 196, and rules 250 described herein.
- FIG. 22 is an example of a machine learning model 1200 being trained using supervised and/or reinforcement learning techniques.
- Machine learning model 1200 may be implemented using any number of models for supervised and/or reinforcement learning, such as but not limited to, an artificial neural network, a decision tree, naive Bayes network, support vector machine, or k-nearest neighbor model, to name only a few of the examples discussed above.
- processing circuitry of one or more of IMD 10, computing device 12, an IoT device 30, and/or computing system(s) 20 initially trains the machine learning model 1200 based on training set data 1300 including numerous instances of input data corresponding to acute health events and non-acute health events, e.g., as labeled by an expert.
- a prediction or classification by the machine learning model 1200 may be compared 1304 to the target output 1303, e.g., as determined based on the label.
- the processing circuitry implementing a learning/training function 1305 may send or apply a modification to weights of machine learning model 1200 or otherwise modify/update the machine learning model 1200.
- one or more of IMD 10, computing device 12, IoT device 30, and/or computing system(s) 20 may, for each training instance in the training set data 1300, modify machine learning model 1200 to change a score generated by the machine learning model 1200 in response to data applied to the machine learning model 1200.
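- The comparison-and-update cycle described above can be sketched with a simple supervised rule standing in for learning/training function 1305; the perceptron-style update and the learning rate are illustrative assumptions, not the disclosed training method:

```python
def train_step(weights, inputs, target, lr=0.1):
    """One supervised update: compare the model's prediction to the
    labeled target output and modify the weights to reduce the error."""
    pred = sum(w * x for w, x in zip(weights, inputs))
    error = target - pred  # compare prediction to target output
    return [w + lr * error * x for w, x in zip(weights, inputs)]


# Hypothetical labeled training instances (input data, expert label).
weights = [0.0, 0.0]
training_set = [([1.0, 0.0], 1.0), ([0.0, 1.0], 0.0)]
for _ in range(50):
    for x, y in training_set:
        weights = train_step(weights, x, y)
```

After repeated passes over the training set, the weights converge so the model reproduces the labeled targets.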
- FIG. 23A is a perspective drawing illustrating an IMD 10A, which may be an example configuration of IMD 10 of FIGS. 1, 2A, and 2B as an ICM.
- IMD 10A may be embodied as a monitoring device having housing 1412, proximal electrode 1416A and distal electrode 1416B.
- Housing 1412 may further comprise first major surface 1414, second major surface 1418, proximal end 1420, and distal end 1422.
- Housing 1412 encloses electronic circuitry located inside the IMD 10A and protects the circuitry contained therein from body fluids.
- Housing 1412 may be hermetically sealed and configured for subcutaneous implantation. Electrical feedthroughs provide electrical connection of electrodes 1416A and 1416B.
- IMD 10A is defined by a length L, a width W and thickness or depth D and is in the form of an elongated rectangular prism wherein the length L is much larger than the width W, which in turn is larger than the depth D.
- the geometry of the IMD 10A - in particular a width W greater than the depth D - is selected to allow IMD 10A to be inserted under the skin of the patient using a minimally invasive procedure and to remain in the desired orientation during insertion.
- the device shown in FIG. 23A includes radial asymmetries (notably, the rectangular shape) along the longitudinal axis that maintains the device in the proper orientation following insertion.
- the spacing between proximal electrode 1416A and distal electrode 1416B may range from 5 millimeters (mm) to 55 mm, 30 mm to 55 mm, 35 mm to 55 mm, and from 40 mm to 55 mm and may be any range or individual spacing from 5 mm to 60 mm.
- IMD 10A may have a length L that ranges from 30 mm to about 70 mm.
- the length L may range from 5 mm to 60 mm, 40 mm to 60 mm, 45 mm to 60 mm and may be any length or range of lengths between about 30 mm and about 70 mm.
- the width W of major surface 1414 may range from 3 mm to 15 mm, from 3 mm to 10 mm, or from 5 mm to 15 mm, and may be any single or range of widths between 3 mm and 15 mm.
- the thickness or depth D of IMD 10A may range from 2 mm to 15 mm, from 2 mm to 9 mm, from 2 mm to 5 mm, from 5 mm to 15 mm, and may be any single or range of depths between 2 mm and 15 mm.
- IMD 10A according to an example of the present disclosure has a geometry and size designed for ease of implant and patient comfort. Examples of IMD 10A described in this disclosure may have a volume of three cubic centimeters (cm³) or less, 1.5 cm³ or less, or any volume between 1.5 and three cm³.
- the first major surface 1414 faces outward, toward the skin of the patient while the second major surface 1418 is located opposite the first major surface 1414.
- proximal end 1420 and distal end 1422 are rounded to reduce discomfort and irritation to surrounding tissue once inserted under the skin of the patient.
- IMD 10A including instrument and method for inserting IMD 10A is described, for example, in U.S. Patent Publication No. 2014/0276928, incorporated herein by reference in its entirety.
- Proximal electrode 1416A is at or proximate to proximal end 1420, and distal electrode 1416B is at or proximate to distal end 1422.
- Proximal electrode 1416A and distal electrode 1416B are used to sense cardiac EGM signals, e.g., ECG signals, thoracically outside the ribcage, which may be sub-muscularly or subcutaneously.
- Cardiac signals may be stored in a memory of IMD 10A, and data may be transmitted via integrated antenna 1430A to another device, which may be another implantable device or an external device.
- electrodes 1416A and 1416B may additionally or alternatively be used for sensing any bio-potential signal of interest, which may be, for example, an EGM, EEG, EMG, or a nerve signal, or for measuring impedance, from any implanted location.
- proximal electrode 1416A is at or in close proximity to the proximal end 1420 and distal electrode 1416B is at or in close proximity to distal end 1422.
- distal electrode 1416B is not limited to a flattened, outward facing surface, but may extend from first major surface 1414 around rounded edges 1424 and/or end surface 1426 and onto the second major surface 1418 so that the electrode 1416B has a three-dimensional curved configuration.
- electrode 1416B is an uninsulated portion of a metallic, e.g., titanium, part of housing 1412.
- proximal electrode 1416A is located on first major surface 1414 and is substantially flat, and outward facing.
- proximal electrode 1416A may utilize the three-dimensional curved configuration of distal electrode 1416B, providing a three-dimensional proximal electrode (not shown in this example).
- distal electrode 1416B may utilize a substantially flat, outward facing electrode located on first major surface 1414 similar to that shown with respect to proximal electrode 1416A.
- proximal electrode 1416A and distal electrode 1416B are located on both first major surface 1414 and second major surface 1418.
- both proximal electrode 1416A and distal electrode 1416B are located on one of the first major surface 1414 or the second major surface 1418 (e.g., proximal electrode 1416A located on first major surface 1414 while distal electrode 1416B is located on second major surface 1418).
- IMD 10A may include electrodes on both major surfaces 1414 and 1418 at or near the proximal and distal ends of the device, such that a total of four electrodes are included on IMD 10A.
- Electrodes 1416A and 1416B may be formed of a plurality of different types of biocompatible conductive material, e.g., stainless steel, titanium, platinum, iridium, or alloys thereof, and may utilize one or more coatings such as titanium nitride or fractal titanium nitride.
- proximal end 1420 includes a header assembly 1428 that includes one or more of proximal electrode 1416A, integrated antenna 1430A, anti-migration projections 1432, and/or suture hole 1434.
- Integrated antenna 1430A is located on the same major surface (i.e., first major surface 1414) as proximal electrode 1416A and is also included as part of header assembly 1428.
- Integrated antenna 1430A allows IMD 10A to transmit and/or receive data.
- integrated antenna 1430A may be formed on the opposite major surface as proximal electrode 1416A or may be incorporated within the housing 1412 of IMD 10A.
- anti-migration projections 1432 are located adjacent to integrated antenna 1430A and protrude away from first major surface 1414 to prevent longitudinal movement of the device.
- anti-migration projections 1432 include a plurality (e.g., nine) small bumps or protrusions extending away from first major surface 1414.
- anti-migration projections 1432 may be located on the opposite major surface as proximal electrode 1416A and/or integrated antenna 1430A.
- header assembly 1428 includes suture hole 1434, which provides another means of securing IMD 10A to the patient to prevent movement following insertion.
- FIG. 23B is a perspective drawing illustrating another IMD 10B, which may be another example configuration of IMD 10 from FIGS. 1, 2A, and 2B as an ICM. IMD 10B of FIG. 23B may be configured substantially similarly to IMD 10A of FIG. 23A, with differences between them discussed herein.
- IMD 10B may include a leadless, subcutaneously-implantable monitoring device, e.g., an ICM.
- IMD 10B includes housing having a base 1440 and an insulative cover 1442.
- Proximal electrode 1416C and distal electrode 1416D may be formed or placed on an outer surface of cover 1442.
- Various circuitries and components of IMD 10B e.g., described above with respect to FIG. 2A or FIG. 2B, may be formed or placed on an inner surface of cover 1442, or within base 1440.
- a battery or other power source of IMD 10B may be included within base 1440.
- antenna 1430B is formed or placed on the outer surface of cover 1442, but may be formed or placed on the inner surface in some examples.
- insulative cover 1442 may be positioned over an open base 1440 such that base 1440 and cover 1442 enclose the circuitries and other components and protect them from fluids such as body fluids.
- the housing including base 1440 and insulative cover 1442 may be hermetically sealed and configured for subcutaneous implantation.
- Circuitries and components may be formed on the inner side of insulative cover 1442, such as by using flip-chip technology.
- Insulative cover 1442 may be flipped onto a base 1440. When flipped and placed onto base 1440, the components of IMD 10B formed on the inner side of insulative cover 1442 may be positioned in a gap 1444 defined by base 1440. Electrodes 1416C and 1416D and antenna 1430B may be electrically connected to circuitry formed on the inner side of insulative cover 1442 through one or more vias (not shown) formed through insulative cover 1442.
- Insulative cover 1442 may be formed of sapphire (i.e., corundum), glass, parylene, and/or any other suitable insulating material.
- Base 1440 may be formed from titanium or any other suitable material (e.g., a biocompatible material). Electrodes 1416C and 1416D may be formed from any of stainless steel, titanium, platinum, iridium, or alloys thereof. In addition, electrodes 1416C and 1416D may be coated with a material such as titanium nitride or fractal titanium nitride, although other suitable materials and coatings for such electrodes may be used.
- In the example shown in FIG. 23B, the housing of IMD 10B defines a length L, a width W, and a thickness or depth D and is in the form of an elongated rectangular prism wherein the length L is much larger than the width W, which in turn is larger than the depth D, similar to IMD 10A of FIG. 23A.
- the spacing between proximal electrode 1416C and distal electrode 1416D may range from 5 mm to 50 mm, from 30 mm to 50 mm, from 35 mm to 45 mm, and may be any single spacing or range of spacings from 5 mm to 50 mm, such as approximately 40 mm.
- IMD 10B may have a length L that ranges from 5 mm to about 70 mm. In other examples, the length L may range from 30 mm to 70 mm, 40 mm to 60 mm, 45 mm to 55 mm, and may be any single length or range of lengths from 5 mm to 50 mm, such as approximately 45 mm.
- the width W may range from 3 mm to 15 mm, 5 mm to 15 mm, 5 mm to 10 mm, and may be any single width or range of widths from 3 mm to 15 mm, such as approximately 8 mm.
- the thickness or depth D of IMD 10B may range from 2 mm to 15 mm, from 5 mm to 15 mm, or from 3 mm to 5 mm, and may be any single depth or range of depths between 2 mm and 15 mm, such as approximately 4 mm.
- IMD 10B may have a volume of three cubic centimeters (cm³) or less, or 1.5 cm³ or less, such as approximately 1.4 cm³.
- outer surface of cover 1442 faces outward, toward the skin of the patient.
- proximal end 1446 and distal end 1448 are rounded to reduce discomfort and irritation to surrounding tissue once inserted under the skin of the patient.
- edges of IMD 10B may be rounded.
- FIG. 23C illustrates an example environment of an example medical system comprising an implantable cardiac defibrillator device in conjunction with a patient, in accordance with one or more examples of the present disclosure.
- IMD 10 may be included in system 2 and configured to communicate with devices such as computing devices 12 shown in and described with respect to FIG. 1.
- medical device system 2 may detect an acute health event based on episode data collected by IMD 10C.
- computing system 20 may receive data from IMD 10C, and perform classification to classify the acute health event. Responsive to classification, IMD 10C may be configured to deliver a therapy configured to address the acute health event detected by IMD 10C, computing device 38, or computing system 20.
- IMD 10C is coupled to a ventricular lead 1520 and an atrial lead 1521.
- IMD 10C may be an ICD capable of delivering pacing, cardioversion, and defibrillation therapy to the heart 1516A of a patient.
- ventricular lead 1520 and atrial lead 1521 are electrically coupled to IMD 10C and extend into the patient's heart 1516A.
- Ventricular lead 1520 includes electrodes 1522 and 1524 shown positioned on the lead in the patient's right ventricle (RV) for sensing ventricular EGM signals and pacing in the RV.
- Atrial lead 1521 includes electrodes 1526 and 1528 positioned on the lead in the patient's right atrium (RA) for sensing atrial EGM signals and pacing in the RA.
- Ventricular lead 1520 additionally carries a high voltage coil electrode 1542, and atrial lead 1521 carries a high voltage coil electrode 1544, used to deliver cardioversion and defibrillation shocks.
- the term “anti-tachyarrhythmia shock” may be used herein to refer to both cardioversion shocks and defibrillation shocks.
- ventricular lead 1520 may carry both of high voltage coil electrodes 1542 and 1544, or may carry a high voltage coil electrode in addition to those illustrated in the example of FIG. 23C.
- IMD 10C may use both ventricular lead 1520 and atrial lead 1521 to acquire cardiac electrogram (EGM) signals from the patient and to deliver therapy in response to the acquired data.
- Medical system 2 is shown as having a dual chamber ICD configuration, but other examples may include one or more additional leads, such as a coronary sinus lead extending into the right atrium, through the coronary sinus and into a cardiac vein to position electrodes along the left ventricle (LV) for sensing LV EGM signals and delivering pacing pulses to the LV.
- a medical device system may be a single chamber system, or otherwise not include atrial lead 1521.
- IMD 10C may include processing circuitry, sensing circuitry, and other circuitry that are configured for performing the techniques described herein.
- IMD 10C may include therapy delivery circuitry configured to deliver antitachyarrhythmia pacing (ATP), other types of cardiac pacing, and antitachyarrhythmia shocks via the leads and electrodes described above.
- Sealed housing 12C may house such circuitries. Housing 12C (or a portion thereof) may be conductive so as to serve as an electrode for pacing or sensing or as an active electrode during shock delivery. As such, housing 12C is also referred to herein as “housing electrode” 12C.
- IMD 10C may transmit patient physiological data and/or cardiac rhythm episode data acquired by IMD 10C, as well as data regarding delivery of therapy by IMD 10C, to computing device 12 and/or computing system 20 (e.g., implementing HMS 22), which may perform any of the techniques described herein, e.g., for classifying episode data and confirming acute health events.
- processing circuitry may apply one or more machine learning models to episode data to determine a classification of a plurality of predetermined classifications for the episode data. For instance, the processing circuitry may evaluate one or more segments, such as segments 2492 of FIG. 24, to classify arrhythmia episodes.
- segments of data may include a segment from a period of time at the onset of arrhythmia, another segment when the episode reaches sustained detection, and multiple ongoing segments thereafter. The segments may be contiguous, separated by time, and/or overlapping.
- machine learning models 2496 may be configured to output probabilities for the various arrhythmia classifications (e.g., VT vs. VF or VT vs. PVT discrimination).
- machine learning models 2496 may discriminate between tachycardia (TACH), oversensing due to noise (NOS), and oversensing due to cardiac reasons (COS).
- machine learning models 2496 may determine a probability of TACH, probability of NOS, and probability of COS.
- Machine learning models may assess which rhythm classification of segments 2492 is the most likely to be accurate (represented by decision block 2498 in FIG. 24). For example, if machine learning models 2496A determine the probability of TACH is the maximum of the three probabilities, then the processing circuitry may apply machine learning models 2496B to discriminate between polymorphic ventricular tachycardia or ventricular fibrillation (PVT/VF), monomorphic ventricular tachycardia (MVT), and supraventricular tachycardia (SVT). Machine learning models 2496B may then determine the probability of PVT/VF, the probability of MVT, and the probability of SVT based on segments 2492.
- machine learning models 2496A may determine that the probability of PVT/VF, the probability of MVT, and the probability of SVT are all equal to the probability of TACH divided by three. Therefore, for each of segments 2492, machine learning models 2496 may output the probability of PVT/VF, the probability of MVT, the probability of SVT, the probability of NOS, and the probability of COS.
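- One hedged reading of the two-stage probability combination above is sketched below: the stage-2 model is treated as returning probabilities conditional on TACH (so the PVT/VF, MVT, and SVT probabilities sum to the TACH probability), and the uniform split is the fallback when TACH is not the maximum. The conditional-probability interpretation and the `stage2` callable are assumptions for illustration:

```python
def two_stage_probs(p_tach, p_nos, p_cos, stage2=None):
    """Combine stage-1 probabilities (TACH/NOS/COS) with a stage-2
    TACH discriminator (PVT/VF vs. MVT vs. SVT)."""
    if p_tach == max(p_tach, p_nos, p_cos) and stage2 is not None:
        # Stage 2 returns probabilities conditional on TACH (assumed).
        c_pvt_vf, c_mvt, c_svt = stage2()
        split = (p_tach * c_pvt_vf, p_tach * c_mvt, p_tach * c_svt)
    else:
        # Fallback: PVT/VF, MVT, and SVT each get P(TACH) / 3.
        split = (p_tach / 3, p_tach / 3, p_tach / 3)
    return {"PVT/VF": split[0], "MVT": split[1], "SVT": split[2],
            "NOS": p_nos, "COS": p_cos}
```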
- a classification implementation may include one AI/ML stage with 5 classifications (e.g., PVT/VF, MVT, SVT, COS, and NOS).
- a classification implementation may include three AI/ML stages, where the first stage is the example shown in FIG. 24, a second stage classifies between PVT/MVT vs SVT, and then if PVT/MVT, a third stage further classifies into PVT or MVT.
- processing circuitry may analyze ECG segments 2492 from the relevant time period (e.g., segments that occurred before, during, and after the episode).
- the processing circuitry may implement any combination of the techniques disclosed herein to determine whether to deliver therapy to the patient for PVT/VF/MVT or to terminate the evaluation.
- the processing circuitry may apply classification logic 2598A-2598N shown in FIG. 25 (collectively, “classification logic 2598”) to the probabilities determined by machine learning models 2496 to determine whether to deliver therapy to the patient for PVT/VF/MVT or to terminate the evaluation. Only 2598A-2598C are shown in FIG. 25.
- the processing circuitry may be configured to determine whether to retrieve first additional data from the sensor device (e.g., IMD 10) and apply a second one or more machine learning models (e.g., machine learning models 2596B) to the first additional data.
- the additional data may start any amount of time (e.g., 15 seconds, 30 seconds, 60 seconds, etc.) after the initial classification or previous decision by the processing circuitry.
- the processing circuitry may determine various properties of each of segments 2492 during evaluation.
- the properties may include the probability of MP (e.g., the probability of MVT or PVT/VF), the probability of NC (probability of NOS or COS), the number of segments where the probability of RHY is > 0.98 (where RHY is any cardiac rhythm classification for which model(s) 2496 are configured, such as any of MP, SVT, and NC), the number of segments where the probability of RHY is the maximum, the average probability of RHY, the maximum probability of RHY, etc.
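- The per-rhythm segment properties listed above might be computed as in the following sketch; representing each segment as a dictionary of per-classification probabilities is an assumption for illustration:

```python
def segment_properties(seg_probs, rhythm, high=0.98):
    """Summary properties for one rhythm classification over a set of
    segments, where each segment is a dict of classification
    probabilities (e.g., {"MP": .., "SVT": .., "NC": ..})."""
    vals = [p[rhythm] for p in seg_probs]
    return {
        # Number of segments where P(RHY) exceeds the high threshold.
        "count_high": sum(1 for v in vals if v > high),
        # Number of segments where RHY is the maximum probability.
        "count_max": sum(1 for p in seg_probs
                         if max(p, key=p.get) == rhythm),
        "avg": sum(vals) / len(vals),  # average probability of RHY
        "max": max(vals),              # maximum probability of RHY
    }
```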
- the processing circuitry may implement classification logic (e.g., classification logic 498 shown in FIG. 19, classification logic 2598 shown in FIG. 25, or any other classification logic described herein) to output the overall classification for the episode.
- the processing circuitry may determine the overall rhythm classification for 4 combined segments by applying the following logic:
- the processing circuitry may check a first condition of whether the average probability (Probavg) of NC is greater than or equal to the average probability of MP and SVT.
- the average probability may refer to the median of the segments, the mean, or any other statistical measure of the set of segments. If the processing circuitry determines that the first condition is true, then the processing circuitry may check a subcondition of whether the average probability of NOS is greater than or equal to the average probability of COS. If the average probability of NOS is greater than or equal to the average probability of COS, the processing circuitry may classify the Rhythm as NOS.
- the processing circuitry may check a second condition of whether the average probability of SVT is greater than or equal to the average probability of both NC and MP. If the second condition is true, then the processing circuitry may classify the Rhythm as SVT. If the processing circuitry determines that the second condition is also false, the processing circuitry may check a third condition of whether the average probability of MP is greater than or equal to the average probability of both NC and SVT.
- the processing circuitry may check a sub-condition of whether the average probability of PVT is greater than or equal to the average probability of MVT. If the average probability of PVT is greater than or equal to the average probability of MVT, then the processing circuitry may classify the Rhythm as PVT. If the average probability of MVT is greater than or equal to the average probability of PVT, then the processing circuitry may classify the Rhythm as MVT.
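- The cascaded averaging logic described above can be sketched as follows; representing the average probabilities as a dictionary keyed by classification is an assumption for illustration:

```python
def classify_rhythm(avg):
    """Cascaded classification over average probabilities; `avg` maps
    each classification (NC, MP, SVT, NOS, COS, PVT, MVT) to its
    average probability over the combined segments."""
    # First condition: NC >= both MP and SVT -> NOS/COS sub-condition.
    if avg["NC"] >= avg["MP"] and avg["NC"] >= avg["SVT"]:
        return "NOS" if avg["NOS"] >= avg["COS"] else "COS"
    # Second condition: SVT >= both NC and MP.
    if avg["SVT"] >= avg["NC"] and avg["SVT"] >= avg["MP"]:
        return "SVT"
    # Third condition holds (MP >= both NC and SVT); discriminate
    # PVT vs. MVT via the sub-condition.
    return "PVT" if avg["PVT"] >= avg["MVT"] else "MVT"
```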
- FIG. 25 is a conceptual flow diagram illustrating example operation of machine learning models and classification logic by a system in accordance with techniques of this disclosure.
- processing circuitry may implement one or more machine learning models, such as machine learning models 2596A-2596C (collectively, “machine learning models 2596”), and different configurations of classification logic, such as classification logic 2598A-2598C (collectively, “classification logic 2598”), to perform the classification.
- the processing circuitry may initially evaluate (e.g., using machine learning models and/or non-machine learning rules) one or more segments, such as segments 2592A-2592C (collectively, “segments 2592”) of data, and apply classification logic 2598A-2598C to classify the rhythm (e.g., as NOS, COS, SVT, MVT, PVT/VF, etc.) as well as determine an action to perform.
- machine learning models 2596A and classification logic 2598A may determine an action of actions 2599A to perform
- machine learning models 2596B and classification logic 2598B may determine an action of actions 2599B to perform
- machine learning models 2596C and classification logic 2598C may determine an action of actions 2599C to perform.
- Any two or more of machine learning models 2596A-2596C may be the same or substantially similar to each other (e.g., first machine learning models 2596A may be the same as second machine learning models 2596B and/or third machine learning models 2596C).
- Any two or more of classification logic 2598A-2598C may be the same or substantially similar to each other (e.g., first classification logic 2598A may be the same as second classification logic 2598B and/or third classification logic 2598C).
- the processing circuitry may either deliver therapy (e.g., because the patient is experiencing an emergency) or terminate the evaluation (e.g., because the patient is not experiencing an emergency). However, if the evidence is unclear, the processing circuitry may continue collecting data by retrieving first additional data (e.g., segments 2592B), second additional data (e.g., segments 2592C), and so on.
- the processing circuitry may request and evaluate additional data (e.g., 15 seconds of data, 30 seconds of data, 60 seconds of data, etc.) after any period of time following a prior decision by the one or more machine learning models and/or classification logic, such as 15 seconds, 30 seconds, 60 seconds, etc.
- the processing circuitry may determine whether to retrieve first additional data (e.g., 30 seconds of additional data 15 seconds after the previous decision), which may include segments 2592B, from IMD 10, or retrieve second additional data (e.g., 30 seconds of data 60 seconds later), which may include segments 2592C, from IMD 10 and apply machine learning models 2596C.
- the processing circuitry may apply machine learning models 2596B and second classification logic 2598B. If the processing circuitry determines to retrieve the second additional data, the processing circuitry may apply machine learning models 2596C and third classification logic 2598C.
- the processing circuitry may iteratively retrieve and evaluate additional data and reassess whether to deliver therapy or terminate the evaluation in view of the additional data.
- the processing circuitry may repeat the process of retrieving and evaluating additional data (e.g., using the same or different logic, such as more sensitive logic as the number of data requests increases) until either the processing circuitry successfully classifies the acute health event with a high degree of certainty or the evaluation times out (e.g., the duration of the evaluation reaches a time limit).
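- The retrieve-and-reassess loop described above may be sketched as follows; the callable interfaces and the iteration count standing in for a wall-clock time limit are assumptions for illustration:

```python
def evaluate_episode(classify, fetch_more, time_limit=4):
    """Iteratively classify an episode, retrieving additional data while
    the evidence is unclear, until a therapy/terminate decision is made
    or the evaluation times out.

    `classify` returns "therapy", "terminate", or "unclear";
    `fetch_more` retrieves the next batch of segments (both hypothetical).
    """
    segments = fetch_more()
    for _ in range(time_limit):  # stand-in for a wall-clock time limit
        decision = classify(segments)
        if decision in ("therapy", "terminate"):
            return decision
        segments = segments + fetch_more()  # retrieve additional data
    return "terminate"  # timed out without a high-certainty classification
```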
- the resolution or sensitivity of the additional data may depend on the data request sent by the processing circuitry.
- the processing circuitry may receive higher resolution data in response to a 15-second data request and lower resolution data in response to a 60-second data request.
- the processing circuitry may consider a variety of factors when determining whether to send a shorter-duration data request or a longer-duration data request, such as the need for higher resolution (e.g., if the initial data were too noisy), the urgency of the patient’s condition, and so on.
- the processing circuitry may determine whether to retrieve first additional data or second additional data.
- the first additional data may derive from a first time period (e.g., a time period that begins 15 seconds after an earlier classification or decision), and the second additional data may derive from a second time period (e.g., a time period that begins 60 seconds after an earlier classification or decision).
- the properties of the data may differ depending on the outputs of machine learning models 2596 and/or classification logic 2598.
- the outputs may indicate that the first additional data or the second additional data is necessary to determine the classification of the acute health event.
- a subsequent output of machine learning models 2596 and/or classification logic 2598 may result in a different indication (e.g., a different data request). This process may reiterate until a therapy or terminate decision is made or until the process times out.
- the criteria included in Table 1 of FIG. 26 for application by classification logic 2598 may include IF, THEN, AND, OR logic statements that include various variables, thresholds, and comparison statements (e.g., less than, greater than, equal to).
- the variables may include, for each class (e.g., NC, MP, SVT): counts of segments where the class is the max probability; counts of segments where the probability of the class is above a threshold; the probability of the class in segment X; the average probability of the class over all segments; the maximum probability of the class over all segments; etc.
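A minimal sketch of computing the per-class summary variables listed above from per-segment model probabilities; the class labels (e.g., NC) and the 0.5 threshold are illustrative assumptions:

```python
def summarize(segment_probs, cls, threshold=0.5):
    """Summary variables for one class over an episode.

    segment_probs: one dict per segment, mapping class name -> probability.
    cls: the class to summarize (e.g., "NC", "MP", "SVT").
    """
    values = [p[cls] for p in segment_probs]
    return {
        # count of segments where this class has the maximum probability
        "count_max": sum(1 for p in segment_probs if max(p, key=p.get) == cls),
        # count of segments where this class's probability exceeds the threshold
        "count_above": sum(1 for v in values if v > threshold),
        "average": sum(values) / len(values),
        "maximum": max(values),
    }
```

Classification logic of the kind shown in the tables could then compare these summary values against thresholds in IF/THEN/AND/OR statements.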
- Table 2 of FIG. 27 shows example LOGIC(X) as a function of X for subsequent evaluation following a “Data Request - 15s” for continued monitoring in case of some but uncertain evidence of PVT/VF/MVT.
- "Data Request - 15s" refers to a request for additional data (e.g., 30 seconds of data) made 15 seconds after an earlier classification or decision.
- "Data Request - 15s" is an example action from the set of actions 2599 that the processing circuitry may perform in response to applying machine learning models 2596 and classification logic 2598.
- the logic in Table 2 makes it more likely to call the episode PVT/MVT and more likely to deliver therapy, i.e., it is logic with higher sensitivity to VT/VF.
- the criteria included in Table 2 of FIG. 27 for application by classification logic 2598 may include IF, THEN, AND, OR logic statements that include various variables, thresholds, and comparison statements (e.g., less than, greater than, equal to).
- the variables may include, for each class (e.g., NC, MP, SVT): counts of segments where the class is the max probability; counts of segments where the probability of the class is above a threshold; the probability of the class in segment X; the average probability of the class over all segments; the maximum probability of the class over all segments; etc.
- the criteria in Table 2 may be more sensitive than the criteria in Table 1.
- thresholds in Table 2 may be easier to satisfy than thresholds in Table 1, possibly resulting in more detections/classifications.
- the processing circuitry may include fewer or more levels of logic, such as levels based on probability and length of an event. For example, if the evidence is uncertain such that the processing circuitry requests additional data, the processing circuitry may increase the resolution of the data in order to collect more detailed information (which may facilitate a determination).
- This logic structure may even incorporate expert knowledge (e.g., make determinations based on input from clinicians, academics, or other experts on behavior, progression, or other aspects of cardiac arrhythmias).
- the processing circuitry may evaluate additional factors when determining to perform and/or performing an action. Additional factors or considerations may include cycle length cutoff, duration programming, timeout considerations, time of day, etc.
- the processing circuitry may estimate the cycle length by autocorrelating segments 2492 over a range of sample lags (e.g., 0-150 sample lags). The processing circuitry may find the first peak of the signal (e.g., by detecting when a difference signal crosses zero) with an autocorrelation value > 0.5 after a certain number of samples (e.g., 20, which may be referred to as a blanking of 20). The processing circuitry may compute the cycle length as (first peak location) * 7.8125 ms.
- the processing circuitry may evaluate cycle length in addition to the logic described above (e.g., shown in Tables 1 and 2 of FIG. 26 and FIG. 27) to determine whether to deliver therapy, continue collecting data, or terminate the episode. For example, if the processing circuitry determines that the rhythm is MVT, the processing circuitry may count the number of segments with cycle length less than 300 ms. If the number of segments with cycle length less than 300 ms is equal to 0, then instead of delivering therapy, the processing circuitry may continue collecting data to keep monitoring the MVT in case of acceleration of the MVT.
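The MVT cycle-length gate described above (withhold therapy and keep monitoring when no segment has a cycle length under 300 ms) might be sketched as follows; the function and action names are illustrative:

```python
def mvt_action(cycle_lengths_ms, fast_cutoff_ms=300):
    """Decide between therapy and continued monitoring for an MVT rhythm.

    cycle_lengths_ms: per-segment cycle length estimates in milliseconds.
    """
    # Count segments with cycle length less than the 300 ms cutoff.
    fast = sum(1 for cl in cycle_lengths_ms if cl < fast_cutoff_ms)
    # With no fast segments, keep collecting data in case the MVT accelerates.
    return "deliver_therapy" if fast > 0 else "continue_collecting"
```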
- the processing circuitry may estimate episode duration and use the episode duration estimate when determining an action to perform.
- the processing circuitry may determine episode duration by computing the difference between the current data timestamp (until offset occurs) and an onset stamp for the episode as provided by IMD 10.
- If the result of the evaluation is to deliver therapy, the processing circuitry may determine whether the rhythm is PVT/VF or MVT, and may deliver therapy only if the episode duration exceeds the programmed sustained duration for PVT/VF or MVT, respectively.
- the processing circuitry may deliver therapy if the episode duration exceeds programmed sustained duration for MVT.
- the processing circuitry may instead use the programmed sustained duration for PVT if the number of segments with cycle length less than 200 ms is equal to or greater than a ratio (e.g., 1 of 4 segments).
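A hedged sketch of the sustained-duration check described above: the PVT/VF programmed duration applies when at least the example ratio (1 of 4 segments) has a cycle length under 200 ms, and the MVT duration applies otherwise. The specific programmed duration values are hypothetical:

```python
def should_deliver(episode_duration_s, cycle_lengths_ms,
                   pvt_sustained_s=10, mvt_sustained_s=20,
                   fast_ms=200, ratio=0.25):
    """Gate therapy on episode duration vs. the applicable sustained duration."""
    fast_fraction = (sum(1 for cl in cycle_lengths_ms if cl < fast_ms)
                     / len(cycle_lengths_ms))
    # Enough very fast segments -> treat as PVT/VF and use its duration.
    sustained = pvt_sustained_s if fast_fraction >= ratio else mvt_sustained_s
    return episode_duration_s > sustained
```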
- the processing circuitry may stop processing the episode and not deliver therapy.
- the processing circuitry may stop the evaluation at that point.
- the processing circuitry reevaluates the new data and may continue collecting additional data until the episode duration exceeds the Data Request Timeout duration.
- the processing circuitry may stop evaluation at this time and effectively transition to the “Terminate” state.
- a Timeout duration may be set to 5 minutes to minimize false positives instead of checking consistently for a longer period.
- the classification techniques described herein may be implemented to determine whether to deliver a responsive therapy to an acute event, such as antitachyarrhythmia therapy by IMD 10C in response to an arrhythmia detected by IMD 10C.
- the techniques for classifying episode data, e.g., as described with respect to FIGS. 13-22, 24, and 25, may be used to determine whether a suspected lethal tachyarrhythmia should be treated by IMD 10C.
- HMS 22 or computing device 12 may perform the analysis and communicate with IMD 10C regarding whether treatment is warranted.
- FIG. 28 is a flowchart illustrating one example method of the disclosure.
- the techniques of FIG. 28 may be performed by processing circuitry, such as processing circuitry 50A of IMD 10A of FIG. 2A, processing circuitry 50B of IMD 10B of FIG. 2B, and/or processing circuitry 130 of computing device 12 of FIG. 3.
- computing device 12 may be configured to receive episode data associated with an acute health event detected by an implantable medical device (e.g., IMD 10A or IMD 10B) (2800).
- Computing device 12 may be further configured to apply a first one or more machine learning models and first classification logic to the episode data to determine a first respective classification of a plurality of predetermined classifications for the episode data (2810).
- computing device 12 may further determine whether to retrieve first additional data from the implantable medical device and apply a second one or more machine learning models and second classification logic to the first additional data, or retrieve second additional data from the implantable medical device and apply a third one or more machine learning models and third classification logic to the second additional data (2820).
- Computing device 12 may further determine a classification of the acute health event from the plurality of predetermined classifications based on the application of the second one or more machine learning models to the first additional data or the third one or more machine learning models to the second additional data (2830).
- Computing device 12 may also determine whether to control the implantable medical device (e.g., therapy delivery circuitry 57B of IMD 10B) to deliver therapy based on the classification (2840).
- FIG. 29 is a process diagram illustrating an example implementation of detecting an acute health emergency in accordance with the techniques of this disclosure.
- FIG. 29 illustrates a process 2900 that may be performed by IMD 10 and computing device 12.
- process 2900 includes IMD 10 sending episode data to computing device 12 on a periodic basis.
- Computing device 12 may then apply classification logic to the episode data to determine a classification, of a plurality of predetermined classifications, for the episode data.
- IMD 10 may first detect an episode trigger at 2902. IMD 10 may then send collected episode data to computing device 12 at 2904. IMD 10 may continuously store episode data in memory on IMD 10. Computing device 12 may periodically request fresh episode data to evaluate. In some examples, computing device 12 may request episode data on a periodic basis, such as every 15 seconds, every 60 seconds, or some other time period. Computing device 12 may receive the episode data at 2906, and apply classification logic to the episode data at 2908. For example, computing device 12 may apply a first one or more machine learning models and first classification logic to the episode data to determine a first respective classification of a plurality of predetermined classifications for the episode data.
- the episode data may include a plurality of segments, and computing device 12 may apply the first one or more machine learning models to each segment of the plurality of segments.
- computing device 12 may determine if additional data is needed or whether or not a health emergency is detected. If, at 2910, computing device 12 determines that additional data is needed, computing device 12 may request IMD 10 to send additional collected episode data at 2904. Computing device 12 may receive the additional episode data at 2906, and may then apply a second one or more machine learning models and second classification logic to the additional data, and/or may apply a third one or more machine learning models and third classification logic to the additional data.
- computing device 12 may determine a classification of the acute health event from the plurality of predetermined classifications based on the application of the second one or more machine learning models to the additional data or the third one or more machine learning models to the additional data. If computing device 12 determines that the acute health event is a health emergency, computing device 12 may output an alarm signal at 2912. If computing device 12 determines that the acute health event is not a health emergency, computing device 12 may end process 2900. In some examples, computing device 12 may also be configured to determine whether to control IMD 10 to deliver therapy based on the classification.
- FIG. 30 is a process diagram illustrating another example implementation of detecting an acute health emergency in accordance with the techniques of this disclosure.
- FIG. 30 illustrates a process 3000 that may be performed by IMD 10 and computing device 12.
- process 3000 includes IMD 10 applying classification logic to the episode data to determine a classification, of a plurality of predetermined classifications, for the episode data and sending alert data to computing device 12.
- IMD 10 may first detect an episode trigger at 3002. IMD 10 may apply classification logic to episode data at 3004. For example, IMD 10 may apply a first one or more machine learning models and first classification logic to the episode data to determine a first respective classification of a plurality of predetermined classifications for the episode data.
- the episode data may include a plurality of segments, and IMD 10 may apply the first one or more machine learning models to each segment of the plurality of segments.
- IMD 10 may determine if additional data is needed.
- IMD 10 may gather additional data. IMD 10 may then apply a second one or more machine learning models and second classification logic to the additional data, and/or may apply a third one or more machine learning models and third classification logic to the additional data. Unlike process 2900, in process 3000, IMD 10 may be configured to evaluate episode data more continuously rather than periodically (e.g., every 15 seconds or every 60 seconds). IMD 10 may further determine a classification of the acute health event from the plurality of predetermined classifications based on the application of the second one or more machine learning models to the additional data or the third one or more machine learning models to the additional data.
- IMD 10 may send alert data to computing device 12 at 3010.
- Computing device 12 may receive the alert data (at 3012) and may output an alarm signal (3014).
- IMD 10 may end process 3000. In process 3000, IMD 10 may not be required to communicate with computing device 12 unless sending alert data. That is, compared to process 2900, process 3000 may have fewer communications between IMD 10 and computing device 12, thus saving power.
- IMD 10 and/or computing device 12 may also be configured to determine whether to control IMD 10 to deliver therapy based on the classification.
- FIG. 31 is a process diagram illustrating another example implementation of detecting an acute health emergency in accordance with the techniques of this disclosure.
- FIG. 31 illustrates a process 3100 that may be performed by IMD 10 and computing device 12.
- process 3100 includes IMD 10 intermittently and/or continuously sending episode data to computing device 12.
- Computing device 12 may then apply classification logic to the episode data to determine a classification, of a plurality of predetermined classifications, for the episode data.
- Computing device 12 may receive the episode data at 3106, and apply classification logic to the episode data at 3108. For example, computing device 12 may apply a first one or more machine learning models and first classification logic to the episode data to determine a first respective classification of a plurality of predetermined classifications for the episode data.
- the episode data may include a plurality of segments, and computing device 12 may apply the first one or more machine learning models to each segment of the plurality of segments.
- computing device 12 may determine if additional data is needed or whether or not a health emergency is detected.
- computing device 12 may continue to apply classification logic to episode data until computing device 12 has made a determination of a health event. For example, at 3110, computing device 12 may determine a classification of the acute health event from the plurality of predetermined classifications based on the application of the second one or more machine learning models to the additional data or the third one or more machine learning models to the additional data. If computing device 12 determines that the acute health event is a health emergency, computing device 12 may output an alarm signal at 3112. If computing device 12 determines that the acute health event is not a health emergency, computing device 12 may end process 3100. In some examples, computing device 12 may also be configured to determine whether to control IMD 10 to deliver therapy based on the classification.
- the described techniques may be implemented in hardware, software, firmware, or any combination thereof. If implemented in software, the functions may be stored as one or more instructions or code on a computer-readable medium and executed by a hardware-based processing unit.
- Computer-readable media may include non-transitory computer-readable media, which corresponds to a tangible medium such as data storage media (e.g., RAM, ROM, EEPROM, flash memory, or any other medium that can be used to store desired program code in the form of instructions or data structures and that can be accessed by a computer).
- the instructions may be executed by one or more processors, such as one or more digital signal processors (DSPs), general purpose microprocessors, application specific integrated circuits (ASICs), field programmable logic arrays (FPGAs), or other equivalent integrated or discrete logic circuitry.
- the term "processor," as used herein, may refer to any of the foregoing structure or any other physical structure suitable for implementation of the described techniques. Also, the techniques could be fully implemented in one or more circuits or logic elements.
- Example 5 The medical device system of any one or more of examples 1 to 4, wherein the processing circuitry is configured to: apply the one or more machine learning models to each segment of a plurality of segments of the episode data to determine a respective probability associated with the respective classification for each segment of the plurality of segments; and determine the classification of the acute health event based on the respective probabilities.
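One way the per-segment probabilities of Example 5 could be aggregated into an episode-level classification is by averaging; averaging is only one possible aggregation choice, and the class labels are assumptions:

```python
def classify_episode(segment_probs):
    """Aggregate per-segment class probabilities into an episode classification.

    segment_probs: one dict per segment, mapping class name -> probability.
    Returns the class with the highest average probability and the averages.
    """
    classes = segment_probs[0].keys()
    avg = {c: sum(p[c] for p in segment_probs) / len(segment_probs)
           for c in classes}
    return max(avg, key=avg.get), avg
```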
- Example 10 The medical device system of example 1, wherein the one or more machine learning models comprise a first one or more machine learning models, and wherein, to determine the classification of acute health event from the plurality of predetermined classifications based on the respective classifications of a plurality of segments of the episode data, the processing circuitry is configured to apply the respective classifications to a second one or more machine learning models.
- Example 11 The medical device system of example 10, wherein the second one or more machine learning models comprise a long short-term memory network.
- Example 12 The medical device system of any one or more of examples 1 to 11, wherein the medical device system comprises a smartphone.
- Example 14 The medical device system of any one or more of examples 1 to 13, wherein the processing circuitry is configured to apply the first machine learning model to each segment of a plurality of segments of the episode data to determine, for each segment of the plurality of segments, the first respective classification of the plurality of predetermined classifications for the episode data.
- Example 16 The medical device system of any one or more of examples 1 to
- Example 17 The medical device system of any one or more of examples 1 to
- Example 18 The medical device system of example 17, wherein the duration of the first time period is approximately 15 seconds, and wherein the duration of the second time period is approximately 60 seconds.
- Example 20 The medical device system of any one or more of examples 1 to
- Example 21 The medical device system of any one or more of examples 1 to 20, wherein the processing circuitry is configured to determine whether to retrieve the first additional data or the second additional data based on noisiness of the episode data.
- Example 22 The medical device system of any one or more of examples 1 to 21, wherein the processing circuitry is configured to determine whether to retrieve the first additional data or the second additional data based on a health risk severity of the first respective classification.
- Example 23 The medical device system of any one or more of examples 1 to
- Example 24 The medical device system of any one or more of examples 1 to
- Example 25 The medical device system of any one or more of examples 1 to
- the implantable medical device comprises an insertable cardiac monitor that includes a housing configured for subcutaneous implantation in the patient, the housing having a length between 40 millimeters (mm) and 60 mm between a first end and a second end, a width less than the length, and a depth less than the width; a first electrode at or proximate to the first end; a second electrode at or proximate to the second end; and circuitry within the housing and configured to sense an electrocardiogram corresponding to the episode data via the first electrode and the second electrode and detect the acute health event based on the electrocardiogram.
Abstract
A medical device system includes processing circuitry configured to receive episode data for an acute health event detected by an implantable medical device. The processing circuitry is configured to apply one or more machine learning models and classification logic to the episode data to determine a first classification. Based on a probability of the first classification, the processing circuitry is configured to retrieve additional data and apply one or more machine learning models and classification logic. The processing circuitry is configured to deliver therapy based on the application of the one or more machine learning models and classification logic to the additional data.
Description
DELIVERING THERAPY BASED ON MACHINE LEARNING MODEL CLASSIFICATION OF HEALTH EVENTS
[0001] This application claims the benefit of U.S. Provisional Patent Application Serial No. 63/610,318, filed December 14, 2023, and U.S. Provisional Patent Application Serial No. 63/639,335, filed April 26, 2024, the entire contents of each of which are incorporated herein by reference.
FIELD
[0002] This disclosure generally relates to systems including medical devices and, more particularly, to monitoring of patient health using such systems.
BACKGROUND
[0003] A variety of devices are configured to monitor physiological signals of a patient. Such devices include implantable or wearable medical devices, as well as a variety of wearable health or fitness tracking devices. The physiological signals sensed by such devices include, as examples, electrocardiogram (ECG) signals, respiration signals, perfusion signals, activity and/or posture signals, pressure signals, blood oxygen saturation signals, body composition, and blood glucose or other blood constituent signals. In general, using these signals, such devices facilitate monitoring and evaluating patient health over a number of months or years, outside of a clinic setting.
[0004] In some cases, such devices are configured to detect acute health events based on the physiological signals, such as episodes of cardiac arrhythmia, myocardial infarction, stroke, or seizure. Example arrhythmia types include cardiac arrest (e.g., asystole), ventricular tachycardia (VT), and ventricular fibrillation (VF). The devices may store ECG and other physiological signal data collected during a time period including an episode as episode data. Such acute health events are associated with significant rates of death, particularly if not treated quickly.
[0005] For example, VF and other malignant tachyarrhythmias are the most commonly identified arrhythmia in sudden cardiac arrest (SCA) patients. If this arrhythmia continues for more than a few seconds, it may result in cardiogenic shock and cessation of effective blood circulation. The survival rate from SCA decreases by 7 to 10 percent for every minute that the patient waits for defibrillation. Consequently, sudden cardiac death (SCD) may result in a matter of minutes.
SUMMARY
[0006] In general, the disclosure describes techniques for detection of acute health events, such as VT, VF, and/or SCA, by monitoring patient parameter data, such as ECG data. More particularly, the disclosure describes techniques for applying rules, which may include one or more machine learning models, to patient parameter data to detect acute health events. The techniques include configuring rules and/or the application of the rules to the patient parameter data in order to improve the efficiency and effectiveness of the detection of acute health events. For example, the techniques may include applying one or more machine learning (ML) models to each of a plurality of segments of patient parameter data (e.g., episode data) received from a sensor device in response to the sensor device detecting an acute health event to determine a classification of the episode from a plurality of predetermined classifications. One or more of the possible classifications are acute health event(s) of interest, such as potentially lethal tachyarrhythmias that may result in SCA.
[0007] Unlike conventional acute health event (e.g., potentially lethal ventricular tachyarrhythmia or other SCA) detection systems, the techniques and systems of this disclosure may use one or more classifiers to more accurately classify the acute health event as one of a plurality of classifications that are clinically relevant to the actions taken or not taken by a system on behalf of the patient and a caregiving team of the patient. The classifications may include ventricular tachyarrhythmias of different severities, such as VF and polymorphic VT, or monomorphic VT, as well as classifications for which no action or cancelation of action may be appropriate, such as supraventricular tachycardia, oversensing, or other noise.
[0008] In this manner, the system may avoid expensive medical system and user response, and/or delivery of unnecessary therapy, to likely erroneous determinations regarding the health of the patient. In some examples, the machine learning model is trained with a set of training instances, where one or more of the training instances comprise data that indicate relationships between patient parameter data and
classifications related to the acute health event, e.g., related to potentially lethal cardiac arrhythmias. Because the machine learning model is trained with potentially thousands or millions of training instances, the machine learning model may, for example, reduce the amount of classification error in classifying ECG data as different arrhythmia classifications when compared to conventional detection systems.
[0009] Additionally, the techniques and systems of this disclosure may be implemented with an implantable medical device (IMD) that can continuously (e.g., on a periodic or triggered basis without human intervention) sense the ECG and/or other patient parameter data while subcutaneously implanted in a patient over months or years and perform numerous operations per second on patient parameter data to enable the systems herein to detect acute health events. Using techniques of this disclosure with an IMD may be advantageous when a physician cannot be continuously present with the patient over weeks or months to evaluate the patient parameter data and/or where performing the operations on the ECG and/or other patient parameter described herein (e.g., application of a machine learning model) on weeks or months of data could not practically be performed in the mind of a physician.
[0010] In some examples, processing circuitry of a computing device configured to wirelessly communicate with an IMD or other medical device applies a machine learning model to patient parameter data as a second set of rules to confirm or reject detection of an acute health event by the medical device using a first set of rules. Reducing classification errors for acute health events with a machine learning model implementing techniques of this disclosure may provide one or more technical and clinical advantages. For example, improved specificity and sensitivity may increase the ability of another device, user, and/or clinician to rely on the accuracy of the system’s assessment of the patient’s condition and improve resulting treatment of the patient and patient outcomes.
[0011] In some examples, processing circuitry may operate on different segments of data, such as ECG data. Segments of data may include a segment from a period of time at the onset of arrhythmia, another segment when the episode reaches sustained detection, and multiple ongoing segments thereafter. The segments may be contiguous, separated by time, and/or overlapping. Segment-based classification of episode data according to the techniques described herein may improve the accuracy of classification/detection of health events, particularly in situations where shorter segments of continuous episode data are
available to train the one or more ML models. Segment-based classification of episode data according to the techniques described herein may improve the accuracy of classification/detection of health events where the patient condition may change during an episode, e.g., where a tachyarrhythmia may spontaneously terminate or change during an episode.
[0012] In some examples, a medical device system includes an implantable medical device configured to: detect an acute health event; collect episode data associated with the acute health event; transmit the episode data; and deliver therapy to a patient; and processing circuitry configured to: receive episode data for the acute health event detected by the implantable medical device, the episode data transmitted by the implantable medical device in response to detecting the acute health event; apply a first one or more machine learning models and first classification logic to the episode data to determine a first respective classification of a plurality of predetermined classifications for the episode data; based on a probability of the first respective classification determined by the first one or more machine learning models, determine whether to: retrieve first additional data from the implantable medical device and apply a second one or more machine learning models and second classification logic to the first additional data, or retrieve second additional data from the implantable medical device and apply a third one or more machine learning models and third classification logic to the second additional data; determine a classification of the acute health event from the plurality of predetermined classifications based on the application of the second one or more machine learning models to the first additional data or the third one or more machine learning models to the second additional data; and determine whether to control the implantable medical device to deliver therapy based on the classification.
[0013] In some examples, a computing device includes communication circuitry configured to wirelessly communicate with a sensor device on a patient or implanted within the patient; one or more output devices; and processing circuitry configured to: determine a first classification of an acute health event based on applying a first one or more machine learning models and first classification logic to episode data; and based on a probability of the first classification, apply at least one of a second one or more machine learning models or a second classification logic to additional data to determine a final classification of the acute health event.
[0014] This summary is intended to provide an overview of the subject matter described in this disclosure. It is not intended to provide an exclusive or exhaustive explanation of the apparatus and methods described in detail within the accompanying drawings and description below. Further details of one or more examples are set forth in the accompanying drawings and the description below.
BRIEF DESCRIPTION OF DRAWINGS
[0015] FIG. 1 is a block diagram illustrating an example system configured to detect acute health events of a patient, and to respond to such detections, in accordance with one or more techniques of this disclosure.
[0016] FIG. 2A is a block diagram illustrating an example configuration of a patient sensing device that operates in accordance with one or more techniques of the present disclosure.
[0017] FIG. 2B is a block diagram illustrating an example configuration of a patient sensing device, including therapy delivery circuitry, that operates in accordance with one or more techniques of the present disclosure.
[0018] FIG. 3 is a block diagram illustrating an example configuration of a computing device that operates in accordance with one or more techniques of the present disclosure.
[0019] FIG. 4 is a block diagram illustrating an example configuration of a health monitoring system that operates in accordance with one or more techniques of the present disclosure.
[0020] FIG. 5 is a flow diagram illustrating an example operation for applying rules to patient parameter data to determine whether an acute health event is detected.
[0021] FIG. 6 is a flow diagram illustrating another example operation for applying rules to patient parameter data to determine whether an acute health event is detected.
[0022] FIG. 7 is a flow diagram illustrating an example operation for configuring rules applied to patient parameter data to determine whether an acute health event is detected for a patient.
[0023] FIG. 8 is a flow diagram illustrating another example operation for configuring rules applied to patient parameter data to determine whether an acute health event is detected for a patient.
[0024] FIG. 9 is a block diagram illustrating an example of an ensemble of neural networks configured to classify ventricular tachyarrhythmias.
[0025] FIG. 10 is a block diagram illustrating an example of a single classifier utilizing raw signals and derived features.
[0026] FIG. 11 is a block diagram illustrating a staged approach for classifying a ventricular tachyarrhythmia episode.
[0027] FIGS. 12A and 12B illustrate frequency decompositions of a monomorphic ventricular tachycardia episode and a supraventricular tachycardia episode, respectively.
[0028] FIG. 13 is a block diagram illustrating an example configuration of a classifier configured to classify episode data.
[0029] FIGS. 14-17 are tables illustrating example segment classifications and associated episode classifications.
[0030] FIG. 18 is a flow diagram illustrating an example operation of the classifier of FIG. 13.
[0031] FIG. 19 is a block diagram illustrating an example configuration of a classifier configured to classify episode data.
[0032] FIG. 20 is a flow diagram illustrating an example operation of the classifier of FIG. 19.
[0033] FIG. 21 is a conceptual diagram illustrating an example machine learning model configured to determine an extent to which data of a patient indicates an acute health event.
[0034] FIG. 22 is a conceptual diagram illustrating an example training process for a machine learning model, in accordance with examples of the current disclosure.
[0035] FIG. 23A is a perspective drawing illustrating an insertable cardiac monitor.
[0036] FIG. 23B is a perspective drawing illustrating another insertable cardiac monitor.
[0037] FIG. 24 is a conceptual flow diagram illustrating an example operation of one or more machine learning models, in accordance with examples of the current disclosure.
[0038] FIG. 25 is a conceptual flow diagram illustrating example classification logic by a system, in accordance with examples of the current disclosure.
[0039] FIG. 26 is a table showing one example of logic for initial and subsequent evaluation following a 60-second data request.
[0040] FIG. 27 is a table showing one example of subsequent evaluation logic following a 15-second data request.
[0041] FIG. 28 is a flowchart illustrating one example method of the disclosure.
[0042] FIG. 29 is a process diagram illustrating an example implementation of detecting an acute health emergency in accordance with the techniques of this disclosure.
[0043] FIG. 30 is a process diagram illustrating another example implementation of detecting an acute health emergency in accordance with the techniques of this disclosure.
[0044] FIG. 31 is a process diagram illustrating another example implementation of detecting an acute health emergency in accordance with the techniques of this disclosure.
[0045] Like reference characters refer to like elements throughout the figures and description.
DETAILED DESCRIPTION
[0046] A variety of types of implantable and external devices are configured to detect arrhythmia episodes and other acute health events based on sensed ECGs and, in some cases, other physiological signals. External devices that may be used to non-invasively sense and monitor ECGs and other physiological signals include wearable devices with electrodes configured to contact the skin of the patient, such as patches, watches, rings, necklaces, hearing aids, a wearable cardiac monitor or automated external defibrillator (AED), clothing, car seats, or bed linens. Such external devices may facilitate relatively longer-term monitoring of patient health during normal daily activities.
[0047] Implantable medical devices (IMDs) also sense and monitor ECGs and other physiological signals, and detect acute health events such as episodes of arrhythmia, cardiac arrest, myocardial infarction, stroke, and seizure. Example IMDs include pacemakers and implantable cardioverter-defibrillators, which may be coupled to intravascular or extravascular leads, as well as pacemakers with housings configured for implantation within the heart, which may be leadless. Some IMDs do not provide therapy, such as implantable patient monitors. Examples of such IMDs include the Reveal LINQ™ and LINQ II™ Insertable Cardiac Monitors (ICMs), available from Medtronic, Inc., which may be inserted subcutaneously. Such IMDs may facilitate relatively longer-term monitoring of patients during normal daily activities, and may periodically transmit collected data, e.g., episode data for detected arrhythmia episodes, to a remote patient monitoring system, such as the Medtronic Carelink™ Network.
[0048] FIG. 1 is a block diagram illustrating an example system 2 configured to detect acute health events of a patient 4, and to respond to such detection, in accordance with one or more techniques of this disclosure. As used herein, the terms “detect,” “detection,” and the like may refer to detection of an acute health event presently (at the time the data is collected) being experienced by patient 4, as well as detection based on the data that the condition of patient 4 is such that they have a suprathreshold likelihood of experiencing the event within a particular timeframe, e.g., prediction of the acute health event. The example techniques may be used with one or more patient sensing devices, e.g., IMD 10, which may be in wireless communication with one or more patient computing devices, e.g., patient computing devices 12A and 12B (collectively, “patient computing devices 12”). Although not illustrated in FIG. 1, IMD 10 includes electrodes and other sensors to sense physiological signals of patient 4, and may collect and store sensed physiological data based on the signals and detect episodes based on the data.
[0049] IMD 10 may be implanted outside of a thoracic cavity of patient 4 (e.g., subcutaneously in the pectoral location illustrated in FIG. 1). IMD 10 may be positioned near the sternum near or just below the level of the heart of patient 4, e.g., at least partially within the cardiac silhouette. In some examples, IMD 10 takes the form of a LINQ ICM. Although described primarily in the context of examples in which IMD 10 takes the form of an ICM, the techniques of this disclosure may be implemented in systems including any one or more implantable or external medical devices, including monitors, pacemakers, defibrillators (e.g., subcutaneous or substernal), wearable automated external defibrillators (WAEDs), neurostimulators, or drug pumps. Furthermore, although described primarily in the context of examples including a single implanted patient sensing device, in some examples a system includes one or more patient sensing devices, which may be implanted within patient 4 or external to (e.g., worn by) patient 4. For example, a system with two IMDs 10 may capture different values of a common patient parameter with different resolution/accuracy based on their respective locations. In some examples, instead of or in addition to a second IMD 10, system 2 may include a ventricular assist device or WAED in addition to IMD 10.
[0050] Patient computing devices 12 are configured for wireless communication with IMD 10. Computing devices 12 retrieve event data and other sensed physiological data from IMD 10 that was collected and stored by the IMD. In some examples, computing devices 12 take the form of personal computing devices of patient 4. For example, computing device 12A may take the form of a smartphone of patient 4, and computing device 12B may take the form of a smartwatch or other smart apparel of patient 4. In some examples, computing devices 12 may be any computing device configured for wireless communication with IMD 10, such as a desktop, laptop, or tablet computer. Computing devices 12 may communicate with IMD 10 and each other according to the Bluetooth® or Bluetooth® Low Energy (BLE) protocols, as examples. In some examples, only one of computing devices 12, e.g., computing device 12A, is configured for communication with IMD 10, e.g., due to execution of software (e.g., part of a health monitoring application as described herein) enabling communication and interaction with an IMD.
[0051] In some examples, computing device(s) 12, e.g., wearable computing device 12B in the example illustrated by FIG. 1, may include electrodes and other sensors to sense physiological signals of patient 4, and may collect and store physiological data and detect episodes based on such signals. Computing device 12B may be incorporated into the apparel of patient 4, such as within clothing, shoes, eyeglasses, a watch or wristband, a hat, etc. In some examples, computing device 12B is a smartwatch or other accessory or peripheral for a smartphone computing device 12A.
[0052] One or more of computing devices 12 may be configured to communicate with a variety of other devices or systems via a network 16. For example, one or more of computing devices 12 may be configured to communicate with one or more computing systems, e.g., computing systems 20A and 20B (collectively, “computing systems 20”) via network 16. Computing systems 20A and 20B may be respectively managed by manufacturers of IMD 10 and computing devices 12 to, for example, provide cloud storage and analysis of collected data, maintenance and software services, or other networked functionality for their respective devices and users thereof. Computing system 20A may comprise, or may be implemented by, the Medtronic Carelink™ Network, in
some examples. In the example illustrated by FIG. 1, computing system 20A implements a health monitoring system (HMS) 22, although in other examples, either or both of computing systems 20 may implement HMS 22. As will be described in greater detail below, HMS 22 facilitates detection of acute health events of patient 4 by system 2, and the responses of system 2 to such acute health events.
[0053] Computing device(s) 12 may transmit data, including data retrieved from IMD 10, to computing system(s) 20 via network 16. The data may include sensed data, e.g., values of physiological parameters measured by IMD 10 and, in some cases, by one or more of computing devices 12, data regarding episodes of arrhythmia or other acute health events detected by IMD 10 and computing device(s) 12, and other physiological signals or data recorded by IMD 10 and/or computing device(s) 12. HMS 22 may also retrieve data regarding patient 4 from one or more sources of electronic health records (EHR) 24 via network 16. EHR 24 may include data regarding historical (e.g., baseline) physiological parameter values, previous health events and treatments, disease states, comorbidities, demographics, height, weight, and body mass index (BMI), as examples, of patients including patient 4. HMS 22 may use data from EHR 24 to configure algorithms implemented by IMD 10 and/or computing devices 12 to detect acute health events for patient 4. In some examples, HMS 22 provides data from EHR 24 to computing device(s) 12 and/or IMD 10 for storage therein and use as part of their algorithms for detecting acute health events.
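As a purely hypothetical illustration of configuring a detection algorithm from EHR data, a tachyarrhythmia rate threshold might be derived from a patient's baseline value. The field name, the doubling heuristic, and the population default are all assumptions introduced here for illustration:

```python
def personalize_rate_threshold(ehr, default_bpm=180):
    """Derive a tachyarrhythmia rate threshold from an EHR baseline heart
    rate, falling back to a population default (hypothetical heuristic)."""
    baseline = ehr.get("baseline_heart_rate")
    if baseline is None:
        return default_bpm
    return max(default_bpm, int(2 * baseline))

threshold_a = personalize_rate_threshold({"baseline_heart_rate": 100})  # 200
threshold_b = personalize_rate_threshold({})                            # 180
```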
[0054] Network 16 may include one or more computing devices, such as one or more non-edge switches, routers, hubs, gateways, security devices such as firewalls, intrusion detection, and/or intrusion prevention devices, servers, cellular base stations and nodes, wireless access points, bridges, cable modems, application accelerators, or other network devices. Network 16 may include one or more networks administered by service providers, and may thus form part of a large-scale public network infrastructure, e.g., the Internet. Network 16 may provide computing devices and systems, such as those illustrated in FIG. 1, access to the Internet, and may provide a communication framework that allows the computing devices and systems to communicate with one another. In some examples, network 16 may include a private network that provides a communication framework that allows the computing devices and systems illustrated in FIG. 1 to communicate with each other, but isolates some of the data flows from devices external to
the private network for security purposes. In some examples, the communications between the computing devices and systems illustrated in FIG. 1 are encrypted.
[0055] As will be described herein, IMD 10 may be configured to detect acute health events of patient 4, such as SCA, based on data sensed by IMD 10 and, in some cases, other data, such as data sensed by computing devices 12A and/or 12B, and data from EHR 24. To detect acute health events, IMD 10 may apply rules to the data, which may be referred to as patient parameter data. In response to detection of an acute health event, IMD 10 may wirelessly transmit a message to one or both of computing devices 12A and 12B. The message may indicate that IMD 10 detected an acute health event of the patient. The message may indicate a time that IMD 10 detected the acute health event. The message may include physiological data collected by IMD 10, e.g., data which led to detection of the acute health event, data prior to detection of the acute health event, and/or real-time or more recent data collected after detection of the acute health event. The physiological data may include values of one or more physiological parameters and/or digitized physiological signals. Examples of acute health events include SCA, ventricular fibrillation, ventricular tachycardia, myocardial infarction, a pause in heart rhythm (asystole), pulseless electrical activity (PEA), acute respiratory distress syndrome (ARDS), stroke, seizure, or a fall.
[0056] In some examples, the detection of the acute health event by IMD 10 may include multiple phases. For example, IMD 10 may complete an initial detection of the acute health event, e.g., SCA or tachyarrhythmia, and initiate wireless communication, e.g., Bluetooth® or Bluetooth® Low Energy, with computing device(s) 12 in response to the initial detection. The initial detection may occur five to ten seconds after onset of the acute health event, for example. IMD 10 may continue monitoring to determine whether the acute health event is sustained, e.g., a sustained detection of SCA or tachyarrhythmia. In some examples, IMD 10 may use more patient parameters and/or different rules to determine whether the event is sustained or otherwise confirm detection.
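The two-phase flow described above can be sketched as a loop that opens the wireless link at initial detection but transmits episode data only once the event is confirmed as sustained. The window representation, the "three consecutive fast windows" sustain rule, and all function names are illustrative assumptions:

```python
def monitor(windows, initial_rule, sustained_rule, open_link, send_message):
    """Two-phase detection sketch: open the wireless link on initial
    detection, but transmit episode data only once the event is
    confirmed as sustained (illustrative)."""
    link_open = False
    for window in windows:
        if not link_open and initial_rule(window):
            open_link()               # e.g., start the BLE connection early
            link_open = True
        if link_open and sustained_rule(window):
            send_message(window)      # episode data for the sustained event
            return True
    return False

# Stub usage: three consecutive "fast" windows confirm the event.
opened, sent, state = [], [], {"run": 0}
def sustained(w):
    state["run"] = state["run"] + 1 if w == "fast" else 0
    return state["run"] >= 3
detected = monitor(["fast", "fast", "fast"],
                   lambda w: w == "fast", sustained,
                   lambda: opened.append(1), sent.append)
```

Opening the link early while deferring the transmission is what lets the message go out immediately once the sustained determination is made, without holding the radio up during the entire confirmation phase.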
[0057] Initiating communication with computing device(s) 12 in response to an initial detection may facilitate the communication being established at the time the acute health event is confirmed as sustained. To conserve power of IMD 10 and computing device(s) 12, IMD 10 may wait to send the message, e.g., including sensed data associated with the acute health event, until it is confirmed as sustained, which may be determined about
thirty seconds after onset of the event, or after a longer period of time. Less urgent events may have longer confirmation phases and may be alerted with less urgency, such as being alerted as health care events rather than acute health events. However, the initiation of communication after initial detection may still benefit less urgent events. Conserving power may be significant in the case of non-rechargeable IMDs, to prolong their life prior to needing surgery for replacement, as well as for rechargeable IMDs or external devices, to reduce recharge frequency.
[0058] Based on the message from IMD 10, computing device(s) 12 may output an alarm that may be visual and/or audible, and configured to immediately attract the attention of patient 4 or any person in environment 28 with patient 4. Additionally or alternatively, computing device(s) 12 may transmit an alert or alarm message to devices and users outside the visible/audio range of computing device(s) 12, e.g., to IoT devices 30 or HMS 22. Environment 28 may be a home, office, place of business, or public venue, as examples. An alert or alarm message sent to HMS 22 via network 16, or other messages sent by computing device(s) 12, may include the data received from IMD 10 and, in some cases, additional data collected by computing device(s) 12 or other devices in response to the detection of the acute health event by IMD 10.
[0059] Other devices in the environment 28 of patient 4 may also be configured to output alarms or take other actions to attract the attention of patient 4 or to otherwise facilitate the delivery of care to patient 4. For example, environment 28 may include one or more Internet of Things (IoT) devices, such as IoT devices 30A-30D (collectively “IoT devices 30”) illustrated in the example of FIG. 1. IoT devices 30 may include, as examples, so-called “smart” speakers, cameras, televisions, lights, locks, thermostats, appliances, actuators, controllers, or any other smart home (or building) devices. In the example of FIG. 1, IoT device 30C is a smart speaker and/or controller, which may include a display. IoT devices 30 may provide audible and/or visual alarms when configured with output devices to do so. As other examples, IoT devices 30 may cause smart lights throughout environment 28 to flash or blink and unlock doors. In some examples, IoT devices 30 that include cameras, microphones, or other sensors may activate those sensors to collect data regarding patient 4, e.g., for evaluation of the condition of patient 4.
[0060] Computing device(s) 12 may be configured to wirelessly communicate with IoT devices 30 to cause IoT devices 30 to take the actions described herein. In some examples, HMS 22 communicates with IoT devices 30 via network 16 to cause IoT devices 30 to take the actions described herein, e.g., in response to receiving the alert message from computing device(s) 12 as described above. In some examples, IMD 10 is configured to communicate wirelessly with one or more of IoT devices 30, e.g., in response to detection of an acute health event when communication with computing devices 12 is unavailable. In such examples, IoT device(s) 30 may be configured to provide some or all of the functionality ascribed to computing devices 12 herein.
[0061] Environment 28 includes computing facilities, e.g., a local network 32, by which computing devices 12, IoT devices 30, and other devices within environment 28 may communicate via network 16, e.g., with HMS 22. For example, environment 28 may be configured with wireless technology, such as IEEE 802.11 wireless networks, IEEE 802.15 ZigBee networks, an ultra-wideband protocol, near-field communication, or the like. Environment 28 may include one or more wireless access points, e.g., wireless access points 34A and 34B (collectively, “wireless access points 34”) that provide support for wireless communications throughout environment 28. Additionally or alternatively, e.g., when local network 32 is unavailable, computing devices 12, IoT devices 30, and other devices within environment 28 may be configured to communicate with network 16, e.g., with HMS 22, via a cellular base station 36 and a cellular network.
[0062] Computing device(s) 12, and in some examples IoT device(s) 30, may include input devices and interfaces to allow a user to override the alarm in the event the detection of the acute health event by IMD 10 was false. In some examples, one or more of computing device(s) 12 and IoT device(s) 30 may implement an event assistant. The event assistant may provide a conversational interface for patient 4 to exchange information with the computing device or IoT device. The event assistant may query the user regarding the condition of patient 4 in response to receiving the alert message from IMD 10. Responses from the user may be used to confirm or override detection of the acute health event by IMD 10, or to provide additional information about the acute health event or the condition of patient 4 more generally that may improve the efficacy of the treatment of patient 4. For example, information received by the event assistant may be used to provide an indication of severity or type (differential diagnosis) for the acute
health event. The event assistant may use natural language processing and context data to interpret utterances by the user. In some examples, in addition to receiving responses to queries posed by the assistant, the event assistant may be configured to respond to queries posed by the user. For example, patient 4 may indicate that they feel dizzy and ask the event assistant, “how am I doing?”.
[0063] In some examples, computing device(s) 12 and/or HMS 22 may implement one or more techniques to evaluate the sensed physiological data received from IMD 10, and in some cases additional physiological or other patient parameter data sensed or otherwise collected by the computing device(s) or IoT devices 30, to confirm or override the detection of the acute health event by IMD 10. In some examples, computing device(s) 12 and/or computing system(s) 20 may have greater processing capacity than IMD 10, enabling more complex analysis of the data. In some examples, the computing device(s) 12 and/or HMS 22 may apply the data to one or more machine learning models or other artificial intelligence-developed algorithms, e.g., to determine whether the data is sufficiently indicative of the acute health event.
[0064] In examples in which computing device(s) 12 are configured to perform an acute health event confirmation analysis, computing device(s) 12 may output alert messages and/or transmit alert messages to HMS 22 and/or IoT devices 30 in response to confirming the acute health event. In some examples, computing device(s) 12 may be configured to output/transmit the alert messages prior to completing the confirmation analysis, and output/transmit cancellation messages in response to the analysis overriding the detection of the acute health event by IMD 10. HMS 22 may be configured to perform a number of operations in response to receiving an alert message from computing device(s) 12 and/or IoT device(s) 30. HMS 22 may be configured to cancel such operations in response to receiving a cancellation message from computing device(s) 12 and/or IoT device(s) 30.
[0065] Any of IMD 10, computing device(s) 12, IoT device(s) 30, computing device(s) 38, or HMS 22 may, individually or in any combination, perform the operations described herein for detection of acute health events, such as SCA, by applying rules, which may include one or more machine learning models, to patient parameter data to detect acute health events. For example, one of these devices, or more than one of them in cooperation, may apply a first set of rules to patient parameter data for a first determination of whether an acute health event is detected and, based on whether one or
more context criteria associated with the first determination are satisfied, determine whether to apply a second set of rules to patient parameter data to determine whether the acute health event is detected.
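The staged application of two rule sets can be sketched as follows. The specific rules and the "re-check borderline rates" context criterion are invented purely for illustration and are not the disclosed rules:

```python
def detect_event(params, first_rules, second_rules, context_criteria):
    """Apply a first rule set for a provisional determination; if the
    context criteria are satisfied, re-evaluate with a second rule set
    (illustrative sketch of the staged rules described above)."""
    first = all(rule(params) for rule in first_rules)
    if context_criteria(params, first):
        return all(rule(params) for rule in second_rules)
    return first

first_rules = [lambda p: p["rate_bpm"] > 150]
second_rules = [lambda p: p["rate_bpm"] > 150,
                lambda p: p["rate_variability"] < 10]
# Invented context criterion: double-check borderline rates only.
borderline = lambda p, first: first and p["rate_bpm"] < 190
clear_hit = detect_event({"rate_bpm": 200, "rate_variability": 50},
                         first_rules, second_rules, borderline)
borderline_miss = detect_event({"rate_bpm": 160, "rate_variability": 50},
                               first_rules, second_rules, borderline)
```

Here the high-rate case is detected on the first rule set alone, while the borderline case triggers the second, stricter rule set and is rejected.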
[0066] FIG. 2A is a block diagram illustrating an example configuration of IMD 10 of FIG. 1. As shown in FIG. 2A, IMD 10A includes processing circuitry 50A, memory 52A, sensing circuitry 54A coupled to electrodes 56A and 56B (hereinafter, “electrodes 56”) and one or more sensor(s) 58A, and communication circuitry 60A.
[0067] Processing circuitry 50A may include fixed function circuitry and/or programmable processing circuitry. Processing circuitry 50A may include any one or more of a microprocessor, a controller, a graphics processing unit (GPU), a tensor processing unit (TPU), a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field-programmable gate array (FPGA), or equivalent discrete or analog logic circuitry. In some examples, processing circuitry 50A may include multiple components, such as any combination of one or more microprocessors, one or more controllers, one or more GPUs, one or more TPUs, one or more DSPs, one or more ASICs, or one or more FPGAs, as well as other discrete or integrated logic circuitry. The functions attributed to processing circuitry 50A herein may be embodied as software, firmware, hardware, or any combination thereof. In some examples, memory 52A includes computer-readable instructions that, when executed by processing circuitry 50A, cause IMD 10A and processing circuitry 50A to perform various functions attributed herein to IMD 10A and processing circuitry 50A. Memory 52A may include any volatile, non-volatile, magnetic, optical, or electrical media, such as a random-access memory (RAM), read-only memory (ROM), non-volatile RAM (NVRAM), electrically-erasable programmable ROM (EEPROM), flash memory, or any other digital media.
[0068] Sensing circuitry 54A may monitor signals from electrodes 56 in order to, for example, monitor electrical activity of a heart of patient 4 and produce ECG data for patient 4. In some examples, processing circuitry 50A may identify features of the sensed ECG, such as heart rate, heart rate variability, T-wave alternans, intra-beat intervals (e.g., QT intervals), and/or ECG morphologic features, to detect an episode of cardiac arrhythmia of patient 4. Processing circuitry 50A may store the digitized ECG and features of the ECG used to detect the arrhythmia episode in memory 52A as episode data for the detected arrhythmia episode.
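One of the ECG features named above, heart rate, can be computed from R-peak timestamps once the peaks have been located. The peak detection itself is outside this sketch, and these exact formulas are assumptions for illustration:

```python
def rr_intervals(r_peaks_s):
    """R-R intervals (seconds) from a sorted list of R-peak timestamps."""
    return [b - a for a, b in zip(r_peaks_s, r_peaks_s[1:])]

def heart_rate_bpm(r_peaks_s):
    """Mean heart rate in beats per minute from R-peak timestamps."""
    rr = rr_intervals(r_peaks_s)
    return 60.0 / (sum(rr) / len(rr))

rate = heart_rate_bpm([0.0, 1.0, 2.0, 3.0])  # one beat per second -> 60.0
```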
[0069] In some examples, sensing circuitry 54A measures impedance, e.g., of tissue proximate to IMD 10A, via electrodes 56. The measured impedance may vary based on respiration, cardiac pulse or flow, and a degree of perfusion or edema. Processing circuitry 50A may determine physiological data relating to respiration, cardiac pulse or flow, perfusion, and/or edema based on the measured impedance.
[0070] In some examples, IMD 10A includes one or more sensors 58A, such as one or more accelerometers, gyroscopes, microphones, optical sensors, temperature sensors, pressure sensors, and/or chemical sensors. In some examples, sensing circuitry 54A may include one or more filters and amplifiers for filtering and amplifying signals received from one or more of electrodes 56 and/or sensors 58A. In some examples, sensing circuitry 54A and/or processing circuitry 50A may include a rectifier, filter and/or amplifier, a sense amplifier, comparator, and/or analog-to-digital converter. Processing circuitry 50A may determine physiological data, e.g., values of physiological parameters of patient 4, based on signals from sensors 58A, which may be stored in memory 52A. Patient parameters determined from signals from sensors 58A may include oxygen saturation, glucose level, stress hormone level, heart sounds, body motion, body posture, or blood pressure.
[0071] Memory 52A may store applications 70A executable by processing circuitry 50A, and data 80A. Applications 70A may include an acute health event surveillance application 72A. Processing circuitry 50A may execute event surveillance application 72A to detect an acute health event of patient 4 based on a combination of one or more of the types of physiological data described herein, which may be stored as sensed data 82A. In some examples, sensed data 82A may additionally include patient parameter data sensed by other devices, e.g., computing device(s) 12 or IoT device(s) 30, and received via communication circuitry 60A. Event surveillance application 72A may be configured with a rules engine 74A. Rules engine 74A may apply rules 84A to sensed data 82A. Rules 84A may include one or more models, algorithms, decision trees, and/or thresholds. In some cases, rules 84A may be developed based on machine learning, e.g., may include one or more machine learning models.
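A rules engine of the kind described here might, at its simplest, evaluate named predicates over the sensed data. The rule names, the thresholds, and the any-rule-fires combination are illustrative assumptions, not the disclosed rules 84A:

```python
def rules_engine(sensed, rules):
    """Return the names of all rules that fire on the sensed-data dict
    (a minimal sketch of applying rules to sensed data)."""
    return [name for name, rule in rules.items() if rule(sensed)]

rules = {
    "tachyarrhythmia": lambda d: d["heart_rate_bpm"] > 180,
    "asystole": lambda d: d["heart_rate_bpm"] == 0,
}
fired = rules_engine({"heart_rate_bpm": 200}, rules)
```

A decision tree or ML model fits the same interface: each is just another callable over the sensed-data record.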
[0072] As examples, event surveillance application 72A may detect SCA, a ventricular fibrillation, a ventricular tachycardia, supraventricular tachycardia (e.g., conducted atrial fibrillation), ventricular asystole, or a myocardial infarction based on an ECG and/or other
patient parameter data indicating the electrical or mechanical activity of the heart of patient 4. In some examples, event surveillance application 72A may detect stroke based on such cardiac activity data. In some examples, sensing circuitry 54A may detect brain activity data, e.g., an electroencephalogram (EEG) via electrodes 56, and event surveillance application 72A may detect stroke or a seizure based on the brain activity alone, or in combination with cardiac activity data or other physiological data. In some examples, event surveillance application 72A detects whether the patient has fallen based on data from an accelerometer alone, or in combination with other physiological data. When event surveillance application 72A detects an acute health event, event surveillance application 72A may store the sensed data 82A that led to the detection (and in some cases a window of data preceding and/or following the detection) as event data 86A, also referred to herein as episode data.
[0073] In some examples, in response to detection of an acute health event, processing circuitry 50A transmits, via communication circuitry 60A, event data 86A for the event to computing device(s) 12 (FIG. 1). This transmission may be included in a message indicating the acute health event, as described herein. Transmission of the message may occur on an ad hoc basis and as quickly as possible. Communication circuitry 60A may include any suitable hardware, firmware, software, or any combination thereof for wirelessly communicating with another device, such as computing devices 12 and/or loT devices 30.
[0074] As will be explained in more detail below, and in accordance with the techniques of this disclosure, IMD 10A is one example of a device configured to detect an acute health event, collect episode data associated with the acute health event, and transmit the episode data. Processing circuitry 50A of IMD 10A may be further configured to receive the episode data associated with the acute health event detected by IMD 10A, and apply a first one or more machine learning models and first classification logic to the episode data to determine a first respective classification of a plurality of predetermined classifications for the episode data. Based on a probability of the first respective classification determined by the first one or more machine learning models, processing circuitry 50A may further determine whether to retrieve first additional data from the IMD 10A and apply a second one or more machine learning models and second classification logic to the first additional data, or retrieve second additional data from the IMD 10A and apply a third one or more machine learning models and third classification logic to the second additional data. Processing circuitry 50A may further determine a classification of the acute health event from the plurality of predetermined classifications based on the application of the second one or more machine learning models to the first additional data or the third one or more machine learning models to the second additional data.
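The tiered flow described in this paragraph, a first-stage model whose classification probability selects which additional data to retrieve and which follow-on model to apply, can be sketched as follows. This is a minimal illustration only; the model callables, the threshold value, and the retrieval functions are hypothetical placeholders, not elements specified by the disclosure.

```python
def classify_event(episode_data, first_model, second_model, third_model,
                   retrieve_first_additional, retrieve_second_additional,
                   probability_threshold=0.8):
    """Sketch of the two-stage classification described above.

    Each model callable returns a (classification, probability) pair.
    The retrieval callables stand in for reading additional data
    (e.g., longer ECG segments) from the IMD.
    """
    # Stage 1: apply the first model to the episode data itself.
    classification, probability = first_model(episode_data)

    # Stage 2: the first-stage probability selects the branch.
    if probability >= probability_threshold:
        additional = retrieve_first_additional()
        final_classification, _ = second_model(additional)
    else:
        additional = retrieve_second_additional()
        final_classification, _ = third_model(additional)
    return final_classification
```

Under this sketch, a high-confidence first-stage result is confirmed with the second model on one set of additional data, while a low-confidence result routes to the third model on a different set of additional data.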
[0075] FIG. 2B is a block diagram illustrating another example configuration of IMD 10 of FIG. 1. As shown in FIG. 2B, IMD 10B includes processing circuitry 50B, memory 52B, sensing circuitry 54B coupled to electrodes 55B, one or more sensor(s) 58B, therapy delivery circuitry 57B, and communication circuitry 60B. IMD 10B may operate in the same manner as IMD 10A of FIG. 2A, but additionally includes therapy delivery circuitry 57B.
[0076] Processing circuitry 50B may include fixed function circuitry and/or programmable processing circuitry. Processing circuitry 50B may include any one or more of a microprocessor, a controller, a GPU, a TPU, a DSP, an ASIC, an FPGA, or equivalent discrete or analog logic circuitry. In some examples, processing circuitry 50B may include multiple components, such as any combination of one or more microprocessors, one or more controllers, one or more GPUs, one or more TPUs, one or more DSPs, one or more ASICs, or one or more FPGAs, as well as other discrete or integrated logic circuitry. The functions attributed to processing circuitry 50B herein may be embodied as software, firmware, hardware, or any combination thereof. In some examples, memory 52B includes computer-readable instructions that, when executed by processing circuitry 50B, cause IMD 10B and processing circuitry 50B to perform various functions attributed herein to IMD 10B and processing circuitry 50B. Memory 52B may include any volatile, non-volatile, magnetic, optical, or electrical media, such as a RAM, ROM, NVRAM, EEPROM, flash memory, or any other digital media.
[0077] Sensing circuitry 54B and therapy delivery circuitry 57B are coupled to electrodes 55B. Sensing circuitry 54B may monitor signals from electrodes 55B in order to, for example, monitor electrical activity of a heart of patient 4 and produce ECG data for patient 4. In some examples, processing circuitry 50B may identify features of the sensed ECG, such as heart rate, heart rate variability, T-wave alternans, intra-beat intervals (e.g., QT intervals), and/or ECG morphologic features, to detect an episode of cardiac arrhythmia of patient 4. For example, processing circuitry 50B may store the digitized ECG and features of the ECG used to detect the arrhythmia episode in memory 52B as episode data for the detected arrhythmia episode.
[0078] In some examples, sensing circuitry 54B measures impedance, e.g., of tissue proximate to IMD 10B, via electrodes 55B. The measured impedance may vary based on respiration, cardiac pulse or flow, and a degree of perfusion or edema. Processing circuitry 50B may determine physiological data relating to respiration, cardiac pulse or flow, perfusion, and/or edema based on the measured impedance.
[0079] In some examples, IMD 10B includes one or more sensors 58B, such as one or more accelerometers, gyroscopes, microphones, optical sensors, temperature sensors, pressure sensors, and/or chemical sensors. In some examples, sensing circuitry 54B may include one or more filters and amplifiers for filtering and amplifying signals received from one or more of electrodes 55B and/or sensors 58B. In some examples, sensing circuitry 54B and/or processing circuitry 50B may include a rectifier, filter and/or amplifier, a sense amplifier, comparator, and/or analog-to-digital converter. Processing circuitry 50B may determine physiological data, e.g., values of physiological parameters of patient 4, based on signals from sensors 58B, which may be stored in memory 52B. Patient parameters determined from signals from sensors 58B may include oxygen saturation, glucose level, stress hormone level, heart sounds, body motion, body posture, or blood pressure.
[0080] Memory 52B may store applications 70B executable by processing circuitry 50B, and data 80B. Applications 70B may include an acute health event surveillance application 72B. Processing circuitry 50B may execute event surveillance application 72B to detect an acute health event of patient 4 based on a combination of one or more of the types of physiological data described herein, which may be stored as sensed data 82B. In some examples, sensed data 82B may additionally include patient parameter data sensed by other devices, e.g., computing device(s) 12 or IoT device(s) 30, and received via communication circuitry 60B. Event surveillance application 72B may be configured with a rules engine 74B. Rules engine 74B may apply rules 84B to sensed data 82B. Rules 84B may include one or more models, algorithms, decision trees, and/or thresholds. In some cases, rules 84B may be developed based on machine learning, e.g., may include one or more machine learning models.
[0081] As examples, event surveillance application 72B may detect SCA, a ventricular fibrillation, a ventricular tachycardia, supraventricular tachycardia (e.g., conducted atrial fibrillation), ventricular asystole, or a myocardial infarction based on an ECG and/or other patient parameter data indicating the electrical or mechanical activity of the heart of patient 4. In some examples, event surveillance application 72B may detect stroke based on such cardiac activity data. In some examples, sensing circuitry 54B may detect brain activity data, e.g., an electroencephalogram (EEG) via electrodes 55B, and event surveillance application 72B may detect stroke or a seizure based on the brain activity alone, or in combination with cardiac activity data or other physiological data. In some examples, event surveillance application 72B detects whether the patient has fallen based on data from an accelerometer alone, or in combination with other physiological data. When event surveillance application 72B detects an acute health event, event surveillance application 72B may store the sensed data 82B that led to the detection (and in some cases a window of data preceding and/or following the detection) as event data 86B, also referred to herein as episode data.
[0082] In some examples, in response to detection of an acute health event, processing circuitry 50B transmits, via communication circuitry 60B, event data 86B for the event to computing device(s) 12 (FIG. 1). This transmission may be included in a message indicating the acute health event, as described herein. Transmission of the message may occur on an ad hoc basis and as quickly as possible. Communication circuitry 60B may include any suitable hardware, firmware, software, or any combination thereof for wirelessly communicating with another device, such as computing devices 12 and/or IoT devices 30.
[0083] Therapy delivery circuitry 57B may be configured to generate and deliver electrical therapy to the heart, brain, spinal cord, nerves, or other part of the body of patient 4. In one example, therapy delivery circuitry 57B may include one or more pulse generators, capacitors, and/or other components capable of generating and/or storing energy to deliver as pacing therapy, defibrillation therapy, cardioversion therapy, other therapy, or a combination of therapies. In some instances, therapy delivery circuitry 57B may include a first set of components configured to provide pacing therapy and a second set of components configured to provide anti-tachyarrhythmia shock therapy. In other instances, therapy delivery circuitry 57B may utilize the same set of components to
provide both pacing and anti-tachyarrhythmia shock therapy. In still other instances, therapy delivery circuitry 57B may share some of the pacing and shock therapy components while using other components solely for pacing or shock delivery.
[0084] Therapy delivery circuitry 57B may include charging circuitry, one or more charge storage devices, such as one or more capacitors, and switching circuitry that controls when the capacitor(s) are discharged to electrodes 55B and the widths of pulses. Charging of capacitors to a programmed pulse amplitude and discharging of the capacitors for a programmed pulse width may be performed by therapy delivery circuitry 57B according to control signals received from processing circuitry 50B, which are provided by processing circuitry 50B according to parameters stored in memory 52B. Processing circuitry 50B controls therapy delivery circuitry 57B to deliver the generated therapy to patient 4 via one or more combinations of electrodes 55B, e.g., according to parameters stored in memory 52B. Therapy delivery circuitry 57B may include switch circuitry to select which of the available electrodes 55B are used to deliver the therapy, e.g., as controlled by processing circuitry 50B.
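Charging the capacitor(s) to a programmed pulse amplitude fixes the energy available for delivery via the standard relation E = ½CV². The helper below illustrates the computation; the component values in the comment are illustrative assumptions, not values from the disclosure.

```python
def capacitor_energy_joules(capacitance_farads, voltage_volts):
    """Energy stored in a charged capacitor: E = 1/2 * C * V**2."""
    return 0.5 * capacitance_farads * voltage_volts ** 2

# Illustrative values only: a 140 uF capacitor charged to 750 V
# stores 0.5 * 140e-6 * 750**2 = 39.375 J.
```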
[0085] In some examples, IMD 10B may additionally or alternatively be configured to deliver other therapies configured to prevent the predicted acute cardiac event. For example, processing circuitry 50B may control therapy delivery circuitry 57B to deliver cardiac pacing therapy configured to prevent a ventricular tachyarrhythmia, such as overdrive pacing therapy when one or more of the patient parameters indicate that the heart rate is not fast or down-drive pacing therapy if one or more of the patient parameters indicate that the heart rate is too fast.
[0086] As another example, IMD 10B may additionally or alternatively be configured to deliver neuromodulation therapy to prevent an acute cardiac event, such as ventricular tachyarrhythmia, heart failure decompensation, or ischemia. In such examples, processing circuitry 50B may be programmed, and therapy delivery circuitry 57B and electrodes 55B configured and placed, to generate and deliver the neuromodulation therapy. Example neuromodulation therapies include vagal nerve stimulation, spinal cord stimulation, peripheral nerve stimulation, cardiac intrinsic nerve modulation, and cardiac stellate ganglion stimulation.
[0087] As another example, IMD 10B may additionally or alternatively be configured to deliver a therapeutic substance, e.g., infuse a drug. In such examples, IMD 10B may
include a pump to deliver the substance, and processing circuitry 50B may be configured to control the pump according to therapy parameters stored in memory 52B. Examples of delivery of therapy substances to prevent an acute cardiac event include delivery of substances that modulate the cardiovascular or neurological systems of the patient.
[0088] As will be explained in more detail below, and in accordance with the techniques of this disclosure, IMD 10B is one example of a device configured to detect an acute health event, collect episode data associated with the acute health event, transmit the episode data, and deliver therapy to a patient. Processing circuitry 50B of IMD 10B may be further configured to receive the episode data associated with the acute health event detected by IMD 10B, and apply a first one or more machine learning models and first classification logic to the episode data to determine a first respective classification of a plurality of predetermined classifications for the episode data. Based on a probability of the first respective classification determined by the first one or more machine learning models, processing circuitry 50B may further determine whether to retrieve first additional data from the IMD 10B and apply a second one or more machine learning models and second classification logic to the first additional data, or retrieve second additional data from the IMD 10B and apply a third one or more machine learning models and third classification logic to the second additional data. Processing circuitry 50B may further determine a classification of the acute health event from the plurality of predetermined classifications based on the application of the second one or more machine learning models to the first additional data or the third one or more machine learning models to the second additional data. Processing circuitry 50B may also determine whether to control IMD 10B (e.g., therapy delivery circuitry 57B) to deliver therapy based on the classification.
[0089] FIG. 3 is a block diagram illustrating an example configuration of a computing device 12 of patient 4, which may correspond to either (or both operating in coordination) of computing devices 12A and 12B illustrated in FIG. 1. In some examples, computing device 12 takes the form of a smartphone, a laptop, a tablet computer, a personal digital assistant (PDA), a smartwatch, or other wearable computing device. In some examples, IoT devices 30 and/or computing device 38 may be configured similarly to the configuration of computing device 12 illustrated in FIG. 3.
[0090] As shown in the example of FIG. 3, computing device 12 may be logically divided into user space 102, kernel space 104, and hardware 106. Hardware 106 may include one or more hardware components that provide an operating environment for components executing in user space 102 and kernel space 104. User space 102 and kernel space 104 may represent different sections or segmentations of memory, where kernel space 104 provides higher privileges to processes and threads than user space 102. For instance, kernel space 104 may include operating system 120, which operates with higher privileges than components executing in user space 102.
[0091] As shown in FIG. 3, hardware 106 includes processing circuitry 130, memory 132, one or more input devices 134, one or more output devices 136, one or more sensors 138, and communication circuitry 140. Although shown in FIG. 3 as a stand-alone device for purposes of example, computing device 12 may be any component or system that includes processing circuitry or other suitable computing environment for executing software instructions and, for example, need not necessarily include one or more elements shown in FIG. 3.
[0092] Processing circuitry 130 is configured to implement functionality and/or process instructions for execution within computing device 12. For example, processing circuitry 130 may be configured to receive and process instructions stored in memory 132 that provide functionality of components included in kernel space 104 and user space 102 to perform one or more operations in accordance with techniques of this disclosure.
Examples of processing circuitry 130 may include any one or more microprocessors, controllers, GPUs, TPUs, DSPs, ASICs, FPGAs, or equivalent discrete or integrated logic circuitry.
[0093] Memory 132 may be configured to store information within computing device 12, for processing during operation of computing device 12. Memory 132, in some examples, is described as a computer-readable storage medium. In some examples, memory 132 includes a temporary memory or a volatile memory. Examples of volatile memories include random access memories (RAM), dynamic random access memories (DRAM), static random access memories (SRAM), and other forms of volatile memories known in the art. Memory 132, in some examples, also includes one or more memories configured for long-term storage of information, e.g., including non-volatile storage elements. Examples of such non-volatile storage elements include magnetic hard discs,
optical discs, floppy discs, flash memories, or forms of electrically programmable memories (EPROM) or electrically erasable and programmable (EEPROM) memories. In some examples, memory 132 includes cloud-associated storage.
[0094] One or more input devices 134 of computing device 12 may receive input, e.g., from patient 4 or another user. Examples of input are tactile, audio, kinetic, and optical input. Input devices 134 may include, as examples, a mouse, keyboard, voice responsive system, camera, buttons, control pad, microphone, presence-sensitive or touch-sensitive component (e.g., screen), or any other device for detecting input from a user or a machine.

[0095] One or more output devices 136 of computing device 12 may generate output, e.g., to patient 4 or another user. Examples of output are tactile, haptic, audio, and visual output. Output devices 136 of computing device 12 may include a presence-sensitive screen, sound card, video graphics adapter card, speaker, cathode ray tube (CRT) monitor, liquid crystal display (LCD), light emitting diodes (LEDs), or any type of device for generating tactile, audio, and/or visual output.
[0096] One or more sensors 138 of computing device 12 may sense physiological parameters or signals of patient 4. Sensor(s) 138 may include electrodes, accelerometers (e.g., 3-axis accelerometers), an optical sensor, impedance sensors, temperature sensors, pressure sensors, heart sound sensors (e.g., microphones), and other sensors, and sensing circuitry (e.g., including an ADC), similar to those described above with respect to IMD 10A and IMD 10B of FIG. 2A and FIG. 2B.
[0097] Communication circuitry 140 of computing device 12 may communicate with other devices by transmitting and receiving data. Communication circuitry 140 may include a network interface card, such as an Ethernet card, an optical transceiver, a radio frequency transceiver, or any other type of device that can send and receive information. For example, communication circuitry 140 may include a radio transceiver configured for communication according to standards or protocols, such as 3G, 4G, 5G, WiFi (e.g., 802.11 or 802.15 ZigBee), Bluetooth®, or Bluetooth® Low Energy (BLE).
[0098] As shown in FIG. 3, health monitoring application 150 executes in user space 102 of computing device 12. Health monitoring application 150 may be logically divided into presentation layer 152, application layer 154, and data layer 156. Presentation layer 152 may include a user interface (UI) component 160, which generates and renders user interfaces of health monitoring application 150.
[0099] Application layer 154 may include, but is not limited to, an event engine 170, rules engine 172, rules configuration component 174, and event assistant 176. Event engine 170 may be responsive to receipt of a transmission from IMD 10 indicating that IMD 10 detected an acute health event. Event engine 170 may control performance of any of the operations in response to detection of an acute health event ascribed herein to computing device 12, such as transmitting messages to HMS 22, controlling IoT devices 30, and analyzing data to confirm or override the detection of the acute health event by IMD 10.
[0100] Rules engine 172 analyzes sensed data 190, and in some examples, patient input 192 and/or EHR data 194, to determine whether there is a sufficient likelihood that patient 4 is experiencing the acute health event detected by IMD 10. Sensed data 190 may include data received from IMD 10 as part of the alert transmission, additional data transmitted from IMD 10, e.g., in “real-time,” and physiological and other data related to the condition of patient 4 collected by, for example, computing device(s) 12 and/or IoT devices 30. As examples, sensed data 190 from computing device(s) 12 may include one or more of: activity levels, walking/running distance, resting energy, active energy, exercise minutes, quantifications of standing, body mass, body mass index, heart rate, low, high, and/or irregular heart rate events, heart rate variability, walking heart rate, heart beat series, digitized ECG, blood oxygen saturation, blood pressure (systolic and/or diastolic), respiratory rate, maximum volume of oxygen, blood glucose, peripheral perfusion, and sleep patterns.
[0101] Patient input 192 may include responses to queries posed by health monitoring application 150 regarding the condition of patient 4, input by patient 4 or another user, such as bystander 26. The queries and responses may occur responsive to the detection of the event by IMD 10, or may have occurred prior to the detection, e.g., as part of long-term monitoring of the health of patient 4. User recorded health data may include one or more of: exercise and activity data, sleep data, symptom data, medical history data, quality of life data, nutrition data, medication taking or compliance data, allergy data, demographic data, weight, and height. EHR data 194 may include any of the information regarding the historical condition or treatments of patient 4 described above. EHR data 194 may relate to history of SCA, tachyarrhythmia, myocardial infarction, stroke, seizure, one or more disease states, such as status of heart failure, chronic obstructive pulmonary disease
(COPD), renal dysfunction, or hypertension, aspects of disease state, such as ECG characteristics, cardiac ischemia, oxygen saturation, lung fluid, activity, or metabolite level, genetic conditions, congenital anomalies, history of procedures, such as ablation or cardioversion, and healthcare utilization. EHR data 194 may also include cardiac indicators, such as ejection fraction and left-ventricular wall thickness. EHR data 194 may also include demographic and other information of patient 4, such as age, gender, race, height, weight, and BMI.
[0102] Rules engine 172 may apply rules 196 to the data. Rules 196 may include one or more models, algorithms, decision trees, and/or thresholds. In some cases, rules 196 may be developed based on machine learning, e.g., may include one or more machine learning models. In some examples, rules 196 and the operation of rules engine 172 may provide a more complex analysis of the patient parameter data, e.g., the data received from IMD 10A or IMD 10B, than is provided by rules engine 74A or 74B and rules 84A or 84B. In examples in which rules 196 include one or more machine learning models, rules engine 172 may apply feature vectors derived from the data to the model(s).
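As one illustration of deriving a feature vector from patient parameter data before applying it to a model, the sketch below computes two simple ECG-derived features from a series of R-R intervals. The specific features and the commented-out model call are illustrative assumptions, not features specified by the disclosure.

```python
import statistics

def ecg_feature_vector(rr_intervals_ms):
    """Derive a simple feature vector from R-R intervals in milliseconds."""
    mean_rr = statistics.mean(rr_intervals_ms)
    heart_rate_bpm = 60000.0 / mean_rr             # mean heart rate
    rr_stdev = statistics.pstdev(rr_intervals_ms)  # crude variability measure
    return [heart_rate_bpm, rr_stdev]

# A machine learning model among rules 196 could then be applied to the
# vector, e.g.: classification, probability = model(ecg_feature_vector(rr))
```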
[0103] Rules configuration component 174 may be configured to modify rules 196 (and in some examples rules 84A or 84B) based on feedback indicating whether the detections and confirmations of acute health events by IMD 10 and computing device 12 were accurate. The feedback may be received from patient 4 and/or EHR 24 via HMS 22. In some examples, rules configuration component 174 may utilize the data sets from true and false detections and confirmations for supervised machine learning to further train models included as part of rules 196.
[0104] Rules configuration component 174, or another component executed by processing circuitry of system 2, may select a configuration of rules 196 based on etiological data for patient 4, e.g., any combination of one or more of the examples of sensed data 190, patient input 192, and EHR data 194 discussed above. In some examples, different sets of rules 196 tailored to different cohorts of patients may be available for selection for patient 4 based on such etiological data.
[0105] As discussed above, event assistant 176 may provide a conversational interface for patient 4 to exchange information with computing device 12. Event assistant 176 may query the user regarding the condition of patient 4 in response to receiving the alert message from IMD 10. Responses from the user may be included as patient input 192.
Event assistant 176 may use natural language processing and context data to interpret utterances by the user. In some examples, in addition to receiving responses to queries posed by the assistant, event assistant 176 may be configured to respond to queries posed by the user. In some examples, event assistant 176 may provide directions to and respond to queries regarding treatment of patient 4 from patient 4.
[0106] As will be explained in more detail below, and in accordance with the techniques of this disclosure, computing device 12 may be configured to receive the episode data associated with the acute health event detected by IMD 10 (e.g., IMD 10A or IMD 10B), and apply a first one or more machine learning models and first classification logic to the episode data to determine a first respective classification of a plurality of predetermined classifications for the episode data. Based on a probability of the first respective classification determined by the first one or more machine learning models, computing device 12 may further determine whether to retrieve first additional data from IMD 10 and apply a second one or more machine learning models and second classification logic to the first additional data, or retrieve second additional data from IMD 10 and apply a third one or more machine learning models and third classification logic to the second additional data. Computing device 12 may further determine a classification of the acute health event from the plurality of predetermined classifications based on the application of the second one or more machine learning models to the first additional data or the third one or more machine learning models to the second additional data.
Computing device 12 may also determine whether to control IMD 10 (e.g., therapy delivery circuitry 57B of IMD 10B) to deliver therapy based on the classification.

[0107] FIG. 4 is a block diagram illustrating an operating perspective of HMS 22. HMS 22 may be implemented in a computing system 20, which may include hardware components such as those of computing device 12, e.g., processing circuitry, memory, and communication circuitry, embodied in one or more physical devices. FIG. 4 provides an operating perspective of HMS 22 when hosted as a cloud-based platform. In the example of FIG. 4, components of HMS 22 are arranged according to multiple logical layers that implement the techniques of this disclosure. Each layer may be implemented by one or more modules comprised of hardware, software, or a combination of hardware and software.
[0108] Computing devices, such as computing devices 12, IoT devices 30, and computing devices 38, operate as clients that communicate with HMS 22 via interface layer 200. The computing devices typically execute client software applications, such as desktop applications, mobile applications, and web applications. Interface layer 200 represents a set of application programming interfaces (API) or protocol interfaces presented and supported by HMS 22 for the client software applications. Interface layer 200 may be implemented with one or more web servers.
[0109] As shown in FIG. 4, HMS 22 also includes an application layer 202 that represents a collection of services 210 for implementing the functionality ascribed to HMS 22 herein. Application layer 202 receives information from client applications, e.g., a message indicating detection of an acute health event from a computing device 12 or IoT device 30, and further processes the information according to one or more of the services 210 to respond to the information. Application layer 202 may be implemented as one or more discrete software services 210 executing on one or more application servers, e.g., physical or virtual machines. That is, the application servers provide runtime environments for execution of services 210. In some examples, the functionality of interface layer 200 as described above and the functionality of application layer 202 may be implemented at the same server. Services 210 may communicate via a logical service bus 212. Service bus 212 generally represents a logical interconnection or set of interfaces that allows different services 210 to send messages to other services, such as by a publish/subscribe communication model.
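The publish/subscribe pattern that service bus 212 represents can be sketched in a few lines; the class and method names below are illustrative only, not part of the disclosed system.

```python
class ServiceBus:
    """Toy publish/subscribe bus: services register handlers for topics,
    and a message published to a topic reaches every registered handler."""

    def __init__(self):
        self._handlers = {}

    def subscribe(self, topic, handler):
        """Register a callable to receive messages published to `topic`."""
        self._handlers.setdefault(topic, []).append(handler)

    def publish(self, topic, message):
        """Deliver `message` to every handler subscribed to `topic`."""
        for handler in self._handlers.get(topic, []):
            handler(message)
```

Under this sketch, a service such as record management service 238 could subscribe to an event-detection topic to which event processor service 230 publishes, without either service holding a direct reference to the other.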
[0110] Data layer 204 of HMS 22 provides persistence for information using one or more data repositories 220. A data repository 220, generally, may be any data structure or software that stores and/or manages data. Examples of data repositories 220 include but are not limited to relational databases, multi-dimensional databases, maps, and hash tables, to name only a few examples.
[0111] As shown in FIG. 4, each of services 230-238 is implemented in a modular form within HMS 22. Although shown as separate modules for each service, in some examples the functionality of two or more services may be combined into a single module or component. Each of services 230-238 may be implemented in software, hardware, or a combination of hardware and software. Moreover, services 230-238 may be implemented
as standalone devices, separate virtual machines or containers, processes, threads or software instructions generally for execution on one or more physical processors.
[0112] Event processor service 230 may be responsive to receipt of a transmission from computing device(s) 12 and/or IoT device(s) 30 indicating that IMD 10 detected an acute health event of patient 4 and, in some examples, that the transmitting device confirmed the detection. Event processor service 230 may initiate performance of any of the operations in response to detection of an acute health event ascribed herein to HMS 22, such as communicating with patient 4 and, in some cases, analyzing data to confirm or override the detection of the acute health event by IMD 10. Record management service 238 may store the patient data included in a received alert message within event records 252. In examples in which HMS 22 performs an analysis to confirm or override the detection of the acute health event by IMD 10, event processor service 230 may apply one or more rules 250 to the data received in the message, e.g., to feature vectors derived by event processor service 230 from the data, or to raw data, e.g., digitized ECG or other waveforms. Rules 250 may include one or more models, algorithms, decision trees, and/or thresholds, which may be developed by rules configuration service 234 based on machine learning.
[0113] Example machine learning techniques that may be employed to generate rules 250 (as well as rules 84A or 84B and/or 196) can include various learning styles, such as supervised learning, unsupervised learning, and semi-supervised learning. Example types of algorithms include Bayesian algorithms, clustering algorithms, decision-tree algorithms, regularization algorithms, regression algorithms, instance-based algorithms, artificial neural network algorithms, deep learning algorithms, dimensionality reduction algorithms, and the like. Various examples of specific algorithms include Bayesian Linear Regression, Boosted Decision Tree Regression, Neural Network Regression, Back Propagation Neural Networks, Convolutional Neural Networks (CNN), Long Short-Term Memory networks (LSTM), the Apriori algorithm, K-Means Clustering, k-Nearest Neighbour (kNN), Learning Vector Quantization (LVQ), Self-Organizing Map (SOM), Locally Weighted Learning (LWL), Ridge Regression, Least Absolute Shrinkage and Selection Operator (LASSO), Elastic Net, Least-Angle Regression (LARS), Principal Component Analysis (PCA), and Principal Component Regression (PCR).
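As a concrete instance of one of the listed techniques, a minimal k-Nearest Neighbour classifier can be written in a few lines. This is a generic textbook implementation for illustration, not the implementation used by the disclosed system, and the example labels are hypothetical.

```python
import math
from collections import Counter

def knn_classify(training_samples, query, k=3):
    """Classify `query` by majority vote among its k nearest neighbours.

    `training_samples` is a list of (feature_vector, label) pairs;
    distance is Euclidean distance between feature vectors.
    """
    nearest = sorted(training_samples,
                     key=lambda sample: math.dist(sample[0], query))[:k]
    labels = [label for _, label in nearest]
    return Counter(labels).most_common(1)[0][0]
```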
[0114] In some examples, in addition to rules used by HMS 22 to confirm acute health event detection (or in examples in which HMS 22 does not confirm event detection), rules 250 maintained by HMS 22 may include rules 196 utilized by computing devices 12 and rules 84A or 84B used by IMD 10A or IMD 10B. In such examples, rules configuration service 234 may be configured to develop and maintain rules 196 and rules 84A or 84B. Rules configuration service 234 may be configured to develop different sets of rules 84A or 84B, 196, 250, e.g., different machine learning models, for different cohorts of patients. Rules configuration service 234 may be configured to modify these rules based on event feedback data 254 that indicates whether the detections and confirmations of acute health events by IMD 10, computing device 12, and/or HMS 22 were accurate. Event feedback data 254 may be received from patient 4, e.g., via computing device(s) 12, and/or EHR 24. In some examples, rules configuration service 234 may utilize event records from true and false detections (as indicated by event feedback data 254) and confirmations for supervised machine learning to further train models included as part of rules 250.
[0115] As illustrated in the example of FIG. 4, services 210 may also include an assistant configuration service 236 for configuring and interacting with event assistant 176 implemented in computing device 12 or other computing devices. For example, assistant configuration service 236 may provide event assistants updates to their natural language processing and context analyses to improve their operation over time. In some examples, assistant configuration service 236 may apply machine learning techniques to analyze sensed data and event assistant interactions stored in event records 252, as well as the ultimate disposition of the event, e.g., indicated by EHR 24, to modify the operation of event assistants, e.g., for patient 4, a class of patients, all patients, or for particular users or devices.
[0116] FIG. 5 is a flow diagram illustrating an example operation for applying rules to patient parameter data to determine whether an acute health event is detected. The example operation of FIG. 5 may be performed by processing circuitry of any one of IMD 10, computing device(s) 12, 38, loT devices 30, or HMS 22 (e.g., by processing circuitry 50A or 50B or 130 implementing rules engine 74A, 74B or 172 and applying rules 84A, 84B or 196), or by processing circuitry of two or more of these devices respectively performing portions of the example operation.
[0117] According to the example of FIG. 5, the processing circuitry applies a first set of rules to first patient parameter data for a first determination of whether an acute health event, e.g., SCA, is detected (300). The processing circuitry determines whether one or more context criteria associated with the first determination are satisfied (302). If the one or more context criteria are not satisfied (NO of 302), the processing circuitry may determine whether the acute health event is detected based on the first determination (304). If the acute health event is detected (YES of 304), the processing circuitry may generate an alert, e.g., a message to another device and/or a user-perceptible alert as described herein (306). If the acute health event is not detected (NO of 304) or the alert has been generated, the example operation of FIG. 5 may end. If the one or more context criteria are satisfied (YES of 302), the processing circuitry may apply a second set of rules to second patient parameter data for a second determination of whether the acute health event, e.g., SCA, is detected (308), and determine whether the acute health event is detected based on the second determination (304).
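The two-tier flow of FIG. 5 can be sketched as follows. The rule callables, data fields, and context check below are illustrative assumptions, not a definitive implementation of the disclosed system:

```python
def detect_acute_event(first_rules, second_rules, first_data, second_data,
                       context_satisfied):
    # Step 300: first determination from the first rule set.
    first_determination = first_rules(first_data)
    if context_satisfied(first_determination):
        # Step 302 YES -> step 308: second determination on second data.
        detected = second_rules(second_data)
    else:
        # Step 302 NO -> step 304: rely on the first determination.
        detected = first_determination
    # Steps 304/306: alert only when the event is detected.
    return "alert" if detected else "no alert"

# Hypothetical rule sets: a simple rate threshold, then a stricter check.
first = lambda d: d["heart_rate"] > 180
second = lambda d: d["heart_rate"] > 180 and not d["pulse_detected"]
recheck_positives = lambda det: det   # re-check only positive first findings
print(detect_acute_event(first, second,
                         {"heart_rate": 200},
                         {"heart_rate": 200, "pulse_detected": False},
                         recheck_positives))
# → alert
```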
[0118] The first and second sets of rules are different in at least one aspect. In some examples, the second set of rules comprises at least one machine learning model. In some examples, both the first and second sets of rules comprise at least one machine learning model.
[0119] In some examples, the processing circuitry determines a risk score of the acute health event, e.g., SCA, based on the application of the first set of rules to the first patient parameter data, and compares the risk score to a threshold to determine whether the one or more context criteria are satisfied. In some examples, the context indicating that the second set of rules should be applied to the second patient parameter data may be that the risk score produced by the first determination does not meet a threshold indicating a sufficient certainty that the acute health event is occurring. The risk score may be a percentage likelihood of the acute health event.
[0120] In some examples, the processing circuitry determines a confidence level of the first determination of whether the acute health event is detected, and compares the confidence level to a threshold. In some examples, the one or more context criteria may be satisfied where the first determination does not have a threshold degree of confidence, or where the first determination is associated with a likelihood of being a false positive that exceeds a threshold. In such examples, application of the second set of rules to the
second patient parameter data may act as a “tie-breaker” when the first determination is not confident. In some examples, the processing circuitry determines that the one or more context criteria are satisfied when input from a user, e.g., the patient, contradicts the first determination (e.g., that the acute health event was detected or not detected), indicating that the likelihood that the first determination is false may be relatively high.
[0121] The processing circuitry may determine a confidence level of the first determination of whether the acute health event is present using a variety of techniques. For example, the application of the first set of rules to the first patient parameter data may produce a level of confidence through its output, e.g., a risk score. In such examples, a higher output indicating a higher likelihood of the acute health event may indicate a higher level of confidence. Examples of rules that may produce such outputs include machine learning models and time-domain signal processing algorithms.
[0122] In some examples, the processing circuitry may determine a noise level of one or more signals from which the first patient parameter data is determined. In such examples, the processing circuitry may determine a confidence level of the first determination of whether the acute health event is present based on the noise level. In general, confidence level and noise level may be inversely related. In some examples, the processing circuitry may determine the confidence level based on health record data for patient 4. For example, if a clinician has indicated in a health record or via programming IMD 10 that patient 4 has experienced a myocardial infarction or has heart failure, confidence levels may be increased and/or thresholds included in the rules applied to the first patient parameter data may be lowered.
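One possible mapping from noise level to confidence, with an optional increase from corroborating health record data, might look like the following. The linear mapping and the boost value are illustrative assumptions; only the inverse noise/confidence relation comes from the text above:

```python
def confidence_from_noise(noise_level, history_boost=0.0):
    """Map a signal noise level (0 = clean, 1 = fully noisy) to a
    detection confidence in [0, 1]; confidence and noise are inversely
    related. `history_boost` models an increase from corroborating
    health record data (e.g., a documented prior myocardial infarction)."""
    base = max(0.0, 1.0 - noise_level)
    return min(1.0, round(base + history_boost, 3))

print(confidence_from_noise(0.3))        # → 0.7
print(confidence_from_noise(0.3, 0.1))   # → 0.8
```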
[0123] In some examples, a context criterion may be satisfied when a component of system 2, e.g., IMD 10 or computing devices 12, has sufficient power to enable the application of the second set of rules to the second patient parameter data. In some examples, to determine whether the one or more context criteria are satisfied, the processing circuitry may determine a power level of system 2, e.g., of the relevant device, and compare the power level to a threshold. In some examples, the second patient parameter data includes data of at least one patient parameter that is not included in the first patient parameter data. In some examples, the processing circuitry activates a sensor to sense this patient parameter, e.g., when the device including the sensor has sufficient power for the measurement.
[0124] In some examples, the first patient parameter data and the second patient parameter data are both sensed by an implantable medical device. In some examples, the at least one patient parameter that is included in the second patient parameter data but not included in the first patient parameter data is sensed by an external device. In some examples, processing circuitry 50A or 50B of IMD 10A or IMD 10B or processing circuitry 130 of computing device(s) 12 (or loT devices 30 or the other devices discussed herein) performs each of sub-operations 300-308. In other examples, processing circuitry 50A or 50B of IMD 10A or IMD 10B performs the first determination of whether the acute health event, e.g., SCA, is detected (300), and processing circuitry 130 of computing device(s) 12 (or loT devices 30 or the other devices discussed herein) performs each of sub-operations 302-308.
[0125] In some examples, the first patient parameter data includes at least one patient parameter determined from ECG data, and the at least one patient parameter comprises a patient parameter determined from at least one of heart sounds of the patient, an impedance of the patient, motion of the patient (e.g., whether a fall occurred or is suspected), respiration of the patient, posture of the patient, blood pressure of the patient, a chemical detected in the patient, or an optical signal from the patient. In some examples, the first patient parameter data and second patient parameter data may be determined using different combinations of sensors, e.g., internal and/or external sensors. The first and second determinations may be considered different tiers, with the second determination utilizing additional sensor(s), data, and/or power if the context suggests it would be desirable to supplement the first determination.
[0126] In some examples, the processing circuitry selects at least one of the second set of rules or the parameters used for the second patient parameter data based on at least one of user (e.g., patient or care giver or bystander) input or medical record information of the patient. In some examples, the user input and/or medical history information may include information entered by a clinician when programming IMD 10. For example, the processing circuitry may select at least one of the second set of rules or the parameters used for the second patient parameter data based on user input or medical record information indicating a particular symptom or condition of the patient. In some examples, the first patient parameter data comprises data for a first set of patient parameters, and the processing circuitry may select at least one of the second set of rules
or a second patient parameter for the second patient parameter data based on the level of one or more parameters of the first set. A level for a particular parameter that is clinically significant but contrary to the first determination (either a detection or non-detection) may suggest that the second determination should be performed, and should be performed with a particular parallel (but different) or orthogonal patient parameter to resolve the uncertainty about whether the acute health event is detected.
[0127] In some examples, the first patient parameter data includes at least one patient parameter determined from ECG data of the patient, and the second patient parameter data comprises at least one of a morphological change or a frequency shift of the ECG data over time. The processing circuitry may analyze ECG data for timing or morphology changes. For example, morphological or frequency changes as a ventricular fibrillation persists may indicate an increased lethality of the ventricular fibrillation. In some examples, the rules applied by the processing circuitry may determine a higher likelihood of the acute health event, e.g., lethal ventricular fibrillation or SCA, based on the presence of such morphological or frequency shifts.
[0128] The example operation of FIG. 5 may result in a hierarchy of rules or even sensor measurements. In some examples, one or more sensors may be activated in certain contexts, and may be inactive for first determinations of whether the acute health event is detected, e.g., to conserve power of IMD 10. For example, if in a first determination ECG data indicates ventricular fibrillation and other sensor data indicates no pulse and no heart sounds, there may be no need for the second determination. But if those levels of evidence are not high, e.g., it is not certain that the rhythm is ventricular fibrillation, or there might be faint heart sounds, faint pulses, a fall, or a gait change, then a second determination could be employed.
[0129] Further, the rules and sensors used in either or both of the first and second determinations can be configured/personalized for each patient based on their medical history from EHR 24 or their history of previous events, or by their physicians/caregivers depending on the situation. For example, if a caregiver has to leave town for a few days, the processing circuitry could configure the rules to be satisfied by lower levels of evidence, e.g., automatically, which may advantageously tailor the monitoring provided by system 2 to the context of patient 4 and care givers of the patient.
[0130] FIG. 6 is a flow diagram illustrating another example operation for applying rules to patient parameter data to determine whether an acute health event is detected. The example operation of FIG. 6 may be performed by processing circuitry of any one of IMD 10, computing device(s) 12, 38, loT devices 30, or HMS 22 (e.g., by processing circuitry 50A or 50B or 130 implementing rules engine 74A, 74B or 172 and applying rules 84A, 84B, or 196), or by processing circuitry of two or more of these devices respectively performing portions of the example operation.
[0131] According to the example of FIG. 6, the processing circuitry applies a set of rules to patient parameter data to determine whether an acute health event, e.g., SCA, is detected (320). The processing circuitry determines whether one or more context criteria associated with the determination are satisfied (322). If the one or more context criteria are not satisfied (NO of 322), the processing circuitry may determine whether the acute health event is detected based on the determination (324). If the acute health event is detected (YES of 324), the processing circuitry may generate an alert, e.g., a message to another device and/or a user-perceptible alert as described herein (326). If the acute health event is not detected (NO of 324) or the alert has been generated, the example operation of FIG. 6 may end. If the one or more context criteria are satisfied (YES of 322), the processing circuitry may modify the set of rules (328), apply second patient parameter data to the second set of rules (330), and determine whether the acute health event is detected based on the application of the second patient parameter data to the second set of rules (324).
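A minimal sketch of the FIG. 6 flow, in which the rule set is modified and re-applied when a context criterion is satisfied. Representing a rule set as a heart-rate threshold check, and the particular thresholds, are illustrative assumptions:

```python
def detect_with_adaptive_rules(rules, data, context_satisfied,
                               modify_rules, second_data):
    detected = rules(data)                 # steps 320/324
    if context_satisfied():                # step 322, YES branch
        rules = modify_rules(rules)        # step 328: modify the rule set
        detected = rules(second_data)      # steps 330/324: re-evaluate
    return "alert" if detected else "no alert"   # step 326

def make_rule(threshold):
    # A "rule set" here is just a heart-rate threshold check (illustrative).
    return lambda d: d["heart_rate"] > threshold

rule = make_rule(180)
# e.g., lower the threshold when a context criterion such as device
# migration is detected:
lower_threshold = lambda _old: make_rule(160)
print(detect_with_adaptive_rules(rule, {"heart_rate": 170}, lambda: True,
                                 lower_threshold, {"heart_rate": 170}))
# → alert
```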
[0132] The processing circuitry may determine whether the one or more context criteria are satisfied in the manner described with respect to FIG. 5. In some examples, the first and second patient parameter data may be determined from the same patient parameters or (with respect to at least one parameter) different patient parameters. In some examples, the first patient parameter data and the second patient parameter data include at least one common patient parameter, and the processing circuitry may change a mode of sensing for the common patient parameter between the first patient parameter data and the second patient parameter data in response to satisfaction of the one or more context criteria. For example, the processing circuitry may change a sampling frequency for the common patient parameter.
[0133] In some examples in which IMD 10 senses patient parameters used to determine the first patient parameter data, the processing circuitry may determine that a context criterion is satisfied by detecting that IMD 10 has flipped or otherwise migrated within patient 4. Such migration may lead to significant changes in patient parameter data, e.g., ECG data, impedance data, or heart sound data. Changing a mode employed by IMD 10 to sense one or more patient parameters, or changing rules to account for changes in patient parameter data resulting from device migration, may help ameliorate the effects of the device migration and maintain effective acute health event detection. In addition to the mode of sensing and/or rules, the processing circuitry may adjust other aspects of system 2, such as the mode of wireless communication between the IMD and other devices. Techniques for detecting and mitigating migration of IMD 10 are described in commonly-assigned U.S. Patent Application No. 17/101,945, filed November 23, 2020 by Anderson et al., titled “DETECTION AND MITIGATION OF INACCURATE SENSING BY AN
IMPLANTED SENSOR OF A MEDICAL SYSTEM,” which is incorporated herein by reference in its entirety.
[0134] In some examples, the processing circuitry determines that the one or more context criteria are satisfied when the processing circuitry determines that the acute health event, e.g., ventricular tachyarrhythmia or SCA, is detected, but the patient or another user cancels the alarm or otherwise provides user input contradicting the determination. In such examples, the processing circuitry may modify one or both of the sensed patient parameters or the rules applied to the patient parameter data.
[0135] For example, the patient may have tolerated a rapid ventricular tachycardia that lasted for a sustained period (e.g., a programmed 10 or 20 seconds), but could soon experience another event, e.g., syncope, even though the patient believes they are OK. The modification may include adapting the rules based on the rhythm. Sometimes a long duration episode accelerates to ventricular fibrillation or more rapid ventricular tachycardia. Sometimes ventricular fibrillation slows down. In either case, the modification could include changing a heart rate threshold, e.g., applying hysteresis to the heart rate threshold. In some examples, ventricular fibrillation becomes difficult to sense. In such examples, the modification may include changing a ventricular depolarization detection threshold to allow more undersensing of depolarizations.
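Applying hysteresis to a heart rate threshold, as mentioned above, might be sketched as follows. The specific on/off thresholds are illustrative assumptions; the point is that detection begins above one rate but, once active, is only cleared below a lower rate, so a rhythm hovering near the threshold does not toggle the detection:

```python
class HysteresisRateDetector:
    """Heart-rate threshold with hysteresis: detection starts above
    `on_bpm` and is only cleared below the lower `off_bpm`."""
    def __init__(self, on_bpm=180, off_bpm=160):
        self.on_bpm, self.off_bpm = on_bpm, off_bpm
        self.detecting = False

    def update(self, bpm):
        if self.detecting:
            # Stay in detection until the rate drops below off_bpm.
            self.detecting = bpm >= self.off_bpm
        else:
            # Enter detection only above on_bpm.
            self.detecting = bpm > self.on_bpm
        return self.detecting

d = HysteresisRateDetector()
print([d.update(bpm) for bpm in (150, 185, 170, 155)])
# → [False, True, True, False]
```

Note that 170 bpm keeps the detection active on the third sample because it is above `off_bpm`, even though it is below `on_bpm`.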
[0136] In some examples, the processing circuitry determines that the one or more context criteria are satisfied based on a recent history of high arrhythmia burden. Some patients have electrical storms. Their electrolytes may be imbalanced, and they may experience a cluster of ventricular arrhythmias, but the patient parameter data may not satisfy the rules for detection of the acute health event. In such cases, the processing circuitry may adapt a tachyarrhythmia duration threshold, may alert the patient and caregivers and inform them to seek care as soon as possible, and/or may alert a clinician and send patient parameter data, e.g., ECG data, so the clinician can review.
[0137] FIG. 7 is a flow diagram illustrating an example operation for configuring rules applied to patient parameter data to determine whether an acute health event is detected for a patient. The example operation of FIG. 7 may be performed by processing circuitry that implements HMS 22, e.g., that implements rules configuration service 234. In some examples, the operation of FIG. 7 may be performed by processing circuitry of any one of IMD 10, computing device(s) 12, 38, loT devices 30, or HMS 22, e.g., implementing a rules configuration module, or by processing circuitry of two or more of these devices respectively performing portions of the example operation.
[0138] According to the example operation of FIG. 7, the processing circuitry determines whether an acute health event, e.g., SCA, is detected (340). The processing circuitry receives feedback for the event (342). The feedback indicates whether the detection is a true or false positive, or whether a non-detection is a true or false negative. The processing circuitry may receive the feedback from patient 4 or EHR 24. The processing circuitry updates rules (e.g., rules 84A or 84B, rules 196, and/or rules 250) based on the feedback and event data, e.g., event data 86A or 86B or event records 252. In some examples, the processing circuitry uses the event data as a training set for one or more machine learning models based on the feedback.
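Pairing stored event data with adjudicated feedback to form a supervised training set, as in FIG. 7, could be sketched as follows. The record structure, episode identifiers, and feature values are hypothetical:

```python
def build_training_set(event_records, feedback):
    """Pair stored event data with adjudicated feedback labels for
    supervised retraining. `event_records` maps an episode id to its
    feature vector; `feedback` maps an episode id to True (event
    confirmed) or False (false positive)."""
    return [
        (features, feedback[event_id])
        for event_id, features in event_records.items()
        if event_id in feedback   # skip episodes not yet adjudicated
    ]

records = {"ep1": [0.9, 0.2], "ep2": [0.1, 0.8], "ep3": [0.5, 0.5]}
labels = {"ep1": True, "ep2": False}
print(build_training_set(records, labels))
# → [([0.9, 0.2], True), ([0.1, 0.8], False)]
```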
[0139] Through predictive and “self-learning” techniques, the operation of a system used to detect, confirm, and respond to acute health events, such as SCA, can be improved. The information used to improve the performance could include physiologic sensor data that may indicate an SCA event is likely (QT prolongation, T-wave alternans, changes in respiration rate or thoracic impedance, history of PVCs or non-sustained VT, reduction in O2 saturation and/or perfusion, etc.). The information used to improve the performance could include information indicating whether the prior SCA event was alerted
appropriately and accurately, clinical or physiologic characteristics of the patient (disease state, weight, gender, etc.), data from EHR 24, and data input from the patient (e.g., symptom logging, confirmation that he/she is OK and not suffering from SCA, etc.).
[0140] Implementing the example operation of FIG. 7, the processing circuitry may personalize the rules for patient 4 over time. If patient 4 has a lot of false positives, the example operation of FIG. 7 may modify the rules to be less sensitive and, conversely, if patient 4 has a lot of false negatives, may modify the rules to be more sensitive. In some examples, the processing circuitry may use the feedback and event data to update rules, e.g., machine learning models, for other patients, such as all patients whose IMDs are served by HMS 22, or a particular population or cohort of patients. In some examples, the processing circuitry may use data from a number of sources (e.g., computing devices 12, loT devices 30, etc.) to modify the rules or the collection of patient parameter data. Data used by processing circuitry to update rules may include data collected using an accelerometer, speaker, light detector, or camera on a computing device or loT device.
[0141] FIG. 8 is a flow diagram illustrating another example operation for configuring rules applied to patient parameter data to determine whether an acute health event is detected for a patient. The example operation of FIG. 8 may be performed by processing circuitry that implements HMS 22, e.g., that implements rules configuration service 234. In some examples, the operation of FIG. 8 may be performed by processing circuitry of any one of IMD 10, computing device(s) 12, 38, loT devices 30, or HMS 22, e.g., implementing a rules configuration module, or by processing circuitry of two or more of these devices respectively performing portions of the example operation.
[0142] According to the example operation of FIG. 8, the processing circuitry determines an etiology or risk stratification of patient 4 (360). The processing circuitry selects a set of rules (e.g., a set of rules 84A or 84B, rules 196, and/or rules 250), which may be a first set of rules and/or a second set of rules, for acute health event, e.g., SCA, detection for patient 4 based on the patient etiology (362). In some examples, rules 250 include different sets of rules for different patient cohorts having different etiologies, and processing circuitry may select different rule sets to implement as rules 84A or 84B in IMD 10A or IMD 10B and rules 196 in computing device(s) 12 for a given patient based on the etiology of that patient. The processing circuitry may apply the selected set of rules
to patient parameter data to determine whether the acute health event is detected using any of the techniques described herein (364).
[0143] Detection of SCA can be achieved by looking at a number of possible markers that occur prior to and during the event. The best markers to detect an impending or ongoing event are likely to be different based on an etiology of the patient. An SCA detection algorithm designed generically for a broad population may not achieve satisfactory sensitivity and specificity. The etiology of patient 4 may include baseline characteristics, medical history, or disease state. The etiology of patient 4 may include any EHR data 194 described herein, as well as patient activity level or metabolite level. With such possible inputs, the rules could look for certain markers to exhibit certain trends or threshold crossings to detect an impending or ongoing acute health event, e.g., SCA.
[0144] In some examples, selection of a set of rules may include modification of a universal rule set to turn certain rules (or markers of the acute health event) on or off, or change the weight of certain rules or markers. In some examples, a family of devices could be designed such that individual models have sensors or calculations for only a limited set of inputs, motivated by a need to reduce manufacturing costs or energy consumption.

[0145] While SCA is typically detected by heart rate/rhythm, rules related to other patient parameter data may be set to a heightened alert based on patient etiology. For example, a patient with a prior myocardial infarction may have rules that weigh ischemia factors such as ST segment elevation more heavily than for patients lacking this etiology. As another example, a patient with long QT syndrome may have rules that more heavily weight QT interval and activity. As another example, a heart failure patient may have rules that apply greater weight to patient parameter data related to lung fluid and QRS duration.
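Weighting markers differently per etiology could be sketched as follows. Which markers matter for each etiology, and by how much, are purely illustrative assumptions; only the idea of etiology-selected weights comes from the paragraphs above:

```python
# Illustrative per-etiology marker weights (not values from this disclosure).
ETIOLOGY_WEIGHTS = {
    "prior_mi":      {"rate": 1.0, "st_elevation": 2.0, "qt_interval": 0.5},
    "long_qt":       {"rate": 1.0, "st_elevation": 0.5, "qt_interval": 2.0},
    "heart_failure": {"rate": 1.0, "lung_fluid": 1.5, "qrs_duration": 1.5},
}

def risk_score(etiology, markers):
    """Weighted sum of normalized marker values (each in [0, 1]) using
    the weight set selected for the patient's etiology; markers without
    an explicit weight default to 1.0."""
    weights = ETIOLOGY_WEIGHTS[etiology]
    return sum(weights.get(name, 1.0) * value
               for name, value in markers.items())

print(risk_score("prior_mi", {"rate": 0.5, "st_elevation": 0.5}))  # → 1.5
```

The resulting score could then be compared to an etiology-specific threshold to decide whether the acute health event is detected.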
[0146] In some examples, processing circuitry of system 2 may use patient etiology to “personalize” other aspects of the operation of system 2 for patient 4 or a cohort including patient 4. For example, the processing circuitry may provide alerts and user interfaces that guide patient 4 or others based on the etiology. The processing circuitry can provide patient-specific care recommendations (e.g., delivery of antitachyarrhythmia shock or pacing, neurostimulation, or potential drug therapy for prevention or therapy (termination) of SCA). For example, the etiology may indicate patient 4 is more at risk for pulseless
electrical activity vs. ventricular fibrillation/ventricular tachycardia. Additionally or alternatively, the processing circuitry of system 2 may control IMD 10 to deliver therapy to the patient, such as providing defibrillation, to treat the patient’s condition. Thus, the processing circuitry of system 2 may recommend or otherwise cause the delivery of patient-specific care actions (e.g., by IMD 10) based on the etiology.
[0147] Although described primarily in the context of detection of SCA, system 2 may be used to detect any of a number of acute health events of patient 4. For example, system 2 may be used to detect stroke. Stroke can often present in the form of facial droop. This change in facial tone could be identified using facial image processing on a computing device 12, e.g., a smartphone, or loT devices 30. Such image processing could be a primary indicator of possible stroke, or a part of a confirmation after another device indicates changes related to stroke.
[0148] Some computing devices 12, e.g., smartphones, include facial processing for access, e.g., face ID, and are accessed in this manner frequently throughout the day. Processing circuitry, e.g., of the computing device, may analyze the facial images to detect subtle changes in facial tone over time. The processing circuitry could detect possible stroke, and various devices of system 2 could provide alerts.
[0149] In some examples, in response to detection based on the camera images, the device could output a series of prompts (audible and/or visual) to assess a current state of patient 4. Patient 4 could be prompted to repeat a phrase or answer audible questions to assess cognitive ability. The device could use additional motion processing to further verify the state of patient 4, e.g., using an accelerometer of computing device 12A and/or 12B. Changes in body motion and asymmetry, e.g., of the face and/or body motion, are indicative of stroke. In some examples, the device may ask patient 4 questions. Processing circuitry may analyze the response to detect speech difficulties associated with stroke. In some examples, the alert could include information on where the facial tone has changed, which could aid in diagnosis by indicating possible primary locations for the stroke (e.g., left side droop = right side clot).
[0150] As described herein, processing circuitry of one or more devices of system 2, e.g., IMD 10, edge devices such as computing devices 12 or loT devices 30, and/or HMS 22 (or other cloud services), may be configured to analyze episode data associated with an acute health event, such as a ventricular tachyarrhythmia or SCA, detected by IMD 10. The
episode data may include ECG and other physiological parameter data collected by IMD 10 for the event, e.g., leading up to, during, and/or after the event. As described herein, the analysis may include the application of a second set of rules (as opposed to a first set applied by IMD 10), e.g., a machine learning model or other artificial intelligence algorithm, decision trees, and/or thresholds, to the episode data and, in some cases, a variety of patient data collected by devices of system 2.
[0151] The initial detection of a ventricular tachyarrhythmia episode by IMD 10 may be based on a first set of rules relating to rate and regularity of RR intervals as well as morphological features of the ECG, e.g., of the R-wave. These rules may lead to inappropriate detections due to oversensing R-waves. Further, true ventricular tachyarrhythmia can be of supraventricular origin, e.g., SVT or SVT with aberrancy, or ventricular origin such as VF and VT. VT may be monomorphic or polymorphic. In some cases, VT may be wide complex VT. In general, polymorphic VT (PVT) and VF are life threatening, while monomorphic VT (MVT) is life threatening if sustained for durations on the order of minutes, and SVTs are generally not life threatening unless sustained for greater than 1 hour. The techniques of this disclosure may include use of a second set of rules that includes machine learning models or other AI algorithms to improve accuracy of classification of these different forms of ventricular tachyarrhythmia that may be detected by IMDs.
[0152] In some examples, the second set of rules may comprise an ensemble of deep learning neural networks configured to discriminate or classify these rhythms. Techniques for configuring an ensemble of deep learning neural networks for classifying cardiac rhythms are described in U.S. Provisional Application Serial No. 63/194,451, filed May 28, 2021, and titled “DYNAMIC AND MODULAR CARDIAC EVENT DETECTION,” the entire contents of which are incorporated herein by reference. In some examples, the second set of rules may comprise a single classifier that receives, as input, raw ECG data or a specific feature derived from raw ECG data.
[0153] In some examples, an ensemble of neural networks may include CNNs and/or recurrent neural networks. One or more neural networks of the ensemble may be trained to discriminate or classify based on raw ECG data collected by IMD 10 as an input. One or more networks of the ensemble may be trained to discriminate or classify based on custom features determined by IMD 10 from the ECG or other signals sensed by IMD 10,
or determined by the processing circuitry implementing the second set of rules (e.g., processing circuitry of any of, or any combination of, the devices of system 2). An ensemble of neural networks may improve sensitivity and specificity of the overall analysis by allowing for different inputs to have respective networks of different forms, e.g., one can use recurrent neural networks for one or more specific inputs and CNNs for one or more other inputs. In some examples, the output of each network may be concatenated and flattened, and then fed as input into the final stages of the ensemble network which may have fully connected layers and classification layers.
[0154] FIG. 9 is a block diagram illustrating an example of an ensemble classifier 400 of neural networks configured to classify ventricular tachyarrhythmias. Processing circuitry, e.g., processing circuitry 130 of computing device 12 or IoT device 30, may apply a plurality of inputs 402 to a plurality of neural networks 404 of ensemble classifier 400. Inputs 402 include raw signal inputs 406A or other raw parameter data of patient 4, e.g., from IMD 10 or other devices as described herein, and inputs 406B comprising features derived from the raw data. Inputs 406A may include a raw ECG segment sensed by IMD 10 including a ventricular tachyarrhythmia onset detected by IMD 10 based on the ECG, and a raw ECG segment sensed by IMD 10 including a portion of the ECG by which IMD 10 determined the ventricular tachyarrhythmia was sustained.
[0155] Inputs 406B may include features derived from the raw ECG sensed by IMD 10 and data indicating timing of and intervals between R-waves detected by IMD 10 during, and in some cases before and/or after, an episode of ventricular tachyarrhythmia sensed by IMD 10. The features may include a sequence of R-R intervals during, and in some cases prior to, detection of the ventricular tachyarrhythmia by IMD 10, an overlay of raw ECG data and R-sense timing information, autocorrelation, cross-correlation, and/or wavelet transformation of ECG signal data, a histogram of R-R intervals, and a temporal history of prior ventricular tachyarrhythmia episodes detected by IMD 10 and their adjudication by the processing circuitry applying the ensemble classifier 400. Inputs 402 may also include any other sensed parameters of patient 4, e.g., sensed by IMD 10 or other devices as described above. In some examples, inputs 406B may include a feature determined by the processing circuitry based on a temporal history of other sensed parameters of patient 4.
[0156] In some examples, one or more inputs 402 or portions thereof may be fed into separate individual neural networks 404, which may include 1- or 2-dimensional CNNs, RNNs, or long short-term memory (LSTM) networks (which may be a type of RNN). The processing circuitry may flatten 408 and concatenate 410 the outputs from the plurality of neural networks to provide ensemble classifier 400. The processing circuitry may apply the flattened and concatenated outputs to a fully connected layer 412, and the outputs of the fully connected layer to one or more SoftMax functions 414. The outputs of the one or more SoftMax functions 414 are probabilities 416, e.g., respective probabilities of different classifications of the data for the episode of ventricular tachyarrhythmia detected by IMD 10. In the example illustrated by FIG. 9, the classifications are PVT, MVT, SVT, noise, and oversensing.
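The flatten, concatenate, fully connected, and SoftMax stages of an ensemble classifier such as classifier 400 can be sketched as follows. This is a minimal NumPy illustration with assumed branch shapes and random stand-in weights; in a real system each branch output would come from a trained CNN or RNN.

```python
import numpy as np

rng = np.random.default_rng(0)
CLASSES = ["PVT", "MVT", "SVT", "noise", "oversensing"]

def softmax(z):
    z = z - z.max()           # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

# Hypothetical per-branch outputs: each network of the ensemble emits a
# feature map for its own input (raw ECG segment, RR-interval sequence,
# RR histogram, ...).  Shapes are illustrative only.
branch_outputs = [
    rng.normal(size=(4, 8)),  # e.g., CNN on a raw ECG onset segment
    rng.normal(size=(6,)),    # e.g., RNN on an RR-interval sequence
    rng.normal(size=(2, 3)),  # e.g., CNN on an RR-interval histogram
]

# Flatten each branch output and concatenate into one feature vector ...
flat = np.concatenate([b.ravel() for b in branch_outputs])

# ... then apply a fully connected layer and a SoftMax classification layer.
W = 0.1 * rng.normal(size=(len(CLASSES), flat.size))
b = np.zeros(len(CLASSES))
probabilities = softmax(W @ flat + b)
```

The resulting vector sums to one, with one probability per candidate classification.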
[0157] The processing circuitry, e.g., processing circuitry 130 of computing device 12 or IoT device 30, may determine a classification of the episode based on probabilities 416. In this manner, the processing circuitry may confirm or overrule the detection of a ventricular tachyarrhythmia by IMD 10. Ensemble classifier 400 may be an example of a second set of rules as described above.
[0158] In some examples, however, the processing circuitry may combine the raw signals and derived features in a 2D array format (to form an input ensemble) for a single CNN or other neural network. FIG. 10 is a block diagram illustrating an example of a single classifier 430 utilizing raw signals and derived features as inputs 432. Inputs 432 of FIG. 10 may be substantially similar to inputs 406B of FIG. 9.
[0159] The processing circuitry, e.g., processing circuitry 130 of computing device 12 or IoT device 30, may concatenate 434 inputs 432. In the example of FIG. 10, the processing circuitry may concatenate 434 inputs 432 to form a concatenated 2D array 436 of input values to be applied to a neural network 438 including one or more of an LSTM/RNN, rectifier function, and/or max pooling layers. The processing circuitry may concatenate 440 the output of neural network 438 for application to a fully connected layer 442 and SoftMax function 444 to produce probabilities 446 in the manner described above with respect to FIG. 9. Classifier 430 may be an example of a second set of rules as described above.
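Combining raw signals and derived features into a single 2D input array, as classifier 430 does, can be sketched as follows. The resampling of each input to a common row width is an illustrative assumption made so that heterogeneous inputs can be stacked as rows; the actual arrangement is not specified here.

```python
import numpy as np

# Hypothetical inputs: a raw ECG segment plus two derived feature series.
raw_ecg = np.sin(np.linspace(0.0, 20.0, 256))
rr_series = np.array([410.0, 405.0, 398.0, 401.0])        # RR intervals (ms)
rr_histogram = np.array([0, 1, 3, 0, 0, 0, 0, 0], float)  # RR-interval histogram

def to_row(x, width):
    """Resample a 1-D input to a fixed width so heterogeneous inputs
    can be stacked as rows of one 2D array."""
    old = np.linspace(0.0, 1.0, len(x))
    new = np.linspace(0.0, 1.0, width)
    return np.interp(new, old, x)

WIDTH = 64
input_ensemble = np.stack([
    to_row(raw_ecg, WIDTH),
    to_row(rr_series, WIDTH),
    to_row(rr_histogram, WIDTH),
])  # shape (3, 64): one 2D array for a single CNN or other network
```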
[0160] In some examples, the processing circuitry uses different segments of ECG, such as a segment from a period of time at onset of the arrhythmia, another segment when the episode reaches sustained detection, and multiple ongoing segments thereafter, as respective inputs to the one or more neural networks, e.g., of ensemble classifier 400 or classifier 430. In some examples, the processing circuitry uses features derived from different segments of the ECG in the episode data as respective inputs to the one or more neural networks, such as RR intervals during the episode and prior to the start of the episode, RR interval stability or variability, or short term HRV prior to onset of the episode. In general, the segments may be timewise, e.g., respective periods of the ECG. The segments may be contiguous, separated by time, and/or overlapping.
[0161] In some examples, the processing circuitry uses data from other sensors, e.g., of IMD 10, computing devices 12, and/or IoT devices 30. The additional data may include patient motion (e.g., gait) or posture, e.g., from an accelerometer, which may indicate activity level during arrhythmia or gait/posture during arrhythmia, or whether patient 4 had a fall during the detected episode. In some examples, other data, e.g., historical data, may be obtained from IMD 10, computing devices 12, HMS 22, and/or EHR 24. The other data may include, as examples, ventricular tachyarrhythmia episode detection history, AI based episode classification history, AF burden history, or clinical history. The processing circuitry may derive features from sensor signals using signal processing techniques such as autocorrelation, Short Time Fourier transforms, Continuous Wavelet transforms, principal component analysis, independent component analysis, etc.
[0162] In some examples, the processing circuitry may use a staged approach to classify an episode detected by IMD 10. FIG. 11 is a block diagram illustrating a staged classifier 460 for classifying a ventricular tachyarrhythmia episode. For example, the processing circuitry, e.g., processing circuitry 130 of computing device 12 or IoT device 30, may first apply a 5-class classifier 462, e.g., similar to ensemble classifier 400 or classifier 430, and the most dominant classes, such as inappropriate detections, noise, and oversensing episodes, are removed. The processing circuitry then classifies episodes that are classified as appropriate tachycardia (PVT, MVT, and SVT) using a 3-class classifier 464. Then the next dominant class (SVT) is removed. The processing circuitry then classifies the remaining episodes using a 2-class classifier 466 to classify PVT vs. MVT episodes.
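The staged flow of FIG. 11 reduces to a simple cascade. In the sketch below the three classifier functions are hypothetical stand-ins (they merely read pre-computed labels from a dictionary) so that only the staging logic itself is shown.

```python
# Hypothetical stand-in classifiers for the three stages; in practice each
# would be a trained model such as ensemble classifier 400 or classifier 430.
def classify_5(episode):   # stage 1: PVT/MVT/SVT/noise/oversensing
    return episode["coarse"]

def classify_3(episode):   # stage 2: PVT/MVT/SVT
    return episode["mid"]

def classify_2(episode):   # stage 3: PVT vs. MVT
    return episode["fine"]

def staged_classify(episode):
    label = classify_5(episode)
    if label in ("noise", "oversensing"):  # remove inappropriate detections first
        return label
    label = classify_3(episode)
    if label == "SVT":                     # remove the next dominant class
        return "SVT"
    return classify_2(episode)             # final PVT vs. MVT discrimination

result = staged_classify({"coarse": "PVT", "mid": "MVT", "fine": "MVT"})
```

Here `result` is the stage-3 label, because the episode survives both earlier removal steps.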
[0163] In some examples, the processing circuitry may discriminate SVT from other ventricular tachyarrhythmia classifications based on a comparison of ECG data for the
episode to a historical ECG segment. The episode ECG data may be received from IMD 10 as described herein, and the historical ECG segment may be retrieved from HMS 22. The historical ECG segment may be from a previous transmission from IMD 10 to HMS 22, e.g., a daily transmission, such as the most recent transmission. The historical ECG segment may be a segment prior, e.g., most recently prior, to a fast heart rate associated with the detected ventricular tachyarrhythmia, or a most recent periodically collected ECG, e.g., collected every hour. The historical ECG segment may be a segment of normal sinus rhythm ECG collected when the device was not detecting any cardiac events or arrhythmias, or may be a segment previously verified as SVT, e.g., based on a user or algorithmic analysis of the segment.
[0164] In such examples, the processing circuitry may apply a convolutional filter and/or bank of convolutional filters to the ECG data for an episode to discriminate SVT from other classifications. The processing circuitry may generate the convolutional filter based on the historical ECG segment, which may be about 8 seconds in length. The processing circuitry may generate the bank of convolutional filters based on a wavelet or other decomposition of the historical ECG segment. The processing circuitry may classify the episode as SVT based on a suprathreshold output of the convolutional filter(s). In some examples, an additional classifier may further classify SVT as one of sinus tachycardia, atrial arrhythmia, SVT with aberrancy, junctional rhythms, atrioventricular nodal reentry tachycardia, or others.
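One way to realize this comparison is a matched (convolutional) filter built from the historical segment, as sketched below with synthetic signals. The sampling rate, waveforms, and the 0.5 threshold are illustrative assumptions, not values from the disclosure.

```python
import numpy as np

fs = 128                                     # Hz, assumed sampling rate
t_hist = np.arange(0, 8, 1 / fs)             # ~8-second historical segment
historical = np.sin(2 * np.pi * 1.5 * t_hist)           # stand-in known-SVT ECG

t_ep = t_hist[: fs * 4]                      # 4-second episode segments
episode_svt = np.sin(2 * np.pi * 1.5 * t_ep)            # matches historical morphology
episode_vt = np.sign(np.sin(2 * np.pi * 3.0 * t_ep))    # different morphology

def matched_filter_peak(episode, template):
    # Generate a convolutional (matched) filter from the historical segment
    # and report the peak of the normalized cross-correlation.
    template = (template - template.mean()) / np.linalg.norm(template)
    episode = (episode - episode.mean()) / np.linalg.norm(episode)
    return float(np.abs(np.correlate(episode, template, mode="full")).max())

score_svt = matched_filter_peak(episode_svt, historical)
score_vt = matched_filter_peak(episode_vt, historical)

SVT_THRESHOLD = 0.5                          # hypothetical suprathreshold criterion
is_svt = score_svt > SVT_THRESHOLD
```

An episode resembling the historical morphology yields a high peak and is classified as SVT; a dissimilar episode falls below the threshold.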
[0165] In some examples, the processing circuitry, e.g., processing circuitry 130 of computing device 12 or IoT device 30, or any processing circuitry of any device of system 2 described herein, may discriminate SVT from other ventricular tachyarrhythmia classifications based on a feature indicative of the presence or absence of high frequency harmonics in the episode ECG data. FIGS. 12A and 12B illustrate frequency decompositions of ECG 470 and ECG 480 of an MVT episode and an SVT episode, respectively. As illustrated by FIGS. 12A and 12B, the magnitude at certain higher frequency harmonics is greater in the decomposed ECG 480 for the SVT episode (FIG. 12B) than the decomposed ECG 470 for the MVT episode (FIG. 12A). In some examples, the processing circuitry applies a bank of complex exponential functions as convolutional filters to the ECG data for the episode. The frequency range of the bank may be configured to span frequencies of interest, which may be integer multiples of a lowest
frequency in the decomposed ECG data for the episode. For example, the lowest frequency may be about 60 Hertz (Hz), and the bank may span a range from 100 Hz to 500 Hz, continuously across the range or via bands centered on respective integer multiples of 60 Hz. The processing circuitry may classify the episode as SVT based on a suprathreshold output of the convolutional filter(s).
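The complex-exponential filter bank can be sketched as a set of correlations at integer multiples of the lowest frequency. The signals below are synthetic sinusoids; the 60 Hz fundamental and harmonic range follow the example in the text, while the amplitudes and sampling rate are assumptions.

```python
import numpy as np

fs = 1024                          # Hz, assumed sampling rate
t = np.arange(0, 1, 1 / fs)        # one second of signal

def harmonic_energy(signal, f0, harmonics):
    """Correlate the signal with complex exponentials at integer multiples
    of f0 (a crude filter bank) and sum the resulting magnitudes."""
    total = 0.0
    for k in harmonics:
        basis = np.exp(-2j * np.pi * k * f0 * t)  # one complex-exponential filter
        total += abs(np.sum(signal * basis)) / len(signal)
    return total

f0 = 60.0
# Stand-in signals: the "SVT-like" one carries higher-frequency harmonics
# (sharp, conducted complexes); the "MVT-like" one does not.
svt_like = (np.sin(2 * np.pi * f0 * t)
            + 0.5 * np.sin(2 * np.pi * 3 * f0 * t)
            + 0.3 * np.sin(2 * np.pi * 5 * f0 * t))
mvt_like = np.sin(2 * np.pi * f0 * t)

bank = range(2, 9)                 # roughly 120 Hz to 480 Hz
e_svt = harmonic_energy(svt_like, f0, bank)
e_mvt = harmonic_energy(mvt_like, f0, bank)
```

A suprathreshold bank output (here `e_svt` well above `e_mvt`) would support an SVT classification.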
[0166] In some examples, the processing circuitry, e.g., processing circuitry 130 of computing device 12 or IoT device 30, or any processing circuitry of any device of system 2 described herein, may apply a beat-wise morphological comparator to discriminate PVT from MVT. In such examples, the processing circuitry may generate a convolutional filter from a selected beat, e.g., the first beat, in the ECG stored by IMD 10 for the episode. The processing circuitry may generate a plurality of convolutional filters based on a decomposition, e.g., Walsh, Fourier, or wavelet, of the selected beat. The processing circuitry may apply the filter(s) to some or all of the other beats in the ECG stored by IMD 10 for the episode, e.g., sequentially. The processing circuitry may classify the episode as PVT based on a suprathreshold variability in the output of the convolutional filter(s).
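A beat-wise morphological comparator of this kind can be sketched with Gaussian stand-ins for QRS complexes. The beat shapes, widths, and the 0.02 variability threshold are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(-3.0, 3.0, 40)

def make_beat(width):
    # Gaussian stand-in for a QRS complex; `width` controls the morphology.
    return np.exp(-((x / width) ** 2))

# Monomorphic episode: every beat is essentially the first beat plus noise.
mvt_beats = [make_beat(1.0) + 0.01 * rng.normal(size=40) for _ in range(8)]
# Polymorphic episode: the beat shape changes from beat to beat.
pvt_widths = [0.5, 1.8, 0.7, 1.5, 1.0, 2.0, 0.6, 1.2]
pvt_beats = [make_beat(w) for w in pvt_widths]

def morphology_variability(beats):
    # Convolutional (matched) filter generated from the selected first beat ...
    template = beats[0] / np.linalg.norm(beats[0])
    # ... applied to each later beat; the variability of the filter output
    # across beats is the discriminating metric.
    scores = [float(np.dot(template, b / np.linalg.norm(b))) for b in beats[1:]]
    return float(np.std(scores))

PVT_THRESHOLD = 0.02  # hypothetical suprathreshold criterion

def classify(beats):
    return "PVT" if morphology_variability(beats) > PVT_THRESHOLD else "MVT"
```

Stable beat-to-beat filter outputs indicate MVT; suprathreshold variability indicates PVT.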
[0167] In some examples, the processing circuitry, e.g., processing circuitry 130 of computing device 12, applies a classifier to event or episode data collected by IMD 10 for a suspected acute health event to determine one of a plurality of possible classifications. The possible classifications may include one or more acute health events of interest, including the one suspected by the IMD. The event data may include ECG data, and the classifications may include the classifications discussed above with respect to FIGS. 9-11. The classifier may be implemented by a rules engine, such as rules engine 172, and may be an example of application of a second set of rules to patient parameter data.
[0168] FIG. 13 is a block diagram illustrating an example configuration of a classifier 490 configured to classify episode data collected and transmitted by IMD 10 in response to detecting an acute health event, e.g., transmitted by the IMD based on application of a first set of rules, as described herein. Classifier 490 respectively analyzes timewise segments 492 of the episode data, e.g., M second segments of N seconds of episode data transmitted by IMD 10, to determine a classification 494. In some examples, the episode data comprises ECG data transmitted by IMD 10 in response to detecting a sustained ventricular tachyarrhythmia, and possible classifications include the classifications discussed above with respect to FIGS. 9-11. Classifier 490 may be implemented by
processing circuitry 130 of computing device 12, and/or processing circuitry of any one or more devices described herein.
[0169] Classifier 490 may analyze all available segments of the episode data, or selected segments of the episode data, which may be consecutive or non-consecutive. For example, classifier 490 may analyze a plurality of consecutive segments at the end of the episode and, in some cases, additionally analyze one or more non-consecutive segments preceding the plurality of segments. The segments may be adjacent in time, overlap in time, or be spaced apart in time. In some examples, segments 492 include a historical or baseline segment, from the beginning of the episode data or from another transmission from IMD 10, as described above. Additionally, in some examples, classifier 490 may analyze later segments, after the end of the episode data, when computing device 12 and/or any one or more devices described herein requests additional data from IMD 10 based on an uncertain (e.g., lower confidence level) classification.
[0170] As illustrated in FIG. 13, classifier 490 includes one or more machine learning models 496. One or more machine learning models 496 may be configured and operate as illustrated and described with respect to FIGS. 9-11. One or more machine learning models 496 may output, for each of one or more segments 492, respective classifications, probabilities, decisions, or other outputs 499 to classification logic 498. For example, one or more machine learning models 496 may output, for each of segments 492, a respective classification (e.g., tachyarrhythmia type as described above) and, in some cases, an associated probability or confidence level. In some examples, one or more machine learning models 496 may output, for each of segments 492, a respective probability for each possible classification (e.g., each tachyarrhythmia type).
[0171] Classification logic 498 determines a classification 494 of the episode data based on the classifications of segments 492 of episode data by machine learning model(s) 496. Based on the classification of the episode data, e.g., based on the classification being certain tachyarrhythmias such as VF or PVT, processing circuitry 130 may control output as described herein. In some examples, processing circuitry 130 requests additional patient parameter data from IMD 10 based on the classification, e.g., if the classification is a certain tachyarrhythmia such as VF or PVT, but with a relatively lower probability and/or duration. Segment-based classification of episode data according to the techniques described herein may improve the accuracy of classification/detection of health events,
particularly in situations where shorter segments of continuous episode data are available to train the one or more machine learning models. Segment-based classification of episode data according to the techniques described herein may improve the accuracy of classification/detection of health events where the patient condition may change during an episode, e.g., where a tachyarrhythmia may spontaneously terminate or change during an episode.
[0172] In some examples, classification logic 498 determines the classification of the episode based on a number of the segments determined to have the classification, or a total duration of segments having the classification, satisfying a threshold. In some examples, classification logic 498 additionally or alternatively determines the classification of the episode based on a time location of one or more segments determined to have the classification within the episode data. For example, classification logic 498 may require that the last N segments, where N is an integer greater than or equal to 1, have the classification in order for the episode data as a whole to have the same classification.
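The count-based and last-N-segments criteria can be sketched as follows; the labels, counts, and thresholds here are illustrative assumptions.

```python
def classify_episode(segment_labels, target, min_count=3, last_n=2):
    """Hypothetical classification logic: assign `target` to the episode only
    if at least `min_count` segments carry that label AND the last `last_n`
    segments (the most recent data) also carry it."""
    enough = segment_labels.count(target) >= min_count
    tail = all(label == target for label in segment_labels[-last_n:])
    return target if (enough and tail) else "undetermined"

sustained = ["NSR", "MVT", "MVT", "MVT", "MVT"]
self_limited = ["MVT", "MVT", "MVT", "NSR", "NSR"]  # terminates before the end
```

The first example satisfies both the count and the last-N criteria; the second has enough MVT segments but fails the last-N check, so no MVT classification is made.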
[0173] In some examples, classification logic 498 additionally or alternatively determines the classification of the episode based on respective probabilities associated with the classifications of the segments, e.g., probabilities output by machine learning model(s) 496. In some examples, classification logic 498 compares the respective probabilities to one or more thresholds. In some examples, classification logic 498 compares a number or duration of segments having a common classification to a threshold as described above, but does not include segments for which the probability of the classification does not satisfy a threshold.
[0174] In some examples, classification logic 498 additionally or alternatively determines the classification of the episode based on a comparison of a combination, e.g., sum or average, of the probabilities associated with segments having the classification to a threshold. In some examples, the combination is weighted, with one or more segments being weighted differently than one or more other segments. In some examples, one or more segments later in the episode are weighted more heavily than one or more segments earlier in the episode.
[0175] FIGS. 14-17 are tables 500-800 illustrating example segment classifications, and associated episode classifications that may be determined by classification logic 498 based on the segment classifications. For example, as illustrated by table 500 of FIG. 14,
classification logic 498 may determine a classification PVT/VF or MVT in response to each of the four segments W5-W8 (at the end of the episode) being classified as PVT/VF or MVT. In response to the classification of PVT/VF or MVT, processing circuitry 130 may cause the delivery of therapy (e.g., by therapy delivery circuitry 57B of IMD 10B of FIG. 2B).
[0176] As illustrated by table 600 of FIG. 15, classification logic 498 may determine a classification of semi-sustained or non-sustained PVT/VF or MVT based on the number/location of segments classified as PVT/VF or MVT not satisfying a threshold or criterion. In response to such a classification, processing circuitry 130 may control communication circuitry 140 to communicate with IMD 10 to retrieve additional ECG data and/or other patient parameter data.
[0177] As illustrated by table 700 of FIG. 16, classification logic 498 may also determine a classification of semi-sustained or non-sustained PVT/VF or MVT based on a certain amount of, e.g., 2 of 4, segments being classified as PVT/VF or MVT, and in which N most recent segments did not have that classification. Where one or more of the N most recent segments did have that classification, classification logic 498 may determine a classification of PVT/VF or MVT, or non-sustained PVT/VF or MVT, based on the probabilities associated with the segments classified as PVT/VF or MVT, e.g., based on comparison of the probabilities to a threshold. Example probability criteria include: 2 of 4 segments having a classification with a probability being greater than 0.98; 3 of 4 segments having a classification with a probability greater than 0.9; and/or an average probability of a classification across segments greater than 0.5. As illustrated by table 800 of FIG. 17, classification logic 498 may also determine a classification of semi-sustained or non-sustained PVT/VF or MVT based on the presence of normal sinus rhythm (NSR) classifications for N latest segments of episode data.
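The example probability criteria listed above (2 of 4 segments above 0.98, 3 of 4 above 0.9, or an average above 0.5) can be sketched directly; the input here is assumed to be the per-segment probability of the target classification.

```python
def meets_probability_criteria(p):
    """p: per-segment probabilities of the target classification (e.g., PVT/VF).
    Returns True if any of the example criteria from the text is satisfied."""
    crit_a = sum(pr > 0.98 for pr in p) >= 2   # 2 of 4 segments above 0.98
    crit_b = sum(pr > 0.9 for pr in p) >= 3    # 3 of 4 segments above 0.9
    crit_c = sum(p) / len(p) > 0.5             # average across segments above 0.5
    return crit_a or crit_b or crit_c

hit = meets_probability_criteria([0.99, 0.99, 0.2, 0.1])   # satisfies criterion A
miss = meets_probability_criteria([0.4, 0.3, 0.3, 0.2])    # satisfies none
```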
[0178] In some examples, to determine the classification of the episode data, classification logic 498 may apply a second one or more machine learning models to the classifications and, in some examples, probabilities, determined for each segment by one or more machine learning models 496. In some examples, the second one or more machine learning models implemented by classification logic 498 may include one or more convolutional neural networks or recurrent neural networks, such as long short-term memory networks (LSTMs) that encode changes over time. Other examples of machine learning
methods to combine classifications from individual segments that may be implemented by classification logic 498 include state space machines, Bayesian belief networks or fuzzy logic, or other data fusion techniques. In some examples, classification logic 498 includes one or more machine learning models that receive as input features identified automatically by a deep learning model, e.g., a convolutional neural network, of one or more machine learning models 496 and/or output from non-machine learning rules 497 (FIG. 19). Non-machine learning rules 497 may provide outputs to classification logic 498 based on morphological features, such as morphological features determined using wavelets or cross-correlation, or RR interval features, such as metrics of regularity, irregularity or entropy, or presence of rate onset or irregularity onset.
[0179] FIG. 18 is a flow diagram illustrating an example operation of classifier 490 of FIG. 13. According to the example of FIG. 18, processing circuitry, e.g., processing circuitry 130 of computing device 12, receives episode data (also referred to as event data) from IMD 10 (900). IMD 10 may have transmitted the episode data to computing device 12 in response to detecting a tachyarrhythmia or other health event based on application of a first set of rules as described herein. Processing circuitry 130 may implement classifier 490, which may apply one or more machine learning models 496 to each segment of a plurality of segments 492 of the episode data received from IMD 10 (902). Based on the respective segment classifications, classification logic 498 may output a classification 494 of the episode (904).
[0180] FIG. 19 is a block diagram illustrating another example configuration of a classifier 1000 configured to classify episode data collected and transmitted by IMD 10 in response to detecting an acute health event, e.g., transmitted by the IMD based on application of a first set of rules, as described herein. Classifier 1000 may be configured similarly to classifier 490 of FIG. 13 except as noted herein. Classifier 1000 may be implemented by processing circuitry 130 of computing device 12, and/or processing circuitry of any one or more devices described herein.
[0181] As illustrated in FIG. 19, in addition to one or more machine learning models 496, classifier 1000 includes one or more non-machine learning rules 497. One or more non-machine learning rules 497 may include rules applied to morphological stability or variability of the electrocardiogram data, frequency content of the electrocardiogram data,
and/or heart rate stability or variability. One or more non-machine learning rules 497 may include template matching or RR interval modesum.
[0182] One or more non-machine learning rules 497 may output, for each of one or more segments 492, respective classifications, probabilities, decisions, parameter values, or other outputs 495 to classification logic 498. For example, one or more non-machine learning rules 497 may output, for each of one or more segments 492, a classification, binary decision (e.g., between classifications), or parameter value indicative of one or more classifications (e.g., of different types of tachyarrhythmia as described above). Based on outputs 499 and outputs 495 for segments 492, classification logic 498 determines a classification for the episode or, in some cases, whether to request additional data from IMD 10 for making the classification.
[0183] In examples in which outputs 499 comprise respective classifications for segments 492, classification logic 498 may require a threshold level of agreement, e.g., complete, majority, or other voting threshold, between the classifications of segments 492 in order to output the predominant classification as classification 494. In some examples, classification logic 498 determines classification 494 based on a weighted combination of outputs 499, e.g., in comparison to a threshold. Classification logic 498 may weight outputs 499 based on respective probabilities and/or the time sequence position of segments, e.g., with one or more segments 492 later in the episode data being weighted more than one or more segments 492 earlier in the episode.
[0184] Based on outputs 495 from non-machine learning rules 497 that contradict classification outputs 499 from machine learning models 496 for a segment 492, classification logic 498 may adopt the output 495 of non-machine learning rules 497, ignore the output 499 from machine learning models 496, or decrease a weight applied to the output 499 from machine learning models 496 for the segment 492. In some examples, classification logic 498 only considers outputs 495 (and/or classifier 1000 only applies non-machine learning rules 497) for a subset of segments 492 to which machine learning models 496 are applied, such as segments 492 for which a probability/confidence of a classification output 499 is less than (or equal to) a threshold, or for which classification output 499 is a predetermined classification. In the latter case, non-machine learning rules 497 may provide independent assessment of a key classification (e.g., VT vs. VF or VT vs. PVT discrimination). In general, classifier 1000 that applies both
machine learning models 496 and non-machine learning rules 497 to segments 492 of episode data as described herein may improve the accuracy of classification/detection of health events, such as tachyarrhythmias, particularly in situations where availability of training data may limit the accuracy of one or more machine learning models 496 in isolation.
[0185] Machine learning models have clear advantages but require significant quantities of representative signals for training to achieve accurate and robust results on independent data sets. There are important clinical/physiologic conditions that are less common (e.g., for the rhythm classification problem, ventricular tachycardia and ventricular fibrillation occur much less frequently than noise/oversensing and supraventricular rhythms), thus causing major challenges in training a purely machine learning approach to be accurate and robust to the “rare” events due to a lesser quantity of representative data.

[0186] FIG. 20 is a flow diagram illustrating an example operation of classifier 1000 of FIG. 19. According to the example of FIG. 20, processing circuitry, e.g., processing circuitry 130 of computing device 12, receives episode data (also referred to as event data) from IMD 10 (1100). IMD 10 may have transmitted the episode data to computing device 12 in response to detecting a tachyarrhythmia or other health event based on application of a first set of rules as described herein.
[0187] Processing circuitry 130 may implement classifier 1000, which may apply one or more machine learning models 496 to each segment of a plurality of segments 492 of the episode data received from IMD 10 (1102). Classifier 1000 may also apply one or more non-machine learning rules 497 to one or more segments of the plurality of segments 492 (1104). Based on resulting outputs 499 and 495 of one or more machine learning models 496 and one or more non-machine learning rules 497, classifier 1000 may output a classification 494 of the episode (1106).
[0188] In some examples, one or more non-machine learning rules 497 may be configured to discriminate MVT and PVT. In some examples, one or more non-machine learning rules 497 may include one or more rules applied to a metric of regularity /variability of heart rate (e.g., RR intervals). In some examples, one or more non-machine learning rules 497 may include one or more rules, e.g., thresholds, applied to a modesum of RR intervals. In some examples, one or more non-machine learning rules 497 may include a linear modesum threshold (LMS), which is a modesum threshold that
linearly decreases with increasing cycle length (RR interval length). An LMS may advantageously account for a phenomenon in which cycle length variability for faster MVTs is less than for slower MVTs. In some examples, a metric value to which classifier 1000 may apply one or more non-machine learning rules 497 includes a sum of standard deviations of cycle lengths.
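A linear modesum threshold can be sketched as a linear interpolation between two endpoints, paired with a simple modesum metric. The endpoint values, clamping range, bin width, and RR series below are illustrative assumptions, not values from the disclosure.

```python
from collections import Counter

def linear_modesum_threshold(cycle_length_ms, t_short=0.85, t_long=0.55,
                             cl_short=250.0, cl_long=400.0):
    """Hypothetical LMS: the required modesum decreases linearly from t_short
    at cl_short to t_long at cl_long (clamped outside that range), reflecting
    that faster MVTs show less cycle-length variability."""
    if cycle_length_ms <= cl_short:
        return t_short
    if cycle_length_ms >= cl_long:
        return t_long
    frac = (cycle_length_ms - cl_short) / (cl_long - cl_short)
    return t_short + frac * (t_long - t_short)

def modesum(rr_ms, bin_ms=10):
    """Fraction of RR intervals falling in the two most populated bins."""
    counts = Counter(int(rr // bin_ms) for rr in rr_ms)
    return sum(n for _, n in counts.most_common(2)) / len(rr_ms)

rr = [312, 318, 315, 311, 319, 314, 316, 313]   # a regular, MVT-like series (ms)
mean_cl = sum(rr) / len(rr)
regular_enough = modesum(rr) >= linear_modesum_threshold(mean_cl)
```

A modesum at or above the cycle-length-dependent threshold indicates the regularity expected of MVT.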
[0189] In general, the beat (e.g., R-wave) morphology of MVTs is more stable than that of PVTs over an episode. In some examples, one or more non-machine learning rules 497 may include one or more rules applied to a metric of stability/variability or instability of beat morphology. The metric may be a degree of similarity of morphology of different beats during the episode. Morphology of beats may be compared using any known techniques, e.g., cross-correlation, point-by-point differences, or comparison of wavelet decompositions. In some examples, selective wavelet coefficients may be compared. In some examples, morphology of beats may be compared by comparing features of beats, such as peak-to-peak amplitude, maximum amplitude, minimum amplitude, slope or slew rate, or relative timing or values of the maximum and minimum. In some examples, morphology of beats may be compared by comparing normalized energy distributions or imprints for the beats, e.g., comparing histograms for each beat with bins corresponding to different energy levels.
[0190] In some examples, one or more non-machine learning rules 497 may be configured to discriminate VF and rapidly conducting SVT, such as AF. Beat morphology of rapidly conducting SVTs generally is distinct from VF due to conduction of SVTs through the His-Purkinje system. In some examples, a weighted zero crossing sum (WZCS) technique uses baseline information and frequency content information for discrimination between VF and SVT. The WZCS technique may include determining zero crossings of a filtered ECG signal and weighting each zero crossing point by consecutive sample difference or slope at that point. The WZCS technique may include summing absolute values of the weighted zero-crossings within a window and comparing the sum to a sum for a baseline window. In some examples, a slope metric is a metric of comparison of slopes within a window for a beat to slopes within a baseline window and/or a previous beat window.
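The WZCS computation (zero crossings weighted by the local sample-to-sample slope) can be sketched on synthetic signals as follows. The waveforms are stand-ins, and normalizing the sum by the crossing count is an added assumption used here to make the per-crossing slope difference visible.

```python
import numpy as np

fs = 256                                  # Hz, assumed sampling rate
t = np.arange(0, 2, 1 / fs)

# Stand-in signals: a conducted (SVT-like) rhythm with sharp transitions
# through the baseline, and a smoother VF-like oscillation.
svt_like = np.tanh(10 * np.sin(2 * np.pi * 2 * t))
vf_like = np.sin(2 * np.pi * 5 * t)

def weighted_zero_crossing_stats(signal):
    s = np.asarray(signal, dtype=float)
    crossings = np.where(np.diff(np.signbit(s)))[0]  # sample before each sign change
    weights = np.abs(np.diff(s))[crossings]          # consecutive-sample slope there
    return float(weights.sum()), len(crossings)

sum_svt, n_svt = weighted_zero_crossing_stats(svt_like)
sum_vf, n_vf = weighted_zero_crossing_stats(vf_like)

# Normalizing by the crossing count (an added assumption, not from the text)
# isolates the per-crossing slope, which is larger for sharp conducted beats.
per_crossing_svt = sum_svt / n_svt
per_crossing_vf = sum_vf / n_vf
```

In a fuller implementation the windowed sum would also be compared against a sum computed over a baseline window, as the text describes.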
[0191] Metrics to which one or more non-machine learning rules 497 are applied may be designed such that the values show distinctly different distributions depending on the
tachyarrhythmia type. Based on the distribution of metric values, a threshold can be set to provide a desired sensitivity and specificity.
[0192] In some examples, instead of or in addition to ECG features, non-machine learning rules 497 may be applied to data from other sensors indicative of other physiological signals or parameters, e.g., respiration, perfusion, activity and/or posture, heart sounds, blood pressure, blood oxygen saturation signals, or other data orthogonal to ECG features but indicative of the presence of or classification of tachyarrhythmia. Based on such data, non-machine learning rules 497 may provide inputs to classification logic 498 indicating falls, respiration changes, lack of tissue perfusion, or lack of pulsatile flow, the presence of which may indicate that ventricular tachyarrhythmia, e.g., PVT or VF, is more likely.
[0193] FIG. 21 is a conceptual diagram illustrating an example machine learning model 1200 configured to determine an extent to which patient parameter data is indicative of an acute health event, such as a ventricular tachyarrhythmia or SCA. Machine learning model 1200 is an example of a set of rules implemented by any rules engine described herein, neural networks 404 and 438 described with respect to FIGS. 9 and 10, or machine learning model(s) 496 of FIGS. 13 and 19, any of which may be implemented by processing circuitry 130 and/or rules engine 172 of computing device 12 in wireless communication with IMD 10, as discussed above. Machine learning model 1200 is an example of a deep learning model, or deep learning algorithm, trained to determine whether a particular set of patient parameter data indicates the presence of an acute health event, e.g., whether a particular segment of ECG signal data indicates SCA, or a certain classification related to ventricular tachyarrhythmia, as described herein.
[0194] One or more of IMD 10, computing device 12, an IoT device 30, or a computing system 20 may train, store, and/or utilize machine learning model 1200, but other devices may apply inputs associated with a particular patient to machine learning model 1200 in other examples. As discussed above, other types of machine learning and deep learning models or algorithms may be utilized in other examples. For example, a ResNet-18 CNN model may be used. Some non-limiting examples of models that may be used for transfer learning include AlexNet, VGGNet, GoogLeNet, ResNet50, and DenseNet. Some non-limiting examples of machine learning techniques include Support Vector Machines, the K-Nearest Neighbor algorithm, and the Multi-layer Perceptron.
[0195] As shown in the example of FIG. 21, machine learning model 1200 may include three layers: input layer 1202, hidden layer 1204, and output layer 1206. Output layer 1206 comprises the output from transfer function 1205. Input layer 1202 represents each of the input values X1 through X4 provided to machine learning model 1200. The number of inputs may be equal to, less than, or greater than 4, including much greater than 4, e.g., hundreds or thousands. In some examples, the input values may be any of the values input into a machine learning model, as described above. In some examples, input values may include samples of an ECG signal. In addition, in some examples input values of machine learning model 1200 may include additional data, such as R-wave data, R-R interval data, or other data relating to one or more additional parameters of patient 4, as described herein.
[0196] Each of the input values for each node in input layer 1202 is provided to each node of hidden layers 1204. In the example of FIG. 21, hidden layers 1204 include two layers, one layer having four nodes and the other layer having three nodes, but a fewer or greater number of nodes may be used in other examples. Each input from input layer 1202 is multiplied by a weight and then summed at each node of hidden layers 1204. During training of machine learning model 1200, the weights for each input are adjusted to establish the relationship between the inputs, e.g., an input ECG segment, and the determination of whether a particular set of inputs represents an acute health event and/or a score indicative of whether a set of inputs may be representative of SCA, MVT, PVT, VF, or another acute health event. In some examples, one hidden layer may be incorporated into machine learning model 1200, or three or more hidden layers may be incorporated into machine learning model 1200, where each layer includes the same or a different number of nodes.
[0197] The result of each node within hidden layers 1204 is applied to the transfer function of output layer 1206. The transfer function may be linear or non-linear, depending on the number of layers within machine learning model 1200. Example non-linear transfer functions include a sigmoid function and a rectifier function. The output 1207 of the transfer function may be a classification that indicates whether the particular ECG segment or other input set represents an acute health event, e.g., ventricular tachyarrhythmia, and/or a score indicative of an extent to which the input data set represents an acute health event. In some examples, output 1207 may include respective probabilities for a plurality of classifications, e.g., as discussed herein with respect to FIGS. 9-11, 13, and 20.
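For illustration only, the weighted-sum-and-transfer computation described for FIG. 21 may be sketched as follows. The function names are assumptions, the weights are placeholders rather than trained values, and the sketch is not a definitive implementation of machine learning model 1200.

```python
import math

# Minimal sketch of a FIG. 21 style layer computation: each node forms a
# weighted sum of its inputs plus a bias and applies a sigmoid transfer
# function; a softmax converts output activations to class probabilities.

def dense(inputs, weights, biases):
    """One hidden layer: weighted sum per node, then sigmoid transfer."""
    out = []
    for w_row, b in zip(weights, biases):
        z = sum(x * w for x, w in zip(inputs, w_row)) + b
        out.append(1.0 / (1.0 + math.exp(-z)))
    return out

def softmax(logits):
    """Respective probabilities for a plurality of classifications."""
    exps = [math.exp(v) for v in logits]
    total = sum(exps)
    return [e / total for e in exps]
```

A multi-layer model in the style of FIG. 21 would chain `dense` calls (e.g., a four-node layer feeding a three-node layer) before the output transfer.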
[0198] By applying the ECG signal data and/or other patient parameter data to a machine learning model, such as machine learning model 1200, processing circuitry, such as processing circuitry 130 of computing device 12, is able to determine that a patient is experiencing or will soon experience an acute health event with great accuracy, specificity, and sensitivity. This may facilitate determinations of risk of sudden cardiac death and may lead to therapy delivery and other interventions as described herein. Machine learning model 1200 may correspond to any one or more of rules 84A or 84B, rules 196, and rules 250 described herein.
[0199] FIG. 22 is an example of a machine learning model 1200 being trained using supervised and/or reinforcement learning techniques. Machine learning model 1200 may be implemented using any number of models for supervised and/or reinforcement learning, such as, but not limited to, an artificial neural network, a decision tree, a naive Bayes network, a support vector machine, or a k-nearest neighbor model, to name only a few of the examples discussed above. In some examples, processing circuitry of one or more of IMD 10, computing device 12, an IoT device 30, and/or computing system(s) 20 (e.g., rules configuration components 174 and/or 234) initially trains the machine learning model 1200 based on training set data 1300 including numerous instances of input data corresponding to acute health events and non-acute health events, e.g., as labeled by an expert. A prediction or classification by the machine learning model 1200 may be compared 1304 to the target output 1303, e.g., as determined based on the label. Based on an error signal representing the comparison, the processing circuitry implementing a learning/training function 1305 may send or apply a modification to weights of machine learning model 1200 or otherwise modify/update the machine learning model 1200. For example, one or more of IMD 10, computing device 12, IoT device 30, and/or computing system(s) 20 may, for each training instance in the training set data 1300, modify machine learning model 1200 to change a score generated by the machine learning model 1200 in response to data applied to the machine learning model 1200.
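For illustration only, the FIG. 22 supervised-training loop (compare the prediction to the labeled target, then adjust weights against the error signal) might be sketched as below. A single linear model stands in for machine learning model 1200 here, and the learning rate, epoch count, and function names are assumptions; the disclosure itself does not specify them.

```python
# Hypothetical sketch of supervised training per FIG. 22: for each labeled
# training instance, compare the model's score to the target (comparison
# 1304) and modify the weights via the error signal (learning/training
# function 1305).

def train(samples, lr=0.1, epochs=200):
    """samples: list of (feature_vector, target), target in {0.0, 1.0}."""
    n = len(samples[0][0])
    weights = [0.0] * n
    bias = 0.0
    for _ in range(epochs):
        for x, target in samples:
            pred = sum(w * xi for w, xi in zip(weights, x)) + bias
            error = target - pred                 # the comparison 1304
            for i in range(n):                    # the training function 1305
                weights[i] += lr * error * x[i]
            bias += lr * error
    return weights, bias
```

A real model 1200 would be a deep network trained on expert-labeled episode data rather than this two-sample toy set.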
[0200] FIG. 23A is a perspective drawing illustrating an IMD 10A, which may be an example configuration of IMD 10 of FIGS. 1, 2A, and 2B as an ICM. In the example shown in FIG. 23A, IMD 10A may be embodied as a monitoring device having housing
1412, proximal electrode 1416A and distal electrode 1416B. Housing 1412 may further comprise first major surface 1414, second major surface 1418, proximal end 1420, and distal end 1422. Housing 1412 encloses electronic circuitry located inside the IMD 10A and protects the circuitry contained therein from body fluids. Housing 1412 may be hermetically sealed and configured for subcutaneous implantation. Electrical feedthroughs provide electrical connection of electrodes 1416A and 1416B.
[0201] In the example shown in FIG. 23A, IMD 10A is defined by a length L, a width W, and a thickness or depth D, and is in the form of an elongated rectangular prism wherein the length L is much larger than the width W, which in turn is larger than the depth D. In one example, the geometry of IMD 10A - in particular a width W greater than the depth D - is selected to allow IMD 10A to be inserted under the skin of the patient using a minimally invasive procedure and to remain in the desired orientation during insertion. For example, the device shown in FIG. 23A includes radial asymmetries (notably, the rectangular shape) along the longitudinal axis that maintain the device in the proper orientation following insertion. For example, the spacing between proximal electrode 1416A and distal electrode 1416B may range from 5 millimeters (mm) to 55 mm, 30 mm to 55 mm, 35 mm to 55 mm, and from 40 mm to 55 mm, and may be any range or individual spacing from 5 mm to 60 mm. In addition, IMD 10A may have a length L that ranges from 30 mm to about 70 mm. In other examples, the length L may range from 5 mm to 60 mm, 40 mm to 60 mm, 45 mm to 60 mm, and may be any length or range of lengths between about 30 mm and about 70 mm. In addition, the width W of major surface 1414 may range from 3 mm to 15 mm, from 3 mm to 10 mm, or from 5 mm to 15 mm, and may be any single or range of widths between 3 mm and 15 mm. The thickness or depth D of IMD 10A may range from 2 mm to 15 mm, from 2 mm to 9 mm, from 2 mm to 5 mm, from 5 mm to 15 mm, and may be any single or range of depths between 2 mm and 15 mm. In addition, IMD 10A according to an example of the present disclosure has a geometry and size designed for ease of implant and patient comfort. Examples of IMD 10A described in this disclosure may have a volume of three cubic centimeters (cm³) or less, 1.5 cm³ or less, or any volume between 1.5 and three cm³.
[0202] In the example shown in FIG. 23A, once inserted within the patient, the first major surface 1414 faces outward, toward the skin of the patient, while the second major surface 1418 is located opposite the first major surface 1414. In addition, in the example shown in FIG. 23A, proximal end 1420 and distal end 1422 are rounded to reduce discomfort and irritation to surrounding tissue once inserted under the skin of the patient. IMD 10A, including an instrument and method for inserting IMD 10A, is described, for example, in U.S. Patent Publication No. 2014/0276928, incorporated herein by reference in its entirety.
[0203] Proximal electrode 1416A is at or proximate to proximal end 1420, and distal electrode 1416B is at or proximate to distal end 1422. Proximal electrode 1416A and distal electrode 1416B are used to sense cardiac EGM signals, e.g., ECG signals, thoracically outside the ribcage, which may be sub-muscularly or subcutaneously. Cardiac signals may be stored in a memory of IMD 10A, and data may be transmitted via integrated antenna 1430A to another device, which may be another implantable device or an external device. In some examples, electrodes 1416A and 1416B may additionally or alternatively be used for sensing any bio-potential signal of interest, which may be, for example, an EGM, EEG, EMG, or a nerve signal, or for measuring impedance, from any implanted location.
[0204] In the example shown in FIG. 23A, proximal electrode 1416A is at or in close proximity to the proximal end 1420 and distal electrode 1416B is at or in close proximity to distal end 1422. In this example, distal electrode 1416B is not limited to a flattened, outward facing surface, but may extend from first major surface 1414 around rounded edges 1424 and/or end surface 1426 and onto the second major surface 1418 so that the electrode 1416B has a three-dimensional curved configuration. In some examples, electrode 1416B is an uninsulated portion of a metallic, e.g., titanium, part of housing 1412.
[0205] In the example shown in FIG. 23A, proximal electrode 1416A is located on first major surface 1414 and is substantially flat, and outward facing. However, in other examples proximal electrode 1416A may utilize the three-dimensional curved configuration of distal electrode 1416B, providing a three-dimensional proximal electrode (not shown in this example). Similarly, in other examples distal electrode 1416B may utilize a substantially flat, outward facing electrode located on first major surface 1414 similar to that shown with respect to proximal electrode 1416A.
[0206] The various electrode configurations allow for configurations in which proximal electrode 1416A and distal electrode 1416B are located on both first major
surface 1414 and second major surface 1418. In other configurations, such as that shown in FIG. 23A, only one of proximal electrode 1416A and distal electrode 1416B is located on both major surfaces 1414 and 1418, and in still other configurations both proximal electrode 1416A and distal electrode 1416B are located on one of the first major surface 1414 or the second major surface 1418 (e.g., proximal electrode 1416A located on first major surface 1414 while distal electrode 1416B is located on second major surface 1418). In another example, IMD 10A may include electrodes on both major surfaces 1414 and 1418 at or near the proximal and distal ends of the device, such that a total of four electrodes are included on IMD 10A. Electrodes 1416A and 1416B may be formed of a plurality of different types of biocompatible conductive material, e.g., stainless steel, titanium, platinum, iridium, or alloys thereof, and may utilize one or more coatings such as titanium nitride or fractal titanium nitride.
[0207] In the example shown in FIG. 23A, proximal end 1420 includes a header assembly 1428 that includes one or more of proximal electrode 1416A, integrated antenna 1430A, anti-migration projections 1432, and/or suture hole 1434. Integrated antenna 1430A is located on the same major surface (i.e., first major surface 1414) as proximal electrode 1416A and is also included as part of header assembly 1428. Integrated antenna 1430A allows IMD 10A to transmit and/or receive data. In other examples, integrated antenna 1430A may be formed on the opposite major surface as proximal electrode 1416A or may be incorporated within the housing 1412 of IMD 10A. In the example shown in FIG. 23A, anti-migration projections 1432 are located adjacent to integrated antenna 1430A and protrude away from first major surface 1414 to prevent longitudinal movement of the device. In the example shown in FIG. 23A, anti-migration projections 1432 include a plurality (e.g., nine) of small bumps or protrusions extending away from first major surface 1414. As discussed above, in other examples anti-migration projections 1432 may be located on the opposite major surface as proximal electrode 1416A and/or integrated antenna 1430A. In addition, in the example shown in FIG. 23A, header assembly 1428 includes suture hole 1434, which provides another means of securing IMD 10A to the patient to prevent movement following insertion. In the example shown, suture hole 1434 is located adjacent to proximal electrode 1416A. In one example, header assembly 1428 is a molded header assembly made from a polymeric or plastic material, which may be integrated with or separable from the main portion of IMD 10A.
[0208] FIG. 23B is a perspective drawing illustrating another IMD 10B, which may be another example configuration of IMD 10 from FIGS. 1, 2A, and 2B as an ICM. IMD 10B of FIG. 23B may be configured substantially similarly to IMD 10A of FIG. 23A, with differences between them discussed herein.
[0209] IMD 10B may include a leadless, subcutaneously-implantable monitoring device, e.g., an ICM. IMD 10B includes housing having a base 1440 and an insulative cover 1442. Proximal electrode 1416C and distal electrode 1416D may be formed or placed on an outer surface of cover 1442. Various circuitries and components of IMD 10B, e.g., described above with respect to FIG. 2A or FIG. 2B, may be formed or placed on an inner surface of cover 1442, or within base 1440. In some examples, a battery or other power source of IMD 10B may be included within base 1440. In the illustrated example, antenna 1430B is formed or placed on the outer surface of cover 1442, but may be formed or placed on the inner surface in some examples. In some examples, insulative cover 1442 may be positioned over an open base 1440 such that base 1440 and cover 1442 enclose the circuitries and other components and protect them from fluids such as body fluids. The housing including base 1440 and insulative cover 1442 may be hermetically sealed and configured for subcutaneous implantation.
[0210] Circuitries and components may be formed on the inner side of insulative cover 1442, such as by using flip-chip technology. Insulative cover 1442 may be flipped onto a base 1440. When flipped and placed onto base 1440, the components of IMD 10B formed on the inner side of insulative cover 1442 may be positioned in a gap 1444 defined by base 1440. Electrodes 1416C and 1416D and antenna 1430B may be electrically connected to circuitry formed on the inner side of insulative cover 1442 through one or more vias (not shown) formed through insulative cover 1442. Insulative cover 1442 may be formed of sapphire (i.e., corundum), glass, parylene, and/or any other suitable insulating material. Base 1440 may be formed from titanium or any other suitable material (e.g., a biocompatible material). Electrodes 1416C and 1416D may be formed from any of stainless steel, titanium, platinum, iridium, or alloys thereof. In addition, electrodes 1416C and 1416D may be coated with a material such as titanium nitride or fractal titanium nitride, although other suitable materials and coatings for such electrodes may be used.
[0211] In the example shown in FIG. 23B, the housing of IMD 10B defines a length L, a width W, and a thickness or depth D, and is in the form of an elongated rectangular prism wherein the length L is much larger than the width W, which in turn is larger than the depth D, similar to IMD 10A of FIG. 23A. For example, the spacing between proximal electrode 1416C and distal electrode 1416D may range from 5 mm to 50 mm, from 30 mm to 50 mm, from 35 mm to 45 mm, and may be any single spacing or range of spacings from 5 mm to 50 mm, such as approximately 40 mm. In addition, IMD 10B may have a length L that ranges from 5 mm to about 70 mm. In other examples, the length L may range from 30 mm to 70 mm, 40 mm to 60 mm, 45 mm to 55 mm, and may be any single length or range of lengths from 5 mm to 50 mm, such as approximately 45 mm. In addition, the width W may range from 3 mm to 15 mm, 5 mm to 15 mm, 5 mm to 10 mm, and may be any single width or range of widths from 3 mm to 15 mm, such as approximately 8 mm. The thickness or depth D of IMD 10B may range from 2 mm to 15 mm, from 5 mm to 15 mm, or from 3 mm to 5 mm, and may be any single depth or range of depths between 2 mm and 15 mm, such as approximately 4 mm. IMD 10B may have a volume of three cubic centimeters (cm³) or less, or 1.5 cm³ or less, such as approximately 1.4 cm³.
[0212] In the example shown in FIG. 23B, once inserted subcutaneously within the patient, outer surface of cover 1442 faces outward, toward the skin of the patient. In addition, as shown in FIG. 23B, proximal end 1446 and distal end 1448 are rounded to reduce discomfort and irritation to surrounding tissue once inserted under the skin of the patient. In addition, edges of IMD 10B may be rounded.
[0213] FIG. 23C illustrates an example environment of an example medical system comprising an implantable cardiac defibrillator device in conjunction with a patient, in accordance with one or more examples of the present disclosure. IMD 10 may be included in system 2 and configured to communicate with devices such as computing devices 12 shown in and described with respect to FIG. 1.
[0214] As described above, medical device system 2 may detect an acute health event based on episode data collected by IMD 10C. For example, computing system 20 may receive data from IMD 10C, and perform classification to classify the acute health event. Responsive to classification, IMD 10C may be configured to deliver a therapy configured to address the acute health event detected by IMD 10C, computing device 38, or
computing system 20. In the illustrated example, IMD 10C is coupled to a ventricular lead 1520 and an atrial lead 1521. IMD 10C may be an ICD capable of delivering pacing, cardioversion and defibrillation therapy to the heart 1516A of a patient. In the example of FIG. 23C, ventricular lead 1520 and atrial lead 1521 are electrically coupled to IMD 10C and extend into the patient's heart 1516A. Ventricular lead 1520 includes electrodes 1522 and 1524 shown positioned on the lead in the patient's right ventricle (RV) for sensing ventricular EGM signals and pacing in the RV. Atrial lead 1521 includes electrodes 1526 and 1528 positioned on the lead in the patient's right atrium (RA) for sensing atrial EGM signals and pacing in the RA.
[0215] Ventricular lead 1520 additionally carries a high voltage coil electrode 1542, and atrial lead 1521 carries a high voltage coil electrode 1544, used to deliver cardioversion and defibrillation shocks. The term “anti-tachyarrhythmia shock” may be used herein to refer to both cardioversion shocks and defibrillation shocks. In other examples, ventricular lead 1520 may carry both of high voltage coil electrodes 1542 and 1544, or may carry a high voltage coil electrode in addition to those illustrated in the example of FIG. 23C.
[0216] IMD 10C may use both ventricular lead 1520 and atrial lead 1521 to acquire cardiac electrogram (EGM) signals from the patient and to deliver therapy in response to the acquired data. Medical system 2 is shown as having a dual chamber ICD configuration, but other examples may include one or more additional leads, such as a coronary sinus lead extending into the right atrium, through the coronary sinus and into a cardiac vein to position electrodes along the left ventricle (LV) for sensing LV EGM signals and delivering pacing pulses to the LV. In other examples, a medical device system may be a single chamber system, or otherwise not include atrial lead 1521.
[0217] IMD 10C may include processing circuitry, sensing circuitry, and other circuitry that are configured for performing the techniques described herein. IMD 10C may include therapy delivery circuitry configured to deliver antitachyarrhythmia pacing (ATP), other types of cardiac pacing, and antitachyarrhythmia shocks via the leads and electrodes described above. Sealed housing 12C may house such circuitries. Housing 12C (or a portion thereof) may be conductive so as to serve as an electrode for pacing or sensing or as an active electrode during shock delivery. As such, housing 12C is also referred to herein as “housing electrode” 12C.
[0218] In some examples, IMD 10C may transmit patient physiological data and/or cardiac rhythm episode data acquired by IMD 10C, as well as data regarding delivery of therapy by IMD 10C, to computing device 12 and/or computing system 20 (e.g., implementing HMS 22), which may perform any of the techniques described herein, e.g., for classifying episode data and confirming acute health events.
[0219] As discussed above, processing circuitry, e.g., processing circuitry 130 of computing device 12, may apply one or more machine learning models to episode data to determine a classification of a plurality of predetermined classifications for the episode data. For instance, the processing circuitry may evaluate one or more segments, such as segments 2492 of FIG. 24, to classify arrhythmia episodes. As described above, segments of data may include a segment from a period of time at the onset of arrhythmia, another segment when the episode reaches sustained detection, and multiple ongoing segments thereafter. The segments may be contiguous, separated by time, and/or overlapping. FIG. 24 is a conceptual flow diagram illustrating an example operation of one or more machine learning models, such as machine learning models 2496A-2496B (collectively, “machine learning models 2496”), which may be configured to output probabilities for the various arrhythmia classifications (e.g., VT vs. VF or VT vs. PVT discrimination). As shown in FIG. 24, machine learning models 2496 may discriminate between tachycardia (TACH), oversensing due to noise (NOS), and oversensing due to cardiac reasons (COS). For example, machine learning models 2496 may determine a probability of TACH, probability of NOS, and probability of COS.
Machine learning models may assess which rhythm classification of segments 2492 is the most likely to be accurate (represented by decision block 2498 in FIG. 24). For example, if machine learning models 2496A determine the probability of TACH is the maximum of the three probabilities, then the processing circuitry may apply machine learning models 2496B to discriminate between polymorphic ventricular tachycardia or ventricular fibrillation (PVT/VF), monomorphic ventricular tachycardia (MVT), and supraventricular tachycardia (SVT). Machine learning models 2496B may then determine the probability of PVT/VF, the probability of MVT, and the probability of SVT based on segments 2492. Alternatively, if the probability of TACH is the maximum of the three probabilities, machine learning models 2496A may determine that the probability of PVT/VF, the probability of MVT, and the probability of SVT are all equal to the probability of TACH divided by three. Therefore, for each of segments 2492, machine learning models 2496 may output the probability of PVT/VF, the probability of MVT, the probability of SVT, the probability of NOS, and the probability of COS.
[0221] Note that the structure of FIG. 24 is only one example implementation of operating one or more machine learning models. In one example, a classification implementation may include one AI/ML stage with five classifications (e.g., PVT/VF, MVT, SVT, COS, and NOS). In another example, a classification implementation may include three AI/ML stages, where the first stage is the example shown in FIG. 24, a second stage classifies between PVT/MVT vs. SVT, and then, if PVT/MVT, a third stage further classifies into PVT or MVT.
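For illustration only, the two-stage flow of FIG. 24 might be sketched as below. The dictionary keys, the `use_stage2` switch (distinguishing the stage-2 discrimination from the divide-by-three alternative described above), and the stand-in callables are assumptions; in the disclosed system the probabilities would come from trained models 2496A and 2496B.

```python
# Hypothetical sketch of the FIG. 24 two-stage classification.
# stage1: dict of first-stage probabilities {'TACH', 'NOS', 'COS'}.
# stage2_fn: callable standing in for models 2496B, returning
#            {'PVT/VF': p, 'MVT': p, 'SVT': p}.

def classify_segment(stage1, stage2_fn, use_stage2=True):
    probs = {'NOS': stage1['NOS'], 'COS': stage1['COS']}
    if use_stage2 and stage1['TACH'] == max(stage1.values()):
        # TACH most likely: run the second-stage discriminator (2496B).
        probs.update(stage2_fn())
    else:
        # Alternative implementation: split the TACH probability evenly
        # across the three tachycardia classes without running stage 2.
        for k in ('PVT/VF', 'MVT', 'SVT'):
            probs[k] = stage1['TACH'] / 3.0
    return probs
```

Either path yields the five per-segment probabilities (PVT/VF, MVT, SVT, NOS, COS) that downstream classification logic consumes.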
[0222] By determining the probabilities of PVT/VF, MVT, and so on for each segment, the processing circuitry may determine a health state of the patient (e.g., a health risk severity). Depending on that determination (e.g., whether the patient is experiencing an emergency, meaning a high health risk severity, or not, meaning a low health risk severity), the processing circuitry may deliver a therapy (e.g., via leads of IMD 10C), terminate the evaluation, and/or continue the evaluation (e.g., collect more data to reduce uncertainty, which may reduce the frequency of inappropriate therapy delivery). For example, based on the classification of the acute health event, IMD 10 may deliver an electrical impulse to the heart muscles to treat the detected arrhythmia.
[0223] For example, if IMD 10 detects a Fast VT episode that has a duration exceeding a threshold (e.g., 30 seconds), processing circuitry may analyze ECG segments 2492 from the relevant time period (e.g., segments that occurred before, during, and after the episode). The processing circuitry may implement any combination of the techniques disclosed herein to determine whether to deliver therapy to the patient for PVT/VF/MVT or to terminate the evaluation. In some examples, the processing circuitry may apply classification logic 2598A-2598N shown in FIG. 25 (collectively, “classification logic 2598”) to the probabilities determined by machine learning models 2496 to determine whether to deliver therapy to the patient for PVT/VF/MVT or to terminate the evaluation. Only 2598A-2598C are shown in FIG. 25 for ease of illustration. If the processing circuitry determines that there is some degree of uncertainty, but there is some evidence (e.g., a non-negligible probability) of PVT/VF/MVT, then the processing circuitry may
evaluate one or more additional segments, such as segments occurring later in time (e.g., 15 seconds of later-occurring data).
[0224] The processing circuitry may re-evaluate the additional segments using the same methodology and/or with altered models/logic (e.g., to improve sensitivity to PVT/VF/MVT). For example, if the processing circuitry determines there is some evidence of SVT and no evidence of PVT/VF/MVT, the processing circuitry may evaluate an appropriate number of additional segments (e.g., 60 seconds worth of segments) to keep monitoring the rapid SVT in case the SVT transitions to MVT/PVT/VF or the processing circuitry misclassified a PVT/VF/MVT as SVT.
[0225] In other words, based on the probability of the first respective classification determined by a first one or more machine learning models (e.g., machine learning models 2596A of FIG. 25), the processing circuitry may be configured to determine whether to retrieve first additional data from the sensor device (e.g., IMD 10) and apply a second one or more machine learning models (e.g., machine learning models 2596B) to the first additional data. The additional data may start any amount of time (e.g., 15 seconds, 30 seconds, 60 seconds, etc.) after the initial classification or previous decision by the processing circuitry. Also, the additional data may have any possible duration of time, such as 15 seconds, 30 seconds, 60 seconds, etc., and the additional data may be segmented (e.g., 15 seconds of data may consist of two 8-second segments of data). In other examples, the episode data and additional data may include two segments from the time of detection and two segments which are the most recent in time. Furthermore, the processing circuitry may iteratively retrieve additional data (and apply one or more machine learning models to the additional data) if necessary in order to classify the episode data with sufficient certainty.
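For illustration only, the iterative retrieve-and-reclassify behavior described above might be sketched as follows. `fetch_fn`, `model_fn`, the certainty threshold, and the round limit are hypothetical stand-ins for the IMD data retrieval and machine learning models of this disclosure.

```python
# Hypothetical sketch: keep retrieving additional segments and
# re-applying the models until the classification is sufficiently
# certain, or a round limit is reached.

def classify_with_retries(fetch_fn, model_fn, threshold=0.9, max_rounds=3):
    """fetch_fn() returns newly retrieved segments; model_fn(segments)
    returns (label, probability) over all segments seen so far."""
    segments = []
    label, prob = None, 0.0
    for _ in range(max_rounds):
        segments.extend(fetch_fn())       # e.g., 15 more seconds of data
        label, prob = model_fn(segments)
        if prob >= threshold:
            return label, prob            # certain enough: stop retrieving
    return label, prob                    # best available classification
```

In the disclosed system the later rounds could also swap in altered models or logic (e.g., tuned for higher PVT/VF/MVT sensitivity) rather than reusing `model_fn` unchanged.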
[0226] In examples in which the episode data are segmented, the processing circuitry may apply the one or more machine learning models and/or classification logic to each of segments 2492 of the episode data to determine the classification for the episode data. The processing circuitry may then determine the classification of the acute health event based on the respective classifications of segments 2492. In some examples, the classification logic (e.g., classification logic 498 shown in FIG. 19) may be a neural network or any other advanced machine learning algorithm and receive as input the probabilities of segments 2492. In some examples, the one or more machine learning models may each evaluate a
segment, and the classification logic may receive the output from the one or more machine learning models to then generate a final classification.
[0227] The processing circuitry may determine various properties of each of segments 2492 during evaluation. The properties may include the probability of MP (e.g., the probability of MVT or PVT/VF), the probability of NC (probability of NOS or COS), the number of segments where the probability of RHY is > 0.98 (where RHY is any cardiac rhythm classification for which model(s) 2496 are configured, such as any of MP, SVT, and NC), the number of segments where the probability of RHY is the maximum, the average probability of RHY, the maximum probability of RHY, etc.
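For illustration only, the per-rhythm segment properties listed above might be computed as follows. The function name, dictionary keys, and example structure are assumptions; `segs` stands in for per-segment probability outputs of models 2496.

```python
# Hypothetical sketch of the segment properties described above, computed
# for one rhythm classification RHY (e.g., 'MP', 'SVT', or 'NC') over a
# list of per-segment probability dicts.

def rhythm_properties(segs, rhy):
    probs = [s[rhy] for s in segs]
    return {
        'count_high': sum(p > 0.98 for p in probs),   # segments with P(RHY) > 0.98
        'count_max': sum(s[rhy] == max(s.values())    # segments where RHY is the
                         for s in segs),              # maximum-probability class
        'avg': sum(probs) / len(probs),               # average probability of RHY
        'max': max(probs),                            # maximum probability of RHY
    }
```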
[0228] The processing circuitry may implement classification logic (e.g., classification logic 498 shown in FIG. 19, classification logic 2598 shown in FIG. 25, or any other classification logic described herein) to output the overall classification for the episode. For example, the processing circuitry may determine the overall rhythm classification for 4 combined segments by applying the following logic:
1) If Probavg{NC} > Probavg{MP} AND Probavg{NC} > Probavg{SVT} then
a) If Probavg{NOS} > Probavg{COS} then Rhythm = NOS else Rhythm = COS
2) Else If Probavg{SVT} > Probavg{NC} AND Probavg{SVT} > Probavg{MP} then
a) Rhythm = SVT
3) Else If Probavg{MP} > Probavg{NC} AND Probavg{MP} > Probavg{SVT} then
a) If Probavg{PVT} > Probavg{MVT} then Rhythm = PVT else Rhythm = MVT
[0229] The purpose of the above logic is to determine the value of the variable named “Rhythm.” To do so, the processing circuitry may check a first condition of whether the average probability (Probavg) of NC is greater than or equal to the average probability of MP and SVT. The average probability may refer to the median of the segments, the mean, or any other statistical measure of the set of segments. If the processing circuitry determines that the first condition is true, then the processing circuitry may check a sub-condition of whether the average probability of NOS is greater than or equal to the average probability of COS. If the average probability of NOS is greater than or equal to the average probability of COS, the processing circuitry may classify the Rhythm as NOS. However, if the processing circuitry determines that the average probability of COS is greater than or equal to the average probability of NOS, the processing circuitry may classify the Rhythm as COS.
[0230] If the processing circuitry determines that the first condition is false, then the processing circuitry may check a second condition of whether the average probability of SVT is greater than or equal to the average probability of both NC and MP. If the second condition is true, then the processing circuitry may classify the Rhythm as SVT. If the processing circuitry determines that the second condition is also false, the processing circuitry may check a third condition of whether the average probability of MP is greater than or equal to the average probability of both NC and SVT. If this is true, then the processing circuitry may check a sub-condition of whether the average probability of PVT is greater than or equal to the average probability of MVT. If the average probability of PVT is greater than or equal to the average probability of MVT, then the processing circuitry may classify the Rhythm as PVT. If the average probability of MVT is greater than or equal to the average probability of PVT, then the processing circuitry may classify the Rhythm as MVT.
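The walk-through above can be sketched in code as follows. This is an illustrative sketch only: the dictionary keys are assumptions, and the greater-than-or-equal comparisons follow the prose description of the logic (the numbered pseudocode itself uses strict comparisons).

```python
# Hypothetical sketch of the overall rhythm classification logic.
# `avg` maps each class to its average probability across the combined
# segments, e.g. {'NC', 'MP', 'SVT', 'NOS', 'COS', 'PVT', 'MVT'}.

def classify_rhythm(avg):
    # First condition: NC (noise or cardiac oversensing) most likely.
    if avg['NC'] >= avg['MP'] and avg['NC'] >= avg['SVT']:
        return 'NOS' if avg['NOS'] >= avg['COS'] else 'COS'
    # Second condition: SVT most likely.
    if avg['SVT'] >= avg['NC'] and avg['SVT'] >= avg['MP']:
        return 'SVT'
    # Third condition: MP (MVT or PVT/VF) most likely.
    return 'PVT' if avg['PVT'] >= avg['MVT'] else 'MVT'
```

As noted below, the conditions are generally mutually exclusive, so the order of the checks is not essential to the result.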
[0231] In other examples, additional constraints may be used. For example, if the probability of PVT (Pr(PVT)) is greater than the probability of MVT minus 0.2 (Pr(MVT) - 0.2), then the process circuitry may classify the Rhythm as PVT.
[0232] Although described above as sequential, the logic may be applied in any order because the conditions are generally mutually exclusive. Thus, the processing circuitry may check the conditions in any order, and the order described here is merely an example for purposes of illustration.
[0233] FIG. 25 is a conceptual flow diagram illustrating example operation of machine learning models and classification logic by a system in accordance with techniques of this disclosure. As shown in FIG. 25, processing circuitry may implement one or more machine learning models, such as machine learning models 2596A-2596C (collectively, “machine learning models 2596”), and different configurations of classification logic, such as classification logic 2598A-2598C (collectively, “classification logic 2598”) to perform the classification. The processing circuitry may initially evaluate (e.g., using machine learning models and/or non-machine learning rules) one or more segments of data, such as segments 2592A-2592C (collectively, “segments 2592”), and apply classification logic 2598A-2598C to classify the rhythm (e.g., as NOS, COS, SVT, MVT, PVT/VF, etc.) as well as determine an action to perform. For example, machine learning models 2596A and classification logic 2598A may determine an action of actions 2599A to perform;
machine learning models 2596B and classification logic 2598B may determine an action of actions 2599B to perform; and machine learning models 2596C and classification logic 2598C may determine an action of actions 2599C to perform.
[0234] Any two or more of machine learning models 2596A-2596C may be the same or substantially similar to each other (e.g., first machine learning models 2596A may be the same as second machine learning models 2596B and/or third machine learning models 2596C). Any two or more of classification logic 2598A-2598C may be the same or substantially similar to each other (e.g., first classification logic 2598A may be the same as second classification logic 2598B and/or third classification logic 2598C).
[0235] In general, if the processing circuitry determines that there is sufficient evidence to determine whether the patient requires urgent medical treatment, the processing circuitry may either deliver therapy (e.g., because the patient is experiencing an emergency) or terminate the evaluation (e.g., because the patient is not experiencing an emergency). However, if the evidence is unclear, the processing circuitry may continue collecting data by retrieving first additional data (e.g., segments 2592B), second additional data (e.g., segments 2592C), and so on.
[0236] As shown in FIG. 25, the processing circuitry may request and evaluate additional data (e.g., 15 seconds of data, 30 seconds of data, 60 seconds of data, etc.) after any period of time following a prior decision by the one or more machine learning models and/or classification logic, such as 15 seconds, 30 seconds, 60 seconds, etc. For example, based on classification logic 2598A and a classification determination by machine learning models 2596A, the processing circuitry may determine whether to retrieve first additional data (e.g., 30 seconds of additional data beginning 15 seconds after the previous decision), which may include segments 2592B, from IMD 10, or retrieve second additional data (e.g., 30 seconds of data beginning 60 seconds after the previous decision), which may include segments 2592C, from IMD 10. If the processing circuitry determines to retrieve the first additional data, the processing circuitry may apply machine learning models 2596B and second classification logic 2598B. If the processing circuitry determines to retrieve the second additional data, the processing circuitry may apply machine learning models 2596C and third classification logic 2598C.
[0237] If there is insufficient evidence to accurately classify the segments, the processing circuitry may iteratively retrieve and evaluate additional data and reassess
whether to deliver therapy or terminate the evaluation in view of the additional data. The processing circuitry may repeat the process of retrieving and evaluating additional data (e.g., using the same or different logic, such as more sensitive logic as the number of data requests increases) until either the processing circuitry successfully classifies the acute health event with a high degree of certainty or the evaluation times out (e.g., the duration of the evaluation reaches a time limit).
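The iterative retrieve-and-evaluate loop described above might be sketched as follows. The function names, the decision labels, and the use of a wall-clock deadline are illustrative assumptions:

```python
import time

def evaluate_episode(get_segments, classify, time_limit_s=300):
    """Repeatedly evaluate new episode data until a confident decision or timeout.

    `get_segments()` fetches the next batch of segment data from the device;
    `classify(segments)` applies the machine learning models and classification
    logic and returns "therapy", "terminate", or "more_data".
    """
    deadline = time.monotonic() + time_limit_s
    while time.monotonic() < deadline:
        decision = classify(get_segments())
        if decision in ("therapy", "terminate"):
            return decision
        # "more_data": keep collecting, possibly with more sensitive logic.
    # Timed out without a confident classification: do not deliver therapy.
    return "terminate"
```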
[0238] In some examples, the resolution or sensitivity of the additional data may depend on the data request sent by the processing circuitry. For example, the processing circuitry may receive higher resolution data in response to a 15s data request and lower resolution data in response to a 60s data request. The processing circuitry may consider a variety of factors when determining whether to send a shorter-duration data request or a longer-duration data request, such as the need for higher resolution (e.g., if the initial data were too noisy), the urgency of the patient’s condition, and so on. For example, even if there is not sufficient evidence for the classification to be high certainty, if the processing circuitry determines that the most likely classifications are any of PVT/VF/MVT, the processing circuitry may send a data request 15s later in order to evaluate higher resolution data as soon as possible due to time being of the essence.
[0239] Thus, in general, the processing circuitry may determine whether to retrieve first additional data or second additional data. The first additional data may derive from a first time period (e.g., a time period that begins 15 seconds after an earlier classification or decision), and the second additional data may derive from a second time period (e.g., a time period that begins 60 seconds after an earlier classification or decision). The properties of the data (e.g., resolution) may differ depending on the outputs of machine learning models 2596 and/or classification logic 2598. For example, the outputs may indicate that the first additional data or the second additional data is necessary to determine the classification of the acute health event. A subsequent output of machine learning models 2596 and/or classification logic 2598 may result in a different indication (e.g., a different data request). This process may reiterate until a therapy or terminate decision is made or until the process times out.
[0240] In some examples, the processing circuitry may generate an action command to configure the classification logic 2598 based on a count{MP}max value (where the count{MP}max value is equal to the total number of segments for which MP, i.e., Prob(PVT)+Prob(MVT) for the segment, is the highest probability classification). For example, if the count{MP}max value is X (where X is equal to, e.g., 0, 1, 2, 3, 4, etc.), the processing circuitry may apply LOGIC(X) (e.g., the logic of the associated row in the tables of FIG. 26 and FIG. 27) to select among delivering therapy, collecting additional data, or terminating an episode. Table 1 in FIG. 26 shows example LOGIC(X) as a function of X for the initial evaluation and subsequent evaluation following a “Data Request - 60s” for SVT monitoring. As used herein, data request - 60s refers to a request for additional data (e.g., 30 seconds of data) 60 seconds after an earlier classification or decision. Data request - 60s is an example action from the set of actions 2599 that the processing circuitry may perform in response to applying machine learning models 2596 and classification logic 2598.
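The count{MP}max computation and the keyed lookup into a LOGIC(X) table might look like the following sketch; the table contents are hypothetical placeholders for the rows of FIGS. 26 and 27, and the probability keys are assumptions:

```python
def count_mp_max(segment_probs):
    """Count segments where MP = Prob(PVT) + Prob(MVT) is the highest class."""
    count = 0
    for probs in segment_probs:
        mp = probs["PVT"] + probs["MVT"]
        if mp >= max(probs["NC"], probs["SVT"]):
            count += 1
    return count

def select_logic(segment_probs, logic_table):
    """Select the LOGIC(X) row keyed by the count{MP}max value X."""
    return logic_table[count_mp_max(segment_probs)]
```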
[0241] The criteria included in Table 1 of FIG. 26 for application by classification logic 2598 may include IF, THEN, AND, OR logic statements that include various variables, thresholds, and comparison statements (e.g., less than, greater than, equal to). The variables may include counts of segments where NC is the max probability, counts of segments where the probability of NC is above a threshold, the probability of NC in the segment X, the average probability of NC over all segments, the maximum probability of NC over all segments, counts of segments where MP is the max probability, counts of segments where the probability of MP is above a threshold, the probability of MP in the segment X, the average probability of MP over all segments, the maximum probability of MP over all segments, counts of segments where SVT is the max probability, counts of segments where the probability of SVT is above a threshold, the probability of SVT in the segment X, the average probability of SVT over all segments, the maximum probability of SVT over all segments, etc.
[0242] Table 2 of FIG. 27 shows example LOGIC(X) as a function of X for subsequent evaluation following a “Data Request - 15s” for continued monitoring in case of some but uncertain evidence of PVT/VF/MVT. As used herein, data request - 15s refers to a request for additional data (e.g., 30 seconds of data) 15 seconds after an earlier classification or decision. Data request - 15s is an example action from the set of actions 2599 that the processing circuitry may perform in response to applying machine learning models 2596 and classification logic 2598. Compared to Table 1, the logic in Table 2 makes it more likely to call the episode PVT/MVT and more likely to deliver therapy, i.e., it is logic with higher sensitivity to VT/VF.
[0243] Like Table 1 of FIG. 26, the criteria included in Table 2 of FIG. 27 for application by classification logic 2598 may include IF, THEN, AND, OR logic statements that include various variables, thresholds, and comparison statements (e.g., less than, greater than, equal to). For example, the variables may include counts of segments where NC is the max probability, counts of segments where the probability of NC is above a threshold, the probability of NC in the segment X, the average probability of NC over all segments, the maximum probability of NC over all segments, counts of segments where MP is the max probability, counts of segments where the probability of MP is above a threshold, the probability of MP in the segment X, the average probability of MP over all segments, the maximum probability of MP over all segments, counts of segments where SVT is the max probability, counts of segments where the probability of SVT is above a threshold, the probability of SVT in the segment X, the average probability of SVT over all segments, the maximum probability of SVT over all segments, etc. That said, the criteria in Table 2 may be more sensitive than the criteria in Table 1. For example, thresholds in Table 2 may be easier to satisfy than thresholds in Table 1, possibly resulting in more detections/classifications.
[0244] The tables of FIG. 26 and FIG. 27 are for purposes of explanation only and are not intended to be limiting. For example, the processing circuitry may include fewer or more levels of logic, such as levels based on probability and length of an event. For example, if the evidence is uncertain such that the processing circuitry requests additional data, the processing circuitry may increase resolution of the data in order to collect more detailed information (which may facilitate a determination). This logic structure may even incorporate expert knowledge (e.g., make determinations based on input from clinicians, academics, or other experts on behavior, progression, or other aspects of cardiac arrhythmias).
[0245] In some examples, the processing circuitry may evaluate additional factors when determining to perform and/or performing an action. Additional factors or considerations may include cycle length cutoff, duration programming, timeout considerations, time of day, etc. In one example, the processing circuitry may estimate the cycle length by autocorrelating segments 2592 with little to no sample lag (e.g., 0-150 sample lags). The processing circuitry may find the first peak of the signal (e.g., by detecting when a difference signal crosses zero) with autocorrelation value > 0.5 after a certain number of samples (e.g., 20, which may be referred to as blanking of 20). The processing circuitry may compute the cycle length as (first peak location) * 7.8125 ms. The processing circuitry may find segments with valid cycle length (e.g., with autocorrelation value > 0.5). The processing circuitry may require that at least a certain percentage or ratio of segments have a valid cycle length (e.g., 3 of 4 segments must have valid cycle lengths). In some examples, the processing circuitry may count the number of segments with cycle length < 300 ms. In some examples, the processing circuitry may count the number of segments with cycle length < 200 ms.
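The cycle-length estimate described above can be sketched with a normalized autocorrelation. The 7.8125 ms sample period (implying a 128 Hz sampling rate), the 0.5 threshold, and the blanking of 20 samples follow the description; the peak-detection details are a simplifying assumption:

```python
import numpy as np

SAMPLE_PERIOD_MS = 7.8125  # per-sample spacing; implies 128 Hz sampling
BLANKING = 20              # ignore autocorrelation lags below this
MAX_LAG = 150

def estimate_cycle_length_ms(segment):
    """Estimate cycle length as (first autocorrelation peak > 0.5) * 7.8125 ms.

    Returns None when no valid peak is found, i.e., when the segment has no
    valid cycle length.
    """
    x = np.asarray(segment, dtype=float)
    x = x - x.mean()
    denom = np.dot(x, x)
    if denom == 0:
        return None
    # Normalized autocorrelation for lags 0..MAX_LAG.
    n_lags = min(MAX_LAG, len(x))
    ac = np.array([np.dot(x[:len(x) - k], x[k:]) / denom for k in range(n_lags)])
    diff = np.diff(ac)
    for lag in range(max(BLANKING, 1), len(diff)):
        # First peak: the difference signal crosses from >= 0 to < 0.
        if diff[lag - 1] >= 0 and diff[lag] < 0 and ac[lag] > 0.5:
            return lag * SAMPLE_PERIOD_MS
    return None
```

For example, a signal repeating every 40 samples would yield a cycle length near 40 * 7.8125 = 312.5 ms.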
[0246] The processing circuitry may evaluate cycle length in addition to the logic described above (e.g., shown in Tables 1 and 2 of FIG. 26 and FIG. 27) to determine whether to deliver therapy, continue collecting data, or terminate the episode. For example, if the processing circuitry determines that the rhythm is MVT, the processing circuitry may count the number of segments with cycle length less than 300 ms. If the number of segments with cycle length less than 300 ms is equal to 0, then instead of delivering therapy, the processing circuitry may continue collecting data to keep monitoring the MVT in case of acceleration of the MVT.
[0247] Additionally or alternatively, the processing circuitry may estimate episode duration and use the episode duration estimate when determining an action to perform. The processing circuitry may determine episode duration by computing the difference between the current data timestamp (until offset occurs) and an onset timestamp for the episode as provided by IMD 10. In some examples, the processing circuitry may deliver therapy for PVT/VF or MVT only if the episode duration exceeds the programmed sustained duration for PVT/VF and MVT separately. If the result of the evaluation is to deliver therapy, then the processing circuitry may determine whether the rhythm is a PVT/VF or MVT.
[0248] If the Rhythm output is MVT, the processing circuitry may deliver therapy if the episode duration exceeds programmed sustained duration for MVT. The processing circuitry may instead use the programmed sustained duration for PVT if the number of segments with cycle length < 200 ms is equal to or greater than a ratio (e.g., 1 of 4 segments).
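The sustained-duration gating of the two preceding paragraphs might be sketched as follows. The specific durations and the 1-of-4 fast-segment ratio are hypothetical programmed values, not values given by the disclosure:

```python
def should_deliver_therapy(rhythm, episode_duration_s, fast_segments,
                           n_segments=4, sustained_pvt_s=10, sustained_mvt_s=30,
                           fast_ratio=0.25):
    """Decide therapy delivery from rhythm type and sustained-duration rules.

    `fast_segments` counts segments with cycle length < 200 ms; when enough
    segments are fast, an MVT uses the (typically shorter) PVT duration.
    """
    if rhythm in ("PVT", "VF"):
        return episode_duration_s >= sustained_pvt_s
    if rhythm == "MVT":
        if fast_segments / n_segments >= fast_ratio:
            return episode_duration_s >= sustained_pvt_s
        return episode_duration_s >= sustained_mvt_s
    # Other rhythms (e.g., SVT, NOS) do not trigger therapy here.
    return False
```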
[0249] If the processing circuitry continues to issue data requests (e.g., extends data collection due to uncertainty) and the Episode Duration exceeds a Data Request Timeout duration, then the processing circuitry may stop processing the episode and not deliver therapy. Thus, in general, after every evaluation, if the action output is delivering therapy or terminating the episode, then the processing circuitry may stop the evaluation at that point. If the action output is collecting more data (e.g., “Data Request - 60s” or “Data Request - 15s”), then the processing circuitry reevaluates the new data and may continue collecting additional data until the episode duration exceeds the Data Request Timeout duration. The processing circuitry may stop evaluation at this time and effectively transition to the “Terminate” state. The Timeout duration may be set to, e.g., 5 minutes, to minimize false positives rather than continuing to check for a longer period.
[0250] It should be understood that the techniques of this disclosure are not limited to the specific medical conditions used as examples herein and may be applied broadly. For example, the techniques of this disclosure may be used for any condition requiring acute intervention based on ECG or electrical signal analysis, such as MI, stroke, etc. As an example, the techniques may be applied to sleep apnea by first monitoring cardiac signals (e.g., due to the lower power consumption of heart rate based analysis) and, responsive to classifying the sleep apnea, activating impedance sensors for more sensitive and/or precise measurements. Furthermore, the techniques of this disclosure may be used for classification of episode data collected by a medical device based on detection of any health event, whether or not acute, e.g., to determine whether the detection of the event by the medical device was a true or false event, to determine whether therapy should be delivered in response to the event, or to determine whether or not the occurrence of the event should be reported to the patient, a clinician, or any other user. Examples of non-acute health events include various atrial or other non-lethal cardiac arrhythmias, for which associated episode ECGs may be analyzed using the techniques described herein. Although generally described with respect to examples in which classification is performed by computing device 12, in some examples the classification techniques may be performed by HMS 22 or an IMD 10.
[0251] In some examples, the classification techniques described herein may be implemented to determine whether to deliver a responsive therapy to an acute event, such as antitachyarrhythmia therapy by IMD 10C in response to an arrhythmia detected by IMD 10C. In such examples, the techniques for classifying episode data, e.g., described with respect to FIGS. 13-22, 24, and 25, may be used to determine whether a suspected lethal tachyarrhythmia should be treated by IMD 10C. HMS 22 or computing device 12 may perform the analysis and communicate with IMD 10C regarding whether treatment is warranted.
[0252] FIG. 28 is a flowchart illustrating one example method of the disclosure. The techniques of FIG. 28 may be performed by processing circuitry, such as processing circuitry 50A of IMD 10A of FIG. 2A, processing circuitry 50B of IMD 10B of FIG. 2B, and/or processing circuitry 130 of computing device 12 of FIG. 3. For ease of understanding, the following will be described with reference to computing device 12.

[0253] In one example of the disclosure, computing device 12 may be configured to receive episode data associated with an acute health event detected by an implantable medical device (e.g., IMD 10A or IMD 10B) (2800). Computing device 12 may be further configured to apply a first one or more machine learning models and first classification logic to the episode data to determine a first respective classification of a plurality of predetermined classifications for the episode data (2810).
[0254] Based on a probability of the first respective classification determined by the first one or more machine learning models, computing device 12 may further determine whether to retrieve first additional data from the implantable medical device and apply a second one or more machine learning models and second classification logic to the first additional data, or retrieve second additional data from the implantable medical device and apply a third one or more machine learning models and third classification logic to the second additional data (2820). Computing device 12 may further determine a classification of the acute health event from the plurality of predetermined classifications based on the application of the second one or more machine learning models to the first additional data or the third one or more machine learning models to the second additional data (2830). Computing device 12 may also determine whether to control the implantable medical device (e.g., therapy delivery circuitry 57B of IMD 10B) to deliver therapy based on the classification (2840).
[0255] FIG. 29 is a process diagram illustrating an example implementation of detecting an acute health emergency in accordance with the techniques of this disclosure. FIG. 29 illustrates a process 2900 that may be performed by IMD 10 and computing
device 12. In general, process 2900 includes IMD 10 sending episode data to computing device 12 on a periodic basis. Computing device 12 may then apply classification logic to the episode data to determine a classification, of a plurality of predetermined classifications, for the episode data.
[0256] IMD 10 may first detect an episode trigger at 2902. IMD 10 may then send collected episode data to computing device 12 at 2904. IMD 10 may continuously store episode data in memory on IMD 10. Computing device 12 may periodically request fresh episode data to evaluate. In some examples, computing device 12 may request episode data on a periodic basis, such as every 15 seconds, every 60 seconds, or some other time period. Computing device 12 may receive the episode data at 2906, and apply classification logic to the episode data at 2908. For example, computing device 12 may apply a first one or more machine learning models and first classification logic to the episode data to determine a first respective classification of a plurality of predetermined classifications for the episode data. In accordance with the techniques of this disclosure, the episode data may include a plurality of segments, and computing device 12 may apply the first one or more machine learning models to each segment of the plurality of segments. At 2910, and based on a probability of the first respective classification determined by the first one or more machine learning models, computing device 12 may determine if additional data is needed or whether or not a health emergency is detected.

[0257] If, at 2910, computing device 12 determines that additional data is needed, computing device 12 may request IMD 10 to send additional collected episode data at 2904. Computing device 12 may receive the additional episode data at 2906, and may then apply a second one or more machine learning models and second classification logic to the additional data, and/or may apply a third one or more machine learning models and third classification logic to the additional data.
[0258] Further at 2910, computing device 12 may determine a classification of the acute health event from the plurality of predetermined classifications based on the application of the second one or more machine learning models to the additional data or the third one or more machine learning models to the additional data. If computing device 12 determines that the acute health event is a health emergency, computing device 12 may output an alarm signal at 2912. If computing device 12 determines that the acute health event is not a health emergency, computing device 12 may end process 2900. In some
examples, computing device 12 may also be configured to determine whether to control IMD 10 to deliver therapy based on the classification.
[0259] FIG. 30 is a process diagram illustrating another example implementation of detecting an acute health emergency in accordance with the techniques of this disclosure. FIG. 30 illustrates a process 3000 that may be performed by IMD 10 and computing device 12. In general, process 3000 includes IMD 10 applying classification logic to the episode data to determine a classification, of a plurality of predetermined classifications, for the episode data and sending alert data to computing device 12.
[0260] IMD 10 may first detect an episode trigger at 3002. IMD 10 may apply classification logic to episode data at 3004. For example, IMD 10 may apply a first one or more machine learning models and first classification logic to the episode data to determine a first respective classification of a plurality of predetermined classifications for the episode data. In accordance with the techniques of this disclosure, the episode data may include a plurality of segments, and IMD 10 may apply the first one or more machine learning models to each segment of the plurality of segments. At 3004, and based on a probability of the first respective classification determined by the first one or more machine learning models, IMD 10 may determine if additional data is needed.
[0261] If, at 3004, IMD 10 determines that additional data is needed, IMD 10 may gather additional data. IMD 10 may then apply a second one or more machine learning models and second classification logic to the additional data, and/or may apply a third one or more machine learning models and third classification logic to the additional data. Unlike in process 2900, in process 3000, IMD 10 may be configured to evaluate episode data more continuously rather than periodically (e.g., every 15 seconds or every 60 seconds). IMD 10 may further determine a classification of the acute health event from the plurality of predetermined classifications based on the application of the second one or more machine learning models to the additional data or the third one or more machine learning models to the additional data.
[0262] At 3006, if IMD 10 determines that the acute health event is a health emergency, IMD 10 may send alert data to computing device 12 at 3010. Computing device 12 may receive the alert data (at 3012) and may output an alarm signal (3014). If, at 3006 IMD 10 determines that the acute health event is not a health emergency, IMD 10 may end process 3000. In process 3000, IMD 10 may not be required to communicate
with computing device 12 unless sending alert data. That is, compared to process 2900, process 3000 may have fewer communications between IMD 10 and computing device 12, thus saving power. In some examples, IMD 10 and/or computing device 12 may also be configured to determine whether to control IMD 10 to deliver therapy based on the classification.
[0263] FIG. 31 is a process diagram illustrating another example implementation of detecting an acute health emergency in accordance with the techniques of this disclosure. FIG. 31 illustrates a process 3100 that may be performed by IMD 10 and computing device 12. In general, process 3100 includes IMD 10 intermittently and/or continuously sending episode data to computing device 12. Computing device 12 may then apply classification logic to the episode data to determine a classification, of a plurality of predetermined classifications, for the episode data.
[0264] IMD 10 may first detect an episode trigger at 3102. IMD 10 may then send collected episode data to computing device 12 at 3104. IMD 10 may continuously store episode data in memory on IMD 10. In one example of FIG. 31, IMD 10 may be configured to continuously send episode data to computing device 12. In other examples, IMD 10 may be configured to first send episode data to computing device 12 in an intermittent or periodic fashion, such as every 15 seconds, every 60 seconds, or some other time period. Such intermittent transmission may be done initially to monitor a low severity event. IMD 10 and/or computing device 12 may cause IMD 10 to transition to continuous transmission of the episode data as the severity of the health event increases. Such continuous transmission may result in faster or more reliable alerting.
[0265] Computing device 12 may receive the episode data at 3106, and apply classification logic to the episode data at 3108. For example, computing device 12 may apply a first one or more machine learning models and first classification logic to the episode data to determine a first respective classification of a plurality of predetermined classifications for the episode data. In accordance with the techniques of this disclosure, the episode data may include a plurality of segments, and computing device 12 may apply the first one or more machine learning models to each segment of the plurality of segments. At 3110, and based on a probability of the first respective classification determined by the first one or more machine learning models, computing device 12 may determine if additional data is needed or whether or not a health emergency is detected.
[0266] If, at 3110, computing device 12 determines that additional data is needed, computing device 12 may receive additional collected episode data at 3104. Computing device 12 may receive the additional episode data at 3106, and may then apply a second one or more machine learning models and second classification logic to the additional data, and/or may apply a third one or more machine learning models and third classification logic to the additional data.
[0267] In this example, computing device 12 may continue to apply classification logic to episode data until computing device 12 has made a determination of a health event. For example, at 3110, computing device 12 may determine a classification of the acute health event from the plurality of predetermined classifications based on the application of the second one or more machine learning models to the additional data or the third one or more machine learning models to the additional data. If computing device 12 determines that the acute health event is a health emergency, computing device 12 may output an alarm signal at 3112. If computing device 12 determines that the acute health event is not a health emergency, computing device 12 may end process 3100. In some examples, computing device 12 may also be configured to determine whether to control IMD 10 to deliver therapy based on the classification.
[0268] It should be understood that various aspects disclosed herein may be combined in different combinations than the combinations specifically presented in the description and accompanying drawings. It should also be understood that, depending on the example, certain acts or events of any of the processes or methods described herein may be performed in a different sequence, may be added, merged, or left out altogether (e.g., all described acts or events may not be necessary to carry out the techniques). In addition, while certain aspects of this disclosure are described as being performed by a single module, unit, or circuit for purposes of clarity, it should be understood that the techniques of this disclosure may be performed by a combination of units, modules, or circuitry associated with, for example, a medical device.
[0269] In one or more examples, the described techniques may be implemented in hardware, software, firmware, or any combination thereof. If implemented in software, the functions may be stored as one or more instructions or code on a computer-readable medium and executed by a hardware-based processing unit. Computer-readable media may include non-transitory computer-readable media, which corresponds to a tangible
medium such as data storage media (e.g., RAM, ROM, EEPROM, flash memory, or any other medium that can be used to store desired program code in the form of instructions or data structures and that can be accessed by a computer).
[0270] Instructions may be executed by one or more processors, such as one or more digital signal processors (DSPs), general purpose microprocessors, application specific integrated circuits (ASICs), field programmable gate arrays (FPGAs), or other equivalent integrated or discrete logic circuitry. Accordingly, the term “processor” or “processing circuitry” as used herein may refer to any of the foregoing structure or any other physical structure suitable for implementation of the described techniques. Also, the techniques could be fully implemented in one or more circuits or logic elements.
[0271] The following examples are illustrative of the techniques described herein.
[0272] Example 1: A medical device system includes an implantable medical device configured to: detect an acute health event; collect episode data associated with the acute health event; transmit the episode data; and deliver therapy to a patient; and processing circuitry configured to: receive the episode data associated with the acute health event detected by the implantable medical device; apply a first one or more machine learning models and first classification logic to the episode data to determine a first respective classification of a plurality of predetermined classifications for the episode data; based on a probability of the first respective classification determined by the first one or more machine learning models, determine whether to: retrieve first additional data from the implantable medical device and apply a second one or more machine learning models and second classification logic to the first additional data, or retrieve second additional data from the implantable medical device and apply a third one or more machine learning models and third classification logic to the second additional data; determine a classification of the acute health event from the plurality of predetermined classifications based on the application of the second one or more machine learning models to the first additional data or the third one or more machine learning models to the second additional data; and determine whether to control the implantable medical device to deliver therapy based on the classification.
[0273] Example 2: The medical device system of example 1, wherein the processing circuitry is configured to request additional episode data from the implantable medical device based on the classification.
[0274] Example 3: The medical device system of example 1 or 2, wherein the processing circuitry is configured to determine the first respective classification of the acute health event based on a number of segments of a plurality of segments of the episode data determined to have the first respective classification.
[0275] Example 4: The medical device system of any one or more of examples 1 to 3, wherein the processing circuitry is configured to determine the classification of the acute health event based on a time location of one or more segments determined to have the classification within the episode data.
[0276] Example 5: The medical device system of any one or more of examples 1 to 4, wherein the processing circuitry is configured to: apply the one or more machine learning models to each segment of a plurality of segments of the episode data to determine a respective probability associated with the respective classification for each segment of the plurality of segments; and determine the classification of the acute health event based on the respective probabilities.
[0277] Example 6: The medical device system of example 5, wherein, to determine the classification of the acute health event based on the respective probabilities, the processing circuitry is configured to compare the respective probabilities to a threshold.
[0278] Example 7: The medical device system of example 5 or 6, wherein, to determine the classification of the acute health event based on the respective probabilities, the processing circuitry is configured to compare a combination of probabilities to a threshold.
[0279] Example 8: The medical device system of example 7, wherein the combination comprises a weighted combination.
[0280] Example 9: The medical device system of example 8, wherein the processing circuitry weights the probabilities based on a time location of the corresponding segments within the episode data.
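Examples 5 through 9 describe combining per-segment probabilities, optionally weighted by each segment's time location, and comparing the result to a threshold. A minimal sketch, with illustrative weights and threshold chosen purely as assumptions:

```python
def classify_from_segments(segment_probs, weights, threshold=0.5):
    """Combine per-segment probabilities for one candidate classification
    using a weighted average, then compare the result to a threshold.

    The weights can encode time location, e.g. weighting segments later
    in the episode more heavily."""
    weighted_sum = sum(w * p for w, p in zip(weights, segment_probs))
    combined = weighted_sum / sum(weights)
    return combined >= threshold
```

With weights that double the influence of the last two segments, a rising probability sequence crosses the threshold even if early segments scored low, which is one plausible reading of weighting by time location.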
[0281] Example 10: The medical device system of example 1, wherein the one or more machine learning models comprise a first one or more machine learning models, and wherein, to determine the classification of the acute health event from the plurality of predetermined classifications based on the respective classifications of a plurality of segments of the episode data, the processing circuitry is configured to apply the respective classifications to a second one or more machine learning models.
[0282] Example 11: The medical device system of example 10, wherein the second one or more machine learning models comprise a long short-term memory network.
[0283] Example 12: The medical device system of any one or more of examples 1 to 11, wherein the medical device system comprises a smartphone.
[0284] Example 13: The medical device system of any one or more of examples 1 to 11, wherein the medical device system comprises an Internet of Things device.
[0285] Example 14: The medical device system of any one or more of examples 1 to 13, wherein the processing circuitry is configured to apply the first machine learning model to each segment of a plurality of segments of the episode data to determine, for each segment of the plurality of segments, the first respective classification of the plurality of predetermined classifications for the episode data.
[0286] Example 15: The medical device system of example 14, wherein the processing circuitry is configured to determine the classification of the acute health event from the plurality of predetermined classifications further based on the respective classifications of the plurality of segments.
[0287] Example 16: The medical device system of any one or more of examples 1 to
15, wherein a resolution of the first additional data is greater than a resolution of the second additional data.
[0288] Example 17: The medical device system of any one or more of examples 1 to
16, wherein the first additional data derives from a first time period, wherein the second additional data derives from a second time period, and wherein a duration of the first time period is less than a duration of the second time period.
[0289] Example 18: The medical device system of example 17, wherein the duration of the first time period is approximately 15 seconds, and wherein the duration of the second time period is approximately 60 seconds.
[0290] Example 19: The medical device system of any one or more of examples 1 to
18, wherein the second one or more machine learning models are the same as the first one or more machine learning models.
[0291] Example 20: The medical device system of any one or more of examples 1 to
19, wherein the third one or more machine learning models are the same as the first one or more machine learning models.
[0292] Example 21: The medical device system of any one or more of examples 1 to
20, wherein the processing circuitry is configured to determine whether to retrieve the first additional data or the second additional data based on noisiness of the episode data.
[0293] Example 22: The medical device system of any one or more of examples 1 to
21, wherein the processing circuitry is configured to determine whether to retrieve the first additional data or the second additional data based on a health risk severity of the first respective classification.
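Examples 21 and 22 describe routing the additional-data request based on the noisiness of the episode data and the health risk severity of the first classification. The following sketch combines both criteria; the numeric limits and the severity scale are hypothetical assumptions, not values from the disclosure.

```python
def choose_retrieval(noise_level, risk_severity,
                     noise_limit=0.3, severity_limit=2):
    """Return which additional-data request to make: 'first' for the short,
    high-resolution window, or 'second' for the longer window.

    noise_level: fraction of the episode flagged as noisy (0.0 to 1.0).
    risk_severity: integer severity rank of the first classification."""
    if risk_severity >= severity_limit and noise_level <= noise_limit:
        # Clean signal and high-risk classification: confirm quickly on
        # the short, high-resolution window.
        return "first"
    # Noisy signal or lower-risk classification: gather broader context.
    return "second"
```

The design choice here is that noise pushes toward the longer window (more context to average out artifacts), while high severity with a clean signal favors the fast, high-resolution confirmation.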
[0294] Example 23: The medical device system of any one or more of examples 1 to 22, wherein the second classification logic is the same as the first classification logic.
[0295] Example 24: The medical device system of any one or more of examples 1 to 23, wherein the third classification logic is the same as the first classification logic.
[0296] Example 25: The medical device system of any one or more of examples 1 to
24, wherein the therapy is an antitachyarrhythmia shock.
[0297] Example 26: The medical device system of any one or more of examples 1 to
25, wherein the implantable medical device comprises an insertable cardiac monitor comprising: a housing configured for subcutaneous implantation in the patient, the housing having a length between 40 millimeters (mm) and 60 mm between a first end and a second end, a width less than the length, and a depth less than the width; a first electrode at or proximate to the first end; a second electrode at or proximate to the second end; and circuitry within the housing and configured to sense an electrocardiogram corresponding to the episode data via the first electrode and the second electrode and detect the acute health event based on the electrocardiogram.
[0297] Example 27: A computing device includes communication circuitry configured to wirelessly communicate with a sensor device on a patient or implanted within the patient; one or more output devices; and processing circuitry configured to: determine a first classification of an acute health event based on applying a first one or more machine learning models and first classification logic to episode data; and based on a probability of the first classification, apply at least one of a second one or more machine learning models or a second classification logic to additional data to determine a final classification of the acute health event.
[0299] Various examples have been described. These and other examples are within the scope of the following claims.
Claims
1. A medical device system comprising: an implantable medical device configured to: detect an acute health event; collect episode data associated with the acute health event; transmit the episode data; and deliver therapy to a patient; and processing circuitry configured to: receive the episode data associated with the acute health event detected by the implantable medical device; apply a first one or more machine learning models and first classification logic to the episode data to determine a first respective classification of a plurality of predetermined classifications for the episode data; based on a probability of the first respective classification determined by the first one or more machine learning models, determine whether to: retrieve first additional data from the implantable medical device and apply a second one or more machine learning models and second classification logic to the first additional data, or retrieve second additional data from the implantable medical device and apply a third one or more machine learning models and third classification logic to the second additional data; determine a classification of the acute health event from the plurality of predetermined classifications based on the application of the second one or more machine learning models to the first additional data or the third one or more machine learning models to the second additional data; and determine whether to control the implantable medical device to deliver therapy based on the classification.
2. The medical device system of claim 1, wherein the processing circuitry is configured to apply the first machine learning model to each segment of a plurality of segments of the episode data to determine, for each segment of the plurality of segments, the first
respective classification of the plurality of predetermined classifications for the episode data.
3. The medical device system of claim 1 or 2, wherein the processing circuitry is configured to determine the classification of the acute health event from the plurality of predetermined classifications further based on the respective classifications of the plurality of segments.
4. The medical device system of any one or more of claims 1 to 3, wherein a resolution of the first additional data is greater than a resolution of the second additional data.
5. The medical device system of any one or more of claims 1 to 4, wherein the first additional data derives from a first time period, wherein the second additional data derives from a second time period, and wherein a duration of the first time period is less than a duration of the second time period.
6. The medical device system of claim 5, wherein the duration of the first time period is approximately 15 seconds, and wherein the duration of the second time period is approximately 60 seconds.
7. The medical device system of any one or more of claims 1 to 6, wherein the second one or more machine learning models are the same as the first one or more machine learning models.
8. The medical device system of any one or more of claims 1 to 7, wherein the third one or more machine learning models are the same as the first one or more machine learning models.
9. The medical device system of any one or more of claims 1 to 8, wherein the processing circuitry is configured to determine whether to retrieve the first additional data or the second additional data based on noisiness of the episode data.
10. The medical device system of any one or more of claims 1 to 9, wherein the processing circuitry is configured to determine whether to retrieve the first additional data or the second additional data based on a health risk severity of the first respective classification.
11. The medical device system of any one or more of claims 1 to 10, wherein the therapy is an antitachyarrhythmia shock.
12. A medical device system comprising: communication circuitry configured to wirelessly communicate with a sensor device on a patient or implanted within the patient; one or more output devices; and processing circuitry configured to: receive episode data for an acute health event detected by the sensor device via the communication circuitry, the episode data transmitted by the sensor device in response to detecting the acute health event; apply a first one or more machine learning models and first classification logic to the episode data to determine a first respective classification of a plurality of predetermined classifications for the episode data; based on a probability of the first respective classification determined by the first one or more machine learning models, determine whether to: retrieve first additional data from the sensor device and apply a second one or more machine learning models and second classification logic to the first additional data, or retrieve second additional data from the sensor device and apply a third one or more machine learning models and third classification logic to the second additional data; determine a classification of the acute health event from the plurality of predetermined classifications based on the application of the second one or more machine learning models to the first additional data or the third one or more machine learning models to the second additional data; and determine whether to control the sensor device to deliver therapy based on the classification.
13. The medical device system of claim 12, wherein the sensor device comprises an implantable medical device.
14. The medical device system of claim 13, wherein the implantable medical device comprises an insertable cardiac monitor comprising: a housing configured for subcutaneous implantation in the patient, the housing having a length between 40 millimeters (mm) and 60 mm between a first end and a second end, a width less than the length, and a depth less than the width; a first electrode at or proximate to the first end; a second electrode at or proximate to the second end; and circuitry within the housing and configured to sense an electrocardiogram corresponding to the episode data via the first electrode and the second electrode and detect the acute health event based on the electrocardiogram.
15. The medical device system of any one or more of claims 12 to 14, wherein the therapy is an antitachyarrhythmia shock.
Applications Claiming Priority (4)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| US202363610318P | 2023-12-14 | 2023-12-14 | |
| US63/610,318 | 2023-12-14 | ||
| US202463639335P | 2024-04-26 | 2024-04-26 | |
| US63/639,335 | 2024-04-26 |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| WO2025125944A1 (en) | 2025-06-19 |
Family
ID=93842119
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/IB2024/061512 (WO2025125944A1, pending) | Delivering therapy based on machine learning model classification of health events | 2023-12-14 | 2024-11-18 |
Country Status (1)
| Country | Link |
|---|---|
| WO (1) | WO2025125944A1 (en) |
Citations (4)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20140276928A1 (en) | 2013-03-15 | 2014-09-18 | Medtronic, Inc. | Subcutaneous delivery tool |
| US9307920B2 (en) * | 2012-04-17 | 2016-04-12 | Cardiac Pacemakers, Inc. | Method and apparatus for automatic arrhythmia classification with confidence estimation |
| WO2022231678A1 (en) * | 2021-04-30 | 2022-11-03 | Medtronic, Inc. | Response by robotic device to an acute health event reported by medical device |
| WO2023154864A1 (en) * | 2022-02-10 | 2023-08-17 | Medtronic, Inc. | Ventricular tachyarrhythmia classification |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | 121 | | EP: the EPO has been informed by WIPO that EP was designated in this application. Ref document number: 24820813; Country of ref document: EP; Kind code of ref document: A1 |