
CN119301428A - Posture detection using thermal data - Google Patents

Posture detection using thermal data

Info

Publication number
CN119301428A
CN119301428A (application number CN202380043635.7A)
Authority
CN
China
Prior art keywords
data
human
processor
sensor
bounding box
Prior art date
Legal status
Granted
Application number
CN202380043635.7A
Other languages
Chinese (zh)
Other versions
CN119301428B
Inventor
鲁德拉西斯·查克拉博蒂
周凯冬
张砚
曾佳旎
邓鸿浩
Current Assignee
Bailuo Technology Co., Ltd.
Original Assignee
Bailuo Technology Co., Ltd.
Priority date
Filing date
Publication date
Priority claimed from U.S. Ser. No. 17/708,493 (U.S. Patent No. 12,050,133 B2)
Application filed by Bailuo Technology Co., Ltd.
Publication of CN119301428A
Application granted
Publication of CN119301428B
Legal status: Active
Anticipated expiration


Classifications

    • G PHYSICS
    • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H40/00 ICT specially adapted for the management or administration of healthcare resources or facilities; ICT specially adapted for the management or operation of medical equipment or devices
    • G16H40/60 ICT specially adapted for the management or administration of healthcare resources or facilities; ICT specially adapted for the management or operation of medical equipment or devices for the operation of medical equipment or devices
    • G16H40/63 ICT specially adapted for the management or administration of healthcare resources or facilities; ICT specially adapted for the management or operation of medical equipment or devices for the operation of medical equipment or devices for local operation
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/10 Image acquisition
    • G06V10/12 Details of acquisition arrangements; Constructional details thereof
    • G06V10/14 Optical characteristics of the device performing the acquisition or on the illumination arrangements
    • G06V10/143 Sensing or illuminating at different wavelengths
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77 Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/778 Active pattern-learning, e.g. online learning of image or video features
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77 Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/80 Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
    • G06V10/809 Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level of classification results, e.g. where the classifiers operate on the same input data
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/52 Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G PHYSICS
    • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H30/00 ICT specially adapted for the handling or processing of medical images
    • G16H30/40 ICT specially adapted for the handling or processing of medical images for processing medical images, e.g. editing
    • G PHYSICS
    • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H40/00 ICT specially adapted for the management or administration of healthcare resources or facilities; ICT specially adapted for the management or operation of medical equipment or devices
    • G16H40/60 ICT specially adapted for the management or administration of healthcare resources or facilities; ICT specially adapted for the management or operation of medical equipment or devices for the operation of medical equipment or devices
    • G16H40/67 ICT specially adapted for the management or administration of healthcare resources or facilities; ICT specially adapted for the management or operation of medical equipment or devices for the operation of medical equipment or devices for remote operation
    • G PHYSICS
    • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H50/00 ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
    • G16H50/20 ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for computer-aided diagnosis, e.g. based on medical expert systems
    • G PHYSICS
    • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H50/00 ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
    • G16H50/70 ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for mining of medical data, e.g. analysing previous cases of other patients
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01J MEASUREMENT OF INTENSITY, VELOCITY, SPECTRAL CONTENT, POLARISATION, PHASE OR PULSE CHARACTERISTICS OF INFRARED, VISIBLE OR ULTRAVIOLET LIGHT; COLORIMETRY; RADIATION PYROMETRY
    • G01J5/00 Radiation pyrometry, e.g. infrared or optical thermometry
    • G01J2005/0077 Imaging

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Public Health (AREA)
  • Biomedical Technology (AREA)
  • General Health & Medical Sciences (AREA)
  • Theoretical Computer Science (AREA)
  • Databases & Information Systems (AREA)
  • Epidemiology (AREA)
  • Primary Health Care (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Data Mining & Analysis (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Pathology (AREA)
  • Software Systems (AREA)
  • Evolutionary Computation (AREA)
  • Computing Systems (AREA)
  • Artificial Intelligence (AREA)
  • Business, Economics & Management (AREA)
  • General Business, Economics & Management (AREA)
  • Radiology & Medical Imaging (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Alarm Systems (AREA)
  • Image Analysis (AREA)
  • Medical Treatment And Welfare Office Work (AREA)
  • Radiation Pyrometers (AREA)

Abstract

The system may provide functionality for pose detection by applying a pose detection algorithm to a reduced-resolution image. The method may include receiving an image of a human from a sensor, receiving a placement of a bounding box on the image, wherein the bounding box contains pixel data of the human in the image, obtaining the bounding box data from within the bounding box, and determining a pose of the human based on the bounding box data.

Description

Posture detection using thermal data
Cross Reference to Related Applications
The present application claims priority to and the benefit of U.S. Ser. No. 17/708,493, filed on March 30, 2022 and entitled "POSE DETECTION USING THERMAL DATA". U.S. Ser. No. 17/708,493 is a continuation-in-part of, and claims priority to and the benefit of, U.S. Ser. No. 17/516,954, filed on November 2, 2021 and entitled "USER INTERFACE FOR DETERMINING LOCATION, TRAJECTORY AND BEHAVIOR". U.S. Ser. No. 17/516,954 is a continuation-in-part of U.S. Ser. No. 17/232,551, filed in April 2021 and entitled "THERMAL DATA ANALYSIS FOR DETERMINING LOCATION, TRAJECTORY AND BEHAVIOR". U.S. Ser. No. 17/232,551 is a continuation of U.S. Ser. No. 17/178,784 (now U.S. Patent No. 11,022,495, issued in 2021), filed in 2021 and entitled "MONITORING HUMAN LOCATION, TRAJECTORY AND BEHAVIOR USING THERMAL DATA". U.S. Ser. No. 17/178,784 claims priority to U.S. Provisional Ser. No. 62/986,442, filed on March 6, 2020 and entitled "MULTI-WIRELESS-SENSOR SYSTEM, DEVICE, AND METHOD FOR MONITORING HUMAN LOCATION AND BEHAVIOR". All of the above disclosures are incorporated herein by reference in their entirety for all purposes.
Technical Field
The present disclosure relates generally to detecting poses using thermal data, and more particularly to fall detection and other activity analysis using pose information.
Background
While some merchants may attempt to use basic machines to count the number of people entering and leaving a particular door of a store, this information is of very limited use in analyzing the actions of those people within the store. Merchants may be very interested in better understanding the movement, trajectories, and activities of customers within their stores. For example, a merchant may be interested in knowing whether a particular display in a particular aisle attracts more customers to that aisle. In addition, merchants may be interested in knowing how many customers who walk along aisle #4 also walk along aisle #5, and how many customers who walk along aisle #4 bypass aisle #5 and then walk along aisle #6. These data may help merchants optimize their operations and maximize profits.
Merchants may also be interested in a fuller understanding of how traffic patterns in their stores vary over time. To help merchants better allocate their own resources and optimize their business relationships with cooperating third parties, merchants may want to learn traffic patterns during the busiest times of the day, during the busiest days of the week, during particular months, and/or during particular years. Further, to assist in identifying abnormal behavior and/or detecting incidents in real time, merchants may want to obtain more information about the spatial and/or temporal patterns and occupancy levels of traffic.
In addition, in order to analyze the well-being of a resident and determine whether the resident qualifies for independent living, assisted living providers often wish to obtain spatial and temporal movement data for their residents. For example, a provider may want to analyze a resident's movement speed based on the resident's indoor location over time, calculate total calories burned based on the resident's movement, and/or monitor the resident's body temperature.
Detecting human poses may also be helpful in fall detection and other activity analysis. However, pose detection is often needed in private homes or other environments where privacy is a concern. In such settings, users typically prefer that no data collection device or scanner acquire or store personally identifying information. Thus, there is a need for a non-invasive technique to implement fall detection analysis in a residence. In this regard, high-resolution cameras or similar technologies may be undesirable because such cameras may use keypoints to acquire and/or store facial data or other personally identifying information. Keypoints may include a subset of points on a human skeleton, which requires obtaining detailed information about the person. Identifying keypoints in low-resolution images is very difficult and often impossible, so high-resolution techniques are typically required to identify keypoints. In contrast, residential environments tend to use low-resolution data; an example of low-resolution data is thermal data. As such, there is a need to use thermal data in combination with algorithms that can process the thermal data to provide pose detection.
In addition to cameras, other solutions have been used for fall detection, such as watches (e.g., using accelerometers and gyroscopes), radar, or lidar. However, watch technology requires that a person actively wear the device. Furthermore, radar may produce false positives (e.g., it may be triggered by a pet), and radar technology is often not effective at distinguishing stationary people as part of the detection process. Moreover, existing systems may require analysis of densely aggregated data points, which typically results in lower accuracy. Analysis of such dense point clouds may also be more expensive, because additional computing power may be required to distinguish the points.
Disclosure of Invention
In various embodiments, a system may implement a method that includes receiving, by a processor, an image of a human from a sensor; receiving, by the processor, a placement of a bounding box on the image, wherein the bounding box contains pixel data of the human in the image; obtaining, by the processor, the bounding box data from within the bounding box; and determining, by the processor, a pose of the human based on the bounding box data.
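As a rough, non-authoritative illustration of this flow, the sketch below crops a bounding box out of a low-resolution thermal frame and assigns a pose. The function names, the aspect-ratio heuristic, and the 1.5 thresholds are hypothetical stand-ins for whatever trained classifier an embodiment would actually use.

    import numpy as np

    def crop_bounding_box(frame, box):
        """Extract the pixel data inside the bounding box (row0, col0, row1, col1)."""
        r0, c0, r1, c1 = box
        return frame[r0:r1, c0:c1]

    def classify_pose(box_data):
        """Toy stand-in for a trained model: a wide warm blob suggests lying,
        a tall blob standing, and a compact blob sitting."""
        warm = box_data > box_data.mean() + box_data.std()  # warmest pixels in the crop
        rows, cols = np.nonzero(warm)
        if rows.size == 0:
            return "unknown"
        height = rows.max() - rows.min() + 1
        width = cols.max() - cols.min() + 1
        if width > 1.5 * height:
            return "lying"
        return "standing" if height > 1.5 * width else "sitting"

    frame = np.random.normal(20.0, 0.3, (32, 32))  # synthetic 32 x 32 thermal frame, deg C
    frame[10:24, 14:18] += 10.0                    # tall warm blob: a standing person
    print(classify_pose(crop_bounding_box(frame, (8, 12, 26, 20))))  # -> "standing"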
In various embodiments, the method may further include training, by the processor, a neural network to predict the placement of the bounding box on the image. The method may further include training, by the processor, the neural network using the pixel data of the human, thermal data of the human, and environmental data. The method may further include adjusting, by the processor, an algorithm of the neural network based on the environmental data, wherein the environmental data includes at least one of an ambient temperature, an indoor temperature, a floor plan, a non-human thermal object, a sex of the human, an age of the human, a height of the sensor, clothing of the human, or a body weight of the human.
In various embodiments, the sensor may acquire thermal data about the human. A user may indicate the placement of the bounding box on the image. The determination of the pose may be made for a single frame of an image captured by the sensor. Determining the pose may also include determining an aggregate pose spanning multiple frames over a period of time. The image may be part of a video clip of the human. Acquiring the bounding box data may include acquiring the bounding box data during at least one of an initial calibration session or over time. The pose may include at least one of sitting, standing, lying, exercising, dancing, running, or eating.
In various embodiments, the method may further include determining, by the processor, a fall based on the aggregate pose changing from at least one of a standing pose or a sitting pose to a lying pose for a certain amount of time. The method may include extracting, by the processor, distinguishing features from a plurality of frames of the image using pattern recognition. The method may include limiting, by the processor, a resolution of the image based on at least one of a privacy concern, power consumption of the sensor, cost of the pixel data, bandwidth of the pixel data, computational cost, or computational bandwidth. The method may include labeling, by the processor, the pose of the human in the image.
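A minimal sketch of such an aggregate-pose fall rule follows. It assumes a majority vote over a rolling window of per-frame poses and an illustrative ten-second lying threshold; the patent does not fix either value.

    import time
    from collections import Counter, deque

    class FallDetector:
        """Flag a fall when the aggregate pose switches from standing or
        sitting to lying and stays lying for a sustained period."""

        def __init__(self, window=15, lying_seconds=10.0):
            self.poses = deque(maxlen=window)  # rolling window of per-frame poses
            self.lying_seconds = lying_seconds
            self.lying_since = None
            self.prev_aggregate = None

        def update(self, pose, now=None):
            now = time.monotonic() if now is None else now
            self.poses.append(pose)
            aggregate = Counter(self.poses).most_common(1)[0][0]  # majority vote
            if aggregate == "lying" and self.prev_aggregate in ("standing", "sitting"):
                self.lying_since = now  # transition just observed; start the clock
            elif aggregate != "lying":
                self.lying_since = None  # person is no longer lying; reset
            self.prev_aggregate = aggregate
            return self.lying_since is not None and now - self.lying_since >= self.lying_seconds

Calling update() once per frame returns True only after the lying pose has persisted for the configured interval following a standing-to-lying or sitting-to-lying transition.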
In various embodiments, the method may further include determining, by the processor, a temperature of the human in the space based on infrared (IR) energy data regarding IR energy from the human; determining, by the processor, position coordinates of the human in the space; comparing, by the sensor system, the position coordinates of the human with position coordinates of a fixture; and determining, by the sensor system, that the subject is a human in response to the temperature of the subject being within a range and in response to the position coordinates of the subject being different from the position coordinates of the fixture. The method may include analyzing, by the processor, distinguishing features from patterns of overhead thermal signatures of the human to determine the pose of the human. The method may include determining, by the processor, a trajectory of the human based on changes in temperature in the pixel data, wherein the temperature is projected onto a pixel grid.
Drawings
The subject matter of the present disclosure is particularly pointed out and distinctly claimed in the concluding portion of the specification. A more complete understanding of the present disclosure, however, may best be obtained by referring to the detailed description and claims in connection with the accompanying drawings.
Fig. 1A is an exemplary schematic diagram of the major components of a sensor node as part of an overall system, according to various embodiments.
Fig. 1B is an exemplary schematic diagram of a gateway and microprocessor as part of an overall system according to various embodiments.
FIG. 2 is an exemplary data flow diagram according to various embodiments.
FIG. 3 is an exemplary system architecture according to various embodiments.
Fig. 4A and 4B are exemplary user interfaces according to various embodiments.
Fig. 5 is an exemplary building layout according to various embodiments.
Fig. 6 is an exemplary building layout showing certain sensor nodes and coverage areas for each sensor node, according to various embodiments.
FIG. 7 is an exemplary user interface showing an application's response to detecting a sensor node in physical space after a user successfully logs in to the sensor node, according to various embodiments.
Fig. 8A and 8B are exemplary user interfaces that prompt a user to upload files that may provide supplemental visual information as a representation of their physical space, according to various embodiments.
FIG. 9 is an exemplary user interface showing certain sensor nodes located in a space that can be labeled with names to allow a user to understand the context information of each sensor node in their equivalent space, according to various embodiments.
FIG. 10 is an exemplary user interface showing the ability to set up and calibrate sensor nodes, according to various embodiments.
Fig. 11A-11C illustrate exemplary thermal signature patterns, including standing, sitting, and lying positions, according to various embodiments.
FIG. 12 illustrates an exemplary pose inference process in accordance with various embodiments.
Fig. 13 illustrates an exemplary fall detection process according to various embodiments.
Detailed Description
In various embodiments, the system is configured to locate, track, and/or analyze activities of the living being in the environment. The system does not require the input of personnel biometric data. Although the present disclosure may discuss human activity, the present disclosure contemplates tracking any item that may provide Infrared (IR) energy, such as, for example, an animal or any object. Although the present disclosure may discuss an indoor environment, the system may also track in an outdoor environment (e.g., an outdoor concert venue, an outdoor entertainment park, etc.) or a hybrid environment of an outdoor environment and an indoor environment.
As set forth in more detail in fig. 1A, 1B, and 3, in various embodiments, the system may include a plurality of sensor nodes 102, a gateway 135, a microprocessor 140, a computing module 350 (e.g., a cloud computing module), a database 360, and/or a user interface 370 (e.g., fig. 4A, 4B, 7, 8, 9, and 10). Each sensor node 102 may include an enclosure 105, an antenna 110, a sensor module 115, a switch 120, a Light Emitting Diode (LED) 125, and an electrical power source 130.
In various embodiments, the sensor module 115 may be any type of sensor, such as a thermopile sensor module. The thermopile sensor module 115 may include, for example, a Heimann GmbH sensor module or a Panasonic AMG8833. Each sensor module 115 may be housed in the enclosure 105. The sensor module 115 is configured to measure temperature from a distance by detecting IR energy from an object (e.g., a living being): the warmer the organism, the more IR energy it emits. The thermopile sensing elements in the thermopile sensor module 115 may include thermocouples on a silicon chip. The thermocouples absorb IR energy and generate an output signal indicative of the amount of IR energy; higher temperatures therefore result in more absorbed IR energy and a higher signal output.
In various embodiments, the sensor node 102 interface may be wireless to help reduce the labor and material costs associated with installation. In various embodiments, each of the sensor nodes 102 may obtain power from any power source 130, and one power source 130 may power one or more sensor nodes 102. Each of the sensor nodes 102 may be individually battery powered. The battery may draw sufficiently little power to operate for more than 2 years on a single battery (e.g., a 19 Wh battery). The battery 130 may include a battery from any manufacturer, such as PKCELL batteries, D-cell batteries, or any other battery type, and may be housed within a battery holder (e.g., a Bulgin battery holder). The system may also measure the battery voltage of the battery 130 (e.g., a D-cell battery). The battery voltage may be measured using an analog-to-digital converter located on the same board as the antenna 110 (e.g., a Midatronics Dusty PCB antenna). The system may also add a timestamp to the battery voltage data when the battery voltage measurement is obtained.
By adding more sensor nodes 102 to the array of sensor nodes 102, the system can be expanded to a larger footprint. In various embodiments, the sensor nodes 102 may be added dynamically; an exemplary user interface for adding sensors is set forth in FIG. 7. In particular, a user may add a sensor node 102 to, or remove a sensor node 102 from, the established network of sensor nodes 102 at any time and in any place. A sensor node 102 may be located anywhere, as long as its location is within communication range of the gateway 135. As such, a large number of sensor nodes 102 may form a mesh network to communicate with the gateway 135, and the sensor nodes 102 may communicate with the gateway 135 at about the same time. Each new sensor node 102 connected to the gateway 135 further extends the mesh network boundary, improving system stability and performance. The sensor nodes 102 may be positioned or mounted on any portion of a building or on any object. For example, a sensor node 102 may be installed in a ceiling, side wall, or floor of the desired space using any fastener known in the art. For a ceiling 2.5 meters high, the sensor nodes 102 may be spaced, for example, 4 meters apart.
In various embodiments, each sensor node 102 installed in a given space may have a unique number (e.g., a MAC address) assigned to it. The system uses the unique number to create a structured network of numbered sensor nodes 102. As shown in FIG. 6, each sensor node may cover a different area and carries this unique number, which is accessible to the user both in the digital environment and in the physical environment. In particular, the unique number is clearly printed on the enclosure 105 of the sensor node and is displayed on the user's screen when the user is installing the sensor node. In this way, the user can verify whether the location of their physical sensor node matches its digital representation in the electronic space they have created in the installation application.
As set forth in FIGS. 7 and 8B, after a sensor node 102 is added and set up in the system, a profile is created for the sensor node 102. The user is prompted either to scan a QR code located on the sensor, automatically registering a digital representation of the physical sensor, or to manually enter the MAC address of the sensor in question. The sensor node 102 profile may include the sensor node 102's height, MAC address, relative position in space, surrounding objects, and/or background information. The system determines the coverage of each sensor node based on the height of the sensor node 102 that the user entered in its profile. Some examples of context information include the name of the room in which the sensor is located, the name of the sensor itself (if the user wishes to assign one), and the number assigned to the sensor. The user may upload files about the surrounding environment, as shown in FIG. 8A. Such files may include, for example, PDF, JPG, PNG, 3DM, OBJ, FBX, STL, and/or SKP files. The surrounding information may include furniture within the sensor node 102's field of view, the building layout around the sensor node 102's field of view, and the like. The user may decide to add surrounding objects within the space at their discretion. The sensor node may not register surrounding objects; however, the surrounding objects may provide a richer visual environment for the user's own personal use. The sensor node 102 profile may be stored in the user's profile in the database 360.
The thermopile sensor module 115 may project the temperature of an object onto a grid. The grid may be an 8-pixel by 8-pixel, 16-pixel by 16-pixel, or 32-pixel by 32-pixel grid (64, 256, or 1024 pixels, respectively). The thermopile sensor module 115 may be tuned to detect a specific thermal spectrum, allowing detection of an object (e.g., a human body) having a normal body temperature. Average normal body temperature is generally recognized as 98.6°F (37°C); however, normal body temperature can have a wide range, from 97°F (36.1°C) to 99°F (37.2°C). Higher temperatures generally indicate infection or illness. The system can detect this temperature difference because the sensor module 115 may have an accuracy of 0.5°C. In the case where multiple human bodies are in the same area, the thermopile sensor module 115 captures and processes each body as a distinct heat source. In particular, the system avoids overlapping body-temperature readings from different bodies by including a calibration process (an exemplary user interface is shown in FIG. 10) built into the 3D front end.
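A minimal sketch of band-based detection on such a grid follows. The 30 to 38 degrees Celsius apparent-temperature band is an assumption for illustration; clothing and distance push surface readings below core body temperature, so actual tuning would differ.

    import numpy as np

    HUMAN_BAND_C = (30.0, 38.0)  # assumed apparent skin-temperature band, not core temperature

    def human_pixels(grid, band=HUMAN_BAND_C):
        """Boolean mask of grid cells whose temperature falls in the human band."""
        lo, hi = band
        return (grid >= lo) & (grid <= hi)

    grid = np.full((8, 8), 21.0)     # 8 x 8 frame at room temperature
    grid[2:6, 3:5] = 33.5            # warm blob from a person
    print(human_pixels(grid).sum())  # -> 8 pixels flagged as human temperature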
As part of the calibration process (an exemplary user interface is shown in FIG. 10), the application's interface asks the user to exit the coverage of all sensor modules 115 so that the system can automatically adjust the sensitivity of the sensor modules 115. Starting from maximum sensitivity, the system gradually decreases the sensitivity until high-frequency noise is no longer detected. Eliminating this noise allows bodies to be detected as distinct heat sources and, subsequently, two heat sources such as human bodies to be detected as separate and independent entities. For spatially overlapping readings between the "fields of view" of two sensor modules 115, during the calibration process the system identifies the overlapping region between the two sensor modules 115 and averages the common detections between them. In the event that overlap is detected among more than two sensor modules 115, the system averages the overlap of each pair in sequence. For example, for overlap among sensor modules A, B, and C, the system first averages A and B and then averages the AB result with C, as sketched below.
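The sequential pairwise averaging can be expressed compactly; the helper below is hypothetical, and a real implementation would average only the overlapping pixels rather than whole readings.

    import numpy as np

    def fuse_overlap(readings):
        """Average reading A with B, then the AB result with C, and so on."""
        fused = readings[0]
        for nxt in readings[1:]:
            fused = (fused + nxt) / 2.0
        return fused

    a, b, c = np.array([30.0]), np.array([32.0]), np.array([35.0])
    print(fuse_overlap([a, b, c]))  # ((30 + 32) / 2 + 35) / 2 = 33.0

Note that this sequential scheme weights later readings more heavily than a plain three-way mean, which follows directly from the pairwise procedure described above.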
If automatic calibration fails during the calibration process (an exemplary user interface is shown in FIG. 10), the system automatically generates a digital path between installed sensor nodes, prompting the user to physically stand under each sensor node. When the user does so, the sensor node 102 detects the user's movement, and the detection becomes visible within the application. If successful, the user is prompted to follow the path and thus complete calibration of each sensor node 102 in the network. If any sensor does not respond as described above, the user is prompted to digitally adjust the sensitivity of the sensor module 115 by sliding a digital bar to set the sensitivity level of the sensor node in question. More specifically, during troubleshooting of the calibration process, the user is prompted to stand under the physical sensor at each of the four corners of its field of view, one corner at a time. At each corner, designated by default as A, B, C, and D in the application, the user is asked to remain until the system detects their presence and successfully renders it on the sensor's digital proxy. Once the location is detected, the application asks the user which of the four possible corners they are attempting to mark.
In various embodiments, the thermopile sensor module 115 may detect an array of temperature readings. In addition to detecting the temperature of a living being based on pixel values, the thermopile sensor module 115 may also obtain the temperature of the environment in which the sensor module 115 is located. The system uses local and global information (both for each pixel individually and for all pixels of the sensor modules 115 that form the network as a whole) to determine the background temperature field. The thermopile sensor module 115 may obtain independent temperature measurements of the sensor node 102 itself; the temperature of the sensor module 115 may be obtained using an on-board thermocouple. The system may use the temperature of the sensor nodes 102 and/or the temperature of the sensor modules 115 to give an assessment of the temperature profile of the monitored space. The on-board thermocouple measurement itself measures the temperature of the space where the sensor is located. The system uses bilinear interpolation to estimate the temperature between the sensor nodes 102 and/or the sensor modules 115 in space to approximate the temperature distribution. Furthermore, in various embodiments, the system may measure and capture the temperature of the environment multiple times throughout the day in order to avoid the adverse effects of maintaining a fixed background temperature field for the threshold calculation, thereby increasing overall detection accuracy in real-world scenarios where the ambient temperature is dynamic.
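For reference, bilinear interpolation between four node readings at the corners of a grid cell reduces to two linear blends; the generic sketch below is illustrative rather than the system's actual code.

    def bilinear(t00, t10, t01, t11, fx, fy):
        """Estimate the temperature at fractional position (fx, fy) in [0, 1]
        within a cell whose corners carry the four node readings."""
        top = t00 * (1 - fx) + t10 * fx      # blend along the top edge
        bottom = t01 * (1 - fx) + t11 * fx   # blend along the bottom edge
        return top * (1 - fy) + bottom * fy  # blend between the two edges

    # Four node readings (deg C) at the cell corners; estimate the midpoint.
    print(bilinear(20.0, 22.0, 21.0, 23.0, 0.5, 0.5))  # -> 21.5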
In various embodiments, the plurality of sensor nodes 102 may provide information in real time to aid in generating real-time location, trajectory, and/or behavioral analysis of human activity. By employing multiple sensor nodes 102, and depending on the density of the network, the system can infer the trajectory of any moving object detected by the sensor nodes 102. As mentioned above, the thermopile sensor module 115 inside the sensor node is designed to measure temperature from a distance by detecting infrared (IR) energy of the object; the higher the temperature, the more IR energy is emitted. The thermopile sensor module 115, which consists of small thermocouples on a silicon chip, absorbs this energy and generates an output signal. The output signal is a small voltage proportional to the surface temperature of the IR-emitting object in front of the sensor. Each thermopile sensor module 115 has 64 thermopiles, each of which is sensitive to IR energy emitted by the subject. To determine the trajectory, in various embodiments, each sensor module 115 divides the area it captures into pixels organized in a rectangular grid aligned with the 64 thermopiles, each thermopile being associated with one cell of the 8 x 8 grid. The system monitors the sequence of temperature changes across successive pixels, determines that such a sequence of changes indicates motion of a living being, and records this movement as the formation of a track in space. The more nodes in the network, the more accurate the inference made about the trajectory, since the detected trajectory is not interrupted by "blind spots".
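A toy version of this track formation is sketched below. It assumes a known background field and an illustrative 2 degrees Celsius motion threshold, and simply follows the centroid of the warm region from frame to frame.

    import numpy as np

    def track_centroid(frames, background, threshold=2.0):
        """Per-frame centroid of pixels warmer than background by `threshold`;
        the sequence of centroids is the recorded track of the heat source."""
        track = []
        for frame in frames:
            rows, cols = np.nonzero(frame - background > threshold)
            if rows.size:
                track.append((float(rows.mean()), float(cols.mean())))
        return track

    bg = np.full((8, 8), 21.0)
    frames = []
    for col in (1, 3, 5):              # heat source drifting across the grid
        f = bg.copy()
        f[3:5, col:col + 2] += 12.0
        frames.append(f)
    print(track_centroid(frames, bg))  # -> [(3.5, 1.5), (3.5, 3.5), (3.5, 5.5)]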
The computing engine analyzes human behavior and trajectories. For example, with respect to occupancy control, the system may calculate the total number of people in a space for comparison with established occupancy requirements. The system identifies all heat sources in the space monitored by the sensor modules 115 and counts those heat sources generated by people.
With regard to occupant temperature screening, the system may detect the presence of a person by capturing the person's body heat. Once such a presence is detected, temperature screening may include automatic adjustment of the sensitivity of the sensor module 115. It should be noted that body temperature screening may differ from body position detection: temperature screening means detecting an elevated body temperature in the person being screened, so the sensitivity requirements are higher than for body position detection alone.
With respect to monitoring an occupant's body temperature, by using a more detailed 32 x 32 grid in the sensor module 115 to read the temperature of the area near the eye sockets, the system may be able to obtain the body temperature of a user in the near field, one (1) meter from the sensor node 102. In order for the system to locate the eye sockets, the user may be required to gaze directly at the sensor node 102, allowing the sensor module 115 to detect the highest-temperature pixels.
With respect to analyzing the speed of occupant movement, the system may record the movement of people under the network of sensor nodes 102, generating a series of "waypoints" as a function of time. The system uses the distance traveled, based on the waypoint information, and the time taken to travel that distance to calculate the movement speed of the user in question.
Regarding calculation of total calories burned based on the occupant's movement, the user inputs information such as the occupant's weight, sex, and age into the system through the interface 370. The system may use the captured movement speed and distance (as mentioned above) to calculate a rough estimate of the calories burned during the captured movement.
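One plausible way to compute both quantities from timestamped waypoints is sketched below. The MET values in the calorie estimate are assumptions chosen for illustration; the patent does not specify a formula.

    import math

    def speed_mps(waypoints):
        """Average speed from timestamped waypoints (t_seconds, x_m, y_m)."""
        dist = sum(math.hypot(x2 - x1, y2 - y1)
                   for (_, x1, y1), (_, x2, y2) in zip(waypoints, waypoints[1:]))
        elapsed = waypoints[-1][0] - waypoints[0][0]
        return dist / elapsed if elapsed > 0 else 0.0

    def rough_calories(weight_kg, speed, seconds):
        """Very rough kcal estimate via an assumed MET value for walking."""
        met = 2.0 if speed < 0.9 else 3.5  # slow vs. brisk walking, assumed values
        return met * weight_kg * (seconds / 3600.0)

    wps = [(0.0, 0.0, 0.0), (10.0, 8.0, 6.0), (20.0, 16.0, 12.0)]
    v = speed_mps(wps)                       # 20 m in 20 s -> 1.0 m/s
    print(v, rough_calories(70.0, v, 20.0))  # roughly 1.36 kcal over 20 s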
Behavioral analysis stems from the fact that, by overlaying the structured network on the actual space, the captured data becomes contextualized. For example, the system may learn about the shopping behavior of a moving body by cross-referencing the actual trajectories and dwell times captured by the sensor nodes 102 with a building plan carrying information about the location of specific products and aisles, as set forth in FIGS. 5 and 6. In particular, in various embodiments, the user interface 370 allows a user to create a three-dimensional representation of the space in question, as shown in FIGS. 5, 6, and 9. For example, the owner of a grocery store may record information about the location of products, such as an ice cream freezer, by naming or "tagging" each sensor node 102 with the particular products located within that sensor node 102's field of view. As such, if sensor node #1 detects IR energy for 30 seconds, the system determines that a person remained within the field of sensor node #1 for 30 seconds. If sensor node #1 is tagged as being in front of the ice cream freezer, the system will report that the person was in front of the ice cream freezer for 30 seconds. Examples of system outputs are shown in FIGS. 4A and 4B.
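A small sketch of how tagged sensor nodes could turn raw detections into per-product dwell times follows; the tag names and event format are hypothetical.

    from collections import defaultdict

    SENSOR_TAGS = {1: "ice cream freezer", 2: "aisle 4"}  # user-assigned labels

    def dwell_by_tag(events):
        """Sum detection durations per tag; events are (sensor_id, seconds) pairs."""
        totals = defaultdict(float)
        for sensor_id, seconds in events:
            totals[SENSOR_TAGS.get(sensor_id, f"sensor {sensor_id}")] += seconds
        return dict(totals)

    print(dwell_by_tag([(1, 30.0), (2, 12.0), (1, 5.0)]))
    # -> {'ice cream freezer': 35.0, 'aisle 4': 12.0}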
In various embodiments, and as shown in FIGS. 1A and 1B, each of the plurality of sensor nodes 102 may interface with a module. The module may include, for example, an HTPA32D module or an HTPA16D module, and may be a wireless radio module or another hardware module.
The sensor node 102 may include a switch 120 (e.g., an ALPS switch) that controls the power supply to the sensor node 102. The switch 120 may allow the manufacturer of the system to shut off power to the sensor node 102 to conserve the module's battery 130 during transfer or shipment from the manufacturer to the customer. After the sensor node 102 is delivered to the customer, the system may be installed with the sensor node 102's switch 120 turned on and kept on. If the customer closes the store or shuts down the system for a period of time, the customer may use the switch 120 to turn off the sensor node 102 to conserve battery life. The LED 125 on the sensor node 102 indicates a system state such as, for example, on mode and off mode.
A general data flow according to various embodiments is illustrated in FIG. 2. The sensor node 102 may receive raw thermal data from the environment (step 205). The raw thermal data is compressed to create compressed thermal data (step 210). The gateway 135 receives the compressed thermal data from the sensor node 102 and decompresses it to create decompressed thermal data (step 215). The cloud computing module 350 on the server receives the decompressed thermal data and creates detection data (step 220). A post-processing computing module on the server receives the detection data from the cloud computing module 350 and processes it to create post-processed detection data (step 225). The post-processing computing module sends the post-processed detection data to the database 360, which uses it to create time-series detection result data (step 230). The system applies a background analysis algorithm to the time-series detection result data to create analysis results (step 235). The system exposes the analysis results through the API service 380 and a client app to provide 3D/2D visualization of the data (step 240).
A general system architecture, including more detail about data flows, is set forth in FIG. 3 according to various embodiments. In hardware, the sensor node 102 obtains raw data and then performs edge compression (step 305) and/or edge computation (step 310) to create Message Queuing Telemetry Transport (MQTT) raw data. The cloud computing module 350 on the server receives MQTT raw data topic 1 from the sensor module 115 via the gateway 135. The cloud computing module 350 applies data stitching and decompression (step 315) to MQTT raw data topic 1 to create MQTT raw data topic 2, applies a core algorithm to MQTT raw data topic 2 (step 320) to create MQTT result topic 1, and applies world-coordinate remapping to MQTT result topic 1 (step 325) to create MQTT result topic 2. The cloud computing module 350 sends MQTT raw data topic 1, MQTT raw data topic 2, MQTT result topic 1, and MQTT result topic 2 to the database 360 (e.g., InfluxDB), which receives the data. The database 360 applies a background analysis to the data via a background analysis algorithm (step 330) to create background results (analysis results). The background results are stored in DynamoDB, which also stores the sensor node 102 profiles. The database 360 may apply additional background analysis in response to updates or additional settings in the sensor module 115 profile. The API 380 obtains the background results from the database 360 and applies real-time raw data, real-time detection, historical raw data, historical detection, historical occupancy, historical traffic, and/or historical duration views to the data (step 335). The API sends the results to the user interface 370 (e.g., on a client device). The user interface 370 provides visualization (step 340) (e.g., FIGS. 4A and 4B), a setup interface (step 345) that can update the sensor module 115 profile, and a login function (step 350), which may include Auth0/Firebase.
More particularly, the sensor module 115 may collect sensor module 115 data, pre-process the data, and/or send the collected data to the gateway 135. The module may include an on-board microprocessor 140. Raw data from the sensor module 115 may be saved in the RAM of the microprocessor 140, which serves as temporary storage for the system. The microprocessor 140 is configured to preprocess the raw data by eliminating outliers in the raw data.
Specifically, the microprocessor 140 applies a defined statistical procedure to the raw data to obtain processed data. In various embodiments, the module uses firmware for preprocessing. The firmware statistically determines outliers among the temperature readings. For example, outliers may be identified by normalizing the data: the frame average is subtracted from each pixel value, and the result is divided by the standard deviation of the frame. Pixel values above three times the standard deviation, or below one third of the standard deviation, are removed and replaced using a bilinear interpolation technique, i.e., interpolation with neighboring values. The pixel values are replaced rather than simply removed so that the input to detection is similar before and after the procedure. This technique helps fix small data problems that may be caused by imperfections in the sensor module 115's data quality. The combination of firmware, circuit design, and drivers enables the system to run algorithms that determine a "region of interest" on each data frame representing human activity in the field of view of the sensor module 115. A region of interest is not a set of pixels with a specific temperature, but a set of pixels whose temperature differs (higher, or sometimes lower) from the surrounding pixels. The region of interest is then used to compress the processed data and prepare the compressed data for wireless transmission.
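A sketch of this outlier repair on a single frame follows. Because the thresholds above are ambiguous in translation, the example uses a symmetric three-sigma cut, and it replaces each flagged pixel with the mean of its neighbors as a simple stand-in for bilinear interpolation.

    import numpy as np

    def clean_frame(frame):
        """Replace statistical outliers with the average of their neighbors."""
        z = (frame - frame.mean()) / frame.std()  # z-score each pixel
        out = frame.copy()
        for r, c in zip(*np.nonzero(np.abs(z) > 3.0)):
            block = frame[max(r - 1, 0):r + 2, max(c - 1, 0):c + 2]
            out[r, c] = (block.sum() - frame[r, c]) / (block.size - 1)
        return out

    f = np.full((8, 8), 21.0) + np.random.normal(0, 0.1, (8, 8))
    f[4, 4] = 80.0               # a corrupted pixel
    print(clean_frame(f)[4, 4])  # pulled back to roughly 21 deg C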
The system may include a rolling cache of ten data frames for preprocessing. More specifically, the firmware of the microprocessor 140 may use the last ten frames of captured data for pre- and post-processing. Because of the limited amount of on-board RAM (e.g., 8 KB needed by applications), the system may only process a subset of the data.
The data passing through the gateway 135 may be uploaded to the cloud computing module 350 on the server. The gateway 135 may be powered by any source of electrical power; in various embodiments, the gateway 135 is powered by a 110 V outlet. The gateway 135 includes modules that connect to a network (e.g., the Internet) via Ethernet, WiFi, and/or cellular connections. As such, the gateway 135 may upload data to any database 360, server, and/or cloud. The gateway 135 sends the preprocessed and compressed data to a computing engine in the cloud computing module 350, which in turn outputs results to the database 360. The gateway 135 pulls operational commands from the server to perform management functions such as software updates, commanding modules to turn the sensor module 115 on and off, changing the sampling frequency, and the like.
In various embodiments, the gateway 135 captures the compressed raw data in transit, sends the compressed raw data to an algorithm running on a processor (e.g., a Raspberry Pi 4, model BCM2711), and then forwards the information to a server (e.g., cloud computing) for further processing. Processing of the data on the server includes decoding the compressed raw data, normalizing the temperature data of the sensor modules 115 according to the firmware and environmental settings of each sensor module 115, detecting objects, classifying objects, spatial transformation for world-coordinate positioning and fusion, multi-sensor-module 115 data fusion, object tracking and trajectory generation, cleaning of outlier pixel-level readings, and other post-processing.
In various embodiments, the processing steps work with decompressed raw data. Transmitting compressed raw data and decoding it on the server optimizes data transmission and the consumption level of the battery 130. Furthermore, normalizing the temperature of each sensor node 102 to the appropriate temperature range allows the processing steps to accommodate the various quality and environmental differences that are expected from sensor nodes 102 located at different spots in the space.
One of the core processing steps of the computing engine of the cloud computing module 350 is object detection and classification. This step detects the position of an object of interest in the frame and classifies the object as a person or as a different class of object (e.g., laptop, coffee cup, etc.). The spatial transformation from the local coordinate system to the world coordinate system makes the analysis "context aware": with the spatial transformation, the system can compare and cross-reference the coverage of the sensor modules 115 with the actual floor plan and 3D model of the space in question. Multi-sensor-module 115 data fusion combines the data in the event of missing information or overlapping coverage between multiple sensor modules 115. As mentioned above, using various algorithms, object tracking and trajectory generation can distinguish multiple people from one another over time, providing a set of trajectories that originate from detected objects and people. The system uses such trajectories to determine behavioral analytics (e.g., location and duration of stay), movement speed, and direction. The post-processing step resolves any minor inconsistencies in the detection and tracking algorithms. For example, when there are missed detections or gaps in a track, the post-processing step helps splice the information together and repair any damaged tracks.
The system may use a thermal application programming interface (API), which may be located in an API layer in the system architecture, as set forth in FIG. 3. The API hosts real-time and historical people-count data for spaces equipped with the sensor module 115 solution. Built on REST, the API returns JSON responses and supports cross-origin resource sharing. The solution employs standard HTTP verbs to perform CRUD operations, and the API returns standard HTTP response codes for error indication. In addition, namespaces are used to implement API versioning, and each API request is authenticated using token authentication. The API token from the dashboard is used to authenticate all API endpoints.
The token may be included in the Authorization HTTP header, prefixed with the word "Token" and separated from it by a single space (i.e., Authorization: Token YOUR_API_TOKEN). If a correct Authorization header is not included in an API call, a 403 error is generated. The endpoints use standard HTTP error codes, and the response includes any additional information about the error.
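An illustrative authenticated request is shown below. The base URL and endpoint path are hypothetical; only the header format follows the description above.

    import requests

    API_TOKEN = "YOUR_API_TOKEN"             # issued on the dashboard
    BASE_URL = "https://api.example.com/v1"  # hypothetical endpoint root

    # The Authorization header is the word "Token", one space, then the token.
    headers = {"Authorization": f"Token {API_TOKEN}"}

    resp = requests.get(f"{BASE_URL}/spaces", headers=headers)
    if resp.status_code == 403:
        print("Missing or malformed Authorization header:", resp.text)
    else:
        resp.raise_for_status()
        print(resp.json())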
The API lists low-level "sensor module 115 events" for a given sensor module 115 and time period. A timestamp and a trajectory relative to the sensor module 115 in question are included in each sensor module 115 event. The trajectory is not necessarily tied to any direction relative to the space (e.g., entrance or exit). This call should only be used to test the performance of the sensor module 115.
The API provides information about the total number of daily entries into a particular space over the duration of a week. The analysis objects that accompany each interval's data and entry count are nested in the interval object of each result. This call may be used to learn how many people visit the space on different days of the week.
The API records, counts, and lists all individuals exiting the space of interest over an entire day (or any 24-hour period). Each result is timestamped and carries a direction (e.g., -1 for exit). This call is used to ascertain when people leave the space.
The API provides information about the current and historical wait times at the entrance of a particular space at any given time during the day. The analysis objects that accompany each interval's data and the estimated total wait duration are nested in the interval object of each result. This call is used to find out how many people are waiting in line to enter the space over different time spans.
Webhook subscriptions allow callbacks to specified endpoints on a receiving server. For each space in which an event occurs, a webhook may be triggered after each event is received from one of the sensor modules 115. The system may create, retrieve, update, or delete webhooks. When a webhook is received, the JSON data will be similar to the space and sensor module 115 events in the previous section, with additional information about the current count of the space and the ID of the space itself. The direction field will be 1, representing an entrance, or -1, representing an exit. If any additional headers are configured for the webhook, they will be included in the POST request. An example of received webhook data may be a single event that occurs at a path connecting two spaces.
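A minimal receiving endpoint for such a webhook might look as follows; the JSON field names are assumptions based on the description above.

    from flask import Flask, request

    app = Flask(__name__)
    occupancy = {}  # space_id -> current count, mirrored from webhook payloads

    @app.route("/webhook", methods=["POST"])
    def receive_event():
        event = request.get_json()      # payload shape is illustrative
        space_id = event["space_id"]
        direction = event["direction"]  # 1 = entrance, -1 = exit
        # Prefer the count reported with the event; fall back to a local tally.
        occupancy[space_id] = event.get(
            "current_count", occupancy.get(space_id, 0) + direction)
        return "", 204

    # app.run(port=8000)  # point the webhook subscription at this endpoint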
In various embodiments, the system may include one or more tools to help facilitate installation, help set up software and hardware, provide more accurate detection, create virtual representations, visualize human movements, test devices, and/or troubleshoot devices. The system may include any type of software and/or hardware, such as, for example, one or more applications, GUIs, dashboards, APIs, platforms, tools, web-based tools, and/or algorithms. The system may be in the form of downloadable software; for example, the software may be an application downloaded from a website for use on a desktop or laptop computer. The software may also include a web application accessed via a browser. Such web applications may be device-independent and adaptive, so that the web application is accessible on a desktop, laptop, or mobile device. The system may be obtained through licenses or subscriptions, and one or more login credentials may be used to access the system in part or in whole.
As used herein, a space may include an overall layout of an area that may be composed of one or more rooms. System functions may apply to different spaces independently, and the system may associate multiple rooms within a space. A room may include any walled portion of a space (e.g., a conference room) or an open area within a space (e.g., a hot-desk area or hallway). The head count may include the number of people entering and leaving a space or room in a given time frame. Occupancy may include the number of people in a room or space at a particular time. A fixture may include furniture (e.g., a chair, a table, etc.) or equipment (e.g., a washing machine, a stove, etc.).
Generally, in various embodiments, the system may help plan an installation by, for example, visualizing sensor locations, visualizing coverage in the space using a 3D drag-and-drop interface, and/or determining the number of sensors and/or hives (boxes) appropriate for a room or space. The system may enable more accurate detection by, for example, analyzing the spatial context to distinguish between humans and inanimate objects (and other confounding factors). The system may obtain spatial context by receiving input regarding the layout of a space, such as 3D furniture, rooms, and tags/labels. Analysis of the spatial context may involve artificial intelligence and/or machine learning: AI is used to learn about the space using the spatial context so that it can accurately recognize the presence, behavior, poses, and other specific activities of humans. The system may create a virtual representation of the real location (e.g., across any dashboard, setup application that uses the algorithms and APIs to visualize spatial data, and any other application). The virtual representation may receive labels, tags, and/or names for the sensors, rooms, and spaces based on the tools, and these may be shown as unique identifiers in the dashboard and/or API.
In various embodiments, the system may visualize human movements by, for example, displaying the current frame and previous frames (e.g., in the form of images such as dots), displaying and listing position coordinates within the context of the sensor, the virtual spatial layout, and the virtual fixtures, displaying a person's trajectory, and/or displaying a person's pose (e.g., standing, sitting, lying). The system may test and troubleshoot devices by, for example, showing the user what a sensor is detecting; the user may then confirm that the type and location of the actual object correspond to the visual representation. The system may display what a sensor is detecting in real-time and/or frame-by-frame representations of human presence and movement within the space, and may also show when a sensor is online, offline, connected, and/or disconnected.
In various embodiments, the system may include functionality to create a spatial layout. Creating a spatial layout may involve adding a space and adding a space name; the system then stores the space with its name. The system provides the user with the ability to create and manage multiple spaces. Having several separate spaces may be useful when monitoring multiple floors in a building (e.g., first floor, second floor), facilities having multiple separate rooms (e.g., senior apartments), or multiple facilities at different physical locations (e.g., a Boston lab, a San Francisco lab). In various embodiments, the system may provide the ability to rename a space, add automatic alignment, add visual smoothing, add display of local detections, add a toolbar (e.g., main toolbar, side toolbar, etc.), add a fixture from a library (e.g., a piece of furniture from a furniture library), or return to a project library as part of the setup. The main toolbar may include functionality related to rooms, sensors, hives, languages, and/or saving. Exemplary sidebar functionality may include 2D or 3D view, displaying or hiding sensors, displaying or hiding rooms, displaying or hiding fixtures, and the like. A function may be described as being located on the main toolbar or the side toolbar, but any functionality may be associated with any toolbar.
In various embodiments, the system may include functionality for adding rooms to match or resemble a floor plan. The system may display a control panel (e.g., in response to selecting a room). The control panel may allow the user to change dimensions, mark certain locations or features, and/or select a border color for each room. In response to selection of a room icon, the system may add one or more rooms to the space. When automatic alignment is activated, a moved object (e.g., a fixture, sensor, or room) may automatically snap to the edges of nearby like objects. For example, a chair may snap alongside another object, allowing the user to quickly and neatly align objects to match the floor plan. The system may determine that the moved object is of the same kind as an existing object based on similar identifiers or tags associated with each object. When automatic alignment is disabled, the user may freely move objects in small increments without snapping them to similar objects.
In various embodiments, the system may include functionality for adding fixtures to the virtual space in the GUI (corresponding to fixtures present in the physical space). A fixture may include, for example, furniture or equipment that a user may add to the space. Fixtures may help the user distinguish between rooms and interpret the movement seen on screen in context. The system may allow a user to add a fixture by selecting any of the furniture or equipment icons and then dragging and dropping the fixture to a specific location in a room. The system also provides functionality for virtually selecting a fixture and then deleting or rotating it using the panel controls, as well as for virtually adjusting the fixture's size or position. In various embodiments, a user may enter specific coordinates for a piece of furniture so that the system knows its size and location. In addition, when the user moves a fixture, the system may display the distance between the fixture's center point and each of the four walls of the room in which it is placed. The coordinates of the furniture relative to the whole space may be stored in the system and exposed through the API, so the user can extract this information from the backend as desired. The user may add any number of fixtures to a space or room and may stack fixtures or furniture on top of other fixtures or furniture. For example, if a physical table is particularly large, the user may use multiple virtual tables to match its size. The system encodes, identifies, stores, and considers the presence and coordinates of each fixture. The system uses this data when determining whether a detection is a human and whether the detection should count toward occupancy (the number of people in the room or space). The system includes functionality (e.g., using APIs and algorithms) to encode each fixture (e.g., table, door, etc.) with a fixture type. The user may select a fixture and a fixture type from the icons. The API may save the fixture and its coordinates so that the algorithm can use this information to interpret detections and behavior. In response to the user's placement, the system records the fixture's center-point x-y coordinates, the fixture type, and the rotation about the center point in degrees (e.g., 0, 90, 180, or 270). Rotation may be performed by selecting a circular arrow to rotate an object (room, sensor, or fixture) 0, 90, 180, or 270 degrees about its center point. Fixtures in the system may be created with a default orientation, which may differ from the orientation of the real furniture in the room; the user may rotate the virtual furniture model into four different orientations (i.e., 0, 90, 180, or 270 degrees from its default orientation) about its center point. A minimal sketch of such a fixture record appears below.
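The following is a minimal Python sketch of how a fixture record of this kind might be encoded. The class names, field names, and fixture-type values are illustrative assumptions, not the actual API of the disclosed system; only the recorded attributes (type, center-point coordinates, 90-degree rotation) come from the description above.

```python
from dataclasses import dataclass
from enum import Enum


class FixtureType(Enum):
    # Fixture types the system can encode; the set of values is illustrative.
    BED = "bed"
    TABLE = "table"
    DOOR = "door"
    SOFA = "sofa"


@dataclass
class Fixture:
    fixture_type: FixtureType
    center_x: float        # x coordinate of the fixture center point
    center_y: float        # y coordinate of the fixture center point
    rotation_deg: int = 0  # rotation about the center point: 0, 90, 180, or 270

    def rotate(self) -> None:
        """Rotate the fixture 90 degrees about its center point."""
        self.rotation_deg = (self.rotation_deg + 90) % 360


# Example: a table placed at (2.0, 3.5) and rotated once from its default direction.
table = Fixture(FixtureType.TABLE, center_x=2.0, center_y=3.5)
table.rotate()
print(table.rotation_deg)  # 90
```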
The system may display when a person is located on, around, or passing by a piece of furniture. The system stores names or icons associated with different fixtures, and these names or icons carry factors or rules that the system considers when analyzing the fixture. For example, a bed icon may include a rule that a human may be located on a bed, while a table icon may include a rule that a human is unlikely to be located on a table. Examples of fixtures carrying a rule that a human will not be on them include tables, counters, ranges, refrigerators, dishwashers, sinks, radiators, and/or washing machines. Examples of fixtures carrying a rule that a human may be located on or in them include beds, sofas, chairs, toilets, and/or shower stalls. The system may infer a person's activity based on the person remaining near a fixture for a certain amount of time. For example, if the system detects that a person is near a television for an hour, the system may infer that the person likes the program being played. This location and inference information also provides valuable context for the algorithm, allowing the system to infer daily activities and enabling more accurate human detection. For example, regarding daily activities, if a detected object (e.g., represented by a purple sphere) is located on or within the outline of a bed fixture, the system may infer that the human is sleeping, and may then estimate the person's sleep time by determining how long the person remains on the bed fixture. As another example, regarding more accurate human detection, the system may detect both human heat sources and non-human heat sources (such as stoves and laptops). If a detected heat source is found in the middle of a fixture such as a table, the system may recognize that it is not a human and may exclude the detection from occupancy data (e.g., occupancy data that the user receives via an API or dashboard). The system may include functionality for activating or deactivating a "show local detections" option to view or hide detections (e.g., the purple spheres).
In various embodiments, the system may include a "visual smoothing" function. Disabling visual smoothing causes the detected sphere to be displayed frame by frame at the exact coordinates at which it was detected. Activating visual smoothing causes the frame-by-frame movement of the purple sphere to be displayed as a smooth, continuous animation. More particularly, when a sensor detects an object in physical space, the system may create a "proxy" (e.g., a purple sphere) that appears at the corresponding location in virtual space. The system may also assign the proxy a "lifecycle": the length of time that the purple sphere remains visible in virtual space to display the detection. In various embodiments, this lifecycle may be set to 300 milliseconds, which matches (or is similar to) the interval at which the sensor sends a new detection. The system uses each new detection of a new position in the physical world to update the position of the purple sphere in the virtual space. As mentioned, the system may display the purple sphere frame by frame or with visual smoothing. The purple sphere may appear to blink, but it is actually displaying real-time detections captured by the sensor every 300 milliseconds; the blinking effect arises because the purple sphere fades from opaque to transparent over its 300-millisecond lifecycle. The more opaque the sphere, the newer the detection. If a person stands still under the actual sensor, the purple sphere appears to blink in place. If a person moves under the actual sensor and visual smoothing is not activated, the system may display a string of purple spheres, each fading from opaque to transparent at its detection coordinate (or pixel) every 300 milliseconds. With visual smoothing activated, only one purple sphere appears on the screen and appears to move linearly: during the 300-millisecond lifecycle, the system searches for the next detection within a 1-foot radius and "refills" the proxy's lifecycle for another 300 milliseconds. This means the sphere is continuously present, moving from one point to another according to the detection coordinates (pixels).
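A minimal Python sketch of the proxy lifecycle logic described above, assuming the 300-millisecond lifecycle and 1-foot refill radius stated in the text; the class and method names are hypothetical.

```python
import math
import time

LIFECYCLE_MS = 300      # how long a proxy stays visible after a detection
REFILL_RADIUS_FT = 1.0  # smoothing: a new detection within this radius renews the proxy


class Proxy:
    """A visual stand-in (e.g., the purple sphere) for one sensor detection."""

    def __init__(self, x: float, y: float):
        self.x, self.y = x, y
        self.expires_at = time.monotonic() + LIFECYCLE_MS / 1000.0

    def opacity(self) -> float:
        # Fades from opaque (1.0) to transparent (0.0) over the lifecycle,
        # which produces the blinking effect when smoothing is off.
        remaining = self.expires_at - time.monotonic()
        return max(0.0, remaining / (LIFECYCLE_MS / 1000.0))

    def try_refill(self, x: float, y: float) -> bool:
        """With visual smoothing on, a nearby new detection renews this proxy
        instead of spawning a new sphere, so the sphere appears to glide."""
        if math.hypot(x - self.x, y - self.y) <= REFILL_RADIUS_FT:
            self.x, self.y = x, y
            self.expires_at = time.monotonic() + LIFECYCLE_MS / 1000.0
            return True
        return False
```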
In various embodiments, the system may include functionality for adding one or more boxes and/or thermal sensors. Data from outside a room may still be stored via the API, but such data may not be displayed on the dashboard; the dashboard displays activity and occupancy specific to rooms within the space. The system therefore instructs the user to place sensors and objects within rooms. A box may provide gateway functionality, connecting the thermal sensors and transferring data from them to a storage location (e.g., the cloud). For example, a box may connect to a limited number of sensors (e.g., up to 12), after which additional sensors require an additional box. For ease of installation, each box may be preconfigured with a set of sensors (e.g., sensor IDs are loaded as sensor data into the box's database). The preconfiguration procedure may comprise two parts. First, each sensor may be configured with the same NetID as the box, enabling the sensors and the box to connect to each other. Second, the box may have the sensor MAC addresses and sensor modes programmed into a configuration file, allowing the box to manage sensor frame rates correctly. The system may allow a user to add one or more boxes to a space, for example by scanning a code (e.g., a QR code) or by entering a box ID (e.g., the box ID found on a sticker underneath the box). Because the box data has been preconfigured to include the sensor data, the system also receives the preconfigured sensor data when the box is added. The system uses the sensor data to display a "sensor" icon for each box. Sensors from different boxes may be color coded to show that they belong to a particular group and box; the particular implementation may vary, but group membership is typically visually identifiable. In response to receiving a selection of a box's sensor icon, all sensors associated with that box are displayed so that each sensor can be added to the space. The user may drag and drop each virtual sensor somewhere in a room to record room occupancy, or onto a doorway to record the number of people entering and exiting. Each sensor may be unique and identified by a unique address (e.g., a MAC address). As such, the user should place the virtual sensor in the same location, orientation, and room as the corresponding physical sensor.
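A sketch of what a box (hive) preconfiguration record might look like, assuming the elements named above (NetID shared with the sensors, sensor MAC addresses and modes, a 12-sensor limit). The file format, key names, and values are all illustrative assumptions; the disclosure does not specify an on-device format.

```python
# Hypothetical preconfiguration record for one box and its sensors.
box_config = {
    "box_id": "HIVE-000123",   # e.g., printed on a sticker underneath the box
    "net_id": "0xA1B2",        # sensors must share this NetID to connect to the box
    "sensors": [
        {"mac": "00:1A:2B:3C:4D:5E", "mode": "room",    "frame_rate_hz": 5},
        {"mac": "00:1A:2B:3C:4D:5F", "mode": "doorway", "frame_rate_hz": 5},
        # ...up to 12 sensors per box; additional sensors require another box
    ],
}


def sensors_for_box(config: dict) -> list[str]:
    """Return the sensor IDs preloaded into the box's database."""
    return [sensor["mac"] for sensor in config["sensors"]]


print(sensors_for_box(box_config))
```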
In various embodiments, the system may include functionality to set up and/or calibrate the virtual thermal sensors. Each virtual thermal sensor may appear (e.g., as a square) on the display. When the virtual sensor is turned off, it may appear in some manner (e.g., as a black square) to indicate that the actual sensor is not detecting anything. When the virtual sensor is turned on, it is displayed as a grid (e.g., an 8 x 8 grid of 64 squares). Each of the 64 squares may represent a pixel, and the color of each pixel may represent the temperature detected by the actual sensor at that point in the grid. The color of each pixel may span a range of hues to indicate temperature level; for example, the range may run from yellow (lower temperature) to red (higher temperature).
In response to receiving a selection of a virtual sensor, the system displays a control panel (e.g., on the left side of the display). Using the control panel, the system may allow a user to set the virtual sensor height. Height matters because it determines what falls within the field of view: for example, it may be important that a grocery store aisle be within the field of view while a janitor's closet remains outside it. The optimal (and preferred maximum) height of the actual sensor is about 3.2 meters. Such a height may provide the maximum coverage and the optimal resolution required for human detection, where optimal resolution refers to the resolution at which the algorithm can accurately and reliably detect human presence from the sensor's heat map images. The height of the virtual sensor corresponds to the height of the ceiling, or the height on the wall, at which the actual sensor is to be mounted. The higher the sensor is above the floor, the wider its coverage of the floor; the lower the sensor, the narrower its coverage. Based on the height, the system can determine how much floor space the sensor is monitoring (covering).
The system may use the formula: coverage width ≈ 90% × 2 × tan(60°/2) × height, i.e., 90% of the floor width subtended by a 60-degree field of view. For example, a ceiling height of 3 meters may give a coverage of about 3.03 m by 3.03 m on the floor. A standard supermarket ceiling height is 5.78 m, a standard office ceiling height is 3.12 m, and a standard door height is 2.43 m. The system also allows the user to test different heights to confirm that certain areas are within (or outside of) the sensor's field of view. A sensor height of 110 inches (2.8 m) may provide an effective coverage width of 106 inches (2.7 m); a height of 102 inches (2.6 m), a width of 78 inches (2.0 m); a height of 95 inches (2.4 m), a width of 63 inches (1.6 m); and a height of 87 inches (2.2 m), a width of 56 inches (1.4 m).
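A short Python reading of the coverage formula, assuming a 60-degree field of view (30-degree half-angle) and the 90% effective-coverage factor stated above; the function name and defaults are illustrative. Note that the measured widths listed above fall somewhat below this geometric estimate at lower mounting heights.

```python
import math


def coverage_width(sensor_height_m: float,
                   half_angle_deg: float = 30.0,
                   effective_fraction: float = 0.9) -> float:
    """Approximate width of the square floor area covered by a ceiling sensor.

    coverage ~= 90% x 2 x tan(60 deg / 2) x height
    """
    return (effective_fraction * 2.0 * sensor_height_m
            * math.tan(math.radians(half_angle_deg)))


print(f"{coverage_width(3.0):.2f} m")  # roughly 3 m of coverage at a 3 m ceiling
```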
In various embodiments, the system may also allow a user to set the virtual sensor orientation to match the way the actual sensor is physically installed, ensuring an accurate correspondence between the virtual world (e.g., in the setup application) and the physical world. The sensor orientations in the real world and the virtual world must match so that the visualized detections also match: for example, a person standing in the northeast corner of a room must appear in the northeast pixels of the corresponding virtual sensor in the setup application. To help match the orientation, the physical sensor may include an arrow (e.g., on its mounting plate). When a user adds a physical sensor to the setup application as a virtual sensor, the user rotates the virtual sensor so that its arrow points in the same direction as the physical sensor's arrow.
In various embodiments, the system may include functionality to view detections. "Detection" refers primarily to detecting the presence of a person. The actual sensor may capture a heat map of an area at, for example, 3 to 5 frames per second. The system may detect human presence by identifying areas of the heat map consistent with human body temperature (a human "thermal signature"). The system may represent a detected person on the display (e.g., as a purple sphere). The average normal body temperature of a human is generally accepted as 98.6°F (37°C), although normal temperatures span a range from about 97°F (36.1°C) to 99°F (37.2°C). A temperature above 100.4°F (38°C) may indicate a fever due to infection or illness.
In various embodiments, the detection process may include sensitivity adjustment. The system may receive data regarding the temperature of a space or room from a thermometer located within the physical space or room. Sensitivity adjustment may involve increasing the system's ability to detect human presence in an environment whose temperature is close to that of the human body. Detection is improved by changing parameters related to the temperature difference between a (human-temperature) detection and the surrounding environment. Increasing sensitivity may involve shrinking the required temperature difference so that the system can still detect humans in, for example, very warm climates. In other words, in a cooler 65-degree environment, the system can safely treat any object 30 or more degrees above room temperature as a human; in a warm 96-degree environment, the system may instead need to treat objects only 2 to 3 degrees above room temperature as human.
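A minimal sketch of such a sensitivity schedule, using the two temperature points given above (a 30-degree margin in a 65-degree room, a 2-to-3-degree margin in a 96-degree room); the intermediate value and the function names are assumptions for illustration.

```python
def human_delta_threshold(ambient_temp_f: float) -> float:
    """Minimum temperature rise above ambient to treat a detection as human.

    In a cool room a human reads far above ambient, so a large margin is safe;
    in a very warm room the margin must shrink to 2-3 degrees, or humans
    would be filtered out along with the background.
    """
    if ambient_temp_f >= 90.0:
        return 2.5   # warm climate: skin temperature is barely above ambient
    if ambient_temp_f >= 75.0:
        return 10.0  # moderate climate: assumed intermediate margin
    return 30.0      # cool climate: 30+ degrees over ambient is likely human


def is_candidate_human(pixel_temp_f: float, ambient_temp_f: float) -> bool:
    return pixel_temp_f - ambient_temp_f >= human_delta_threshold(ambient_temp_f)


print(is_candidate_human(97.5, 65.0))  # True: well above a cool room
print(is_candidate_human(98.5, 96.0))  # True: 2.5 degrees over a warm room
```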
The detection information may be further processed by the system's algorithms to double-check that the data ultimately sent to the API and dashboard is accurate. Such further processing may include additional criteria to filter out detections that do not behave like humans. The system may filter out any object whose temperature is below the human body temperature range, e.g., 97°F (36.1°C) to 99°F (37.2°C). The filtering may also remove detections that never move at all (e.g., an appliance such as a stove) and stationary detections located on fixtures where humans are not expected to be (e.g., in the middle of a table). For example, the system may determine that a heat signature in the middle of a table more likely indicates a laptop. Further, the system may store coordinates around each fixture so that detections whose coordinates overlap a fixture's coordinates are not counted. In other words, the coordinates around the fixture are blacklisted, and no detection within those coordinates is counted.
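A minimal Python sketch of this filtering step, combining the temperature-range rule with the fixture-coordinate blacklist described above. The detection schema (a dict with "xy", "temp_f", and "moved" keys) and the rectangle representation are illustrative assumptions.

```python
HUMAN_TEMP_RANGE_F = (97.0, 99.0)  # typical human body temperature band


def in_rect(point: tuple, rect: tuple) -> bool:
    """True if an (x, y) detection falls inside a blacklisted rectangle."""
    (x, y), (x0, y0, x1, y1) = point, rect
    return x0 <= x <= x1 and y0 <= y <= y1


def filter_detections(detections: list, blacklisted_rects: list) -> list:
    """Drop detections that do not behave like humans."""
    kept = []
    for d in detections:
        if d["temp_f"] < HUMAN_TEMP_RANGE_F[0]:
            continue  # too cool to be a person
        if not d["moved"] and any(in_rect(d["xy"], r) for r in blacklisted_rects):
            continue  # stationary heat source on a fixture, e.g., a laptop on a table
        kept.append(d)
    return kept


table_rect = (1.0, 1.0, 2.5, 2.0)
detections = [
    {"xy": (1.5, 1.5), "temp_f": 98.0, "moved": False},  # filtered: on the table
    {"xy": (4.0, 3.0), "temp_f": 98.2, "moved": True},   # kept: moving human
]
print(len(filter_detections(detections, [table_rect])))  # 1
```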
In various embodiments, detection may include people counting. A sensor that counts people entering and exiting a room (a people-counting sensor) may be mounted above the room's entrance doorway (e.g., on the wall facing the interior of the room). The people-counting sensor may use a virtual threshold line (a "doorline"), which may be set at a distance from the door, and may count only detections of people crossing that line. For example, "In" is recorded when a person crosses the doorline from left to right, and "Out" is recorded when a person crosses it from right to left. The doorline reduces false readings from people who, for example, lean their head through the doorway to see what is inside the room but never fully enter it.
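A minimal sketch of the doorline counting rule: an In/Out event is registered only when a tracked position passes fully over the line, so a head poked through the doorway (which never reaches the line) is not counted. The function name and the left-to-right convention follow the example above; the tracking input is an assumed simplification.

```python
def count_crossings(xs: list[float], line_x: float) -> tuple[int, int]:
    """Count In/Out events from a sequence of x positions of one tracked person.

    Left-to-right across the doorline counts as In; right-to-left counts as Out.
    """
    ins = outs = 0
    for prev, curr in zip(xs, xs[1:]):
        if prev < line_x <= curr:
            ins += 1
        elif prev >= line_x > curr:
            outs += 1
    return ins, outs


# A person who walks through the doorway and past the line: one In, zero Out.
print(count_crossings([0.1, 0.4, 0.9, 1.3], line_x=1.0))  # (1, 0)
# A head poked in and withdrawn never reaches the line: nothing is counted.
print(count_crossings([0.1, 0.6, 0.2], line_x=1.0))       # (0, 0)
```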
In various embodiments, the system may include functionality for 2D or 3D view settings. In response to receiving a selection of the 2D/3D button, the system may display all (or any portion) of the space in 2D or 3D form. The 3D view may provide a more "realistic" spatial context for users unfamiliar with floor plans, and may offer a more intuitive way to understand the space, the detections, and the data. In various embodiments, the system may include functionality for displaying or hiding various features (such as, for example, sensors, rooms, or fixtures), as well as functionality for editing a space, managing a space, visually communicating data in different viewing modes, and so forth.
In various embodiments, the system may visually communicate data regarding people flow and/or dwell time over a period of time. People flow may be communicated as the number of people "entering" and "exiting" a given doorway (i.e., crossing a doorline) into or out of a room or space, or as the paths or trajectories of people moving through the space. The system may include a people-count view in which the in and out counts are displayed on the virtual layout. The system may also display a trajectory view of multiple detections over a period of time, forming a visible flow of detections that can be used to determine the path of people flow. The system may also display a linear movement trajectory, drawing a line through multiple detections to represent the path of people flow. Dwell time may be the amount of time a person spends within a set of coordinates in a room or space, and may be determined by measuring how long a person is detected in a given area. Even though the system cannot determine whether a detection is the "same" person detected previously, it can infer unique detections (of the same person) from location and trajectory. Dwell time may be displayed as a heat map, where darker colors indicate more time spent in a particular area and lighter colors indicate less.
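A small sketch of how dwell time could be accumulated per grid cell from a sequence of detections, for rendering as the heat map described above. The frame interval and cell size are assumed values; the detection rate of 3 to 5 frames per second comes from the text.

```python
from collections import defaultdict

FRAME_INTERVAL_S = 0.3  # assumed time between detections (about 3 fps)


def dwell_heatmap(track: list[tuple[float, float]],
                  cell_size: float = 0.5) -> dict:
    """Accumulate dwell time per grid cell from a list of (x, y) detections.

    Cells with larger accumulated times would render as darker heat map cells.
    """
    heat: dict = defaultdict(float)
    for x, y in track:
        cell = (int(x // cell_size), int(y // cell_size))
        heat[cell] += FRAME_INTERVAL_S
    return heat


# Three detections in the same half-meter cell: 0.9 seconds of dwell time.
print(dwell_heatmap([(1.1, 2.0), (1.2, 2.1), (1.15, 2.05)]))
```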
As discussed above, traditional computer vision uses high-resolution images to develop human posture detection algorithms. The high-resolution imagery may comprise, for example, recorded high-resolution video clips. Many existing algorithms detect keypoints (points on the human skeleton) to infer human posture from images, which requires a high-resolution camera system; detecting such keypoints with a reduced-resolution system may not be feasible. However, systems that can use reduced-resolution images may be important for preserving privacy.
In various embodiments, the system may provide posture detection functionality by running a posture detection algorithm on reduced-resolution images. For example, the system may show whether a person is standing, sitting, lying down, or has fallen. In general, the algorithm may include a sub-module for extracting a bounding box containing the human pixels and a sub-module for detecting the posture of the human inside the bounding box. These algorithms may be data-driven, deep-learning algorithms. The neural network may be a CNN (convolutional neural network), an artificial neural network designed to process pixel data for image recognition and processing.
In general, defining the boundaries of a bounding box is one of the behaviors learned by the neural network. The neural network may be trained on images annotated with bounding boxes, each of which may contain all pixels (or a subset of pixels) corresponding to a human in the image. The neural network may use this information to predict where the bounding box(es) should be on a new image. More specifically, in various embodiments, the system may receive an image of a human. The input image/frame may be low resolution, such as, for example, an 8 x 8, 32 x 32, or 64 x 64 pixel image. For each frame, a human annotator may first observe the posture of the human (e.g., a test subject) in the image, label the posture, and draw a bounding box around the human. The annotator can confirm that a human is present by consulting recorded high-resolution video/images; however, such high-resolution images may not themselves be used to train the algorithm.
In various embodiments, the user may label each posture with an integer. The user may input an integer for a particular posture on the screen; the GUI may include a text box with fields that accept integers for one or more postures, and the system may associate each integer with a posture in the database. For example, label 0 may represent a sitting posture, label 1 a standing posture, label 2 a lying posture, and so on. In various embodiments, the user may draw a bounding box around the human in the on-screen image (e.g., using any type of device that accepts input on a GUI) such that the bounding box is stored as (x, y) coordinates the system can recognize. The system may use such labels and manually annotated data (e.g., bounding boxes) to train the learning algorithm, for example with gradient-descent-based training. From this training, the learning algorithm may learn to automatically create a bounding box, collect data from within the bounding box, and determine the human posture from the collected data. The data used by the algorithm may be collected over time or during an initial calibration session while the system is running. The system may obtain environmental data from the particular environment in which the sensor is deployed and use it to adjust the algorithm to that environment. The environmental data may include parameters and/or variables such as, for example, ambient temperature, room temperature, floor plan, non-human thermal objects, the sex, age, clothing, and body weight of the humans present, and the installed sensor height. Pixel data, thermal data, and/or environmental data collected from the environment may be used to train the algorithm through machine learning or artificial intelligence.
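By way of illustration, one annotated training frame might be represented as below. The integer-to-posture mapping follows the example in the text; the file name, corner-based bounding-box layout, and dictionary keys are assumptions.

```python
# Integer labels for postures, per the example above (assignments may differ).
POSE_LABELS = {0: "sitting", 1: "standing", 2: "lying"}

# One manually annotated training frame: a low-resolution thermal image,
# the annotator's posture label, and the bounding box stored as (x, y) corners.
annotation = {
    "frame": "frame_000421.npy",  # e.g., a 32x32 thermal array on disk (hypothetical)
    "pose": 1,                    # standing
    "bbox": {"x0": 7, "y0": 4, "x1": 19, "y1": 28},
}

print(POSE_LABELS[annotation["pose"]])  # "standing"
```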
Because of the low resolution, it may be difficult to determine differences in image intensity between pixels, so the algorithm may not attempt to derive features from image intensity. Instead, the algorithm may focus on distinguishing features in the pattern of a human's overhead thermal signature. The CNN may implicitly define what constitutes a "distinguishing feature": each layer of the network defines features learned by stochastic gradient descent, and the user may not know which features the network deems important. The network may find edges, curves, sharp contrasts between adjacent pixel values, and the like; however, due to the nature of neural networks, the system may have little or no explicit information about which features matter for distinguishing postures.
As discussed above, in various embodiments, the system may extract a region that includes the distinguishing features of a human (e.g., represented as a rectangular bounding box). The bounding box may limit the amount of data the system analyzes and force the algorithm to focus only on the human contour. At this step of the algorithm, the system may focus on the extracted bounding box to classify the human posture, extracting posture information for each frame. A frame is a single image captured by one of the thermal cameras.
In various embodiments, the system may compute an "aggregated posture" to smooth out changes in posture across multiple frames over a period of time. For example, the aggregated posture may be determined from the mode of a set of postures (i.e., the most frequently occurring posture in the set) collected over a given period. Not all postures in the set need be identical; the aggregation method creates a consensus, referred to as the "aggregated posture." For example, the system may obtain a posture consensus every 5 seconds. The aggregated posture may be, for example, sitting, standing, or lying. The system can determine that an event is a fall based on a particular change in aggregated posture sustained for a certain amount of time: for example, if an event consists of (i) the aggregated posture changing from standing/sitting to lying, and (ii) the lying posture lasting a certain amount of time (e.g., at least 30 seconds), the event is determined to be a fall.
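A minimal sketch of posture aggregation and the fall rule as described: the mode over a 5-second window, then a standing/sitting-to-lying transition sustained for at least 30 seconds. The data layout (a list of timestamped aggregated postures) is an illustrative assumption.

```python
from collections import Counter

WINDOW_S = 5       # postures are aggregated over 5-second windows
LYING_HOLD_S = 30  # lying must persist this long before a fall is declared


def aggregate_pose(poses: list[str]) -> str:
    """Consensus posture for one window: the most frequent posture (the mode)."""
    return Counter(poses).most_common(1)[0][0]


def is_fall(history: list[tuple[float, str]]) -> bool:
    """history: list of (timestamp, aggregated_posture), oldest first.

    A fall is an aggregated posture that changes from standing/sitting to
    lying and then stays lying for at least LYING_HOLD_S seconds.
    """
    for i in range(1, len(history)):
        _, prev = history[i - 1]
        t_change, curr = history[i]
        if prev in ("standing", "sitting") and curr == "lying":
            rest = history[i:]
            if (all(p == "lying" for _, p in rest)
                    and rest[-1][0] - t_change >= LYING_HOLD_S):
                return True
    return False


print(aggregate_pose(["standing", "standing", "lying"]))  # "standing"
print(is_fall([(0, "standing"), (5, "lying"), (20, "lying"), (40, "lying")]))  # True
```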
Using pattern recognition, fuzzy logic, artificial intelligence, and/or machine learning, the system may determine human postures from similar thermal signatures. In various embodiments, with respect to pattern recognition, the system may extract distinguishing features and/or patterns from a frame to help recognize the posture it contains. With respect to artificial intelligence, and based on collected human-annotated data and/or frames, the system may learn these patterns automatically using CNNs. As illustrated in fig. 11, the patterns in human thermal signatures differ, and the differences lead the system to classify postures differently: fig. 11A shows a thermal signature indicating a standing posture, fig. 11B a sitting posture, and fig. 11C a lying posture.
As illustrated in fig. 12, in various embodiments, the system may perform a posture inference process. The system may acquire an input image (e.g., an 8 x 8, 32 x 32, or 64 x 64 pixel image) from the infrared sensor (step 1205). The upper and/or lower limits on resolution may be based on privacy concerns, sensor power consumption, data costs, data bandwidth, computational cost, and/or computational bandwidth. The system may apply a CNN to the input image (step 1210). The CNN can learn different filters at each successive layer of the network through stochastic gradient descent, implicitly extracting the important information from the image.
The system may create a vector of image features (step 1215). The vector may be an intermediate result: the learned image features output by the CNN. A transformer encoder/decoder may be used (step 1220) to convert the vector of image features into a frame prediction (step 1225). DETR (DEtection TRansformer) can learn a 2D representation of the input image using a conventional CNN backbone. The model may flatten the image features and supplement them with positional encodings before passing them to the transformer encoder. The transformer decoder may then take as input a small, fixed number of learned positional embeddings (object queries) and additionally attend to the encoder output. The system may pass each output embedding of the decoder to a shared feed-forward network (FFN) that predicts either a detection (e.g., a class and bounding box) or a "no object" class.
In various embodiments, the frame prediction may predict the size and location of a bounding box that contains the distinguishing features of a human. The system may obtain the interior of the bounding box (step 1230); anything falling outside the bounding box is not of interest, because the system's goal is to recognize the human's posture. In various embodiments, the system applies a CNN to the interior of the bounding box (step 1235). More specifically, the CNN applied to the interior of the bounding box implicitly learns features from the training images; these features may not be interpretable by a human user and may never be explicitly expressed by the neural network. Based on the CNN, the system creates a posture prediction for the image (step 1240). More specifically, during the model's training phase, the system extracts features that distinguish human postures, and this becomes the prior knowledge the trained model uses to distinguish postures during the inference/testing phase. The posture prediction may include, for example, sitting, standing, lying, or any other posture or configuration of a human. The system may also detect other postures using higher resolutions (32 x 32 pixels or higher); for example, higher resolution may give better performance in distinguishing a standing posture from a sitting posture, or in distinguishing exercising, dancing, running, eating, and the like.
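A schematic PyTorch sketch of this two-stage pipeline (a CNN backbone feeding a transformer for DETR-style box prediction, then a second CNN classifying the posture). It is a simplified illustration, not the disclosed implementation: layer sizes are placeholders, positional encodings and object queries are omitted, and the crop of the box interior is elided (the full frame stands in for it).

```python
import torch
import torch.nn as nn


class PoseInference(nn.Module):
    """Schematic two-stage pipeline: predict a bounding box, then classify posture."""

    def __init__(self, num_poses: int = 3, d_model: int = 64):
        super().__init__()
        # Stage 1: CNN backbone -> flattened tokens -> transformer (DETR-style).
        self.backbone = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, d_model, 3, padding=1), nn.ReLU(),
        )
        enc_layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(enc_layer, num_layers=2)
        self.box_head = nn.Linear(d_model, 4)  # (cx, cy, w, h): the frame prediction
        # Stage 2: CNN applied to the (cropped) interior of the bounding box.
        self.pose_cnn = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(16, num_poses),  # posture prediction (sit/stand/lie)
        )

    def forward(self, frame: torch.Tensor):
        # frame: (batch, 1, 32, 32) low-resolution thermal image
        feats = self.backbone(frame)               # (batch, d_model, 32, 32)
        tokens = feats.flatten(2).transpose(1, 2)  # flatten to a token sequence
        box = self.box_head(self.encoder(tokens).mean(dim=1)).sigmoid()
        # Cropping the predicted box interior is elided; the full frame stands in.
        pose_logits = self.pose_cnn(frame)
        return box, pose_logits


model = PoseInference()
box, pose_logits = model(torch.randn(1, 1, 32, 32))
print(box.shape, pose_logits.shape)  # torch.Size([1, 4]) torch.Size([1, 3])
```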
As illustrated in fig. 13, in various embodiments, the system may perform a fall detection process. The system may acquire images (step 1305); for example, all images within a 5-second window may be aggregated. The system may benchmark the aggregation window: if the window is too short, the system may lose detail, and if it is too long, the system may accumulate noisier data. The system makes no assumption about the person's location in the frame; the person may be stationary or moving. The system acquires data about the person's posture in each frame by running the posture inference process of fig. 12 on each image (step 1310); the result for each frame is a posture prediction for that frame. After posture inference, the system has a number of predicted postures (in fig. 12, the "postures collected within 5 seconds"). After passing them through the posture aggregation module, the system may determine whether the aggregated posture is standing/sitting or lying, and can then use this information to determine whether a fall has occurred.
In various embodiments, the system may perform posture aggregation (step 1315). More specifically, the system may determine the aggregated posture from the mode of the set of postures collected over a given period, i.e., the most frequently occurring posture in the set. While not all postures in the set may be identical, the aggregation method creates a consensus: the aggregated posture. The system may determine the aggregated posture (step 1320), which may be standing or sitting (step 1325) or lying (step 1330). If the aggregated posture is lying, the system consults a database of previously aggregated posture data, in which frames are ordered in time, and determines whether the aggregated posture at the previous timestamp was standing or sitting. The system may determine that the aggregated posture changed from standing or sitting to lying and then remained lying for at least 30 seconds (step 1335).
The system may store a timestamp associated with each of these actions and check the difference between the start timestamp and the current timestamp to determine whether it exceeds a threshold. The threshold may be pre-specified, predetermined, dynamically adjusted, algorithm-based, etc. In various embodiments, if a fall persists beyond the threshold amount of time, the system issues a fall alert. The fall alert may be sent to any other device via any communication means; for example, the system may send a signal over the internet to an application on a relative's smartphone to notify the relative that a fall may have occurred. In addition to alerts, the system may also provide data about the person, location, facility, health information, demographic profile, fall history, and the like.
Due to the nature of CNNs, the network will always assign some posture class; however, if the system is uncertain about a posture or frame, it may ignore some frames. For example, the system may use a confidence threshold of 0.7, acting on a predicted fall only when the model's confidence is at least 70%. Thresholds above or below 0.7 may also be used.
The system can identify potential falls (step 1340). For example, if a person lies down on a bed or sofa, the system may classify the action as a "potential fall," even though lying on a bed or sofa is a normal motion rather than a true fall. As such, the system can post-process potential falls (step 1345). In various embodiments, the post-processing may include filters, such as, for example, masking spatial locations (e.g., beds) where a genuine fall is unlikely to occur. Another filter may apply a confidence threshold to the algorithm's detection of a potential fall.
Based on the post-processing, the system can confirm a potential fall as a confirmed fall (step 1350). In particular, after applying one or more filters, the system can confirm the fall and provide notification of it. For example, in a user interface/software product, a user may create and place virtual furniture (e.g., beds, chairs, closets, etc.) or masked areas in the setup application. Even if a "potential fall" is detected by the machine learning algorithm, the system can automatically blacklist these areas to prevent triggering a fall alert. For example, as discussed above, if a person lies down or sleeps on a bed, the machine learning algorithm may detect a "potential fall"; because lying or sleeping on a bed is a normal action, it should not trigger a fall alert. If the person actually falls on the floor, however, the machine learning algorithm detects a "potential fall," and the system then determines whether the potential fall lies within a blacklisted area. If the potential fall is within a blacklisted area, the system does not raise an alarm; if it is outside the blacklisted areas, the system can classify it as a "confirmed fall" and trigger (e.g., send) a user-facing fall alert.
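A minimal sketch of this post-processing step, combining the 0.7 confidence threshold mentioned earlier with the blacklisted-area check described here. The potential-fall schema and rectangle representation are illustrative assumptions.

```python
CONFIDENCE_THRESHOLD = 0.7  # minimum model confidence to act on a potential fall


def confirm_fall(potential_fall: dict, blacklist_areas: list) -> bool:
    """Post-process a potential fall before alerting.

    `potential_fall` is assumed to carry an (x, y) location and a model
    confidence; `blacklist_areas` are rectangles around beds, sofas, etc.
    """
    if potential_fall["confidence"] < CONFIDENCE_THRESHOLD:
        return False  # too uncertain to alert
    x, y = potential_fall["xy"]
    for x0, y0, x1, y1 in blacklist_areas:
        if x0 <= x <= x1 and y0 <= y <= y1:
            return False  # lying on a bed/sofa is normal, not a fall
    return True  # confirmed fall: trigger the user-facing alert


bed = (0.0, 0.0, 2.0, 1.5)
print(confirm_fall({"xy": (1.0, 1.0), "confidence": 0.9}, [bed]))  # False: on the bed
print(confirm_fall({"xy": (4.0, 3.0), "confidence": 0.9}, [bed]))  # True: on the floor
```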
Posture detection systems may have many commercial applications and generate significant commercial interest. For example, for a senior living community, the system may provide predictive insights and prescriptive recommendations. In this regard, the system may flag or notify a caregiver so that intervention can happen as early as possible. In particular, the system may measure and track frailty based on, for example, analysis of baseline movement patterns and changes from that baseline. The system may also detect and/or flag abnormal activity (e.g., excessive time in bed or in the bathroom), including nighttime trips to the toilet.
Previous systems typically required active steps by the user, wearable devices, or completed surveys. The current system, by contrast, offers a private and non-invasive value proposition: it can passively perceive actual behavior without requiring the person to change that behavior.
Currently, frailty is measured in a clinic or doctor's office. A physician may administer a test comprising a series of activities that take roughly 5 to 10 minutes to complete. However, such a quick test gives only a snapshot of the patient, and because it is performed in an artificial setting, the patient may be more attentive or try harder than usual. As a result, the frailty score may vary over time with the setting and the effort the patient makes. The current system improves on prior-art frailty tests by using "longitudinal" tracking, following a person's movement over time, and can therefore provide a more comprehensive understanding of frailty.
Using the posture detection functionality, the system may provide notifications or reports of events (e.g., falls, abnormal behaviors, etc.) to, for example, caregivers, building management systems, alarm systems, notification systems, and/or emergency response systems. A report may include data associated with an event (e.g., a fall), such as, for example, the location, the time of day, the actions before and/or after the fall, nearby items (e.g., furniture), and items or companions present (e.g., groceries, a walker, a walking stick, additional people). The system may be cheaper, faster to install, and easier to install than radar or other sensor devices. The system may also analyze images to facilitate auditing and/or compliance. For example, the system may use images over a period of time to monitor or audit the care provided (e.g., confirming that a bed check was completed by 11 pm each day), or to measure the time spent providing care (e.g., average minutes per day spent in the bathroom with a patient). The system may also integrate with other systems, such as scheduling and reporting systems. For example, the system may obtain data on when an employee starts and ends a shift, the employee's name and/or identifier, the times the employee claims to have helped a patient, and so on, and may compare this submitted data with actual data obtained from images of resident activity and/or resident care to determine its accuracy.
The detailed description of various embodiments herein makes reference to the accompanying drawings and figures, which show, by way of illustration, various embodiments. While these embodiments are described in sufficient detail to enable those skilled in the art to practice the disclosure, it is to be understood that other embodiments may be realized and that logical and mechanical changes may be made without departing from the spirit and scope of the disclosure. Accordingly, the detailed description herein is presented for purposes of illustration only and not of limitation. For example, the steps recited in any method or process description may be performed in any order and are not limited to the order presented, and any functions or steps may be outsourced to or performed by one or more third parties. Modifications, additions, or omissions may be made to the systems, apparatuses, and methods described herein without departing from the scope of the disclosure. For example, components of the systems and devices may be integrated or separated; the operations of the disclosed systems and apparatus may be performed by more, fewer, or other components; and the described methods may include more, fewer, or other steps, performed in any suitable order. As used in this document, "each" refers to each member of a set or each member of a subset of a set. Furthermore, any reference to the singular includes plural embodiments, and any reference to more than one element may include a singular embodiment. Although specific advantages have been enumerated herein, various embodiments may include some, none, or all of the enumerated advantages.
In the detailed description herein, references to "various embodiments," "one embodiment," "an example embodiment," etc., indicate that the embodiment described may include a particular feature, structure, or characteristic, but every embodiment may not necessarily include the particular feature, structure, or characteristic. Moreover, such phrases are not necessarily referring to the same embodiment. Furthermore, when a particular feature, structure, or characteristic is described in connection with an embodiment, it is submitted that it is within the knowledge of one skilled in the art to effect such feature, structure, or characteristic in connection with other embodiments whether or not explicitly described. After reading the description, it will become apparent to a person skilled in the relevant art how to implement the present disclosure in alternative embodiments.
Benefits, other advantages, and solutions to problems have been described herein with regard to specific embodiments. However, the benefits, advantages, solutions to problems, and any elements that may cause any benefit, advantage, or solution to occur or become more pronounced are not to be construed as critical, required, or essential features or elements of the invention. Accordingly, the scope of the invention is limited only by the appended claims, in which reference to an element in the singular is not intended to mean "one and only one" unless explicitly so stated, but rather "one or more." Moreover, where a phrase similar to "at least one of A, B, or C" is used in the claims, the phrase is intended to mean that A alone may be present in an embodiment, B alone may be present in an embodiment, C alone may be present in an embodiment, or any combination of elements A, B, and C may be present in a single embodiment, e.g., A and B, A and C, B and C, or A and B and C. Furthermore, no element, component, or method step in the present disclosure is intended to be dedicated to the public regardless of whether the element, component, or method step is explicitly recited in the claims. No claim element herein should be construed under the provisions of 35 U.S.C. § 112(f) unless the element is explicitly recited using the phrase "means for." As used herein, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus.
Computer programs (also called computer control logic) are stored in main memory and/or secondary memory. The computer program may also be received via a communication interface. Such computer programs, when executed, enable the computer system to perform the features as discussed herein. In particular, the computer programs, when executed, enable the processor to perform the features of the various embodiments. Such computer programs thus represent controllers of the computer system.
These computer program instructions may be loaded onto a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions which execute on the computer or other programmable data processing apparatus create means for implementing the functions specified in the flowchart block or blocks. These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart block or blocks. The computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart block or blocks.
In various embodiments, the software may be stored in a computer program product and loaded into a computer system using a removable storage drive, hard drive, or communications interface. When executed by a processor, the control logic (software) causes the processor to perform the functions of the various embodiments as described herein. In various embodiments, the hardware components may take the form of Application Specific Integrated Circuits (ASICs). Implementation of the hardware to perform the functions described herein will be apparent to persons skilled in the relevant art(s).
As will be appreciated by one of ordinary skill in the art, the system may be embodied as a customization of an existing system, an add-on product, a processing device executing upgraded software, a stand-alone system, a distributed system, a method, a data processing system, an apparatus for data processing, and/or a computer program product. Accordingly, any portion of the system or a module may take the form of a processing device executing code, an internet-based implementation, an entirely hardware implementation, or an implementation combining aspects of the internet, software, and hardware. Furthermore, the system may take the form of a computer program product on a computer-readable storage medium having computer-readable program code embodied in the storage medium. Any suitable computer-readable storage medium may be utilized, including hard disks, CD-ROM, BLU-RAY DISC, optical storage devices, magnetic storage devices, and the like.
In various embodiments, the components, modules, and/or engines of the system may be implemented as micro-applications (micro-apps). Micro-apps are typically deployed in the context of a mobile operating system. The micro-app may be configured to utilize the resources of the larger operating system and associated hardware via a set of predetermined rules governing the operation of the various operating system and hardware resources. For example, where a micro-app desires to communicate with a device or network other than the mobile device or mobile operating system, the micro-app may utilize the communication protocols of the operating system and associated device hardware under the predetermined rules of the mobile operating system. Further, where the micro-app expects input from a user, it may be configured to request a response from the operating system, which monitors various hardware components and then communicates the detected input from the hardware to the micro-app.
The system and method may be described herein in terms of functional block components, screen shots, optional selections, and various processing steps. It should be appreciated that such functional blocks may be realized by any number of hardware and/or software components configured to perform the specified functions. For example, the system may employ various integrated circuit components, e.g., memory elements, processing elements, logic elements, look-up tables, and the like, which may carry out a variety of functions under the control of one or more microprocessors or other control devices. Similarly, the software elements of the system may be implemented with any programming or scripting language such as C, C++, C#, JAVA, JAVASCRIPT, JavaScript Object Notation (JSON), VBScript, Macromedia COLD FUSION, COBOL, MICROSOFT Active Server Pages, assembly, PERL, PHP, awk, PYTHON, Visual Basic, SQL stored procedures, PL/SQL, any UNIX shell script, and extensible markup language (XML), with the various algorithms being implemented with any combination of data structures, objects, processes, routines, or other programming elements. Further, it should be noted that the system may employ any number of conventional techniques for data transmission, signaling, data processing, network control, and the like. Still further, the system could be used to detect or prevent security issues with a client-side scripting language, such as JAVASCRIPT, VBScript, or the like.
Systems and methods are described herein with reference to screen shots, block diagrams, and flowchart illustrations of methods, apparatuses, and computer program products according to various embodiments. It will be understood that each block of the block diagrams and flowchart illustrations, and combinations of blocks in the block diagrams and flowchart illustrations, respectively, can be implemented by computer program instructions.
Accordingly, blocks of the block diagrams and flowchart illustrations support combinations of means for performing the specified functions, combinations of steps for performing the specified functions, and program instruction means for performing the specified functions. It will also be understood that each block of the block diagrams and flowchart illustrations, and combinations of blocks therein, can be implemented by special-purpose hardware-based computer systems that perform the specified functions or steps, or by suitable combinations of special-purpose hardware and computer instructions. Further, illustrations of process flows and the descriptions thereof may make reference to user windows, applications, web pages, websites, web forms, prompts, and the like. Practitioners will appreciate that the illustrated steps described herein may comprise any number of configurations, including the use of windows, applications, web pages, web forms, popup windows, prompts, and the like. It should be further appreciated that multiple steps as illustrated and described may be combined into single web pages and/or applications but have been expanded here for simplicity. In other cases, steps illustrated and described as single process steps may be separated into multiple web pages and/or applications but have been combined for simplicity.
Middleware may include any hardware and/or software suitably configured to facilitate communications and/or process transactions between disparate computing systems. Middleware components are commercially available and known in the art. Middleware may be implemented through commercially available hardware and/or software, through custom hardware and/or software components, or through a combination thereof. Middleware may reside in a variety of configurations and may exist as a stand-alone system or may be a software component residing on an internet server. Middleware may be configured to process transactions between the various components of an application server and any number of internal or external systems for any of the purposes disclosed herein. The WEBSPHERE MQ™ product (formerly MQSeries) by IBM, Inc. (Armonk, NY) is an example of a commercially available middleware product. An enterprise service bus ("ESB") application is another example of middleware.
The computers discussed herein may provide a suitable website or other internet-based graphical user interface accessible by users. In one embodiment, MICROSOFT Internet Information Services (IIS) and Transaction Server (MTS) services are used in conjunction with a MICROSOFT operating system, web server software, an SQL SERVER database, and a commerce server. Additionally, components such as ACCESS or SQL SERVER, ORACLE, SYBASE, INFORMIX, MYSQL, INTERBASE, and the like may be used to provide an ActiveX Data Objects (ADO)-compliant database management system. In another embodiment, an APACHE web server is used in conjunction with a LINUX operating system, a MYSQL database, and the PERL, PHP, Ruby, and/or PYTHON programming languages.
For brevity, conventional data networking, application development, and other functional aspects of the systems (and components of the various operational components of the systems) may not be described in detail herein. Furthermore, the connecting lines shown in the various figures contained herein are intended to represent example functional relationships and/or physical couplings between the various elements. It should be noted that many alternative or additional functional relationships or physical connections may be present in a practical system.
In various embodiments, the methods described herein are implemented using the various specific machines described herein. As will be immediately understood by those skilled in the art, the methods described herein may be implemented using these specific machines, as well as machines hereinafter developed, in any suitable combination. Furthermore, as is apparent from the present disclosure, the methods described herein may result in various transformations of certain articles.
In various embodiments, the system and various components may be integrated with one or more intelligent digital assistant technologies. For example, exemplary intelligent digital assistant technologies may include the ALEXA system developed by AMAZON, the GOOGLE HOME system developed by Alphabet, Inc., the HOMEPOD system developed by APPLE, and/or similar digital assistant technologies. These systems may each provide cloud-based, voice-activated services that can assist with tasks, entertainment, general information, and more. ALEXA-enabled devices, such as the AMAZON ECHO, AMAZON ECHO DOT, AMAZON TAP, and AMAZON FIRE TV, can access the ALEXA system. Such digital assistant systems may receive voice commands, activate other functions, control smart devices, and/or collect information via their voice-activation technology. For example, intelligent digital assistant technologies may be used to interact with music, email, texts, phone calls, question answering, home improvement information, smart home communications/activation, games, shopping, to-do lists, alarm clocks, streaming podcasts, and audiobooks, and to provide weather, traffic, and other real-time information, such as news. The digital assistant systems may also allow the user to access information about eligible transaction accounts linked to an online account across all digital-assistant-enabled devices.
Various system components discussed herein may include one or more of the following: a host server or other computing system including a processor for processing digital data; a memory coupled to the processor for storing digital data; an input digitizer coupled to the processor for inputting digital data; an application program stored in the memory and accessible by the processor for directing processing of digital data by the processor; a display device coupled to the processor and memory for displaying information derived from digital data processed by the processor; and a plurality of databases. Various databases used herein may include client data, merchant data, financial institution data, and/or like data useful in the operation of the system. As those skilled in the art will appreciate, a user computer may include an operating system (e.g., WINDOWS®, UNIX®, LINUX®, SOLARIS®, MacOS®, etc.) as well as various conventional support software and drivers typically associated with computers.
The present system, or any portion(s) or function(s) thereof, may be implemented using hardware, software, or a combination thereof, and may be implemented in one or more computer systems or other processing systems. However, the operations performed by an embodiment may be referred to by terms such as matching or selecting, which are typically associated with mental operations performed by a human operator. In most cases, such capability of a human operator is not necessary or desirable in any of the operations described herein. Rather, the operations may be machine operations, or any operations may be performed or enhanced by Artificial Intelligence (AI) or machine learning. AI may generally refer to research on agents (e.g., machines, computer-based systems, etc.) that perceive the surrounding world, form a plan, and make decisions to achieve their goals. The basis of AI includes mathematics, logic, philosophy, probability, linguistics, neuroscience, and decision theory. Many areas fall within the category of AI, such as computer vision, robotics, machine learning, and natural language processing. Useful machines for performing the various embodiments include general purpose digital computers or similar devices.
In various embodiments, the embodiments are directed toward one or more computer systems capable of carrying out the functionality described herein. The computer system includes one or more processors. The processor is connected to a communication infrastructure (e.g., a communications bus, a cross-over bar, a network, etc.). Various software embodiments are described in terms of this exemplary computer system. After reading this description, it will become apparent to a person skilled in the relevant art(s) how to implement various embodiments using other computer systems and/or architectures. The computer system can include a display interface that forwards graphics, text, and other data from the communication infrastructure (or from a frame buffer, not shown) for display on a display unit.
The computer system also includes a main memory, such as Random Access Memory (RAM), and may also include a secondary memory. The secondary memory may include, for example, a hard disk drive, a solid state drive, and/or a removable storage drive. The removable storage drive reads from and/or writes to a removable storage unit in a well known manner. As will be appreciated, the removable storage unit includes a computer usable storage medium having stored therein computer software and/or data.
In various embodiments, the secondary memory may include other similar devices for allowing computer programs or other instructions to be loaded into the computer system. Such a device may include, for example, a removable storage unit and an interface. Examples of such may include a program cartridge and cartridge interface (such as that found in video game devices), a removable memory chip (such as an erasable programmable read-only memory (EPROM), programmable read-only memory (PROM)) and associated socket, or other removable storage units and interfaces that allow software and data to be transferred from the removable storage unit to the computer system.
The terms "computer program medium", "computer usable medium", and "computer readable medium" are used to generally refer to media such as removable storage drives and hard disks installed in hard disk drives. These computer program products provide software to a computer system.
The computer system may also include a communication interface. The communication interface allows software and data to be transferred between the computer system and an external device. Examples of such communication interfaces may include a modem, a network interface (such as an ethernet card), a communication port, and the like. Software and data transferred via the communications interface are in the form of signals which may be electronic, electromagnetic, optical, or other signals capable of being received by the communications interface. These signals are provided to a communication interface via a communication path (e.g., channel). The channels carry signals and may be implemented using wire, cable, fiber optic, telephone lines, cellular links, radio Frequency (RF) links, wireless and other communication channels.
As used herein, an "identifier" may be any suitable identifier that uniquely identifies an item. For example, the identifier may be a globally unique identifier ("GUID"). The GUID may be an identifier created and/or implemented under the universally unique identifier standard. Moreover, the GUID may be stored as a 128-bit value that can be displayed as 32 hexadecimal digits. The identifier may also include a major number and a minor number, where the major number and minor number may each be 16-bit integers.
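For illustration only, a minimal Python sketch of such an identifier scheme might look as follows; the big-endian packing of the major/minor pair is an assumption made for the example, not a format required by this disclosure.

```python
# Illustrative sketch: a GUID stored as a 128-bit value (32 hex digits) and a
# hypothetical major/minor pair of 16-bit integers packed together.
import struct
import uuid

guid = uuid.uuid4()
print(guid.hex)          # 32 hexadecimal digits
print(len(guid.bytes))   # 16 bytes == 128 bits

major, minor = 4, 1029
packed = struct.pack(">HH", major, minor)  # ">HH" = two big-endian uint16
assert struct.unpack(">HH", packed) == (major, minor)
```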
In various embodiments, the server may include application servers (e.g., WEBSPHERE®, WEBLOGIC®, JBOSS®, POSTGRES PLUS ADVANCED SERVER®, etc.). In various embodiments, the server may include web servers (e.g., Apache, IIS, GOOGLE® Web Server, SUN JAVA® SYSTEM Web Server, or a JAVA® Virtual Machine running on LINUX® or WINDOWS® operating systems).
A network client includes any device or software that communicates via any network, such as, for example, any of the devices or software discussed herein. A network client may include internet browsing software installed within a computing unit or system to conduct online transactions and/or communications. These computing units or systems may take the form of a computer or set of computers, although other types of computing units or systems may be used, including personal computers, laptops, notebooks, tablets, smart phones, cellular phones, personal digital assistants, servers, pooled servers, mainframe computers, distributed computing clusters, kiosks, terminals, point-of-sale (POS) devices or terminals, televisions, or any other device capable of receiving data over a network. A network client may include an operating system (e.g., WINDOWS®, WINDOWS MOBILE®, UNIX®, LINUX®, APPLE® OS®, etc.) as well as various conventional support software and drivers typically associated with computers. A network client may also run MICROSOFT® INTERNET EXPLORER® software, MOZILLA® FIREFOX® software, GOOGLE CHROME™ software, APPLE® SAFARI® software, or any of the myriad software packages available for browsing the internet.
As will be appreciated by those skilled in the art, a network client may or may not be in direct contact with a server (e.g., an application server, a web server, etc., as discussed herein). For example, a network client may access the services of a server through another server and/or hardware component, which may be directly or indirectly connected to an internet server. For example, a network client may communicate with a server via a load balancer. In various embodiments, network client access is through a network or the internet via a commercially available web browser software package. In that regard, the network client may be in a residential or commercial environment with access to the network or the internet. The network client may implement security protocols such as Secure Sockets Layer (SSL) and Transport Layer Security (TLS). A network client may implement several application layer protocols, including HTTP, HTTPS, FTP, and SFTP.
The various system components may be independently, separately, or collectively suitably coupled to the network via data links, including, for example, a connection to an Internet Service Provider (ISP) over a local loop as is typically used in connection with standard modem communication, cable modems, DISH NETWORKS®, ISDN, Digital Subscriber Line (DSL), or various wireless communication methods. It is noted that the network may be implemented as other types of networks, such as an interactive television (ITV) network. Moreover, the system contemplates the use, sale, or distribution of any goods, services, or information over any network having functionality similar to that described herein.
The system contemplates use associated with network services, utility computing, pervasive and personalized computing, security and identity solutions, autonomic computing, cloud computing, commodity computing, mobile and wireless solutions, open source, biometric, grid computing, and/or mesh computing.
Any of the communications, inputs, storage, databases, or displays discussed herein may be facilitated through a website having web pages. The term "web page" as used herein is not meant to limit the type of documents and applications that may be used to interact with the user. For example, a typical website may include, in addition to standard HTML documents, various forms, JAVA® applets, JAVASCRIPT® programs, Active Server Pages (ASP), Common Gateway Interface scripts (CGI), Extensible Markup Language (XML), dynamic HTML, Cascading Style Sheets (CSS), AJAX (Asynchronous JAVASCRIPT and XML) programs, helper applications, plug-ins, and the like. A server may include a web service that receives a request from a web server, the request including a URL and an IP address (e.g., 192.168.1.1). The web server retrieves the appropriate web pages and sends the data or applications for the web pages to the IP address. Web services are applications capable of interacting with other applications over a communications means, such as the internet. Web services are typically based on standards or protocols such as XML, SOAP, AJAX, WSDL, and UDDI. Web services methods are well known in the art and are covered in many standard texts. For example, Representational State Transfer (REST), or RESTful, web services may provide one way of enabling interoperability between applications.
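By way of illustration only, the following Python sketch issues a RESTful-style request using only the standard library; the URL path and the JSON fields in the response are hypothetical, and the address would only resolve on a network where such a service actually exists.

```python
# Illustrative sketch: a minimal RESTful request/response exchange. The host,
# path, and response fields are invented examples, not an API defined here.
import json
from urllib import request

url = "http://192.168.1.1/api/v1/status"  # example URL with an IP address
req = request.Request(url, headers={"Accept": "application/json"})
with request.urlopen(req, timeout=5) as resp:
    body = json.loads(resp.read().decode("utf-8"))
print(body)
```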
The computing unit of the network client may also be equipped with an internet browser that connects to the internet or intranet using standard dial-up, cable, DSL, or any other internet protocol known in the art. Transactions originating from network clients may pass through a firewall to prevent unauthorized access from users of other networks. In addition, additional firewalls may be deployed between the different components of the CMS to further enhance security.
Encryption may be performed by way of any of the techniques now available in the art, or which may become available, e.g., Twofish, RSA, El Gamal, Schnorr signatures, DSA, PGP, PKI, GPG (GnuPG), HPE Format-Preserving Encryption (FPE), Voltage, Triple DES, Blowfish, AES, MD5, HMAC, IDEA, RC6, and symmetric and asymmetric cryptosystems. The systems and methods may also incorporate SHA-series cryptographic methods, elliptic-curve cryptography (e.g., ECC, ECDH, ECDSA, etc.), and/or other post-quantum cryptography algorithms under development.
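As a non-authoritative illustration of the SHA-series and HMAC techniques named above, the following standard-library Python sketch hashes and authenticates an example message; the key and message contents are invented for the example.

```python
# Illustrative sketch: SHA-256 hashing and HMAC message authentication using
# only the Python standard library.
import hashlib
import hmac

message = b"pose-event: lying, duration=45s"
digest = hashlib.sha256(message).hexdigest()   # SHA-series hash
print("sha256:", digest)

key = b"shared-secret-key"                     # example key, not a real secret
tag = hmac.new(key, message, hashlib.sha256).hexdigest()
print("hmac-sha256:", tag)

# Constant-time comparison guards against timing attacks when verifying tags.
assert hmac.compare_digest(tag, hmac.new(key, message, hashlib.sha256).hexdigest())
```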
A firewall may include any hardware and/or software suitably configured to protect CMS components and/or enterprise computing resources from users of other networks. Further, the firewall may be configured to limit or restrict access to various systems and components behind the firewall for network clients connecting through a network server. The firewall may reside in varying configurations, including stateful inspection, proxy-based, access control lists, packet filtering, and the like. The firewall may be integrated within a network server or any other CMS component, or may further reside as a separate entity. The firewall may implement network address translation ("NAT") and/or network address port translation ("NAPT"). The firewall may accommodate various tunneling protocols to facilitate secure communications, such as those used in virtual private networking. The firewall may implement a demilitarized zone ("DMZ") to facilitate communications with a public network such as the internet. The firewall may be integrated as software within an internet server or any other application server component, reside within another computing device, or take the form of a standalone hardware component.
Any database discussed herein may include relational, hierarchical, graphical, blockchain, object-oriented structure, and/or any other database configuration. Any database may also include a flat file structure wherein data may be stored in a single file in the form of rows and columns, with no structure for indexing and no structural relationships between records. For example, a flat file structure may include a delimited text file, a CSV (comma-separated values) file, and/or any other suitable flat file structure. Common database products that may be used to implement a database include DB2® by IBM® (Armonk, NY), various database products available from ORACLE® Corporation (Redwood Shores, CA), MICROSOFT ACCESS® or MICROSOFT SQL SERVER® by MICROSOFT® Corporation (Redmond, Washington), MySQL® by MySQL AB (Uppsala, Sweden), MONGODB®, Redis, APACHE CASSANDRA®, HBASE® by APACHE®, MapR-DB by the MAPR® Corporation, or any other suitable database product. Moreover, any database may be organized in any suitable manner, for example, as data tables or lookup tables. Each record may be a single file, a series of files, a linked series of data fields, or any other data structure.
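A minimal sketch of the flat file structure described above, with hypothetical column names chosen for the example, might be:

```python
# Illustrative sketch: reading a delimited flat file (CSV) with the standard
# library. Rows and columns only; no indexes and no structural relationships
# between records.
import csv
import io

flat_file = io.StringIO(
    "record_id,pose,timestamp\n"
    "1,standing,2023-02-27T09:00:00\n"
    "2,lying,2023-02-27T09:05:00\n"
)

for row in csv.DictReader(flat_file):
    print(row["record_id"], row["pose"], row["timestamp"])
```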
As used herein, big data may refer to a partially or fully structured, semi-structured, or unstructured dataset that includes millions of rows and hundreds of thousands of columns. For example, a large dataset may be compiled from a history of purchase transactions over a period of time, from network registration, from social media, from a record of charges (ROC), from a summary of charges (SOC), from internal data, or from other suitable sources. Large datasets may be compiled without descriptive metadata (such as column type, count, percentile, or other explanatory data points).
The association of certain data may be accomplished by any desired data association technique, such as those known or practiced in the art. For example, the association may be done manually or automatically. Automatic association techniques may include, for example, database searching, database merging, GREP, AGREP, SQL, using key fields in tables to speed up searching, sequential searching through all tables and files, sorting records in files according to a known order to simplify lookups, and so forth. The association step may be accomplished by a database merge function, for example, using a pre-selected "key field" in a database or data sector. Various database tuning steps are contemplated to optimize database performance. For example, frequently used files, such as indexes, may be placed on separate file systems to reduce input/output ("I/O") bottlenecks.
More particularly, a "key field" partitions the database according to the high-level class of objects defined by the key field. For example, certain types of data may be designated as a key field in a plurality of related data tables, and the data tables may then be linked on the basis of the type of data in the key field. The data corresponding to the key field in each of the linked data tables is preferably the same or of the same type. However, data tables having similar, though not identical, data in the key fields may also be linked by using AGREP, for example. In accordance with one embodiment, any suitable data storage technique may be used to store data without a standard format. Data sets may be stored using any suitable technique, including, for example: storing individual files using an ISO/IEC 7816-4 file structure; implementing a domain whereby a dedicated file is selected that exposes one or more elementary files containing one or more data sets; using data sets stored in individual files via a hierarchical filing system; data sets stored as records in a single file (including compression, SQL accessibility, hashing via one or more keys, numeric, alphabetical by first tuple, etc.); data stored as Binary Large Objects (BLOBs); data stored as ungrouped data elements encoded using ISO/IEC 7816-6 data elements; data stored as ungrouped data elements encoded using ISO/IEC Abstract Syntax Notation (ASN.1), as in ISO/IEC 8824 and 8825; and other proprietary techniques, which may include fractal compression methods, image compression methods, and the like.
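To make the key-field linking concrete, the following sketch joins two example tables on an assumed key field (sensor_id) in an in-memory SQLite database; the table names and columns are illustrative only.

```python
# Illustrative sketch: linking two data tables on a pre-selected "key field"
# using an in-memory SQLite database.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE sensors (sensor_id INTEGER PRIMARY KEY, room TEXT);
    CREATE TABLE readings (sensor_id INTEGER, pose TEXT);
    INSERT INTO sensors VALUES (1, 'bedroom'), (2, 'kitchen');
    INSERT INTO readings VALUES (1, 'lying'), (2, 'standing');
""")

# The key field speeds up the lookup and drives the table merge (join).
rows = conn.execute("""
    SELECT s.room, r.pose
    FROM sensors AS s
    JOIN readings AS r ON r.sensor_id = s.sensor_id
""").fetchall()
print(rows)  # [('bedroom', 'lying'), ('kitchen', 'standing')]
```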
In various embodiments, the ability to store a wide variety of information in different formats is facilitated by storing the information as a BLOB. Thus, any binary information can be stored in a storage space associated with a data set. As discussed above, the binary information may be stored in association with the system or external to but affiliated with the system. The BLOB method may store data sets as ungrouped data elements formatted as a block of binary via a fixed memory offset, using fixed storage allocation, circular queue techniques, or best practices with respect to memory management (e.g., paged memory, least recently used, etc.). By using BLOB methods, the ability to store various data sets that have different formats facilitates the storage of data in a database or associated with the system by multiple and unrelated owners of the data sets. For example, a first data set which may be stored may be provided by a first party, a second data set which may be stored may be provided by an unrelated second party, and yet a third data set which may be stored may be provided by a third party unrelated to the first and second parties. Each of these three exemplary data sets may contain different information stored using different data storage formats and/or techniques. Further, each data set may contain subsets of data that also may be distinct from the other subsets.
As stated above, in various embodiments, the data can be stored without regard to a common format. However, the data set (e.g., a BLOB) may be annotated in a standard manner when provided for manipulating the data in the database or system. The annotation may comprise a short header, trailer, or other appropriate indicator related to each data set that is configured to convey information useful in managing the various data sets. For example, the annotation may be called a "condition header," "header," "trailer," or "status" herein, and may comprise an indication of the status of the data set or may include an identifier correlated to a specific issuer or owner of the data. In one example, the first three bytes of each data set BLOB may be configured or configurable to indicate the status of that particular data set, e.g., loaded, initialized, ready, blocked, removable, or deleted. Subsequent bytes of data may be used to indicate, for example, the identity of the issuer, user, transaction/membership account identifier, or the like. Each of these condition annotations is further discussed herein.
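A minimal sketch of such a condition header, assuming an invented three-byte status code and issuer layout, might be:

```python
# Illustrative sketch: a BLOB whose first three bytes act as a "condition
# header" indicating the data set's status, followed by issuer bytes and a
# free-form payload. The status codes and layout are hypothetical.
import struct

STATUS = {b"RDY": "ready", b"LOA": "loaded", b"DEL": "deleted"}

def make_blob(status: bytes, issuer_id: int, payload: bytes) -> bytes:
    assert status in STATUS and len(status) == 3
    # 3-byte status header + 4-byte issuer identifier + payload.
    return status + struct.pack(">I", issuer_id) + payload

def read_blob(blob: bytes):
    status = STATUS[blob[:3]]
    (issuer_id,) = struct.unpack(">I", blob[3:7])
    return status, issuer_id, blob[7:]

blob = make_blob(b"RDY", 42, b"pose=standing")
print(read_blob(blob))  # ('ready', 42, b'pose=standing')
```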
Dataset annotations may also be used for other types of state information as well as various other purposes. For example, the dataset annotation may include security information that establishes the level of access. For example, the access level may be configured to allow only certain individuals, employee levels, companies or other entities to access the data set, or to allow access to specific data sets based on transactions, merchants, issuers, users, etc. Furthermore, the security information may restrict/allow only certain actions, such as accessing, modifying and/or deleting data sets. In one example, the dataset annotation indicates that only the dataset owner or user is allowed to delete the dataset, various identified users may be allowed to access the dataset for reading, and other users are completely excluded from accessing the dataset. However, other access restriction parameters may be used to allow various entities to access data sets having various appropriate levels of rights.
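For illustration, a hypothetical access-level check driven by such an annotation might look like the following sketch; the role and permission names are invented for the example.

```python
# Illustrative sketch: enforcing per-data-set access levels carried in an
# annotation. Roles, actions, and field names are hypothetical.
PERMISSIONS = {"owner": {"read", "modify", "delete"}, "user": {"read"}}

def check_access(annotation: dict, role: str, action: str) -> bool:
    # The annotation's security information establishes the access level.
    allowed = PERMISSIONS.get(role, set()) & set(annotation["actions"])
    return action in allowed

annotation = {"owner": "issuer-42", "actions": ["read", "modify", "delete"]}
print(check_access(annotation, "user", "read"))    # True
print(check_access(annotation, "user", "delete"))  # False
```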
The data, including the header or trailer, may be received by a standalone interaction device configured to add, delete, modify, or augment the data in accordance with the header or trailer. As such, in one embodiment, the header or trailer is not stored on the transaction device along with the associated issuer-owned data, but instead the appropriate action may be taken by providing, to the user at the standalone device, the appropriate option for the action to be taken. The system may contemplate a data storage arrangement wherein the header or trailer, or header or trailer history, of the data is stored on the system, device, or transaction instrument in relation to the appropriate data.
Those skilled in the art will also appreciate that any database, system, device, server, or other component of a system may include any combination thereof at a single location or multiple locations for security reasons, where each database or system includes any of a variety of suitable security features (such as firewalls, access codes, encryption, decryption, compression, decompression, etc.).
Practitioners will also appreciate that there are many ways to display data within a browser-based document. The data may be represented as standard text or in a fixed list, scrollable list, drop down list, editable text field, fixed text field, pop-up window, or the like. Also, there are many methods available for modifying data in a web page, such as, for example, entering free text using a keyboard, selecting menu items, check boxes, option boxes, and the like.
The data may be big data that is processed by a distributed computing cluster. The distributed computing cluster may be, for example, a HADOOP® software cluster configured to process and store big data sets, with some of the nodes comprising a distributed storage system and some of the nodes comprising a distributed processing system. In that regard, the distributed computing cluster may be configured to support a HADOOP® distributed file system (HDFS) as specified by the Apache Software Foundation at www.hadoop.apache.org/docs.
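For orientation, the following single-process Python sketch mimics the map/shuffle/reduce data flow that such a cluster parallelizes; it is not the HADOOP® API, and the example records are invented.

```python
# Illustrative sketch of the map/reduce pattern a distributed cluster runs in
# parallel; this version only mimics the data flow in one process.
from collections import defaultdict

records = ["standing standing", "lying", "standing sitting"]

# Map phase: emit (key, 1) pairs; on a cluster, each node maps its own shard.
mapped = [(word, 1) for line in records for word in line.split()]

# Shuffle phase: group values by key.
groups = defaultdict(list)
for key, value in mapped:
    groups[key].append(value)

# Reduce phase: aggregate each key's values; nodes reduce partitions in parallel.
counts = {key: sum(values) for key, values in groups.items()}
print(counts)  # {'standing': 3, 'lying': 1, 'sitting': 1}
```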
As used herein, the term "network" includes any cloud, cloud computing system, or electronic communications system or method that incorporates hardware and/or software components. Communication among the parties may be accomplished through any suitable communication channels, such as, for example, a telephone network, an extranet, an intranet, the internet, a point-of-interaction device (point-of-sale device, personal digital assistant (e.g., an IPHONE® device, a BLACKBERRY® device), cellular phone, kiosk, etc.), online communications, satellite communications, off-line communications, wireless communications, transponder communications, local area network (LAN), wide area network (WAN), virtual private network (VPN), networked or linked devices, keyboard, mouse, and/or any suitable communication or data input modality. Moreover, although the system is frequently described herein as being implemented with TCP/IP communications protocols, the system may also be implemented using IPX, the APPLETALK® program, IP-6, NetBIOS, OSI, any tunneling protocol (e.g., IPsec, SSH, etc.), or any number of existing or future protocols. If the network is in the nature of a public network, such as the internet, it may be advantageous to presume the network to be insecure and open to eavesdroppers. Specific information related to the protocols, standards, and application software utilized in connection with the internet is generally known to those skilled in the art and, as such, need not be detailed herein.
"Cloud" or "cloud computing" includes models for enabling convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction. Cloud computing may include location-independent computing, providing resources, software, and data to computers and other devices as needed through a shared server.
As used herein, "transmitting" may include sending electronic data from one system component to another system component over a network connection. Further, as used herein, "data" may include information that encompasses, in numerical or any other form, such as commands, queries, files, data for storage, and the like.
Any database discussed herein may include a distributed ledger maintained by a plurality of computing devices (e.g., nodes) over a peer-to-peer network. Each computing device maintains a copy and/or partial copy of the distributed ledger and communicates with one or more other computing devices in the network to validate and write data to the distributed ledger. The distributed ledger may use features and functionality of blockchain technology, including, for example, consensus-based validation, immutability, and cryptographically chained blocks of data. The blockchain may comprise a ledger of interconnected blocks containing data. The blockchain may provide enhanced security because each block may hold the results of individual transactions and any blockchain executables. Each block may link to the previous block and may include a timestamp. Blocks may be linked because each block may include the hash of the prior block in the blockchain. The linked blocks form a chain, with only one successor block allowed to link to one other predecessor block for a single chain. Forks may be possible where divergent chains are established from a previously uniform blockchain, though typically only one of the divergent chains will be maintained as the consensus chain. In various embodiments, the blockchain may implement smart contracts that enforce data workflows in a decentralized manner. The system may also include applications deployed on user devices such as, for example, computers, tablets, smartphones, Internet of Things ("IoT") devices, and the like. The applications may communicate with the blockchain (e.g., directly or via a blockchain node) to transmit and retrieve data. In various embodiments, a governing organization or consortium may control access to data stored on the blockchain. Registration with the managing organization(s) may enable participation in the blockchain network.
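As a rough illustration of hash-linked, timestamped blocks, consider the following Python sketch; it omits consensus, signatures, and peer-to-peer replication, and the block layout is an assumption made for the example.

```python
# Illustrative sketch: blocks linked by including the hash of the prior block,
# plus a timestamp, as described above.
import hashlib
import json
import time

def make_block(data: str, prev_hash: str) -> dict:
    block = {"timestamp": time.time(), "data": data, "prev_hash": prev_hash}
    serialized = json.dumps(block, sort_keys=True).encode("utf-8")
    block["hash"] = hashlib.sha256(serialized).hexdigest()
    return block

genesis = make_block("genesis", prev_hash="0" * 64)
block1 = make_block("pose-event: lying", prev_hash=genesis["hash"])

# Tampering with an earlier block breaks every later link in the chain.
assert block1["prev_hash"] == genesis["hash"]
```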
Data transfers performed through a blockchain-based system may propagate to the connected peers within the blockchain network within a duration that may be determined by the block creation time of the specific blockchain technology implemented. For example, on an ETHEREUM®-based network, a new data entry may become available within about 13 to 20 seconds after the write is completed. On a HYPERLEDGER® Fabric 1.0 based platform, the duration is driven by the specific consensus algorithm chosen and may be performed within seconds. In that respect, propagation times in the system may be improved compared to existing systems, and implementation costs and time to market may also be drastically reduced. The system also offers increased security, at least partially due to the immutability of data stored in the blockchain, reducing the probability of tampering with various data inputs and outputs. Moreover, the system may offer increased security of data by performing cryptographic processes on the data prior to storing the data on the blockchain. Therefore, by transmitting, storing, and accessing data using the system described herein, the security of the data is improved, which decreases the risk of the computer or network being compromised.
In various embodiments, the system may also reduce database synchronization errors by providing a generic data structure, thus at least partially improving the integrity of the stored data. The system also provides greater reliability and fault tolerance than conventional databases (e.g., relational databases, distributed databases, etc.), because each node operates with a complete copy of the stored data, thereby at least partially reducing downtime due to local network outages and hardware failures. The system may also improve reliability of data transfer in a network environment with reliable and unreliable peers, because each node broadcasts messages to all connected peers, and because each block includes a link to the previous block, the node may quickly detect the lost block and propagate requests for the lost block to other nodes in the blockchain network.
The particular blockchain implementations described herein provide improvements over conventional technology by using a decentralized database and improved processing environments. In particular, the blockchain implementations improve computer performance by, for example, leveraging decentralized resources (e.g., lower latency). The distributed computational resources improve computer performance by, for example, reducing processing times. Furthermore, the distributed computational resources improve computer performance by, for example, adding security through cryptographic protocols.
Any of the communications, transmissions, and/or channels discussed herein may include any system or method for delivering content (e.g., data, information, metadata, etc.) and/or the content itself. The content may be presented in any form or medium, and in various embodiments the content may be delivered electronically and/or be capable of being presented electronically. For example, a channel may comprise a website, a mobile application, or a device (e.g., GOOGLE CHROMECAST™, etc.), a uniform resource locator ("URL"), a document (e.g., a MICROSOFT® Word™ or EXCEL™ document, an ADOBE® Portable Document Format (PDF) document, etc.), an "ebook," an "emagazine," an application or micro-application (as described herein), a Short Message Service (SMS) or other type of text message, an email, a FACEBOOK® message, a TWITTER® tweet, a push notification, Multimedia Messaging Services (MMS), and/or other types of communication technology. In various embodiments, a channel may be hosted or provided by a data partner. In various embodiments, the distribution channel may include at least one of a merchant website, a social media website, an affiliate or partner website, an external vendor, mobile device communications, a social media network, and/or a location-based service. Examples of social media websites include FACEBOOK®, TWITTER®, LINKEDIN®, and the like. Examples of affiliate or partner websites include AMERICAN EXPRESS®, GROUPON®, LIVINGSOCIAL®, and the like.

Claims (20)

1. A method, the method comprising:
receiving, by a processor, an image of a human from a sensor;
receiving, by the processor, a location of a bounding box on the image, wherein the bounding box contains pixel data of the human in the image;
obtaining, by the processor, bounding box data from within the bounding box; and
determining, by the processor, a pose of the human based on the bounding box data.
2. The method of claim 1, further comprising training, by the processor, a neural network to predict a location of the bounding box on the image.
3. The method of claim 1, further comprising training, by the processor, the neural network using the pixel data of the human, the thermal data of the human, and the environmental data.
4. The method of claim 1, further comprising adjusting, by the processor, an algorithm of a neural network based on the environmental data.
5. The method of claim 1, further comprising adjusting, by the processor, an algorithm of a neural network based on environmental data, wherein the environmental data includes at least one of an ambient temperature, an indoor temperature, a floor plan, a non-human thermal object, a gender of the human, an age of the human, a height of the sensor, a clothing of the human, or a body weight of the human.
6. The method of claim 1, wherein the sensor acquires thermal data about the human.
7. The method of claim 1, wherein a user indicates a location of the bounding box on the image.
8. The method of claim 1, wherein determining the pose is performed for a frame in an image captured by the sensor.
9. The method of claim 1, wherein determining the pose further comprises determining an aggregate pose spanning multiple frames over a period of time.
10. The method of claim 1, further comprising determining, by the processor, a fall based on the aggregate pose changing from at least one of a standing pose or a sitting pose to a lying pose, and the lying pose lasting for an amount of time.
11. The method of claim 1, further comprising extracting, by the processor, distinguishing features from a plurality of frames of the image using pattern recognition.
12. The method of claim 1, further comprising limiting, by the processor, a resolution of the image based on at least one of a privacy issue, a power consumption of the sensor, a cost of the pixel data, a bandwidth of the pixel data, a computational cost, or a computational bandwidth.
13. The method of claim 1, further comprising marking, by the processor, a pose of the human in the image.
14. The method of claim 1, wherein the image is part of a video clip of the human.
15. The method of claim 1, wherein obtaining the bounding box data comprises obtaining the bounding box data at least one of over time or during an initial calibration session.
16. The method of claim 1, further comprising analyzing, by the processor, distinguishing features from a top of head thermal feature pattern of the human to determine a pose of the human.
17. The method of claim 1, wherein the posture comprises at least one of sitting, standing, lying, exercising, dancing, running, or eating.
18. The method of claim 1, the method further comprising:
determining, by the processor, a temperature of the human in a space based on infrared energy data regarding infrared (IR) energy from the human;
determining, by the processor, position coordinates of the human in the space;
comparing, by a sensor system, the position coordinates of the human with position coordinates of a fixture; and
determining, by the sensor system, that the human is a person in response to the temperature of the human being within a range and in response to the position coordinates of the human being different from the position coordinates of the fixture.
The method of claim 1, further comprising determining, by the processor, a trajectory of the human based on a change in temperature in the pixel data, wherein the temperature is projected onto a grid of pixels.
19. An article of manufacture comprising a non-transitory tangible computer-readable storage medium having instructions stored thereon that, in response to execution by a computer-based system, cause the computer-based system to perform operations comprising:
receiving, by the processor, an image of a human from a sensor;
receiving, by the processor, a location of a bounding box on the image, wherein the bounding box contains pixel data of the human in the image;
obtaining, by the processor, bounding box data from within the bounding box; and
determining, by the processor, a pose of the human based on the bounding box data.
20. A system, the system comprising:
a processor; and
a tangible, non-transitory memory configured to communicate with the processor, the tangible, non-transitory memory having instructions stored thereon that, in response to execution by the processor, cause the processor to perform operations comprising:
receiving, by the processor, an image of a human from a sensor;
receiving, by the processor, a location of a bounding box on the image, wherein the bounding box contains pixel data of the human in the image;
obtaining, by the processor, bounding box data from within the bounding box; and
determining, by the processor, a pose of the human based on the bounding box data.
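For orientation only, the following Python sketch illustrates the general shape of the method recited in claims 1, 9, and 10 (cropping thermal pixel data to a bounding box, determining a per-frame pose, aggregating poses across frames, and flagging a fall); the aspect-ratio rule, the temperatures, and all numeric thresholds are invented for illustration and are not the claimed algorithm.

```python
# Hypothetical sketch of the claimed flow: crop thermal pixel data to a
# bounding box, decide a per-frame pose, and flag a fall when a lying pose
# persists after a standing pose.
import numpy as np

def pose_from_bounding_box(frame: np.ndarray, box: tuple) -> str:
    x0, y0, x1, y1 = box                 # bounding box containing the human
    crop = frame[y0:y1, x0:x1]           # bounding box data (thermal pixels)
    warm = crop > crop.mean() + 1.0      # pixels noticeably warmer than local mean
    ys, xs = np.nonzero(warm)
    if ys.size == 0:
        return "unknown"
    height = ys.max() - ys.min() + 1
    width = xs.max() - xs.min() + 1
    return "standing" if height >= width else "lying"

def detect_fall(poses: list, min_lying_frames: int = 3) -> bool:
    # Aggregate pose over multiple frames: a standing-to-lying change that
    # persists for a minimum number of frames is treated as a fall.
    if "standing" not in poses:
        return False
    after = poses[poses.index("standing"):]
    return any(
        all(p == "lying" for p in after[i:i + min_lying_frames])
        for i in range(len(after) - min_lying_frames + 1)
    )

frame = np.full((32, 32), 20.0)          # ambient-temperature background
frame[6:26, 12:18] = 34.0                # tall warm blob standing in for a person
print(pose_from_bounding_box(frame, (8, 4, 24, 28)))                      # 'standing'
print(detect_fall(["standing", "standing", "lying", "lying", "lying"]))   # True
```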
CN202380043635.7A 2022-03-30 2023-02-27 Posture detection using thermal data Active CN119301428B (en)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
US17/708,493 2022-03-30
US17/708,493 US12050133B2 (en) 2020-03-06 2022-03-30 Pose detection using thermal data
PCT/US2023/013980 WO2023191987A1 (en) 2022-03-30 2023-02-27 Pose detection using thermal data

Publications (2)

Publication Number Publication Date
CN119301428A true CN119301428A (en) 2025-01-10
CN119301428B CN119301428B (en) 2025-06-17

Family

ID=88203351

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202380043635.7A Active CN119301428B (en) 2022-03-30 2023-02-27 Posture detection using thermal data

Country Status (6)

Country Link
EP (1) EP4500127A4 (en)
JP (1) JP7667990B1 (en)
CN (1) CN119301428B (en)
AU (1) AU2023241553B2 (en)
CA (1) CA3247080A1 (en)
WO (1) WO2023191987A1 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN120216523B (en) * 2025-05-27 2025-10-03 浙江省测绘科学技术研究院 Method, system, device, terminal and medium for logically checking and quality checking vector data

Family Cites Families (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2978374B2 (en) * 1992-08-21 1999-11-15 松下電器産業株式会社 Image processing device, image processing method, and control device for air conditioner
US7340293B2 (en) * 2003-05-27 2008-03-04 Mcquilkin Gary L Methods and apparatus for a remote, noninvasive technique to detect core body temperature in a subject via thermal imaging
US8718748B2 (en) * 2011-03-29 2014-05-06 Kaliber Imaging Inc. System and methods for monitoring and assessing mobility
CA2773507C (en) 2011-04-04 2020-10-13 Mark Andrew Hanson Fall detection and reporting technology
US8509495B2 (en) * 2011-04-15 2013-08-13 Xerox Corporation Subcutaneous vein pattern detection via multi-spectral IR imaging in an identity verification system
EP3143931B1 (en) 2014-05-13 2020-12-09 Omron Corporation Posture estimation device and posture estimation method
US9989965B2 (en) * 2015-08-20 2018-06-05 Motionloft, Inc. Object detection and analysis via unmanned aerial vehicle
KR102013935B1 (en) * 2017-05-25 2019-08-23 삼성전자주식회사 Method and system for detecting a dangerous situation
US11055574B2 (en) 2018-11-20 2021-07-06 Xidian University Feature fusion and dense connection-based method for infrared plane object detection
JP7196645B2 (en) 2019-01-31 2022-12-27 コニカミノルタ株式会社 Posture Estimation Device, Action Estimation Device, Posture Estimation Program, and Posture Estimation Method
JP7591577B2 * 2020-01-29 2024-11-28 イントリンジック イノベーション エルエルシー Systems and methods for posture detection and measurement
US20210279967A1 (en) * 2020-03-06 2021-09-09 Apple Inc. Object centric scanning
US11320312B2 (en) 2020-03-06 2022-05-03 Butlr Technologies, Inc. User interface for determining location, trajectory and behavior

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102184020A (en) * 2010-05-18 2011-09-14 微软公司 Method for manipulating posture of user interface and posture correction
CN102262438A (en) * 2010-05-18 2011-11-30 微软公司 Gestures and gesture recognition for manipulating a user-interface
CN102541256A (en) * 2010-10-28 2012-07-04 微软公司 Position aware gestures with visual feedback as input method
CA2781511A1 (en) * 2011-06-24 2012-12-24 American Express Travel Related Services Company, Inc. Systems and methods for gesture-based interaction with computer systems
CN112651291A (en) * 2020-10-01 2021-04-13 新加坡依图有限责任公司(私有) Video-based posture estimation method, device, medium and electronic equipment

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN121059145A (en) * 2025-08-20 2025-12-05 数据少年(北京)健康科技有限公司 A method for detecting adolescent scoliosis based on three-dimensional imaging

Also Published As

Publication number Publication date
AU2023241553A1 (en) 2024-10-17
EP4500127A4 (en) 2025-07-09
CA3247080A1 (en) 2023-10-05
WO2023191987A1 (en) 2023-10-05
JP2025514572A (en) 2025-05-07
AU2023241553B2 (en) 2024-12-12
JP7667990B1 (en) 2025-04-24
EP4500127A1 (en) 2025-02-05
CN119301428B (en) 2025-06-17

Similar Documents

Publication Publication Date Title
US12050133B2 (en) Pose detection using thermal data
Taiwo et al. Enhanced intelligent smart home control and security system based on deep learning model
JP7531815B2 (en) Monitoring human location, trajectory, and behavior using thermal data
US11774292B2 (en) Determining an object based on a fixture
US10586433B2 (en) Automatic detection of zones of interest in a video
US20160035052A1 (en) Systems and methods for reducing energy usage
US20120101653A1 (en) Systems and methods for reducing energy usage,
CN119301428B (en) Posture detection using thermal data
Bouaziz et al. Technological solutions for social isolation monitoring of the elderly: a survey of selected projects from academia and industry
Kang et al. A smart device for non-invasive ADL estimation through multi-environmental sensor fusion
US20210158057A1 (en) Path analytics of people in a physical space using smart floor tiles
US20220093277A1 (en) Path analytics of disease vectors in a physical space using smart floor tiles
US20220087574A1 (en) Neurological and other medical diagnosis from path data
JP2023028573A (en) Prediction system, prediction device, prediction method and prediction program
Devare Analysis and design of IoT based physical location monitoring system
Tan et al. An artificial intelligence and internet of things platform for healthcare and industrial applications
Gingras et al. Eldercare Smart Home Sensor Based System: Approach, Deployment and Insights
Chen et al. Ubi-Care: An Elderly Life Support Healthcare Framework Based on Ubiquitous Personal Online Data Stores
MAKHLOUF Monitoring System Development To Non-Invasively Forecast Future Body Temperature
Khadidja Monitoring System Development To Non-Invasively Forecast Future Body Temperature
US20210353146A1 (en) System and method for people wellness monitoring
Chalmers Adaptive Health Monitoring Using Aggregated Energy Readings from Smart Meters

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant