
JP6975952B2 - Living body motion identification system and living body motion identification method


Info

Publication number: JP6975952B2
Application number: JP2016136349A
Authority: JP (Japan)
Prior art keywords: motion, living body, specific, movement, identification system
Other languages: Japanese (ja)
Other versions: JP2018000871A
Inventors: 誠 佐々木, 勇 柴本
Current Assignee: Iwate University
Original Assignee: Iwate University
Legal status: Active

Events: application filed by Iwate University; priority to JP2016136349A; publication of JP2018000871A; application granted; publication of JP6975952B2; anticipated expiration.

Landscapes

  • Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)
  • Measurement And Recording Of Electrical Phenomena And Electrical Characteristics Of The Living Body (AREA)

Description

The present invention relates to a living body motion identification system and a living body motion identification method for identifying a specific motion performed by a living body, such as an eating motion, and more particularly to a motion identification system and method useful for realizing a better quality of life suited to the living body, providing feedback to diagnosis and rehabilitation, and reporting abnormal states to others.

Conventionally, a known motion identification system of this kind is the one disclosed in Patent Document 1 (Japanese Unexamined Patent Publication No. 2003-220039). It comprises a biological information measuring device that measures at least one of the subject's blood pressure, pulse, respiratory rate, and blood oxygen concentration, and an acceleration measuring device that measures acceleration at multiple sites on the subject's legs, arms, head, chest, and waist. A processor device calculates the subject's amount of exercise from the acceleration information, calculates threshold values for the subject's biological information based on that amount of exercise, compares the measured biological information with the thresholds, and notifies the subject of a physical abnormality when the biological information exceeds the thresholds. The processor compares each item of biological information (blood pressure, pulse, respiratory rate, blood oxygen concentration) with its calculated threshold, and judges that a physical abnormality has occurred when multiple items exceed their thresholds.
The thresholds for the biological information are set in advance for each subject based on the calculated amount of exercise and the subject's individual resting values of blood pressure, pulse, respiratory rate, and blood oxygen concentration. The processor also judges, from the multiple items of biological information, the subject's motion and/or posture associated with a warning (motion and/or posture in sleeping, sitting upright, sitting on a chair, walking, running, stair climbing, bathing, eating, and excretion), and judges the necessity of treatment for an abnormality based on the measured acceleration values at the respective sites.

Patent Document 1: Japanese Unexamined Patent Publication No. 2003-220039

In the conventional system described above, however, thresholds for the biological information must be set for the calculated amount of exercise and for each of the subject's individual blood pressure, pulse, respiratory rate, and blood oxygen concentration, so the settings are wide-ranging and complicated, and the setting work is extremely cumbersome. Moreover, although the subject's motion and/or posture is judged from multiple items of biological information, that judgment is qualitative, making fine motions difficult to detect; for example, in an eating motion the system cannot respond to fine movements such as precursors of choking or aspiration.

The present invention has been made in view of the above points, and its object is to provide a living body motion identification system and a living body motion identification method in which the settings for identifying motions can be made easily without becoming complicated, and which can reliably handle even fine motions.

To achieve this object, the living body motion identification system of the present invention comprises one or more motion detection means for detecting a specific motion performed by a living body, and an identification processing unit that identifies the specific motion of the living body based on detection signals from the motion detection means, wherein the identification processing unit includes machine learning means that discriminates the specific motion using all or part of the detection signals from the motion detection means and outputs the discrimination result.

Here, a specific motion is a concept that includes not only a series of motions such as eating, sleeping, sitting upright, sitting down, walking, running, bathing, and excretion, but also changes in part of the living body, such as motions of facial expression, eye movements, respiration, body temperature, muscle movements, sounds such as the voice, changes in color of the skin and the like, and internal movements that do not appear on the surface. It also includes a stationary state.

Examples of the motion detection means include photosensors (photointerrupters (transmissive and actuator types), photoreflectors (reflective type), etc.), photoelectric elements (photoresistors, LDRs (light-dependent resistors), thermopile infrared sensors (thermoelectromotive effect), pyroelectric infrared sensors, etc.), image sensors (linear image sensors, CCD and CMOS image sensors, etc.), optical remote controls (light-receiving modules, etc.), light sensors (photomultiplier tubes, image intensifiers, etc.), vision sensors (machine vision, color sensors, omnidirectional vision, binocular vision, active vision, etc.), olfactory sensors (odor sensors, odor-code sensors, oil-odor sensors, biosniffers, etc.), taste sensors (richness/sharpness sensors, etc.), temperature sensors (clinical thermometers, semiconductor temperature sensors, platinum temperature sensors, thermistors, thermocouple sensors, thermostats, etc.), temperature/humidity sensors (humidity sensors, temperature-humidity sensors, etc.), pressure sensors (sphygmomanometers, pressure sensors, tactile sensors, piezoelectric elements, etc.), magnetic sensors (MR, GMR, TMR, MI, SQUID, etc.), human-presence sensors (infrared, ultrasonic, visible light, etc.), proximity sensors, proximity switches, ultrasonic sensors, laser sensors, fiber sensors, touch sensors, acceleration sensors (piezoelectric, semiconductor, etc.), angular velocity sensors (tachogenerators, gyro sensors, etc.), vibration sensors, tilt/rotation sensors (gyro sensors, rotary encoders, resolvers, servo-type tilt angle sensors, etc.), position/angle sensors (displacement sensors, length-measuring sensors, linear scales, encoders, potentiometers, etc.), azimuth sensors (attitude/orientation sensors, etc.), liquid-level sensors (level sensors, etc.), leak/water detection sensors, flow sensors, current sensors (myoelectric sensors, etc.), power sensors, biosensors (electrochemical biosensors, optical biosensors, etc.), face recognition sensors, safety sensors, motion sensors (optical, magnetic, inertial, etc.), counters (respiration counters, etc.), electrocardiographic sensors, blood concentration meters, microphones, depth sensors, pulse wave sensors, RGB sensors, and camera devices such as RGB-D camera devices. A camera device processes the captured image data and outputs a detection signal related to the motion of the living body. One or more of these are selected. The motion detection means is not limited to the above.

According to this system, the machine learning means discriminates a specific motion from its learning results. Therefore, even when there are a plurality of motion detection means, the detection signals can be collectively related to a specific motion without setting a threshold for each motion detection means as in the conventional case. As a result, the settings for identifying motions can be made easily without becoming complicated, and even fine motions can be reliably handled.

In the identification processing unit, the machine learning means learns a specific motion in advance based on the detection signals from the motion detection means; when a detection signal arrives from the motion detection means, it discriminates the specific motion from the learning results and outputs the discrimination result. From this result it can be confirmed that the specific motion was performed, and other unplanned motions can also be confirmed. In this case, if the specific motion is set as a normal motion, then when the specific motion is judged to have been performed, normal operation is confirmed, and another unplanned motion can be judged as abnormal, for example. Alternatively, if a normal motion and an abnormal motion are each set as specific motions, it can be confirmed which of them was performed, and if it was the abnormal motion, an abnormality can be judged.

For example, eating motions as specific motions include, in the sitting position, a hunched (round-back) posture (reduced respiratory function), a neck-extended position (unnecessary tension in the shoulders and neck; the space to the esophagus narrows), bulging of the mouth (is the food being taken one bite at a time?), eating fast, shoveling food in, slurping, and swallowing food whole.
By setting and identifying these as specific motions, it becomes possible to know, for an eating motion: whether the person is in a hunched posture (reduced respiratory function); whether the neck is in an extended position (unnecessary tension in the shoulders and neck; the esophageal space narrows); whether an eating posture suited to the individual is maintained (e.g., is sideways swallowing maintained?); whether an appropriate bite size is taken; in what order solids and liquids are eaten; whether alternating swallowing (alternately swallowing foods of different physical properties, performed for some patients) is being carried out; whether dangerous ways of eating, such as eating fast, shoveling, slurping, or swallowing whole, are occurring; whether the eating pace is good; whether the person is concentrating on the meal (or spacing out); whether the person is talking while eating; and whether the breathing pattern changes during the meal.

In the present invention, the machine learning means can be composed of one or a combination of two or more selected from artificial neural networks (ANN), support vector machines (SVM), decision trees, random forests, k-means clustering, self-organizing maps, genetic algorithms, Bayesian networks, deep learning methods, and the like.

If necessary, the machine learning means has a learning function that creates, in advance, teacher data relating to a specific motion from all or part of the detection signals from the motion detection means and stores the judgment criteria obtained by learning based on the teacher data, and an execution function that discriminates the corresponding specific motion from a detection signal of the motion detection means according to the stored judgment criteria and outputs the discrimination result. With this configuration, even a composite detection signal from the motion detection means can be associated one-to-one with a specific motion, so the specific motion can be grasped simply and with high accuracy.

In this case, the machine learning means can comprise: feature extraction means that extracts feature quantities from the detection signals of the motion detection means; judgment criteria storage means that, at learning time, creates teacher data consisting of labels obtained by labeling the feature quantities extracted by the feature extraction means (each label corresponding to a specific motion) together with the corresponding feature quantities, and stores the judgment criteria obtained by learning based on the teacher data; estimation means that, at execution time, estimates the corresponding label from the feature quantities extracted by the feature extraction means and the judgment criteria stored in the judgment criteria storage means; and indication means that indicates the label estimated by the estimation means.

With this configuration, if a label corresponding to normal motion and a label corresponding to abnormal motion are provided through labeling, the estimation means estimates the applicable label and the indication means indicates it, so it can be confirmed whether a normal motion or an abnormal motion was performed.

Further, if necessary, the specific motion is specified by a single motion or by a set of single motions. For example, in the case of a feeding/swallowing motion consisting of mouth opening → mastication → swallowing → mouth closing, each of the opening, mastication, swallowing, and closing motions can be specified as a single motion, or the whole sequence of opening → mastication → swallowing → closing can be specified as one motion.

Furthermore, if necessary, the specific motion can be selected from one, two, or more of: a normal motion performed by the living body, a dangerous motion that may cause an abnormality in the living body, and an abnormal motion different from the normal and dangerous motions.
Selecting dangerous motions that may cause abnormalities in the living body can contribute to preventing abnormal situations from arising.

The normal and dangerous motions can be configured as eating motions performed by a human body seated at a table.
The eating motion can be configured to include swallowing, mastication, and mouth-opening (feeding) motions.

The abnormal motion can be configured to be a vital sign.
Vital signs include, in the case of an eating motion, for example: swallowing while facing upward; the face turning downward or the person collapsing (suspected loss of consciousness due to choking, etc.); beating the chest; clutching the throat with the hands (the universal choke sign); a pained expression; facial color change due to rapid cyanosis during choking; gagging; coughing; wet cough (cough with sputum); dry cough; changes in SpO2 or breathing pattern; respiratory arrest; no chest elevation (suspected choking); tachycardia; falls or rises in heart rate or pulse wave; wet hoarseness, a phlegmy voice, or a change to a gurgling voice (suspected aspiration); and distressed groaning (suspected choking).

Further, if necessary, the identification processing unit is provided with alarm means that issues an alarm when it determines that a motion is at least an abnormal motion other than the normal motion.

Further, to achieve the above object, the present invention also resides in a living body motion identification method that identifies a specific motion of the living body using the above living body motion identification system.

According to the present invention, the machine learning means discriminates a specific motion from its learning results; therefore, even with a plurality of motion detection means, the detection signals can be collectively related to a specific motion without setting a threshold for each motion detection means as in the conventional case. The settings for identifying motions can thus be made easily without becoming complicated, and even fine motions can be reliably handled.

Brief description of the drawings:
FIG. 1 shows the configuration of the living body motion identification system according to the embodiment of the present invention.
FIG. 2 shows examples of the specific motions identified by the living body motion identification system according to the embodiment.
FIG. 3 is a flowchart showing the steps at learning time of the machine learning means in the system according to the embodiment.
FIG. 4 is a flowchart showing the steps at execution time of the machine learning means.
FIG. 5 is a table showing the detectors used for each detection item in the first example of the present invention.
FIG. 6 shows the system configuration of the first example (outline of the multi-channel active electrode system).
FIG. 7 shows the system configuration of the first example (outline of the sensor-based system).
FIG. 8 shows the system configuration of the first example (outline of the magnetic-unit-based system).
FIG. 9 shows the accelerometer, electromyograph, and throat microphone attached to a subject and the orientation of the acceleration sensor (Example 1).
FIG. 10 shows the multi-channel electrodes attached to a subject (Example 1).
FIG. 11 shows the magnetic sensors attached to a subject (Example 1).
FIG. 12 shows the source of the magnetic unit attached to a subject (Example 1).
FIG. 13 is a table showing the specifications of the sensors used (Example 1).
FIG. 14 shows the outline of the identification method (Example 1).
FIG. 15 is a table showing frame shifting (first example).
FIG. 16 shows the labeling procedure (first example).
FIG. 17 is a graph showing the results of the preliminary experiment (first example).
FIG. 18 is a flowchart showing the experimental procedure (first example).
FIG. 19 shows the event detection results for subject A (Example 1).
FIG. 20 shows the event detection results for subject B (Example 1).
FIG. 21 shows the event detection results for subject C (Example 1).
FIG. 22 shows the results of tracking the feeding pace (Example 1).
FIG. 23 shows the sensors attached to a subject (Example 2).
FIG. 24 shows the experimental results of Example 2.
FIG. 25 shows an example use of the present invention.

Hereinafter, the living body motion identification system and the living body motion identification method according to embodiments of the present invention will be described in detail with reference to the accompanying drawings. Since the motion identification method according to the embodiment is realized by the motion identification system according to the embodiment, it is described together with the system.

As shown in FIG. 1, the living body motion identification system according to the embodiment comprises one or more motion detection means (T1, T2, T3, ..., Tn) for detecting a specific motion performed by a living body (a human body in the embodiment), and an identification processing unit 1 that identifies the specific motion of the living body based on the detection signals from the motion detection means.

A specific motion is specified by a single motion or by a set of single motions. As shown in FIG. 2, the specific motion is selected from one, two, or more of: a normal motion performed by the living body, a dangerous motion that may cause an abnormality in the living body, and an abnormal motion different from the normal and dangerous motions. In the embodiment, normal, dangerous, and abnormal motions are all specified. For example, the specific motion is an eating motion performed by the human body seated at a table; the normal and dangerous motions in this eating motion are ordinary eating motions performed while seated at the table. The eating motion includes swallowing, mastication, and mouth-opening (feeding) motions. The abnormal motion is a vital sign; in the case of an eating motion, for example, the universal choke sign of clutching the throat with the hands.

In the embodiment a plurality of motion detection means (T1, T2, T3, ..., Tn) are provided; examples include motion sensors, microphones, myoelectric sensors, electrocardiographic sensors, respiration sensors, and SpO2 sensors. Camera devices such as an RGB-D camera device, which measures a color image (RGB) and a depth image (Depth), are also effective. A camera device processes the captured image data and outputs a detection signal related to the motion of the living body.

The identification processing unit 1 comprises machine learning means 2 that discriminates a specific motion using all or part of the detection signals from the motion detection means (T1, T2, T3, ..., Tn) and outputs the discrimination result. In the embodiment, the machine learning means 2 is composed of a support vector machine (SVM). The machine learning means 2 has a learning function that creates, in advance, teacher data relating to a specific motion from all or part (all, in the embodiment) of the detection signals from the motion detection means, and stores the judgment criteria obtained by learning based on this teacher data, and an execution function that discriminates the corresponding specific motion from the detection signals according to the stored judgment criteria and outputs the discrimination result.

Specifically, the machine learning means 2 comprises: feature extraction means 3 that extracts feature quantities from the detection signals of the motion detection means (T1, T2, T3, ..., Tn); judgment criteria storage means 4 that, at learning time, creates teacher data consisting of labels obtained by labeling the feature quantities extracted by the feature extraction means 3 (each label corresponding to a specific motion) together with the corresponding feature quantities, and stores the judgment criteria obtained by learning based on the teacher data; estimation means 5 that, at execution time, estimates the corresponding label from the feature quantities extracted by the feature extraction means 3 and the judgment criteria stored in the judgment criteria storage means 4; and indication means 6 that indicates the label estimated by the estimation means 5. For example, in the case of an eating motion, labels classified into normal, dangerous, and abnormal motions are specified, and the indication means 6 indicates which of normal, dangerous, or abnormal motion was recognized.

The identification processing unit 1 includes display means 8 that displays the result indicated by the indication means 6 on a display device 7; for an eating motion, for example, one of normal, dangerous, and abnormal motion is displayed. The identification processing unit 1 also includes alarm means 10 that, when a dangerous or abnormal motion other than a normal motion is determined, displays a warning on the display device 7 and sounds an alarm on an alarm device 9.

Accordingly, identification of a living body's motion by the motion identification system according to the embodiment proceeds as follows.
<Learning>
As shown in FIG. 3, in the case of an eating motion, for example, the subject is made to perform normal, dangerous, and abnormal motions, and the identification processing unit 1 collects reference data. The identification processing unit 1 acquires the detection signals for each case from the motion detection means (T1, T2, T3, ..., Tn) (S101), and the feature extraction means 3 extracts their feature quantities (S102). These feature quantities are then labeled, identifying labels with feature quantities corresponding to the specific motions (normal, dangerous, and abnormal motions) (S103); teacher data consisting of the labels and their corresponding feature quantities is created, and the judgment criteria obtained by learning based on this teacher data are stored in the judgment criteria storage means 4 (S104).

<Execution>
As shown in FIG. 4, when the identification processing unit 1 acquires detection signals from the motion detection means (T1, T2, T3, ..., Tn) (S201), the feature extraction means 3 extracts the feature quantities of the signals (S202). The estimation means 5 estimates the corresponding label (normal, dangerous, or abnormal motion) from the extracted feature quantities and the judgment criteria stored in the judgment criteria storage means 4 (S203). The indication means 6 indicates which of the normal, dangerous, and abnormal motion labels applies (S204), and the display means 8 displays it (S205). When a dangerous or abnormal motion other than a normal motion is determined, the alarm means displays a warning on the display device 7 and sounds the alarm device 9 (S205).
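As a concrete illustration of this learning/execution flow, the following is a minimal Python sketch built around an SVM (the classifier named in the embodiment). The feature extraction is left abstract, and the three label values are an assumed encoding for illustration, not taken from the patent.

```python
from sklearn.svm import SVC

NORMAL, DANGEROUS, ABNORMAL = 0, 1, 2   # assumed label encoding

class IdentificationUnit:
    """Sketch of identification processing unit 1: learning
    (S101-S104) and execution (S201-S205) around machine
    learning means 2."""

    def __init__(self, extract_features):
        self.extract = extract_features   # feature extraction means 3
        self.svm = SVC(kernel="rbf")      # machine learning means 2

    def learn(self, signals, labels):
        # S101-S102: acquire detection signals, extract feature quantities
        X = [self.extract(s) for s in signals]
        # S103-S104: attach labels, learn and store the judgment criteria
        self.svm.fit(X, labels)           # judgment criteria storage means 4

    def execute(self, signal):
        # S201-S202: acquire a detection signal, extract features
        x = self.extract(signal)
        # S203: estimation means 5 infers the label
        label = self.svm.predict([x])[0]
        # S204-S205: indicate/display; alarm on non-normal motion
        if label != NORMAL:
            print("WARNING: dangerous or abnormal motion detected")
        return label
```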

In this case, since labels corresponding to normal motion, dangerous motion, and abnormal motion are provided through labeling, the label estimated by the estimation means 5 is indicated by the indication means 6, so it can be confirmed whether a normal, dangerous, or abnormal motion was performed.

According to this system, the machine learning means 2 discriminates a specific motion from its learning results; therefore, even with a plurality of motion detection means (T1, T2, T3, ..., Tn), the detection signals can be collectively related to a specific motion without setting a threshold for each motion detection means as in the conventional case. The settings for identifying motions can thus be made easily without becoming complicated, and even fine motions can be reliably handled.

Examples are described below.
(1) First example
This example is a system that identifies eating motions: a system that detects feeding and swallowing, useful for preventing aspiration. Feeding and swallowing consist of a linkage of various motions, and feeding is broadly divided into the oral, pharyngeal, and esophageal phases. The oral phase is the period in which food is chewed in the oral cavity, mixes with saliva to become an easy-to-swallow mass (bolus), and is sent from the mouth to the pharynx. The pharyngeal phase is the period in which the tongue tip lifts and, after the bolus reaches the pharynx, the hyoid bone is raised; at the same time the pharynx is lifted up and forward, the epiglottis inverts, the airway closes, breathing pauses, and the bolus is guided to the esophagus. The esophageal phase is the period from when the bolus is guided into the esophagus until it is sent down the esophagus. Since these are movements occurring inside the oral cavity, even if the oral phase can be confirmed visually, it is difficult to identify the pharyngeal and esophageal phases or to estimate the amount swallowed. Furthermore, detection of mouth opening is essential for identifying the feeding pace. In this example, the oral phase was redefined as mastication and the pharyngeal and esophageal phases as swallowing, and feeding/swallowing was considered to consist of a cycle of mouth opening, mastication, and swallowing. It was also necessary to estimate the feeding pace and the amount of bolus remaining in the oral cavity; taking these into account, the signals such as myoelectric signals required for state detection were determined and the system was configured.

(1-1) Device determination
FIG. 5 shows the items to be detected, their usefulness, and the sensors used. In the oral, pharyngeal, and esophageal phases, the movement of the lower jaw, the muscle activity of the masticatory muscles and suprahyoid muscle group, and the mastication and swallowing sounds each change, and were expected to contain information contributing to detection of the specific motions; therefore an acceleration sensor, a myoelectric sensor, a multi-channel myoelectric sensor, and a throat microphone were attached. Although the data are not presented in this example, an electrocardiograph, an SpO2 meter, a respiration meter, and magnetic sensors for detecting body movement were also used at the same time.

(1-2) System configuration
The motion detection system used in this example consists of a three-axis accelerometer, an electromyograph for detecting masticatory muscle activity, multi-channel electrodes for detecting the activity of the suprahyoid muscle group, and a throat microphone. FIGS. 6 to 8 show schematic diagrams of the system configuration. The sensors used for measurement were a three-axis acceleration sensor (ZB-150H, NIHON KOHDEN), an electromyograph (ZB-150, NIHON KOHDEN), a multi-channel electromyograph for measuring surface electromyograms (EMG), and a throat microphone. Although not used in this analysis, an ECG transmitter (ZB-151H), a respiration transmitter (ZB-153H, NIHON KOHDEN), an SpO2 transmitter (ZB-157H, NIHON KOHDEN), and eight magnetic sensors attached to the body for detecting body movement were also used, and their data were collected at the same time. FIGS. 9 to 12 show the sensors attached to a subject. The acceleration sensor was mounted so that each axis faced in the direction shown in FIG. 9.

The signals of the multi-channel electrodes were measured by differentially amplifying the potential difference between a reference electrode attached to one earlobe and each electrode constituting the multi-channel active electrode, with a ground (GND) electrode attached to the other earlobe as the reference.
The three-axis acceleration and the surface EMG of the masticatory muscles were recorded with a data acquisition device (WEB-1000, NIHON KOHDEN), and the surface EMG of the suprahyoid muscle group and the throat microphone signal with an A/D converter (NI USB-6218, NATIONAL INSTRUMENTS); these were measured synchronously. FIG. 13 shows the specifications of the sensors.

(1-3) Eating motion identification method
FIG. 14 shows a schematic diagram of the eating motion identification algorithm. The motion identification algorithm is broadly composed of a feature extraction part that extracts feature quantities and a machine learning part that learns and identifies motions. In this example, feature quantities are extracted from the measured myoelectric signals, and motion identification is performed by a support vector machine. Each part is described in detail below.
(1-3-1) Feature extraction part
Before motion identification, characteristic signal components (feature quantities) related to the motions are extracted from the EMG [V] of the suprahyoid muscle group. As feature quantities, the root mean square (RMS), a time-domain feature, and the cepstrum coefficients (CC), a frequency-domain feature, were used.
Root mean square (RMS)
The RMS is expressed by Eq. (1) and captures the amplitude of the EMG signal.

$$\mathrm{RMS} = \sqrt{\frac{1}{N}\sum_{i=1}^{N} x_i^{2}} \qquad (1)$$

where $x_i$ is the $i$-th EMG sample in the frame and $N$ is the number of samples per frame.

Cepstrum coefficients (CC)
The CC are expressed by Eq. (2).

$$c(n) = \frac{1}{N}\sum_{k=0}^{N-1} \log\left|X(k)\right|\, e^{\,j 2\pi k n / N} \qquad (2)$$

where $X(k)$ is the discrete Fourier transform of the EMG frame and $c(n)$ the $n$-th cepstrum coefficient.

The CC are feature quantities extracted from the frequency domain and can separate the envelope shape of the power spectrum from its fine structure: low orders represent the envelope shape, and high orders represent the fine structure.

The past n samples of EMG are used to compute the RMS and CC. Here, a frame-shift method is used in which n samples are cut out as one frame for computation and the cut-out range is shifted at a fixed period. Frame shifting is illustrated in FIG. 15.

In this example, the EMG signal was cut out in frames of length 128 ms (0.5 ms × 256 samples), shifted with a period of 16 ms (0.5 ms × 32 samples), and the feature quantities were extracted.
The feature quantities were further smoothed by a moving average as in Eq. (3), where p is the frame number and M is the number of points in the moving average.

$$\bar{\boldsymbol{f}}(p) = \frac{1}{M}\sum_{m=0}^{M-1} \boldsymbol{f}(p-m) \qquad (3)$$

where $\boldsymbol{f}(p)$ is the feature value of frame $p$ and $\bar{\boldsymbol{f}}(p)$ its smoothed value.

Motion labels are attached to the feature vectors constructed from the RMS and CC, and these are used for learning and identification.
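As a concrete illustration, the following is a minimal Python sketch of the feature extraction described above: frame shifting (256-sample frames at a 0.5 ms sampling interval, shifted by 32 samples), RMS per frame (Eq. (1)), low-order cepstrum coefficients from the log spectrum (Eq. (2)), and causal moving-average smoothing (Eq. (3)). The cepstrum order and the number of moving-average points M are not specified in the text and are chosen arbitrarily here; function names are illustrative.

```python
import numpy as np

FRAME = 256   # 128 ms frame (0.5 ms x 256 samples)
SHIFT = 32    # 16 ms shift  (0.5 ms x 32 samples)

def frames(x):
    """Cut the signal into overlapping frames (frame-shift method)."""
    n = (len(x) - FRAME) // SHIFT + 1
    return np.stack([x[i * SHIFT: i * SHIFT + FRAME] for i in range(n)])

def rms(fr):
    """Eq. (1): root mean square of each frame (amplitude feature)."""
    return np.sqrt(np.mean(fr ** 2, axis=1))

def cepstrum(fr, order=8):
    """Eq. (2): low-order cepstrum coefficients of each frame
    (inverse FFT of the log magnitude spectrum; envelope feature)."""
    spec = np.abs(np.fft.rfft(fr, axis=1)) + 1e-12
    ceps = np.fft.irfft(np.log(spec), axis=1)
    return ceps[:, 1: order + 1]

def smooth(feat, M=4):
    """Eq. (3): average of the current and previous M-1 frames."""
    out = np.empty_like(feat)
    for p in range(len(feat)):
        out[p] = feat[max(0, p - M + 1): p + 1].mean(axis=0)
    return out

def feature_vectors(emg):
    """RMS + CC per frame, smoothed: the vectors to be labeled."""
    fr = frames(emg)
    return smooth(np.column_stack([rms(fr), cepstrum(fr)]))
```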

(1-3-2) Labeling
As shown in FIG. 16, learning with an SVM requires labeling. In this example, the motions of mouth opening, mastication, and swallowing were identified. The labeling procedure is as follows. Only mouth opening and swallowing were labeled directly; the interval between them was labeled as mastication. In FIG. 16 the upper waveform is the EMG of the suprahyoid muscle group, the middle waveform the EMG of the masticatory muscles, and the lower waveform the sum of squares of the three-axis acceleration components.
(1) Since the mouth must open at the start of a bite, the start of the first mouth opening was set at the point where the acceleration and muscle-activity values begin to change substantially.
(2) The end of the opening is the closing of the mouth, set at the point where the acceleration and muscle-activity values decrease; the interval between (1) and (2) was labeled as mouth opening.
(3) Since swallowing starts after mastication ends, its start was set at the point where masticatory muscle activity drops substantially.
(4) The end of swallowing was set at the point following (3) where the activity of the suprahyoid muscle group falls; the interval between (3) and (4) was labeled as swallowing.
In FIG. 16, (1), (2), (3), and (4) are indicated in order from the left vertical line; the section labeled as mouth opening is marked (a), the section labeled as mastication (b), and the section labeled as swallowing (c).
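A minimal sketch of turning such annotated boundaries into per-frame teacher labels, assuming boundary times in seconds and the 16 ms frame period from (1-3-1); the 0/1/2 label encoding is an assumption for illustration:

```python
import numpy as np

FRAME_PERIOD = 0.016  # 16 ms per frame (see (1-3-1))
OPEN, CHEW, SWALLOW = 0, 1, 2

def label_frames(n_frames, open_ivals, swallow_ivals):
    """Label each frame. Opening and swallowing intervals are given
    explicitly as (start, end) times; everything outside them is
    treated as mastication, per the labeling rule above."""
    t = np.arange(n_frames) * FRAME_PERIOD
    labels = np.full(n_frames, CHEW)
    for t0, t1 in open_ivals:
        labels[(t >= t0) & (t < t1)] = OPEN
    for t0, t1 in swallow_ivals:
        labels[(t >= t0) & (t < t1)] = SWALLOW
    return labels
```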

(1-3-3) Motion learning and identification part
First, a discrimination function must be constructed using learning data. The SVM has a motion learning part used at learning time and a motion identification part that performs identification based on the learning results.
In the motion learning part, the initial parameters of the SVM are obtained from the learning data to which motion classes have been assigned, and the discrimination function is constructed. The SVM hyperparameters γ and C were determined by grid search. The search ranges were γ = {2⁻⁵, 2⁻⁴, …, 2⁰} and C = {2¹, 2², …, 2⁸}, giving 48 combinations, and the combination showing the highest identification rate over the grid points was selected. The identification rate here is obtained from the agreement between the identification results and the motion classes of the data used for learning.
The motion identification part identifies feature vectors based on the discrimination function created by learning and assigns a motion class. A majority vote is then taken over the past k assigned motion classes to finally determine the feeding/swallowing state.
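The following Python sketch illustrates this learning/identification scheme under the stated search ranges. The RBF kernel is an assumption carried over from the second example below, and the vote window k is not specified in the text, so the value here is arbitrary.

```python
import numpy as np
from sklearn.svm import SVC

# Search ranges from the text: 6 x 8 = 48 grid points
GAMMAS = [2.0 ** e for e in range(-5, 1)]   # 2^-5 .. 2^0
CS     = [2.0 ** e for e in range(1, 9)]    # 2^1  .. 2^8

def train(features, labels):
    """Motion learning part: construct the discrimination function,
    choosing (gamma, C) by grid search; the identification rate of
    each grid point is the agreement with the training labels."""
    best_model, best_rate = None, -1.0
    for g in GAMMAS:
        for c in CS:
            model = SVC(kernel="rbf", gamma=g, C=c).fit(features, labels)
            rate = model.score(features, labels)
            if rate > best_rate:
                best_model, best_rate = model, rate
    return best_model

def identify(model, features, k=15):
    """Motion identification part: assign a class to each feature
    vector, then take a majority vote over the past k classes."""
    raw = model.predict(features)
    smoothed = raw.copy()
    for i in range(len(raw)):
        window = raw[max(0, i - k + 1): i + 1].astype(int)
        smoothed[i] = np.bincount(window).argmax()
    return smoothed
```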

(1-3-4) SVM multiclass extension
Since the SVM is a two-class discrimination method, an extension to multiple classes is needed when many classes are to be identified. Two such methods are generally available, one-against-one and one-against-all; this work adopted the one-against-one method, in which discrimination functions are constructed for all pairwise combinations of the O classes, i.e., O(O−1)/2 functions, and feature vectors are identified using each of them. The advantage of this method has been shown by Hsu et al. in comparative experiments on the training time and identification accuracy of the two methods.
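For reference, a brief sketch of the O(O−1)/2 pair count and how it maps onto scikit-learn, whose SVC uses one-against-one internally for multiclass problems (class names here are illustrative):

```python
from itertools import combinations
from sklearn.svm import SVC

classes = ["open", "chew", "swallow"]        # O = 3 motion classes
pairs = list(combinations(classes, 2))       # O(O-1)/2 = 3 classifiers
print(pairs)  # [('open', 'chew'), ('open', 'swallow'), ('chew', 'swallow')]

# SVC trains one binary SVM per pair and votes among them;
# decision_function_shape='ovo' exposes the pairwise decision values.
model = SVC(kernel="rbf", decision_function_shape="ovo")
```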

(1-4) Identification of eating motions
For young male subjects, the EMG of the masticatory muscles during feeding/swallowing, the acceleration due to jaw movement, the EMG of the suprahyoid muscle group, and the swallowing sound were measured, and the results of the eating-motion identification described above are shown.
(1-4-1) Experimental conditions
The subjects were three adult males with normal tongue function (age 23.0 ± 1.0 years, mean ± SD). As the feeding/swallowing identification experiment described above, a total of five motions were performed: normal feeding/swallowing of cornflakes, pudding, and water; feeding/swallowing them at a comfortably fast pace; and, as samples of abnormal motions, "cough 1" (throat clearing close to a cough) and "cough 2" (throat clearing close to choking). Each motion was performed ten times per set, and two sets were carried out for each. The first set of each motion was used for SVM learning and the second set for estimating each motion. In this case, learning and identification were performed on the data of cornflakes eaten and swallowed at normal speed and at a comfortably fast pace, together with the cough 1 and cough 2 data.

A preliminary experiment was carried out in advance in which cornflakes were eaten and swallowed ten times at normal speed by six subjects (age 22.5 ± 1.5 years, mean ± SD). The ten swallowing intervals were each measured in lap-time format. The results are shown in FIG. 17: for each subject, the upper end of the line is the longest mastication time, the lower end the shortest, and the gray circle in the middle the mean. Since no common feeding/swallowing pace was found, the pace was judged to differ from person to person, and the experiment was conducted without prescribing a motion cycle.
A flowchart of the detailed experimental procedure is shown in FIG. 18; motions that were measured and analyzed are marked (*), and motions that were only measured are marked (**).

(1-4-2) Experimental results
(1-4-2-1) Event detection results
The event detection results for each subject are shown in FIGS. 19 to 21. Marks were placed, from the top, at the points estimated as cough 2, cough 1, swallowing, mastication, mouth opening, and no judgment.

(1-4-2-2) Feeding pace detection results
FIG. 22 shows the mastication times when eating and swallowing at normal speed (solid line) and at a comfortably fast pace (dotted line), with the transition of the mean values shown as a dash-dotted line.

(2) Second example
This example is a system that identifies eating motions and concerns the detection of abnormalities in eating motions.
The subjects were five healthy adult males (mean age 21 years, mean height 171.0 cm). Body movement was measured with a magnetic three-dimensional position/orientation measurement system (LIBERTY, Polhemus Co.). The system consists of a control unit, a source, and multiple sensors; by detecting changes in the magnetic field generated by the source, the position and orientation of each sensor is calculated in a coordinate system referenced to the source. In this example, as shown in FIG. 23, the source was fixed to the waist and sensors to the chest and the backs of both hands.

The measured motions were general eating motions and an abnormal motion. Each subject first simulated eating rice, drinking miso soup, drinking from a cup, eating a side dish, and shoveling in rice, and finally performed the choke sign, one of the abnormal motions. Since eating speed differs from person to person, the duration and number of eating motions were not prescribed. The choke sign was performed while moving the upper body back and forth and side to side. This series of motions constituted one set, and a total of two sets were measured at a sampling frequency of 240 Hz.

For the analysis, SVM (Support Vector Machine), a machine learning method, was used to perform two-class classification of eating motion versus abnormal motion from the measured three-dimensional position/orientation data of each body part. The features were the position coordinates of each sensor and the distance between the backs of the two hands. Motion labels were assigned as 0 for eating motion and 1 for abnormal motion. The RBF kernel was adopted as the kernel function, and its hyperparameters γ and C were optimized by grid search. The search ranges were γ = {2^-5, 2^-4, …, 2^0} and C = {2^1, 2^2, …, 2^8}, giving 48 combinations, and the combination showing the highest identification rate among the grid points was selected. The SVM was trained on the first measured set, and identification accuracy was evaluated on the remaining set. The SVM output was smoothed by taking a majority vote over the past n samples.
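As a minimal illustrative sketch of the grid search described above (the patent discloses no source code; the use of scikit-learn, its default cross-validation, and all names below are assumptions for illustration), the 48-combination search could look like this:

```python
# Minimal sketch, assuming scikit-learn; not the patent's own implementation.
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

# X_set1, y_set1: features and 0/1 labels from the first measured set (training).
# Features per sample: sensor position coordinates plus the distance between
# the backs of both hands, as described above.

param_grid = {
    "gamma": [2.0 ** k for k in range(-5, 1)],  # gamma in {2^-5, ..., 2^0}: 6 values
    "C": [2.0 ** k for k in range(1, 9)],       # C in {2^1, ..., 2^8}: 8 values
}                                               # 6 x 8 = 48 grid points

def fit_rbf_svm(X_set1, y_set1):
    """Grid-search an RBF-kernel SVM over the 48 (gamma, C) combinations
    and return the estimator with the highest identification rate."""
    search = GridSearchCV(SVC(kernel="rbf"), param_grid, scoring="accuracy")
    search.fit(X_set1, y_set1)
    return search.best_estimator_
```

The second measured set would then be kept aside and passed to the returned estimator for evaluation, matching the train/evaluate split described above.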

In this example, the rate at which the choke sign was correctly identified as a choke sign was calculated as the identification rate (accuracy). The formula is shown below.

Identification rate = (number of correctly identified choke-sign and eating-motion labels) ÷ (total number of labels for all motions) × 100 [%]
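A minimal sketch of this identification-rate computation and of the past-n-sample majority-vote smoothing mentioned above (function names and the handling of the shorter windows at the start of the sequence are assumptions):

```python
# Minimal sketch; names and edge handling are assumptions, not the patent's code.
import numpy as np

def identification_rate(y_true, y_pred):
    """(number of correctly identified labels) / (total number of labels) x 100 [%]."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    return 100.0 * np.mean(y_true == y_pred)

def majority_vote_smoothing(y_pred, n):
    """Smooth 0/1 SVM outputs by a majority vote over the past n samples."""
    y_pred = np.asarray(y_pred)
    smoothed = np.empty_like(y_pred)
    for i in range(len(y_pred)):
        window = y_pred[max(0, i - n + 1): i + 1]  # shorter near the start
        smoothed[i] = int(window.sum() * 2 > len(window))
    return smoothed
```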

The identification results for the five subjects are shown in FIG. 24. The identification rates were distributed between 86.5% and 99.0%, confirming that eating motion and abnormal motion could be discriminated with an average accuracy of 93.7 ± 4.1%.

In fiscal 2011, the third leading cause of death in Japan changed from cerebrovascular disease to pneumonia. Of the 124,652 deaths from pneumonia, 96.5% were elderly people aged 65 and over, and about half of these are attributed to aspiration pneumonia caused by eating/swallowing disorders. Dysphagia is a disorder in which weakening of the muscles involved in oral function disrupts the swallowing process, so that food cannot be swallowed or flows into the trachea (aspiration). About 30% of the late-stage elderly are said to have dysphagia, and the number of patients in Japan is estimated at no fewer than 5 million. Furthermore, many elderly people are unaware of the decline in their eating/swallowing function and aspirate by ingesting food beyond their own ability; abnormal events such as choking, or the choke sign that occurs when a bolus of food lodges in the throat, are said to lead to death by airway obstruction (suffocation) in as many as 5,000 cases per year. Such aspiration and suffocation accidents are increasing rapidly not only at home but also in nursing homes and other care facilities, and court cases concerning care accidents are now seen from time to time.

As a solution to this problem, the present invention can be provided as a technology for supporting meal watching. Meal-watching support means watching over a person who is eating so that he or she does not aspirate, but nursing homes face problems such as shortages of care staff, and the points to be watched are wide-ranging: choking and spilling, neck and trunk posture, eating pace, the amount taken in one bite, and whether the patient's attention is wandering. Watching all of these manually requires enormous labor. In practice, an expert such as a caregiver can watch at the bedside or monitor directly, but when family members or caregivers do this on top of their normal duties, fatigue accumulates and their work may suffer. Automating meal-watching support is expected to reduce the burden on caregivers and families. For Japan, which faces an aging society in which problems such as caregiver burden have already become apparent, dysphagia and its surrounding problems are expected to become increasingly serious, and urgent countermeasures are indispensable.

In the present invention, by detecting abnormal motions during meals and by sensing motions that may lead to abnormal motions, one can aim at building a system that automatically provides meal-watching support as if an expert in meal support and guidance, such as a caregiver, were present. To this end, it is considered necessary to grasp automatically and in real time what is being eaten and whether the person is currently chewing. From the results of such automatic detection, one can aim at a system that gives eating guidance as if a meal-watching expert were present, and that sounds an alarm to family members or caregivers when an abnormal motion occurs.

To prevent aspiration and suffocation in the elderly and in persons with dysphagia,
"eating food suited to one's own swallowing ability, at an appropriate pace and in an appropriate bite size"
is important, while
"dangerous motions (cautionary motions) such as slurping soup, shoveling in rice, or eating with poor posture"
carry an extremely high risk of causing aspiration or airway obstruction. Aspiration during a meal produces
"choking and coughing,"
and in order to become aware of a day-to-day decline in swallowing function and to carry out effective rehabilitation,
"recording daily under what dietary conditions, and how often, aspiration occurs"
is important. In the unlikely event that food blocks the airway and signs of suffocation appear,
"promptly notifying family members, caregivers, or others"
is necessary. To watch over the meals of the elderly and of persons with dysphagia, state detection at each of the levels indicated in quotation marks is indispensable, and the present invention can respond to this need.

As shown in FIG. 25, the present invention can be provided as a technology that analyzes the information from a motion sensor worn on the wrist (3-axis acceleration, 3-axis angular velocity, 3-axis geomagnetism), a myoelectric sensor, a heart rate sensor, and a microphone, using machine learning methods (artificial intelligence) such as the SVM (support vector machine), to detect states such as the person's eating pace, cautionary motions (slurping, shoveling in food, upper-body posture), aspiration (choking, coughing, and the like), and body signs of suffocation.
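As a hypothetical sketch of how such multi-sensor streams might be turned into feature vectors for a classifier (the patent specifies the sensors and the use of machine learning such as SVM, but not a concrete feature pipeline; the window length, the chosen statistics, and all names below are assumptions):

```python
# Hypothetical sketch; the windowing scheme and features are assumptions.
import numpy as np

def window_features(window):
    """Per-channel mean and standard deviation over one analysis window.
    `window` has shape (samples, channels), where the channels could carry
    3-axis acceleration, 3-axis angular velocity, 3-axis geomagnetism,
    EMG, heart rate, and a microphone envelope, as listed in FIG. 25."""
    return np.concatenate([window.mean(axis=0), window.std(axis=0)])

def sliding_windows(recording, win, hop):
    """Split a (samples, channels) multi-sensor recording into overlapping windows."""
    return [recording[i:i + win] for i in range(0, len(recording) - win + 1, hop)]

# Usage sketch: 2 s windows with a 0.5 s hop at a 240 Hz sampling rate (assumed values).
# X = np.stack([window_features(w) for w in sliding_windows(recording, 480, 120)])
# states = trained_svm.predict(X)  # e.g., eating pace / cautionary motion / choke sign
```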

T1 to Tn Motion detection means
1 Identification processing unit
2 Machine learning means
3 Feature quantity extraction means
4 Judgment criterion storage means
5 Estimation means
6 Instruction means
7 Display device
8 Display means
9 Alarm device
10 Alarm means

Claims (10)

1. A living body motion identification system comprising a plurality of types of motion detection means for detecting a specific motion performed by a living body, and an identification processing unit for identifying the specific motion of the living body based on detection signals from the motion detection means, wherein
the specific motion is specified as a set of a plurality of single motions performed in series,
each of the plurality of single motions is detected by one or more types of motion detection means, and at least one of the plurality of single motions is detected by two or more types of motion detection means,
the identification processing unit comprises machine learning means for discriminating the specific motion using all of the detection signals from the plurality of types of motion detection means and outputting the discrimination result,
the specific motion is selected from one or both of a normal motion performed by the living body and a dangerous motion that may cause an abnormality in the living body,
the normal motion and the dangerous motion are meal motions performed by a human body sitting at a table, and
the meal motion includes a series of motions of mouth opening → mastication → swallowing → mouth closing.
2. The living body motion identification system according to claim 1, wherein
the machine learning means of the identification processing unit is configured in advance with a function of creating teacher data relating to the specific motion from all or part of the detection signals from the motion detection means,
in addition to the normal motion and the dangerous motion as the specific motion, an abnormal motion different from the normal motion and the dangerous motion is detected by one or more types of the motion detection means, and
the machine learning means of the identification processing unit is configured with a function of discriminating the abnormal motion using all or part of the detection signals from the motion detection means and outputting the discrimination result.
3. A living body motion identification system comprising a plurality of types of motion detection means for detecting a specific motion performed by a living body, and an identification processing unit for identifying the specific motion of the living body based on detection signals from the motion detection means, wherein
the specific motion is specified as a set of a plurality of single motions performed in series,
each of the plurality of single motions is detected by one or more types of motion detection means, and at least one of the plurality of single motions is detected by two or more types of motion detection means,
the identification processing unit comprises machine learning means for discriminating the specific motion using all of the detection signals from the plurality of types of motion detection means and outputting the discrimination result,
the specific motion is selected from one or both of a normal motion performed by the living body and a dangerous motion that may cause an abnormality in the living body,
the machine learning means of the identification processing unit is configured in advance with a function of creating teacher data relating to the specific motion from all or part of the detection signals from the motion detection means,
in addition to the normal motion and the dangerous motion as the specific motion, an abnormal motion different from the normal motion and the dangerous motion is detected by one or more types of the motion detection means,
the machine learning means of the identification processing unit is configured with a function of discriminating the abnormal motion using all or part of the detection signals from the motion detection means and outputting the discrimination result, and
the abnormal motion is a vital sign selected from: swallowing while facing upward; the face turning downward or the person collapsing; beating the chest; clutching the throat with the hands (the universal choke sign); a pained expression; facial discoloration due to rapid cyanosis during suffocation; choking; coughing; wet cough (cough with sputum); dry cough; changes in SpO2 or breathing pattern; respiratory arrest; absence of chest elevation; tachycardia; a fall or rise in heart rate or pulse wave; wet hoarseness; a phlegmy voice; a change to a gurgling voice (suspected aspiration); and an agonized groaning voice.
4. The living body motion identification system according to claim 3, wherein the normal motion and the dangerous motion are meal motions performed by a human body sitting at a table.
5. The living body motion identification system according to claim 4, wherein the meal motion includes a series of motions of mouth opening → mastication → swallowing → mouth closing.
6. The living body motion identification system according to any one of claims 2 to 5, further comprising alarm means for issuing an alarm when at least an abnormal motion other than the normal motion is discriminated.
7. The living body motion identification system according to any one of claims 1 to 6, wherein the machine learning means is composed of one, or a combination of two or more, selected from an artificial neural network (ANN), a support vector machine (SVM), a decision tree, a random forest, k-means clustering, a self-organizing map, a genetic algorithm, a Bayesian network, a deep learning method, and the like.
8. The living body motion identification system according to any one of claims 1 to 6, wherein the machine learning means has a learning function of creating in advance teacher data relating to the specific motion from all of the detection signals from the motion detection means and storing judgment criteria obtained by learning based on the teacher data, and an execution function of discriminating the corresponding specific motion according to the stored judgment criteria based on the detection signals from the motion detection means and outputting the discrimination result.
9. The living body motion identification system according to claim 8, wherein the machine learning means comprises: feature quantity extraction means for extracting feature quantities from the detection signals from the motion detection means; judgment criterion storage means for creating, at the time of learning, teacher data consisting of labels obtained by labeling the feature quantities extracted by the feature quantity extraction means with the corresponding specific motions together with the corresponding feature quantities, and for storing judgment criteria obtained by learning based on the teacher data; estimation means for estimating, at the time of execution, the corresponding label from the feature quantities extracted by the feature quantity extraction means and the judgment criteria stored in the judgment criterion storage means; and instruction means for indicating the label estimated by the estimation means.
10. A living body motion identification method characterized by identifying a specific motion of a living body using the living body motion identification system according to any one of claims 1 to 9.
JP2016136349A 2016-07-08 2016-07-08 Living body motion identification system and living body motion identification method Active JP6975952B2 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2016136349A JP6975952B2 (en) 2016-07-08 2016-07-08 Living body motion identification system and living body motion identification method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
JP2016136349A JP6975952B2 (en) 2016-07-08 2016-07-08 Living body motion identification system and living body motion identification method

Publications (2)

Publication Number Publication Date
JP2018000871A JP2018000871A (en) 2018-01-11
JP6975952B2 true JP6975952B2 (en) 2021-12-01

Family

ID=60945483

Family Applications (1)

Application Number Title Priority Date Filing Date
JP2016136349A Active JP6975952B2 (en) 2016-07-08 2016-07-08 Living body motion identification system and living body motion identification method

Country Status (1)

Country Link
JP (1) JP6975952B2 (en)

Families Citing this family (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP7073122B2 (en) * 2018-01-31 2022-05-23 Dynabook株式会社 Electronic devices, control methods and programs
KR102142647B1 (en) * 2018-03-28 2020-08-07 주식회사 아이센스 Artificial Neural Network Model-Based Methods, Apparatus, Learning starategy and Systems for Analyte Analysis
KR102039584B1 (en) * 2018-04-12 2019-11-01 고려대학교 세종산학협력단 System and Method for Machine Learning-based Scoring Using Smart Yoga Mat
JP7133200B2 (en) * 2018-05-31 2022-09-08 国立大学法人岩手大学 Evaluation method by swallowing function evaluation device and swallowing function evaluation device
US12048552B2 (en) * 2018-10-05 2024-07-30 Nippon Telegraph And Telephone Corporation Electromyography processing apparatus, electromyography processing method and electromyography processing program
CN111009297B (en) * 2019-12-05 2023-09-19 中新智擎科技有限公司 Supervision method and device for medicine taking behaviors of user and intelligent robot
JP6997228B2 (en) * 2020-01-08 2022-01-17 和寛 瀧本 Deep muscle state estimator
CN111709282A (en) * 2020-05-07 2020-09-25 中粮营养健康研究院有限公司 Method for characterizing food oral processing
CN111931602B (en) * 2020-07-22 2023-08-08 北方工业大学 Human action recognition method and system based on multi-stream segmentation network based on attention mechanism
JP7251804B2 (en) * 2020-09-15 2023-04-04 国立大学法人岩手大学 Wearing device around the ear
JP6903368B1 (en) * 2020-10-23 2021-07-14 Plimes株式会社 Swallowing evaluation device, swallowing evaluation system, swallowing evaluation method and swallowing evaluation program
JPWO2022114070A1 (en) * 2020-11-26 2022-06-02
JP7570687B2 (en) * 2021-02-19 2024-10-22 国立大学法人静岡大学 EVALUATION APPARATUS, EVALUATION METHOD, AND EVALUATION PROGRAM
CN113111750A (en) * 2021-03-31 2021-07-13 智慧眼科技股份有限公司 Face living body detection method and device, computer equipment and storage medium
WO2022269724A1 (en) * 2021-06-21 2022-12-29 日本電信電話株式会社 Exercise content estimation device, exercise content estimation method, and program
JP7682838B2 (en) * 2022-09-26 2025-05-26 Kddi株式会社 Aspiration risk determination device, method, and program
JPWO2024096074A1 (en) * 2022-11-04 2024-05-10

Family Cites Families (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP3570163B2 (en) * 1996-07-03 2004-09-29 株式会社日立製作所 Method and apparatus and system for recognizing actions and actions
JP2005304890A (en) * 2004-04-23 2005-11-04 Kumamoto Technology & Industry Foundation Method of detecting dysphagia
WO2006033104A1 (en) * 2004-09-22 2006-03-30 Shalon Ventures Research, Llc Systems and methods for monitoring and modifying behavior
JP4992043B2 (en) * 2007-08-13 2012-08-08 株式会社国際電気通信基礎技術研究所 Action identification device, action identification system, and action identification method
JP5359414B2 (en) * 2009-03-13 2013-12-04 沖電気工業株式会社 Action recognition method, apparatus, and program
JP5607952B2 (en) * 2010-02-26 2014-10-15 国立大学法人東京工業大学 Gait disorder automatic analysis system
JP4590018B1 (en) * 2010-02-26 2010-12-01 エンパイア テクノロジー ディベロップメント エルエルシー Feature value conversion apparatus and feature value conversion method
JP5464072B2 (en) * 2010-06-16 2014-04-09 ソニー株式会社 Muscle activity diagnosis apparatus and method, and program
US9687191B2 (en) * 2011-01-18 2017-06-27 Holland Bloorview Kids Rehabilitation Hospital Method and device for swallowing impairment detection
JP5924724B2 (en) * 2011-05-09 2016-05-25 国立大学法人岩手大学 Mouth-mouth movement state estimation method and jaw-mouth movement state estimation device
JP5874963B2 (en) * 2011-07-07 2016-03-02 株式会社リコー Moving body motion classification method and moving body motion classification system
JP5740285B2 (en) * 2011-10-31 2015-06-24 株式会社東芝 Gait analyzer and gait analysis program
JP5333567B2 (en) * 2011-11-28 2013-11-06 沖電気工業株式会社 Data processing apparatus, motion recognition system, motion discrimination method, and program
US20160026767A1 (en) * 2013-03-13 2016-01-28 The Regents Of The University Of California Non-invasive nutrition monitor
JP2016034325A (en) * 2014-08-01 2016-03-17 国立大学法人京都大学 Deglutition detection method using fuzzy inference, deglutition activity monitoring system, and deglutition function evaluation method
US10083233B2 (en) * 2014-09-09 2018-09-25 Microsoft Technology Licensing, Llc Video processing for motor task analysis
US20160073953A1 (en) * 2014-09-11 2016-03-17 Board Of Trustees Of The University Of Alabama Food intake monitor
JP2016120271A (en) * 2014-11-17 2016-07-07 ルネサスエレクトロニクス株式会社 Phase correction apparatus, action identification apparatus, action identification system, microcontroller, phase correction method, and program
JP6489130B2 (en) * 2014-12-12 2019-03-27 富士通株式会社 Meal estimation program, meal estimation method, and meal estimation device

Also Published As

Publication number Publication date
JP2018000871A (en) 2018-01-11

Similar Documents

Publication Publication Date Title
JP6975952B2 (en) Living body motion identification system and living body motion identification method
JP7303854B2 (en) Evaluation device
Farooq et al. Accelerometer-based detection of food intake in free-living individuals
Kalantarian et al. A survey of diet monitoring technology
RU2637610C2 (en) Monitoring device for physiological signal monitoring
Farooq et al. Segmentation and characterization of chewing bouts by monitoring temporalis muscle using smart glasses with piezoelectric sensor
Lokavee et al. Sensor pillow and bed sheet system: Unconstrained monitoring of respiration rate and posture movements during sleep
CN105118236A (en) Paralysis falling detection and prevention device and processing method thereof
CN106413533A (en) Device, system and method for detecting apnoea of a subject
EP3806722B1 (en) Apparatus for sensing
Klonovs et al. Distributed computing and monitoring technologies for older patients
JP6425393B2 (en) Prediction system, prediction method, and prediction program
TWI536962B (en) Swallowing function detection system
JPWO2019022259A1 (en) System, method, and program for recognizing motion derived from myoelectric signal
Hussain et al. Food intake detection and classification using a necklace-type piezoelectric wearable sensor system
Adami et al. A method for classification of movements in bed
WO2013086615A1 (en) Device and method for detecting congenital dysphagia
Fontana et al. Evaluation of chewing and swallowing sensors for monitoring ingestive behavior
Pouyan et al. Sleep state classification using pressure sensor mats
Laguna et al. Eating capability assessments in elderly populations
US20200029901A1 (en) Contactless measurement, monitoring and communication of bodily functions of infants and other helpless individuals
Cano et al. Wearable solutions using physiological signals for stress monitoring on individuals with autism spectrum disorder (ASD): A systematic literature review
Dong et al. Analyzing breathing signals and swallow sequence locality for solid food intake monitoring
JP6915175B1 Monitoring the subject's swallowing
WO2017038966A1 (en) Bio-information output device, bio-information output method and program

Legal Events

Date Code Title Description
A80 Written request to apply exceptions to lack of novelty of invention

Free format text: JAPANESE INTERMEDIATE CODE: A80

Effective date: 20160716

A621 Written request for application examination

Free format text: JAPANESE INTERMEDIATE CODE: A621

Effective date: 20190617

A977 Report on retrieval

Free format text: JAPANESE INTERMEDIATE CODE: A971007

Effective date: 20200417

A131 Notification of reasons for refusal

Free format text: JAPANESE INTERMEDIATE CODE: A131

Effective date: 20200602

A601 Written request for extension of time

Free format text: JAPANESE INTERMEDIATE CODE: A601

Effective date: 20200727

A521 Written amendment

Free format text: JAPANESE INTERMEDIATE CODE: A523

Effective date: 20200930

A131 Notification of reasons for refusal

Free format text: JAPANESE INTERMEDIATE CODE: A131

Effective date: 20210302

A521 Written amendment

Free format text: JAPANESE INTERMEDIATE CODE: A523

Effective date: 20210430

TRDD Decision of grant or rejection written
A01 Written decision to grant a patent or to grant a registration (utility model)

Free format text: JAPANESE INTERMEDIATE CODE: A01

Effective date: 20211005

A61 First payment of annual fees (during grant procedure)

Free format text: JAPANESE INTERMEDIATE CODE: A61

Effective date: 20211101

R150 Certificate of patent or registration of utility model

Ref document number: 6975952

Country of ref document: JP

Free format text: JAPANESE INTERMEDIATE CODE: R150