CN103577827A - Image identification device and elevator device - Google Patents
Image identification device and elevator device
- Publication number
- CN103577827A CN103577827A CN201310305951.5A CN201310305951A CN103577827A CN 103577827 A CN103577827 A CN 103577827A CN 201310305951 A CN201310305951 A CN 201310305951A CN 103577827 A CN103577827 A CN 103577827A
- Authority
- CN
- China
- Prior art keywords
- person
- distance image
- motion feature quantity
- feature quantity
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Landscapes
- Image Analysis (AREA)
- Indicating And Signalling Devices For Elevators (AREA)
- Maintenance And Inspection Apparatuses For Elevators (AREA)
- Elevator Door Apparatuses (AREA)
- Closed-Circuit Television Systems (AREA)
- Burglar Alarm Systems (AREA)
Abstract
The invention provides an image recognition device that can accurately recognize the actions of a person within the field of view imaged by a distance image sensor, and an elevator device using this image recognition device. A motion feature quantity for the state in which part of the body extends beyond the field of view is extracted from the distance image, and the motion feature quantity for the state in which the body lies entirely within the field of view is estimated from that feature quantity and the amount by which the body extends beyond the field of view. Even when part of a person's body is outside the field of view, the estimated feature quantity approximates the one that would be obtained if the whole body were visible, which improves the reliability of image recognition.
Description
Technical field
The present invention relates to an image recognition device that recognizes the actions of a person in an image to grasp the person's movement, and to an elevator device equipped with such an image recognition device.
Background art
For example, to ensure the safety of passengers riding in the car of an elevator, a supervisor conventionally confirmed safety inside the car by visually checking the images of a surveillance camera installed in the car. However, having a supervisor watch the images at all times places a heavy physical burden on the supervisor, and there is a demand to reduce this burden.
To reduce this burden, surveillance systems that grasp the movement of people in camera images of a monitored area have recently become widespread. A representative function required of such systems is to use image recognition to grasp the actions of people in the camera images and to detect the occurrence of an accident or the abnormal behavior of a person causing one.
For such surveillance systems, it is effective to apply image recognition technology that uses distance images obtained by a distance image sensor. A distance image sensor is a sensor that captures a two-dimensional image, like a camera, and also measures a distance value for each pixel. Because these distance values are hardly affected by outside light or shadows, a person's actions can be recognized more accurately than with ordinary camera images.
For example, Japanese Patent Laid-Open No. 2010-67079 (Patent Document 1) discloses a technique in which a distance image sensor installed obliquely above a store shelf recognizes the movement of a person taking an article from the shelf.
Patent Document 1: Japanese Patent Laid-Open No. 2010-67079
Summary of the invention
However, if the technique described in Patent Document 1 is used to grasp the movement of every person in a monitored area, the following problems arise.
That is, because the viewing angle of a distance image sensor is narrow, image recognition becomes difficult when part of a person's body extends beyond the sensor's field of view.
The reason is that, when part of the body is outside the field of view, the person's motion feature quantity cannot be computed from the body parts outside the view. The motion feature quantity of a person whose whole body lies inside the field of view therefore diverges from that of a person whose body partly extends beyond it, so even if both perform the same action it is difficult to recognize the actions as identical. In general, the optics of a distance image sensor have a narrower viewing angle than a surveillance camera because of the mechanism for measuring distance values, so this problem is pronounced.
In addition, if the monitored area is wide, people face in various directions and perform various actions, so the motion feature quantity varies with the person's direction, which lowers the accuracy of action recognition.
The technique of Patent Document 1 assumes that the person moves toward the shelf on which the distance image sensor is installed, so these problems do not arise there; but when the actions of people within a wide field of view are the target, the influence of the above problems becomes large.
A main object of the present invention is to provide an image recognition device that can accurately recognize the actions of a person within the field of view imaged by a distance image sensor, and an elevator device equipped with such an image recognition device.
A first feature of the present invention is that a motion feature quantity for the state in which part of the body extends beyond the field of view is extracted from the distance image, and the motion feature quantity for the state in which the body does not extend beyond the field of view is estimated from that feature quantity and the amount by which the body extends beyond the field of view.
A second feature of the present invention is a filter unit that removes the out-of-view part: the person is divided into a plurality of regions, the proportion of each region that extends beyond the field of view of the distance image is obtained, and the person's motion feature quantity is estimated by not evaluating, or by down-weighting, the feature quantities of regions with a large out-of-view proportion.
A third feature of the present invention is that, when actions directed at a surrounding structure are the target, the person's coordinates are transformed so that the direction of the person's action coincides with a predetermined reference direction, and the person's motion feature quantity is estimated in this transformed state.
According to the first feature, even when part of a person's body extends beyond the field of view of the distance image, a motion feature quantity close to the one obtained when the body does not extend beyond the field of view can be obtained, improving the reliability of image recognition.
According to the second feature, for both a person whose whole body appears in the distance image and a person whose body partly extends beyond it, only the feature quantities of regions lying largely inside the field of view are used, so the feature quantities of the two cases are similar and the reliability of image recognition is improved.
According to the third feature, when actions directed at a surrounding structure are the target, making the person's direction of action coincide with a predetermined reference direction yields roughly similar motion feature quantities, improving the reliability of image recognition.
Brief description of the drawings
Fig. 1 is a schematic configuration diagram of a first embodiment of the present invention.
Fig. 2 is a functional block diagram of the first embodiment.
Fig. 3 is an explanatory diagram showing an example of a person moving in the distance image in the first embodiment.
Fig. 4 is an explanatory diagram showing another example of a person moving in the distance image in the first embodiment.
Fig. 5 is an explanatory diagram of the processing of the out-of-view amount calculation unit in the first embodiment.
Fig. 6 is a flowchart showing the control flow of the motion feature correction unit in the first embodiment.
Fig. 7 is a diagram showing an example of the table contents of the regression estimation parameters of the motion feature correction unit in the first embodiment.
Fig. 8 is a functional block diagram of a second embodiment of the present invention.
Fig. 9 is an explanatory diagram showing an example of filtering applied to a person's region in the second embodiment.
Fig. 10 is an explanatory diagram showing another example of filtering applied to a person's region in the second embodiment.
Fig. 11 is a functional block diagram of a third embodiment.
Fig. 12 is an explanatory diagram showing an example of a distance image viewed from a hypothetical viewpoint in the third embodiment.
Fig. 13 is an explanatory diagram showing another example of a distance image viewed from a hypothetical viewpoint in the third embodiment.
Fig. 14 is an explanatory diagram of the relation between a pixel in the distance image and its corresponding point 50.
Embodiments
Embodiments of the present invention are described in detail below with reference to the drawings. The present invention is not limited to the following embodiments; various modifications and applications are included within the scope of the technical concept of the present invention.
Embodiment 1
The image recognition device of the first embodiment described below can accurately recognize the actions of a person within the imaged field of view even when part of the person's body extends beyond the field of view of the distance image sensor.
In Fig. 1, reference numeral 51 denotes the car of an elevator device, 52 a distance image sensor installed inside the car 51, 53 a door provided in the car 51, and 54 a processing device mounted on top of the car 51, outside it.
A coordinate system 59 with origin O and coordinate axes (X, Y, Z) is defined inside the car 51, and the origin O of the coordinate system 59 is set directly below the distance image sensor 52. The distance image sensor 52 is installed with mounting angles consisting of a depression angle θ, an azimuth angle φ, and a roll angle ρ. The depression angle θ and the azimuth angle φ are 0° when the sensor faces the Z-axis direction, and the rotation axes of the depression angle θ, the azimuth angle φ, and the roll angle ρ coincide with the X-axis, the Y-axis, and the Z-axis, respectively.
The processing device 54 is a computing device that performs the signal processing required for the image recognition processing of the present embodiment, and any computer may be used. In Fig. 1 the processing device 54 is a single computer, but it may also consist of two or more computers. A processing device built into the distance image sensor 52 or the like may also be used as the processing device 54. In this way, the processing device 54 is arranged appropriately according to the form of the product to which it is applied.
The method of obtaining distance images described above is known as the time-of-flight (TOF) method. Besides the TOF method, any method that can measure the distance value of each pixel in an image can be applied to the distance image sensor 52; a stereo camera and a laser radar are examples.
Next, the processing functions of the processing device 54 of the image recognition device in the first embodiment are described using the functional block diagram shown in Fig. 2.
First, an overview of each functional block. The distance image acquisition unit 2 acquires distance images from the distance image sensor 52 at prescribed time intervals.
The person extraction unit 3 extracts the part corresponding to a person in the car 51 from the distance image of the distance image acquisition unit 2.
The out-of-view amount calculation unit 4 calculates which part of the whole body of the person extracted by the person extraction unit 3 extends beyond the field of view of the distance image sensor 52. This calculation uses the geometric data held in the geometric data holding unit 1, which stores at least the viewing angle, installation position, and installation angles of the distance image sensor 52. The out-of-view amount calculation unit 4 obtains the amount by which the person's body extends beyond the field of view of the distance image sensor 52.
The motion feature extraction unit 5 extracts the motion feature quantity of the person in the car 51 from the person image extracted by the person extraction unit 3.
When part of the body of the person extracted by the person extraction unit 3 extends beyond the field of view of the distance image sensor 52, the motion feature correction unit 6 corrects the motion feature quantity from the motion feature extraction unit 5 according to the out-of-view amount calculated by the out-of-view amount calculation unit 4.
This correction makes the motion feature quantity of a person whose body partly extends beyond the field of view of the distance image sensor 52 approach that of a person whose whole body appears within the field of view.
The action recognition unit 7 infers the person's action from the motion feature quantity output by the motion feature correction unit 6. More specifically, the action recognition unit 7 identifies which of the predefined action categories the person in the car 51 has performed.
The control unit 8 executes, according to the action recognized by the action recognition unit 7, at least one of the following: recording the video or distance images of the interior of the car 51, outputting an alarm into the car 51, controlling the operation of the car 51, or opening and closing the door 53.
Next, each functional block is described in detail. The geometric data holding unit 1 stores and holds the viewing angle, installation position, and installation angles of the distance image sensor 52. This information is entered into the processing device 54 in advance by the operator when the distance image sensor 52 is installed, and is stored and held there.
Alternatively, after the distance image sensor 52 has been installed, these values may be computed from distance images acquired by the sensor, using a calibration method of the kind applied to surveillance cameras, and then stored and held.
Further, the operator may select data in the geometric data holding unit 1 that matches the viewing angle of the distance image sensor 52 and install the sensor at the installation position and angles held in the geometric data holding unit 1.
In any case, the viewing angle, installation position, and installation angles of the distance image sensor 52 are stored and held in the geometric data holding unit 1 by one of the above methods.
The distance image acquisition unit 2 acquires distance images from the distance image sensor 52 at prescribed time intervals. Fig. 3 shows an example of a distance image acquired by the distance image acquisition unit 2; in this distance image, a person is performing the abnormal action of kicking a wall.
In Fig. 3, reference numeral 151 denotes the distance image, 150 a pixel in the distance image 151, 130a the person, and 154a the wall on one side of the door 53. Although not fully illustrated in Fig. 3, the distance image 151 is divided into a grid of many pixels 150, and each pixel 150 holds the distance from the distance image sensor 52 in the car 51.
Here, the distance value of a pixel 150 can be transformed into a coordinate value of the coordinate system 59 by referring to the contents of the geometric data holding unit 1 (the viewing angle, installation position, and installation angles of the distance image sensor 52). This transformation is carried out in two stages: first a transformation into a coordinate system based on the distance image sensor 52, and then a transformation into the coordinate system 59.
The two steps are described in order. First, the transformation into the coordinate system based on the distance image sensor 52 is explained using Fig. 14.
In Fig. 14, reference numeral 69 denotes the coordinate system based on the distance image sensor 52, 50 the point in the car 51 corresponding to the pixel 150, i(u, v) the coordinate of the pixel 150 on the distance image 151, and I_s(X_s, Y_s, Z_s) the coordinate of the corresponding point 50 in the coordinate system 69.
The origin O_s of the coordinate system 69 is the center of projection of the distance image sensor 52, and the coordinate axes X_s, Y_s, Z_s point to the left, upward, and into the depth as seen from the distance image sensor 52. In the coordinate system 69, the distance value of the pixel 150 equals Z_s. If a pinhole model is used to approximate the projection model of the distance image sensor 52, the remaining components X_s and Y_s of I_s can be calculated by the following formulas (1) and (2).
(Equation 1)
X_s = u · Z_s / λ    (a rearrangement of the general projective transformation u = λ · X_s / Z_s)    …(1)
(Equation 2)
Y_s = v · Z_s / λ    (a rearrangement of the general projective transformation v = λ · Y_s / Z_s)    …(2)
In formulas (1) and (2), λ is the focal length of the distance image sensor 52, taken from the data held in the geometric data holding unit 1.
Next, the transformation into the coordinate system 59 is performed by a general rotation and translation according to the following formula (3). In formula (3), I(X, Y, Z) is the coordinate value of the corresponding point 50 in the coordinate system 59, position (X_c, Y_c, Z_c) is the installation position of the distance image sensor 52 in the coordinate system 59, and angle (θ, φ, ρ) is the installation angle in the coordinate system 59 shown in Fig. 1; these values are taken from the data held in the geometric data holding unit 1.
(Equation 3)
I = R(θ, φ, ρ) · I_s + (X_c, Y_c, Z_c)    …(3)
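As a rough, non-authoritative illustration of formulas (1) to (3), the two-stage transformation could be sketched in Python as follows; the rotation order and sign conventions, the function name, and the numeric values in the example call are assumptions made only for illustration.

```python
import math
import numpy as np

def pixel_to_world(u, v, z_s, lam, theta, phi, rho, pos):
    """Back-project a pixel (u, v) with measured depth z_s into the car
    coordinate system 59: formulas (1)-(2) (pinhole model) followed by
    formula (3) (rotation by the installation angles, then translation by
    the installation position). The X-then-Y-then-Z rotation order is an
    assumption; the patent only states which axis each angle rotates about."""
    # Formulas (1) and (2): lam is the focal length of the sensor
    x_s = u * z_s / lam
    y_s = v * z_s / lam
    i_s = np.array([x_s, y_s, z_s])

    # Elementary rotations: depression theta about X, azimuth phi about Y,
    # roll rho about Z
    ct, st = math.cos(theta), math.sin(theta)
    cp, sp = math.cos(phi), math.sin(phi)
    cr, sr = math.cos(rho), math.sin(rho)
    rx = np.array([[1, 0, 0], [0, ct, -st], [0, st, ct]])
    ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    rz = np.array([[cr, -sr, 0], [sr, cr, 0], [0, 0, 1]])

    # Formula (3): I = R * I_s + (X_c, Y_c, Z_c)
    return rz @ ry @ rx @ i_s + np.asarray(pos, dtype=float)

# Example call with made-up geometry (sensor 2.3 m up, looking down 40 degrees)
print(pixel_to_world(10, -5, 2.0, lam=300.0,
                     theta=math.radians(40), phi=0.0, rho=0.0,
                     pos=(0.0, 2.3, 0.0)))
```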
Next, the person extraction unit 3 extracts the part corresponding to the person 130a from the distance image 151. This extraction can be realized, for example, by subtracting, pixel by pixel, a background distance image of the car 51 captured when no passenger (person 130a) is present from the distance image 151 in which the person 130a appears, and extracting the parts where the distance has changed.
That is, seen from the distance image sensor 52, the person 130a in the car 51 is closer to the sensor than the walls, floor, and door of the car 51, so when the person 130a enters the distance image 151 the distances of the parts where the person's body is present are shorter than the corresponding distances in the background distance image of the car 51, which makes the person extractable.
Besides this method, the person extraction unit 3 may apply any other method that can extract the person 130a from the distance image 151. For example, a pattern of the person's shape learned in advance may be used, and the locations that match the learned pattern extracted as the person 130a.
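A minimal sketch of the background-subtraction extraction described above, assuming a pre-captured background distance image and a simple threshold (both the threshold value and the function name are illustrative, not from the specification):

```python
import numpy as np

def extract_person_mask(frame, background, min_change=0.10):
    """Return a boolean mask of pixels assumed to belong to the person.

    frame, background: 2-D arrays of distance values (metres); 'background'
    is the distance image of the empty car 51.
    min_change: assumed threshold on how much closer a pixel must be than
    the background before it is treated as part of a person.
    """
    # A person is closer to the sensor than the walls, floor and door behind
    # them, so the person's pixels have smaller distance values.
    closer_by = background - frame
    return closer_by > min_change
```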
The motion feature extraction unit 5 extracts motion feature quantities from the time series of distance images from the distance image acquisition unit 2, in the same way as for camera images. In the present embodiment, cubic higher-order local auto-correlation (CHLAC) is applied to extract the motion feature quantities. A method of extracting motion feature quantities based on cubic higher-order local auto-correlation is shown, for example, in Takuya Nanri and Nobuyuki Otsu, "Detection of abnormal actions from video of multiple people," Computer Vision and Image Media, pp. 43-50, October 2005.
In this cubic higher-order local auto-correlation extraction, the change in distance value of each pixel between two distance images at successive times is first obtained. In the case of the distance image 151 shown in Fig. 3, this change becomes large in the parts of the distance image 151 where the passenger 130a is moving.
In particular, since the person 130a kicks the wall 154a, the change in distance value becomes large around the feet.
Furthermore, within the body of the person 130a, the change in distance value also becomes large in the parts that move in association with the movement of the foot, for example the arms swung to keep balance when kicking, or the upper body shaken by the reaction of kicking the wall 154a.
Next, the parts where the change in distance value exceeds a prescribed threshold are extracted as binary motion pixels, and the components of the motion are then obtained from the binary motion pixels of three consecutive time instants.
Each component of the cubic higher-order local auto-correlation reflects the direction of motion of a moving part in the distance image 151 or the shape of the moving part. Here, the direction of motion means the direction of movement in the image (right, left, up, upper-left, upper-right, and so on), and the shape of the moving part means the orientation of the contour of the moving part (right, left, up, upper-left, upper-right, and so on).
The motion feature quantity can then be represented by f in the following formula (4). N in formula (4) is the dimension of the motion feature quantity (determined by the observed motion patterns); N is generally 251 for cubic higher-order local auto-correlation, but is not limited to this.
(Equation 4)
f = [f_1, f_2, ..., f_N]    …(4)
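As a non-authoritative sketch of only the first step described above (the full 251-dimensional CHLAC correlation over three frames is omitted), the binary motion pixels could be computed as follows; the threshold value is assumed for illustration:

```python
import numpy as np

def binary_motion_pixels(dist_prev, dist_curr, diff_threshold=0.05):
    """Compute the change in distance value of each pixel between two
    consecutive distance images and binarize it. The cubic higher-order
    local auto-correlation of formula (4) would then correlate these
    binary volumes over three consecutive frames with local mask patterns,
    which is not shown here."""
    change = np.abs(dist_curr - dist_prev)
    return (change > diff_threshold).astype(np.uint8)
```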
The out-of-view amount calculation unit 4 calculates to what extent the person 130a extracted by the person extraction unit 3 extends beyond the field of view of the distance image sensor 52. In Fig. 4, unlike Fig. 3, a person 130b faces the side opposite to the door 53. Fig. 4 is an example in which part of the person extends beyond the field of view of the distance image sensor 52: most of the feet of the person 130b are outside the field of view. Unlike in Fig. 3, the distance image sensor 52 therefore cannot capture the whole figure of the person 130b.
The person 130b kicks the wall 154b on the side opposite the door 53, but most of the feet are outside the field of view of the distance image sensor 52, so only part of the movement of the feet of the person 130b appears in the distance image 151. The out-of-view amount calculation unit 4 quantifies to what extent the body of the person 130b extends beyond the field of view.
The calculation of the out-of-view amount calculation unit 4 is explained using Fig. 5, which shows a vertical section through the car 51 containing a person A30 and a person B30'.
In Fig. 5, Y_C is the installation height of the distance image sensor 52, θ is its depression angle, ω is its vertical viewing angle, α is the angle between the lower limit of the field of view of the distance image sensor 52 and the plumb (vertical) direction, L and L' are the floor distances of person A30 and person B30' from the distance image sensor 52, P is the out-of-view amount of person A30, and reference numeral 40 denotes the part of person A30 that extends beyond the field of view of the distance image sensor 52.
The distance L is calculated from the coordinate value (X, Y, Z) of the person's centroid as the in-plane distance (X² + Y²)^(1/2) from the origin O.
The centroid of person A30 can be obtained as follows: first the centroid of person A30 in the distance image 151 and the distance value at that centroid are obtained, and then the coordinate value (X, Y, Z) of the centroid in the coordinate system 59 is calculated from that distance value and the installation position and angles of the distance image sensor 52 stored in the geometric data holding unit 1.
The centroid of person A30 is one example of a representative point of person A30; another representative point may be obtained instead, for example by extracting the top of the head of person A30 as the representative point.
In Fig. 5, the out-of-view amount P of person A30 can be calculated from the distance L by the following formula (5), where max() is a function returning the larger of its arguments.
In formula (5), the larger the distance L, the smaller the second argument of max() becomes; when L exceeds a certain value, the out-of-view amount P becomes 0. When P = 0 in formula (5), person A30 does not extend beyond the field of view and the whole body of person A30 appears in the distance image 151. For example, calculating the out-of-view amount of person B30' in Fig. 5, whose distance L' is larger, gives 0, i.e. no part is outside the field of view.
(Equation 5)
P = max(0, Y_C − L/tan(α)) = max(0, Y_C − L/tan(90° − θ − ω/2))    …(5)
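Formula (5) can be written directly in code; the following sketch uses made-up geometry values in the example call and is only an illustration:

```python
import math

def out_of_view_amount(dist_l, sensor_height, depression_deg, vert_fov_deg):
    """Formula (5): the height by which the person's body falls below the
    lower edge of the field of view.

    dist_l: floor distance L of the person's representative point from O.
    sensor_height: installation height Y_C of the distance image sensor.
    depression_deg, vert_fov_deg: depression angle theta and vertical
    viewing angle omega, in degrees."""
    alpha = math.radians(90.0 - depression_deg - vert_fov_deg / 2.0)
    return max(0.0, sensor_height - dist_l / math.tan(alpha))

# Example with assumed values: sensor 2.3 m up, 40 deg depression, 60 deg FOV
print(out_of_view_amount(dist_l=0.8, sensor_height=2.3,
                         depression_deg=40.0, vert_fov_deg=60.0))
```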
The motion feature correction unit 6 executes the flowchart of Fig. 6 and corrects the motion feature quantity according to the out-of-view amount computed by the out-of-view amount calculation unit 4, so that the motion feature quantity of a person extending beyond the field of view of the distance image sensor 52 approaches the motion feature quantity of a person whose whole body is within the field of view. A statistical estimation method, described below, is used as the technique for this correction; in the present embodiment the correction is performed by regression estimation.
In the flowchart of Fig. 6, step 1 (hereinafter "step" is abbreviated "S") judges whether there is an out-of-view amount; if there is none, the process proceeds to S4 and the motion feature quantity is left uncorrected. In this case the whole body of person B30' is present in the distance image 151, and the motion feature quantity can be obtained from the person's full figure.
If S1 judges that there is an out-of-view amount, the process proceeds to S2 and, in order to estimate by regression, from the motion feature quantity obtained while part of the body of person A30 is outside the field of view of the distance image sensor 52, the motion feature quantity that would be obtained if the whole body of person A30 were within the field of view, the regression estimation parameters (regression coefficients) are selected according to the value of the out-of-view amount.
Here, regression estimation refers to the following technique: given two groups of variables collected in advance as samples, an objective variable and explanatory variables, the statistical correlation between them is used to compute, in the least-squares sense, the best value of the objective variable for a given value of the explanatory variables.
The regression coefficients are the parameters used for this regression estimation. In general, let the explanatory variables be x = [x_1, x_2, ..., x_N], the objective variable be y, their means be μ_x = [μ_x1, μ_x2, ..., μ_xN] and μ_y, and the regression coefficients be a = [a_1, a_2, ..., a_N]. The regression estimate y' of the objective variable y can then be calculated by formula (6).
(Equation 6)
y' = μ_y + Σ_{i=1..N} a_i (x_i − μ_xi)    …(6)
In S2, the table T1 of regression estimation parameters for each out-of-view amount shown in Fig. 7 is referred to, and regression estimation is performed with the motion feature quantity when there is no out-of-view amount (when the person's whole body appears) as the objective variable and the motion feature quantity when there is an out-of-view amount as the explanatory variable. When the out-of-view amount is P, the table entry P_k closest to P is selected in S2. This table is an example; more variables can also be handled.
When the selection of the regression coefficients in S2 is complete, the process proceeds to S3, and the regression estimate f_j' of the j-th component of the motion feature quantity when there is no out-of-view amount is computed using the following formula (7). In formula (7), μ_fj is the mean of the j-th component of the motion feature quantity over the samples collected in advance, and is calculated beforehand. By computing formula (7) for all components, the motion feature correction unit 6 can calculate, from the motion feature quantity [x_1, x_2, ..., x_N] obtained when there is an out-of-view amount, the regression estimate f' = [f_1', f_2', ..., f_N'] of the motion feature quantity when there is none.
(Equation 7)
f_j' = μ_fj + Σ_{i=1..N} a_ji (x_i − μ_xi)    …(7)
The motion feature correction unit 6 outputs this regression estimate f' as the corrected motion feature quantity, that is, the feature quantity corrected for the out-of-view amount. The regression estimation parameters of table T1 are computed in advance from the motion feature quantities of samples of a person A30 with each prescribed out-of-view amount [P_1, P_2, ..., P_M] and of a person B30' with no out-of-view amount.
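A compact sketch of steps S2 and S3 above, under an assumed layout for table T1 (a dictionary keyed by the discrete out-of-view amounts P_k); the key names and data shapes are illustrative only:

```python
import numpy as np

def correct_motion_feature(x, table, out_of_view_p):
    """Sketch of the correction of formula (7), under assumed data layouts.

    x: motion feature vector measured while part of the body is out of view.
    table: assumed dict mapping each discrete out-of-view amount P_k to a
        dict with 'A' (N x N matrix of regression coefficients a_ji),
        'mu_x' (mean of the explanatory features) and 'mu_f' (mean of the
        whole-body features), all estimated beforehand from sample pairs.
    out_of_view_p: out-of-view amount P from formula (5).
    """
    x = np.asarray(x, dtype=float)
    if out_of_view_p <= 0:
        return x                      # whole body visible: no correction (S4)
    # S2: choose the table entry whose P_k is closest to the measured P
    p_k = min(table, key=lambda p: abs(p - out_of_view_p))
    entry = table[p_k]
    # S3: formula (7) applied to all N components at once
    return entry["mu_f"] + entry["A"] @ (x - entry["mu_x"])
```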
This regression estimate f' has the following properties. Suppose the out-of-view amount of the person 130b kicking the wall 154b in Fig. 4 is P, and consider the regression estimate f' of the motion feature quantity computed by the motion feature correction unit 6.
This assumes that the regression estimation parameters of table T1 have been computed appropriately from samples of motion feature quantities that include actions of kicking the wall 154a, like the person 130a of Fig. 3 who has no out-of-view amount, or of kicking any other wall of the car 51.
Comparing Fig. 3 and Fig. 4, the feet of the person 130b barely appear, so in the motion feature quantity f_b of the person 130b the values of the components that capture the movement or shape of the feet are reduced, compared with the motion feature quantity f_a of the person 130a, by the amount of the missing foot movement.
However, as described for the motion feature extraction unit 5, the person 130b also shows arm swings and upper-body shaking accompanying the kick against the wall 154b, and the feature components of these actions are correlated with the foot-movement components obtained when there is no out-of-view amount; formula (7) can therefore estimate the motion feature quantity f' for the case with no out-of-view amount.
The action recognition unit 7 takes the motion feature quantity output by the motion feature correction unit 6 as input, and infers and identifies the most appropriate action from the categories associated with motion feature quantities registered in advance.
Besides the wall-kicking action of Fig. 3, these categories include several actions assumed to occur in the car 51. Examples are abnormal actions such as colliding with a wall or attacking another person, and normal actions such as the walking a person normally does when boarding the car 51 or smoothing one's hair in the car 51. The categories associated with motion feature quantities may also include more actions.
To improve the recognition performance of the action recognition unit 7, applying neural network techniques is effective. This can be realized by learning the weight coefficients of the neural network in advance from learning samples of the motion feature quantities of each category. The motion feature quantities used in this learning are those for the case with no out-of-view amount.
Besides a neural network, the recognition function of the action recognition unit 7 can be realized with any classifier that can handle multiple categories, for example a support vector machine (SVM) or learning vector quantization.
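Purely as an illustration of the classifier option mentioned above, a support vector machine could be trained on corrected whole-body feature vectors as follows; scikit-learn and the random stand-in data are assumptions, not part of the specification:

```python
import numpy as np
from sklearn.svm import SVC

# Stand-in data: corrected 251-dimensional motion feature vectors f' and
# their action-category labels (e.g. 0 = normal walking, 1 = kicking a wall).
rng = np.random.default_rng(0)
X_train = rng.normal(size=(40, 251))
y_train = np.array([0, 1] * 20)

classifier = SVC(kernel="rbf")
classifier.fit(X_train, y_train)

# At run time the corrected feature from the motion feature correction unit
# is assigned to one of the predefined action categories.
f_corrected = rng.normal(size=(1, 251))
print(classifier.predict(f_corrected)[0])
```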
The control unit 8 performs abnormality-handling control such as the following according to the action recognized by the action recognition unit 7; which control to perform may be decided according to the operator's preferences.
For example, when the action recognition unit 7 identifies an abnormal action of a person, the control unit 8 performs at least one of the following: recording, in a recording device (not shown) in the car 51, the distance images of the distance image sensor 52 of Fig. 1 or the video of a camera (not shown) in the car 51 as a record of the abnormal action; outputting an alarm to an alarm device such as a loudspeaker or monitor (not shown); changing the destination floor of the car 51, such as stopping at the nearest floor; or door control that opens and closes the door 53.
For example, when the action recognition unit 7 has identified an abnormal action such as the wall-kicking of Fig. 3, the abnormality-handling part of the control unit 8 records the distance images or camera video that serve as its evidence in the recording device in the car 51. The category of the identified action may be attached to the recorded distance images or video so that it is easy to see what kind of abnormal action was recorded.
Alternatively, the abnormality-handling part of the control unit 8 may output an alarm to the loudspeaker or monitor to deter the person performing the abnormal action, or may have the car 51 stop at the nearest floor and open the door to urge the person performing the abnormal action to get off. In addition, to ensure the safety of the passengers around the person performing the abnormal action, the system may also contact a central management center and call a guard.
According to the first embodiment described above, a person's actions can be recognized with high accuracy even when part of the person's body extends beyond the field of view of the distance image sensor 52. The recognized action can also be used for recording in the recording device, alarms from the loudspeaker or monitor, and control of the car 51.
In the description of the first embodiment, the case where the lower half of the person extends beyond the field of view of the distance image sensor 52 was used as an example, but the case where the upper half of the person extends beyond the field of view is handled in the same way.
In that case, the out-of-view amount calculation unit 4 calculates, from the person's position, the amount by which the person extends above the upper limit of the field of view of the distance image sensor 52. The motion feature correction unit 6 holds in advance a table of regression estimation parameters for each upper-side out-of-view amount, similar to table T1, and selects the regression parameters according to the upper-side out-of-view amount calculated by the out-of-view amount calculation unit 4. Cases where the person extends beyond the left or right side are handled in the same way.
In the description of the first embodiment, the motion feature correction unit 6 corrects the motion feature quantity by regression estimation, but other statistical estimation methods capable of estimating continuous values may also be used. For example, multiple regression analysis with second-order and higher terms of the motion feature quantity f (f_1², f_2², f_1·f_2, and so on) can be applied, and fuzzy inference can also be adopted.
When an estimation method other than regression estimation is applied in the motion feature correction unit 6, the correction formula (7) is changed according to the applied estimation method, and the data in table T1 must be prepared in advance accordingly.
In the first embodiment, to create the data of table T1, samples of distance images 151 of moving people must be collected for each out-of-view amount and their motion feature quantities extracted.
The samples of distance images 151 must be taken from actions of multiple categories so as to cover the categories of the action recognition unit 7. Instead of actually photographing distance images 151 of real people, the samples may be replaced by people synthesized with computer graphics.
A computer-graphics person has the same size as an actual person and desirably has the same joints; the actions of the computer-graphics person can be created by controlling these joints.
A hypothetical camera system corresponding to the viewing angle, installation angles, and installation position of the geometric data holding unit 1 is set up in the computer-graphics environment, and the distance image of the computer-graphics person is obtained by computing, for each pixel, the distance value between the computer-graphics person and the hypothetical camera system.
By synthesizing samples of the distance images 151 with computer graphics in this way, the samples can be collected with far less labor than actually photographing distance images 151 of real people, improving efficiency.
The computer-graphics people may also be varied in height, build, and clothing, and the data of table T1 created so as to cover this variety, so that the correction works for distance images 151 of people of various heights, builds, and clothing.
Embodiment 2
Next, the image recognition device of a second embodiment of the present invention is described. Like the first embodiment, the image recognition device of the second embodiment can recognize with high accuracy the actions of a person within the imaged field of view even when part of the person's body extends beyond the field of view of the distance image sensor.
In the second embodiment, the overall structure of the image recognition device is the same as that shown in Fig. 1. The functional blocks differ from those in Fig. 2 of the first embodiment only in the out-of-view amount calculation unit 4 and the motion feature correction unit 6; the other functional blocks are the same.
In the second embodiment, the out-of-view amount calculation unit 4 and the motion feature correction unit 6 of the first embodiment described earlier are replaced by a person area filter unit 9. The idea of the person area filter unit 9 is to divide the person into a plurality of regions (for example body-part regions such as the head, arms, torso, and legs, or a plurality of height bands based on the height above the floor), obtain for each region the proportion that extends beyond the field of view of the distance image 151, and, according to the person's action and movement, apply a filter that does not evaluate, or that down-weights, the feature quantities of regions with a large out-of-view proportion.
Therefore, for both a person whose whole body appears and a person whose body partly extends beyond the field of view, only the feature quantities of regions lying largely within the field of view are used; since the feature quantities of the two cases are thus similar, recognition can be performed with high accuracy and the reliability of image recognition is improved.
In other words, from the part of the distance image 151 corresponding to the person extracted by the person extraction unit 3, the person area filter unit 9 extracts the portion that appears regardless of where in the car 51 the person is located. Put differently, it removes from the extracted person the parts that may go out of view, leaving only the parts of the person's body that always appear at any position within the range the person is assumed to move in.
As a result, the motion feature quantities extracted by the motion feature extraction unit 5 become similar regardless of the person's position in the car 51, which makes recognition of the action by the action recognition unit 7 easier.
The processing of the person area filter unit 9 is explained using Fig. 9 and Fig. 10. In Fig. 9 and Fig. 10, a person 131a and a person 131b perform the action of striking the wall 154a and the wall 154b, respectively.
Therefore, wherever the person 131a and the person 131b move within the assumed movement range inside the car 51, the pixels of the person above a height PF always appear in the distance image 151. The region 141a below the height PF is extracted from the person 131a by referring to the Y coordinate obtained when the pixels inside the person 131a are transformed into coordinate values of the coordinate system 59 with formulas (1), (2), and (3); the region 141b is extracted from the person 131b by the same procedure.
In Fig. 10, a large part of the feet of the person 131b extends beyond the field of view of the distance image sensor 52. Therefore, for actions accompanied by foot movement, such as the person 131a and the person 131b striking a wall, if motion feature quantities were extracted from the whole bodies of the person 131a and the person 131b, the feature quantities would differ because the extent of the feet appearing in the distance image 151 differs.
In contrast, the motion feature extraction unit 5 of the second embodiment extracts motion feature quantities from the images of the person 131a and the person 131b after the person area filter unit 9 has removed the regions 141a and 141b; the observable extents of the person 131a and the person 131b are then roughly the same, and their motion feature quantities are similar.
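A minimal sketch of the height-based filtering described above, assuming the pixels have already been converted to coordinate-system-59 values with formulas (1) to (3) and that the Y component is the height above the floor, as in the text; the height PF value is illustrative:

```python
import numpy as np

def filter_person_region(pixels_xyz, person_mask, height_pf=0.8):
    """Keep only the person pixels above the height PF, i.e. the parts of
    the body that appear wherever the person stands in the car 51.

    pixels_xyz: H x W x 3 array of (X, Y, Z) values in coordinate system 59.
    person_mask: boolean H x W mask from the person extraction unit 3."""
    above_pf = pixels_xyz[..., 1] > height_pf   # Y treated as height
    return person_mask & above_pf
```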
According to the second embodiment, for both a person whose whole body appears and a person whose body partly extends beyond the field of view, only the feature quantities of regions lying largely within the field of view are used. The feature quantities are therefore similar regardless of where the person appears in the distance image, so recognition can be performed with high accuracy and the reliability of image recognition is improved.
In the present embodiment the region filtered by the person area filter unit 9 is the feet, but other regions may be chosen as appropriate, and a plurality of regions may be filtered.
Embodiment 3
The image recognition device of the third embodiment described below can recognize with high accuracy the actions of a person within the imaged field of view even when the motion feature quantity varies with the person's direction.
In the third embodiment, the overall structure of the image recognition device is the same as that shown in Fig. 1. The functional blocks differ from those in Fig. 2 of the first embodiment in the geometric data holding unit 1, the out-of-view amount calculation unit 4, and the motion feature correction unit 6; the other functional blocks are the same.
In the third embodiment, the geometric data holding unit 1, the out-of-view amount calculation unit 4, and the motion feature correction unit 6 used in the first embodiment are replaced by a geometric data holding unit 11 with additional stored information, a surrounding structure recognition unit 12, and a person coordinate transformation unit 13.
In Fig. 11, the surrounding structure recognition unit 12 has the function of extracting the structures around the person (surrounding structures) from the distance image of the distance image sensor 52.
In addition to the data of the geometric data holding unit 1 used in the first embodiment (the viewing angle, installation angles, and installation position of the distance image sensor 52), the geometric data holding unit 11 also stores and holds data on the structures referred to by the surrounding structure recognition unit 12.
When the action of the person 130a, 130b, etc. in the distance image 151 extracted by the person extraction unit 3 is an action directed at a surrounding structure, the person coordinate transformation unit 13 has the function of transforming the coordinates of the person's distance image so that the direction of the person's action in the distance image 151 coincides with a predetermined reference direction, independently of the person's position.
That is, for whichever of the line segments 254a to 254d representing the walls of the car it is judged that the person performs an action such as striking, the person's distance image is coordinate-transformed so that, in the distance image viewed from above (preferably from directly above), the direction of the person's action (in this case the direction toward the line segment that is the target of the strike) coincides with a predetermined reference direction (for example the upward direction in the distance image), thereby making the person's direction of action uniform.
Here, in Fig. 12 a person 231a performs a striking action toward the line segment 254a (direction of action: up), and in Fig. 13 a person 231b performs a striking action toward the line segment 254b (direction of action: down). Of course, striking actions toward the line segments 254c and 254d also occur. In any case, the coordinates are transformed so that the direction of action becomes the reference direction, up, and the person's direction of action is made to coincide with the specific reference direction before the feature quantity is extracted, which suppresses the variation of feature quantities due to the person's direction of action.
Next, the detailed functions of the geometric data holding unit 11, the surrounding structure recognition unit 12, and the person coordinate transformation unit 13 are described.
The structure data of the geometric data holding unit 11 holds information about the walls of the car 51: that walls exist on four sides, that each wall is vertical and adjacent to the floor, and that adjacent walls are orthogonal to each other are stored and held as this information.
The surrounding structure recognition unit 12 first transforms the distance value of each pixel of the distance image 151 into a coordinate value (X, Y, Z) of the coordinate system 59 using formulas (1) and (2). Next, it synthesizes a distance image of these coordinate values as hypothetically viewed from directly above.
Fig. 12 is an example of a distance image of the car 51 viewed from directly above: reference numeral 251 is the distance image viewed from directly above, 254a, 254b, 254c, and 254d are line segments in the distance image representing the walls of the car 51, and 231a is a person facing toward the line segment 254a and performing a striking action.
The distance value of a pixel of this hypothetical overhead distance image is, as in Fig. 14, the distance from the corresponding point 50 of the pixel 150 of the distance image 151 to the hypothetical viewpoint. From the conditions on the walls of the car 51 held in the geometric data holding unit 11 (walls on four sides, each wall vertical and adjacent to the floor, adjacent walls orthogonal), the surrounding structure recognition unit 12 identifies the line segments 254a, 254b, 254c, and 254d as the walls of the car 51. The line segments 254a to 254d may also be obtained by applying a straight-line extraction method such as the Hough transform to the distance image 251.
The person coordinate transformation unit 13 judges which of the surrounding structures recognized by the surrounding structure recognition unit 12 is the target of the person's action and what the person's direction of action is in the overhead distance image 251, and then coordinate-transforms the person so that the direction of action coincides with the reference direction, thereby making the person's direction of action uniform.
In the overhead distance images 251 of Fig. 12 and Fig. 13, a person 231a and a person 231b perform striking actions toward the walls (line segments) 254a and 254b, respectively. It is inferred from their mutual distances which wall each person is striking.
In Fig. 12, the person coordinate transformation unit 13 sets for the person 231a a coordinate system 201a based on the line segment 254a nearest to the person 231a. In this case, a coordinate system is set in which the direction into the wall represented by the line segment 254a is the X' axis and the direction obtained by rotating the X' axis by -90° is the Z' axis.
Similarly, in Fig. 13, the person coordinate transformation unit 13 sets for the person 231b a coordinate system 201b based on the line segment 254b nearest to the person 231b: the direction into the wall represented by the line segment 254b is the X' axis and the direction obtained by rotating the X' axis by -90° is the Z' axis.
The person coordinate transformation unit 13 then applies a rotational coordinate transformation so that the coordinate systems 201a and 201b of the persons 231a and 231b match the coordinate system 200 of the car 51. In this case, the rotation amount for the person 231a is 0° and that for the person 231b is 180°. After the rotation, the situation of Fig. 13 therefore becomes a distance image close to that of Fig. 12, and the direction of action of the person 231b becomes the reference direction, up.
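A small sketch of the alignment performed by the person coordinate transformation unit 13, assuming the overhead image 251 has already been built and the nearest wall segment identified; the wall keys and unit vectors below are assumptions for illustration:

```python
import numpy as np

# Assumed wall directions in the overhead image 251 (unit vectors pointing
# "into" each wall), keyed by the line segment nearest to the person.
WALL_DIRECTIONS = {
    "254a": np.array([0.0, 1.0]),   # up (reference direction)
    "254b": np.array([0.0, -1.0]),  # down
    "254c": np.array([-1.0, 0.0]),  # left
    "254d": np.array([1.0, 0.0]),   # right
}

def align_to_reference(points_xy, nearest_wall, reference=np.array([0.0, 1.0])):
    """Rotate a person's overhead-view points so that the direction of action
    (toward the nearest wall) coincides with the reference direction.

    points_xy: N x 2 array of the person's pixel coordinates in image 251.
    nearest_wall: key of the wall segment the person is acting toward."""
    d = WALL_DIRECTIONS[nearest_wall]
    angle = np.arctan2(reference[1], reference[0]) - np.arctan2(d[1], d[0])
    c, s = np.cos(angle), np.sin(angle)
    rot = np.array([[c, -s], [s, c]])
    return points_xy @ rot.T

# The person 231b acting toward 254b (down) is rotated by 180 degrees.
print(align_to_reference(np.array([[1.0, 2.0]]), "254b"))
```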
The motion feature extraction unit 5 of the third embodiment then extracts motion feature quantities, in the same way as in the first embodiment, from the persons 231a and 231b in the coordinate-transformed distance image produced by the person coordinate transformation unit 13.
As described above, with the image recognition device of the third embodiment, roughly similar motion feature quantities can be extracted for a person's actions directed toward a wall in the car 51. These similar motion feature quantities have the effect of improving the reliability of image recognition in the action recognition unit 7.
In the third embodiment described above, the person coordinate transformation unit 13 hypothetically processes the person 231a etc. in the car 51 as viewed from directly above, but distance images viewed from other hypothetical viewpoints, such as from directly below or from the side, may also be used.
For example, for a hypothetical viewpoint from the side, the person's coordinate values are transformed so that the wall inside the car 51 appears on the right (or left) side. With such a side viewpoint, actions whose movement is large when seen from the lateral direction (such as crouching or falling down) become easy to recognize.
In the third embodiment the structures are the inner walls of the car 51, but this is only an example. For instance, the distance image sensor 52 may be installed in a parking lot, with automobiles recognized as the structures; by coordinate-transforming people whose actions target an automobile, applications become possible such as recognizing the actions of people getting into and out of the automobile, or recognizing the actions of a suspicious person damaging the automobile's windows.
In the first to third embodiments described above, the recognition device of the present invention is typically used to recognize the movement of a person in the car of an elevator device.
However, as stated before, the present invention is not limited to the illustrated embodiments; various modifications and applications are included within the scope of the technical concept of the present invention.
For example, systems that recognize a person's actions using a distance image sensor can be used widely in general, such as for monitoring elevator halls, monitoring for accidents near escalators, and also in gesture input devices that give instructions to a computer through a person's actions.
In these embodiments the image recognition device is built into the control device mounted on the car of the elevator device, but as another form of deployment the image recognition device may be placed in a management center, with only the distance images transmitted to the management center and the image recognition performed by the control device there.
Symbol description
1... geometric data holding unit, 2... distance image acquisition unit, 3... person extraction unit, 4... out-of-view amount calculation unit, 5... motion feature extraction unit, 6... motion feature correction unit, 7... action recognition unit, 8... control unit.
Claims (19)
1. A pattern recognition device at least possessing:
a range image acquisition unit that obtains, from a range image sensor, a range image including a personage;
a person extraction unit that extracts the above-mentioned personage from the above-mentioned range image;
a motion characteristic amount extraction unit that extracts a motion characteristic amount of the above-mentioned personage; and
an action recognition unit that infers the action of the above-mentioned personage from the motion characteristic amount of the above-mentioned personage,
the pattern recognition device being characterized by further possessing:
an excess amount calculation unit that obtains the excess amount by which the body of the above-mentioned personage extends beyond the view angle of the above-mentioned range image sensor; and
a motion characteristic amount correction unit that, when the body of the above-mentioned personage extends beyond the view angle of the above-mentioned range image sensor, infers the motion characteristic amount of the whole body of the above-mentioned personage based on the above-mentioned motion characteristic amount and the above-mentioned excess amount,
wherein the action of the above-mentioned personage is inferred by the above-mentioned action recognition unit from the motion characteristic amount of the above-mentioned motion characteristic amount correction unit.
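To make the data flow of claim 1 concrete, here is a heavily simplified Python/NumPy sketch (not part of the patent); the edge-contact heuristic used for the excess amount, the per-edge correction gain and the recognition threshold are invented placeholders rather than the statistically derived values the patent describes.

```python
import numpy as np

def excess_amount(person_rows, image_height):
    """Crude stand-in for the excess amount calculation unit: count how many
    image edges (top/bottom) the person region touches, as a proxy for how
    much of the body lies outside the view angle."""
    return int(person_rows.min() == 0) + int(person_rows.max() == image_height - 1)

def correct_feature(partial_feature, excess, gain_per_edge=1.15):
    """Crude stand-in for the motion characteristic amount correction unit:
    scale the partial-body feature toward a whole-body estimate."""
    return partial_feature * gain_per_edge ** excess

def recognize(feature, threshold=1.0):
    """Crude stand-in for the action recognition unit."""
    return "abnormal action" if feature > threshold else "normal action"

# Toy frame: person clipped at the bottom of a 120-row range image.
rows = np.arange(30, 120)          # image rows occupied by the person
feature = 0.95                     # e.g. vertical motion energy of the visible body
print(recognize(correct_feature(feature, excess_amount(rows, image_height=120))))
```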
2. The pattern recognition device according to claim 1, characterized in that
the above-mentioned range image acquisition unit computes the range image based on information from a geometric data holding unit that stores at least the view angle, installation position and installation angle of the above-mentioned range image sensor.
3. The pattern recognition device according to claim 1, characterized in that
the above-mentioned excess amount calculation unit calculates the amount by which the upper part or the lower part of the above-mentioned personage extends beyond the view angle of the above-mentioned range image sensor.
4. The pattern recognition device according to claim 1, characterized in that
the above-mentioned motion characteristic amount correction unit has a storage unit that stores statistically optimal values obtained from excess amounts collected in advance, and infers the motion characteristic amount of the whole body of the above-mentioned personage by statistical computation using these optimal values.
5. The pattern recognition device according to claim 4, characterized in that
the statistical computation performed by the above-mentioned motion characteristic amount correction unit takes the motion characteristic amount of the whole body of the above-mentioned personage observed without an excess amount as the objective variable and the motion characteristic amount observed with an excess amount as the explanatory variable, and infers the motion characteristic amount of the whole body of the above-mentioned personage by a regression that estimates the best value of the objective variable from the above-mentioned explanatory variable using the statistical coefficients, corresponding to the excess amount, stored in the above-mentioned storage unit.
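As a rough illustration of this regression, the Python/NumPy sketch below fits one first-order least-squares model per excess amount from pre-collected training pairs and then applies it; the training values, the use of a single scalar feature and the linear model form are assumptions made for this example, not figures from the patent.

```python
import numpy as np

# Training pairs collected in advance (claim 6 suggests they can be generated
# with computer graphics): for each excess amount, features measured with the
# body partially out of view (explanatory) and the matching whole-body
# features (objective). The numbers below are made up for illustration.
training = {
    1: (np.array([0.4, 0.6, 0.8]),      # explanatory: feature with one edge clipped
        np.array([0.55, 0.82, 1.05])),  # objective: whole-body feature
    2: (np.array([0.3, 0.5, 0.7]),
        np.array([0.6, 0.95, 1.3])),
}

# Least-squares fit of objective = a * explanatory + b, one (a, b) per excess amount.
coeffs = {k: np.polyfit(x, y, deg=1) for k, (x, y) in training.items()}

def infer_whole_body_feature(partial_feature, excess):
    """Infer the whole-body motion feature from the partial-body one using the
    regression coefficients stored for this excess amount."""
    a, b = coeffs[excess]
    return a * partial_feature + b

print(infer_whole_body_feature(0.65, excess=1))
```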
6. The pattern recognition device according to claim 5, characterized in that
the coefficients corresponding to the above-mentioned excess amount that are used in the inference of the motion characteristic amount by the above-mentioned motion characteristic amount correction unit are obtained using a personage created by computer graphics.
7. The pattern recognition device according to claim 1, characterized in that
the above-mentioned action recognition unit identifies at least an abnormal action of the above-mentioned personage from the motion characteristic amount of the above-mentioned motion characteristic amount correction unit.
8. The pattern recognition device according to claim 7, characterized in that
the above-mentioned action recognition unit stores identification information for identifying and judging normal actions and abnormal actions of the above-mentioned personage.
9. A lift appliance possessing the pattern recognition device according to any one of claims 1 to 8, a car, and the above-mentioned range image sensor installed in the above-mentioned car,
the lift appliance being characterized by further possessing a control unit which, if an abnormal action of the above-mentioned personage is identified by the above-mentioned action recognition unit, executes at least one of a control of recording the above-mentioned abnormal action or giving an alarm, a door control of the above-mentioned car, and a control of stopping the above-mentioned car at a floor.
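Purely as an illustration of such a control unit, the Python sketch below dispatches some of the listed controls when an abnormal action is reported; the controller class and its method names are invented for this example and do not come from the patent.

```python
class DummyCarController:
    """Stand-in for the lift appliance's control unit; method names are assumptions."""
    def record(self, label):        print(f"recording abnormal action: {label}")
    def raise_alarm(self, label):   print(f"alarm raised: {label}")
    def hold_doors_open(self):      print("holding car doors open")
    def stop_at_floor(self):        print("stopping car at nearest floor")

def on_abnormal_action(label, controller):
    """Execute at least one of the controls listed in the claim."""
    controller.record(label)
    controller.raise_alarm(label)
    controller.stop_at_floor()

on_abnormal_action("fall detected", DummyCarController())
```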
10. A pattern recognition device at least possessing:
a range image acquisition unit that obtains, from a range image sensor, a range image including a personage;
a person extraction unit that extracts the above-mentioned personage from the above-mentioned range image;
a motion characteristic amount extraction unit that extracts a motion characteristic amount of the above-mentioned personage; and
an action recognition unit that infers the action of the above-mentioned personage from the motion characteristic amount of the above-mentioned personage,
the pattern recognition device being characterized in that
a person area filter unit is provided which, in the range image of the personage extracted by the above-mentioned person extraction unit, removes every part other than the part of the body of the above-mentioned personage that always appears at any position within the assumed range of movement of the above-mentioned personage, even when part of the body extends beyond the view angle, and
the motion characteristic amount of the above-mentioned personage is extracted by the above-mentioned motion characteristic amount extraction unit from the range image of the above-mentioned personage obtained by the above-mentioned person area filter unit, and the action of the above-mentioned personage is inferred by the above-mentioned action recognition unit from this motion characteristic amount.
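As a rough illustration of this filter, the Python/NumPy sketch below masks out the image rows that could leave the view angle at the worst-case excess amount, keeping only the body region that is visible at every assumed position of the personage; the row-based simplification and all numerical values are assumptions made for this example.

```python
import numpy as np

def person_area_filter(person_mask, max_excess_rows):
    """Keep only the part of the person region that stays inside the view angle
    even at the maximum excess amount, by discarding the top rows that may be
    clipped (a simplification of the claimed filter)."""
    filtered = person_mask.copy()
    filtered[:max_excess_rows, :] = False
    return filtered

# Toy example: 120x160 person mask, up to 20 top rows can leave the view angle.
mask = np.zeros((120, 160), dtype=bool)
mask[10:110, 60:100] = True
stable = person_area_filter(mask, max_excess_rows=20)
print(int(mask.sum()), int(stable.sum()))
```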
11. The pattern recognition device according to claim 10, characterized in that
the above-mentioned range image acquisition unit computes the range image based on information from a geometric data holding unit that stores at least the view angle, installation position and installation angle of the above-mentioned range image sensor.
12. The pattern recognition device according to claim 10, characterized in that
the above-mentioned person area filter unit removes the part that extends beyond the view angle when the excess amount takes its maximum value.
13. The pattern recognition device according to claim 10, characterized in that
the above-mentioned action recognition unit stores identification information for identifying and judging normal actions and abnormal actions of the above-mentioned personage.
14. A lift appliance possessing the pattern recognition device according to any one of claims 10 to 13, a car, and the above-mentioned range image sensor installed in the above-mentioned car,
the lift appliance being characterized by further possessing a control unit which, if an abnormal action of the above-mentioned personage is identified by the above-mentioned action recognition unit, executes at least one of a control of recording the above-mentioned abnormal action or giving an alarm, a door control of the above-mentioned car, and a control of stopping the above-mentioned car at a floor.
15. A pattern recognition device at least possessing:
a range image acquisition unit that obtains, from a range image sensor, a range image including a personage;
a person extraction unit that extracts the above-mentioned personage from the above-mentioned range image;
a motion characteristic amount extraction unit that extracts a motion characteristic amount of the above-mentioned personage; and
an action recognition unit that infers the action of the above-mentioned personage from the motion characteristic amount of the above-mentioned personage,
the pattern recognition device being characterized by further possessing:
a surrounding structure identification unit that identifies a surrounding structure of the above-mentioned personage from the above-mentioned range image; and
a person coordinate conversion unit that, in the case where the action of the personage extracted by the above-mentioned person extraction unit is an action directed at the above-mentioned surrounding structure, applies a coordinate transformation to the range image of the above-mentioned personage so that the direction of the action of the above-mentioned personage on the above-mentioned range image agrees with a predetermined reference direction,
wherein the motion characteristic amount of the above-mentioned personage is extracted by the above-mentioned motion characteristic amount extraction unit from the range image of the above-mentioned personage obtained by the above-mentioned person coordinate conversion unit, and the action of the above-mentioned personage is inferred by the above-mentioned action recognition unit from this motion characteristic amount.
16. The pattern recognition device according to claim 15, characterized in that
the above-mentioned surrounding structure identification unit computes the range image based on information from a geometric data holding unit that stores, in addition to at least the view angle, installation position and installation angle of the above-mentioned range image sensor, structure information relating to the structure.
17. The pattern recognition device according to claim 15, characterized in that
the above-mentioned person coordinate conversion unit, in the case where the above-mentioned surrounding structure is a wall, applies the coordinate transformation to the range image of the above-mentioned personage with the above-mentioned wall as a reference.
18. The pattern recognition device according to claim 15, characterized in that
the above-mentioned action recognition unit stores identification information for identifying and judging normal actions and abnormal actions of the above-mentioned personage.
19. A lift appliance possessing the pattern recognition device according to any one of claims 15 to 18, a car, and the above-mentioned range image sensor installed in the above-mentioned car,
the lift appliance being characterized by further possessing a control unit which, if an abnormal action of the above-mentioned personage is identified by the above-mentioned action recognition unit, executes at least one of a control of recording the above-mentioned abnormal action or giving an alarm, a door control of the above-mentioned car, and a control of stopping the above-mentioned car at a floor.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2012161373A JP5877135B2 (en) | 2012-07-20 | 2012-07-20 | Image recognition apparatus and elevator apparatus |
JP2012-161373 | 2012-07-20 |
Publications (2)
Publication Number | Publication Date |
---|---|
CN103577827A true CN103577827A (en) | 2014-02-12 |
CN103577827B CN103577827B (en) | 2016-12-28 |
Family ID: 50049576
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201310305951.5A Active CN103577827B (en) | 2012-07-20 | 2013-07-19 | Pattern recognition device and lift appliance |
Country Status (2)
Country | Link |
---|---|
JP (1) | JP5877135B2 (en) |
CN (1) | CN103577827B (en) |
Families Citing this family (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106144862B (en) | 2015-04-03 | 2020-04-10 | 奥的斯电梯公司 | Depth sensor based passenger sensing for passenger transport door control |
CN112850406A (en) | 2015-04-03 | 2021-05-28 | 奥的斯电梯公司 | Traffic list generation for passenger transport |
JP6713837B2 (en) * | 2016-05-31 | 2020-06-24 | 株式会社日立製作所 | Transport equipment control system and transport equipment control method |
JP6617081B2 (en) * | 2016-07-08 | 2019-12-04 | 株式会社日立製作所 | Elevator system and car door control method |
JP6769859B2 (en) | 2016-12-19 | 2020-10-14 | 株式会社日立エルジーデータストレージ | Image processing device and image processing method |
US12142002B2 (en) | 2019-09-20 | 2024-11-12 | Sony Interactive Entertainment Inc. | Information processing device, information processing method, and program |
AT523031A1 (en) * | 2019-10-14 | 2021-04-15 | View Promotion Gmbh | METHOD OF MONITORING AN ELEVATOR CAB |
CN111814587B (en) * | 2020-06-18 | 2024-09-03 | 浙江大华技术股份有限公司 | Human behavior detection method, teacher behavior detection method, and related systems and devices |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20030156756A1 (en) * | 2002-02-15 | 2003-08-21 | Gokturk Salih Burak | Gesture recognition system using depth perceptive sensors |
CN1759613A (en) * | 2003-03-20 | 2006-04-12 | 因温特奥股份公司 | Monitoring a lift area by means of a 3d sensor |
JP2009014415A (en) * | 2007-07-02 | 2009-01-22 | National Institute Of Advanced Industrial & Technology | Object recognition apparatus and object recognition method |
JP2010067079A (en) * | 2008-09-11 | 2010-03-25 | Dainippon Printing Co Ltd | Behavior analysis system and behavior analysis method |
CN102317977A (en) * | 2009-02-17 | 2012-01-11 | 奥美可互动有限责任公司 | Method and system for gesture recognition |
- 2012-07-20: JP application JP2012161373A, patent JP5877135B2, status: Active
- 2013-07-19: CN application CN201310305951.5A, patent CN103577827B, status: Active
Cited By (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107000982A (en) * | 2014-12-03 | 2017-08-01 | 因温特奥股份公司 | Replace the system and method for interaction with elevator |
US10457521B2 (en) | 2014-12-03 | 2019-10-29 | Inventio Ag | System and method for alternatively interacting with elevators |
CN106395528A (en) * | 2015-07-27 | 2017-02-15 | 株式会社日立制作所 | Parameter adjustment method, parameter adjustment device for range image sensor and elevator system |
CN106395528B (en) * | 2015-07-27 | 2018-08-07 | 株式会社日立制作所 | Parameter regulation means, parameter adjustment controls and the elevator device of range image sensor |
CN105035888A (en) * | 2015-08-03 | 2015-11-11 | 陈思 | Intelligent elevator for preventing dangerous person from riding |
CN105502110A (en) * | 2015-08-03 | 2016-04-20 | 陈思 | Intelligent elevator capable of refusing dangerous persons |
CN109071149A (en) * | 2016-05-04 | 2018-12-21 | 通力股份公司 | System and method for enhancing elevator positioning |
CN106946109A (en) * | 2017-04-25 | 2017-07-14 | 中国计量大学 | A kind of elevator induction installation and method based on Laser Radar Scanning |
US11312594B2 (en) | 2018-11-09 | 2022-04-26 | Otis Elevator Company | Conveyance system video analytics |
CN114424263A (en) * | 2020-03-25 | 2022-04-29 | 株式会社日立制作所 | Behavior Recognition Server and Behavior Recognition Method |
CN114424263B (en) * | 2020-03-25 | 2023-06-27 | 株式会社日立制作所 | Behavior recognition server and behavior recognition method |
CN111847166A (en) * | 2020-07-20 | 2020-10-30 | 日立楼宇技术(广州)有限公司 | Elevator protection method, device and system |
Also Published As
Publication number | Publication date |
---|---|
JP5877135B2 (en) | 2016-03-02 |
JP2014021816A (en) | 2014-02-03 |
CN103577827B (en) | 2016-12-28 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN103577827A (en) | Image identification device and elevator device | |
KR101127493B1 (en) | Image processing apparatus, image processing method and air conditioning control apparatus | |
Navarro-Serment et al. | Pedestrian detection and tracking using three-dimensional ladar data | |
DE102013012224B4 (en) | Device for removing loosely stored objects by a robot | |
Zhang et al. | A viewpoint-independent statistical method for fall detection | |
JP2009048430A (en) | Customer behavior analysis device, customer behavior determination system, and customer buying behavior analysis system | |
JP5853141B2 (en) | People counting device, people counting system, and people counting method | |
US11922391B2 (en) | Article deduction apparatus, article deduction method, and program | |
JP2014211763A5 (en) | ||
JP7210890B2 (en) | Behavior recognition device, behavior recognition method, its program, and computer-readable recording medium recording the program | |
JP2010067079A (en) | Behavior analysis system and behavior analysis method | |
US11779260B2 (en) | Cognitive function evaluation method, cognitive function evaluation device, and non-transitory computer-readable recording medium in which cognitive function evaluation program is recorded | |
JP2018156408A (en) | Image recognition imaging device | |
DE102019200407A1 (en) | PARTIAL DETECTION AND DAMAGE CHARACTERIZATION BY DEEP-LEARNING | |
CN113569793A (en) | Fall recognition method and device | |
CN115004268A (en) | Fraud Detection System and Method | |
JP2021081804A (en) | State recognition device, state recognition method, and state recognition program | |
CN107408354A (en) | Act apparatus for evaluating, action appraisal procedure and computer-readable recording medium | |
JP7011569B2 (en) | Skill level judgment system | |
EP3838432A1 (en) | System for predicting contraction | |
Nguyen et al. | Extracting silhouette-based characteristics for human gait analysis using one camera | |
US11587325B2 (en) | System, method and storage medium for detecting people entering and leaving a field | |
DE102009007842A1 (en) | Method and device for operating a video-based driver assistance system in a vehicle | |
CN116524584A (en) | Detection method and device for detecting human fall, human pick-up or put-back behavior | |
JP2005033518A (en) | Information collection system |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
C14 | Grant of patent or utility model | ||
GR01 | Patent grant |