CN106966277A - Boarding detection system of elevator - Google Patents
Boarding detection system of elevator
- Publication number
- CN106966277A CN201710025809.3A CN201710025809A
- Authority
- CN
- China
- Prior art keywords
- personage
- door
- wish
- seating
- block
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B66—HOISTING; LIFTING; HAULING
- B66B—ELEVATORS; ESCALATORS OR MOVING WALKWAYS
- B66B13/00—Doors, gates, or other apparatus controlling access to, or exit from, cages or lift well landings
- B66B13/02—Door or gate operation
- B66B13/14—Control systems or devices
- B66B13/143—Control systems or devices electrical
- B66B13/146—Control systems or devices electrical method or algorithm for controlling doors
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B66—HOISTING; LIFTING; HAULING
- B66B—ELEVATORS; ESCALATORS OR MOVING WALKWAYS
- B66B13/00—Doors, gates, or other apparatus controlling access to, or exit from, cages or lift well landings
- B66B13/24—Safety devices in passenger lifts, not otherwise provided for, for preventing trapping of passengers
Landscapes
- Engineering & Computer Science (AREA)
- Automation & Control Theory (AREA)
- Indicating And Signalling Devices For Elevators (AREA)
- Elevator Door Apparatuses (AREA)
Abstract
The present invention detects, over a wide range and with high accuracy, a user who intends to board and reflects the result in the opening/closing control of the door. A boarding detection system for an elevator according to one embodiment of the present invention includes an image capturing unit, a motion detection unit, a position estimation unit, a history holding unit, a boarding intention estimation unit, and a control unit. For each image, the position estimation unit extracts a block in which motion has been detected by the motion detection unit and estimates a coordinate position in that block as the position of a person. The history holding unit holds, as history data, the position data representing the positions of the person estimated by the position estimation unit from the present back to a prescribed time earlier. The position estimation unit refers to the history data held by the history holding unit and decides whether to correct the estimated position of the person.
Description
This application is based upon and claims the benefit of priority from Japanese Patent Application No. 2016-4594, filed January 13, 2016; the entire contents of which are incorporated herein by reference.
Technical field
Embodiments of the present invention relate to a boarding detection system for an elevator that detects a user who is about to board the car.
Background art
Generally, when the car of an elevator arrives at the hall and opens its door, the door starts to close after a prescribed time has elapsed. At this time, because the user of the elevator does not know when the car door will close, the user sometimes bumps into the closing door while entering the car from the hall.
To avoid such a collision with the door at boarding, it has been considered to detect a user who is about to board the car with a sensor and to control the opening/closing operation of the door accordingly. A photoelectric sensor is usually used as such a sensor. That is, a photoelectric sensor is installed in advance above the car entrance and optically detects a user who is about to board the car. While the user is being detected, the door is kept open, so the user can be prevented from bumping into the closing door and from being drawn into the door pocket.
Summary of the invention
However, the detection range of a photoelectric sensor is narrow, and a user can be detected only at a pinpoint. As a result, the following situations occur: when a user is at a place somewhat far from the car, the user cannot be detected and the door starts to close; or, conversely, a person merely passing by near the car is erroneously detected and the door is opened.
The problem to be solved by the present invention is to provide a boarding detection system for an elevator that can detect, over a wide range and with high accuracy, a user who intends to board and reflect the detection in the opening/closing control of the door.
A boarding detection system for an elevator according to one embodiment includes an image capturing unit, a motion detection unit, a position estimation unit, a history holding unit, a boarding intention estimation unit, and a control unit. When the car has arrived at the hall, the image capturing unit captures a prescribed range near the door of the car in the direction of the hall. The motion detection unit compares, in units of blocks, the brightness of a plurality of images that are consecutive in time series and captured by the image capturing unit, to detect the motion of a person. For each of the images, the position estimation unit extracts a block in which motion has been detected by the motion detection unit and estimates a coordinate position in the block as the position of the person. The history holding unit holds, as history data, the position data representing the positions of the person estimated by the position estimation unit from the present back to a prescribed time earlier. The boarding intention estimation unit estimates whether the user intends to board from the time-series change of the position of the person estimated by the position estimation unit. The control unit controls the opening/closing operation of the door based on the estimation result of the boarding intention estimation unit. The position estimation unit refers to the history data held by the history holding unit and decides whether to correct the estimated position of the person.
Brief description of the drawings
Fig. 1 is a diagram showing the configuration of a boarding detection system for an elevator according to an embodiment.
Fig. 2 is a diagram showing an example of an image captured by the camera in the embodiment.
Fig. 3 is a diagram showing the state in which the captured image is divided into blocks in the embodiment.
Fig. 4 is a diagram for explaining the detection areas in the real space in the embodiment.
Fig. 5 is a diagram for explaining the coordinate system in the real space in the embodiment.
Fig. 6 is a diagram for explaining motion detection by image comparison in the embodiment, schematically showing a part of the image captured at time t.
Fig. 7 is a diagram for explaining motion detection by image comparison in the embodiment, schematically showing a part of the image captured at time t+1.
Fig. 8 is a flowchart showing the overall processing flow of the boarding detection system in the embodiment.
Fig. 9 is a flowchart showing the motion detection processing of the boarding detection system in the embodiment.
Fig. 10 is a flowchart showing the position estimation processing of the boarding detection system in the embodiment.
Fig. 11 is a flowchart showing the boarding intention estimation processing of the boarding detection system in the embodiment.
Fig. 12 is a diagram showing the change of the foot position of a user with boarding intention in the embodiment.
Fig. 13 is a flowchart showing the processing operation of the boarding detection system during the door closing process of the car in the embodiment.
Fig. 14 is a diagram for explaining motion detection in the case where a moving body crosses the hall in the embodiment, schematically showing a part of the image captured at time t.
Fig. 15 is a diagram for explaining motion detection in the case where a moving body crosses the hall in the embodiment, schematically showing a part of the image captured at time t+1.
Fig. 16 is a diagram for explaining motion detection in the case where a moving body crosses the hall in the embodiment, schematically showing a part of the image captured at time t+2.
Fig. 17 is a diagram showing the change of the foot position that is erroneously judged as indicating boarding intention in the embodiment.
Fig. 18 is a flowchart showing another position estimation processing of the boarding detection system in the embodiment.
Fig. 19 is a diagram showing the change of the foot position of a person without boarding intention in the embodiment.
Fig. 20 is a diagram showing the change of the foot position of a user with boarding intention in the embodiment.
Embodiment
Hereinafter, an embodiment will be described with reference to the drawings.
Fig. 1 is a diagram showing the configuration of a boarding detection system for an elevator according to an embodiment. Here, one car is taken as an example, but a plurality of cars have the same configuration.
A camera 12 is installed above the entrance of the car 11. Specifically, the camera 12 is installed in the lintel plate 11a covering the upper part of the entrance of the car 11, with its lens portion facing the hall 15 side. The camera 12 is, for example, a small monitoring camera such as an in-vehicle camera; it has a wide-angle lens and can continuously capture several frames per second (for example, 30 frames/second). When the car 11 arrives at each floor and opens its door, the camera 12 captures the state of the hall 15 so as to include the state near the car door 13 inside the car 11.
The capture range at this time is adjusted to L1+L2 (L1 >> L2). L1 is the capture range on the hall side and is, for example, 3 m from the car door 13 toward the hall 15. L2 is the capture range on the car side and is, for example, 50 cm from the car door 13 toward the back of the car. Note that L1 and L2 are ranges in the depth direction; the range in the width direction (the direction orthogonal to the depth direction) is at least larger than the width of the car 11.
In the hall 15 of each floor, a hall door 14 that can be opened and closed is installed at the arrival entrance of the car 11. The hall door 14 engages with the car door 13 when the car 11 arrives and opens and closes together with it. The power source (door motor) is on the car 11 side, and the hall door 14 merely follows the car door 13 when opening and closing. In the following description, it is assumed that the hall door 14 also opens when the car door 13 opens and also closes when the car door 13 closes.
Each image captured by the camera 12 is analyzed in real time by an image processing apparatus 20. In Fig. 1, the image processing apparatus 20 is drawn outside the car 11 for convenience of explanation, but in practice it is housed in the lintel plate 11a together with the camera 12.
The image processing apparatus 20 includes a storage unit 21 and a user detection unit 22. The storage unit 21 has a buffer area for sequentially storing the images captured by the camera 12 and for temporarily holding the data necessary for the processing of the user detection unit 22. One such buffer is a foot position history holding unit 21a. The user detection unit 22 focuses on the motion of the person or object nearest to the car door 13 in the plurality of time-series consecutive images captured by the camera 12 and detects whether there is a user who intends to board. Functionally, the user detection unit 22 is composed of a motion detection unit 22a, a position estimation unit 22b, and a boarding intention estimation unit 22c.
The motion detection unit 22a compares the brightness of the images in units of blocks to detect the motion of a person or object. The "motion of a person or object" mentioned here refers to the motion of a moving body in the hall 15, such as a person or a wheelchair.
The position estimation unit 22b extracts, from the blocks in which motion has been detected by the motion detection unit 22a in each image, the block nearest to the car door 13, and estimates the coordinate position of that block in the direction from the center of the car door 13 (the center of the door frontage width) toward the hall (the Y coordinate shown in Fig. 5) as the position of the user (foot position). The boarding intention estimation unit 22c determines whether the user intends to board from the time-series change of the position estimated by the position estimation unit 22b.
Note that these functions (the motion detection unit 22a, the position estimation unit 22b, and the boarding intention estimation unit 22c) may be provided in the camera 12 or in the car control device 30.
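Purely as an illustration of how these functional units relate to one another, the following Python sketch wires a motion detection stage, a position estimation stage, and a boarding intention stage around a bounded foot-position history. All class, method, and parameter names are assumptions made for this sketch and are not names used in the embodiment; the three stage functions are injected as callables and are sketched in later sections.

```python
from collections import deque

class UserDetector:
    """Illustrative wiring of user detection unit 22: a motion detection stage (22a),
    a position estimation stage (22b) and a boarding-intention stage (22c), with the
    foot position history holding unit 21a modelled as a bounded buffer."""

    def __init__(self, detect_motion, estimate_position, estimate_intention,
                 history_frames=15):
        self.detect_motion = detect_motion            # stage 22a (callable)
        self.estimate_position = estimate_position    # stage 22b (callable)
        self.estimate_intention = estimate_intention  # stage 22c (callable)
        self.foot_history = deque(maxlen=history_frames)  # history holder 21a

    def process_frame(self, image, prev_image):
        moving_blocks = self.detect_motion(image, prev_image)
        foot_y = self.estimate_position(moving_blocks)
        if foot_y is not None:
            self.foot_history.append(foot_y)          # newest foot position last
        return self.estimate_intention(list(self.foot_history))
```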
The car control device 30 is connected to an elevator control device (not shown) and exchanges various signals, such as hall calls and car calls, with the elevator control device. A "hall call" is a call signal registered by operating a hall call button (not shown) installed in the hall 15 of each floor, and it contains information on the registration floor and the desired direction. A "car call" is a call signal registered by operating a destination floor button (not shown) provided in the car of the car 11, and it contains information on the destination floor.
In addition, the car control device 30 includes a door opening/closing control unit 31. The door opening/closing control unit 31 controls the opening and closing of the car door 13 when the car 11 arrives at the hall 15. Specifically, the door opening/closing control unit 31 opens the car door 13 when the car 11 arrives at the hall 15 and closes it after a prescribed time has elapsed. However, if the user detection unit 22 of the image processing apparatus 20 detects a person who intends to board while the car door 13 is open, the door opening/closing control unit 31 prohibits the closing operation of the car door 13 and maintains the door-open state.
Next, the boarding intention detection method in the present embodiment will be described with reference to Figs. 2 to 7.
Fig. 2 shows an example of an image captured by the camera 12. E1 in the figure denotes the position estimation area, and yn denotes the Y coordinate of the detected foot position of the user. Fig. 3 shows the state in which the captured image is divided into blocks. Each of the grid-like portions obtained by dividing the original image, with a side length of Wblock, is referred to as a "block".
The camera 12 is installed above the entrance of the car 11. Therefore, when the car 11 opens its door at the hall 15, the camera 12 captures the prescribed range (L1) on the hall side and the prescribed range (L2) inside the car. Because the camera 12 is used in this way, the detection range is wide, and even a user at a place somewhat far from the car 11 can be detected. On the other hand, however, a person who will not board the car 11 may be erroneously detected and the car door 13 may be opened.
Therefore, the system is configured as follows: as shown in Fig. 3, the image captured by the camera 12 is divided into blocks of a certain size, blocks in which a person or object is moving are detected, and those moving blocks are tracked, thereby determining whether there is a user who intends to board.
In the example of Fig. 3, the vertical and horizontal lengths of the blocks are the same, but they may differ. In addition, the blocks may be set to a uniform size over the entire image, or to non-uniform sizes, for example, by shortening the vertical (Y-direction) length toward the upper part of the image. In the latter case, the foot position estimated later can be obtained with a higher resolution, or with a uniform resolution in the real space (if the image is divided evenly, the resolution in the real space becomes lower the farther the position is from the car door 13).
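As a minimal sketch of the uniform case, the image can be tiled into square blocks of side Wblock and a mean brightness computed per block. The following numpy snippet is illustrative only; the function name and the assumption of a grayscale input image are not taken from the embodiment.

```python
import numpy as np

def block_mean_brightness(gray: np.ndarray, w_block: int) -> np.ndarray:
    """Tile a grayscale image into w_block x w_block blocks and return the
    mean brightness of each block as a 2-D array (one value per block)."""
    h, w = gray.shape
    h2, w2 = h - h % w_block, w - w % w_block   # crop to a multiple of the block size
    blocks = gray[:h2, :w2].reshape(h2 // w_block, w_block, w2 // w_block, w_block)
    return blocks.mean(axis=(1, 3))
```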
Fig. 4 is a diagram for explaining the detection areas in the real space. Fig. 5 is a diagram for explaining the coordinate system in the real space.
First, in order to detect the motion of a user with boarding intention from the captured image, motion detection areas are set for the blocks. Specifically, as shown in Fig. 4, at least a position estimation area E1 and a boarding intention estimation area E2 are set. The position estimation area E1 is an area in which a part of the body of a user approaching the car door 13 from the hall 15, specifically the foot position of the user, is estimated. The boarding intention estimation area E2 is an area for estimating whether the user detected in the position estimation area E1 intends to board. The boarding intention estimation area E2 is included in the position estimation area E1 and is also an area in which the foot position of the user is estimated. That is, in the boarding intention estimation area E2, the foot position of the user is estimated and, at the same time, the boarding intention of the user is estimated.
In the real space, the position estimation area E1 has a distance L3 from the center of the car door 13 toward the hall, which is set to, for example, 2 m (L3 ≤ the capture range L1 on the hall side). The width W1 of the position estimation area E1 is set to a distance equal to or larger than the width W0 of the car door 13. The boarding intention estimation area E2 has a distance L4 from the center of the car door 13 toward the hall, which is set to, for example, 1 m (L4 ≤ L3). The width W2 of the boarding intention estimation area E2 is set to a distance substantially equal to the width W0 of the car door 13.
Note that the width W2 of the boarding intention estimation area E2 may be larger than W0. In addition, the boarding intention estimation area E2 in the real space may be not rectangular but trapezoidal, excluding the dead angles of the door frame.
Here, as shown in Fig. 5, the camera 12 captures an image in which the direction horizontal to the car door 13 installed at the entrance of the car 11 is the X axis, the direction from the center of the car door 13 toward the hall 15 (the direction perpendicular to the car door 13) is the Y axis, and the height direction of the car 11 is the Z axis. In each image captured by the camera 12, the portions corresponding to the position estimation area E1 and the boarding intention estimation area E2 shown in Fig. 4 are compared in units of blocks, thereby detecting the change of the foot position of a user moving in the Y-axis direction, that is, in the direction from the center of the car door 13 toward the hall 15.
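A hedged sketch of the two areas in these real-space coordinates is shown below. The depths reuse the example values in the text (L3 = 2 m, L4 = 1 m), while the door width W0 and the E1 width W1 are placeholder assumptions introduced only for this sketch.

```python
DOOR_WIDTH_W0 = 0.9                             # m, placeholder value for the door width W0
E1_DEPTH_L3, E1_WIDTH_W1 = 2.0, 1.5             # position estimation area E1 (W1 >= W0)
E2_DEPTH_L4, E2_WIDTH_W2 = 1.0, DOOR_WIDTH_W0   # boarding intention area E2 (W2 ~ W0)

def in_region_e1(x: float, y: float) -> bool:
    """x: along the door (origin at the door centre), y: distance toward the hall."""
    return 0.0 <= y <= E1_DEPTH_L3 and abs(x) <= E1_WIDTH_W1 / 2

def in_region_e2(x: float, y: float) -> bool:
    return 0.0 <= y <= E2_DEPTH_L4 and abs(x) <= E2_WIDTH_W2 / 2
```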
This situation is shown in Figs. 6 and 7, which are diagrams for explaining motion detection by image comparison. Fig. 6 schematically shows a part of the image captured at time t, and Fig. 7 schematically shows a part of the image captured at time t+1.
P1 and P2 in the figures are the image portions of a user in which motion is detected in the captured images; in practice they are aggregates of blocks in which motion has been detected by the image comparison. Whether there is boarding intention is determined by extracting the moving block Bx nearest to the car door 13 from the image portions P1 and P2 and tracking the Y coordinate of the block Bx. In this case, if equally spaced lines along the Y-axis direction (equally spaced horizontal lines parallel to the car door 13) are drawn as indicated by the broken lines, the distance of the block Bx from the car door 13 in the Y-axis direction can be seen.
In the example of Figs. 6 and 7, the detected position of the moving block Bx nearest to the car door 13 changes from yn to yn-1, which shows that the user is approaching the car door 13.
Next, the operation of the system will be described in detail. Fig. 8 is a flowchart showing the overall processing flow of the system.
When the car 11 arrives at the hall 15 of an arbitrary floor (Yes in step S11), the car control device 30 opens the car door 13 and waits for users to board the car 11 (step S12).
At this time, the camera 12 installed above the entrance of the car 11 captures the prescribed range (L1) on the hall side and the prescribed range (L2) inside the car at a prescribed frame rate (for example, 30 frames/second). The image processing apparatus 20 acquires the images captured by the camera 12 in time series, sequentially stores them in the storage unit 21 (step S13), and executes the following user detection processing in real time (step S14).
The user detection processing is executed by the user detection unit 22 provided in the image processing apparatus 20. It is divided into motion detection processing (step S14a), position estimation processing (step S14b), and boarding intention estimation processing (step S14c).
(a) Motion detection processing
Fig. 9 is a flowchart showing the motion detection processing of step S14a. This processing is executed by the motion detection unit 22a, which is one of the components of the user detection unit 22.
The motion detection unit 22a reads the images held in the storage unit 21 one by one and calculates an average brightness value for each block (step A11). When the first image is input, the motion detection unit 22a holds the calculated average brightness value of each block in a buffer (not shown) of the storage unit 21 as an initial value (step A12).
When the second and subsequent images are obtained, the motion detection unit 22a compares the average brightness value of each block of the current image with the average brightness value of the corresponding block of the previous image held in the buffer (step A13). As a result, if the current image contains a block whose brightness difference is equal to or larger than a preset value, the motion detection unit 22a determines that block to be a moving block (step A14).
After determining whether there is motion in the current image, the motion detection unit 22a holds the average brightness value of each block of that image in the buffer for comparison with the next image (step A15).
Thereafter, the motion detection unit 22a repeats the operation of comparing, in time series and in units of blocks, the brightness values of the images captured by the camera 12 and determining whether there is motion.
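A minimal sketch of this comparison loop, assuming the block_mean_brightness helper from the earlier sketch, an iterable of grayscale frames in time-series order, and illustrative values for the block size and the brightness threshold:

```python
import numpy as np

def detect_moving_blocks(curr_means: np.ndarray, prev_means: np.ndarray,
                         threshold: float) -> np.ndarray:
    """Mark blocks whose mean brightness changed by at least `threshold`
    between the previous and the current frame (steps A13/A14)."""
    return np.abs(curr_means - prev_means) >= threshold

def run_motion_detection(frames, w_block=16, threshold=10.0):
    """Iterate over grayscale frames in time-series order and yield, for each
    frame after the first, the boolean grid of moving blocks."""
    prev_means = None                  # buffer holding the previous frame's block means
    for frame in frames:
        means = block_mean_brightness(frame, w_block)   # step A11 (helper sketched earlier)
        if prev_means is not None:
            yield detect_moving_blocks(means, prev_means, threshold)  # steps A13/A14
        prev_means = means             # steps A12/A15: keep for the next comparison
```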
(b) Position estimation processing
Fig. 10 is a flowchart showing the position estimation processing of step S14b. This processing is executed by the position estimation unit 22b, which is one of the components of the user detection unit 22.
The position estimation unit 22b checks the moving blocks in the current image based on the detection result of the motion detection unit 22a (step B11). If there are moving blocks in the position estimation area E1 shown in Fig. 4, the user detection unit 22 extracts the block nearest to the car door 13 from those moving blocks (step B12).
Here, as shown in Fig. 1, the camera 12 is installed above the entrance of the car 11 and faces the hall 15. Therefore, when a user walks from the hall 15 toward the car door 13, a part of the right foot or left foot of the user is highly likely to appear in the block on the nearest side of the captured image, that is, on the car door 13 side. The position estimation unit 22b therefore obtains the Y coordinate (the coordinate in the direction from the center of the car door 13 toward the hall 15) of the moving block nearest to the car door 13 as the data of the foot position of the user and holds it in the foot position history holding unit 21a in the storage unit 21 (step B13).
Thereafter, for each image, the position estimation unit 22b similarly obtains the Y coordinate of the moving block nearest to the car door 13 and holds it in the foot position history holding unit 21a in the storage unit 21 as the data of the foot position of the user. Note that this foot position estimation processing is performed not only in the position estimation area E1 but also in the boarding intention estimation area E2.
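A sketch of steps B12 and B13 under stated assumptions: moving is the boolean block grid from the motion detection stage, in_region is a predicate over block indices restricting the search to area E1 (or E2), and block_row_to_y is an assumed mapping from a block's row index to its real-space Y coordinate; none of these names come from the embodiment.

```python
import numpy as np

def estimate_foot_position(moving: np.ndarray, in_region, block_row_to_y):
    """Among moving blocks inside the given area, return the Y coordinate of the
    block nearest to the car door (smallest Y); None if no such block exists."""
    candidates = [
        block_row_to_y(row)
        for (row, col), is_moving in np.ndenumerate(moving)
        if is_moving and in_region(row, col)
    ]
    return min(candidates) if candidates else None   # nearest to the door = smallest Y
```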
(c) Boarding intention estimation processing
Fig. 11 is a flowchart showing the boarding intention estimation processing of step S14c. This processing is executed by the boarding intention estimation unit 22c, which is one of the components of the user detection unit 22.
The boarding intention estimation unit 22c smooths the foot position data of the user for each image held in the foot position history holding unit 21a (step C11). As the smoothing method, a generally known method such as a moving-average filter or a Kalman filter can be used, and its description is omitted here.
After the foot position data is smoothed, if there is data whose variation is equal to or larger than a prescribed value (Yes in step C12), the boarding intention estimation unit 22c removes that data as an outlier (step C13). The prescribed value is determined from the standard walking speed of a user and the frame rate of the captured images. Outliers may also be found and removed before the foot position data is smoothed.
Fig. 12 shows the change of the foot position. The horizontal axis represents time, and the vertical axis represents position (Y coordinate value). When a user approaches the car door 13 from the hall 15, the Y coordinate value of the foot position of the user gradually decreases over time.
In the case of a moving body such as a wheelchair, the data changes linearly as indicated by the broken line, whereas in the case of a walking user, the left and right feet are detected alternately, so the data changes in a winding manner as indicated by the solid line. In addition, when some noise is mixed into the detection result, the instantaneous variation of the foot position becomes large. Foot position data with such a large variation is removed as an outlier.
The boarding intention estimation unit 22c then checks the change of the foot position (data change) in the boarding intention estimation area E2 shown in Fig. 12 (step C14). If a change of the foot position of a user walking toward the car door 13 along the Y-axis direction can be confirmed in the boarding intention estimation area E2 (Yes in step C15), the boarding intention estimation unit 22c determines that the user has boarding intention (step C16).
On the other hand, if a change of the foot position of a user walking toward the car door 13 along the Y-axis direction cannot be confirmed in the boarding intention estimation area E2 (No in step C15), the boarding intention estimation unit 22c determines that the user has no boarding intention (step C17).
In this way, by regarding the moving block nearest to the car door 13 as the foot position of the user and tracking the temporal change of that foot position in the Y-axis direction, it is possible to estimate whether the user intends to board.
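A hedged sketch of steps C11 to C17 under stated assumptions: history_y is the foot-position history ordered from oldest to newest, and the numeric defaults (outlier step, E2 depth, approach margin) are illustrative placeholders; in the embodiment the outlier threshold is derived from the standard walking speed and the frame rate.

```python
import numpy as np

def has_boarding_intention(history_y, e2_depth=1.0, max_step=0.15, approach_delta=0.2):
    """Smooth the foot-position history, drop outliers whose frame-to-frame change
    exceeds max_step (step C13), and report boarding intention if, inside area E2
    (Y <= e2_depth), the position approaches the door by at least approach_delta."""
    y = np.asarray(history_y, dtype=float)
    if y.size < 2:
        return False
    keep = np.insert(np.abs(np.diff(y)) <= max_step, 0, True)   # outlier removal
    y = y[keep]
    if y.size >= 3:
        y = np.convolve(y, np.ones(3) / 3, mode="valid")        # simple smoothing (step C11)
    inside = y[y <= e2_depth]            # samples inside boarding intention area E2
    if inside.size < 2:
        return False
    return (inside[0] - inside[-1]) >= approach_delta           # Y decreased toward the door
```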
Returning to Fig. 8, when a user with boarding intention is detected (Yes in step S15), a user detection signal is output from the image processing apparatus 20 to the car control device 30. On receiving the user detection signal, the car control device 30 prohibits the closing operation of the car door 13 and maintains the door-open state (step S16).
Specifically, when the car door 13 reaches the fully open state, the car control device 30 starts counting the door-open time and closes the door at the point when a prescribed time T (for example, 1 minute) has been counted. If a user with boarding intention is detected in the meantime and the user detection signal is received, the car control device 30 stops the counting operation and resets the count value to zero. As a result, the door-open state of the car door 13 is maintained for the time T.
If a new user with boarding intention is detected in the meantime, the count value is reset again, and the door-open state of the car door 13 is maintained for another time T. However, if users keep arriving one after another during the time T, a situation in which the car door 13 can never be closed may continue; it is therefore preferable to set an allowed time Tx (for example, 3 minutes) and to forcibly close the car door 13 when the allowed time Tx has elapsed.
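The interplay of the hold timer T and the allowed time Tx can be pictured roughly as follows; door and detector are interfaces assumed only for this sketch, and the timing values merely echo the examples in the text.

```python
import time

def hold_door_open(door, detector, hold_time_t=60.0, allowed_time_tx=180.0, fps=30):
    """Keep the door open while users with boarding intention keep being detected;
    each detection resets the hold timer T, but the door is forcibly closed once
    the allowed time Tx has elapsed since it fully opened."""
    opened_at = time.monotonic()
    deadline = opened_at + hold_time_t
    while time.monotonic() < deadline:
        if detector.user_with_intention_detected():          # assumed interface
            deadline = min(time.monotonic() + hold_time_t,   # reset T ...
                           opened_at + allowed_time_tx)      # ... but never past Tx
        time.sleep(1.0 / fps)                                # roughly one check per frame
    door.close()                                             # assumed interface
```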
When the counting of the time T is completed (step S17), the car control device 30 closes the car door 13 and causes the car 11 to depart for the destination floor (step S18).
In this way, according to the present embodiment, by analyzing the images obtained by capturing the hall 15 with the camera 12 installed above the entrance of the car 11, it is possible to detect, for example, a user walking toward the car door 13 from a place somewhat far from the car 11 and to reflect the detection in the door opening/closing operation.
The above description assumed the state in which the car door 13 of the car 11 is open at the hall 15, but even while the car door 13 is closing, the images captured by the camera 12 can be used to detect whether there is a user with boarding intention. When a user with boarding intention is detected, the door opening/closing control unit 31 of the car control device 30 operates to interrupt the closing operation of the car door 13 and to perform the door opening operation again.
The processing operation during the door closing process will now be described with reference to the flowchart of Fig. 13.
When a prescribed time has elapsed from the fully open state of the car door 13 of the car 11, the door opening/closing control unit 31 starts the door closing operation (step S21). At this time, the capturing operation of the camera 12 continues. The image processing apparatus 20 acquires the images captured by the camera 12 in time series, sequentially stores them in the storage unit 21 (step S22), and executes the user detection processing in real time (step S23).
The user detection processing is executed by the user detection unit 22 provided in the image processing apparatus 20 and is divided into motion detection processing (step S23a), position estimation processing (step S23b), and boarding intention estimation processing (step S23c). These are the same as steps S14a, S14b, and S14c of Fig. 8, so their description is omitted here.
When a user with boarding intention is detected (Yes in step S24), the user detection signal is output from the image processing apparatus 20 to the car control device 30. On receiving the user detection signal during the door closing process, the car control device 30 interrupts the closing operation of the car door 13 and performs the door opening operation again (re-opens the door) (step S25).
Thereafter, the processing returns to step S12 of Fig. 8 and the same processing is repeated. However, if users with boarding intention keep being detected during the door closing process, the door is re-opened repeatedly and the departure of the car 11 is delayed. It is therefore preferable not to re-open the door but to close it, even if a user with boarding intention is detected, once the allowed time Tx (for example, 3 minutes) has elapsed.
In this way, a user with boarding intention can be detected even during the door closing process, and the detection result can be reflected in the door opening/closing operation. This makes it possible to avoid a situation in which a user bumps into the closing door when trying to enter the car 11.
In addition, in order to accurately prevent a moving body that merely passes near the car (crosses the hall in the X-axis direction) from being erroneously judged to have boarding intention, the position estimation unit 22b may execute the position estimation processing described later instead of the position estimation processing shown in Fig. 10. First, the situation in which a moving body that merely crosses the hall in the X-axis direction is erroneously judged to have boarding intention will be described.
Figs. 14 to 16 show the aggregates of moving blocks detected by the motion detection unit 22a when a moving body crosses the hall in the X-axis direction. Fig. 14 schematically shows a part of the image captured at time t, Fig. 15 a part of the image captured at time t+1, and Fig. 16 a part of the image captured at time t+2.
P3 in Fig. 14 is the image portion of the right foot of the moving body (here, a person) in which motion is detected in the captured image, P4 in Fig. 15 is the image portion of the left foot of the person, and P5 in Fig. 16 is the image portion of the right foot of the person.
As shown in Figs. 14 and 16, in the images captured at times t and t+2, the left foot of the person crossing the hall in the X-axis direction is not detected as moving. This is because the left foot serves as the pivot foot for stepping the right foot forward in the traveling direction (the positive X-axis direction) and therefore does not move. Similarly, as shown in Fig. 15, in the image captured at time t+1, the right foot of the person crossing the hall in the X-axis direction is not detected as moving, because the right foot serves as the pivot foot for stepping the left foot forward in the traveling direction (the positive X-axis direction) and therefore does not move.
Consequently, a part of the image portion P3 of the right foot of the person crossing the hall in the X-axis direction is extracted from the images captured at times t and t+2 as the moving block Bx nearest to the car door 13, and a part of the image portion P4 of the left foot of that person is extracted from the image captured at time t+1 as the moving block Bx nearest to the car door 13.
In this case, according to the equally spaced lines indicated by the broken lines in the Y-axis direction, the detected position of the moving block Bx nearest to the car door 13 changes from time t to time t+2 as ym → ym+1 → ym. More specifically, as shown in Fig. 17, the detected position of the moving block Bx nearest to the car door 13 changes repeatedly, such as ym → ym+1 → ym → .... As a result, although the person merely crosses the hall in the X-axis direction, the detected position of the moving block Bx nearest to the car door 13 moves toward the car door 13 in the Y-axis direction, for example, between time t+1 and time t+2, so the boarding intention estimation unit 22c of the user detection unit 22 may erroneously judge that the person has boarding intention.
In order to accurately prevent such an erroneous judgment, the position estimation unit 22b executes the position estimation processing shown in Fig. 18 instead of the position estimation processing shown in Fig. 10. Steps identical to those of the position estimation processing shown in Fig. 10 are denoted by the same reference symbols, and their description is omitted here.
After step B12, the position estimation unit 22b acquires, from the foot position history holding unit 21a in the storage unit 21, the foot position data obtained for each captured image from the current captured image back to a prescribed number of frames earlier (the history data) (step B21).
Referring to the acquired foot position data up to the prescribed number of frames earlier, the position estimation unit 22b decides whether to correct the Y coordinate of the block nearest to the car door 13 extracted from the current captured image (step B22). Specifically, the position estimation unit 22b compares the Y coordinate of the block nearest to the car door 13 extracted from the current captured image with the smallest of the Y coordinates indicated by the acquired foot position data up to the prescribed number of frames earlier (that is, the Y coordinate nearest to the car door 13), and thereby decides whether to correct the Y coordinate extracted from the current captured image.
When the Y coordinate of the block nearest to the car door 13 extracted from the current captured image is to be corrected, that is, when that Y coordinate is larger than the Y coordinate indicated by the foot position data up to the prescribed number of frames earlier (Yes in step B22), the position estimation unit 22b corrects it to the Y coordinate indicated by the foot position data up to the prescribed number of frames earlier and stores the corrected Y coordinate in the foot position history holding unit 21a in the storage unit 21 as the current foot position data (step B23).
On the other hand, when the Y coordinate of the block nearest to the car door 13 extracted from the current captured image is not to be corrected, that is, when that Y coordinate is smaller than the Y coordinate indicated by the foot position data up to the prescribed number of frames earlier (No in step B22), the position estimation unit 22b holds the Y coordinate of the block nearest to the car door 13 extracted from the current captured image in the foot position history holding unit 21a in the storage unit 21 as the current foot position data (step B24).
The prescribed number of frames is set to the number of frames corresponding to a time comparable to a standard walking period (for example, 0.5 seconds) or to a time longer than the standard walking period. For example, when the camera 12 can continuously capture 30 frames per second, the prescribed number of frames is set to 15 frames. The walking period is the time required for the right foot and the left foot to each advance one step. In Figs. 14 to 17, the time from time t to time t+2 corresponds to one walking period.
After executing the processing of step B23 or B24, the position estimation unit 22b returns to step B11 and thereafter similarly obtains the foot position data of the person and holds it in the foot position history holding unit 21a in the storage unit 21. The foot position history holding unit 21a only needs to hold the foot position data up to the prescribed number of frames earlier; that is, foot position data older than the prescribed number of frames is deleted from the foot position history holding unit 21a as appropriate.
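A compact sketch of the decision in steps B22 to B24 under stated assumptions: history_y holds the foot positions of the last prescribed number of frames (about one walking period), and the current estimate is clamped to the closest past position whenever that was nearer to the door; the function name is introduced only for this sketch.

```python
def correct_foot_position(current_y: float, history_y) -> float:
    """If any foot position within the held history was closer to the door
    (smaller Y) than the current estimate, correct the current estimate to that
    value (step B23); otherwise keep it unchanged (step B24)."""
    if not history_y:
        return current_y
    nearest_past_y = min(history_y)        # smallest Y = past position closest to the door
    return min(current_y, nearest_past_y)  # clamp toward the door, never away from it
```

The returned value corresponds to what is stored as the current foot position data in step B23 or B24.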
According to the position estimation processing of Fig. 18, as shown in Fig. 15, even if the right foot is the pivot foot and does not move, the Y coordinate ym+1 of the moving block Bx nearest to the car door 13 extracted from the image captured at time t+1 can be corrected to the Y coordinate ym of the moving block Bx nearest to the car door 13 extracted from the image captured at time t shown in Fig. 14. Therefore, when a moving body crosses the hall in the X-axis direction, the detected position of the moving block Bx nearest to the car door 13 remains constant, as shown by the solid bold line in Fig. 19. That is, even if a moving body crosses the hall in the X-axis direction, the possibility that the boarding intention estimation unit 22c erroneously judges that the moving body has boarding intention can be reduced.
Thus, since the position estimation unit 22b also has the function of correcting the Y coordinate of the block nearest to the car door 13 extracted from the current captured image with reference to the past foot position data, it is possible to accurately and precisely detect only users who intend to board and to reflect the detection in the door opening/closing operation.
The above assumed the case where the moving body crosses the hall in the X-axis direction (that is, a person without boarding intention is the object of correction), but the case where the moving body moves toward the car door 13 (that is, a user with boarding intention is the object of correction) is handled in the same way: the Y coordinate of the moving block nearest to the car door 13 is corrected by the position estimation processing shown in Fig. 18. Specifically, as shown in Fig. 20, the Y coordinate yl+1 of the moving block nearest to the car door 13 extracted from the image captured at time t+1 is corrected to the Y coordinate yl of the moving block nearest to the car door 13 extracted from the image captured at time t. Thus, when the moving body moves toward the car door 13, the detected position of the moving block nearest to the car door 13 changes in a stepwise manner, as shown by the solid bold line in Fig. 20.
According to at least one of the embodiments described above, it is possible to provide a boarding detection system for an elevator that can detect, over a wide range and with high accuracy, a user who intends to board and reflect the detection in the opening/closing control of the door.
While several embodiments of the present invention have been described, these embodiments have been presented by way of example only and are not intended to limit the scope of the invention. These novel embodiments can be carried out in various other forms, and various omissions, substitutions, and changes can be made without departing from the spirit of the invention. These embodiments and their modifications are included in the scope and spirit of the invention and in the invention described in the claims and its equivalents.
Claims (6)
1. A boarding detection system for an elevator, characterized by comprising:
an image capturing unit that, when a car has arrived at a hall, captures a prescribed range near the door of the car in the direction of the hall;
a motion detection unit that compares, in units of blocks, the brightness of a plurality of images captured by the image capturing unit and consecutive in time series, to detect the motion of a person;
a position estimation unit that, for each of the images, extracts a block in which motion has been detected by the motion detection unit and estimates a coordinate position in the block as the position of the person;
a history holding unit that holds, as history data, the position data representing the positions of the person estimated by the position estimation unit from the present back to a prescribed time earlier;
a boarding intention estimation unit that estimates whether a user intends to board from the time-series change of the position of the person estimated by the position estimation unit; and
a control unit that controls the opening/closing operation of the door based on the estimation result of the boarding intention estimation unit,
wherein the position estimation unit refers to the history data held by the history holding unit and decides whether to correct the estimated position of the person.
2. The boarding detection system for an elevator according to claim 1, characterized in that
the position estimation unit compares the estimated position of the person with the positions of the person up to the prescribed time earlier represented by the history data, and
when a position of the person represented by the history data is closer to the door than the estimated position of the person, the position estimation unit corrects the estimated position of the person to the position of the person represented by the history data.
3. The boarding detection system for an elevator according to claim 2, characterized in that,
when the estimated position of the person is closer to the door than the positions of the person represented by the history data, the position estimation unit does not correct the estimated position of the person.
4. The boarding detection system for an elevator according to claim 1, characterized in that
the prescribed time is a time comparable to a standard walking period or a time longer than the walking period.
5. The boarding detection system for an elevator according to claim 1, characterized in that
the position estimation unit estimates, as the foot position of the person, the coordinate position in the moving block in the direction from the center of the door toward the hall, and
the boarding intention estimation unit determines that there is boarding intention when a state in which the foot position of the person approaches the door is detected within a preset area.
6. The boarding detection system for an elevator according to claim 1, characterized in that
the image capturing unit is installed above the entrance of the car, and
the image capturing unit captures an image in which the direction horizontal to the door is the X axis, the direction from the center of the door toward the hall is the Y axis, and the height direction of the car is the Z axis.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2016004594A JP5969147B1 (en) | 2016-01-13 | 2016-01-13 | Elevator boarding detection system |
JP2016-004594 | 2016-01-13 |
Publications (2)
Publication Number | Publication Date |
---|---|
CN106966277A true CN106966277A (en) | 2017-07-21 |
CN106966277B CN106966277B (en) | 2018-12-25 |
Family
ID=56701578
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710025809.3A Active CN106966277B (en) | 2016-01-13 | 2017-01-13 | The seating detection system of elevator |
Country Status (2)
Country | Link |
---|---|
JP (1) | JP5969147B1 (en) |
CN (1) | CN106966277B (en) |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110294371A (en) * | 2018-03-22 | 2019-10-01 | 东芝电梯株式会社 | User's detection system |
CN111717761A (en) * | 2019-03-20 | 2020-09-29 | 东芝电梯株式会社 | Elevator with a movable elevator car |
US11524868B2 (en) | 2017-12-12 | 2022-12-13 | Otis Elevator Company | Method and apparatus for effectively utilizing cab space |
Families Citing this family (15)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP6242966B1 (en) * | 2016-08-24 | 2017-12-06 | 東芝エレベータ株式会社 | Elevator control system |
CN106219370A (en) * | 2016-08-31 | 2016-12-14 | 合肥同益信息科技有限公司 | A kind of intelligent elevator control system |
JP6270948B1 (en) * | 2016-09-21 | 2018-01-31 | 東芝エレベータ株式会社 | Elevator user detection system |
JP2018090351A (en) * | 2016-11-30 | 2018-06-14 | 東芝エレベータ株式会社 | Elevator system |
JP6271776B1 (en) * | 2017-01-17 | 2018-01-31 | 東芝エレベータ株式会社 | Elevator boarding detection system |
JP6377795B1 (en) * | 2017-03-24 | 2018-08-22 | 東芝エレベータ株式会社 | Elevator boarding detection system |
CN109279466B (en) * | 2017-07-21 | 2021-08-17 | 奥的斯电梯公司 | Automatic detection of abnormal movement of elevator passengers |
JP6517306B1 (en) * | 2017-11-15 | 2019-05-22 | 東芝エレベータ株式会社 | Control device and control method for elevator |
JP6538142B2 (en) * | 2017-11-21 | 2019-07-03 | 東芝エレベータ株式会社 | Elevator group management control system and elevator control method |
JP6553249B1 (en) * | 2018-05-24 | 2019-07-31 | 東芝エレベータ株式会社 | Elevator alerting system |
JP6693624B2 (en) * | 2019-02-21 | 2020-05-13 | 東芝エレベータ株式会社 | Image detection system |
JP6922951B2 (en) * | 2019-08-09 | 2021-08-18 | フジテック株式会社 | Image processing device and image processing method |
JP6874796B2 (en) * | 2019-08-09 | 2021-05-19 | フジテック株式会社 | Elevator door control device and control method |
JP7074161B2 (en) * | 2020-07-07 | 2022-05-24 | フジテック株式会社 | Image processing device and image processing method |
JP7036244B2 (en) * | 2021-02-02 | 2022-03-15 | フジテック株式会社 | Ride detection system and ride detection method |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5001557A (en) * | 1988-06-03 | 1991-03-19 | Inventio Ag | Method of, and apparatus for, controlling the position of an automatically operated door |
WO2007081345A1 (en) * | 2006-01-12 | 2007-07-19 | Otis Elevator Company | Video aided system for elevator control |
CN101304939A (en) * | 2006-03-20 | 2008-11-12 | 三菱电机株式会社 | Elevator door device |
JP2011241002A (en) * | 2010-05-14 | 2011-12-01 | Hitachi Ltd | Safety elevator |
Family Cites Families (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2006096517A (en) * | 2004-09-29 | 2006-04-13 | Mitsubishi Electric Corp | Elevator control system |
JP5317426B2 (en) * | 2007-05-01 | 2013-10-16 | 三菱電機株式会社 | Elevator equipment |
CA2819574C (en) * | 2010-12-03 | 2016-01-19 | Nabtesco Corporation | Sensor for use with automatic door |
- 2016-01-13: JP JP2016004594A patent/JP5969147B1/en active Active
- 2017-01-13: CN CN201710025809.3A patent/CN106966277B/en active Active
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US11524868B2 (en) | 2017-12-12 | 2022-12-13 | Otis Elevator Company | Method and apparatus for effectively utilizing cab space |
CN110294371A (en) * | 2018-03-22 | 2019-10-01 | 东芝电梯株式会社 | User's detection system |
CN110294371B (en) * | 2018-03-22 | 2021-11-05 | 东芝电梯株式会社 | User detection system |
CN111717761A (en) * | 2019-03-20 | 2020-09-29 | 东芝电梯株式会社 | Elevator with a movable elevator car |
CN111717761B (en) * | 2019-03-20 | 2022-06-24 | 东芝电梯株式会社 | Elevator with a movable elevator car |
Also Published As
Publication number | Publication date |
---|---|
JP5969147B1 (en) | 2016-08-17 |
JP2017124898A (en) | 2017-07-20 |
CN106966277B (en) | 2018-12-25 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN106966277A (en) | The seating detecting system of elevator | |
CN106966276B (en) | The seating detection system of elevator | |
CN106966278B (en) | The seating detection system of elevator | |
JP6377797B1 (en) | Elevator boarding detection system | |
JP6377796B1 (en) | Elevator boarding detection system | |
JP6367411B1 (en) | Elevator system | |
CN108116956B (en) | Elevator device | |
JP6495424B1 (en) | Image detection system | |
JP6377795B1 (en) | Elevator boarding detection system | |
JP6139729B1 (en) | Image processing device | |
US12094236B2 (en) | Method of object re-identification | |
JP6271776B1 (en) | Elevator boarding detection system | |
JP7605219B2 (en) | Authentication system, authentication method, and computer program | |
JP6693624B2 (en) | Image detection system | |
JP2007156771A (en) | Image detection tracing device, image detection tracing method and image detection tracing program | |
JP7711268B1 (en) | Elevator System | |
JP2009038779A (en) | Moving body detection apparatus, moving body imaging system, moving body imaging method, and moving body detection program | |
HK1238230B (en) | Riding detection system of elevator | |
HK40005411B (en) | Image detection system | |
HK40005411A (en) | Image detection system |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||