WO2016002400A1 - Guidance processing device and guidance method - Google Patents
Guidance processing device and guidance method
- Publication number
- WO2016002400A1 (application PCT/JP2015/065405)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- target area
- information
- guidance
- target
- degree
- Prior art date
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/52—Surveillance or monitoring of activities, e.g. for recognising suspicious objects
- G06V20/53—Recognition of crowd images, e.g. recognition of crowd congestion
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/14—Digital output to display device ; Cooperation and interconnection of the display device with other functional units
- G06F3/147—Digital output to display device ; Cooperation and interconnection of the display device with other functional units using display panels
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06M—COUNTING MECHANISMS; COUNTING OF OBJECTS NOT OTHERWISE PROVIDED FOR
- G06M11/00—Counting of objects distributed at random, e.g. on a surface
- G—PHYSICS
- G08—SIGNALLING
- G08G—TRAFFIC CONTROL SYSTEMS
- G08G1/00—Traffic control systems for road vehicles
- G08G1/005—Traffic control systems for road vehicles including pedestrian guidance indicator
- G—PHYSICS
- G08—SIGNALLING
- G08G—TRAFFIC CONTROL SYSTEMS
- G08G1/00—Traffic control systems for road vehicles
- G08G1/01—Detecting movement of traffic to be counted or controlled
- G08G1/0104—Measuring and analyzing of parameters relative to traffic conditions
- G08G1/0137—Measuring and analyzing of parameters relative to traffic conditions for specific applications
- G08G1/0145—Measuring and analyzing of parameters relative to traffic conditions for specific applications for active traffic flow control
- G—PHYSICS
- G08—SIGNALLING
- G08G—TRAFFIC CONTROL SYSTEMS
- G08G1/00—Traffic control systems for road vehicles
- G08G1/01—Detecting movement of traffic to be counted or controlled
- G08G1/04—Detecting movement of traffic to be counted or controlled using optical or ultrasonic detectors
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N7/00—Television systems
- H04N7/18—Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
- H04N7/181—Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast for receiving images from a plurality of remote sources
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N7/00—Television systems
- H04N7/18—Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
- H04N7/188—Capturing isolated or intermittent images triggered by the occurrence of a predetermined event, e.g. an object reaching a predetermined position
- G—PHYSICS
- G09—EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
- G09G—ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
- G09G2380/00—Specific applications
- G09G2380/06—Remotely controlled electronic signs other than labels
Definitions
- The present invention relates to a technique for guiding a crowd based on information obtained by image analysis.
- Here, "crowd" means a plurality of persons existing in an arbitrary spatial range.
- The number of persons in the crowd is not particularly limited as long as it is more than one, and the size of the spatial range is likewise not limited.
- Patent Document 1 proposes a customer guidance method that sets the position with the fewest people as the guidance destination, arranges a guidance robot at the entrance of the passage leading to that position, and causes the guidance robot to display and announce information on the guidance destination.
- According to this customer guidance method, shoppers can be guided from a crowded place to a vacant one.
- Patent Document 2 proposes a system that provides the advertisements and music most effective for the persons present in each section of a passage.
- In this system, a monitoring camera, a display monitor, a speaker, and the like are arranged in each section of the passage, and information on the number and attributes (men, women, children, etc.) of the persons in each section is acquired from the images of the monitoring cameras. Based on this information, the most effective advertisements and music are provided from the display monitor and speakers in each section. For example, when this system determines that few people are passing through the passage, it plays light music to draw people into the passage.
- Patent Document 3 proposes a monitoring device that detects, with high accuracy, the number of people passing through a specific area and their direction of passage. It also describes installing, in a building, a plurality of devices that calculate the degree of congestion of each room, in order to inform visitors of rooms with a low degree of congestion.
- Each of the above-described guidance methods merely guides a specific crowd, such as people in a crowded place or new visitors, to a single destination (a room, a passage, etc.).
- As a result, that single destination quickly becomes congested, and the crowd may not be guided appropriately.
- The present invention has been made in view of such circumstances and provides a technique for appropriately guiding a crowd.
- the first aspect relates to the guidance processing device.
- The guidance processing device includes information acquisition means for acquiring a plurality of pieces of different guidance information based on the states of a plurality of persons in one or more images, and control means for executing control of a plurality of target devices existing in different spaces, or time-division control of a target device, so that a plurality of different states corresponding to the plurality of pieces of guidance information are obtained.
- the second aspect relates to a guidance method executed by at least one computer.
- The guidance method according to the second aspect includes acquiring a plurality of pieces of different guidance information based on the states of a plurality of persons in one or more images, and executing control of a plurality of target devices existing in different spaces, or time-division control of a target device, so that a plurality of different states corresponding to the plurality of pieces of guidance information are obtained.
- A third aspect may be a program for causing at least one computer to execute the method of the second aspect, or a computer-readable recording medium on which such a program is recorded.
- This recording medium includes a non-transitory tangible medium.
- FIG. 1 is a diagram conceptually showing the system configuration of the guidance system 1 in the first embodiment.
- The guidance system 1 includes a guidance processing device 10, a plurality of monitoring cameras 5, a plurality of display devices 6, and the like.
- The target device in the first embodiment is the display device 6.
- The guidance processing apparatus 10 is a so-called computer and includes, as shown in FIG. 1, a CPU (Central Processing Unit) 2, a memory 3, a communication unit 4, and the like, connected to each other via a bus.
- The memory 3 is a RAM (Random Access Memory), a ROM (Read Only Memory), or an auxiliary storage device (such as a hard disk).
- The communication unit 4 communicates with other computers via a communication network (not shown) and exchanges signals with other devices.
- A portable recording medium or the like can also be connected to the communication unit 4.
- The guidance processing device 10 may include hardware elements not shown in FIG. 1; its hardware configuration is not limited.
- Each surveillance camera 5 is installed at a position and orientation where an arbitrary place to be monitored can be photographed, and sends the photographed video signal to the guidance processing device 10.
- Hereinafter, a place photographed by a monitoring camera 5 may be referred to as a monitoring place or a target area.
- The number of surveillance cameras 5 is arbitrary.
- Each surveillance camera 5 is communicably connected to the guidance processing device 10, for example via the communication unit 4.
- The communication mode and connection mode between each monitoring camera 5 and the guidance processing device 10 are not limited.
- Each display device 6, such as an LCD (Liquid Crystal Display) or a CRT (Cathode Ray Tube) display, displays a screen corresponding to drawing data.
- The display device 6 can receive, from the guidance processing device 10, drawing data processed by the CPU 2 or by a GPU (Graphics Processing Unit) (not shown) of the guidance processing device 10, and display a screen corresponding to that drawing data.
- Alternatively, the display device 6 may itself include a CPU and a GPU and process data sent from the guidance processing device 10 to display a screen.
- The communication mode and connection mode between each display device 6 and the guidance processing device 10 are not limited.
- Hereinafter, the range in which a person can visually recognize the display of a display device 6 may be referred to as the display space of that display device 6.
- FIG. 2 is a diagram illustrating an example of an installation form of the monitoring camera 5 and the display device 6.
- In the example of FIG. 2, each surveillance camera 5 images a different surveillance location.
- The monitoring camera 5 (#1) images the area AR1, the monitoring camera 5 (#2) images the area AR2, the monitoring camera 5 (#3) images the area AR3, and the monitoring camera 5 (#4) images the area AR4.
- Video signals captured by each surveillance camera 5 are sent to the guidance processing device 10.
- A display device 6 is installed at each monitoring place in order to present guidance information to the persons present there.
- In the example of FIG. 2, each of the areas AR1 to AR4 therefore serves as both a monitoring place and a display space.
- The installation form of the monitoring cameras 5 and the display devices 6 is not limited to the example shown in FIG. 2; for example, the monitoring places of a plurality of monitoring cameras 5 may partially overlap.
- the display device 6 may be installed in a place other than the monitoring place.
- FIG. 3 is a diagram conceptually illustrating a processing configuration example of the guidance processing device 10 in the first embodiment.
- The guidance processing device 10 includes an image acquisition unit 11, an analysis unit 12, a storage unit 13, an information acquisition unit 14, a control unit 15, and the like.
- Each of these processing units is realized, for example, by the CPU 2 executing a program stored in the memory 3.
- The program may be installed from a portable recording medium, such as a CD (Compact Disc) or a memory card, or from another computer on a network, via the communication unit 4 or an input/output I/F (not shown), and stored in the memory 3.
- The image acquisition unit 11 acquires the monitoring images captured by the monitoring cameras 5. Specifically, the image acquisition unit 11 sequentially acquires monitoring images by capturing the video signal from each monitoring camera 5 at arbitrary timings.
- The arbitrary timing is, for example, a predetermined cycle.
- The analysis unit 12 analyzes the plurality of images in which the different target areas (monitoring places) are captured, and acquires the states of the plurality of persons in each target area.
- The analysis unit 12 stores the acquired states in the storage unit 13.
- The analysis unit 12 analyzes each monitoring image acquired by the image acquisition unit 11 for the corresponding monitoring camera 5. Specifically, the analysis unit 12 detects persons in each monitoring image using a known image recognition technique. For example, the analysis unit 12 can hold a feature amount of an image corresponding to a typical person and detect, as detection ranges, regions of the monitoring image similar to that feature amount.
- The person detection method, however, is not limited.
- the analysis unit 12 may detect the whole body of the person, or may detect a part of the person such as the head, face, upper body, and the like. Or the analysis part 12 may detect a crowd collectively instead of detecting a person separately. In this case, the analysis unit 12 can detect a crowd composed of a plurality of people as a lump without separating them individually.
- The analysis unit 12 acquires the state of the crowd in each monitoring image using the results of the person detection described above.
- As the crowd state, the analysis unit 12 can acquire the number of people, the density, the degree of congestion, the moving speed, the moving direction, the flow rate, the presence or absence of a queue, the queue length, the queue waiting time, the queue advance speed, the presence or absence of stagnation, the stagnation time, the number of people staying, the degree of dissatisfaction, the abnormality of the state, and the like (a computational sketch of some of these metrics follows this group of paragraphs).
- The density is a value obtained by dividing the number of people by the size of the place shown in the monitoring image.
- The degree of congestion is an index value indicating how many people are present at the monitoring place, and may be a value calculated using at least one of the number of people and the density.
- For example, the analysis unit 12 can estimate the number of people in a monitoring image with high accuracy using a crowd patch.
- The moving speed and moving direction of the crowd can be obtained by measuring pixel movement between time-series monitoring images using well-known techniques such as object (person) tracking or optical flow.
- The flow rate can be calculated by multiplying the number of people by the moving speed.
- The analysis unit 12 can also acquire the presence or absence of a queue, the length of a queue, the presence or absence of stagnation, the stagnation time, and the number of people staying, by further using a known stagnation detection method.
- The analysis unit 12 can acquire the waiting time of a queue and the speed at which a queue advances by combining this with the tracking techniques described above.
- The analysis unit 12 can acquire the degree of dissatisfaction of the crowd using the stagnation time, the length of the queue, the waiting time of the queue, and the like; for example, it can be estimated that the longer the stagnation time, the longer the queue, and the longer the queue waiting time, the higher the crowd's dissatisfaction.
- The analysis unit 12 can also estimate the facial expressions and postures of persons and use them to acquire the degree of dissatisfaction of the crowd.
- The analysis unit 12 can detect changes in the state of a crowd and, from the detected changes, detect an abnormal state of the crowd.
- For example, the analysis unit 12 can detect state changes such as crouching down, turning around, and starting to run, and can acquire the degree of abnormality of the crowd based on the number of people exhibiting such changes.
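- The following is a minimal sketch, not taken from the patent, of how some of the crowd-state metrics above (density, flow rate) could be computed from per-person detections; the Detection type, its fields, and the ground-plane coordinates are assumptions for illustration.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    x: float   # hypothetical ground-plane position, metres
    y: float
    vx: float  # velocity estimate from tracking, m/s
    vy: float

def density(detections: list, area_m2: float) -> float:
    # Density = number of detected persons divided by the size of the place.
    return len(detections) / area_m2

def flow_rate(detections: list) -> float:
    # Flow rate = number of people multiplied by their mean moving speed.
    if not detections:
        return 0.0
    mean_speed = sum((d.vx ** 2 + d.vy ** 2) ** 0.5 for d in detections) / len(detections)
    return len(detections) * mean_speed

people = [Detection(0.0, 0.0, 0.5, 0.0), Detection(1.0, 2.0, 0.7, 0.1)]
print(density(people, area_m2=20.0))  # persons per square metre
print(flow_rate(people))              # persons x metres per second
```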
- The storage unit 13 stores the identification information (ID) of each monitoring camera 5 in association with the crowd state extracted from the monitoring images captured by that camera.
- The relationship between the ID of a monitoring camera 5 and its monitoring images is known to the image acquisition unit 11, which acquires the images; from this relationship, and from the relationship held by the analysis unit 12 between a monitoring image and the crowd state acquired from it, the ID of a surveillance camera 5 and a crowd state can be associated with each other.
- The storage unit 13 further stores the relationship between the ID of each monitoring camera 5 and information indicating its monitoring location.
- The storage unit 13 may also store the positional relationships (distance, average travel time, etc.) between monitoring locations.
- FIG. 4 is a diagram illustrating an example of information stored in the storage unit 13 in the first embodiment.
- As illustrated in FIG. 4, the storage unit 13 may store information indicating the monitoring location of each monitoring camera 5 and the crowd state there in association with each other.
- In the example of FIG. 4, the degree of congestion, expressed as a numerical value, is used as the crowd state.
- Each monitoring place may also be divided into smaller areas, and the degree of congestion stored for each divided area.
- The storage unit 13 also stores the identification information (ID) of each display device 6 in association with information indicating the location of its display space, that is, the range in which a person can visually recognize the display of that display device 6.
- The storage unit 13 may further store the positional relationships (distance, average travel time, etc.) between display-space locations and monitoring places (a sketch of these associations follows).
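- As a concrete illustration of the associations described above, here is a minimal sketch; the dictionary layout, the IDs, and the values are assumptions for illustration, not the patent's data format.

```python
camera_state = {     # camera ID -> crowd state (here, the degree of congestion)
    "cam1": {"congestion": 1},
    "cam2": {"congestion": 2},
    "cam3": {"congestion": 5},
}
camera_location = {  # camera ID -> monitoring location
    "cam1": "AR1", "cam2": "AR2", "cam3": "AR3",
}
display_space = {    # display device ID -> location of its display space
    "disp1": "AR1", "disp2": "AR3",
}
travel_seconds = {   # positional relationship (average travel time) between places
    ("AR1", "AR3"): 90, ("AR2", "AR3"): 40,
}

def congestion_at(location: str) -> int:
    # Join camera -> location and camera -> state, in the style of FIG. 4.
    for cam, loc in camera_location.items():
        if loc == location:
            return camera_state[cam]["congestion"]
    raise KeyError(location)

print(congestion_at("AR3"))  # -> 5
```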
- Based on the crowd state at each monitoring location acquired by the analysis unit 12, the information acquisition unit 14 generates pieces of guidance information corresponding to the positional relationships between the monitoring locations, between the display spaces of the display devices 6, or between each display space and each monitoring location.
- For example, when the monitoring location of each monitoring camera 5 and the display space of each display device 6 substantially coincide, as shown in FIG. 2, the information acquisition unit 14 generates guidance information corresponding to the positional relationships between the monitoring locations. When the monitoring places and the display spaces differ, the information acquisition unit 14 generates guidance information corresponding to the positional relationships between the display spaces, or between each display space and each monitoring place.
- Specific examples of generating guidance information corresponding to the positional relationships between monitoring locations, or between each display space and each monitoring location, are described in detail in the Examples below.
- The positional relationship includes distance, directionality, required travel time, and the like.
- The information acquisition unit 14 can acquire these positional relationships from the storage unit 13.
- The information acquisition unit 14 can also calculate a positional relationship from the stored information indicating each monitoring location and each display space.
- The distances can be stored in advance.
- The average travel times may be stored in advance in the storage unit 13, or may be calculated using the moving speed of the crowd acquired by the analysis unit 12.
- For example, when there is a monitoring place whose crowd state indicates an abnormal state, the information acquisition unit 14 generates the pieces of guidance information so that the crowd state at that monitoring place returns to a normal state. Likewise, when the crowd state at a specific monitoring place differs drastically from that at the other monitoring places, the information acquisition unit 14 generates the pieces of guidance information so that the crowd states become uniform.
- At this time, the display device 6 on which each piece of guidance information is to be displayed is also determined.
- In the first embodiment, since the guidance information is displayed by the display devices 6, the guidance information includes information indicating a guidance destination, information prompting people to wait, information indicating the congestion situation, and the like. When the congestion situation is presented, people tend to refrain from going to places with a high degree of congestion, so information indicating the congestion situation can itself serve as guidance information.
- The content of the guidance information is not limited, as long as it is information that can move or hold people as intended by the guidance system 1.
- Information that can hold a person in place may be, for example, music, video, or information on stores holding sales: anything interesting enough to make the person want to stay.
- A time-limited discount coupon usable at a specific store is another example of guidance information that can keep a person at that store. The plurality of pieces of guidance information generated by the information acquisition unit 14 desirably includes pieces with different contents.
- The control unit 15 displays each piece of guidance information on the corresponding display device 6, based on the correspondence between guidance information and display devices 6 determined by the information acquisition unit 14. When guidance information has been generated for all of the display devices 6, the control unit 15 displays it on all of them; when it has been generated for only some of the display devices 6, the control unit 15 displays it on those devices.
- The control unit 15 can realize the display control of a display device 6 by instructing the communication unit 4 to transmit the guidance information to that display device 6.
- The control unit 15 can also generate drawing data for the guidance information and instruct the communication unit 4 to transmit the drawing data to the display device 6.
- FIG. 5 is a flowchart showing an operation example of the guidance processing device 10 in the first embodiment.
- The guidance method in the first embodiment is executed by at least one computer, such as the guidance processing device 10.
- Each illustrated process is executed by the corresponding processing unit of the guidance processing device 10; since each process has the same content as described for that unit above, details are omitted as appropriate.
- The guidance processing device 10 acquires the monitoring images captured by the monitoring cameras 5 (S51).
- The guidance processing device 10 sequentially acquires the monitoring images in time series.
- Each monitoring image is an image in which a target area (a monitored place) is captured by a monitoring camera 5.
- The guidance processing device 10 acquires the crowd state in each target area by analyzing the monitoring images acquired in (S51) (S52).
- The method of analyzing the monitoring images, the crowd states, and the method of acquiring them are as described above.
- The guidance processing device 10 acquires the positional relationships between the target areas, between the display spaces of the display devices 6, or between each display space and each target area, based on the crowd states acquired in (S52) (S53).
- The guidance processing device 10 then generates pieces of guidance information corresponding to the positional relationships obtained in (S53), based on the crowd states in the target areas obtained in (S52) (S54). At this time, the guidance processing device 10 determines the display device 6 on which each piece of guidance information is to be displayed; guidance information may be generated for all of the display devices 6 or only for some of them.
- The guidance processing device 10 displays each piece of guidance information generated in (S54) on the corresponding display device 6 (S55). Thereby, all of the display devices 6, or some of them, display guidance information (a sketch of this overall S51-S55 flow follows).
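- The S51-S55 flow can be summarized by the following hedged sketch; every helper is a placeholder standing in for the corresponding processing unit, not the patent's implementation, and the stub values are assumptions.

```python
def acquire_images(cameras):          # S51: one frame per camera
    return {cam: f"frame-from-{cam}" for cam in cameras}

def analyse(images):                  # S52: crowd state per target area (stubbed)
    return {cam: {"congestion": len(img) % 10} for cam, img in images.items()}

def positional_relationships(areas):  # S53: pairwise relations (stubbed)
    return {(a, b): 1.0 for a in areas for b in areas if a != b}

def generate_guidance(states, rels):  # S54: guide toward the least congested area
    # rels would refine the choice per display space; ignored in this tiny stub.
    least = min(states, key=lambda a: states[a]["congestion"])
    return {a: f"Please use {least}" for a in states if a != least}

def display(guidance):                # S55: one message per display device
    for dev, text in guidance.items():
        print(f"[{dev}] {text}")

def run_once(cameras):
    images = acquire_images(cameras)
    states = analyse(images)
    rels = positional_relationships(states)
    display(generate_guidance(states, rels))

run_once(["AR1", "AR2", "AR3"])  # a real system would repeat this periodically
```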
- As described above, in the first embodiment, a plurality of monitoring images, each capturing a target area (monitoring place), are acquired from the monitoring cameras 5, and the crowd state in each target area is acquired by analyzing the corresponding monitoring image. Based on the crowd states, pieces of guidance information corresponding to the positional relationships between the target areas, between the display spaces of the display devices 6, or between each display space and each target area are generated, and each piece of guidance information is displayed by the corresponding display device 6.
- Therefore, guidance information can be generated in consideration of the crowd states at a plurality of places (target areas), and in accordance with the positional relationships between the places where crowd states are acquired, between the spaces where guidance information is presented, or between those places and spaces. Thereby, different guidance information can be presented in each display space depending on its positional relationship with other places. That is, according to the first embodiment, the crowd can be guided by a method appropriate to each place; consequently, places whose crowd state indicates an abnormal state can be eliminated, and the crowd states can be made uniform.
- FIG. 6 is a diagram illustrating the installation form of the monitoring cameras 5 and the display devices 6 in Example 1.
- The guidance system 1 in Example 1 appropriately guides the users of ticket vending machines.
- Each surveillance camera 5 images people lined up at each ticket machine.
- The monitoring camera 5 (#1) images the monitoring location AR1 in front of the ticket vending machine M1, the monitoring camera 5 (#2) images the monitoring location AR2 in front of the ticket vending machine M2, the monitoring camera 5 (#3) images the monitoring location AR3 in front of the ticket vending machine M3, and the monitoring camera 5 (#4) images the monitoring location AR4 in front of the ticket vending machine M4.
- The display device 6 (#1) uses the space including the monitoring locations AR1 and AR2 as its display space, the display device 6 (#2) uses the space including the monitoring location AR3 as its display space, the display device 6 (#3) uses the space including the monitoring location AR4 as its display space, and the display devices 6 (#4) and 6 (#5) use passages leading toward the ticket vending machines as their display spaces.
- In Example 1, the image acquisition unit 11 acquires a plurality of monitoring images capturing the monitoring locations AR1 to AR4.
- The analysis unit 12 analyzes these monitoring images and acquires the degrees of congestion at the monitoring locations AR1 to AR4 as the crowd state. In the example of FIG. 6, a high degree of congestion is acquired for the monitoring location AR3, and low degrees of congestion for the monitoring locations AR1, AR2, and AR4.
- The information acquisition unit 14 identifies, as the positional relationship between monitoring places, the monitoring place AR1, which has a low degree of congestion and is closest to the congested monitoring place AR3. This corresponds to identifying the ticket vending machine M1, which is closest to the ticket vending machine M3 and is not crowded. Thereby, the information acquisition unit 14 generates, as guidance information for the display device 6 (#2), information for guiding people to the ticket vending machine M1.
- As the positional relationship between the display spaces and the monitoring locations, the information acquisition unit 14 identifies the monitoring location AR2, which is closest to the display space of the display device 6 (#4) and has a low degree of congestion, and the monitoring location AR4, which is closest to the display space of the display device 6 (#5) and has a low degree of congestion. This corresponds to identifying, for each passage, the nearest ticket vending machine that is not crowded. Thereby, the information acquisition unit 14 generates, as guidance information for the display devices 6 (#4) and 6 (#5), information for guiding people to the ticket vending machines M2 and M4, respectively.
- The control unit 15 displays the generated guidance information on the display devices 6 (#2), 6 (#4), and 6 (#5).
- The display device 6 (#2) displays the information guiding people to the ticket vending machine M1.
- Thereby, people lined up in front of the ticket vending machine M3 learn of the nearby vacant ticket vending machine M1 and move there to use it, so the congestion in front of the ticket vending machine M3 can be relieved.
- The display device 6 (#4) displays information guiding people to the ticket vending machine M2, and the display device 6 (#5) displays information guiding people to the ticket vending machine M4. In this way, people heading for a ticket vending machine can be directed to vacant machines, and the degree of congestion in front of each machine can be made uniform (a sketch of this selection rule follows).
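- A sketch of the Example 1 selection rule, under assumed distances and an assumed congestion threshold: for a congested area, pick the nearest monitored area whose degree of congestion is low.

```python
# Assumed example values; only the rule itself comes from the description above.
congestion = {"AR1": 1, "AR2": 2, "AR3": 8, "AR4": 1}
distance_from_AR3 = {"AR1": 5.0, "AR2": 12.0, "AR4": 20.0}
THRESHOLD = 4  # hypothetical "low congestion" cutoff

candidates = [a for a, c in congestion.items()
              if a != "AR3" and c < THRESHOLD]
destination = min(candidates, key=lambda a: distance_from_AR3[a])
print(destination)  # -> AR1: the nearest vacant ticket vending machine
```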
- Example 1 can be applied to various places other than ticket vending machines, such as toilets, shops, and ticket gates.
- FIG. 7 is a diagram illustrating the installation form of the monitoring cameras 5 and the display devices 6 in Example 2.
- The guidance system 1 in Example 2 appropriately guides a crowd leaving an event venue (a soccer stadium in the example of FIG. 7).
- The surveillance cameras 5 (#1) and 5 (#2) image the people using the stations ST1 and ST2, respectively.
- The stations ST1 and ST2 are the stations used by people leaving the venue.
- The monitoring location of each monitoring camera 5 is not particularly limited, as long as the degree of congestion at each of the stations ST1 and ST2 can be grasped.
- Hereinafter, the monitoring location of the monitoring camera 5 (#1) is denoted ST1, and that of the monitoring camera 5 (#2) is denoted ST2.
- Each display device 6 is provided for a seat section, in order to present guidance information to the visitors in that section of the venue.
- The display device 6 (#1) includes the seat section DS1 in its display space, the display device 6 (#2) includes the seat section DS2, the display device 6 (#3) includes the seat section DS3, and the display device 6 (#4) includes the seat section DS4.
- Each seat section has its own exits, and people sitting in a seat section leave through the exits provided for that section.
- The exit E1 is provided for the seat section DS1, the exits E2 and E3 for the seat section DS2, the exit E4 for the seat section DS3, and the exits E5 and E6 for the seat section DS4.
- In Example 2, the image acquisition unit 11 acquires a plurality of monitoring images capturing the monitoring locations ST1 and ST2.
- The analysis unit 12 analyzes these monitoring images and acquires the degree of congestion at the monitoring locations ST1 and ST2 as the crowd state.
- Here, a high degree of congestion is acquired for the monitoring location ST1, and a low degree of congestion is acquired for the monitoring location ST2.
- The information acquisition unit 14 acquires, as the positional relationship between display spaces and monitoring locations, the distance between the monitoring location ST1 and each display space and the distance between the monitoring location ST2 and each display space. As the position of each display space, the position of the exit provided for the seat section included in that display space is used. Furthermore, the information acquisition unit 14 calculates, for each display space, the magnitude (absolute value) of the difference between the distance to the monitoring location ST1 and the distance to the monitoring location ST2.
- Since the monitoring location ST1 has a high degree of congestion and the monitoring location ST2 has a low one, the information acquisition unit 14 identifies the display spaces for which the monitoring location ST2 is closer than the monitoring location ST1.
- In the example of FIG. 7, the seat section DS3 is identified. Thereby, the information acquisition unit 14 generates, as guidance information for the display device 6 (#3), information for guiding people to the station at the monitoring location ST2.
- For the seat sections DS2 and DS4, each of which has two exits, it is assumed that both stations are determined to be equally close. In that case, since the degree of congestion at the monitoring location ST1 is high, the information acquisition unit 14 generates, as guidance information for the display devices 6 (#2) and 6 (#4), information for guiding people to the station at the monitoring location ST2, where the degree of congestion is low.
- For the seat section DS1, the information acquisition unit 14 determines whether the difference between the distances exceeds a predetermined value. Since it does, the information acquisition unit 14 generates information for guiding people to the station at the monitoring location ST1, despite its high degree of congestion. For display spaces closer to the more congested monitoring location ST1, the guidance destination may instead be determined based on balancing the number of people sent to each destination, rather than on the difference in distance.
- The information acquisition unit 14 may also include, in the guidance information, the degrees of congestion of the stations ST1 and ST2, and the distance and required travel time from the corresponding seat section to each station (a sketch of this decision logic follows).
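- The Example 2 decision logic might look like the following sketch; the distances, the threshold, and the tie-breaking rule are assumed values chosen to reproduce the outcomes described above.

```python
congested = {"ST1": True, "ST2": False}
dist = {  # seat section -> (distance to ST1, distance to ST2), assumed metres
    "DS1": (100, 900), "DS2": (400, 400), "DS3": (800, 150), "DS4": (500, 500),
}
PREDETERMINED = 300  # hypothetical distance-difference threshold

def station_for(section: str) -> str:
    d1, d2 = dist[section]
    if abs(d1 - d2) <= PREDETERMINED:
        # Roughly equidistant: prefer the less congested station.
        return "ST2" if congested["ST1"] else "ST1"
    # Much nearer to one station: guide there even if it is congested.
    return "ST1" if d1 < d2 else "ST2"

for s in dist:
    print(s, "->", station_for(s))  # DS1 -> ST1, DS2/DS3/DS4 -> ST2
```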
- The control unit 15 displays the generated guidance information on the display devices 6 (#1) to 6 (#4).
- The display device 6 (#1) displays information guiding people to the station ST1, and the display devices 6 (#2) to 6 (#4) display information guiding people to the station ST2.
- With a method that simply presents the vacant station ST2 to all visitors, visitors sitting in the seat section DS1 would travel a long way, and the station ST2 might already be congested by the time they arrive.
- In contrast, since the guidance information here is generated according to the positional relationship between each seat section of the venue and the stations, each crowd can be guided appropriately.
- FIG. 8 is a diagram illustrating the installation form of the monitoring cameras 5 and the display devices 6 in Example 3.
- The guidance system 1 in Example 3 appropriately guides a crowd waiting for trains on a station platform.
- In Example 3, each surveillance camera 5 images the inside of one vehicle of the target train as its monitoring location. Specifically, the monitoring camera 5 (#1) images the inside of the vehicle VH1, the monitoring camera 5 (#2) images the inside of the vehicle VH2, and the monitoring camera 5 (#3) images the inside of the vehicle VH3.
- Hereinafter, the monitoring locations of the monitoring cameras 5 (#1) to 5 (#3) are denoted VH1 to VH3, respectively.
- Each display device 6 includes the boarding position of a vehicle on the platform in its display space.
- The display device 6 (#1) uses the space including the boarding position RP1 of the vehicle VH1 as its display space, and the display device 6 (#2) uses the space including the boarding position RP2 of the vehicle VH2 as its display space.
- Hereinafter, the display spaces of the display devices 6 (#1) to 6 (#3) are denoted RP1 to RP3, respectively.
- In Example 3, the image acquisition unit 11 acquires a plurality of monitoring images capturing the monitoring locations VH1 to VH3.
- The analysis unit 12 analyzes these monitoring images and acquires the degrees of congestion at the monitoring locations VH1 to VH3 as the crowd state.
- In the example of FIG. 8, low degrees of congestion are acquired for the monitoring locations VH1 and VH2, and a high degree of congestion for the monitoring location VH3.
- The information acquisition unit 14 acquires the correspondence between each monitoring place and each display space based on the correspondence between each vehicle and its boarding position. Specifically, the information acquisition unit 14 grasps the correspondence between the monitoring location VH1 and the display space RP1, between the monitoring location VH2 and the display space RP2, and between the monitoring location VH3 and the display space RP3. Furthermore, the information acquisition unit 14 grasps the positional relationships among the boarding positions, for example that the boarding position RP2 is the closest to the boarding position RP3.
- The information acquisition unit 14 generates, as guidance information for the display device 6 (#3) at the boarding position (display space) RP3 of the highly congested vehicle (monitoring location) VH3, information for guiding people to the boarding position RP2, which has a low degree of congestion and is the closest.
- For example, the guidance information indicates that the vehicle VH2 is vacant and indicates its boarding position RP2.
- Furthermore, since this guides people from the boarding position RP3 to the boarding position RP2, the information acquisition unit 14 may also generate, as guidance information for the display device 6 (#2) at the boarding position RP2, information for guiding people to the closest boarding position with a low degree of congestion, RP1.
- The control unit 15 displays the generated guidance information on the display device 6 (#3).
- Thereby, the display device 6 (#3) displays the information guiding people to the boarding position RP2.
- In addition, the display device 6 (#2) can display information guiding people to the boarding position RP1.
- Alternatively, the display device 6 (#3) may display information guiding people directly to the boarding position RP1.
- As a modification, each monitoring camera 5 may image the boarding positions RP1 to RP3 on the platform as its monitoring locations.
- In this case, the analysis unit 12 can analyze the monitoring image of each boarding position and acquire, as the crowd state, the left-behind situation (people unable to board) at the monitoring locations RP1 to RP3.
- For example, the analysis unit 12 estimates, as the left-behind situation, the situation in which people cannot board even when a train arrives at their boarding position. For example, the analysis unit 12 calculates the difference between the degree of congestion at each boarding position immediately before the train stops and immediately after it departs, and uses that difference, or a value calculated from it, as the left-behind situation. The smaller the difference, the fewer people were able to board, so a larger left-behind value is calculated. The analysis unit 12 may also measure the movement of the queue and determine the left-behind situation in consideration of how far the queue advanced.
- In this case, the information acquisition unit 14 generates the guidance information using the left-behind situation of each boarding position in place of, or in addition to, the degree of congestion of each vehicle (a sketch of this estimate follows).
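- A sketch of the left-behind estimate described above: the smaller the drop in congestion across a train's stop, the more people are presumed left behind. The normalization is an assumption for illustration.

```python
def left_behind_score(congestion_before: float, congestion_after: float) -> float:
    # Drop in congestion approximates how many of the waiting people boarded.
    boarded = max(congestion_before - congestion_after, 0.0)
    if congestion_before == 0:
        return 0.0  # nobody was waiting
    # Few boarded relative to those waiting -> score near 1.0 (many left behind).
    return 1.0 - boarded / congestion_before

print(left_behind_score(0.8, 0.7))  # most of the queue remained -> 0.875
print(left_behind_score(0.8, 0.1))  # queue mostly cleared -> 0.125
```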
- Each display device 6 may also include a boarding position in its display space, instead of or together with the inside of a vehicle.
- FIG. 9 is a diagram illustrating the installation form of the monitoring cameras 5 and the display devices 6 in Example 4.
- The guidance system 1 in Example 4 appropriately guides the crowd (passengers) on a train.
- In Example 4, each surveillance camera 5 images a ticket gate of a station as its monitoring location. Specifically, the surveillance camera 5 (#1) images the vicinity of the ticket gate TG1, the surveillance camera 5 (#2) images the vicinity of the ticket gate TG2, and the surveillance camera 5 (#3) images the vicinity of the ticket gate TG3.
- Hereinafter, the monitoring locations of the monitoring cameras 5 (#1), 5 (#2), and 5 (#3) are denoted TG1, TG2, and TG3, respectively.
- Each display device 6 includes the inside of a vehicle of the train in its display space.
- The display device 6 (#1) uses the inside of the vehicle VH1 as its display space, the display device 6 (#2) the inside of the vehicle VH2, the display device 6 (#3) the inside of the vehicle VH3, the display device 6 (#4) the inside of the vehicle VH4, and the display device 6 (#5) the inside of the vehicle VH5.
- Hereinafter, the display spaces of the display devices 6 (#1) to 6 (#5) are denoted VH1 to VH5, respectively.
- In Example 4, the image acquisition unit 11 acquires a plurality of monitoring images capturing the monitoring locations TG1 to TG3.
- The analysis unit 12 analyzes these monitoring images and acquires the degrees of congestion at the monitoring locations TG1 to TG3 as the crowd state.
- In the example of FIG. 9, low degrees of congestion are acquired for the monitoring locations TG1 and TG3, and a high degree of congestion for the monitoring location TG2.
- The information acquisition unit 14 acquires the positional relationship between each monitoring location and each display space based on the correspondence between each vehicle and its stop position on the platform. Specifically, the information acquisition unit 14 grasps that the display spaces VH1 and VH2 are close to the monitoring location TG1, that the display spaces VH3 and VH4 are close to the monitoring location TG2, and that the display space VH5 is close to the monitoring location TG3. Furthermore, the information acquisition unit 14 grasps that the display space VH2 is closest to the monitoring location TG1 and next closest to the monitoring location TG2, that the display space VH3 is closest to the monitoring location TG2 and next closest to the monitoring location TG1, and that the display space VH4 is closest to the monitoring location TG2 and next closest to the monitoring location TG3.
- The information acquisition unit 14 generates, as guidance information for the display devices 6 (#3) and 6 (#4) of the vehicles (display spaces) VH3 and VH4, which stop near the highly congested ticket gate (monitoring location) TG2, information for guiding people to other, vacant ticket gates.
- Specifically, the information acquisition unit 14 generates, as guidance information for the display device 6 (#3), information for guiding people to the ticket gate TG1, which is the next closest after the ticket gate TG2, and, as guidance information for the display device 6 (#4), information for guiding people to the ticket gate TG3, which is the next closest after the ticket gate TG2 (a sketch of this mapping follows).
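- A sketch of the Example 4 mapping from vehicles to ticket gates; the proximity ordering is read off FIG. 9 as described above, and the fallback rule is an assumption.

```python
congested = {"TG1": False, "TG2": True, "TG3": False}
# vehicle -> ticket gates ordered from nearest to farthest (assumed from FIG. 9)
gates_by_proximity = {
    "VH1": ["TG1", "TG2", "TG3"],
    "VH2": ["TG1", "TG2", "TG3"],
    "VH3": ["TG2", "TG1", "TG3"],
    "VH4": ["TG2", "TG3", "TG1"],
    "VH5": ["TG3", "TG2", "TG1"],
}

for vehicle, gates in gates_by_proximity.items():
    # Guide to the nearest gate that is not congested; keep the nearest otherwise.
    choice = next((g for g in gates if not congested[g]), gates[0])
    print(vehicle, "->", choice)  # VH3 -> TG1, VH4 -> TG3, as described above
```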
- The control unit 15 displays the generated guidance information on the display devices 6 (#3) and 6 (#4).
- Thereby, the display device 6 (#3) displays information guiding people to the ticket gate TG1, and the display device 6 (#4) displays information guiding people to the ticket gate TG3.
- Since the guidance information is displayed inside each vehicle, people can check which ticket gate to head for before getting off, so the crowd on the platform can be guided smoothly.
- As a modification, each monitoring camera 5 may use a passage of the station as its monitoring location, instead of or together with a ticket gate.
- FIG. 9 shows an example in which the guidance information is displayed on the display device 6 in the vehicle.
- Alternatively, the control unit 15 may display the guidance information on terminals carried by the passengers; that is, the control unit 15 presents, on each user's mobile terminal, information suited to the vehicle in which that user is riding.
- Information on the vehicle in which each user is riding can be obtained from various sensors mounted on the mobile terminal, from GPS (Global Positioning System), from information exchange between devices installed on the platform and the mobile terminal, and so on.
- In the second exemplary embodiment, among the monitoring places imaged by the monitoring cameras 5, a place for which the degree of congestion is acquired is referred to as a target area (corresponding to a first target area), and a place that people may pass through on their way to the target area is referred to as a midway area (corresponding to a second target area).
- FIG. 10 is a diagram conceptually illustrating a processing configuration example of the guidance processing device 10 in the second embodiment.
- The guidance processing device 10 in the second embodiment further includes a prediction unit 17 in addition to the configuration of the first embodiment.
- The prediction unit 17 is realized in the same manner as the other processing units.
- In FIG. 10, the prediction unit 17 is shown as a part of the information acquisition unit 14, but it may be realized as a processing unit separate from the information acquisition unit 14.
- The analysis unit 12 analyzes the monitoring images, acquired by the image acquisition unit 11, in which the target area is captured, to acquire the degree of congestion of people in the target area, and analyzes the images in which the midway area is captured to acquire the flow rate of people in the midway area.
- The methods for acquiring the flow rate and the degree of congestion are as described above.
- At this time, the analysis unit 12 may estimate the moving direction of each person in the monitoring images of the midway area and count the flow rate only over persons whose moving direction points toward the target area.
- The storage unit 13 stores the history of the degree of congestion of the target area and of the flow rate of the midway area acquired by the analysis unit 12. The storage unit 13 also stores, as the positional relationship between display spaces and monitoring places, the distance between each display space and the target area, or the time required for a person to move from each display space to the target area.
- The prediction unit 17 acquires the predicted degree of congestion of people in the target area at an arbitrary time point, based on the degree of congestion of the target area and the flow rate of the midway area acquired by the analysis unit 12.
- The flow rate of the midway area obtained from a monitoring image captured at a time T can be regarded as the number of people who will reach the target area after the required time ΔT needed to move from the midway area to the target area.
- The prediction unit 17 can therefore acquire the predicted degree of congestion of the target area at an arbitrary time, for example, as follows.
- Based on the history data stored in the storage unit 13, the prediction unit 17 learns the correlation between the degree of congestion of the target area obtained from the monitoring image captured at time (T + ΔT) and the flow rate of the midway area obtained from the monitoring image captured at time T. Based on this learning, the prediction unit 17 generates a function f(t) that gives the predicted degree of congestion of the target area at an arbitrary time t.
- The information acquisition unit 14 uses the time required for a person to move from the display space of each display device 6 to the target area, together with the prediction acquired by the prediction unit 17, to acquire, as guidance information for each display space, the predicted degree of congestion of the target area at the time when a person now in that display space would reach it. For example, when the prediction unit 17 provides the function f(t), the information acquisition unit 14 can calculate the predicted degree of congestion of the target area as f(tc + Δr), using the current time tc and each required time Δr.
- The information acquisition unit 14 may calculate each required time using the moving speed acquired by the analysis unit 12 for each display space together with the flow rate; in this case, the information acquisition unit 14 may acquire the distance from each display space to the target area from the storage unit 13.
- For a display space that coincides with the midway area, the information acquisition unit 14 may further increase the predicted degree of congestion, based on the flow rate obtained for that midway area by the analysis unit 12.
- For example, the information acquisition unit 14 takes, as the final guidance information, the value obtained by multiplying the predicted degree of congestion calculated from the required time from the midway area to the target area by a weight corresponding to the flow rate: it calculates f(tc + Δr) × (1.0 + α) as the guidance information, using a value α that increases as the flow rate increases. This strengthens the effect of suppressing movement from that display space to the target area (a sketch of this prediction follows).
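- A sketch of the prediction described above, assuming a simple linear model: learn, from history, the relation between the flow rate of the midway area at time T and the target area's congestion at time T + ΔT, then boost the prediction by (1.0 + α). The sample history and the choice of a linear fit are assumptions.

```python
def fit_line(xs: list, ys: list):
    # Ordinary least-squares fit of y = slope * x + intercept.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
            sum((x - mx) ** 2 for x in xs)
    return slope, my - slope * mx

# History: flow in the midway area at T vs. congestion observed at T + dT.
flow_at_T = [10.0, 20.0, 30.0, 40.0]
congestion_at_T_plus_dT = [2.0, 3.9, 6.1, 8.0]
slope, intercept = fit_line(flow_at_T, congestion_at_T_plus_dT)

def predicted_congestion(current_flow: float, alpha: float = 0.0) -> float:
    # alpha increases with the flow rate in the viewer's own midway area,
    # strengthening the deterrent effect: f(tc + dr) * (1.0 + alpha).
    return (slope * current_flow + intercept) * (1.0 + alpha)

print(predicted_congestion(25.0))             # plain prediction
print(predicted_congestion(25.0, alpha=0.2))  # boosted for a busy midway area
```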
- The control unit 15 causes each display device 6 to output the predicted degree of congestion of the target area acquired for its display space. Thereby, each display device 6 displays a predicted degree of congestion that reflects its distance to the target area.
- FIG. 11 is a flowchart illustrating an operation example of the guidance processing device 10 according to the second embodiment.
- The executing entity of the guidance method in the second embodiment is the same as in the first embodiment. Since each process has the same content as described for the corresponding processing unit of the guidance processing device 10, details are omitted as appropriate.
- The guidance processing device 10 acquires the monitoring images captured by the monitoring cameras 5 (S111).
- The guidance processing device 10 sequentially acquires the monitoring images in time series.
- The acquired monitoring images include images in which the target area is captured and images in which the midway area is captured.
- The guidance processing device 10 acquires the degree of congestion of the target area by analyzing the monitoring images of the target area acquired in (S111) (S112). Furthermore, the guidance processing device 10 acquires the flow rate of the midway area by analyzing the monitoring images of the midway area acquired in (S111) (S113).
- The method of analyzing the monitoring images and of acquiring the degree of congestion and the flow rate as the crowd state are as described in the first embodiment.
- The guidance processing device 10 acquires the predicted degree of congestion of the target area at an arbitrary time point, based on the degree of congestion of the target area acquired in (S112) and the flow-rate history of the midway area acquired in (S113) (S114). Further, the guidance processing device 10 acquires each required time needed for a person to move from the display space of each display device 6 to the target area (S115).
- Using the prediction acquired in (S114) and the required times acquired in (S115), the guidance processing device 10 acquires, as guidance information, the predicted degree of congestion of the target area for each display space (S116). At this time, if a midway area (monitoring place) coincides with a display space, the guidance processing device 10 may further increase the predicted degree of congestion for that display space, based on the flow rate acquired for the midway area in (S113).
- The guidance processing device 10 displays each predicted degree of congestion acquired in (S116) on the corresponding display device 6 (S117).
- In FIG. 11, a plurality of steps (processes) are shown in order, but the steps executed in the second embodiment and their execution order are not limited to the example of FIG. 11. (S112) and (S113) may be executed asynchronously with each other at arbitrary timings.
- (S114) may be executed according to the history storage status of the storage unit 13, independently of the execution timing of (S111) to (S113).
- (S115) may be executed only once if the positions of the display spaces and the target area do not change. Of course, the required times may be updated according to acquired speed information; in this case, (S115) may be executed periodically after (S112) and (S113). Further, (S116) and (S117) may be executed at arbitrary timings, independently of the execution timings of (S111) to (S115).
- As described above, in the second exemplary embodiment, the predicted degree of congestion of the target area at an arbitrary time point is acquired from the history of the degree of congestion of the target area and the history of the flow rate of the midway area, both obtained from monitoring images. Then, based on this prediction and on each required time needed for a person to move from each display space to the target area, the predicted degree of congestion of the target area is acquired for each display space, and each display device 6 displays the predicted degree of congestion acquired for its display space.
- People who see the display on a display device 6 may learn the predicted degree of congestion of the target area they intend to visit and switch to another area, because a high predicted degree of congestion can be a motivation to change one's destination. Moreover, what is presented is not the degree of congestion at the present moment but the predicted degree at the time the viewers would actually reach the target area. Therefore, according to the second exemplary embodiment, the situation in which an area turns out to be crowded upon arrival can be avoided, and the crowd can be guided appropriately while preventing congestion in a specific area.
- In Example 5, a user's portable terminal is used as the display device 6.
- The portable terminal used as the display device 6 is a general portable computer such as a notebook PC (Personal Computer), a mobile phone, a smartphone, or a tablet terminal.
- The guidance processing device 10 and each mobile terminal are communicably connected via a communication network such as a mobile phone network, a Wi-Fi network, or the Internet.
- the information acquisition unit 14 acquires position information and moving-speed information of each mobile terminal and, using the acquired information, estimates each required time for each user holding a mobile terminal to reach the target area.
- the information acquisition unit 14 can acquire this information from another computer that collects position and moving-speed information from the mobile terminals, or it can acquire the information directly from each mobile terminal.
- the moving speed information may be calculated by a sensor mounted on the mobile terminal, or may be calculated using GPS (Global Positioning System).
- the information acquisition unit 14 acquires the position information of the target area of each mobile terminal.
- the information acquisition unit 14 can specify a target area by displaying a screen for designating a target area on each portable terminal and detecting a designation operation on the screen.
- the information acquisition unit 14 may acquire position information of the target area stored in the storage unit 13.
- the information acquisition unit 14 calculates the distance from the position of each mobile terminal to the target area and divides that distance by the moving speed, thereby calculating the required time for the user holding each mobile terminal to reach the target area.
- Alternatively, the information acquisition unit 14 may determine from changes in the position information which train the user has boarded, and derive the required time from that train's arrival time at the destination (or its vicinity).
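A minimal sketch of the distance / moving-speed estimate just described; the planar distance calculation and the 0.5 m/s lower bound are assumptions made for illustration only.

```python
def required_time_min(terminal_pos, target_pos, speed_m_per_s):
    """Required time (minutes) = distance to the target area / moving speed."""
    dx = target_pos[0] - terminal_pos[0]
    dy = target_pos[1] - terminal_pos[1]
    distance_m = (dx * dx + dy * dy) ** 0.5
    speed = max(speed_m_per_s, 0.5)  # guard against a near-zero reported speed
    return distance_m / speed / 60.0

print(required_time_min((0, 0), (600, 800), 1.25))  # 1000 m at 1.25 m/s -> ~13.3 min
```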
- the information acquisition unit 14 can thus determine, for each mobile terminal, whether its user is heading for the target area.
- For each mobile terminal, the predicted congestion degree of the target area at the time its user will reach that area is acquired.
- the control unit 15 displays each predicted congestion level of the target area on each mobile terminal.
- the predicted congestion degree of the target area can be accurately presented to each individual of the crowd according to the position and moving speed of the individual. Moreover, even when the target area is different for each individual, the predicted congestion degree of each target area can be presented to each portable terminal. Therefore, it is possible to guide the crowd appropriately according to the state of each individual.
- In the third embodiment, the plurality of monitoring locations imaged by the plurality of monitoring cameras 5 are referred to as follows, based on the positional relationship between the monitoring locations.
- Another monitoring location located in the vicinity of a certain monitoring location is denoted as a peripheral area, and the certain monitoring location is denoted as a central area.
- all of the monitoring places, or only some of them, may serve as central areas.
- FIG. 12 is a diagram showing an installation form of the monitoring camera 5 and the display device 6 in the third embodiment.
- the area AR1 is a monitoring place that is a central area
- the areas AR2 to AR4 are monitoring places that are peripheral areas of the central area.
- the monitoring camera 5 (# 1) images the central area AR1
- the monitoring cameras 5 (# 2) to 5 (# 4) image the peripheral areas AR2 to AR4.
- the example of FIG. 12 is applicable to a theme park or park.
- the center area AR1 is a place for popular attractions of a theme park, and each of the surrounding areas AR2 to AR4 is a part of a route toward the popular attractions.
- the central area AR1 is a place for popular playground equipment in the park, and the surrounding areas AR2 to AR4 are places for unpopular playground equipment.
- the plurality of display devices 6 are installed so that each of their display spaces includes a monitoring place serving as a peripheral area.
- the display devices 6 (# 1) to 6 (# 3) have respective display spaces including the peripheral areas.
- FIG. 13 is a diagram conceptually illustrating a processing configuration example of the guidance processing device 10 in the third embodiment.
- the guidance processing device 10 further includes a determination unit 18 in addition to the configuration of the first embodiment.
- the determination unit 18 is also realized in the same manner as other processing units.
- the determination unit 18 is illustrated as a part of the information acquisition unit 14, but may be realized as a processing unit different from the information acquisition unit 14.
- FIG. 13 illustrates a configuration in which the determination unit 18 is added to the processing configuration in the first embodiment, but the determination unit 18 may be added to the processing configuration in the second embodiment.
- the analysis unit 12 analyzes each monitoring image and acquires the degree of congestion and the moving direction of each person for each monitoring place (each target area).
- the analysis unit 12 may acquire the degree of congestion for each monitoring place serving as a central area, and may acquire the degree of congestion and the moving direction for each monitoring place serving as a peripheral area of any one central area.
- the method for acquiring the degree of congestion and the moving direction is as described in the first embodiment. However, since a plurality of moving directions may be detected from one monitoring image, the analysis unit 12 can take the most frequently detected direction as the moving direction of that monitoring image. The analysis unit 12 can also acquire the number of persons (congestion degree) for each moving direction.
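As a rough sketch of this step, per-person headings could be binned into fixed sectors and the best-populated sector taken as the dominant direction; the 45-degree sector width below is an assumption, since the specification does not fix one.

```python
from collections import Counter

def dominant_direction(headings_deg):
    """Quantize each person's heading into 45-degree sectors and return the
    dominant sector plus the per-direction head counts (congestion degrees)."""
    sectors = Counter(int(h % 360 // 45) for h in headings_deg)
    best = max(sectors, key=sectors.get)
    return best, dict(sectors)

print(dominant_direction([10, 20, 15, 350, 170]))  # -> (0, {0: 3, 7: 1, 3: 1})
```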
- the storage unit 13 stores the congestion degree and movement direction history acquired from the monitoring image captured by the monitoring camera 5 in association with the information indicating the monitoring location of the monitoring camera 5.
- the storage unit 13 stores relationship information between central areas and peripheral areas. In line with the example of FIG. 12, the storage unit 13 stores relationship information such as: for the monitoring location AR1 serving as a central area, the monitoring locations AR2 and AR3 serve as peripheral areas; and for the monitoring location AR2 serving as a central area, the monitoring locations AR1, AR3, and AR4 serve as peripheral areas.
- for each monitoring place serving as a central area, the determination unit 18 determines the degree of influence of each peripheral area on the congestion degree of that central area, based on the congestion degree and moving direction histories stored in the storage unit 13.
- the influence degree determined by the determination unit 18 means the degree to which the movement of people present in each peripheral area affects the congestion degree of the central area. For example, from the congestion degree history of each peripheral area, the determination unit 18 uses only the congestion degrees stored together with a moving direction pointing toward the central area, and calculates the correlation coefficient between the congestion degree of each peripheral area and the congestion degree of the central area.
- the determination unit 18 determines the degree of influence of each peripheral area based on the calculated correlation coefficient.
- the determination unit 18 can also use the correlation coefficient as the degree of influence as it is.
- the influence degree may be indicated by a binary value indicating whether or not there is an influence, or may be indicated by a ternary value.
- the calculation method of the influence degree is not limited.
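One concrete reading of the correlation-based influence degree, offered as a sketch only since the calculation method is left open above: each peripheral sample pairs a congestion value with a flag telling whether its dominant moving direction pointed toward the central area.

```python
from statistics import StatisticsError, correlation  # Python 3.10+

def influence_degree(peripheral_samples, central_congestion):
    pairs = [
        (c, z)
        for (c, toward_center), z in zip(peripheral_samples, central_congestion)
        if toward_center  # keep only center-bound peripheral congestion
    ]
    if len(pairs) < 2:
        return 0.0
    xs, ys = zip(*pairs)
    try:
        # clamp at zero: a negative correlation is treated as "no influence"
        return max(0.0, correlation(list(xs), list(ys)))
    except StatisticsError:  # constant series -> correlation undefined
        return 0.0

samples = [(10, True), (14, True), (3, False), (20, True)]
print(influence_degree(samples, [30, 35, 33, 44]))  # close to 1.0
```

Using the clamped correlation coefficient directly as the influence degree corresponds to the option mentioned above; a binary or ternary influence value could be obtained by thresholding the same quantity.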
- for example, the influence degree of the peripheral area AR3 is determined to be higher than those of the peripheral areas AR2 and AR4.
- the determination unit 18 sequentially updates the degree of influence of each peripheral area with respect to each central area. Alternatively, the determination unit 18 may update the influence degree at a predetermined interval, or may determine it only once and not update it thereafter.
- the information acquisition unit 14 acquires a plurality of different pieces of guidance information based on the degree of influence of each peripheral area determined by the determination unit 18.
- for a peripheral area with a high influence degree, the information acquisition unit 14 generates guidance information that suppresses an increase in the number of people moving from that peripheral area to the central area. For example, when the information acquisition unit 14 detects a central area whose congestion degree exceeds a predetermined threshold, it generates, for each peripheral area of that central area having both a high influence degree and a high congestion degree, guidance information that discourages people from moving toward the central area. In this case, the information acquisition unit 14 may generate guidance information that guides people to areas other than the central area.
- when the predicted congestion degree described in the second embodiment is used as the guidance information, the information acquisition unit 14 generates, for a peripheral area with a higher influence degree, guidance information with a correspondingly increased predicted congestion degree.
- the information acquisition unit 14 can also generate guidance information including a display frequency corresponding to the influence degree of each peripheral area. For example, the information acquisition unit 14 generates guidance information including a high display frequency for a peripheral area having a high influence level.
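An illustrative mapping from influence degree to display frequency, consistent with the paragraph above; the 0.7 threshold and the per-hour counts are assumed values, not taken from the specification.

```python
def display_frequency_per_hour(influence_degree, base=4, boosted=12):
    """Guidance for a highly influential peripheral area is shown more often."""
    return boosted if influence_degree >= 0.7 else base
```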
- the control unit 15 displays the guidance information of each peripheral area on each display device 6 including the peripheral area in the display space.
- when a display frequency is designated, the control unit 15 causes the display device 6 to display the guidance information at that frequency.
- FIG. 14 is a flowchart illustrating an operation example of the guidance processing device 10 according to the third embodiment.
- the execution subject of the guidance method in the third embodiment is the same as in the first embodiment. Each step is the same as the processing of the corresponding processing unit of the guidance processing device 10 described above, so details of each step are omitted as appropriate.
- the guidance processing device 10 stores in advance information on a monitoring location serving as a central area and a monitoring location (target area) serving as a peripheral area with respect to each central area.
- the guidance processing device 10 acquires each monitoring image captured by each monitoring camera 5 (S131).
- the guidance processing apparatus 10 acquires the congestion degree and the moving direction in each target area by analyzing the monitoring image acquired in (S131) (S132).
- the method for analyzing the monitoring image and the method for acquiring the degree of congestion and the moving direction as the crowd state are as described in the first embodiment.
- the guidance processing device 10 determines the influence degree of each peripheral area with respect to each central area, based on the congestion degree history of the central area and the congestion degree and moving direction histories of the peripheral areas acquired in (S132) (S133).
- the guidance processing device 10 determines whether there is a central area with a high degree of congestion based on the degree of congestion of the target area that is each central area acquired in (S132) (S134). For example, the guidance processing device 10 determines whether or not there is a target area that shows a degree of congestion higher than a predetermined threshold in the target area that is the central area.
- If there is a central area with a high degree of congestion (S134; YES), the guidance processing device 10 generates guidance information for each peripheral area of that central area (S135).
- the guidance processing device 10 generates guidance information that prevents a person from moving from each peripheral area to the central area. At this time, the guidance processing device 10 may generate guidance information only in the peripheral area where the degree of congestion is high.
- the guidance processing device 10 can also generate different guidance information for each peripheral area based on the congestion degree and the influence degree of each peripheral area. In this case, the guidance processing device 10 may generate guidance information with stronger guidance power in a peripheral area with a higher degree of congestion and a higher degree of influence.
- the guidance processing device 10 may include the guidance information display frequency in the guidance information.
- the guidance processing device 10 displays the guidance information generated for each peripheral area on each display device 6 including each peripheral area in the display space (S136).
- the guidance processing device 10 causes the display devices 6 to display the guidance information at that display frequency.
- In FIG. 14, a plurality of steps (processes) are shown in order, but the steps executed in the third embodiment and their execution order are not limited to the example of FIG. 14. (S133) may be executed at an arbitrary timing, as a process separate from the flow of FIG. 14, using the congestion degree and moving direction histories stored in the storage unit 13.
- As described above, in the third embodiment, for each monitoring place serving as a central area, the influence degree of each monitoring place serving as one of its peripheral areas is determined based on the congestion degree and moving direction obtained by analyzing the monitoring images.
- Guide information for each peripheral area is generated based on the degree of influence on the central area determined for each peripheral area, and each display device 6 including each peripheral area in the display space displays the guide information.
- In other words, the guidance information presented in one area is generated according to how strongly that area influences the congestion degree of another area. Therefore, according to the third embodiment, an increase in the congestion degree of an area can be efficiently suppressed by the guidance information presented in the areas that influence it, enabling efficient crowd guidance.
- FIG. 15 is a diagram conceptually illustrating a processing configuration example of the guidance processing device 10 in the fourth embodiment.
- the guidance processing device 10 further includes a state monitoring unit 21 and an information changing unit 22 in addition to the configuration of the first embodiment.
- the state monitoring unit 21 and the information changing unit 22 are also realized in the same manner as other processing units.
- FIG. 15 illustrates a configuration in which the state monitoring unit 21 and the information changing unit 22 are added to the processing configuration of the first embodiment, but they may instead be added to the processing configuration of the second or third embodiment.
- the state monitoring unit 21 acquires the change status of the crowd state based on the time-series monitoring images captured after the guidance information is displayed on each display device 6. Specifically, the state monitoring unit 21 acquires the history of the crowd state extracted by the analysis unit 12 from the time-series monitoring images after the guidance information is displayed, and acquires the change status of the crowd state based on this history. The history of the crowd state can also be acquired from the storage unit 13.
- the state monitoring unit 21 may acquire the change status as a binary value indicating the presence or absence of a change, or as a numerical value indicating the degree of change. For example, the state monitoring unit 21 acquires information indicating an increase, a decrease, or no change in the degree of congestion as the change status.
- the state monitoring unit 21 may acquire the change status of the crowd state only for the monitoring places (target areas) whose crowd state is affected by the presentation of the guidance information. This limits the monitoring places for which the change status must be acquired, reducing the processing load.
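A sketch of the three-valued change status (increase / decrease / no change) derived from the post-display congestion history; the 5% dead band is an assumption for illustration.

```python
def change_status(congestion_after_display, band=0.05):
    first, last = congestion_after_display[0], congestion_after_display[-1]
    if first == 0:
        return "increase" if last > 0 else "no change"
    delta = (last - first) / first
    if delta > band:
        return "increase"
    if delta < -band:
        return "decrease"
    return "no change"

print(change_status([52, 50, 45]))  # -> "decrease"
```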
- a monitoring place (target area) that affects the state of the crowd by the presentation of guidance information is referred to as a control target area.
- the target area in the second embodiment and the center area in the third embodiment correspond to the control target area.
- the information changing unit 22 changes at least one guide information acquired by the information acquiring unit 14 based on the change situation acquired by the state monitoring unit 21.
- possible changes to the guidance information include changing the guidance destination, stopping the guidance, and increasing or decreasing the guidance force.
- the information changing unit 22 changes the guidance information to guidance information having a stronger guidance force.
- the information changing unit 22 may change the guidance information to guidance information having a weaker guidance force.
- the increase or decrease of the guidance force can be realized, for example, by raising or lowering the displayed predicted congestion level or the display frequency of the guidance information.
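One possible adjustment rule combining the change status with the guidance force, assuming the presented predicted congestion is the knob; this is a sketch of one plausible policy, not the specification's definitive behavior.

```python
def adjust_guidance(predicted_congestion, status):
    if status in ("increase", "no change"):
        return predicted_congestion * 1.2  # guidance had no effect: strengthen it
    return predicted_congestion * 0.9      # congestion is falling: relax it
```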
- the control unit 15 causes the display device 6 corresponding to at least one guide information changed by the information changing unit 22 to display the changed guide information.
- FIG. 16 is a flowchart illustrating an operation example regarding the change of the guidance information of the guidance processing device 10 according to the fourth embodiment.
- the execution subject of the guidance method in the fourth embodiment is the same as in the first embodiment. Each step is the same as the processing of the corresponding processing unit of the guidance processing device 10 described above, so details of each step are omitted as appropriate.
- the guidance processing device 10 has already displayed guidance information on at least one display device 6.
- the presentation of this guidance information affects the state of the crowd in the control target area.
- the guidance processing device 10 acquires each monitoring image captured by each monitoring camera 5 (S161).
- the guidance processing device 10 sequentially acquires the monitoring images in time series.
- the acquired monitoring image includes a monitoring image obtained by capturing the control target area.
- the guidance processing device 10 acquires the state of the crowd in the target area by analyzing the monitoring image acquired in (S161) (S162).
- the guidance processing device 10 may acquire only the state of the crowd in the control target area.
- the guidance processing device 10 acquires a change state of the crowd state in the control target area (S163).
- the guidance processing device 10 changes the displayed guidance information based on the change status acquired in (S163) (S164).
- the change mode of the guidance information is as described above.
- the guidance processing device 10 displays the changed guidance information on the corresponding display device 6 (S165).
- As described above, in the fourth embodiment, the change status of the crowd state in the control target area is acquired, and the guidance information is changed according to that change status.
- In this way, the effect of guiding the crowd by presenting the guidance information is judged from the change in the crowd state in the control target area, and the guidance information is adjusted so that the crowd state approaches the desired state. Accordingly, the crowd can be guided efficiently toward the desired state.
- In a modification applicable to each of the above embodiments, the guidance processing device 10 may acquire the guidance information by further using environmental status information indicating the status of the environment.
- FIG. 17 is a diagram conceptually illustrating a processing configuration example of the guidance processing device 10 in the modification. As illustrated in FIG. 17, the guidance processing device 10 further includes an environment acquisition unit 25 in addition to the configuration of the first embodiment. The environment acquisition unit 25 is also realized in the same manner as other processing units. FIG. 17 illustrates a configuration in which the environment acquisition unit 25 is added to the processing configuration in the first embodiment.
- the environment acquisition unit 25 acquires environment status information.
- Environmental status information includes, for example, weather information (weather conditions, warnings, etc.), meteorological element information (temperature, humidity, etc.), and abnormality information (train delays, accidents, equipment failures, natural disasters, etc.) for the location where the target crowd exists and for the destination. Further, when the crowd to be guided is in a game venue, the outcome of the game, the game content, and the like can be included in the environmental status information.
- the environment acquisition unit 25 acquires such environmental status information from other systems and services via communication.
- the information acquisition unit 14 acquires the guidance information by further using the environmental status information acquired by the environment acquisition unit 25.
- the information acquisition unit 14 may specify a ticket vending machine to be presented as a guide destination using the failure information of the ticket vending machine in addition to the congestion degree and proximity of the ticket vending machine.
- For example, the information acquisition unit 14 may generate guidance information that distinguishes between the seating sections of the winning team and the losing team so that the crowds from the two sections are not guided to the same destination.
- In rainy weather, guidance information may be generated so that a ticket gate connected to a passage sheltered from the rain is preferentially selected as the guidance destination.
- FIG. 18 is a diagram conceptually illustrating a processing configuration example of the guidance processing device in the fifth embodiment.
- the guidance processing device 100 includes an information acquisition unit 101 and a control unit 102.
- the guidance processing device 100 illustrated in FIG. 18 has a hardware configuration similar to that of the above-described guidance processing device 10 illustrated in FIG. 1, for example.
- the guidance processing device 100 need not be directly communicably connected to the monitoring cameras 5 and the display devices 6.
- each processing unit described above is realized, for example, by a CPU executing a program stored in a memory, as in the guidance processing device 10.
- the information acquisition unit 101 acquires a plurality of different pieces of guidance information based on the states of a plurality of persons in one or more images. Like the image acquisition unit 11 and the analysis unit 12 described above, the information acquisition unit 101 can itself extract the states of a plurality of persons (a crowd) from the images. The information acquisition unit 101 can also acquire information on the states of a plurality of persons extracted by another computer, from that computer via communication. The states of a plurality of persons are the same as the crowd states described in the first embodiment. Further, the information acquisition unit 101 uses the states of a plurality of persons extracted either from an image captured by one monitoring camera 5 or from a plurality of images captured by a plurality of monitoring cameras 5. In the former case, the guidance information is acquired based on the states of people present at one monitoring location; in the latter case, it is acquired based on the states of people present at a plurality of monitoring locations.
- the information acquisition unit 101 acquires guidance information by the same method as in each of the above-described embodiments.
- For example, the information acquisition unit 101 stores in advance, for the first target device, guidance information indicating a first guidance destination and guidance information indicating a second guidance destination, and, for the second target device, guidance information indicating a third guidance destination and guidance information indicating a fourth guidance destination. When the states of the plurality of persons indicate a high degree of congestion, the information acquisition unit 101 acquires the guidance information indicating the second guidance destination for the first target device and the guidance information indicating the fourth guidance destination for the second target device. Otherwise, the information acquisition unit 101 acquires the guidance information indicating the first guidance destination for the first target device and the guidance information indicating the third guidance destination for the second target device.
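A minimal sketch of this selection logic; the table contents and the 0.8 congestion threshold are illustrative assumptions.

```python
GUIDANCE_TABLE = {
    "target_device_1": {"normal": "first destination", "crowded": "second destination"},
    "target_device_2": {"normal": "third destination", "crowded": "fourth destination"},
}

def select_guidance(congestion_degree, threshold=0.8):
    """Pick, per target device, the pre-stored guidance matching the crowd state."""
    key = "crowded" if congestion_degree >= threshold else "normal"
    return {device: options[key] for device, options in GUIDANCE_TABLE.items()}

print(select_guidance(0.9))  # both devices switch to their alternate destination
```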
- the guidance information acquired by the information acquisition unit 101 means not only displayed information but also any information for guiding people.
- the specific content of the guidance information may vary depending on the target device to be controlled. Examples of guidance information are described below.
- the control unit 102 executes control of a plurality of target devices existing in different spaces or time-division control of the target devices so as to be in a plurality of different states corresponding to a plurality of guidance information.
- the target device controlled by the control unit 102 corresponds to various devices that can guide people.
- a speaker, illumination, an air conditioner, an odor generator, a passage control device that controls the width of the passage, and the like also correspond to the target device.
- such a target device may be controlled by the guidance processing device 10 instead of or together with the display device 6.
- the control of a plurality of target devices and the time-sharing control of the target devices may be directly executed by the control unit 102 or may be indirectly executed by the control unit 102 via another device.
- when the target devices are the display devices 6, the control unit 102 controls each target device (each display device 6) so as to display the corresponding guidance information, as in the above-described embodiments.
- Control of the display device 6 is realized by transmitting drawing data or guidance information to the display device 6.
- the guidance information may include designation of a display form such as a display frequency, a display size, and a display color.
- the control unit 102 controls each target device so that the guidance information is displayed in a display form designated by the guidance information.
- the control unit 102 controls each target device (each speaker) so as to output each voice or each sound corresponding to each guidance information.
- the control unit 102 acquires each voice data or each acoustic data corresponding to each guidance information, and realizes control of the target device by transmitting each acquired voice data or each acoustic data to each target device.
- the guidance information acquired by the information acquisition unit 101 corresponds to voice identification information for identifying voice data or acoustic data, or voice data or acoustic data itself.
- For example, the speakers installed at the guidance destination and along the passage to it output relaxing music, while the speakers installed along passages in other places output noise; the crowd can thereby be guided toward the guidance destination. Further, when it is desired to keep the crowd at a certain place (to reduce the flow rate toward the destination) in order to ease congestion, the control unit 102 may play music that attracts the crowd and encourages it to stay along the way.
- the control unit 102 controls each target device (each illumination) so as to satisfy at least one of color and brightness corresponding to each guidance information.
- the guidance information acquired by the information acquisition unit 101 corresponds to illumination instruction information (illumination instruction signal) for designating at least one of illumination color and brightness.
- For example, the lighting installed at the guidance destination and along the passage to it is brightened, while the lighting installed along passages in other places is dimmed, so that the crowd can be guided toward the guidance destination. Further, when it is desired to keep the crowd at a certain place in order to alleviate congestion, the control unit 102 may brighten only that place.
- the control unit 102 controls each target device (each air conditioner) so as to satisfy at least one of the temperature, humidity, wind strength, and wind direction corresponding to each piece of guidance information.
- the guidance information acquired by the information acquisition unit 101 corresponds to air conditioning instruction information (air conditioning instruction signal) for designating temperature, humidity, wind strength, wind direction, and the like.
- For example, the air conditioners installed at the guidance destination and along the passage to it are set to low temperature and low humidity, while the air conditioners installed along passages in other places are stopped; the crowd can thereby be guided toward the guidance destination.
- when it is desired to keep the crowd at a certain place, the control unit 102 may operate the air conditioning at that place to increase comfort.
- the control unit 102 controls each target device (each odor generator) so that the odor corresponding to the guidance information is generated.
- the guidance information acquired by the information acquisition unit 101 corresponds to odor instruction information (odor instruction signal) for specifying the odor to be generated.
- the odor generators installed at the guidance destination and along the passage to it generate odors that people like, while the odor generators installed along passages in other places generate odors that people dislike; the crowd can thereby be guided toward the guidance destination.
- Similarly, when it is desired to keep the crowd at a certain place, the control unit 102 may generate an odor that attracts the crowd at that place.
- the control unit 102 controls each target device (each passage control device) so that the passage controlled by the passage control device has a width corresponding to the guidance information.
- the guidance information acquired by the information acquisition unit 101 corresponds to passage width instruction information (passage width instruction signal) for designating the passage width.
- the width of the guidance destination and the passage to the guidance destination is made wide or normal, and the width of the passage in other places is made narrow, so that the crowd can be guided in the direction of the guidance destination.
- the control unit 102 may change the path length.
- the time-division control of a target device by the control unit 102 means switching the state of the target device over time among the different states corresponding to the plurality of pieces of guidance information. For example, when the pieces of guidance information are written or spoken in different languages, the control unit 102 sequentially switches the guidance information output to the display device 6 or the speaker. For example, when guiding a crowd of several nationalities, the information acquisition unit 101 acquires guidance information whose guidance method differs for each language, and the control unit 102 controls each speaker so that announcements spoken in the language corresponding to each piece of guidance information are output in a time-division manner.
- For example, a voice announcement guiding to Exit A is first output in Chinese, then an announcement guiding to Exit B is output in Korean, and then an announcement guiding to Exit C is output in Japanese.
- In this way, the crowd can be guided group by group.
- the information acquisition unit 101 may acquire guidance information corresponding to the state (number of people, etc.) of each nationality as the state of a plurality of people.
- For example, guidance control can be performed in which the crowd of a nationality with fewer people is guided first and the crowd of a nationality with more people is guided later.
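A hedged sketch of this time-division control: announcements for each language are output one per time slot, smaller national groups first. asyncio stands in for the real device control loop, and play() is a hypothetical speaker call, not an API from the specification.

```python
import asyncio

async def time_division_announcements(groups, play, slot_s=10):
    """groups: {language: (headcount, announcement)} for one set of speakers."""
    for lang in sorted(groups, key=lambda k: groups[k][0]):  # fewest people first
        await play(lang, groups[lang][1])  # output one piece of guidance per slot
        await asyncio.sleep(slot_s)

async def demo():
    async def play(lang, text):
        print(f"[{lang}] {text}")
    await time_division_announcements(
        {"zh": (120, "Please use Exit A"),
         "ko": (40, "Please use Exit B"),
         "ja": (300, "Please use Exit C")},
        play, slot_s=0)

asyncio.run(demo())
```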
- FIG. 19 is a flowchart illustrating an operation example of the guidance processing device 100 according to the fifth embodiment.
- the guidance method in the fifth embodiment is executed by at least one computer such as the guidance processing device 100.
- each illustrated process is executed by each processing unit included in the guidance processing apparatus 100.
- the guidance method in the fifth embodiment acquires a plurality of different pieces of guidance information based on the states of a plurality of persons in one or more images (S191), and executes control of a plurality of target devices existing in different spaces, or time-division control of a target device, so that the devices assume a plurality of different states corresponding to the plurality of pieces of guidance information (S192).
- (S191) corresponds to (S54) in FIG. 5, (S116) in FIG. 11, (S135) in FIG. 14, and the like, and (S192) corresponds to (S55) in FIG. 5, (S117) in FIG. 11, (S136) in FIG. 14, and the like.
- 1. A guidance processing apparatus comprising: information acquisition means for acquiring a plurality of different pieces of guidance information based on the states of a plurality of persons in one or more images; and control means for executing control of a plurality of target devices existing in different spaces, or time-division control of a target device, so as to bring about a plurality of different states corresponding to the plurality of pieces of guidance information.
- 2. The guidance processing apparatus according to 1., further comprising: state monitoring means for acquiring the change status of the states of the plurality of persons based on time-series images captured after execution of the control of the plurality of target devices or the time-division control of the target device; and information changing means for changing at least one of the plurality of pieces of guidance information based on the change status, wherein the control means changes the control of the target device corresponding to the changed at least one piece of guidance information.
- 3. The guidance processing apparatus according to 1. or 2., further comprising environment acquisition means for acquiring environmental status information indicating the status of the environment, wherein the information acquisition means acquires the plurality of pieces of guidance information by further using the environmental status information.
- 4. The guidance processing apparatus according to any one of 1. to 3., further comprising analysis means for analyzing a plurality of images in which different target areas are respectively captured and acquiring the states of the plurality of persons in each target area, wherein the information acquisition means generates, based on the states of the plurality of persons in each target area, each piece of guidance information corresponding to the positional relationship between the target areas, between the spaces corresponding to the plurality of target devices, or between each such space and each target area, and the control means brings each of the plurality of target devices into a state corresponding to each piece of guidance information.
- 5. The guidance processing apparatus according to any one of 1. to 4., further comprising analysis means for analyzing an image in which a first target area is captured to acquire the congestion degree of persons in the first target area, and analyzing an image in which a second target area leading to the first target area is captured to acquire the flow rate of persons in the second target area, wherein the information acquisition means includes prediction means for acquiring, based on the congestion degree of the first target area and the flow rate of the second target area acquired by the analysis means, the predicted congestion degree of persons in the first target area at an arbitrary time point, and acquires, as the guidance information for each space corresponding to a plurality of output devices, the predicted congestion degree of the first target area at the time when a person present in that space will reach the first target area, using each required time for a person to move from that space to the first target area and the predicted congestion degree, and wherein the control means causes the plurality of output devices to output the predicted congestion degree of the first target area acquired for the space corresponding to each output device.
- 6. The guidance processing apparatus according to 5., wherein the plurality of output devices are a plurality of portable terminals, the information acquisition means acquires position information and moving-speed information of the plurality of portable terminals, estimates, using the position information and the moving-speed information, each required time for each user holding each portable terminal to reach the first target area, and acquires, for each of the plurality of portable terminals, the predicted congestion degree of the first target area at the time when its user will reach the first target area, and the control means causes each of the plurality of portable terminals to display the corresponding predicted congestion degree of the first target area.
- 7. The guidance processing apparatus according to 5., wherein the information acquisition means increases, based on the flow rate of the second target area, the predicted congestion degree of the first target area acquired for each space corresponding to the plurality of output devices, and acquires the increased predicted congestion degree as the guidance information.
- 8. The guidance processing apparatus according to any one of 1. to 7., further comprising analysis means for analyzing an image in which a target area is captured to acquire the congestion degree of persons in the target area, and analyzing a plurality of images in which a plurality of peripheral areas of the target area are captured to acquire the congestion degree and the moving direction of persons in each peripheral area, wherein the information acquisition means includes determination means for determining, based on the congestion degrees and moving directions of the plurality of peripheral areas, the degree of influence of each of the plurality of peripheral areas on the congestion degree of the target area, and acquires the plurality of different pieces of guidance information based on the degrees of influence of the plurality of peripheral areas.
- 9. A guidance method executed by at least one computer, including: acquiring a plurality of different pieces of guidance information based on the states of a plurality of persons in one or more images; and executing control of a plurality of target devices existing in different spaces, or time-division control of a target device, so as to bring about a plurality of different states corresponding to the plurality of pieces of guidance information.
- 10. The guidance method according to 9., further including: acquiring the change status of the states of the plurality of persons based on time-series images captured after execution of the control of the plurality of target devices or the time-division control of the target device; changing at least one of the plurality of pieces of guidance information based on the change status; and changing the control of the target device corresponding to the changed at least one piece of guidance information.
- 11. The guidance method according to 9. or 10., further including acquiring environmental status information indicating the status of the environment, wherein the acquiring of the guidance information acquires the plurality of pieces of guidance information by further using the environmental status information.
- 12. The guidance method according to any one of 9. to 11., further including analyzing a plurality of images in which different target areas are respectively captured and acquiring the states of the plurality of persons in each target area, wherein the acquiring of the guidance information generates, based on the states of the plurality of persons in each target area, each piece of guidance information corresponding to the positional relationship between the target areas, between the spaces corresponding to the plurality of target devices, or between each such space and each target area, and the controlling of the target devices brings each of the plurality of target devices into a state corresponding to each piece of guidance information.
- 13. The guidance method according to any one of 9. to 12., further including: analyzing an image in which a first target area is captured to acquire the congestion degree of persons in the first target area; analyzing an image in which a second target area leading to the first target area is captured to acquire the flow rate of persons in the second target area; and acquiring, based on the congestion degree of the first target area and the flow rate of the second target area, the predicted congestion degree of persons in the first target area at an arbitrary time point, wherein the acquiring of the guidance information acquires, as the guidance information for each space corresponding to a plurality of output devices, the predicted congestion degree of the first target area at the time when a person present in that space will reach the first target area, using each required time for a person to move from that space to the first target area and the predicted congestion degree, and the controlling of the target devices causes the plurality of output devices to output the predicted congestion degree of the first target area acquired for the space corresponding to each output device.
- 14. The guidance method according to 13., wherein the plurality of output devices are a plurality of portable terminals, the guidance method further including: acquiring position information and moving-speed information of the plurality of portable terminals; and estimating, using the position information and the moving-speed information, each required time for each user holding each portable terminal to reach the first target area, wherein the acquiring of the guidance information acquires, for each of the plurality of portable terminals, the predicted congestion degree of the first target area at the time when its user will reach the first target area, and the controlling of the target devices causes each of the plurality of portable terminals to display the corresponding predicted congestion degree of the first target area.
- 15. The guidance method according to 13., wherein the acquiring of the guidance information increases, based on the flow rate of the second target area, the predicted congestion degree of the first target area acquired for each space corresponding to the plurality of output devices, and acquires the increased predicted congestion degree as the guidance information.
- 16. The guidance method according to any one of 9. to 15., further including: analyzing an image in which a target area is captured to acquire the congestion degree of persons in the target area; analyzing a plurality of images in which a plurality of peripheral areas of the target area are captured to acquire the congestion degree and the moving direction of persons in each peripheral area; and determining, based on the congestion degrees and moving directions of the plurality of peripheral areas, the degree of influence of each of the plurality of peripheral areas on the congestion degree of the target area, wherein the acquiring of the guidance information acquires the plurality of different pieces of guidance information based on the degrees of influence of the plurality of peripheral areas.
- 17. A guidance processing apparatus comprising: information acquisition means for generating guidance information based on the congestion status of persons in a plurality of images in which a plurality of monitoring locations are captured, the monitoring locations, and a plurality of locations where target devices are provided; and control means for controlling the target devices at the plurality of locations according to the guidance information.
- 18. A guidance method executed by at least one computer, including: generating guidance information based on the congestion status of persons in a plurality of images in which a plurality of monitoring locations are captured, the monitoring locations, and a plurality of locations where target devices are provided; and controlling the target devices at the plurality of locations according to the guidance information.
- 19. A program causing at least one computer to execute the guidance method according to any one of 9. to 16. and 18.
- 20. A computer-readable recording medium on which the program according to 19. is recorded.
Description
Hereinafter, the guidance system and the guidance method in the first embodiment will be described with reference to a plurality of drawings.

FIG. 1 is a diagram conceptually illustrating the system configuration of the guidance system 1 in the first embodiment. As shown in FIG. 1, the guidance system 1 includes a guidance processing device 10, a plurality of monitoring cameras 5, a plurality of display devices 6, and the like. The target devices in the first embodiment are the display devices 6.

FIG. 3 is a diagram conceptually illustrating a processing configuration example of the guidance processing device 10 in the first embodiment. As shown in FIG. 3, the guidance processing device 10 includes an image acquisition unit 11, an analysis unit 12, a storage unit 13, an information acquisition unit 14, a control unit 15, and the like. Each of these processing units is realized, for example, by the CPU 2 executing a program stored in the memory 3. The program may be installed from a portable recording medium such as a CD (Compact Disc) or a memory card, or from another computer on a network, via the communication unit 4 or an input/output I/F (not shown), and stored in the memory 3.

In the first embodiment, since the guidance information is displayed by the display devices 6, it is, for example, information indicating a guidance destination, information encouraging people to stay where they are, or information indicating the congestion status. Because the presentation of the congestion status makes people want to refrain from going to highly congested places, information indicating the congestion status can serve as guidance information. The content of the guidance information is not limited as long as it can make people move or stay as the guidance system 1 intends. For example, information that can make people stay includes information that arouses interest and makes people want to remain where they are, such as music, video, and information on stores holding sales. A time-limited discount coupon usable at a specific store can also be an example of guidance information that can keep people at that store. The plurality of pieces of guidance information generated by the information acquisition unit 14 desirably include pieces of guidance information with different content.

Hereinafter, the guidance method in the first embodiment will be described with reference to FIG. 5. FIG. 5 is a flowchart illustrating an operation example of the guidance processing device 10 in the first embodiment. As shown in FIG. 5, the guidance method in the first embodiment is executed by at least one computer such as the guidance processing device 10. For example, each illustrated step is executed by each processing unit of the guidance processing device 10. Each step is the same as the processing of the corresponding processing unit described above, so details of each step are omitted as appropriate.

As described above, in the first embodiment, a plurality of monitoring images in which the target areas (monitoring places) are respectively imaged by the monitoring cameras 5 are acquired, and the state of the crowd in each target area is acquired by analyzing each monitoring image. Then, based on the state of the crowd in each target area, each piece of guidance information corresponding to the positional relationship between the target areas, between the display spaces of the display devices 6, or between each display space and each target area is generated, and each piece of guidance information is displayed by the corresponding display device 6.

The guidance system 1 in Example 1 appropriately guides the users of ticket vending machines.

The guidance system 1 in Example 2 appropriately guides a crowd leaving an event venue (a soccer stadium in the example of FIG. 7).

The guidance system 1 in Example 3 appropriately guides a crowd waiting for trains on a station platform.

The guidance system 1 in Example 4 appropriately guides a crowd (passengers) riding on a train.

Hereinafter, the guidance system and the guidance method in the second embodiment will be described with reference to a plurality of drawings. The second embodiment is described below focusing on the content that differs from the first embodiment, and content similar to the first embodiment is omitted as appropriate. The content described below may be added to, or substituted for, the content of the first embodiment.

FIG. 10 is a diagram conceptually illustrating a processing configuration example of the guidance processing device 10 in the second embodiment. As shown in FIG. 10, the guidance processing device 10 further includes a prediction unit 17 in addition to the configuration of the first embodiment. The prediction unit 17 is also realized in the same manner as the other processing units. In the example of FIG. 10, the prediction unit 17 is illustrated as a part of the information acquisition unit 14, but it may be realized as a processing unit separate from the information acquisition unit 14.

Hereinafter, the guidance method in the second embodiment will be described with reference to FIG. 11. FIG. 11 is a flowchart illustrating an operation example of the guidance processing device 10 in the second embodiment. The execution subject of the guidance method in the second embodiment is the same as in the first embodiment. Each step is the same as the processing of the corresponding processing unit of the guidance processing device 10 described above, so details of each step are omitted as appropriate.

Further, the guidance processing device 10 acquires each required time for a person to move from the display area of each display device 6 to the target area (S115).

As described above, in the second embodiment, the predicted congestion degree of the target area at an arbitrary time point is acquired from the history of the congestion degree of the target area acquired from the monitoring images and the history of the flow rate of the midway area acquired from the monitoring images. Then, based on that predicted congestion degree and each required time for a person to move from each display area to the target area, the predicted congestion degree of the target area for each display space is acquired. Each display device 6 displays the predicted congestion degree of the target area acquired for its display space.

The control unit 15 causes each portable terminal to display the corresponding predicted congestion degree of the target area.

Hereinafter, the guidance system and the guidance method in the third embodiment will be described with reference to a plurality of drawings. The third embodiment is described below focusing on content that differs from the above, and content similar to the above is omitted as appropriate. The content described below may be added to, or substituted for, the content described above.

FIG. 13 is a diagram conceptually illustrating a processing configuration example of the guidance processing device 10 in the third embodiment. As shown in FIG. 13, the guidance processing device 10 further includes a determination unit 18 in addition to the configuration of the first embodiment. The determination unit 18 is also realized in the same manner as the other processing units. In the example of FIG. 13, the determination unit 18 is illustrated as a part of the information acquisition unit 14, but it may be realized as a processing unit separate from the information acquisition unit 14. FIG. 13 illustrates a configuration in which the determination unit 18 is added to the processing configuration of the first embodiment, but the determination unit 18 may instead be added to the processing configuration of the second embodiment.

The determination unit 18 sequentially updates the influence degree of each peripheral area with respect to each central area. Alternatively, the determination unit 18 may update the influence degree at a predetermined interval, or may determine the influence degree only once and not update it thereafter.

Hereinafter, the guidance method in the third embodiment will be described with reference to FIG. 14. FIG. 14 is a flowchart illustrating an operation example of the guidance processing device 10 in the third embodiment. The execution subject of the guidance method in the third embodiment is the same as in the first embodiment. Each step is the same as the processing of the corresponding processing unit of the guidance processing device 10 described above, so details of each step are omitted as appropriate.

The guidance processing device 10 acquires the monitoring images captured by the monitoring cameras 5 (S131).

As described above, in the third embodiment, based on the congestion degree and moving direction of each monitoring place obtained by analyzing the monitoring images, the influence degree of each monitoring place serving as a peripheral area is determined for each monitoring place serving as a central area. Based on the influence degree on the central area determined for each peripheral area, guidance information for each peripheral area is generated, and each display device 6 whose display space includes that peripheral area displays the guidance information.

Hereinafter, the guidance system and the guidance method in the fourth embodiment will be described with reference to a plurality of drawings. The fourth embodiment is described below focusing on content that differs from the above, and content similar to the above is omitted as appropriate. The content described below may be added to, or substituted for, the content described above.

FIG. 15 is a diagram conceptually illustrating a processing configuration example of the guidance processing device 10 in the fourth embodiment. As shown in FIG. 15, the guidance processing device 10 further includes a state monitoring unit 21 and an information changing unit 22 in addition to the configuration of the first embodiment. The state monitoring unit 21 and the information changing unit 22 are also realized in the same manner as the other processing units. FIG. 15 illustrates a configuration in which the state monitoring unit 21 and the information changing unit 22 are added to the processing configuration of the first embodiment, but they may instead be added to the processing configuration of the second or third embodiment.

Hereinafter, the guidance method in the fourth embodiment will be described with reference to FIG. 16. FIG. 16 is a flowchart illustrating an operation example of the guidance processing device 10 in the fourth embodiment regarding the change of guidance information. The execution subject of the guidance method in the fourth embodiment is the same as in the first embodiment. Each step is the same as the processing of the corresponding processing unit of the guidance processing device 10 described above, so details of each step are omitted as appropriate.

The guidance processing device 10 changes the displayed guidance information based on the change status acquired in (S163) (S164). The manner of changing the guidance information is as described above.

In the fourth embodiment, the change status of the crowd state in the control target area is acquired based on the time-series monitoring images after the guidance information is displayed, and the guidance information is changed according to that change status. Thus, according to the fourth embodiment, the result of guiding the crowd by presenting the guidance information is judged from the change status of the crowd state in the control target area, and the guidance information is adjusted as appropriate so that the crowd state becomes the desired state. Accordingly, the crowd can be guided efficiently so that its state becomes the desired state.

In each of the above-described embodiments, the guidance processing device 10 (information acquisition unit 14) may acquire the guidance information by further using environmental status information indicating the status of the environment.

Hereinafter, the guidance processing device and the guidance method in the fifth embodiment will be described with reference to FIGS. 18 and 19.
Claims (18)
- 1. A guidance processing apparatus comprising: information acquisition means for acquiring a plurality of different pieces of guidance information based on the states of a plurality of persons in one or more images; and control means for executing control of a plurality of target devices existing in different spaces, or time-division control of a target device, so as to bring about a plurality of different states corresponding to the plurality of pieces of guidance information.
- 2. The guidance processing apparatus according to claim 1, further comprising: state monitoring means for acquiring the change status of the states of the plurality of persons based on time-series images captured after execution of the control of the plurality of target devices or the time-division control of the target device; and information changing means for changing at least one of the plurality of pieces of guidance information based on the change status, wherein the control means changes the control of the target device corresponding to the changed at least one piece of guidance information.
- 3. The guidance processing apparatus according to claim 1 or 2, further comprising environment acquisition means for acquiring environmental status information indicating the status of the environment, wherein the information acquisition means acquires the plurality of pieces of guidance information by further using the environmental status information.
- 4. The guidance processing apparatus according to any one of claims 1 to 3, further comprising analysis means for analyzing a plurality of images in which different target areas are respectively captured and acquiring the states of the plurality of persons in each target area, wherein the information acquisition means generates, based on the states of the plurality of persons in each target area, each piece of guidance information corresponding to the positional relationship between the target areas, between the spaces corresponding to the plurality of target devices, or between each such space and each target area, and the control means brings each of the plurality of target devices into a state corresponding to each piece of guidance information.
- 5. The guidance processing apparatus according to any one of claims 1 to 4, further comprising analysis means for analyzing an image in which a first target area is captured to acquire the congestion degree of persons in the first target area, and analyzing an image in which a second target area leading to the first target area is captured to acquire the flow rate of persons in the second target area, wherein the information acquisition means includes prediction means for acquiring, based on the congestion degree of the first target area and the flow rate of the second target area acquired by the analysis means, the predicted congestion degree of persons in the first target area at an arbitrary time point, and acquires, as the guidance information for each space corresponding to a plurality of output devices, the predicted congestion degree of the first target area at the time when a person present in that space will reach the first target area, using each required time for a person to move from that space to the first target area and the predicted congestion degree, and wherein the control means causes the plurality of output devices to output the predicted congestion degree of the first target area acquired for the space corresponding to each output device.
- 6. The guidance processing apparatus according to claim 5, wherein the plurality of output devices are a plurality of portable terminals, the information acquisition means acquires position information and moving-speed information of the plurality of portable terminals, estimates, using the position information and the moving-speed information, each required time for each user holding each portable terminal to reach the first target area, and acquires, for each of the plurality of portable terminals, the predicted congestion degree of the first target area at the time when its user will reach the first target area, and the control means causes each of the plurality of portable terminals to display the corresponding predicted congestion degree of the first target area.
- 7. The guidance processing apparatus according to claim 5, wherein the information acquisition means increases, based on the flow rate of the second target area, the predicted congestion degree of the first target area acquired for each space corresponding to the plurality of output devices, and acquires the increased predicted congestion degree as the guidance information.
- 8. The guidance processing apparatus according to any one of claims 1 to 7, further comprising analysis means for analyzing an image in which a target area is captured to acquire the congestion degree of persons in the target area, and analyzing a plurality of images in which a plurality of peripheral areas of the target area are captured to acquire the congestion degree and the moving direction of persons in each peripheral area, wherein the information acquisition means includes determination means for determining, based on the congestion degrees and moving directions of the plurality of peripheral areas, the degree of influence of each of the plurality of peripheral areas on the congestion degree of the target area, and acquires the plurality of different pieces of guidance information based on the degrees of influence of the plurality of peripheral areas.
- 9. A guidance method executed by at least one computer, comprising: acquiring a plurality of different pieces of guidance information based on the states of a plurality of persons in one or more images; and executing control of a plurality of target devices existing in different spaces, or time-division control of a target device, so as to bring about a plurality of different states corresponding to the plurality of pieces of guidance information.
- 10. The guidance method according to claim 9, further comprising: acquiring the change status of the states of the plurality of persons based on time-series images captured after execution of the control of the plurality of target devices or the time-division control of the target device; changing at least one of the plurality of pieces of guidance information based on the change status; and changing the control of the target device corresponding to the changed at least one piece of guidance information.
- 11. The guidance method according to claim 9 or 10, further comprising acquiring environmental status information indicating the status of the environment, wherein the acquiring of the guidance information acquires the plurality of pieces of guidance information by further using the environmental status information.
- 12. The guidance method according to any one of claims 9 to 11, further comprising analyzing a plurality of images in which different target areas are respectively captured and acquiring the states of the plurality of persons in each target area, wherein the acquiring of the guidance information generates, based on the states of the plurality of persons in each target area, each piece of guidance information corresponding to the positional relationship between the target areas, between the spaces corresponding to the plurality of target devices, or between each such space and each target area, and the controlling of the target devices brings each of the plurality of target devices into a state corresponding to each piece of guidance information.
- 13. The guidance method according to any one of claims 9 to 12, further comprising: analyzing an image in which a first target area is captured to acquire the congestion degree of persons in the first target area; analyzing an image in which a second target area leading to the first target area is captured to acquire the flow rate of persons in the second target area; and acquiring, based on the congestion degree of the first target area and the flow rate of the second target area, the predicted congestion degree of persons in the first target area at an arbitrary time point, wherein the acquiring of the guidance information acquires, as the guidance information for each space corresponding to a plurality of output devices, the predicted congestion degree of the first target area at the time when a person present in that space will reach the first target area, using each required time for a person to move from that space to the first target area and the predicted congestion degree, and the controlling of the target devices causes the plurality of output devices to output the predicted congestion degree of the first target area acquired for the space corresponding to each output device.
- 14. The guidance method according to claim 13, wherein the plurality of output devices are a plurality of portable terminals, the guidance method further comprising: acquiring position information and moving-speed information of the plurality of portable terminals; and estimating, using the position information and the moving-speed information, each required time for each user holding each portable terminal to reach the first target area, wherein the acquiring of the guidance information acquires, for each of the plurality of portable terminals, the predicted congestion degree of the first target area at the time when its user will reach the first target area, and the controlling of the target devices causes each of the plurality of portable terminals to display the corresponding predicted congestion degree of the first target area.
- 15. The guidance method according to claim 13, wherein the acquiring of the guidance information increases, based on the flow rate of the second target area, the predicted congestion degree of the first target area acquired for each space corresponding to the plurality of output devices, and acquires the increased predicted congestion degree as the guidance information.
- 16. The guidance method according to any one of claims 9 to 15, further comprising: analyzing an image in which a target area is captured to acquire the congestion degree of persons in the target area; analyzing a plurality of images in which a plurality of peripheral areas of the target area are captured to acquire the congestion degree and the moving direction of persons in each peripheral area; and determining, based on the congestion degrees and moving directions of the plurality of peripheral areas, the degree of influence of each of the plurality of peripheral areas on the congestion degree of the target area, wherein the acquiring of the guidance information acquires the plurality of different pieces of guidance information based on the degrees of influence of the plurality of peripheral areas.
- 17. A program causing at least one computer to execute the guidance method according to any one of claims 9 to 16.
- 18. A guidance processing apparatus comprising: information acquisition means for generating guidance information based on the congestion status of persons in a plurality of images in which a plurality of monitoring locations are captured, the monitoring locations, and a plurality of locations where target devices are provided; and control means for controlling the target devices at the plurality of locations according to the guidance information.
Priority Applications (11)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US15/323,307 US11138443B2 (en) | 2014-06-30 | 2015-05-28 | Guidance processing apparatus and guidance method |
JP2016531199A JP6708122B2 (ja) | 2014-06-30 | 2015-05-28 | 誘導処理装置及び誘導方法 |
CN201580036257.5A CN106664391B (zh) | 2014-06-30 | 2015-05-28 | 引导处理装置和引导方法 |
US16/297,414 US11423658B2 (en) | 2014-06-30 | 2019-03-08 | Guidance processing apparatus and guidance method |
US16/297,450 US10878252B2 (en) | 2014-06-30 | 2019-03-08 | Guidance processing apparatus and guidance method |
US16/297,436 US20190205661A1 (en) | 2014-06-30 | 2019-03-08 | Guidance processing apparatus and guidance method |
US17/375,146 US12073627B2 (en) | 2014-06-30 | 2021-07-14 | Guidance processing apparatus and guidance method |
US17/863,921 US12073628B2 (en) | 2014-06-30 | 2022-07-13 | Guidance processing apparatus and guidance method |
US18/238,932 US20230410521A1 (en) | 2014-06-30 | 2023-08-28 | Guidance processing apparatus and guidance method |
US18/238,901 US20230410520A1 (en) | 2014-06-30 | 2023-08-28 | Guidance processing apparatus and guidance method |
US18/238,818 US20230401869A1 (en) | 2014-06-30 | 2023-08-28 | Guidance processing apparatus and guidance method |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2014134664 | 2014-06-30 | ||
JP2014-134664 | 2014-06-30 |
Related Child Applications (5)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US15/323,307 A-371-Of-International US11138443B2 (en) | 2014-06-30 | 2015-05-28 | Guidance processing apparatus and guidance method |
US16/297,436 Continuation US20190205661A1 (en) | 2014-06-30 | 2019-03-08 | Guidance processing apparatus and guidance method |
US16/297,450 Continuation US10878252B2 (en) | 2014-06-30 | 2019-03-08 | Guidance processing apparatus and guidance method |
US16/297,414 Continuation US11423658B2 (en) | 2014-06-30 | 2019-03-08 | Guidance processing apparatus and guidance method |
US17/375,146 Continuation US12073627B2 (en) | 2014-06-30 | 2021-07-14 | Guidance processing apparatus and guidance method |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2016002400A1 true WO2016002400A1 (ja) | 2016-01-07 |
Family
ID=55018953
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2015/065405 WO2016002400A1 (ja) | 2014-06-30 | 2015-05-28 | Guidance processing apparatus and guidance method |
Country Status (4)
Country | Link |
---|---|
US (9) | US11138443B2 (ja) |
JP (6) | JP6708122B2 (ja) |
CN (2) | CN110460821A (ja) |
WO (1) | WO2016002400A1 (ja) |
Cited By (14)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2017159060A1 (ja) * | 2016-03-18 | 2017-09-21 | 日本電気株式会社 | Information processing device, control method, and program |
JPWO2017141454A1 (ja) * | 2016-05-13 | 2018-02-22 | 株式会社日立製作所 | Congestion status visualization device, congestion status visualization system, congestion status visualization method, and congestion status visualization program |
JP2018042049A (ja) * | 2016-09-06 | 2018-03-15 | パナソニックIpマネジメント株式会社 | Congestion detection device, congestion detection system, and congestion detection method |
JP2018074299A (ja) * | 2016-10-26 | 2018-05-10 | 日本電信電話株式会社 | Flow state measurement device, method, and program |
CN108154110A (zh) * | 2017-12-22 | 2018-06-12 | 任俊芬 | Dense pedestrian flow counting method based on deep-learning head detection |
JPWO2017168585A1 (ja) * | 2016-03-29 | 2018-09-20 | 三菱電機株式会社 | Train operation control system and train operation control method |
JP2018195292A (ja) * | 2017-05-12 | 2018-12-06 | キヤノン株式会社 | Information processing device, information processing system, information processing method, and program |
JP2019021019A (ja) * | 2017-07-18 | 2019-02-07 | パナソニック株式会社 | People flow analysis method, people flow analysis device, and people flow analysis system |
WO2019087595A1 (ja) * | 2017-11-06 | 2019-05-09 | 本田技研工業株式会社 | Mobile body distribution situation forecast device and mobile body distribution situation forecast method |
JP2019171887A (ja) * | 2018-03-26 | 2019-10-10 | 株式会社エヌ・ティ・ティ・データ | Passenger weight equalization support device and passenger weight equalization support method |
JP2019528391A (ja) * | 2016-11-08 | 2019-10-10 | 中国鉱業大学 | Method for preventing stampedes in pedestrian passages in the confined spaces of a subway |
JP2019215906A (ja) * | 2014-06-30 | 2019-12-19 | 日本電気株式会社 | Guidance processing apparatus and guidance method |
JP2021144765A (ja) * | 2017-10-11 | 2021-09-24 | 東芝テック株式会社 | Mobile terminal and program |
JP6983359B1 (ja) * | 2020-08-27 | 2021-12-17 | 三菱電機株式会社 | Guidance device, guidance program, guidance method, and management system |
Families Citing this family (29)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108647631B (zh) | 2013-06-28 | 2023-04-07 | 日本电气株式会社 | Crowd state recognition device, method, and computer-readable recording medium |
US10733853B2 (en) * | 2015-04-20 | 2020-08-04 | Nec Corporation | Crowd guiding device, crowd guiding system, crowd guiding method, and storage medium |
WO2017038978A1 (ja) * | 2015-09-04 | 2017-03-09 | 株式会社イッツ・エムエムシー | Route selection support device, route selection support method, and computer program |
WO2017154655A1 (ja) | 2016-03-07 | 2017-09-14 | 日本電気株式会社 | Crowd type identification system, crowd type identification method, and storage medium storing a crowd type identification program |
US20190230320A1 (en) * | 2016-07-14 | 2019-07-25 | Mitsubishi Electric Corporation | Crowd monitoring device and crowd monitoring system |
US11327467B2 (en) * | 2016-11-29 | 2022-05-10 | Sony Corporation | Information processing device and information processing method |
CN107195065B (zh) * | 2017-07-19 | 2019-08-27 | 英华达(上海)科技有限公司 | Fence system and fence setting method |
JP6965735B2 (ja) * | 2017-12-26 | 2021-11-10 | トヨタ自動車株式会社 | Information processing device, in-vehicle device, and information processing method |
SG10201802673VA (en) * | 2018-03-29 | 2019-10-30 | Nec Asia Pacific Pte Ltd | Method and system for integration and automatic switching of crowd estimation techniques |
US10890460B2 (en) * | 2018-10-19 | 2021-01-12 | International Business Machines Corporation | Navigation and location validation for optimizing vehicle-based transit systems |
JP7286302B2 (ja) * | 2018-11-15 | 2023-06-05 | 清水建設株式会社 | Queue management device, queue management system, queue management method, and program |
CN109640249B (zh) * | 2018-11-27 | 2020-08-11 | 佛山科学技术学院 | Shopping mall people flow prediction system based on big data |
CN109597409A (zh) * | 2018-11-27 | 2019-04-09 | 辽宁工程技术大学 | Golf caddie robot control method and system |
CN110033612B (zh) * | 2019-05-21 | 2021-05-04 | 上海木木聚枞机器人科技有限公司 | Robot-based pedestrian reminding method and system, and robot |
KR102231922B1 (ko) * | 2019-07-30 | 2021-03-25 | 엘지전자 주식회사 | Artificial intelligence server for controlling a plurality of robots using artificial intelligence |
DE102019123523A1 (de) * | 2019-09-03 | 2021-03-04 | Innogy Se | Method and computer program product for determining movement flows of persons |
JP2021120807A (ja) * | 2020-01-30 | 2021-08-19 | ソニーグループ株式会社 | Information processing device and information processing method |
JP7359293B2 (ja) * | 2020-03-30 | 2023-10-11 | 日本電気株式会社 | Seat guidance device, system, method, and program |
CN112199988A (zh) * | 2020-08-26 | 2021-01-08 | 北京贝思科技术有限公司 | Cross-region algorithm combination configuration strategy method, image processing device, and electronic equipment |
CN112269930B (zh) * | 2020-10-26 | 2023-10-24 | 北京百度网讯科技有限公司 | Method and device for establishing a regional popularity prediction model and for regional popularity prediction |
JP7552736B2 (ja) | 2021-01-26 | 2024-09-18 | 日本電気株式会社 | Server device and method for controlling a server device |
JP7521619B2 (ja) * | 2021-02-04 | 2024-07-24 | 日本電気株式会社 | Information processing device, information processing method, and program |
WO2022176195A1 (ja) * | 2021-02-22 | 2022-08-25 | 三菱電機株式会社 | Information providing system |
CN113205631A (zh) * | 2021-03-19 | 2021-08-03 | 武汉特斯联智能工程有限公司 | Face-recognition-based community access control method and system |
KR20220167090A (ko) * | 2021-06-11 | 2022-12-20 | 현대자동차주식회사 | Method and device for controlling a personal mobility device according to the degree of congestion |
FR3128531B1 (fr) * | 2021-10-27 | 2023-10-27 | Genetec Inc | Monitoring of flow units |
JP7483164B2 (ja) | 2022-04-28 | 2024-05-14 | 三菱電機株式会社 | Mobile providing device and information providing system |
CN115842848B (zh) * | 2023-03-01 | 2023-04-28 | 成都远峰科技发展有限公司 | Dynamic monitoring system based on the industrial Internet of Things and control method therefor |
CN116975466A (zh) * | 2023-07-26 | 2023-10-31 | 中国建筑西南勘察设计研究院有限公司 | Smart park safety guidance system and method |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH0475199A (ja) * | 1990-07-17 | 1992-03-10 | Shimizu Corp | Crowd walking simulation system |
JPH06240901A (ja) * | 1993-02-16 | 1994-08-30 | Hitachi Ltd | Method for guiding vehicles entering and leaving a parking lot |
JPH08202849A (ja) * | 1995-01-25 | 1996-08-09 | Murata Mfg Co Ltd | Detection processing device for stationary and moving objects |
JP2004086762A (ja) * | 2002-08-28 | 2004-03-18 | Nec Corp | Customer guidance and customer distribution system, and customer guidance and customer distribution method |
JP2010287251A (ja) * | 2003-08-07 | 2010-12-24 | National Institute Of Advanced Industrial Science & Technology | Congestion prediction program, computer-readable recording medium recording the congestion prediction program, congestion prediction device, navigation program, computer-readable recording medium recording the navigation program, and navigation device |
JP2014049086A (ja) * | 2012-09-04 | 2014-03-17 | Toshiba Tec Corp | Information processing device and program |
Family Cites Families (114)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5218344A (en) | 1991-07-31 | 1993-06-08 | Ricketts James G | Method and system for monitoring personnel |
US5471372A (en) | 1993-12-06 | 1995-11-28 | Ardco, Inc. | Lighting system for commercial refrigerator doors |
JPH08123374A (ja) * | 1994-10-26 | 1996-05-17 | Toshiba Corp | Waiting time guidance device |
JPH09138241A (ja) | 1995-11-16 | 1997-05-27 | Matsushita Electric Ind Co Ltd | Passage monitoring device and occupancy monitoring device |
US6548967B1 (en) | 1997-08-26 | 2003-04-15 | Color Kinetics, Inc. | Universal lighting network methods and systems |
DE19930796A1 (de) * | 1999-07-03 | 2001-01-11 | Bosch Gmbh Robert | Method and device for transmitting navigation information from a data center to a vehicle-based navigation system |
US7801629B2 (en) * | 1999-08-10 | 2010-09-21 | Disney Enterprises, Inc. | Management of the flow of passengers, baggage and cargo in relation to travel facilities |
JP4631194B2 (ja) | 2001-03-30 | 2011-02-16 | 三菱電機株式会社 | Evacuation guidance system |
US20020168084A1 (en) * | 2001-05-14 | 2002-11-14 | Koninklijke Philips Electronics N.V. | Method and apparatus for assisting visitors in navigating retail and exhibition-like events using image-based crowd analysis |
JP3895155B2 (ja) * | 2001-11-15 | 2007-03-22 | アルパイン株式会社 | Navigation device |
JP2003259337A (ja) | 2002-02-26 | 2003-09-12 | Toshiba Lighting & Technology Corp | Surveillance camera system |
US20040001616A1 (en) | 2002-06-27 | 2004-01-01 | Srinivas Gutta | Measurement of content ratings through vision and speech recognition |
JP2005106769A (ja) | 2003-10-02 | 2005-04-21 | Namco Ltd | Indoor guidance system and indoor guidance method |
JP2005189921A (ja) | 2003-12-24 | 2005-07-14 | Nec Software Chubu Ltd | Waiting time display system and waiting time display method |
US20100033572A1 (en) | 2004-03-24 | 2010-02-11 | Richard Steven Trela | Ticket-holder security checkpoint system for deterring terrorist attacks |
JP4402505B2 (ja) * | 2004-04-27 | 2010-01-20 | グローリー株式会社 | Waiting time guidance system and method |
DE102004040057A1 (de) | 2004-08-18 | 2006-03-09 | Rauch, Jürgen, Dr.-Ing. | Traffic guidance system |
JP2006127322A (ja) | 2004-10-29 | 2006-05-18 | Mitsubishi Heavy Ind Ltd | Customer movement control system and customer movement control method |
JP2006171204A (ja) | 2004-12-14 | 2006-06-29 | Ito Kogaku Kogyo Kk | Method for manufacturing an optical element |
JP4625326B2 (ja) | 2004-12-28 | 2011-02-02 | 富士通株式会社 | Facility use information processing device and information processing method therefor |
JP4808409B2 (ja) | 2005-01-14 | 2011-11-02 | 株式会社日立製作所 | Sensor network system, sensor data search method, and program |
US7574822B1 (en) | 2005-03-14 | 2009-08-18 | Moore Harold A | Illuminated label holders and related merchandise display systems |
WO2007007470A1 (ja) * | 2005-07-12 | 2007-01-18 | Pioneer Corporation | Theme park management device, theme park management method, theme park management program, and recording medium |
JP2007034585A (ja) | 2005-07-26 | 2007-02-08 | Victor Co Of Japan Ltd | Video surveillance system |
CA2624657A1 (en) * | 2005-10-05 | 2007-04-19 | Redxdefense, Llc | Visitor control and tracking system |
WO2007062044A2 (en) | 2005-11-23 | 2007-05-31 | Object Video, Inc | Object density estimation in video |
KR20090006139A (ko) | 2006-03-31 | 2009-01-14 | 코닌클리케 필립스 일렉트로닉스 엔.브이. | Combined video and audio based ambient lighting control |
JP4845580B2 (ja) * | 2006-04-26 | 2011-12-28 | 三菱電機株式会社 | Train congestion degree notification system |
JP2007317052A (ja) | 2006-05-29 | 2007-12-06 | Japan Airlines International Co Ltd | System for measuring queue waiting time |
US8248214B2 (en) | 2006-07-12 | 2012-08-21 | Wal-Mart Stores, Inc. | Adjustable lighting for displaying products |
FI118963B (fi) * | 2006-10-12 | 2008-05-30 | Kone Corp | Guidance system |
US9165372B2 (en) | 2006-10-16 | 2015-10-20 | Bae Systems Plc | Improvements relating to event detection |
JP4830816B2 (ja) | 2006-11-28 | 2011-12-07 | 富士通株式会社 | Guide robot control method, system, control device, guide robot, and guide robot control program |
JP2008132566A (ja) | 2006-11-28 | 2008-06-12 | Kyocera Corp | Cutting tool |
JP2008171204A (ja) | 2007-01-11 | 2008-07-24 | Matsushita Electric Works Ltd | Guide light device |
JP4131742B1 (ja) * | 2007-01-26 | 2008-08-13 | トヨタ自動車株式会社 | Vehicle information providing device, information providing center, and information providing system |
EP2151668A4 (en) | 2007-05-23 | 2013-04-24 | Navitime Japan Co Ltd | NAVIGATION SYSTEM, ROUTE EXTRACTION SERVER AND MOBILE TERMINAL DEVICE AND ROUTE GUIDING METHOD |
DE102007033391A1 (de) | 2007-07-18 | 2009-01-22 | Robert Bosch Gmbh | Information device, method for informing and/or navigating a person, and computer program |
US8428918B2 (en) | 2007-09-19 | 2013-04-23 | Utc Fire & Security Corporation | System and method for occupancy estimation |
US9109896B2 (en) | 2007-09-20 | 2015-08-18 | Utc Fire & Security Corporation | Model-based egress support system |
JP4858400B2 (ja) * | 2007-10-17 | 2012-01-18 | ソニー株式会社 | Information providing system, information providing device, and information providing method |
US8472715B2 (en) | 2007-10-26 | 2013-06-25 | Panasonic Corporation | Situation determining apparatus, situation determining method, situation determining program, abnormality determining apparatus, abnormality determining method, abnormality determining program, and congestion estimating apparatus |
EP2220611A4 (en) | 2007-11-05 | 2014-01-15 | Sloan Valve Co | Restroom convenience center |
JP5580211B2 (ja) | 2008-01-16 | 2014-08-27 | コーニンクレッカ フィリップス エヌ ヴェ | System and method for automatically adjusting a lighting atmosphere based on presence detection |
JP5485913B2 (ja) | 2008-01-16 | 2014-05-07 | コーニンクレッカ フィリップス エヌ ヴェ | System and method for automatically creating an atmosphere suited to the mood and social setting in an environment |
JP2009193412A (ja) | 2008-02-15 | 2009-08-27 | Tokyo Univ Of Science | Dynamic service counter setting device and dynamic service counter setting method |
US20120235579A1 (en) | 2008-04-14 | 2012-09-20 | Digital Lumens, Incorporated | Methods, apparatus and systems for providing occupancy-based variable lighting |
US20090316326A1 (en) | 2008-06-20 | 2009-12-24 | Chiles Bryan D | Systems And Methods For Demotivating Using A Drape |
US8002441B2 (en) | 2008-10-08 | 2011-08-23 | Southern Imperial, Inc. | Adjustable arm gondola lighting system |
GB0820606D0 (en) | 2008-11-11 | 2008-12-17 | Patterson Kieran | Route guidance and evacuation system |
CN101445122B (zh) | 2008-12-25 | 2011-01-19 | 北京交通大学 | Urban rail transit passenger flow monitoring and emergency evacuation system |
CN101795395B (zh) | 2009-02-04 | 2012-07-11 | 深圳市先进智能技术研究所 | Crowd situation monitoring system and method |
US20140085107A1 (en) * | 2009-03-26 | 2014-03-27 | B&C Electronic Engineering, Inc. | Emergency and traffic alert system |
JP5312227B2 (ja) | 2009-06-29 | 2013-10-09 | 株式会社日本マイクロニクス | Probe card and inspection device |
US8812344B1 (en) * | 2009-06-29 | 2014-08-19 | Videomining Corporation | Method and system for determining the impact of crowding on retail performance |
US8655651B2 (en) | 2009-07-24 | 2014-02-18 | Telefonaktiebolaget L M Ericsson (Publ) | Method, computer, computer program and computer program product for speech quality estimation |
GB2474007A (en) * | 2009-08-27 | 2011-04-06 | Simon R Daniel | Communication in and monitoring of a disaster area, optionally including a disaster medical pack |
TWI482123B (zh) * | 2009-11-18 | 2015-04-21 | Ind Tech Res Inst | 多狀態目標物追蹤方法及系統 |
US8401515B2 (en) | 2010-01-22 | 2013-03-19 | Qualcomm Incorporated | Method and apparatus for dynamic routing |
US8375034B2 (en) | 2010-01-27 | 2013-02-12 | Google Inc. | Automatically schedule and re-schedule meetings using reschedule factors for conflicting calendar events |
US20110184769A1 (en) * | 2010-01-27 | 2011-07-28 | Janet Lynn Tibberts | System and method for planning, scheduling and managing activities |
CA2731535A1 (en) * | 2010-02-12 | 2011-08-12 | Qmb Investments Inc. | A traffic management system |
US8401772B2 (en) | 2010-03-12 | 2013-03-19 | Richard David Speiser | Automated routing to reduce congestion |
US20130041941A1 (en) | 2010-04-09 | 2013-02-14 | Carnegie Mellon University | Crowd-Sourcing of Information for Shared Transportation Vehicles |
US9002924B2 (en) | 2010-06-17 | 2015-04-07 | Microsoft Technology Licensing, Llc | Contextual based information aggregation system |
US20120116789A1 (en) * | 2010-11-09 | 2012-05-10 | International Business Machines Corporation | Optimizing queue loading through variable admittance fees |
US10522518B2 (en) | 2010-12-23 | 2019-12-31 | Bench Walk Lighting, LLC | Light source with tunable CRI |
JP5608578B2 (ja) | 2011-01-31 | 2014-10-15 | 株式会社日立製作所 | Integrated elevator group management system |
JP5803187B2 (ja) * | 2011-03-23 | 2015-11-04 | ソニー株式会社 | Information processing device, information processing system, information processing method, program, and recording medium |
CN102724390A (zh) * | 2011-03-29 | 2012-10-10 | 赵山山 | Method for detecting the degree of crowding in train cars and people flow guidance system |
US8831642B2 (en) | 2011-08-15 | 2014-09-09 | Connectquest Llc | Close proximity notification system |
JP5760903B2 (ja) | 2011-09-27 | 2015-08-12 | 大日本印刷株式会社 | Store congestion status management system, store congestion status management method, store congestion status management server, and program |
US20140045517A1 (en) * | 2011-10-19 | 2014-02-13 | Point Inside, Inc. | System for determination of real-time queue times by correlating map data and mobile users' location data |
WO2013128326A1 (en) * | 2012-02-29 | 2013-09-06 | Koninklijke Philips N.V. | Apparatus, method and system for monitoring presence of persons in an area |
CN102816614A (zh) | 2012-04-01 | 2012-12-12 | 李万俊 | High-cleanliness biodiesel and preparation method thereof |
US9089227B2 (en) | 2012-05-01 | 2015-07-28 | Hussmann Corporation | Portable device and method for product lighting control, product display lighting method and system, method for controlling product lighting, and -method for setting product display location lighting |
US10304276B2 (en) * | 2012-06-07 | 2019-05-28 | Universal City Studios Llc | Queue management system and method |
US8760314B2 (en) * | 2012-06-11 | 2014-06-24 | Apple Inc. | Co-operative traffic notification |
US8868340B1 (en) | 2012-06-15 | 2014-10-21 | Google Inc. | Proposing transit points by analyzing travel patterns |
US9204736B2 (en) | 2012-08-22 | 2015-12-08 | Streater LLC | Shelving unit lighting system |
US9727037B2 (en) | 2012-08-24 | 2017-08-08 | Abl Ip Holding Llc | Environmental control using a chaotic function |
US9165190B2 (en) | 2012-09-12 | 2015-10-20 | Avigilon Fortress Corporation | 3D human pose and shape modeling |
US20160063144A1 (en) | 2012-09-28 | 2016-03-03 | Gordon Cooke | System and method for modeling human crowd behavior |
JP6040715B2 (ja) | 2012-11-06 | 2016-12-07 | ソニー株式会社 | Image display device, image display method, and computer program |
US20140163860A1 (en) | 2012-12-10 | 2014-06-12 | International Business Machines Corporation | Managing and directing mass transit system passengers |
WO2014120180A1 (en) * | 2013-01-31 | 2014-08-07 | Hewlett-Packard Development Company, L.P. | Area occupancy information extraction |
US9713963B2 (en) | 2013-02-18 | 2017-07-25 | Ford Global Technologies, Llc | Method and apparatus for route completion likelihood display |
US20140254136A1 (en) | 2013-03-07 | 2014-09-11 | Nthdegree Technologies Worldwide Inc. | Led shelf light for product display cases |
US20140278032A1 (en) * | 2013-03-15 | 2014-09-18 | Inrix, Inc. | Traffic causality |
US20160334235A1 (en) | 2013-03-19 | 2016-11-17 | The Florida International University Board Of Trustees | Itpa informed traveler program and application |
WO2014162131A1 (en) * | 2013-04-05 | 2014-10-09 | Mc Donagh Bernard | Emergency exit sign |
US8738292B1 (en) | 2013-05-14 | 2014-05-27 | Google Inc. | Predictive transit calculations |
US11170350B2 (en) | 2013-05-19 | 2021-11-09 | Verizon Media Inc. | Systems and methods for mobile application requests of physical facilities |
KR101411951B1 (ko) | 2013-06-19 | 2014-07-03 | 한진정보통신(주) | Multilingual information guidance system and device |
US20140379477A1 (en) | 2013-06-25 | 2014-12-25 | Amobee Inc. | System and method for crowd based content delivery |
US8939779B1 (en) | 2013-07-22 | 2015-01-27 | Streater LLC | Electro-mechanical connection for lighting |
US20150066558A1 (en) | 2013-08-29 | 2015-03-05 | Thales Canada Inc. | Context aware command and control system |
US20150120340A1 (en) | 2013-10-25 | 2015-04-30 | Elwha Llc | Dynamic seat reservations |
US20150177006A1 (en) | 2013-12-20 | 2015-06-25 | Egan Schulz | Systems and methods for crowd congestion reduction at venue locations using beacons |
CN105096406A (zh) * | 2014-04-30 | 2015-11-25 | 开利公司 | Video analytics system for building energy-consuming equipment and intelligent building management system |
CN110460821A (zh) * | 2014-06-30 | 2019-11-15 | 日本电气株式会社 | Guidance processing apparatus and guidance method |
US10679495B2 (en) * | 2015-10-20 | 2020-06-09 | Stc, Inc. | Systems and methods for detection of travelers at roadway intersections |
CN108370624B (zh) | 2015-10-23 | 2020-05-15 | 飞利浦照明控股有限公司 | Retail space lighting control system and method |
CN107055231A (zh) | 2016-01-04 | 2017-08-18 | 奥的斯电梯公司 | Lobby crowd control dispatching in an MCRL system |
EP3435248A4 (en) * | 2016-03-24 | 2019-02-06 | Fujitsu Limited | Congestion management device, congestion management program, and congestion management method |
US10433399B2 (en) * | 2016-04-22 | 2019-10-01 | Signify Holding B.V. | Crowd management system |
US20200116506A1 (en) * | 2017-01-12 | 2020-04-16 | Xinova, LLC | Crowd control using individual guidance |
US11475671B2 (en) * | 2017-05-26 | 2022-10-18 | Turing Video | Multiple robots assisted surveillance system |
US11387901B2 (en) | 2017-06-30 | 2022-07-12 | Panasonic Intellectual Property Corporation Of America | Communication apparatus and communication method |
JP2019036012A (ja) * | 2017-08-10 | 2019-03-07 | トヨタ自動車株式会社 | Information notification device, information notification system, information notification method, and information notification program |
JP7156242B2 (ja) * | 2019-10-18 | 2022-10-19 | トヨタ自動車株式会社 | Information processing device, program, and control method |
US20210216928A1 (en) | 2020-01-13 | 2021-07-15 | Johnson Controls Technology Company | Systems and methods for dynamic risk analysis |
US20220017115A1 (en) * | 2020-07-14 | 2022-01-20 | Argo AI, LLC | Smart node network for autonomous vehicle perception augmentation |
US20220058944A1 (en) * | 2020-08-24 | 2022-02-24 | Quantela Inc | Computer-based method and system for traffic congestion forecasting |
-
2015
- 2015-05-28 CN CN201910865430.2A patent/CN110460821A/zh active Pending
- 2015-05-28 US US15/323,307 patent/US11138443B2/en active Active
- 2015-05-28 JP JP2016531199A patent/JP6708122B2/ja active Active
- 2015-05-28 CN CN201580036257.5A patent/CN106664391B/zh active Active
- 2015-05-28 WO PCT/JP2015/065405 patent/WO2016002400A1/ja active Application Filing
-
2019
- 2019-03-08 US US16/297,414 patent/US11423658B2/en active Active
- 2019-03-08 US US16/297,436 patent/US20190205661A1/en not_active Abandoned
- 2019-03-08 US US16/297,450 patent/US10878252B2/en active Active
- 2019-08-15 JP JP2019149032A patent/JP6954330B2/ja active Active
-
2020
- 2020-05-21 JP JP2020088740A patent/JP6962413B2/ja active Active
-
2021
- 2021-07-14 US US17/375,146 patent/US12073627B2/en active Active
- 2021-10-14 JP JP2021168610A patent/JP7537754B2/ja active Active
-
2022
- 2022-02-25 JP JP2022027782A patent/JP7513044B2/ja active Active
- 2022-07-13 US US17/863,921 patent/US12073628B2/en active Active
-
2023
- 2023-08-28 US US18/238,932 patent/US20230410521A1/en active Pending
- 2023-08-28 US US18/238,818 patent/US20230401869A1/en active Pending
- 2023-08-28 US US18/238,901 patent/US20230410520A1/en active Pending
- 2023-11-24 JP JP2023198996A patent/JP2024015038A/ja active Pending
Cited By (28)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2019215906A (ja) * | 2014-06-30 | 2019-12-19 | 日本電気株式会社 | Guidance processing apparatus and guidance method |
US11823398B2 (en) | 2016-03-18 | 2023-11-21 | Nec Corporation | Information processing apparatus, control method, and program |
US11205275B2 (en) | 2016-03-18 | 2021-12-21 | Nec Corporation | Information processing apparatus, control method, and program |
US11361452B2 (en) | 2016-03-18 | 2022-06-14 | Nec Corporation | Information processing apparatus, control method, and program |
US11158068B2 (en) | 2016-03-18 | 2021-10-26 | Nec Corporation | Information processing apparatus, control method, and program |
US12175687B2 (en) | 2016-03-18 | 2024-12-24 | Nec Corporation | Information processing apparatus, control method, and program |
US10699422B2 (en) | 2016-03-18 | 2020-06-30 | Nec Corporation | Information processing apparatus, control method, and program |
WO2017159060A1 (ja) * | 2016-03-18 | 2017-09-21 | 日本電気株式会社 | Information processing device, control method, and program |
JPWO2017159060A1 (ja) * | 2016-03-18 | 2019-01-17 | 日本電気株式会社 | Information processing device, control method, and program |
US12165339B2 (en) | 2016-03-18 | 2024-12-10 | Nec Corporation | Information processing apparatus, control method, and program |
JPWO2017168585A1 (ja) * | 2016-03-29 | 2018-09-20 | 三菱電機株式会社 | Train operation control system and train operation control method |
EP3457358A4 (en) * | 2016-05-13 | 2019-11-20 | Hitachi, Ltd. | OVERLOAD ANALYSIS DEVICE, OVERLOAD ANALYSIS PROCEDURE AND OVERLOAD ANALYSIS PROGRAM |
JPWO2017141454A1 (ja) * | 2016-05-13 | 2018-02-22 | 株式会社日立製作所 | Congestion status visualization device, congestion status visualization system, congestion status visualization method, and congestion status visualization program |
WO2018047646A1 (ja) * | 2016-09-06 | 2018-03-15 | パナソニックIpマネジメント株式会社 | Congestion detection device, congestion detection system, and congestion detection method |
JP2018042049A (ja) * | 2016-09-06 | 2018-03-15 | パナソニックIpマネジメント株式会社 | Congestion detection device, congestion detection system, and congestion detection method |
JP2018074299A (ja) * | 2016-10-26 | 2018-05-10 | 日本電信電話株式会社 | Flow state measurement device, method, and program |
JP2019528391A (ja) * | 2016-11-08 | 2019-10-10 | 中国鉱業大学 | Method for preventing stampedes in pedestrian passages in the confined spaces of a subway |
JP2018195292A (ja) * | 2017-05-12 | 2018-12-06 | キヤノン株式会社 | Information processing device, information processing system, information processing method, and program |
JP7009250B2 (ja) | 2017-05-12 | 2022-02-10 | キヤノン株式会社 | Information processing device, information processing system, information processing method, and program |
JP2019021019A (ja) * | 2017-07-18 | 2019-02-07 | パナソニック株式会社 | People flow analysis method, people flow analysis device, and people flow analysis system |
JP2021144765A (ja) * | 2017-10-11 | 2021-09-24 | 東芝テック株式会社 | Mobile terminal and program |
JP7242763B2 (ja) | 2017-10-11 | 2023-03-20 | 東芝テック株式会社 | Mobile terminal and program |
JPWO2019087595A1 (ja) * | 2017-11-06 | 2020-09-24 | 本田技研工業株式会社 | Mobile body distribution situation forecast device and mobile body distribution situation forecast method |
US11257376B2 (en) | 2017-11-06 | 2022-02-22 | Honda Motor Co., Ltd. | Mobile body distribution situation forecast device and mobile body distribution situation forecast method |
WO2019087595A1 (ja) * | 2017-11-06 | 2019-05-09 | 本田技研工業株式会社 | Mobile body distribution situation forecast device and mobile body distribution situation forecast method |
CN108154110A (zh) * | 2017-12-22 | 2018-06-12 | 任俊芬 | Dense pedestrian flow counting method based on deep-learning head detection |
JP2019171887A (ja) * | 2018-03-26 | 2019-10-10 | 株式会社エヌ・ティ・ティ・データ | Passenger weight equalization support device and passenger weight equalization support method |
JP6983359B1 (ja) * | 2020-08-27 | 2021-12-17 | 三菱電機株式会社 | Guidance device, guidance program, guidance method, and management system |
Also Published As
Publication number | Publication date |
---|---|
JP6962413B2 (ja) | 2021-11-05 |
US10878252B2 (en) | 2020-12-29 |
US20170132475A1 (en) | 2017-05-11 |
JP2022023129A (ja) | 2022-02-07 |
US20230410521A1 (en) | 2023-12-21 |
JP2024015038A (ja) | 2024-02-01 |
US20210342598A1 (en) | 2021-11-04 |
US20190205661A1 (en) | 2019-07-04 |
US20230410520A1 (en) | 2023-12-21 |
JP2020149710A (ja) | 2020-09-17 |
US11423658B2 (en) | 2022-08-23 |
CN106664391B (zh) | 2019-09-24 |
US20190272430A1 (en) | 2019-09-05 |
JP6708122B2 (ja) | 2020-06-10 |
JP6954330B2 (ja) | 2021-10-27 |
US20190272431A1 (en) | 2019-09-05 |
CN110460821A (zh) | 2019-11-15 |
CN106664391A (zh) | 2017-05-10 |
JP2022075697A (ja) | 2022-05-18 |
US12073628B2 (en) | 2024-08-27 |
JP7537754B2 (ja) | 2024-08-21 |
US12073627B2 (en) | 2024-08-27 |
US20230401869A1 (en) | 2023-12-14 |
JPWO2016002400A1 (ja) | 2017-05-25 |
US20220351522A1 (en) | 2022-11-03 |
JP2019215906A (ja) | 2019-12-19 |
US11138443B2 (en) | 2021-10-05 |
JP7513044B2 (ja) | 2024-07-09 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
JP6962413B2 (ja) | Guidance system, guidance method, and program | |
JP7528970B2 (ja) | Video surveillance system, video surveillance method, and program | |
JP6898165B2 (ja) | People flow analysis method, people flow analysis device, and people flow analysis system | |
EP2869256A1 (en) | Targeted advertising based on physical traits and anticipated trajectory | |
US20190212719A1 (en) | Information processing device and information processing method | |
CN109311622B (zh) | Elevator system and car call estimation method | |
WO2015194098A1 (en) | Information processing apparatus, information processing method, and program | |
JP2018504726A (ja) | Arrival information providing method, server, and display device | |
WO2017130253A1 (ja) | Facility use support method, facility use support device, and user terminal device | |
US20200074159A1 (en) | Information processing apparatus and information processing method | |
TW201944324A (zh) | Guidance system | |
CN112238458B (zh) | Robot management device, robot management method, and robot management system | |
EP3158294A1 (en) | Apparatus, method and program to position building infrastructure through user information | |
US20220297308A1 (en) | Control device, control method, and control system | |
JP7448682B2 (ja) | Automatic door device, display control device, display control method, and display control program | |
US11994875B2 (en) | Control device, control method, and control system | |
CN112238454B (zh) | Robot management device, robot management method, and robot management system | |
JP6742754B2 (ja) | Image processing device, image processing method, image processing system, and program | |
JP2023038993A (ja) | Information processing device, information processing system, information processing method, and computer program | |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 15814131 Country of ref document: EP Kind code of ref document: A1 |
|
ENP | Entry into the national phase |
Ref document number: 2016531199 Country of ref document: JP Kind code of ref document: A |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
WWE | Wipo information: entry into national phase |
Ref document number: 15323307 Country of ref document: US |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 15814131 Country of ref document: EP Kind code of ref document: A1 |