WO2017168684A1 - Monitoring system, monitoring method, program, and computer storage medium - Google Patents
- Publication number
- WO2017168684A1 (PCT/JP2016/060683)
- Authority
- WIPO (PCT)
- Prior art keywords
- monitoring
- unit
- predetermined area
- imaging
- background
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N7/00—Television systems
- H04N7/18—Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
Description
- The present invention relates to a monitoring system for monitoring a predetermined area, a monitoring method using the monitoring system, a program, and a computer storage medium.
- Conventionally, monitoring systems using surveillance cameras have been employed for purposes such as crime prevention and disaster prevention. In this type of system, a surveillance camera is installed in a predetermined monitoring area or at a monitored object such as a store, commercial facility, or public facility, and the occurrence of an abnormal state or dangerous event is monitored based on images captured by the camera.
- In general, image data captured by a surveillance camera is two-dimensional, so height and depth information cannot be obtained. For example, even if image data capturing two people of different apparent sizes is obtained, a computer cannot determine from the image data alone whether the two people actually differ in height or whether one person merely stands nearer the camera and the other farther away. That is, it is difficult to accurately grasp a person from two-dimensional image data, and it is therefore difficult to identify the person's actions.
- Patent Document 1 proposes adding depth information to image data.
- Specifically, the imaging system described in Patent Document 1 includes an irradiation source that emits a laser, an imaging unit that images a subject, and a measurement unit that measures distance data from the irradiation source to the subject. The imaging system associates the image data of the subject captured by the imaging unit with the distance data measured by the measurement unit, generates composite data in which the distance data is superimposed on the image data, and further generates three-dimensional image data of the subject.
- However, when the imaging system described in Patent Document 1 is used as a monitoring system, the distance to the subject, that is, the monitoring target, must be measured continuously throughout the monitoring period in order to capture the target in three dimensions. On top of that, the above-described composite of image data and distance data and the three-dimensional image data of the subject must be generated. The processing performed during monitoring then becomes very complicated and the amount of data handled enormous, placing a heavy load on the computer that performs the processing. There is therefore room for improvement as a monitoring system.
- The present invention has been made in view of this point, and its object is to monitor a predetermined area appropriately and simply.
- To achieve this object, the present invention provides a monitoring system for monitoring a predetermined area, comprising: an imaging unit fixed at a predetermined position that images the predetermined area from above; a distance measuring unit that measures the distance from the imaging unit to a measurement target in the predetermined area; a model generation unit that, with respect to the background of the predetermined area (consisting of the bottom surface of the area and permanent objects installed in the area throughout the monitoring period), partitions the background in the horizontal and height directions with a mesh of a predetermined size, based on the distance from the imaging unit to the background measured by the distance measuring unit, and thereby generates a mesh model of the background; and a monitoring unit that extracts a moving monitoring target based on image data captured by the imaging unit, applies the mesh model to the background of the predetermined area in the image data, generates new monitoring image data in which the monitoring target is combined with the mesh model, and monitors the predetermined area.
- In the monitoring system of the present invention, after the distance from the imaging unit to the background of the predetermined area is measured by the distance measuring unit, a mesh model of the background is generated by the model generation unit; the mesh model is then applied to the background of the predetermined area captured by the imaging unit, and the area is monitored by the monitoring unit.
- Here, the mesh model of the present invention contains the distance information from the imaging unit to each mesh measured by the distance measuring unit, that is, the position information (three-dimensional information) of each mesh in the horizontal and height directions. Three-dimensional information about a monitoring target can therefore be acquired simply by determining which mesh the target in the predetermined area is located on.
- In other words, once the background mesh model of the predetermined area has been generated, it is no longer necessary to continuously measure the distance to the monitoring target as in Patent Document 1 described above; the predetermined area can be monitored merely by imaging it with the imaging unit. According to the present invention, the predetermined area can therefore be monitored appropriately and simply.
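To make this mechanism concrete, the following is a minimal sketch of the mesh model as a lookup table that is measured once and then reused during monitoring. It is not the patent's implementation; the class names and the 0.5 m cell size are illustrative assumptions.

```python
# Minimal sketch of the mesh-model idea: a grid of cells, each carrying
# 3-D position information measured once, before monitoring starts.
# All names and the 0.5 m cell size are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class Mesh:
    x: float  # horizontal position of the cell (m)
    y: float
    z: float  # height of the background surface in this cell (m)

class MeshModel:
    def __init__(self, cell_size: float = 0.5):
        self.cell_size = cell_size
        self.cells: dict[tuple[int, int], Mesh] = {}

    def set_cell(self, ix: int, iy: int, z: float) -> None:
        """Store the measured background height for grid index (ix, iy)."""
        s = self.cell_size
        self.cells[(ix, iy)] = Mesh(ix * s, iy * s, z)

    def locate(self, x: float, y: float) -> Mesh:
        """Return the 3-D cell under a horizontal position (x, y)."""
        s = self.cell_size
        return self.cells[(int(x // s), int(y // s))]

# Once the model exists, monitoring needs no further distance measurement:
model = MeshModel()
model.set_cell(4, 2, 0.0)          # floor cell, measured in advance
print(model.locate(2.3, 1.1))      # -> Mesh(x=2.0, y=1.0, z=0.0)
```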
- The model generation unit may superimpose the background image from the image data captured by the imaging unit on the mesh model in the monitoring image data.
- The monitoring system may further include a database in which information on objects in the predetermined area is stored, and the monitoring unit may identify an object based on the database to monitor the predetermined area.
- The model generation unit may partition the space surrounded by the bottom surface of the predetermined area and the permanent objects three-dimensionally in the mesh model, using the mesh of the predetermined size.
- The monitoring system may further include a person setting unit that divides a person who has entered the predetermined area into a plurality of regions, and the monitoring unit may monitor the person's behavior based on the rate of change of the person in each region set by the person setting unit.
- The monitoring system may further include a sound detection unit that detects sound generated in the predetermined area, and the monitoring unit may set the imaging direction of the imaging unit toward the sound detected by the sound detection unit and monitor the area where the sound was generated.
- Another aspect of the present invention is a monitoring method for imaging and monitoring a predetermined area with an imaging unit, comprising: a distance measuring step of measuring, with a distance measuring unit, the distance from the imaging unit to the background of the predetermined area, the background consisting of the bottom surface of the area and permanent objects installed in the area throughout the monitoring period; a model generation step of generating a mesh model of the background by partitioning the background in the horizontal and height directions with a mesh of a predetermined size, based on the distance measured in the distance measuring step; an imaging step of imaging the predetermined area with the imaging unit; and a monitoring step of extracting a moving monitoring target based on the image data captured in the imaging step, applying the mesh model to the background of the predetermined area in the image data, generating new monitoring image data in which the monitoring target is combined with the mesh model, and monitoring the predetermined area.
- In the model generation step, the background image from the image data captured by the imaging unit may be superimposed on the mesh model in the monitoring image data.
- Information on objects in the predetermined area may be stored in a database, and in the monitoring step an object may be identified based on the database to monitor the predetermined area.
- In the model generation step, the space surrounded by the bottom surface of the predetermined area and the permanent objects may be partitioned three-dimensionally in the mesh model, using the mesh of the predetermined size.
- In the monitoring step, a person who has entered the predetermined area may be divided into a plurality of regions, and the person's behavior may be monitored based on the rate of change of the person in each of the divided regions.
- In the imaging step, when sound generated in the predetermined area is detected by a sound detection unit, the imaging direction of the imaging unit may be set toward the sound to image the region where the sound was generated, and in the monitoring step that region may be monitored.
- According to another aspect of the present invention, there is provided a program, operating on a computer that controls the monitoring system, for causing that computer to function so that the monitoring method is executed by the monitoring system.
- According to yet another aspect of the present invention, a readable computer storage medium storing the program is provided.
- According to the present invention, a predetermined area can be monitored appropriately and simply.
- Hereinafter, embodiments of the present invention will be described with reference to the drawings. In this specification and the drawings, components having substantially the same functional configuration are given the same reference numerals, and redundant description is omitted.
<1. Configuration of the monitoring system>
- FIGS. 1 and 2 show an outline of the configuration of the monitoring system 1 according to the present embodiment. In the present embodiment, a case will be described in which the monitoring system 1 is used to monitor the inside of a store 10 such as a supermarket or a convenience store.
- As shown in FIG. 1, the monitoring system 1 includes an imaging device 20 fixed to the ceiling of the store 10 and a monitoring device 30 connected to the imaging device 20 via a network (not shown). The network is not particularly limited as long as it allows communication between the imaging device 20 and the monitoring device 30; for example, it may be the Internet, a wired LAN, or a wireless LAN.
- The imaging device 20 images the monitoring area A in the store 10 from above and measures the distance to the measurement target in the monitoring area A (the background of the monitoring area A, described later). The monitoring area A coincides with the imaging area of the imaging device 20.
- The monitoring device 30 generates a three-dimensional mesh model of the background of the monitoring area A and applies the mesh model to the background of the monitoring area A captured by the imaging device 20 to monitor the monitoring area A. The configurations and operations of the imaging device 20 and the monitoring device 30 are described in detail below.
<2. Configuration of the imaging device>
- As shown in FIG. 2, the imaging device 20 has a transparent or semi-transparent, substantially hemispherical dome cover 22 provided at the lower part of a housing 21. Inside the dome cover 22 are a distance measuring sensor 23 serving as the distance measuring unit, a monitoring camera 24 serving as the imaging unit, and a support member 25 that suspends and supports the monitoring camera 24. Inside the housing 21 are a drive mechanism 26 that controls the rotation of the monitoring camera 24 via the support member 25, and a communication unit 27 for transmitting data acquired by the imaging device 20 to the monitoring device 30. The shape of the imaging device 20 is not limited to this and can be designed arbitrarily.
- The distance measuring sensor 23 includes, for example, an irradiation source 23a that emits infrared light and a light receiving element 23b that receives the reflected infrared waves. An LED, for example, is used as the irradiation source 23a, and a PSD or CMOS sensor, for example, as the light receiving element 23b. Lenses (not shown) for focusing the light are provided on the monitoring-area-A side of the irradiation source 23a and the light receiving element 23b. A plurality of irradiation sources 23a and light receiving elements 23b may be provided.
- The distance measuring sensor 23 measures the distance to the measurement target (monitoring area A) by irradiating the target with infrared light from the irradiation source 23a and receiving the infrared waves reflected by the target with the light receiving element 23b. Known methods can be used to calculate the distance from the reflected infrared waves, for example from the time or phase difference between emission and the return of the reflection, from the position on the light receiving element at which the reflection arrives, or from the intensity of the reflection; those skilled in the art can select any of them. The distance data measured by the distance measuring sensor 23 is output to the communication unit 27.
- The distance measuring sensor 23 is fixed in the immediate vicinity of, and directly below, the monitoring camera 24. The distance measured by the distance measuring sensor 23 can therefore be regarded as the distance from the monitoring camera 24 to the measurement target.
- Although the distance measuring sensor 23 of the present embodiment uses infrared light to measure the distance to the measurement target, it is not limited to this; ultrasonic waves or a laser, for example, may be selected instead.
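As a hedged illustration of one of the known distance-calculation methods mentioned above (time of flight), the distance follows directly from the round-trip time of a reflected pulse; the figures below are made up for the example:

```python
# Time-of-flight distance estimate: light travels to the target and back,
# so distance = c * t / 2. A purely illustrative example, not the
# sensor's actual signal processing.

C = 299_792_458.0  # speed of light (m/s)

def tof_distance(round_trip_seconds: float) -> float:
    return C * round_trip_seconds / 2.0

# A reflection returning after ~20 ns corresponds to ~3 m:
print(f"{tof_distance(20e-9):.2f} m")  # ~3.00 m
```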
- As the monitoring camera 24, an arbitrary camera such as a CCD camera or CMOS camera is used. The monitoring camera 24 is suspended from and supported by the support member 25. The drive mechanism 26 can rotate the monitoring camera 24 in the horizontal direction (X-axis and Y-axis directions, pan) and the height direction (Z-axis direction, tilt), and the camera is also capable of zooming. For the drive mechanism 26, a stepping motor or a direct-drive motor, for example, is used.
- The monitoring camera 24 images the monitoring area A through the dome cover 22, which serves as an imaging window, and acquires images of the monitoring area A. The image data captured by the monitoring camera 24 is output to the communication unit 27.
- The communication unit 27 is a communication interface that mediates communication over the network and performs data communication with the input unit 31 of the monitoring device 30 described later. Specifically, the communication unit 27 outputs the distance data measured by the distance measuring sensor 23 and the image data captured by the monitoring camera 24 to the monitoring device 30.
<3. Configuration of the monitoring device>
- The monitoring device 30 is implemented by, for example, a computer, and consists of a central processing unit such as circuitry (hardware) or a CPU, together with programs (software) for making them function. The monitoring device 30 includes an input unit 31, a model generation unit 32, a monitoring unit 33, an output unit 34, a control unit 35, and a storage unit 36.
- The input unit 31 is a communication interface that mediates communication over the network and performs data communication with the communication unit 27 of the imaging device 20. Specifically, the input unit 31 receives the distance data measured by the distance measuring sensor 23 and the image data captured by the monitoring camera 24.
- The model generation unit 32 generates a mesh model of the background of the monitoring area A based on the distance data received by the input unit 31. The monitoring unit 33 monitors the monitoring area A based on the image data and the mesh model. The specific operations of the model generation unit 32 and the monitoring unit 33 are described later.
- The output unit 34 outputs the monitoring result of the monitoring unit 33. The output method is not particularly limited; any method can be selected, such as displaying the result on a display or issuing an alarm when an abnormal state or dangerous event occurs.
- The control unit 35 controls the operations of the imaging device 20. That is, the control unit 35 controls, for example, the timing and position at which the distance measuring sensor 23 measures distances, and the timing and position at which the monitoring camera 24 captures images.
- The storage unit 36 stores a program for monitoring the monitoring area A with the monitoring system 1. The program may be stored in the storage unit 36 as described above, or in a computer-readable storage medium such as a hard disk (HD), flexible disk (FD), compact disk (CD), magneto-optical disk (MO), or various memories. The program can also be stored in such a storage medium by downloading it via a communication network such as the Internet.
<4. Operation of the monitoring system>
- Next, the method of monitoring the monitoring area A performed by the monitoring system 1 configured as described above will be explained. FIG. 3 is a flowchart showing an example of the main steps of the monitoring method.
- In the following, the case of monitoring the inside of the store 10 shown in FIG. 4 is described as an example. Here, whatever does not move in the monitoring area A of the store 10 during the monitoring period is defined as the background of the monitoring area A. In the present embodiment, the floor surface 11 of the store 10 and a plurality of fixtures 12, which are permanent objects installed in the store 10, constitute the background of the monitoring area A.
- The fixtures 12 may be moved intentionally, for example when the store layout is changed, but no monitoring period runs while they are being moved; the monitoring period is assumed to start after a fixture 12 has been installed at its new position. In this embodiment, the fixtures 12 are therefore treated as permanent objects.
- First, the distance between the monitoring camera 24 and the background of the monitoring area A is measured using the distance measuring sensor 23 of the imaging device 20 (step S1 in FIG. 3). Specifically, as shown in FIGS. 5 and 6, distance measurement starts directly below the imaging device 20 (dotted line in FIG. 5), and the measurement direction is moved outward from directly below while the distance measuring sensor 23 rotates (solid line in FIG. 5). In this way, the distance between the monitoring camera 24 and the background is measured over the entire monitoring area A. The distance data measured by the distance measuring sensor 23 is output to the model generation unit 32 of the monitoring device 30 via the communication unit 27 and the input unit 31.
- The model generation unit 32 then generates a mesh model of the background of the monitoring area A based on the distance data measured by the distance measuring sensor 23 (step S2 in FIG. 3). Specifically, as shown in FIG. 5, meshes M are laid out from directly below the imaging device 20 toward the outside. The size of a mesh M is set arbitrarily, for example to 50 cm × 50 cm. The horizontal position of each mesh M (X-axis and Y-axis directions) can be calculated from the number of meshes M laid out, and the position of each mesh M in the height direction (Z-axis direction) can be calculated from the distance data measured by the distance measuring sensor 23.
- In step S2, therefore, the background of the monitoring area A is partitioned three-dimensionally by the plurality of meshes M based on the distance data measured by the distance measuring sensor 23, and the mesh model D is generated.
- FIG. 7 shows the mesh model D of two fixtures 12 and the floor surface 11 between them, but in reality the mesh model D is generated for the entire background of the monitoring area A. In FIG. 7, the floor surface 11 is hatched for clarity.
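The following sketch illustrates how step S2 could be realized: each (pan, tilt, distance) sample from the ceiling-mounted sensor is converted into coordinates and binned into 0.5 m cells, whose highest hit gives the background height. The geometry, the assumed 3 m ceiling height, and all names are illustrative assumptions, not the patent's implementation.

```python
# Sketch of step S2: turn (pan, tilt, distance) samples from the ceiling
# sensor into a height map of 0.5 m x 0.5 m cells. Geometry and names
# are illustrative assumptions.

import math

SENSOR_HEIGHT = 3.0  # ceiling height above the floor (m), assumed
CELL = 0.5           # mesh size (m)

def sample_to_xyz(pan_deg: float, tilt_deg: float, dist: float):
    """Convert one distance sample to coordinates; tilt 0 = straight down."""
    pan, tilt = math.radians(pan_deg), math.radians(tilt_deg)
    r = dist * math.sin(tilt)                  # horizontal offset
    x, y = r * math.cos(pan), r * math.sin(pan)
    z = SENSOR_HEIGHT - dist * math.cos(tilt)  # height of the hit point
    return x, y, z

def build_mesh_model(samples):
    """Bin samples into cells, keeping the highest surface seen per cell."""
    cells = {}
    for pan, tilt, dist in samples:
        x, y, z = sample_to_xyz(pan, tilt, dist)
        key = (int(x // CELL), int(y // CELL))
        cells[key] = max(cells.get(key, 0.0), z)
    return cells

# Straight down hits the floor (z = 0); an oblique, shorter ray hits
# the top of a fixture (z > 0):
model = build_mesh_model([(0.0, 0.0, 3.0), (0.0, 40.0, 2.0)])
print(model)
```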
- Steps S1 and S2 above are preparation for monitoring the monitoring area A; actual monitoring starts from this point. That is, the monitoring area A is imaged using the monitoring camera 24 of the imaging device 20 (step S3 in FIG. 3). The image data captured by the monitoring camera 24 is output to the monitoring unit 33 of the monitoring device 30 via the communication unit 27 and the input unit 31.
- Next, the monitoring unit 33 analyzes the image data and extracts persons in the monitoring area A based on, for example, background difference. Background difference is a known technique in which a moving monitoring target is extracted by taking the difference between the image data acquired by the monitoring camera 24 and a background image of the monitoring area A acquired in advance, as sketched below. In the following, the monitoring target is described as a person, but the monitoring target is not limited to this.
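A minimal background-difference sketch in NumPy, purely as an illustration of the known technique (the threshold and names are assumptions):

```python
# Pixels that differ from a pre-acquired background image beyond a
# threshold are treated as the moving monitoring target.

import numpy as np

def extract_moving_target(frame: np.ndarray,
                          background: np.ndarray,
                          threshold: int = 30) -> np.ndarray:
    """Return a boolean foreground mask for a grayscale frame."""
    # Widen to int16 so the subtraction cannot wrap around in uint8.
    diff = np.abs(frame.astype(np.int16) - background.astype(np.int16))
    return diff > threshold

background = np.zeros((4, 4), dtype=np.uint8)   # empty scene
frame = background.copy()
frame[1:3, 1:3] = 200                           # a "person" appears
print(extract_moving_target(frame, background).astype(int))
```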
- The monitoring unit 33 further applies the mesh model D generated by the model generation unit 32 to the background of the monitoring area A in the image data. That is, as shown in FIG. 8, the person P is combined with the mesh model D to generate new monitoring image data, and the monitoring area A is monitored based on this monitoring image data (step S4 in FIG. 3). The monitoring result of the monitoring unit 33 is output to the output unit 34.
- Suppose, as in FIG. 8, that two persons P1 and P2 of different apparent sizes appear in the monitoring area A in the monitoring image data, P1 on the side nearer the monitoring camera 24 (Y-axis negative side) and P2 on the farther side (Y-axis positive side). From two-dimensional image data alone, a computer cannot judge whether the two actually differ in height or merely appear to.
- However, each mesh M of the mesh model D carries three-dimensional information. By recognizing, for example, the feet of the persons P1 and P2, it is possible to determine which mesh M of the floor surface 11 each person is standing on. Since the heights of the persons P1 and P2 in the monitoring image data are known, their actual heights can then be calculated. That is, by determining which mesh M each of the persons P1 and P2 is located on, their actual heights can be obtained, and the persons can be grasped three-dimensionally from the monitoring image data. Even when the monitoring target is not a person, its shape can be grasped three-dimensionally in the same way.
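As a worked example of this height calculation under a simple pinhole-camera assumption: the patent only states that the height becomes computable once the person's mesh (and hence distance) is known; the formula, focal length, and numbers below are illustrative.

```python
# Invert the pinhole projection h = h_px * Z / f: with the distance Z
# supplied by the mesh model, the same image height maps to different
# real heights. Focal length and figures are assumptions.

def real_height(pixel_height: float, distance_m: float,
                focal_px: float) -> float:
    return pixel_height * distance_m / focal_px

F = 800.0  # assumed focal length in pixels

# Two people with the same 170 px image height: the mesh model says one
# stands 4 m away, the other 8 m away, so their real heights differ.
print(real_height(170, 4.0, F))  # 0.85 m -> e.g. a child
print(real_height(170, 8.0, F))  # 1.70 m -> an adult
```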
- Because the persons P1 and P2 can be grasped three-dimensionally in this way, it can be recognized, for example, that an apparently large movement of the nearer person P1 in the monitoring image data is actually not so large, while an apparently small movement of the farther person P2 is actually large. The movements of the persons P1 and P2 can thus be grasped accurately. Moreover, even if the persons P1 and P2 overlap in the monitoring image data, it can be recognized that they are not actually colliding.
- Thereafter, the monitoring area A can be monitored by repeating steps S3 and S4. Whereas the method described in Patent Document 1 requires constantly measuring the distance to the persons P1 and P2 while monitoring them, in this embodiment the distance to the persons P1 and P2 does not have to be measured. The processing performed during monitoring is therefore very simple and the amount of data handled small, so the load on the computer can be greatly reduced.
- If the background of the monitoring area A changes, for example because the layout of the store 10 is altered, steps S1 and S2 are performed again and the mesh model D can be regenerated automatically. The monitoring system 1 can therefore cope flexibly with such changes in the background of the monitoring area A.
- As described above, according to the present embodiment, the monitoring area A can be monitored appropriately and simply.
- In the above embodiment, image data of the background of the monitoring area A captured in advance by the monitoring camera 24 may be superimposed on the mesh model D composed of the plurality of meshes M. The mesh model D then comprises the plurality of meshes M and the background image data of the monitoring area A, and the monitoring area A can be monitored more appropriately in step S4.
- The storage unit 36 of the monitoring device 30 may also store a database holding information on objects in the monitoring area A. An object in the monitoring area A is an object assumed to exist there. For example, information on typical fixtures 12 such as product shelves, desks, and chairs is stored in the database, and information on representative products, carts, and the like is stored as well.
- As described above, the shape of an object in the monitoring area A can be grasped using the mesh model D, so what the object is can be identified by referring to the information in the database. This makes it possible to grasp even specific actions of the person P: for example, it can be detected that the person P is about to enter a restricted area, or that the person P is about to shoplift a certain product from a certain shelf. In this way, the movement of the monitoring target can be grasped concretely, and the monitoring area A can be monitored more appropriately.
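A sketch of this kind of database lookup, matching an object's mesh-derived dimensions against entries expected in the area (the entries, dimensions, and matching rule are all illustrative assumptions):

```python
# Identify an object from its mesh-model footprint by matching against
# a database of objects expected in the monitoring area.

OBJECT_DB = {
    "product shelf": {"width_m": 1.8, "height_m": 1.8},
    "desk":          {"width_m": 1.2, "height_m": 0.7},
    "cart":          {"width_m": 0.6, "height_m": 1.0},
}

def identify(width_m: float, height_m: float, tol: float = 0.2) -> str:
    """Return the DB entry whose dimensions match within a tolerance."""
    for name, dims in OBJECT_DB.items():
        if (abs(dims["width_m"] - width_m) <= tol
                and abs(dims["height_m"] - height_m) <= tol):
            return name
    return "unknown"

# Dimensions recovered from the mesh model (e.g. one 0.5 m cell wide,
# 1.0 m tall) match the cart entry within the tolerance:
print(identify(0.5, 1.0))   # -> cart
print(identify(3.0, 3.0))   # -> unknown
```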
- Further, as shown in FIG. 10, the space surrounded by the floor surface 11 and the fixtures 12 in the monitoring area A (hereinafter, the monitoring space) may be partitioned three-dimensionally. The meshes N formed in the monitoring space have the same size as the meshes M on the floor surface 11 and the fixtures 12, and are formed three-dimensionally so as to fill the space between the meshes M of the floor surface 11 and the fixtures 12. In FIG. 10, the meshes M on the floor surface 11 and the fixtures 12 are drawn with dotted lines for clarity.
- Like the meshes M of the floor surface 11 and the fixtures 12, each mesh N in the monitoring space carries three-dimensional position information in the horizontal and height directions. When monitoring the monitoring area A in step S4, the person P can therefore be grasped three-dimensionally more accurately and easily.
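A sketch of such a three-dimensional partition: given the background height per cell from the mesh model, every same-size cell between the background surface and the ceiling becomes a free-space mesh N (the ceiling height and names are illustrative assumptions):

```python
# Partition the open monitoring space into 3-D cells (meshes N) of the
# same size as the background meshes M.

CELL = 0.5        # mesh size (m)
CEILING = 3.0     # ceiling height (m), assumed

def free_space_cells(height_map: dict[tuple[int, int], float]):
    """Yield (ix, iy, iz) for each empty cell above the background."""
    levels = int(CEILING / CELL)
    for (ix, iy), surface_z in height_map.items():
        first_free = int(surface_z / CELL)
        for iz in range(first_free, levels):
            yield (ix, iy, iz)

# One floor cell (surface at 0 m) and one fixture-top cell (1.5 m):
height_map = {(0, 0): 0.0, (1, 0): 1.5}
print(sum(1 for _ in free_space_cells(height_map)))  # 6 + 3 = 9 cells
```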
- As shown in FIG. 11, the monitoring device 30 may further include a person setting unit 40 that divides a person P who has entered the monitoring area A into a plurality of regions. In that case, the person setting unit 40 divides the person P into a plurality of regions 50 to 52 in the height direction, as shown in FIG. 12: an upper region 50 formed at the position of the person P's head, an intermediate region 51 formed at the position of the torso, and a lower region 52 formed at the position of the legs.
- To set these regions, the person setting unit 40 first recognizes the feet of the person P and obtains the person P's height from the mesh model D; the rough positions of the head and torso can then be determined.
- When monitoring the monitoring area A in step S4, the person P is then monitored based on the rate of change of the person P in each of the regions 50 to 52. For example, a large rate of change in the upper region 50 indicates that the person P is swinging the head left and right, looking around; a large rate of change in the intermediate region 51 indicates that the person P is moving the body sideways, fidgeting; and a large rate of change in the lower region 52 indicates leg movements such as shuffling about. In this way, the specific behavior of the person P can be grasped when monitoring the monitoring area A.
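A sketch of this per-region monitoring: the person's foreground mask is split into head, torso, and leg bands and each band is compared across frames. The band proportions and the change metric are illustrative assumptions, not the patent's definitions.

```python
# Rate of change of a person mask in upper / middle / lower regions.

import numpy as np

def split_regions(mask: np.ndarray):
    """Split a person mask into head, torso, and leg bands (assumed ratios)."""
    h = mask.shape[0]
    return mask[: h // 5], mask[h // 5 : 3 * h // 5], mask[3 * h // 5 :]

def change_rates(prev: np.ndarray, curr: np.ndarray):
    """Fraction of pixels that changed in each region between two frames."""
    return [float(np.mean(p != c))
            for p, c in zip(split_regions(prev), split_regions(curr))]

prev = np.zeros((10, 4), dtype=bool)
curr = prev.copy()
curr[0:2] = True                      # only the head band moved
up, mid, low = change_rates(prev, curr)
print(up, mid, low)                   # 1.0 0.0 0.0 -> head movement
```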
- As shown in FIG. 2, the imaging device 20 may further include sound collectors 60 that pick up sound generated in the monitoring area A, and the monitoring device 30 may further include a sound detection unit 61 that detects specific sounds in the audio collected by the sound collectors 60. The sound collectors 60 are microphones; a plurality of them, for example three, are provided inside the dome cover 22.
- FIG. 13 is a flowchart showing an example of the main steps of a monitoring method using sound detection. Steps S1 to S4 are the same as steps S1 to S4 shown in FIG. 3 of the above embodiment. In addition, the specific sounds to be detected are registered in the storage unit 36 in advance (step T1 in FIG. 13). A specific sound is a sound generated when an abnormal state or a dangerous event occurs in the monitoring area A, for example a person P screaming or an object breaking. Steps S1, S2, and T1 are the preparation for monitoring the monitoring area A.
- During monitoring, the audio data collected by the sound collectors 60 is output to the sound detection unit 61 via the communication unit 27 and the input unit 31. The sound detection unit 61 compares this audio data with the specific sounds stored in the storage unit 36; if they match, it detects that a specific sound has occurred in the monitoring area A (step T2 in FIG. 13).
- Then, in step S3, the monitoring camera 24 images the region in the monitoring area A where the specific sound occurred, and in step S4 that region is monitored.
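A toy sketch of steps T1 and T2: registered "specific sounds" are matched against incoming audio, and the camera is panned toward the loudest microphone. A real system would use proper audio classification and source localization; the single-frequency feature, the registered values, and the microphone bearings are all illustrative assumptions.

```python
# Match incoming audio against registered sounds; pan toward the sound.

import numpy as np

REGISTERED = {"scream": 1200.0, "breaking glass": 3000.0}  # dominant Hz, assumed

def dominant_freq(signal: np.ndarray, rate: int) -> float:
    spectrum = np.abs(np.fft.rfft(signal))
    return float(np.fft.rfftfreq(len(signal), 1.0 / rate)[np.argmax(spectrum)])

def detect_specific_sound(signal: np.ndarray, rate: int, tol: float = 100.0):
    f = dominant_freq(signal, rate)
    for name, ref in REGISTERED.items():
        if abs(f - ref) <= tol:
            return name
    return None

MIC_BEARINGS_DEG = [0.0, 120.0, 240.0]  # three mics under the dome, assumed

def sound_direction(levels: list[float]) -> float:
    """Crude localization: bearing of the loudest microphone."""
    return MIC_BEARINGS_DEG[int(np.argmax(levels))]

rate = 8000
t = np.arange(rate) / rate
signal = np.sin(2 * np.pi * 1200 * t)          # a 1200 Hz "scream"
if (name := detect_specific_sound(signal, rate)):
    pan = sound_direction([0.2, 0.9, 0.4])     # mic 1 is loudest
    print(f"{name} detected; pan camera to {pan} deg")  # 120 deg
```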
- A plurality of imaging devices 20 may be installed in the store 10. In that case, the entire interior of the store 10 can be monitored by monitoring a plurality of monitoring areas A. Furthermore, if one monitoring area A is monitored by a plurality of imaging devices 20, the monitoring area A can be monitored with higher accuracy.
- Besides stores, the monitoring system 1 can monitor various places, for example commercial facilities such as shopping malls, department stores, general complex facilities, and exhibition and trade fair venues; public facilities such as airports and railways; amusement facilities such as amusement parks and stadiums; specialized facilities such as hospitals and health centers; offices and factories; detached houses and apartment buildings; and parking lots.
- The present invention is useful, for example, for monitoring a predetermined area.
Abstract
This monitoring system for monitoring a predetermined region has: an imaging unit for imaging the predetermined region from above; a range finding unit for measuring the distance from the imaging unit to a measurement subject in the predetermined region; a model generation unit for generating a mesh model of the background of the predetermined region by demarcating the background in the horizontal direction and the height direction with a mesh having a predetermined size, on the basis of the distance from the imaging unit to the background measured by the range finding unit; and a monitoring unit for monitoring the predetermined region by extracting a monitored subject that is in motion on the basis of image data captured by the imaging unit, applying the mesh model to the background of the predetermined region in the image data, and generating new monitoring image data in which the monitored subject is superimposed on the mesh model.
Description
本発明は、所定領域を監視する監視システム、当該監視システムを用いた監視方法、プログラム及びコンピュータ記憶媒体に関する。
The present invention relates to a monitoring system for monitoring a predetermined area, a monitoring method using the monitoring system, a program, and a computer storage medium.
従来、防犯や防災などの目的で監視カメラを用いた監視システムが利用されている。この種の監視システムでは、例えば店舗や商業施設、あるいは公共施設など、所定の監視対象領域や監視対象物に監視カメラを設置し、監視カメラで撮像された画像に基づいて、異常状態や危険事象の発生の有無を監視する。
Conventionally, surveillance systems using surveillance cameras have been used for crime prevention and disaster prevention purposes. In this type of monitoring system, for example, a surveillance camera is installed in a predetermined surveillance area or surveillance object such as a store, commercial facility, or public facility, and an abnormal state or dangerous event is detected based on an image captured by the surveillance camera. Monitor for occurrence of
一般に、監視カメラで撮像される画像データは二次元的なものであるため、高さ情報や奥行き情報が得られない。そうすると、例えば大きさの異なる2人の人物を捉えた画像データが得られたとしても、実際に2人の身長が異なっているのか、あるいは一方の人物が監視カメラの近い側にいて他方の人物が遠い側にいるため、大きさが異なるように見えているのか、コンピュータは画像データだけから判断することはできない。すなわち、二次元の画像データから人物を正確に把握することは困難であり、このため人物の行動を特定するのも難しい。
Generally, since image data captured by a surveillance camera is two-dimensional, height information and depth information cannot be obtained. Then, for example, even if image data that captures two people with different sizes is obtained, the height of the two people is actually different, or one person is near the surveillance camera and the other person The computer cannot determine from the image data alone whether the image appears to be different because it is on the far side. That is, it is difficult to accurately grasp a person from two-dimensional image data, and therefore it is difficult to specify the person's action.
そこで、特許文献1には、画像データに奥行き情報を加えることが提案されている。具体的には、特許文献1に記載された撮像システムは、レーザを照射する照射源と、被写体を撮像する撮像部と、照射源から被写体までの距離データを測定する測定部と、を有する。そして、撮像システムでは、撮像部で撮像された被写体の画像データと、測定部で測定された距離データを対応付けて、画像データに距離データを重畳した合成データを生成し、さらに被写体の三次元画像データを生成している。
Therefore, Patent Document 1 proposes adding depth information to image data. Specifically, the imaging system described in Patent Document 1 includes an irradiation source that irradiates a laser, an imaging unit that images a subject, and a measurement unit that measures distance data from the irradiation source to the subject. Then, the imaging system associates the image data of the subject imaged by the imaging unit with the distance data measured by the measurement unit, generates composite data in which the distance data is superimposed on the image data, and further generates a three-dimensional image of the subject. Image data is generated.
しかしながら、特許文献1に記載された撮像システムを監視システムとして用いる場合、被写体、すなわち監視対象を三次元的に捉えるためには、その被写体までの距離を監視期間中、常に測定する必要がある。そのうえで、上述した画像データと距離データの合成データと、被写体の三次元画像データを生成しなければならい。かかる場合、監視中に行われる処理が非常に複雑となり、また扱うデータ量も膨大になるため、処理を行うコンピュータに多大な負荷かがかかる。したがって、監視システムとしては改善の余地がある。
However, when the imaging system described in Patent Document 1 is used as a monitoring system, in order to capture a subject, that is, a monitoring target in three dimensions, it is necessary to always measure the distance to the subject during the monitoring period. In addition, the above-described combined data of the image data and distance data and the three-dimensional image data of the subject must be generated. In such a case, the processing performed during monitoring becomes very complicated, and the amount of data to be handled becomes enormous, which places a heavy load on the computer that performs the processing. Therefore, there is room for improvement as a monitoring system.
本発明は、かかる点に鑑みてなされたものであり、所定領域を適切且つ簡易的に監視することを目的とする。
The present invention has been made in view of such a point, and an object thereof is to monitor a predetermined area appropriately and simply.
前記の目的を達成するため、本発明は、所定領域を監視する監視システムであって、所定位置に固定して設けられ、前記所定領域を上方から撮像する撮像部と、前記所定領域において前記撮像部から測定対象までの距離を測定する測距部と、前記所定領域の底面と、前記所定領域において監視期間中に常設される常設物とから構成される当該所定領域の背景について、前記測距部で測定される前記撮像部から前記背景までの距離に基づき、前記背景を所定サイズのメッシュで水平方向及び高さ方向に区画して、当該背景のメッシュモデルを生成するモデル生成部と、前記撮像部で撮像される画像データに基づいて動きのある監視対象を抽出し、さらに前記画像データの前記所定領域の背景に前記メッシュモデルを適用し、当該メッシュモデルに前記監視対象を合成した新たな監視画像データを生成して、前記所定領域を監視する監視部と、を有することを特徴としている。
In order to achieve the above object, the present invention provides a monitoring system for monitoring a predetermined area, which is fixedly provided at a predetermined position, and that captures the predetermined area from above, and the imaging in the predetermined area. A distance measuring unit that measures a distance from a measurement unit to a measurement target, a bottom surface of the predetermined area, and a background of the predetermined area that is permanently installed in the predetermined area during a monitoring period. Based on the distance from the imaging unit measured by the unit to the background, the background is partitioned in a horizontal direction and a height direction with a mesh of a predetermined size, and a model generation unit that generates a mesh model of the background, Based on the image data picked up by the image pickup unit, a moving monitoring target is extracted, the mesh model is applied to the background of the predetermined area of the image data, and the mesh model is applied. The monitored to generate a new monitoring image data synthesized it is characterized by having a monitoring unit that monitors the predetermined region.
本発明の監視システムでは、撮像部から所定領域の背景までの距離を測距部で測定した後、モデル生成部で所定領域の背景のメッシュモデルを生成し、さらに撮像部で撮像される所定領域の背景にメッシュモデルを適用して、監視部で所定領域を監視する。
In the monitoring system of the present invention, after the distance from the imaging unit to the background of the predetermined area is measured by the distance measuring unit, the mesh generation of the background of the predetermined area is generated by the model generation unit, and further the predetermined area captured by the imaging unit A mesh model is applied to the background of and a predetermined area is monitored by the monitoring unit.
ここで、本発明のメッシュモデルは、測距部で測定された撮像部から各メッシュまでの距離情報を含んでおり、すなわち各メッシュの水平方向と高さ方向の位置情報(三次元情報)を含んでいる。そうすると、所定領域内の監視対象がどのメッシュに位置しているかを把握することで、当該監視対象の三次元情報を取得することができる。換言すれば、所定領域の背景のメッシュモデルを一旦生成すれば、その後、上述した特許文献1のように常に監視対象の距離を測定する必要がなく、撮像部で所定領域を撮像するだけで、当該所定領域を監視することができるのである。したがって、本発明によれば、所定領域を適切且つ簡易的に監視することができる。
Here, the mesh model of the present invention includes distance information from the imaging unit measured by the ranging unit to each mesh, that is, position information (three-dimensional information) in the horizontal direction and the height direction of each mesh. Contains. If it does so, the three-dimensional information of the said monitoring object can be acquired by grasping | ascertaining in which mesh the monitoring object in a predetermined area | region is located. In other words, once the background mesh model of the predetermined area is generated, after that, it is not necessary to always measure the distance of the monitoring target as in Patent Document 1 described above, and only by imaging the predetermined area with the imaging unit, The predetermined area can be monitored. Therefore, according to the present invention, the predetermined area can be monitored appropriately and simply.
前記モデル生成部は、前記監視画像データにおける前記メッシュモデルに、前記撮像部で撮像された前記画像データにおける前記背景の画像を重ね合せてもよい。
The model generation unit may superimpose the background image in the image data captured by the imaging unit on the mesh model in the monitoring image data.
前記監視システムは、前記所定領域内の物体の情報が保存されたデータベースをさらに有し、前記監視部は、前記データベースに基づき前記物体を特定して、前記所定領域を監視してもよい。
The monitoring system may further include a database in which information on objects in the predetermined area is stored, and the monitoring unit may monitor the predetermined area by specifying the object based on the database.
前記モデル生成部は、前記メッシュモデルにおいて、前記所定領域の底面と前記常設物で囲まれる空間を、前記所定サイズのメッシュで三次元に区画してもよい。
The model generation unit may three-dimensionally divide a space surrounded by the bottom surface of the predetermined region and the permanent object in the mesh model with the mesh of the predetermined size.
前記監視システムは、前記所定領域に進入した人物を複数の領域に区画する人物設定部をさらに有し、前記監視部は、前記人物設定部で設定された各領域における前記人物の変化率に基づいて、当該人物の行動を監視してもよい。
The monitoring system further includes a person setting unit that divides a person who has entered the predetermined area into a plurality of areas, and the monitoring unit is based on the rate of change of the person in each area set by the person setting unit. Then, the behavior of the person may be monitored.
前記監視システムは、前記所定領域で発生する音声を検知する音声検知部をさらに有し、前記監視部は、前記音声検知部で検知した音声の方向に前記撮像部の撮像方向を設定して、当該音声が発生した領域を監視してもよい。
The monitoring system further includes an audio detection unit that detects audio generated in the predetermined area, and the monitoring unit sets an imaging direction of the imaging unit in a direction of audio detected by the audio detection unit, You may monitor the area | region where the said sound generate | occur | produced.
別な観点による本発明は、所定領域を撮像部で撮像して監視する監視方法であって、前記所定領域の底面と、前記所定領域において監視期間中に常設される常設物とから構成される当該所定領域の背景について、前記撮像部から前記背景までの距離を測距部で測定する測距工程と、前記測距工程で測定された距離に基づき、前記背景を所定サイズのメッシュで水平方向及び高さ方向に区画して、当該背景のメッシュモデルを生成するモデル生成工程と、前記撮像部で前記所定領域を撮像する撮像工程と、前記撮像工程で撮像される画像データに基づいて動きのある監視対象を抽出し、さらに前記画像データの前記所定領域の背景に前記メッシュモデルを適用し、当該メッシュモデルに前記監視対象を合成した新たな監視画像データを生成して、前記所定領域を監視する監視工程と、を有することを特徴としている。
Another aspect of the present invention is a monitoring method for imaging and monitoring a predetermined area with an imaging unit, and includes a bottom surface of the predetermined area and a permanent object permanently installed in the predetermined area during a monitoring period. With respect to the background of the predetermined area, a distance measuring step for measuring the distance from the imaging unit to the background with the distance measuring unit, and the background in the horizontal direction with a mesh of a predetermined size based on the distance measured in the distance measuring step And a model generation step of generating a mesh model of the background by partitioning in the height direction, an imaging step of imaging the predetermined area by the imaging unit, and a motion based on the image data captured in the imaging step Extract a certain monitoring target, apply the mesh model to the background of the predetermined area of the image data, and generate new monitoring image data by combining the monitoring target with the mesh model It is characterized by having a monitoring step of monitoring the predetermined region.
前記モデル生成工程において、前記監視画像データにおける前記メッシュモデルに、前記撮像部で撮像された前記画像データにおける前記背景の画像を重ね合せてもよい。
In the model generation step, the background image in the image data captured by the imaging unit may be superimposed on the mesh model in the monitoring image data.
前記所定領域内の物体の情報をデータベースに保存しておき、前記監視工程において、前記データベースに基づき前記物体を特定して、前記所定領域を監視してもよい。
The information on the object in the predetermined area may be stored in a database, and in the monitoring step, the object may be specified based on the database to monitor the predetermined area.
前記モデル生成工程では、前記メッシュモデルにおいて、前記所定領域の底面と前記常設物で囲まれる空間を、前記所定サイズのメッシュで三次元に区画してもよい。
In the model generation step, in the mesh model, a space surrounded by the bottom surface of the predetermined region and the permanent object may be partitioned three-dimensionally with the mesh of the predetermined size.
前記監視工程において、前記所定領域に進入した人物を複数の領域に区画し、当該区画された各領域における前記人物の変化率に基づいて、当該人物の行動を監視してもよい。
In the monitoring step, a person who has entered the predetermined area may be divided into a plurality of areas, and the behavior of the person may be monitored based on the rate of change of the person in each of the divided areas.
前記撮像工程において、前記所定領域で発生する音声が音声検知部で検知されると、当該音声の方向に前記撮像部の撮像方向を設定して、音声が発生した領域を撮像し、前記監視工程において、前記音声が発生した領域を監視してもよい。
In the imaging step, when the sound generated in the predetermined area is detected by the sound detection unit, the imaging direction of the imaging unit is set in the direction of the sound, the region where the sound is generated is imaged, and the monitoring step The area where the sound is generated may be monitored.
また別な観点による本発明によれば、前記監視方法を監視システムによって実行させるように、当該監視システムを制御するコンピュータを機能させるための当該コンピュータ上で動作するプログラムが提供される。
According to another aspect of the present invention, there is provided a program that operates on a computer for causing the computer that controls the monitoring system to function so that the monitoring method is executed by the monitoring system.
さらに別な観点による本発明によれば、前記プログラムを格納した読み取り可能なコンピュータ記憶媒体が提供される。
According to another aspect of the present invention, a readable computer storage medium storing the program is provided.
本発明によれば、所定領域を適切且つ簡易的に監視することができる。
According to the present invention, the predetermined area can be monitored appropriately and simply.
以下、本発明の実施の形態について図面を参照して説明する。なお、本明細書及び図面において、実質的に同一の機能構成を有する構成要素については、同一の符号を付することにより重複説明を省略する。
Hereinafter, embodiments of the present invention will be described with reference to the drawings. In addition, in this specification and drawing, about the component which has the substantially same function structure, duplication description is abbreviate | omitted by attaching | subjecting the same code | symbol.
<1.監視システムの構成>
図1及び図2は、本実施の形態にかかる監視システム1の構成の概略を示している。本実施の形態では、監視システム1を用いて、例えばスーパーマーケットやコンビニエンスストアなどの店舗10の内部を監視する場合について説明する。 <1. Configuration of the monitoring system>
1 and 2 show an outline of the configuration of themonitoring system 1 according to the present embodiment. In the present embodiment, a case will be described in which the monitoring system 1 is used to monitor the inside of a store 10 such as a supermarket or a convenience store.
図1及び図2は、本実施の形態にかかる監視システム1の構成の概略を示している。本実施の形態では、監視システム1を用いて、例えばスーパーマーケットやコンビニエンスストアなどの店舗10の内部を監視する場合について説明する。 <1. Configuration of the monitoring system>
1 and 2 show an outline of the configuration of the
図1に示すように監視システム1は、店舗10の天井面に固定して設置された撮像装置20と、撮像装置20とネットワーク(図示せず)を介して接続される監視装置30とを有する。ネットワークは、撮像装置20と監視装置30との間の通信を行うことができるものであれば特に限定されるものではないが、例えばインターネットや有線LAN、無線LANなどにより構成される。
As shown in FIG. 1, the monitoring system 1 includes an imaging device 20 that is fixedly installed on the ceiling surface of the store 10, and a monitoring device 30 that is connected to the imaging device 20 via a network (not shown). . The network is not particularly limited as long as communication between the imaging device 20 and the monitoring device 30 can be performed. For example, the network is configured by the Internet, a wired LAN, a wireless LAN, or the like.
撮像装置20では、店舗10内の監視領域Aを上方から撮像すると共に、監視領域Aの測定対象(後述する監視領域Aの背景)までの距離を測定する。監視領域Aは、撮像装置20の撮像領域と一致している。また、監視装置30では、監視領域Aの背景について三次元のメッシュモデルを生成し、さらに撮像装置20で撮像される監視領域Aの背景に当該メッシュモデルを適用して、監視領域Aを監視する。なお、これら撮像装置20と監視装置30の構成と動作については、以下において詳細に説明する。
The imaging device 20 images the monitoring area A in the store 10 from above and measures the distance to the measurement target of the monitoring area A (the background of the monitoring area A described later). The monitoring area A coincides with the imaging area of the imaging device 20. Further, the monitoring device 30 generates a three-dimensional mesh model for the background of the monitoring region A, and applies the mesh model to the background of the monitoring region A captured by the imaging device 20 to monitor the monitoring region A. . The configurations and operations of the imaging device 20 and the monitoring device 30 will be described in detail below.
<2.撮像装置の構成>
図2に示すように撮像装置20は、筐体21の下部に、透明又は半透明の略半球体状のドームカバー22が設けられた構成を有する。ドームカバー22の内部には、測距部としての測距センサ23と、撮像部としての監視カメラ24と、監視カメラ24を吊り下げて支持する支持部材25とが設けられている。また、筐体21の内部には、支持部材25を介して監視カメラ24の回動動作を制御する駆動機構26と、撮像装置20で取得されたデータを監視装置30に送信するための通信部27とが設けられている。なお、撮像装置20の形状はこれに限定されるものではなく、任意に設計できる。 <2. Configuration of Imaging Device>
As shown in FIG. 2, theimaging device 20 has a configuration in which a transparent or semi-transparent, substantially hemispherical dome cover 22 is provided in a lower part of a housing 21. Inside the dome cover 22, a distance measuring sensor 23 as a distance measuring unit, a monitoring camera 24 as an imaging unit, and a support member 25 for hanging and supporting the monitoring camera 24 are provided. Further, inside the housing 21, a drive mechanism 26 that controls the rotation operation of the monitoring camera 24 via the support member 25, and a communication unit for transmitting data acquired by the imaging device 20 to the monitoring device 30. 27 are provided. Note that the shape of the imaging device 20 is not limited to this, and can be arbitrarily designed.
図2に示すように撮像装置20は、筐体21の下部に、透明又は半透明の略半球体状のドームカバー22が設けられた構成を有する。ドームカバー22の内部には、測距部としての測距センサ23と、撮像部としての監視カメラ24と、監視カメラ24を吊り下げて支持する支持部材25とが設けられている。また、筐体21の内部には、支持部材25を介して監視カメラ24の回動動作を制御する駆動機構26と、撮像装置20で取得されたデータを監視装置30に送信するための通信部27とが設けられている。なお、撮像装置20の形状はこれに限定されるものではなく、任意に設計できる。 <2. Configuration of Imaging Device>
As shown in FIG. 2, the
測距センサ23は、例えば赤外線を照射する照射源23aと、赤外線の反射波を受光する受光素子23bと備えている。照射源23aには、例えばLEDが用いられる。受光素子23bには、例えばPSDやCMOSなどが用いられる。照射源23aと受光素子23bの監視領域A側には、それぞれ光を集束させるレンズ(図示せず)が設けられている。なお、照射源23aと受光素子23bは、それぞれ複数設けられていてもよい。
The distance measuring sensor 23 includes, for example, an irradiation source 23a for irradiating infrared rays and a light receiving element 23b for receiving reflected waves of infrared rays. For example, an LED is used as the irradiation source 23a. For example, PSD or CMOS is used for the light receiving element 23b. On the monitoring area A side of the irradiation source 23a and the light receiving element 23b, lenses (not shown) for focusing the light are provided. A plurality of irradiation sources 23a and light receiving elements 23b may be provided.
測距センサ23では、照射源23aから測定対象(監視領域A)に赤外線を照射し、測定対象で反射した赤外線の反射波を受光素子23bで受光することにより、当該測定対象までの距離が測定される。赤外線の反射波に基づいて測定対象までの距離を測定する方法としては、例えば赤外線が照射されてからその反射波が戻ってくるまでの時間や位相差、赤外線の反射波が受光される受光素子上の位置、赤外線の反射波の強度などから算出する方法があり、当業者は公知の方法の中から任意に選択できる。そして、測距センサ23で測定された距離データは、通信部27に出力される。
The distance measurement sensor 23 measures the distance to the measurement target by irradiating the measurement target (monitoring area A) with infrared rays from the irradiation source 23a and receiving the reflected wave of the infrared rays reflected by the measurement target with the light receiving element 23b. Is done. As a method of measuring the distance to the measurement object based on the reflected wave of infrared rays, for example, the time and phase difference from when the reflected wave is irradiated until the reflected wave returns, a light receiving element that receives the reflected wave of infrared rays There are methods for calculating from the above position, the intensity of the reflected wave of infrared rays, etc., and those skilled in the art can arbitrarily select from known methods. Then, the distance data measured by the distance measuring sensor 23 is output to the communication unit 27.
測距センサ23は、監視カメラ24に近接してその直下に固定して設けられている。したがって、測距センサ23で測定された距離は、監視カメラ24から測定対象までの距離と見做すことができる。
The distance measuring sensor 23 is provided in the vicinity of the surveillance camera 24 and fixed immediately below it. Therefore, the distance measured by the distance measuring sensor 23 can be regarded as the distance from the monitoring camera 24 to the measurement target.
なお、本実施の形態の測距センサ23は、測定対象までの距離を測定するために赤外線を用いたが、これに限定されず、例えば超音波やレーザなど、任意に選択できる。
Note that although the distance measuring sensor 23 of the present embodiment uses infrared rays to measure the distance to the measurement object, it is not limited to this, and can be arbitrarily selected, for example, an ultrasonic wave or a laser.
監視カメラ24には、例えばCCDカメラやCMOSカメラなどの任意のカメラが用いられる。監視カメラ24は、支持部材25に吊り下げて支持されている。また監視カメラ24は、駆動機構26によって、水平方向(X軸方向及びY軸方向、パン方向)と高さ方向(Z軸方向、チルト方向)に回転することができ、またズーム動作が可能に構成されている。駆動機構26には、例えばステッピングモータやダイレクトドライブモータが用いられる。そして、監視カメラ24は、撮像窓となるドームカバー22を介して監視領域Aを撮像し、当該監視領域Aの画像を取得できる。また、監視カメラ24で撮像された画像データは、通信部27に出力される。
As the monitoring camera 24, an arbitrary camera such as a CCD camera or a CMOS camera is used. The surveillance camera 24 is supported by being suspended from a support member 25. Further, the monitoring camera 24 can be rotated in the horizontal direction (X-axis direction and Y-axis direction, pan direction) and the height direction (Z-axis direction, tilt direction) by the drive mechanism 26, and can perform a zoom operation. It is configured. As the drive mechanism 26, for example, a stepping motor or a direct drive motor is used. The monitoring camera 24 can capture an image of the monitoring area A through the dome cover 22 serving as an imaging window and acquire an image of the monitoring area A. Further, the image data captured by the monitoring camera 24 is output to the communication unit 27.
通信部27は、ネットワークとの間の通信を媒介する通信インターフェースであり、後述する監視装置30の入力部31とデータ通信を行う。具体的に通信部27は、測距センサ23で測定された距離データと、監視カメラ24で撮像された画像データとを監視装置30に出力する。
The communication unit 27 is a communication interface that mediates communication with the network, and performs data communication with an input unit 31 of the monitoring device 30 described later. Specifically, the communication unit 27 outputs the distance data measured by the distance measuring sensor 23 and the image data captured by the monitoring camera 24 to the monitoring device 30.
<3.監視装置の構成>
監視装置30は、例えばコンピュータによって構成され、例えば回路(ハードウェア)やCPUなどの中央演算処理装置と、これらを機能させるためのプログラム(ソフトウェア)から構成される。監視装置30は、入力部31、モデル生成部32、監視部33、出力部34、制御部35、及び記憶部36を有する。 <3. Configuration of monitoring device>
Themonitoring device 30 is configured by, for example, a computer, and is configured by, for example, a central processing unit such as a circuit (hardware) or a CPU, and a program (software) for causing them to function. The monitoring device 30 includes an input unit 31, a model generation unit 32, a monitoring unit 33, an output unit 34, a control unit 35, and a storage unit 36.
監視装置30は、例えばコンピュータによって構成され、例えば回路(ハードウェア)やCPUなどの中央演算処理装置と、これらを機能させるためのプログラム(ソフトウェア)から構成される。監視装置30は、入力部31、モデル生成部32、監視部33、出力部34、制御部35、及び記憶部36を有する。 <3. Configuration of monitoring device>
The
入力部31は、ネットワークとの間の通信を媒介する通信インターフェースであり、撮像装置20の通信部27とデータ通信を行う。具体的に入力部31には、上述した測距センサ23で測定された距離データと、監視カメラ24で撮像された画像データとが入力される。
The input unit 31 is a communication interface that mediates communication with the network, and performs data communication with the communication unit 27 of the imaging device 20. Specifically, the input unit 31 receives the distance data measured by the distance measuring sensor 23 and the image data captured by the monitoring camera 24.
モデル生成部32は、入力部31の距離データに基づいて監視領域Aの背景のメッシュモデルを生成する。また、監視部33は、画像データとメッシュモデルに基づいて監視領域Aを監視する。これらモデル生成部32と監視部33の具体的な動作については後述する。
The model generation unit 32 generates a background mesh model of the monitoring area A based on the distance data of the input unit 31. Further, the monitoring unit 33 monitors the monitoring area A based on the image data and the mesh model. Specific operations of the model generation unit 32 and the monitoring unit 33 will be described later.
出力部34は、監視部33の監視結果を出力する。監視結果の出力方法は特に限定されるものではなく、例えばディスプレイに表示したり、異常状態や危険事象が発生した際にアラームを発するなど、任意の方法を選択できる。
The output unit 34 outputs the monitoring result of the monitoring unit 33. The method for outputting the monitoring result is not particularly limited. For example, any method can be selected such as displaying on a display or issuing an alarm when an abnormal state or dangerous event occurs.
制御部35は、撮像装置20における各動作を制御する。すなわち、制御部35は、例えば測距センサ23が距離を測定するタイミングと位置を制御し、また監視カメラ24が撮像するタイミングと位置を制御する。
The control unit 35 controls each operation in the imaging device 20. That is, the control unit 35 controls, for example, the timing and position at which the distance measuring sensor 23 measures distance, and the timing and position at which the monitoring camera 24 captures an image.
記憶部36には、監視システム1で監視領域Aを監視するためのプログラムが格納されている。なお、上記プログラムは、このように記憶部36に格納されていてもよいし、あるいはコンピュータ読み取り可能なハードディスク(HD)、フレキシブルディスク(FD)、コンパクトディスク(CD)、マグネットオプティカルデスク(MO)、各種メモリなどのコンピュータに読み取り可能な記憶媒体に格納されていてもよい。また、上記プログラムは、インターネットなどの通信回線網を介してダウンロードすることにより、上記記憶媒体などに格納することもできる。
The storage unit 36 stores a program for monitoring the monitoring area A by the monitoring system 1. The program may be stored in the storage unit 36 as described above, or may be a computer-readable hard disk (HD), flexible disk (FD), compact disk (CD), magnet optical desk (MO), It may be stored in a computer-readable storage medium such as various memories. Further, the program can be stored in the storage medium or the like by downloading it via a communication line network such as the Internet.
<4. Operation of the monitoring system>
Next, a method of monitoring the monitoring area A performed by the monitoring system 1 configured as described above will be described. FIG. 3 is a flowchart showing an example of the main steps of the monitoring method.
In the following, a case where the inside of the store 10 shown in FIG. 4 is monitored will be described as an example. Here, whatever does not move during the monitoring period in the monitoring area A of the store 10 is defined as the background of the monitoring area A. In the present embodiment, the floor surface 11 of the store 10 and a plurality of fixtures 12, as permanent objects permanently installed in the store 10, constitute the background of the monitoring area A. Note that the fixtures 12 may be moved intentionally, for example when the layout of the store 10 is changed, but such movement does not take place during a monitoring period; in that case, the monitoring period is assumed to start after the fixtures 12 have been installed at their new positions. Therefore, in the present embodiment, the fixtures 12 are treated as permanent objects.
First, the distance between the monitoring camera 24 and the background of the monitoring area A is measured using the distance measuring sensor 23 of the imaging device 20 (step S1 in FIG. 3). Specifically, as shown in FIGS. 5 and 6, distance measurement starts from directly below the imaging device 20 (dotted line portion in FIG. 5), and while the distance measuring sensor 23 is rotated, the measurement direction is moved outward from directly below the imaging device 20 (solid line portion in FIG. 5). In this way, the distance between the monitoring camera 24 and the background of the monitoring area A is measured over the entire monitoring area A. The distance data measured by the distance measuring sensor 23 is output to the model generation unit 32 of the monitoring device 30 via the communication unit 27 and the input unit 31.
The model generation unit 32 generates a mesh model of the background of the monitoring area A based on the distance data measured by the distance measuring sensor 23 (step S2 in FIG. 3). Specifically, as shown in FIG. 5, the meshes M are stacked from directly below the imaging device 20 toward the outside. The size of the mesh M can be set arbitrarily; here it is, for example, 50 cm × 50 cm. The horizontal position (X-axis and Y-axis directions) of each mesh M can be calculated from the number of stacked meshes M, and the height position (Z-axis direction) of each mesh M can be calculated from the distance data measured by the distance measuring sensor 23.
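Purely as an illustration (not part of the original disclosure), the conversion from range measurements to mesh heights described above might look like the following sketch; the scan representation, the function name, and the averaging of samples per cell are all assumptions.

```python
import math
from collections import defaultdict

MESH_SIZE = 0.5  # side length of a mesh M in metres (50 cm x 50 cm example)

def background_height_map(measurements, sensor_height):
    """Build a height map of the background from range measurements.

    `measurements` is an iterable of (pan, tilt, distance) tuples, where
    pan/tilt are the sensor angles in radians (tilt = 0 points straight
    down) and `distance` is the slant range to the background surface.
    Returns {(ix, iy): z} giving the background height of each mesh M.
    """
    cells = defaultdict(list)
    for pan, tilt, dist in measurements:
        # Convert the polar measurement to Cartesian coordinates with the
        # sensor at (0, 0, sensor_height), scanning outward from below.
        x = dist * math.sin(tilt) * math.cos(pan)
        y = dist * math.sin(tilt) * math.sin(pan)
        z = sensor_height - dist * math.cos(tilt)
        cells[(int(x // MESH_SIZE), int(y // MESH_SIZE))].append(z)
    # One representative height per mesh M; here simply the average.
    return {idx: sum(zs) / len(zs) for idx, zs in cells.items()}
```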
When the meshes M carrying three-dimensional position information in the horizontal and height directions are stacked in this way, a three-dimensional mesh model D in which the floor surface 11 and the fixtures 12 are reflected is generated for the background of the monitoring area A, as shown in FIG. 7. In other words, in step S2, based on the distance data measured by the distance measuring sensor 23, the background of the monitoring area A is partitioned three-dimensionally by the plurality of meshes M, and the mesh model D is generated. In FIG. 7, for ease of explanation, only the mesh model D of two fixtures 12 and the floor surface 11 between them is shown, but in practice the mesh model D is generated for the entire background of the monitoring area A. Also in FIG. 7, the floor surface 11 is hatched for ease of understanding.
Steps S1 to S2 so far are advance preparation for monitoring the monitoring area A; actual monitoring starts from here. That is, the monitoring area A is imaged using the monitoring camera 24 of the imaging device 20 (step S3 in FIG. 3). Image data captured by the monitoring camera 24 is output to the monitoring unit 33 of the monitoring device 30 via the communication unit 27 and the input unit 31.
The monitoring unit 33 analyzes the image data and extracts a person in the monitoring area A by, for example, background subtraction. Background subtraction is a known technique: a moving monitoring target is extracted by taking the difference between the image data acquired by the monitoring camera 24 and a background image of the monitoring area A acquired in advance. In the present embodiment, since the store 10 is monitored, the monitoring target is described as a person, but the monitoring target is not limited to this.
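As a hedged sketch of this background subtraction step (the patent names the technique but not an implementation), the extraction of moving targets could look as follows, here using OpenCV; the threshold and minimum-area values are illustrative assumptions.

```python
import cv2

def extract_moving_targets(frame, background, threshold=30, min_area=500):
    """Extract moving monitoring targets by background subtraction.

    `frame` and `background` are BGR images of the same size; returns
    bounding boxes (x, y, w, h) of regions that differ from the background
    image of the monitoring area A acquired in advance.
    """
    diff = cv2.absdiff(frame, background)
    gray = cv2.cvtColor(diff, cv2.COLOR_BGR2GRAY)
    _, mask = cv2.threshold(gray, threshold, 255, cv2.THRESH_BINARY)
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (3, 3))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)  # suppress noise
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    return [cv2.boundingRect(c) for c in contours
            if cv2.contourArea(c) >= min_area]
```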
Meanwhile, the monitoring unit 33 applies the mesh model D generated by the model generation unit 32 to the background of the monitoring area A in the image data. That is, as shown in FIG. 8, the person P is synthesized with the mesh model D to generate new monitoring image data. The monitoring area A is then monitored based on this monitoring image data (step S4 in FIG. 3). The monitoring result of the monitoring unit 33 is output to the output unit 34.
Here, the advantage of using monitoring image data in this way will be described. For example, as shown in FIG. 8, suppose that two persons P1 and P2 of different sizes appear in the monitoring area A in the monitoring image data. Hereinafter, the side closer to the monitoring camera 24 (Y-axis negative direction side) is referred to as the "front", and the side farther from the monitoring camera 24 (Y-axis positive direction side) is referred to as the "back". With conventional image data, a computer cannot judge whether the two persons actually differ in height, or whether they merely appear to differ in size because one person P1 is in the front and the other person P2 is in the back.
In contrast, in the monitoring image data of the present embodiment, each mesh M of the mesh model D carries three-dimensional information. Then, for example, by recognizing the feet of the persons P1 and P2, it is possible to determine on which mesh M of the floor surface 11 each person is standing. Since the lengths of the persons P1 and P2 in the height direction on the monitoring image data are known, the heights of the persons P1 and P2 can then be calculated. That is, by determining on which mesh M each of the persons P1 and P2 is located, their actual heights can be obtained. In this way, the persons P1 and P2 can be grasped three-dimensionally from the monitoring image data. Even when the monitoring target is not a person, its shape can be grasped three-dimensionally.
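A minimal sketch of the height estimation this paragraph describes, under strong simplifying assumptions (a calibrated camera whose pixel rays are already expressed in world coordinates, and a person standing upright on a flat floor mesh); the helper and its parameters are hypothetical.

```python
import numpy as np

def estimate_height(foot_px, head_px, K_inv, cam_pos, floor_z):
    """Estimate a person's real height from one image and the mesh model.

    Hypothetical helper: `K_inv` is the inverse camera intrinsic matrix,
    `cam_pos` the camera position as a NumPy array, and pixel rays are
    assumed to be already rotated into world coordinates. The foot ray is
    intersected with the floor plane z = floor_z (the height stored in the
    mesh M the person stands on); the head ray is then evaluated at the
    same horizontal offset, assuming the person stands upright.
    """
    foot_ray = K_inv @ np.array([foot_px[0], foot_px[1], 1.0])
    head_ray = K_inv @ np.array([head_px[0], head_px[1], 1.0])
    t = (floor_z - cam_pos[2]) / foot_ray[2]   # foot ray meets the floor
    foot_world = cam_pos + t * foot_ray
    s = (foot_world[0] - cam_pos[0]) / head_ray[0]
    head_world = cam_pos + s * head_ray        # head at same horizontal spot
    return head_world[2] - floor_z             # estimated height in metres
```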
Furthermore, once the persons P1 and P2 can be grasped three-dimensionally in this way, it can be determined, for example, that even if the movement of the person P1 in the front appears large on the monitoring image data, the actual movement is not that large; conversely, even if the movement of the person P2 in the back appears small, the actual movement is large. Therefore, the movements of the persons P1 and P2 can be grasped accurately. In addition, even if the persons P1 and P2 overlap on the monitoring image data, it can be recognized that they are not actually colliding.
Furthermore, once the mesh model D of the background of the monitoring area A has been generated in steps S1 to S2, the monitoring area A can be monitored by repeating steps S3 to S4. That is, whereas the method described in Patent Document 1 requires the distances to the persons P1 and P2 to be measured constantly while monitoring them, in the present embodiment there is no need to measure the distances to the persons P1 and P2. The processing performed during monitoring is therefore very simple and handles little data, so the load on the computer can be greatly reduced.
Moreover, even if the background of the monitoring area A captured by the imaging device 20 changes, for example because the layout of the store 10 is changed or the imaging device 20 is repositioned, the mesh model D can be regenerated automatically by performing steps S1 to S2 again. The monitoring system 1 can therefore flexibly cope with such changes in the background of the monitoring area A.
As described above, according to this embodiment, the monitoring area A can be monitored appropriately and simply.
<5. Other embodiments>
Next, another embodiment of the present invention will be described. In the following description, the description overlapping with the above embodiment is omitted.
<5-1. Other embodiments>
When the mesh model D is generated in step S2 of the above embodiment, image data of the background of the monitoring area A captured in advance by the monitoring camera 24 may be superimposed on the mesh model D composed of the plurality of meshes M. In that case, the mesh model D is composed of the plurality of meshes M and the background image data of the monitoring area A, and the monitoring area A can be monitored more appropriately in step S4.
<5-2. Other embodiments>
In the above embodiment, the storage unit 36 of the monitoring device 30 may store a database holding information on objects in the monitoring area A. An object in the monitoring area A is an object assumed to exist in the monitoring area A. When the store 10 is monitored as in the present embodiment, information on such objects, for example typical product shelves, desks, and chairs serving as the fixtures 12, is stored in the database. In addition to such fixtures 12, information on, for example, representative products and carts is also stored in the database.
In such a case, when the monitoring area A is monitored in step S4, the shape of an object in the monitoring area A can be grasped by using the mesh model D, so what the object is can be identified by referring to the information in the database. This makes it possible to grasp even the specific actions of the person P. For example, it can be detected that the person P is about to enter a no-entry area, or that the person P is about to shoplift a certain product from a certain product shelf. Thus, according to the present embodiment, the movements of the monitoring target can be grasped specifically, and the monitoring area A can be monitored more appropriately.
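One conceivable shape for such a database lookup (the patent does not specify a schema) is a match on the footprint and height recovered from the mesh model; the records and tolerance below are invented for illustration.

```python
# Hypothetical object records: footprint (metres) and height, as would be
# measured from the mesh model D.
OBJECT_DB = [
    {"name": "product shelf", "footprint": (1.8, 0.6), "height": 1.8},
    {"name": "desk",          "footprint": (1.2, 0.7), "height": 0.7},
    {"name": "chair",         "footprint": (0.5, 0.5), "height": 0.9},
    {"name": "cart",          "footprint": (0.9, 0.5), "height": 1.0},
]

def identify_object(footprint, height, tolerance=0.25):
    """Return the database entry whose stored dimensions best match the
    shape measured from the mesh model, or None if nothing is close."""
    def score(entry):
        dw = abs(entry["footprint"][0] - footprint[0])
        dd = abs(entry["footprint"][1] - footprint[1])
        dh = abs(entry["height"] - height)
        return dw + dd + dh
    best = min(OBJECT_DB, key=score)
    return best if score(best) <= tolerance * 3 else None
```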
<5-3. Other embodiments>
When the mesh model D is generated in step S2 of the above embodiment, the space surrounded by the floor surface 11 and the fixtures 12 in the monitoring area A (hereinafter referred to as the monitoring space) may be partitioned three-dimensionally by a plurality of meshes N, as shown in FIG. 9. The meshes N formed in the monitoring space have the same size as the meshes M on the floor surface 11 and the fixtures 12, and are formed three-dimensionally so as to fill in the space between the meshes M of the floor surface 11 and the fixtures 12. In FIG. 9, the meshes M on the floor surface 11 and the fixtures 12 are drawn with dotted lines for ease of understanding.
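A rough sketch of how the meshes N might be enumerated from a height map of the background (an assumed data structure, not taken from the disclosure):

```python
import math

def monitoring_space_cells(height_map, mesh_size=0.5):
    """Enumerate the meshes N that fill the open monitoring space.

    `height_map` maps each floor-plan cell (ix, iy) to the background
    height at that cell (0 for bare floor, the fixture height on top of a
    fixture), as produced when the mesh model D is generated. Cells are
    stacked from the background surface up to the tallest background
    height, so the meshes N complement the meshes M of the floor and
    fixtures.
    """
    ceiling = max(height_map.values())
    cells = []
    for (ix, iy), base in height_map.items():
        k0 = math.ceil(base / mesh_size)
        k1 = math.ceil(ceiling / mesh_size)
        for k in range(k0, k1):
            cells.append((ix, iy, k))  # (x index, y index, height index)
    return cells
```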
In such a case, the meshes N in the monitoring space, like the meshes M of the floor surface 11 and the fixtures 12, carry three-dimensional position information in the horizontal and height directions. Therefore, when the monitoring area A is monitored in step S4, the person P can be grasped three-dimensionally more accurately and easily.
<5-4. Other embodiments>
In the above embodiment, as shown in FIG. 10, the monitoring device 30 may further include a person setting unit 40 that partitions a person P who has entered the monitoring area A into a plurality of regions. As shown in FIG. 11, the person setting unit 40 divides the person P into a plurality of regions 50 to 52 in the height direction: the upper region 50 is formed at the position of the head of the person P, the intermediate region 51 at the position of the torso, and the lower region 52 at the position of the legs. Specifically, the person setting unit 40 first recognizes the feet of the person P and reads the height of the person P off the mesh model D. The rough positions of the head and torso of the person P can then be determined.
In such a case, when the monitoring area A is monitored in step S4, the person P is monitored based on the rate of change of the person P in each of the regions 50 to 52. For example, a large rate of change in the upper region 50 indicates that the person P is, for instance, swinging his or her head from side to side and looking around restlessly. A large rate of change in the intermediate region 51 indicates that the person P is, for instance, fidgeting by moving the body sideways. A large rate of change in the lower region 52 indicates that the person P is, for instance, pacing around. In this way, the specific movements of the person P can be grasped when monitoring the monitoring area A.
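The rate of change per region could be computed, for example, as the mean absolute pixel difference between consecutive frames within each region's bounding box; this is an assumed metric, since the patent does not define one.

```python
import numpy as np

def region_change_rates(prev_frame, frame, boxes):
    """Mean absolute pixel change per body region between two frames.

    `boxes` maps region names to (x, y, w, h) rectangles for the upper
    (head), intermediate (torso), and lower (legs) regions 50-52, derived
    from the foot position and the height read off the mesh model.
    """
    rates = {}
    for name, (x, y, w, h) in boxes.items():
        a = prev_frame[y:y + h, x:x + w].astype(np.int16)
        b = frame[y:y + h, x:x + w].astype(np.int16)
        rates[name] = float(np.mean(np.abs(b - a)))
    return rates

# A large rate in "upper" suggests head movement (looking around); in
# "intermediate", fidgeting; in "lower", pacing about.
```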
<5-5. Other embodiments>
In the above embodiment, as shown in FIG. 12, the imaging device 20 may further include a sound collector 60 that collects sound generated in the monitoring area A, and the monitoring device 30 may further include a sound detection unit 61 that identifies the sound collected by the sound collector 60. The sound collector 60 is a microphone, and a plurality of them, for example three, are provided inside the dome cover 22.
FIG. 13 is a flowchart showing an example of main steps of a monitoring method using voice detection. In FIG. 13, steps S1 to S4 are the same steps as steps S1 to S4 shown in FIG. 3 of the above embodiment.
In parallel with the generation of the mesh model D in steps S1 to S2, specific sounds to be detected in the monitoring area A are collected and stored in the storage unit 36 of the monitoring device 30 (step T1 in FIG. 13). A specific sound is a sound generated when an abnormal state or dangerous event occurs in the monitoring area A, for example a scream of a person P or the sound of an object breaking.
These steps S1 to S2 and T1 are advance preparation for monitoring the monitoring area A. During monitoring, audio data collected by the sound collector 60 is output to the sound detection unit 61 via the communication unit 27 and the input unit 31. The sound detection unit 61 collates this audio data with the specific sounds stored in the storage unit 36, and if they match, it detects that a specific sound has occurred in the monitoring area A (step T2 in FIG. 13).
Information that a specific sound has been detected by the sound detection unit 61 is output to the monitoring unit 33. The monitoring unit 33 sets the imaging direction of the monitoring camera 24 toward the specific sound detected by the sound detection unit 61, and moves the monitoring camera 24 via the control unit 35. Then, in step S3, the monitoring camera 24 images the region of the monitoring area A where the specific sound occurred, and in step S4 that region is monitored.
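As an illustrative sketch only (the patent specifies neither the matching method nor the direction-finding method), detection of a stored specific sound and a crude bearing estimate from the three dome microphones might look like this:

```python
import numpy as np

def detect_specific_sound(captured, templates, threshold=0.7):
    """Return True if the captured audio matches any stored specific sound.

    A crude sketch using normalised cross-correlation against the sound
    templates stored in advance (step T1); a real system would likely use
    more robust acoustic features.
    """
    captured = captured / (np.linalg.norm(captured) + 1e-9)
    for tpl in templates:
        tpl = tpl / (np.linalg.norm(tpl) + 1e-9)
        corr = np.correlate(captured, tpl, mode="valid")
        if corr.size and corr.max() >= threshold:
            return True
    return False

def sound_bearing(mic_signals, mic_angles):
    """Rough bearing of a sound from the three dome microphones: the
    loudness-weighted average of the microphone directions (radians)."""
    powers = np.array([np.mean(s.astype(np.float64) ** 2) for s in mic_signals])
    x = np.sum(powers * np.cos(mic_angles))
    y = np.sum(powers * np.sin(mic_angles))
    return float(np.arctan2(y, x))  # pan angle toward which the camera turns
```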
In such a case, when an abnormal state or dangerous event occurs in the monitoring area A, the sound can be identified, and the region where the sound occurred can be monitored with particular attention. Even if the sound source moves, the region where the sound occurs can be monitored appropriately by making the monitoring camera 24 follow the direction of the sound.
<5-6. Other embodiments>
In the above embodiment, a plurality of imaging devices 20 may be installed in the store 10. In such a case, the entire interior of the store 10 can be monitored by monitoring a plurality of monitoring areas A. Furthermore, if one monitoring area A is monitored by a plurality of imaging devices 20, that monitoring area A can be monitored with higher accuracy.
In the above embodiment, the case where the store 10 is monitored using the monitoring system 1 has been described, but the place to be monitored is not limited to this. The monitoring system 1 of the present invention can monitor a variety of places: for example, commercial facilities where commercial activities take place, such as shopping malls, department stores, composite complexes, and exhibition and trade fair venues; public facilities such as airports and railways; amusement facilities such as amusement parks and stadiums; specialized facilities such as hospitals and geriatric care facilities; as well as offices, factories, detached houses, apartment buildings, and parking lots.
Preferred embodiments of the present invention have been described above with reference to the accompanying drawings, but the present invention is not limited to these examples. It is obvious that those skilled in the art can conceive of various changes or modifications within the scope of the idea described in the claims, and it is understood that these also naturally belong to the technical scope of the present invention.
The present invention is useful, for example, when monitoring a predetermined area.
DESCRIPTION OF SYMBOLS
1 Monitoring system
10 Store
11 Floor surface
12 Fixture
20 Imaging device
21 Housing
22 Dome cover
23 Distance measuring sensor
23a Irradiation source
23b Light receiving element
24 Monitoring camera
25 Support member
26 Drive mechanism
27 Communication unit
30 Monitoring device
31 Input unit
32 Model generation unit
33 Monitoring unit
34 Output unit
35 Control unit
36 Storage unit
40 Person setting unit
50 Upper region
51 Intermediate region
52 Lower region
60 Sound collector
61 Sound detection unit
A Monitoring area
D Mesh model
M Mesh
N Mesh
P (P1, P2) Person
Claims (14)
- A monitoring system for monitoring a predetermined area, comprising: an imaging unit fixedly provided at a predetermined position and imaging the predetermined area from above; a distance measuring unit that measures a distance from the imaging unit to a measurement target in the predetermined area; a model generation unit that, with respect to a background of the predetermined area composed of a bottom surface of the predetermined area and permanent objects permanently installed in the predetermined area during a monitoring period, partitions the background in a horizontal direction and a height direction with meshes of a predetermined size, based on the distance from the imaging unit to the background measured by the distance measuring unit, and generates a mesh model of the background; and a monitoring unit that extracts a moving monitoring target based on image data captured by the imaging unit, applies the mesh model to the background of the predetermined area in the image data, generates new monitoring image data in which the monitoring target is synthesized with the mesh model, and monitors the predetermined area.
- The monitoring system according to claim 1, wherein the model generation unit superimposes the image of the background in the image data captured by the imaging unit on the mesh model in the monitoring image data.
- The monitoring system according to claim 1 or 2, further comprising a database in which information on objects in the predetermined area is stored, wherein the monitoring unit identifies the objects based on the database and monitors the predetermined area.
- The monitoring system according to any one of claims 1 to 3, wherein, in the mesh model, the model generation unit partitions a space surrounded by the bottom surface of the predetermined area and the permanent objects three-dimensionally with the meshes of the predetermined size.
- The monitoring system according to any one of claims 1 to 4, further comprising a person setting unit that partitions a person who has entered the predetermined area into a plurality of regions, wherein the monitoring unit monitors the behavior of the person based on a rate of change of the person in each region set by the person setting unit.
- The monitoring system according to any one of claims 1 to 5, further comprising a sound detection unit that detects sound generated in the predetermined area, wherein the monitoring unit sets an imaging direction of the imaging unit in the direction of the sound detected by the sound detection unit and monitors the region where the sound occurred.
- A monitoring method for imaging and monitoring a predetermined area with an imaging unit, comprising: a distance measuring step of measuring, with a distance measuring unit, a distance from the imaging unit to a background of the predetermined area, the background being composed of a bottom surface of the predetermined area and permanent objects permanently installed in the predetermined area during a monitoring period; a model generation step of partitioning the background in a horizontal direction and a height direction with meshes of a predetermined size, based on the distance measured in the distance measuring step, and generating a mesh model of the background; an imaging step of imaging the predetermined area with the imaging unit; and a monitoring step of extracting a moving monitoring target based on image data captured in the imaging step, applying the mesh model to the background of the predetermined area in the image data, generating new monitoring image data in which the monitoring target is synthesized with the mesh model, and monitoring the predetermined area.
- The monitoring method according to claim 7, wherein, in the model generation step, the image of the background in the image data captured by the imaging unit is superimposed on the mesh model in the monitoring image data.
- The monitoring method according to claim 7 or 8, wherein information on objects in the predetermined area is stored in a database in advance, and, in the monitoring step, the objects are identified based on the database and the predetermined area is monitored.
- The monitoring method according to any one of claims 7 to 9, wherein, in the model generation step, a space surrounded by the bottom surface of the predetermined area and the permanent objects is partitioned three-dimensionally with the meshes of the predetermined size in the mesh model.
- The monitoring method according to any one of claims 7 to 10, wherein, in the monitoring step, a person who has entered the predetermined area is partitioned into a plurality of regions, and the behavior of the person is monitored based on a rate of change of the person in each of the partitioned regions.
- The monitoring method according to any one of claims 7 to 11, wherein, in the imaging step, when sound generated in the predetermined area is detected by a sound detection unit, an imaging direction of the imaging unit is set in the direction of the sound and the region where the sound occurred is imaged, and, in the monitoring step, the region where the sound occurred is monitored.
- A program operating on a computer that controls a monitoring system, for causing the computer to function so that the monitoring system executes the monitoring method according to any one of claims 7 to 12.
- A readable computer storage medium storing the program according to claim 13.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/JP2016/060683 WO2017168684A1 (en) | 2016-03-31 | 2016-03-31 | Monitoring system, monitoring method, program, and computer storage medium |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/JP2016/060683 WO2017168684A1 (en) | 2016-03-31 | 2016-03-31 | Monitoring system, monitoring method, program, and computer storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2017168684A1 true WO2017168684A1 (en) | 2017-10-05 |
Family
ID=59963742
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2016/060683 WO2017168684A1 (en) | 2016-03-31 | 2016-03-31 | Monitoring system, monitoring method, program, and computer storage medium |
Country Status (1)
Country | Link |
---|---|
WO (1) | WO2017168684A1 (en) |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2002247585A (en) * | 2001-02-15 | 2002-08-30 | Nippon Telegr & Teleph Corp <Ntt> | Method for transmitting moving image, method for receiving moving image, program for moving image transmitting processing, recording medium for the program, program for moving image receiving processing, recording medium for the program |
JP2009118072A (en) * | 2007-11-05 | 2009-05-28 | Ihi Corp | Remote control device and remote control method |
JP2011199501A (en) * | 2010-03-18 | 2011-10-06 | Aisin Seiki Co Ltd | Image display device |
JP2012015795A (en) * | 2010-06-30 | 2012-01-19 | Hitachi Kokusai Electric Inc | Image monitoring system |
JP2012051678A (en) * | 2010-08-31 | 2012-03-15 | Sumitomo Heavy Ind Ltd | Visibility assisting system |
2016-03-31 WO PCT/JP2016/060683 patent/WO2017168684A1/en, active Application Filing
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116546171A (en) * | 2023-06-30 | 2023-08-04 | 傲拓科技股份有限公司 | Monitoring equipment data acquisition method |
CN116546171B (en) * | 2023-06-30 | 2023-09-01 | 傲拓科技股份有限公司 | Monitoring equipment data acquisition method |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
JP6757867B1 (en) | Smart shelf system that integrates image and quantity sensors | |
JP5899506B1 (en) | Monitoring system, monitoring method, program, and computer storage medium | |
JP6690041B2 (en) | Method and device for determining point of gaze on three-dimensional object | |
JP2020115344A6 (en) | Autonomous store tracking system | |
Zhang et al. | A survey on vision-based fall detection | |
WO2014162554A1 (en) | Image processing system and image processing program | |
Pham et al. | Real‐Time Obstacle Detection System in Indoor Environment for the Visually Impaired Using Microsoft Kinect Sensor | |
JP6502491B2 (en) | Customer service robot and related system and method | |
JP6631619B2 (en) | Video monitoring system and video monitoring method | |
KR20210055038A (en) | Autonomous store tracking system | |
JP5424852B2 (en) | Video information processing method and apparatus | |
JP2023502972A (en) | Item identification and tracking system | |
US8845107B1 (en) | Characterization of a scene with structured light | |
JP6647489B1 (en) | Suspicious body / abnormal body detection device | |
CN103823553A (en) | Method for enhancing real display of scenes behind surface | |
WO2014182898A1 (en) | User interface for effective video surveillance | |
US20210374938A1 (en) | Object state sensing and certification | |
JP2021185663A (en) | Video monitoring device, video monitoring method, and program | |
WO2017168684A1 (en) | Monitoring system, monitoring method, program, and computer storage medium | |
US20180350216A1 (en) | Generating Representations of Interior Space | |
TWI590657B (en) | Monitoring system, monitoring method, and computer storage medium | |
Kosmopoulos et al. | Fusion of color and depth video for human behavior recognition in an assistive environment | |
Ntalampiras et al. | PROMETHEUS: heterogeneous sensor database in support of research on human behavioral patterns in unrestricted environments | |
Fu et al. | Robust near-infrared structured light scanning for 3D human model reconstruction | |
Xu et al. | Multi-camera operating room activity analysis for workflow analysis |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 16896902 Country of ref document: EP Kind code of ref document: A1 |
|
32PN | Ep: public notification in the ep bulletin as address of the adressee cannot be established |
Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205 DATED 14/01/2019) |
|
NENP | Non-entry into the national phase |
Ref country code: JP |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 16896902 Country of ref document: EP Kind code of ref document: A1 |