Detailed Description
In the following description, the "interface unit" may be one or more interfaces. The one or more interfaces may include one or more interface devices for one or more I/O devices and one or more communication interface devices. The one or more communication interface devices may be one or more communication interface devices of the same kind (e.g., one or more NICs (Network Interface Cards)) or two or more communication interface devices of different kinds (e.g., an NIC and an HBA (Host Bus Adapter)).
In the following description, the "memory unit" may be one or more memories. The memory unit is typically a main storage device. At least one memory may be a volatile memory or a non-volatile memory. The memory unit is mainly used in processing by the processor unit.
In the following description, the "PDEV unit" may be one or more PDEVs. The PDEV unit is typically an auxiliary storage device. "PDEV" denotes a physical storage device (Physical storage DEVice), typically a non-volatile storage device such as an HDD (Hard Disk Drive) or an SSD (Solid State Drive).
In the following description, the "storage unit" includes at least the memory unit out of the memory unit and the PDEV unit.
In the following description, the "processor unit" may be one or more processors. At least one processor is typically a microprocessor such as a CPU (Central Processing Unit), but may be another type of processor such as a GPU (Graphics Processing Unit). At least one processor may be single-core or multi-core. At least a portion of the processor unit may be a hardware circuit (e.g., an FPGA (Field-Programmable Gate Array) or an ASIC (Application Specific Integrated Circuit)) that performs part or all of the processing.
In the following description, although functions will be described by expressions of the "kkk unit" (excluding the interface unit, the storage unit, and the processor unit), the functions may be realized by the processor unit executing one or more computer programs, or may be realized by one or more hardware circuits. In the case where a function is realized by the processor unit executing the one or more programs, predetermined processing is performed using the storage unit and/or the interface unit as appropriate, and therefore the function may be regarded as at least a part of the processor unit. The processing described with a function as the subject may be processing performed by the processor unit or by a device having the processor unit. A program may be installed from a program source. The program source may be, for example, a program distribution computer or a computer-readable recording medium (e.g., a non-transitory storage medium). The description of each function is an example, and a plurality of functions may be combined into one function, or one function may be divided into a plurality of functions.
In the following description, although information is described in some cases by the expression "xxx table", the information may be expressed by an arbitrary data structure. That is, the "xxx table" can be referred to as "xxx information" in order to indicate that the information does not depend on the data structure. In the following description, the structure of each table is an example; one table may be divided into two or more tables, and all or a part of two or more tables may be one table.
Hereinafter, a mode for carrying out the present invention will be described in detail with reference to the accompanying drawings.
Fig. 1 shows an example of the installation of a virtual robot.
The virtual robot 100 is installed in an elevator hall 191 of a building and guides an entrance person to a desired floor. In order to effectively utilize the time during which an entrance person waits for an elevator in the elevator hall 191, the virtual robot 100 is provided in the elevator hall 191 in the present embodiment. However, the virtual robot 100 may be installed in a place other than the elevator hall 191, for example, in the car of an elevator as shown in fig. 9. In addition, a physical robot may be used instead of or in addition to the virtual robot 100.
Fig. 2 shows a system configuration of the guidance system 170 according to the present embodiment.
The building 190 includes a plurality of elevators 180 and a plurality of elevator control devices 140 for controlling the plurality of elevators 180, respectively. The elevator control device 140 controls the raising and lowering of the car of the elevator 180 to be controlled.
The guidance system 170 includes: a guidance device 150 that determines a destination floor and a use elevator 180 to be introduced to an entrance person; a virtual robot 100 that introduces the destination floor and the use elevator 180 determined by the guidance device 150 to the entrance person; a robot control device 110 that controls the virtual robot 100; and a robot monitoring center 120 that accumulates data acquired by the virtual robot 100 via the robot control device 110 and analyzes the data. Further, the guidance system 170 includes: a monitoring camera control device 160 that performs face recognition based on a face image of an entrance person captured by a monitoring camera (not shown) provided in the building 190; and an attendance management device 130 that determines whether or not the person who has entered the building 190 and is identified by face recognition is a worker who works in the building 190, and updates the attendance status of that person based on the determination result. The robot control device 110, the guidance device 150, the robot monitoring center 120, the attendance management device 130, the monitoring camera control device 160, and the elevator control devices 140 can communicate with one another via the communication network 195. At least two of the robot control device 110, the guidance device 150, the robot monitoring center 120, the attendance management device 130, and the monitoring camera control device 160 may be integrated into one device, or at least one of them may be divided into a plurality of devices.
Fig. 3 is a structural diagram of the virtual robot 100.
The virtual robot 100 is a computer having a function of conversing with a human, or a human interface device for conversing with a human. The virtual robot 100 includes an interface unit 103, a storage unit 101, and a CPU104 connected to these units.
The interface unit 103 communicates with the robot controller 110.
The input/output device 102 includes, for example, a monitor 102e that displays an image of the virtual robot, a human body sensor 102c and a camera 102a that detect a human, and a microphone 102b and a speaker 102d used for conversing with a human.
The storage unit 101 stores one or more programs that are executed by the CPU104 to realize the interactive control unit 101a and the input/output unit 101 b.
When a human being is recognized by the human body sensor 102c and the camera 102a, the conversation control unit 101a carries out a conversation with the human being through the microphone 102b and the speaker 102d under an instruction from the robot control device 110. The input/output unit 101b performs input/output of data between an input data processing unit 111a and an output data processing unit 111c, which will be described later, of the robot control device 110.
The virtual robot 100 acquires an image from the camera 102a, acquires a sound from the microphone 102b, and transmits the acquired image and sound to the robot controller 110. The virtual robot 100 speaks from the speaker 102d in accordance with an instruction from the robot control device 110. When a human being is detected by at least one of the human body sensor 102c and the camera 102a, the virtual robot 100 starts a conversation.
Fig. 4 is a configuration diagram of the robot controller 110.
The robot controller 110 includes an interface unit 112, a storage unit 111, and a CPU113 connected to these units.
The interface unit 112 communicates with the virtual robot 100 or with an external device via the communication network 195.
The storage unit 111 stores one or more programs that realize the input data processing unit 111a, the dialogue processing unit 111b, and the output data processing unit 111c when executed by the CPU113.
The input data processing unit 111a processes data received from the virtual robot 100. The dialogue processing unit 111b determines the content of the next utterance (for example, a question) based on the input data and a question scene described later. The output data processing unit 111c transmits the utterance determined by the dialogue processing unit 111b to the virtual robot 100. The output data processing unit 111c transmits the contents of the dialog (one or more questions and one or more answers) to the guidance device 150.
The robot control device 110 may be integrated with the virtual robot 100. That is, the virtual robot 100 may store the question scenario described later, and the dialogue control unit 101a of the virtual robot 100 may carry out a conversation based on the answers of the entrance person and the question scenario. Alternatively, the dialogue control unit 101a of the virtual robot 100 may carry out a conversation in accordance with instructions from the robot control device 110 (for example, a conversation in which the virtual robot 100 speaks according to the utterance content from the output data processing unit 111c of the robot control device 110 and transmits the response to that utterance to the robot control device 110).
Fig. 5 is a block diagram of the robot monitoring center 120.
The robot monitoring center 120 includes an interface unit 123, a storage unit 121, and a CPU124 connected to these units.
The interface unit 123 communicates with an external device via a communication network 195.
The storage unit 121 stores one or more programs that are executed by the CPU124 to realize the dialogue storage unit 121a, the information processing unit 121b, and the destination floor indicating unit 122c.
The dialogue storage unit 121a stores the question scenario into the robot control device 110 in advance. The information processing unit 121b specifies the use elevator 180 and the destination floor based on the information from the guidance device 150. The destination floor indicating unit 122c indicates the determined destination floor to the elevator control device 140 of the determined elevator 180. In response to the indication, the elevator control device 140 stops the car of the elevator 180 under its control at the indicated destination floor and opens the doors of the car at the destination floor.
Fig. 6 is a configuration diagram of the attendance management apparatus 130.
The attendance management device 130 includes an interface unit 132, a storage unit 131, and a CPU133 connected to these units.
The interface unit 132 communicates with an external device via a communication network 195.
The storage unit 131 stores one or more programs that are executed by the CPU133 to realize the entrance person determination unit 131a, the attendance management unit 131b, and the schedule management unit 131 c.
The entrance person determination unit 131a determines whether or not the face information of an entrance person of the building matches the face information of any worker who works in the building 190 (entrance person determination). The "face information of an entrance person of the building" is a face image of the entrance person captured by a monitoring camera (not shown) provided in the building 190, or information based on the face image (for example, information indicating a feature amount of the face image), and is received, for example, from the monitoring camera control device 160.
When the result of the entrance person determination is true (that is, when the entrance person is a worker who works in the building 190), the attendance management unit 131b changes the attendance status of that worker to "attendance".
The schedule management unit 131c manages schedules for offices in the building 190.
Fig. 7 is a structural view of the guide device 150.
The guidance device 150 includes an interface unit 152, a storage unit 151, and a CPU 153 connected to these units. The guidance device 150 may be integrated with the attendance management device 130.
The interface unit 152 communicates with an external device via the communication network 195.
The storage unit 151 stores one or more programs that realize the meeter specifying unit 151a, the attendance determination unit 151b, and the guide unit 151c when executed by the CPU 153.
The meeter specifying unit 151a specifies the meeter of the entrance person based on the contents of the conversation from the robot control device 110. Here, "specifying the meeter of the entrance person from the conversation contents" refers to, for example, finding data that matches a meeter attribute included in the conversation contents (for example, an answer to a question about a meeter attribute, such as at least the name among the company name, affiliation, and name of the meeter) from at least the staff table out of the staff table and the schedule described later.
The attendance determination unit 151b performs an attendance determination for determining whether or not the attendance status of the identified meeter is "attendance". The attendance determination may be performed, for example, by inquiring of the attendance management device 130 whether the identified meeter is in "attendance" and analyzing the answer received in response to the inquiry.
When the result of the attendance determination is true (that is, when the attendance status of the identified meeter is "attendance"), the guide unit 151c introduces, to the entrance person from the virtual robot 100 via the robot control device 110, the destination floor, which is the floor of the meeter, and a use elevator 180 that can transport the entrance person to the destination floor. That is, the virtual robot 100 makes an utterance introducing the use elevator 180 and the destination floor to the entrance person. This enables the entrance person to know which elevator 180 to use to see the meeter and the floor at which the entrance person should arrive.
Whether "the attendance status of the identified meeter is 'attendance'" can be identified from the staff table described later. The "destination floor" may be, for example, the floor at which the meeter works (a floor specified from the staff table described later), the floor on which a conference room with a reserved meeting is located (a floor specified from the schedule described later), or a floor specified from the meeter's answer to a later-described inquiry to the meeter (an inquiry requesting a response as to whether to permit the entrance person to go to the destination floor). The "elevator capable of transporting the entrance person to the destination floor" may be an elevator specified from the elevator table described later, or may be an elevator specified from the answers to an inquiry (an inquiry as to whether a stop at the destination floor is possible) issued to each of the elevator control devices 140.
Fig. 8 is a configuration diagram of the monitoring camera control device 160.
The monitoring camera control device 160 includes an interface unit 162, a storage unit 161, and a CPU163 connected thereto.
The interface unit 162 communicates with an external device via the communication network 195.
The storage unit 161 stores one or more programs that are executed by the CPU163 to realize the face recognition unit 161a and the face information storage unit 161 b.
The face recognition unit 161a performs face recognition based on the face image of the entrance person input from the camera 102a. The face information storage unit 161b stores in advance the face information of each worker of the building 190, or the face information of each entrance person whose entrance person determination result by the attendance management device 130 is false. When the face information of each worker of the building 190 is stored in advance, the monitoring camera control device 160 may have the entrance person determination unit 131a instead of the attendance management device 130.
Fig. 10 is a structural diagram of a building summary table.
The building summary table F1 is part of the question scenario. The question scenario is stored, for example, in the robot control device 110. The question scenario may be distributed among two or more devices of the guidance system 170. The question scenario is composed of a plurality of detailed question scenarios and a summary question used for determining which of the plurality of detailed question scenarios should be used, and the building summary table F1 is a table corresponding to the summary question. The building summary table F1 stores information of a destination floor F1a and an answer content F1b for each answer to the summary question (for example, for each category of area of the building 190, the summary question being, for example, "Which area are you visiting: the shop area, the office area, or the resident area?"). The destination floor F1a represents the floors of the area corresponding to the answer to the summary question. The answer content F1b represents the content of an answer to the summary question. A detailed question scenario (question table) is associated with each answer content F1b.
Fig. 11 is a structural diagram of an office problem table.
The office question table F2 corresponds to the detailed question scenario for the office area. The office question table F2 stores information of a question number F2a and a question content F2b for each question. The question number F2a indicates the number of the question. The question content F2b represents the content of a question asked when the entrance person is not a worker of the building 190. The detailed question scenario is a question scenario for specifying the floor (destination floor) desired by the entrance person from information such as the meeter.
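The routing from the summary question to a detailed question scenario can be sketched as follows. This is a minimal illustrative sketch, not part of the embodiment; all table contents, field names, and floor ranges are assumptions.

```python
# Hypothetical sketch: the building summary table F1 maps each answer content
# to a destination-floor range, and each answer content is associated with a
# detailed question scenario (here, the office question table F2).
# All concrete values below are illustrative assumptions.

# Building summary table F1: answer content F1b -> destination floor F1a.
BUILDING_SUMMARY_F1 = {
    "shop area": {"destination_floors": "1F-3F"},
    "office area": {"destination_floors": "4F-9F"},
    "resident area": {"destination_floors": "10F-15F"},
}

# Office question table F2: question number F2a -> question content F2b.
OFFICE_QUESTIONS_F2 = {
    1: "May I have your name, please?",
    2: "Whom are you visiting (company name, affiliation, name)?",
}

# Detailed question scenario associated with each answer content F1b.
DETAILED_SCENARIOS = {"office area": OFFICE_QUESTIONS_F2}

def select_detailed_scenario(summary_answer: str):
    """Return the detailed question scenario matching the summary answer,
    or None if no detailed scenario is associated with the answer."""
    return DETAILED_SCENARIOS.get(summary_answer)
```

In this sketch, answering "office area" to the summary question selects the office question table F2, after which the dialogue proceeds question by question in the order of the question numbers.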
Fig. 12 is a structural diagram of a calendar.
The schedule G1 is stored in any device of the guidance system 170, for example, the attendance management device 130. The schedule G1 stores information of a destination floor G1a, a conference room name G1b, and a time period G1c for each reservation related to the offices of the building 190. The destination floor G1a represents the floor where the conference room with the scheduled meeting is located. The conference room name G1b indicates the name of the conference room with the scheduled meeting. The time period G1c includes the scheduled time period of the meeting and the name of the meeter.
Fig. 13 is a configuration diagram of a store area table.
The store area table G2 is stored in any device of the guidance system 170, for example, the robot control device 110. The store area table G2 stores information of a destination floor G2a, an area summary G2b, and a store name G2c for each floor related to the store area. The destination floor G2a represents a floor. The area summary G2b shows a summary of the store area located on the corresponding floor. The store name G2c indicates a list of the names of the stores located on the corresponding floor.
In the present embodiment, guidance to stores can also be performed. For example, guidance to stores may be started when no meeter-related information can be determined for an entrance person who is not a worker (no meeter-related information matches the conversation contents), or when the store area is answered to the summary question. For example, the guidance device 150 announces information about the stores in the building 190 to the entrance person based on the detailed question scenario for the store area and the store area table G2, identifies the store that the entrance person desires based on the entrance person's answers, and introduces the destination floor of the identified store and a use elevator to the entrance person.
Fig. 14 is a structural diagram of a staff table.
The staff table G3 is stored in any device of the guidance system 170, for example, the attendance management device 130. The staff table G3 stores information of a worker name G3a, a company name G3b, a workplace address G3c, and an attendance status G3d for each worker of the building 190. The worker name G3a represents the name of the worker. The company name G3b indicates the name of the company for which the worker works. The workplace address G3c represents the address of the place where the worker works. When the worker is registered at a plurality of locations, the workplace address G3c indicates a plurality of addresses respectively corresponding to the plurality of locations, and a flag indicating at which of the plurality of locations the worker is working. Therefore, when the worker is identified by the monitoring camera of another building different from the building 190, the flag is set for the location corresponding to that other building. The attendance status G3d represents the attendance status of the worker (e.g., "attendance", "off duty"). The staff table G3 may also store face information for each worker.
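A record of the staff table G3 with multiple registered workplaces, and the lookup of the flagged workplace, can be sketched as follows. The record layout, names, and addresses are illustrative assumptions, not taken from the embodiment.

```python
# Hypothetical sketch of one staff table G3 record (field names assumed).
# A worker registered at a plurality of locations carries a flag ("present")
# marking the workplace at which the worker was last identified.
staff_table_g3 = [
    {
        "worker_name": "Yamada",          # worker name G3a (assumed)
        "company_name": "Example Corp.",  # company name G3b (assumed)
        "workplaces": [                   # workplace address G3c with flags
            {"address": "Building 190, 5F", "present": True},
            {"address": "Other Building, 3F", "present": False},
        ],
        "attendance": "attendance",       # attendance status G3d
    },
]

def current_workplace(worker: dict) -> str:
    """Return the address of the workplace whose flag is set,
    or an empty string if no flag is set."""
    for place in worker["workplaces"]:
        if place["present"]:
            return place["address"]
    return ""
```

When the worker is later identified by the monitoring camera of the other building, the flag would be moved to the entry for that building.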
Fig. 15 is a structural diagram of an elevator table.
The elevator table G4 is stored in any device of the guidance system 170, for example, the guidance device 150. The elevator table G4 stores information of an elevator identifier G4a and a stoppable floor G4b for each elevator 180 of the building 190. The elevator identifier G4a represents the identifier of the elevator 180. The stoppable floor G4b represents the floors at which the car of the elevator 180 can stop.
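Selecting a use elevator from the elevator table G4, as done by the guide unit 151c, can be sketched as follows. This is an assumed minimal implementation; the identifiers and floor sets are illustrative.

```python
# Hypothetical sketch: elevator table G4 and selection of a use elevator 180,
# i.e., an elevator whose stoppable floors G4b include the destination floor.
# All identifiers and floor sets are illustrative assumptions.
elevator_table_g4 = [
    {"elevator_id": "A", "stoppable_floors": {1, 2, 3, 4, 5}},
    {"elevator_id": "B", "stoppable_floors": {1, 6, 7, 8, 9}},
]

def select_use_elevator(destination_floor: int):
    """Return the identifier of an elevator 180 that can stop at the
    destination floor, or None if no elevator qualifies."""
    for elevator in elevator_table_g4:
        if destination_floor in elevator["stoppable_floors"]:
            return elevator["elevator_id"]
    return None
```

As the description notes, the same selection could instead be made by inquiring of each elevator control device 140 whether a stop at the destination floor is possible.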
An example of the processing performed in the present embodiment is described below.
Fig. 16 is a flowchart of the attendance status update process.
Images of entrance persons and exit persons of the building 190 captured by the monitoring camera are input to the monitoring camera control device 160. The face recognition unit 161a performs face recognition based on the face images in the captured images of the entrance persons and exit persons (S20). The face recognition unit 161a transmits, to the attendance management device 130, information including the face information resulting from the face recognition and an indication of whether the person is entering or exiting.
In the attendance management device 130, the entrance person determination unit 131a refers to the staff table G3 and determines which worker, if any, the entrance person or the exit person is, based on the information (the face information and the entering/exiting indication) from the monitoring camera control device 160 (S21). Specifically, the entrance person determination unit 131a determines whether or not the face information from the monitoring camera control device 160 matches the face information of any worker.
If the determination result in step S21 is true (S21: yes), the attendance management unit 131b updates the attendance status G3d corresponding to the matching worker (S22). Specifically, when the worker is an entrance person, the attendance management unit 131b updates the attendance status G3d corresponding to the worker to "attendance". When the worker is an exit person, the attendance management unit 131b updates the attendance status G3d corresponding to the worker to "off duty".
In addition, for a worker having a plurality of workplaces, the attendance management unit 131b identifies the building in which the monitoring camera that captured the face image is installed (for example, since a monitoring camera control device exists for each building, the building is identified from the monitoring camera control device that is the transmission source of the face information), and sets, in the staff table G3, the flag for the location corresponding to the identified building.
The input information of the entrance person may be biometric information such as fingerprint information instead of or in addition to the face information, or may be information in an IC card such as an employee ID card. However, according to the present embodiment, since the face image captured by the monitoring camera is used as the input information of the entrance person, whether or not the entrance person is a worker can be determined without the entrance person performing any special operation, and when the entrance person is determined to be a worker, the attendance status of that worker can be automatically updated. In addition, because the attendance status becomes "attendance" when the worker enters the building in this way, guidance can be provided to an entrance person whose meeter is that worker, so that the entrance person is not guided to the floor of a meeter who has not yet arrived at the workplace.
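The attendance status update process of steps S20 to S22 can be sketched as follows. The face information is represented by plain strings for illustration; the record layout and matching method are assumptions, not the embodiment's actual face recognition.

```python
# Hypothetical sketch of the attendance status update process (S21-S22):
# face information from the monitoring camera is matched against the staff
# table, and the attendance status of the matching worker is updated to
# "attendance" (entering) or "off duty" (exiting).
# Face information is modeled as a string for illustration only.
staff_table = [
    {"name": "Yamada", "face": "face-yamada", "attendance": "off duty"},
    {"name": "Suzuki", "face": "face-suzuki", "attendance": "attendance"},
]

def update_attendance(face_info: str, is_entering: bool) -> bool:
    """Entrance person determination (S21) followed by the status update
    (S22). Returns True when the face matches a worker."""
    for worker in staff_table:
        if worker["face"] == face_info:  # S21: matches a worker?
            worker["attendance"] = "attendance" if is_entering else "off duty"  # S22
            return True
    return False
```

A face that matches no worker leaves the staff table unchanged, corresponding to the case S21: no.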
Fig. 17 is a flowchart of the entrance guidance process. The entrance guidance process is started when the virtual robot 100 detects an entrance.
It is determined whether or not the entrance person detected by the virtual robot 100 is a worker (S31). This determination is performed, for example, by one of the following methods.
The input/output unit 101b transmits the image captured by the camera 102a of the virtual robot 100 to the robot control device 110, and the robot control device 110 transmits the captured image to the monitoring camera control device 160. The face recognition unit 161a of the monitoring camera control device 160 performs face recognition of the face image in the captured image. The monitoring camera control device 160 or the attendance management device 130 determines whether or not the face information based on the face recognition result matches the face information of any one worker. When the determination result is true, the determination result of S31 is true.
The entrance person determination unit 131a of the attendance management device 130 transmits, to the robot control device 110, the face information of entrance persons whose entrance person determination result is false. The robot control device 110 accumulates the face information of entrance persons whose entrance person determination result is false. The virtual robot 100 or the robot control device 110 performs face recognition of the face image in the image captured by the camera 102a of the virtual robot 100, and determines whether or not the face information based on the face recognition result matches the face information of any entrance person whose entrance person determination result is false. When this determination result is false, the determination result of S31 is true.
When the determination result at S31 is true (S31: yes), the entrance guidance process ends; when the determination result at S31 is false (S31: no), the process continues. That is, the targets of the entrance guidance process can be narrowed down to appropriate persons, such as entrance persons who are not workers of the building 190.
When the determination result at S31 is false (S31: no), the virtual robot 100 starts a conversation with the entrance person (S32). The conversation is performed according to the question scenario (for example, tables F1 and F2 shown in figs. 10 and 11). Under the control of the robot control device 110, the conversation is carried out between the virtual robot 100 and the entrance person. The contents of the conversation are transmitted from the robot control device 110 to the guidance device 150 as needed.
Each time the meeter specifying unit 151a of the guidance device 150 receives the contents of the conversation, it determines whether or not the conversation contents include a destination floor (S33). Note that S33 may be skipped, and steps S34 and thereafter may be performed instead.
If the result of the determination at S33 is true (S33: yes), the guide unit 151c specifies the destination floor, specifies a use elevator 180, that is, an elevator 180 that can stop at the specified destination floor, from the elevator table G4, and notifies the robot monitoring center 120 of the specified destination floor and use elevator 180. The destination floor indicating unit 122c of the robot monitoring center 120 indicates the destination floor to the elevator control device 140 corresponding to the use elevator 180. The guide unit 151c of the guidance device 150 introduces the use elevator 180 and the destination floor to the entrance person via the robot control device 110 and the virtual robot 100 (S40).
When the determination result at S33 is false (S33: no), the conversation continues. The meeter specifying unit 151a of the guidance device 150 receives one or more answers from the entrance person and specifies the meeter of the entrance person based on the contents of the conversation (S34). For example, when the one or more answers include information on attributes of the meeter, such as the name of the meeter, or the name of the conference room and the time period of a scheduled meeting, the meeter is determined from at least one of the staff table G3 and the schedule G1. The attendance determination unit 151b of the guidance device 150 acquires the attendance status of the identified meeter from the attendance management device 130 (S35) and determines whether or not that attendance status is "attendance" (S36).
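Steps S34 to S36 can be sketched as follows. The staff table records, names, and floors are illustrative assumptions; in the embodiment the matching could also consult the schedule G1.

```python
# Hypothetical sketch of S34 (meeter specification from a conversation answer)
# and S35/S36 (attendance determination for the identified meeter).
# All records, names, and floors below are illustrative assumptions.
staff_table_g3 = [
    {"name": "Yamada", "company": "Example Corp.", "floor": 5,
     "attendance": "attendance"},
    {"name": "Suzuki", "company": "Example Corp.", "floor": 7,
     "attendance": "off duty"},
]

def specify_meeter(meeter_name: str):
    """S34: match the answered meeter name against the staff table G3.
    Returns the matching record, or None if no worker matches."""
    for worker in staff_table_g3:
        if worker["name"] == meeter_name:
            return worker
    return None

def is_in_attendance(meeter: dict) -> bool:
    """S35/S36: attendance determination for the identified meeter."""
    return meeter["attendance"] == "attendance"
```

When `is_in_attendance` returns True, the process proceeds to the branch at S37; otherwise the absence of the meeter is announced (S39).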
If the determination result at S36 is true (S36: yes), the guide unit 151c refers to the staff table G3 and determines whether or not the identified meeter is in a building different from the building 190 (S37).
When the determination result at S37 is false (S37: no), that is, when the meeter is present at an office of the building 190, the guide unit 151c specifies the destination floor (the floor of the meeter), specifies a use elevator 180, that is, an elevator 180 that can stop at the specified destination floor, from the elevator table G4, and notifies the robot monitoring center 120 of the specified destination floor and use elevator 180. The destination floor indicating unit 122c of the robot monitoring center 120 indicates the destination floor to the elevator control device 140 corresponding to the use elevator 180. In this way, it can be expected that the car has already arrived at the entrance person's floor when the entrance person reaches the use elevator 180, and that the destination floor is already registered so that the entrance person need not operate the buttons in the car. The guide unit 151c of the guidance device 150 introduces the use elevator 180 and the destination floor to the entrance person via the robot control device 110 and the virtual robot 100 (S40).
If the determination result at S37 is true (S37: yes), the guide unit 151c introduces, to the entrance person via the robot control device 110 and the virtual robot 100, the other building where the meeter is located (S38). For example, the virtual robot 100 displays the name and address of the other building and the route from the building 190 to the other building. This enables the entrance person to quickly move to the correct meeting place.
If the determination result at S36 is false (S36: no), the guide unit 151c informs the entrance person, via the robot control device 110 and the virtual robot 100, that the meeter is absent (S39).
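The branching of steps S36 to S40 described above can be summarized in the following sketch. The meeter record keys and the returned messages are illustrative assumptions; the real system issues utterances via the robot control device 110 rather than returning strings.

```python
# Hypothetical sketch of the S36-S40 branching, assuming the meeter has
# already been identified (S34). Keys and message strings are illustrative.
def guide_entrance_person(meeter: dict) -> str:
    """Decide what to introduce to the entrance person for a given meeter."""
    if meeter["attendance"] != "attendance":      # S36: no
        return "absent"                           # S39: introduce absence
    if meeter["building"] != "building 190":      # S37: yes
        return "go to " + meeter["building"]      # S38: introduce other building
    # S40: introduce the use elevator and the destination floor
    return "use elevator %s to floor %d" % (meeter["elevator"], meeter["floor"])
```

In the embodiment, the S40 branch additionally notifies the robot monitoring center 120 so that the destination floor is indicated to the corresponding elevator control device 140.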
As described above, according to the present embodiment, the destination floor, which is the floor of the meeter, can be specified based on the contents of a conversation with an entrance person who is not a worker of the building 190, and the use elevator 180 and the destination floor can be introduced to the entrance person from the virtual robot 100.
Although one embodiment of the present invention has been described above, this is an example for illustrating the present invention, and the scope of the present invention is not intended to be limited to this embodiment. The present invention can also be carried out in various other forms.
For example, the functions of the virtual robot 100 may be executed as an application program downloaded to a mobile terminal, such as a smartphone, of the entrance person. This makes it possible to receive guidance from the virtual robot 100 without waiting in line to use the virtual robot 100.
For example, when introducing the use elevator 180 and the destination floor to the entrance person, the guide unit 151c may output, to the information processing terminal of the meeter (e.g., to the meeter's address specified from the staff table G3), a message notifying that the entrance person is heading to the destination floor. This enables the meeter to prepare for the arrival of the entrance person (for example, to meet the entrance person in front of the elevator at the destination floor).
For example, before introducing the use elevator 180 and the destination floor to the entrance person, the guide unit 151c may output, to the information processing terminal of the meeter, an inquiry requesting a response as to whether to permit the entrance person to go to the destination floor. This makes it possible to provide flexible guidance according to the meeter's wishes. For example, the guidance may be performed only when the answer to the inquiry indicates permission, or the meeter may change the destination floor, in which case the changed destination floor is guided.