
WO2025168547A1 - Determination of objects in spaces connected by transparent surfaces - Google Patents

Determination of objects in spaces connected by transparent surfaces

Info

Publication number
WO2025168547A1
WO2025168547A1 (PCT/EP2025/052791)
Authority
WO
WIPO (PCT)
Prior art keywords
space
image
transparent surface
reflection
camera
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
PCT/EP2025/052791
Other languages
French (fr)
Inventor
Dave Willem VAN GOOR
Jan EKKEL
Paul Henricus Johannes Maria VAN VOORTHUISEN
Svetlana Vladimirovna MINAKOVA
Eduardo Alejandro DE LEON ROMERO
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Signify Holding BV
Original Assignee
Signify Holding BV
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Signify Holding BV filed Critical Signify Holding BV
Publication of WO2025168547A1 publication Critical patent/WO2025168547A1/en
Pending legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • G06V20/52Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/25Determination of region of interest [ROI] or a volume of interest [VOI]
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/60Extraction of image or video features relating to illumination properties, e.g. using a reflectance or lighting model
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00Indexing scheme relating to image or video recognition or understanding
    • G06V2201/07Target detection

Definitions

  • The present invention generally relates to object detection. More specifically, it relates to determining in which space an object is present.
  • Security cameras can prevent theft, vandalism, or break-ins by detecting intruders and sending respective notifications to the property owners.
  • When a security camera is installed in an indoor space which contains transparent or partially transparent surfaces, such as glass doors or windows, the camera view may extend beyond the target space.
  • To detect intruders, modern security cameras rely on object detection and face recognition Artificial Intelligence (AI) algorithms. These algorithms have no notion of reflective surfaces such as windows or glass doors. From the perspective of these algorithms, an intruder is a person who has been detected in the camera view and whose face is not recognized when compared to the faces of the camera owners. Therefore, security cameras may detect persons as intruders even though they are not present in the target space, i.e., the space observed by a security camera with the intent of protection, but are merely visible to the camera in a reflective surface, e.g., a window. This leads to false alarms, unnecessarily disturbing owners of security cameras.
  • Security cameras have options to exclude areas from detections. However, this cannot be used to exclude outdoor objects only, as it would also block detection of persons in the excluded area. This problem occurs because the algorithms employed by the security cameras cannot tell whether a person is truly present in the target space.
  • The present technology centres around the properties of transparent surfaces, which are known to exhibit reflective characteristics that vary depending on the lighting conditions. Such surfaces can be detected by e.g. a security camera when the illumination properties of the space observed by the camera are known.
  • A method is provided for identifying a position of an object in a first space or a second space. The first space is separated from the second space by an, at least partially, transparent surface.
  • An image or a video is registered by a camera in the first space.
  • the image or video depicts at least a part of the transparent surface.
  • Information of illumination conditions in the first space is obtained.
  • Information of illumination conditions in the second space is obtained.
  • a reflection level in light impinging on the camera from the transparent surface is derived based on the illumination conditions in the first space and the illumination conditions in the second space. The reflection level represents an expected degree of reflected light in the light impinging on the camera from the transparent surface.
  • As a response to the reflection level being above a predetermined threshold and to an image of an object being present within the transparent surface, it is determined whether the image of the object within the transparent surface is a reflection image or not, based on the reflection level.
  • a position of the object is identified in dependence of the determination.
  • The present technology thus involves detecting reflective surfaces and discarding people that are not truly present in a target space.
  • the utilization of the illumination conditions in the different spaces makes it easy to distinguish between situations where reflections are not present and situations where reflections may be present. Only the situations where there might be reflections are considered for an accurate determination process. This discrimination reduces the processing requirement considerably.
  • the minimum level of discrimination involves the use of a reflection level having either of two categories, reflection or non-reflection. To this end, in one embodiment the reflection level is expressed as at least two reflection level categories. These two categories then correspond to expected possible reflection and expected non-possible reflection, respectively. This operates as an on/off switch for the actual determination of whether a reflection image is present or not.
  • the reflection level is expressed as a percentage of reflected light in the light impinging on the camera from the transparent surface. In some applications, it may be useful to adapt the breaking point between possible reflections and non-possible reflection conditions. In such cases, the reflection level that may assume continuous values may be used, typically the expected ratio of reflected light, and the discrimination between reflection/non-reflection conditions can easily be tuned by changing a threshold value for the reflection level.
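The tunable threshold on a continuous reflection level can be sketched as follows. This is a minimal illustration, not from the patent: the function name, the default threshold value and the category labels are all assumptions.

```python
def classify_reflection(reflection_level: float, threshold: float = 0.3) -> str:
    """Map a continuous reflection level (the expected ratio of reflected
    light in the light from the surface, 0.0-1.0) onto the two reflection
    level categories used by the determination step."""
    if not 0.0 <= reflection_level <= 1.0:
        raise ValueError("reflection level must be a ratio in [0, 1]")
    return ("possible reflection" if reflection_level > threshold
            else "no reflection expected")
```

Changing `threshold` tunes the breaking point between possible-reflection and non-possible-reflection conditions without touching the rest of the pipeline.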
  • a smart home system can determine if it is dark outside using the calendar in combination with the time, or use a light sensor, e.g., embedded into the security camera or a separate ambient light sensor.
  • the smart home system knows the state of every connected luminaire in this system and the room to which the light is assigned. From this it is possible to determine on a per-room basis the degree to which transparent surfaces will exhibit high levels of reflectivity or none at all.
  • At least one of the steps of obtaining information of illumination conditions in the first space and obtaining information of illumination conditions in the second space is performed by determining a luminaire status in a respective space and/or detecting a light level in the respective space.
  • the illumination conditions may preferably be measured and/or concluded from luminaire status information for the different spaces.
  • the novel aspect coming with the luminaire status is that the system knows the positions of the lamps relative to the camera and furthermore the settings of the lamps. Additionally, the locations of the lamps relative to the at least partially transparent surface, e.g. a window, are known. These data are taken into account for determining where the object is present.
  • one of the first space and the second space is an outdoor space.
  • the step of obtaining information of illumination conditions in the outdoor space may then comprise noting a present time and date and retrieving an expected illumination condition from a database with the time and date as input.
  • Outdoor spaces are typically significantly influenced by sunlight. By registering the time and date, it is known from astronomical data whether the sun is above or below the horizon, and an estimate of whether sunlight may provide a light condition of the outdoor space is thus easily achieved. This may then be combined with e.g. status information of luminaires that may be present in the outdoor space.
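The time-and-date lookup described above could be sketched as below. The per-month sunrise/sunset table stands in for the astronomical database mentioned in the text; its values and the function name are illustrative placeholders, not real data.

```python
from datetime import datetime

# Placeholder month -> (sunrise hour, sunset hour) table; a real system
# would query an astronomical database or ephemeris for its location.
SUN_TABLE = {1: (8, 17), 6: (5, 22)}

def outdoor_is_lit(now: datetime) -> bool:
    """Estimate whether the sun provides a light condition of the
    outdoor space at the given time and date."""
    sunrise, sunset = SUN_TABLE.get(now.month, (6, 20))
    return sunrise <= now.hour < sunset
```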
  • the deriving of a reflection level is based on an illumination difference between the first space and the second space.
  • The illumination difference is a quantity that is advantageously used, since it is tightly connected to the existence of high-intensity reflections. Reflections occur when there is a significant difference in light between the front of a reflective transparent surface and the other side. Either side can be inside or outside the house, or any combination of the two. Looking through a partly transparent surface towards a space that has a higher illumination than the space you or your camera is positioned in will rarely present any reflection image of a noticeable relative light level. Conversely, looking through a partly transparent surface towards a space that has a lower illumination than the space you or your camera is positioned in will probably present reflection images of a noticeable relative light level.
  • The fact that a transparent reflective surface is present, and its location, can optionally be embedded in e.g. the metadata of an image.
  • This information can be used as a post-processing step for a person detection algorithm to determine whether a person is truly present in the observed space.
  • The step of identifying a position of the object is performed as follows: if a reflection image is determined to exist, the object is identified to be present in the first space, and if a reflection image is not determined to exist, the object is identified to be present in the second space.
  • A generative adversarial network may, as one non-exclusive example, be trained to distinguish between reflections of people and objects and actual images.
  • The method for identifying a position of an object may comprise the further step of initiating, as a response to the reflection level being indefinite in distinguishing between reflection and transmission and to an image of an object being present within the transparent surface, a change of illumination in at least one of the first space and the second space. In this manner, control of the illumination can assist in deriving conclusive reflection levels.
  • the present technology is applicable to any kind of object.
  • One very advantageous use is, as indicated in the background, the identification of the presence of a living creature, e.g. a human being, in a certain target space.
  • This is advantageously applicable to different kinds of security systems.
  • the object is a person.
  • the present technology may be implemented as a computer program and may be loaded into any kind of processor, e.g. of a smart home system.
  • a computer program product comprising a computer-readable medium having stored thereon a computer program comprising instructions.
  • the instructions when executed by at least one processor, cause the at least one processor to register an image or a video from a camera in a first space.
  • the image or video depicts at least a part of the transparent surface.
  • the instructions when executed by at least one processor, cause the at least one processor to further obtain information of illumination conditions in the first space, to obtain information of illumination conditions in the second space, and to derive a reflection level in light impinging on the camera from the transparent surface based on the illumination conditions in the first space and the illumination conditions in the second space.
  • the reflection level represents an expected degree of reflected light in the light impinging on the camera from the transparent surface.
  • The instructions, when executed by at least one processor, further cause the at least one processor to determine, as a response to the reflection level being above a predetermined threshold and to an image of an object being present within the transparent surface, whether the image of the object within the transparent surface is a reflection image or not based on the reflection level, and to identify a position of the object in dependence of the determination.
  • The present technology may be implemented as a localizing system. Such a localizing system may be a stand-alone system or may be integrated into other types of systems, e.g. a smart home system.
  • a localizing system for identifying a position of an object in a first space or a second space.
  • the first space is separated from the second space by an, at least partially, transparent surface.
  • the localizing system comprises a camera, first illumination condition means, second illumination condition means and a localizer control.
  • the camera is situated within the first space.
  • the camera is configured to register an image or a video in the first space.
  • the camera is directed such that the image or video depicts at least a part of the transparent surface.
  • the first illumination condition means is configured for obtaining information of illumination conditions in the first space.
  • the second illumination condition means is configured for obtaining information of illumination conditions in the second space.
  • the localizer control is connected to or integrated with the camera, connected to or integrated with the first illumination condition means and connected to or integrated with the second illumination condition means.
  • the localizer control is configured for deriving a reflection level in light impinging on the camera from the transparent surface based on the illumination conditions in the first space and the illumination conditions in the second space.
  • the reflection level represents an expected degree of reflected light in the light impinging on the camera from the transparent surface.
  • the localizer control is further configured for determining, as a response to the reflection level being above a predetermined threshold and to an image of an object being present within the transparent surface, whether the image of the object within the transparent surface is a reflection image or not.
  • the localizer control is further configured for identifying a position of the object in dependence of the determination.
  • A “position” defines a place in space of an object. This position may be expressed as a definition of the exact location, or as being located within or outside a particular limited space. The position may be referred to in relative wording, e.g. “outside”, “in front of”, “in level with” an object or space.
  • an “object” is a physically existing item, e.g. a person, an animal or a non-live article of any kind.
  • space defines a three-dimensional volume in the real world.
  • the space may be limited by physical items, such as walls, floors and ceilings, but may also be three-dimensional volumes without physical limitations, or only physical limitations in some directions.
  • “transparent surface” is used for characterizing a surface that allows transmission of electromagnetic radiation, in particular light in the visible, ultraviolet and/or infrared wavelength.
  • the transmission of electromagnetic radiation may be partial, i.e. presenting a transmissivity of less than 100%, however, still being detectable.
  • illumination conditions refer to light-associated conditions of a space.
  • the illumination conditions can be expressed in measurable quantities, such as luminous intensity, but may also be expressed in more general terms, such as “dark”, “light”, “illuminated” etc.
  • “reflection level” refers to the portion of a light intensity that emanates from light that has been reflected at least once between a light source and the detecting means, i.e. the “degree of reflected light”.
  • the reflection level may be expressed in e.g. percentage of a total light intensity, but may also be expressed e.g. in “reflection level categories”, such as “mainly reflected light” and “mainly non-reflected light”.
  • “Luminaire status” concerns conditions of lighting arrangements of e.g. a particular space. This could e.g. be the number of operating luminaire arrangements, their illumination strengths, the direction of the light from luminaire arrangements etc. Luminaire status may also be expressed in categories, such as “on” and “off”.
  • light level refers to a measurable quantity associated with light, such as illuminance or brightness.
  • processor refers to any kind of processing circuitry capable of processing computer instructions and data.
  • Processors include, but are not limited to, one or more microprocessors, one or more Digital Signal Processors (DSPs), one or more Central Processing Units (CPUs), video acceleration hardware, and/or any suitable programmable logic circuitry such as one or more Field Programmable Gate Arrays (FPGAs), or one or more Programmable Logic Controllers (PLCs).
  • “Computer program” refers to any collection of instructions, typically digital, that when processed by a processor performs a predetermined series of operations.
  • the computer program may be loaded into an operating memory of a processor for execution by the processing circuitry thereof.
  • “computer-readable medium” refers to any nonvolatile medium capable of storing and retrieving computer program instructions and/or data.
  • the computer-readable medium may include one or more removable or non-removable memory devices including, but not limited to a Read-Only Memory (ROM), a Random Access Memory (RAM), a Compact Disc (CD), a Digital Versatile Disc (DVD), a Blu-ray disc, a Universal Serial Bus (USB) memory, a Hard Disk Drive (HDD) storage device, a flash memory, a magnetic tape, or any other conventional memory device.
  • a “computer program product” comprises a computer-readable medium having stored thereon a computer program.
  • the computer program product is normally carried or stored on a computer readable medium, in particular a non-volatile medium.
  • a “localizing system” refers to a system configured for determining a position of an object.
  • the localizing system may e.g. comprise imaging equipment and image analysing equipment.
  • An “illumination condition means” refers to any arrangement capable of providing information of illumination conditions of e.g. a space. Non-exclusive examples can be or comprise advanced luminaire systems or light sensor systems.
  • Fig. 1A schematically illustrates two light spaces separated by a transparent surface
  • Fig. 1B schematically illustrates one light space and one dark space separated by a transparent surface
  • Fig. 2 is a flow diagram of steps of an embodiment of a method for identifying a position of an object in a first space or a second space
  • Figs. 3A-B are diagrams illustrating different embodiments of reflection level categories
  • Figs. 4A-D are schematic illustrations of a wall with a window during different lighting conditions
  • Fig. 5 schematically shows an embodiment of a localizing system.
  • Reflection images recorded through e.g. mirrors may be identified and/or eliminated by use of different AI techniques. Such procedures are, however, not always perfect, and false detections may occur despite relatively high processor capacity demands. Furthermore, surfaces that are partially transparent may, due to changing surrounding light conditions, present different levels of reflectivity, which makes such determinations even more difficult. The present technology, however, presents a solution that facilitates handling of partially transparent, partially reflective surfaces.
  • reflections may occur when there is a significant difference in light between the front of a reflective transparent surface and the other side.
  • Either side can be inside or outside the house, or any combination of the two.
  • Fig. 1A illustrates schematically a situation where a camera 50, situated in a first space 10, records an image based on light impinging on the camera within a certain range of angles 60.
  • the range of angles 60 comprises light impinging from a transparent surface 30 and light impinging from areas outside the transparent surface 30.
  • a second space 20 is present behind the transparent surface 30.
  • An object 40A, in this case illustrated as a person, is present in the first space 10, but outside the range of angles 60. Nevertheless, some light emanating from the object 40A may be reflected in the transparent surface 30 and in this way reach the camera.
  • An object 40B is present in the second space 20.
  • The second space is highly illuminated, and the light intensity from the object 40B in the second space 20 through the transparent surface 30 to the camera 50 is much higher than the intensity of the reflected light from the object 40A. Even if there is a weak reflection image of the object 40A reaching the camera 50, it disappears totally in comparison with the light coming from the object 40B in the second space 20. No reflection image is detectable.
  • Fig. 1B illustrates the same arrangement, but when the second space 20 is kept dark. Almost no light is transmitted from the second space 20 through the transparent surface 30 to the camera 50, and in particular very little light from the object 40B. This means that the light from the object 40A in the first space 10, reflected in the transparent surface 30, now becomes stronger than the light from the second space 20. A reflection image can thereby be detected by the camera 50. However, due to the reflection, the object 40A seems to be placed within the second space 20 as an imaginary object 40A*.
  • A generative adversarial network (GAN) may for instance be trained using PIR or RF sensing.
  • A smart home system, e.g., Philips Hue, may use a light sensor, e.g., embedded into a security camera, or a separate ambient light sensor.
  • Yet another source of information about light conditions is that the smart home system preferably knows the state of every connected luminaire in the system and the room to which each light is assigned. From this it is possible to determine, on a per-room basis, the degree to which partially transparent surfaces will exhibit high levels of relative reflectivity or none at all.
  • the light of interest is mainly visible light.
  • the presented principles are also valid for other ranges of wavelength, such as ultraviolet or infrared, as long as the transmittance is significant for such wavelength.
  • Fig. 2 illustrates a flow diagram of steps of an embodiment of a method for identifying a position of an object in a first space or a second space.
  • the first space is separated from the second space by an, at least partially, transparent surface.
  • In step S10, an image or a video is registered by a camera in a first space.
  • the image or video depicts at least a part of the transparent surface.
  • In step S20, information of illumination conditions in the first space is obtained.
  • In step S22, information of illumination conditions in the second space is obtained.
  • The steps S20 and S22 can advantageously be performed simultaneously, or at least both concern illumination conditions present when the image or video was captured.
  • A reflection level in light impinging on the camera from the transparent surface is derived based on the illumination conditions in the first space and the illumination conditions in the second space.
  • the reflection level represents an expected degree of reflected light in the light impinging on the camera from the transparent surface.
  • The relative strength of reflections is low when the camera is indoors, the lights inside are on, and it is light outside.
  • The relative strength of reflections is high when the camera is indoors, the lights inside are on, and it is dark outside.
  • The relative strength of reflections is high when the camera is indoors, a glass window separates two rooms, and the room housing the camera is fully lit up while the other room is completely in the dark.
  • The relative strength of reflections may be medium when the camera is located outdoors during daylight and the lights inside the house are off.
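The qualitative rules above can be expressed as a simple lookup. The tuple keys and the strength labels below are assumptions chosen for illustration; combinations not listed in the text are reported as unknown rather than guessed.

```python
# (camera location, first-space light, second-space light) -> relative strength
RULES = {
    ("indoor", "lit", "lit"): "low",       # lights on inside, light outside
    ("indoor", "lit", "dark"): "high",     # lights on inside, dark outside or dark room
    ("outdoor", "lit", "dark"): "medium",  # camera outdoors in daylight, lights off inside
}

def relative_reflection_strength(camera_location: str,
                                 first_space: str,
                                 second_space: str) -> str:
    """Expected relative strength of reflections for the listed cases."""
    return RULES.get((camera_location, first_space, second_space), "unknown")
```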
  • If there is an object present within the image or video, the process continues to step S32, where it is decided whether the reflection level is above a predetermined threshold. If the reflection level is concluded to be below the predetermined threshold, the process continues to step S42, where it is determined that the image of the object within the transparent surface is a direct image. The process then continues to step S50, described further below.
  • In step S40, it is determined whether the image of the object within the transparent surface is a reflection image or not, based on the reflection level. If the reflection level exceeds a second threshold, it may be determined that the image of the object within the transparent surface is a reflection image. If the reflection level does not exceed the second threshold, it may be determined that the image of the object within the transparent surface is not a reflection image.
  • In step S50, a position of the object is identified in dependence of the determination.
  • the step of identifying a position of the object is performed as a choice between two states. If a reflection image is determined to exist, the object is identified to be present in the first space. If a reflection image is not determined to exist, the object is identified to be present in the second space.
  • a threshold 100 could be predetermined to separate a reflection level 102 that may give rise to a reflection image from a reflection level 104 where reflection images are unlikely to be detectable.
  • In step S40, the determination could simply follow the results of the reflection level analysis.
  • A reflection level such as reflection level 102, i.e. above the threshold 100, is determined to be associated with a reflection image, and a reflection level such as reflection level 104, i.e. below the threshold 100, is determined to be associated with a direct image. This may, however, give uncertain results when the reflection level is close to the threshold.
  • the deriving of a reflection level is based on an illumination relation between the first space and the second space.
  • The illumination condition of the first and second spaces could be expressed just in terms of “light” or “dark”.
  • A “reflection probable” reflection level could be assigned if the illumination condition of the second space is “dark” and the illumination condition of the first space is “light”.
  • A “reflection not possible” reflection level could be assigned if the illumination condition of the second space is “light” and the illumination condition of the first space is “dark”.
  • A “reflection possible” reflection level could be assigned if the illumination conditions of the first and second spaces are the same.
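The categorical assignments above amount to a small mapping from the two "light"/"dark" conditions to a reflection level category. A minimal sketch, with the function name chosen for illustration:

```python
def reflection_category(first_space: str, second_space: str) -> str:
    """Assign a categorical reflection level from the "light"/"dark"
    illumination conditions of the two spaces."""
    if first_space == "light" and second_space == "dark":
        return "reflection probable"
    if first_space == "dark" and second_space == "light":
        return "reflection not possible"
    return "reflection possible"  # equal conditions on both sides
```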
  • Fig. 4A illustrates a wall 12 of a first space with a window 32 into a second space. Both the first space and the second space are illuminated. An object 40B in the second space is seen through the window 32. A mirror image of an object 40A* may very weakly be seen as a reflection in the window 32. Typically, this mirror image 40A* is much less intense than the image of the object 40B and may not even be possible to distinguish from the background light from the second space.
  • Fig. 4B illustrates the same wall 12 when the illumination in the first space is turned off. Since the real object giving rise to the mirror image 40A* is no longer illuminated, the mirror image 40A* also disappears.
  • Fig. 4C illustrates the same wall when both spaces are dark.
  • FIG. 4D illustrates the same wall when only the first space is illuminated.
  • The object 40B is no longer seen, since there is no illumination in the second space, and the overall light intensity coming from the second space through the window is generally very low. This makes it possible to distinguish the mirror image of the object 40A*, despite the relatively low intensity of a reflected image.
  • A ratio between a measured or estimated light intensity of the first space and a measured or estimated light intensity of the second space, scaled by a ratio between an estimated reflectance and an estimated transmittance of the transparent surface, could for example be used as a measure of the reflection level.
  • Other associated or derivable measures can also be used, e.g. a percentage of reflected light in the light impinging on the camera from the transparent surface.
  • Such a reflection level measure could be used as an input into a routine for determining whether the image of the object within the transparent surface is a reflection image or not.
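The ratio measure described above can be written out directly. The default reflectance and transmittance values are assumptions (roughly typical for plain glass), not figures from the patent.

```python
def reflection_level(first_intensity: float, second_intensity: float,
                     reflectance: float = 0.08,
                     transmittance: float = 0.90) -> float:
    """Reflection level as the intensity ratio of the two spaces, scaled by
    the surface's reflectance/transmittance ratio."""
    if second_intensity <= 0:
        # A completely dark second space: reflections dominate entirely.
        return float("inf")
    return (first_intensity / second_intensity) * (reflectance / transmittance)
```

A brightly lit first space facing a dark second space yields a high value; the opposite situation yields a value near zero, matching the qualitative rules earlier in the text.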
  • The location of transparent surfaces can be provided to the system as a part of an installation procedure. However, they can also be detected by their light intensity in different situations.
  • the method for identifying a position of an object comprises a further step of localizing the transparent surface by a vision algorithm.
  • The location of the transparent reflective surface can be retrieved by a vision algorithm trained to detect windows, for example by their frames. For example, in the absence of artificial lighting, at daytime the overall light intensity of a window is higher than its surroundings, as lux levels inside a building are far below the lux level outside. Similarly, a connected lighting system can be used to combine the ambient light levels and the lighting state. This may be apparent when comparing the different situations of Figs. 4A-D.
  • Transparent surfaces in the image can be located by an algorithm embedded in the camera, combining the camera images and the ambient light levels at different times of the day or luminaire status.
  • the reflectivity of the identified transparent surface depends on the light levels during the detection of a person.
  • the location of the surface in the camera view and the reflectivity is used by the system to determine whether a person is truly present in the observed space.
  • the information on the chance of reflections, i.e. the reflection level and the location of the transparent surface, may finally be combined with the video footage, e.g. as metadata.
  • At least one of the steps of obtaining information of illumination conditions in the first space and obtaining information of illumination conditions in the second space is performed by determining a luminaire status in a respective space, and/or detecting a light level in the respective space.
  • a smart lighting system typically knows each state, i.e. on, off or dimmed, of each luminaire in one or both spaces or rooms and can determine the possible amount of light reflection in e.g. a glass window. If such luminaire status is lacking for one or both spaces, light sensors could assist in finding the illumination conditions. Such sensors may be integrated in the camera or can be separate sensors.
  • the smart lighting system could be used for changing the illumination conditions in either or both of the spaces, in order to create more conclusive reflection levels.
  • the user can optionally enable and disable the detection through transparent reflective surfaces. This may be useful in the case the presence of persons within a certain space is prohibited only for some time periods, e.g. during the night, but not during other periods. This may also be coordinated with e.g. activation/deactivation of an alarm system.
  • the identification of where an object is situated typically distinguishes between the first and second space. However, in some cases, only one of the spaces is of interest, e.g. for an alert operation. An object identified to be in the other space may then be left without action.
  • the indoor camera may be configured to notify the user of the system about the existence of a person behind the reflective surface and ignore persons in front of the reflective surface.
  • the present technology may advantageously be implemented as a computer program. To that end, it may be loaded into any kind of processor, e.g. of a smart home system.
  • One way to provide the computer program is then to have it stored on a computer program product.
  • Such computer program product then comprises a computer-readable medium having stored thereon a computer program comprising instructions.
  • the instructions cause a processor(s) to register an image or a video from a camera in a first space when the instructions are executed by the processor(s).
  • the image or video depicts at least a part of the transparent surface.
  • the instructions further cause the processor to obtain information of illumination conditions in the first space, to obtain information of illumination conditions in the second space, and to derive a reflection level in light impinging on the camera from the transparent surface based on the illumination conditions in the first space and the illumination conditions in the second space.
  • the reflection level represents an expected degree of reflected light in the light impinging on the camera from the transparent surface.
  • the instructions further cause the at least one processor to determine, as a response to the reflection level being above a predetermined threshold and to an image of an object being present within the transparent surface, whether the image of the object within the transparent surface is a reflection image or not, and to identify a position of the object in dependence of the determination.
  • Fig. 5 schematically illustrates a localizing system 1.
  • the localizing system 1 is configured for identifying a position of an object in a first space 10 or a second space 20.
  • the first space 10 is separated from the second space 20 by an, at least partially, transparent surface 30.
  • the localizing system 1 comprises a camera 50 situated within the first space 10.
  • the camera is configured to register an image or a video in the first space 10.
  • the camera is directed such that the image or video depicts at least a part of the transparent surface 30.
  • the localizing system 1 comprises first illumination condition means 60A for obtaining information of illumination conditions in the first space 10.
  • This first illumination condition means 60A could e.g. be a part of an illumination system providing information about an illumination condition of luminaires 60A in the first space 10.
  • the first illumination condition means 60A could also in alternative or in combination be a light sensor sensing the light intensity within the first space 10.
  • the localizing system 1 further comprises second illumination condition means 60B for obtaining information of illumination conditions in the second space 20.
  • this second illumination condition means 60B could e.g. be a part of an illumination system providing information about an illumination condition of luminaires 60B in the second space 20.
  • the second illumination condition means 60B could also in alternative or in combination be a light sensor sensing the light intensity within the second space 20.
  • the localizing system 1 also comprises a localizer control 70.
  • This localizer control is connected to or integrated with the camera 50, connected to or integrated with the first illumination condition means 60A and connected to or integrated with the second illumination condition means 60B.
  • the localizer control 70 is configured for deriving a reflection level in light impinging on the camera 50 from the transparent surface 30 based on the illumination conditions in the first space 10 and the illumination conditions in the second space 20. As discussed above, the reflection level represents an expected degree of reflected light in the light impinging on the camera 50 from the transparent surface 30.
  • the localizer control 70 can send the reflection level to the camera 50 and the camera 50 may embed this into the video footage.
  • the localizer control 70 is further configured for determining, as a response to the reflection level being above a predetermined threshold and to an image of an object being present within the transparent surface 30, whether the image of the object within the transparent surface 30 is a reflection image or not.
  • the localizer control 70 is also configured for identifying a position of the object in dependence of the determination.
  • the localizer control 70 may be integrated in a smart lighting system or a security system.
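The reflection-level measure outlined in the bullet points above (a ratio of the two spaces' light intensities, scaled by the surface's reflectance and transmittance) can be sketched as a simple computation. The function name and the default reflectance and transmittance values below are illustrative assumptions, not values from the disclosure:

```python
def reflection_level(intensity_first: float,
                     intensity_second: float,
                     reflectance: float = 0.08,
                     transmittance: float = 0.90) -> float:
    """Estimate the relative strength of reflections seen from the
    first space: reflected light (first-space intensity scaled by the
    surface reflectance) divided by transmitted light (second-space
    intensity scaled by the surface transmittance)."""
    reflected = intensity_first * reflectance
    transmitted = intensity_second * transmittance
    if transmitted == 0:
        # Completely dark second space: reflections fully dominate.
        return float("inf")
    return reflected / transmitted

# Brightly lit first space (500 lx) against a dark second space (5 lx):
level = reflection_level(500.0, 5.0)
assert level > 1.0  # reflections expected to dominate
```

A value well above 1 suggests a reflection image is plausible; a value well below 1 suggests that light transmitted through the surface dominates.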


Abstract

A method for identifying a position of an object in a first or second space, separated by a transparent surface, is presented. An image or a video is registered (S10) by a camera. Information of illumination conditions in the first and second space is obtained (S20, S22). A reflection level in light impinging on the camera from the transparent surface is derived (S24) based on the illumination conditions in the first and second space. The reflection level represents an expected degree of reflected light in the light impinging on the camera from the transparent surface. As a response to the reflection level being above a predetermined threshold and to an image of an object being present, it is determined (S40) whether the image of the object within the transparent surface is a reflection image or not. A position of the object is identified (S50) in dependence of the determination.

Description

Determination of objects in spaces connected by transparent surfaces
FIELD OF THE INVENTION
The present invention generally relates to object detection. More specifically, the present invention relates to determining in which space an object is present.
BACKGROUND OF THE INVENTION
Security cameras can prevent theft, vandalism, or break-ins by detecting intruders and sending respective notifications to the property owners. When a security camera is installed in an indoor space which contains transparent or partially transparent surfaces such as glass doors or windows, the camera view may go beyond the target space.
To detect intruders, modern security cameras rely on object detection and face recognition Artificial Intelligence (AI) algorithms. These algorithms have no notion of reflective surfaces such as windows or glass doors. From the perspective of these algorithms, an intruder is a person who has been detected in the camera view and whose face is not recognized when compared to the faces of camera owners. Therefore, security cameras may detect, as intruders, persons that are not present in the target space, i.e., a space observed by a security camera with intent of protection, but are visible to the camera in a reflective surface, e.g., a window. This leads to false alarms, unnecessarily disturbing owners of security cameras. In other words, detection of people that are not present in the target space, e.g., a living room in a house, but are visible to the camera through a reflective surface, e.g., a window, raises false alarms and unnecessarily disturbs users of security cameras with, e.g., push notifications on their mobile devices.
Typically, security cameras have options to exclude areas from detections. However, this cannot be used to exclude outdoor objects only as this would also block detection of persons in the area. This problem occurs because the algorithms employed by the security cameras cannot tell whether a person is truly present in the target space.
To tackle this problem, one may re-configure camera view, install a camera to an alternative position or apply masking zones so that no reflective surfaces are present in the camera view. However, this may lead to bad coverage of the observed area and true intruders not being detected. Hence, it is an object of the present invention to improve the determination of true positions of detected objects.
SUMMARY OF THE INVENTION
It is of interest to be able to detect if objects are located in specific spaces without posing any limitations on the presence of transparent reflective surfaces in the camera view.
This and other objects are achieved by providing methods, systems and computer program products having the features of the independent claim. Preferred embodiments are defined in the dependent claims.
The present technology centres around the properties of transparent surfaces, which are known to exhibit reflective characteristics that vary depending on the lighting conditions. Such surfaces can be detected by e.g. a security camera, when the illumination properties of the space observed by the camera are known.
Hence, according to a first aspect of the present technology, there is provided a method for identifying a position of an object in a first space or a second space. The first space is separated from the second space by an, at least partially, transparent surface. An image or a video is registered by a camera in the first space. The image or video depicts at least a part of the transparent surface. Information of illumination conditions in the first space is obtained. Information of illumination conditions in the second space is obtained. A reflection level in light impinging on the camera from the transparent surface is derived based on the illumination conditions in the first space and the illumination conditions in the second space. The reflection level represents an expected degree of reflected light in the light impinging on the camera from the transparent surface. As a response to the reflection level being above a predetermined threshold and to an image of an object being present within the transparent surface, it is determined whether the image of the object within the transparent surface is a reflection image or not based on the reflection level. A position of the object is identified in dependence of the determination.
The present technology thus involves detecting reflective surfaces and discarding people that are not truly present in a target space. The utilization of the illumination conditions in the different spaces makes it easy to distinguish between situations where reflections are not present and situations where reflections may be present. Only the situations where there might be reflections are considered for an accurate determination process. This discrimination reduces the processing requirement considerably. The minimum level of discrimination involves the use of a reflection level having either of two categories, reflection or non-reflection. To this end, in one embodiment the reflection level is expressed as at least two reflection level categories. These two categories then correspond to expected possible reflection and expected non-possible reflection, respectively. This operates as an on/off switch for the actual determination of whether a reflection image is present or not.
In one embodiment, the reflection level is expressed as a percentage of reflected light in the light impinging on the camera from the transparent surface. In some applications, it may be useful to adapt the breaking point between possible reflections and non-possible reflection conditions. In such cases, the reflection level that may assume continuous values may be used, typically the expected ratio of reflected light, and the discrimination between reflection/non-reflection conditions can easily be tuned by changing a threshold value for the reflection level.
A smart home system can determine if it is dark outside using the calendar in combination with the time, or use a light sensor, e.g., embedded into the security camera or a separate ambient light sensor. The smart home system knows the state of every connected luminaire in this system and the room to which the light is assigned. From this it is possible to determine on a per-room basis the degree to which transparent surfaces will exhibit high levels of reflectivity or none at all.
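As an illustrative sketch (not taken from the disclosure) of how a smart home system might combine clock time with luminaire state to classify expected window reflectivity on a per-room basis; the sunrise/sunset values, function names and category labels below are hypothetical:

```python
from datetime import datetime, time

def is_dark_outside(now: datetime, sunrise: time, sunset: time) -> bool:
    """Rough darkness check from clock time; a real system would use
    astronomical data for the site or an ambient light sensor."""
    return not (sunrise <= now.time() < sunset)

def room_reflectivity(luminaires_on: bool, dark_on_other_side: bool) -> str:
    """Classify the expected window reflectivity for a room from the
    luminaire state and the darkness on the other side of the surface."""
    if luminaires_on and dark_on_other_side:
        return "high"  # lit room against a dark space: strong reflections
    return "low"       # otherwise transmission dominates, or little light to reflect

# A winter evening with the room lights on:
night = datetime(2024, 1, 10, 23, 0)
dark = is_dark_outside(night, sunrise=time(8, 30), sunset=time(17, 0))
assert room_reflectivity(luminaires_on=True, dark_on_other_side=dark) == "high"
```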
In one embodiment, at least one of the steps of obtaining information of illumination conditions in the first space and obtaining information of illumination conditions in the second space is performed by determining a luminaire status in a respective space and/or detecting a light level in the respective space. The illumination conditions may preferably be measured and/or concluded from luminaire status information for the different spaces. The novel aspect coming with the luminaire status is that the system knows the positions of the lamps relative to the camera and furthermore the settings of the lamps. Additionally, the locations of the lamps relative to the at least partially transparent surface, e.g. a window, are known. These data are taken into account for determining where the object is present.
In one embodiment, one of the first space and the second space is an outdoor space. The step of obtaining information of illumination conditions in the outdoor space may then comprise noting a present time and date and retrieving an expected illumination condition from a database with the time and date as input. Outdoor spaces are typically significantly influenced by the sun light. By registering the time and date, it is known from astronomical data if the sun is over or under the horizon, and an estimate of whether the light from the sun may provide a light condition of the outdoor space is thus easily achieved. This may then be combined with e.g. status information of luminaire that may be present in the outdoor space.
In one embodiment, the deriving of a reflection level is based on an illumination difference between the first space and the second space. The illumination difference is a quantity that is advantageously used, since it is tightly connected to the existence of high-intensity reflections. Reflections occur when there is a significant difference in light between the front of a reflective transparent surface and the other side. Either side can be inside or outside the house, or any combination of the two. Looking through a partly transparent surface towards a space that has a higher illumination than the space you or your camera is positioned in will rarely present any reflection image of a noticeable relative light level. Conversely, looking through a partly transparent surface towards a space that has a lower illumination than the space you or your camera is positioned in will probably present reflection images of a noticeable relative light level.
The possibility of a transparent reflective surface being present, and its location can optionally be embedded in e.g. the metadata of an image. This information can be used as a post-processing step for a person detection algorithm to determine whether a person is truly present in the observed space.
In one embodiment, the step of identifying a position of the object is performed as follows: if a reflection image is determined to exist, the object is identified to be present in the first space, and if a reflection image is not determined to exist, the object is identified to be present in the second space. A generative adversarial network (GAN) may, as one non-exclusive example, be trained to distinguish between reflections of people and objects and actual images.
In one embodiment, the method for identifying a position of an object comprises the further step of initiating, as a response to the reflection level being indefinite in distinguishing between reflection and transmission and to an image of an object being present within the transparent surface, a change of illumination in at least one of the first space and the second space. In this manner, control of the illumination can assist in deriving conclusive reflection levels.
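A minimal sketch of this embodiment, assuming illustrative thresholds delimiting a "surely transmission" band and a "surely reflection" band; any level between the two is treated as indefinite and triggers a request to change the illumination:

```python
def resolve_reflection_level(level: float,
                             low: float = 0.5,
                             high: float = 2.0) -> str:
    """Decide whether the reflection level is conclusive. Between the
    illustrative 'low' and 'high' thresholds the level is indefinite,
    and a lighting change is requested to force a conclusive re-measurement."""
    if level < low:
        return "transmission"          # direct image through the surface
    if level > high:
        return "reflection"            # reflection image likely
    return "change_illumination"       # e.g. dim or brighten one space, then re-measure

assert resolve_reflection_level(1.0) == "change_illumination"
```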
The present technology is applicable to any kind of object. However, one very advantageous use is, as indicated in the background, the identification of the presence of a living creature, e.g. a human being, in a certain target space. This is advantageously applicable to different kinds of security systems. Thus, in one embodiment, the object is a person.
The present technology may be implemented as a computer program and may be loaded into any kind of processor, e.g. of a smart home system. Hence, according to a second aspect of the present technology, there is provided a computer program product comprising a computer-readable medium having stored thereon a computer program comprising instructions. The instructions, when executed by at least one processor, cause the at least one processor to register an image or a video from a camera in a first space. The image or video depicts at least a part of the transparent surface. The instructions, when executed by at least one processor, cause the at least one processor to further obtain information of illumination conditions in the first space, to obtain information of illumination conditions in the second space, and to derive a reflection level in light impinging on the camera from the transparent surface based on the illumination conditions in the first space and the illumination conditions in the second space. The reflection level represents an expected degree of reflected light in the light impinging on the camera from the transparent surface. The instructions, when executed by at least one processor, further cause the at least one processor to determine, as a response to the reflection level being above a predetermined threshold and to an image of an object being present within the transparent surface, whether the image of the object within the transparent surface is a reflection image or not based on the reflection level, and to identify a position of the object in dependence of the determination.
The present technology may be implemented as a localizing system. Such a localizing system may be a stand-alone system or may be integrated into other types of systems, e.g. of a smart home system. Hence, according to a third aspect of the present technology, there is provided a localizing system, for identifying a position of an object in a first space or a second space. The first space is separated from the second space by an, at least partially, transparent surface. The localizing system comprises a camera, first illumination condition means, second illumination condition means and a localizer control. The camera is situated within the first space. The camera is configured to register an image or a video in the first space. The camera is directed such that the image or video depicts at least a part of the transparent surface. The first illumination condition means is configured for obtaining information of illumination conditions in the first space. The second illumination condition means is configured for obtaining information of illumination conditions in the second space. The localizer control is connected to or integrated with the camera, connected to or integrated with the first illumination condition means and connected to or integrated with the second illumination condition means. The localizer control is configured for deriving a reflection level in light impinging on the camera from the transparent surface based on the illumination conditions in the first space and the illumination conditions in the second space. The reflection level represents an expected degree of reflected light in the light impinging on the camera from the transparent surface. The localizer control is further configured for determining, as a response to the reflection level being above a predetermined threshold and to an image of an object being present within the transparent surface, whether the image of the object within the transparent surface is a reflection image or not.
The localizer control is further configured for identifying a position of the object in dependence of the determination.
In the present disclosure, a “position” defines a place in space of an object. This position may be expressed as a definition of the exact location, or as being located within or outside a particular limited space. The position may be referred to in relative wording, e.g. “outside”, “in front of”, “in level with” an object or space.
In the present disclosure, an “object” is a physically existing item, e.g. a person, an animal or a non-live article of any kind.
In the present disclosure, “space” defines a three-dimensional volume in the real world. The space may be limited by physical items, such as walls, floors and ceilings, but may also be three-dimensional volumes without physical limitations, or only physical limitations in some directions.
In the present disclosure, “transparent surface” is used for characterizing a surface that allows transmission of electromagnetic radiation, in particular light in the visible, ultraviolet and/or infrared wavelength ranges. The transmission of electromagnetic radiation may be partial, i.e. presenting a transmissivity of less than 100%, however still being detectable.
In the present disclosure, “illumination conditions” refer to light-associated conditions of a space. The illumination conditions can be expressed in measurable quantities, such as luminous intensity, but may also be expressed in more general terms, such as “dark”, “light”, “illuminated” etc.
In the present disclosure, “reflection level” refers to the portion of a light intensity that emanates from light that has been reflected at least once between a light source and the detecting means, i.e. the “degree of reflected light”. The reflection level may be expressed in e.g. percentage of a total light intensity, but may also be expressed e.g. in “reflection level categories”, such as “mainly reflected light” and “mainly non-reflected light”.
In the present disclosure, “luminaire status” concerns conditions of lighting arrangements of e.g. a particular space. This could e.g. be the number of operating luminaire arrangements, their illumination strengths, the direction of the light from luminaire arrangements etc. Luminaire status may also be expressed in categories, such as “on” and “off”.
In the present disclosure, “light level” refers to a measurable quantity associated with light, such as illuminance or brightness.
In the present disclosure, “processor” refers to any kind of processing circuitry capable of processing computer instructions and data. Examples of processors include, but are not limited to, one or more microprocessors, one or more Digital Signal Processors (DSPs), one or more Central Processing Units (CPUs), video acceleration hardware, and/or any suitable programmable logic circuitry such as one or more Field Programmable Gate Arrays (FPGAs), or one or more Programmable Logic Controllers (PLCs).
“Computer program” refers to any collection of instructions, typically digital, that when processed by a processor performs a predetermined series of operations. The computer program may be loaded into an operating memory of a processor for execution by the processing circuitry thereof.
In the present disclosure, “computer-readable medium” refers to any nonvolatile medium capable of storing and retrieving computer program instructions and/or data. The computer-readable medium may include one or more removable or non-removable memory devices including, but not limited to a Read-Only Memory (ROM), a Random Access Memory (RAM), a Compact Disc (CD), a Digital Versatile Disc (DVD), a Blu-ray disc, a Universal Serial Bus (USB) memory, a Hard Disk Drive (HDD) storage device, a flash memory, a magnetic tape, or any other conventional memory device.
In the present disclosure, a “computer program product” comprises a computer-readable medium having stored thereon a computer program. The computer program product is normally carried or stored on a computer readable medium, in particular a non-volatile medium.
In the present disclosure, a “localizing system” refers to a system configured for determining a position of an object. The localizing system may e.g. comprise imaging equipment and image analysing equipment. In the present disclosure, an “illumination condition means” refers to any arrangement capable of providing information of illumination conditions of e.g. a space. Non-exclusive examples can be or comprise advanced luminaire systems or light sensor systems.
In the present disclosure, a “localizer control” is a system or device configured for determining a position of an object.
Further objectives of, features of, and advantages with, the present invention will become apparent when studying the following detailed disclosure, the drawings and the appended claims. Those skilled in the art will realize that different features of the present invention can be combined to create embodiments other than those described in the following.
BRIEF DESCRIPTION OF THE DRAWINGS
This and other aspects of the present invention will now be described in more detail, with reference to the appended drawings showing embodiment(s) of the invention.
Fig. 1A schematically illustrates two light spaces separated by a transparent surface,
Fig. 1B schematically illustrates one light space and one dark space separated by a transparent surface,
Fig. 2 is a flow diagram of steps of an embodiment of a method for identifying a position of an object in a first space or a second space,
Figs. 3A-B are diagrams illustrating different embodiments of reflection level categories,
Figs. 4A-D are schematic illustrations of a wall with a window during different lighting conditions, and
Fig. 5 schematically shows an embodiment of a localizing system.
DETAILED DESCRIPTION
Reflection images recorded through e.g. mirrors may be identified and/or eliminated by use of different AI techniques. Such procedures are, however, not always perfect, and false detections may occur despite relatively high processor capacity demands. Furthermore, surfaces that are partially transparent may, due to changing surrounding light conditions, present different levels of reflectivity, which makes such determinations even more difficult. The present technology, however, presents a solution that facilitates handling of partially transparent, partially reflective surfaces.
For a reflective transparent surface, reflections may occur when there is a significant difference in light between the front of a reflective transparent surface and the other side. Either side can be inside or outside the house, or any combination of the two.
Fig. 1A illustrates schematically a situation where a camera 50, situated in a first space 10, records an image based on light impinging on the camera within a certain range of angles 60. The range of angles 60 comprises light impinging from a transparent surface 30 and light impinging from areas outside the transparent surface 30.
A second space 20 is present behind the transparent surface 30. An object 40A, in this case illustrated as a person, is present in the first space 10, but outside the range of angles 60. Nevertheless, some light emanating from the object 40A may be reflected in the transparent surface 30 and in this way reach the camera. An object 40B is present in the second space 20. The second space is highly illuminated, and the light intensity from the object 40B in the second space 20 through the transparent surface 30 to the camera 50 is much higher than the intensity of the reflected light from the object 40A. Even if there is a weak reflection image of the object 40A reaching the camera 50, it disappears totally in comparison with the light coming from the object 40B in the second space 20. No reflection image is detectable.
Fig. 1B illustrates the same arrangement, but when the second space 20 is kept dark. Almost no light is transmitted from the second space 20 through the transparent surface 30 to the camera 50, and in particular very little light from the object 40B. This means that the reflection of the light from the object 40A in the first space 10 reflected in the transparent surface 30 now becomes larger than the light from the second space 20. A reflection image can thereby be detected by the camera 50. However, due to the reflection, the object 40A seems to be placed within the second space 20 as an imaginary object 40A*.
By performing the determination step by using e.g. a generative adversarial network (GAN) trained to distinguish between reflections of people and objects and actual images of objects, it can be concluded whether an image recorded by the camera 50 comprises a reflective image of an object or a direct image of an object. The generative adversarial network (GAN) may for instance be trained using PIR or RF sensing.
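The disclosure does not give details of the trained network, so the following is only a crude hypothetical stand-in illustrating the kind of decision such a classifier makes: a reflection image carries only the small reflected fraction of the light, so its region is typically much dimmer than the directly lit parts of the scene. The function name and the 0.5 factor are assumptions for illustration:

```python
def looks_like_reflection(patch_mean_brightness: float,
                          scene_mean_brightness: float) -> bool:
    """Hypothetical heuristic standing in for a trained classifier:
    flag an image region as a likely reflection when it is much dimmer
    than the overall directly lit scene."""
    return patch_mean_brightness < 0.5 * scene_mean_brightness
```

In practice a trained model would replace this heuristic, but the interface — an image region in, a reflection/direct verdict out — stays the same.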
By this comparison, it is understood that the distinguishing between detecting an imaginary object 40A* in the second space and a real image of an object 40B is dependent on the relative light conditions of the two spaces. The relative light conditions between the two spaces 10, 20 connected through the transparent surface 30 highly influences the relative intensities between transmitted light and reflected light.
A smart home system, e.g., Philips Hue, can obtain information assisting in determining light conditions. For instance, a smart home system can typically determine if it is dark outside a building using the calendar in combination with the time. Alternatively, or in combination, the smart home system may use a light sensor, e.g., embedded into a security camera or a separate ambient light sensor. Yet another piece of information that may give information of light conditions is that the smart home system preferably knows the state of every connected luminaire in the system and the room to which the light is assigned. From this it is possible to determine, on a per-room basis, the degree to which partially transparent surfaces will exhibit high levels of relative reflectivity or none at all.
The light of interest is mainly visible light. However, the presented principles are also valid for other wavelength ranges, such as ultraviolet or infrared, as long as the transmittance is significant for such wavelengths.
Fig. 2 illustrates a flow diagram of steps of an embodiment of a method for identifying a position of an object in a first space or a second space. The first space is separated from the second space by an, at least partially, transparent surface. In step S10, an image or a video is registered by a camera in the first space. The image or video depicts at least a part of the transparent surface. In step S20, information of illumination conditions in the first space is obtained. In step S22, information of illumination conditions in the second space is obtained. The steps S20 and S22 can advantageously be performed simultaneously, or at least both concern illumination conditions present when the image or video was captured. In step S24, a reflection level in light impinging on the camera from the transparent surface is derived based on the illumination conditions in the first space and the illumination conditions in the second space. The reflection level represents an expected degree of reflected light in the light impinging on the camera from the transparent surface.
Typically, the relative strength of reflections is low when the camera is indoors, the lights inside are on, and it is light outside. The relative strength of reflections is high when the camera is indoors, the lights inside are on, and it is dark outside. The relative strength of reflections is likewise high when the camera is indoors, a glass window separates two rooms, and the room housing the camera is fully lit while the other room is completely dark. The relative strength of reflections may be medium when the camera is located outdoors during daylight and the lights inside the house are off. In step S30, it is decided whether there is an object present within the image or video. If no image of an object is present, the procedure ends. If an object is present within the image or video, the process continues to step S32, where it is decided whether the reflection level is above a predetermined threshold. If the reflection level is concluded to be below the predetermined threshold, the process continues to step S42, where it is determined that the image of the object within the transparent surface is a direct image. The process then continues to step S50, described further below.
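The qualitative scenarios above can be summarized as a small lookup, sketched below in Python. The coarse boolean encoding of the lighting situation is an assumption made for this illustration, not a prescribed part of the method.

```python
def relative_reflection_strength(camera_indoor: bool,
                                 camera_side_lit: bool,
                                 other_side_lit: bool) -> str:
    """Qualitative reflection strength at the transparent surface."""
    if camera_side_lit and not other_side_lit:
        # bright camera side facing a dark far side: reflections dominate
        # for an indoor camera; an outdoor camera in daylight sees less
        return "high" if camera_indoor else "medium"
    # far side at least as bright as the camera side: transmission dominates
    return "low"
```

For example, an indoor camera in a lit room looking at a dark room through a glass window yields "high".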
If, as a result of steps S30 and S32, the reflection level is above the predetermined threshold and an image of an object is present within the transparent surface, it is determined in step S40 whether the image of the object within the transparent surface is a reflection image or not, based on the reflection level. If the reflection level exceeds a second threshold, it may be determined that the image of the object within the transparent surface is a reflection image. If the reflection level does not exceed the second threshold, it may be determined that the image of the object within the transparent surface is not a reflection image.
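The decision flow of steps S30 through S50 can be sketched as follows, assuming a numeric reflection level and two given thresholds. The function names and the concrete threshold values are illustrative assumptions; the method does not prescribe specific values.

```python
from typing import Optional

def classify_image(object_present: bool,
                   reflection_level: float,
                   threshold: float,
                   second_threshold: float) -> Optional[str]:
    """Return 'reflection' or 'direct' for a detected object, or None."""
    if not object_present:             # S30: nothing detected, procedure ends
        return None
    if reflection_level <= threshold:  # S32 -> S42: direct image
        return "direct"
    # S40: above the first threshold, compare against the second threshold
    return "reflection" if reflection_level > second_threshold else "direct"

def locate_object(image_kind: str) -> str:
    """S50: a reflection image implies the object is in the first space."""
    return "first space" if image_kind == "reflection" else "second space"
```

A reflection image thus places the object on the camera's side of the surface, a direct image behind it.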
In step S50, a position of the object is identified in dependence of the determination.
In one embodiment, the step of identifying a position of the object is performed as a choice between two states. If a reflection image is determined to exist, the object is identified to be present in the first space. If a reflection image is not determined to exist, the object is identified to be present in the second space.
The most straightforward way of defining the reflection level is to express it in at least two categories. In the simplest version, there are just two levels: “reflection possible” and “reflection not possible”. This is schematically illustrated in Fig. 3A. A threshold 100 could be predetermined to separate a reflection level 102 that may give rise to a reflection image from a reflection level 104 where reflection images are unlikely to be detectable.
In the simplest version of step S40, the determination could simply follow the result of the reflection level analysis. A reflection level above the threshold 100, such as reflection level 102, is determined to be associated with a reflection image, and a reflection level below the threshold 100, such as reflection level 104, is determined to be associated with a direct image. This may, however, give uncertain results when the reflection level is close to the threshold.
In an alternative embodiment, step S40 could e.g. be expanded by an additional analysis when the reflection level is above the threshold 100, concluding whether the image is a reflection or a transmission image. The threshold 100 could then be lowered to a “safe” level, guaranteeing that no reflection level below the threshold could give rise to any reflection image.
Another alternative embodiment is illustrated in Fig. 3B, where the reflection level is divided into three categories: “reflection probable”, “reflection possible” and “reflection not possible”. A reflection level situated above a first threshold 100A, such as reflection level 106, will always give a reflection image. A reflection level situated below a second threshold 100B, such as reflection level 110, will never give rise to a reflection image. For a reflection level in between, such as reflection level 108, the situation is more uncertain, and additional analysis may be of help.
In one embodiment, the deriving of a reflection level is based on an illumination relation between the first space and the second space. In a very basic embodiment, the illumination condition of the first and second spaces could be expressed just in terms of “light” or “dark”. With reference to Fig. 3A, a “reflection possible” reflection level could be assigned if the illumination condition of the second space is “dark” and the illumination condition of the first space is “light”. With reference to Fig. 3B, a “reflection probable” reflection level could be assigned if the illumination condition of the second space is “dark” and the illumination condition of the first space is “light”, a “reflection not possible” reflection level could be assigned if the illumination condition of the second space is “light” and the illumination condition of the first space is “dark”, and a “reflection possible” reflection level could be assigned if the illumination conditions of the first and second spaces are the same.
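The basic illumination-relation mapping for the three categories of Fig. 3B can be sketched as a simple lookup. The string labels follow the text; the function itself is an illustration, not a prescribed implementation.

```python
def reflection_category(first_space: str, second_space: str) -> str:
    """Map coarse 'light'/'dark' states of the two spaces to a category."""
    if first_space == "light" and second_space == "dark":
        return "reflection probable"    # lit camera side, dark far side
    if first_space == "dark" and second_space == "light":
        return "reflection not possible"
    return "reflection possible"        # identical illumination conditions
```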
Fig. 4A illustrates a wall 12 of a first space with a window 32 into a second space. Both the first space and the second space are illuminated. An object 40B in the second space is seen through the window 32. A mirror image 40A* of an object may be very weakly seen as a reflection in the window 32. Typically, this mirror image 40A* is much less intense than the image of the object 40B and may not even be distinguishable from the background light from the second space. Fig. 4B illustrates the same wall 12 when the illumination in the first space is turned off. Since the illumination of the real object giving rise to the mirror image 40A* disappears, the mirror image 40A* also disappears. Fig. 4C illustrates the same wall when both spaces are dark. No objects are seen. Fig. 4D illustrates the same wall when only the first space is illuminated. The object 40B is no longer seen, since there is no illumination in the second space, and the overall light intensity coming from the second space through the window is generally very low. This makes it possible to distinguish the mirror image 40A* despite the relatively low intensity of the reflected image.
One may also use continuous measures of the reflection level. In one embodiment, a ratio between a measured or estimated light intensity of the first space and a measured or estimated light intensity of the second space, scaled by a ratio between an estimated reflectance and an estimated transmittance of the transparent surface, could for example be used as a measure of the reflection level. Other associated or derivable measures can also be used, e.g. a percentage of reflected light in the light impinging on the camera from the transparent surface. Such a reflection level measure could be used as an input into a routine for determining whether the image of the object within the transparent surface is a reflection image or not.
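The two continuous measures mentioned above can be written out as follows. The variable names and the sample reflectance/transmittance values in the usage note are assumptions for illustration.

```python
def reflection_level(intensity_first: float,
                     intensity_second: float,
                     reflectance: float,
                     transmittance: float) -> float:
    """(I1 / I2) * (R / T): values above 1 suggest reflected light dominates."""
    return (intensity_first / intensity_second) * (reflectance / transmittance)

def reflected_fraction(intensity_first: float,
                       intensity_second: float,
                       reflectance: float,
                       transmittance: float) -> float:
    """Fraction of the light reaching the camera that is reflected."""
    refl = intensity_first * reflectance
    trans = intensity_second * transmittance
    return refl / (refl + trans)
```

For a lit first space (e.g. 500 lux), a dark second space (e.g. 10 lux) and plain glass (reflectance around 0.08, transmittance around 0.9), the reflected fraction exceeds 0.8, consistent with the situation of Fig. 1B.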
The location of transparent surfaces can be provided to the system as a part of an installation procedure. However, transparent surfaces can also be detected by their light intensity in different situations. In one embodiment, the method for identifying a position of an object comprises a further step of localizing the transparent surface by a vision algorithm. In one embodiment, the location of the transparent reflective surface can be retrieved by a vision algorithm trained to detect windows, for example by their frames. For example, in the absence of artificial lighting, the overall light intensity of a window at daytime is higher than that of its surroundings, as lux levels inside a building are far below the lux level outside. Similarly, a connected lighting system can be used to combine the ambient light levels and the lighting state. This may be apparent when comparing the different situations of Figs 4A-D. Transparent surfaces in the image can be located by an algorithm embedded in the camera, combining the camera images with the ambient light levels at different times of the day or with the luminaire status. The reflectivity of the identified transparent surface depends on the light levels during the detection of a person. The location of the surface in the camera view and the reflectivity are used by the system to determine whether a person is truly present in the observed space. The information on the chance of reflections, i.e. the reflection level, and the location of the transparent surface may finally be combined with the video footage, e.g. as metadata.
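One simple realization of such intensity-based window localization, for a fixed camera view, is to compare per-region mean brightness between frames captured under different lighting situations; window regions change far more than wall regions. The block size and threshold below are illustrative assumptions.

```python
import numpy as np

def window_mask(frame_day: np.ndarray, frame_night: np.ndarray,
                block: int = 8, threshold: float = 50.0) -> np.ndarray:
    """Boolean mask, one entry per block, of regions with a strong
    brightness change between the two grayscale frames."""
    h, w = frame_day.shape
    hb, wb = h // block, w // block
    day = frame_day[:hb * block, :wb * block].reshape(hb, block, wb, block)
    night = frame_night[:hb * block, :wb * block].reshape(hb, block, wb, block)
    diff = np.abs(day.mean(axis=(1, 3)) - night.mean(axis=(1, 3)))
    return diff > threshold
```

Regions flagged by the mask are candidate transparent surfaces whose reflectivity must be tracked against the current light levels.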
Preferably, at least one of the steps of obtaining information of illumination conditions in the first space and obtaining information of illumination conditions in the second space is performed by determining a luminaire status in a respective space, and/or detecting a light level in the respective space. A smart lighting system typically knows each state, i.e. on, off or dimmed, of each luminaire in one or both spaces or rooms and can determine the possible amount of light reflection in e.g. a glass window. If such luminaire status is lacking for one or both spaces, light sensors could assist in finding the illumination conditions. Such sensors may be integrated in the camera or can be separate sensors.
When there is a connection to a smart lighting system, further advantages can be achieved. In one embodiment, if the deriving of the reflection level gives a result that is non-conclusive or only weakly conclusive concerning the existence of a reflection image, the smart lighting system could be used for changing the illumination conditions in either or both of the spaces, in order to create more conclusive reflection levels.
For instance, if a security system is installed for detecting any person appearing in the second space and an image of a person is present within the transparent surface, but the derived reflection level is inconclusive about the degree of reflection, any identification of a person being located in the second space also becomes inconclusive. By increasing the illumination in the second space and/or decreasing the illumination in the first space, the reflection level can be forced to a state showing mainly transmitted light. If the image of the person is still present, a conclusive detection of a person in the second space can be made. If the image of the person vanishes or becomes weaker, the image is likely associated with a person in the first space instead.
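This active disambiguation can be sketched as a short procedure. The two callables stand in for the smart lighting system and the image analysis and are assumptions of this illustration.

```python
def disambiguate(set_second_space_light, detect_object_in_surface) -> str:
    """Force a transmission-dominated state and re-observe the surface."""
    set_second_space_light(True)          # raise illumination behind the glass
    if detect_object_in_surface():
        return "person in second space"   # direct image persists: conclusive
    return "likely reflection of person in first space"
```

In practice the lighting change can be brief and subtle, e.g. a short dim-up of the far-side luminaires, just long enough for the camera to re-check the surface.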
In other words, in one embodiment, the method for identifying a position of an object comprises the further step of initiating, as a response to the reflection level being indefinite in distinguishing between reflection and transmission and to an image of an object (40A, 40B) being present within the transparent surface (30), a change of illumination in at least one of the first space (10) and the second space (20).
If one of the first space and the second space is an outdoor space, further alternatives exist. The step of obtaining information of illumination conditions in the outdoor space may then comprise noting the present time and date. This information may also be included e.g. in the metadata of the image. Furthermore, an expected illumination condition can then be retrieved from a database with the time and date as input. By knowing the location of the camera, it can easily be determined whether or not sunlight is available. During nights, the base assumption could be that the space is dark unless additional illumination is present, and during daytime, light conditions can be assumed.
In another embodiment, the camera may detect that it is dark outside and then switch to a night mode. Such a setting will change the behaviour of determining reflective surfaces.
The present ideas are advantageously applied to the localization of persons, i.e. when the object is a person. This will e.g. be of interest in security systems surveying different areas for unauthorized persons, such as burglar detection systems or systems monitoring areas that are hazardous for people.
However, the basic principles are also applicable to objects that are not persons. The above presented ideas use light and image analysis for finding e.g. the existence of a person in a particular space. In order to reduce mistakes, in one embodiment, presence or motion detection sensors can be used in combination to train the algorithm. Non-exclusive examples of such presence or motion detection sensors use PIR or RF sensing. Only if both the camera and the sensor(s) detect a person is the person considered to be truly in the observed space.
In one embodiment, the user can optionally enable and disable the detection through transparent reflective surfaces. This may be useful in the case where the presence of persons within a certain space is prohibited only during some time periods, e.g. during the night, but not during other periods. This may also be coordinated with e.g. the activation and deactivation of an alarm system.
The identification of where an object is situated typically distinguishes between the first and second space. However, in some cases, only one of the spaces is of interest, e.g. for an alert operation. An object identified to be in the other space may then be left without action. For instance, the indoor camera may be configured to notify the user of the system about the existence of a person behind the reflective surface and to ignore persons in front of the reflective surface.
The present technology may advantageously be implemented as a computer program. To that end, it may be loaded into any kind of processor, e.g. of a smart home system. One way to provide the computer program is to have it stored on a computer program product. Such a computer program product comprises a computer-readable medium having stored thereon a computer program comprising instructions. The instructions, when executed by at least one processor, cause the processor(s) to register an image or a video from a camera in a first space. The image or video depicts at least a part of the transparent surface. The instructions further cause the processor(s) to obtain information of illumination conditions in the first space, to obtain information of illumination conditions in the second space, and to derive a reflection level in light impinging on the camera from the transparent surface based on the illumination conditions in the first space and the illumination conditions in the second space. The reflection level represents an expected degree of reflected light in the light impinging on the camera from the transparent surface. The instructions further cause the at least one processor to determine, as a response to the reflection level being above a predetermined threshold and to an image of an object being present within the transparent surface, whether the image of the object within the transparent surface is a reflection image or not, and to identify a position of the object in dependence of the determination.
The above presented methods are preferably performed by a localizing system. Fig. 5 schematically illustrates a localizing system 1. The localizing system 1 is configured for identifying a position of an object in a first space 10 or a second space 20. The first space 10 is separated from the second space 20 by an, at least partially, transparent surface 30. The localizing system 1 comprises a camera 50 situated within the first space 10. The camera is configured to register an image or a video in the first space 10. The camera is directed such that the image or video depicts at least a part of the transparent surface 30.
The localizing system 1 comprises first illumination condition means 60A for obtaining information of illumination conditions in the first space 10. This first illumination condition means 60A could e.g. be a part of an illumination system providing information about an illumination condition of luminaires 60A in the first space 10. The first illumination condition means 60A could also, alternatively or in combination, be a light sensor sensing the light intensity within the first space 10. The localizing system 1 further comprises second illumination condition means 60B for obtaining information of illumination conditions in the second space 20. In analogy with the above, this second illumination condition means 60B could e.g. be a part of an illumination system providing information about an illumination condition of luminaires 60B in the second space 20. The second illumination condition means 60B could also, alternatively or in combination, be a light sensor sensing the light intensity within the second space 20.
The localizing system 1 also comprises a localizer control 70. This localizer control is connected to or integrated with the camera 50, connected to or integrated with the first illumination condition means 60A and connected to or integrated with the second illumination condition means 60B. The localizer control 70 is configured for deriving a reflection level in light impinging on the camera 50 from the transparent surface 30 based on the illumination conditions in the first space 10 and the illumination conditions in the second space 20. As discussed above, the reflection level represents an expected degree of reflected light in the light impinging on the camera 50 from the transparent surface 30. In one embodiment, the localizer control 70 can send the reflection level to the camera 50 and the camera 50 may embed this into the video footage. The localizer control 70 is further configured for determining, as a response to the reflection level being above a predetermined threshold and to an image of an object being present within the transparent surface 30, whether the image of the object within the transparent surface 30 is a reflection image or not. The localizer control 70 is also configured for identifying a position of the object in dependence of the determination. The localizer control 70 may be integrated in a smart lighting system or a security system.
The person skilled in the art realizes that the present invention by no means is limited to the preferred embodiments described above. On the contrary, many modifications and variations are possible within the scope of the appended claims.

Claims

1. A method for identifying a position of an object (40A, 40B) in a first space (10) or a second space (20), wherein the first space (10) is separated from the second space (20) by an, at least partially, transparent surface (30), comprising the steps of: registering (S10) an image or a video by a camera (50) in the first space (10), wherein the image or video depicts at least a part of the transparent surface (30), obtaining (S20) information of illumination conditions in the first space (10), obtaining (S22) information of illumination conditions in the second space (20), deriving (S24) a reflection level in light impinging on the camera (50) from the transparent surface (30) based on the illumination conditions in the first space (10) and the illumination conditions in the second space (20), wherein the reflection level represents an expected degree of reflected light in the light impinging on the camera (50) from the transparent surface (30), determining (S30) if there is an object present within the image or video, determining (S40), as a response to the reflection level being above a predetermined threshold (100, 100A, 100B) and to an image of the object (40A, 40B) being present within the transparent surface (30), whether the image of the object within the transparent surface (30) is a reflection image or not based on the reflection level, and identifying (S50) a position of the object (40A, 40B) in dependence of the determination.
2. The method according to claim 1, wherein the reflection level is expressed as at least two reflection level categories (102, 104, 106, 108, 110).
3. The method according to claim 1, wherein the reflection level is expressed as a percentage of reflected light in the light impinging on the camera (50) from the transparent surface (30).
4. The method according to any one of claims 1 to 3, wherein at least one of the steps of obtaining (S20) information of illumination conditions in the first space (10) and obtaining (S22) information of illumination conditions in the second space (20) is performed by at least one of: determining a luminaire status in a respective space (10, 20), and detecting a light level in the respective space (10, 20).
5. The method according to any one of claims 1 to 4, wherein one of the first space (10) and the second space (20) is an outdoor space, wherein the step of obtaining (S20, S22) information of illumination conditions in the outdoor space comprises noting a present time and date and retrieving an expected illumination condition from a database with the time and date as input.
6. The method according to any one of claims 1 to 5, wherein the deriving (S24) of a reflection level is based on an illumination relation between the first space (10) and the second space (20).
7. The method according to any one of claims 1 to 6, wherein the step of identifying (S50) a position of the object (40A, 40B) is performed as: if a reflection image is determined to exist, the object (40A) is identified to be present in the first space (10), and if a reflection image is not determined to exist, the object (40B) is identified to be present in the second space (20).
8. The method according to any one of claims 1 to 7, comprising the further step of: initiating, as a response to the reflection level being indefinite in distinguishing between reflection and transmission and to an image of an object (40A, 40B) being present within the transparent surface (30), a change of illumination in at least one of the first space (10) and the second space (20).
9. The method according to any one of claims 1 to 8, wherein the object (40A, 40B) is a person.
10. A computer program product comprising a computer-readable medium having stored thereon a computer program comprising instructions, which when executed by at least one processor, cause the at least one processor to: register an image or a video by a camera (50) in a first space (10), wherein the image or video depicts at least a part of a transparent surface (30), obtain information of illumination conditions in the first space (10), obtain information of illumination conditions in a second space (20), derive a reflection level in light impinging on the camera (50) from the transparent surface (30) based on the illumination conditions in the first space (10) and the illumination conditions in the second space (20), wherein the reflection level represents an expected degree of reflected light in the light impinging on the camera (50) from the transparent surface (30), determine if there is an object present within the image or video, determine, as a response to the reflection level being above a predetermined threshold (100, 100A, 100B) and to an image of an object (40A, 40B) being present within the transparent surface (30), whether the image of the object (40A, 40B) within the transparent surface (30) is a reflection image or not based on the reflection level, and identify a position of the object (40A, 40B) in dependence of the determination.
11. A localizing system (1), for identifying a position of an object (40A, 40B) in a first space (10) or a second space (20), wherein the first space (10) is separated from the second space (20) by an, at least partially, transparent surface (30), the localizing system (1) comprising: a camera (50) situated within the first space (10), the camera (50) being configured to register an image or a video in the first space (10), wherein the camera (50) is directed such that the image or video depicts at least a part of the transparent surface (30), first illumination condition means (60A) for obtaining information of illumination conditions in the first space (10), second illumination condition means (60B) for obtaining information of illumination conditions in the second space (20), a localizer control (70), connected to or integrated with the camera (50), connected to or integrated with the first illumination condition means (60A) and connected to or integrated with the second illumination condition means (60B), the localizer control (70) being configured for deriving a reflection level in light impinging on the camera (50) from the transparent surface (30) based on the illumination conditions in the first space (10) and the illumination conditions in the second space (20), wherein the reflection level represents an expected degree of reflected light in the light impinging on the camera (50) from the transparent surface (30), wherein the localizer control (70) is further configured for determining if there is an object present within the image or video, and determining, as a response to the reflection level being above a predetermined threshold (100, 100A, 100B) and to an image of an object (40A, 40B) being present within the transparent surface (30), whether the image of the object (40A, 40B) within the transparent surface (30) is a reflection image or not based on the reflection level, and wherein the localizer control (70) is further configured for identifying a position of the object (40A, 40B) in dependence of the determination.
PCT/EP2025/052791 2024-02-09 2025-02-04 Determination of objects in spaces connected by transparent surfaces Pending WO2025168547A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
EP24156765 2024-02-09
EP24156765.0 2024-02-09

Publications (1)

Publication Number Publication Date
WO2025168547A1 true WO2025168547A1 (en) 2025-08-14

Family

ID=89900992

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/EP2025/052791 Pending WO2025168547A1 (en) 2024-02-09 2025-02-04 Determination of objects in spaces connected by transparent surfaces

Country Status (1)

Country Link
WO (1) WO2025168547A1 (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120147194A1 (en) * 2010-12-14 2012-06-14 Xerox Corporation Determining a total number of people in an ir image obtained via an ir imaging system
US9082202B2 (en) * 2012-09-12 2015-07-14 Enlighted, Inc. Image detection and processing for building control
US20210042903A1 (en) * 2019-08-05 2021-02-11 Korea Advanced Institute Of Science And Technology Electronic device and method of identifying false image of object attributable to reflection in indoor environment thereof
US20230394824A1 (en) * 2022-06-07 2023-12-07 Axis Ab Detection of reflection objects in a sequence of image frames



Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 25702306

Country of ref document: EP

Kind code of ref document: A1