
WO2025017284A1 - A method of reading an optically readable security element to distinguish areas of the element - Google Patents


Info

Publication number
WO2025017284A1
Authority
WO
WIPO (PCT)
Prior art keywords
area
reading
illumination
identity
security element
Application number
PCT/GB2024/051824
Other languages
French (fr)
Inventor
Hugo BARRELLON-KENDALL
David Ian Howarth
Robert James Young
Original Assignee
Quantum Base Limited
Priority claimed from GB2310822.8A (published as GB2631779A)
Application filed by Quantum Base Limited filed Critical Quantum Base Limited
Publication of WO2025017284A1

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06KGRAPHICAL DATA READING; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
    • G06K7/00Methods or arrangements for sensing record carriers, e.g. for reading patterns
    • G06K7/10Methods or arrangements for sensing record carriers, e.g. for reading patterns by electromagnetic radiation, e.g. optical sensing; by corpuscular radiation
    • G06K7/14Methods or arrangements for sensing record carriers, e.g. for reading patterns by electromagnetic radiation, e.g. optical sensing; by corpuscular radiation using light without selection of wavelength, e.g. sensing reflected white light
    • G06K7/1404Methods for optical code recognition
    • G06K7/1408Methods for optical code recognition the method being specifically adapted for the type of code
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06KGRAPHICAL DATA READING; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
    • G06K7/00Methods or arrangements for sensing record carriers, e.g. for reading patterns
    • G06K7/10Methods or arrangements for sensing record carriers, e.g. for reading patterns by electromagnetic radiation, e.g. optical sensing; by corpuscular radiation
    • G06K7/10544Methods or arrangements for sensing record carriers, e.g. for reading patterns by electromagnetic radiation, e.g. optical sensing; by corpuscular radiation by scanning of the records by radiation in the optical part of the electromagnetic spectrum
    • G06K7/10712Fixed beam scanning
    • G06K7/10722Photodetector array or CCD scanning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06KGRAPHICAL DATA READING; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
    • G06K7/00Methods or arrangements for sensing record carriers, e.g. for reading patterns
    • G06K7/10Methods or arrangements for sensing record carriers, e.g. for reading patterns by electromagnetic radiation, e.g. optical sensing; by corpuscular radiation
    • G06K7/10544Methods or arrangements for sensing record carriers, e.g. for reading patterns by electromagnetic radiation, e.g. optical sensing; by corpuscular radiation by scanning of the records by radiation in the optical part of the electromagnetic spectrum
    • G06K7/10792Special measures in relation to the object to be scanned

Definitions

  • the method may comprise correcting the second reading based on the first reading.
  • Correcting the second reading may mean calibrating the second reading based on the first reading (e.g., as mentioned, using the reference area as a baseline for calibration).
  • the first area 210 (e.g., the reference area) may be a known colour (e.g., the colour of the reference area may be stored in the engineered component of the first area 210) or have a known distribution of RGB values, and used to calibrate the colour read for the second area 220 to, advantageously, enable accurate reading of the second area 220, whose colour may be affected by ageing or irregularities resulting from its manufacture, for instance.
  • this calibration, advantageously, accounts for different environmental conditions (e.g., lighting/shadow gradients) to which the ORSE 200 may be subjected.
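By way of a non-limiting sketch (not part of the application's disclosure), such a colour calibration could be implemented as a per-channel gain correction; the function name, the known_rgb default and the array layout are assumptions:

```python
import numpy as np

def calibrate_colour(second_area: np.ndarray,
                     reference_area: np.ndarray,
                     known_rgb=(200, 200, 200)) -> np.ndarray:
    """Correct second-area pixels using the reference (first) area.

    second_area / reference_area: HxWx3 uint8 RGB crops of the captured image.
    known_rgb: the predetermined colour the reference area should appear as
    (hypothetical value; it might be stored in the engineered component or a
    data store).
    """
    measured = reference_area.reshape(-1, 3).mean(axis=0)   # observed mean RGB
    gain = np.asarray(known_rgb, dtype=float) / np.maximum(measured, 1e-6)
    corrected = second_area.astype(float) * gain            # per-channel gain
    return np.clip(corrected, 0, 255).astype(np.uint8)
```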
  • the first area may have a known shape or geometry, and its projection/perspective can be used to correct (e.g., calculate) a geometry of the second area 220 to, advantageously, enable accurate reading of the second area 220.
  • the first and second areas 210, 220 are square in shape. If the first area 210 is read and found (e.g., using a Hough transform) to be a different shape, any correction or processing required to transform the first area 210 to a square shape can be used (e.g., using a homography matrix transformation) in the reading or processing of the second area 220. That is, a perspective, rotation, and/or magnification of the first area (e.g., the area comprising or being a QR code or other known feature) can be determined and applied when reading the second area 220, as sketched below.
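Purely as an illustrative sketch, such a perspective correction could be performed with OpenCV, assuming the four corners of the first area have already been detected (the corner ordering and nominal side length are assumptions):

```python
import cv2
import numpy as np

def correct_perspective(image, first_area_corners, side=100):
    """Map the first area's detected quadrilateral back to the square it is
    known to be, and apply the same homography to the whole image so that
    the second area can be read undistorted.

    first_area_corners: 4x2 array of detected corners, assumed ordered
    top-left, top-right, bottom-right, bottom-left.
    """
    src = np.asarray(first_area_corners, dtype=np.float32)
    dst = np.float32([[0, 0], [side, 0], [side, side], [0, side]])
    H, _ = cv2.findHomography(src, dst)      # homography matrix transformation
    h, w = image.shape[:2]
    return cv2.warpPerspective(image, H, (w, h))
```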
  • the method may comprise, in addition to or as an alternative to correcting the second area 220, locating the second area 220 based on the first area 210.
  • Locating the second area 220 may mean guiding a user to the second area 220.
  • reading S1 the first area 210 as part of the first reading may prompt a signal to be transmitted to a display of the image capturing device 100 to cause the display to display directional information based on the spatial information guiding the user to the second area 220.
  • reading S1 the first area 210 as part of the first reading may prompt a signal to be transmitted to a speaker of the image capturing device 100 to cause the speaker to output sound information based on the spatial information guiding the user to the second area 220.
  • the step of locating the second area 220 based on the first area 210 may not be visible to the user but may instead be implemented (e.g., automatically) by software.
  • efficient reading of the second area 220, hence authentication of the object to which the ORSE 200 is attached, is facilitated.
  • the information about the second area 220 enables the identity to be extracted therefrom, even if the second area 220 is (e.g., for enhanced security) not easily visible to the naked eye, because a user is able to locate the second area 220 efficiently.
  • This advantage may allow more freedom of design or material choice for the second area 220.
  • haptic feedback may be used to guide the user, thereby, advantageously, enabling a visually-impaired user to locate the second area 220.
  • the first area 210 may comprise information about the second area 220 that indicates the object to which the ORSE 200 is attached. That is, ORSEs 200 with particular first area 210 properties (e.g., as information) may enable a determination that the ORSE 200 is provided on a particular type or kind of object. In this way, the information of the first area 210 may inform reading of the second area 220, for example by establishing what properties of the second area 220 should be expected or predicted. For example, information of the first area 210 may indicate that, for example, a certain area-illumination interaction, or level thereof, should be expected at the second area 220.
  • specific types of ORSEs 200 used with specific object types or kinds can be more accurately and securely used to authenticate the object.
  • the first area 210 and the second area 220 are shown in Figure 5 as discrete (i.e., separate and distinct), vertically aligned squares.
  • the first and second areas 210, 220 being distinct and spaced apart may, advantageously, facilitate easier distinction and reading (or even manufacturing) thereof.
  • any arrangement of the first area 210 and the second area 220 is possible.
  • the first area 210 and the second area 220 may correspond to adjacent and contiguous areas (e.g., the ORSE 200 may be divided between the first area 210 and the second area 220 along a longitudinal axis) of the ORSE 200 or adjacent and non-contiguous areas (e.g., an equal number of equally sized strips).
  • the first and second areas 210, 220 may partially or completely overlap, which, advantageously, may reduce the required space for both or make it harder to copy or replicate one or both areas 210, 220.
  • the first area 210 has low symmetry (e.g., is asymmetric) to facilitate determination, for instance, of a direction of the second area 220 relative to the first area 210.
  • the spatial information may include the geometry of the first area 210.
  • the first area 210 may be a border area of the second area 220, as shown in Figure 2.
  • the first area and second area may be located.
  • the captured image of the ORSE 200 captured by the image capturing device 100 (or an appropriate portion or region thereof) may be taken as a third area comprising the first area 210 and the second area 220, and the first area 210 and second area 220 identified in the third area.
  • the captured image is divided into a plurality of regions. Dividing the captured image may otherwise be referred to as separating or delineating the captured image.
  • the third area is divided into a plurality of regions.
  • Each region of the plurality of regions comprises a first area and a second area. This may otherwise be referred to as each region of the plurality of regions comprising a region of a (or the) first area and a region of a (or the) second area.
  • the first area 210 may be a reference area or region, and the second area 220 may be a tag area or region (i.e., an area or region comprising some or all of the tag/unique component encoding the identity).
  • the impact of non-uniform illumination of the ORSE 200 can be minimised. That is, the effect of uneven lighting, lighting gradients (possibly due to the flash of the image capturing device 100, an ambient light source, or shadow), or directional light sources, can be minimised. This is because multiple regions can be analysed separately.
  • Figure 6 shows a captured image of the third area 230 divided (or separated/delineated) into a plurality of regions 710a - d.
  • Each of the plurality of regions 710a - d comprises a first area 210a - d and a second area 220a - d.
  • the third area is divided into quadrants. It will nevertheless be appreciated that more or fewer regions (i.e., divisions) may be used.
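A minimal sketch of such a division into quadrants, assuming the third area has already been cropped from the captured image as a NumPy array:

```python
import numpy as np

def divide_into_quadrants(third_area: np.ndarray) -> list:
    """Split the captured third-area image into four regions so that uneven
    illumination can be handled per region; each quadrant is assumed to
    contain part of the first (reference) area and part of the second (tag)
    area."""
    h, w = third_area.shape[:2]
    return [third_area[:h // 2, :w // 2],   # top-left
            third_area[:h // 2, w // 2:],   # top-right
            third_area[h // 2:, :w // 2],   # bottom-left
            third_area[h // 2:, w // 2:]]   # bottom-right
```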
  • a histogram is produced based on an optical parameter, for example a histogram of brightness in a colour channel of interest (e.g., the red or green colour channel).
  • Figure 7(a) shows a first histogram of brightness counts in a colour channel of interest for one region 710a of the plurality of regions 710a - d.
  • Figure 7(b) shows a second histogram of brightness counts in the colour channel of interest for the first area 210a.
  • Figure 7(c) shows a third histogram of brightness counts in the colour channel of interest for the second area 220a.
  • the histogram data is filtered to reduce the impact of noise. This is illustrated in the histograms of Figures 7(b) and 7(c) by the introduction of a noise floor 702 which, in this example, is applied at 1% of the total number of counts. Brightness counts falling below this noise floor 702 (i.e., having fewer counts in that brightness channel than the noise floor) are removed. As shown, this filtering step is performed for both first area 210a and second area 220a of the selected region 710a.
  • a mean brightness value for the first area 210a - d is calculated, and a mean brightness value for the second area 220a - d is calculated, based on the filtered histograms from Step S318.
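In outline, the histogram filtering and averaging described above might be sketched as follows; the 256 bins and the 1% noise floor follow the example, while the function name is an assumption:

```python
import numpy as np

def filtered_mean_brightness(channel: np.ndarray,
                             floor_fraction: float = 0.01) -> float:
    """Mean brightness of one colour channel after removing histogram bins
    whose count falls below a noise floor (here 1% of the total counts)."""
    counts, _ = np.histogram(channel.ravel(), bins=256, range=(0, 256))
    floor = floor_fraction * counts.sum()
    keep = counts >= floor                   # bins above the noise floor
    levels = np.arange(256)
    return float((levels[keep] * counts[keep]).sum() / counts[keep].sum())
```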
  • a first value of an optical parameter is determined for the first reading (i.e., of the first area) and a second value of the optical parameter is determined for the second reading (i.e., of the second area).
  • the first value and the second value are compared. Specifically, a ratio of the mean brightness value for the second area 220 and the mean brightness value for the first area 210 is calculated according to: R = (mean brightness of the second area 220) / (mean brightness of the first area 210).
  • at Step S324, the comparison of the first value and the second value is used to determine whether the first area-illumination interaction occurring at the first area 210 is different to the second area-illumination interaction occurring at the second area 220.
  • if R > 1, it can be determined that the first area-illumination interaction occurring at the first area is different to the second area-illumination interaction occurring at the second area.
  • if R < 1, it can likewise be determined that the first area-illumination interaction occurring at the first area is different to the second area-illumination interaction occurring at the second area. However, this is typically not to be expected as, in implementation, the mean brightness value for the second area 220 (i.e., the area comprising the identity) is expected to be greater than the mean brightness value for the first area 210 for a genuine unique component.
  • if R ≈ 1, it can be determined that the first area-illumination interaction occurring at the first area is the same as the second area-illumination interaction occurring at the second area. If such a situation occurs, the ORSE 200 may be determined not to comprise an extractable (or genuine) identity. The ORSE 200, or the reading thereof, may be rejected at S328. Alternatively, even when it is determined that the first area-illumination interaction occurring at the first area 210 is the same as the second area-illumination interaction occurring at the second area 220, the identity can still be extracted (or attempted to be extracted) from the second area 220.
  • an alert or other output signal may be provided to alert a user, or the system, that the extraction has been performed, or attempted, despite the area-illumination interactions being the same.
  • This is advantageous in warning a user of a potential non-genuine identity, whilst still allowing the method to continue to attempt to authenticate the object to which the ORSE 200 is attached.
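Bringing the ratio test together, one hedged sketch of the decision logic is below; the tolerance used to judge R ≈ 1 is an assumed tuning parameter, not a value taken from the application:

```python
def classify_interaction(mean_second: float, mean_first: float,
                         tolerance: float = 0.05) -> str:
    """Compare mean brightness values via the ratio R and decide whether
    the two area-illumination interactions differ."""
    R = mean_second / mean_first
    if abs(R - 1.0) <= tolerance:
        return "same"         # no distinct interaction: reject, or warn and
                              # still attempt extraction, as described above
    return "different"        # e.g. emission detected at the second area
```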
  • the identity may be extracted from the second area 220.
  • the extracted identity may be compared against a known identity, for example one stored in data, to authenticate the object to which the ORSE 200 is attached.
  • the second area 220 comprises a unique (e.g., randomised) component (e.g., a random deterministic feature), encoding the identity.
  • the randomised component encoding the identity, as compared with an engineered component encoding the identity, advantageously engenders a more robust barrier to fraudulent reading of the ORSE 200.
  • a specific example of a situation in which the first area-illumination interaction occurring at the first area 210 is different to the second area-illumination interaction occurring at the second area 220 is where the first area 210 is a reference area and the second area is, or comprises, one or more emitters.
  • the reference area may be, for example, a blank area.
  • the one or more emitters are arranged to be read via emission radiation emitted therefrom.
  • the one or more optical emitters may be arranged to be excited by excitation radiation.
  • the one or more emitters may be optical emitters, that is, emitting electromagnetic radiation in visible wavelengths.
  • the emitters may emit in other bands of the electromagnetic spectrum, for example UV emission.
  • the one or more emitters may serve as the component that provides or serves as the unique identity.
  • the ORSE 200 being read via emission emitted therefrom provides a more robust barrier to fraudulent reading, more readily preventing spoofing or copying by, for instance, simply replicating (e.g., by printing) a bar code, QR code, or similar.
  • This advantage is particularly pronounced when one or more (e.g., hundreds, thousands, millions, or more) emitters are distributed randomly. For instance, this effect may be achieved using quantum dots, flakes of 2D materials, molecules (e.g., small molecules), atomic defects or vacancies, plasmonic structures, or similar.
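Purely to illustrate how randomly distributed emitters could yield a machine-readable identity (the application does not specify an encoding), one hypothetical scheme records which grid cells of the second area contain bright emission:

```python
import numpy as np

def emitter_fingerprint(emission_image: np.ndarray,
                        thresh: int = 200, grid: int = 16) -> list:
    """Derive a simple bit pattern from a greyscale image of the second
    area: 1 if a grid cell contains emission above the threshold, else 0."""
    h, w = emission_image.shape[:2]
    bright = emission_image > thresh
    bits = []
    for i in range(grid):
        for j in range(grid):
            cell = bright[i * h // grid:(i + 1) * h // grid,
                          j * w // grid:(j + 1) * w // grid]
            bits.append(1 if cell.any() else 0)
    return bits
```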
  • the one or more emitters may be configured to exhibit photoluminescence when illuminated. This photoluminescence may be detected as emission radiation from the one or more emitters.
  • the first area 210 is a reference area provided as a border around the second area 220, and the second area 220 comprises one or more emitters arranged to be read via emission radiation emitted therefrom.
  • the one or more emitters serve as the component that provides, or serves as, the unique identity.
  • the one or more emitters are arranged to be excited by illumination in the form of excitation radiation.
  • the image capturing device 100 is configured to emit the excitation radiation (and thereby illuminate the first area 210 and second area 220).
  • the method is performed as described above in relation to Figure 3, with a ratio of the mean brightness values being calculated as above.
  • the first reading and the second reading can be used S300 to determine whether a first area-excitation radiation interaction occurring at the first area 210 is different to a second area-excitation radiation interaction occurring at the second area 220.
  • the mean brightness value of the second area 220 is greater than that of the first area 210 (which may be absent emitters, or may comprise fewer emitters).
  • the emission from the second area 220 will be a combination of emission radiation (e.g., photoluminescence) and reflected light, which can be distinguished from only reflected light from the first area 210, by the method described above.
  • while emission of emission radiation from the one or more emitters is described above as a highly advantageous example, it will be appreciated that benefits of the invention may also be obtained where other area-illumination interactions are monitored or used. For example, absorption, refraction, scattering and even different properties (e.g., magnitude, direction) of reflection may be considered to determine that a different area-illumination interaction has taken place at the first area 210 vs the second area 220.
  • the image capturing device 100 may be a terminal device, such as a smartphone.
  • the image capturing device 100 may be configured to provide a source of illumination, for example to emit excitation radiation to excite the at least one optical emitter (e.g., from an electromagnetic radiation source, such as a flash or LED).
  • the source of illumination may be a UV LED, which advantageously can excite visible fluorescence without reflected light being measured.
  • the image capturing device 100 comprises a reader 110 and a processor 120.
  • the reading S1/S100, S200 of the first area 210 and the second area 220 is performed by the reader 110, which could include or be a sensor.
  • the processor is configured to perform one or more of: using the first reading and the second reading to determine whether a first area-illumination interaction occurring at the first area 210 is different to a second area-illumination interaction occurring at the second area 220; correcting the second reading based on the first reading; locating the second area 220 based on the first area 210; extracting information from the first area 210; and extracting the identity from the second area 220.
  • the processor 120 may be configured to perform the aforementioned steps locally (i.e., on the image capturing device 100 itself).
  • Figure 9 shows a system comprising the image capturing device 100, the ORSE 200 and a data store 400.
  • the data store 400 may be used to store information about the second area 220.
  • the first area 210 may store information that enables the image capturing device 100 to access (e.g., by wireless communication) information stored by the data store 400 pertaining to the second area, such as the spatial information and colour information described above.
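A toy sketch of such a lookup; every field name below is hypothetical, as the application does not specify the stored record format:

```python
# Hypothetical record format: none of these field names appear in the
# application; the first area is assumed to yield a tag identifier.
DATA_STORE = {
    "tag-001": {"offset_mm": (0, 12), "reference_rgb": (200, 200, 200)},
}

def second_area_info(tag_id: str) -> dict:
    """Retrieve stored information about the second area (e.g., spatial and
    colour information) using an identifier read from the first area."""
    return DATA_STORE[tag_id]
```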
  • the data store 400 may be an external device, as shown in Figure 9, or part of the image capturing device 100.
  • the data store 400 being an external device, advantageously, facilitates compactness of the image capturing device 100.

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Electromagnetism (AREA)
  • General Health & Medical Sciences (AREA)
  • Toxicology (AREA)
  • Artificial Intelligence (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Input (AREA)

Abstract

There is described a method of reading an optically readable security element by an image capturing device, wherein the optically readable security element comprises a first area and a second area, the second area comprising an identity, the method comprising: reading the first area as a first reading; reading the second area as a second reading; and using the first reading and the second reading to determine whether a first area-illumination interaction occurring at the first area is different to a second area-illumination interaction occurring at the second area.

Description

A METHOD OF READING AN OPTICALLY READABLE SECURITY ELEMENT TO DISTINGUISH AREAS OF THE ELEMENT
TECHNICAL FIELD
The present disclosure relates to a method of reading an optically readable security element, and a related system.
BACKGROUND
Security elements or tags are used to provide security in relation to an object to which they are attached. These security elements provide security in relation to the object by labelling the object. For example, a security element may be encoded with a unique (e.g., randomised) identity that can be extracted from the security element, thereby enabling authentication of the object.
However, in reading the security element, the location of an area containing the identity, or, indeed, the presence of such an area, may not be readily apparent or discernible from a conventional reading. In some instances, a user may perceive there to be an area present comprising an identity, however that perception may be the result of environmental conditions (e.g., lighting conditions) or despite the absence of an identity (e.g., by spoofing), rather than the presence of a (e.g., genuine) identity.
Difficulty in locating and/or establishing the presence of the identity (i.e., a genuine identity) hinders efficient authentication of the object, which results in one or more of user dissatisfaction, a possible failure to authenticate quickly or at all, a time window that is vulnerable to exploitation by nefarious actors and general inefficiency.
Hence, there is a desire to provide a method of reading an optically readable security element that facilitates location of an area containing the identity and/or establishment of the presence of the area. Furthermore, there is a desire to improve the ability to distinguish between an area containing an identity and other areas of an optically readable security element.
SUMMARY
It is one aim of the present disclosure, amongst others, to provide a method of reading an optically readable security element which at least partially obviates or mitigates at least some of the disadvantages of the prior art, whether identified herein or elsewhere, or to provide an alternative approach. For instance, it is an aim of embodiments of the invention to provide a method of reading an optically readable security element that facilitates location of an area containing an identity and/or establishment of the presence of the area. Furthermore, it is an aim of embodiments of the invention to improve the ability to distinguish between an area containing an identity and other areas of an optically readable security element.
According to the present invention there is provided a method of reading an optically readable security element, and a related system, as set forth in the appended claims. Other features of the invention will be apparent from the dependent claims and the description that follows.
According to a first aspect, there is provided a method of reading an optically readable security element, by an image capturing device, wherein the optically readable security element comprises a first area and a second area, the second area comprising an identity, the method comprising: reading the first area as a first reading; reading the second area as a second reading; and using the first reading and the second reading to determine whether a first area-illumination interaction occurring at the first area is different to a second area-illumination interaction occurring at the second area.
In one example, the optically readable security element comprises a plurality of readable components, and wherein a first readable component thereof comprises the first area and a second readable component thereof comprises the second area.
In one example, the first area and second area are illuminated by: illuminating the first area and the second area using a source of illumination. In one example, the first area and second area are illuminated by illuminating the first area and the second area using the image capturing device.
In one example, the method comprises: determining a first value of an optical parameter for the first reading; determining a second value of the optical parameter for the second reading; comparing the first value and second value; and using the comparison to determine whether the first area-illumination interaction occurring at the first area is different to the second area-illumination interaction occurring at the second area.
In one example, the determining of the first value of the optical parameter and/or determining of the second value of the optical parameter comprises filtering of optical parameter data.
In one example, the method comprises determining that the first area-illumination interaction occurring at the first area is different to the second area-illumination interaction occurring at the second area; and, optionally, extracting the identity from the second area.
In one example, the method comprises reading a third area of the optically readable security element, the third area comprising the first area and the second area; and identifying the first area and second area in the third area.
In one example, the method comprises dividing the third area into a plurality of regions, each region comprising a first area and a second area.
In one example, the first reading and the second reading are performed in a same field of view of the image capturing device.
In one example, the first area comprises information about the second area. In one example, the second area comprises one or more emitters arranged to be read via emission radiation emitted therefrom. The one or more emitters may be one or more optical emitters.
In one example, the one or more emitters are arranged to be excited by illumination in the form of excitation radiation, optionally wherein the image capturing device is configured to emit the excitation radiation.
In one example, the method comprises using the first reading and the second reading to determine whether a first area-excitation radiation interaction occurring at the first area is different to a second area-excitation radiation interaction occurring at the second area.
In one example, the method comprises determining that the second area-excitation radiation interaction results in the emission of emission radiation from the one or more emitters of the second area; and, optionally, extracting the identity from the second area.
In one example, the method comprises extracting the identity from the second area to authenticate the optically readable security element based on the second reading. In one example, the method comprises extracting the identity if the determination indicates that the first area-illumination interaction occurring at the first area is different to the second area-illumination interaction occurring at the second area.
In one example, the first area comprises an engineered component; and/or the first area comprises a reference area; and/or the second area comprises a unique component, encoding the identity.
In one example, the method may be performed when the first area and second area are, or have been, illuminated. In one example, the method may comprise reading the first area as a first reading and reading the second area as a second reading, wherein the second reading is based on the first reading.
In one example, the first reading and the second reading are performed in a same field of view of the image capturing device.
In one example, the method further comprises correcting the second reading based on the first reading.
In one example, correcting the second reading comprises correcting for at least one of: a colouration; a perspective; and an environmental condition.
In one example, the method further comprises locating the second area based on the first reading.
In one example, correcting the second reading based on the first reading comprises extracting the information about the second area from the first area.
In one example, the information about the first area comprises spatial information relating to at least one of: the first area, the second area and a relative position of the first area and the second area.
In one example, the method further comprises extracting the identity from the second area to authenticate the optically readable security element based on the second reading.
According to an example, there is provided an optically readable security element comprising a first area to be read in a first reading by an image capturing device and a second area to be read in a second reading by an image capturing device and comprising an identity. The second reading is based on the first reading.
According to an example, there is provided a system comprising an image capturing device for reading an optically readable security element, the optically readable security element and a data store. The optically readable security element comprises a first area to be read in a first reading and a second area to be read in a second reading and comprising an identity. The second reading is based on the first reading. The data store comprises information about the second area.
According to a second aspect, there is provided a system comprising: an image capturing device for reading an optically readable security element; the optically readable security element comprising a first area and a second area, the second area comprising an identity, wherein: the first area is to be read in a first reading; the second area is to be read in a second reading; and the first reading and second reading are to be used to determine whether a first area-illumination interaction occurring at the first area is different to a second area-illumination interaction occurring at the second area.
In one example, the system comprises a data store comprising information about the second area.
According to a third aspect, there is provided a system comprising: an optically readable security element comprising a first area and a second area, the second area comprising an identity; and an image capturing device for reading the optically readable security element, the image capturing device configured to: read the first area in a first reading; read the second area in a second reading; use the first reading and the second reading to determine whether a first area-illumination interaction occurring at the first area is different to a second area-illumination interaction occurring at the second area.
In one example, the system comprises a data store comprising information about the second area.
BRIEF DESCRIPTION OF DRAWINGS
For a better understanding of the invention, and to show how embodiments of the same may be brought into effect, reference will be made, by way of example only, to the accompanying Figures, in which:
Figure 1 shows a flowchart for a method of reading an optically readable security element;
Figure 2 shows an optically readable security element being read by an image capturing device;
Figure 3 shows a flowchart for a method of reading an optically readable security element;
Figure 4 shows a flowchart for a method of reading an optically readable security element;
Figure 5 shows an optically readable security element being read by an image capturing device;
Figure 6 shows a captured image of a third area of the optically readable security element;
Figures 7(a) - (c) show histograms of optical parameters;
Figure 8 shows an image capturing device; and
Figure 9 shows a system comprising an image capturing device, an optically readable security element, and a data store.
DETAILED DESCRIPTION
The description which follows describes a method of reading an optically readable security element.
In summary, as introduced above, it is an aim of embodiments of the invention to provide a method of reading an optically readable security element that facilitates location of an area containing an identity and/or establishment of the presence of an identity. Furthermore, it is an aim of embodiments of the invention to improve the ability to distinguish between an area containing an identity and other areas of an optically readable security element.
In the context of the present invention, this is achievable by determining whether different area-illumination interactions occur in the reading of the optically readable security element.
Figure 1 shows a flowchart for a method of reading an optically readable security element (ORSE). The method of Figure 1 is best understood in conjunction with Figure 2, which shows an image capturing device 100 reading an ORSE 200 in a field of view 300 (e.g., in a same image frame) of the image capturing device 100.
The ORSE 200 comprises a first area 210 and a second area 220. The second area 220 comprises an identity. The identity may be extractable from the second area 220. The identity can be used to authenticate the object to which the ORSE 200 is attached, advantageously, therefore, providing security in relation to the object.
The method comprises reading S100 the first area 210 (e.g., a region or patch) as a first reading and reading S200 the second area 220 (e.g., a region or patch) as a second reading. The readings may be undertaken at the same or different times, for example in a single combined reading, or different readings. The method comprises using S300 the first reading and second reading to determine whether a first area-illumination interaction occurring at the first area 210 is different to a second area-illumination interaction occurring at the second area 220.
Advantageously, in this way, it can be established whether, in reading the ORSE 200, different area-illumination interactions occur, or take place, at different areas of the ORSE 200. This may be understood to result from a different (e.g., physics, or illumination/excitation-reaction/response) mechanism being responsible for the area-illumination interaction. In this way, a determination may be made that there is a difference in a property, material and/or material property between the first area and the second area. In a specific example, as will be described in greater detail below, this may facilitate a determination that the second area comprises a unique component encoding an identity. Perhaps more generally, this facilitates one or more of (e.g., attempting to): locating the second area; establishing the presence of the second area; and/or distinguishing between the first area and second area. This might be additionally or alternatively defined or described as (e.g., attempting to) identifying the second area, in order to extract the identity.
In this example, as shown in Figure 2, the first area 210 is a region around, or bordering, the second area 220. In other examples, the first area 210 may not be located around, or border, the second area 220. Instead, the first area 210 may be spatially separated, or provided away from, the second area 220. In yet further examples, the first area 210 and second area 220 may at least partially overlap one another.
Further detail of the features of the method of Figure 1 will be understood from the description which follows, with reference to the flowchart of Figure 3.
Referring to Figure 3, at Step S310, an image of the ORSE 200 is captured by the image capturing device 100. Reading of the ORSE 200 may be performed on the image of the ORSE 200 captured by the image capturing device 100. That is, in some examples, reading may take place on a previously captured image, or images, of the ORSE 200 by the image capturing device 100 (i.e., after image capture). Advantageously, this facilitates reading at a subsequent, and potentially more convenient, time point, for example when internet connection is available and/or when a plurality of readings are to be performed contemporaneously. In other examples, the reading and general processing may be undertaken live (i.e., substantially in real-time), or very soon thereafter (e.g., so that the user does not see any delay).
In the captured image, the first area 210 and second area 220 are illuminated. The first area 210 and second area 220 may be illuminated using a source of illumination. The source of illumination may be an external, or indirect, source of illumination, such as ambient lighting. Alternatively, the first area 210 and second area 220 are illuminated using the image capturing device 100 (e.g., from an electromagnetic radiation source thereof, such as a flash or (e.g., UV) LED). The first area 210 and second area 220 may be illuminated using a combination of external illumination and illumination provided by the image capturing device 100. For avoidance of doubt, the method may be performed when the first area and second area are, or have been, illuminated.
It will be appreciated from the description herein that illumination refers to the incidence of electromagnetic radiation, which may be visible light or other electromagnetic radiation. In a specific example described below, the illumination is the provision of excitation radiation, which excites a material in the second area 220 thereby to produce emission of emission radiation. The emission radiation may be characteristic of the second area 220, and may contain the identity.
At Step S312, the first area 210 and second area 220 of the ORSE 200 are identified. In this way, regions of the captured image for reading can be identified. This may be achieved by performing a tag (i.e., ORSE or area) detection process, or algorithm. That is, the ORSE 200 may be determined to comprise a plurality of readable components, and wherein a first readable component thereof comprises the first area 210 and a second readable component thereof comprises the second area 220.
The first area 210 and second area 220 can be identified according to the example described below in relation to Figures 4 and 5. However, it will be appreciated by those of skill in the art that it may be possible to identify the first area 210 and second area 220 from the captured image in alternative ways (despite potentially resulting in slower and/or less accurate location or identification of regions of the captured image to be analysed).
Figure 4 shows a flowchart for a method of identifying the first area 210 and second area 220 of the ORSE 200. The method of Figure 4 is best understood in conjunction with Figure 5, which shows the image capturing device 100 reading the ORSE 200 in the field of view 300 (e.g., in a same image frame) of the image capturing device 100.
The method comprises reading S1 the first area 210 of the ORSE 200 as a first reading and reading S2 the second area 220 of the ORSE 200 as a second reading. The second reading is based on the first reading. The second reading being based on the first reading means the first reading assists, facilitates, guides, improves or mediates the second reading. This assistance, advantageously, enables authentication of an object to which the ORSE 200 is attached (or simply the ORSE 200 itself) more efficiently and/or accurately, as detailed below.
It will be noted that the first area 210 and second area 220 shown in Figure 5 may not be identical to the first area 210 and second area 220 shown in Figure 2. In particular, in Figure 5, the first area 210 is illustrated as being spatially separated from the second area 220. Despite this, it will be appreciated that the same principles apply, and the first area 210 may instead be a region around, or bordering, the second area 220, in an identical manner to that of Figure 2, in which the first area 210 borders the second area 220.
The first area 210 may comprise information about the second area 220, enabling the second reading to be based on the first reading. Thus, the method may comprise extracting the information about the second area 220 from the first area 210 and using the information about the second area 220 in relation to the second reading. Specifically, the first area 210 may comprise spatial information about the second area 220 and/or colour information about the second area 220, for example, in an objective sense or relative to the first area 210 or another part of the ORSE 200. The second area 220 comprises an identity extractable therefrom, which can be used to authenticate the object to which the ORSE 200 is attached, advantageously, therefore, providing security in relation to the object.
In comprising spatial information about the second area 220, the first area 210 preferably comprises information that describes the position (e.g., distance and orientation, including geometry, such as shape) of the first area 210 relative to the second area 220. The first area 210 may (e.g., in a first sub-region) comprise an engineered component programmed or encoded with the spatial information. For example, the first area 210 may comprise a hologram, bar code, QR code or similar. Alternatively, the first area 210 may simply be a region around, or bordering, the second area 220. The first area 210 may be a blank area. Still, it will be appreciated that the first area 210 may indicate position, colour, or other information about the second area 220, by acting as a reference by which the second area 220 may be read.
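Purely as an illustrative sketch (and not part of the claimed method), spatial information of this kind could be decoded from a QR code in the first area 210 as follows. The comma-separated payload format, giving the offset and size of the second area 220 in units of the QR code's own dimensions, is an assumption made only for this example; no particular encoding is mandated.

```python
# Sketch: reading hypothetical spatial information about the second area
# from a QR code located in the first area. The payload format
# "dx,dy,w,h" (offset and size in QR-code widths/heights) is assumed
# for illustration only.
import cv2
import numpy as np

def read_spatial_info(image: np.ndarray):
    detector = cv2.QRCodeDetector()
    payload, corners, _ = detector.detectAndDecode(image)
    if not payload or corners is None:
        return None  # no engineered component found in the frame
    dx, dy, w, h = (float(v) for v in payload.split(","))
    corners = corners.reshape(-1, 2)            # TL, TR, BR, BL in pixels
    qr_w = np.linalg.norm(corners[1] - corners[0])
    qr_h = np.linalg.norm(corners[3] - corners[0])
    top_left = corners[0] + np.array([dx * qr_w, dy * qr_h])
    size = np.array([w * qr_w, h * qr_h])
    return top_left, size  # pixel position and extent of the second area
```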
In comprising colour information (e.g., colour temperature, grey and black balance and uniformity) about the second area 220, the first area 210 preferably comprises information that describes a predetermined colour. For example, the first area 210 may (e.g., in a second sub-region) comprise a reference area having a predetermined colour that can be used as a baseline (e.g., using a clustering technique, such as k-clustering) to determine the colour of the second area 220. For example, the colour temperature and/or luminosity data from the first area may be used to correct a reading of the second area. The engineered component may be formed by the reference area. For example, the ink of a QR code may function as a reference area.
The method may comprise correcting the second reading based on the first reading. Correcting the second reading may mean calibrating the second reading based on the first reading (e.g., as mentioned, using the reference area as a baseline for calibration). For example, the first area 210 (e.g., the reference area) may be a known colour (e.g., the colour of the reference area may be stored in the engineered component of the first area 210) or have a known distribution of RGB values, and be used to calibrate the colour read for the second area 220 to, advantageously, enable accurate reading of the second area 220, whose colour may be affected by ageing or irregularities resulting from its manufacture, for instance. Similarly, this calibration, advantageously, accounts for different environmental conditions (e.g., lighting/shadow gradients) to which the ORSE 200 may be subjected.
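As a minimal sketch of such colour calibration, assuming the reference colour is known in advance (e.g., retrieved from the engineered component) and that a simple per-channel gain model suffices, one might write:

```python
# Sketch: correcting the second-area reading using the first (reference)
# area as a colour baseline. The stored reference value and the
# per-channel gain model are illustrative assumptions.
import numpy as np

KNOWN_REFERENCE_RGB = np.array([200.0, 200.0, 200.0])  # assumed stored value

def calibrate(second_area: np.ndarray, first_area: np.ndarray) -> np.ndarray:
    observed_ref = first_area.reshape(-1, 3).mean(axis=0)  # mean RGB of reference
    gain = KNOWN_REFERENCE_RGB / np.maximum(observed_ref, 1e-6)
    corrected = second_area.astype(np.float64) * gain      # per-channel correction
    return np.clip(corrected, 0, 255).astype(np.uint8)
```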
In another example, the first area may have a known shape or geometry and its projection/perspective may be used to correct (e.g., calculate) a geometry of the second area 220 to, advantageously, enable accurate reading of the second area 220. For example, it may be known or expected that the first and second areas 210, 220 are square in shape. If the first area 210 is read and found (e.g., using a Hough transform) to be a different shape, any correction or processing required to transform the first area 210 to a square shape can be used (e.g., using a homography matrix transformation) in the reading or processing of the second area 220. That is, a perspective, rotation, and/or magnification of the first area (e.g., comprising or being a QR code or other known feature) may be used to normalise the appearance or processing of the second area. This is advantageous when extracting an identity, because it may be desirable to have the same region of the second area contribute to the same bits or portions of the identity every time it is read or extracted, for consistency. This is also advantageous if the second area is not visibly different from its surroundings (e.g., to the human eye), or is not a simple shape (for example, perspective correcting a curvy or non-rectilinear second area is difficult). Correcting may be performed before or at the same time as the second reading, or after the second reading has taken place.
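A minimal sketch of such a perspective (homography) correction, assuming the four corners of the square first area have already been detected (e.g., by the Hough-transform step mentioned above, not shown here):

```python
# Sketch: normalising perspective from the known-square first area.
# Applying the same homography to the whole image also rectifies the
# second area, so the same pixels map to the same identity bits on
# every read.
import cv2
import numpy as np

def normalise_perspective(image, detected_corners, side=200):
    # detected_corners: 4x2 array of pixel coordinates, ordered TL, TR, BR, BL
    target = np.array([[0, 0], [side, 0], [side, side], [0, side]],
                      dtype=np.float32)
    H = cv2.getPerspectiveTransform(np.float32(detected_corners), target)
    return cv2.warpPerspective(image, H, (image.shape[1], image.shape[0]))
```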
The method may comprise, in addition to or as an alternative to correcting the second area 220, locating the second area 220 based on the first area 210. Locating the second area 220 may mean guiding a user to the second area 220. For example, reading S1 the first area 210 as part of the first reading may prompt a signal to be transmitted to a display of the image capturing device 100 to cause the display to display directional information, based on the spatial information, guiding the user to the second area 220. Similarly, reading S1 the first area 210 as part of the first reading may prompt a signal to be transmitted to a speaker of the image capturing device 100 to cause the speaker to output sound information, based on the spatial information, guiding the user to the second area 220. In a related example, the step of locating the second area 220 based on the first area 210 may not be visible to the user but may instead be implemented (e.g., automatically) by software. Advantageously, in this way, efficient reading of the second area 220, hence authentication of the object to which the ORSE 200 is attached, is facilitated. For instance, the information about the second area 220 enables the identity to be extracted therefrom, even if the second area 220 is (e.g., for enhanced security) not easily visible to the naked eye, because a user is able to locate the second area 220 efficiently. This advantage may allow more freedom of design or material choice for the second area 220. In another related example, haptic feedback may be used to guide the user, thereby, advantageously, enabling a visually-impaired user to locate the second area 220.
Additionally, the first area 210 may comprise information about the second area 220 that indicates the object to which the ORSE 200 is attached. That is, ORSEs 200 with particular first area 210 properties (e.g., as information) may enable a determination that the ORSE 200 is provided on a particular type or kind of object. In this way, the information of the first area 210 may inform reading of the second area 220, for example by establishing what properties of the second area 220 should be expected or predicted. For example, information of the first area 210 may indicate that, for example, a certain area-illumination interaction, or level thereof, should be expected at the second area 220. Advantageously, in this way, specific types of ORSEs 200 used with specific object types or kinds can be more accurately and securely used to authenticate the object.
As mentioned above, the first area 210 and the second area 220 are shown in Figure 5 as discrete (i.e., separate and distinct), vertically aligned squares. The first and second areas 210, 220 being distinct and spaced apart may, advantageously, facilitate easier distinction and reading (or even manufacturing) thereof. However, any arrangement of the first area 210 and the second area 220 is possible. For instance, the first area 210 and the second area 220 may correspond to adjacent and contiguous areas (e.g., the ORSE 200 may be divided between the first area 210 and the second area 220 along a longitudinal axis) of the ORSE 200 or adjacent and non-contiguous areas (e.g., an equal number of equally sized strips). The first and second areas 210, 220 may partially or completely overlap, which, advantageously, may reduce the required space for both or make it harder to copy or replicate one or both areas 210, 220. Preferably, the first area 210 has low symmetry (e.g., is asymmetric) to facilitate determination, for instance, of a direction of the second area 220 relative to the first area 210. In other words, the spatial information may include the geometry of the first area 210. Highly advantageously, the first area 210 may be a border area of the second area 220, as shown in Figure 2.
Highly advantageously, by identifying the first area 210 and second area 220 (e.g., in accordance with the method described above with reference to Figures 4 and 5), the first area and second area may be located. In this way, the captured image of the ORSE 200 captured by the image capturing device 100 (or an appropriate portion or region thereof) may be taken as a third area comprising the first area 210 and the second area 220, and the first area 210 and second area 220 identified in the third area.
Referring back to Figure 3, at Step S314, the captured image is divided into a plurality of regions. Dividing the captured image may otherwise be referred to as separating or delineating the captured image. In a specific example, the third area is divided into a plurality of regions. Each region of the plurality of regions comprises a first area and a second area. This may otherwise be referred to as each region of the plurality of regions comprising a region of a (or the) first area and a region of a (or the) second area. In accordance with the method of identifying the first area 210 and second area 220 as described above, the first area 210 may be a reference area or region, and the second area 220 may be a tag area or region (i.e., an area or region comprising some or all of the tag/unique component encoding the identity).
Advantageously, in this way, the impact of non-uniform illumination of the ORSE 200 can be minimised. That is, the effect of uneven lighting, lighting gradients (possibly due to the flash of the image capturing device 100, an ambient light source, or shadow), or directional light sources, can be minimised. This is because multiple regions can be analysed separately.
An example of this is illustrated in Figure 6, which shows a captured image of the third area 230 divided (or separated/delineated) into a plurality of regions 710a - d. Each of the plurality of regions 710a - d comprises a first area 210a - d and a second area 220a - d. In the illustrated example, the third area is divided into quadrants. It will nevertheless be appreciated that more or fewer regions (i.e., divisions) may be used.

At Step S316, for each region 710a - d comprising the first area 210a - d and second area 220a - d (as a “coupled” first area and second area), a histogram is produced based on an optical parameter. As an example, a histogram of brightness in a colour channel of interest (e.g., red or green colour channel) may be produced.
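A minimal sketch of Steps S314 to S316, under the assumption that boolean masks marking the pixels of the first and second areas are available from the identification step (S312), and that a single colour channel of interest has already been selected:

```python
# Sketch: dividing the third area into quadrants (regions 710a-d) and
# producing a brightness histogram for each coupled first/second area.
import numpy as np

def region_histograms(channel, first_mask, second_mask):
    h, w = channel.shape
    quadrants = [(slice(0, h // 2), slice(0, w // 2)),
                 (slice(0, h // 2), slice(w // 2, w)),
                 (slice(h // 2, h), slice(0, w // 2)),
                 (slice(h // 2, h), slice(w // 2, w))]
    histograms = []
    for rows, cols in quadrants:
        sub = channel[rows, cols]
        m1, m2 = first_mask[rows, cols], second_mask[rows, cols]
        hist_first, _ = np.histogram(sub[m1], bins=256, range=(0, 256))
        hist_second, _ = np.histogram(sub[m2], bins=256, range=(0, 256))
        histograms.append((hist_first, hist_second))
    return histograms
```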
Exemplary histograms are illustrated in Figure 7. Figure 7(a) shows a first histogram of brightness counts in a colour channel of interest for one region 710a of the plurality of regions 710a - d. Figure 7(b) shows a second histogram of brightness counts in the colour channel of interest for the first area 210a. Figure 7(c) shows a third histogram of brightness counts in the colour channel of interest for the second area 220a.
At Step S318, the histogram data is filtered to reduce the impact of noise. This is illustrated in the histograms of Figures 7(b) and 7(c) by the introduction of a noise floor 702 which, in this example, is applied at 1% of the total number of counts. Brightness counts falling below this noise floor 702 (i.e., having fewer counts in that brightness channel than the noise floor) are removed. As shown, this filtering step is performed for both first area 210a and second area 220a of the selected region 710a.
This is repeated for remaining regions 710b - d, so that a filtered histogram is produced for each first area 210b - d and second area 220b - d.
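A minimal sketch of the noise-floor filtering of Step S318, with the floor set at 1% of the total count as in the example above:

```python
# Sketch: zeroing histogram bins whose count falls below the noise
# floor (here 1% of the total number of counts).
import numpy as np

def apply_noise_floor(hist: np.ndarray, floor_fraction: float = 0.01) -> np.ndarray:
    floor = floor_fraction * hist.sum()
    filtered = hist.copy()
    filtered[filtered < floor] = 0
    return filtered
```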
At Step S320, a mean brightness value for the first area 210a - d is calculated, and a mean brightness value for the second area 220a - d is calculated, based on the filtered histograms from Step S318. In other words, a first value of an optical parameter is determined for the first reading (i.e., of the first area) and a second value of the optical parameter is determined for the second reading (i.e., of the second area).
At Step S322, the first value and the second value are compared. Specifically, a ratio of the mean brightness value for the second area 220 and the mean brightness value for the first area 210 is calculated according to:
R = (mean brightness value of the second area 220) / (mean brightness value of the first area 210)
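A minimal sketch of Steps S320 to S322, assuming the filtered histograms are 256-bin arrays as produced above:

```python
# Sketch: mean brightness from a filtered histogram, and the ratio R
# of second-area to first-area mean brightness.
import numpy as np

def mean_brightness(filtered_hist: np.ndarray) -> float:
    levels = np.arange(filtered_hist.size)
    total = filtered_hist.sum()
    return float((levels * filtered_hist).sum() / total) if total else 0.0

def brightness_ratio(hist_second: np.ndarray, hist_first: np.ndarray) -> float:
    return mean_brightness(hist_second) / max(mean_brightness(hist_first), 1e-9)
```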
At Step S324, the comparison of the first value and the second value is used to determine whether the first area-illumination interaction occurring at the first area 210 is different to the second area-illumination interaction occurring at the second area 220.
Specifically, if R > 1, it can be determined that the first area-illumination interaction occurring at the first area is different to the second area-illumination interaction occurring at the second area.
Similarly, if R < 1, it can be determined that the first area-illumination interaction occurring at the first area is different to the second area-illumination interaction occurring at the second area. However, this is typically not expected since, in implementation, the mean brightness value for the second area 220 (i.e., the area comprising the identity) is expected to be greater than the mean brightness value for the first area 210 for a genuine unique component.
However, if R ~ 1, it can be determined that the first area-illumination interaction occurring at the first area is the same as the second area-illumination interaction occurring at the second area. If such a situation occurs, the ORSE 200 may be determined not to comprise an extractable (or genuine) identity. The ORSE 200, or the reading thereof, may be rejected at S328. Alternatively, even when it is determined that the first area-illumination interaction occurring at the first area 210 is the same as the second area-illumination interaction occurring at the second area 220, the identity can still be extracted (or attempted to be extracted) from the second area 220. In this instance, an alert or other output signal may be provided to alert a user, or the system, that the extraction has been performed, or attempted, despite the area-illumination interactions being the same. This is advantageous in warning a user of a potential non-genuine identity, whilst still allowing the method to continue to attempt to authenticate the object to which the ORSE 200 is attached.

If it is determined that the first area-illumination interaction occurring at the first area 210 is different to (e.g., different by more than a threshold amount) the second area-illumination interaction occurring at the second area 220 (e.g., that R > 1), the identity may be extracted from the second area 220. The extracted identity may be compared against a known identity, for example one stored in data, to authenticate the object to which the ORSE 200 is attached.
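A minimal sketch of this decision logic (Steps S324 to S328); the tolerance used to judge R ~ 1 is an assumption, the document requiring only that the difference exceed some threshold amount:

```python
# Sketch: interpreting the ratio R. R ~ 1 within a tolerance is taken
# to mean the two area-illumination interactions are the same; a
# genuine emitting second area is expected to give R > 1.
def interactions_differ(R: float, tolerance: float = 0.1) -> bool:
    return abs(R - 1.0) > tolerance

def authenticate(R: float, extract_identity, alert):
    if not interactions_differ(R):
        # Alternatively the reading could be rejected outright here;
        # this sketch extracts anyway and raises an alert.
        alert("area-illumination interactions appear identical; "
              "identity may not be genuine")
    return extract_identity()
```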
Preferably, the second area 220 comprises a unique (e.g., randomised) component (e.g., a random deterministic feature), encoding the identity. The randomised component encoding the identity, compared with the engineered component encoding the identity, advantageously, engenders a more robust barrier to fraudulent reading of the ORSE 200.
A specific example of a situation in which the first area-illumination interaction occurring at the first area 210 is different to the second area-illumination interaction occurring at the second area 220 is where the first area 210 is a reference area and the second area is, or comprises, one or more emitters. The reference area may be, for example, a blank area. The one or more emitters are arranged to be read via emission radiation emitted therefrom. Relatedly, the one or more optical emitters may be arranged to be excited by excitation radiation. In this (or indeed any) example, the one or more emitters may be optical emitters, that is, emitting electromagnetic radiation in visible wavelengths. Alternatively, the emitters may emit in other bands of the electromagnetic spectrum, for example UV emission. The one or more emitters may serve as the component that provides or serves as the unique identity. Advantageously, the ORSE 200 being read via emission emitted therefrom provides a more robust barrier to fraudulent reading, more readily preventing spoofing or copying by, for instance, simply replicating (e.g., by printing) a bar code, QR code, or similar. This advantage is particularly true when one or more (e.g., hundreds, thousands, millions, or more) of emitters are distributed randomly. For instance, this effect may be achieved using quantum dots, flakes of 2D materials, molecules (e.g., small molecules), atomic defects or vacancies, plasmonic structures, or similar. The one or more emitters may be configured to exhibit photoluminescence when illuminated. This photoluminescence may be detected as emission radiation from the one or more emitters.
That is, in relation to the above specific example, the first area 210 is a reference area provided as a border around the second area 220, and the second area 220 comprises one or more emitters arranged to be read via emission radiation emitted therefrom. The one or more emitters serve as the component that provides, or serves as, the unique identity.
The one or more emitters are arranged to be excited by illumination in the form of excitation radiation. In one example, the image capturing device 100 is configured to emit the excitation radiation (and thereby illuminate the first area 210 and second area 220).
The method is performed as described above in relation to Figure 3, with a ratio of the mean brightness values being calculated as above. The first reading and the second reading can be used S300 to determine whether a first area-excitation radiation interaction occurring at the first area 210 is different to a second area-excitation radiation interaction occurring at the second area 220. Specifically, in this example, as the second area 220 comprises the one or more emitters (i.e., the one or more emitters are provided, or located, in the second area 220), the mean brightness value of the second area 220 is greater than that of the first area 210 (which may be absent emitters, or may comprise fewer emitters). Furthermore, in some examples, the emission from the second area 220 will be a combination of emission radiation (e.g., photoluminescence) and reflected light, which can be distinguished from only reflected light from the first area 210, by the method described above.
In this way, it can be determined that the second area-excitation radiation interaction results in the emission of emission radiation from the one or more emitters of the second area 220. Subsequently, once this determination has been made, the identity can be extracted from the second area 220.
Whilst emission of emission radiation from the one or more emitters is described above as a highly advantageous example, it will be appreciated that benefits of the invention may also be obtained where other area-illumination interactions are monitored or used. For example, absorption, refraction, scattering, and even different properties (e.g., magnitude, direction) of reflection may be considered to determine that a different area-illumination interaction has taken place at the first area 210 versus the second area 220.
Figure 8 shows the image capturing device 100 in more detail. The image capturing device 100 may be a terminal device, such as a smartphone. The image capturing device 100 may be configured to provide a source of illumination, for example to emit excitation radiation to excite the at least one optical emitter (e.g., from an electromagnetic radiation source, such as a flash or LED). By being configured to emit excitation radiation, the image capturing device 100, advantageously, facilitates the aforementioned robust security. Further, emitting the excitation radiation from the image capturing device 100, advantageously, allows convenient control of excitation of the one or more optical emitters. The source of illumination may be a UV LED, which advantageously can excite visible fluorescence without reflected light being measured.
As shown in Figure 8, the image capturing device 100 comprises a reader 110 and a processor 120. The reading S1/S100, S200 of the first area 210 and the second area 220 is performed by the reader 110, which could include or be a sensor. The processor is configured to perform one or more of: using the first reading and the second reading to determine whether a first area-illumination interaction occurring at the first area 210 is different to a second area-illumination interaction occurring at the second area 220; correcting the second reading based on the first reading; locating the second area 220 based on the first area 210; extracting information from the first area 210; and extracting the identity from the second area 220. The processor 120 may be configured to perform the aforementioned steps locally (i.e. , at the image capturing device 100) or externally (e.g., at a server). The processor 120 may be dedicated hardware or existing hardware specifically configured to extract to perform the method. Figure 9 shows a system comprising the image capturing device 100, the ORSE 200 and a data store 400. The data store 400 may be used to store information about the second area 220. For example, though, typically, information about the second area 220 is included in the first area 210, the first area 210 may store information that enables the image capturing device 100 to access (e.g., by wireless communication) information stored by the data store 400 pertaining to the second area, such as the spatial information and colour information described above. Storing the information about the second area 220 in the data store 400, advantageously, introduces an extra layer of security and/or enables use of a less sophisticated, for instance, engineered component. The data store 400 may be an external device, as shown in Figure 9, or part of the image capturing device 100. The data store 400 being an external device, advantageously, facilitates compactness of the image capturing device 100.
Although preferred embodiments have been shown and described, it will be appreciated by those skilled in the art that various changes and modifications might be made without departing from the scope of the invention, as defined in the appended claims and as described above.
The optional features set out herein may be used either individually or in combination with each other where appropriate, and particularly in the combinations as set out in the accompanying claims. The optional features for each aspect or exemplary embodiment of the invention, as set out herein, are also applicable to all other aspects or exemplary embodiments of the invention, where appropriate. In other words, the skilled person reading this specification should consider the optional features for each aspect or exemplary embodiment of the invention as interchangeable and combinable between different aspects and exemplary embodiments.

Claims

1. A method of reading an optically readable security element, by an image capturing device, wherein the optically readable security element comprises a first area and a second area, the second area comprising an identity, the method comprising: reading the first area as a first reading; reading the second area as a second reading; and using the first reading and the second reading to determine whether a first area-illumination interaction occurring at the first area is different to a second area-illumination interaction occurring at the second area.
2. The method according to claim 1, wherein the optically readable security element comprises a plurality of readable components, and wherein a first readable component thereof comprises the first area and a second readable component thereof comprises the second area.
3. The method according to claim 1 or claim 2, wherein the first area and second area are illuminated by: illuminating the first area and the second area using a source of illumination, optionally using the image capturing device.
4. The method according to any one of the preceding claims, comprising: determining a first value of an optical parameter for the first reading; determining a second value of the optical parameter for the second reading; comparing the first value and second value; and using the comparison to determine whether the first area-illumination interaction occurring at the first area is different to the second area-illumination interaction occurring at the second area.
5. The method according to any one of the preceding claims, comprising: determining that the first area-illumination interaction occurring at the first area is different to the second area-illumination interaction occurring at the second area; and, optionally, extracting the identity from the second area.
6. The method according to any one of the preceding claims, comprising: reading a third area of the optically readable security element, the third area comprising the first area and the second area; and identifying the first area and second area in the third area.
7. The method according to claim 6, comprising: dividing the third area into a plurality of regions, each region comprising a first area and a second area.
8. The method according to any one of the preceding claims, wherein the first reading and the second reading are performed in a same field of view of the image capturing device.
9. The method according to any one of the preceding claims, wherein the first area comprises information about the second area.
10. The method according to any one of the preceding claims, wherein the second area comprises one or more emitters arranged to be read via emission radiation emitted therefrom.
11. The method according to claim 10, wherein the one or more emitters are arranged to be excited by illumination in the form of excitation radiation, optionally wherein the image capturing device is configured to emit the excitation radiation.
12. The method according to claim 11, comprising: using the first reading and the second reading to determine whether a first area-excitation radiation interaction occurring at the first area is different to a second area-excitation radiation interaction occurring at the second area.
13. The method according to claim 12, comprising: determining that the second area-excitation radiation interaction results in the emission of emission radiation from the one or more emitters of the second area; and, optionally, extracting the identity from the second area.
14. The method of any one of the preceding claims, wherein: the first area comprises an engineered component; and/or the first area comprises a reference area; and/or the second area comprises a unique component, encoding the identity.
15. A system comprising: an image capturing device for reading an optically readable security element; the optically readable security element comprising a first area and a second area, the second area comprising an identity, wherein: the first area is to be read in a first reading; the second area is to be read in a second reading; and the first reading and second reading are to be used to determine whether a first area-illumination interaction occurring at the first area is different to a second area-illumination interaction occurring at the second area.
PCT/GB2024/051824 2023-07-14 2024-07-11 A method of reading an optically readable security element to distinguish areas of the element WO2025017284A1 (en)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
GB2310822.8A GB2631779A (en) 2023-07-14 2023-07-14 A method of reading an optically readable security element comprising a first area and a second area
GB2310822.8 2023-07-14
GB2400540.7A GB2627860B (en) 2023-07-14 2024-01-15 A method of reading an optically readable security element to distinguish areas of the element
GB2400540.7 2024-01-15

Publications (1)

Publication Number Publication Date
WO2025017284A1 true WO2025017284A1 (en) 2025-01-23

Family

ID=91959414

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/GB2024/051824 WO2025017284A1 (en) 2023-07-14 2024-07-11 A method of reading an optically readable security element to distinguish areas of the element

Country Status (1)

Country Link
WO (1) WO2025017284A1 (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150324677A1 (en) * 2014-05-07 2015-11-12 Vitaly Talyansky Multi-Layer Optical Barcode with Security Features
US20180004993A1 (en) * 2016-06-24 2018-01-04 Relevant Play, Llc Authenticable digital code and associated systems and methods
US20190384955A1 (en) * 2016-12-16 2019-12-19 Kurz Digital Solutions Gmbh & Co. Kg Verification of a security document
US20220067468A1 (en) * 2020-08-31 2022-03-03 Temptime Corporation Barcodes with security material and readers for same

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 24745512

Country of ref document: EP

Kind code of ref document: A1