
WO2013164015A1 - A display integrated semitransparent sensor system and use thereof - Google Patents

A display integrated semitransparent sensor system and use thereof

Info

Publication number: WO2013164015A1 (PCT/EP2012/057947)
Authority: WO (WIPO PCT)
Prior art keywords: sensor, display, light, display device, sensors
Application number: PCT/EP2012/057947
Other languages: French (fr)
Inventors: Arnout Robert Leontine VETSUYPENS, Wouter M. F. WOESTENBORGHS, Saso MLADENOVSKI
Original Assignee: Barco N.V.
Application filed by Barco N.V.
Priority to PCT/EP2012/057947
Publication of WO2013164015A1


Classifications

    • G PHYSICS
    • G02 OPTICS
    • G02F OPTICAL DEVICES OR ARRANGEMENTS FOR THE CONTROL OF LIGHT BY MODIFICATION OF THE OPTICAL PROPERTIES OF THE MEDIA OF THE ELEMENTS INVOLVED THEREIN; NON-LINEAR OPTICS; FREQUENCY-CHANGING OF LIGHT; OPTICAL LOGIC ELEMENTS; OPTICAL ANALOGUE/DIGITAL CONVERTERS
    • G02F1/00 Devices or arrangements for the control of the intensity, colour, phase, polarisation or direction of light arriving from an independent light source, e.g. switching, gating or modulating; Non-linear optics
    • G02F1/01 Devices or arrangements for the control of the intensity, colour, phase, polarisation or direction of light arriving from an independent light source, e.g. switching, gating or modulating; Non-linear optics for the control of the intensity, phase, polarisation or colour
    • G02F1/13 Devices or arrangements for the control of the intensity, colour, phase, polarisation or direction of light arriving from an independent light source, e.g. switching, gating or modulating; Non-linear optics for the control of the intensity, phase, polarisation or colour based on liquid crystals, e.g. single liquid crystal display cells
    • G02F1/133 Constructional arrangements; Operation of liquid crystal cells; Circuit arrangements
    • G02F1/13306 Circuit arrangements or driving methods for the control of single liquid crystal cells
    • G02F1/13318 Circuits comprising a photodetector
    • H ELECTRICITY
    • H10 SEMICONDUCTOR DEVICES; ELECTRIC SOLID-STATE DEVICES NOT OTHERWISE PROVIDED FOR
    • H10K ORGANIC ELECTRIC SOLID-STATE DEVICES
    • H10K30/00 Organic devices sensitive to infrared radiation, light, electromagnetic radiation of shorter wavelength or corpuscular radiation
    • H10K30/80 Constructional details
    • H10K30/81 Electrodes
    • H10K30/82 Transparent electrodes, e.g. indium tin oxide [ITO] electrodes
    • H ELECTRICITY
    • H10 SEMICONDUCTOR DEVICES; ELECTRIC SOLID-STATE DEVICES NOT OTHERWISE PROVIDED FOR
    • H10K ORGANIC ELECTRIC SOLID-STATE DEVICES
    • H10K39/00 Integrated devices, or assemblies of multiple devices, comprising at least one organic radiation-sensitive element covered by group H10K30/00
    • H10K39/30 Devices controlled by radiation
    • H ELECTRICITY
    • H10 SEMICONDUCTOR DEVICES; ELECTRIC SOLID-STATE DEVICES NOT OTHERWISE PROVIDED FOR
    • H10K ORGANIC ELECTRIC SOLID-STATE DEVICES
    • H10K59/00 Integrated devices, or assemblies of multiple devices, comprising at least one organic light-emitting element covered by group H10K50/00
    • H10K59/10 OLED displays
    • H10K59/12 Active-matrix OLED [AMOLED] displays
    • H10K59/13 Active-matrix OLED [AMOLED] displays comprising photosensors that control luminance
    • H ELECTRICITY
    • H10 SEMICONDUCTOR DEVICES; ELECTRIC SOLID-STATE DEVICES NOT OTHERWISE PROVIDED FOR
    • H10K ORGANIC ELECTRIC SOLID-STATE DEVICES
    • H10K65/00 Integrated devices, or assemblies of multiple devices, comprising at least one organic light-emitting element and at least one organic radiation-sensitive element, e.g. organic opto-couplers
    • H ELECTRICITY
    • H10 SEMICONDUCTOR DEVICES; ELECTRIC SOLID-STATE DEVICES NOT OTHERWISE PROVIDED FOR
    • H10K ORGANIC ELECTRIC SOLID-STATE DEVICES
    • H10K30/00 Organic devices sensitive to infrared radiation, light, electromagnetic radiation of shorter wavelength or corpuscular radiation
    • H10K30/20 Organic devices sensitive to infrared radiation, light, electromagnetic radiation of shorter wavelength or corpuscular radiation comprising organic-organic junctions, e.g. donor-acceptor junctions
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02E REDUCTION OF GREENHOUSE GAS [GHG] EMISSIONS, RELATED TO ENERGY GENERATION, TRANSMISSION OR DISTRIBUTION
    • Y02E10/00 Energy generation through renewable energy sources
    • Y02E10/50 Photovoltaic [PV] energy
    • Y02E10/549 Organic PV cells

Definitions

  • the present invention relates to the field of displays, and more specifically to features used in combination with a display.
  • Said feature is an integrated sensor system, able to measure properties of the light emitted by the display at multiple locations dispersed over its active area, as well as properties of the ambient light at the same locations, suitably combined with a controller comprising both hardware and software components.
  • Said controller can interact with the sensors and control the display's electronic driving, and further comprises suitable algorithms for improving the display's performance, guaranteeing its performance during its lifetime, or expanding its functionalities.
  • Sensors are nowadays commonly used in display devices, for instance in display devices used for professional markets, to ensure the performance requirements of properties of their emitted light are met during their lifetime.
  • Typical properties of the light measured by the sensor system are luminance and chromaticity, which can for instance be used for measuring the luminance and chromaticity uniformity of the light emitted by the display.
  • LCDs (Liquid Crystal Displays) used in the medical market are transmissive displays, meaning that the liquid crystal cells do not emit light themselves, but merely modulate the light emitted by the backlight, which includes an integrated light source.
  • sensor systems can be integrated into the backlight, which allows controlling the light output of the backlight over time.
  • This light output can alter over time, for instance due to thermal effects occurring in the displays, or due to the typical degradation of the light source's light output over time.
  • This sensor system has several limitations, however. First of all, the light measured is not the final light output of the display as it will be seen by the observer. Indeed, the light seen by the observer goes through a complex path, determined by the optical design of the backlight, and the liquid crystal material.
  • the light is measured at a single location, while there may be a spatial dependency of the light emitted by the display over time.
  • Several sensor systems have been proposed in the prior art that allow measuring the emitted light at several locations on the display's active area, or obtaining a more global measurement of the light emitted by the display.
  • One possible solution is proposed in WO2004/023443.
  • a waveguide solution is used to guide the light towards the display's edge, where it is detected by a sensor.
  • Another solution, proposed in US 2007/0052874 Al uses a sensor, which is integrated into a pixel, which allows measuring the emitted light on an individual pixel basis.
  • the sensor is an entirely non-transparent sensor that is either put outside the display's active area, or designed to be very limited in size, in order to reduce the impact on the display's quality.
  • An alternative technique to control a property of the display's light output is known from EP 1 424 672 A1, which uses a non-integrated sensor system.
  • the luminance emitted by the display over its entire active area is corrected at the level of individual pixels, by capturing the display's luminance output, and correcting the display's driving appropriately.
  • the drawback of this solution is that it is not integrated into the display, and, due to the nature of this technique, it is typically only performed once, during the display's production process; consequently, it does not allow updating the correction during the display's lifetime.
  • the advantage of such a technique is that it does not require a redesign of the display, as it uses a software solution that suitably alters the display's driving.
  • the ambient light level of the environment in which the display is used can impact the quality of the diagnosis, because the visible contrast of the image seen on the display can be reduced due to reflections on the display.
  • an ambient light sensor is integrated into the display's bezel. This location is chosen, as the conventional sensor technology is non-transparent, and hence it would cause visible degradations of the display's quality.
  • Such a sensor is capable of measuring the ambient light at a single location.
  • the subject of the present invention is an integrated sensor system with a dual-sided light detection functionality, capable of measuring properties of a matrix-addressed display's light output at multiple zones over its active area at any moment during the display's lifetime.
  • the proposed sensor system comprises a specific design architecture, optimized to render the sensor invisible to human observers, and further comprises suitable calibration techniques and a dedicated controller to ensure the sensor's correct operation.
  • the controller comprises both hardware and software components, allowing it to interact with the sensors and control the display's electronic driving.
  • a suitable interaction between the controller and the sensor allows obtaining stable measured values.
  • This includes applying a suitable electronic driving signal on the sensor, and a software processing technique to process the obtained measurements.
  • the calibration algorithms then dictate how the controller should adapt the display's driving in order to obtain or guarantee the desired display performance.
  • the sensor system can be used for a variety of specific applications, including touch functionality.
  • the object of the invention is a display-integrated sensor system.
  • This system is beneficially used to enhance the display's performance, guarantee its performance during its lifetime or provide new functionalities, and this without creating any visible artefacts for the user of the display.
  • the specific technology at the basis of said display is not considered a limitation of the present invention.
  • the display technology can for instance be direct view, projection based or transmissive such as but not limited to LCD, OLED, DLP, or plasma.
  • Said display comprises at least two display areas which are specific regions on the display's active area, which contain a plurality of pixels.
  • the specifications of the display can differ profoundly. For instance, in the case of medical displays, typical specifications are a broad viewing angle, a high contrast ratio, a high luminance, a high resolution and a high pixel density. Any feature used in combination with the display should not significantly degrade its specifications, and it should not introduce visible artifacts, as this can ultimately have life-critical consequences, due to the nature of the application of the display.
  • the sensor system described in the present invention is suitable for this application.
  • the display device further comprises an electronic driving system connected to the sensor system's controller, able to communicate with the actual sensors and able to suitably control the display's driving.
  • This driving can be the driving of the panel, as well as the driving of the backlight, in the specific embodiment of an LCD.
  • the sensor system is designed such that an individual signal can be measured for every display area, and the obtained value is representative of a property (or multiple properties) of the light emitted by (a part of) the corresponding display area under test.
  • the sensor system of the present invention is a semitransparent sensor system, which comprises the actual light sensitive devices, residing on a semitransparent substrate, which is put in front of and parallel to the display's active area.
  • the sensor system comprises a number of light sensitive areas, which are crossed by light emitted by the display, without the need of dedicated light-guiding designs to redirect the display's emitted light.
  • part of the light is absorbed by the sensor, which allows it to be detected, while the majority of the light continues its path towards the observer in front of the display, with a minimal impact due to the presence of the sensor.
  • the partial absorption and transmission characteristics of the sensor determine its semitransparency.
  • the light absorbed by the sensor is converted to an electrical signal, which influences the signal received by the controller, which is contacted at the edge of the display, outside the display's active area.
  • Said controller comprises both hardware and software components, and is used to interact with the sensor: it receives the electrical measurement signals generated in the at least one sensor and further processes them. Afterwards, the controller can use the display's electronic driving system on the basis of the received optical measurement signals, to improve the display's performance. Alternatively, the processed signals can be used in added functionalities.
  • the proposed technology at the heart of the sensor system, performing the actual light detection, is an organic photoconductive sensor.
  • This technology contains a material which alters its conductivity depending on a property of the impinging light, more specifically, a higher light level increases the conductivity of the device.
  • an organic material is used as light sensitive material.
  • Such organic materials have been a subject of advanced research over the past decades. This research has led to breakthroughs in several domains.
  • the main domain of this material research is the domain of emissive devices.
  • Such emissive devices include single pixel OLED devices, typically used for lighting applications, as well as high- resolution OLED displays, which have a matrix of individually controllable pixels.
  • Other known domains of research are organic photovoltaics (OPVs), as well as organic TFTs.
  • a single organic photoconductive sensor used in the sensor system has a design architecture which consists of two electrodes on a substrate. These two electrodes have finger-shaped interdigitated extensions, i.e. they are positioned such that each finger of one electrode is surrounded by two fingers of the other electrode, with a spatial separation gap to avoid electrical shorts.
  • In this design there are two outer fingers, which only have a single adjacent finger.
  • all the fingers have an identical width and gap between the fingers.
  • the finger width over finger gap ratio can have a very broad range, for instance starting from 0.5, but there is no real upper limit.
  • On top of these electrodes, a stack of organic layers is added. Some layers in the organic stack are photoconductive, which allows the sensor to detect the impinging light. To use the sensor to measure color, the sensor should have a spectral sensitivity that at least covers the spectral power distribution of the display's primaries. If ambient light is to be measured, the spectral sensitivity should cover the whole visible range of wavelengths (380-780 nm).
  • the electrodes lead the signal towards the border of the display via a track of a semitransparent conductor, where it is received by the controller. More specifically, an electrical signal needs to be applied over the sensor, typically a voltage signal, and due to the light sensitivity, the resulting current flowing through the sensor is influenced.
  • the organic stack of the device in the present invention is designed and controlled specifically to reach a long device lifetime. Consequently, it is encapsulated to avoid degradation of the materials.
  • An example of an organic photoconductive sensor, with lateral electrodes and a stack of organic materials is known from Applied Physics Letters 93 "Lateral organic bilayer heterojunction photoconductors" by John C. Ho, Alexi Arango and Vladimir Bulovic.
  • the described bilayer comprises an EGL (the material PTCBI is used) or Exciton Generation Layer and a CTL (the material TPD is used) or Charge Transport Layer (in contact with the electrodes).
  • Excitons are generated in the EGL; however, in organic materials the dissociation probability of photon-induced excitons is small due to the large exciton binding energy (0.5-1.0 eV) [V.I. Arkhipov, H.
  • this device uses gold electrodes, rendering it clearly visible to a human observer.
  • the device is not encapsulated, as it is intended for an entirely different application, to measure TNT molecules, and it is not intended to have a long lifetime with a stable readout signal.
  • the substrate is a rigid substrate with a very high transmission in the visible spectrum with a sufficient thermal stability.
  • the luminance transmission can be in the range of 60-98%, preferably in the range of 87-98%.
  • the substrate should also have a rather uniform spectral transmission in the visible range, which limits the coloring of the sensor system when placed in front of the display. These transmission characteristics should be valid at any spatial location on the substrate.
  • Particularly inorganic substrates such as glass have sufficient thermal stability. This thermal stability is, amongst other factors, needed to withstand the operating temperatures of the technique used to add the organic layers onto the device, as well as of the patterning of the transparent conductor, which are described later on.
  • Suitable embodiments of suitable substrate glasses are for instance Corning Eagle XG glass or polished soda-lime glass with a SiO2 passivation layer.
  • the thickness of the glass substrate can for instance be in the range of 0.3-30 mm, more preferably in the range of 0.7-1.1 mm.
  • the substrate is covered with a semitransparent conductor which remains fixed on the substrate, and is able to serve both as the electrode and as electrical conductor guiding the signal towards the border of the display.
  • a suitable semitransparent conductor is for instance Indium Tin Oxide (ITO), which is used in the preferred embodiment.
  • the exact specifications of the ITO used in the architecture should be carefully selected, as they impact the final performance of the sensor.
  • Important parameters of the ITO include its sheet resistance, as this determines its conductivity; its wavelength-dependent complex refractive index, as this determines its transmission and reflection properties in combination with its thickness; and its work function, which determines its electrical properties in combination with the organic layers.
  • a possible range of sheet resistances for ITO is 1 Ω/sq to 5000 Ω/sq.
  • the ITO sheet resistance is in the range of 60 Ω/sq to 125 Ω/sq, as explained later.
  • a possible range of ITO layer thicknesses in an operational device is for instance 5-450 nm. More preferably, the ITO thickness is in the range of 25-65 nm.
  • ITO typically has a non-uniform spectral transmission curve, resulting in a specific coloring, which depends on the ITO thickness and the ITO manufacturing process.
  • An alternative material is the polymeric Poly(3,4-ethylenedioxythiophene) poly(styrenesulfonate), typically referred to as PEDOT:PSS.
  • the spatial uniformity of the ITO layer thickness should be high, to reduce possible inconsistent electrical properties, or a position-dependent luminance or chromaticity output.
  • the spatial luminance output as a consequence should be in the range of 85-100% (Lmax-Lmin)/Lmax, more preferably in the range of 95-100%.
  • a suitable semitransparent conductor should be patternable, using a cost- efficient patterning process.
  • the sensor electrode's finger shaped extensions are created on the ITO coated substrate by means of laser ablation in the preferred embodiment.
  • the ITO material is removed by irradiating the substrate with a laser beam. The removal is done starting from a substrate which is uniformly covered with an ITO coating.
  • This method of ITO removal is preferred as it is easily scalable to substrates of any size, knowing that the design can easily be upscaled, and the ITO can be removed in a single step with a relatively inexpensive, high-yield process.
  • a functioning device can be made with several possible stack architectures of the organic layers.
  • This stack architecture can for instance differ in the number of layers used.
  • the organic stack can be a monolayer, bi-layer, or more generally a multilayer stack.
  • the precise functionality of the different layers can also differ depending on the precise design.
  • the device has a three layer stack, consisting of a first Hole Transport Layer (HTL), added on the ITO patterned substrate, onto which an Exciton Generation Layer (EGL) is added, onto which a final organic layer, a second HTL, is added.
  • a suitable work function of the semitransparent conductor is defined relative to the HOMO level of the HTL on top of the patterned substrate.
  • the work function of the semitransparent conductor should typically be higher than the HOMO level of the HTL on top of the patterned substrate.
  • the organic layers can be added onto the patterned substrate by means of several technologies.
  • vacuum (thermal) evaporation is used, in which the organic layers are deposited in a vacuum chamber by means of for instance a point source or a line organic source.
  • Alternative techniques are not excluded, however.
  • An example of an alternative technique to add the organic materials to the ITO patterned substrate is Organic Vapour Phase Deposition (OVPD). This deposition technique uses an inert carrier gas (N2) to transport the organic molecules to be deposited to the substrate.
  • the encapsulation of the sensor based on organic photoconductors is done with a method different than the conventional approach (based on encapsulation plate, spacers and getters done in inert gas atmosphere).
  • Instead, a UV-curable encapsulation glue is used, cured with a suitable intensity during an optimized time interval.
  • the encapsulation plate in this specific embodiment is a uniform, non-etched glass plate.
  • In another embodiment, an AlOx layer encapsulation technique is used. This type of encapsulation requires two steps. First, an AlOx film is sputtered on top of the organic layers; afterwards, a non-etched glass plate is attached on top of the created stack, using a UV-curable glue which has to be cured with a suitable intensity during an optimized time interval. It is also not excluded that atomic layer deposition can be used to add the AlOx film.
  • a thin film of AlOx can be used in OLEDs as a barrier layer that has a limited transmission of water and oxygen (Sang-Hee Ko Park, Jiyoung Oh, Chi-Sun Hwang, Jeong-Ik Lee, Yong Suk Yang, Hye Yong Chu, Kwang-Yong Kang, "Ultra Thin Film Encapsulation of Organic Light Emitting Diode on a Plastic Substrate, ETRI Journal, Volume 27, Number 5, October 2005" and W. Keuning, P. van de Weijer, and H. Lifka, W. M. M. Kessels, and M.
  • This encapsulation methodology requires having a top layer in the organic stack that can withstand the sputtering. This can for instance be realized by using a three-layer stack device with a relatively thick HTL on top.
  • a suitable thickness can be for instance any thickness larger than 40 nm.
  • sensors comprising composite materials can be considered.
  • Composite materials can for instance comprise nano/micro particles, either organic or inorganic, dissolved in the organic layers, or an organic layer consisting of a combination of different organic materials (dopants). Since the organic photosensitive particles often exhibit a strongly wavelength sensitive absorption coefficient, this configuration can result in a less colored transmission spectrum when suitable materials are selected and suitably applied, or can be used to improve the detection over the whole visible spectrum, or can improve the detection of a specific wavelength region.
  • hybrid structures using a mix of organic and inorganic materials can be used.
  • a bilayer device that uses a quantum-dot exciton generation layer and an organic charge transport layer can be used. More specifically, colloidal Cadmium Selenide quantum dots and an organic charge transport layer comprising Spiro-TPD can be used for instance.
  • a disadvantage could be that the sensor only provides one output current per measurement for the entire spectrum.
  • the disadvantage is that spectral changes of the display over time can be challenging to measure. This limitation can be overcome for instance by using three independent photoconductors that have a different spectral sensitivity in the visible spectrum. Using suitable calibration techniques, the spectral changes over time can be characterized better.
  • photoconductors could be conceived similarly to the previous descriptions, and stacked on top of each other, or adjacent to each other on the substrate, to obtain an online color measurement.
  • transversal electrodes with organic photosensitive layers in between can be used.
  • a charge separation layer may be present between the CTL and the EGL.
  • Various materials may be used as charge separation layer, for instance Alq3.
  • the proposed sensor architecture in the present invention contains many design parameters, which can all be optimally selected from a broad span of possibilities. Consequently, the design freedom can be put to optimal use in order to obtain the most suitable device for the intended application.
  • the most suitable device is designed to have the least possible reduction in image quality and a stable signal readout, and it should fit the sensor requirements needed for improving the display's performance and expanding its functionalities.
  • a fundamental difference of the sensor system of the present invention, compared to prior art sensor systems is that the sensors themselves are designed to be semitransparent. This allows measuring properties of the display's emitted light, directly in front of the display's active area, in individual sensing areas which are larger than an individual pixel.
  • Although the device architecture of the photoconductive sensors uniquely comprises elements that can be made semitransparent, it is still required to properly design the photoconductive sensor, such that it causes the least possible reduction in image quality. Therefore, the possible causes of degradation should be clarified.
  • medical grade displays require a high luminance output and a broad viewing angle, and therefore the sensor should transmit as much luminance as possible over all angles. This includes both global, average changes in luminance, as well as smaller features, which locally alter the emitted luminance. In addition, the sensor should introduce the least possible changes in color.
  • a thin glass substrate with a high transmission is carefully chosen.
  • Corning Eagle XG glass or polished soda-lime glass with a SiO2 passivation layer, with a thickness of 0.7 or 1.1 mm, is chosen.
  • Semitransparent electrode design: When designing the geometry of the semitransparent electrical conductors, preferably a specific geometry that is less visible to the human eye is used, which can result in less visible non-uniformities and thus a better image quality. For instance, semitransparent electrodes which comprise curved finger extensions, or finger patterns under an angle relative to the display's pixel matrix, instead of straight finger-shaped semitransparent electrical conductor patterns, will be less easy to detect by the human eye. For instance, these techniques can be used to avoid moiré effects that can occur due to the superposition of the finger pattern grid on top of the display pixel grid.
  • parts of a floating electrical conductor system can be applied on regions outside the finger-shaped extensions of the semitransparent electrodes and ITO tracks that guide the electrical signal from the finger pattern towards the edge of the display's active area, to have a more uniform global luminance and color output.
  • These parts of a floating electrical conductor system have no function aside from improving the visual uniformity. They are separated from the active, partially transparent electrodes with finger-shaped extensions and ITO tracks that guide the electrical signal from the finger pattern towards the edge of the display, by for instance a gap to ensure there is no electrical contact between them. Furthermore, the width of this gap is small enough so that the gap itself is not noticeable by the human eye.
  • the floating electrical conductor system is typically made of the same material as the active, semitransparent electrical conductors, as the shapes are typically created by patterning an initially uniformly coated ITO substrate.
  • a floating electrical conductor is applied next to the end of a finger of the pattern (this can be interpreted as a "nail" of the finger) which also has no other function except reducing the visibility of the finger pattern itself. They are separated from all active parts of the semitransparent conductor and there is no signal applied on them.
  • the gaps between the fingers of a partially transparent electrical conductor pattern are chosen such that the human eye is not able to distinguish any individual fingers. This is preferably enabled by appropriately choosing the gap between the fingers such that the gap in between the fingers at a given viewing distance will be smaller than the smallest details any (or a typical) human observer is able to discriminate, for the specific contrast between the finger and gap region.
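A minimal numeric sketch of this gap criterion, assuming a nominal visual acuity of 1 arcminute (typical 20/20 vision); the viewing distance is an arbitrary example, and the actual threshold additionally depends on the finger/gap contrast mentioned above:

```python
import math

def max_unresolvable_gap_um(viewing_distance_mm: float,
                            acuity_arcmin: float = 1.0) -> float:
    """Largest finger gap (in micrometres) that subtends less than the smallest
    detail a viewer with the given acuity can resolve at the given distance."""
    angle_rad = math.radians(acuity_arcmin / 60.0)
    return viewing_distance_mm * math.tan(angle_rad) * 1000.0  # mm -> um

# At a 500 mm viewing distance the limit is roughly 145 um, comfortably above
# the 6-30 um finger gaps mentioned later in this description.
print(round(max_unresolvable_gap_um(500.0)))
```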
  • the gap between the fingers can impact the performance of the sensor.
  • the percentage of the finger pattern area, which is covered by the semitransparent conductor is increased.
  • the average transmission over the finger pattern is very close (almost the same) to the transmission of the areas with a uniform floating electrical semitransparent conductor.
  • the human eye is unable to distinguish the sensor's finger pattern from the neighbouring ITO tracks, which conduct the signal towards the edge of the display, and from the parts of the floating electrical conductor system which carry no specific signal.
  • the sensor's size will consequently increase when selecting a higher finger width to gap ratio for a given fixed gap.
  • In a preferred embodiment of the invention, semi-random fingers are created by randomly choosing several points on the two edges of the fingers through which the fingers should pass; the different points are then connected using, for instance, a cubic spline interpolation.
  • the position of the adjacent finger preferably is limited in distance, in the sense that the gap in between the fingers should remain approximately constant to ensure that the device's properties remain unaltered.
  • the points can be chosen in a semi random way in the sense that they are limited to a specific area to avoid too high spatial frequencies in the fingers.
  • the ratio between the finger width and the gap between the fingers is non-constant in the created finger pattern, as a consequence of approximately maintaining the gap size and choosing the points of the edges semi-randomly (see the sketch below).
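The semi-random finger generation described above can be sketched as follows; the point spacing, jitter amplitude and gap value are hypothetical, and only the cubic-spline connection of semi-randomly chosen edge points is taken from the text:

```python
import numpy as np
from scipy.interpolate import CubicSpline

rng = np.random.default_rng(seed=0)

def semi_random_edge(length_um=2000.0, n_points=9, jitter_um=3.0):
    """Choose a few semi-randomly jittered points along a finger edge (jitter
    limited to +/- jitter_um to avoid high spatial frequencies) and connect
    them with a cubic spline."""
    y = np.linspace(0.0, length_um, n_points)          # points along the finger
    x = rng.uniform(-jitter_um, jitter_um, n_points)   # lateral semi-random offsets
    return CubicSpline(y, x)

# Keep the gap approximately constant by offsetting the same spline laterally,
# so the adjacent finger edge follows the first one.
gap_um = 15.0
edge = semi_random_edge()
y_dense = np.linspace(0.0, 2000.0, 11)
print(np.round(edge(y_dense), 2))            # one edge of the gap
print(np.round(edge(y_dense) + gap_um, 2))   # opposite edge, constant gap
```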
  • the average finger width can be designed to reduce the visibility relative to the areas with floating ITO.
  • the exact material parameters of the ITO layer also determine the visibility of the resulting sensors. Indeed, the specific manufacturing procedure of the ITO results in specific thicknesses and complex refractive indices, which will eventually be the parameters of the ITO layer, which contribute to the sensors' absorption and transmission characteristics.
  • the ITO parameters also affect its electrical behaviour. For instance, a thinner ITO layer typically results in a higher sheet resistance, which can render it more difficult to guide the electrical signal towards the controller and perform a proper detection of the signal.
  • a typical range of ITO sheet resistances for a suitable visibility is for instance 60 Ω/sq to 125 Ω/sq.
  • the organic layers need to be suitably selected to reach the visibility requirements.
  • the organic layers mainly demand a suitable wavelength-dependent complex refractive index and thickness.
  • the two hole transport layers can be designed such that they impact the visibility of the sensor to a minor extent, as they can consist of materials with a minor absorption in the visible spectrum. They can, however, impact the thin-film effects occurring in the sensor.
  • the Exciton generation layer severely impacts the sensor's transmission, as this layer's function is to partially absorb incoming photons and convert them into a measurable electric signal. Therefore, the Exciton generation material ideally has a spectrally uniform absorption.
  • the layer's absorption depends on its thickness and its wavelength-dependent absorption coefficient, as dictated by the Beer-Lambert law. Therefore, the material's absorption coefficient should not be too high, so that the layer thickness does not have to become too thin to reach the desired transmission, while remaining manufacturable by the organic layer deposition techniques described earlier.
  • the luminance absorption of the organic stack is typically in the range of 3- 30%.
  • the luminance transmission of the organic stack is preferably designed to be in the range of 80-95%.
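A small worked example of the Beer-Lambert trade-off described above: for a target luminance transmission, the required EGL thickness follows from the absorption coefficient. The coefficient value used here is hypothetical.

```python
import math

def egl_thickness_nm(target_transmission: float, alpha_per_nm: float) -> float:
    """Beer-Lambert law: T = exp(-alpha * d), hence d = -ln(T) / alpha."""
    return -math.log(target_transmission) / alpha_per_nm

# Hypothetical absorption coefficient of 0.005 per nm and a 90% target
# transmission (within the 80-95% range mentioned above):
print(round(egl_thickness_nm(0.90, 0.005), 1))  # about 21 nm
```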
  • the wavelength-dependent absorption of the organic layers is also designed to be rather uniform over the visible spectrum, to avoid strong coloring due to the organic layers.
  • the organic layers are put on the substrate as uniform layers. This implies that they will introduce a global, uniform change in chromaticity of the display's emitted light.
  • a uniform deposition of the organic layers is used, typically with a uniformity between 90-100% over the entire area.
  • the color and luminance uniformity are results of these layer non-uniformities. This renders it easier to compensate the color shift, for instance by using an optimized antireflection coating, as described later on.
  • the display's driving can be suitably adapted to compensate for this uniform coloring.
  • the encapsulation of the sensor based on organic photoconductive sensors is preferably done with a method different than the conventional approach (based on encapsulation plate, spacers and getters done in inert gas atmosphere).
  • a space between the substrate with sensors and the encapsulation plate is filled, by preferably using encapsulation glue which has a refractive index close to the refractive index of both the substrate and the encapsulation plate (typically made of glass), high transmission and whereby the glue has no coloring effect. This helps to remove a few unwanted visual artifacts such as the spacers and getters. Furthermore, it improves the transmission of the sensor by reducing the reflection of internal optical interfaces.
  • the transmission can be improved by using antireflection coatings on the external side of the substrate and the encapsulation plate.
  • the antireflection coating can be tuned in such a way that it reduces the reflection over all wavelengths. For some wavelength regions the reflection is reduced more than for others, which helps to remove any potential coloring of the sensor.
  • a stable readout after calibration for luminance can for instance mean an error of 0-20%, more preferably an error of 0-3%.
  • the main contributor to the sensor's stability by the controller is the driving signal applied to the photoconductive sensors, which is a specific voltage or current signal. From experiments, it became apparent that a voltage-driven sensor is preferred, due to the specific shape of the sensor's IV curves.
  • the type of voltage driving signal applied to the sensor can for instance be a square wave, a sinusoidal wave, or more exotic shapes known by the skilled person.
  • Preferably symmetrical waves going from a positive voltage to the same negative voltage are used. For example, good results were obtained using a square wave that switches between a positive and negative voltage.
  • the waveform applied does not result into a DC voltage over the cell, or in other words: when integrating one period of the waveform applied then the integrated voltage value is zero.
  • the applied wave is repeated multiple consecutive times, for the duration of the measurement procedure, and then one preferably uses the measured data which is retrieved at a certain point during the wave propagation in time.
  • a specific point on the upper and lower flanks is tracked during the consecutive cycles, and its output is used as measurement value over time.
  • the points that are focused on are typically the points on the flank right before the voltage switches from high to low or from low to high.
  • the final value at the end of the positive flank or at the end of the negative flank is used.
  • the optimal frequency depends on the exact organic materials used in the sensor's layers, and their layer thickness. The frequency can be chosen between 0.01 Hz and 60 Hz, preferably between 0.05 and 2.5 Hz.
  • the amplitude of the applied signal can have an impact on the resulting stability of the measured signal and typically is chosen between 0.3-3V. However, a broader range is not excluded, for instance 0.05-500V.
  • An applied voltage signal with a lower amplitude for instance 1 V generally renders a more stable result than a signal with a higher amplitude (for instance 8V).
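The driving and flank-sampling scheme described above can be sketched as follows. The `read_sensor_current` callback stands in for the actual hardware readout and is hypothetical; the amplitude and frequency defaults are taken from the preferred ranges given above.

```python
import time
import numpy as np

def square_wave_readout(read_sensor_current, amplitude_v=1.0, freq_hz=1.0,
                        samples_per_half_period=50, cycles=5):
    """Apply a symmetric square wave (+A/-A, zero DC over one period) and keep
    only the reading taken right before each polarity switch, i.e. at the end
    of every positive and negative flank."""
    dt = 0.5 / freq_hz / samples_per_half_period
    pos_flank_ends, neg_flank_ends = [], []
    for _ in range(cycles):
        for level, store in ((+amplitude_v, pos_flank_ends),
                             (-amplitude_v, neg_flank_ends)):
            reading = None
            for _ in range(samples_per_half_period):
                reading = read_sensor_current(level)  # apply voltage, read current
                time.sleep(dt)
            store.append(reading)                     # value at the end of the flank
    # The two flanks may converge differently (see the next point); average
    # them or use only one of the two.
    return float(np.mean(pos_flank_ends)), float(np.mean(neg_flank_ends))
```

A caller would pass a function that sets the drive voltage on the sensor and returns one current sample from the readout electronics.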
  • the obtained measurement results can be asymmetrical, meaning that the positive flank renders a different result than the negative flank, and in addition they can converge differently. In some embodiments the measurement results are averaged or only one part (the positive or negative flank) is used.
  • the sensors have an initial burn-in, meaning that in the beginning of their lifetime, the measured value changes over time, even when the sensor is used in constant boundary conditions.
  • a decaying signal is measured over time, under constant driving and environmental conditions, which can for instance be overcome when using the sensor during this time period upfront in the production facility before shipping it to the field.
  • the organic layer stack of the sensor i.e. the stack material composition and the used layer thicknesses, has a direct impact on its fundamental stability.
  • the HTL thickness and the number of HTL layers impact the stability.
  • a thicker HTL in the range of for instance 80-160 nm is preferred over a thinner HTL in the range of 40 nm for improving the stability.
  • the gaps between the fingers preferably, but not exclusively, have a width of 6 μm to 30 μm, because the gaps have a proven impact on the signal amplitude and visibility.
  • initial results indicate that the sensor performs (at least) as well as a sensor with standard encapsulation (using spacers, getters and inert gas atmosphere).
  • the sensor system with its design architecture described in the present invention has some inherent imperfections.
  • the sensor as described in a preferred embodiment also does not operate as an ideal luminance sensor.
  • Since the sensor used is not a perfect luminance sensor, as it does not capture light in only a very small opening angle, preferably its angular sensitivity is taken into account, as described in the following part.
  • the measured luminance corresponds to the light emitted by the part of the active area located directly under it (assuming that the sensor's sensitive area is parallel to the display's active area).
  • the sensor according to embodiments of the present invention captures the light of the pixel(s) directly under it together with some light emitted by surrounding pixels. More specifically, the values captured by the sensor cover a larger area than the size of the sensor itself. Because of this, the patterns depicted on the display and captured by the sensor do not correspond to the actual patterns, and therefore a correction has to be done, taking into account the actual angular sensitivity of the sensor, to obtain the actual luminance values.
  • the luminance emission pattern of a pixel is measured as a function of the angles of its spherical coordinates.
  • the distance preferably is kept constant over the measurements.
  • When a luminance sensor is positioned parallel to the display's active area, the latter corresponds to an inclination angle of 0, meaning that only an orthogonal light ray is considered.
  • the exact angular light sensitivity of the sensor can be characterized. These measurements can then be used to obtain the corrected pattern for the actual light the sensors will detect. Using this actual light output will provide an additional improvement and advantageous effect of the algorithm that will render more reliable results.
  • a display's angular emission pattern generally depends on its digital driving level (the grey level or color value sent to its pixels). Therefore, the conversion between values measured by the sensor system of the present invention and the actual values requires a calibration that depends both on the luminance measured by the sensor and on the driving level of the display.
  • Sensor non-linearity: the sensor according to embodiments of the present invention generally has a non-linear response to the intensity of the impinging light, even when light impinges with a constant spectrum.
  • the sensor is generally more sensitive to changes in intensity at lower light levels compared to changes at higher light levels.
  • On top of the angular sensitivity and non-linearity, the spectral sensitivity of the sensor has to be considered.
  • the sensor's spectral sensitivity is directly related to the spectral sensitivity of the Exciton generation layer. As the Exciton generation layer's spectral sensitivity generally does not match the spectral sensitivity curves required to measure luminance and chromaticity, a calibration technique is needed to utilize the sensor for luminance and chromaticity measurements.
  • the CIE XYZ tristimulus values of the captured light are given by X = ∫ I(λ) x̄(λ) dλ, Y = ∫ I(λ) ȳ(λ) dλ and Z = ∫ I(λ) z̄(λ) dλ, where x̄(λ), ȳ(λ) and z̄(λ) are the CIE color matching functions and I(λ) is the spectral power distribution of the captured light.
  • the luminance corresponds to the Y component of the CIE XYZ tristimulus values. Since a sensor, according to embodiments of the present invention, has a characteristic spectral sensitivity curve that differs from the three color matching functions depicted above, it cannot be used as such to obtain any of the three tristimulus values. However, the sensor according to embodiments of the present invention is sensitive in the entire visible spectrum, because the EGL is photosensitive in the entire visible spectrum (or alternatively, they are at least sensitive to the spectral power distributions of a (typical) display's primaries), which allows obtaining the XYZ values after calibration for any specific type of spectral light distribution emitted by our display.
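The relation between the captured spectrum and the tristimulus values can be sketched as a straightforward numerical integration; the flat spectrum and dummy colour matching functions below are placeholders for a measured I(λ) and the tabulated CIE 1931 functions.

```python
import numpy as np

def tristimulus(wavelengths_nm, spd, xbar, ybar, zbar):
    """Numerically integrate X = ∫ I(λ)·x̄(λ) dλ (and likewise Y and Z);
    the Y component corresponds to luminance up to a normalisation factor."""
    X = np.trapz(spd * xbar, wavelengths_nm)
    Y = np.trapz(spd * ybar, wavelengths_nm)
    Z = np.trapz(spd * zbar, wavelengths_nm)
    return X, Y, Z

wl = np.arange(380.0, 781.0, 5.0)       # visible range, 5 nm steps
spd = np.ones_like(wl)                  # placeholder spectral power distribution
xbar = ybar = zbar = np.ones_like(wl)   # placeholders for the CIE 1931 CMFs
print(tristimulus(wl, spd, xbar, ybar, zbar))
```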
  • Displays are typically either monochrome or color displays. In the case of monochrome (e.g. grayscale) displays, they only have a single primary (e.g. white), and hence emit light with a single spectral power distribution. Color displays have typically three primaries - red (R), green (G) and blue (B)- which have three distinct spectral power distributions, although also displays with more than three primaries are possible.
  • a calibration step preferably is applied to match the XYZ tristimulus values corresponding to the spectral power distributions of the display's primaries to the measurements made by the sensor according to embodiments of the present invention.
  • the basic idea is to match the XYZ tristimulus values of the specific spectral power distribution of the primaries to the values measured by the sensor, by capturing them both with the sensor and an external reference sensor. Since the sensor according to embodiments of the present invention is non-linear, since the spectral power distribution associated with a primary may alter slightly depending on the digital driving level of that primary, and since the angular emission pattern of the display may alter depending on the driving of the display's (sub)pixels, it is insufficient to match them at a single level. Instead, they ideally need to be matched at every digital driving level. This will provide a relation between the actual tristimulus values and the sensor measurements over the entire range of possible values.
  • This offline chromaticity measurement which is enabled by calibrating the sensor to an external sensor which is able to measure tristimulus values (X, Y & Z) thus allows measuring brightness as well as chromaticity.
  • sensor calibration tables need to be created by appropriately driving the display and measuring the desired properties of the display's emitted light with both the sensor system of the present invention and a reference sensor. For instance, for each angular emission pattern (i.e. for each display driving level), both the values measured by the sensor system of the present invention, as well as the values measured by the reference device, need to be obtained for a suitable range of measurement values. This can for instance be done by driving the backlight in a suitable range for all of the display's digital driving levels, and measuring both the values measured by the sensor system and the reference sensor, when the display is an LCD. The obtained values should then be put in a multidimensional table and interpolated, in order to obtain the correct value for each measured value by the sensor system of the present invention.
  • the relative angular and spectral sensitivity of the sensor system can be characterized, and combined with the angular and spectral emission of the display, to obtain the sensor detection vs. driving level curve at a certain value measured by the reference device, using for instance a mathematical algorithm, or an optical simulation combined with a mathematical algorithm.
  • the multidimensional calibration table can also be obtained. Note that when using the latter approach, it is assumed that the sensor's non-linear response, and relative angular and spectral sensitivity can be characterized independently.
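A minimal sketch of such a per-driving-level calibration table: for each digital driving level, a sweep of raw sensor readings against reference luminances is stored and later interpolated. The numbers are hypothetical.

```python
import numpy as np

def build_calibration(sweeps):
    """sweeps: {driving_level: (sensor_readings, reference_luminances)}, both
    sorted by sensor reading, obtained by sweeping e.g. the backlight while
    measuring with the integrated sensor and a reference device."""
    return {lvl: (np.asarray(s, float), np.asarray(r, float))
            for lvl, (s, r) in sweeps.items()}

def sensor_to_luminance(calibration, driving_level, sensor_reading):
    """Convert a raw sensor reading to a calibrated luminance by linear
    interpolation in the table of the given driving level."""
    readings, luminances = calibration[driving_level]
    return float(np.interp(sensor_reading, readings, luminances))

calibration = build_calibration({
    255: ([0.10, 0.25, 0.40, 0.55, 0.70], [100, 250, 400, 550, 700]),
    128: ([0.04, 0.10, 0.16, 0.22, 0.28], [40, 100, 160, 220, 280]),
})
print(sensor_to_luminance(calibration, 255, 0.33))  # about 330 (cd/m2)
```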
  • these multidimensional calibration tables need to be created for every sensor, in case the individual sensors of the sensor system behave slightly differently (for instance due to slight inhomogeneities of the organic layers) or if the angular or spectral emission pattern of the display is for instance position dependent.
  • If the relative angular sensitivity of the sensor system does not alter, it can be sufficient to re-measure the non-linear response of the sensor.
  • This can for instance be done by integrating a light source in the display, for instance an LED or a set of LEDs, positioned at the viewer's side of the display's active area, but outside the active area, for instance underneath the bezel.
  • These light sources are only used for recalibration purposes, and therefore do not alter their emission properties over time.
  • a detection of all sensor sensitivities, relative to each other needs to be made, because the different sensors will detect different values due to their different position on the display's active area.
  • Since the spectrum of the light source is generally different from the spectrum of the display's primaries, an additional calibration step is needed to link the different measurements.
  • the actual physical signal generated by the sensor is directly used, without a calibration step that converts the sensed values in another unit.
  • This allows online, user transparent use of the sensor, where the actual image content controlled by the user can be measured, instead of displaying a dedicated patch.
  • This allows control of the display in real time, in the sense that the measured value can be compared to an expected value, calculated by a mathematical algorithm.
  • Said algorithm may be used to calculate the expected response of the sensor, based on digital driving levels provided to the display, and the physical behaviour of the sensor (this includes its spectral sensitivity over angle, its non-linearities and so on).
  • the difference between the sensing result and the theoretically calculated value is compared by a controller to a lower and/or an upper threshold value, taking into account the reference. If the result is outside the accepted range of values, it is to be reviewed or corrected.
  • One possibility for review is that one or more subsequent sensing results for the display area are calculated and compared by the controller. If more than a critical number of sensing values for one display area are outside the accepted range, then the setting for the display area is to be corrected so as to bring it within the accepted range.
  • a critical number is for instance 2 out of 10, e.g. if 3 to 10 of the sensing values are outside the accepted range, the controller takes action. Else, if the number of sensing values outside the accepted range is above a monitoring value but not higher than the critical number, the controller may decide to continue monitoring.
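A minimal sketch of this review logic for one display area; the thresholds, the window of 10 values and the critical number of 2 follow the example above, while the monitoring threshold is hypothetical.

```python
def review_display_area(differences, lower, upper,
                        critical_number=2, monitoring_number=0, window=10):
    """differences: the last measured-minus-expected values for one display
    area. Returns 'correct' when more than critical_number of the last
    `window` values fall outside [lower, upper], 'monitor' when more than
    monitoring_number do, and 'ok' otherwise."""
    recent = differences[-window:]
    out_of_range = sum(1 for d in recent if d < lower or d > upper)
    if out_of_range > critical_number:
        return "correct"   # adapt the driving of this display area
    if out_of_range > monitoring_number:
        return "monitor"   # keep monitoring, no correction yet
    return "ok"

print(review_display_area([0.0, 0.1, 0.6, -0.5, 0.7, 0.0, 0.0, 0.0, 0.0, 0.0],
                          lower=-0.4, upper=0.4))  # 'correct' (3 of 10 outside)
```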
  • the controller may decide not to review all sensing results continuously, but to restrict the number of reviews to infrequent reviews with a specific time interval in between. Furthermore, this comparison process may be scheduled with a relatively low priority, such that it is only carried out when the processor is idle.
  • such sensing result is stored in a memory.
  • such set of sensing results may be evaluated.
  • One suitable evaluation is to find out whether the sensed light values are systematically above or below the value that, according to the settings specified by the driving of the display, should be emitted. If such a systematic difference exists, the driving of the display may be adapted accordingly.
  • certain sensing results may be left out of the set, such as for instance an upper and a lower value. Additionally, it may be that only values corresponding to a certain display setting are looked at. For instance, only sensing values corresponding to high (RGB) driving levels are looked at.
  • the sensed values of certain (RGB) driving level settings may be evaluated as these values are most reliable for reviewing driving level settings.
  • As examples of high and low values, one may think of light measurements when emitting a predominantly green image versus light measurements when emitting a predominantly yellow image.
  • Additional calculations can be based on said set of sensed values. For instance, instead of merely determining a difference between the sensed value and the theoretically calculated value of the light output, which is the originally calibrated value, the derivative may be reviewed. This can then be used to see whether the difference increases or decreases. Again, the timescale of determining such a derivative may be smaller or larger, preferably larger, than that of the absolute difference. It is not excluded that average values are used for determining the derivative over time. It will be understood by the skilled reader that use is made of storage of the display's theoretically calculated values and sensed values for the said processing and calculations. An efficient storage protocol may be further implemented by the skilled person.
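The derivative review mentioned above can be sketched as a smoothed slope estimate over the stored differences; the smoothing window is hypothetical.

```python
import numpy as np

def difference_trend(timestamps, differences, smooth=3):
    """Moving-average the stored (sensed - expected) differences and return
    the least-squares slope over time: a positive slope means the deviation
    from the originally calibrated value is growing."""
    d = np.convolve(differences, np.ones(smooth) / smooth, mode="valid")
    t = np.asarray(timestamps, float)[smooth - 1:]
    return float(np.polyfit(t, d, 1)[0])

print(difference_trend([0, 1, 2, 3, 4, 5], [0.0, 0.1, 0.1, 0.2, 0.3, 0.3]))
```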
  • the sensor system of the present invention is a unique bidirectional sensor system, meaning that, when using the sensor system in an environment with a certain level of ambient light (i.e. light not originating from the backlight of the display), the measured light emitted from the display area to the corresponding sensor will be a combination of the light emitted by the display and the ambient light falling on the sensor from the environment.
  • This light coming from the environment may dynamically change for instance due to shadows being cast on the display screen.
  • the sensor system can comprise an optical, electronic or mechanical filter.
  • Ambient light measurement: One needs to take into consideration here as well that the light emitted by the display can have a spectrum that significantly differs from the perceived spectrum of the ambient light. Hence, if it is beneficial to quantify the ambient light, the obtained result needs to be matched to a device with a proper V(λ) spectral sensitivity curve in order to obtain an actual ambient light measurement. V(λ) mimics the spectral response function of the human eye in the wavelength range from 380 nm to 780 nm and is used to establish the relation between a radiometric quantity that is a function of wavelength λ and the corresponding photometric quantity; it hence allows correctly measuring ambient light of any type with any spectrum.
  • An ambient light sensor also has a specific required angular sensitivity, which can differ from the angular sensitivity of the sensor system of the present embodiment. This also requires appropriate matching. However, this matching is not required if it is sufficient to remove the contribution of the ambient light to the measured signal (in case this is ambient light + light emitted by the display area).
  • the sensor still suffers from the previously described imperfections, and therefore, determining the property of the display light emission from a set of measurements where the display is turned on (measuring a combination of ambient light and display light) and off (measuring ambient light only) requires considering, amongst other effects, its non-linearity. This basically implies that a simple subtraction does not suffice for obtaining the desired value. It is assumed in the further embodiments that the sensor is calibrated appropriately to cope with these imperfections.
  • the filter can for example comprise optical differential filters, such as two sensors, one of which is sensitive to a polarization of the received light and the other insensitive to the impinging light's polarization, which can be used when the display emits linearly polarized light.
  • at least one sensor can be rubbed and at least one sensor can be non-rubbed, whereby the non-rubbed sensor is polarization insensitive.
  • When using sensor systems comprising two sensors, one of which is sensitive to a polarization of the received light and the other able to detect all polarizations of the received light, such as at least one rubbed and one non-rubbed sensor, at least one sensor only reacts to the polarization corresponding to the polarization emitted by the display device and to the part of the ambient light with the same polarization, whereas with the other sensor both the ambient light and the light emitted by the display device are detected.
  • the light measured by the at least one non-rubbed sensor is the total of the ambient light and the polarized light emitted by the display device at the location of the sensor.
• the light measured by the at least one rubbed sensor is the total of 50 % of the ambient light and the light emitted from the display device at the location where the sensor measures.
• the ambient light is assumed to remain approximately constant, and the display can be used to depict the same content during both measurements.
  • This can mathematically be expressed in two linear equations with two unknowns, which is easily solved (assuming the sensor is calibrated to overcome its imperfections such as non-linearities).
  • the amounts of the respective contributions of the ambient light and display device can be derived and isolated.
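• A minimal sketch of the two-sensor separation described above, assuming ideal, calibrated sensors where the polarization-insensitive sensor reads the full ambient contribution A plus the display contribution D, and the polarization-sensitive (rubbed) sensor reads 0.5·A + D:

```python
import numpy as np

def separate_ambient_and_display(s_unpolarized, s_polarized):
    # Solve the 2x2 linear system:
    #   s_unpolarized = 1.0*A + 1.0*D
    #   s_polarized   = 0.5*A + 1.0*D
    coeffs = np.array([[1.0, 1.0],
                       [0.5, 1.0]])
    ambient, display = np.linalg.solve(coeffs, [s_unpolarized, s_polarized])
    return ambient, display

print(separate_ambient_and_display(120.0, 90.0))  # -> (60.0, 60.0)
```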
• the contribution of the ambient light to the output signal of the sensor is measured and isolated by using an alternative type of filter, e.g. an electronic filter such as a filter based on a temporal modulation of the backlight.
  • This modulation can be either a high temporal frequency modulation or a low frequency modulation.
  • the temporal modulation can be done with a much lower temporal frequency.
  • the calibration typically involves switching the backlight on and off to determine potential ambient light influences that might be measured during normal use of the display, for a display area and suitably one or more surrounding display areas. The difference between these measured values corresponds to the influence of the ambient light.
  • the calibration typically involves switching the display off, within a display area and suitably surrounding display areas. The calibration is for instance carried out for a first time upon start up of the display. Moments for such calibration during real-time use which do not disturb a viewer, include for instance short transition periods between a first block and a second block of images.
• A transition period is for instance the announcement of a new, regularly scheduled program, such as the daily news.
• Other transition periods are for instance periods between reviewing a first medical image (X-ray, MRI and the like) and a second medical image. The controller will know or may determine such a transition period.
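• A hedged sketch of the backlight on/off calibration step described above; the per-sensor calibration table used for linearization is hypothetical and merely illustrates why a naive subtraction of raw readings is insufficient for a non-linear sensor:

```python
import numpy as np

calibration_lut_counts = np.array([0, 40, 90, 150, 220, 300])     # raw sensor counts (assumed)
calibration_lut_light = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])  # linearized light units (assumed)

def linearize(raw_counts):
    # Undo the sensor's non-linearity using the (hypothetical) calibration table.
    return np.interp(raw_counts, calibration_lut_counts, calibration_lut_light)

def split_contributions(raw_backlight_on, raw_backlight_off):
    ambient = linearize(raw_backlight_off)            # backlight off: ambient only
    display = linearize(raw_backlight_on) - ambient   # backlight on: display + ambient
    return ambient, display

print(split_contributions(raw_backlight_on=220, raw_backlight_off=40))
```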
  • a mechanical filter can be used that filters out the ambient light.
  • the partially transparent sensor can be one that possesses touch functionality, for instance technology allowing a touch screen. When such a sensor is touched with a finger, all the external light is blocked by a shadowing effect, and thus all the ambient light is blocked locally when touching the region of interest.
• the display can be designed with the required intelligence, such that it is aware of the touch via the touched sensor. The display device can then measure the light properties in a touched state where all or a significant amount of the external light is blocked. The measurement is then repeated in an untouched state. The derived difference between the two measurements provides the amount of ambient light.
  • the finger touching the sensor can have a reflection as well, which can influence the amount of light sensed by the sensor.
  • the display device can be calibrated by first carrying out a test to determine the influence of the reflection of a finger on the amount of measured light coming from the display without ambient light.
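• A possible sketch of this touch-based estimate, assuming (purely for illustration) that the finger reflection acts as a multiplicative factor on the display-only reading and is calibrated once without ambient light:

```python
def calibrate_finger_reflection(touched_dark, untouched_dark):
    """Ratio by which a touching finger boosts the display-only reading (no ambient light)."""
    return touched_dark / untouched_dark

def estimate_ambient(touched, untouched, reflection_factor):
    display_only = touched / reflection_factor   # touch blocks the ambient light locally
    ambient = untouched - display_only           # untouched reading = display + ambient
    return ambient, display_only

r = calibrate_finger_reflection(touched_dark=105.0, untouched_dark=100.0)
print(estimate_ambient(touched=105.0, untouched=130.0, reflection_factor=r))
```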
• a black absorbing cover can be used as a light filter for calibration and to isolate or attenuate the ambient light contribution.
  • the first measurement(s) include measuring the full light contribution, which is the emitted light from the display and the ambient light conditions.
  • the sensor measures the emitted display light with all ambient light excluded.
  • the latter step can be accomplished by using a black absorbing cover over the display, for example. Both these measurements are needed when one desires to quantify the ambient light. However, if one merely wants to measure the light emitted by the display, without the ambient light contribution, it is sufficient to cover the display to exclude ambient light influences and then measure. 2.6.6.5 Alignment
  • the sensor system of the present invention can be designed as a separate device that can be fixed in front of a display during manufacturing.
  • Said display sensor system can be considered a "clip-on" sensor system, like a front glass, which can be appropriately designed to make electrical contact when connected to the rest of the display by for instance wires that conduct the measured electronic signals to and from the controller inside the display.
  • This concept of "clip-on" sensor system is a distinct advantage over prior art solutions, as aids in lowering the cost of the total sensor system, for several reasons. Firstly, it does not require a costly redesign of the display's panel, which is required in prior art solutions such as in US 2007/0052874 Al where a sensor, integrated into a pixel, is used. Secondly, it results in a simple production flow, which is also a contributor in obtaining a lower cost.
  • the individual sub-sensors of the clip-on sensor can end up at a slightly different position on the display's active area.
• the alignment algorithm consists of appropriately driving the display and using the clip-on sensor to measure the required property of the emitted light. Afterwards, the results are processed to triangulate the sensors' location.
• a black and/or white square can be shown and translated over the display's active area; for instance, a square can be shown at a maximum driving level on a background at a minimum driving level.
  • it will be detected by a single sensor, or by multiple sensors at a certain position, or in a certain range of potential positions on the screen, depending on the size of the square.
• the detection can occur faster but less accurately, or slower but more accurately. Therefore, an optimization can be done by using for instance the following algorithm: starting with a large square to roughly determine the position, and in a second iteration, the size of the square is decreased and simultaneously the translation area is restricted to the roughly determined position from the first iteration.
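• The coarse-to-fine idea can be sketched as follows; the display driving and sensor readout are abstracted behind hypothetical callbacks (show_square, read_sensor), and the square sizes are placeholders:

```python
def locate_sensor(show_square, read_sensor, width, height,
                  coarse_size=200, fine_size=40):
    """Return the approximate (x, y) position of one sensor on the active area."""

    def scan(size, x0, y0, x1, y1):
        best, best_pos = -1.0, (x0, y0)
        for y in range(y0, max(y0 + 1, y1 - size + 1), size):
            for x in range(x0, max(x0 + 1, x1 - size + 1), size):
                show_square(x, y, size)     # bright square on a dark background
                value = read_sensor()
                if value > best:
                    best, best_pos = value, (x, y)
        return best_pos

    # First iteration: large square translated over the full active area.
    cx, cy = scan(coarse_size, 0, 0, width, height)
    # Second iteration: smaller square, restricted around the coarse estimate.
    return scan(fine_size,
                max(0, cx - coarse_size), max(0, cy - coarse_size),
                min(width, cx + 2 * coarse_size), min(height, cy + 2 * coarse_size))
```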
  • This specific alignment algorithm is not considered a limitation of the present invention, other alignment algorithms can also be used to determine this positional relationship.
• the sensor as described in the preferred embodiments is not an ideal sensor. Therefore, the calibration previously described is required to perform accurate measurements using the device. 2.7.2 Applications/performance improvements based on display light output measurements
• the sensor system can be designed such that an individual signal can be measured for every display area, and the obtained value is representative for a property (or multiple properties) of the light emitted by (a part of) the corresponding display area under test. More specifically, the sensor system is sensitive to light in the areas corresponding to the electrodes. It is a major advantage of the present invention over prior art solutions that the sensor system offers the ability to create an optimized design depending on the intended application. More specifically, the lay-out of the photoconductive sensors, i.e. the way they are dispersed over the display's active area, as well as the photosensitive area of the sensors, can be suitably designed for the intended application, specifically for the display it is intended to be used with.
  • one of the electrodes can always be connected to a central connector, shared with several sensors that are addressed sequentially.
• the other electrodes are designed to converge to the different connections of a multiplexer, allowing switching between the different sensors. This will allow the sensing area to be as large as possible, with a minimal amount of potential sensing area lost to the semitransparent conductive tracks such as ITO tracks.
  • the size of the light sensitive area can be designed for the intended application.
  • At least two sensors can be used over at least two areas of the display, while displaying an image that is intended to result in a uniform light output (e.g. all digital driving levels are made equal in the case no precorrection table is applied to the display's driving).
  • the measurements are made on white patterns, for instance with equal driving of the red, green and blue sub pixels when using a color display.
  • Simple luminance checks can be performed by measuring at different positions, depending on the critical points or most representative areas of the display design.
  • the specifications regarding luminance uniformity can be derived from established standards/recommendations, e.g. created by dedicated committees and expert groups.
• An example of a standard created by TG18 can be the following: luminance is measured at five locations over the faceplate of the display device (centre and four corners) using a calibrated luminance meter. If a telescopic luminance meter is used, it may need to be supplemented with a cone or baffle.
  • a sensor design lay-out can be implemented, and a suitable metric needs to be selected to assess the uniformity.
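• One possible uniformity metric, given purely as an illustration (other metrics, e.g. the deviation from the centre value, may equally be selected):

```python
def luminance_nonuniformity_percent(readings):
    """Maximum relative deviation between the highest and lowest sensor readings,
    taken while the display shows a nominally uniform pattern."""
    lmax, lmin = max(readings), min(readings)
    return 200.0 * (lmax - lmin) / (lmax + lmin)

# Example: five measurement locations (centre and four corners)
print(luminance_nonuniformity_percent([410.0, 395.0, 388.0, 402.0, 379.0]))
```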
  • luminance and chromaticity non-uniformities can be corrected.
• chromaticity uniformity corrections can be applied for instance by altering the relative driving of the red, green and blue channels of a color display, and luminance uniformity corrections can be applied afterwards while maintaining the relative driving of the red, green and blue channels, in case the display has a linear luminance vs. driving level curve, or alternatively by adapting the ratio according to the actual luminance vs. driving level curve. This might require several iterations to obtain a satisfactory result.
• Typical prior art luminance uniformity correction algorithms, such as known from EP 1 424 672 A1, use an external sensor to measure the luminance non-uniformity during production and, based on the measured results, apply a precorrection table to the driving levels of the display, which ensures images are correctly displayed.
• This correction can be applied either on an individual pixel basis or by using a correction per zone.
• the drawback of this solution is that it is not integrated into the display, and due to the nature of this technique, it is typically only performed once, during the display's production process, while the sensor system of the present invention allows updating the luminance uniformity correction during the display's lifetime, and hence it allows improving the display's performance while the display remains in the field.
  • Another aspect of the present invention is to use the sensor system of the present invention to capture a low resolution luminance map of the light emitted by the display when all the pixels are put to an equal driving level.
• This low resolution luminance map can be obtained by using a sensor system according to the present invention with a matrix of photoconductive sensors. Such a low-resolution map is typically desired because it simplifies the sensor lay-out and the complexity and cost of the controller. Obtaining this map would allow deriving a new precorrection table in a calibration phase during the display's lifetime.
• This precorrection table is obtained by appropriately upscaling the low resolution luminance map to a high-resolution luminance map, which matches the display resolution, and by determining how to adapt the display's driving in order to obtain a uniform display output image.
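• A minimal sketch of this upscaling step, assuming a display that responds linearly to its driving level and using bilinear interpolation as one possible upscaling choice:

```python
import numpy as np
from scipy.ndimage import zoom

def precorrection_gains(low_res_map, display_shape):
    """Upscale the measured map and compute per-pixel gains that flatten it to its minimum."""
    zy = display_shape[0] / low_res_map.shape[0]
    zx = display_shape[1] / low_res_map.shape[1]
    high_res = zoom(low_res_map, (zy, zx), order=1)   # bilinear upscaling
    target = high_res.min()                           # never drive a pixel above 100%
    return target / high_res                          # multiplicative gain per pixel

measured = np.array([[400., 420., 410.],
                     [390., 430., 405.],
                     [385., 415., 400.]])
gains = precorrection_gains(measured, display_shape=(1080, 1920))
print(gains.shape, gains.min(), gains.max())
```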
• the reason such a limited number of sensors is usable is that it is known from measurements that the noise can be distinguished into two distinct patterns: a high-frequency noise at the individual pixel level, which is typically Gaussian, and a low-frequency noise resulting in the global trend of the curve.
• the purpose of this embodiment will be to compensate for the low-frequency noise, and leave the high-frequency noise unaltered.
  • the display can have multiple layers that allow local control of the display's emitted light, in some specific embodiments.
  • the display is an LCD that allows local control of the display's emitted light
  • the driving of the backlight can also be adapted to render the display's light output more uniform, aside from merely altering the driving of the LC layer.
• Determining the best solution of the low resolution luminance map depends on several factors, as there is a wide range of design parameters and a lot of flexibility to choose from. For example, only a few constraints apply to the positioning of the sensors; the most important being that two sensors cannot overlap, due to the lateral, coplanar design of the electrodes with finger-shaped extensions which are in contact with the light sensitive organic layers. Otherwise, sensors can be located at any position on the display. The best solution depends on the exact display type the sensor system is combined with, and the exact sensor architecture.
• DICOM compliance: The way the human eye responds to contrasts in light levels is not linear. At the darkest levels, small changes in luminance can be perceived better than at the brightest levels. The behavior of the human eye at varying shades of gray has been measured, resulting in the DICOM curve. When this curve is appropriately used in the display's driving, the display can be made to behave perceptually linearly. For proper DICOM compliance, it is necessary to take measurements at multiple parts of the screen [National Electrical Manufacturers Association, Digital Imaging and Communications in Medicine (DICOM), Supplement 28: Grayscale Standard Display Function, technical report, 1998]. Another use case in which the sensor system of the present invention can be used to ameliorate a display's performance is assessing the display's DICOM compliance at multiple locations over its active area, and altering the display's driving to render it compliant if needed.
• the entire DICOM calibration of the display can be performed. In practice, this can be done by altering the LUT that is applied on the incoming image to obtain a DICOM calibrated image. To obtain this LUT, the native behaviour of the display should be measured using the sensor system's photoconductive sensors (without the initial DICOM calibration); the resulting values can be used in combination with the ideal DICOM curve to obtain the required LUT. As the measurements can only be made at the location of the photoconductive sensors, an interpolation/approximation step needs to be done to obtain the proper DICOM calibration at intermediate locations as well.
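• A hedged sketch of deriving such a LUT at one sensor location: the measured driving-level-to-luminance curve is inverted and sampled along a target curve. The target is passed in as a callable because the actual Grayscale Standard Display Function from the DICOM standard is not reproduced here; the stand-in target used in the example is not the GSDF.

```python
import numpy as np

def build_calibration_lut(measured_ddls, measured_luminance, target_curve, n_levels=256):
    """Return, for each input level, the native driving level that best
    approximates the target luminance (monotonic measured response assumed)."""
    lmin, lmax = measured_luminance.min(), measured_luminance.max()
    target = target_curve(np.linspace(0.0, 1.0, n_levels), lmin, lmax)
    # Invert the measured response by interpolation.
    return np.interp(target, measured_luminance, measured_ddls).round().astype(int)

# Example with a stand-in target: a perceptually motivated placeholder, not the GSDF.
placeholder_target = lambda p, lmin, lmax: lmin * (lmax / lmin) ** p
ddls = np.array([0, 64, 128, 192, 255])
lum = np.array([0.5, 15.0, 60.0, 160.0, 400.0])
lut = build_calibration_lut(ddls, lum, placeholder_target)
print(lut[:8], lut[-8:])
```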
  • the local variation of reflection on the screen cannot be detected. Therefore, the ambient light could be considered acceptable, while local peaks can be very disturbing for a user.
• the sensor system of the present invention will be able to consider the impact of the ambient light on the entire area of the screen, making it possible to overcome the limitation of the existing measurement methodologies.
• the sensor system of the present invention can detect whether there is a specular light source being reflected on the display surface, or when a window curtain is opened resulting in a sharp specular reflection.
• the white and black levels can also exhibit non-uniformities. Therefore, it is valuable to measure the luminance ratio locally by using each individual sensor, by measuring the ambient light as well as L_black and L_white.
  • the luminance ratio can then be calculated, and should be compliant with the following formula (typical requirement for instance in radiology):
  • action can be taken for instance by signalling the user that the ambient light is excessive.
• the user can then for instance lower the ambient light, such that the luminance ratio is again compliant with the formula above.
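• A sketch of the per-location compliance check, and of one possible corrective action anticipating the global backlight adaptation mentioned a few items below. The required minimum ratio of 250 is only an illustrative placeholder for the formula referred to above; L_white, L_black and L_amb are the local display white/black levels and the locally measured ambient contribution.

```python
def luminance_ratio(l_white, l_black, l_amb):
    return (l_white + l_amb) / (l_black + l_amb)

def backlight_scale_for_compliance(measurements, required_ratio=250.0):
    """measurements: iterable of (l_white, l_black, l_amb) per sensor location.
    Returns the smallest global backlight factor s >= 1 that makes every
    location compliant, or None if the panel contrast itself is insufficient."""
    scales = []
    for w, b, a in measurements:
        if luminance_ratio(w, b, a) >= required_ratio:
            scales.append(1.0)
        elif w > required_ratio * b:
            # solve (s*w + a) / (s*b + a) == required_ratio for s
            scales.append(a * (required_ratio - 1.0) / (w - required_ratio * b))
        else:
            return None
    return max(1.0, max(scales))

print(backlight_scale_for_compliance([(420.0, 1.2, 0.3), (400.0, 1.4, 0.35)]))
```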
• the display can be recalibrated according to the DICOM standard including ambient light by using the Barten model for the human visual system as recommended in the TG18 document.
  • DICOM compliance can be achieved at multiple locations of the display at which ambient light is measured with the proposed sensor system.
  • a possible solution is to adapt the backlight such that the display is compliant once again for particular display application requirements. Since the luminance ratio can have local variations, and the backlight is typically a CCFL backlight which does not allow local adaptations of the emitted light, the adaptation should be made based on the minimum measured luminance ratio. 2.7.3.4 Touch sensor functionality
  • the sensor technology of the present invention allows obtaining a touch screen.
• the underlying principle is the following: when one of the sensors is touched, the external light is blocked, and the measured light is a combination of the light transmitted through the sensor and the light reflected from one's finger.
  • a touch screen could be an added value for many applications, and is not restricted to medical displays.
  • One such application is a new type of GOSD with context sensitive buttons that are used directly on the active area of the screen instead of on the bezel. When the buttons are not required, the area can simply be used again as part of the display.
• the substrate on which the sensors are dispersed can be made larger than the display's active area, which allows additional space, outside the display's active area, for touch sensors.
  • touch sensors can have a fixed light source underneath them, and the altered light output upon touching them can be registered.
  • a light sensor intended to measure ambient light can be made similarly outside the display's active area, but on the same substrate, to detect if the measured change is due to changing ambient light conditions, or due to an actual finger touching the sensor.
  • Fig. la is a front view of a display with an integrated sensor system according to the present invention.
  • Fig. lb shows a high-level representation of the sensor system 9 of the present invention.
  • Fig. lc illustrates the global structure of the electronics board.
  • Fig 2 shows the general architecture of the semitransparent photoconductive sensors comprised in the sensor system of the present invention.
  • Fig. 3 illustrates a side- view of the organic photoconductive sensor according to the present invention.
  • Fig. 4 depicts the spectral transmission of ITO with different thicknesses in the visible spectrum.
• Fig. 5 schematically illustrates a floating electrode solution to improve the sensor's visibility.
  • Fig. 6 illustrates how a floating conductive material can be used inside the finger patterns, to improve visibility.
  • Fig. 7 illustrates an increased finger width to finger gap ratio to help reduce the visibility of the sensor.
  • Fig. 8a schematically illustrates an electrode comprising a pattern whereby the pattern comprises semi-random fingers according to embodiments of the invention.
• Fig. 8b illustrates an alternative embodiment where the finger pattern is shaped like Euclidean spirals.
  • Fig. 9 presents a measured transmission spectrum of a 20.7 nm PTCBI layer.
  • Fig. 10 shows the transmission of a sensor with standard encapsulation and a sensor with glue encapsulation with improved ITO.
  • Fig. 11a presents an example of an applied voltage signal over time on the sensor.
• Fig. 11b depicts a possible current flowing through the sensor as a consequence of impinging light when applying the voltage signal of Fig. 11a.
  • Fig. 12 illustrates the IV curves of 4 different HTL configurations using the same organic materials.
  • Fig. 13a presents a picture with a line-shaped laser beam pointed in the gap between two interdigitated sensor fingers.
• Fig. 13b presents the measured current as a function of the position of the laser line with respect to the anode.
  • Fig. 14 shows an example of a long-term experiment, in which the measured current is tracked over time.
  • Fig. 15 shows an embodiment of the invention, using an optical filter to measure the ambient light.
• Fig. 16a illustrates a possible method to roughly determine the position of the sensor relative to the display's active area using a square that scans over the active area of the screen.
• Fig. 16b illustrates another possible method to determine the position of the sensor relative to the display's active area. This methodology uses images which have a bright half and a dark half, to pinpoint in which quadrant the sensor is located.
• Fig. 16c illustrates a smaller version of the same images depicted in Fig. 16b, that allows obtaining the correct sub-quadrant of the initial quadrant.
  • Fig. 16d illustrates an algorithm to determine the position of the sensor relative to the display's active area, using a combination of the moving square and half bright/half dark images.
• Fig. 17 shows a high-resolution luminance map as emitted by the display.
• Fig. 18a presents a cross-section of a profile measured using a high-resolution camera on a relatively uniform display.
  • Fig. 18b presents an example of positions of the photoconductive sensors according to an embodiment of this invention.
• Fig. 18c presents a cross-section of the emitted light, after the uniformity correction is applied.
  • Fig. 19 illustrates the rescale process for a cross-section, used to go from the interpolated, non-uniform light output before applying a uniformity correction algorithm, to the uniform, corrected data.
  • Fig. 20 shows a local map of the error for digital driving level 496 (10 bit driving levels, ranging from 0 to 1023) when the sensors are located on a 6 by 6 uniform grid.
  • the sensors are dispersed in a regular grid
• the sensors are dispersed in an alternative grid, with a denser sensor concentration at the borders.
• Fig. 21 presents the obtained results in the case where the sensor is not an ideal luminance sensor and has an equal response independent of the angle at which a ray impinges on the photosensitive area, for a uniform grid (left columns) and a non-uniform grid (right columns), for a broad range of sensor resolutions (horizontal axis on the plots), combined with a broad range of sensor sizes (indicated above: 5 mm, 10 mm, 15 mm, 20 mm, 25 mm, or expressed in number of pixels: 30, 60, 90, 120 and 151 pixels).
  • the global percentual relative absolute error is presented on the vertical axis.
• In Fig. 21a, the results for the darkest level are presented, while in Fig. 21b, the results for the brightest level are presented (DDL 255 for 8-bit display driving).
  • a device A coupled to a device B should not be limited to devices or systems wherein an output of device A is directly connected to an input of device B. It means that there exists a path between an output of A and an input of B which may be a path including other devices or means.
  • the term "at least partially transparent” or “semitransparent” as used throughout the present application refers to an object that may be partially transparent for all wavelengths, fully transparent for all wavelengths, fully transparent for a range of wavelengths and partially transparent for the rest of the wavelengths. Typically, it refers to optical transparency, e.g. transparency for visible light.
  • Partially transparent is herein understood as the property that the intensity of an image shown through the partially transparent member is reduced due to the said partially transparent member, or its color is altered.
• Partially transparent or semitransparent refers particularly to a reduction of light luminance of at most 40%, more preferably at most 25%, more preferably at most 10%, or even at most 2%.
  • the sensor design is created so as to be substantially transparent, i.e. with a reduction of impinging light intensity of at most 20% for every visible wavelength, which also limits its coloring.
  • the term 'display' is used herein for reference to the functional display. In case of a liquid crystal display, as an example, this is the layer stack provided with active matrix or passive matrix addressing.
  • the functional display is subdivided in display areas. An image may be displayed in one or more of the display areas.
  • the term 'display device' is used herein to refer to the complete apparatus.
  • the display device further comprises a controller, driving system and any other electronic circuitry needed for appropriate operation of the display device.
  • Fig. la shows a display device 1, for instance a liquid crystal display device (LCD device) 2.
  • the display device can be a plasma display device, an OLED display device, or any other kind of display device emitting light compatible with the specificities described hereafter.
  • the active area 3 of the display device 1 is divided into a number of groups 4 of display areas 5, wherein each display area 5 comprises a plurality of pixels.
• the active area 3 of this example comprises eight groups 4 of display areas 5; each group 4 comprises in this example ten display areas 5.
  • Each of the display areas 5 is adapted for emitting light with a certain angular emission pattern, to display an image to a viewer in front of the display device 1.
  • Fig. la further shows the sensor system's substrate with photoconductive sensors 6 comprising semitransparent electrodes & conductors and organic layers. Light is absorbed on top of (part of) the display area 5, at the location of the electrodes with finger-shaped extensions 13, and thereby alters the conduction properties of the photoconductive material (For instance, a voltage is put over its electrodes, and an impinging-light dependent current consequentially flows through the sensor, which is measured using the controller). As depicted in the figure, one sensor is foreseen per display area. The electric current is guided towards the edge, using two semitransparent electrical conductors 10 and 11. The electrical conductors are contacted outside the display's active area by an array of contacts 7 which are connected to the controller, comprising e.g. eight groups 8 of contacts. 4.3.2 Controller
  • Fig lb shows a high-level representation of the sensor system 9 of the present invention.
  • the sensor system comprises the photoconductive sensors 6, which react on the impinging light. These photoconductive sensors are connected to the electronics board 12, which basically does all the low-level interaction with the photoconductive sensors, such as applying a dedicated sensor driving signal and retrieving the consequent measurement signal.
  • the electronics board also interacts with, and is controlled by instructions from the software 13.
  • the electronics board 12 combined with the software 13 together form the controller 14 of the present invention.
  • the software 13 comprises various instructions, ranging from low-level instructions to do basic interactions with the electronics board to complex algorithms such as luminance and chromaticity improvement algorithms. Therefore, the software is also able to interact with the display 1, in order to change for instance the driving of its pixels in order to improve its uniformity.
  • the global structure of the electronics board 12 is presented in Fig. lc.
  • the electronics board has an integrated board controller 15, which can be used to interact with several electronic components on the board.
  • the signal source, 16 is controlled by the board controller, which amongst others makes sure a signal with a desired amplitude and frequency are applied to the sensors 6.
  • the resulting signal gets amplified in a first preamplification stage 17, after which it reaches the multiplexer 18.
  • the board controller 15 should select the desired sensor.
  • the Electronics board can contain a plurality of multiplexers, which are controlled by the board controller. It is obvious for anyone skilled in the art that in this embodiment, a plurality of PGAs, filters and ADCs are also required.
• the selected signal is then amplified in a next stage by a programmable gain amplifier 19, which has an adjustable gain factor that can be adjusted for the specific sensor, as the signal generated by the sensor can vary depending for instance on the ITO track length, which can differ for different sensors at different locations on the substrate.
  • the obtained signal is filtered by the filter 20, and converted from an analogue to a digital signal by the ADC 21. Finally, the signal is returned to the board controller 15.
• Fig. 2 shows the general architecture of the semitransparent photoconductive sensors comprised in the sensor system of the present invention. 4.4.2 Device architecture
• Each photoconductive sensor 22 comprises two semitransparent electrodes on a substrate 23. Said two electrodes have finger-shaped interdigitated extensions 24, i.e. they are positioned such that each finger of one electrode is surrounded by two fingers of the other electrode, separated by a spatial gap to avoid electrical shorting. For each sensor, there are two outer fingers 25, which only have a single adjacent finger.
• the organic stack 26, which is photoconductive, is put on top of the electrodes; its conductivity alters depending on the impinging light.
  • the electrodes are connected to the controller, positioned outside the display's active area, via a semitransparent conductive track 27.
• the glass substrate 23 is typically made of glass materials such as Corning Eagle XG glass or polished soda-lime with a SiO2 passivation layer.
  • the thickness of this glass substrate 28 is selected to minimize the absorption of the substrate, and is typically in the range of 0.5 to 5 mm.
  • the width and height of the glass substrate are chosen typically larger than the display's active area, in order to assure that the substrate can be contacted outside the visible area of the screen, and in order to have space for the components of the encapsulation technique if needed.
  • the semitransparent electrodes 24 depicted in Fig. 3 are typically made of Indium Tin Oxide (ITO), which is used in the preferred embodiment.
• ITO exists in many flavors, some examples are illustrated in Fig. 4, which illustrates the spectral transmission of ITO with different thicknesses in the visible spectrum, obtained from the company Colorado concept coatings. Depending on its thickness, the sheet resistance of the ITO material also alters.
• a typical range of ITO thicknesses for a suitable device is 25-65 nm, as described later on.
• the 45 nm ITO has a sheet resistance of about 60 Ω/sq, while the 25 nm ITO has a sheet resistance of about 125 Ω/sq. It is clear from this figure that ITO with different parameters will result in different chromaticities and transmitted luminances.
  • the exact specifications of the ITO used in the architecture should be carefully selected, as they impact the final performance of the sensor. Also, manufacturers typically provide tolerance limits on the thickness of the ITO substrates, which should be suitably controlled in order to obtain a sensor with a uniform color shift.
  • the sensors' finger shaped extensions 24 are created on the ITO coated substrate by means of laser ablation in the preferred embodiment.
• a suitable fine-tuned laser ablation process is able to laser ablate patterns into large substrates, which are usable for all commonly used medical display sizes. These parameters are fine-tuned with a delicate balance in mind: on the one hand, the laser removal effect should be intense enough such that the ITO is sufficiently removed, to prevent any possible short-circuits in areas that are supposed to be removed, and on the other hand, the laser removal effect should not be too intense, to avoid potential damaging of the glass substrate, as this can result in undesired light scattering effects in the glass.
  • a patterned ITO is obtained with gaps 29 that separate individual fingers, and remaining fingers 24 with a suitable width 30.
• the size of the gap 31 is limited by the resolution which can be obtained by the laser ablation process, which is typically around 10 μm.
  • the width of the fingers is not a technological limitation, as the laser ablation process typically starts from a substrate with a uniform ITO coating.
  • the number of fingers is not considered a limitation of the present invention, but the technological limit is one finger per electrode, as this is required in order to obtain a working device.
  • the number of fingers may for instance be anything between 2 and 5000, more preferably between 10 and 2500, suitably between 25 and 700.
• the surface area of a single semitransparent sensor may be in the order of square micrometers but is preferably in the order of square millimeters, for instance between 10 and 7000 square millimeters.
• One suitable finger shape has for instance a size of 12000 by 170 micrometers.
  • the gap in between the fingers can for instance be 15 micrometers in one suitable implementation.
  • the organic photosensitive layer stack 26 consists of several organic layers, which are added sequentially on the ITO patterned substrate.
  • the device has a three layer stack, consisting of a first Hole Transport Layer (HTL) 32, added on the ITO patterned substrate, onto which an Exciton Generation Layer (EGL) is added 33, onto which a final organic layer, a second HTL 34, is added.
  • These organic layers can be added to the patterned substrate for instance by using vacuum (thermal) evaporation. Impinging visible light generates excitons in the EGL 33, which diffuse towards the interface of the EGL 33 and the HTLs 32 and 34, where they are split into electrons and holes due to their energy band diagrams. Holes end up in the HTLs, while electrons remain in the EGL. The holes are then transported to the electrodes in the HTLs. 4.4.2.6 Encapsulation
• In order to protect the photoconductive sensors from potential contamination, a suitable encapsulation technique is used.
  • This encapsulation technique generally uses a material 35 on top of the organic layers, and a cover glass 36.
• material 35 is an inert nitrogen (N2) gas.
  • a UV-curable glue can be used, or a sputtered AlOx layer with a UV-curable glue on top.
  • the proposed sensor architecture in the present invention contains a lot of design parameters, which can all be optimally selected from a broad span of possibilities. Consequentially, the design freedom can be put to optimal use in order to obtain the most suitable device for the intended application.
• the most suitable device is designed to have the least possible reduction in image quality and a stable signal readout, and it should fit with the sensor requirements needed for improving the display's performance and expanding its functionalities.
• a suitable selection of all design parameters is elaborated, according to the desired sensor requirements; the combination of these preferred parameters forms the preferred sensor design.
  • the first important aspect of the optimized sensor design is the visibility of the sensor, as described earlier in this invention.
  • a thin glass substrate with a high transmission is carefully chosen.
  • the previously mentioned glass materials with a thickness 28 of 0.7 or 1.1 mm are suitably chosen.
  • the ITO pattern can be optimized to reduce the sensor's visibility. Effects that occur due to the ITO patterning include first of all high-frequency artifacts for instance due to the high-frequency spatial finger patterns or the separation of the floating ITO parts from the remainder of the ITO, interference effects due to the ITO finger pattern on top of the display with a matrix of pixels, local coloring due to the regions on the substrate with and without ITO material, and diffractive effects due to the geometry of the interdigitated electrode extensions. When a diffraction grating is illuminated by white light, in reflection one can see dispersion of light which comes from the fact that different wavelengths are diffracted at different angles. 4.5.1.3.1 Dummy fingers
• Fig. 5 schematically illustrates a floating electrode solution to improve the sensor's visibility, more specifically to avoid the local coloring due to regions with and without ITO.
  • Two ITO patterns which can be made on a glass substrate are presented, a first one without a floating ITO pattern (Fig. 5a), and a second one with a floating ITO pattern (Fig. 5b).
  • the pattern presented in Fig. 5a includes three interdigitated electrode pairs with finger-shaped extensions 37 which compose the photosensitive area, which are connected to ITO tracks 38, that allow conducting the sensed electric signal towards the controller, which is contacted outside the display's visible area.
  • the width of the ITO track 39, and the distance between two consecutive ITO tracks 40 is also shown.
• a floating electrical conductor system 41 is applied in all the regions where no ITO was present (as illustrated in Fig. 5a), outside the finger patterns of the electrode, at locations 42 according to an embodiment of the invention, to improve visibility.
• the separation between the dummy parts and the ITO patterns that carry a useful electric signal can for instance be in the range of 1 to 100 μm, more suitably in the range of 4 to 25 μm, even more suitably between 8 and 20 μm. For instance, a separation of 15 μm can be used with the suitable laser ablation parameters.
  • Fig. 6 illustrates how a floating conductive material can be used inside the finger patterns, at the end of the fingers 24 in the form of nails 43 (in this case made of ITO).
  • the separations 44 & 45 between the nails 43, and the ITO fingers 24 can be very narrow, for instance a separation of 15 ⁇ can be used with the suitable laser ablation parameters.
  • the gaps between the fingers are preferably chosen such that the human eye is not able to perceive a high-spatial frequency pattern where the fingers are located, due to the contrast between the regions with and without finger patterns.
  • a simulation can be performed, which is later confirmed by human observer tests.
  • This is an optical simulation model built in a ray tracing optical simulation software program.
  • the simulation includes a light source, a pattern according to embodiments of the present invention and an optical model of the human eye, and suitable processing of the obtained simulation results.
  • This human eye model has the appropriate optical imperfections, introduced amongst others by the limited cone density on the retina, the cornea, and the lens in our eye.
• in the human observer tests, a bar-shaped finger pattern comprising wide bars was used, and the distances were varied, e.g. from 500 μm down to 5 μm.
  • the minimal distance depends on the specific type of ITO material used for the pattern, the thickness and type of the exciton generation layer and the methodology used to deposit the latter.
• In Fig. 7, two subfigures 7a and 7b are shown with different finger width/gap ratios.
  • finger 46 has a corresponding width 48, and a gap 47 with a width 49.
  • finger 50 has a broader width 52 compared to the width 48 of finger 46.
  • the gap width 53 of gap 51 is maintained at the initial gap width 49.
  • a suitable ratio between the finger width and the gap between the fingers in transmission can be estimated using simulations. More specifically, when maintaining a specific gap, optimized for the sensor's performance, the finger width has been increased, to reduce the percentage of the area without ITO patterns.
• the metrics used for the simulations and to evaluate if there is a visible difference between the finger pattern region and the neighboring floating ITO region are the number of JNDs between them, to evaluate if there is a difference in brightness, and the ΔE2000 metric to evaluate if there is a perceived difference in chromaticity.
  • the average value of the tristimulus X, Y and Z values is calculated and used in the region of the finger pattern; in other words, we assume that the gap between the fingers is suitably chosen such that only an average tristimulus X,Y and Z value is perceived.
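• The area-weighted averaging assumed above can be sketched as follows, mixing the tristimulus values transmitted through the fingers and through the gaps according to the width-to-gap ratio (values below are placeholders):

```python
def average_tristimulus(xyz_finger, xyz_gap, finger_width, gap_width):
    """Average XYZ perceived when the finger/gap pattern is unresolvable by the eye."""
    w = finger_width / (finger_width + gap_width)
    return tuple(w * f + (1.0 - w) * g for f, g in zip(xyz_finger, xyz_gap))

# Example: roughly a 10:1 width-to-gap ratio
print(average_tristimulus((80.0, 85.0, 90.0), (95.0, 100.0, 108.0), 170.0, 17.0))
```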
  • this also depends on the sensor design: the type of ITO and the thicknesses of the Exciton Generation Layer and Hole Transport Layer, and the encapsulation method.
• a typical finger width to gap ratio is in the range of 30:1 down to 5:1, e.g. 10:1.
  • Fig. 8a schematically illustrates an electrode comprising a pattern whereby the pattern comprises semi-random fingers according to embodiments of the invention.
  • These finger patterns help to reduce the interference effects and the diffraction effects, and are also by nature tough to spot for human observers.
  • the semi-random finger pattern is constructed by semi-randomly choosing several points on the two edges of the finger where the fingers should go through. The different points are then connected using a cubic spline interpolation.
  • the adjacent finger is limited in the sense that the gap in between the fingers should remain approximately constant to ensure the device's properties remain unaltered. For instance, the gap is obtained by translating the created edges of the finger patterns, when their spatial frequency is not excessive.
  • the points can be chosen in a semi random way in the sense that they are limited to a specific area to avoid too high frequencies in the fingers.
  • Fig. 8a illustrates a simulation of such a pattern comprising fingers.
  • Each finger is determined by a set of control points, chosen at random through which for instance a cubic interpolation is run, resulting in a curve shape.
  • the control points are all put in an individual limited rectangle. This rectangle limits its position in horizontal and vertical direction. Rectangles for one of the fingers are positioned sequentially next to one another, and the curve is interpolated through the sequence of defined points. The oscillation of the curve can be increased by adding more control points in the interpolation, which boils down to altering the horizontal and vertical dimensions of the rectangle in which the control points are defined.
• when the fingers are oriented horizontally, as in Fig. 8a, a rectangle with a smaller horizontal dimension and a larger vertical dimension allows bigger oscillations.
  • the gap between the fingers is 20 microns wide and the sensor corresponds approximately to a 1cm x 1cm square.
• the width of the fingers is at least 2.5 times the gap between the fingers, and on average the finger width is 11.5 times the gap.
  • the fingers are depicted in black, while the gaps are depicted in white.
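• A simplified construction of one such finger edge (dimensions are placeholders, and for simplicity only the vertical coordinate of each control point is randomized within its rectangle):

```python
import numpy as np
from scipy.interpolate import CubicSpline

rng = np.random.default_rng(0)

def semi_random_finger_edge(length_um=10000.0, n_rects=12,
                            rect_height_um=120.0, samples=500):
    # One control point per rectangle; rectangles are placed side by side along the finger.
    x_ctrl = np.linspace(0.0, length_um, n_rects)
    y_ctrl = rng.uniform(-rect_height_um / 2.0, rect_height_um / 2.0, size=n_rects)
    spline = CubicSpline(x_ctrl, y_ctrl)
    x = np.linspace(0.0, length_um, samples)
    return x, spline(x)

x, y_upper = semi_random_finger_edge()
y_lower = y_upper - 20.0   # translate by the (constant) 20 um gap to obtain the opposite edge
print(x.shape, float(y_upper.max() - y_upper.min()))
```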
• Fig. 8b illustrates an alternative embodiment where the finger pattern is shaped like Euclidean spirals.
• ITO material parameters: By appropriately choosing a suitable ITO material (defined by its layer thickness and complex refractive index as a function of wavelength), the transmission and coloring can be improved.
  • the 45nm ITO and the 25 nm ITO depicted in Fig. 4 are suitable candidates to obtain a sample with a sufficient transmission.
• the coloring should be assessed in combination with the other components of the sensor, as their combination will determine the total wavelength dependent transmission and reflection.
  • the ITO material parameters also impact the other design elements of the ITO fingers, as described above.
  • the visibility of the organic photosensitive materials 26 is mainly determined by the properties of the Exciton generation layer 33, as the hole transport layers 32, 34 typically have a minor absorption in the visible spectrum.
  • the luminance absorption of the organic stack is for instance designed to be in the range of 3- 30%. More preferably, the luminance transmission of the organic stack is designed to be in the range of 80- 95%.
  • a suitable material for the Exciton generation material is 3,4,9,10- perylenetetracarboxylic bis-benzimidazole (PTCBI, purchased from Sensient), for several reasons. First of all, it is photosensitive over the entire visible spectrum, which allows it to react on light with any visible spectrum. Secondly, it has a rather uniform absorption coefficient over the visible spectrum, which results in a limited coloring of the light. In Fig. 9, a measured transmission spectrum of a 20.7 nm PTCBI layer is presented as an illustration. On top of that, the absorption coefficient has a suitable absolute value over the visible spectrum, which allows reaching a sufficient transmission for a layer thickness that can be made with the technologies used to add the different layers.
  • PTCBI layer thicknesses in the order of 5 to 15 nm proved to result in suitable transmissions, more specifically, PTCBI layer thicknesses in the order of 8-12 nm proved to render very good transmission results, while maintaining a sufficient signal amplitude and contrast.
• the HOMO and LUMO levels of PTCBI are respectively around 6.3 eV and 4.6 eV [Toshiyuki Abe, Sou Ogasawara, Keiji Nagai, Takayoshi Norimatsu, Dyes and Pigments 77 (2008), 437-440].
  • this poses a restriction to the HOMO and LUMO levels of the HTL, such that the Exciton can be split into a hole and an electron, meaning that;
  • a suitable HTL for instance has a HOMO and LUMO level in the following ranges:
• In order to protect the photoconductive sensors from potential contamination, a suitable encapsulation technique is used.
  • This encapsulation technique generally uses a material 35 on top of the organic layers, and a cover glass 36.
• material 35 is an inert nitrogen (N2) gas.
  • a UV-curable glue can be used, or a sputtered AlOx layer with a UV-curable glue on top.
  • the material 35 on top of the organic layers should be carefully selected.
  • optical losses and reflections can occur here due to two optical interfaces, a first interface between the organic layers 26 and the material 35, and a second interface between the material 35 and the cover glass 36. Due to the relatively high contrast in refractive indices, using an inert gas atmosphere for material 35 can result in optical losses. Therefore, instead of encapsulating the sensor in an inert gas atmosphere and placing the glue only for instance at the edges of the encapsulation glass, an alternative embodiment can be used where the encapsulation glue is applied over the whole area of the sensor between the glass substrate and the encapsulation glass. The latter is enabled by using a drop of glue between the two glass plates, applying mechanical pressure on them, pushing out the inert gas by capillary forces and then curing the glue by UV exposure.
  • the material 35 consists only of the cured glue.
• Norland Optical Adhesive 68 glue can be used, which enables very high transmission percentages and does not introduce any coloring.
• this technique overcomes several problems of the prior art: for instance, one does not need the implementation of getters and spacers, the transmission is improved, and the visibility in reflection is reduced.
• the latter is also illustrated in Fig. 10, which illustrates the transmission of a sensor with standard encapsulation (and ITO 65 nm, HTL 40 nm, EGL 10 nm) and a sensor with glue encapsulation (and ITO 45 nm, HTL 40 nm, EGL 10 nm) with improved ITO.
  • the initial results indicate that the performance of the sensor with the glue is as good as the sensor with standard encapsulation.
  • a sputtered AlOx layer with a UV-curable glue on top can be used in an alternative embodiment.
• an antireflection coating is applied on the external surfaces of the substrate glass and/or encapsulation glass to further improve the transmission and reduce the coloring.
• the transmission over all wavelength regions can be improved and the coloring of the sensor can also be reduced.
  • the ARC can reduce the reflection (i.e. improving the transmission) over all visible wavelengths, but it can suitably be designed to reduce reflection relatively more in the green wavelength region and less in the blue and red wavelength region.
  • Such an ARC can for instance be obtained from a company such as prazisions glas & optik.
  • prazisions glas & optik also offers the possibility to make a customized ARC which can be tuned in such a way that coloring of another wavelength which is visible is reduced.
• the sensor system's design should also be adapted to render reliable results over its entire lifetime. Important elements in this are the sensor driving and the photoconductive sensor's architecture.
  • the controller is used to apply a suitable signal over the sensor, and afterwards it is used to retrieve and process the light sensitive response of the sensor.
  • a suitable signal to be applied to the sensor is a voltage signal, more specifically a square wave that switches between a positive and negative voltage.
• In Fig. 11a, an example of an applied voltage over time is presented.
• This voltage signal switches from a negative voltage 54 of -1 V to a positive voltage 55 of +1 V.
  • the period of this signal is 8s, corresponding to a frequency of 0.125Hz.
• a possible sensor response is presented in Fig. 11b, which clearly shows the need for such low frequencies.
  • the current shows a slow decay in time 56 both for the positive and negative applied voltage, which stabilizes at a point 57 near the end of the flank.
  • the decay 56 can be slower or faster in time, such that the curves can cross at some point during the flank, therefore, a stabilized value needs to be obtained.
• the voltage flank is repeated over time, and the point 57 is tracked over time for the consecutive flanks, to obtain a time-dependent readout.
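• A sketch of this readout scheme, with the hardware access abstracted behind hypothetical apply_voltage() and read_current() callbacks; the settle fraction is an assumed parameter:

```python
import time

def track_stabilized_current(apply_voltage, read_current,
                             amplitude=1.0, period_s=8.0, n_flanks=10,
                             settle_fraction=0.9):
    """Apply a slow square wave and sample the current near the end of each flank,
    where the slow decay has stabilized."""
    half = period_s / 2.0
    readings = []
    for flank in range(n_flanks):
        level = amplitude if flank % 2 == 0 else -amplitude
        apply_voltage(level)
        time.sleep(settle_fraction * half)   # wait until the decay has settled
        readings.append(read_current())      # stabilized point tracked over consecutive flanks
        time.sleep((1.0 - settle_fraction) * half)
    return readings
```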
  • the decay 56 can vary.
  • the organic materials used as Hole transport layer(s) of the device influence the decay and hence the maximal usable frequency of the sensor.
  • TMPB as a HTL results in a slower sensor compared to when using for instance N,N,N',N'-tetrakis(4-methoxyphenyl)-benzidine (known as MeO-TPD). Therefore, the suitable frequency should be chosen depending on the organic material used in the device.
  • the most suitable voltage depends on several parameters of the organic stack. These parameters include the materials used for the HTLs, the number of HTLs in the stack, and the thicknesses of the HTLs. This is illustrated in Fig. 12, where the IV curves of 4 different HTL configurations are presented with the same organic materials.
• the organic layers used in the different sensors are listed below (table with columns: substrate name, organic stack).
  • the organic materials used in these devices are 1,3,5-Tris[(3- methylphenyl)phenylamino]benzene (m-MTDAB, purchased from Sigma Aldrich) as hole transport material, and PTCBI as Exciton generation material.
• m-MTDAB is a hole transport material with a hole mobility μh of approximately 3 × 10⁻³ cm²/Vs [Yasuhiko Shirota, Journal of Materials Chemistry, 10, 2000].
• the molecular structure of m-MTDAB is similar to 1,3,5-tris(di-2-pyridylamino)benzene (TDAPB) with the exception of the nitrogen atoms in the outer benzene rings and the lack of methyl groups in meta orientation.
  • the I(V) curves of the organic photoconductive sensors show two more or less linear regions with different slopes.
  • region 1 the current increases more or less linearly with the voltage and the resistance for the illuminated devices is between 3 and 20 ⁇ .
  • the slope of the IV curve decreases considerably at a certain "knee-voltage" which is in the range 1-2.5 Volt and for higher voltages, the current increases at a slower pace.
• For samples S1 and S2, the dark current (and also the current under illumination) increases quadratically at higher voltages.
• In region 1, the devices can be described with a constant conductivity.
  • the current density is the product of the carrier density, the average mobility of the charge carriers (for electrons and holes) and the electric field.
  • the mobility of holes and electrons is a constant, the electric field is proportional with the applied voltage.
  • the carrier density is obviously a non-linear function of the illumination, which is the result of a balance between generation and recombination, possibly involving trapped states.
  • the carrier density can be larger if the layers are thicker, because the probability for recombination is reduced.
  • the simple image outlined above breaks down, because near (one of) the electrodes (one of) the charge carriers is carried away faster than the other and a space charge region develops which takes up an important part of the applied voltage. Trapping of charges can play a very important role in this process. Outside of the space charge region, the organic stack provides the same (high) conductivity as is observed in the region 1, but the space charge region is a region with a much higher impedance. Increasing the voltage further contributes mainly to increasing the space charge region and only marginally to the current.
• the non-linearity for sample S3 at low voltages may be due to the fact that the electrons in the EGL/ETL cannot easily travel to the anode through the 100 nm thick HTL and some voltage is needed to assist the charge transfer there.
• the curves of Fig. 12 are obtained with finger patterns that are 16 mm wide and 15 mm high. Each finger of the electrodes is 80 μm wide and separated from the next by a gap of 20 μm, created using photolithography. For one sensor the total gap length is 2384 mm.
  • the deposition of the organic compounds on the four substrates, each with a different stack and the encapsulation under a N 2 atmosphere with CaO getters are performed by the Fraunhofer Institute for Photonic Microsystems (IPMS, Dresden, Germany).
  • the current is measured in this experiment using a Keithley PicoAmpMeter 6485, as a function of the voltage applied across the electrodes of the device.
  • the voltages are applied with a Keithley SourceMeter 2425 in the range between 0 and 10V.
  • the photoconductive sensor is illuminated with a white LED backlight powered with a Keithley 220 current source.
  • the backlight consists of a reflective cavity with white LED's and diffuser foils to obtain a uniform illumination.
  • the current through the photoconductive sensor is measured as a function of the position of local illumination for a set of voltages.
• In Fig. 13a, a picture is presented with a line-shaped laser beam 60 pointed in the gap 59 between two interdigitated sensor fingers 58.
• In Fig. 13b, the measured current as a function of the position of the laser line with respect to the anode is presented. The experiment is performed at DC voltages of 0.5, 3 and 6 V between the anode and cathode. At each position the current is measured under BI (I dark) and subsequently under BI+laser line (I laser).
  • the space charge region which is the region with the highest electric field is located in the vicinity of the cathode. Additional illumination and generation of electrons and holes in the high-field region leads to a higher current because the charge carriers are separated more effectively.
  • the large amount of electrons and holes that are generated locally by the laser beam have a large chance to recombine in the region where they are created and have only a limited effect on the photocurrent.
  • the experiment shows that the space charge is near the cathode and therefore the high-field region must have an excess of holes in the HTL. This may be because holes have a lower mobility than electrons or because holes are trapped more easily in the stack.
• a suitable range of gaps is for instance 4 to 25 μm, typically between 8 and 20 μm.
• In Fig. 14, an example of a long-term experiment is presented, in which the current 57 is tracked over time. A significant drop at the beginning of the device's lifetime is present, which typically lasts about several hours. This can be avoided by using the sensor during production, to ensure the sensor is used in a more stable regime.
  • Suitable HTLs have a sufficiently high glass transition temperature (Tg), because the organic materials can be heated to temperatures of around 50°C, when they are used for instance in front of an LCD. If the material's Tg is not high enough, the materials can degrade over time.
  • Suitable HTLs to be used in front of an LCD have glass transition temperatures for instance in the range of 60°C-300°C (the upper limit is obviously not considered a limitation).
  • Suitable HTLs that respect the constraints imposed by the visibility and stability requirements, and which have suitable energy bands, are NPB and MeO-TPD.
  • m-MTDAB in combination with the glue encapsulation is not excluded either, as initial experiments indicated a positive impact on long-term stability due to this encapsulation.
  • a suitable range of thicknesses when using MeO-TPD as HTL for layer 34 is in the range of 60-300 nm, while a suitable range of thicknesses for layer 32 when using MeO-TPD as HTL can be in the range of 60-300nm, more preferably in the range of 80-160 nm.
  • a Corning Eagle XG glass substrate with a thickness of 1.1 mm, coated with 45 nm ITO, obtained from the company Colorado concept coatings has been used in an operational device.
  • the same glass substrate and ITO coating have been used in the architecture of the improved device.
  • ITO patterning
  • 4.5.3.2.1 Operational device
  • the ITO coated substrate has been patterned with a suitable gap width of 15 μm; the ITO finger width was designed to be 173 μm, using non-curved fingers.
  • the operational device uses 2 organic layers on top of the patterned ITO, in the following order: an 80 nm MeO-TPD HTL and a 10 nm PTCBI layer.
  • Three organic layers are used in the improved device on top of the patterned ITO, in the following order: an 80 nm MeO-TPD HTL, a 10 nm PTCBI layer and an 80 nm HTL.
  • The operational device uses the conventional encapsulation methodology, with getters placed outside the display's active area.
  • In the improved device, the organic photoconductive sensor is encapsulated using the solution with the drop of glue.
  • the exact sensor architecture, designed to obtain a stable sensor optimized for visibility as elaborated above, determines its specific sensing characteristics, including its exact angular sensitivity, non-linearity and spectral sensitivity.
  • Suitable calibration tables should be created in order to be able to use the sensor for luminance and chromaticity measurements in combination with a display system. These calibration tables should then be integrated into the software 13 of the controller 14, in order to ensure correct measurements by the sensor system 9.
  • the photoconductive sensors 6 cannot distinguish the direction of the light. The photocurrent through the semitransparent sensor can therefore be the consequence of light emitted from display area 5 or of external (ambient) light, and additional measurements have to be performed to separate the two contributions.
  • the semitransparent sensor is present in a front section between the front glass and the display.
  • Fig. 15 shows an alternative embodiment of the invention relating to the photoconductive sensors 6.
  • Sensors 61a and 61b are deposited/placed on the ITO-patterned substrate 63 (e.g. a glass panel such as the one that comes in front of a (typical) LCD panel 64), or directly onto the display.
  • the display can comprise an air gap between the front glass and the panel; the front glass is hence placed in front of the (LCD) panel, at a certain distance.
  • the sensor is preferably created on the side of the front glass facing the display (i.e. the substrate glass can be used as front glass), or on an additional layer positioned adjacent to the front glass, at the side facing the display.
  • the LCD display is backlit by light sources 65 (e.g. cold cathode fluorescent lights, LEDs ).
  • Sensor 61a is also covered by an optical filter 62.
  • a first amount of ambient light (AL) 66a impinges on filter 62 before being transmitted to sensor 61a.
  • a first amount of "display” light (DL) 67a reaches sensor 61a.
  • a second amount of ambient light 66b reaches sensor 61b.
  • a second amount of "display” light 67b reaches sensor 61b.
  • the first and second amounts of ambient light 66a and 66b are assumed to be equal. This condition is fulfilled in most practical cases when sensors 61a and 61b are sufficiently close to each other and sufficiently small.
  • the first and second amounts of display light 67a and 67b are assumed to be equal. This can be taken care of electronically, by driving the LCD panel so that both sensors 61a and 61b receive the same amount of light (for instance, it may be necessary to compensate for potential non-uniformities of the light generated by the backlight).
  • Let VOut_a and VOut_b be the output signals of sensors 61a and 61b respectively.
  • For VOut_a and VOut_b we have:
  • VOut_a = a*AL + b*DL (1)
  • VOut_b = c*AL + d*DL (2) (where * indicates multiplication)
  • the sensor is assumed to have a linear response to the impinging light level of the ambient light and the light emitted by the display, which is realized with the proper sensor calibration.
  • the coefficients a and c are different by way of the filter 62 (a ≠ c).
  • the coefficients b and d may be different (for instance, the filter 62 may be responsible for reflecting a part of the display light not absorbed by the sensor 61b, resulting in b being larger than d).
  • the determinant a*d - b*c of the linear system of equations (1) and (2) in the unknowns AL (ambient light) and DL (display light) is non zero by an ad-hoc choice of the filter 62 (e.g. material, pigment and/or thickness of the filter impact coefficient d and influence the value of the determinant).
  • the system may thus be solved for AL and DL.
  • the set of equations (1) and (2) shows that the sensors 61a and 61b are used in tandem to discriminate between different sources of light that contribute to the output signals of the sensors.
  • a calibration to a reference sensor that has a response according to the V(λ) curve can be used to obtain the actual values, as previously discussed.
  • the coefficients a, b, c and d can be determined upfront, as they are related to the design of the display, sensor and filter, which remains constant over the display's lifetime. Once these coefficients are known, solving the 2×2 system (1)-(2) is straightforward, as sketched below.
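
A minimal numerical sketch (in Python) of solving equations (1) and (2) for AL and DL. The coefficient values and the function name are purely illustrative and not part of the original disclosure; in practice a, b, c and d come from the upfront calibration described above.

```python
import numpy as np

# Illustrative calibration coefficients for the specific display/sensor/filter 62 combination.
a, b = 0.35, 0.60   # sensor 61a (behind filter 62): response to ambient (AL) and display (DL) light
c, d = 0.80, 0.62   # sensor 61b (no filter): response to ambient and display light

def separate_light(v_out_a: float, v_out_b: float) -> tuple[float, float]:
    """Solve VOut_a = a*AL + b*DL (1) and VOut_b = c*AL + d*DL (2) for AL and DL."""
    m = np.array([[a, b], [c, d]])
    # Filter 62 is chosen such that det = a*d - b*c != 0, i.e. (1) and (2) are linearly independent.
    al, dl = np.linalg.solve(m, np.array([v_out_a, v_out_b]))
    return al, dl

ambient, display_light = separate_light(1.10, 0.75)   # example sensor readings (arbitrary units)
```

Note that np.linalg.solve raises an error when the determinant is (numerically) zero, which corresponds exactly to the requirement placed on the filter 62 above.
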
  • the filter 62 may be e.g. a polarizing filter that will only transmit light with the right polarization, when used in combination with an LCD that emits linearly polarized light.
  • the filter 62 may be integrated into, or be a (structural) part of, the sensor 61a, as in the case of a sensor sensitive to a specific polarization of the light discussed elsewhere in this description. Indeed, if the sensor 61a contains a layer of organic material, "rubbing" the organic material can generate an integrated filter.
  • the filter 62 might be a simple linear polarizing filter, or partial linear polarizer filters that are applied suitably such that they transmit the polarization emitted by the display. In general, any filter that will force the two linear equations (1) and (2) to be linearly independent will do.
  • the filter 62 may be placed between the display and sensor. In that case, a first amount of display light (DL) 67a impinges on filter 62 before being transmitted to sensor 61a.
  • a set of linear equations similar to (1) and (2) is solved for the unknowns AL and DL in order to isolate the contributions of the ambient light to the output signals of sensors 61a and 61b.
  • the filter 62 can be designed such that it reduces the reflection of the panel, which leads to two independent equations (1) and (2). For instance, the coefficient a can become smaller than coefficient c due to the reduced reflection of this impinging ambient light on the panel.
  • In a first state the filter 62 is inactive and the sensor's output is given by: VOut_1 = a*AL + b*DL
  • In a second state the filter 62 is activated (electrically) and the sensor's output is given by:
  • VOut_2 = c*AL + d*DL, with a ≠ c and d not necessarily equal to b.
  • 4.6.2.3 Electronic filter
  • the electronic filter makes use of the typical driving of the backlight unit which is integrated in the design of a transmissive liquid crystal display.
  • the backlight unit typically is Pulse Width Modulation (PWM) driven, which can be used advantageously in the scope of this invention.
  • the blinking of the backlight should be done in such a way that the period in which the backlight is switched on and the period in which it is switched off are each sufficiently long to ensure that the sensor is capable of properly performing both sets of measurements.
  • the same can be done based on the measurements at different points in time on the PWM signal, even without using blinking, more precisely at the moments in time when the backlight is switched on and when it is switched off.
  • the disadvantage of this embodiment is that sensors with a very fast response time are required.
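
A minimal sketch of this electronic-filter idea, assuming the sensor is fast enough to resolve the PWM phases and that readings can be tagged as taken during the on- or off-phase of the backlight (the function and parameter names are hypothetical):

```python
import numpy as np

def split_ambient_display(readings_on: np.ndarray, readings_off: np.ndarray) -> tuple[float, float]:
    """Separate ambient and display contributions from sensor readings taken while
    the PWM-driven backlight is on and while it is off (transmissive LCD case)."""
    ambient = float(np.mean(readings_off))           # backlight off: only ambient light is measured
    display = float(np.mean(readings_on)) - ambient  # backlight on: ambient + display light
    return ambient, display
```
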
  • the filter 62 can be a mechanical filter that filters out the ambient light.
  • the mechanical filter 62 can be a finger touching the sensor system. Upon touching the region of interest, the ambient light is blocked locally, assuming the sensor 61a has smaller dimensions than the finger touching it.
  • in the untouched state the output signal of sensor 61a is: VOut_1 = a*AL + b*DL
  • in the touched state the output signal is: VOut_2 = c*AL + d*DL, with a ≠ c and d not necessarily equal to b.
  • the controller 34 is designed with the required intelligence, such that it is aware of the touching from the touched sensor.
  • the sensor system 31 can then measure the light properties in a touched state where all or a significant amount of the external light is blocked. The measurement is then repeated in an untouched state. The derived difference between the two measurements provides the amount of ambient light.
  • the finger touching the sensor can have a reflection as well, which can influence the amount of light sensed by the sensor.
  • the sensor system 31 can be calibrated by first carrying out a test to determine the influence of the reflection of a finger on the amount of measured light coming from the display without ambient light, and the equations need to be adapted accordingly.
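
A minimal sketch of the touch-based ambient estimate described in the preceding items. It assumes a simple additive model and that the display is driven at the same level as during the finger-reflection calibration; the names and the model are assumptions, not part of the original disclosure:

```python
def ambient_signal(v_untouched: float, v_touched: float, finger_reflection_offset: float) -> float:
    """Ambient-light contribution to the sensor output (in sensor output units).

    finger_reflection_offset is determined once, without ambient light, as the
    increase in sensor output caused by the finger reflecting display light back
    onto the sensor (touched minus untouched reading in a dark environment).
    """
    return (v_untouched - v_touched) + finger_reflection_offset
```

The result is still in sensor output units; converting it to an actual ambient light level requires the V(λ)-related calibration discussed elsewhere in this description.
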
  • the filter 62 can be a black absorbing cover, blocking all the ambient light, 66, without reflecting any of the display's emitted light 67.
  • the first measurement(s) include measuring the full light contribution, which is the combination of light emitted from the display and the ambient light.
  • the sensor measures the emitted display light with all ambient light excluded, using the absorbing cover as filter. Both these measurements are needed when one desires to quantify the ambient light. However, if one merely wants to measure the light emitted by the display, without the ambient light contribution, it is sufficient to cover the display to exclude ambient light influences and then measure.
  • a possible method to roughly determine the position of the sensor is to use a square 68 that scans over the active area of the screen, as illustrated in Fig. 16a.
  • the square moves in a grid that covers the entire display's active area by translating column per column for every consecutive row.
  • the sensor 69 measures light at every position of the square. Two consecutive positions of the square 68 are presented in Fig. 16a.
  • the coordinates of the square at which the sensor measures the highest value correspond closest to the coordinates of the sensor.
  • the size of the square can be determined based on the desired accuracy. The smaller the size of the moving square is, the better the accuracy will be.
  • This methodology can be further enhanced using images which have a bright half 70 and a dark half 71, as illustrated in Fig. 16b.
  • An image is used which is cut in a bright and dark half in horizontal (Fig 16b, left) and vertical direction (Fig 16b, right), to pinpoint in which quadrant the sensor is located.
  • a smaller version of the same images is displayed, to obtain the correct sub quadrant of the initial quadrant, as presented in Fig. 16c.
  • the obtained results can always be compared to the reference values measured using a bright and a dark patch.
  • it is possible that the sensor is located partially in the bright and partially in the dark region (assuming the sensor works like a luminance sensor; otherwise only the neighbourhood of the sensor has been found).
  • To determine the sensor's exact location, we start from the identified area and use a small white square on a black background to precisely locate the sensor.
  • When the luminance measured by the sensor reaches a local maximum, the position of the sensor relative to the display's active area is found. This algorithm is presented in Fig. 16d.
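
A coarse-scan sketch of the localization procedure of Fig. 16a. The display_square() and read_sensor() callables are hypothetical interfaces to the display driving and the sensor controller:

```python
import numpy as np

def locate_sensor(width_px: int, height_px: int, square_px: int,
                  display_square, read_sensor) -> tuple[int, int]:
    """Scan a bright square over the dark active area; the grid position giving the
    highest sensor reading approximates the sensor location to within square_px."""
    best_value, best_xy = -np.inf, (0, 0)
    for y in range(0, height_px, square_px):         # translate column per column for every row
        for x in range(0, width_px, square_px):
            display_square(x, y, square_px)          # drive only this square bright
            value = read_sensor()
            if value > best_value:
                best_value, best_xy = value, (x, y)
    return best_xy
```

The quadrant-halving images of Figs. 16b and 16c and the final small white square of Fig. 16d refine this estimate in the same spirit, with far fewer displayed patterns.
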
  • the sensor system 9 has an architecture that has been optimized for stability and visibility purposes, and due to proper calibration luminance and chromaticity measurements of the display are enabled, as well as ambient light measurements.
  • the remaining design choices include the lay-out of the matrix of sensors (their locations on the screen) and the size of the individual sensors. These remaining design choices should be made in harmony with the display controller 14, more specifically with the software 13, which includes calibration algorithms dedicated to the applications, as well as with the specific display 1 the application is intended for, as this has repercussions for instance on the number of sensors used in the sensor matrix. Also, the sensors cannot be made too small, so that the sensor signal keeps a sufficient amplitude for proper detection by the controller.
  • 4.7.2 Applications/performance improvements based on display light output measurements
  • a suitable sensor-layout and size can be created to perform luminance/chromaticity uniformity checks.
  • the sensor-layout design is such that five sensors are created: one in the centre and four corners.
  • other custom sensor designs with very specific parameters are also possible. For example, when the exact size of the measurement area is not specified, only the borders of the region are specified. Creating a sensor with a relatively large sensing area is preferred, since this will average out any high-frequency spatial non-uniformity which might occur in the region. This can be realized in practice by using organic photoconductive sensors according to the present invention, designed with finger-shaped interdigitated extensions with sufficiently long fingers and a sufficient number of fingers to reach the required region size, or alternatively multiple smaller sensors which can be combined to create an averaged measurement.
  • the size of the sensors' measurement area is for instance a 1 by 1 cm region across the faceplate, with deviations expressed relative to the mean. This regional size approximates the measurement area at a typical viewing distance.
  • Non-uniformities in display devices that can benefit from the sensor system, such as LCDs, may vary significantly with luminance level, so a sampling of several luminance levels is usually necessary to characterise luminance uniformity.
  • luminance uniformity is determined by measuring luminance at various locations over the face of the display device while displaying a pattern with uniform driving, and applying a suitable metric to quantify the non-uniformity of the measured values.
  • Non-uniformity can be quantified as the maximum relative luminance deviation between any pair or set of luminance measurements.
  • a metric of spatial non-uniformity may also be calculated as the standard deviation of luminance measurements.
  • the luminance uniformity can be quantified using the following formula: 200*(Lmax - Lmin)/(Lmax + Lmin). Depending on the outcome of the measurements, it can be verified whether the display is still operating within tolerable limits. If the performance proves to be insufficient, a signal can be sent to an administrator, or to an online server that registers the performance of the display over time.
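
A minimal sketch of this uniformity check; the example readings and the tolerance threshold are illustrative only:

```python
import numpy as np

def luminance_non_uniformity(luminances: np.ndarray) -> float:
    """Non-uniformity in percent: 200*(Lmax - Lmin)/(Lmax + Lmin)."""
    l_max, l_min = float(np.max(luminances)), float(np.min(luminances))
    return 200.0 * (l_max - l_min) / (l_max + l_min)

# Example: five sensors (centre plus four corners), luminance values in cd/m2.
readings = np.array([412.0, 398.0, 405.0, 391.0, 408.0])
if luminance_non_uniformity(readings) > 30.0:   # illustrative tolerance
    pass  # e.g. notify an administrator or an online server, as described above
```
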
  • the spatial noise of the display light output can also be characterized by calculating the NPS (Noise Power Spectrum) of measurements of a uniform pattern at different digital driving levels.
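
A minimal sketch of an NPS estimate from a single measurement of a uniformly driven pattern. Normalisation conventions for the NPS vary; the scaling below is illustrative rather than normative:

```python
import numpy as np

def noise_power_spectrum(image: np.ndarray, pixel_pitch_mm: float) -> np.ndarray:
    """2D noise power spectrum of a measured, uniformly driven pattern."""
    noise = image - np.mean(image)                        # remove the mean (DC) luminance
    nps = np.abs(np.fft.fftshift(np.fft.fft2(noise))) ** 2
    return nps * (pixel_pitch_mm ** 2) / image.size       # common area/size normalisation
```
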
  • recording the outputs of the luminance measurements can serve as a form of digital watermarking: e.g. after capturing and recording all the signals measured by all the sensors of the sensor system at the time of diagnosis, it is possible to re-create, at a later date, the same conditions which existed when an image was used to perform the diagnosis.
  • the sensors can also be used for chromaticity uniformity checks.
  • the sensors are preferably large enough to cancel out the high-frequency Gaussian noise. Since the measured data is a spatial (weighted) average of the light impinging on the sensor, the noise will indeed disappear. However, the sensors should not be too large, otherwise the low frequencies may be cancelled out as well and the sensors would no longer capture the correct signal.
  • the architecture of the sensor system of the present invention offers the flexibility to create sensors with a suitable size, by suitably selecting the number of fingers and the length of the fingers.
  • the sensor can for instance have a square geometry, with a side of 5-500 pixels.
  • the sensors are preferably dispersed over the whole active area of the display and their positions will define a 2D grid.
  • This grid may be uniform or not, regular over the display or not. For instance, the spacing in the borders may be reduced while keeping a uniform grid in the centre of the display.
  • Number of sensors: the basic trade-off concerning the number of sensors is the cost of the sensor system. More sensors result in a low-resolution map that will eventually provide a better match with the actual emitted pattern, but typically also in a higher cost, for example due to a resulting more elaborate electronics board 12, or a more sophisticated ITO removal process. Moreover, the resulting improvement can be limited; there is typically an asymptotic behaviour of the correctness of the resulting high-resolution map as a function of the number of sensors used, when considering only the low-frequency noise.
  • the number of sensors can for instance be in the range of 3 by 3 to 400 by 500 sensors, more suitably in the range of 6 by 6 to 50 by 40 sensors in the case of a 5MP medical grade radiology display.
  • the display areas 5 measured by the sensors should typically be made significantly larger than the photosensitive area of a photoconductive sensor. This implies that the total sensitive area of the sensors is typically smaller than the display's active area, and an interpolation or approximation technique is required to obtain a suitable correction of the display's driving. Consequently, the interpolation/approximation method used is of great importance. Based on the measurements of the sensors, it determines the curve that will be used for correction, as it converts the low-resolution map of captured sensor values into a high-resolution map, typically at the display resolution, that should be used in the uniformity correction algorithm.
  • a preferred approximation algorithm is an interpolation method based on biharmonic spline interpolation as disclosed by Sandwell in "Biharmonic Spline Interpolation of GEOS-3 and SEASAT Altimeter Data", Geophysical Research Letters, 14(2), 139-142, 1987.
  • the biharmonic spline interpolation finds the minimum curvature interpolating surface, when a non-uniform grid of data points is given.
  • Other approximation algorithms can also be used, for example the B-spline, which is disclosed in H. Prautzsch et al., Bézier and B-Spline Techniques, Springer (2010). A sketch of a closely related interpolation approach is given below.
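
The sketch below illustrates the low-resolution-to-high-resolution step with a thin-plate-spline radial basis function interpolator, which is closely related to (but not necessarily identical with) the biharmonic spline method cited above; the use of SciPy and the function name are illustrative choices, not part of the original disclosure.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

def upsample_sensor_map(sensor_xy: np.ndarray,      # (n_sensors, 2) pixel coordinates of the sensors
                        sensor_values: np.ndarray,  # (n_sensors,) spatially averaged value per sensor
                        width: int, height: int) -> np.ndarray:
    """Convert the low-resolution map of sensor readings into a display-resolution map."""
    interp = RBFInterpolator(sensor_xy, sensor_values, kernel="thin_plate_spline")
    yy, xx = np.mgrid[0:height, 0:width]
    grid = np.column_stack([xx.ravel(), yy.ravel()]).astype(float)
    return interp(grid).reshape(height, width)
```

Unlike a plain Delaunay-based interpolation, this radial-basis formulation also provides values outside the convex hull of the sensor positions, which is relevant for the border errors discussed further below.
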
  • an interpolating curve is defined by a set of points and runs through all of them.
  • An approximation defined on the set of points, also called control points, will not necessarily interpolate every point, and possibly none of them.
  • An additional property is that the control points are connected in the given order.
  • the set of control points is assumed to be ordered according to their abscissa, although it is not mandatory to apply the interpolation technique in the general case.
  • Another interpolation method which can be applied is linear interpolation, where a set of control points is given and the interpolating curve is the union of the line segments connecting consecutive points.
  • the linear interpolation is an easy interpolation technique and is continuous.
  • the quality of the uniformity correction which utilizes the high-resolution interpolated/approximated luminance map needs to be assessed with a suitable metric.
  • This metric is designed to compare two images. The first image is the desired uniform image. The second image is the ideal image we want to reach, with the scaled error modulated on top.
  • the error is the scaled difference between the actual measured signal, and the interpolated/approximated signal. The error is scaled in the same way as the measured signal is scaled to obtain the ideal, uniform image. This scaled error is then added as a modulation on top of the ideal image. This resulting rescaled error is a consequence of the difference between the image we would obtain by using the interpolated or approximated curve that uses only a limited number of measurement points, instead of the actual measured curve for the luminance uniformity correction
  • the global percentual error can for instance be obtained by calculating the sum of the local absolute errors per pixel, and dividing it by the sum of all the desired pixel values.
  • the generated results are not necessarily consistent with what a human observer would perceive. Therefore, subjective metrics based on the human visual system can be used, which allow a better match with how the image is perceived by humans.
  • an example of such a metric is the Structural Similarity (SSIM) index.
  • a self-optimizing algorithm can be applied: since there are various parameters which can be fine-tuned, the final optimal solution is a combination of choices for each parameter.
  • the parameters may not be independent, meaning that for instance the optimal size of the sensors will depend on their number and on their positioning.
  • a self-optimizing algorithm designed such that it automatically looks for a suitable range of parameters, or more precisely a suitable combination of parameters, is very useful. It can then be applied to any kind of spatial noise pattern later on, and suitable parameters will be determined automatically.
  • This algorithm can be based on an iterative approach that tests all possible combinations of all parameters in a suitable range, and applies the metric to determine the quality of the result, based on a number of representative images for the display that should be made uniform. Once the results have been obtained for all combinations, a suitable result can be selected. The selection can be based on various criteria, such as complexity, cost, or the maximal tolerable error that should be achieved. A minimal sketch of such a parameter sweep is given after the next few items.
  • the most suitable sensor design parameters also depend on the exact display type which is used in combination with the sensor system.
  • a specific display type can even have different spatial noise patterns depending on its driving level, and therefore the ideal design of the sensor system has to be based not only on a single noise pattern, but on a more global set of noise patterns over the entire display driving range.
  • individual displays from a same type can have slightly different noise patterns. This should also be taken into account in the sensor design.
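
A minimal sketch of the iterative self-optimisation mentioned above. The evaluate() callable is hypothetical: it is assumed to simulate a sensor grid on the representative noise patterns, run the interpolation and return the chosen quality metric (lower is better).

```python
from itertools import product

def optimise_sensor_design(counts_x, counts_y, sizes, methods, evaluate):
    """Exhaustively score all parameter combinations with the chosen metric."""
    results = []
    for nx, ny, size, method in product(counts_x, counts_y, sizes, methods):
        results.append(((nx, ny, size, method), evaluate(nx, ny, size, method)))
    # The final choice can additionally weigh cost/complexity against the metric value.
    return sorted(results, key=lambda item: item[1])
```
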
  • a high-resolution luminance map as emitted by the display is depicted, from which a low resolution luminance map has to be obtained using the sensor system of the present invention.
  • This luminance map has been obtained from a 5 MP medical grade monochrome display, which has a resolution of 2048 (horizontally) by 2560 pixels (vertically), using a high-quality, high-resolution camera.
  • Luminance measurements using a high-resolution camera are illustrated in Figs. 18a, 18b and 18c. These illustrations are limited to 1D, horizontal cross-sections of a single pixel row for simplicity reasons. Note that the luminance measurements described here are perpendicular to the display's active area. Such measurements can typically be used to characterize the non-uniformity of the luminance (or color in an alternative embodiment) of a display, or alternatively as input for an algorithm to remove the low-frequency, global, spatial luminance trend.
  • a cross-section of a profile measured using a high-resolution camera (suitably calibrated such that it measures luminance in perpendicular direction as emitted by the display) on a relatively uniform display is presented.
  • In Fig. 18b an example of the positions of the photoconductive sensors according to an embodiment of this invention is indicated, using squares with the corresponding width of the sensors.
  • 10 sensors are used, merely as an illustration.
  • the Gaussian high-frequency noise is averaged out by designing the sensors with a suitable size, and the measured points are a measure of the global trend only. For instance, the sensors in Fig. 18b are selected with a width of 1 cm per sensor.
  • This spatial luminance correction algorithm basically applies a precorrection table to the driving levels of the display, to ensure images are correctly displayed. When the display is driven with a same driving level for every pixel, and the precorrection table is used, the resulting light output is spatially uniform.
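
A minimal sketch of such a precorrection, assuming (simplistically) a locally linear relation between driving level and luminance. Real displays additionally have a non-linear response (e.g. the DICOM GSDF for medical displays), so in practice a per-driving-level table is used rather than a single gain map; the names below are illustrative.

```python
import numpy as np

def precorrection_gain(measured: np.ndarray) -> np.ndarray:
    """Per-pixel gain from the interpolated, display-resolution luminance map
    obtained at a uniform driving level; correcting towards the minimum keeps
    every target reachable (gain <= 1 everywhere)."""
    return np.min(measured) / measured

def apply_precorrection(driving: np.ndarray, gain: np.ndarray, max_level: int = 1023) -> np.ndarray:
    corrected = np.rint(driving.astype(float) * gain)
    return np.clip(corrected, 0, max_level).astype(np.uint16)
```
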
  • a horizontal section has been used in the example described. In the vertical direction, more sensors will have to be used since this type of display is typically used in portrait mode.
  • a 5 MP display typically has a resolution of 2048 (horizontally) by 2560 pixels (vertically), in other words an aspect ratio of 4:5, and a pixel pitch of 0.165 mm. Therefore, 13 sensors in vertical direction can be used, leading to a matrix of 10 by 13 sensors. Note that the numbers presented in this example are intended as an example, the optimized solution depends on the design parameters of the sensor system mentioned above.
  • the interpolation described above relates to the one dimensional case. While this is very interesting to get a profound insight into the problem, the actual spatial luminance output of the display is a 2D map. Therefore, in the two dimensional case, the sensors preferably define a two dimensional grid instead of a single line. As before, every sensor stores a single value, namely the spatial (weighted) average of the measured data. This defines control points and then a two-dimensional interpolation or approximation method is run through them. Again, the choice of the design parameters, will determine the final shape. Two distinct models of sensor grids are considered in the analysis. In the first model, the values captured by the sensors are measured in 2D and the sensors are spread uniformly over the surface of the display. In the second model, special attention is devoted to the borders of the display.
  • a purely objective error computation can be used, by filtering the data captured by the camera and summing the absolute differences between the filtered data and the interpolated/approximated data after which they are divided by the sum of all filtered camera measurements, to obtain the global relative absolute error.
  • the filtering will be based on a rotationally symmetric Gaussian low pass filtered version of the measured luminance profile. The exact parameters of the filter will depend on the specific type of display which is considered.
  • This filtering step removes the high-frequency signal content of the signal, and retains the low-frequency content. This step is implemented as it is not the purpose of the method to correct the high-frequency noise.
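
A sketch of this objective error computation; the Gaussian width sigma_px is display-dependent and purely illustrative here.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def global_relative_abs_error(camera_map: np.ndarray, interpolated: np.ndarray,
                              sigma_px: float = 30.0) -> float:
    """Low-pass filter the camera measurement (rotationally symmetric Gaussian),
    then compare it with the interpolated/approximated map; result in percent."""
    reference = gaussian_filter(camera_map, sigma=sigma_px)
    return 100.0 * float(np.sum(np.abs(reference - interpolated)) / np.sum(reference))
```
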
  • another objective metric consists in measuring the maximal local relative absolute error. Instead of measuring only a global error, this captures the local deviation from the data. Note that, in this metric, the initial signals are compared, before the uniformity correction algorithm is applied.
  • the structural similarity (SSIM) index is a general and commonly used tool, based on the human visual system, to assess the difference in quality between two images.
  • the first image is the uniform image we ideally want to reach.
  • the second image is the ideal image we want to reach, with the scaled error modulated on top.
  • the error is the difference between the actual measured signal, and the interpolated/approximated signal.
  • the error is scaled in the same way as the scaling of the measured signal to obtain the ideal, uniform image. This scaled error is then added as a modulation on top of the ideal image.
  • for a uniform grid over 95% of the display width and height, preferably four additional parameters are considered, namely the number of sensors in the x-direction, the number of sensors in the y-direction, the size of the sensors and the interpolation method.
  • the values were interpolated using cubic interpolation, linear interpolation, and a method based on biharmonic spline interpolation.
  • a suitable method based on the biharmonic spline interpolation method is for instance MATLAB's griddata method (method 'v4'). Using this method, a uniform grid of 7x5 or 6x6 sensors is sufficient to obtain a relative absolute global error of less than 1% (the same 5 MP display described above is used in this analysis), when using square sensors of 50 by 50 pixels.
  • the values cannot easily be used in an intuitive way to actually determine the best configuration, as this would require fixing an arbitrary threshold.
  • the best method among the three is the interpolation method based on the biharmonic spline interpolation method. It consistently produces globally the lowest relative error, the best SSIM values and the minimal local error.
  • Fig. 20a shows a local map of the error for digital driving level 496 (10 bit driving levels, ranging from 0 to 1023) when the sensors are located on a 6 by 6 uniform grid. Since the data illustrated in Fig. 20a are not extrapolated to the borders of the display, but only interpolated inside the convex hull defined by the set of sensors, there is an external ring which is put at 0. The main differences between the interpolated and the true signal are located towards the borders of the interpolated area. This is a structural error, which is persistent throughout the higher driving levels of the display. For the lowest driving levels, this observation no longer holds.
  • a non uniform grid with smaller spacing between the sensors in the borders was chosen.
  • the error is depicted, where the dots indicate the location of the sensors of size 50 by 50.
  • the global relative error is depicted by "e”
  • the maximal local error is depicted by "m”.
  • the grid used is non-uniform on the borders of the interpolated area.
  • the obtained results are presented for a uniform grid (left columns), a non-uniform grid (right columns), for a broad range of sensor resolutions (horizontal axis on the plots), combined with a broad range of sensor sizes (indicated above, 5mm, 10 mm, 15 mm, 20 mm, 25 mm or expressed in number of pixels: 30, 60, 90, 120 and 151 pixels).
  • the global percentual relative absolute error is presented on the vertical axis.
  • In Fig. 21a the results for the darkest level are presented, while in Fig. 21b the results for the brightest level are presented (DDL 255 for 8-bit display driving). It is clear that very good results were also obtained when using such a sensor. Also, it is assumed that ambient light is eliminated from the measured value as described earlier.
  • the other potential applications highlighted in the summary of the invention do not impose strict requirements on the lay-out of the photoconductive sensors, or the size of their light sensitive area.
  • the measurements for the DICOM checks can for instance be made at the locations of the sensors as described above for the uniformity correction, and the measured values can be interpolated/approximated at the locations between the different sensors.
  • Some general device performance parameters can be defined for the photoconductive sensors as a whole.
  • the transmission is a combined effect of the different components comprised in the sensor architecture.
  • the luminance transmission is preferably 60-98%.
  • the introduced color shift by the sensors should be reduced as much as possible, by suitably designing the sensor system.
  • the spectral sensitivity should cover the whole visible range of wavelengths 380-780nm.
  • the substrate is a transparent material of which glass is an example.
  • the substrate has a very high luminance transmission in the visible range.
  • the luminance transmission is preferably 60-98%.
  • the losses due to reflection from the substrate are typically optimized in combination with an Antireflection coating (ARC).
  • ARC Antireflection coating
  • glass substrates have a very limited coloring
  • the transmission of the substrate should be spatially uniform. This is typically the case for glass substrates.
  • the substrate should be able to withstand the thermal stress during the fabrication of the materials that are added on top: the semitransparent conductor and the organic layers.
  • the substrate should preferably be a rigid substrate, able to support the additional layers added on top.
  • a flexible substrate is technologically possible to use in a working sensor system.
  • the semitransparent conductive material can be an inorganic or organic material.
  • a usable broader range is 1 Ω/sq to 5000 Ω/sq.
  • 5.3.3 Work function
  • a suitable work function of the Semitransparent conductor is defined relative to the HOMO level of the HTL on top of the patterned substrate.
  • the work function of the semitransparent conductor should be higher than the HOMO level of the HTL on top of the patterned substrate.
  • a suitable semitransparent material should be chosen that remains fixed on the substrate.
  • Examples of semitransparent conductors that can be attached to the substrate are ITO and PEDOT:PSS.
  • Typical thicknesses for a good sensor are in the range 25-65 nm.
  • 5.3.6 Uniformity: thickness, optical, ...
  • the transmission of the light over the substrate should be very uniform, typically in the order of 85-100% ((Lmax-Lmin)/Lmax).
  • 5.3.7 Patternable
  • Realistic range for a good visibility design (e.g. in the case of ITO) is : 30 to 1 to
  • a possible range is 1 to 100 μm
  • a suitable range of gaps is for instance 4 to 25 μm, typically between 8 and 20 μm.
  • the number of fingers may for instance be anything between 2 and 5000, more preferably between 10 and 2500, suitably between 25 and 700.
  • the surface area of a single semitransparent sensor may be in the order of square micrometers but is preferable in the order of square millimeters, for instance between 10 and 7000 square millimeters.
  • One suitable finger dimension is for instance 12000 by 170 micrometres.
  • 5.4.5 Number of sensors & sensor
  • the randomized shape is expected to render the best results
  • the semitransparent conductor typically has an absorption in the range of 5-30%
  • An exciton generation layer (EGL), which is photosensitive and generates excitons.
  • A hole transport layer (HTL), which transports holes between the electrodes.
  • An additional layer can be a Charge separation layer, which separates holes and electrons at the interface of two organic layers, to keep them from recombining.
  • a two-layer or three-layer stack is used.
  • the two layer stack has a HTL directly on top of the patterned substrate, with an EGL on top of the HTL.
  • the three-layer stack has an additional HTL on top of the two-layer stack.
  • the relative location of the HOMO and LUMO levels of the HTL and EGL are important in a proper working device.
  • the LUMO level of the HTL should be higher than the LUMO level of the EGL, and the HOMO level of the HTL should be higher than the HOMO level of the EGL.
  • HOMO and LUMO levels are respectively around 6.3 eV and 4.6 eV. Therefore, a suitable HTL deposited on ITO for instance has a HOMO and LUMO level in the following ranges:
  • the optimal layer thickness depends on the exact organic material used in the
  • Typical for a good sensor: 60-150 nm (or even narrower, 80-100 nm, as mentioned in the text).
  • Typical for a good sensor: 5 to 15 nm.
  • the luminance absorption of the organic stack is typically in the range of 3- 30%.
  • the luminance transmission of the organic stack is preferably designed to be in the range of 80-95%.
  • the introduced color shift by the sensors should be reduced as much as possible, by creating a stack with a rather uniform absorption over the visible spectrum when combined with the other components of the sensor, which results in a limited coloring of the light.
  • The absorption is in the range of 3-30%, which is reasonable.
  • the spectral sensitivity should cover the whole visible range of wavelengths 380-780nm.
  • the organic layers are typically deposited uniformly across the display's active area, to avoid local coloring. A uniform deposition of the organic layers is used, typically with a uniformity between 90-100% over the entire area. The resulting color and luminance uniformity follow from these layer (non-)uniformities.
  • the glass transition temperature of the materials should be high enough to keep them from crystallizing when used in front of a display.
  • the glass transition temperature therefore depends on the exact display technology, but is typically in the range of 60°C to 300°C (the upper limit is not a constraint).
  • the sensors need to be encapsulated to avoid potential degradations of the organic materials.
  • Several encapsulation methodologies can be used to avoid these degradations:
  • a controller is needed for several reasons:
  • the controller applies a voltage over the sensor and reads out the resulting current.
  • the inverse is also possible, however.
  • the type of voltage driving signal applied to the sensor can for instance be a square wave, a sinusoidal wave, or more exotic shapes known by the skilled person.
  • Preferably symmetrical waves going from a positive voltage to the same negative voltage are used.
  • good results were obtained using a square wave that switches between a positive and negative voltage.
  • the waveform applied does not result in a DC voltage over the cell; in other words, when integrating over one period of the applied waveform, the integrated voltage value is zero.
  • 6.1.3 Amplitude
  • the amplitude of the applied signal has an impact on the resulting stability of the measured signal and typically is chosen between 0.3-3V.
  • an operational range can for instance be 0.05-500 V.
  • 6.1.4 Frequency
  • the suitable frequency for a stable sensor read-out is typically in the range of 0.05 to 2.5 Hz, as mentioned in the text.
  • a usable range can be broader, however. For instance 0.01 Hz to 60Hz.
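
A minimal sketch of a DC-free, symmetrical square-wave drive signal with amplitude and frequency in the ranges given above; the sampling parameters and names are illustrative.

```python
import numpy as np

def square_wave(amplitude_v: float = 1.0, frequency_hz: float = 1.0,
                sample_rate_hz: float = 1000.0, n_periods: int = 4) -> np.ndarray:
    """Symmetrical square wave switching between +amplitude and -amplitude."""
    half = int(round(sample_rate_hz / frequency_hz)) // 2
    one_period = np.concatenate([np.full(half, amplitude_v), np.full(half, -amplitude_v)])
    return np.tile(one_period, n_periods)

drive = square_wave()
assert abs(float(np.mean(drive))) < 1e-12   # integrating over one period gives zero net DC
```
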
  • the calibration should properly take into consideration the sensor's angular sensitivity, spectral sensitivity and non-linearity, as well as the display's spectral and angular emission.
  • the sensor system can comprise an optical, electronic or mechanical filter.
  • the obtained result for the ambient light contribution needs to be matched to a device with a proper V(λ) spectral sensitivity curve in order to obtain an actual ambient light measurement.
  • the applications of the sensor do not impact the basic sensor architecture as described above. However, there are several design choices of the sensor remaining, which can be suitably selected for the intended applications. The remaining design choices include lay-out of the matrix of sensors (their locations on the screen), and the size of individual sensors.
  • Luminance/color uniformity checks: the number of sensors used to evaluate the uniformity can be chosen almost arbitrarily, as only certain guidelines exist.
  • the most suitable location and size of the individual sensors used in the sensor grid depend on the exact display type. There is no fundamental limit to the sensor size and the number of sensors used, aside that the sensors cannot overlap.
  • suitable parameters can be:
  • Sensor size: for instance, squares with a side of 5-500 pixels, most suitably squares with a side of 50-150 pixels.
  • The quality assessment metric: for instance SSIM, the global relative absolute error or the maximal local relative absolute error.


Abstract

A partially transparent sensor (6) for use with a display device (1) is described comprising at least one display area (5) provided with a plurality of pixels, the partially transparent sensor being for detecting a property of light emitted from the said display area into a viewing angle of the display device or for detecting ambient light falling onto the display device, the sensor comprising an organic photoconductive layer, further comprising means to stabilize the organic photoconductive layer.

Description

A DISPLAY INTEGRATED SEMITRANSPARENT SENSOR SYSTEM AND USE
THEREOF
Content
1 Technical field of the invention
2 Background of the invention
3 Summary of the invention
3.1 General introduction
3.2 Display summary
3.3 Sensor system summary
3.4 Photoconductive sensor elaboration
3.4.1 Basic technology
3.4.2 Device architecture
3.4.2.1 General description
3.4.2.2 Substrate
3.4.2.3 Semitransparent conductor
3.4.2.4 Patterning
3.4.2.5 Organic layers
3.4.2.6 Encapsulation
3.4.2.7 Alternative architectures
3.5 Optimized design of the sensor system
3.5.1 Visibility
3.5.1.1 Introduction
3.5.1.2 Substrate design
3.5.1.3 Semitransparent electrodes design
3.5.1.4 Organic layers
3.5.1.5 Encapsulation
3.5.2 Stability
3.5.2.1 Introduction
3.5.2.2 Sensor driving by controller
3.5.2.3 Sensor architecture
3.6 Calibration
3.6.1 Introduction
3.6.1.1 Angular sensitivity
3.6.1.2 Sensor non-linearity
3.6.1.3 Spectral sensitivity
3.6.2 Luminance and chromaticity measurements
3.6.3 Reference devices
3.6.4 Sensor ageing
3.6.4.1 Introduction
3.6.4.2 Using reference sensor
3.6.4.3 Using a model and LEDs
3.6.5 Physical value measurements
3.6.6 Ambient light rejection algorithms
3.6.6.1 Introduction
3.6.6.1.1 Ambient light measurement
3.6.6.2 Optical filter
3.6.6.3 Electronic filter
3.6.6.4 Mechanical filter
3.6.6.5 Alignment
3.7 Applications
3.7.1 Introduction
3.7.2 Applications/performance improvements based on display light output measurements
3.7.2.1 Luminance/color uniformity checks
3.7.2.2 Luminance/color uniformity corrections
3.7.2.3 DICOM compliance
3.7.2.4 DICOM recalibration
3.7.3 Applications/performance improvements based on ambient light measurements
3.7.3.1 Local ambient light measurements
3.7.3.2 DICOM compliance including ambient light
3.7.3.3 Backlight adaptation
3.7.3.4 Touch sensor functionality
4 Brief Description of the Drawings
5 Description of the illustrative embodiments
5.1 Introduction to the terminology
5.2 Display summary
5.3 Sensor system summary
5.3.1 Photoconductive sensors
5.3.2 Controller
5.3.2.1 Overview controller
5.3.2.2 Electronics board
5.4 Photoconductive sensor elaboration
5.4.1 Basic technology
5.4.2 Device architecture
5.4.2.1 General description
5.4.2.2 Substrate
5.4.2.3 Semitransparent conductor
5.4.2.4 Patterning
5.4.2.5 Organic layers
5.4.2.6 Encapsulation
5.5 Optimized design of the sensor system
5.5.1 Visibility
5.5.1.1 Introduction
5.5.1.2 Substrate design
5.5.1.3 Semitransparent electrodes design
5.5.1.3.1 Dummy fingers
5.5.1.3.2 Nails
5.5.1.3.3 Narrow gap size
5.5.1.3.4 Finger width/gap ratio
5.5.1.3.5 Exotic fingers
5.5.1.3.6 ITO material parameters
5.5.1.4 Organic layers
5.5.1.5 Encapsulation
5.5.1.6 ARC
5.5.2 Stability
5.5.2.1 Introduction
5.5.2.2 Sensor driving by controller
5.5.2.3 Sensor architecture
5.5.2.3.1 Finger patterns
5.5.2.3.2 Organic layers
5.5.3 Suitable device architecture for stability and visibility
5.5.3.1 Substrate & ITO
5.5.3.1.1 Operational device
5.5.3.1.2 Improved device
5.5.3.2 ITO patterning
5.5.3.2.1 Operational device
5.5.3.2.2 Improved device
5.5.3.3 Organic layers
5.5.3.3.1 Operational device
5.5.3.3.2 Improved device
5.5.3.4 Encapsulation
5.5.3.4.1 Operational device
5.5.3.4.2 Improved device
5.6 Calibration
5.6.1 Luminance and chromaticity measurements
5.6.2 Ambient light rejection algorithms
5.6.2.1 Introduction
5.6.2.2 Optical filter
5.6.2.3 Electronic filter
5.6.2.4 Mechanical filter
5.6.2.5 Alignment
5.7 Applications
5.7.1 Introduction
5.7.2 Applications/performance improvements based on display light output measurements
5.7.2.1 Luminance/chromaticity uniformity checks
5.7.2.2 Luminance/color uniformity corrections
5.7.3 Applications/performance improvements based on ambient light measurements
1 Technical field of the invention
The present invention relates to the field of displays, and more specifically to features used in combination with a display.
Said feature is an integrated sensor system, able to measure properties of the light emitted by the display at multiple locations dispersed over its active area, as well as properties of the ambient light at the same locations, suitably combined with a controller comprising both hardware and software components. Said controller can interact with the sensors and control the display's electronic driving, and further comprises suitable algorithms for improving the display's performance, guaranteeing its performance during its lifetime, or expanding its functionalities.
2 Background of the invention
Sensors are nowadays commonly used in display devices, for instance in display devices used for professional markets, to ensure the performance requirements of properties of their emitted light are met during their lifetime. Typical properties of the light, measured by the sensor system are luminance, and chromaticity, which can for instance be used for measuring the luminance and chromaticity uniformity of the light emitted by the display.
For example, in the professional broadcast market, complex sensor systems are used to make sure the display remains within its specifications throughout its entire lifetime. Healthcare is another professional display market in which sensors are commonly used. In modern medical facilities high-quality medical imaging using display devices is more important than ever before as a diagnostic tool, as such devices are commonly used nowadays to make life-critical decisions. Sensors can be used in these display devices to ensure their continued optimal performance throughout their entire lifetime. Various types of sensor systems with various architectures can be used in combination with professional displays, depending on the desired measurement type. In general, two main categories of sensor systems can be distinguished: a first category of sensor systems which are integrated into the display, and a second category of non-integrated sensor systems, which use measured values of a separate measurement system.
The most suitable sensor architecture depends not only on the intended market, as different markets typically have different requirements, but also on the display technology it is intended for. Various display technologies are used for different markets nowadays, for instance, Liquid Crystal Displays (LCDs), are currently the dominant technology used in the medical display market. LCDs used in the medical market are transmissive displays, meaning that the liquid crystals cells do not emit light themselves, but merely modulate the light emitted by the backlight, which includes an integrated light source.
As a consequence of this design, sensor systems can be integrated into the backlight, which allow controlling the light output of the backlight over time. This light output can alter over time, for instance due to thermal effects occurring in the displays, or due to the typical degradation of the light source's light output over time. This sensor system has several limitations, however. First of all, the light measured is not the final light output of the display as it will be seen by the observer. Indeed, the light seen by the observer goes through a complex path, determined by the optical design of the backlight, and the liquid crystal material.
This limitation has been overcome in the sensor system described in US 6,950,098 B2. This sensor system measures a property of the light emitted by the display at the viewer's side of the LC layer. As this sensor system does not transmit any light in its measurement area, the measurement is limited to a small measurement area at the border of the screen. The measurement result is used in a feedback loop to improve the display's performance. Such systems allow a more generic usage, as they are suitable for other display technologies aside from transmissive LCDs, including emissive displays such as OLED displays, plasma displays and so on.
Secondly, the light is measured at a single location, while there may be a spatial dependency of the light emitted by the display over time. Several sensor systems have been proposed in prior art that allow measuring the emitted light at several locations on the display's active area, or obtaining a more global measurement of the light emitted by the display. One possible solution is proposed in WO2004/023443. In this invention, a waveguide solution is used to guide the light towards the display's edge, where it is detected by a sensor. Another solution, proposed in US 2007/0052874 A1, uses a sensor which is integrated into a pixel, which allows measuring the emitted light on an individual pixel basis. In both these prior art inventions, the sensor is an entirely non-transparent sensor that is either put outside the display's active area, or designed to be very limited in size, in order to reduce the impact on the display's quality.
An alternative technique to control a property of the display's light output is known from EP 1 424 672 A1, which uses a non-integrated sensor system. In this invention, the luminance emitted by the display over its entire active area is corrected at the level of individual pixels, by capturing the display's luminance output and correcting the display's driving appropriately. The drawback of this solution, however, is that it is not integrated into the display, and due to the nature of this technique, it is typically only performed once, during the display's production process; consequently it does not allow updating the correction during the display's lifetime. The advantage of such a technique is that it does not require a redesign of the display, as it uses a software solution that suitably alters the display's driving.
Aside from controlling the light emitted by the medical display itself, the ambient light level of the environment in which the display is used can impact the quality of the diagnosis, because the visible contrast of the image seen on the display can be reduced due to reflections on the display. In typical prior art designs, an ambient light sensor is integrated into the display's bezel. This location is chosen because the conventional sensor technology is non-transparent, and hence it would cause visible degradations of the display's quality. Such a sensor is capable of measuring the ambient light at a single location. The subject of the present invention is an integrated sensor system with a dual-sided light detection functionality, capable of measuring properties of a matrix-addressed display's light output at a multitude of zones over its active area at any moment during the display's lifetime. Properties that can be measured include luminance and chromaticity, as well as the ambient light at these zones, and the system can be used for a wide variety of matrix-addressed display technologies, without the need to redesign the display. The proposed sensor system comprises a specific design architecture, optimized to render the sensor invisible to human observers, and further comprises suitable calibration techniques and a dedicated controller to ensure the sensor's correct operation. The controller comprises both hardware and software components, allowing it to interact with the sensors and control the display's electronic driving.
A suitable interaction between the controller and the sensor allows obtaining stable measured values. This includes applying a suitable electronic driving signal on the sensor, and a software processing technique to process the obtained measurements. The calibration algorithms then dictate how the controller should adapt the display's driving in order to obtain or guarantee the desired display performance. As a consequence of its design architecture, the sensor system can be used for a variety of specific applications, including touch functionality.
Summary of the invention
2.1 General introduction
The object of the invention is a display-integrated sensor system. This system is beneficially used to enhance the display's performance, guarantee its performance during its lifetime or provide new functionalities, and this without creating any visible artefacts for the user of the display.
2.2 Display summary
The specific technology at the basis of said display is not considered a limitation of the present invention. The display technology can for instance be direct view, projection based or transmissive such as but not limited to LCD, OLED, DLP, or plasma.
Said display comprises at least two display areas which are specific regions on the display's active area, which contain a plurality of pixels. Depending on the exact display technology and design, the specifications of the display can differ profoundly. For instance, in the case of medical displays, typical specifications are a broad viewing angle, a high contrast ratio, a high luminance, a high resolution and a high pixel density. Any feature used in combination with the display should not significantly degrade its specifications, and it should not introduce visible artifacts, as this can ultimately have life-critical consequences, due to the nature of the application of the display. The sensor system described in the present invention is suitable for this application.
The display device further comprises an electronic driving system connected to the sensor system's controller, able to communicate with the actual sensors and able to suitably control the display's driving. This driving can be the driving of the panel, as well as the driving of the backlight, in the specific embodiment of an LCD.
2.3 Sensor system summary
The sensor system is designed such that an individual signal can be measured for every display area, and the obtained value is representative for a property of (or multiple properties of) the light emitted by (a part of) the corresponding display area under test.
The sensor system of the present invention is a semitransparent sensor system, which comprises the actual light sensitive devices, residing on a semitransparent substrate, which is put in front of and parallel to the display' s active area. The sensor system comprises a number of light sensitive areas, which are crossed by light emitted by the display, without the need of dedicated light-guiding designs to redirect the display's emitted light. When passing through the sensor, part of the light is absorbed by the sensor, which allows it to be detected, while the majority of the light continues its path towards the observer in front of the display, with a minimal impact due to the presence of the sensor. The partial absorption and transmission characteristics of the sensor determine its semitransparency.
The light absorbed by the sensor is converted to an electrical signal, which influences the signal received by the controller, which is contacted at the edge of the display, outside the display's active area. Said controller comprises both hardware and software components and is used to interact with the sensor; it receives the electrical measurement signals generated in the at least one sensor and further processes them. Afterwards, the controller can utilize the display's electronic driving system on the basis of the received optical measurement signals, to improve the display's performance. Alternatively, the processed signals can be used in added functionalities.
2.4 Photoconductive sensor elaboration
2.4.1 Basic technology
The proposed technology at the heart of the sensor system, performing the actual light detection, is an organic photoconductive sensor. This technology contains a material which alters its conductivity depending on a property of the impinging light; more specifically, a higher light level increases the conductivity of the device. In the case of an organic photoconductive sensor, an organic material is used as the light sensitive material.
Such organic materials have been a subject of advanced research over the past decades. This research has led to breakthroughs in several domains. The main domain of this material research is the domain of emissive devices. Such emissive devices include single pixel OLED devices, typically used for lighting applications, as well as high-resolution OLED displays, which have a matrix of individually controllable pixels. Other known domains of research are organic photovoltaics (OPVs), as well as organic TFTs.
In the field of sensors, organic materials have been studied less intensively thus far in prior art. The main field where organic materials have been studied in sensors is the field of digital copiers, which is an entirely different application and therefore imposes fundamentally different requirements on the sensors.
2.4.2 Device architecture
2.4.2.1 General description
More specifically, a single organic photoconductive sensor used in the sensor system has a design architecture which consists of two electrodes on a substrate. These two electrodes have finger-shaped interdigitated extensions, i.e. they are positioned such that each finger of one electrode is surrounded by two fingers of the other electrode, separated by a spatial gap to avoid electrical shorts. In this design, there are two outer fingers, which only have a single adjacent finger. In a typical design, all the fingers have an identical width and an identical gap between the fingers. The finger width over finger gap ratio can have a very broad range, for instance starting from 0.5, with no real upper limit.
On top of these electrodes, a stack of organic layers is added. Some layers in the organic stack are photoconductive, which allows the device to detect the impinging light. To use the sensor to measure color, the sensor should have a spectral sensitivity that at least covers the spectral power distribution of the display's primaries. If ambient light is to be measured, the spectral sensitivity should cover the whole visible range of wavelengths, 380-780 nm. The electrodes lead the signal towards the border of the display via a track of a semitransparent conductor, where it is received by the controller. More specifically, an electrical signal needs to be applied over the sensor, typically a voltage signal, and due to the light sensitivity, the resulting current flowing through the sensor is influenced. The organic stack of the device in the present invention is designed and controlled specifically to reach a long device lifetime. Consequentially, it is encapsulated to avoid degradation of the materials.
An example of an organic photoconductive sensor, with lateral electrodes and a stack of organic materials, is known from Applied Physics Letters 93, "Lateral organic bilayer heterojunction photoconductors" by John C. Ho, Alexi Arango and Vladimir Bulovic. The described bilayer comprises an EGL or Exciton Generation Layer (the material PTCBI is used) and a CTL or Charge Transport Layer (the material TPD is used, in contact with the electrodes). In this device, excitons are generated in the EGL; however, in organic materials the dissociation probability of photon-induced excitons is small due to the large binding energy (0.5-1.0 eV) [V.I. Arkhipov, H. Bassler, Phys Status Solidi A 201 (2004) 1152], and hence the excitons diffuse towards the interface of the EGL and the CTL, where they are split into electrons and holes due to their energy band diagrams. Holes end up in the CTL, while electrons remain in the EGL. The holes are then transported to the electrodes in the CTL. However, the mobilities of the charge carriers in organic materials are a few orders of magnitude lower than in inorganic semiconductors, which typically leads to higher driving voltages and/or lower currents compared to inorganic semiconductors.
Yet, this device uses gold electrodes, rendering it clearly visible to a human observer. On top of that, the device is not encapsulated, as it is intended for an entirely different application, the detection of TNT molecules, and it is not intended to have a long lifetime with a stable readout signal.
2.4.2.2 Substrate
The substrate is a rigid substrate with a very high transmission in the visible spectrum and sufficient thermal stability. The luminance transmission can be in the range of 60-98%, preferably in the range of 87-98%. The substrate should also have a rather uniform spectral transmission in the visible range, which limits the coloring of the sensor system when placed in front of the display. These transmission characteristics should be valid at any spatial location on the substrate. Particularly inorganic substrates such as glass have sufficient thermal stability. This thermal stability is, amongst other factors, needed to withstand the operating temperature of the technique used to add the organic layers onto the device, as well as the patterning of the transparent conductor, which are described later on. Examples of suitable substrate glasses are Corning Eagle XG glass or polished soda-lime glass with an SiO2 passivation layer. The thickness of the glass substrate can for instance be in the range of 0.3-30 mm, more preferably in the range of 0.7-1.1 mm.
2.4.2.3 Semitransparent conductor
The substrate is covered with a semitransparent conductor which remains fixed on the substrate, and is able to serve both as the electrode and as the electrical conductor guiding the signal towards the border of the display. A suitable semitransparent conductor is for instance Indium Tin Oxide (ITO), which is used in the preferred embodiment. The exact specifications of the ITO used in the architecture should be carefully selected, as they impact the final performance of the sensor. Important parameters of the ITO include its sheet resistance, as this determines its conductivity; its wavelength-dependent complex refractive index, as this determines, in combination with its thickness, its transmission and reflection properties; and its work function, which determines its electrical properties in combination with the organic layers. A possible range of sheet resistances for ITO is 1 Ω/sq to 5000 Ω/sq. More preferably, the ITO sheet resistance is in the range of 60 Ω/sq to 125 Ω/sq, as explained later. A possible range of ITO layer thicknesses in an operational device is for instance 5-450 nm. More preferably, the ITO thickness is in the range of 25-65 nm.
ITO typically has a non-uniform spectral transmission curve, resulting in a specific coloring, which depends on the ITO thickness and the ITO manufacturing process. An alternative material is the polymeric Poly(3,4-ethylenedioxythiophene) poly(styrenesulfonate), typically referred to as PEDOT:PSS.
On top of that, the spatial uniformity of the ITO layer thickness should be high, to reduce possible inconsistent electrical properties, or a position-dependent luminance or chromaticity output. For instance, the resulting spatial luminance output, expressed as (Lmax-Lmin)/Lmax, should be in the range of 85-100%, more preferably in the range of 95-100%.
Finally, a suitable semitransparent conductor should be patternable, using a cost-efficient patterning process.
2.4.2.4 Patterning
The sensor electrode's finger-shaped extensions are created on the ITO coated substrate by means of laser ablation in the preferred embodiment. When using the laser ablation technique, the ITO material is removed by irradiating the substrate with a laser beam. The removal is done starting from a substrate which is uniformly covered with an ITO coating. By appropriately tuning the laser beam intensity, pulse duration and spot size, a clean removal of the ITO can be realized, while the substrate remains undamaged. This method of ITO removal is preferred as it is easily scalable to substrates of any size, knowing that the design can easily be upscaled, and the ITO can be removed in a single step with a relatively inexpensive process at high yield.
Alternative techniques such as lithographic techniques are not excluded in an alternative embodiment. However, such techniques are typically expensive, especially for patterning large substrates.
2.4.2.5 Organic layers
A functioning device can be made with several possible stack architectures of the organic layers. This stack architecture can for instance differ in the number of layers used. The organic stack can be a monolayer, bi-layer, or more generally a multilayer stack. The precise functionality of the different layers can also differ depending on the precise design. In the preferred embodiment, the device has a three layer stack, consisting of a first Hole Transport Layer (HTL), added on the ITO patterned substrate, onto which an Exciton Generation Layer (EGL) is added, onto which a final organic layer, a second HTL, is added. In this sensor architecture, there are two interfaces at which the excitons can dissociate into electrons and holes, instead of the single interface mentioned above. From this, it is clear that a suitable work function of the semitransparent conductor is defined relative to the HOMO level of the HTL on top of the patterned substrate. The work function of the semitransparent conductor should typically be higher than the HOMO level of the HTL on top of the patterned substrate.
The organic layers can be added onto the patterned substrate by means of several technologies. In the preferred embodiment, vacuum (thermal) evaporation is used, in which the organic layers are deposited in a vacuum chamber by means of for instance a point source or a line organic source. Alternative techniques are not excluded, however. An example of an alternative technique to add the organic materials to the ITO patterned substrate is Organic Vapour Phase Deposition (OVPD). This deposition technique uses an inert carrier gas (N2) to transport the organic molecules to be deposited to the substrate.
Aside from these deposition techniques, entirely different techniques to add the organic layers are not excluded.
2.4.2.6 Encapsulation
It is generally known that organic materials are sensitive to moisture and air, and that the materials themselves can degrade when they are not properly encapsulated. One can think of several possible encapsulation techniques. In a specific embodiment, an etched cover glass is put over the device (while placed in an inert gas atmosphere), which makes contact with the substrate at the border of the display, where it is glued to the substrate, outside the display's active area. Getters are used to absorb any leftover humidity and oxygen in the device; they can be placed outside the display's active area in a proper design, assuming the active area is not excessively large. However, this encapsulation methodology typically requires spacers at several locations over the entire device's surface, to ensure the substrate and the cover glass remain properly separated. On top of that, the spacers should be appropriately dispersed over the non-active device areas, to have the least possible impact on the device's visibility and performance.
In another embodiment of the present invention, the encapsulation of the sensor based on organic photoconductors is done with a method different from the conventional approach (based on an encapsulation plate, spacers and getters, applied in an inert gas atmosphere). In particular, the space between the substrate with sensors and the encapsulation plate is filled with a UV curable encapsulation glue, cured with a suitable intensity and during an optimized time interval. The encapsulation plate in this specific embodiment is a uniform, non-etched glass plate.
In yet another embodiment of the present invention, an AlOx layer encapsulation technique is used. This type of encapsulation requires two steps. First of all, an AlOx film is sputtered on top of the organic layers, after which a non-etched glass is attached on top of the created stack, by using UV-curable glue which has to be cured with a suitable intensity and during an optimized time interval. It is also not excluded that atomic layer deposition can be used to add the AlOx film. It is known from prior art that a thin film of AlOx can be used in OLEDs as a barrier layer that has a limited transmission of water and oxygen (Sang-Hee Ko Park, Jiyoung Oh, Chi-Sun Hwang, Jeong-Ik Lee, Yong Suk Yang, Hye Yong Chu, Kwang-Yong Kang, "Ultra Thin Film Encapsulation of Organic Light Emitting Diode on a Plastic Substrate", ETRI Journal, Volume 27, Number 5, October 2005, and W. Keuning, P. van de Weijer, H. Lifka, W. M. M. Kessels, and M. Creatore, "Cathode encapsulation of OLEDs by atomic layer deposited Al2O3 films and Al2O3/a-SiNx:H stacks", J. Vac. Sci. Technol. A 30, 01A131 (2012)). This encapsulation methodology requires having a top layer in the organic stack that can withstand the sputtering. This can for instance be realized by using a three-layer stack device with a relatively thick HTL on top. A suitable thickness can be for instance any thickness larger than 40 nm.
2.4.2.7 Alternative architectures
Aside from the architecture of the preferred embodiment described above, alternative photoconductive sensor architectures are not excluded.
In an alternative embodiment, sensors comprising composite materials can be considered. Composite materials can for instance comprise nano/micro particles, either organic or inorganic, dissolved in the organic layers, or an organic layer consisting of a combination of different organic materials (dopants). Since the organic photosensitive particles often exhibit a strongly wavelength sensitive absorption coefficient, this configuration can result in a less colored transmission spectrum when suitable materials are selected and suitably applied, or can be used to improve the detection over the whole visible spectrum, or can improve the detection of a specific wavelength region. Alternatively, instead of using organic layers to generate charges and collect them with the electrodes, hybrid structures using a mix of organic and inorganic materials can be used. For instance, a bilayer device that uses a quantum-dot exciton generation layer and an organic charge transport layer can be used. More specifically, colloidal Cadmium Selenide quantum dots and an organic charge transport layer comprising Spiro-TPD can be used.
Although the preferred embodiment, which uses organic photoconductive sensors, allowed obtaining good results, a disadvantage could be that the sensor only provides one output current per measurement for the entire spectrum, so that spectral changes of the display over time can be challenging to measure. This limitation can be overcome for instance by using three independent photoconductors that have a different spectral sensitivity in the visible spectrum. Using suitable calibration techniques, the spectral changes over time can then be characterized better.
These photoconductors could be conceived similarly to the previous descriptions, and stacked on top of each other, or adjacent to each other on the substrate, to obtain an online color measurement.
In yet another alternative embodiment, transversal electrodes with organic photosensitive layers in between can be used.
In another embodiment, a charge separation layer (CSL) may be present between the CTL and the EGL. Various materials may be used as charge separation layer, for instance Alq3.
It is assumed in the remainder of this invention that the organic photoconductors have the architecture of the preferred embodiment.
2.5 Optimized design of the sensor system
The proposed sensor architecture in the present invention contains many design parameters, which can all be optimally selected from a broad span of possibilities. Consequentially, the design freedom can be put to optimal use in order to obtain the most suitable device for the intended application. The most suitable device is designed to have the least possible reduction in image quality and a stable signal readout, and it should fit with the sensor requirements needed for improving the display's performance and expanding its functionalities.
2.5.1 Visibility
2.5.1.1 Introduction
A fundamental difference of the sensor system of the present invention, compared to prior art sensor systems is that the sensors themselves are designed to be semitransparent. This allows measuring properties of the display's emitted light, directly in front of the display's active area, in individual sensing areas which are larger than an individual pixel.
Even when knowing that the device architecture of the photoconductive sensors uniquely comprises elements that can be made semitransparent, it is still required to properly design the photoconductive sensor, such that it causes the least possible reduction in image quality. Therefore, the possible causes of degradation should be clarified. First of all, medical grade displays require a high luminance output and a broad viewing angle, and therefore the sensor should transmit as much luminance as possible over all angles. This includes both global, average changes in luminance, as well as smaller features, which locally alter the emitted luminance. In addition, the sensor should introduce the least possible changes in color. Potential changes in color include a global, uniform shift in the chromaticity of the display's emitted light, as well as a local, spatially nonuniform shift in the chromaticity of the display's emitted light. On top of that, complex optical phenomena, such as interferometric moire patterns, must be eliminated. Furthermore, the reflection of the sensor system is important to consider, as it is positioned in front of a display, and hence it can impact the display's perceived contrast. Having these possible causes of degradation in mind, an optimized sensor design, which reduces the visible impact to the highest possible degree, is part of the present invention.
2.5.1.2 Substrate design
In order to avoid a significant reduction in luminance, a thin glass substrate with a high transmission is carefully chosen. In the preferred embodiment, Corning Eagle XG glass or polished soda-lime glass with an SiO2 passivation layer, with a thickness of 0.7 or 1.1 mm, is chosen.
2.5.1.3 Semitransparent electrodes design
When designing the geometry of the semitransparent electrical conductors, preferably some specific geometry that is less visible to the human eye is used, which can result in less visible non-uniformities and thus a better image quality. For instance, using semitransparent electrodes which comprise curved finger extensions, or finger patterns under an angle relative to the display's pixel matrix, instead of straight finger-shaped semitransparent electrical conductor patterns, will be less easy to detect by the human eye. For instance, these techniques can be used to avoid moire effects that can occur due to the superposition of the finger pattern grid on top of the display pixel grid.
According to embodiments of the invention, parts of a floating electrical conductor system, with no specific electrical functionality, can be applied on regions outside the finger-shaped extensions of the semitransparent electrodes and the ITO tracks that guide the electrical signal from the finger pattern towards the edge of the display's active area, to have a more uniform global luminance and color output. These parts of a floating electrical conductor system have no function aside from improving the visibility. They are separated from the active, partially transparent electrodes with finger-shaped extensions and the ITO tracks that guide the electrical signal from the finger pattern towards the edge of the display, by for instance a gap, to ensure there is no electrical contact between them. Furthermore, the width of this gap is small enough so that the gap itself is not noticeable by the human eye. Moreover, the floating electrical conductor system is typically made of the same material as the active, semitransparent electrical conductors, as the shapes are typically created by patterning an initially uniformly coated ITO substrate.
In addition, to reduce the visibility of the finger patterns themselves, a floating electrical conductor is applied next to the end of a finger of the pattern (this can be interpreted as a "nail" of the finger) which also has no other function except reducing the visibility of the finger pattern itself. They are separated from all active parts of the semitransparent conductor and there is no signal applied on them.
In embodiments of the present invention, the gaps between the fingers of a partially transparent electrical conductor pattern are chosen such that the human eye is not able to distinguish any individual fingers. This is preferably enabled by appropriately choosing the gap between the fingers such that the gap in between the fingers at a given viewing distance will be smaller than the smallest details any (or a typical) human observer is able to discriminate, for the specific contrast between the finger and gap region. Evidently, there are limits in the possible choice of the gap sizes, for two reasons. Firstly, there are technological limits imposed by the laser ablation process used in the preferred embodiment to remove the undesired parts of the semitransparent conductor. Secondly, the gap between the fingers can impact the performance of the sensor.
Furthermore, by widening the fingers while keeping the gap width constant, the percentage of the finger pattern area which is covered by the semitransparent conductor is increased. In this way, the average transmission over the finger pattern becomes very close (almost equal) to the transmission of the areas with a uniform floating electrical semitransparent conductor. This significantly reduces the visibility of the finger pattern area relative to the floating electrical conductor system and makes the whole substrate appear uniform. In other words, by suitably selecting the finger width/gap ratio for a suitable gap, the human eye is unable to distinguish the sensor's finger pattern from the neighbouring ITO tracks and floating ITO conductors, wherein the patterns and tracks conduct the signal towards the edge of the display, and the parts of the floating electrical conductor system carry no specific signal. As the number of fingers has to be maintained to reach a desired signal amplitude, the sensor's size will consequentially increase when selecting a higher finger width to gap ratio for a given fixed gap.
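By way of illustration, the following minimal sketch (in Python, with purely hypothetical transmission values) computes the area-weighted average transmission of a finger/gap pattern and compares it with an area uniformly covered by floating ITO; widening the fingers at a constant gap brings the two values closer together, which is the effect described above.

def finger_pattern_transmission(t_ito, t_gap, finger_width_um, gap_um):
    # Fraction of the pattern area covered by the semitransparent conductor.
    coverage = finger_width_um / (finger_width_um + gap_um)
    # Area-weighted average luminance transmission of the finger pattern.
    return coverage * t_ito + (1.0 - coverage) * t_gap

# Hypothetical values: ITO-covered regions transmit 90% of the luminance,
# bare gap regions (no ITO) 97%.
t_uniform_ito = 0.90
t_ratio_1 = finger_pattern_transmission(0.90, 0.97, 10.0, 10.0)  # width/gap = 1 -> 0.935
t_ratio_9 = finger_pattern_transmission(0.90, 0.97, 90.0, 10.0)  # width/gap = 9 -> 0.907
# The wider fingers bring the pattern's average transmission much closer to
# that of the uniform floating ITO areas, reducing the pattern's visibility.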
Moreover, in a preferred embodiment of the invention, semi-random fingers are created by randomly choosing several points on the two edges of the fingers, through which the fingers should pass; the different points are then for instance connected using a cubic spline interpolation. The position of the adjacent finger preferably is limited in distance, in the sense that the gap in between the fingers should remain approximately constant to ensure that the device's properties remain unaltered.
In addition, in other embodiments, the points can be chosen in a semi-random way in the sense that they are limited to a specific area, to avoid too high spatial frequencies in the fingers. Preferably, the ratio between the finger width and the gap between the fingers is non-constant in the created finger pattern, as a consequence of approximately maintaining the gap size and choosing the points of the edge semi-randomly. In addition, when maintaining a specific gap size, optimized for the sensor's performance as described above, the average finger width can be designed to reduce the visibility relative to the areas with floating ITO.
These semi-random fingers reduce the sensor's visibility, due to the reduced moire effect, and due to the reduced visibility in reflection and transmission resulting from the irregular nature of the sensor (for instance, diffraction effects are reduced because the two electrodes no longer form a regular grid).
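The following sketch (Python, assuming NumPy and SciPy are available; all dimensions are hypothetical) illustrates how one semi-random finger edge could be generated: a few control points with a bounded random lateral offset are connected by a cubic spline, and the adjacent edge is obtained by shifting the curve by an approximately constant gap.

import numpy as np
from scipy.interpolate import CubicSpline

rng = np.random.default_rng(0)

finger_length_mm = 5.0
nominal_x_um = 100.0          # nominal lateral position of this finger edge
max_offset_um = 15.0          # bound on the random excursion (limits spatial frequency)
gap_um = 10.0                 # approximately constant gap to the neighbouring finger

# Control points along the finger, with bounded semi-random lateral offsets.
y_ctrl_mm = np.linspace(0.0, finger_length_mm, 8)
x_ctrl_um = nominal_x_um + rng.uniform(-max_offset_um, max_offset_um, y_ctrl_mm.size)

edge = CubicSpline(y_ctrl_mm, x_ctrl_um)

# Sample the two edges of the gap, e.g. for export to the laser-ablation pattern.
y_mm = np.linspace(0.0, finger_length_mm, 500)
edge_a_um = edge(y_mm)
edge_b_um = edge_a_um + gap_um    # neighbouring finger edge at a constant gap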
Aside from the geometry of the fingers, the exact material parameters of the ITO layer also determine the visibility of the resulting sensors. Indeed, the specific manufacturing procedure of the ITO results in specific thicknesses and complex refractive indices, which will eventually be the parameters of the ITO layer that contribute to the sensors' absorption and transmission characteristics. The ITO parameters also affect its electrical behaviour. For instance, a thinner ITO layer typically results in a higher sheet resistance, which can render it more difficult to guide the electrical signal towards the controller and perform a proper detection of the signal. A typical range of ITO sheet resistances for a suitable visibility is for instance 60 Ω/sq to 125 Ω/sq.
On top of that, the exact technological procedure of the ablation process impacts the visibility of the sensors. If an insufficient amount of ITO is ablated, parts of ITO remain on the substrate, which can form a conductive path that can cause a short circuit in the sensor design, and hence can render the sensor unusable. On the other hand, an excessive ablation can cause the removal of parts of the glass substrate, which typically results in a rough glass surface, which can cause scattering of the impinging light, and hence it can increase the sensors' visibility.
2.5.1.4 Organic layers
The organic layers need to be suitably selected to reach the visibility requirements. When focusing on the sensor's visibility, the organic layers mainly demand a suitable wavelength-dependent complex refractive index and thickness. In the preferred device structure, the two hole transport layers can be designed such that they impact the visibility of the sensor only to a minor extent, as they can consist of materials with a minor absorption in the visible spectrum. They can, however, impact the thin-film effects occurring in the sensor. The Exciton Generation Layer, however, severely impacts the sensor's transmission, as this layer's function is to partially absorb incoming photons and convert them into a measurable electric signal. Therefore, the Exciton Generation material ideally has a uniform wavelength-dependent absorption. The layer's absorption depends on its thickness and wavelength-dependent absorption coefficient, as dictated by the Beer-Lambert law. Therefore, the material's absorption coefficient may not be too high, such that the layer thickness does not have to become too thin to reach a desired transmission, while remaining manufacturable by the organic layer addition techniques described earlier.
The luminance absorption of the organic stack is typically in the range of 3-30%. The luminance transmission of the organic stack is preferably designed to be in the range of 80-95%. The wavelength-dependent absorption of the organic layers is also designed to be rather uniform over the visible spectrum, to avoid strong coloring due to the organic layers.
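As a numerical illustration of the Beer-Lambert trade-off mentioned above, the following sketch (Python; the absorption coefficients are assumed, purely illustrative values) shows how the Exciton Generation Layer thickness maps to its transmission; a moderate absorption coefficient keeps the thickness manufacturable while staying within the 80-95% transmission target.

import numpy as np

# Assumed, illustrative absorption coefficients of the EGL (per nm of thickness).
wavelengths_nm = np.array([450.0, 550.0, 650.0])
alpha_per_nm = np.array([2.0e-3, 1.6e-3, 1.8e-3])

def egl_transmission(thickness_nm):
    # Beer-Lambert law: T(lambda) = exp(-alpha(lambda) * d)
    return np.exp(-alpha_per_nm * thickness_nm)

for thickness in (40.0, 80.0):
    print(thickness, np.round(egl_transmission(thickness), 3))
# 40 nm -> roughly 0.92-0.94 transmission; 80 nm -> roughly 0.85-0.88,
# i.e. both within the targeted 80-95% range for these assumed coefficients.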
The organic layers are put on the substrate as uniform layers. This implies that they will introduce a global, uniform change in the chromaticity of the display's emitted light. On top of that, a uniform deposition of the organic layers is used, typically with a uniformity between 90-100% over the entire area; any residual color and luminance non-uniformity results from these layer non-uniformities. This renders it easier to compensate the color shift, for instance by using an optimized antireflection coating, as described later on. Alternatively, the display's driving can be suitably adapted to compensate for this uniform coloring.
2.5.1.5 Encapsulation
In embodiments of the present invention, the encapsulation of the sensor based on organic photoconductive sensors is preferably done with a method different from the conventional approach (based on an encapsulation plate, spacers and getters, applied in an inert gas atmosphere). In particular, the space between the substrate with sensors and the encapsulation plate is filled, preferably using an encapsulation glue which has a refractive index close to the refractive index of both the substrate and the encapsulation plate (typically made of glass), a high transmission, and no coloring effect. This helps to remove a few unwanted visual artifacts such as the spacers and getters. Furthermore, it improves the transmission of the sensor by reducing the reflection at internal optical interfaces. In addition, it reduces the visibility of the finger pattern in reflection and transmission. This is because the glue fills in all the gaps which contain the organic layers, and reduces the diffractive effects caused by the finger pattern. The AlOx layer based encapsulation technique uses the same principle; the only difference in the structure resides in the additional sputtered AlOx film, but a similar effect is realized on the global device structure in terms of visibility.
Furthermore, the transmission can be improved by using antireflection coatings on the external side of the substrate and the encapsulation plate. The antireflection coating can be tuned in such a way that it reduces the reflection over all wavelengths. For some wavelength regions the reflection is reduced more than for other wavelength regions, which is helpful to remove any potential coloring of the sensor.
2.5.2 Stability
2.5.2.1 Introduction
In order to be used as a reliable sensor system, the signal generated by the sensor needs to be stable over extended periods of time. The sensor's design architecture, as well as the signal applied by the controller, should be suitably designed in order to reach a stable readout. Depending on the desired application, a stable readout after calibration for luminance can for instance mean an error of 0-20%, more preferably an error of 0-3%.
2.5.2.2 Sensor driving by controller
The main contribution of the controller to the sensor's stability is the driving signal applied to the photoconductive sensors, which is a specific voltage or current signal. From experiments, it became apparent that a voltage-driven sensor is preferred, due to the specific shape of the sensor's IV curves. The type of voltage driving signal applied to the sensor can for instance be a square wave, a sinusoidal wave, or more exotic shapes known by the skilled person. Preferably, symmetrical waves going from a positive voltage to the same negative voltage are used. For example, good results were obtained using a square wave that switches between a positive and negative voltage. Preferably, the applied waveform does not result in a DC voltage over the cell; in other words, when integrating the applied waveform over one period, the integrated voltage value is zero.
Preferably, the applied wave is repeated multiple consecutive times for the duration of the measurement procedure, and the measured data retrieved at a certain point in time during the wave is then used. Preferably, a specific point on the upper and lower flanks is tracked during the consecutive cycles, and its output is used as the measurement value over time. The points that are tracked are typically the points on the flank right before the voltage switches from high to low or from low to high. For example, the final value at the end of the positive flank or at the end of the negative flank is used. The optimal frequency depends on the exact organic materials used in the sensor's layers, and their layer thickness. The frequency can be chosen between 0.01 Hz and 60 Hz, preferably between 0.05 and 2.5 Hz.
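A minimal sketch (Python, with assumed timing and amplitude values chosen within the preferred ranges above) of how a controller could generate the symmetric square-wave drive and sample the measured current at the end of each positive and negative flank:

import numpy as np

amplitude_v = 1.0        # symmetric drive, zero DC component over one period
frequency_hz = 0.5
samples_per_period = 1000
periods = 5

period_s = 1.0 / frequency_hz
t = np.arange(periods * samples_per_period) / (frequency_hz * samples_per_period)
drive = np.where((t % period_s) < period_s / 2, amplitude_v, -amplitude_v)

def flank_samples(current, samples_per_period, periods):
    # Return the current sampled right before each high-to-low and low-to-high switch.
    pos, neg = [], []
    for k in range(periods):
        base = k * samples_per_period
        pos.append(current[base + samples_per_period // 2 - 1])  # end of positive flank
        neg.append(current[base + samples_per_period - 1])       # end of negative flank
    return np.array(pos), np.array(neg)

# 'current' would come from the controller's readout electronics; a purely
# hypothetical ohmic response is used here as a placeholder.
current = 1e-9 * drive
pos_flank, neg_flank = flank_samples(current, samples_per_period, periods)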
In addition, the amplitude of the applied signal can have an impact on the resulting stability of the measured signal and is typically chosen between 0.3-3 V. However, a broader range is not excluded, for instance 0.05-500 V. An applied voltage signal with a lower amplitude (for instance 1 V) generally renders a more stable result than a signal with a higher amplitude (for instance 8 V). The obtained measurement results can be asymmetrical, meaning that the positive flank renders a different result than the negative flank, and in addition they can converge differently. In some embodiments the measurement results are averaged, or only one part (the positive or negative flank) is used.
On top of that, the sensors have an initial burn-in, meaning that in the beginning of their lifetime the measured value changes over time, even when the sensor is used under constant boundary conditions. During the burn-in phase, a decaying signal is measured over time under constant driving and environmental conditions; this can for instance be overcome by operating the sensor through this burn-in period upfront in the production facility, before shipping it to the field.
2.5.2.3 Sensor architecture
The organic layer stack of the sensor, i.e. the stack material composition and the layer thicknesses used, has a direct impact on its fundamental stability. For example, the HTL thickness and the number of HTL layers impact the stability. In the case of a dual layer stack, a thicker HTL in the range of for instance 80-160 nm is preferred over a thinner HTL of around 40 nm for improving the stability.
It is preferred, but not required, to have gaps between the fingers with a width of 6 μm to 30 μm, because the gaps have a proven impact on the signal amplitude and visibility.
When it comes to encapsulating the sensor by using a glue over its entire surface, as indicated in an earlier embodiment, initial results indicate that the sensor performs (at least) as well as a sensor with standard encapsulation (using spacers, getters and an inert gas atmosphere).
2.6 Calibration
2.6.1 Introduction
The sensor system with its design architecture described in the present invention, has some inherent imperfections.
2.6.1.1 Angular sensitivity
First of all, the sensor as described in a preferred embodiment does not operate as an ideal luminance sensor.
As the sensor used is not a perfect luminance sensor, since it does not only capture light in a very small opening angle, preferably its angular sensitivity is taken into account, as described in the following part.
For a given point on an ideal luminance sensor, the measured luminance corresponds to the light emitted by the part of the active area located directly under it (assuming that the sensor's sensitive area is parallel to the display's active area). On the contrary, the sensor according to embodiments of the present invention captures light from the pixel(s) under the point together with some light emitted by surrounding pixels. More specifically, the values captured by the sensor cover a larger area than the size of the sensor itself. Because of this, the patterns depicted on the display and captured by the sensor do not correspond to the actual patterns, and therefore a correction has to be done, taking into account the actual angular sensitivity of the sensor, to obtain the actual luminance values. To enable the latter, the luminance emission pattern of a pixel is preferably measured as a function of the angles of its spherical coordinates. The distance preferably is kept constant over the measurements. When a luminance sensor is positioned parallel to the display's active area, the latter corresponds to an inclination angle of 0, meaning that only an orthogonal light ray is considered. In addition, the exact angular light sensitivity of the sensor can be characterized. These measurements can then be used to obtain the corrected pattern for the actual light the sensors will detect. Using this actual light output will provide an additional improvement and advantageous effect of the algorithm, which will render more reliable results.
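A minimal sketch (Python; the emission and sensitivity functions below are placeholders for the characterisation measurements described above, and the geometry values are hypothetical) of how the reading of one sensor point could be modelled as an angularly weighted sum over nearby pixels, which is the basis for correcting the measured pattern:

import numpy as np

sensor_height_mm = 2.0            # assumed distance between panel and sensor plane

def weight(pixel_xy_mm, sensor_xy_mm, emission, sensitivity):
    # Relative contribution of one pixel to one sensor point.
    dx = np.hypot(*(np.asarray(pixel_xy_mm) - np.asarray(sensor_xy_mm)))
    inclination = np.arctan2(dx, sensor_height_mm)      # 0 rad = orthogonal ray
    return emission(inclination) * sensitivity(inclination)

def predicted_reading(pixel_grid_mm, luminances, sensor_xy_mm, emission, sensitivity):
    w = np.array([weight(p, sensor_xy_mm, emission, sensitivity) for p in pixel_grid_mm])
    w = w / w.sum()
    return float(np.dot(w, luminances))

# 'emission' and 'sensitivity' would be interpolations of the characterisation
# measurements; Lambertian-like placeholders are used here for illustration only.
emission = lambda theta: np.cos(theta)
sensitivity = lambda theta: np.cos(theta) ** 2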
Note that a display's angular emission pattern generally depends on its digital driving level (the grey level or color value sent to its pixels). Therefore, the conversion between values measured by the sensor system of the present invention and the actual values requires a calibration that depends both on the luminance measured by the sensor and on the driving level of the display.
2.6.1.2 Sensor non-linearity
The sensor according to embodiments of the present invention generally has a non-linear response to the intensity of the impinging light, even when the light impinges with a constant spectrum. The sensor is generally more sensitive to changes in intensity at lower light levels compared to changes at higher light levels.
2.6.1.3 Spectral sensitivity
On top of the angular sensitivity and non-linearity, the spectral sensitivity of the sensor has to be considered. The sensor's spectral sensitivity is directly related to the spectral sensitivity of the Exciton Generation Layer. As the Exciton Generation Layer's spectral sensitivity generally does not match the spectral sensitivity curve required to measure luminance and chromaticity, a calibration technique is needed to utilize the sensor for luminance and chromaticity measurements.
This can be understood as follows. A human observer is unable to distinguish the brightness or chromaticity of light with a specific wavelength impinging on his retina. Instead, he possesses three distinct types of photoreceptors, sensitive to three distinct wavelength bands that define his chromatic response. This chromatic response can be expressed mathematically by color matching functions. Consequentially, three color matching functions, x̄(λ), ȳ(λ) and z̄(λ), have been defined by the CIE in 1931. They can be considered physically as three independent spectral sensitivity curves of three independent optical detectors positioned at our retinas. These color matching functions can be used to determine the CIE 1931 XYZ tristimulus values, using the following formulae:
X = ∫ I(λ) x̄(λ) dλ
Y = ∫ I(λ) ȳ(λ) dλ
Z = ∫ I(λ) z̄(λ) dλ
where I(λ) is the spectral power distribution of the captured light and the integrals run over the visible wavelength range. The luminance corresponds to the Y component of the CIE XYZ tristimulus values. Since a sensor according to embodiments of the present invention has a characteristic spectral sensitivity curve that differs from the three color matching functions depicted above, it cannot be used as such to obtain any of the three tristimulus values. However, the sensor according to embodiments of the present invention is sensitive in the entire visible spectrum, because the EGL is photosensitive in the entire visible spectrum (or alternatively, it is at least sensitive to the spectral power distributions of a (typical) display's primaries), which allows obtaining the XYZ values after calibration for any specific type of spectral light distribution emitted by the display.
Displays are typically either monochrome or color displays. In the case of monochrome (e.g. grayscale) displays, they only have a single primary (e.g. white), and hence emit light with a single spectral power distribution. Color displays typically have three primaries - red (R), green (G) and blue (B) - which have three distinct spectral power distributions, although displays with more than three primaries are also possible.
A calibration step preferably is applied to match the XYZ tristimulus values corresponding to the spectral power distributions of the display's primaries to the measurements made by the sensor according to embodiments of the present invention. In this calibration step, the basic idea is to match the XYZ tristimulus values of the specific spectral power distribution of the primaries to the values measured by the sensor, by capturing them both with the sensor and with an external reference sensor. Since the sensor according to embodiments of the present invention is non-linear, the spectral power distribution associated with the primary may alter slightly depending on the digital driving level of the primary, and the angular emission pattern of the display may alter depending on the driving of the display's (sub)pixels, it is insufficient to match them at a single level. Instead, they ideally need to be matched at every digital driving level. This will provide a relation between the actual tristimulus values and the sensor measurements in the entire range of possible values. To obtain a conversion between any measured value, as measured by the sensor according to the preferred embodiment, and the desired tristimulus value, an interpolation is needed to obtain a continuous conversion curve. This results in three conversion curves per display primary that convert the measured value into the XYZ tristimulus values. Note that the conversion can be higher-dimensional; for instance a 2D conversion table can be required, due to the sensor's imperfections. For instance, the conversions will not only depend on the measured value, but also on the display's driving levels, as the angular emission depends on the display's driving. In the case of a monochrome display, three conversion tables are obtained when using this calibration methodology. Obtaining the XYZ tristimulus values is then straightforward when using a monochrome display: the light to be measured can simply be generated on the display (in the form of uniform patches) and measured by the sensor according to embodiments of the present invention, when using the different conversion tables.
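A minimal sketch (Python; all calibration numbers are hypothetical) of the monochrome case: at each digital driving level the patch is measured with the integrated sensor and with an external XYZ reference, and three interpolated conversion curves are built that map a later sensor reading to X, Y and Z:

import numpy as np

# Hypothetical calibration sweep: sensor readings (a.u.) and reference XYZ values,
# one row per calibrated digital driving level.
sensor_readings = np.array([0.02, 0.10, 0.35, 0.80, 1.00])
reference_xyz = np.array([[0.4, 0.4, 0.5],
                          [2.1, 2.2, 2.4],
                          [8.0, 8.3, 9.0],
                          [19.5, 20.1, 22.0],
                          [24.8, 25.6, 28.0]])

def make_conversion_tables(readings, xyz):
    # Build one interpolated conversion curve per tristimulus component.
    order = np.argsort(readings)
    r = readings[order]
    return [lambda s, col=xyz[order, i]: np.interp(s, r, col) for i in range(3)]

to_X, to_Y, to_Z = make_conversion_tables(sensor_readings, reference_xyz)

measured = 0.5            # a later in-field sensor reading for a uniform patch
print(to_X(measured), to_Y(measured), to_Z(measured))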
In the case of a color display, this calibration needs to be done for each of the display's primaries. This results in 9 (multidimensional) conversion tables, in the typical case when the display has 3 primaries. Note that a specific colored patch with a specific driving of the red, green and blue primary will have a specific spectrum, which is a superposition of the scaled spectra of the red, green and blue primaries, and hence every possible combination of the driving levels needs to be calibrated individually. Therefore, an alternative methodology can suitably be used: the red, green and blue primaries need to be calibrated individually for each digital driving level. During such a calibration a single primary patch is displayed while the other 2 channels (primaries) remain at the lowest possible driving level (emitting the least possible light, ideally no light at all). This suitable methodology implies that the red, green and blue driving of the patch needs to be done sequentially. The correct three conversion tables corresponding to the specific primary will need to be applied to obtain the XYZ tristimulus values from the measured values. This results in three sets of tristimulus values: (XRYRZR), (XGYGZG) and (XBYBZB). Since the XYZ tristimulus values are additive, the XYZ tristimulus values of the patch can be obtained using the following formulae:
X = XR + XG + XB
Y = YR + YG + YB
Z = ZR + ZG + ZB
Note that we assume the display has no crosstalk in these formulae, but more complicated formulas can be formulated in order to take crosstalk into account as well. Two parts can be distinguished in the XYZ tristimulus values. Y is directly a measure of brightness (luminance) of the emitted light. The chromaticity, on the other hand, can be specified for instance by two derived parameters, x and y. These parameters can be obtained from the XYZ tristimulus values using the following formulae:
x = X / (X + Y + Z)
y = Y / (X + Y + Z)
This offline chromaticity measurement, which is enabled by calibrating the sensor to an external sensor able to measure tristimulus values (X, Y and Z), thus allows measuring brightness as well as chromaticity.
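The additive combination and the chromaticity formulae above can be captured in a few lines; the following sketch (Python, with hypothetical per-primary values) combines the per-primary tristimulus values obtained from the nine conversion tables into X, Y, Z and the chromaticity coordinates x and y:

def combine_primaries(xyz_r, xyz_g, xyz_b):
    # Add the per-primary tristimulus values (X = XR + XG + XB, etc.),
    # assuming a display without crosstalk as stated above.
    return tuple(r + g + b for r, g, b in zip(xyz_r, xyz_g, xyz_b))

def chromaticity(xyz):
    # x and y chromaticity coordinates; Y itself is the luminance.
    X, Y, Z = xyz
    total = X + Y + Z
    return X / total, Y / total

# Hypothetical per-primary values, e.g. obtained via the conversion tables.
XYZ = combine_primaries((8.0, 4.2, 0.3), (7.1, 14.8, 2.0), (3.4, 1.5, 17.9))
x, y = chromaticity(XYZ)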
2.6.2 Luminance and chromaticity measurements
When using the sensor for luminance and chromaticity measurements, the limitations of the sensor need to be considered when performing a suitable calibration technique. This has implications on the usability of the display. Indeed, it is clear from the previous description that uniform patches have to be displayed underneath the sensors, which implies that the sensor is not used during the normal operation of the display, as images with any image content can be displayed by the user during normal display operation.
2.6.3 Reference devices
In order to overcome the limitations of the sensor, sensor calibration tables need to be created by appropriately driving the display and measuring the desired properties of the display' s emitted light with both the sensor system of the present invention and a reference sensor. For instance, for each angular emission pattern (i.e. for each display driving level), both the values measured by the sensor system of the present invention, as well as the values measured by the reference device, need to be obtained for a suitable range of measurement values. This can for instance be done by driving the backlight in a suitable range for all of the display's digital driving levels, and measuring both the values measured by the sensor system and the reference sensor, when the display is an LCD. The obtained values should then be put in a multidimensional table and interpolated, in order to obtain the correct value for each measured value by the sensor system of the present invention.
Alternatively, the relative angular and spectral sensitivity of the sensor system can be characterized, and combined with the angular and spectral emission of the display, to obtain the sensor detection vs. driving level curve at a certain value measured by the reference device, using for instance a mathematical algorithm, or an optical simulation combined with a mathematical algorithm. When combining this with the sensor's non-linear response to the property of the light, the multidimensional calibration table can also be obtained. Note that when using the latter approach, it is assumed that the sensor's non-linear response, and relative angular and spectral sensitivity can be characterized independently.
Note that these multidimensional calibration tables need to be created for every sensor, in case the individual sensors of the sensor system behave slightly differently (for instance due to slight inhomogeneities of the organic layers) or if the angular or spectral emission pattern of the display is position-dependent.
2.6.4 Sensor ageing
2.6.4.1 Introduction
In case the sensor system would degrade during its lifetime, several solutions can be used to again ensure a proper calibration of the sensor.
2.6.4.2 Using reference sensor
The same procedure outlined above can be redone at any time during the display's lifetime. This will of course require a rather costly intervention, as this will render the display inoperable during the time of the recalibration, and requires reference sensors in the field which often do not come at a low cost.
2.6.4.3 Using a model and LEDs
As the relative angular sensitivity of the sensor system typically does not alter, it can be sufficient to re-measure the non-linear response of the sensor. This can for instance be done by integrating a light source in the display, for instance an LED or a set of LEDs, positioned at the viewer's side of the display's active area, but outside the active area, for instance underneath the bezel. These light sources are only used for recalibration purposes, and therefore do not alter their emission properties over time. In this solution, the sensitivities of all sensors need to be determined relative to each other, because the different sensors will detect different values due to their different positions on the display's active area. As the spectrum of the light source is generally different from the spectrum of the display's primaries, an additional calibration step is needed to link the different measurements.
Note that neither spectral changes due to changes of the display over time, nor changes in relative angular emission of the display are taken into account in this methodology. Spectral changes over time can be integrated into the calibration by means of a model that predicts these changes over time. On top of that, in this methodology it is assumed that the non-linearity is independent of the angular emission, which should still be confirmed from measurements.
2.6.5 Physical value measurements
However, it is not excluded that some detection can be done in real-time, during normal operation for the user. In some specific embodiments, it can be suitable to do the measurements relative to a reference value.
In an alternative use case of the sensor, the actual physical signal generated by the sensor is directly used, without a calibration step that converts the sensed values in another unit. This allows online, user transparent use of the sensor, where the actual image content controlled by the user can be measured, instead of displaying a dedicated patch. This allows control of the display in real time, in the sense that the measured value can be compared to an expected value, calculated by a mathematical algorithm.
Said algorithm may be used to calculate the expected response of the sensor, based on digital driving levels provided to the display, and the physical behaviour of the sensor (this includes its spectral sensitivity over angle, its non-linearities and so on).
In one embodiment, the difference between the sensing result and the theoretically calculated value is compared by a controller to a lower and/or an upper threshold value, taking into account the reference. If the result is outside the accepted range of values, it is to be reviewed or corrected. One possibility for review is that one or more subsequent sensing results for the display area are calculated and compared by the controller. If more than a critical number of sensing values for one display area are outside the accepted range, then the setting for the display area is to be corrected so as to bring it within the accepted range. A critical number is for instance 2 out of 10; e.g. if 3 to 10 of the sensing values are outside the accepted range, the controller takes action. Else, if the number of sensing values outside the accepted range is above a monitoring value but not higher than the critical value, the controller may decide to continue monitoring.
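A minimal sketch (Python; the threshold band and the critical and monitoring counts are hypothetical examples) of the review logic described in this embodiment:

def review_display_area(sensed, expected, lower=-0.05, upper=0.05,
                        critical=2, monitor=1):
    # sensed/expected: sequences of recent values for one display area.
    out_of_range = sum(1 for s, e in zip(sensed, expected)
                       if not (lower <= (s - e) <= upper))
    if out_of_range > critical:    # e.g. 3 out of 10 -> correct the display area setting
        return "correct"
    if out_of_range > monitor:     # above the monitoring value -> continue monitoring
        return "monitor"
    return "ok"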
In order to balance processing effort, the controller may decide not to review all sensing results continuously, but to restrict the number of reviews to infrequent reviews with a specific time interval in between. Furthermore, this comparison process may be scheduled with a relatively low priority, such that it is only carried out when the processor is idle.
In another embodiment, such a sensing result is stored in a memory. At the end of a monitoring period, such a set of sensing results may be evaluated. One suitable evaluation is to find out whether the sensed values of the difference in light are systematically above or below the threshold value that, according to the settings specified by the driving of the display, should be emitted. If such a systematic difference exists, the driving of the display may be adapted accordingly. In order to increase the robustness of the set of sensing results, certain sensing results may be left out of the set, such as for instance an upper and a lower value. Additionally, it may be that only values corresponding to a certain display setting are looked at. For instance, only sensing values corresponding to high (RGB) driving levels are looked at. This may be suitable to verify whether the display behaves at high (RGB) driving levels similarly to its behaviour at other settings, for instance low (RGB) driving levels. Alternatively, the sensed values of certain (RGB) driving level settings may be evaluated, as these values are most reliable for reviewing driving level settings. Instead of high and low values, one may think of light measurements when emitting a predominantly green image versus the light measurements when emitting a predominantly yellow image.
Additional calculations can be based on said set of sensed values. For instance, instead of merely determining a difference between the sensed value and the theoretically calculated value of the light output, which is the originally calibrated value, the derivative may be reviewed. This can then be used to see whether the difference increases or decreases. Again, the timescale for determining such a derivative may be smaller or larger, preferably larger, than that of the absolute difference. It is not excluded that average values are used for determining the derivative over time. It will be understood by the skilled reader that use is made of storage of the display's theoretically calculated values and sensed values for said processing and calculations. An efficient storage protocol may further be implemented by the skilled person.
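The end-of-period evaluation and the derivative over time could for instance look as follows (Python sketch; trimming one upper and one lower value as suggested above, all data hypothetical):

import numpy as np

def evaluate_monitoring_period(differences, timestamps_s):
    # Robust systematic offset: drop the single lowest and highest difference.
    trimmed = np.sort(np.asarray(differences, dtype=float))[1:-1]
    systematic_offset = float(trimmed.mean())
    # Coarse derivative over time: slope of a linear fit to the stored differences.
    slope_per_s = float(np.polyfit(timestamps_s, differences, 1)[0])
    return systematic_offset, slope_per_s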
2.6.6 Ambient light rejection algorithms
2.6.6.1 Introduction
As mentioned earlier, the sensor system of the present invention is a unique bidirectional sensor system, meaning that, when using the sensor system in an environment with a certain level of ambient light (i.e. light not originating from the backlight of the display), the measured light emitted from the display area to the corresponding sensor will be a combination of the light emitted by the display and the ambient light falling on the sensor from the environment. This light coming from the environment may dynamically change for instance due to shadows being cast on the display screen. In order to be able to distinguish the contribution of light coming from both sides, the sensor system can comprise an optical, electronic or mechanical filter.
2.6.6.1.1 Ambient light measurement
One needs to take into consideration here as well that the light emitted by the display can have a spectrum that significantly differs from the perceived spectrum of the ambient light. Hence, if it is beneficial to quantify the ambient light, the obtained result needs to be matched to a device with a proper V(λ) spectral sensitivity curve in order to obtain an actual ambient light measurement. The V(λ) curve mimics the spectral response function of the human eye in the wavelength range from 380 nm to 780 nm and is used to establish the relation between a radiometric quantity that is a function of wavelength λ and the corresponding photometric quantity, and hence allows correctly measuring ambient light of any type with any spectrum. An ambient light sensor also has a specific required angular sensitivity, which can differ from the angular sensitivity of the sensor system of the present embodiment. This also requires appropriate matching. However, this matching is not required if it is sufficient to remove the contribution of the ambient light to the measured signal (in case this is ambient light plus light emitted by the display area).
In any case, one needs to take into consideration that the sensor still suffers from the previously mentioned imperfections, and therefore, determining the property of the display light emission from a set of measurements where the display is turned on (measuring a combination of ambient light and display light) and off (measuring ambient light only) requires considering, amongst other effects, its non-linearity. This basically implies that a simple subtraction does not suffice for obtaining the desired value. It is assumed in the further embodiments that the sensor is calibrated appropriately to cope with these imperfections.
2.6.6.2 Optical filter
When using an optical filter system, the filter can for example comprise optical differential filters, such as two sensors, one of which is sensitive to a polarization of the received light and the other insensitive to the impinging light's polarization; this can be used when the display emits linearly polarized light. To obtain this polarization sensitivity, at least one sensor can be rubbed and at least one sensor can be non-rubbed, whereby the non-rubbed sensor is polarization insensitive. By rubbing the sensor (a method to do so is described by J. M. Geary et al., The mechanism of polymer alignment of liquid-crystal materials, J. Appl. Phys. 62, 10 (1987), which is included herein by reference) and aligning the molecules of the layer, only linearly polarized light in one direction is absorbed by the sensor. The other, orthogonal, component of the light measured by said sensor is not or only partially absorbed. The light emitted by most LCDs is typically linearly polarized in one direction, whereas the ambient light is unpolarized. As a result, when using sensor systems comprising two sensors, one of which is sensitive to a polarization of the received light and the other able to detect all polarizations of the received light, such as at least one rubbed and one non-rubbed sensor, at least one sensor only reacts to the polarization corresponding to the polarization emitted by the display device and to the part of the ambient light with the same polarization, whereas with the other sensor, both the ambient light and the light emitted by the display device are detected. The light measured by the at least one non-rubbed sensor is the total of the ambient light and the polarized light emitted by the display device at the location of the sensor. On the other hand, the light measured by the at least one rubbed sensor is the total of 50% of the ambient light and the light emitted by the display device at the location where the sensor measures. As the two sensors are located close to each other, the ambient light remains approximately constant, and the display can be used to depict the same content under both sensors. This can mathematically be expressed as two linear equations with two unknowns, which is easily solved (assuming the sensor is calibrated to overcome its imperfections such as non-linearities). As a result, the respective contributions of the ambient light and the display device can be derived and isolated.
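The two linear equations with two unknowns mentioned above can be written out explicitly; the following sketch (Python, with hypothetical readings that are assumed to be already calibrated to comparable units) isolates the display and ambient contributions:

def separate_display_and_ambient(non_rubbed, rubbed):
    # Solve the two equations:
    #   non_rubbed = display + ambient
    #   rubbed     = display + 0.5 * ambient
    ambient = 2.0 * (non_rubbed - rubbed)
    display = non_rubbed - ambient
    return display, ambient

# Hypothetical calibrated readings from two neighbouring sensors.
display, ambient = separate_display_and_ambient(non_rubbed=120.0, rubbed=95.0)
# display = 70.0, ambient = 50.0 for these example values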
2.6.6.3 Electronic filter
In yet another embodiment, the contribution of the ambient light to the output signal of the sensor is measured and isolated by using an alternative type of filter, e.g. an electronic filter such as a filter based on a temporal modulation of the backlight. This modulation can be either a high temporal frequency modulation or a low frequency modulation.
In the case of a high temporal frequency modulation, one can think of display devices which are Pulse Width Modulation (PWM) driven, by driving the backlight of the display devices in a blinking mode, e.g. by switching the backlight on and off in short pulses over time. When measuring the light properties during the on and off states of the blinking backlight of the display device, the light originating from only the display device, without the ambient light, can be derived. As a result this temporal variation of the display light can be used to discriminate between display light and ambient light.
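As an illustration of this principle, the sketch below separates the two contributions from samples taken synchronously with the blinking backlight (Python-style pseudocode; it assumes a calibrated, linearized sensor, an ambient level that is approximately constant over the modulation period, and input names that are hypothetical):

    def split_by_backlight_modulation(samples, backlight_on_mask):
        # samples: calibrated sensor readings
        # backlight_on_mask: booleans, True while the backlight pulse is on
        on  = [s for s, m in zip(samples, backlight_on_mask) if m]
        off = [s for s, m in zip(samples, backlight_on_mask) if not m]
        ambient = sum(off) / len(off)           # backlight off: ambient light only
        display = sum(on) / len(on) - ambient   # on minus off: display light only
        return display, ambient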
Instead of doing a high-frequency temporal modulation, the temporal modulation can be done with a much lower temporal frequency. In the case of using a display with a backlight, the calibration typically involves switching the backlight on and off to determine potential ambient light influences that might be measured during normal use of the display, for a display area and suitably one or more surrounding display areas. The difference between these measured values corresponds to the influence of the ambient light. In case of using a display without backlight, the calibration typically involves switching the display off, within a display area and suitably surrounding display areas. The calibration is for instance carried out for a first time upon start up of the display. Moments for such calibration during real-time use which do not disturb a viewer include for instance short transition periods between a first block and a second block of images. In case of consumer displays, such a transition period is for instance an announcement of a new and regular program, such as the daily news. In case of professional displays, such as displays for medical use, such transition periods are for instance periods between reviewing a first medical image (X-ray, MRI and the like) and a second medical image. The controller will know or may determine such a transition period.
2.6.6.4 Mechanical filter
In a further embodiment, a mechanical filter can be used that filters out the ambient light. For example, the partially transparent sensor can be one that possesses touch functionality, for instance technology allowing a touch screen. When such a sensor is touched with a finger, all the external light is blocked by a shadowing effect, and thus all the ambient light is blocked locally when touching the region of interest. The display can be designed with the required intelligence, such that it is aware of the touch and of which sensor is touched. The display device can then measure the light properties in a touched state where all or a significant amount of the external light is blocked. The measurement is then repeated in an untouched state. The derived difference between the two measurements provides the amount of ambient light.
Moreover the finger touching the sensor can have a reflection as well, which can influence the amount of light sensed by the sensor. As a result, the display device can be calibrated by first carrying out a test to determine the influence of the reflection of a finger on the amount of measured light coming from the display without ambient light.
Moreover, a black absorbing cover can be used as a light filter for calibration and to isolate or attenuate the ambient light contribution. For this method two luminance measurements are performed. The first measurement(s) include measuring the full light contribution, which is the emitted light from the display and the ambient light conditions. During a second measurement, the sensor measures the emitted display light with all ambient light excluded. The latter step can be accomplished by using a black absorbing cover over the display, for example. Both these measurements are needed when one desires to quantify the ambient light. However, if one merely wants to measure the light emitted by the display, without the ambient light contribution, it is sufficient to cover the display to exclude ambient light influences and then measure.
2.6.6.5 Alignment
Furthermore, the sensor system of the present invention can be designed as a separate device that can be fixed in front of a display during manufacturing. Said display sensor system can be considered a "clip-on" sensor system, like a front glass, which can be appropriately designed to make electrical contact when connected to the rest of the display, for instance by wires that conduct the measured electronic signals to and from the controller inside the display. This concept of a "clip-on" sensor system is a distinct advantage over prior art solutions, as it aids in lowering the cost of the total sensor system, for several reasons. Firstly, it does not require a costly redesign of the display's panel, which is required in prior art solutions such as in US 2007/0052874 Al where a sensor, integrated into a pixel, is used. Secondly, it results in a simple production flow, which is also a contributor in obtaining a lower cost.
However, when mounting the clip-on sensor onto the display, the individual sub-sensors of the clip-on sensor can end up at a slightly different position on the display's active area. In such case there is a priori no guaranteed positional relationship between the location of the photoconductive sensors and the display's active area. It is essential to know the position of the sensors relative to the active area of the screen. Therefore, an alignment procedure can be used to link the position of the sensor to a certain location or zone on the display's active area. The alignment algorithm consists of appropriately driving the display and using the clip-on sensor to measure the required property of the emitted light. Afterwards, the results are processed to triangulate the sensors' location. For example a square (black and/or white) can be shown at a maximum driving level, on a background of a minimum driving level. When the square is translated over the screen, it will be detected by a single sensor, or by multiple sensors at a certain position, or in a certain range of potential positions on the screen, depending on the size of the square. Depending on the size of the square, the detection can occur faster but less accurately or slower but more accurately. Therefore, an optimization can be done by using for instance the following algorithm: starting with a large square to roughly determine the position, and in a second iteration, the size of the square is decreased and simultaneously the translation area is restricted to the roughly determined position from the first iteration. This specific alignment algorithm is not considered a limitation of the present invention; other alignment algorithms can also be used to determine this positional relationship.
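The coarse-to-fine variant of this algorithm can be sketched as follows (illustrative Python-style pseudocode; show_square and read_sensor are hypothetical placeholders for driving the display and reading one photoconductive sensor, and the square sizes are arbitrary example values):

    def locate_sensor(show_square, read_sensor, screen_w, screen_h,
                      coarse=200, fine=20):
        # show_square(x, y, size): bright square of given size at (x, y) on a dark background
        # read_sensor(): calibrated response of the photoconductive sensor under test
        def scan(x0, y0, x1, y1, size):
            best_val, best_xy = float('-inf'), (x0, y0)
            for y in range(y0, y1, size):
                for x in range(x0, x1, size):
                    show_square(x, y, size)
                    val = read_sensor()
                    if val > best_val:
                        best_val, best_xy = val, (x, y)
            return best_xy
        # first iteration: large square over the full active area (rough position)
        cx, cy = scan(0, 0, screen_w, screen_h, coarse)
        # second iteration: small square, restricted to the roughly determined region
        return scan(cx, cy, min(cx + coarse, screen_w), min(cy + coarse, screen_h), fine)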
2.7 Applications
2.7.1 Introduction
Due to the sensor system's ability to measure properties of the light emitted by the entire display (or a substantial representative region thereof), as well as properties of the ambient light impinging on the sensor, the display's performance can be improved, or its functionalities can be expanded, when the sensor system is suitably combined with a controller comprising both hardware and software components, which can interact with the sensors and control the display's electronic driving.
As mentioned before, the sensor as described in the preferred embodiments is not an ideal sensor. Therefore, the calibration previously described is required to perform accurate measurements using the device.
2.7.2 Applications/performance improvements based on display light output measurements
Due to the technology used in the sensor system of the present invention, the sensor system can be designed such that an individual signal can be measured for every display area, and the obtained value is representative of a property (or multiple properties) of the light emitted by (a part of) the corresponding display area under test. More specifically, the sensor system is sensitive to light in the areas corresponding to the electrodes. It is a major advantage of the present invention over prior art solutions that the sensor system offers the ability to create an optimized design depending on the intended application. More specifically, the lay-out of the photoconductive sensors, i.e. the way they are dispersed over the display's active area, as well as the photosensitive area of the sensors, can be suitably designed for the intended application, specifically for the display it is intended to be used with. This is a consequence of the sensor design architecture, more specifically the ITO which can be patterned to virtually any pattern that optimally suits the application, and the fact that the ITO patterned substrate is uniformly covered with the organic layers. This will only have a limited additional cost, potentially resulting from the increased cost of the ablation procedure, which can become more time-consuming or complex. In case more sensors are required, the controller might also become more expensive.
This is an advantage over existing prior art, in which typically either a high-resolution measurement is used (for instance a high-resolution camera measurement or a pixel-integrated measurement system), or a measurement is used that covers only a limited area of the screen.
This will however require a design with a significant amount of semitransparent conductive tracks such as ITO tracks, as the two electrodes with finger shaped extensions and the ITO tracks reside in the same plane. To limit the number of semitransparent conductive tracks such as ITO tracks, one of the electrodes can always be connected to a central connector, shared with several sensors that are addressed sequentially. The other electrodes are designed to converge to the different connections of a multiplexer, allowing switching between the different sensors. This will allow the sensing area to be as large as possible, with a minimal amount of potential sensing area lost to the semitransparent conductive tracks such as ITO tracks.
Also, by performing a suitable patterning of the ITO, the size of the light sensitive area can be designed for the intended application.
2.7.2.1 Luminance/color uniformity checks
In one application of the sensor system of the present invention, at least two sensors can be used over at least two areas of the display, while displaying an image that is intended to result in a uniform light output (e.g. all digital driving levels are made equal in the case no precorrection table is applied to the display's driving). Typically, for luminance uniformity checks and corrections, the measurements are made on white patterns, for instance with equal driving of the red, green and blue sub pixels when using a color display.
Simple luminance checks can be performed by measuring at different positions, depending on the critical points or most representative areas of the display design. The specifications regarding luminance uniformity can be derived from established standards/recommendations, e.g. created by dedicated committees and expert groups. An example of a standard created by TG18 can be the following: luminance is measured at five locations over the faceplate of the display device (centre and four corners) using a calibrated luminance meter. If a telescopic luminance meter is used, it may need to be supplemented with a cone or baffle. For display devices with non-Lambertian light distribution, such as an LCD, if the measurements are made with a near-range luminance meter, the meter should have a narrow aperture angle, otherwise certain correction factors should be applied (Blume H, Ho AMK, Stevens F, Steven PM. (2001). "Practical aspects of gray-scale calibration of display systems." Proc SPIE 4323:28-41.).
Using this standard as a guideline, a sensor design lay-out can be implemented, and a suitable metric needs to be selected to assess the uniformity.
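One possible metric, sketched below in illustrative Python-style pseudocode, is the relative spread between the highest and lowest of the five readings; the 30% threshold is shown only as an example parameter and should be taken from the applicable standard or recommendation:

    def check_luminance_uniformity(read_luminance, positions, max_nonuniformity_pct=30.0):
        # read_luminance(pos): calibrated luminance reading of the sensor at 'pos'
        # positions: e.g. the centre and the four corners of the faceplate
        values = [read_luminance(p) for p in positions]
        l_max, l_min = max(values), min(values)
        nonuniformity = 200.0 * (l_max - l_min) / (l_max + l_min)   # percent
        return nonuniformity, nonuniformity <= max_nonuniformity_pct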
2.7.2.2 Luminance/color uniformity corrections
Aside from a mere detection of the non-uniformity of the luminance or chromaticity emitted by the display, luminance and chromaticity non-uniformities can be corrected. In the following, the main focus is put on luminance uniformity corrections, but it is clear for anyone skilled in the art that this can be extended to chromaticity uniformity corrections, for instance by altering the relative driving of the red, green and blue channels of a color display, and applying luminance uniformity corrections afterwards while maintaining the relative driving of the red, green and blue channels, in case the display has a linear luminance versus driving level curve, or alternatively adapting the ratio according to the actual luminance vs. driving level curve. This might require several iterations to obtain a satisfactory result.
Typical prior art luminance uniformity correction algorithms, such as known from EP 1 424 672 Al, use an external sensor to measure the luminance non-uniformity during production and, based on the measured results, apply a precorrection table to the driving levels of the display that ensures images are correctly displayed. This correction can be applied either on an individual pixel basis or by using a correction per zone. The drawback of this solution, however, is that it is not integrated into the display, and due to the nature of this technique, it is typically only performed once, during the display's production process, while the sensor system of the present invention allows updating the luminance uniformity correction during the display's lifetime, and hence it allows improving the display's performance while the display remains in the field.
Another aspect of the present invention is to use the sensor system of the present invention to capture a low resolution luminance map of the light emitted by the display when all the pixels are put to an equal driving level. This low resolution luminance map can be obtained by using a sensor system according to the present invention with a matrix of photoconductive sensors. Such a low-resolution map is typically desired because this simplifies the sensor lay-out and the complexity and cost of the controller. Obtaining this map allows deriving a new precorrection table in a calibration phase during the display's lifetime. This precorrection table is obtained by appropriately upscaling the low resolution luminance map to a high-resolution luminance map, which matches the display resolution, and determining how to adapt the display's driving in order to obtain a uniform display output image. The reason such a limited number of sensors is usable, is that it is known from measurements that the noise can be distinguished in two distinct patterns: a high frequency noise at the individual pixel level, which is typically Gaussian, and a low frequency noise resulting in the global trend of the curve. The purpose of this embodiment is to compensate for the low-frequency noise, and leave the high-frequency noise unaltered.
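A minimal sketch of this upscaling and gain derivation is given below (Python with numpy/scipy, assuming the sensors form a regular matrix so that their readings can be arranged in a 2-D array; correcting towards the darkest measured region is one possible choice, made here so that no pixel needs to be driven above its maximum):

    import numpy as np
    from scipy.ndimage import zoom

    def precorrection_gains(low_res_map, display_shape):
        # low_res_map: 2-D array of luminance values from the sensor matrix
        # display_shape: (rows, cols) of the display resolution
        zy = display_shape[0] / low_res_map.shape[0]
        zx = display_shape[1] / low_res_map.shape[1]
        high_res = zoom(low_res_map, (zy, zx), order=1)   # bilinear upscaling
        target = high_res.min()              # correct towards the darkest region
        gains = target / high_res            # per-pixel attenuation factors <= 1
        return np.clip(gains, 0.0, 1.0)

The resulting gain map only follows the low-frequency trend of the non-uniformity, so the Gaussian pixel-level noise is left unaltered, as intended in this embodiment.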
Note that the display can have multiple layers that allow local control of the display's emitted light, in some specific embodiments. For instance, in the specific embodiment the display is an LCD that allows local control of the display's emitted light, the driving of the backlight can also be adapted to render the display's light output more uniform, aside from merely altering the driving of the LC layer.
Determining the best solution of the low resolution luminance map depends on several factors, as there are a wide range of design parameters and a lot of flexibility to choose from. For example, only few constraints apply to the positioning of the sensors; the most important being that two sensors cannot overlap, due to the lateral, coplanar design of the electrodes with finger shaped extensions which are in contact with the light sensitive organic layers. Otherwise, sensors can be located at any position on the display. The best solution depends on the exact display type the sensor system is combined with, and the exact sensor architecture.
2.7.2.3 DICOM compliance
The way the human eye responds to contrasts in light levels is not linear. At the darkest levels, small changes in luminance can be perceived better than at the brightest levels. The behavior of the human eye at varying shades of gray has been measured, resulting in the DICOM curve. When appropriately using this curve in the display's driving, the display can be made to behave perceptually linear. For a proper DICOM compliance, it is necessary to take measurements from multiple parts of the screen [National Electrical Manufacturers Association, Digital Imaging and Communications in Medicine (DICOM), Supplement 28: Grayscale Standard Display Function, technical report, 1998]. Another use case in which the sensor system of the present invention can be used to ameliorate a display's performance is assessing the display's DICOM compliance at multiple locations over its active area, and altering the display's driving to render it compliant if needed.
2.7.2.4 DICOM recalibration
Instead of only determining if the display is still compliant to the DICOM standards, the entire DICOM calibration of the display can be performed. In practice, this can be done by altering the LUT that is applied on the incoming image to obtain a DICOM calibrated image. To obtain this LUT, the native behaviour of the display should be measured using the sensor system's photoconductive sensors (without the initial DICOM calibration); the resulting values can then be used in combination with the ideal DICOM curve to obtain the required LUT. As the measurements can only be made at the location of the photoconductive sensors, an interpolation/approximation step needs to be done to obtain the proper DICOM calibration at intermediate locations as well.
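The LUT construction can be sketched as follows (illustrative Python code; gsdf_luminance is an assumed helper implementing the Grayscale Standard Display Function of the DICOM standard, and the JND index range used for its numerical inversion is an assumption of this sketch):

    import numpy as np

    def dicom_lut(native_luminance, gsdf_luminance, n_dd_levels=256):
        # native_luminance: measured luminance per native driving level
        #                   (interpolated from the photoconductive sensor readings)
        # gsdf_luminance(j): luminance of JND index j on the DICOM standard curve
        native_luminance = np.asarray(native_luminance, dtype=float)
        l_min, l_max = native_luminance.min(), native_luminance.max()
        # invert the (monotonic) GSDF numerically to find the JND range of the display
        j_axis = np.arange(1, 1024)
        l_axis = np.array([gsdf_luminance(j) for j in j_axis])
        j_min = np.interp(l_min, l_axis, j_axis)
        j_max = np.interp(l_max, l_axis, j_axis)
        lut = np.empty(n_dd_levels, dtype=int)
        for level in range(n_dd_levels):
            # target luminance: equal JND spacing between j_min and j_max
            j = j_min + (j_max - j_min) * level / (n_dd_levels - 1)
            target = np.interp(j, j_axis, l_axis)
            # native driving level whose measured luminance is closest to the target
            lut[level] = int(np.abs(native_luminance - target).argmin())
        return lut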
2.7.3 Applications/performance improvements based on ambient light measurements
Aside from improving the display's performance and expanding its functionalities based on measurements of light emitted by the display, measurements of the ambient light can also be used in various embodiments.
2.7.3.1 Local ambient light measurements
For instance, using only a single ambient light sensor, the local variation of reflection on the screen cannot be detected. Therefore, the ambient light could be considered acceptable, while local peaks can be very disturbing for a user.
Contrary to the current device in use which measures at a single location, the sensor system of the present invention is able to consider the impact of the ambient light on the entire area of the screen, allowing it to overcome the limitation of the existing measurement methodologies. For example, the sensor system of the present invention can detect whether there is a specular light source being reflected on the display surface, or when a window curtain is opened resulting in a sharp specular reflection.
In the case of professional displays such as medical grade displays, aside from local ambient light variations over the area of the screen, the white and black levels can also exhibit non-uniformities. Therefore, it is valuable to measure the luminance ratio locally by using each individual sensor, by measuring the ambient light as well as Lblack and Lwhite. The luminance ratio can then be calculated, and should be compliant with the following formula (a typical requirement for instance in radiology):
(Lwhite + Lambient) / (Lblack + Lambient) > 250
If the display is not compliant with the luminance ratio, action can be taken, for instance by signalling the user that the ambient light is excessive. The user can then for instance lower the ambient light, such that the luminance ratio is again compliant with the formula above.
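For clarity, the compliance test for one sensor location can be written as follows (minimal sketch; the threshold of 250 follows the formula above and is passed as a parameter):

    def luminance_ratio_ok(l_white, l_black, l_ambient, minimum=250.0):
        # implements (Lwhite + Lambient) / (Lblack + Lambient) > minimum
        ratio = (l_white + l_ambient) / (l_black + l_ambient)
        return ratio, ratio > minimum

The test is evaluated per sensor; if any location fails, the user can be signalled as described above.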
2.7.3.2 DICOM compliance including ambient light
However, in some market segments such as surgical displays, altering the ambient light is not an option. Therefore, it is necessary to use alternative solutions to reach the compliance.
As specified by the TG18 document issued by the American Association of Physicists in Medicine, one needs to do a recalibration of the display driving levels in the presence of ambient light so that the display is DICOM compliant. With the present invention, ambient light can be measured at multiple positions of the display, and interpolated to obtain the ambient light at every location on the display's active area. Knowing the ambient light illuminance and the reflection properties of the display with the proposed semitransparent sensor system, one can calculate Lambient. In a preferred embodiment the display can be recalibrated according to the DICOM standard including ambient light by using the Barten model for the human visual system as recommended in the TG18 document. Such a DICOM compliance can be achieved at multiple locations of the display at which ambient light is measured with the proposed sensor system.
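One way to carry out the interpolation step is sketched below (Python with scipy, assuming the sensor positions are known from the alignment procedure described earlier; the fallback to nearest-neighbour values outside the convex hull of the sensors is a choice of this sketch, not a requirement of the invention):

    import numpy as np
    from scipy.interpolate import griddata

    def ambient_light_map(sensor_xy, ambient_values, display_shape):
        # sensor_xy: (N, 2) array of sensor positions in pixel coordinates (x, y)
        # ambient_values: N ambient light readings isolated as described above
        rows, cols = display_shape
        gy, gx = np.mgrid[0:rows, 0:cols]
        grid = griddata(sensor_xy, ambient_values, (gx, gy), method='linear')
        # outside the convex hull of the sensors, use nearest-neighbour values
        nearest = griddata(sensor_xy, ambient_values, (gx, gy), method='nearest')
        grid[np.isnan(grid)] = nearest[np.isnan(grid)]
        return grid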
2.7.3.3 Backlight adaptation
In another embodiment, after measuring the ambient light with the proposed sensor system, a possible solution is to adapt the backlight such that the display is compliant once again with the particular display application requirements. Since the luminance ratio can have local variations, and the backlight is typically a CCFL backlight which does not allow local adaptations of the emitted light, the adaptation should be made based on the minimum measured luminance ratio.
2.7.3.4 Touch sensor functionality
The sensor technology of the present invention allows obtaining a touch screen. The underlying principle is the following: when one of the sensors is touched, the external light is blocked and the measured light is a combination of the light transmitted through the sensor and the light reflected on the user's finger. A touch screen could be an added value for many applications, and is not restricted to medical displays. One such application is a new type of GOSD with context sensitive buttons that are used directly on the active area of the screen instead of on the bezel. When the buttons are not required, the area can simply be used again as part of the display.
Alternatively, the substrate on which the sensors are dispersed can be made larger than the display's active area, which allows additional space, outside the display's active area, for touch sensors. These touch sensors can have a fixed light source underneath them, and the altered light output upon touching them can be registered. Furthermore, a light sensor intended to measure ambient light can be made similarly outside the display's active area, but on the same substrate, to detect if the measured change is due to changing ambient light conditions, or due to an actual finger touching the sensor.
3 Brief Description of the Drawings
Fig. 1a is a front view of a display with an integrated sensor system according to the present invention.
Fig. 1b shows a high-level representation of the sensor system 9 of the present invention.
Fig. 1c illustrates the global structure of the electronics board.
Fig. 2 shows the general architecture of the semitransparent photoconductive sensors comprised in the sensor system of the present invention.
Fig. 3 illustrates a side view of the organic photoconductive sensor according to the present invention.
Fig. 4 depicts the spectral transmission of ITO with different thicknesses in the visible spectrum.
Fig. 5 schematically illustrates a floating electrode solution to improve the sensor's visibility.
Fig. 6 illustrates how a floating conductive material can be used inside the finger patterns, to improve visibility.
Fig. 7 illustrates an increased finger width to finger gap ratio to help reduce the visibility of the sensor.
Fig. 8a schematically illustrates an electrode comprising a pattern whereby the pattern comprises semi-random fingers according to embodiments of the invention. Fig. 8b illustrates an alternative embodiment where the finger pattern is shaped like Euclidean spirals.
Fig. 9 presents a measured transmission spectrum of a 20.7 nm PTCBI layer. Fig. 10 shows the transmission of a sensor with standard encapsulation and a sensor with glue encapsulation with improved ITO.
Fig. 11a presents an example of an applied voltage signal over time on the sensor.
Fig. 11b depicts a possible current flowing through the sensor as a consequence of impinging light when applying the voltage signal of Fig. 11a.
Fig. 12 illustrates the IV curves of 4 different HTL configurations using the same organic materials.
Fig. 13a presents a picture with a line-shaped laser beam pointed in the gap between two interdigitated sensor fingers.
Fig. 13b presents the measured current as a function of the position of the laser line with respect to the anode.
Fig. 14 shows an example of a long-term experiment, in which the measured current is tracked over time.
Fig. 15 shows an embodiment of the invention, using an optical filter to measure the ambient light.
Fig. 16a illustrates a possible method to roughly determine the position of the sensor relative to the display's active area using a square that scans over the active area of the screen.
Fig. 16b illustrates another possible method to determine the position of the sensor relative to the display's active area. This methodology uses images which have a bright half and a dark half, to pinpoint in which quadrant the sensor is located.
Fig. 16c illustrates a smaller version of the same images depicted in Fig. 16b, that allow obtaining the correct sub quadrant of the initial quadrant.
Fig. 16d illustrates an algorithm to determine the position of the sensor relative to the display's active area, using a combination of the moving square and half bright/half dark images.
Fig. 17 shows a high-resolution luminance map as emitted by the display.
Fig. 18a presents a cross-section of a profile measured using a high-resolution camera on a relatively uniform display. Fig. 18b presents an example of positions of the photoconductive sensors according to an embodiment of this invention.
Fig. 18c presents a cross-section of the emitted light, after the uniformity correction is applied.
Fig. 19 illustrates the rescale process for a cross-section, used to go from the interpolated, non-uniform light output before applying a uniformity correction algorithm, to the uniform, corrected data.
Fig. 20 shows a local map of the error for digital driving level 496 (10 bit driving levels, ranging from 0 to 1023) when the sensors are located on a 6 by 6 uniform grid. In Fig. 20a, the sensors are dispersed in a regular grid, while in Fig. 20b, the sensors are dispersed in an alternative grid, with a denser sensor concentration at the borders.
In Fig. 21, the obtained results in the case where the sensor is not an ideal luminance sensor, and has an equal response independent of the angle at which the ray impinges on the photosensitive area, are presented for a uniform grid (left columns), a non-uniform grid (right columns), for a broad range of sensor resolutions (horizontal axis on the plots), combined with a broad range of sensor sizes (indicated above, 5mm, 10 mm, 15 mm, 20 mm, 25 mm or expressed in number of pixels: 30, 60, 90, 120 and 151 pixels). The global percentual relative absolute error is presented on the vertical axis. In Fig. 21a, the results for the darkest level are presented, while in Fig. 21b, the results for the brightest level are presented (DDL 255 for 8 bit display driving).
4 Description of the illustrative embodiments
4.1 Introduction to the terminology
The present invention will be described with respect to particular embodiments and with reference to certain drawings but the invention is not limited thereto but only by the claims. The drawings described are only schematic and are non-limiting. In the drawings, the size of some of the elements may be exaggerated and not drawn to scale for illustrative purposes.
Furthermore, the terms first, second, third and the like in the description and in the claims, are used for distinguishing between similar elements and not necessarily for describing a sequential or chronological order. It is to be understood that the terms so used are interchangeable under appropriate circumstances and that the embodiments of the invention described herein are capable of operation in other sequences than described or illustrated herein.
Moreover, the terms top, bottom, over, under and the like in the description and the claims are used for descriptive purposes and not necessarily for describing relative positions. It is to be understood that the terms so used are interchangeable under appropriate circumstances and that the embodiments of the invention described herein are capable of operation in other orientations than described or illustrated herein.
It is to be noticed that the term "comprising", used in the claims, should not be interpreted as being restricted to the means listed thereafter; it does not exclude other elements or steps. Thus, the scope of the expression "a device comprising means A and B" should not be limited to devices consisting only of components A and B. It means that with respect to the present invention, the only relevant components of the device are A and B.
Similarly, it is to be noticed that the term "coupled", also used in the claims, should not be interpreted as being restricted to direct connections only. Thus, the scope of the expression "a device A coupled to a device B" should not be limited to devices or systems wherein an output of device A is directly connected to an input of device B. It means that there exists a path between an output of A and an input of B which may be a path including other devices or means.
It is furthermore observed that the term "at least partially transparent" or "semitransparent" as used throughout the present application refers to an object that may be partially transparent for all wavelengths, fully transparent for all wavelengths, fully transparent for a range of wavelengths and partially transparent for the rest of the wavelengths. Typically, it refers to optical transparency, e.g. transparency for visible light. Partially transparent is herein understood as the property that the intensity of an image shown through the partially transparent member is reduced due to the said partially transparent member, or its color is altered. Partially transparent or semitransparent refers particularly to a reduction of light luminance of at most 40%, more preferably at most 25%, more preferably at most 10%, or even at most 2%. Typically the sensor design is created so as to be substantially transparent, i.e. with a reduction of impinging light intensity of at most 20% for every visible wavelength, which also limits its coloring.
Moreover, the term 'display' is used herein for reference to the functional display. In case of a liquid crystal display, as an example, this is the layer stack provided with active matrix or passive matrix addressing. The functional display is subdivided in display areas. An image may be displayed in one or more of the display areas. The term 'display device' is used herein to refer to the complete apparatus. Suitably, the display device further comprises a controller, driving system and any other electronic circuitry needed for appropriate operation of the display device.
4.2 Display summary
Fig. 1a shows a display device 1, for instance a liquid crystal display device (LCD device) 2. Alternatively the display device can be a plasma display device, an OLED display device, or any other kind of display device emitting light compatible with the specificities described hereafter. The active area 3 of the display device 1 is divided into a number of groups 4 of display areas 5, wherein each display area 5 comprises a plurality of pixels. The display device 1 of this example comprises eight groups 4 of display areas 5; each group 4 comprises in this example ten display areas 5. Each of the display areas 5 is adapted for emitting light with a certain angular emission pattern, to display an image to a viewer in front of the display device 1.
4.3 Sensor system summary
4.3.1 Photoconductive sensors
Fig. 1a further shows the sensor system's substrate with photoconductive sensors 6 comprising semitransparent electrodes & conductors and organic layers. Light is absorbed on top of (part of) the display area 5, at the location of the electrodes with finger-shaped extensions 13, and thereby alters the conduction properties of the photoconductive material (for instance, a voltage is put over its electrodes, and an impinging-light dependent current consequentially flows through the sensor, which is measured using the controller). As depicted in the figure, one sensor is foreseen per display area. The electric current is guided towards the edge, using two semitransparent electrical conductors 10 and 11. The electrical conductors are contacted outside the display's active area by an array of contacts 7 which are connected to the controller, comprising e.g. eight groups 8 of contacts.
4.3.2 Controller
4.3.2.1 Overview controller
Fig. 1b shows a high-level representation of the sensor system 9 of the present invention. The sensor system comprises the photoconductive sensors 6, which react on the impinging light. These photoconductive sensors are connected to the electronics board 12, which basically does all the low-level interaction with the photoconductive sensors, such as applying a dedicated sensor driving signal and retrieving the consequent measurement signal. The electronics board also interacts with, and is controlled by instructions from the software 13. The electronics board 12 combined with the software 13 together form the controller 14 of the present invention. The software 13 comprises various instructions, ranging from low-level instructions to do basic interactions with the electronics board to complex algorithms such as luminance and chromaticity improvement algorithms. Therefore, the software is also able to interact with the display 1, in order to change for instance the driving of its pixels in order to improve its uniformity.
4.3.2.2 Electronics board
The global structure of the electronics board 12 is presented in Fig. 1c. The electronics board has an integrated board controller 15, which can be used to interact with several electronic components on the board. The signal source 16 is controlled by the board controller, which amongst others makes sure a signal with the desired amplitude and frequency is applied to the sensors 6. The resulting signal gets amplified in a first preamplification stage 17, after which it reaches the multiplexer 18. Several sensors are connected to the multiplexer 18, so the board controller 15 should select the desired sensor. Also, the electronics board can contain a plurality of multiplexers, which are controlled by the board controller. It is obvious for anyone skilled in the art that in this embodiment, a plurality of PGAs, filters and ADCs are also required. The selected signal is then amplified in a next stage by a programmable gain amplifier 19, which has an adjustable gain factor that can be adjusted for the specific sensor, as the signal generated by the sensor can vary depending for instance on the ITO track length, which can differ for different sensors at different locations on the substrate. Afterwards, the obtained signal is filtered by the filter 20, and converted from an analogue to a digital signal by the ADC 21. Finally, the signal is returned to the board controller 15.
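The resulting sequential read-out can be summarised by the following illustrative pseudocode (the board object and its method names are hypothetical placeholders for the low-level interaction described above; the per-sensor gains compensate for instance for different ITO track lengths):

    def read_all_sensors(board, sensor_ids, gain_table):
        # board: hypothetical driver object for the electronics board
        # gain_table: per-sensor PGA gain factor
        readings = {}
        for sid in sensor_ids:
            board.select_multiplexer_channel(sid)    # route the desired sensor
            board.set_pga_gain(gain_table[sid])      # adjust the programmable gain
            raw = board.read_adc()                   # filtered, digitized sample
            readings[sid] = raw / gain_table[sid]    # undo the gain for comparable values
        return readings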
4.4 Photoconductive sensor elaboration
4.4.1 Basic technology
Fig. 2 shows the general architecture of the semitransparent photoconductive sensors comprised in the sensor system of the present invention.
4.4.2 Device architecture
4.4.2.1 General description
Each photoconductive sensor 22 comprises two semitransparent electrodes on a substrate 23. Said two electrodes have finger-shaped interdigitated extensions 24, i.e. they are positioned such that each finger of one electrode is surrounded by two fingers of the other electrode, separated by a spatial gap to avoid electrical shorting. For each sensor, there are two outer fingers 25, which only have a single adjacent finger.
The organic stack 26, which is photoconductive, is put on top of the electrodes; its conductivity alters depending on the impinging light. The electrodes are connected to the controller, positioned outside the display's active area, via a semitransparent conductive track 27.
4.4.2.2 Substrate
In Fig. 3, a side view of the organic photoconductive sensor according to the present invention is presented. The glass substrate 23 is typically made of glass materials such as Corning Eagle XG glass or polished soda-lime glass with a SiO2 passivation layer. The thickness of this glass substrate 28 is selected to minimize the absorption of the substrate, and is typically in the range of 0.5 to 5 mm. The width and height of the glass substrate are typically chosen larger than the display's active area, in order to assure that the substrate can be contacted outside the visible area of the screen, and in order to have space for the components of the encapsulation technique if needed.
4.4.2.3 Semitransparent conductor
The semitransparent electrodes 24 depicted in Fig. 3 are typically made of Indium Tin Oxide (ITO), which is used in the preferred embodiment. ITO exists in many flavors; some examples are illustrated in Fig. 4, which illustrates the spectral transmission of ITO with different thicknesses in the visible spectrum, obtained from the company Colorado concept coatings. Depending on its thickness, the sheet resistance of the ITO material also alters. A typical range of ITO thicknesses for a suitable device is 25-65 nm, as described later on. For instance, the 45 nm ITO has a sheet resistance of about 60 Ω/sq, while the 25 nm ITO has a sheet resistance of about 125 Ω/sq. It is clear from this figure that ITO with different parameters will result in different chromaticities and transmitted luminances. The exact specifications of the ITO used in the architecture should be carefully selected, as they impact the final performance of the sensor. Also, manufacturers typically provide tolerance limits on the thickness of the ITO substrates, which should be suitably controlled in order to obtain a sensor with a uniform color shift.
4.4.2.4 Patterning
The sensors' finger shaped extensions 24 are created on the ITO coated substrate by means of laser ablation in the preferred embodiment. A suitable fine-tuned laser ablation process is able to laser ablate patterns into large substrates, which are usable for all commonly used medical display sizes. These parameters are fine-tuned with a delicate balance in mind: on the one hand, the laser removal effect should be intense enough such that the ITO is sufficiently removed, to prevent any possible short-circuits in areas that are supposed to be removed, and on the other hand, the laser removal effect should not be too intense, to avoid potential damaging of the glass substrate, as this can result in undesired light scattering effects in the glass. After ablation, a patterned ITO is obtained with gaps 29 that separate individual fingers, and remaining fingers 24 with a suitable width 30. The size of the gap 31 is limited by the resolution which can be obtained by the laser ablation process, which is typically around 10 μm. The width of the fingers is not a technological limitation, as the laser ablation process typically starts from a substrate with a uniform ITO coating.
The number of fingers is not considered a limitation of the present invention, but the technological limit is one finger per electrode, as this is required in order to obtain a working device. The number of fingers may for instance be anything between 2 and 5000, more preferably between 10 and 2500, suitably between 25 and 700. The surface area of a single semitransparent sensor may be in the order of square micrometers but is preferably in the order of square millimeters, for instance between 10 and 7000 square millimeters. One suitable finger shape is for instance 12000 by 170 micrometers. The gap in between the fingers can for instance be 15 micrometers in one suitable implementation.
4.4.2.5 Organic layers
The organic photosensitive layer stack 26 consists of several organic layers, which are added sequentially on the ITO patterned substrate. In the preferred embodiment, the device has a three layer stack, consisting of a first Hole Transport Layer (HTL) 32, added on the ITO patterned substrate, onto which an Exciton Generation Layer (EGL) 33 is added, onto which a final organic layer, a second HTL 34, is added. These organic layers can be added to the patterned substrate for instance by using vacuum (thermal) evaporation. Impinging visible light generates excitons in the EGL 33, which diffuse towards the interface of the EGL 33 and the HTLs 32 and 34, where they are split into electrons and holes due to their energy band diagrams. Holes end up in the HTLs, while electrons remain in the EGL. The holes are then transported to the electrodes in the HTLs.
4.4.2.6 Encapsulation
In order to protect the photoconductive sensors from potential contamination, a suitable encapsulation technique is used. This encapsulation technique generally uses a material 35 on top of the organic layers, and a cover glass 36. The exact configuration and geometry of the material 35 and the cover glass 36 will depend on the specific encapsulation technology. For instance, in the conventional encapsulation methodology, material 35 is an inert nitrogen (N2) gas. Alternatively, a UV-curable glue can be used, or a sputtered AlOx layer with a UV-curable glue on top.
4.5 Optimized design of the sensor system
The proposed sensor architecture in the present invention contains a lot of design parameters, which can all be optimally selected from a broad span of possibilities. Consequentially, the design freedom can be put to optimal use in order to obtain the most suitable device for the intended application. The most suitable device is designed to have the least possible reduction in image quality and a stable signal readout, and it should fit with the sensor requirements needed for improving the display's performance and expanding its functionalities. In the following sections, a suitable selection of all design parameters is elaborated, according to the desired sensor requirements; the combination of these preferred parameters forms the preferred sensor design.
4.5.1 Visibility
4.5.1.1 Introduction
The first important aspect of the optimized sensor design is the visibility of the sensor, as described earlier in this invention.
4.5.1.2 Substrate design
In order to avoid a significant reduction in luminance, a thin glass substrate with a high transmission is carefully chosen. In the preferred embodiment, the previously mentioned glass materials with a thickness 28 of 0.7 or 1.1 mm are suitably chosen.
4.5.1.3 Semitransparent electrodes design
The ITO pattern can be optimized to reduce the sensor's visibility. Effects that occur due to the ITO patterning include: high-frequency artifacts, for instance due to the high-frequency spatial finger patterns or the separation of the floating ITO parts from the remainder of the ITO; interference effects due to the ITO finger pattern on top of the display with a matrix of pixels; local coloring due to the regions on the substrate with and without ITO material; and diffractive effects due to the geometry of the interdigitated electrode extensions. When a diffraction grating is illuminated by white light, in reflection one can see dispersion of light, which comes from the fact that different wavelengths are diffracted at different angles.
4.5.1.3.1 Dummy fingers
Fig. 5 schematically illustrates a floating electrode solution to improve the sensor's visibility, more specifically to avoid the local coloring due to regions with and without ITO. Two ITO patterns, which can be made on a glass substrate, are presented: a first one without a floating ITO pattern (Fig. 5a), and a second one with a floating ITO pattern (Fig. 5b). The pattern presented in Fig. 5a includes three interdigitated electrode pairs with finger-shaped extensions 37 which compose the photosensitive area, which are connected to ITO tracks 38 that allow conducting the sensed electric signal towards the controller, which is contacted outside the display's visible area. The width of the ITO track 39, and the distance between two consecutive ITO tracks 40, is also shown. In Fig. 5b, a floating electrical conductor system 41 is applied in all the regions where no ITO was present (as illustrated in Fig. 5a), outside the finger patterns of the electrode, at locations 42 according to an embodiment of the invention, to improve visibility. The separation between the dummy parts and the ITO patterns that carry a useful electric signal can for instance be in the range of 1 to 100 μm, more suitably in the range of 4 to 25 μm, even more suitably between 8 and 20 μm. For instance, a separation of 15 μm can be used with the suitable laser ablation parameters.
4.5.1.3.2 Nails
Aside from adding the floating ITO to the regions outside the finger pattern regions, Fig. 6 illustrates how a floating conductive material can be used inside the finger patterns, at the end of the fingers 24, in the form of nails 43 (in this case made of ITO). On the left hand side of Fig. 6, the finger pattern without nails is shown, while on the right hand side the finger pattern with nails is shown. The separations 44 & 45 between the nails 43 and the ITO fingers 24 can be very narrow; for instance a separation of 15 μm can be used with the suitable laser ablation parameters. There is no voltage applied on the nails and they are separated from the electrodes. The nails help the edges of the finger pattern to become invisible to an observer.
4.5.1.3.3 Narrow gap size
The gaps between the fingers are preferably chosen such that the human eye is not able to perceive a high-spatial frequency pattern where the fingers are located, due to the contrast between the regions with and without finger patterns. To select appropriate values, a simulation can be performed, which is later confirmed by human observer tests. This is an optical simulation model built in a ray tracing optical simulation software program. The simulation includes a light source, a pattern according to embodiments of the present invention and an optical model of the human eye, and suitable processing of the obtained simulation results. This human eye model has the appropriate optical imperfections, introduced amongst others by the limited cone density on the retina, the cornea, and the lens in our eye. The human observer tests were performed using a bar finger shaped pattern comprising wide bars (e.g. 4 mm wide) with varying distances in between, to make sure one cannot distinguish the gaps in between the fingers. The distances were varied e.g. from 500 μm down to 5 μm. The minimal distance depends on the specific type of ITO material used for the pattern, the thickness and type of the exciton generation layer and the methodology used to deposit the latter.
4.5.1.3.4 Finger width/gap ratio
Additionally, when using a finger pattern with a suitable gap, an increased finger width to finger gap ratio helps to reduce the visibility of the sensor, as illustrated in Fig. 7. The larger finger/gap ratio is needed in order for the finger pattern not to be noticeable in transmission and reflection when compared to the parts containing floating ITO and organic materials. In Fig. 7, two subfigures 7a and 7b are shown with different finger width/gap ratios. In Fig. 7a finger 46 has a corresponding width 48, and a gap 47 with a width 49. In Fig. 7b, finger 50 has a broader width 52 compared to the width 48 of finger 46. The gap width 53 of gap 51 is maintained at the initial gap width 49.
A suitable ratio between the finger width and the gap between the fingers in transmission can be estimated using simulations. More specifically, when maintaining a specific gap, optimized for the sensor's performance, the finger width has been increased, to reduce the percentage of the area without ITO patterns. The metrics used for the simulations, and to evaluate if there is a visible difference between the finger pattern region and the neighboring floating ITO region, are the number of JNDs between them, to evaluate if there is a difference in brightness, and the ΔE2000 metric, to evaluate if there is a perceived difference in chromaticity. Note that the average value of the tristimulus X, Y and Z values is calculated and used in the region of the finger pattern; in other words, we assume that the gap between the fingers is suitably chosen such that only an average tristimulus X, Y and Z value is perceived. Of course, this also depends on the sensor design: the type of ITO and the thicknesses of the Exciton Generation Layer and Hole Transport Layer, and the encapsulation method. A typical width to gap ratio is in the range of 30 to 1 down to 5 to 1, e.g. 10 to 1.
4.5.1.3.5 Exotic fingers
Fig. 8a schematically illustrates an electrode comprising a pattern whereby the pattern comprises semi-random fingers according to embodiments of the invention. These finger patterns help to reduce the interference effects and the diffraction effects, and are also by nature tough to spot for human observers. In Fig. 8a the semi-random finger pattern is constructed by semi-randomly choosing several points on the two edges of the finger where the fingers should go through. The different points are then connected using a cubic spline interpolation. The adjacent finger is limited in the sense that the gap in between the fingers should remain approximately constant to ensure the device's properties remain unaltered. For instance, the gap is obtained by translating the created edges of the finger patterns, when their spatial frequency is not excessive. The points can be chosen in a semi random way in the sense that they are limited to a specific area to avoid too high frequencies in the fingers.
Fig. 8a illustrates a simulation of such a pattern comprising fingers. Each finger is determined by a set of control points, chosen at random through which for instance a cubic interpolation is run, resulting in a curve shape. To make sure the finger goes in the correct general direction, the control points are all put in an individual limited rectangle. This rectangle limits its position in horizontal and vertical direction. Rectangles for one of the fingers are positioned sequentially next to one another, and the curve is interpolated through the sequence of defined points. The oscillation of the curve can be increased by adding more control points in the interpolation, which boils down to altering the horizontal and vertical dimensions of the rectangle in which the control points are defined. If the fingers are oriented horizontally, like in Fig 8 a, a rectangle with a smaller horizontal dimension and a larger vertical dimension allows bigger oscillations. In this example, the gap between the fingers is 20 microns wide and the sensor corresponds approximately to a 1cm x 1cm square. The width of the fingers is at least 2.5 times the gap between the fingers, and on average the fingers width is 11.5 times the gap. The fingers are depicted in black, while the gaps are depicted in white. By choosing the control points at random, no specific, repetitive pattern should be visible.
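A possible construction of such a semi-random pattern is sketched below (Python with scipy; the finger length, pitch, rectangle dimensions and number of control points are illustrative values only):

    import numpy as np
    from scipy.interpolate import CubicSpline

    def semi_random_finger_edge(length_um, n_points=8, box_w=None, box_h=60.0, seed=None):
        # One finger edge: control points are confined to consecutive rectangles so
        # the curve keeps its general (horizontal) direction, then spline-interpolated.
        rng = np.random.default_rng(seed)
        box_w = box_w or length_um / n_points
        xs = np.array([i * box_w + rng.uniform(0.2, 0.8) * box_w for i in range(n_points)])
        ys = rng.uniform(-box_h / 2.0, box_h / 2.0, size=n_points)
        return CubicSpline(xs, ys)

    def finger_pattern(n_fingers, length_um=10000.0, pitch_um=250.0, seed=0):
        # The gap is kept approximately constant by translating the same edge
        # vertically by the finger pitch for every subsequent finger.
        edge = semi_random_finger_edge(length_um, seed=seed)
        x = np.linspace(0.0, length_um, 2000)
        return [edge(x) + i * pitch_um for i in range(n_fingers)], x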
Fig. 8b illustrates an alternative embodiment where the finger pattern is shaped like Euclidean spirals. Other patterns which result in reduced artifacts and a higher transmission of light, whereby said patterns are simulated as described above, can also be applied, for instance a sine wave pattern.
4.5.1.3.6 ITO material parameters
By appropriately choosing a suitable ITO material (defined by its layer thickness and complex refractive index as function of wavelength), the transmission and coloring can be improved. In the first case one can apply ITO coated glass substrates with a higher transmission, i.e. lower absorption in the ITO, by for instance changing the layer thickness of the ITO or choosing an ITO with a lower absorption coefficient. The 45 nm ITO and the 25 nm ITO depicted in Fig. 4 are suitable candidates to obtain a sample with a sufficient transmission. The coloring should be assessed in combination with the other components of the sensor, as their combination will determine the total wavelength dependent transmission and reflection. As described earlier, the ITO material parameters also impact the other design elements of the ITO fingers, as described above.
4.5.1.4 Organic layers
The visibility of the organic photosensitive materials 26 is mainly determined by the properties of the Exciton generation layer 33, as the hole transport layers 32, 34 typically have a minor absorption in the visible spectrum. The luminance absorption of the organic stack is for instance designed to be in the range of 3-30%. More preferably, the luminance transmission of the organic stack is designed to be in the range of 80-95%.
A suitable material for the Exciton generation material is 3,4,9,10-perylenetetracarboxylic bis-benzimidazole (PTCBI, purchased from Sensient), for several reasons. First of all, it is photosensitive over the entire visible spectrum, which allows it to react to light with any visible spectrum. Secondly, it has a rather uniform absorption coefficient over the visible spectrum, which results in a limited coloring of the light. In Fig. 9, a measured transmission spectrum of a 20.7 nm PTCBI layer is presented as an illustration. On top of that, the absorption coefficient has a suitable absolute value over the visible spectrum, which allows reaching a sufficient transmission for a layer thickness that can be made with the technologies used to add the different layers.
PTCBI layer thicknesses in the order of 5 to 15 nm proved to result in suitable transmissions, more specifically, PTCBI layer thicknesses in the order of 8-12 nm proved to render very good transmission results, while maintaining a sufficient signal amplitude and contrast.
The HOMO and LUMO levels of PTCBI are respectively around 6.3 eV and 4.6 eV [Toshiyuki Abe, Sou Ogasawara, Keiji Nagai, Takayoshi Norimatsu, Dyes and Pigments 77 (2008), 437-440]. In order for the device to work properly according to the described working principle, this poses a restriction on the HOMO and LUMO levels of the HTL, such that the exciton can be split into a hole and an electron, meaning that:
[(ELUMO(EGL) - EHOMO(EGL)) - (ELUMO(EGL) - EHOMO(HTL))] > 1 eV, or at least the binding energy of the exciton. Therefore, a suitable HTL for instance has a HOMO and LUMO level in the following ranges:
-6.3 eV < HOMO < -4.7 eV
-4 eV < LUMO < -1 eV
We estimate the PTCBI's electron mobility is typically in a range between 2x10^-2 and 2x10^-4 cm2/Vs. Aside from the Exciton generation layer, thin film effects are important in the sensor, as it comprises several thin-film layers (organic layers 26 and ITO 24). Therefore, the design should consider the combined effect of the different layers into a unified transmission.
4.5.1.5 Encapsulation
In order to protect the photoconductive sensors from potential contamination, a suitable encapsulation technique is used. This encapsulation technique generally uses a material 35 on top of the organic layers, and a cover glass 36. The exact configuration and geometry of the material 35 and the cover glass 36 will depend on the specific encapsulation technology. For instance, in the conventional encapsulation methodology, material 35 is an inert nitrogen (N2) gas. Alternatively, a UV-curable glue can be used, or a sputtered AlOx layer with a UV-curable glue on top. In order to have an optimized encapsulation, the material 35 on top of the organic layers should be carefully selected. The reason is that optical losses and reflections can occur here due to two optical interfaces, a first interface between the organic layers 26 and the material 35, and a second interface between the material 35 and the cover glass 36. Due to the relatively high contrast in refractive indices, using an inert gas atmosphere for material 35 can result in optical losses. Therefore, instead of encapsulating the sensor in an inert gas atmosphere and placing the glue only for instance at the edges of the encapsulation glass, an alternative embodiment can be used where the encapsulation glue is applied over the whole area of the sensor between the glass substrate and the encapsulation glass. The latter is enabled by using a drop of glue between the two glass plates, applying mechanical pressure on them, pushing out the inert gas by capillary forces and then curing the glue by UV exposure. In this way there is no gas left between the encapsulation glass and the organic materials, and the material 35 consists only of the cured glue. When glue is used the reflection is reduced at the gas/encapsulation glass interface and the organic/gas interface because the refractive index of the glue is nglue = 1.54, i.e. it matches the refractive index of glass and is very close to the refractive index of the organic materials of around 1.8. Preferably, Norland Optical Adhesive 68 glue can be used, which enables very high transmission percentages and does not introduce any coloring. On top of that, with this technique several problems of the prior art can be overcome: for instance one does not need the implementation of getters and spacers, and the transmission is improved whereas the visibility in reflection is reduced. The latter is also illustrated in Fig. 10, which illustrates the transmission of a sensor with standard encapsulation (and ITO 65nm, HTL 40nm, EGL 10nm) and a sensor with glue encapsulation (and ITO 45nm, HTL 40nm, EGL 10nm) with improved ITO. The initial results indicate that the performance of the sensor with the glue is as good as the sensor with standard encapsulation.
Alternatively, instead of directly applying the glue to the organic stack, a sputtered AlOx layer with a UV-curable glue on top can be used.
4.5.1.6 ARC
In other embodiments, an antireflection coating (ARC) is applied on the external surfaces of the substrate glass and/or encapsulation glass to further improve the transmission over all wavelength regions and to reduce the coloring of the sensor. The ARC can reduce the reflection (i.e. improve the transmission) over all visible wavelengths, but it can suitably be designed to reduce reflection relatively more in the green wavelength region and less in the blue and red wavelength regions. Such an ARC can for instance be obtained from a company such as Prazisions Glas & Optik. Furthermore, Prazisions Glas & Optik also offers the possibility to make a customized ARC which can be tuned in such a way that coloring at another visible wavelength is reduced.
4.5.2 Stability
4.5.2.1 Introduction
The sensor system's design should also be adapted to render reliable results over its entire lifetime. Important elements in this are the sensor driving and the photoconductive sensor's architecture.
4.5.2.2 Sensor driving by controller
The controller is used to apply a suitable signal over the sensor, and afterwards it is used to retrieve and process the light sensitive response of the sensor. A suitable signal to be applied to the sensor is a voltage signal, more specifically a square wave that switches between a positive and negative voltage.
In Fig. 11a, an example of an applied voltage over time is presented. This voltage signal switches from a negative voltage 54 of -1 V to a positive voltage 55 of +1 V. The period of this signal is 8 s, corresponding to a frequency of 0.125 Hz. A possible sensor response is presented in Fig. 11b, which clearly shows the need for such low frequencies. Upon switching, the current shows a slow decay in time 56 both for the positive and negative applied voltage, which stabilizes at a point 57 near the end of the flank. For different light levels, the decay 56 can be slower or faster in time, such that the curves can cross at some point during the flank; therefore, a stabilized value needs to be obtained. In order to obtain a time-dependent output of the sensors, the voltage flank is repeated over time, and the point 57 is tracked over time for the consecutive flanks, to obtain a time-dependent readout.
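As a minimal illustration of this driving and readout scheme, the sketch below applies a ±1 V square wave at 0.125 Hz and samples the sensor current near the end of each half-period (point 57). The functions apply_voltage() and read_current() are hypothetical placeholders for the controller's hardware interface and the timing values simply follow the example of Fig. 11; this is a sketch of the principle, not the controller's actual implementation.

```python
import time

# Hypothetical controller interface; the real calls depend on the
# electronics board driving the sensor.
def apply_voltage(volts):
    pass                                   # set the bias across the sensor

def read_current():
    return 0.0                             # return the sensor current in A

def track_stabilized_current(n_periods=10, period_s=8.0, settle_fraction=0.9):
    """Drive the sensor with a +/-1 V square wave (0.125 Hz) and sample the
    current near the end of each half-period, where the slow decay 56 has
    stabilized at point 57."""
    half = period_s / 2.0
    samples, level = [], +1.0
    for _ in range(2 * n_periods):
        apply_voltage(level)
        time.sleep(settle_fraction * half)        # let the decay settle
        samples.append((level, read_current()))   # stabilized value (57)
        time.sleep((1.0 - settle_fraction) * half)
        level = -level                            # switch polarity for the next flank
    return samples
```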
Depending on the organic materials used in the device, the decay 56 can vary. For instance, the organic materials used as hole transport layer(s) of the device influence the decay and hence the maximal usable frequency of the sensor. For instance, using TMPB as HTL results in a slower sensor than using for instance N,N,N',N'-tetrakis(4-methoxyphenyl)-benzidine (known as MeO-TPD). Therefore, the suitable frequency should be chosen depending on the organic material used in the device.
The most suitable voltage depends on several parameters of the organic stack. These parameters include the materials used for the HTLs, the number of HTLs in the stack, and the thicknesses of the HTLs. This is illustrated in Fig. 12, where the IV curves of 4 different HTL configurations with the same organic materials are presented. The organic layers used in the different sensors are listed below.

substrate name    organic stack
S1                40 nm HTL / 10 nm EGL
S2                40 nm HTL / 5 nm EGL
S3                100 nm HTL / 10 nm EGL
S4                40 nm HTL / 10 nm EGL / 40 nm HTL
The organic materials used in these devices are 1,3,5-Tris[(3-methylphenyl)phenylamino]benzene (m-MTDAB, purchased from Sigma Aldrich) as hole transport material, and PTCBI as exciton generation material.
The electro-optic performance of the photoconductive sensors is measured for two illumination levels (above a Lambertian emitter at ~150 cd/m2 and ~300 cd/m2) and in the dark state (< 1 cd/m2). m-MTDAB is a hole transport material with hole mobility μp ~ 3x10^-3 cm2/Vs [Yasuhiko Shirota, Journal of Materials Chemistry, 10, 2000]. The molecular structure of m-MTDAB is similar to 1,3,5-tris(di-2-pyridylamino)benzene (TDAPB), with the exception of the nitrogen atoms in the outer benzene rings and the lack of methyl groups in meta orientation. Since the ionization potential (HOMO) and the electron affinity (LUMO) are determined by the functional groups in the m-MTDAB molecule, similar to TDAPB, we can estimate the HOMO and LUMO levels to be respectively 5.09 eV and 1.64 eV [J. Pang, Y. Tao, S. Freiberg, X.P. Yang, M. D'Iorio, and S.N. Wang, Journal of Materials Chemistry, 12(2):206-212, 2002].
The I(V) curves of the organic photoconductive sensors show two more or less linear regions with different slopes. At low voltages (region 1) the current increases more or less linearly with the voltage, and the resistance for the illuminated devices is between 3 and 20 MΩ. As expected in a photoconductor, the conductivity increases with the illumination, but this relation is not linear. The slope of the IV curve decreases considerably at a certain "knee voltage", which is in the range of 1-2.5 V, and for higher voltages the current increases at a slower pace. For some devices (S1 and S2) the dark current (and also the current under illumination) increases quadratically at higher voltages. Some characteristics show slightly negative values near the origin, but this is due to non-equilibrium conditions of the measurement. Although a final model of the sensor is still being developed, we can make some estimations concerning the charge transport. In region 1 the devices can be described with a constant conductivity. The current density is the product of the carrier density, the average mobility of the charge carriers (for electrons and holes) and the electric field. The mobility of holes and electrons is a constant, and the electric field is proportional to the applied voltage. The carrier density is obviously a non-linear function of the illumination, which is the result of a balance between generation and recombination, possibly involving trapped states. The carrier density can be larger if the layers are thicker, because the probability for recombination is reduced.
If the voltage increases above the knee-voltage, the simple picture outlined above breaks down, because near (one of) the electrodes (one of) the charge carriers is carried away faster than the other and a space charge region develops which takes up an important part of the applied voltage. Trapping of charges can play a very important role in this process. Outside of the space charge region, the organic stack provides the same (high) conductivity as is observed in region 1, but the space charge region is a region with a much higher impedance. Increasing the voltage further contributes mainly to increasing the space charge region and only marginally to the current.
For the substrates S1, S2, S4 the behaviour in region 1 is practically linear. The non-linearity for sample S3 at low voltages may be due to the fact that the electrons in the EGL/ETL cannot easily travel to the anode through the 100 nm thick HTL and some voltage is needed to assist the charge transfer there.
Note that the exact IV curves under different illumination levels also depend on the rest of the sensor's architecture. The curves of Fig. 12 are obtained with finger patterns that are 16 mm wide and 15 mm high. Each finger of the electrodes is 80 μm wide and separated from the next by a gap of 20 μm, created using photolithography. For one sensor the total gap length is 2384 mm. The substrates with patterned ITO electrodes (SiO2 passivation layer, ITO Rs = 15 Ω/sq) are purchased from Naranjo Substrates (Groningen, Netherlands). The deposition of the organic compounds on the four substrates, each with a different stack, and the encapsulation under a N2 atmosphere with CaO getters are performed by the Fraunhofer Institute for Photonic Microsystems (IPMS, Dresden, Germany).
The current is measured in this experiment using a Keithley PicoAmpMeter 6485, as a function of the voltage applied across the electrodes of the device. The voltages are applied with a Keithley SourceMeter 2425 in the range between 0 and 10 V. During the current measurement the photoconductive sensor is illuminated with a white LED backlight powered with a Keithley 220 current source. The backlight consists of a reflective cavity with white LEDs and diffuser foils to obtain a uniform illumination. In another practical experiment the current through the photoconductive sensor is measured as a function of the position of local illumination, for a set of voltages. When local illumination is created using a line-shaped laser beam, it was seen that for the higher applied voltages (6 V) the photoconductive sensor is more sensitive when it is illuminated near the cathodic electrode. In this experiment, a 10 x 10 cm ITO covered glass substrate has been purchased from Prazisions Glas & Optik (PGO, polished glass substrate 1.1 mm thick, SiO2 passivation layer, ITO Rs = 50 Ω/sq). The substrate is rinsed to remove possible contamination. Lithography is performed to define an electrode pattern of two single parallel electrodes 80 μm wide and 20 μm apart. The length of the electrodes is 10 mm.
In Fig. 13a, a picture is presented with a line-shaped laser beam 60 pointed in the gap 59 between two interdigitated sensor fingers 58. In Fig. 13b, the measured current as a function of the position of the laser line with respect to the anode is presented. The experiment is performed at DC voltages of 0.5, 3 and 6V between the anode and cathode. At each position the current is measured under BI (I dark) and subsequently under BI+laser line (I laser).
This can be explained by assuming that the space charge region, which is the region with the highest electric field, is located in the vicinity of the cathode. Additional illumination and generation of electrons and holes in the high-field region leads to a higher current because the charge carriers are separated more effectively. In the low-field region, the large amount of electrons and holes that are generated locally by the laser beam has a large chance to recombine in the region where they are created and has only a limited effect on the photocurrent. The experiment shows that the space charge is near the cathode and therefore the high-field region must have an excess of holes in the HTL. This may be because holes have a lower mobility than electrons or because holes are trapped more easily in the stack.
A similar measurement for 3V applied over the electrodes shows that the increased sensitivity at the cathode is not as strong as in the measurement for 6V. This implies that the space charge region is not as pronounced for voltages closer to the knee of the I(V) curve.
Initial experiments indicate that the most suitable voltages are lower voltages, for instance when voltages are used in the first region.
4.5.2.3 Sensor architecture
4.5.2.3.1 Finger patterns
The stability of the sensor has been assessed for different gap widths 53. A suitable range of gap widths is for instance 4 to 25 μm, typically between 8 and 20 μm.
4.5.2.3.2 Organic layers
In Fig. 14 an example of a long-term experiment is presented, in which the current 57 is tracked over time. A significant drop at the beginning of the device's lifetime is present, which typically lasts several hours. This can be avoided by operating the sensor during production, to ensure that the sensor has already reached its more stable regime before use.
Suitable HTLs have a sufficiently high glass transition temperature (Tg), because the organic materials can be heated to temperatures of around 50°C, for instance when they are used in front of an LCD. If the material's Tg is not high enough, the materials can degrade over time. Suitable HTLs to be used in front of an LCD have glass transition temperatures for instance in the range of 60°C-300°C (the upper limit is obviously not considered a limitation). Suitable HTLs that respect the constraints imposed by the visibility and stability requirements, and which have suitable energy bands, are NPB and MeO-TPD. However, m-MTDAB in combination with the glue encapsulation is not excluded either, as initial experiments indicated a positive impact of this encapsulation on long-term stability.
In terms of layer thicknesses, thicker HTLs are typically favoured over thinner HTLs for improving the sensor's stability. A suitable range of thicknesses when using MeO-TPD as HTL for layer 34 is 60-300 nm, while a suitable range of thicknesses for layer 32 when using MeO-TPD as HTL is 60-300 nm, more preferably 80-160 nm.
4.5.3 Suitable device architecture for stability and visibility
Based on the current results a suitable device design has been created, using the above constraints to obtain an operational device with usable visibility and stability. Also, several effects have been measured separately, which led to the understanding of the architecture of an even better device. Note that specific parameters are given, but these should not be considered a limitation of the present invention.
4.5.3.1 Substrate & ITO
4.5.3.1.1 Operational device
A Corning Eagle XG glass substrate with a thickness of 1.1 mm, coated with 45 nm ITO, obtained from the company Colorado Concept Coatings, has been used in an operational device.
4.5.3.1.2 Improved device
The same glass substrate and ITO coating have been used in the architecture of the improved device.
4.5.3.2 ITO patterning
4.5.3.2.1 Operational device
The ITO coated substrate has been patterned with a suitable gap width of 15 μm; the ITO finger width was designed to be 173 μm, using non-curved fingers.
4.5.3.2.2 Improved device
The same ITO patterning used in the operational device has been used in the architecture of the improved device.
4.5.3.3 Organic layers
4.5.3.3.1 Operational device
The operational device uses 2 organic layers on top of the patterned ITO, in the following order: an 80 nm MeO-TPD HTL and a 10 nm PTCBI layer.
4.5.3.3.2 Improved device
Three organic layers are used in the improved device on top of the patterned ITO, in the following order: an 80 nm MeO-TPD HTL, a 10 nm PTCBI layer and an 80 nm HTL.
4.5.3.4 Encapsulation
4.5.3.4.1 Operational device
The current device uses the conventional encapsulation methodology, with getters placed outside the display's active area.
4.5.3.4.2 Improved device
The organic photoconductive sensor is encapsulated using the solution with the drop of glue.
4.6 Calibration
4.6.1 Luminance and chromaticity measurements
The exact sensor architecture, designed to obtain a stable sensor optimized for visibility as elaborated above, determines its specific sensing characteristics, including its exact angular sensitivity, non-linearity, and spectral sensitivity. Suitable calibration tables, according to the earlier described methodologies, should be created in order to be able to use the sensor for luminance and chromaticity measurements in combination with a display system. These calibration tables should then be integrated into the software 13 of the controller 14, in order to assure a correct measurement of the sensor system 9.
4.6.2 Ambient light rejection algorithms
4.6.2.1 Introduction
As mentioned earlier the photoconductive sensors 6 cannot distinguish the direction of the light. Therefore the photocurrent going through the semitransparent sensor can be the consequence of light emitted from display area 5 or external (ambient) light. Therefore additional measurements are to be performed.
Suitably, the semitransparent sensor is present in a front section between the front glass and the display.
4.6.2.2 Optical filter
Fig. 15 shows an alternative embodiment of the invention relating to the photoconductive sensors 6. Sensors 61a and 61b are deposited/placed on the ITO patterned substrate 63 (e.g. a glass panel), such as one that comes in front of a (typical) LCD panel 64, or directly onto the display. In an alternative embodiment the display can comprise an air gap in between the front glass and the panel; in that case the substrate is placed in front of the (LCD) panel, at a certain distance. In this embodiment, the sensor is preferably created on the side of the front glass facing the display (i.e. the substrate glass can be used as front glass), or on an additional layer positioned adjacent to the front glass, at the side facing the display.
The LCD display is backlit by light sources 65 (e.g. cold cathode fluorescent lights, LEDs ...). Sensor 61a is also covered by an optical filter 62. A first amount of ambient light (AL) 66a impinges on filter 62 before being transmitted to sensor 61a. A first amount of "display" light (DL) 67a reaches sensor 61a. A second amount of ambient light 66b reaches sensor 61b. A second amount of "display" light 67b reaches sensor 61b. In a first approximation, we will assume that the first and second amounts of ambient light 66a and 66b are identical. This condition may be fulfilled in most practical cases when sensors 61a and 61b are close enough to each other and small enough. In addition we also assume that the first and second amounts of display light 67a and 67b are equal. This may be taken care of electronically, by driving the LCD panel so that both sensors 61a and 61b receive the same amount of light (for instance, it may be necessary to compensate for potential non-uniformities of the light generated by the backlight).
Let VOut_a and VOut_b be the output signals of sensors 61a and 61b respectively. We have:
(1) VOut_a = a * AL + b * DL
(2) VOut_b = c * AL + d * DL (where * indicates multiplication)
Note that it is assumed here that the sensor has a linear response to the impinging light level of the ambient light and of the light emitted by the display, which should be realized with proper sensor calibration. The coefficients a and c are different by way of the filter 62 (a < c). The coefficients b and d may be different as well (for instance, the filter 62 may reflect a part of the display light not absorbed by the sensor 61a, resulting in b being larger than d).
The determinant a*d - b*c of the linear system of equations (1) and (2) in the unknowns AL (ambient light) and DL (display light) is non-zero by an ad-hoc choice of the filter 62 (e.g. material, pigment and/or thickness of the filter impact coefficient d and influence the value of the determinant). The system may thus be solved for AL and DL once a, b, c and d have been characterized.
The set of equations (1) and (2) shows that the sensors 61a and 61b are used in tandem to discriminate between the different sources of light that contribute to the output signals of the sensors. A calibration to a reference sensor that has a response according to the Y(λ) curve can be used to obtain the actual values, as previously discussed. The coefficients a, b, c and d may be obtained e.g. as follows: AL is imposed (calibrated light source) while the backlight is switched off (DL = 0). In that case, a = VOut_a / AL and c = VOut_b / AL. The coefficients b and d are similarly obtained with AL = 0 and DL known (all light valves open and the luminance of the display panel measured). The coefficients a, b, c and d can be determined upfront, as they are related to the design of the display, sensor and filter, which remains constant over the display's lifetime.
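As a minimal numerical sketch of this procedure (with coefficient values chosen purely for illustration, not measured), the snippet below characterizes a, b, c and d from the two calibration steps described above and then recovers AL and DL from a pair of sensor readings by inverting the 2x2 system of equations (1) and (2).

```python
import numpy as np

def characterize(vout_a_bl_off, vout_b_bl_off, AL_cal,
                 vout_a_no_amb, vout_b_no_amb, DL_cal):
    """Calibration: first with a known ambient source and backlight off,
    then with ambient excluded and a known display luminance."""
    a = vout_a_bl_off / AL_cal
    c = vout_b_bl_off / AL_cal
    b = vout_a_no_amb / DL_cal
    d = vout_b_no_amb / DL_cal
    return a, b, c, d

def split_light(vout_a, vout_b, a, b, c, d):
    """Solve equations (1) and (2) for the ambient (AL) and display (DL) parts."""
    M = np.array([[a, b], [c, d]])
    if abs(np.linalg.det(M)) < 1e-12:
        raise ValueError("filter 62 must make equations (1) and (2) independent")
    AL, DL = np.linalg.solve(M, np.array([vout_a, vout_b]))
    return AL, DL

# Example with illustrative coefficients only:
a, b, c, d = 0.2, 0.9, 0.6, 0.8
print(split_light(vout_a=1.0, vout_b=1.5, a=a, b=b, c=c, d=d))
```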
The filter 62 may be e.g. a polarizing filter that will only transmit light with the right polarization, when used in combination with an LCD that emits linearly polarized light. In some instances, the filter 62 may be integrated into, or be a (structural) part of, the sensor 61a, as in the case of a sensor sensitive to a specific polarization of the light discussed elsewhere in this description. Indeed, if the sensor 61a contains a layer of organic material, "rubbing" the organic material can generate an integrated filter. In addition, the filter 62 might be a simple linear polarizing filter, or partial linear polarizer filters that are applied suitably such that they transmit the polarization emitted by the display. In general, any filter that will force the two linear equations (1) and (2) to be linearly independent will do.
In some specific embodiments, it is possible to modify the arrangement of Fig. 15 as follows: the filter 62 may be placed between the display and the sensor. In that case, a first amount of display light (DL) 67a impinges on filter 62 before being transmitted to sensor 61a. A set of linear equations similar to (1) and (2) is solved for the unknowns AL and DL in order to isolate the contributions of the ambient light to the output signals of sensors 61a and 61b. For instance, the filter 62 can be designed such that it reduces the reflection of the panel, which leads to two independent equations (1) and (2). For instance, the coefficient a can become smaller than coefficient c due to the reduced reflection of the impinging ambient light on the panel. When the filter 62 is put in between the substrate 63 and the LCD panel 64 at the position of sensor 61a, we can make sure that the determinant of the equations (1) and (2) is non-zero. The filter needs to reduce the reflection of the panel. In this design, 0 < d < b, because DL needs to pass the filter before it reaches sensor 61a. Furthermore, 0 < a < c, because AL passes through both sensors and generates the same VOut, but when it is reflected on the panel (i.e. panel + filter) and goes back to the sensors, sensor 61a will have a smaller VOut because of the reduced reflectivity of the panel and the filter.
It is conceivable to modify the arrangement of Fig. 15 to produce two linearly independent equations time-sequentially instead of simultaneously. For instance, one could rely on a filter 62 that is a switchable filter (e.g. a switchable polarizer).
In a first state the filter is inactive and the sensor's output is given by:
(1') VOut_1 = b * DL
In a second state, the filter 62 is activated (electrically) and the sensor's output is given by:
(2') VOut_2 = c * AL + d * DL, with a << c and d not necessarily equal to b.
4.6.2.3 Electronic filter
In another embodiment of this invention, use can be made of the typical driving of the backlight unit which is integrated in the design of a transmissive liquid crystal display. The backlight unit is typically Pulse Width Modulation (PWM) driven, which can be used advantageously in the scope of this invention. The method comprises the following steps:
In a first embodiment:
- Switching on the blinking of the backlight
- Switching on an electronic filter in combination with the sensor according to embodiments of the present invention, in which the filter is designed to obtain two sets of measurements from the sensor. The first set is made in the period where the backlight is switched off, in which only ambient light is sensed, while the second set is made in the period where the backlight is switched on, hence both ambient light and light emitted by the display are measured
- Using both measurement sets of the previous steps, it is possible to determine the part of the output signal induced by the different sources of light. Hence, it is possible to eliminate the contribution of the ambient light. A minimal sketch of this approach is given after this list.
In this embodiment, the blinking should be done in such a way that the period in which the backlight is switched on and the period in which the backlight is switched off are sufficiently long to ensure that the sensor is capable of properly acquiring both sets of measurements.
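The sketch below illustrates this first embodiment, under the assumption that the controller can read the sensor while the backlight blinking is active and knows, for each reading, whether the backlight was on or off during that measurement period; function names and numerical values are purely illustrative.

```python
def separate_ambient(readings):
    """readings: list of (backlight_on, sensor_value) pairs taken while the
    backlight blinking is active; each on/off period is assumed to be long
    enough for the (slow) sensor to settle."""
    off_vals = [v for bl_on, v in readings if not bl_on]   # ambient light only
    on_vals = [v for bl_on, v in readings if bl_on]        # ambient + display light
    ambient = sum(off_vals) / len(off_vals)
    combined = sum(on_vals) / len(on_vals)
    return ambient, combined - ambient                     # (AL, DL)

# Illustrative readings only:
AL, DL = separate_ambient([(False, 0.21), (True, 0.93),
                           (False, 0.23), (True, 0.95)])
```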
In an alternative embodiment, the same can be done based on measurements at different points in time within the PWM signal, even without using blinking; more precisely, at the moments in time when the backlight is switched on and when it is switched off. The disadvantage of this embodiment is that sensors with a very fast response time are required.
In another embodiment, the following steps can be taken:
- Switching on the backlight unit with blinking
- Using an electronic filter on the output of the sensor to detect the average light captured, whereby said captured light is a combination of the display light and the ambient light
- Switching off the blinking of the backlight unit
- Using an electronic filter on the output of the sensor to detect the average light captured, whereby said captured light is a combination of the display light and the ambient light
- Using both measurement sets of the previous steps to determine, based on a model, the part of the output signal induced by the different sources of light. Hence, it is possible to eliminate the contribution of the ambient light, and to measure the light emitted by the display only.
The advantage of these methods and arrangements is that the backlight is not switched off completely which means normal viewing can be continued.
4.6.2.4 Mechanical filter
In an alternative embodiment, the filter 62 can be a mechanical filter that filters out the ambient light.
For instance, the mechanical filter 62 can be a finger touching the sensor system. The ambient light is then blocked locally when the region of interest is touched, assuming the sensor 61a has smaller dimensions than the finger touching it.
A finger pressed against the sensor 61a will prevent most of the ambient light from reaching the sensor 61a for as long as the finger is pressed. In that case, the output signal of sensor 61a is:
(1') VOut_1 = b * DL
Once the finger is no longer pressed, one has the following expression:
(2') VOut_2 = c * AL + d * DL, with a << c and d not necessarily equal to b.
Solving the set of equations (1') and (2'), under the assumption that DL has remained constant, allows one to isolate AL and DL.
The controller 34 is designed with the required intelligence, such that it is aware of the touching of the touched sensor. The sensor system 31 can then measure the light properties in a touched state, where all or a significant amount of the external light is blocked. The measurement is then repeated in an untouched state. The difference between the two measurements provides the amount of ambient light.
Moreover the finger touching the sensor can have a reflection as well, which can influence the amount of light sensed by the sensor. As a result, the sensor system 31 can be calibrated by first carrying out a test to determine the influence of the reflection of a finger on the amount of measured light coming from the display without ambient light, and the equations need to be adapted accordingly.
Moreover, the filter 62 can be a black absorbing cover, blocking all the ambient light, 66, without reflecting any of the display's emitted light 67. For this method two luminance measurements are performed. The first measurement(s) include measuring the full light contribution, which is the combination of light emitted from the display and the ambient light. During a second measurement, the sensor measures the emitted display light with all ambient light excluded, using the absorbing cover as filter. Both these measurements are needed when one desires to quantify the ambient light. However, if one merely wants to measure the light emitted by the display, without the ambient light contribution, it is sufficient to cover the display to exclude ambient light influences and then measure.
These ambient light rejection algorithms should then be integrated into the software 13 of the controller 14, in order to assure a correct measurement of the sensor system 9.
4.6.2.5 Alignment
A possible method to roughly determine the position of the sensor is to use a square 68 that scans over the active area of the screen, as illustrated in Fig. 16a. The square moves in a grid that covers the entire display's active area by translating column per column for every consecutive row. The sensor 69 measures the light at every position of the square. Two consecutive positions of the square 68 are presented in Fig. 16a.
When the highest value is measured, the coordinates of the square correspond closest to the coordinates of the sensor. The size of the square can be determined based on the desired accuracy. The smaller the size of the moving square is, the better the accuracy will be.
This methodology can be further enhanced using images which have a bright half 70 and a dark half 71, as illustrated in Fig. 16b. An image is used which is cut into a bright and a dark half in the horizontal (Fig. 16b, left) and vertical direction (Fig. 16b, right), to pinpoint in which quadrant the sensor is located. Afterwards, a smaller version of the same images is displayed, to obtain the correct sub-quadrant of the initial quadrant, as presented in Fig. 16c. The obtained results can always be compared to the reference values measured using a bright and a dark patch.
Once the measurement is no longer equal to one of the reference values, the sensor is located partially in the bright and partially in the dark region (assuming the sensor works like a luminance sensor; otherwise the neighbourhood of the sensor has been found). To determine the exact sensor location, we start from the identified area and use a small white square on a black background to precisely locate the sensor. When the luminance measured by the sensor reaches a local maximum, the position of the sensor relative to the display's active area has been found. This algorithm is presented in Fig. 16d.
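A minimal sketch of this coarse localization step is given below; display_pattern() and read_sensor() are hypothetical placeholders for displaying a white square at a given position and reading the calibrated sensor output, and the refinement simply consists of repeating the scan with a smaller square around the best coarse position.

```python
def locate_sensor(display_pattern, read_sensor, width, height, square=64):
    """Coarse scan: move a bright square over a black background on a grid
    and keep the position giving the highest sensor reading (Fig. 16a/16d)."""
    best_xy, best_val = (0, 0), float("-inf")
    for y in range(0, height - square + 1, square):
        for x in range(0, width - square + 1, square):
            display_pattern(x, y, square)      # white square at (x, y)
            val = read_sensor()
            if val > best_val:
                best_xy, best_val = (x, y), val
    # Accuracy is set by the square size; repeat with a smaller square
    # around best_xy for a finer estimate.
    return best_xy
```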
4.7 Applications
4.7.1 Introduction
The sensor system 9 according to the description so far has an architecture that has been optimized for stability and visibility purposes, and due to proper calibration luminance and chromaticity measurements of the display are enabled, as well as ambient light measurements.
However, there are several design choices of the sensor remaining, which can be suitably selected for the intended applications. The remaining design choices include the lay-out of the matrix of sensors (their locations on the screen), and the size of the individual sensors. These remaining design choices should be made in harmony with the display controller 14, more specifically with the software 13 which includes calibration algorithms dedicated to the applications, as well as with the specific display 1 the application is intended for, as this has repercussions for instance on the number of sensors used in the sensor matrix. Also, the sensors cannot be made too small, such that the sensor has a sufficient amplitude for proper detection by the controller.
4.7.2 Applications/performance improvements based on display light output measurements
4.7.2.1 Luminance/chromaticity uniformity checks
A suitable sensor-layout and size can be created to perform luminance/chromaticity uniformity checks.
In a first method, the sensor-layout design is such that five sensors are created: one in the centre and one in each of the four corners. Of course other custom sensor designs with very specific parameters are also possible. For example, the exact size of the measurement area may not be specified, and only the borders of the region are specified. Creating a sensor with a relatively large sensing area is preferred, since this will average out any high-frequency spatial non-uniformity which might occur in the region. This can be realized in practice by using organic photoconductive sensors according to the present invention, designed with finger-shaped interdigitated extensions with sufficiently long fingers and a sufficient number of fingers to reach the required region size, or alternatively multiple smaller sensors which can be combined to create an averaged measurement. The size of the sensors' measurement area is for instance a 1 by 1 cm region across the faceplate. This regional size approximates the area at a typical viewing distance. Non-uniformities in display devices that can benefit from the sensor systems, such as LCDs, may vary significantly with luminance level, so a sampling of several luminance levels is usually necessary to characterise luminance uniformity.
As a result, luminance uniformity is determined by measuring luminance at various locations over the face of the display device while displaying a pattern with uniform driving, and applying a suitable metric to quantify the non-uniformity of the measured values.
Non-uniformity can be quantified as the maximum relative luminance deviation between any pair or set of luminance measurements. Alternatively, a metric of spatial non-uniformity may also be calculated as the standard deviation of luminance measurements.
Alternatively, the luminance uniformity can be quantified using the following formula: 200*(Lmax - Lmin)/(Lmax + Lmin). Depending on the outcome of the measurements, it can be validated whether the display is still operating within tolerable limits or not. If the performance proves to be insufficient, a signal can be sent to an administrator, or to an online server that registers the performance of the display over time.
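The sketch below computes these non-uniformity metrics from a set of luminance readings taken by the sensors while a uniform pattern is displayed; the 200*(Lmax - Lmin)/(Lmax + Lmin) formula and the alternative metrics follow the text above, while the readings and the tolerance threshold are only illustrative values.

```python
import numpy as np

def uniformity_metrics(luminances):
    """luminances: sensor readings (cd/m2) at different screen locations,
    taken while a uniform driving pattern is shown."""
    L = np.asarray(luminances, dtype=float)
    lmax, lmin = L.max(), L.min()
    return {
        "max_relative_deviation": (lmax - lmin) / lmax,
        "std_dev": L.std(),
        "non_uniformity_pct": 200.0 * (lmax - lmin) / (lmax + lmin),
    }

readings = [412.0, 405.5, 398.2, 407.9, 401.3]    # illustrative values only
m = uniformity_metrics(readings)
within_limits = m["non_uniformity_pct"] < 10.0     # illustrative tolerance
```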
The spatial noise of the display light output can also be characterized by calculating the NPS (Noise Power Spectrum) of measurements of a uniform pattern at different digital driving levels.
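As a rough sketch (assuming a two-dimensional array of luminance measurements of a uniform pattern is available, and leaving out detrending and windowing refinements), such a noise power spectrum can be estimated as follows; the sampling pitch is an assumed parameter.

```python
import numpy as np

def noise_power_spectrum(lum_map, pitch_mm=1.0):
    """Estimate the 2D NPS of a measured luminance map of a uniform pattern.
    lum_map: 2D array of luminance samples; pitch_mm: sample spacing."""
    noise = lum_map - lum_map.mean()                  # remove the mean level
    ny, nx = noise.shape
    nps = np.abs(np.fft.fftshift(np.fft.fft2(noise))) ** 2
    nps *= (pitch_mm * pitch_mm) / (nx * ny)          # normalize to area units
    return nps
```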
In addition, recording of the outputs of the luminance performance can result in digital watermarking: e.g. after capturing and recording all the signals measured by all the sensors of the sensor system at the time of diagnosis, it could be possible to re-create, at a later date, the same conditions which existed when an image was used to perform the diagnosis.
As a uniform pattern needs to be applied to the display, the measurements cannot be made during normal use of the display. Instead, the patterns can be displayed when an interruption of the normal image content is permitted. Aside from these luminance uniformity checks, the sensors can also be used for chromaticity uniformity checks.
4.7.2.2 Luminance/color uniformity corrections
Several main aspects of the sensor system can be altered to obtain optimal results for a luminance uniformity correction algorithm:
(1) Sensor size: the sensors are preferably large enough to cancel out the high-frequency Gaussian noise. Since the measured data is a spatial (weighted) average of the light impinging on the sensor, the noise will indeed disappear. However, the sensors should not be too large, otherwise the low frequencies may be cancelled out as well and the sensors would no longer capture the correct signal. The architecture of the sensor system of the present invention offers the flexibility to create sensors with a suitable size, by suitably selecting the number of fingers and the length of the fingers. The sensor can for instance have a square geometry, with a side the size of 5-500 pixels.
(2) Position of sensors: the sensors are preferably dispersed over the whole active area of the display and their positions will define a 2D grid. This grid may be uniform or not, regular over the display or not. For instance, the spacing in the borders may be reduced while keeping a uniform grid in the centre of the display.
(3) Number of sensors: the basic trade-off concerning the number of sensors is the cost of the sensor system. More sensors will certainly result in a low-resolution map which provides a better match with the actual emitted pattern, but this typically results in a higher cost, for example due to a more elaborate electronics board 12, or a more sophisticated ITO removal process. Moreover, the resulting improvement can be limited; there is typically an asymptotic behaviour of the correctness of the resulting high-resolution map depending on the number of sensors used, when considering only the low-frequency noise. The number of sensors can for instance be in the range of 3 by 3 to 400 by 500 sensors, more suitably in the range of 6 by 6 to 50 by 40 sensors in the case of a 5MP medical grade radiology display.
(4) The interpolation/approximation technique used: as a consequence of the remarks about the sensor size and the number of sensors used in the design, the display areas 5 measured by the sensors should typically be made significantly larger than the size of the photosensitive area of a photoconductive sensor. This implies that the total sensitive area of the sensors is typically smaller than the display's active area, and an interpolation or approximation technique is required to obtain a suitable correction of the display's driving. Consequently, the interpolation/approximation method used is of great importance. Based on the measurements of the sensors, it determines the curve that will be used for correction, as it converts the low-resolution map of captured sensor values into a high-resolution map, typically at the display resolution, that should be used in the uniformity correction algorithm.
Of course, given a set of points, an infinite number of possibilities can be used to link them together or approximate them. A preferred approximation algorithm is an interpolation method based on biharmonic spline interpolation, as disclosed by Sandwell in "Biharmonic Spline Interpolation of GEOS-3 and SEASAT Altimeter Data", Geophysical Research Letters, 14(2), 139-142, 1987. The biharmonic spline interpolation finds the minimum curvature interpolating surface when a non-uniform grid of data points is given. Other approximation algorithms can also be used, for example the B-spline, which is disclosed in H. Prautzsch et al., Bezier and B-Spline Techniques, Springer (2010). Other interpolation and approximation techniques can also be applied. For instance, an interpolating curve can be defined by a set of points, and runs through all of them. An approximation defined on the set of points, also called control points, will not necessarily interpolate every point and possibly none of them. An additional property is that the control points are connected in the given order. Preferably, the set of control points is assumed to be ordered according to their abscissa, although this is not mandatory to apply the interpolation technique in the general case. Another interpolation method which can be applied is linear interpolation, where a set of control points is given and the interpolating curve is the union of the line segments connecting two consecutive points. Linear interpolation is an easy interpolation technique and is continuous. However, it is a local technique, since moving a single point will influence only two line segments, and hence will not propagate to the entire curve. Another technique which can be applied is cubic spline interpolation, whereby cubic piecewise polynomials are used. The cubic spline has the particularity that both the first and second derivatives are continuous, resulting in a smooth curve. This technique is global, since moving a point influences the entire curve. The Catmull-Rom interpolation can also be used, which is a special case of the pchip interpolation, where the slope of the curve leaving a point is the same as the slope of the segment connecting the previous and the next control points. In addition, the first derivative is continuous. A minimal sketch of such a low-resolution to high-resolution conversion is given after this list.
(5) The quality assessment metric: the quality of the uniformity correction, which utilizes the high-resolution interpolated/approximated luminance map, needs to be assessed with a suitable metric. This metric is designed to compare two images. The first image is the desired uniform image. The second image is the ideal image we want to reach, with the scaled error modulated on top. The error is the scaled difference between the actual measured signal and the interpolated/approximated signal. The error is scaled in the same way as the measured signal is scaled to obtain the ideal, uniform image. This scaled error is then added as a modulation on top of the ideal image. The resulting rescaled error is a consequence of the difference between the image we would obtain by using the interpolated or approximated curve, which uses only a limited number of measurement points, and the actual measured curve for the luminance uniformity correction.
The easiest is to use purely objective metrics, such as PSNR and MSE, or the maximum local and global percentual error. The global percentual error can for instance be obtained by calculating the sum of the local absolute errors per pixel, and dividing it by the sum of all the desired pixel values. However, the generated results are not necessarily the most consistent with what a human observer would perceive. Therefore, subjective metrics based on the human visual system can be used, which allow obtaining a better match with how the image is perceived by humans. For example, we can use the Structural Similarity (SSIM) index, which is based on the human visual system, and can be used to compare the similarity between two images. In our application, one of the images is typically the ideal uniform reference image, which should ideally be obtained after calibration.
(6) The percentage of the display's width and height which is considered in the uniformity correction: the borders of the display device exhibit the largest non-uniformities, and complex effects occur there. For instance, the natural drop-off of the luminance is partly compensated by the Mach-banding phenomenon. Indeed, as a consequence of the Mach-banding phenomenon, a more uniform luminance profile is perceived. On top of that, while the sensors can be made smaller at the border to capture the higher spatial frequencies present in the noise, creating sensors with a very tiny width has no use, as the high-frequency trend will no longer be filtered out, which is undesired. Therefore, the analysis is typically limited to a certain central area of the display's active area, excluding the very edge of the display borders. This area can be for instance the area within 90% and 100% of the display's active area width and height, more preferably between 95% and 99%.
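As referenced under item (4), the following sketch converts a low-resolution grid of sensor readings into a high-resolution luminance map. It uses SciPy's RBFInterpolator with a thin-plate-spline kernel as a stand-in for the biharmonic spline method (the 2D thin plate spline solves the biharmonic equation); the sensor positions and values are illustrative assumptions, not measured data.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

def upsample_luminance(sensor_xy, sensor_lum, width, height):
    """Convert the low-resolution sensor readings into a full-resolution
    luminance map for the uniformity correction algorithm.
    sensor_xy: (N, 2) pixel coordinates of the sensor centres
    sensor_lum: (N,) spatially averaged luminance measured by each sensor"""
    rbf = RBFInterpolator(sensor_xy, sensor_lum, kernel="thin_plate_spline")
    ys, xs = np.mgrid[0:height, 0:width]
    grid = np.column_stack([xs.ravel(), ys.ravel()]).astype(float)
    return rbf(grid).reshape(height, width)

# Illustrative 3 x 3 sensor grid on a small (4:5) display area
xy = np.array([[x, y] for y in (50, 250, 450) for x in (50, 200, 350)], float)
lum = np.array([400, 410, 402, 415, 425, 417, 398, 409, 400], float)
high_res_map = upsample_luminance(xy, lum, width=400, height=500)
```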
In addition, a self-optimizing algorithm can be applied. Since there are various parameters which can be fine-tuned, the final optimal solution is a combination of choices for each parameter. Unfortunately, the parameters may not be independent, meaning that for instance the optimal size of the sensors will depend on their number and on their positioning. Hence, a self-optimizing algorithm designed such that it automatically looks for a suitable range of parameters, or more precisely a suitable combination of parameters, is very useful. This is very advantageous as it can then be applied to any kind of spatial noise pattern later on; suitable parameters will be determined automatically. This algorithm can be based on an iterative approach that tests all possible combinations of all parameters in a suitable range, and applies the metric to determine the quality of the result, based on a number of representative images for the display that should be made uniform. Once the results have been obtained for all combinations, a suitable result can be selected. The selection can be based on various criteria, such as complexity, cost, and the maximal tolerable error that should be achieved.
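A compact sketch of such an exhaustive parameter sweep is given below; evaluate_configuration() is a hypothetical function that simulates the sensor grid on the representative noise patterns, runs the chosen interpolation and returns the error metric, and the parameter ranges are only illustrative.

```python
from itertools import product

def self_optimize(evaluate_configuration,
                  nx_range=(5, 6, 7, 8),
                  ny_range=(5, 6, 7, 8, 10, 13),
                  sizes_px=(50, 100, 150),
                  methods=("linear", "cubic", "biharmonic")):
    """Exhaustively test all parameter combinations and keep the one with the
    lowest error metric over the representative noise patterns."""
    best_cfg, best_err = None, float("inf")
    for nx, ny, size, method in product(nx_range, ny_range, sizes_px, methods):
        err = evaluate_configuration(nx, ny, size, method)
        if err < best_err:
            best_cfg, best_err = (nx, ny, size, method), err
    return best_cfg, best_err
```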
Note that these aspects of the sensor system can be interdependent, and the suitable sensor lay-out and size is a result of the combination of all these aspects. When these aspects of the sensor system are fixed, and combined with the earlier considerations for visibility and stability, the preferred design architecture of the photoconductive sensors is obtained.
The most suitable sensor design parameters also depend on the exact display type which is used in combination with the sensor system. A specific display type can even have different spatial noise patterns depending on its driving level, and therefore the ideal design of the sensor system has to be based not only on a single noise pattern, but a more global set of noise patterns over the entire display driving range have to be considered. Also, individual displays from a same type can have slightly different noise patterns. This should also be taken into account in the sensor design.
In Fig. 17, a high-resolution luminance map as emitted by the display is depicted, from which a low-resolution luminance map has to be obtained using the sensor system of the present invention. This luminance map has been obtained from a 5 MP medical grade monochrome display, which has a resolution of 2048 (horizontally) by 2560 pixels (vertically), using a high-quality, high-resolution camera.
Luminance measurements using a high-resolution camera are illustrated in Figs. 18a, 18b and 18c. These illustrations are limited to 1D, horizontal cross-sections of a single pixel row, for simplicity reasons. Note that the luminance measurements described here are perpendicular to the display's active area. Such measurements can typically be used to characterize the non-uniformity of the luminance (or color in an alternative embodiment) of a display, or they can alternatively be used as input for an algorithm to remove the low-frequency, global, spatial luminance trend.
In Fig. 18a, a cross-section of a profile measured using a high-resolution camera (suitably calibrated such that it measures luminance in the perpendicular direction as emitted by the display) on a relatively uniform display is presented. In Fig. 18b, an example of positions of the photoconductive sensors according to an embodiment of this invention is indicated using squares with the corresponding width of the sensors. In this figure, 10 sensors are used, merely as an illustration. The Gaussian high-frequency noise is averaged out by designing the sensors with a suitable size, and the measured points are a measure of the global trend only. For instance, the sensors in Fig. 18b are selected with a width of 1 cm per sensor.
The values captured by these sensors result in a 1D low-resolution sampling of the measured 1D curve. It is clear from Fig. 18b for anyone skilled in the art that a good interpolation or approximation can be suitably applied using this limited number of measurement points (for instance by using the pchip interpolation) with sensors according to this invention (for simplicity, one can assume the sensors operate as luminance sensors in the initial approximation), to obtain a good approximation of the camera measurement. At the very corners complex effects, such as Mach banding, can occur, and therefore a more uniform luminance profile is perceived. Therefore, the analysis is typically limited to a certain percentage of the display area, excluding the very edge of the display borders. On top of that, a denser concentration of sensors, potentially with a smaller width, can be used towards the borders to obtain a more suitable approximation. The obtained interpolated or approximated curve can then be used as input in a spatial luminance correction algorithm in order to obtain a uniform spatial luminance output. This spatial luminance correction algorithm basically applies a precorrection table to the driving levels of the display, to ensure images are correctly displayed. When the display is driven with the same driving level for every pixel, and the precorrection table is used, the resulting light output is spatially uniform.
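As a rough sketch of this 1D case (assuming the sensors behave as luminance sensors and ambient light has already been removed), the snippet below interpolates ten sensor samples with a PCHIP interpolant and derives a simple multiplicative precorrection factor that flattens the low-frequency trend; the sensor positions and readings are illustrative values only.

```python
import numpy as np
from scipy.interpolate import PchipInterpolator

sensor_x = np.linspace(100, 1900, 10)            # sensor centres along the row (px)
sensor_lum = np.array([388, 395, 401, 405, 408,  # spatially averaged readings
                       407, 404, 399, 393, 386], float)

pixels = np.arange(2048)
trend = PchipInterpolator(sensor_x, sensor_lum, extrapolate=True)(pixels)

target = trend.min()          # aim for the dimmest level so gains stay <= 1
gain = target / trend         # per-pixel precorrection factor
# The gain (or an equivalent driving-level precorrection table) is applied to
# the display driving so that a flat input produces a spatially uniform output.
```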
When this correction is applied, a cross-section of the emitted light is taken, as illustrated in Fig. 18c. In this figure, only the high-frequency noise remains, and the global, low-frequency spatial noise trend has been successfully eliminated by suitably applying a uniformity correction algorithm.
A horizontal cross-section has been used in the example described. In the vertical direction, more sensors will have to be used, since this type of display is typically used in portrait mode. A 5 MP display typically has a resolution of 2048 (horizontally) by 2560 pixels (vertically), in other words an aspect ratio of 4:5, and a pixel pitch of 0.165 mm. Therefore, 13 sensors in the vertical direction can be used, leading to a matrix of 10 by 13 sensors. Note that the numbers presented in this example are intended as an example; the optimized solution depends on the design parameters of the sensor system mentioned above.
The interpolation described above relates to the one-dimensional case. While this is very interesting to get a profound insight into the problem, the actual spatial luminance output of the display is a 2D map. Therefore, in the two-dimensional case, the sensors preferably define a two-dimensional grid instead of a single line. As before, every sensor stores a single value, namely the spatial (weighted) average of the measured data. This defines control points, and a two-dimensional interpolation or approximation method is then run through them. Again, the choice of the design parameters will determine the final shape. Two distinct models of sensor grids are considered in the analysis. In the first model, the values captured by the sensors are measured in 2D and the sensors are spread uniformly over the surface of the display. In the second model, special attention is devoted to the borders of the display.
To assess the error due to using the low-resolution measurement grid, several metrics are elaborated. For the first metric, a purely objective error computation can be used, by filtering the data captured by the camera and summing the absolute differences between the filtered data and the interpolated/approximated data, after which they are divided by the sum of all filtered camera measurements, to obtain the global relative absolute error. The filtering is based on a rotationally symmetric Gaussian low-pass filtered version of the measured luminance profile. The exact parameters of the filter will depend on the specific type of display which is considered. This filtering step removes the high-frequency content of the signal and retains the low-frequency content. This step is implemented because it is not the purpose of the method to correct the high-frequency noise. In addition, another objective metric consists in measuring the maximal local relative absolute error. Instead of measuring only a global error, this captures the local deviation from the data. Note that, in this metric, the initial signals are compared, before the uniformity correction algorithm is applied.
Moreover, as both shapes can be considered as images, we propose to use the SSIM metric as a second metric. The structural similarity (SSIM) is a general and commonly used tool to assess the difference in quality of two images, which is based on the human visual system. The first image is the uniform image we ideally want to reach. The second image is the ideal image we want to reach, with the scaled error modulated on top. The error is the difference between the actual measured signal and the interpolated/approximated signal. The error is scaled in the same way as the measured signal is scaled to obtain the ideal, uniform image. This scaled error is then added as a modulation on top of the ideal image. The resulting rescaled error is a consequence of the difference between the image we would obtain by using the interpolated or approximated curve instead of the actual curve for the luminance uniformity correction. Moreover, as the metric captures the similarity between two images, it is not necessary to filter the data. That is, the scaled error still contains the noise and this noise is accounted for by the metric. Figure 19 illustrates the rescale process for a cross-section. The interpolated data are rescaled to the ideal level, which is determined by the minimum of the interpolated data. The actual data are also rescaled with the same factor. Consequently, the error occurs as a modulation added on top of the ideal level. The value to which the ideal level is then normalized depends on the level of brightness of the image.
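A compact sketch of these two metrics is given below: the global relative absolute error computed against a Gaussian low-pass filtered camera measurement, and the SSIM index computed with scikit-image on the ideal uniform image versus the ideal image with the rescaled error modulated on top. The filter width and the data range handling are illustrative choices, not values prescribed by the text.

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from skimage.metrics import structural_similarity

def global_relative_error(camera_map, interp_map, sigma_px=25.0):
    """Global relative absolute error between the low-pass filtered camera
    measurement and the interpolated/approximated map (first metric)."""
    low_freq = gaussian_filter(camera_map, sigma=sigma_px)  # keep the global trend
    return np.abs(interp_map - low_freq).sum() / low_freq.sum()

def ssim_quality(ideal_uniform, ideal_plus_error):
    """Second metric: structural similarity between the ideal uniform image
    and the ideal image with the rescaled error added on top."""
    data_range = float(ideal_plus_error.max() - ideal_plus_error.min())
    return structural_similarity(ideal_uniform, ideal_plus_error,
                                 data_range=data_range if data_range > 0 else 1.0)
```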
When considering a uniform grid over 95% of the display width and height, preferably four additional parameters are considered, namely the number of sensors in the x-direction, the number of sensors in the y-direction, the size of the sensors and the interpolation method. The values were interpolated using cubic interpolation, linear interpolation, and a method based on biharmonic spline interpolation. A suitable method based on the biharmonic spline interpolation method is for instance the MATLAB 4 griddata method. Using this method, a uniform grid of 7x5 or 6x6 sensors is sufficient to obtain a relative absolute global error of less than 1% (the same 5MP display described above is used in this analysis), when using square sensors of 50 by 50 pixels. These results have also been obtained at higher driving levels; a slightly larger error was obtained at the very lowest driving levels. We also saw that there is again flexibility in the sensor size: similar results have been obtained for square sensors of 50 by 50 to 150 by 150 pixels. As the maximal local relative absolute error can still be in the range of 8%, a matrix with a higher number of sensors can be beneficial in case a smaller maximal local error is desired. Using the SSIM metric, the SSIM values were computed for each profile corresponding to the output of each display driving level and each respective level, and then their values were averaged. The SSIM results show that the images have a very similar structure and that the similarity increases with the number of sensors. However, the values cannot easily be used in an intuitive way to actually determine the best configuration, as this would require fixing an arbitrary threshold. Based on the metrics used, the best method among the three is the interpolation method based on the biharmonic spline interpolation method. It consistently produces the lowest global relative error, the best SSIM values and the minimal local error. These results show that the objective metrics and subjective metrics are consistent: the same conclusions were drawn for both metrics.
In the second model, alternative configurations of sensor grids with a focus on the borders have been studied. This is illustrated in Fig. 20a, which shows a local map of the error for digital driving level 496 (10 bit driving levels, ranging from 0 to 1023) when the sensors are located on a 6 by 6 uniform grid. Since the data illustrated in Fig. 20a are not extrapolated to the borders of the display, but only interpolated inside the convex hull defined by the set of sensors, there is an external ring which is put at 0. The main differences between the interpolated and the true signal are located towards the borders of the interpolated area. This is a structural error, which is persistent throughout the higher driving levels of the display. For the lowest driving levels, this observation no longer holds. Therefore, when analyzing the two-dimensional case, a non-uniform grid with smaller spacing between the sensors at the borders was chosen. In Fig. 20b the error is depicted, where the dots indicate the location of the sensors of size 50 by 50. The global relative error is denoted by "e", while the maximal local error is denoted by "m". Here the grid used is non-uniform on the borders of the interpolated area.
More specifically, a grid with a suitable spacing between the first two sensors was constructed, both in the horizontal and vertical direction, whereby this spacing is half the spacing between two other adjacent sensors. Though this configuration uses the exact same number of sensors, it offers a significant improvement of the interpolation, in all but the very lowest driving levels. At the very darkest levels, a slightly larger error was obtained when using this alternative grid.
The results described here above, for a cross-section and for the entire active area, are based on the assumption that the matrix of sensors operates as luminance sensors, which measure light emitted by the display in the perpendicular direction, i.e. directly underneath the sensor's photosensitive area. Tests were also done for the case where the sensor is not an ideal luminance sensor and has an equal response independent of the angle at which the ray impinges on the photosensitive area. It is clear for the reader skilled in the art that the distance between the position at which the light is emitted and the position at which the light is captured now has an impact on the measurement. Tests were for instance done at a separation of 0.5, 2, 3.5 and 10 mm between the sensor and the pixels. In Fig. 21, the obtained results are presented for a uniform grid (left columns) and a non-uniform grid (right columns), for a broad range of sensor resolutions (horizontal axis on the plots), combined with a broad range of sensor sizes (indicated above: 5 mm, 10 mm, 15 mm, 20 mm, 25 mm, or expressed in number of pixels: 30, 60, 90, 120 and 151 pixels). The global percentual relative absolute error is presented on the vertical axis. In Fig. 21a, the results for the darkest level are presented, while in Fig. 21b, the results for the brightest level are presented (DDL 255 for 8 bit display driving). It is clear that very good results were also obtained when using such a sensor. Also, it is assumed that ambient light is eliminated from the measured value as described earlier.
While the invention has been illustrated and described in detail in the drawings and foregoing description, such illustration and description are to be considered illustrative or exemplary and not restrictive; the invention is not limited to the disclosed embodiments.
4.7.3 Applications/performance improvements based on ambient light measurements
The other potential applications highlighted in the summary of the invention do not impose strict requirements on the lay-out of the photoconductive sensors, or the size of their light sensitive area. The measurements for the DICOM checks can for instance be made at the locations of the sensors as described above for the uniformity correction, and the measured values can be interpolated/approximated at the locations between the different sensors.
5 A summary of properties of Photoconductive sensors according to embodiments of the present invention
5.1 General parameters
Some general device performance parameters can be defined for the photoconductive sensors as a whole.
Transmission (in perpendicular direction). The transmission is a combined effect of the different components comprised in the sensor architecture.
The luminance transmission is preferably 60-98%.
The color shift introduced by the sensors should be reduced as much as possible, by suitably designing the sensor system.
To use the sensor to measure color, the sensor should have a spectral sensitivity that at least covers the spectral power distribution of the display's primaries. If the sensor is also used to measure ambient light, the spectral sensitivity should cover the whole visible range of wavelengths, 380-780 nm.
Measurement error:
For a usable sensor: 0-20% on luminance
Preferably: 0-3% on luminance
5.2 Substrate
5.2.1 Optical parameters
5.2.1.1 Transmission
The substrate is a transparent material, of which glass is an example. Preferably the substrate has a very high luminance transmission in the visible range. The luminance transmission is preferably 60-98%.
The losses due to reflection from the substrate are typically optimized in combination with an Antireflection coating (ARC).
Other substrates are not excluded either, but will typically perform less well.
5.2.1.2 Coloring
Typically, glass substrates have a very limited coloring.
5.2.1.3 Spatial uniformity
The transmission of the substrate should be spatially uniform. This is typically the case for glass substrates.
5.2.1.4 ARC
To be fine-tuned in combination with the rest of the sensor design.
5.2.2 Thickness
Possible: 0.3 mm to 20 mm (the upper limit is not a real technological limit), preferably 0.7-1.1 mm
5.2.3 Thermal resistance
The substrate should be able to withstand the thermal stress during the fabrication of the materials that are added on top: the semitransparent conductor and the organic layers.
Additionally, it should be able to withstand potential thermal effects due to any processing such as patterning, and the thermal effects due to the heating of the display and environmental conditions in which the display is used. This is typically not a problem for glass substrates.
5.2.4 Mechanical robustness
The substrate should preferably be a rigid substrate, able to support the additional layers added on top. A flexible substrate is nevertheless technologically possible to use in a working sensor system.
5.3 Semitransparent conductor
5.3.1 Optical parameters
5.3.1.1 Transmission
5.3.2 Sheet resistance of semitransparent conductive material
The semitransparent conductive material can be an inorganic or organic material.
Parameters in the specific case of ITO:
Good results have been obtained in the range of 60 Ω/□ to 125 Ω/□.
A usable broader range is 1 Ω/□ to 5000 Ω/□.
5.3.3 Work function
A suitable work function of the Semitransparent conductor is defined relative to the HOMO level of the HTL on top of the patterned substrate. The work function of the semitransparent conductor should be higher than the HOMO level of the HTL on top of the patterned substrate.
5.3.4 "Stick" to the substrate
A suitable semitransparent material should be chosen that remains fixed on the substrate. Examples of semitransparent conductors that can be attached to the substrate are ITO and PEDOT:PSS.
5.3.5 Transparent conductor thicknesses:
Parameters in the specific case of ITO: Possible thicknesses are 5-450 nm
Typical thicknesses for a good sensor are in the range 25-65 nm.
5.3.6 Uniformity: thickness, optical, ...
The transmission of the light over the substrate should be very uniform, typically in the order of 85-100% (Lmax-Lmin)/Lmax.
5.3.7 Patternable
Requirement: it should be possible to pattern the substrate with a suitable patterning technology.
5.4 Semitransparent conductor patterning
5.4.1 Finger width over gap width ratio
Possible range: 0.5 to 1, although the upper limit is not a real limitation.
A realistic range for a good visibility design (e.g. in the case of ITO) is 30 to 1 to 5 to 1.
5.4.2 Gap between floating and conducting material (e.g. ITO)
A possible range is 1 to 100 μm.
A suitable range of gaps is for instance 4 to 25 μm, typically between 8 and 20 μm.
5.4.3 Number of fingers
The number of fingers may for instance be anything between 2 and 5000, more preferably between 10 and 2500, suitably between 25 and 700.
5.4.4 Sensor size
The surface area of a single semitransparent sensor may be in the order of square micrometers but is preferably in the order of square millimeters, for instance between 10 and 7000 square millimeters. One suitable finger dimension is for instance 12000 by 170 micrometers.
5.4.5 Number of sensors & sensor
The number of sensors in the design depends on the intended application, and is not a limitation of the invention.
5.4.6 Finger shape
Several embodiments are possible:
-Straight
-Spiral shaped
-Semi-Random shape, as described in the text
The randomized shape is expected to render the best results.
5.4.7 Transmission
The semitransparent conductor typically has an absorption in the range of 5-30%.
5.5 Organic materials
5.5.1 Layer functionality
Two types of layers are typically used:
An Exciton generation layer, which is photosensitive and generates excitons. A hole transport layer, which transports holes between the electrodes.
An additional layer can be a Charge separation layer, which separates holes and electrons at the interface of two organic layers, to keep them from recombining.
5.5.2 Number of layers
Typically, in a good working device, a two-layer or three-layer stack is used. The two layer stack has a HTL directly on top of the patterned substrate, with an EGL on top of the HTL.
The three-layer stack has an additional HTL on top of the two-layer stack.
Other, more complex stacks are not excluded either, for instance including an additional charge separation layer, or multiple exciton generation layers.
5.5.3 HOMO/LUMO levels
The relative location of the HOMO and LUMO levels of the HTL and EGL are important in a proper working device.
The LUMO level of the HTL should be higher than the LUMO level of the EGL, and the HOMO level of the HTL should be higher than the HOMO level of the EGL.
In the specific embodiment of PTCBI, the HOMO and LUMO levels are respectively around 6.3 eV and 4.6 eV. Therefore, a suitable HTL deposited on ITO for instance has HOMO and LUMO levels in the following ranges (a small numerical check is sketched after these ranges):
-6.3 eV < HOMO < -4.7 eV
-4 eV < LUMO < -1 eV
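Purely as an illustration of these conditions, the small check below expresses the levels as negative energies relative to the vacuum level, consistent with the ranges above; the candidate HTL values are assumptions, not taken from the text.

```python
# Illustrative check of the energy-level conditions with PTCBI as the EGL.
EGL_HOMO, EGL_LUMO = -6.3, -4.6                    # eV, as quoted for PTCBI

def htl_levels_suitable(homo, lumo):
    # exciton dissociation requires the HTL levels to lie above the EGL levels,
    # and the levels should fall within the ranges listed above
    return (lumo > EGL_LUMO and homo > EGL_HOMO
            and -6.3 < homo < -4.7 and -4.0 < lumo < -1.0)

print(htl_levels_suitable(-5.2, -2.3))             # True for an assumed typical hole transporter
```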
5.5.4 Layer thicknesses
5.5.4.1 Thickness of the HTLs
The optimal layer thickness depends on the exact organic material used in the HTL.
Possible range of thicknesses: 1-300 nm
Typical for a good sensor: 60-150 nm (or even narrower, e.g. 80-100 nm, as mentioned in the text).
5.5.4.2 Thickness of the EGL
E.g. In the case of PTCBI:
Typical for good sensor: 5 to 15 nm
Other materials can have other spectral absorption curves over the visible spectrum, which impacts the layer thickness.
5.5.5 Transmission & coloring
The luminance absorption of the organic stack is typically in the range of 3-30%.
The luminance transmission of the organic stack is preferably designed to be in the range of 80-95%.
The color shift introduced by the sensors should be reduced as much as possible, by creating a stack that, combined with the other components of the sensor, has a rather uniform absorption over the visible spectrum, which results in a limited coloring of the light.
As mentioned in the text above, the absorption is in the range of 3-30%, which is reasonable.
5.5.6 Spectral sensitivity
To use the sensor to measure color, the sensor should have a spectral sensitivity that at least covers the spectral power distribution of the display's primaries. If the sensor is also used to measure ambient light, the spectral sensitivity should cover the whole visible range of wavelengths, 380-780 nm.
5.5.7 Spatial color and transmission uniformity
The organic layers are typically deposited uniformly across the display's active area, to avoid local coloring. To this end, a uniform deposition of the organic layers is used, typically with a uniformity between 90-100% over the entire area. The color and luminance uniformity of the sensor follow from these layer (non-)uniformities.
5.5.8 Glass transition temperature
The glass transition temperature of the materials should be high enough to keep them from crystallizing when used in front of a display. The glass transition temperature therefore depends on the exact display technology, but is typically in the range of 60°C to 300°C (the upper limit is not a constraint).
5.6 Encapsulation
The sensors need to be encapsulated to avoid potential degradations of the organic materials. Several encapsulation methodologies can be used to avoid these degradations:
Standard encapsulation methodology
Using encapsulation glue
Using an AlOx-film, sputtered on top of the organic layers, glue and uniform glass.
Details are described in the text.
6 Controller
A controller is needed for several reasons:
• To do all the low-level interaction with the photoconductive sensor (which requires a dedicated electronics board)
• Software is required:
o To control and to properly interact with the electronics board
o For the applications of the sensor system including luminance and chromaticity improvement algorithms.
• To interact with the display, in order to change for instance the driving of its pixels
6.1 Electronics board
6.1.1 Applied signal type (voltage)
It is preferred that the controller applies a voltage over the sensor, and reads out the consequent current. The inverse is also possible, however.
6.1.2 Shape of applied signal
The type of voltage driving signal applied to the sensor can for instance be a square wave, a sinusoidal wave, or more exotic shapes known by the skilled person. Preferably, symmetrical waves going from a positive voltage to the same negative voltage are used. For example, good results were obtained using a square wave that switches between a positive and a negative voltage. Preferably, the waveform applied does not result in a DC voltage over the cell; in other words, when the applied waveform is integrated over one period, the integrated voltage value is zero.
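A minimal sketch of such a drive signal, with assumed amplitude, frequency and sampling values, verifying that the integral over one period vanishes:

```python
import numpy as np

amplitude, freq, fs = 1.0, 0.5, 1000.0            # V, Hz, samples per second (assumed values)
t = np.arange(0.0, 1.0 / freq, 1.0 / fs)          # one period of the waveform
v = amplitude * np.sign(np.sin(2 * np.pi * freq * t))   # symmetric square wave
print(v.mean())                                    # ~0: no net DC voltage over the cell
```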
6.1.3 Amplitude
In addition, the amplitude of the applied signal has an impact on the resulting stability of the measured signal and is typically chosen between 0.3-3 V.
However, an operational range can for instance be 0.05-500 V.
6.1.4 Frequency
The suitable frequency for a stable sensor read-out is typically in the range of 0.05 to 2.5 Hz, as mentioned in the text.
A usable range can be broader, however. For instance 0.01 Hz to 60Hz.
6.2 Software
6.2.1 Calibration algorithms
To use the sensor in practice, a suitable calibration is required to measure luminance and chromaticity. As the sensor is sensitive on both sides, it responds both to light emitted by the display and to ambient light.
6.2.1.1 Calibrating display light
The calibration should properly take into consideration the sensor's angular sensitivity, spectral sensitivity and non-linearity, as well as the display's spectral and angular emission.
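One straightforward way to capture the sensor's non-linearity, sketched below with assumed readings and reference values, is to fit a per-sensor calibration curve against an external reference luminance meter at several display driving levels. This is an illustrative approach only, not necessarily the calibration procedure detailed elsewhere in the text, which also accounts for angular and spectral effects.

```python
import numpy as np

# Assumed data: normalized sensor readings and the corresponding luminance
# measured with a reference luminance meter (cd/m^2) at several driving levels.
raw = np.array([0.02, 0.11, 0.34, 0.61, 0.83, 1.00])
ref = np.array([0.5, 40.0, 150.0, 280.0, 400.0, 500.0])

coeffs = np.polyfit(raw, ref, deg=3)               # simple non-linearity model

def sensor_to_luminance(reading):
    """Convert a raw sensor reading to an estimated luminance in cd/m^2."""
    return float(np.polyval(coeffs, reading))

print(sensor_to_luminance(0.5))                    # estimate for an intermediate reading
```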
6.2.1.2 Calibrating for ambient light
In order to be able to distinguish the contribution of the ambient light, the sensor system can comprise an optical, electronic or mechanical filter.
6.2.1.3 Measuring ambient light
The obtained result for the ambient light contribution needs to be matched to a device with a proper V(λ) spectral sensitivity curve in order to obtain an actual ambient light measurement.
6.2.1.4 Sensor ageing
In case the sensor system would degrade during its lifetime, several solutions can be used to again ensure a proper calibration of the sensor, which include using a reference sensor to recalibrate the sensor, and using a model+LEDs, as elaborated in the text.
6.2.2 Applications
The applications of the sensor do not impact the basic sensor architecture as described above. However, there are several design choices of the sensor remaining, which can be suitably selected for the intended applications. The remaining design choices include lay-out of the matrix of sensors (their locations on the screen), and the size of individual sensors.
6.2.2.1 Luminance/color uniformity checks
The number of sensors used to evaluate the uniformity can be chosen almost arbitrarily, as there only exist certain guidelines.
There can be for instance 5 sensors, one in the center, and 4 in the corners of the display.
6.2.2.2 Luminance/color uniformity corrections
The most suitable location and size of the individual sensors used in the sensor grid depend on the exact display type. There is no fundamental limit to the sensor size or the number of sensors used, apart from the fact that the sensors cannot overlap.
More details concerning this are elaborated in the text. As mentioned, there are several main aspects of the sensor system that can be altered to obtain optimal results for a luminance uniformity correction algorithm:
• Sensor size
• Number of sensors
• The used interpolation/approximation technique
• The quality assessment metric
• Percentage of the display's width and height which is considered in the uniformity correction.
Note that these aspects of the sensor system can be interdependent, and the suitable sensor lay-out and size is a result of the combination of all these aspects.
For instance, suitable parameters can be:
• Sensor size: for instance, squares with a side of 5-500 pixels, most suitably squares with a side of 50-150 pixels
• Number of sensors: for instance in the range of 3 by 3 to 400 by 500, most suitably in the range of 6 by 6 to 50 by 40
• The used interpolation/approximation technique: for instance a method based on biharmonic spline interpolation; others are mentioned in the text
• The quality assessment metric: for instance SSIM, the global relative absolute error or the maximal local relative absolute error (these metrics are sketched after this list)
• Percentage of the display's width and height which is considered in the uniformity correction. For instance 90-100%, preferably 95-99%
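For reference, the three quality metrics listed above can be sketched as follows when comparing an interpolated luminance map against a measured one; the toy maps and function names are assumptions for the example, and scikit-image is used for SSIM:

```python
import numpy as np
from skimage.metrics import structural_similarity

def global_relative_abs_error(est, ref):
    return np.mean(np.abs(est - ref)) / np.mean(ref)

def max_local_relative_abs_error(est, ref):
    return np.max(np.abs(est - ref) / ref)

def ssim_score(est, ref):
    return structural_similarity(est, ref, data_range=float(ref.max() - ref.min()))

# Toy example: a smooth "measured" map and a slightly deviating interpolation
ref_map = 400.0 + 20.0 * np.random.default_rng(0).standard_normal((64, 80)).cumsum(axis=1) / 80.0
est_map = ref_map * (1 + 0.01 * np.sin(np.linspace(0, 3, ref_map.size)).reshape(ref_map.shape))

print(global_relative_abs_error(est_map, ref_map),
      max_local_relative_abs_error(est_map, ref_map),
      ssim_score(est_map, ref_map))
```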

Claims

1. A display device comprising at least one display area provided with a plurality of pixels, with for each display area, a sensor for detecting a property of light emitted from the said display area (5) into a viewing angle of the display device (1) or for detecting ambient light falling onto the display device, which sensor is located in a front section of said display device in front of said display area, the sensor comprising an at least partially transparent organic photoconductive layer, further comprising means to stabilize the organic photoconductive layer.
2. The display device of claim 1 wherein the means to stabilize the organic
photoconductive layer comprises a glass transition temperature of the organic photoconductive layer being above 60°C.
3. The display device of claim 1 wherein the means to stabilize the organic
photoconductive layer is a circuit for applying a stabilizing signal to the sensor.
4. The display according to claim 3 wherein the signal applied is a square wave.
5. The display device of any of claims 1 to 4, wherein the organic photoconductive layer comprises an exciton generation layer (EGL) and a charge transport layer (CTL), the charge transport layer (CTL) being in contact with a first and a second semitransparent electrode.
6. The display device of claim 5 wherein the first and second semitransparent electrodes have finger-shaped interdigitated extensions.
7. The display device according to claim 6 in which the fingers are straight, spiraled or semi-random in shape.
8. The display device of any of the claims 5 to 7 wherein the materials for the CTL and the EGL are such that the LUMO energy level of the HTL is higher than the LUMO energy level of the EGL and the HOMO energy level of the HTL is higher than the HOMO energy level of the EGL, such that an exciton can dissociate at the interface.
9. The display device of any of claims 5 to 8 further comprising a charge separation layer (CSL) between the CTL and the EGL.
10. The display device of any previous claim, wherein the first and second
semitransparent electrodes are connected to a semitransparent track, conducting a measurement signal from said sensor within said viewing angle to a controller.
11. The display device as claimed in any of the claims 5 to 10, wherein the first and second semitransparent electrodes comprise an electrically conductive oxide.
12. The display device according to any of the previous claims wherein the organic photoconductive layer comprises a first charge transport layer (CTL), an exciton generation layer (EGL) and a second charge transport layer.
13. The display device of any of the claims 5 to 12 wherein the charge transport layer is a hole transport layer (HTL).
14. The display device of any of the claims 5 to 13, wherein the first and second semitransparent electrodes are made from a material with a work function higher than the HOMO energy level of the charge transport layer.
15. The display device of any of the claims 5 to 14 wherein the resistivity of the first and second transparent electrodes is 1 to 5000 ohm per square, preferably 60 to 125 ohm per square.
16. The display device of any previous claim in which the sensor is encapsulated.
17. Use of the display device as claimed in any of the previous claims for simultaneous display of an image and sensing a light property in at least one display area.
18. Use as claimed in claim 17, wherein the light property is the luminance and wherein color measurements are sensed by the at least one sensor of the display device in a calibration mode.
19. A partially transparent sensor for use with a display device comprising at least one display area provided with a plurality of pixels, the partially transparent sensor being for detecting a property of light emitted from the said display area into a viewing angle of the display device or for detecting ambient light falling onto the display device, the sensor comprising an organic photoconductive layer, further comprising means to stabilize the organic photoconductive layer.
20. The sensor of claim 19 wherein the means to stabilize the organic photoconductive layer comprises a glass transition temperature of the organic photoconductive layer being above 60°C.
21. The sensor of claim 19 wherein the means to stabilize the organic photoconductive layer is a circuit for applying a stabilizing signal to the sensor.
22. The sensor according to claim 21 wherein the signal applied is a square wave.
23. The sensor of any of claims 19 to 22, wherein the organic photoconductive layer comprises an exciton generation layer (EGL) and a charge transport layer (CTL), the charge transport layer (CTL) being in contact with a first and a second semitransparent electrode.
24. The sensor of claim 23 wherein the first and second semitransparent electrodes have finger-shaped interdigitated extensions.
25. The sensor according to claim 24 in which the fingers are straight, spiraled or semi- random in shape.
26. The sensor of any of the claims 23 to 25 wherein the materials for the CTL and the EGL are such that the LUMO energy level of the HTL is higher than the LUMO energy level of the EGL and the HOMO energy level of the HTL is higher than the HOMO energy level of the EGL, such that an exciton can dissociate at the interface.
27. The sensor of any of claims 23 to 26 further comprising a charge separation layer (CSL) between the CTL and the EGL.
28. The sensor of any of the claims 23 to 27, wherein the first and second
semitransparent electrodes are connected to a semitransparent track, conducting a measurement signal from said sensor within said viewing angle to a controller.
29. The sensor as claimed in any of the claims 23 to 28, wherein the first and second semitransparent electrodes comprise an electrically conductive oxide.
30. The sensor according to any of the claims 19 to 29 wherein the organic
photoconductive layer comprises a first charge transport layer (CTL), an exciton generation layer (EGL) and a second charge transport layer.
31. The sensor of any of the claims 23 to 30 wherein the charge transport layer is a hole transport layer (HTL).
32. The sensor of any of the claims 23 to 31, wherein the first and second
semitransparent electrodes are made from a material with a work function higher than the HOMO energy level of the charge transport layer.
33. The sensor of any of the claims 23 to 32 wherein the resistivity of the first and second transparent electrodes is 1 to 5000 ohm per square, preferably 60 to 125 ohm per square.
34. The sensor of any of the claims 19 to 33 in which the sensor is encapsulated.

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/EP2012/057947 WO2013164015A1 (en) 2012-04-30 2012-04-30 A display integrated semitransparent sensor system and use thereof

Publications (1)

Publication Number Publication Date
WO2013164015A1 true WO2013164015A1 (en) 2013-11-07

Family

ID=46319100

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/EP2012/057947 WO2013164015A1 (en) 2012-04-30 2012-04-30 A display integrated semitransparent sensor system and use thereof

Country Status (1)

Country Link
WO (1) WO2013164015A1 (en)


Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1194013A1 (en) * 2000-09-29 2002-04-03 Eastman Kodak Company A flat-panel display with luminance feedback
US6950098B2 (en) 2001-07-03 2005-09-27 Barco N.V. Method and system for real time correction of an image
WO2004023443A2 (en) 2002-09-09 2004-03-18 E.I. Du Pont De Nemours And Company Organic electronic device having improved homogeneity
EP1424672A1 (en) 2002-11-29 2004-06-02 Barco N.V. Method and device for correction of matrix display pixel non-uniformities
WO2004086527A1 (en) * 2003-03-28 2004-10-07 Siemens Aktiengesellschaft Multi-functional sensor display
US20050200293A1 (en) * 2004-02-24 2005-09-15 Naugler W. E.Jr. Penlight and touch screen data input system and method for flat panel displays
US20070052874A1 (en) 2005-09-08 2007-03-08 Toshiba Matsushita Display Technology Co., Ltd. Display apparatus including sensor in pixel
WO2009089470A2 (en) * 2008-01-11 2009-07-16 Massachusetts Institute Of Technology Photovoltaic devices

Non-Patent Citations (11)

* Cited by examiner, † Cited by third party
Title
BLUME H; HO AMK; STEVENS F; STEVEN PM: "Practical aspects of gray-scale calibration of display systems", PROC SPIE, vol. 4323, 2001, pages 28 - 41
HO JOHN ET AL: "Lateral organic bilayer heterojunction photoconductors", APPLIED PHYSICS LETTERS, AIP, AMERICAN INSTITUTE OF PHYSICS, MELVILLE, NY, US, vol. 93, no. 6, 14 August 2008 (2008-08-14), pages 63305 - 63305, XP012113490, ISSN: 0003-6951, DOI: 10.1063/1.2949317 *
J PANG; Y TAO; S FREIBERG; XP YANG; M D'IORIO; SN WANG, JOURNAL OF MATERIALS CHEMISTRY, vol. 12, no. 2, 2002, pages 206 - 212
J. M. GEARY ET AL.: "The mechanism of polymer alignment of liquid-crystal materials", J. APPL. PHYS., vol. 62, 1987, pages 10
JOHN C. HO: "Applied Physics Letters 93", ALEXI ARANGO AND VLADIMIR BULOVIC, article "Lateral organic bilayer heterojunction photoconductors"
MORIMUNE T ET AL: "SEMITRANSPARENT ORGANIC PHOTODETECTORS UTILIZING SPUTTER-DEPOSITED INDIUM TIN OXIDE FOR TOP CONTACT ELECTRODE", JAPANESE JOURNAL OF APPLIED PHYSICS, THE JAPAN SOCIETY OF APPLIED PHYSICS, JAPAN SOCIETY OF APPLIED PHYSICS, TOKYO; JP, vol. 44, no. 4B, 1 January 2005 (2005-01-01), pages 2815 - 2817, XP001245855, ISSN: 0021-4922, DOI: 10.1143/JJAP.44.2815 *
SANG-HEE KO PARK; JIYOUNG OH; CHI-SUN HWANG; JEONG-IK LEE; YONG SUK YANG; HYE YONG CHU; KWANG-YONG KANG, ULTRA THIN FILM ENCAPSULATION OF ORGANIC LIGHT EMITTING DIODE ON A PLASTIC SUBSTRATE, ETRI JOURNAL, vol. 27, no. 5, October 2005 (2005-10-01)
TOSHIYUKI ABE A; SOU OGASAWARA A; KEIJI NAGAI B; TAKAYOSHI NORIMATSU, DYES AND PIGMENTS, vol. 77, 2008, pages 437 - 440
V.I. ARKHIPOV; H. BÄSSLER, PHYS STATUS SOLIDI A, vol. 201, 2004, pages 1152
W. KEUNING; P. VAN DE WEIJER; H. LIFKA; W. M. M. KESSELS; M. CREATORE: "Cathode encapsulation of OLEDs by atomic layer deposited A1203 films and A1203/a-SiNx:H stacks", J. VAC. SCI. TECHNOL. A, vol. 30, 2012, pages 01A131
YASUHIKO SHIROTA, JOURNAL OF MATERIALS CHEMISTRY, vol. 10, 2000

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9826226B2 (en) 2015-02-04 2017-11-21 Dolby Laboratories Licensing Corporation Expedited display characterization using diffraction gratings
US11122243B2 (en) 2018-11-19 2021-09-14 Flightsafety International Inc. Method and apparatus for remapping pixel locations
US11595626B2 (en) 2018-11-19 2023-02-28 Flightsafety International Inc. Method and apparatus for remapping pixel locations
US11812202B2 (en) 2018-11-19 2023-11-07 Flightsafety International Inc. Method and apparatus for remapping pixel locations
US12192686B2 (en) 2018-11-19 2025-01-07 Flightsafety International Inc. Method and apparatus for remapping pixel locations
US11029592B2 (en) 2018-11-20 2021-06-08 Flightsafety International Inc. Rear projection simulator with freeform fold mirror
US11709418B2 (en) 2018-11-20 2023-07-25 Flightsafety International Inc. Rear projection simulator with freeform fold mirror
CN115023750A (en) * 2020-02-21 2022-09-06 Eizo株式会社 Method for detecting light emitted from display screen and display device
CN115023750B (en) * 2020-02-21 2023-08-15 Eizo株式会社 Method for detecting light emitted from display screen and display device
CN114417246A (en) * 2022-02-28 2022-04-29 昇显微电子(苏州)有限公司 Method and device for automatically evaluating Demura effect

Similar Documents

Publication Publication Date Title
US20130278578A1 (en) Display device and means to improve luminance uniformity
US11037525B2 (en) Display system and data processing method
EP2659306B1 (en) Display device and means to measure and isolate the ambient light
JP4571492B2 (en) Display circuit with optical sensor
US10444555B2 (en) Display screen, electronic device, and light intensity detection method
JP5301240B2 (en) Display device
JP5060810B2 (en) Liquid crystal display device, driving method and manufacturing method thereof
US10089924B2 (en) Structural and low-frequency non-uniformity compensation
CN1211770C (en) Electroluminescent display device with luminance correction in dependence on age and ambient light
TWI859745B (en) Display device and electronic device
WO2012089849A1 (en) Method and system for compensating effects in light emitting display devices
TW201001390A (en) Display device and method for luminance adjustment of display device
JP5740132B2 (en) Display device and semiconductor device
JP2005530217A5 (en)
WO2013164015A1 (en) A display integrated semitransparent sensor system and use thereof
JP2010211374A (en) Touch panel and electronic equipment
JP5094489B2 (en) Display device
CN108230997A (en) Oled substrate and its luminance compensation method, display device
RU2488193C1 (en) Phototransistor and display unit equipped with it
CN112313736A (en) Display devices and electronic equipment
CN105047129B (en) Structure and low frequency Inconsistency compensation
WO2012089847A2 (en) Stability and visibility of a display device comprising an at least transparent sensor used for real-time measurements
JP2011107454A (en) Display device
US12148401B2 (en) Display substrate, display device and compensation method thereof
US20060044299A1 (en) System and method for compensating for a fabrication artifact in an electronic device

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 12728041

Country of ref document: EP

Kind code of ref document: A1

DPE1 Request for preliminary examination filed after expiration of 19th month from priority date (pct application filed from 20040101)
NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 12728041

Country of ref document: EP

Kind code of ref document: A1