GB2363024A - Calibrating an imaging system
- Publication number
- GB2363024A (application GB0108478A)
- Authority
- GB
- United Kingdom
- Prior art keywords
- image data
- image
- target
- attribute
- output apparatus
- Prior art date
- Legal status
- Withdrawn
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N17/00—Diagnosis, testing or measuring for television systems or their details
- H04N17/04—Diagnosis, testing or measuring for television systems or their details for receivers
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/80—Camera processing pipelines; Components thereof
- H04N23/84—Camera processing pipelines; Components thereof for processing colour signals
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N9/00—Details of colour television systems
- H04N9/64—Circuits for processing colour signals
Abstract
An imaging system (100) and method of calibrating the imaging system (100). The imaging system (100) comprises an imaging apparatus (120) operatively associated with an output apparatus (300, 340). The imaging apparatus (120) stores first image data representative of a target (400) having at least one attribute associated therewith. The imaging apparatus (120) outputs the first image data to the output apparatus (300, 340), which displays an image of the target (400). The imaging apparatus (120) generates second image data representative of the displayed target (400). Processing criteria in the imaging system (100) are adjusted to minimize the difference between the attribute of the target (400) represented in the first image data and the attribute of the target (400) represented by the second image data.
Description
DEVICE AND METHOD FOR CALIBRATING AN IMAGING SYSTEM
Technical Field of the Invention
The present invention relates to image processing and, more particularly, to an imaging system and method for calibrating the imaging system so that optimal replications of images of objects are displayed on an output device associated with the imaging system.
Background of the Invention
Digital cameras are devices that produce machine-readable image data representative of an image of an object. The machine-readable image data generated by the digital camera is often referred to herein simply as "image data." The process of generating image data representative of an image of an object is often referred to herein simply as "imaging" the object. The image data generated by the digital camera is transmitted to an output device, such as a video monitor or a printer, that replicates the image of the object.
The digital camera typically has optical elements, a two-dimensional photodetector array, a data storage device, and a processor. The optical elements serve to focus an image of the object onto the two-dimensional photodetector array and may comprise various lenses. The two-dimensional photodetector array generates image data representative of the optical image focused onto it. The processor serves to process the image data and to transfer the image data to and from the data storage device and the output device. The data storage device serves to store the image data for future processing.
Each photodetector generates image data representative of a small portion of the optical image of the object. The accumulation of image data generated by the plurality of photodetectors is representative of the image of the object, similar to a mosaic representation of the image of the object. Each photodetector outputs a data value that corresponds to the intensity of light it receives. For example, photodetectors that receive high intensities of light may output high data values. Likewise, photodetectors that receive low intensities of light may output low data values. The range of light intensities that may be converted to image data is one of the factors that affects the "tone reproduction" of the digital camera. Tone reproductions typically vary between different two-dimensional photodetector arrays, which in turn causes tone reproductions to vary between different digital cameras.
Tone reproduction in a displayed image may be modified by processing luminance ratios to form a specific tone map.
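The patent does not detail how such a tone map is formed or applied. The following minimal sketch shows one assumed construction: a power-law tone curve baked into a lookup table. The gamma value of 2.2 and the 8-bit depth are illustrative assumptions, not taken from the disclosure.

```python
import numpy as np

def build_tone_map(gamma: float = 2.2, levels: int = 256) -> np.ndarray:
    """Build a lookup table mapping linear luminance ratios (0..1)
    to display code values via a simple power-law tone curve."""
    ratios = np.linspace(0.0, 1.0, levels)      # linear luminance ratios
    curve = ratios ** (1.0 / gamma)             # lift shadows, compress highlights
    return np.round(curve * (levels - 1)).astype(np.uint8)

def apply_tone_map(image: np.ndarray, lut: np.ndarray) -> np.ndarray:
    """Remap an 8-bit grayscale image through the tone-map lookup table."""
    return lut[image]

if __name__ == "__main__":
    lut = build_tone_map(gamma=2.2)
    ramp = np.arange(256, dtype=np.uint8)       # a gray ramp as a stand-in image
    print(apply_tone_map(ramp, lut)[:8])        # dark tones are lifted by the curve
```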
Two-dimensional photodetectors that image color objects require additional hardware and processing capabilities. The image data representative of a color image typically consists of the intensities of specific red, green, and blue spectral components of the image of the object. In summary, generating the image data is achieved by having selected photodetectors generate image data representative of either a red, a green, or a blue spectral component of the image of the object. This may, as an example, be achieved by locating a two-dimensional photodetector assembly that has an array of color filters between the object and the photodetectors. The array of filters permits only the specific spectral components of red, green, or blue light to pass to a single photodetector. Accordingly, single photodetectors image single and specific spectral components of the image of the object. As with the other components comprising the digital camera, the filters can vary from one digital camera to another. For example, one red filter may pass a slightly different wavelength of light than another red filter. Thus, different digital cameras process color differently.
In some digital cameras, the array of filters and their accompanying photodetectors are arranged in groups of four, which are often referred to herein as "super pixels" or sometimes simply as "pixels." A pixel typically consists of one photodetector that images red light, two photodetectors that image green light, and one photodetector that images blue light. The pattern of photodetectors comprising the super pixel may, as an example, correspond to the Bayer pattern. By combining the red, green, and blue light imaged by the photodetectors, each pixel is able to represent a wide spectrum of colors. However, due to the limited number of base or primary light colors that are combined, e.g., red, green, and blue, the spectrum of colors that may be represented in image data by the digital camera is limited.
The processor serves to process, store, and output the image data to the output device. The processor combines the red, green, and blue spectral components of the image data pursuant to predetermined ratios. The process of creating an image based on image data from the pixels is sometimes referred to as "demosaicing." An example of generating image data and demosaicing is set forth in United States Patent 5,838,818 of Herley for ARTIFACT REDUCTION COMPRESSION METHOD AND APPARATUS FOR MOSAICED IMAGES, which is hereby incorporated for all that is disclosed therein.
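The referenced Herley patent describes one particular approach; as a generic, assumed stand-in for demosaicing, the sketch below scatters RGGB Bayer samples into separate color planes and fills the missing sites by neighbor averaging. The RGGB layout, the bilinear-style interpolation, and the wrap-around edge handling are all assumptions of this example, not the patent's method.

```python
import numpy as np

def fill_plane(plane: np.ndarray, sampled: np.ndarray) -> np.ndarray:
    """Fill unsampled sites of one color plane with the mean of sampled
    8-neighbors (edges wrap via np.roll; acceptable for a sketch)."""
    acc = np.zeros_like(plane)
    cnt = np.zeros_like(plane)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            if dy == 0 and dx == 0:
                continue
            acc += np.roll(np.roll(plane * sampled, dy, axis=0), dx, axis=1)
            cnt += np.roll(np.roll(sampled, dy, axis=0), dx, axis=1)
    return np.where(sampled > 0, plane, acc / np.maximum(cnt, 1))

def demosaic_rggb(mosaic: np.ndarray) -> np.ndarray:
    """Demosaic an RGGB Bayer mosaic: scatter each photodetector's sample
    into its color plane, then interpolate the missing sites."""
    h, w = mosaic.shape
    rgb = np.zeros((h, w, 3))
    mask = np.zeros((h, w, 3))
    rgb[0::2, 0::2, 0] = mosaic[0::2, 0::2]; mask[0::2, 0::2, 0] = 1  # red sites
    rgb[0::2, 1::2, 1] = mosaic[0::2, 1::2]; mask[0::2, 1::2, 1] = 1  # green sites
    rgb[1::2, 0::2, 1] = mosaic[1::2, 0::2]; mask[1::2, 0::2, 1] = 1  # green sites
    rgb[1::2, 1::2, 2] = mosaic[1::2, 1::2]; mask[1::2, 1::2, 2] = 1  # blue sites
    return np.stack([fill_plane(rgb[..., c], mask[..., c]) for c in range(3)], axis=-1)

if __name__ == "__main__":
    flat = np.full((4, 4), 0.5)          # a uniform gray scene
    print(demosaic_rggb(flat)[0, 0])     # -> [0.5, 0.5, 0.5] after interpolation
```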
The processor also stores the image data in the data storage device. In order to store a plurality of images in the data storage device, the processor typically compresses the image data. In addition, the processor facilitates the transmission of image data to the output device, such as a monitor or a printer. The image data output from the digital camera is typically in a standardized format so that the colors represented by the image data closely correspond to standardized colors that may be replicated on the output device. Colors represented by the image data may, as an example, correspond to the International Color Consortium (ICC) standard. The compression and transmission of the image data may, as an example, comply with the tagged image file format (TIFF). The compression of the image data may, as another example, correspond to the IS 10918-1 (ITU-T T.81) standard or other standards of the Joint Photographic Experts Group (JPEG). An example of compressing and decompressing image data is set forth in United States Patent 5,838,818, previously referenced.
Two of the most common output devices are video monitors and printers. A video monitor is a light-emitting device that uses combinations of red, green, and blue colored light to create color images. Most video monitors have a cathode ray tube (CRT) comprising an array of red, green, and blue phosphor elements. The phosphor elements may be arranged in groups, similar to the above-described pixels, wherein each group has a red, a green, and a blue phosphor element.
In addition, the CRT has three electron emitters that emit electron beams and magnetic assemblies that steer the electron beams. The electron emitters are often referred to as "electron guns." The CRT typically has one electron gun to control red light, one to control green light, and one to control blue light. The electron guns emit electron beams that strike their associated phosphor elements, which, in turn, emit their color of light for a period. The magnetic assemblies steer the beams so that they all may simultaneously strike the phosphor elements comprising a single group or pixel. Video electronics within the video monitor control the locations of the electron beams and the intensity of each beam. By controlling the intensity of the electron beams striking the phosphor elements, the video monitor is able to control the color and brightness of an image displayed by the CRT.
The colors of the phosphor in the CRTs tend to vary between different video monitors. Likewise, the video electronics and other components comprising the monitor tend to vary between different video monitors. These variations cause video monitors that receive identical input information to display different images. For example, two video monitors may be given instructions via image data to display a particular shade of blue that has specific ratios of green and red components. The two video monitors may have different colored phosphor elements and different video electronics and may, thus, display different shades of blue.
A printer prints ink onto a sheet of paper to create an image. Some black and white printers print images by printing a plurality of black dots onto a sheet of white paper. The accumulation of dots forms the image, similar to a mosaic. The precision of an image that may be printed is dependent on the number of dots per unit area that the printer is able to place on a sheet of paper. For example, a printer that is able to print 600 dots per inch (dpi) is generally able to print images with less precision than a printer that is able to print 1200 dpi.
The "tone map" of the printer is dependent on the number of dots that are able to be printed per unit area in addition to other factors as are known in the art.
This is a result of different tones of gray being printed by varying the number of dots that are printed per unit area. Thus, if a printer is able to print a large number of dots per unit area, it is generally able to print a large number of different grays. The tone map is also dependent on the particular shades of gray that are able to be printed, which in turn is also dependent on the "blackness" or shade of black of the ink and the "whiteness" or shade of white of the page.
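A minimal sketch of this dot-density principle follows. The 4x4 ordered-dither matrix is an assumed technique for deciding where dots fall; the patent only states that grays are formed by varying the number of printed dots per unit area.

```python
import numpy as np

# Classic 4x4 ordered-dither threshold matrix (an assumed choice).
DITHER_4X4 = np.array([[ 0,  8,  2, 10],
                       [12,  4, 14,  6],
                       [ 3, 11,  1,  9],
                       [15,  7, 13,  5]]) / 16.0

def halftone(gray: np.ndarray) -> np.ndarray:
    """Convert a grayscale image (0.0=black .. 1.0=white) to a binary dot
    pattern: darker tones place more ink dots per unit area."""
    h, w = gray.shape
    threshold = np.tile(DITHER_4X4, (h // 4 + 1, w // 4 + 1))[:h, :w]
    return gray < threshold              # True where an ink dot is printed

if __name__ == "__main__":
    ramp = np.linspace(0, 1, 16).reshape(1, -1).repeat(8, axis=0)
    dots = halftone(ramp)
    print(dots.mean(axis=0).round(2))    # dot density falls as the tone lightens
```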
Some printers print color images, which may, as an example, be achieved by mixing colors on the sheet of paper. Rather than printing only black dots, color printing uses a plurality of colors that, when mixed together, form a dot having a desired color. A color printer typically mixes black, yellow, magenta, and cyan to achieve its "gamut." The ratios of these colors that are mixed on the sheet of paper may vary between different printers. In addition, these colors may vary between different printers and different ink manufacturers. Accordingly, two different printers that receive the same input information or image data may print two different images.
Several problems exist in accurately replicating an image of an object. For example, the digital camera may image an object comprising a specific shade of red. The specific shade of red may be represented by image data having specific ratios of red, green, and blue light. The video monitor, however, may display a different shade of red than that imaged by the digital camera and, thus, that comprising the object. This color discrepancy may, as an example, be caused by the video electronics processing the image data generated by the camera so as to display a shade of red that is different than that sensed by the digital camera. Accordingly, the displayed shade of red will be different than that comprising the object. In another example, variations in the colors of the phosphor elements in the CRT may cause a different shade of red to be displayed. A similar problem exists with regard to the tone maps. The image of the object may have been generated using a digital camera having one specific tone map. The image of the object will not be able to be accurately replicated by the video monitor if the image data is not processed to reflect the specific tone map of the video monitor, which is different from that of the digital camera.
Similar problems exist with regard to replicating an image of an object by printing the image. Color printing has an additional problem in that the image data is typically generated by using red, green, and blue light. The printer, however, typically prints the image of the object by using black, yellow, magenta, and cyan colored inks. The translation from the red, green, and blue light to the black, yellow, magenta, and cyan ink will often cause variations in the printed image of the object.
Therefore, a need exists for an imaging system and a calibration method that overcomes the problems caused by variations in the components comprising the imaging system.
Summary of the Invention
The invention is directed to a calibrated imaging system and a method for calibrating the imaging system. The calibrated imaging system provides for accurate and uniform replications of images of objects.
The imaging system employs an imaging apparatus such as, for example, a digital camera that generates image data representative of an image of an object. The image data is used by an output device, such as a video monitor or a printer, to replicate the image of the object. A video monitor typically replicates the image of the object by displaying the image on a conventional cathode ray tube (CRT) or a liquid crystal display (LCD). A printer typically replicates the image of the object by printing the image onto a sheet of paper as a series of dots in a conventional manner, such as laser printing and ink jet printing.
The method and system cause an image of an object replicated by the imaging system to be consistent, even as the output devices change. This consistency is achieved by calibrating the imaging system to account for variations in the values of image replication parameters (sometimes referred to herein simply as "parameters") that affect the attributes of the image. In other words, some parameters may differ in value between different output devices, which causes the attributes of the image to vary. This calibration method allows the imaging system to be calibrated to take differences in parameter values and, thus, image attributes into account so as to produce accurate and consistent images. These parameters may, as examples, include the tone map, primary colors, gamut of the output device, and the ambient light conditions of the output device. The attributes may, as an example, include the color balance of neutral grays.
Calibration commences with the imaging apparatus outputting first image data representative of a target to the output device associated with the imaging apparatus. The first image data has a first set of predetermined parameter values that correspond to an image of a target having predetermined attributes. The output device displays an image that is representative of the target based on the first image data. Variations in processing criteria and the components comprising the imaging system may cause the above-described image of the target as displayed by the output device to be different from the accurate image that would have been displayed by an "ideal" imaging system. In other words, the attributes of the displayed image may be different from the attributes of the image that would have been displayed by an "ideal" imaging system.
When the replicated image of the target is displayed on the output device, the imaging apparatus generates second image data representative of the replicated image of the target. Accordingly, the second image data is representative of the image of the target having a second set of values for the parameters, which is representative of the image of the target having a second set of attributes. The imaging apparatus determines the difference between corresponding values of the first and second sets of parameter values. The imaging apparatus then modifies its processing criteria of the image data based on the determined differences between the first set of parameter values and the second set of parameter values so as to produce an accurate display image, i.e., a display image that corresponds closely to the original object which was imaged. Image data representative of other objects is then processed based on this modified processing criteria so that the replicated images of the objects accurately represent the actual images of the objects.
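The computation below is an illustrative sketch of this closed loop, not the patent's prescribed method: it models the processing criteria as per-channel gains and nudges each gain by the measured difference between the two sets of parameter values. The parameter set, the gain model, and the damping factor are assumptions of this example.

```python
# Illustrative sketch of the closed-loop adjustment. The parameter names,
# the per-channel gain model, and the damping factor are assumptions.

def measure_parameters(image_data: dict) -> dict:
    """Stand-in for camera-side analysis: pull the mean red, green, and
    blue values out of (hypothetical) second image data."""
    return {k: image_data[k] for k in ("red", "green", "blue")}

def adjust_criteria(criteria: dict, first: dict, second: dict,
                    damping: float = 0.5) -> dict:
    """Nudge each per-channel gain so the displayed value moves toward
    the predetermined value encoded in the first image data."""
    adjusted = {}
    for channel, target in first.items():
        error = target - second[channel]          # signed difference per channel
        adjusted[channel] = criteria[channel] * (1.0 + damping * error / max(target, 1e-6))
    return adjusted

if __name__ == "__main__":
    first = {"red": 0.40, "green": 0.30, "blue": 0.60}   # predetermined values
    second = {"red": 0.38, "green": 0.36, "blue": 0.55}  # as re-imaged by the camera
    gains = {"red": 1.0, "green": 1.0, "blue": 1.0}      # initial processing criteria
    gains = adjust_criteria(gains, first, measure_parameters(second))
    print(gains)  # the green gain drops: too much green was displayed
```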
Brief Description of the Drawing
Fig. 1 is a schematic illustration of an imaging system generating image data representative of an object.
Fig. 2 is a schematic illustration of the imaging system of Fig. 1 configured to perform a closed loop calibration.
Fig. 3 is a front schematic illustration of a two-dimensional photodetector array of the type illustrated in Fig. 1.
Fig. 4 is an illustration of a target used by the configuration of the imaging system of Fig. 2 to calibrate the imaging system.
Fig. 5 is a front schematic illustration of a video monitor of the type illustrated in Fig. 2, including a cathode ray tube.
Fig. 6 is a flow chart describing an embodiment of a calibration process for the imaging system of Fig. 2.
Detailed Description of the Invention
Figs. 1 through 6, in general, illustrate a method of calibrating an imaging system 100 having at least one image processing criterion. The method comprises: providing an imaging apparatus 120; providing an output apparatus 300, 340 operatively associated with the imaging apparatus 120; providing first image data representative of a target 400 having at least one parameter, wherein the at least one parameter is predetermined; producing an image of the target 400 based on the first image data using the output apparatus 300, 340; generating second image data representative of the image of the target 400 using the imaging apparatus 120; processing the second image data to determine the at least one parameter of the image of the target 400; and adjusting the at least one image processing criterion to reduce the difference between the predetermined at least one parameter and the at least one parameter of the image of the target 400 represented by the second image data.
Figs. 1 through 6 also, in general, illustrate a method of imaging an object 122. The method comprises: providing an imaging apparatus 120, wherein the imaging apparatus 120 processes image data based upon at least one image processing criterion; providing an output apparatus 300, 340 operatively associated with the imaging apparatus 120; providing first image data representative of a target 400 having at least one parameter, wherein the at least one parameter is predetermined; producing an image of the target 400 based on the first image data using the output apparatus 300, 340; generating second image data representative of the image of the target 400 using the imaging apparatus 120; processing the second image data to determine the at least one parameter of the image of the target 400; adjusting the at least one image processing criterion to reduce the difference between the predetermined at least one parameter and the at least one parameter of the image of the target 400 represented by the second image data; generating third image data representative of an image of the object 122 using the imaging apparatus 120; processing the third image data based on the adjusted at least one image processing criterion; transmitting the processed third image data to the output apparatus 300, 340; and replicating the image of the object 122 using the output apparatus 300, 340.
Having generally described the imaging system 100 and a method of calibrating the imaging system 100, they will now be described in greater detail.
A brief summary of the imaging system 100 and a calibration method are set forth below, followed by a more detailed description. Referring to Fig. 1, the imaging system 100 serves to replicate an image of an object 122 onto an output device, such as a video monitor 300 or a printer 340. The imaging system 100 generally operates as an open loop system wherein a digital camera 120 generates image data representative of an image of the object 122. The image data is transmitted to a personal computer 200 where it is processed into a format that may be replicated by the video monitor 300 or the printer 340.
One problem in replicating the image of the object 122 is that variations in devices and processing criteria between the generation of the image data by the digital camera 120 and the replication of the image of the object 122 typically cause variations or inaccuracies to appear in the replicated image of the object 122. Thus, the image of the object 122 displayed on the video monitor 300 may not be an accurate representation of the image of the object 122. Furthermore, the image of the object 122 displayed by the video monitor 300 may be different from an image of the object 122 printed by the printer 340.
Referring to Fig. 2, which is a schematic illustration of the imaging system 100 with its components configured to perform a calibration, the imaging system 100 and calibration method disclosed herein overcome the above-described problems by providing a closed loop calibration method and system. During closed loop calibration, the digital camera 120 outputs first image data representative of a target 400 having at least one parameter value that is predetermined. Accordingly, the attributes of the target corresponding to the predetermined parameter value will also be predetermined. The video monitor 300 or the printer 340 receives the first image data and replicates the image of the target 400. The digital camera 120 then generates second image data that is representative of the replicated image of the target 400. The second image data will have parameter values that differ from the predetermined parameter values of the first image data.
Under ideal conditions, the predetermined parameter values of the target 400 represented by the first image data should be identical to the parameter values of the target 400 represented by the second image data. For example, the digital camera 120 may output first image data representative of a target 400 having parameter values corresponding to a specific shade of blue to the video monitor 300. The video monitor 300 should display an image having that specific shade of blue. Due to the aforementioned variations, however, the shade of blue displayed by the video monitor 300 will typically vary somewhat from the shade of blue the digital camera 120 intended to be displayed. The difference in the shades of blue is represented by a difference between the predetermined parameter values of the first image data and the parameter values of the second image data.
In order to reduce the difference between the shades of blue represented in the first image data and the second image data, processing criteria of the digital camera 120 are modified. For example, if the digital camera 120 determines that the second image data has too much green in the shade of blue, the digital camera 120 may process the image data to lower the amount of green present in the image data prior to the image data being output to the video monitor 300. The modified processing criteria are then applied to image data representative of an object 122, Fig. 1. The modified processing criteria cause similar and accurate images of the object 122, or other objects, to be displayed on any output device that has been calibrated as described above. The processing criteria further cause the images to accurately reflect the artistic intent of the user. In an alternative or additional embodiment, the digital camera 120 may instruct a user to change settings on the video monitor 300 or the printer 340 to reduce the level of green that is displayed. In this case, the processing criteria are manually modified by a user per instructions from the digital camera 120.
Having summarily described the imaging system 100 and a method to calibrate the imaging system 100, they will now be described in greater detail.
Referring again to Fig. 2, the imaging system 100 may comprise a digital camera 120, a personal computer 200, a video monitor 300, and a printer 340. The digital camera 120 is sometimes referred to herein as an imaging apparatus. The video monitor 300 and the printer 340 are sometimes referred to herein as output apparatuses or output devices. The use of the digital camera 120 as an imaging device is for illustration purposes and it is to be understood that other imaging devices, such as a scanning device or a digital video camera, may be used in place of the digital camera 120.
The digital camera 120 may have a housing 130 with an aperture 132 formed therein. The aperture 132 may serve to allow light 124 to enter the housing 130. The interior of the housing 130 may contain a lens 138, a two-dimensional photodetector array 140, a processor 142 (sometimes referred to as a computer), and a memory device 144. It should be noted that the processor 142 and the memory device 144 may be a single component. For illustration purposes, however, they are illustrated herein as being individual components. A data line 150 may electrically connect the two-dimensional photodetector array 140 to the processor 142. A data line 152 may electrically connect the processor 142 to the memory device 144. In addition, a conventional strobe 156 may be associated with the digital camera 120 and may serve to illuminate objects that are being photographed by the digital camera 120.
The lens 138 may be a conventional lens or plurality of lenses that serve to focus the light 124 onto the two-dimensional photodetector array 140. In some embodiments of the digital camera 120, the lens 138 may be a zoom lens that enlarges or decreases the size of the image represented by the light 124 that is focused onto the two-dimensional photodetector array 140.
The digital camera 120 illustrated in Fig. 2 shows a side perspective view of the two-dimensional photodetector array 140. The side of the two-dimensional photodetector array 140 is flat as shown in Fig. 2.
Referring to Fig. 3, which is a front schematic illustration of the two-dimensional photodetector array 140, the front of the two-dimensional photodetector array may be rectangular. The two-dimensional photodetector array 140 may have a height Hl extending in a y-direction and a length Ll extending in an x-direction. The height Hl and the length Ll may define a surface 160 on which a plurality of photodetectors 162 are mounted. The arrangement of photodetectors 162 may form a plurality of rows 164 and columns 166. It should be noted that the photodetectors 162 illustrated in Fig. 3 have been greatly enlarged for illustration purposes.
The photodetectors 162 may be conventional optoelectronic devices that serve to convert intensities of light to image data. For example, photodetectors 162 that receive high intensities of light may output image data having high values. Likewise, photodetectors 162 that receive low intensities of light may output image data having low values. The process of generating image data representative of an object is sometimes referred to simply as "imaging" the object. In the case where the image data is in a digital format, the number of discrete values of image data that may represent the intensity of light is proportional to the number of values that may be represented by the digital format. For example, if the image data is binary and represented by four bits, there can be only 16 different values for image data. The number of discrete values that may represent the image data is one of the factors that establishes the tone map or gray scale of the two-dimensional photodetector array 140 and, thus, the digital camera 120. Another factor that determines the tone map is the spectrum of the gray scale that can be imaged, as is known in the art.
Generating image data representative of color images requires additional components, not shown, to be added to the two-dimensional photodetector array 140. For example, a screen, not shown, having a plurality of color filters may be placed adjacent or doped onto the surface 160. The screen may consist of a plurality of red, green, and blue filters wherein a single filter is associated with a single photodetector 162. The filters may allow only specific bands of wavelengths of red, green, or blue light to pass to their corresponding photodetectors 162. The photodetectors 162 may be arranged into clusters or "pixels" consisting of four adjacent photodetectors 162. It should be noted that the clusters are sometimes referred to as "super pixels." Each pixel, through its association with the filters, may have one photodetector 162 that images red light, two photodetectors 162 that image green light, and one photodetector 162 that images blue light. Human vision relies heavily on green spectral components of light; thus, two photodetectors 162 image green light in this example. As will be described below, the image data generated by each pixel may be combined to represent the color and intensity of light received by each pixel.
Referring again to Fig. 2, during the imaging process, the processor 142 receives the image data generated by the two-dimensional photodetector array 140 and stores the image data in the memory device 144. The processor 142 may compress the image data in a conventional manner in order to maximize the amount of image data that may be stored in the memory device 144. Compressing the image data, however, may result in a degradation of the image data. As described below, the processor 142 may also facilitate the transmission of image data to the personal computer 200.
The processor 142 may also convert the image data output from the two-dimensional photodetector array 140 into a particular format for an output device that processes image data in a specific manner. The format may allow for different output devices to display substantially identical images of the object 122. The processor 142 may, as an example, transform the image data to a variation of the tagged image file format (TIFF). Processing the image data includes applying predetermined processing criteria to the output data. For example, the processing criteria may include modifying the image data to achieve a specific color scheme or tone map. As will be described below, the calibration process, in part, determines the predetermined processing criteria that the processor 142 uses to process the image data.
Referring briefly to Fig. 1, the processor 142 may also include an input/output device 146 that may be a conventional communications device that serves to transmit and receive data from a peripheral device. The peripheral device is illustrated in Fig. 1 as being the personal computer 200. The data may include image data and instructional information that may be displayed on the output device 300, 340. The input/output device 146 may, as an example, be a conventional infrared transmitter and receiver or a conventional electronic transmitter and receiver. In the embodiment of Fig. 1, an electrical data line 148 connects the input/output device 146 to the personal computer 200.
The memory device 144 may be a conventional digital data storage device that stores image data. Examples of the memory device 144 include random access memory devices, sometimes referred to as RAM, or flash memory. Other examples of the memory device 144 include magnetic and optical media, such as conventional magnetic discs and optical discs. The memory device 144 may be capable of storing image data representative of several images.
The memory device 144 may also be capable of storing image data representative of at least one target 400. It should be noted that for illustration purposes only, the target 400 of Fig. 2 is illustrated as being in the shape of a T. A more detailed example of the target 400 is illustrated in Fig. 4. The target 400 may comprise a surface 408 that may be 18% gray. Twenty gray squares 410 may be located on the surface 408. The gray squares 410 may be arranged in a circle and may be located an equal distance from a center point 412 of the circle.
The gray squares 410 are referred to as the first through the twentieth gray squares and referenced individually as 421 through 440, respectively. The gray squares 410 may represent ten different shades of gray wherein each of the ten shades is represented by two gray squares 410.
The shades of gray may extend from white to black and two gray squares 410 may be 18% gray. It is preferred that the shades of gray not be in a sequential order around the circle. It is also preferred that two gray squares 410 having the same shades of gray not be adjacent each other.
In addition to the gray squares 410, the target 400 may have twelve color squares 450. The color squares 450 may be located at predetermined areas of the target 400.
The twelve color squares 450 are referred to as the first through the twelfth color squares and referenced numerically as 451 through 462. The color squares 450 may represent six colors wherein each color is present in two squares 450. Each color may be different and predetermined. Each color square 450 may be substantially uniform in color. For example, the color squares 450 may be red, green, blue, cyan, magenta, and yellow wherein each color is present in two squares 450. It is to be understood that the target 400 illustrated in Fig. 4 is for illustration purposes only and that other targets may be used to calibrate the imaging system 100, Fig. 2.
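To make the target geometry concrete, the following sketch computes a layout of the kind described above. The circle radius, the stride-based shuffle, and the evenly spaced shades are assumptions; the patent fixes only the counts, the circular arrangement, and the preference that equal or sequential shades not be adjacent.

```python
import math

# Sketch of the target's gray-square layout: twenty squares on a circle
# around the center of an 18% gray surface. Radius, shuffle stride, and
# the evenly spaced shades are assumptions of this example.
GRAYS = [i / 9.0 for i in range(10)]   # ten shades spanning 0.0 (black) to 1.0 (white)

def gray_square_positions(cx: float, cy: float, radius: float):
    """Place 20 gray squares evenly on a circle. A stride of 7 (coprime to
    20) puts each shade's duplicate on the opposite side of the circle and
    keeps sequential shades off adjacent positions."""
    shades = GRAYS + GRAYS             # each shade appears twice
    for i in range(20):
        slot = (7 * i) % 20            # shuffled position on the circle
        angle = 2.0 * math.pi * slot / 20
        yield (cx + radius * math.cos(angle),
               cy + radius * math.sin(angle),
               shades[i])

if __name__ == "__main__":
    for x, y, shade in list(gray_square_positions(0.0, 0.0, 100.0))[:5]:
        print(f"square at ({x:7.1f}, {y:7.1f})  gray={shade:.2f}")
```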
Referring again to Fig. 1, which provides a detailed description of the personal computer 200, the personal computer 200 may be a conventional personal computer or other conventional data processing device. The personal computer 200 may have a housing 208. The housing 208 may contain a processor 210, a memory device 212, an input/output device 214, a video processor 216, and a printer driver 218. A data line 230 may electrically connect the processor 210 to the input/output device 214. A data line 232 may electrically connect the processor 210 to the memory device 212. A data line 234 may electrically connect the processor 210 to the video processor 216. A data line 236 may electrically connect the processor 210 to the printer driver 218. In addition to the aforementioned components, a keyboard 220 may be located external to the housing 208 and may be electrically connected to the processor 210 via a data line 238.
The processor 210 may be a conventional processor of the type used in conjunction with conventional personal computers. The memory device 212 may be conventional memory used in conjunction with conventional personal computers. The memory device 212 may, as examples, be electronic memory such as RAM, magnetic memory, or optical memory. As will be described further below, in one embodiment of the imaging system 100, the processor 210 may apply processing criteria to image data as was described with reference to the processor 142 in the digital camera 120. Likewise, the memory device 212 may store first image data representative of the target 400, Fig. 4.
The video processor 216 may be a device that converts image data to a video data format that can be interpreted by the video monitor 300 as is known in the art. For example, the video processor 216 may define the refresh rate, color reproduction, and the number of pixels used to display an image on the video monitor 300. The video processor 216 may be one of several models of video processors that are commercially available and that function with the video monitor 300. Different video processors may convert the image data slightly differently. Thus, the format of the video data output by the video processor 216 may vary between different video processors 216. Accordingly, the format of video data output by the personal computer 200 may vary between different personal computers 200.
The printer driver 218 may convert image data to a format that may be interpreted and printed by the printer 340 as is known in the art. The printer driver 218 may comprise conventional printer software and a conventional input/output hardware device. The printer driver 218, like the video processor 216, may be one of numerous models that are commercially available. Thus, the format of the image data output to the printer 340 may vary from one printer driver 218 to another. Accordingly, the format of image data output by the personal computer 200 to the printer 340 may vary between different personal computers 200.
The video monitor 300 may have a housing 308. The housing 308 may contain video electronics 310, a conventional cathode ray tube 314 (CRT), and other conventional components, not shown, that are used in video processing. A data line 316 may electrically connect the video electronics 310 to the cathode ray tube 314. A data line 320 may electrically connect the video processor 216 in the personal computer 200 to the video electronics 310 in the video monitor 300. The video electronics 310 may be comprised of conventional video electronics that process video information to display images represented in the video information on the CRT 314.
The CRT 314 is illustrated in greater detail in Fig. 5, which is a front schematic illustration of the video monitor 300. The CRT 314 may be of the type comprising a plurality of pixels 332 located on a viewing screen 330. The pixels 332 illustrated in Fig. 5 have been greatly enlarged for illustration purposes. The pixels 332 may be arranged to form a plurality of rows 334 and columns 336. Each pixel 332 on the viewing screen 330 described herein comprises a red, a green, and a blue phosphor element, not shown. Each phosphor element is of a specific wavelength of either red, green, or blue. The phosphor elements emit their corresponding color of light for a short period upon being struck by an electron beam. The intensity of light emitted by each phosphor element is directly proportional to the intensity of the electron beam that strikes the phosphor element. The video electronics 310 cause the viewing screen 330 to display images having spectrums of colors, referred to as the "gamut," by varying the ratios of red, green, and blue light emitted by each pixel 332. The video electronics 310 are able to control the brightness of images displayed on the viewing screen 330 by varying the intensity of light emitted by each pixel 332.
The CRT 314 may have three electron emitters, often referred to as "electron guns," that each emit an electron beam toward the viewing screen 330 to strike the phosphor elements. The electron guns are not illustrated herein. One electron beam strikes the red phosphor elements, one electron beam strikes the green phosphor elements, and one electron beam strikes the blue phosphor elements. By varying the intensity of the electron beams, the intensity of red, green, and blue light emitted by each pixel can be controlled. Varying the intensities of light allows the red, green, and blue light to be combined in specific ratios to create the above-described spectrum of colors or gamut.
The number of pixels 332 on the viewing screen 330 extending along a height H2 in the y-direction is known as the pixel count in the y-direction. The number of pixels 332 extending along a length L2 in the x-direction is known as the pixel count in the x-direction. The aspect ratio of the CRT 314 is the ratio of the pixel count in the y-direction to the pixel count in the x-direction. The aspect ratio of the CRT 314 may vary between different CRTs 314. Accordingly, aspect ratios tend to vary between different video monitors 300.
The video electronics 310 control the electron guns based on the above-described video data received from the video processor 216. For example, the video electronics 310 may cause the electron guns to emit specific intensities of electron beams at specific phosphor elements. This allows the color and intensity of light emitted by each pixel 332 to be controlled individually.
It is to be understood that in some embodiments of the imaging system 100, the video electronics 310 may be adapted to display image data received directly from the digital camera 120 without passing through the personal computer 200.
The CRT 314 and the video electronics 310 may vary between different video monitors 300. One variation in CRTs 314 is in the wavelengths of light emitted by the phosphor elements, which may vary slightly between different video monitors 300. Another variation in different CRTs 314 is the intensity of the electron beams emitted by the electron guns. These variations in CRTs 314 cause variations in the tone maps, color reproductions, and gamuts of different video monitors 300. Another variation exists in the video electronics 310, which may instruct the electron guns to combine red, green, and blue light differently, causing the aforementioned variations between different video monitors 300. In addition to the above-described variations in CRTs 314, different CRTs 314 may have different aspect ratios, which govern the height to width ratio of images displayed on the CRTs 314.
In conventional imaging systems, the above-described variations in the components cause different output devices to display different images of the same object. More specifically, attributes of an image vary between different output devices. For example, if an object 122 reflects light of a specific wavelength of green, and if two different models of video monitors 300 are displaying images of the object 122, the displayed images may vary. One video monitor 300 may display the image of the object 122 as having more red than is in the object 122 and the other video monitor 300 may display the image of the object 122 as having more blue than is in the object 122. In addition, one video monitor 300 may crop the image, thus reducing the size of the image in one direction. As will be described below, the imaging system 100 and calibration method disclosed herein overcome the problems of variations in the components by performing a closed loop calibration of the imaging system 100.
Having described the video monitor 300, the printer 340 will now be described. The printer 340 may be a conventional printer as is commonly associated with a personal computer. For illustration purposes only, the printer 340 described herein is of the type known in the art as an ink jet printer. The printer 340 may have a housing 344. Printer electronics 346, a print head 348, and other conventional printing components, not shown, may be located within the housing 344. The print head 348 causes ink to be printed onto a piece of paper, not shown, in a conventional manner. The printer electronics 346 control the printing of ink onto the paper by the print head 348.
Printing an image onto a sheet of paper is achieved by printing a plurality of small dots onto the sheet of paper. Varying degrees of gray may be printed by varying the number of dots that are printed in a specific area. The varying degree of gray that may be printed is one factor that determines the gray scale and the tone map of the printer 340 as are known in the art. Different printers vary substantially in the number of dots that may be printed per unit area, which is also known as dots per inch or "dpi." Likewise, different printers vary substantially in their gray scales, tone maps, and gammas.
The printer 340 may, as an example, be of the type that prints color images onto a piece of paper. Color printing is typically achieved by printing a combination of primary colors in the form of dots onto the sheet of paper. A plurality of these colored dots represents an image of the object, similar to a mosaic representation of an object. The primary colors are typically black, yellow, magenta, and cyan. Combinations of these primary colors allow the printer 340 to print a wide spectrum of colors, which is known as the gamut of the printer 340.
As with the video monitor 300, images printed by the printer 340 may vary between different printers 340. For example, different print heads 348 may cause the combinations of the primary colors to vary between different printers 340. In addition, the printer electronics 346 of different printers 340 may cause the print heads 348 to combine the primary colors in different ratios.
Accordingly, if two different models of printers print images of an object, the printed images may not be the same. The problem of different images from different printers is exacerbated if the different printers 340 use primary colors that differ.
Another problem in printing consistent images with the printer 340 is that the primary colors used by the digital camera 120 are different than the primary colors used by the printer 340. The digital camera 120 typically generates image data based on primary light colors of red, green, and blue. The printer 340, on the other hand, prints the image based on the primary colors of black, yellow, magenta, and cyan. The translations between the primary colors used by the processor 210 and those used by the printer 340 may vary between different printers 340 and printer drivers 218. These variations may cause different images of the same object to be printed on different printers 340.
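The translation itself can be as simple as the textbook complement-plus-black-extraction formula; the sketch below uses that conversion as an assumed stand-in for whatever a given printer driver 218 actually does, which is exactly why printed colors drift between devices.

```python
def rgb_to_cmyk(r: float, g: float, b: float):
    """Textbook RGB -> CMYK conversion (inputs 0.0..1.0). Real printer
    drivers layer device-specific corrections on top of this."""
    c, m, y = 1.0 - r, 1.0 - g, 1.0 - b   # complements of the light primaries
    k = min(c, m, y)                      # pull common darkness into black ink
    if k >= 1.0:
        return 0.0, 0.0, 0.0, 1.0         # pure black: no colored ink needed
    return (c - k) / (1 - k), (m - k) / (1 - k), (y - k) / (1 - k), k

if __name__ == "__main__":
    # The same nominal RGB triple can print differently once each
    # printer's inks and mixing ratios are applied to these values.
    print(rgb_to_cmyk(0.2, 0.5, 0.3))
```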
Having described the components of the imaging system 100, a method of calibrating the imaging system will now be described. The calibration process is illustrated by the flow chart of Fig. 6. The calibration process is described with reference to the video monitor 300, Fig. 1, and is followed by a description with reference to the printer 340.
Referring again to Fig. 2, calibration of the imaging system 100 using the video monitor 300 commences with the digital camera 120 outputting first image data representative of a blank or a dark screen to the personal computer 200. It should be noted that the term "first image data" as used herein is image data stored in the memory device 144 of the digital camera 120 and output from the digital camera 120. The first image data is transmitted to the personal computer 200 via the data line 148. The first image data has predetermined parameter values, which, in this part of the calibration process, cause a blank or dark screen to appear on the CRT 314. As will be described further below, the first image data used in other parts of the calibration process will have different parameter values. The first image data may be compressed and may, as an example, comply with the compression and transmission format specified by the tagged image file format (TIFF). The compression of the image data may, as another example, correspond to the IS 10918-1 (ITU-T T.81) and other standards of the Joint Photographic Experts Group (JPEG).
Referring briefly to Fig. 1, the first image data is transmitted to the personal computer 200 via the data line 148. The input/output device 214 in the personal computer 200 receives the first image data and transmits it to the processor 210 via the data line 230. The processor 210 performs conventional decompression on the first image data, which is generally very straightforward because the first image data is only representative of a blank screen. The processor 210 then transmits the processed first image data to the video processor 216 via the data line 234. The video processor 216 processes the first image data to a format that is recognized by the video monitor 300 and transmits the first image data to the video monitor 300 via the data line 320. The video electronics 310 in the video monitor 300 receive the first image data and put it into a format that can be displayed by the CRT 314. The first image data is then transmitted to the CRT 314 via the data line 316, wherein the viewing screen 330 displays the blank screen representative of the first image data.
Referring again to Fig. 2, when the blank screen is displayed, the user uses the digital camera 120 to generate second image data representative of the blank or dark screen. The term "second image data" refers herein to image data generated by the two-dimensional photodetector array 140 that is representative of images displayed on the viewing screen 330. The user should position the digital camera 120 at the same position as where the user's eyes are positioned when he or she views the viewing screen 330. It is preferred that the digital camera 120 not use the strobe 156, Fig. 1, when it images the blank screen so that second image data representative of the viewing environment of the viewing screen 330 is able to be generated. The digital camera 120 then generates the second image data representative of the blank screen. More specifically, the light 124, which is an image of the blank screen, passes through the aperture 132 in the housing 130 of the digital camera 120. The light 124 is focused by the lens 138 onto the two-dimensional photodetector array 140. The photodetectors 162, Fig. 3, on the two-dimensional photodetector array 140 then generate second image data representative of the blank screen. The two-dimensional photodetector array 140 transmits the second image data to the processor 142 via the data line 150.
The processor 142 analyses the second image data to determine the viewing environment of the viewing screen 330. For example, the processor 142 is able to determine the intensity of ambient light and glare affecting the viewing screen 330. The digital camera 120 may then inform the user as to how to set the viewing environment for the optimal viewing of images on the viewing screen 330.
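A minimal sketch of such an analysis follows; the brightness thresholds and the grayscale frame representation are assumptions, since the patent does not specify how glare and ambient intensity are quantified.

```python
import numpy as np

def analyze_viewing_environment(screen_image: np.ndarray,
                                glare_threshold: float = 0.9,
                                ambient_limit: float = 0.3):
    """Inspect second image data of a blank (dark) screen. A blank screen
    should image as uniformly dark: bright outliers suggest glare, and a
    high overall mean suggests strong ambient light."""
    mean_level = float(screen_image.mean())
    glare_mask = screen_image > glare_threshold
    report = []
    if glare_mask.any():
        ys, xs = np.nonzero(glare_mask)
        report.append(f"glare: bright spot near row {ys.mean():.0f}, "
                      f"column {xs.mean():.0f}; reposition light sources "
                      "relative to the screen")
    if mean_level > ambient_limit:
        report.append("ambient light too intense; lower the room lighting")
    return report or ["viewing environment acceptable"]

if __name__ == "__main__":
    frame = np.full((480, 640), 0.1)          # mostly dark screen capture
    frame[100:120, 200:240] = 0.95            # a simulated glare reflection
    print(analyze_viewing_environment(frame))
```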
Informing the user as to how to change the viewing environment may, as an example, be accomplished by providing instructions on the viewing screen 330. The digital camera 120 may output first image data that causes the viewing screen 330 to display the instructions. For example, if the second image data indicates a bright spot on the screen, it is generally an indication of high glare. The user may be instructed to change the relation of light sources relative to the viewing screen 330 to reduce the glare. In addition, if the second image data indicates that the ambient light is too intense, the user will be informed to lower the intensity of the ambient light. After the user has changed the glare source and light conditions, a new second image of the viewing screen 330 may then be taken. The new second image is evaluated as described above and the digital camera 120 may offer new suggestions to further enhance the viewing environment. The user may accept the suggestions and reiterate the above-described calibration procedure. Iterations of the calibration process may continue until either the user or the digital camera 120 is satisfied with the viewing environment.
The intensity of the ambient light as set by the user and represented in the second image data may be stored by the digital camera 120 for future use. For example, the intensity of ambient light may be used by the imaging system 100 to adjust the tone map of images displayed on the viewing screen 330. In situations where the user decides not to remedy the glare problem, the location of the glare on the viewing screen 330 may be stored for future reference. Subsequent to any changes in the viewing environment or changes to the output device, the digital camera 120 may generate second image data that may be used by the imaging system 100 as a basis for calibration.
Having described calibration with reference to the blank screen, calibration with reference to the target 400, Fig. 4, will now be described. Calibration using
This part of the calibration process commences with the digital camera 120 outputting first image data representative of an image of the target 400, Fig. 4, to thig:-- personal computer 200. More specifically, the processor 142 instructs the memory device 144 to transmit the first image data to the processor 142 via the data line 152. The processor 142 then transmits the first image data to the personal computer 200 via the data line 148 as described above with reference to the first image data representative.of a blank or dark screen. The first image data may be compressed per TIFF and/or JPEG specifications as was described above.
Referring to Figs. 2 and 4, the first image data is representative of the target 400, wherein the target 400 has at least one predetermined attribute, e.g., the tone map. In the examples provided herein, the image data has several predetermined parameter values associated with it. These parameter values establish attributes of the image of the target 400, such as the specific gray levels of the gray squares 410 and the specific colors of the color squares 450. Some of the parameters of the first image data represent the shades of the gray squares 410.
As described above, in the target 400 of Fig. 4, there are twenty gray squares 410 representing ten different shades of gray. The different shades of gray are used to establish the gray scale of the target 400. Other parameters of the first image data may represent the colors in the color squares 450 of the target 400. The colors are predetermined and are used to characterize the color content of the phosphors in the specific video monitor 300. The colors may, as an example, correspond to the International Color Consortium (ICC) standard.
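For concreteness, the predetermined attributes of such a target might be represented as in the following sketch. The specific gray levels and color values shown are illustrative assumptions only; the disclosure requires merely that the attributes be predetermined.

```python
import numpy as np

# Hypothetical predetermined attributes of a calibration target such as
# target 400: ten shades of gray (each of the twenty gray squares shows
# one of these ten shades) and a set of reference colors.
GRAY_LEVELS = np.linspace(0.0, 1.0, 10)   # ten evenly spaced gray shades

REFERENCE_COLORS = {                       # illustrative RGB patch values
    "red":     (1.0, 0.0, 0.0),
    "green":   (0.0, 1.0, 0.0),
    "blue":    (0.0, 0.0, 1.0),
    "cyan":    (0.0, 1.0, 1.0),
    "magenta": (1.0, 0.0, 1.0),
    "yellow":  (1.0, 1.0, 0.0),
}
```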
The first image data representative of the target 400 is transmitted to the video monitor 300 via the personal computer 200 as described above with reference to the first image data representative of the blank screen. The video monitor 300 then displays the image of the target 400 as described above. It should be noted that the target 400 displayed on the viewing screen 330 will typically have different attributes than those intended due to variations in processing the first image data. A user uses the digital camera 120 to generate second image data representative of the image of the target 400 displayed on the viewing screen 330. The second image data may be generated without the use of a flash or a strobe in order to best replicate the target 400 as displayed on the viewing screen 330.
The second image data will represent the image of the target 400 as displayed on the viewing screen 330. More specifically, the second image data will have parameter values that correspond to the different shades of gray present in the gray squares 410 as well as the different colors present in the color squares 450. The second image data is then analyzed by the processor 142. The processor 142 determines the aforementioned parameter values of the target 400 displayed on the viewing screen 330. Under ideal circumstances, the target 400 displayed on the viewing screen 330 would be an exact replication of the target 400 as represented by the first image data. Accordingly, under ideal conditions the parameter values represented in the first image data would be equal to the parameter values represented in the second image data. Due to the above-described variations in the components comprising the imaging system 100, however, variations typically occur to the first image data between its output from the digital camera 120 and its display on the viewing screen 330. Accordingly, the parameter values represented in the first image data will not be equal to the parameter values represented in the second image data. The changes in the parameter values cause attributes of the image of the target 400 displayed on the viewing screen 330 to differ from the attributes of the image of the target 400 that was intended to be displayed per the first image data. Likewise, the image of an object may appear different when displayed on different video monitors. The same applies to objects that are imaged by the digital camera 120: the images of the objects displayed on the viewing screen 330 will differ from the actual images of the objects.
In order to overcome these variations, the processor 142 adjusts its processing criteria and the processing criteria of the imaging system 100 as a whole to compensate for, or cancel out, the variations. One of the processing criteria that may be accounted for is the tone map. Compensating for the tone map is achieved by analyzing the image data representing the gray squares 410 of the target 400. Each gray square 410 should have a predetermined shade of gray associated with it. The processor 142 may analyze the distinctions between the shades of gray to determine if the tone map needs to be adjusted. If adjustment is required, the user may be informed to adjust the brightness, contrast, or related function of the video monitor 300. If the user chooses not to adjust the brightness, contrast, or related function per the recommendations of the processor 142, or if the user is unable to meet the recommendations, the processor 142 then modifies the tone map of the digital camera 120 to account for the contrast and brightness settings. More specifically, adjustments are made to the processing criteria of the processor 142 to account for the tone map. For example, parameter values corresponding to the gray scales may be scaled to adjust the tone map. It should be noted that the adjustments are made in view of the viewing environment as determined by imaging the blank viewing screen 330. For example, if glare is present on the viewing screen 330, the processor 142 will likely disregard the areas of the target 400 corresponding to the glare locations identified from the dark-screen second image data. Likewise, the intensity of the ambient light will be taken into account when the processing criteria are adjusted.
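One possible realization of the scaling described above is sketched below. It assumes the tone-map correction is expressed as a per-patch gain, which is an assumption made here for illustration; the disclosure states only that parameter values corresponding to the gray scales may be scaled.

```python
import numpy as np

def tone_map_gains(expected_grays, measured_grays, on_glare=None):
    """Per-patch gains pulling measured gray levels back toward the
    predetermined levels of the target's gray squares.

    Sketch only: a simple expected/measured ratio per patch is assumed,
    since the disclosure does not give the arithmetic.
    """
    expected = np.asarray(expected_grays, dtype=np.float64)
    measured = np.asarray(measured_grays, dtype=np.float64)

    # Discard patches that coincide with glare locations identified
    # from the blank-screen image, per the passage above.
    if on_glare is not None:
        keep = ~np.asarray(on_glare, dtype=bool)
        expected, measured = expected[keep], measured[keep]

    # Guard against division by zero on the darkest patches.
    return expected / np.clip(measured, 1e-6, None)
```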
The color squares 450 represented in the second image data will give measured values for the color temperature or hue and the primary colors used by the video monitor 300. The primary colors used by the monitor allow the processor 142 to determine the gamut of the video monitor 300. By further analyzing the second image data, the processor 142 can inform the user as to how to adjust the color temperature or hue of the video monitor 300 to achieve the best possible neutral grays. If the user chooses not to adjust the color temperature or hue, or is unable to do so, the processor 142 changes the color balance of the digital camera 120 to optimize the image displayed on the viewing screen 330. The processor 142 attempts to have the colors in the color squares 450 match the colors represented by the first image data. For example, parameter values corresponding to the colors may be scaled to optimize the color temperature or hue of the displayed target.
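A comparable sketch for the color adjustment follows. The diagonal per-channel gain model and the least-squares fit across the color squares are assumptions introduced here; the disclosure states only that parameter values corresponding to the colors may be scaled.

```python
import numpy as np

def color_balance_gains(expected_rgb, measured_rgb):
    """Per-channel gains pulling the displayed color squares toward the
    colors represented in the first image data.

    Illustrative assumption: a diagonal (per-channel) correction fitted
    by least squares across all color squares.
    """
    expected = np.asarray(expected_rgb, dtype=np.float64)  # shape (N, 3)
    measured = np.asarray(measured_rgb, dtype=np.float64)  # shape (N, 3)

    # Least-squares solution of measured * gain ~= expected, per channel.
    num = (expected * measured).sum(axis=0)
    den = np.clip((measured * measured).sum(axis=0), 1e-6, None)
    return num / den   # one gain each for R, G, B
```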
The above calibration has been described as a twofold process, one process relating to the gray squares 410 and another relating to the color squares 450. It should be noted that these calibration procedures may affect each other. For example, adjusting the hue may affect the contrast. In order to overcome this problem, both calibration processes may be combined. Thus, the calibration procedure may, as an example, change the processing criteria for the contrast and hue in one step. Again, second image data representative of the image displayed on the video monitor 300 may be generated to serve as a basis for calibration.
At this point, the image of the target 400 displayed on the viewing screen 330 appears very close to its intended appearance per the first image data. In order to obtain a more accurate display of the target 400, the above-described calibration process may be repeated. Thus, the digital camera 120 may generate new second image data representative of the target 400 to fine-tune the processing criteria.
The aspect ratio may also be taken into account by analyzing the second image data. The processor 142 may determine the locations of the gray squares 410 represented by the second image data. As described above, the gray squares 410 should be equidistant from a center point 412. If the gray squares 410 are not equidistant from the center point 412, either the personal computer 200 or the video monitor 300 may be corrupting the first image data so as to cause the image to be stretched or compressed either horizontally or vertically. The digital camera 120 may inform the user to adjust the output of the video monitor 300 to account for the compression or expansion of the image. The processing criteria may be adjusted accordingly to account for the expansion or contraction of the image.
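The equidistance test might be expressed as in the following sketch. Comparing the mean horizontal and vertical offsets of the squares from the center point 412 is an assumed formulation; the disclosure describes the check without specifying the computation.

```python
import numpy as np

def aspect_ratio_error(square_centers, center_point):
    """Compare the horizontal and vertical spread of the gray squares
    about the center point 412.

    Assumed formulation: the ratio of mean horizontal to mean vertical
    offset departs from 1.0 when the image has been stretched or
    compressed along one axis.
    """
    centers = np.asarray(square_centers, dtype=np.float64)  # shape (N, 2)
    cx, cy = center_point
    dx = np.abs(centers[:, 0] - cx).mean()   # mean horizontal offset
    dy = np.abs(centers[:, 1] - cy).mean()   # mean vertical offset
    return dx / max(dy, 1e-6)                # 1.0 when undistorted
```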
The processing criteria established during calibration may be stored in the memory device 212 so that the processor 142 may use the processing criteria in the future. For example, when the digital camera 120 outputs image data representative of an image of an object, the stored processing criteria are applied to the image data so that an optimal and accurate image is displayed on the viewing screen 330.
A calibration method similar to that described above is applied to calibrate the imaging system 100 when the printer 340 is to be used to display images of objects.
An image of the target 400 based on the first image data is printed by the printer 340. The printed image of the target 400 is then imaged by the digital camera 120 as described above to generate the above-described second image data. In this case, where the digital camera 120 generates image data representative of a printed display, the strobe 156 may be used in a conventional manner. The strobe 156 emits light having a known spectral content, which may be considered during the evaluation of the second image data. The second image data is then analyzed so as to provide an optimal printing of the image of the target 400. This commences with the digital camera 120 instructing the user to adjust settings on the printer 340. The settings may include contrast and color balance, also known as tint. The instructions may, as an example, appear on the viewing screen 330 of the video monitor 300 as described above. The parameters that cannot be set by the user may then be set by adjusting the processing criteria as described above. It should be noted that the process of printing and imaging the target 400 may be repeated to fine-tune the processing criteria.
Subsequent to any changes in the displayed image, second image data representative of the image may be generated to serve as a basis for calibration.
The calibration process has been described above with reference to calibrating the imaging system 100 using either the video monitor 300 or the printer 340.
There are many situations, however, when both the video monitor 300 and the printer 340 will be used to view images and, thus, both need to be calibrated. During calibration, the imaging system 100 may inform the user as to how to optimize both the video monitor 300 and the printer 340 using manual settings. The imaging system 100 may then perform the above-described calibration on both the video monitor 300 and the printer 340 in order to adjust internal processing criteria, which optimally calibrates both the video monitor 300 and the printer 340. It should be noted, however, that in some circumstances, adjusting the processing criteria to optimize the output on one device may in turn deteriorate the output of the other device. In such a situation, the imaging system 100 may request the user to select the output device which is to display the most accurate image. Accordingly, the processing criteria will be modified so that images displayed by that output device are the most accurate.
Having described a process for calibrating the imaging system 100, the operation of the calibrated imaging system 100 will now be described.
Referring again to Fig. 1, the imaging process commences with the digital camera 120 generating image data representative of the image of the object 122. More specifically, light 124 comprising an image of the object 122 reflects from the object 122. The light 124 passes through the aperture 132 in the housing 130 and is focused by the lens 138 onto the two-dimensional photodetector array 140. Upon an instruction from the processor 142, the two-dimensional photodetector array 140 generates third image data representative of the image of the object 122. For reference purposes, the term "third image data" is used herein to describe image data representative of an image of the object 122. The third image data is in a conventional format that includes the colors of the object 122. For example, the colors of the third image data may conform to the ICC standard and the format of the image data may conform to the TIFF or JPEG specifications as described above. The third image data is stored in the memory device 144 for future processing.
Upon a user command, the processor 142 processes the third image data stored in the memory device 144 for output to the video monitor 300 or the printer 340. In the example described herein, the third image data is output to the video monitor 300. The processor 142 applies the processing criteria established during the calibration of the video monitor 300 to the third image data. For example, the processing criteria may scale certain parameter values in order to have the image of the object 122 have certain attributes. In a further example, the parameter values corresponding to the gray scales or color may be scaled per the processing criteria established during the calibration. Accordingly, when the image of the object 122 as represented by the processed third image data is displayed by the video monitor 300, the image will be an accurate replication of the image of the object 122.
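Continuing the assumptions of the earlier sketches, namely that the stored criteria take the form of per-patch tone gains and per-channel color gains, applying the processing criteria to the third image data might look as follows; the disclosure leaves the actual representation of the criteria open.

```python
import numpy as np

def apply_processing_criteria(third_image_data, tone_gains, rgb_gains):
    """Apply stored calibration results to third image data before it is
    sent to the output device.

    Continues the assumptions of the earlier sketches; the patent does
    not prescribe this representation of the processing criteria.
    """
    img = np.asarray(third_image_data, dtype=np.float64)   # (H, W, 3) in [0, 1]
    img = img * np.asarray(rgb_gains, dtype=np.float64)    # color balance

    # Interpolate the per-patch tone gains into a smooth tone curve
    # indexed by pixel luminance.
    levels = np.linspace(0.0, 1.0, len(tone_gains))
    luma = img.mean(axis=-1, keepdims=True)
    gain = np.interp(luma, levels, np.asarray(tone_gains, dtype=np.float64))
    return np.clip(img * gain, 0.0, 1.0)
```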
Having described a preferred embodiment of the imaging system 100 and calibration method, other embodiments of the imaging system 100 and calibration method will now be described.
In one embodiment of the imaging system 100, a light meter, not shown, is included in the imaging system 100. The light meter transmits data pertaining to the ambient light of the output device to the processor 142. The processor 142 may adjust the parameter values to account for the light. For example, the processor 142 may brighten or darken the image that is to be displayed by appropriately modifying the image data. In addition, the light meter may indicate the frequency band of ambient light associated with the output device. For example, the light meter may indicate the spectrum of the ambient light, which will vary depending on whether the ambient light is from a particular artificial source or whether it is from sunlight. The image data may then be processed so as to account for changing ambient light on the video monitor 300.
The ambient light conditions at a particular viewing area may change throughout the day. For example, during daylight hours, natural sunlight may provide the ambient light of the output device and artificial light may provide the ambient light during night hours. In addition, the intensity of the natural sunlight will vary throughout the day. The processor 142 may adjust the processing criteria based on the time of day to account for the changing ambient light conditions of a particular viewing area. This may, as an example, be accomplished by performing the above-described calibration procedure or by adjusting the second image data per data received from the light meter.
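As a purely illustrative sketch, the selection of stored criteria by time of day might be expressed as follows; the period names, the daylight window, and the `criteria_by_period` mapping are all assumptions, since the disclosure does not prescribe how the time of day is used.

```python
from datetime import datetime

def select_criteria(criteria_by_period, now=None):
    """Pick stored processing criteria for the current viewing period.

    Hypothetical helper: `criteria_by_period` maps period names to
    previously stored calibration results (e.g. the gains from the
    earlier sketches); the 07:00-19:00 daylight window is an assumption.
    """
    hour = (now or datetime.now()).hour
    period = "day" if 7 <= hour < 19 else "night"
    return criteria_by_period[period]
```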
The digital camera 120 may have a user interface, not shown, attached thereto. The user interface may, as an example, be an LCD display. The LCD display may provide the user with the above-described instructions as to how to adjust the video monitor 300, the printer 340, and the viewing environment. In addition, the user interface may display instructions to guide the user through the calibration process. The user interface may also include the above-described light meter affixed thereto.
The imaging apparatus has been described herein as being the digital camera 120, which has been described as having a two-dimensional photodetector array 140. It is to be understood, however, that other imaging apparatuses may be used to generate image data. For example, the digital camera 120 may be substituted with a scanning device. Likewise, the digital camera 120 may be a digital video camera. Additionally, the digital camera 120 may be of the type having a linear photodetector array rather than a two-dimensional photodetector array 140.
The processing criteria have been described as being applied by the processor 142 located in the digital camera 120. Because the calibration system is closed loop, the processing criteria may be applied at virtually any location in the loop. For example, the personal computer 200 may apply the processing criteria as described above with reference to the digital camera 120. In this embodiment, the image data representative of the target 400, Fig. 4, may be stored in the memory device 212. The personal computer 200 then outputs the first image data to the output device where the image of the target 400 is displayed or printed. The digital camera 120 then generates second image data representative of the target 400 and outputs the second image data to the personal computer 200. The personal computer 200 performs the calibration as was described above with reference to the digital camera 120.
In this embodiment, third image data representative of the object 122 is transmitted to the personal computer 200 without the processing criteria having been applied to it by the digital camera 120. The personal computer 200 applies the processing criteria and transmits the processed third image data to the output device.
The imaging system 100 was described above using the personal computer 200 between the digital camera 120 and an output device. It is to be understood that the imaging system 100 may function without the personal computer 200. In this embodiment, image data is transmitted from the digital camera 120 directly to the output device. The calibration process proceeds by calibrating the digital camera 120 directly to the output device.
While an illustrative and presently preferred embodiment of the invention has been described in detail herein, it is to be understood that the inventive concepts may be otherwise variously embodied and employed and that the appended claims are intended to be construed to include such variations except insofar as limited by the prior art.
Claims (10)
1. A method for calibrating an imaging system (100) having at least one image processing criterion, said method comprising: providing an imaging apparatus (120); providing an output apparatus (300, 340) operatively associated with said imaging apparatus (120); providing to said output apparatus (300, 340) first image data representative of a target (400) having at least one attribute associated therewith, wherein said at least one attribute is predetermined; producing an image of said target (400) based on said first image data using said output apparatus (300, 340); generating second image data representative of said image of said target (400) using said imaging apparatus (120); analyzing said second image data to determine said at least one attribute of said image of said target (400); and adjusting said at least one image processing criterion to reduce the difference between said predetermined at least one attribute of said first image data and said at least one attribute of said image of said target (400) represented by said second image data.
2. The method of claim 1 wherein: said providing an output apparatus (300, 340) comprises providing an output apparatus (300, 340) operatively associated with said imaging apparatus (120), said output apparatus (300, 340) having at least one manual adjustment device to vary said at least one image processing criterion; and said adjusting comprises instructing a user to adjust said at least one manual adjustment device to reduce the difference between said predetermined at least one attribute and said at least one attribute of said image of said target (400) represented by said second image data.
3. The method of claim 1 wherein said providing an imaging apparatus (120) comprises providing an imaging apparatus (120) having a processing device (142) located therein, and wherein said at least one image processing criterion is performed by said processing device (142).
4. The method of claim 1 and further comprising: providing a light monitoring device located in the proximity of said output apparatus (300, 340); monitoring ambient light associated with said output apparatus (300, 340) using said light monitoring device; and adjusting said at least one processing criterion based at least in part on said ambient light.
5. The method of claim 1 wherein said at least one attribute comprises a color balance.
6. The method of claim 1 wherein said at least one attribute comprises a tone map.
7. The method of claim 1 wherein said at least one attribute comprises a color reproduction.
8. The method of claim 1 wherein said at least one attribute comprises ambient light.
9. A method of imaging an object (122), the method comprising: providing an imaging apparatus (120), wherein said imaging apparatus (120) processes image data based upon at least one image processing criterion; providing an output apparatus (300, 340) operatively associated with said imaging apparatus (120); providing first image data representative of a target (400) having at least one attribute, wherein said at least one attribute is predetermined; producing an image of said target (400) based on said first image data using said output apparatus (300, 340); generating second image data representative of said image of said target (400) using said imaging apparatus (120); analyzing said second image data to determine said at least one attribute of said image of said target (400); adjusting said at least one image processing criterion to reduce the difference between said predetermined at least one attribute and said at least one attribute of said image of said target (400) represented by said second image data; generating third image data representative of an image of said object (122) using said imaging apparatus (120); processing said third image data based on said adjusted at least one image processing criterion; transmitting said processed third image data to said output apparatus (300, 340); and replicating said image of said object (122) using said output apparatus (300, 340).
10. An imaging system (100) comprising: an imaging apparatus (120) operatively associated with a computer (142); an output apparatus (300, 340) operatively associated with said computer (142); a computer-readable medium operatively associated with said computer (142), said computer-readable medium containing instructions for controlling said computer (142) to calibrate said imaging system (100) by:
providing to said output apparatus (300, 340) first image data representative of a target (400) having at least one attribute, wherein said at least one attribute is predetermined, and wherein said output apparatus (300, 340) produces an image of said target (400) based on said first image data; analyzing second image data representative of said produced image of said target (400) to determine said at least one attribute of said produced image of said target (400), wherein said second image data is generated by said imaging apparatus (120); adjusting at least one processing criterion of said computer (142) to reduce the difference between said predetermined at least one attribute of said first image data and said at least one attribute of said image of said target (400) represented by said second image data.
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US54581600A | 2000-04-07 | 2000-04-07 |
Publications (2)
Publication Number | Publication Date |
---|---|
GB0108478D0 GB0108478D0 (en) | 2001-05-23 |
GB2363024A true GB2363024A (en) | 2001-12-05 |
Family
ID=24177664
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
GB0108478A Withdrawn GB2363024A (en) | 2000-04-07 | 2001-04-04 | Calibrating an imaging system |
Country Status (3)
Country | Link |
---|---|
JP (1) | JP2001309405A (en) |
DE (1) | DE10111434A1 (en) |
GB (1) | GB2363024A (en) |
Families Citing this family (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7111017B1 (en) * | 2002-01-31 | 2006-09-19 | Extreme Networks, Inc. | Dynamic device management and deployment |
US20050122406A1 (en) * | 2003-12-09 | 2005-06-09 | Voss James S. | Digital camera system and method having autocalibrated playback viewing performance |
DE102005048240A1 (en) * | 2005-10-07 | 2007-04-19 | Stefan Steib | Method for the spectral, integrated calibration of an image sensor by means of monochromatic light sources |
- 2001-02-26 JP JP2001049580A patent/JP2001309405A/en active Pending
- 2001-03-09 DE DE10111434A patent/DE10111434A1/en not_active Withdrawn
- 2001-04-04 GB GB0108478A patent/GB2363024A/en not_active Withdrawn
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP0267566A2 (en) * | 1986-11-10 | 1988-05-18 | Canon Kabushiki Kaisha | Color image recording apparatus |
WO1992005668A1 (en) * | 1990-09-17 | 1992-04-02 | Eastman Kodak Company | Scene balance calibration of digital scanner |
GB2325809A (en) * | 1997-05-23 | 1998-12-02 | Umax Data Systems Inc | Dynamically scanning an image |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP1563454A2 (en) * | 2002-09-20 | 2005-08-17 | Tribeca Imaging Laboratories, Inc. | Method for color correction of digital images |
EP1563454A4 (en) * | 2002-09-20 | 2007-01-17 | Tribeca Imaging Lab Inc | Method for color correction of digital images |
WO2012072855A1 (en) * | 2010-12-01 | 2012-06-07 | Nokia Corporation | Calibrating method and apparatus |
Also Published As
Publication number | Publication date |
---|---|
JP2001309405A (en) | 2001-11-02 |
DE10111434A1 (en) | 2001-10-18 |
GB0108478D0 (en) | 2001-05-23 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US5313291A (en) | Method for matching color prints to color images on a monitor screen | |
DE69915225T2 (en) | Image processing apparatus and image processing method | |
US7215343B2 (en) | Color correction using a device-dependent display profile | |
EP1821518B1 (en) | Personalized color reproduction | |
CA2043180C (en) | Color correction system employing reference pictures | |
US5652663A (en) | Preview buffer for electronic scanner | |
US7589873B2 (en) | Setting a color tone to be applied to an image | |
US5309257A (en) | Method and apparatus for providing color matching between color output devices | |
US6975437B2 (en) | Method, apparatus and recording medium for color correction | |
US5243414A (en) | Color processing system | |
EP0696865A2 (en) | Color image processing | |
JPH0715612A (en) | Device and method for encoding color | |
US8199367B2 (en) | Printing control device, printing system and printing control program | |
JP2006500877A (en) | Digital image color correction method | |
JP4310707B2 (en) | Gradation conversion calibration method and gradation conversion calibration module using this method | |
JP2001189874A (en) | Color printer calibration method, characterizing method for optical characteristic adjustment parameter and printing system suitable for calibrating digital color printer | |
GB2213674A (en) | Transforming colour monitor pixel values to colour printer pixel values | |
JPH114353A (en) | Image processing method and system | |
US6885394B1 (en) | Method and apparatus for outputting multi-band image | |
GB2363024A (en) | Calibrating an imaging system | |
US20030117435A1 (en) | Profile creating system | |
US20060077487A1 (en) | Digital color fidelity | |
US20060012829A1 (en) | System and method for tone-dependent multi-frequency halftone screening | |
US7525685B2 (en) | True-color computer monitor preview of a color print | |
US6424740B1 (en) | Method and means for producing high quality digital reflection prints from transparency images |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
WAP | Application withdrawn, taken to be withdrawn or refused ** after publication under section 16(1) |