WO2020027647A1 - Apparatus and method for imaging - Google Patents
- Publication number: WO2020027647A1 (application PCT/MY2019/000029)
- Authority: WIPO (PCT)
- Prior art keywords: images, pixel, respective pixel, background, zoom ratio
Classifications
- G06T7/11 — Image analysis; Segmentation; Edge detection: region-based segmentation
- G06T7/174 — Image analysis; Segmentation; Edge detection: involving the use of two or more images
- G06T7/194 — Image analysis; Segmentation; Edge detection: involving foreground-background segmentation
- H04N23/61 — Control of cameras or camera modules based on recognised objects
- H04N23/74 — Circuitry for compensating brightness variation in the scene by influencing the scene brightness using illuminating means
Definitions
- Various embodiments relate to an apparatus for imaging and a method for imaging.
- Photographing products normally requires professional photographers to spend at least a few hours adjusting the lights to strike a balance between the lighting of the product and the colour of the background.
- If the product is reflective or multicoloured, and/or some parts of the product have a matching or similar colour to that of the background, the photographer has to make compromises in the lighting and in the quality of the product shots.
- an apparatus for imaging may include a lighting arrangement configured to provide lighting, and a processor, wherein the processor is configured to control an imaging device to generate at least two images, each of the at least two images depicting an object of interest, and the at least two images depicting the object of interest and a background at different distances relative to each other, and wherein the processor is further configured, for a respective pixel of pixels defining an image of the at least two images, to determine, based on the at least two images, a zoom ratio associated with the respective pixel, and to determine, using the zoom ratio determined, the respective pixel as a pixel belonging to the object of interest or a background.
- an apparatus for imaging is provided.
- the apparatus may include a processor, and a memory coupled to the processor, the memory having stored therein instructions, which when executed by the processor, cause the processor: to control an imaging device to generate at least two images, each of the at least two images depicting an object of interest, and the at least two images depicting the object of interest and a background at different distances relative to each other, to determine, based on the at least two images and for a respective pixel of pixels defining an image of the at least two images, a zoom ratio associated with the respective pixel, and to determine, using the zoom ratio determined, the respective pixel as a pixel belonging to the object of interest or a background.
- an apparatus for imaging may include a processor, and a memory coupled to the processor, the memory having stored therein instructions, which when executed by the processor, cause the processor, for at least two images generated, each of the at least two images depicting an object of interest, and the at least two images depicting the object of interest and a background at different distances relative to each other: to determine, based on the at least two images and for a respective pixel of pixels defining an image of the at least two images, a zoom ratio associated with the respective pixel, and to determine, using the zoom ratio determined, the respective pixel as a pixel belonging to the object of interest or a background.
- a method for imaging may include generating, via an imaging device, at least two images, each of the at least two images depicting an object of interest, and the at least two images depicting the object of interest and a background at different distances relative to each other, determining, based on the at least two images and for a respective pixel of pixels defining an image of the at least two images, a zoom ratio associated with the respective pixel, and determining, using the zoom ratio determined, the respective pixel as a pixel belonging to the object of interest or a background.
- a method for imaging may include, for a respective pixel of pixels defining an image of at least two images, wherein each of the at least two images depicts an object of interest and wherein the at least two images depict the object of interest and a background at different distances relative to each other, determining, based on the at least two images, a zoom ratio associated with the respective pixel, and determining, using the zoom ratio determined, the respective pixel as a pixel belonging to the object of interest or a background.
- a computer program or a computer program product may include instructions which, when executed by a computing device, cause the computing device to carry out a method for imaging as described herein.
- FIG. 1A shows a schematic block diagram of an apparatus for imaging, according to various embodiments.
- FIG. 1B shows a schematic block diagram of an apparatus for imaging, according to various embodiments.
- FIG. 1C shows a flow chart illustrating a method for imaging, according to various embodiments.
- FIG. 1D shows a flow chart illustrating a method for imaging, according to various embodiments.
- FIGS. 2A to 2E show schematic perspective views of a pixel machine of various embodiments from different angles.
- FIG. 3 shows a schematic back view of a main controller, according to various embodiments.
- FIG. 4 shows a flow chart illustrating a method for imaging, according to various embodiments.
- FIG. 5 shows a schematic view illustrating different pixels and their corresponding zoom ratios, according to various embodiments.
- the term "A and/or B" may include A, or B, or both A and B.
- an imaging apparatus may be provided, for example, a product photography apparatus, e.g., automatic photography equipment for products.
- the apparatus may minimise the effort and labour associated with photography of products, for example, for the fast-expanding online shopping business.
- Various embodiments may also provide the corresponding methods for imaging.
- One or more of the following may be achieved: (1) detection of pixels belonging to object(s) for automatic background elimination (e.g., background cut); (2) detection of saturated pixel(s) to automatically eliminate reflections; (3) detection of, and elimination or maintenance of product shadows; (4) automatic centering of the object(s); (5) elimination of background and shadows of rotating object(s); (6) providing uniform exposure and colour for all the pixels.
- an object may be captured in frames with and without a zoom effect. Then, the zoom ratio of pixels in the frames may be checked. There may be two different zoom ratios, one each for the background and the object, because the object is nearer to the camera. The object may be determined (or identified) and "cut" based on this change.
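The idea above can be sketched in a few lines. This is an illustrative sketch only — the function name, the tolerance, and the numbers are hypothetical assumptions, not taken from the patent: given a per-pixel zoom-ratio map estimated from a zoomed and a non-zoomed frame, pixels that enlarge by more than the background's ratio are taken to belong to the object, since the object sits nearer to the camera and therefore scales by a larger factor.

```python
def segment_by_zoom_ratio(zoom_ratios, background_ratio, tolerance=0.05):
    """Return a mask of True (object pixel) / False (background pixel)."""
    return [
        [r > background_ratio * (1 + tolerance) for r in row]
        for row in zoom_ratios
    ]

# Toy 3x3 ratio map: the centre pixel scales by 1.5 (object), the rest by
# 1.2 (background), so only the centre is classified as an object pixel.
ratios = [
    [1.2, 1.2, 1.2],
    [1.2, 1.5, 1.2],
    [1.2, 1.2, 1.2],
]
mask = segment_by_zoom_ratio(ratios, background_ratio=1.2)
```

The "cut" then amounts to keeping only the pixels where the mask is True.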
- FIG. 1A shows a schematic block diagram of an apparatus 100 for imaging, according to various embodiments.
- the apparatus 100 includes a lighting arrangement 102 configured to provide lighting, and a processor 104, wherein the processor 104 is configured to control an imaging device to generate at least two images, each of the at least two images depicting an object of interest, and the at least two images depicting the object of interest and a background at different distances relative to each other, and wherein the processor 104 is further configured, for a respective pixel of pixels defining an image of the at least two images, to determine, based on the at least two images, a zoom ratio associated with the respective pixel, and to determine, using the zoom ratio determined, the respective pixel as a pixel belonging to the object of interest or the background.
- an apparatus 100 for imaging may be provided, having a lighting arrangement 102 and a processor 104.
- the apparatus (or arrangement) 100 may be employed to image or take images of an object of interest, e.g., against a background, and/or of the background in the absence of the object.
- "background" may mean the background relative to the object (in the presence of the object) or said background in the absence of the object, or is otherwise to be construed in the context in which the term is used.
- the processor 104 may communicate with the lighting arrangement 102, for example, via a channel represented by the line 106.
- the processor 104 and the lighting arrangement 102 may be electrically coupled to one another, e.g., via a physical cable 106.
- the processor 104 may send one or more control signals to the lighting arrangement 102.
- the lighting arrangement 102 may provide lighting to illuminate the object and/or the background. This may mean that the object and/or the background may be illuminated simultaneously or separately by the lighting.
- the lighting arrangement 102 may partially or entirely surround the object.
- the lighting arrangement 102 may be arranged in the form of a closed box environment, and the object may be placed within said environment for imaging purposes.
- the various methods and techniques may be used in an open studio, on a stage, or in a film set, or any other suitable environments or settings.
- the lighting may illuminate the object and/or the background from different directions towards the object and/or the background.
- the processor 104 may control an (optical) imaging device (e.g., a (digital) camera capable of taking photographs and/or videos, or a (digital) video recorder) to generate at least two images or a plurality of images (or frames).
- the plurality of images may mean 10, 20, 50, 100 or more images.
- the imaging device may be separately provided or integrally provided with the apparatus 100.
- the processor 104 may control the imaging device to take at least two images or a number of images showing the object of interest in the images. It should be appreciated that two or more or all of the plurality of images may depict the object (e.g., against a background), and/or two or more of the plurality of images may depict the background without the object. An equal number of the plurality of images may depict the background without the object and with the object respectively.
- images may be taken or obtained (directly) as still images (e.g., photographic images), and/or may be images extracted from a moving sequence of consecutive graphics (e.g., a moving picture, motion picture, movie).
- the processor 104 may control the imaging device to generate at least two images, where each of the at least two images may depict an object of interest, and the at least two images depicting the object of interest and a background at different distances relative to each other.
- zooming via the optical lenses of the imaging device, and/or moving the object of interest (e.g., closer to or farther from the imaging device) and/or moving the background (e.g., closer to or farther from the imaging device) may be carried out.
- various embodiments may rely on generating images or frames at different distances between the object of interest and the background.
- the object of interest may be placed on a moving stand, which may then be moved closer to and/or farther from the imaging device to create a "zoom effect", or an artist or a model may move closer to and/or back away from the imaging device.
- Such techniques may also be used in a photobooth to eliminate the background.
- the processor 104 may determine, for a (or each) respective pixel of the pixels and based on (or from) the at least two images, a zoom ratio associated with the respective pixel.
- the pixels defining the image may mean the pixels defining a portion of the image (e.g., pixels defining an area in the image, the area being smaller than the (outer) boundary of the image), or all of the pixels defining the entirety of the image.
- the processor 104 may determine whether the respective pixel is a pixel belonging to the object ("object pixel") or the background (relative to the object) ("background pixel"). In this way, object pixels and background pixels may be determined and, thus, differentiated from each other. Accordingly, an object pixel may be distinguished from a non-object pixel, e.g., a background pixel.
- the term "object pixel" means a pixel that defines, in an image, a physical part of the object.
- the term "background pixel" means a pixel that defines, in an image, the background, either in the presence or in the absence of the object.
- the processor 104 may be configured to control the imaging device to generate the at least two images at respective (different) zoom values of the imaging device. For example, one image of the at least two images may be generated at a first zoom value of the imaging device, and the other image of the at least two images may be generated at a second (different) zoom value of the imaging device.
- the second zoom value may be higher or greater than the first zoom value, meaning that the processor may control the imaging device so as to zoom in on the object of interest in order to generate the image at the second zoom value. Accordingly, the imaging device may be controlled so that different images depicting the object of interest may be generated at different zoom values of the imaging device.
- the imaging device may be zoomed out between taking the at least two images.
- an image of the at least two images may be generated when the imaging device is in its original state (e.g., a non-zoom state), where, for example, the zoom value of the imaging device may be designated as zoom value "0".
- zoom values may be defined or expressed in number form or percentage form.
- the zoom values of the imaging device may correspond to the optical zoom of the imaging device.
- to "determine" the respective pixel as a pixel belonging to the object or a background may include, to "identify" the respective pixel as a pixel belonging to the object or the background.
- this may not necessarily mean to (positively) mark or tag the respective pixel as an object pixel or a background pixel, although this may be the case in various embodiments.
- the identification of the respective pixel may mean the resulting or selection process of the respective pixel being determined or inferred to be an object pixel or a background pixel based on the result of the determination of the zoom ratio.
- the zoom ratio associated with the object (“object zoom ratio”) and the zoom ratio associated with the background (“background zoom ratio”) are different.
- the object zoom ratio is greater than the background zoom ratio.
- pixels corresponding to parts of the object, or different objects, that are located at the same plane relative to the imaging device may have at least substantially similar or identical zoom ratio.
- the term "zoom ratio" may mean the degree or amount of change associated with a pixel, e.g., resulting from a change in the zoom value of the imaging device.
- the change associated with the pixel may be a change relating to a physical dimension of the pixel, for example, size, area, etc.
- the zoom ratio may provide or define a size change factor, e.g., a size increase factor.
- the zoom ratio may be or may include a (associated) value.
- pixels may be considered to be arranged in a square grid-like manner with rows and columns.
- in a first image, one pixel (e.g., Pixel A) may occupy one square of the grid-like arrangement.
- the imaging device may be operated to zoom in to generate a second image
- the corresponding Pixel A is enlarged in the second image and may occupy more than one square of the grid-like arrangement.
- there is no change in the pixel value (e.g., intensity value and/or colour value(s)) of Pixel A in the two images.
- corresponding pixels of Pixel A in the two images may be determined or identified, so that the associated zoom ratio may be determined.
- a respective background pixel may then, for example, occupy four squares of the grid-like arrangement, while a respective object pixel, due to the object being located nearer to the imaging device, may occupy nine squares of the grid-like arrangement.
- the zoom ratio of the object pixel may be determined to be nine, while the zoom ratio of the background pixel may be determined to be four, lower than that for the object pixel. It should be appreciated that this may be a simplistic analysis or determination of the zoom ratios for the purpose of aiding understanding, and that other forms or manners of determination of the zoom ratios may be employed.
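The simplified grid example above reduces to a small piece of arithmetic. The numbers below are the ones used in the text; everything else about this sketch is illustrative:

```python
# Before zooming, each pixel covers 1 grid square; after zooming in, the
# background pixel covers 4 squares and the object pixel covers 9.
background_squares_before, background_squares_after = 1, 4
object_squares_before, object_squares_after = 1, 9

background_zoom_ratio = background_squares_after / background_squares_before
object_zoom_ratio = object_squares_after / object_squares_before

# The object, being nearer to the imaging device, has the larger ratio.
assert object_zoom_ratio > background_zoom_ratio
```

As the text notes, this is a simplistic determination offered only to aid understanding; other manners of determining the zoom ratios may be employed.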
- the zoom ratio of a pixel may be determined, and from the zoom ratio, the pixel may be determined as either an object pixel or a background pixel. As such, the object of interest may therefore be determined and separated from the background.
- the object of interest may be three-dimensional (3D) or may include one or more 3D features
- the object pixels may not have a constant zoom ratio (as generally is the case for a 2D object or 2D feature). Due to the 3D nature, different parts of the object may be at different distances relative to the imaging device and, therefore, different object pixels may have different zoom ratios and, hence, there may be a range of zoom ratios associated with the object pixels. The difference in the zoom ratios may be small. In other words, the object pixels corresponding to an object of interest that is 3D may have zoom ratios encompassing a range of values.
- Different parts of the 3D object may be located at different distances relative to the imaging device, and, therefore, when the 3D object is zoomed in, the pixels corresponding to the different parts may have different zoom ratios, although these are generally relatively close to each other with small differences.
- the object pixels may be covered by a range of zoom ratios.
- the above is similarly applicable to background pixels where the background may include one or more 3D features.
- the processor 104 may be further configured to remove or discard pixels determined as belonging to the background. All pixels that are determined as background pixels may be removed or discarded.
- the processor 104 may further be configured to control the imaging device to generate at least two additional images at the respective zoom values of the imaging device, each of the at least two additional images depicting the background without the object of interest. This may mean that the at least two additional images may be generated at the same zoom values for generating the at least two images.
- images depicting respectively the background with and without the object of interest may be generated at a first zoom value
- images depicting respectively the background with and without the object of interest may be generated at a second zoom value.
- the processor 104 may further be configured, for an additional respective pixel of pixels defining an additional image of the at least two additional images, to determine, based on (or from) the at least two additional images, an additional zoom ratio associated with the additional respective pixel. For determining the respective pixel (of pixels defining the image of the at least two images as a pixel belonging to the object or a background), the processor 104 may further be configured to make a comparison between the zoom ratio and the additional zoom ratio, and may determine, based on the comparison, the respective pixel as a pixel belonging to the object or the background.
- the additional zoom ratio is associated with a background pixel. Accordingly, if a pixel is a background pixel, its associated zoom ratio would be at least substantially similar or comparable to the additional zoom ratio.
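A minimal sketch of that comparison, assuming a hypothetical function name and tolerance (neither is specified in the patent): the background-only pair of images yields a reference ("additional") zoom ratio, and a pixel whose own ratio is substantially similar to that reference is classified as background, otherwise as an object pixel.

```python
def classify_against_reference(pixel_zoom_ratio, reference_zoom_ratio, tol=0.1):
    """Return 'background' if the ratio matches the reference, else 'object'."""
    if abs(pixel_zoom_ratio - reference_zoom_ratio) <= tol:
        return "background"
    return "object"

# With a background-only reference ratio of 4.0:
a = classify_against_reference(4.02, 4.0)  # close to reference -> "background"
b = classify_against_reference(9.0, 4.0)   # far from reference -> "object"
```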
- the processor 104 may be configured to make a comparison between the zoom ratio and a threshold value, and to determine, based on the comparison, the respective pixel as a pixel belonging to the object or the background.
- a pixel with an associated zoom ratio that is determined to be at least substantially similar to the threshold value, or is determined to be larger or greater than the threshold value, may be determined as an object pixel, while a pixel with an associated zoom ratio that is determined to be different from the threshold value, or is determined to be smaller or less than the threshold value, may be determined as a background pixel.
- the threshold value may, for example, be set or inputted by a user.
- the processor 104 may be configured to determine the respective pixel as a pixel belonging to the object if the zoom ratio is determined to be higher than the threshold value, and to maintain the respective pixel, and, to determine the respective pixel as a pixel belonging to the background if the zoom ratio is determined to be lower than the threshold value, and to discard the respective pixel.
- the threshold value may be (set) at a level located between one or more zoom ratios estimated or expected for object pixels, and one or more zoom ratios estimated or expected for background pixels.
- the zoom ratio for an object pixel may be different (e.g., higher) than the zoom ratio for a background pixel.
- pixels with determined zoom ratios that are greater than the threshold value may be determined or identified to correspond to object pixels and thus may be maintained, while pixels with determined zoom ratios that are less than the threshold value may be determined or identified to correspond to background pixels and thus may be removed or discarded.
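The threshold rule can be sketched as follows. The pixel labels, ratio values, and threshold are hypothetical; per the text, the threshold would sit between the expected object and background zoom ratios (e.g., set by a user):

```python
def filter_by_threshold(pixels, zoom_ratios, threshold):
    """Keep pixels whose zoom ratio exceeds the threshold; drop the rest."""
    kept = []
    for pixel, ratio in zip(pixels, zoom_ratios):
        if ratio > threshold:   # object pixel: maintain
            kept.append(pixel)
        # else: background pixel: discard
    return kept

pixels = ["p0", "p1", "p2", "p3"]
ratios = [4.0, 9.0, 8.8, 4.1]   # background ratios ~4, object ratios ~9
threshold = 6.0                  # chosen between the two groups
kept = filter_by_threshold(pixels, ratios, threshold)
```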
- the processor 104 may be configured to make a comparison between the zoom ratio and a range of cut-off values, to determine the respective pixel as a pixel belonging to the object if the zoom ratio is determined to be within the range of cut-off values, and to maintain the respective pixel, and, to determine the respective pixel as a pixel belonging to the background if the zoom ratio is determined to be outside of the range of cut-off values, and to discard the respective pixel.
- the range of cut-off values may be defined or inputted by a user.
- the range of cut-off values may include or encompass one or more zoom ratios estimated or expected for object pixels. Respective pixels whose associated zoom ratios fall inside such range may be determined to be object pixels and be maintained, while, respective pixels whose associated zoom ratios fall out of the range may be determined to be background pixels (or non-object pixels) and be discarded.
- if the range of cut-off values is defined such that only the zoom ratios associated with pixels belonging to the first object fall within the range (assuming that the first object is the object of interest), the result would be that pixels belonging to the first object would be maintained, while pixels belonging to the second object and the background would be removed or discarded.
- the second object may be defined as part of the background.
- if the range of cut-off values is defined such that only the zoom ratios associated with pixels belonging to the second object fall within the range (assuming that the second object is the object of interest), the result would be that pixels belonging to the second object would be maintained, while pixels belonging to the first object and the background would be removed. If the range of cut-off values is defined such that only the zoom ratios associated with pixels belonging to the first and second objects fall within the range (assuming that the two objects are both objects of interest), the result would be that pixels belonging to the first and second objects would be maintained while pixels belonging to the background would be removed. It should be appreciated that this may be extended to situations where there are more than two objects separated by distance from each other, relative to the imaging device.
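The cut-off-range behaviour with two objects can be sketched as below. All labels and numbers are hypothetical, chosen so that the first object (nearer to the camera) has a larger zoom ratio than the second, which in turn exceeds the background's:

```python
def filter_by_range(pixels, zoom_ratios, low, high):
    """Keep pixels whose zoom ratio falls within [low, high]; drop the rest."""
    return [p for p, r in zip(pixels, zoom_ratios) if low <= r <= high]

pixels = ["bg", "first_obj", "second_obj"]
ratios = [4.0, 9.0, 6.5]   # background < second object < first object

only_first = filter_by_range(pixels, ratios, 8.5, 9.5)
only_second = filter_by_range(pixels, ratios, 6.0, 7.0)
both = filter_by_range(pixels, ratios, 6.0, 9.5)
```

Choosing the range thus selects which object's pixels are maintained; everything outside the range, including the other object, is treated as background and discarded.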
- the processor 104 may further be configured to generate a resultant image.
- the processor 104 may further be configured to label (or mark) the respective pixel with the zoom ratio, for example, for subsequent processing purposes.
- the term“object” may refer to a thing, a product, a person, or a subject to be imaged. Further, it should be appreciated that the term“object” may include a living thing (e.g., person, animal, plant, etc.) and/or a non-living thing (e.g., a product, item, inanimate body or object, etc.).
- the processor 104 may control the lighting arrangement 102 to vary at least one parameter of the lighting during or for generation of images.
- the at least one parameter may be varied in between generation of two images, or of two immediately adjacent images.
- the at least one parameter of the lighting may include any one of or any combination of a lighting intensity, a lighting colour, or a lighting direction.
- the lighting may include an object lighting to illuminate the object.
- the processor 104 may be configured to control the lighting arrangement 102 to vary at least one object parameter of the object lighting during or for generation of images.
- background lighting may be provided to illuminate the background.
- the term "background lighting" may mean the lighting that illuminates the background, and/or the space in between the background and the object of interest, without illuminating the object. This means that the object of interest to be imaged is not illuminated by the background lighting, i.e., lighting provided from one or more light sources, to be employed as the background lighting to illuminate the background, does not illuminate the object.
- the background lighting may be constant (i.e., not variable), or at least one background parameter of the background lighting may be varied.
- the at least one object parameter and/or the at least one background parameter may be varied simultaneously or may be varied by the same factor or value.
- the at least one object parameter and/or the at least one background parameter may correspond to intensity and/or colour.
- the imaging device may be adjusted to low(er) sensitivity so that light scattered from the background that may potentially illuminate the object may not be captured or be observable in one or more images of the plurality of images generated.
- the lighting arrangement 102 may include a plurality of light sources to provide the lighting.
- Each light source may be or may include a light emitting diode or device (LED).
- the plurality of light sources may include a combination of red (R) LEDs, green (G) LEDs, blue (B) LEDs, and white (W) LEDs.
- RGBW LEDs may provide a pure white colour.
- any other types of light sources may be used, as long as the light sources may be controlled to vary at least one parameter (e.g., light intensity) of the lighting provided by the light sources.
- Each light source (e.g., LED) may be individually addressable. This may mean that each light source may be independently switched on/off, and controlled to vary the at least one parameter of the lighting.
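A hypothetical data-structure sketch (not from the patent) of what "individually addressable" implies: each RGBW light source in a panel can be switched on/off and have its channel intensities varied independently of the others.

```python
class RgbwLed:
    """One individually addressable RGBW light source (illustrative only)."""

    def __init__(self):
        self.on = False
        self.r = self.g = self.b = self.w = 0  # channel intensities, 0-255

    def set(self, r, g, b, w):
        """Switch this LED on with the given per-channel intensities."""
        self.on = True
        self.r, self.g, self.b, self.w = r, g, b, w

# A small panel of 8 LEDs; only LED 3 is driven, white channel at full.
panel = [RgbwLed() for _ in range(8)]
panel[3].set(0, 0, 0, 255)
```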
- the lighting arrangement 102 may include a plurality of (separate) lighting panels to provide lighting from a plurality of directions to the object and/or background.
- Each lighting panel may include a plurality of light sources, for example, LEDs, e.g., a combination of red LEDs, green LEDs, blue LEDs, and white LEDs.
- the lighting arrangement 102 may include at least one curved lighting panel configured to provide a focused lighting towards the object.
- the at least one curved lighting panel may include a plurality of light sources, e.g., LEDs. Four curved lighting panels may be provided.
- the lighting arrangement 102 may be or may form part of a pixel machine.
- a driver arrangement may be provided (e.g., in the pixel machine) for driving the lighting arrangement 102.
- the driver arrangement may have a plurality of drivers for driving the associated light sources of the lighting arrangement 102.
- the processor 104 may be provided with one or more communication interfaces, e.g., including at least one interface for communication with the lighting arrangement 102 or the pixel machine.
- the processor 104 may be or may form part of a (main) controller.
- the (main) controller may further include a display and other peripheral devices, e.g., a keyboard, a mouse, etc.
- the pixel machine and the (main) controller may be comprised in the apparatus 100.
- the processor 104 may be further configured to control relative movement between the imaging device and the object, and the processor 104 may be further configured to control the imaging device to generate images from a plurality of directions relative to the object. For example, there may be rotational movement between the imaging device and the object. This may allow 360° generation of images of the object.
- the imaging device may be placed on a support structure.
- the support structure may be a movable support structure (e.g., an XYZ motorized stand), and the processor 104 may control movement (e.g., rotational movement) of the movable support structure, relative to the object.
- the apparatus 100 may include a turn table to support the object, and the processor 104 may control movement (e.g., rotational and/or linear movement) of the turn table, relative to the imaging device.
- the object may be placed on the turn table to be rotated.
- the turn table may be at least substantially transparent or transmissive to light.
- the apparatus 100 may further include an actuator configured to communicate with the processor 104, wherein, in response to a single activation of the actuator, operation of the processor 104 may be (fully) automated.
- the actuator may be a push actuator, e.g., a push button.
- the processor 104 may further be configured to identify a flicker effect in the images generated and to remove the flicker effect. For example, when the imaging device is operated to take videos, a flicker effect may be captured in one or more images due to a frequency difference between the operation of the imaging device and that of the lighting arrangement 102. Flicker in the images may be observed continuously. Rather than controlling the lighting to minimise the flicker effect, removal may be achieved in the processing of the images. From the images generated, flicker which is regular and similar may be identified so that the flicker may eventually be removed.
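The identification of regular, similar flicker across a sequence of frames might be sketched as follows; the mean-brightness representation, the `period` parameter and the tolerance are illustrative assumptions, not details taken from the embodiment.

```python
def detect_flicker(frame_means, period):
    """Check whether mean frame brightness varies regularly with `period`.

    Flicker from a camera/lighting frequency mismatch shows up as a
    brightness pattern that repeats every `period` frames; regular and
    similar variation across the whole sequence suggests flicker.
    """
    cycles = [frame_means[i:i + period]
              for i in range(0, len(frame_means) - period + 1, period)]
    first = cycles[0]
    return all(
        all(abs(a - b) < 2.0 for a, b in zip(first, cycle))
        for cycle in cycles[1:]
    )

# Brightness repeating every 3 frames suggests flicker:
means = [100, 90, 95, 100, 90, 95, 100, 90, 95]
assert detect_flicker(means, 3)
```

Once such a regular pattern is found, the corresponding brightness variation can be compensated per frame during processing, rather than by re-tuning the lighting.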
- FIG. 1B shows a schematic block diagram of an apparatus 100b for imaging (and/or for image processing), according to various embodiments.
- the apparatus 100b includes a processor 104b, and a memory 105 coupled to the processor 104b.
- the memory 105 and the processor 104b may be coupled to each other (as represented by the line 107), e.g., physically coupled and/or electrically coupled.
- the memory 105 has stored therein instructions, which when executed by the processor 104b, cause the processor 104b, for at least two images generated, each of the at least two images depicting an object of interest, and the at least two images depicting the object of interest and a background at different distances relative to each other, to determine, based on the at least two images and for a respective pixel of pixels defining an image of the at least two images, a zoom ratio associated with the respective pixel, and to determine, using the zoom ratio determined, the respective pixel as a pixel belonging to the object of interest or a background.
- the at least two images may be generated at respective zoom values of an imaging device.
- the at least two images may be transferred to and/or stored in the memory 105 or another memory.
- the memory 105 has stored therein instructions, which when executed by the processor 104b, cause the processor 104b to control an imaging device to generate at least two images, each of the at least two images depicting an object of interest, and the at least two images depicting the object of interest and a background at different distances relative to each other, to determine, based on (or from) the at least two images and for a respective pixel of pixels defining an image of the at least two images, a zoom ratio associated with the respective pixel, and to determine, using the zoom ratio determined, the respective pixel as a pixel belonging to the object of interest or the background.
- the at least two images may be transferred to and/or stored in the memory 105 or another memory.
- the processor 104b may control the imaging device to generate the at least two images at respective zoom values of the imaging device.
- the processor 104b may execute the instructions to further cause the processor 104b to control the imaging device to generate at least two additional images at the respective zoom values of the imaging device, each of the at least two additional images depicting the background without the object of interest, to determine, based on (or from) the at least two additional images and for an additional respective pixel of pixels defining an additional image of the at least two additional images, an additional zoom ratio associated with the additional respective pixel, for determining the respective pixel (of pixels defining the image of the at least two images as a pixel belonging to the object or a background), to make a comparison between the zoom ratio and the additional zoom ratio, and, to determine, based on the comparison, the respective pixel as a pixel belonging to the object or the background.
- the processor 104b may execute the instructions to further cause the processor 104b, for determining the respective pixel (of pixels defining the image of the at least two images as a pixel belonging to the object or a background), to make a comparison between the zoom ratio and a threshold value, and, to determine, based on the comparison, the respective pixel as a pixel belonging to the object or the background.
- the processor 104b may execute the instructions to cause the processor 104b to determine the respective pixel as a pixel belonging to the object if the zoom ratio is determined to be higher than the threshold value, and to maintain the respective pixel, and to determine the respective pixel as a pixel belonging to the background if the zoom ratio is determined to be lower than the threshold value, and to discard the respective pixel.
- the processor 104b may execute the instructions to cause the processor 104b, for determining the respective pixel (of pixels defining the image of the at least two images as a pixel belonging to the object or a background), to make a comparison between the zoom ratio and a range of cut-off values, to determine the respective pixel as a pixel belonging to the object if the zoom ratio is determined to be within the range of cut-off values, and to maintain the respective pixel, and to determine the respective pixel as a pixel belonging to the background if the zoom ratio is determined to be outside of the range of cut-off values, and to discard the respective pixel.
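As a minimal sketch of the two comparison schemes just described (a single threshold, or a range of cut-off values); the function name, parameter names and numeric values are illustrative assumptions, not taken from the embodiment:

```python
def classify_pixel(zoom_ratio, threshold=None, cutoff_range=None):
    """Decide whether a pixel belongs to the object or the background.

    Either a single threshold or an inclusive (low, high) range of
    cut-off values may be supplied, mirroring the two schemes above.
    """
    if threshold is not None:
        # Zoom ratio above the threshold -> maintain as object pixel.
        return "object" if zoom_ratio > threshold else "background"
    low, high = cutoff_range
    # Within the cut-off range -> object; outside -> discard as background.
    return "object" if low <= zoom_ratio <= high else "background"

# Illustrative values: a pixel enlarging faster than the background.
print(classify_pixel(1.5, threshold=1.2))             # object
print(classify_pixel(1.05, cutoff_range=(1.3, 1.8)))  # background
```

Pixels classified as background would then be discarded, and the remaining object pixels assembled into the resultant image.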
- the processor 104b may execute the instructions to further cause the processor 104b to generate a resultant image of the object based on pixels determined as belonging to the object. In various embodiments, the processor 104b may execute the instructions to further cause the processor 104b to discard pixels determined as belonging to the background. All pixels that are determined as background pixels may be removed or discarded.
- the processor 104b may execute the instructions to further cause the processor 104b to label (or mark) the respective pixel with the zoom ratio.
- FIG. 1C shows a flow chart 120 illustrating a method for imaging (and/or for image processing), according to various embodiments.
- At 122, at least two images are generated via (or using) an imaging device, each of the at least two images depicting an object of interest, and the at least two images depicting the object of interest and a background at different distances relative to each other.
- a zoom ratio associated with the respective pixel is determined.
- the respective pixel is determined as a pixel belonging to the object of interest or a background.
- the at least two images may be generated at respective (different) zoom values of the imaging device.
- the method may further include generating, via the imaging device, at least two additional images at the respective zoom values of the imaging device, each of the at least two additional images depicting the background without the object of interest, determining, based on (or from) the at least two additional images and for an additional respective pixel of pixels defining an additional image of the at least two additional images, an additional zoom ratio associated with the additional respective pixel, and, at 126, making a comparison between the zoom ratio and the additional zoom ratio, and determining, based on the comparison, the respective pixel as a pixel belonging to the object or the background.
- the method may include making a comparison between the zoom ratio and a threshold value, and determining, based on the comparison, the respective pixel as a pixel belonging to the object or the background.
- the method may include determining the respective pixel as a pixel belonging to the object if the zoom ratio is determined to be higher than the threshold value, and maintaining the respective pixel, and, determining the respective pixel as a pixel belonging to the background if the zoom ratio is determined to be lower than the threshold value, and discarding the respective pixel.
- the method may include making a comparison between the zoom ratio and a range of cut-off values, determining the respective pixel as a pixel belonging to the object if the zoom ratio is determined to be within the range of cut-off values, and maintaining the respective pixel, and, determining the respective pixel as a pixel belonging to the background if the zoom ratio is determined to be outside of the range of cut-off values, and discarding the respective pixel.
- a resultant image of the object may be generated based on (or from or using) pixels identified as belonging to the object.
- the method may further include removing or discarding pixels determined as belonging to the background. All pixels that are determined as background pixels may be removed or discarded.
- the method may further include labeling (or marking) the respective pixel with the zoom ratio.
- the method may further include identifying a flicker effect in the images generated, and removing the flicker effect.
- the method may further include providing lighting.
- at least one parameter of the lighting provided to illuminate the object and/or background may be varied.
- FIG. 1D shows a flow chart 130 illustrating a method for imaging (and/or for image processing), according to various embodiments.
- a zoom ratio associated with the respective pixel is determined based on the at least two images.
- the respective pixel is determined as a pixel belonging to the object of interest or a background.
- the at least two images may be generated at respective zoom values of an imaging device.
- the method may include providing the lighting.
- Various embodiments may also provide a computer program or a computer program product, which may include instructions which, when executed by a computing device, cause the computing device to carry out a method for imaging as described herein.
- Various embodiments may also provide a computer program or a computer program product, which may include instructions which, when executed by a computing device, cause the computing device to generate, via an imaging device, at least two images at respective zoom values of the imaging device, the at least two images depicting an object of interest, to determine, based on (or from) the at least two images and for a respective pixel of pixels defining an image of the at least two images, a zoom ratio associated with the respective pixel, and to determine, using the zoom ratio determined, the respective pixel as a pixel belonging to the object or a background.
- the apparatus of various embodiments may include two parts: a pixel machine and a main controller, as will be described further below with reference to FIGS. 2A to 2E and 3.
- FIGS. 2A to 2E show schematic perspective views of a pixel machine 201 of various embodiments. It should be appreciated that while one or more features or elements of the pixel machine 201 may not be shown in one or more of FIGS. 2A to 2E for clarity and easier understanding purposes, and/or to illustrate an internal environment of the pixel machine 201, such features/elements may nevertheless be part of the pixel machine 201 as may be apparent from FIGS. 2A to 2E. Further, while LEDs are described in the context of the pixel machine 201, it should be appreciated that other types of light sources may be employed, as long as at least one parameter (e.g., light intensity) of the lighting provided may be variable. Nevertheless, LED lighting is preferable due to its characteristics of low power consumption, speed of control and long life.
- the pixel machine 201 may include a front-left LED panel 202, a front-middle LED panel 204, a front-right LED panel 206, a back side LED panel 207, a left side LED panel 214, a right side LED panel 216, four (key light) LED panels 220 which may be curved, a top LED panel 222, a top-back LED panel 224, a top-left LED panel 225, a top-right LED panel 226, a top-front (key light) LED panel 227, and a bottom LED panel 228, where one or more of these LED panels may define a lighting arrangement (e.g., 102, FIG. 1A).
- the lighting arrangement may provide omnidirectional lighting, where the intensity of the lighting may be changed.
- Each of the LED panels 202, 204, 206, 207, 214, 216, 220, 222, 224, 225, 226, 227, 228 may include a plurality of LEDs (illustrated as squares in the panels and represented as 229 for some LEDs).
- Each LED panel 202, 204, 206, 207, 214, 216, 220, 222, 224, 225, 226, 227, 228 may include a combination of red (R) LEDs, green (G) LEDs, blue (B) LEDs, and white (W) LEDs (RGBW LEDs), which may provide pure white colour as producing white from RGB in a close box may be challenging.
- Each LED panel may be individually or independently controlled.
- Each LED 229 may be individually or independently addressable.
- the lighting arrangement of the pixel machine 201 may provide three types of lights: key light, fill light and back light, thereby providing three-point lighting.
- the lighting arrangement may also provide background lighting.
- the four LED panels 220 in the corner, the top-back LED panel 224, the top-left LED panel 225, the top-right LED panel 226, and the top-front LED panel 227 may provide key light.
- the left side LED panel 214 and the right side LED panel 216 may provide fill light.
- the back side LED panel 207 may provide back light.
- An imaging device (e.g., camera) 210, which may be supported on a support structure (e.g., camera stand) 212, may be provided for imaging through an aperture 208 defined in the front-middle LED panel 204.
- the imaging device 210 may have a zoom feature, meaning that the imaging device 210 is capable of zooming-in and zooming-out operations for generation of images.
- a turn table 231 may be provided to support one or more (sample) objects (e.g., represented as 230) that are to be imaged.
- a stepper motor 232 may be provided for rotation of the object(s) 230 or turn table 231.
- the lighting arrangement of the pixel machine 201 may provide lighting to light up an object (or product) 230 at least substantially equally from all angles so one or more or all features of the object(s) 230 may have good resolution, contrast and colours.
- the four side LED panels 220 (curved or round shape) employ more LEDs to provide sharp and crisp contrast of the object(s) 230.
- the panels 220 may also help to produce useful shadows and shining areas on the object(s) 230.
- the curved shape angle of the LED panels 220 may give a more focused light to the object(s) 230.
- the panels 220 may also be used to generate different light effects, e.g., highlights, long and softer shadows, the Chiaroscuro effect, etc., for photography.
- the key light provided may be angled such that it may light up any part of an object 230 with control of the intensity of the key light.
- Each LED 229 may be addressed individually.
- Such a technique may allow (full) control over object lighting so that an optimal lighting condition or environment suitable for product photography may be determined.
- the optimal or best lighting condition or environment may be determined or suggested automatically with minimal or without any saturation and/or dark spots.
- An interface 234 may be provided on the pixel machine 201, for example, for cable connection to the processor to be described below with reference to FIG. 3.
- FIG. 3 shows a schematic back view of a main controller 340, according to various embodiments.
- the main controller 340 may include a processor 342, for example, an industrial PC with embedded LEDs and motor controllers.
- the main controller 340 may further include a display device (e.g., a touch-screen monitor) 352, a keyboard 354 and a mouse 356.
- An actuator, for example a single capture button 344, may be provided (integrated) with the processor 342.
- a connector 348 may be provided for connection to a network, e.g., a local area network (LAN).
- a power connector or socket 350 may also be provided.
- An interface (or link connector) 346 may be provided on the processor 342 for communication with the pixel machine 201 (FIGS. 2A to 2E), for example, via the interface 234 (FIG. 2A). In this way, signals may be communicated from the processor 342 to the pixel machine 201, for example, to control movement and operation of the imaging device 210, to vary the parameter of the lighting provided by the lighting arrangement, etc.
- a cable may be connected between the interfaces 234, 346, e.g., a multicore cable which may carry power, and control signals for LEDs 229, motor(s) (e.g., 232) and the imaging device 210 of the pixel machine 201.
- elimination of the background may be achieved based on zoom.
- the background may be captured in various frames at different zoom values.
- pixels belonging to the background and pixels belonging to the object may be separated as the object is closer to the imaging device and its zoom ratio is different to that of the background.
- the zoom ratio or factor for the product, which is near the imaging device, is more than that for the background.
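One simplified way to see why a nearer product can show a larger size-change factor is a pinhole model in which the imaging device moves towards the scene (the XYZ motorized stand described later would permit this): apparent size varies inversely with subject distance. This is only an illustrative model, and the distances below are invented, not values from the embodiment.

```python
def apparent_scale_factor(distance_mm, move_closer_mm):
    """Scale factor of an item's image when the camera moves closer.

    Simplified pinhole model: apparent size is proportional to
    1/distance, so advancing the camera by `move_closer_mm` enlarges an
    item at `distance_mm` by distance / (distance - move_closer_mm).
    """
    return distance_mm / (distance_mm - move_closer_mm)

# A product 300 mm away grows faster than a background 900 mm away
# when the camera advances 100 mm:
product_ratio = apparent_scale_factor(300, 100)     # 1.5
background_ratio = apparent_scale_factor(900, 100)  # 1.125
```

The gap between the two factors is exactly what allows product pixels and background pixels to be told apart.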
- FIG. 4 shows a flow chart 490 illustrating a method for imaging, as an example to determine or identify object pixels and background pixels based on zoom effect.
- a product may be placed in the apparatus (e.g., 100) of various embodiments (e.g., in the pixel machine 201). Lighting may be provided to illuminate the object. The background may also be illuminated by the lighting provided. A command is sent to the imaging device (e.g., 210) to generate images.
- the imaging device may be controlled to generate images depicting the object at different zoom values, which may be set in a software application.
- the images may include the background.
- the light intensity and/or colour emitted from the associated LEDs or LED panels may be provided and/or varied under the control of the processor 342 (FIG. 3) which may in turn take command via or from a software application, e.g., based on user input. It should be appreciated that key light may be provided on the object.
- the button 344 (FIG. 3) may be actuated/pressed to perform the product imaging or photography. If the button 344 is pressed for more than 5 seconds, 360° photography may be carried out. Actuation of the button 344 may result in automated operations.
- images may be generated depicting the background without the product at the same zoom values as for the images generated at 491.
- Corresponding frames (e.g., at the same zoom value), with and without the product, may be compared to each other.
- zoom ratios of pixels may be determined.
- the zoom ratios may be determined using the frames captured at 491, or from frames captured at 491 and 492.
- the zoom ratio for pixels defining the product is different to the pixels defining the background.
- background pixels may, generally, have a substantially similar zoom ratio (or substantially constant zoom ratio), hence the condition "constant background".
- the zoom ratio of the pixels in the images or frames may be determined. There are two different zoom ratios, one each for the background and the object, because the object is nearer to the imaging device. Pixels belonging to the object may be determined, and the object may then be "cut".
- a product may be 20 mm (X) and 40 mm (Y), where the LED spacing in the background may be 5 mm (X) and 5 mm (Y).
- the camera may be zoomed to 10% zoom.
- the size increase factor (or zoom ratio) for the product near the camera is more than the background. This factor helps in determining which pixels belong to the background and which pixels belong to the product.
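The size increase factor can be read off directly from measured extents in two frames. A hedged sketch follows, where the post-zoom sizes are invented for illustration and are not values stated in the example above:

```python
def zoom_ratio(size_before, size_after):
    """Size increase factor of a measured feature between two frames."""
    return size_after / size_before

# Illustrative measurements: after a zoom step, the near product's
# 40 mm extent appears to grow more than the 5 mm LED spacing of the
# more distant background.
product_ratio = zoom_ratio(40.0, 45.2)    # ~1.13
background_ratio = zoom_ratio(5.0, 5.3)   # ~1.06
assert product_ratio > background_ratio
```

Comparing these factors per region is what lets the method assign each pixel to the product or to the background.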
- FIG. 5 shows a schematic view illustrating different pixels and their corresponding zoom ratios, according to various embodiments.
- a first frame, "Frame A" 560a, may be captured or generated at a first zoom value, z1.
- Frame A 560a that is generated shows a first object 561a and a second object 562a against a background 564a.
- the second object 562a is arranged further from the imaging device than the first object 561a, meaning that the second object 562a is arranged in the space between the first object 561a and the background 564a.
- the second object 562a is sized appropriately relative to the first object 561a so that the first object 561a and the second object 562a as captured in Frame A 560a are of at least substantially similar size.
- a pixel 565a is defined for the first object 561a
- a pixel 566a is defined for the second object 562a
- a pixel 567a is defined for the background 564a.
- the pixels 565a, 566a, 567a in Frame A 560a may be of at least substantially similar size.
- the imaging device may then be operated to a second zoom value, z2, where z2 > z1.
- a second frame, "Frame B" 560b, may be captured or generated, showing an enlarged first object 561b and an enlarged second object 562b against an enlarged background 564b.
- the pixels 565a, 566a, 567a are correspondingly enlarged to the pixels 565b, 566b, 567b.
- the right side of FIG. 5 shows enlarged views of the pixels 565a, 566a, 567a, 565b, 566b, 567b to illustrate the change of the pixels 565a, 566a, 567a to the pixels 565b, 566b, 567b, with an increase in the respective sizes.
- the pixel value e.g., intensity value and/or colour value
- the size increase, and therefore, the respective zoom ratios, for the different pixels 565a, 566a, 567a of Frame A 560a being enlarged to the pixels 565b, 566b, 567b of Frame B 560b, may be different to each other.
- for the pixel 565a enlarged to the pixel 565b, the associated zoom ratio may be m1.
- for the pixel 566a enlarged to the pixel 566b, the associated zoom ratio may be m2.
- for the pixel 567a enlarged to the pixel 567b, the associated zoom ratio may be m3.
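One way such per-pixel zoom ratios m1, m2, m3 could be estimated is from how far each tracked pixel moves away from the optical centre between Frame A and Frame B; under a zoom about the centre, the radial distance scales by the zoom ratio. The coordinates below are invented for illustration.

```python
import math

def pixel_zoom_ratio(centre, pos_a, pos_b):
    """Zoom ratio of one tracked pixel between two frames.

    Under a zoom about the optical centre, a pixel's distance from the
    centre scales by the zoom ratio, so m = r_B / r_A.
    """
    return math.dist(centre, pos_b) / math.dist(centre, pos_a)

# Illustrative positions for three pixels (two objects, background):
c = (320, 240)
m1 = pixel_zoom_ratio(c, (420, 240), (470, 240))  # 1.5
m2 = pixel_zoom_ratio(c, (320, 340), (320, 380))  # 1.4
m3 = pixel_zoom_ratio(c, (520, 240), (540, 240))  # 1.1
```

The three ratios differing from one another is what separates the two objects and the background in the figure.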
- the images may be generated both with and without the object.
- a pair of images, respectively with and without the object may be generated at the same zoom value.
- corresponding images with and without the product may be compared to each other, and the zoom ratio throughout the frames or in all the frames may be determined.
- the respective zoom ratios for the background pixels and for the object pixels are different, and, in this way, object pixels and background pixels may be differentiated.
- the background may be determined and subsequently removed.
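The pairing of corresponding frames with and without the product can be sketched as a per-pixel comparison. Here a simple grey-value difference with an assumed tolerance stands in for the zoom-ratio comparison described above; the frame representation and tolerance are illustrative assumptions.

```python
def object_mask(frame_with, frame_without, tol=10):
    """Per-pixel object/background decision from paired frames.

    Pixels that match the object-free frame (within `tol`) are treated
    as background; the rest belong to the object. Frames are given as
    lists of rows of grey values.
    """
    return [
        [abs(a - b) > tol for a, b in zip(row_w, row_wo)]
        for row_w, row_wo in zip(frame_with, frame_without)
    ]

with_obj = [[10, 200, 12], [11, 210, 13]]
without_obj = [[10, 11, 12], [12, 12, 14]]
mask = object_mask(with_obj, without_obj)
# → [[False, True, False], [False, True, False]]
```

True entries mark object pixels to keep; False entries mark background pixels to discard.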
- the method may be carried out without the object in the pixel machine (e.g., 201) to determine the zoom ratio by varying the zoom values.
- the product may be placed on a turn table (e.g., 231), and the same zoom value may be run, and both frames with product and without product may be used for comparison to find which pixel belongs to the object and which pixel belongs to the background, so as to eliminate the background.
- the various methods for imaging may be used for making 360° photography of product by rotation where each frame may have clear background elimination.
- the apparatus may help a user to place the object in the centre of the turn table to have a better result in 360° photography.
- the object may be placed anywhere at the turn table and rotated, and the object pixels may be determined or identified.
- the image that is desired or best to put in front of the imaging device may be identified, and such image with the product may be shown on a display device in half tone, together with a half tone of the live video. This provides a live view which helps the user to match the location of the object to the image on screen by overlaying both images.
- the product may first be detected where it is in the frame.
- the centre or middle of the product may be determined by dividing X and Y values by 2.
- a centre line of the product may be drawn.
- the centre line may be thick, as the middle pixels may not all lie on the same X and Y lines; the width (threshold of centre) of the centre line may therefore increase, but may in no way have a value outside the product boundary.
- the difference between the current location of the product and where it should be if the product were placed in the centre of the turn table may be determined, and a half-tone image may be shown to the user, who may then, where necessary, act to move the product to the centre of the turn table to obtain a good 360° rotation view.
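The centring step described above (middle of the product from the X and Y extents divided by 2, then the offset to the frame centre) might look like the following sketch; the function name, the bounding-box convention and the sample coordinates are assumptions for illustration.

```python
def centring_offset(object_pixels, frame_size):
    """Offset needed to move the product to the centre of the frame.

    `object_pixels` is an iterable of (x, y) pixels determined to belong
    to the object; the product centre is taken as the midpoint of its
    bounding box (X and Y extents divided by 2).
    """
    xs = [p[0] for p in object_pixels]
    ys = [p[1] for p in object_pixels]
    centre_x = (min(xs) + max(xs)) / 2
    centre_y = (min(ys) + max(ys)) / 2
    frame_cx, frame_cy = frame_size[0] / 2, frame_size[1] / 2
    return frame_cx - centre_x, frame_cy - centre_y

# A product spanning x 100-200, y 50-150 in a 640x480 frame:
dx, dy = centring_offset([(100, 50), (200, 150)], (640, 480))
# → dx = 170.0, dy = 140.0
```

The (dx, dy) offset is what the half-tone overlay would ask the user to correct.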
- each light source may be controlled through a touch screen monitor or by one or more mouse clicks which help the user to create different lighting effects.
- LEDs may be driven by a DC (direct current) controller to minimise or eliminate lighting non-sync locking effect.
- a DC (direct current) controller may eliminate the flicker caused by most low-cost LED drivers, which, to save cost, drive LEDs directly from the 50/60 hertz AC line.
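The mismatch such a DC controller avoids can be illustrated numerically: AC-driven LEDs flicker at twice the line frequency, and when that is not a multiple of the camera frame rate a visible beat appears in the video. The model below is a simplification for illustration, not the embodiment's own analysis.

```python
def beat_frequency(flicker_hz, frame_rate_hz):
    """Apparent beat between light flicker and the camera frame rate."""
    return abs(flicker_hz - round(flicker_hz / frame_rate_hz) * frame_rate_hz)

# 50 Hz mains -> 100 Hz light flicker; a 30 fps camera sees a 10 Hz
# beat, while a 25 fps camera (100 = 4 x 25) sees none.
assert beat_frequency(100, 30) == 10
assert beat_frequency(100, 25) == 0
```

Driving the LEDs with constant DC removes the 100/120 Hz modulation entirely, so no beat can arise at any frame rate.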
- Each light source may be addressed and controlled independently. While the light sources (e.g., LEDs) receive commands from the processor, the associated drivers for the light sources are located in the pixel machine.
- LED light control from all angles may help to light up the round edges from left, right, top and bottom of the product, and good contrast and resolution pixels are made part of the final frame.
- the intensity of the lighting may be decreased, thereby minimising reflection.
- the lighting angle may be rotated, resulting in change in shadow position and in reflection effect.
- Such a lighting pattern may be employed for identification of different pixels, for example, object pixels, background pixels, shadow pixels.
- lighting may be provided to the sides of an object.
- if lighting is provided from the front for a shiny object, the edges of the object may become mixed with the background.
- the front lighting may then be turned off and side lighting may be provided for images to be generated to capture the edges of the object without saturation.
- lighting may be provided from the left and/or right sides of the object. Subsequently, the images generated with the front lighting and the side lighting may be combined to form the complete object.
- a camera stand may be mounted on the box defining the pixel machine, which may help in the minimisation or elimination of vibration or small differences in the position of detection of pixels in different frames.
- the imaging device may be mounted on an XYZ motorized stand which may not only automatically zoom but adjust its physical location as per product size and/or placement on the turn table.
- the imaging device may be moved up and down, in a linear or curved motion.
- the imaging device may be moved in a curved motion to take a top view image of the object if desired.
- This may mean that the front-middle LED panel 204 may be movable together with the imaging device when the XYZ motorized stand is activated to control positioning of the imaging device.
- the front-middle LED panel 204 may be movable up and down and/or rotated in a curve. Additionally or alternatively, even if the imaging device is moved up and down with XY movement, a curved effect may be achieved using the software application.
- lighting may be provided from any or all angles, the lighting may be turned on in a specific or defined manner to find the desired or best lighting based on pixel resolution and/or saturation levels.
- front, back, left and right views of the object may be automatically captured by rotation of the turn table.
- a user (only) needs to place the object front facing the imaging device.
- the turn table may then rotate and adjust, for example, 5 - 10°, to determine the location where the maximum width of the object is detected. This may help to further eliminate any placement errors by the user.
- the turn table may rotate at any angle, which may be defined in a default setting of the software. For example, the turn table may rotate 90° to capture the left, back and right views of the object for photoshoot.
- using the apparatus and the various methods for imaging, photography and elimination of the background may take less than a minute.
- camera white balance may be adjusted based on RGB color intensity which may help in capturing close to the natural colours of the object(s) or product(s).
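White balance from RGB intensities can be sketched with the common grey-world assumption; the patent does not specify the exact method, so this is only one plausible reading, with illustrative channel means. Each channel is scaled so the channel means match, pulling a colour cast towards neutral.

```python
def grey_world_gains(mean_r, mean_g, mean_b):
    """White-balance gains from average RGB intensities (grey-world).

    Scales each channel so the channel means become equal, which pulls
    a colour cast towards neutral and helps keep natural colours.
    """
    grey = (mean_r + mean_g + mean_b) / 3
    return grey / mean_r, grey / mean_g, grey / mean_b

# An image with a warm cast (strong red, weak blue):
gain_r, gain_g, gain_b = grey_world_gains(120.0, 100.0, 80.0)
# → red attenuated (~0.83), green unchanged (1.0), blue boosted (1.25)
```

Applying the gains per channel equalises the averages, approximating the "close to natural colours" behaviour described above.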
- a software application compatible with Windows® and Linux® operating systems may be provided.
- the object on the turn table may be automatically determined, the camera may be adjusted to its best location, the lighting may be adjusted, object image(s) may be taken or generated, the background may be eliminated, the object may be centred in the middle of the frame, and both raw and compressed versions may be stored, for example, in the "My Pictures" folder in Windows® or on a flash drive connected to the processor or to a network-addressed folder.
- Activating the button 344 triggers the automatic scripts written in the software application, which may include predefined template(s) to get most of product photographs correct.
- an application programming interface (API) platform may be provided, which may be used to write a plugin for a web application supporting single-button operation: with a single press of the button 344, product images in single, quad or 360° form, compressed and optimised for web resolution, may be uploaded directly to e-commerce websites.
- the various methods may include one or more of capturing frames with and without the object, and employing zoom effect.
- background elimination may be carried out based on zoom effect.
- Corresponding frames for capture of the background with and without the object may be performed at corresponding zoom values.
- the zoom ratio of pixels corresponding to the objects is different from the zoom ratio of pixels corresponding to the background (as the objects are closer to the camera), the pixels corresponding to the objects may be determined and separated from those corresponding to the background.
- the various methods and techniques may be used in a studio with bigger objects to shoot, including fashion models, clothing and cars. Background lighting may be varied with known values, and pixels may be marked and determined as belonging to the product or the background so that there is no necessity for manual editing, e.g., to remove the background. It should be appreciated that the various methods for imaging may be implemented, either individually or a combination thereof, in the apparatus for imaging of various embodiments.
Landscapes
- Engineering & Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Studio Devices (AREA)
Abstract
An apparatus for imaging is provided. The apparatus includes a lighting arrangement configured to provide lighting, and a processor, wherein the processor is configured to control an imaging device to generate at least two images, each of the at least two images depicting an object of interest, and the at least two images depicting the object of interest and a background at different distances relative to each other, and wherein the processor is further configured, for a respective pixel of pixels defining an image of the at least two images, to determine, based on the at least two images, a zoom ratio associated with the respective pixel, and to determine, using the zoom ratio determined, the respective pixel as a pixel belonging to the object of interest or the background. A method for imaging is also provided.
Description
APPARATUS AND METHOD FOR IMAGING
Technical Field
Various embodiments relate to an apparatus for imaging and a method for imaging.
Background
Photographing of products normally requires at least a few hours for professional photographers to adjust the lights to strike a balance between product lighting and background colour. In the worst cases, if the product is reflective or multicoloured, and/or some parts of the product have a colour matching or similar to that of the background, the photographer has to make compromises in lighting and in the quality of the product shoot. In almost every case, no matter how skilled and professional the photographer is, and however good the post-production tools may be, the photographer ends up needing to eliminate the background by manual editing, which is bound to cause loss of product detail and resolution.
Summary
The invention is defined in the independent claims. Further embodiments of the invention are defined in the dependent claims.
According to an embodiment, an apparatus for imaging is provided. The apparatus may include a lighting arrangement configured to provide lighting, and a processor, wherein the processor is configured to control an imaging device to generate at least two images, each of the at least two images depicting an object of interest, and the at least two images depicting the object of interest and a background at different distances relative to each other, and wherein the processor is further configured, for a respective pixel of pixels defining an image of the at least two images, to determine, based on the at least two images, a zoom ratio associated with the respective pixel, and to determine, using the zoom ratio determined, the respective pixel as a pixel belonging to the object of interest or a background.
According to an embodiment, an apparatus for imaging is provided. The apparatus may include a processor, and a memory coupled to the processor, the memory having stored therein instructions, which when executed by the processor, cause the processor: to control an imaging device to generate at least two images, each of the at least two images depicting an object of interest, and the at least two images depicting the object of interest and a background at different distances relative to each other, to determine, based on the at least two images and for a respective pixel of pixels defining an image of the at least two images, a zoom ratio associated with the respective pixel, and to determine, using the zoom ratio determined, the respective pixel as a pixel belonging to the object of interest or a background.
According to an embodiment, an apparatus for imaging is provided. The apparatus may include a processor, and a memory coupled to the processor, the memory having stored therein instructions, which when executed by the processor, cause the processor, for at least two images generated, each of the at least two images depicting an object of interest, and the at least two images depicting the object of interest and a background at different distances relative to each other: to determine, based on the at least two images and for a respective pixel of pixels defining an image of the at least two images, a zoom ratio associated with the respective pixel, and to determine, using the zoom ratio determined, the respective pixel as a pixel belonging to the object of interest or a background.
According to an embodiment, a method for imaging is provided. The method may include generating, via an imaging device, at least two images, each of the at least two images depicting an object of interest, and the at least two images depicting the object of interest and a background at different distances relative to each other, determining, based on the at least two images and for a respective pixel of pixels defining an image of the at least two images, a zoom ratio associated with the respective pixel, and determining, using the zoom ratio determined, the respective pixel as a pixel belonging to the object of interest or a background.
According to an embodiment, a method for imaging is provided. The method may include, for a respective pixel of pixels defining an image of at least two images, wherein each of the at least two images depicts an object of interest and wherein the at least two images depict the object of interest and a background at different distances relative to each other, determining, based on the at least two images, a zoom ratio associated with the respective pixel, and determining, using the zoom ratio determined, the respective pixel as a pixel belonging to the object of interest or a background.
According to an embodiment, a computer program or a computer program product is provided. The computer program or computer program product may include instructions which, when executed by a computing device, cause the computing device to carry out a method for imaging as described herein.
Brief Description of the Drawings
In the drawings, like reference characters generally refer to like parts throughout the different views. The drawings are not necessarily to scale, emphasis instead generally being placed upon illustrating the principles of the invention. In the following description, various embodiments of the invention are described with reference to the following drawings, in which:
FIG. 1A shows a schematic block diagram of an apparatus for imaging, according to various embodiments.
FIG. 1B shows a schematic block diagram of an apparatus for imaging, according to various embodiments.
FIG. 1C shows a flow chart illustrating a method for imaging, according to various embodiments.
FIG. 1D shows a flow chart illustrating a method for imaging, according to various embodiments.
FIGS. 2A to 2E show schematic perspective views of a pixel machine of various embodiments from different angles.
FIG. 3 shows a schematic back view of a main controller, according to various embodiments.
FIG. 4 shows a flow chart illustrating a method for imaging, according to various embodiments.
FIG. 5 shows a schematic view illustrating different pixels and their corresponding zoom ratios, according to various embodiments.
Detailed Description
The following detailed description refers to the accompanying drawings that show, by way of illustration, specific details and embodiments in which the invention may be practiced. These
embodiments are described in sufficient detail to enable those skilled in the art to practice the invention. Other embodiments may be utilized and structural, logical, and electrical changes may be made without departing from the scope of the invention. The various embodiments are not necessarily mutually exclusive, as some embodiments can be combined with one or more other embodiments to form new embodiments.
As used herein, the term “and/or” includes any and all combinations of one or more of the associated listed items. For example, A and/or B may include A or B or both A and B.
Various embodiments may provide an imaging apparatus, for example, a product photography apparatus, e.g., an automatic photography equipment for products. The apparatus may minimise the effort and labour associated with photography of products, for example, for the fastest expanding online shopping business. Various embodiments may also provide the corresponding methods for imaging.
One or more of the following may be achieved: (1) detection of pixels belonging to object(s) for automatic background elimination (e.g., background cut); (2) detection of saturated pixel(s) to automatically eliminate reflections; (3) detection of, and elimination or maintenance of product shadows; (4) automatic centering of the object(s); (5) elimination of background and shadows of rotating object(s); (6) providing uniform exposure and colour for all the pixels.
In various embodiments, an object may be captured in frames with and without zoom effect. Then, the zoom ratio of pixels in the frames may be checked. There may be two different zoom ratios, one each for the background and the object, because the object is nearer to the camera. The object may be determined (or identified) and “cut” based on this change.
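As an illustrative sketch of this idea (the zoom-ratio values, tolerance, and grid layout below are hypothetical, not taken from the disclosure), pixels whose zoom ratio noticeably exceeds that of the background may be marked as object pixels:

```python
# Hypothetical per-pixel zoom ratios estimated by comparing the frames
# taken with and without the zoom effect (values are illustrative only).
zoom_ratios = [
    [1.5, 1.5, 1.5, 1.5],
    [1.5, 2.2, 2.3, 1.5],
    [1.5, 2.2, 2.3, 1.5],
    [1.5, 1.5, 1.5, 1.5],
]

# The background zoom ratio is lower because the background is farther
# from the camera; pixels with a noticeably higher ratio are "cut" out
# as object pixels.
background_ratio = 1.5
object_mask = [[r > background_ratio + 0.1 for r in row] for row in zoom_ratios]

object_pixel_count = sum(sum(row) for row in object_mask)
print(object_pixel_count)  # -> 4
```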
FIG. 1A shows a schematic block diagram of an apparatus 100 for imaging, according to various embodiments. The apparatus 100 includes a lighting arrangement 102 configured to provide lighting, and a processor 104, wherein the processor 104 is configured to control an imaging device to generate at least two images, each of the at least two images depicting an object of interest, and the at least two images depicting the object of interest and a background at different distances relative to each other, and wherein the processor 104 is further configured, for a respective pixel of pixels defining an image of the at least two images, to determine, based on the at least two images, a zoom ratio associated with the respective pixel, and to determine, using the zoom ratio determined, the respective pixel as a pixel belonging to the object of interest or the background.
In other words, an apparatus 100 for imaging (and/or for image processing) may be provided, having a lighting arrangement 102 and a processor 104. The apparatus (or
arrangement) 100 may be employed to image or take images of an object of interest, e.g., against a background, and/or of the background in the absence of the object. It should be appreciated that the term “background” may mean the background relative to the object (in the presence of the object) or said background in the absence of the object, or is otherwise to be construed in the context in which the term is used.
The processor 104 may communicate with the lighting arrangement 102, for example, via a channel represented by the line 106. The processor 104 and the lighting arrangement 102 may be electrically coupled to one another, e.g., via a physical cable 106. The processor 104 may send one or more control signals to the lighting arrangement 102.
The lighting arrangement 102 may provide lighting to illuminate the object and/or the background. This may mean that the object and/or the background may be illuminated simultaneously or separately by the lighting. The lighting arrangement 102 may partially or entirely surround the object. As a non-limiting example, the lighting arrangement 102 may be arranged in the form of a closed box environment, and the object may be placed within said environment for imaging purposes. However, it should be appreciated that the various methods and techniques may be used in an open studio, on a stage, or in a film set, or any other suitable environments or settings. The lighting may illuminate the object and/or the background from different directions towards the object and/or the background.
The processor 104 may control an (optical) imaging device (e.g., a (digital) camera capable of taking photographs and/or videos, or a (digital) video recorder) to generate at least two images or a plurality of images (or frames). As non-limiting examples, the plurality of images may mean 10, 20, 50, 100 or more images. The imaging device may be separately provided or integrally provided with the apparatus 100. The processor 104 may control the imaging device to take at least two images or a number of images showing the object of interest in the images. It should be appreciated that two or more or all of the plurality of images may depict the object (e.g., against a background), and/or two or more of the plurality of images may depict the background without the object. An equal number of the plurality of images may depict the background without the object and with the object respectively.
In the context of various embodiments, images may be taken or obtained (directly) as still images (e.g., photographic images), and/or may be images extracted from a moving sequence of consecutive graphics (e.g., a moving picture, motion picture, movie).
In various embodiments, the processor 104 may control the imaging device to generate at least two images, where each of the at least two images may depict an object of interest, with the at least two images depicting the object of interest and a background at different distances relative to each other. As non-limiting examples, during generation of the at least two images, zooming via the optical lenses of the imaging device, and/or moving the object of interest (e.g., closer to or farther from the imaging device) and/or moving the background (e.g., closer to or farther from the imaging device) may be carried out. It should be appreciated that various embodiments may rely on generating images or frames at different distances between the object of interest and the background. For example, the object of interest may be placed on a moving stand, which may then be moved closer to and/or farther from the imaging device to create a “zoom effect”, or an artist or a model may move closer to and/or farther from the imaging device. Such techniques may also be used in a photobooth to eliminate the background.
Of the plurality of pixels (“image pixels”) defining an image depicting the object, the processor 104 may determine, for a (or each) respective pixel of the pixels and based on (or from) the at least two images, a zoom ratio associated with the respective pixel. The pixels defining the image may mean the pixels defining a portion of the image (e.g., pixels defining an area in the image, the area being smaller than the (outer) boundary of the image), or all of the pixels defining the entirety of the image.
From the zoom ratio determined, the processor 104 may determine whether the respective pixel is a pixel belonging to the object (“object pixel”) or the background (relative to the object) (“background pixel”). In this way, object pixels and background pixels may be determined and, thus, differentiated from each other. Accordingly, an object pixel may be distinguished from a non-object pixel, e.g., a background pixel. In the context of various embodiments, the term “object pixel” means a pixel that defines, in an image, the physical part of the object, while the term “background pixel” means a pixel that defines, in an image, the background either in the presence of the object or in the absence of the object.
In various embodiments, for generating the at least two images, the processor 104 may be configured to control the imaging device to generate the at least two images at respective (different) zoom values of the imaging device. For example, one image of the at least two images may be generated at a first zoom value of the imaging device, and the other image of the at least two images may be generated at a second (different) zoom value of the imaging device. The second zoom value may be higher or greater than the first zoom value, meaning that the processor may control the
imaging device so as to zoom in on the object of interest in order to generate the image at the second zoom value. Accordingly, the imaging device may be controlled so that different images depicting the object of interest may be generated at different zoom values of the imaging device. Further, it should be appreciated that the imaging device may be zoomed out between taking the at least two images. In various embodiments, an image of the at least two images may be generated when the imaging device is in its original state (e.g., a non-zoom state), where, for example, the zoom value of the imaging device may be designated as zoom value “0”.
In the context of various embodiments, the zoom values may be defined or expressed in number form or percentage form.
In the context of various embodiments, the zoom values of the imaging device may correspond to the optical zoom of the imaging device.
In the context of various embodiments, it should be appreciated that to “determine” the respective pixel as a pixel belonging to the object or a background may include to “identify” the respective pixel as a pixel belonging to the object or the background. However, this may not necessarily mean to (positively) mark or tag the respective pixel as an object pixel or a background pixel, although this may be the case in various embodiments. Nevertheless, in some embodiments, the identification of the respective pixel may mean the process by which the respective pixel is determined or inferred to be an object pixel or a background pixel based on the result of the determination of the zoom ratio.
As would be appreciated, due to the different distances of the object of interest and the background relative to each other and/or relative to the imaging device, the zoom ratio associated with the object (“object zoom ratio”) and the zoom ratio associated with the background (“background zoom ratio”) are different. As the object is relatively nearer to the imaging device, as compared to the background, the object zoom ratio is greater or larger than the background zoom ratio. Further, it should be appreciated that pixels corresponding to parts of the object, or different objects, that are located at the same plane relative to the imaging device may have at least substantially similar or identical zoom ratio.
In the context of various embodiments, the term “zoom ratio” may mean the degree or amount of change associated with a pixel, e.g., resulting from a change in the zoom value of the imaging device. The change associated with the pixel may be a change relating to a physical dimension of the pixel, for example, size, area, etc. As a non-limiting example, the zoom ratio may provide or define a size change factor, e.g., a size increase factor.
In the context of various embodiments, the zoom ratio may be or may include a (associated) value.
As an illustrative non-limiting example, pixels may be considered to be arranged in a square grid-like manner with rows and columns. Without operating the zoom feature of the imaging device (i.e., the imaging device being in the original state at no zoom) to generate a first image, one pixel, e.g., Pixel A, of the first image, may occupy one square of the grid-like arrangement. When the imaging device is operated to zoom in to generate a second image, the corresponding Pixel A is enlarged in the second image and may occupy more than one square of the grid-like arrangement. Generally, there is no change in the pixel value (e.g., intensity value and/or colour value(s)) of Pixel A in the two images. As the pixel value may remain at least substantially the same, corresponding pixels of Pixel A in the two images may be determined or identified, so that the associated zoom ratio may be determined.
Further, using the above example, when a second image is generated at a particular zoom value of the imaging device that has zoomed in, a respective background pixel may then, for example, occupy four squares of the grid-like arrangement, while a respective object pixel, due to the object being located nearer to the imaging device, may occupy nine squares of the grid-like arrangement. For the above simplistic illustration, for example, the zoom ratio of the object pixel may be determined to be nine, while the zoom ratio of the background pixel may be determined to be four, lower than that for the object pixel. It should be appreciated that this may be a simplistic analysis or determination of the zoom ratios for the purpose of aiding understanding, and that other forms or manners of determination of the zoom ratios may be employed.
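The grid illustration above amounts to treating the zoom ratio as an area change factor: a linear magnification of m makes a pixel occupy m² grid squares. A minimal sketch of this arithmetic (the function name and magnification values are assumptions for illustration only):

```python
def area_zoom_ratio(linear_magnification):
    """Number of grid squares a pixel occupies after zooming, per
    square it occupied in the original (no-zoom) image."""
    return linear_magnification ** 2

# Background magnified 2x linearly -> occupies four squares; the nearer
# object magnified 3x linearly -> occupies nine squares, as in the text.
background_ratio = area_zoom_ratio(2)
object_ratio = area_zoom_ratio(3)
print(background_ratio, object_ratio)  # -> 4 9
```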
Accordingly, due to the difference in the respective zoom ratios for object pixels and background pixels, the zoom ratio of a pixel may be determined, and from the zoom ratio, the pixel may be determined as either an object pixel or a background pixel. As such, the object of interest may therefore be determined and separated from the background.
Further, in various embodiments where the object of interest may be three-dimensional (3D) or may include one or more 3D features, it should be appreciated that the object pixels may not have a constant zoom ratio (as generally is the case for a 2D object or 2D feature). Due to the 3D nature, different parts of the object may be at different distances relative to the imaging device and, therefore, different object pixels may have different zoom ratios and, hence, there may be a range of zoom ratios associated with the object pixels. The difference in the zoom ratios may be small.
In other words, the object pixels corresponding to an object of interest that is 3D may have zoom ratios encompassing a range of values. Different parts of the 3D object may be located at different distances relative to the imaging device, and, therefore, when the 3D object is zoomed in, the pixels corresponding to the different parts may have different zoom ratios, although these are generally relatively close to each other with small differences. As such, the object pixels may be covered by a range of zoom ratios. The above is similarly applicable to background pixels where the background may include one or more 3D features.
In various embodiments, the processor 104 may be further configured to remove or discard pixels determined as belonging to the background. All pixels that are determined as background pixels may be removed or discarded.
In various embodiments, the processor 104 may further be configured to control the imaging device to generate at least two additional images at the respective zoom values of the imaging device, each of the at least two additional images depicting the background without the object of interest. This may mean that the at least two additional images may be generated at the same zoom values as those used for generating the at least two images. As illustrative non-limiting examples, images depicting respectively the background with and without the object of interest may be generated at a first zoom value, and images depicting respectively the background with and without the object of interest may be generated at a second zoom value. The processor 104 may further be configured, for an additional respective pixel of pixels defining an additional image of the at least two additional images, to determine, based on (or from) the at least two additional images, an additional zoom ratio associated with the additional respective pixel. For determining the respective pixel (of pixels defining the image of the at least two images as a pixel belonging to the object or a background), the processor 104 may further be configured to make a comparison between the zoom ratio and the additional zoom ratio, and may determine, based on the comparison, the respective pixel as a pixel belonging to the object or the background.
As the additional images are generated without the object of interest, it should be appreciated that the additional zoom ratio is associated with a background pixel. Accordingly, if a pixel is a background pixel, its associated zoom ratio would be at least substantially similar or comparable to the additional zoom ratio.
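This comparison may be sketched as follows (the function name, example ratios, and tolerance are illustrative assumptions, not part of the disclosed apparatus): a pixel whose zoom ratio is at least substantially similar to the additional (background-only) zoom ratio is taken to be a background pixel.

```python
def classify_pixel(pixel_zoom_ratio, background_zoom_ratio, tol=0.05):
    """Compare a pixel's zoom ratio against the additional zoom ratio
    measured from the object-free reference frames; ratios that match
    within a tolerance indicate a background pixel."""
    if abs(pixel_zoom_ratio - background_zoom_ratio) <= tol:
        return "background"
    return "object"

print(classify_pixel(1.52, 1.50))  # -> background
print(classify_pixel(2.30, 1.50))  # -> object
```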
In various embodiments, for determining the respective pixel (of pixels defining the image of the at least two images as a pixel belonging to the object or a background), the processor 104 may be configured to make a comparison between the zoom ratio and a threshold value, and to
determine, based on the comparison, the respective pixel as a pixel belonging to the object or the background. As a non-limiting example, a pixel with an associated zoom ratio that is determined to be at least substantially similar to the threshold value, or is determined to be larger or greater than the threshold value, may be determined as an object pixel, while a pixel with an associated zoom ratio that is determined to be different from the threshold value, or is determined to be smaller or less than the threshold value, may be determined as a background pixel. The threshold value may, for example, be set or inputted by a user.
In various embodiments, the processor 104 may be configured to determine the respective pixel as a pixel belonging to the object if the zoom ratio is determined to be higher than the threshold value, and to maintain the respective pixel, and, to determine the respective pixel as a pixel belonging to the background if the zoom ratio is determined to be lower than the threshold value, and to discard the respective pixel. As non-limiting examples, the threshold value may be (set) at a level located between one or more zoom ratios estimated or expected for object pixels, and one or more zoom ratios estimated or expected for background pixels. As described herein, the zoom ratio for an object pixel may be different (e.g., higher) than the zoom ratio for a background pixel. As such, by setting the threshold value at a suitable or appropriate level, pixels with determined zoom ratios that are greater than the threshold value may be determined or identified to correspond to object pixels and thus may be maintained, while pixels with determined zoom ratios that are less than the threshold value may be determined or identified to correspond to background pixels and thus may be removed or discarded.
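The threshold rule above can be sketched as follows (pixel labels, ratios, and the threshold level are illustrative assumptions):

```python
def threshold_filter(pixels, zoom_ratios, threshold):
    """Maintain pixels whose zoom ratio is higher than the threshold
    (object pixels); discard those with lower ratios (background)."""
    return [p for p, r in zip(pixels, zoom_ratios) if r > threshold]

# Threshold set between the zoom ratios expected for background pixels
# (~1.4-1.5 here) and those expected for object pixels (~2 and above).
pixels = ["p0", "p1", "p2", "p3"]
ratios = [2.1, 1.4, 2.3, 1.5]
print(threshold_filter(pixels, ratios, 1.8))  # -> ['p0', 'p2']
```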
In various embodiments, for determining the respective pixel (of pixels defining the image of the at least two images as a pixel belonging to the object or a background), the processor 104 may be configured to make a comparison between the zoom ratio and a range of cut-off values, to determine the respective pixel as a pixel belonging to the object if the zoom ratio is determined to be within the range of cut-off values, and to maintain the respective pixel, and, to determine the respective pixel as a pixel belonging to the background if the zoom ratio is determined to be outside of the range of cut-off values, and to discard the respective pixel. The range of cut-off values may be defined or inputted by a user.
As non-limiting examples, the range of cut-off values may include or encompass one or more zoom ratios estimated or expected for object pixels. Respective pixels whose associated zoom ratios fall inside such a range may be determined to be object pixels and be maintained, while respective pixels whose associated zoom ratios fall outside the range may be determined to be background pixels (or non-object pixels) and be discarded.
As a further illustrative example, assume that there are two objects against a background, where a first object is positioned nearest to the imaging device, and a second object is positioned farther from the imaging device, in the space between the first object and the background. If the range of cut-off values is defined such that only the zoom ratios associated with pixels belonging to the first object fall within the range (assuming that the first object is the object of interest), the result would be that pixels belonging to the first object would be maintained, while pixels belonging to the second object and the background would be removed or discarded. Here, the second object may be defined as part of the background. If the range of cut-off values is defined such that only the zoom ratios associated with pixels belonging to the second object fall within the range (assuming that the second object is the object of interest), the result would be that pixels belonging to the second object would be maintained, while pixels belonging to the first object and the background would be removed. If the range of cut-off values is defined such that only the zoom ratios associated with pixels belonging to the first and second objects fall within the range (assuming that the two objects are both objects of interest), the result would be that pixels belonging to the first and second objects would be maintained, while pixels belonging to the background would be removed. It should be appreciated that this may be extended to situations where there may be more than two objects separated by distance from each other, relative to the imaging device.
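The two-object example above may be sketched as a range-based selection (the ratios and cut-off values are illustrative assumptions):

```python
def select_by_zoom_range(zoom_ratios, low, high):
    """Indices of pixels whose zoom ratio falls within the range of
    cut-off values [low, high]; pixels outside the range are discarded."""
    return [i for i, r in enumerate(zoom_ratios) if low <= r <= high]

# Illustrative ratios: first (nearest) object ~3.0, second object ~2.0,
# background ~1.4.
ratios = [3.0, 3.1, 2.0, 2.1, 1.4, 1.4]

print(select_by_zoom_range(ratios, 2.8, 3.3))  # first object only -> [0, 1]
print(select_by_zoom_range(ratios, 1.8, 3.3))  # both objects -> [0, 1, 2, 3]
```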
In various embodiments, the processor 104 may further be configured to generate a resultant
(or final) image of the object based on (or from or using) pixels determined as belonging to the object.
In various embodiments, the processor 104 may further be configured to label (or mark) the respective pixel with the zoom ratio, for example, for subsequent processing purposes.
In the context of various embodiments, the term “object” may refer to a thing, a product, a person, or a subject to be imaged. Further, it should be appreciated that the term “object” may include a living thing (e.g., person, animal, plant, etc.) and/or a non-living thing (e.g., a product, item, inanimate body or object, etc.).
In various embodiments, the processor 104 may control the lighting arrangement 102 to vary at least one parameter of the lighting during or for generation of images. For example, the at least one parameter may be varied in between generation of two images, or of two immediately adjacent images. In the context of various embodiments, the at least one parameter of the lighting may
include any one of or any combination of a lighting intensity, a lighting colour, or a lighting direction.
In various embodiments, the lighting may include an object lighting to illuminate the object. The processor 104 may be configured to control the lighting arrangement 102 to vary at least one object parameter of the object lighting during or for generation of images. Additionally, background lighting may be provided to illuminate the background. In the context of various embodiments, the term “background lighting” may mean the lighting that illuminates the background, and/or the space in between the background and the object of interest, without illuminating the object. This means that the object of interest to be imaged is not illuminated by the background lighting, i.e., lighting provided from one or more light sources, to be employed as the background lighting to illuminate the background, does not illuminate the object. The background lighting may be constant (i.e., not variable) or at least one background parameter of the background lighting may be varied. The at least one object parameter and/or the at least one background parameter may be varied simultaneously or may be varied by the same factor or value. As a non-limiting example, the at least one object parameter and/or the at least one background parameter may correspond to intensity and/or colour. Further, in various embodiments, for the purpose of minimizing or avoiding the effect of the object being illuminated by scattered light from the background, the imaging device (e.g., camera) may be adjusted to a low(er) sensitivity so that light scattered from the background, which may potentially illuminate the object, is not captured or observable in one or more images of the plurality of images generated.
The lighting arrangement 102 may include a plurality of light sources to provide the lighting. Each light source may be or may include a light emitting diode or device (LED). In one non-limiting example, the plurality of light sources may include a combination of red (R) LEDs, green (G) LEDs, blue (B) LEDs, and white (W) LEDs. The use of RGBW LEDs may provide a pure white colour. Nevertheless, it should be appreciated that any other types of light sources may be used, as long as the light sources may be controlled to vary at least one parameter (e.g., light intensity) of the lighting provided by the light sources. Each light source (e.g., LED) may be individually addressable. This may mean that each light source may be independently switched on/off, and controlled to vary the at least one parameter of the lighting.
The lighting arrangement 102 may include a plurality of (separate) lighting panels to provide lighting from a plurality of directions to the object and/or background. Each lighting panel may
include a plurality of light sources, for example, LEDs, e.g., a combination of red LEDs, green LEDs, blue LEDs, and white LEDs.
The lighting arrangement 102 may include at least one curved lighting panel configured to provide a focused lighting towards the object. The at least one curved lighting panel may include a plurality of light sources, e.g., LEDs. Four curved lighting panels may be provided.
The lighting arrangement 102 may be or may form part of a pixel machine. A driver arrangement may be provided (e.g., in the pixel machine) for driving the lighting arrangement 102. The driver arrangement may have a plurality of drivers for driving the associated light sources of the lighting arrangement 102.
The processor 104 may be provided with one or more communication interfaces, e.g., including at least one interface for communication with the lighting arrangement 102 or the pixel machine. The processor 104 may be or may form part of a (main) controller. The (main) controller may further include a display and other peripheral devices, e.g., a keyboard, a mouse, etc.
The pixel machine and the (main) controller may be comprised in the apparatus 100.
The processor 104 may be further configured to control relative movement between the imaging device and the object, and the processor 104 may be further configured to control the imaging device to generate images from a plurality of directions relative to the object. For example, there may be rotational movement between the imaging device and the object. This may allow 360° generation of images of the object.
The imaging device may be placed on a support structure. The support structure may be a movable support structure (e.g., an XYZ motorized stand), and the processor 104 may control movement (e.g., rotational movement) of the movable support structure, relative to the object.
The apparatus 100 may include a turn table to support the object, and the processor 104 may control movement (e.g., rotational and/or linear movement) of the turn table, relative to the imaging device. The object may be placed on the turn table to be rotated. The turn table may be at least substantially transparent or transmissive to light.
The apparatus 100 may further include an actuator configured to communicate with the processor 104, wherein, in response to a single activation of the actuator, operation of the processor 104 may be (fully) automated. The actuator may be a push actuator, e.g., a push button.
The processor 104 may further be configured to identify a flicker effect in the images generated and to remove the flicker effect. For example, when the imaging device is operated to take videos, a flicker effect may be captured in one or more images due to a frequency difference between the respective operations of the imaging device and the lighting arrangement 102. Flicker in the images may be observed continuously. Rather than controlling the lighting to minimise the flicker effect, this may be achieved in the processing of the images. From the images generated, flicker which is regular and similar may be identified so that the flicker may be eventually removed.
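The regular, repeating nature of the flicker lends itself to a simple per-phase correction. The following is a minimal illustrative sketch, not the patent's implementation: the function name, the use of per-frame mean brightness, and the assumption that the flicker period (in frames) is known are all assumptions made for the example.

```python
def remove_flicker(frame_means, period):
    """Remove a regular, periodic flicker from per-frame brightness.

    `frame_means` is the mean brightness of each frame; flicker that
    repeats with a known `period` is estimated as the per-phase
    deviation from the overall mean and subtracted out.
    """
    overall = sum(frame_means) / len(frame_means)
    # Group frames by their phase within the flicker period.
    phases = [[] for _ in range(period)]
    for i, m in enumerate(frame_means):
        phases[i % period].append(m)
    # Per-phase offset from the overall mean = the flicker component.
    offsets = [sum(p) / len(p) - overall for p in phases]
    return [m - offsets[i % period] for i, m in enumerate(frame_means)]
```

For example, an alternating brightness of 10, 20, 10, 20 with period 2 is flattened to a constant 15, removing the regular flicker while keeping the average brightness.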
FIG. 1B shows a schematic block diagram of an apparatus 100b for imaging (and/or for image processing), according to various embodiments. The apparatus 100b includes a processor 104b, and a memory 105 coupled to the processor 104b. The memory 105 and the processor 104b may be coupled to each other (as represented by the line 107), e.g., physically coupled and/or electrically coupled.
In various embodiments, the memory 105 has stored therein instructions, which when executed by the processor 104b, cause the processor 104b, for at least two images generated, each of the at least two images depicting an object of interest, and the at least two images depicting the object of interest and a background at different distances relative to each other, to determine, based on the at least two images and for a respective pixel of pixels defining an image of the at least two images, a zoom ratio associated with the respective pixel, and to determine, using the zoom ratio determined, the respective pixel as a pixel belonging to the object of interest or a background. The at least two images may be generated at respective zoom values of an imaging device. The at least two images may be transferred to and/or stored in the memory 105 or another memory.
In various embodiments, the memory 105 has stored therein instructions, which when executed by the processor 104b, cause the processor 104b to control an imaging device to generate at least two images, each of the at least two images depicting an object of interest, and the at least two images depicting the object of interest and a background at different distances relative to each other, to determine, based on (or from) the at least two images and for a respective pixel of pixels defining an image of the at least two images, a zoom ratio associated with the respective pixel, and to determine, using the zoom ratio determined, the respective pixel as a pixel belonging to the object of interest or the background. The at least two images may be transferred to and/or stored in the memory 105 or another memory. In various embodiments, for generating the at least two images, the processor 104b may control the imaging device to generate the at least two images at respective zoom values of the imaging device.
It should be appreciated that description in the context of the apparatus 100 may correspondingly be applicable to the apparatus 100b, and description in the context of the processor 104 may correspondingly be applicable to the processor 104b.
In various embodiments, the processor 104b may execute the instructions to further cause the processor 104b to control the imaging device to generate at least two additional images at the respective zoom values of the imaging device, each of the at least two additional images depicting the background without the object of interest, to determine, based on (or from) the at least two additional images and for an additional respective pixel of pixels defining an additional image of the at least two additional images, an additional zoom ratio associated with the additional respective pixel, and, for determining the respective pixel (of pixels defining the image of the at least two images) as a pixel belonging to the object or the background, to make a comparison between the zoom ratio and the additional zoom ratio, and, to determine, based on the comparison, the respective pixel as a pixel belonging to the object or the background.
In various embodiments, the processor 104b may execute the instructions to further cause the processor 104b, for determining the respective pixel (of pixels defining the image of the at least two images) as a pixel belonging to the object or the background, to make a comparison between the zoom ratio and a threshold value, and, to determine, based on the comparison, the respective pixel as a pixel belonging to the object or the background.
In various embodiments, the processor 104b may execute the instructions to cause the processor 104b to determine the respective pixel as a pixel belonging to the object if the zoom ratio is determined to be higher than the threshold value, and to maintain the respective pixel, and to determine the respective pixel as a pixel belonging to the background if the zoom ratio is determined to be lower than the threshold value, and to discard the respective pixel.
In various embodiments, the processor 104b may execute the instructions to cause the processor 104b, for determining the respective pixel (of pixels defining the image of the at least two images) as a pixel belonging to the object or the background, to make a comparison between the zoom ratio and a range of cut-off values, to determine the respective pixel as a pixel belonging to the object if the zoom ratio is determined to be within the range of cut-off values, and to maintain the respective pixel, and to determine the respective pixel as a pixel belonging to the background if the zoom ratio is determined to be outside of the range of cut-off values, and to discard the respective pixel.
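The two comparisons described above (against a single threshold, or against a range of cut-off values) may be sketched as follows. This is an illustrative helper, not the patent's implementation; the function name and parameters are assumptions made for the example.

```python
def classify_pixel(zoom_ratio, threshold=None, cutoff_range=None):
    """Classify a pixel as 'object' or 'background' from its zoom ratio.

    Either a single `threshold` or a (low, high) `cutoff_range` may be
    supplied, mirroring the two comparisons described above: ratios
    above the threshold (or inside the range) belong to the object,
    which is nearer to the imaging device; the rest to the background.
    """
    if cutoff_range is not None:
        low, high = cutoff_range
        return "object" if low <= zoom_ratio <= high else "background"
    return "object" if zoom_ratio > threshold else "background"
```

Pixels classified as "background" would then be discarded, and the remaining object pixels used to form the resultant image.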
In various embodiments, the processor 104b may execute the instructions to further cause the processor 104b to generate a resultant image of the object based on pixels determined as belonging to the object.
In various embodiments, the processor 104b may execute the instructions to further cause the processor 104b to discard pixels determined as belonging to the background. All pixels that are determined as background pixels may be removed or discarded.
In various embodiments, the processor 104b may execute the instructions to further cause the processor 104b to label (or mark) the respective pixel with the zoom ratio.
FIG. 1C shows a flow chart 120 illustrating a method for imaging (and/or for image processing), according to various embodiments.
At 122, at least two images are generated via (or using) an imaging device, each of the at least two images depicting an object of interest, and the at least two images depicting the object of interest and a background at different distances relative to each other.
At 124, based on (or from) the at least two images and for a respective pixel of pixels defining an image of the at least two images, a zoom ratio associated with the respective pixel is determined.
At 126, using the zoom ratio determined, the respective pixel is determined as a pixel belonging to the object of interest or a background.
In various embodiments, at 122, the at least two images may be generated at respective (different) zoom values of the imaging device.
In various embodiments, the method may further include generating, via the imaging device, at least two additional images at the respective zoom values of the imaging device, each of the at least two additional images depicting the background without the object of interest, determining, based on (or from) the at least two additional images and for an additional respective pixel of pixels defining an additional image of the at least two additional images, an additional zoom ratio associated with the additional respective pixel, and, at 126, making a comparison between the zoom ratio and the additional zoom ratio, and determining, based on the comparison, the respective pixel as a pixel belonging to the object or the background.
In various embodiments, at 126, the method may include making a comparison between the zoom ratio and a threshold value, and determining, based on the comparison, the respective pixel as a pixel belonging to the object or the background.
In various embodiments, at 126, the method may include determining the respective pixel as a pixel belonging to the object if the zoom ratio is determined to be higher than the threshold value, and maintaining the respective pixel, and, determining the respective pixel as a pixel belonging to
the background if the zoom ratio is determined to be lower than the threshold value, and discarding the respective pixel.
In various embodiments, at 126, the method may include making a comparison between the zoom ratio and a range of cut-off values, determining the respective pixel as a pixel belonging to the object if the zoom ratio is determined to be within the range of cut-off values, and maintaining the respective pixel, and, determining the respective pixel as a pixel belonging to the background if the zoom ratio is determined to be outside of the range of cut-off values, and discarding the respective pixel.
In various embodiments, a resultant image of the object may be generated based on (or from or using) pixels identified as belonging to the object.
In various embodiments, the method may further include removing or discarding pixels determined as belonging to the background. All pixels that are determined as background pixels may be removed or discarded.
In various embodiments, the method may further include labeling (or marking) the respective pixel with the zoom ratio.
In various embodiments, the method may further include identifying a flicker effect in the images generated, and removing the flicker effect.
The method may further include providing lighting. In various embodiments, at least one parameter of the lighting provided to illuminate the object and/or background (during or for generation of images) may be varied.
FIG. 1D shows a flow chart 130 illustrating a method for imaging (and/or for image processing), according to various embodiments. At 132, for a respective pixel of pixels defining an image of at least two images, wherein each of the at least two images depicts an object of interest and wherein the at least two images depict the object of interest and a background at different distances relative to each other, a zoom ratio associated with the respective pixel is determined based on the at least two images. At 134, using the zoom ratio determined, the respective pixel is determined as a pixel belonging to the object of interest or a background. The at least two images may be generated at respective zoom values of an imaging device. The method may include providing the lighting. It should be appreciated that description in relation to the method described in the context of the flow chart 120 may correspondingly be applicable in relation to the method described in the context of the flow chart 130, and vice versa.
It should be appreciated that description in the context of the apparatus 100, lOOb may correspondingly be applicable in relation to the method described in the context of the flow chart 120, 130 and vice versa.
Various embodiments may also provide a computer program or a computer program product, which may include instructions which, when executed by a computing device, cause the computing device to carry out a method for imaging as described herein.
Various embodiments may also provide a computer program or a computer program product, which may include instructions which, when executed by a computing device, cause the computing device to generate, via an imaging device, at least two images at respective zoom values of the imaging device, the at least two images depicting an object of interest, to determine, based on (or from) the at least two images and for a respective pixel of pixels defining an image of the at least two images, a zoom ratio associated with the respective pixel, and to determine, using the zoom ratio determined, the respective pixel as a pixel belonging to the object or a background.
The apparatus of various embodiments may include two parts: a pixel machine and a main controller, as will be described further below with reference to FIGS. 2A to 2E and 3.
FIGS. 2A to 2E show schematic perspective views of a pixel machine 201 of various embodiments. It should be appreciated that while one or more features or elements of the pixel machine 201 may not be shown in one or more of FIGS. 2A to 2E for clarity and easier understanding purposes, and/or to illustrate an internal environment of the pixel machine 201, such features/elements may nevertheless be part of the pixel machine 201 as may be apparent from FIGS. 2A to 2E. Further, while LEDs are described in the context of the pixel machine 201, it should be appreciated that other types of light sources may be employed, as long as at least one parameter (e.g., light intensity) of the lighting provided may be variable. Nevertheless, LED lighting is preferable due to its characteristics of low power consumption, speed of control and long life.
The pixel machine 201 may include a front-left LED panel 202, a front-middle LED panel 204, a front-right LED panel 206, a back side LED panel 207, a left side LED panel 214, a right side LED panel 216, four (key light) LED panels 220 which may be curved, a top LED panel 222, a top-back LED panel 224, a top-left LED panel 225, a top-right LED panel 226, a top-front (key light) LED panel 227, and a bottom LED panel 228, where one or more of these LED panels may define a lighting arrangement (e.g., 102, FIG. 1A). The lighting arrangement may provide omnidirectional lighting, where the intensity of the lighting may be changed.
Each of the LED panels 202, 204, 206, 207, 214, 216, 220, 222, 224, 225, 226, 227, 228 may include a plurality of LEDs (illustrated as squares in the panels and represented as 229 for some LEDs). Each LED panel 202, 204, 206, 207, 214, 216, 220, 222, 224, 225, 226, 227, 228 may include a combination of red (R) LEDs, green (G) LEDs, blue (B) LEDs, and white (W) LEDs (RGBW LEDs), which may provide pure white colour, as producing white from RGB in a closed box may be challenging. Each LED panel may be individually or independently controlled. Each LED 229 may be individually or independently addressable.
The lighting arrangement of the pixel machine 201 may provide three types of lights: key light, fill light and back light, thereby providing three-point lighting. The lighting arrangement may also provide background lighting. The four LED panels 220 in the corner, the top-back LED panel 224, the top-left LED panel 225, the top-right LED panel 226, and the top-front LED panel 227 may provide key light. The left side LED panel 214 and the right side LED panel 216 may provide fill light. The back side LED panel 207 may provide back light.
An imaging device (e.g., camera) 210, which may be supported on a support structure (e.g., camera stand) 212, may be provided for imaging through an aperture 208 defined in the front-middle LED panel 204. The imaging device 210 may have a zoom feature, meaning that the imaging device 210 is capable of zooming-in and zooming-out operations for generation of images. A turn table 231 may be provided to support one or more (sample) objects (e.g., represented as 230) that are to be imaged. A stepper motor 232 may be provided for rotation of the object(s) 230 or turn table 231. The lighting arrangement of the pixel machine 201 may provide lighting to light up an object (or product) 230 at least substantially equally from all angles so one or more or all features of the object(s) 230 may have good resolution, contrast and colours.
The four side LED panels 220 (curved or round shape) employ more LEDs to provide sharp and crisp contrast of the object(s) 230. The panels 220 may also help to produce useful shadows and shining areas of the object(s) 230. The curved shape angle of the LED panels 220 may give a more focused light to the object(s) 230. The panels 220 may also be used to generate different light effects, e.g., highlights and long and softer shadows, Chiaroscuro effect, etc., for photography.
The key light provided may be angled such that it may light up any part of an object 230 with control of the intensity of the key light. Each LED 229 may be addressed individually. Such a technique may allow (full) control over object lighting so that an optimal lighting condition or environment suitable for product photography may be determined. Further, in various embodiments, as the object height and length may be detected using a computer application, the optimal or best
lighting condition or environment may be determined or suggested automatically with minimal or without any saturation and/or dark spots.
An interface 234 may be provided on the pixel machine 201, for example, for cable connection to the processor to be described below with reference to FIG. 3.
FIG. 3 shows a schematic back view of a main controller 340, according to various embodiments. The main controller 340 may include a processor 342, for example, an industrial PC with embedded LEDs and motor controllers. The main controller 340 may further include a display device (e.g., a touch-screen monitor) 352, a keyboard 354 and a mouse 356.
An actuator, for example, a single capture button 344, may be provided (integrated) with the processor 342. A connector 348 may be provided for connection to a network, e.g., a local area network (LAN). A power connector or socket 350 may also be provided.
An interface (or link connector) 346 may be provided on the processor 342 for communication with the pixel machine 201 (FIGS. 2A to 2E), for example, via the interface 234 (FIG. 2A). In this way, signals may be communicated from the processor 342 to the pixel machine 201, for example, to control movement and operation of the imaging device 210, to vary the parameter of the lighting provided by the lighting arrangement, etc. A cable may be connected between the interfaces 234, 346, e.g., a multicore cable which may carry power, and control signals for LEDs 229, motor(s) (e.g., 232) and the imaging device 210 of the pixel machine 201.
The method for imaging will now be described by way of the following non-limiting examples, and with reference to FIG. 4.
For various methods, elimination of the background may be achieved based on zoom. As the imaging device (e.g., 210) may be controlled (e.g., via the processor 342, for example, via a software application), the background may be captured in various frames at different zoom values. Further, an object (or product) may be positioned against the background and frames (or images) may be generated at the same zoom values. Using the frames generated, pixels belonging to the background and pixels belonging to the object may be separated as the object is closer to the imaging device and its zoom ratio is different to that of the background. The zoom ratio or factor for the product, which is near the imaging device, is more than that for the background.
FIG. 4 shows a flow chart 490 illustrating a method for imaging, as an example to determine or identify object pixels and background pixels based on zoom effect. At 491, a product may be placed in the apparatus (e.g., 100) of various embodiments (e.g., in the pixel machine 201). Lighting may be provided to illuminate the object. The background may also be illuminated by the lighting
provided. Command is sent to the imaging device (e.g., 210) to generate images. The imaging device (e.g., camera) may be controlled to generate images depicting the object at different zoom values, which may be set in a software application. The images may include the background.
As a non-limiting example, the light intensity and/or colour emitted from the associated LEDs or LED panels may be provided and/or varied under the control of the processor 342 (FIG. 3) which may in turn take command via or from a software application, e.g., based on user input. It should be appreciated that key light may be provided on the object.
As a further non-limiting example, the button 344 (FIG. 3) may be actuated/pressed to perform the product imaging or photography. If the button 344 is pressed for more than 5 seconds, 360° photography may be carried out. Actuation of the button 344 may result in automated operations.
At 492, images (or frames) may be generated depicting the background without the product at the same zoom values as for the images generated at 491. Corresponding frames (e.g., at the same zoom value) with and without the product may be compared to each other.
At 492 or thereafter, zoom ratios of pixels may be determined. The zoom ratios may be determined using the frames captured at 491, or from frames captured at 491 and 492.
As illustrated at 493, the zoom ratio for pixels defining the product is different to that for the pixels defining the background. At 494, for a respective zoom ratio determined, a comparison is made on the condition that if the zoom ratio is not equal to (represented as "!=") the constant background, the pixel associated with the respective zoom ratio is determined to be an object pixel at 495; otherwise, the pixel is determined to be a background pixel at 496. As the background is farther from the imaging device than the object, background pixels may, generally, have a substantially similar zoom ratio (or substantially constant zoom ratio), hence, the condition "constant background".
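The "constant background" test may be sketched as follows. This is an illustrative helper, not the patent's implementation; the dictionary representation of per-pixel zoom ratios and the tolerance parameter are assumptions, since in practice the background zoom ratio will only be approximately constant.

```python
def separate_pixels(ratios, background_ratio, tol=0.05):
    """Split pixel coordinates into object and background lists.

    `ratios` maps pixel coordinates to their measured zoom ratio.
    Pixels whose ratio matches the (roughly constant) background
    ratio within `tol` are treated as background; the rest, whose
    ratio differs ("!=" constant background), as object pixels.
    """
    obj, bg = [], []
    for coord, r in ratios.items():
        if abs(r - background_ratio) <= tol:
            bg.append(coord)
        else:
            obj.append(coord)
    return obj, bg
```

The background pixels returned this way may then be removed or discarded, leaving only the object pixels for the resultant image.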
In further details, the zoom ratio of the pixels in the images or frames may be determined. There are two different zoom ratios, one each for the background and the object, because the object is nearer to the imaging device. Pixels belonging to the object may be determined, and the object may then be "cut".
As a non-limiting example, a product may be 20 mm (X) and 40 mm (Y), where the LED spacing in the background may be 5 mm (X) and 5 mm (Y). For the first frame, the camera may be zoomed to 10% zoom. The size increase factor (or zoom ratio) for the product near the camera is
more than the background. This factor helps in determining which pixels belong to the background and which pixels belong to the product.
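The patent does not specify the optical model behind the differing size increase factors. One simple geometry under which nearer features enlarge more is a pinhole-style model in which zooming effectively brings the viewpoint closer to the scene; under that purely illustrative assumption, a feature at distance d scales by d / (d - t) after moving t closer.

```python
def apparent_scale(distance_mm, travel_mm):
    """Scale factor of a feature when the viewpoint moves `travel_mm` closer.

    Illustrative pinhole-style model (an assumption, not the patent's
    optics): apparent size is proportional to 1/distance, so a nearby
    product enlarges by a larger factor than the distant background.
    """
    return distance_mm / (distance_mm - travel_mm)

# A product 200 mm from the camera enlarges more than a background
# 800 mm away, for the same 50 mm of effective travel (values made up).
product_scale = apparent_scale(200, 50)      # 200/150
background_scale = apparent_scale(800, 50)   # 800/750
```

Whatever the exact optics, it is this difference in scale factors that lets the pixels be attributed to the product or the background.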
To aid understanding, FIG. 5 shows a schematic view illustrating different pixels and their corresponding zoom ratios, according to various embodiments. As a non-limiting example, there may be two objects to be imaged against a background, with the two objects being arranged at different distances relative to the imaging device that is used to capture the images.
At a first zoom value, z1, of the imaging device, a first frame, "Frame A" 560a, may be captured or generated. It should be appreciated that the first zoom value, z1, may be any zoom value (i.e., corresponding to any "zoom" state of the imaging device), including, for example, z1 = 0, where the imaging device is in the original state without zooming in into the objects. Frame A 560a that is generated shows a first object 561a and a second object 562a against a background 564a. The second object 562a is arranged further relative to the imaging device as compared to the first object 561a, meaning that the second object 562a is arranged in the space between the first object 561a and the background 564a. To enable better clarity and understanding of zoom ratios, the second object 562a is sized appropriately relative to the first object 561a so that the first object 561a and the second object 562a as captured in Frame A 560a are of at least substantially similar size. For understanding purposes, a pixel 565a is defined for the first object 561a, a pixel 566a is defined for the second object 562a, and a pixel 567a is defined for the background 564a. The pixels 565a, 566a, 567a in Frame A 560a may be of at least substantially similar size.
The imaging device may then be operated to a second zoom value, z2, where z2 > z1. A second frame, "Frame B" 560b, may be captured or generated, showing an enlarged first object 561b and an enlarged second object 562b against an enlarged background 564b. The pixels 565a, 566a, 567a are correspondingly enlarged to the pixels 565b, 566b, 567b. The right side of FIG. 5 shows enlarged views of the pixels 565a, 566a, 567a, 565b, 566b, 567b to illustrate the change of the pixels 565a, 566a, 567a to the pixels 565b, 566b, 567b, with an increase in the respective sizes. It should be appreciated that the pixel value (e.g., intensity value and/or colour value) is at least substantially similar or does not change going from the pixels 565a, 566a, 567a to the respective pixels 565b, 566b, 567b.
As illustrated in FIG. 5, as a result of the difference in the relative distances of the first object 561a, the second object 562a, and the background 564a to the imaging device, the size increase, and therefore, the respective zoom ratios, for the different pixels 565a, 566a, 567a of Frame A 560a being enlarged to the pixels 565b, 566b, 567b of Frame B 560b, may be different to each other. For example, for the first object pixels 565a, 565b, the associated zoom ratio may be m1, for the second object pixels 566a, 566b, the associated zoom ratio may be m2, and, for the background pixels 567a, 567b, the associated zoom ratio may be m3, where m1 > m2 > m3.
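Since zoom ratios order by distance (nearer features enlarge more), sorting pixels by their measured ratio recovers their relative depth ordering. A toy sketch, with made-up numeric values standing in for the three pixel groups of FIG. 5:

```python
# Illustrative zoom ratios for the three pixel groups of FIG. 5;
# the numeric values are invented for the example.
ratios = {"first_object": 1.40, "second_object": 1.25, "background": 1.05}

# Highest ratio first: the feature nearest to the imaging device first.
nearest_first = sorted(ratios, key=ratios.get, reverse=True)
```

Here `nearest_first` lists the first object, then the second object, then the background, matching their arrangement in front of the imaging device.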
In the context of the apparatus and methods for imaging disclosed herein, around 100 frames or images at different zoom values may be generated, for the purpose of differentiating the background from the object/product. The images may be generated both with and without the object. For example, a pair of images, respectively with and without the object, may be generated at the same zoom value. Subsequently, corresponding images with and without the product may be compared to each other, and the zoom ratio throughout the frames or in all the frames may be determined. The respective zoom ratios for the background pixels and for the object pixels are different, and, in this way, object pixels and background pixels may be differentiated. The background may be determined and subsequently removed.
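The pairwise comparison of corresponding frames with and without the object can be sketched as building a background mask over the frame pairs. This is an illustrative helper on plain nested lists of grayscale values, not the patent's implementation; the tolerance parameter, which absorbs sensor noise, is an assumption.

```python
def background_mask(frames_with, frames_without, tol=3):
    """Mark pixels that never differ between paired frames as background.

    Each pair of frames was captured at the same zoom value, with and
    without the object; a pixel that stays (near-)identical in every
    pair never showed the object and is kept as background.
    """
    h, w = len(frames_with[0]), len(frames_with[0][0])
    mask = [[True] * w for _ in range(h)]  # True = background pixel
    for fw, fo in zip(frames_with, frames_without):
        for y in range(h):
            for x in range(w):
                if abs(fw[y][x] - fo[y][x]) > tol:
                    mask[y][x] = False  # the object appeared here
    return mask
```

Pixels left marked as background across all pairs may then be removed, leaving only the object pixels for the resultant image.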
As described, it is possible to perform the method without the object in the pixel machine (e.g., 201) and to determine the zoom ratio by varying the zoom values. After that, the product may be placed on a turn table (e.g., 231), the same zoom values may be run, and both the frames with the product and without the product may be used for comparison to find which pixel belongs to the object and which pixel belongs to the background, so as to eliminate the background.
Nevertheless, it should be appreciated that images without the product need not be generated. Comparison of pixels may therefore be performed using frames with and without the object, or using frames with the object only. Both techniques work, as the zoom ratios corresponding to the object and the background are different.
It has been found that good results may be obtained even where prior reference without the product on the table is not available, as comparison on the basis of frames having the object with a change in a zoom value may assist in determination or identification of the object pixels and background pixels, for the purposes of removing the background pixels. Such a technique makes it a faster and more reliable process.
The various methods for imaging may be used for 360° photography of a product by rotation, where each frame may have clear background elimination.
In the context of the apparatus and various methods for imaging, as pixels belonging to the object may be detected, it may help a user to place the object in the centre of the turn table to have a better result in 360° photography. To achieve this, the object may be placed anywhere at the turn table and rotated, and the object pixels may be determined or identified. The image that is desired or
best to put in front of the imaging device may be identified, and such image with the product may be shown on a display device in half tone together with a half tone of the live video. This provides a live view which helps the user to match the location of the object to the image on screen by overlaying both images. As a non-limiting example, the product may first be detected where it is in the frame. The centre or middle of the product may be determined by dividing the X and Y values by 2. A centre line of the product may be drawn. The centre line may be thick, as all the middle pixels may not lie on the same X and Y lines, so the width (threshold of centre) of the product may increase, but in no way may it have a value outside of the product boundary. The difference between the current location and where the product should be if placed in the centre of the turn table may be determined, and a half tone image may be shown to the user who may, where necessary, act to move the product to the centre of the turn table to obtain a good 360° rotation view.
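The centring computation described above (dividing the product's X and Y extents by 2 and comparing against the turn-table centre) may be sketched as follows; the function name and the coordinate convention are illustrative assumptions.

```python
def centering_offset(xs, ys, table_centre):
    """Offset needed to move the detected product to the turn-table centre.

    `xs`/`ys` are the image coordinates of pixels detected as belonging
    to the product; its centre is taken as the midpoint of the bounding
    box (the X and Y extents divided by 2, as described above).
    """
    cx = (min(xs) + max(xs)) / 2
    cy = (min(ys) + max(ys)) / 2
    tx, ty = table_centre
    return tx - cx, ty - cy
```

For instance, a product spanning x = 10..30 and y = 20..60 against a table centre at (25, 45) needs to be shifted by (+5, +5), and that offset is what the half-tone overlay helps the user to close.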
In the context of the apparatus and various methods for imaging, each light source (e.g., LED) may be controlled through a touch screen monitor or by one or more mouse clicks, which helps the user to create different lighting effects. LEDs may be driven by a DC (direct current) controller to minimise or eliminate the lighting non-sync locking effect. The use of a DC controller may eliminate the flicker caused by low-cost LED drivers which, to save cost, drive the LEDs directly from the 50/60 Hz AC line. Each light source (e.g., LED) may be addressed and controlled independently. While the light sources (e.g., LEDs) receive commands from the processor, the associated drivers for the light sources are located in the pixel machine.
Capturing the round edges of products in good resolution is a challenge. In the context of the apparatus and various methods for imaging, LED light control from all angles may help to light up the round edges from the left, right, top and bottom of the product, so that pixels with good contrast and resolution are made part of the final frame.
As the user has flexible control of the lighting from any or all angles, coupled with the possibility of rotating the object(s), various lighting effects may be created in various embodiments.
In the context of the various methods for imaging, the intensity of the lighting may be decreased, thereby minimising reflection. The lighting angle may be rotated, resulting in a change in the shadow position and in the reflection effect. Such a lighting pattern may be employed for the identification of different pixels, for example object pixels, background pixels and shadow pixels.
In the context of the various methods for imaging, lighting may be provided to the sides of an object. When lighting is provided from the front of a shiny object, the edges of the object may become mixed with the background. The front lighting may then be turned off and side lighting may be provided, so that images may be generated which capture the edges of the object without saturation. For example, lighting may be provided from the left and/or right sides of the object. Subsequently, the images generated with the front lighting and the side lighting may be combined to form the complete object.
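A minimal sketch of combining the front-lit and side-lit frames per pixel. The saturation cut-off of 250 and the simple fallback rule are assumptions for illustration, not values taken from the source.

```python
SATURATED = 250  # assumed saturation cut-off, not from the source

def combine_exposures(front, side):
    """Per pixel, keep the front-lit value unless it is blown out,
    in which case fall back to the side-lit value (edge detail)."""
    return [[s if f >= SATURATED else f for f, s in zip(fr, sr)]
            for fr, sr in zip(front, side)]

# Toy 2x2 grayscale frames: the front-lit edges are saturated.
front = [[255, 120], [251, 80]]
side  = [[180, 115], [175, 78]]
print(combine_exposures(front, side))  # [[180, 120], [175, 80]]
```

In practice the combination would run over full RGB frames, but the per-pixel selection principle is the same.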
In the context of the various methods for imaging, in order to have proper details and proper pixel colours, it may be possible to zoom in on the object parts and identify the pixel colours as details, and then, when in a “zoom out” condition, a picture or image may be formed based on the pixel values obtained in the “zoom in” condition. In this way, even when the object is viewed from far away, the details and true colours are still there at a higher resolution.
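The idea of forming the "zoom out" picture from pixel values obtained in the "zoom in" condition can be sketched as a block-average downsample of a zoomed-in tile. The averaging rule and tile layout here are illustrative assumptions only.

```python
def downsample_from_zoom(tile, factor):
    """Form a zoomed-out pixel block from zoomed-in pixel values by
    averaging each factor x factor neighbourhood, so the 'zoom out'
    picture is built from colours captured in the 'zoom in' condition."""
    h, w = len(tile), len(tile[0])
    out = []
    for y in range(0, h, factor):
        row = []
        for x in range(0, w, factor):
            block = [tile[yy][xx]
                     for yy in range(y, y + factor)
                     for xx in range(x, x + factor)]
            row.append(sum(block) // len(block))
        out.append(row)
    return out

# A 4x4 zoomed-in tile reduced to a 2x2 zoomed-out block.
tile = [[10, 12, 30, 30],
        [14, 16, 30, 30],
        [50, 50, 90, 90],
        [50, 50, 90, 90]]
print(downsample_from_zoom(tile, 2))  # [[13, 30], [50, 90]]
```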
In the context of the apparatus, a camera stand may be mounted on the box defining the pixel machine, which may help to minimise or eliminate vibration and small differences in the detected position of pixels across different frames.
In the context of the apparatus, the imaging device (e.g., camera) may be mounted on an XYZ motorized stand which may not only automatically zoom but also adjust its physical location according to the product size and/or placement on the turn table. The imaging device may be moved up and down, in a linear or curved motion. The imaging device may be moved in a curved motion to take a top view image of the object if desired. This may mean that the front-middle LED panel 204 may be movable together with the imaging device when the XYZ motorized stand is activated to control the positioning of the imaging device. The front-middle LED panel 204 may be movable up and down and/or rotated in a curve. Additionally or alternatively, even if the imaging device is moved up and down with XY movement, a curved effect may be achieved using the software application.
In the context of the apparatus, as lighting may be provided from any or all angles, the lighting may be turned on in a specific or defined manner to find the desired or best lighting based on pixel resolution and/or saturation levels.
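The search for the best lighting configuration might be sketched as below. The candidate configurations, the capture stub and the "fewest saturated pixels" score are hypothetical stand-ins for the "pixel resolution and/or saturation levels" criterion mentioned above.

```python
def best_lighting(configs, capture):
    """Try each lighting configuration and keep the one whose captured
    frame has the fewest saturated pixels."""
    def saturation_count(frame):
        return sum(1 for row in frame for p in row if p >= 250)
    return min(configs, key=lambda cfg: saturation_count(capture(cfg)))

# Hypothetical capture stub: the front light blows out the most pixels.
frames = {
    "front": [[255, 255], [120, 255]],
    "left":  [[200, 255], [120, 130]],
    "right": [[210, 140], [120, 130]],
}
choice = best_lighting(["front", "left", "right"], frames.get)
print(choice)  # "right": no saturated pixels
```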
In the context of the apparatus and various methods for imaging, front, back, left and right views of the object may be automatically captured by rotation of the turn table. A user only needs to place the object front facing the imaging device. The turn table may then rotate and adjust, for example by 5 to 10°, to determine the location where the maximum width of the object is detected. This may help to further eliminate any placement errors by the user, unless the user wants a front view of his or her own choice. Once the best possible front position is detected, the turn table may rotate by any angle, which may be defined in a default setting of the software. For example, the turn table may rotate by 90° to capture the left, back and right views of the object for the photoshoot.
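The front-view search over turn table angles can be sketched as a maximum-width sweep. The 5° step follows the description; the width profile used here is a hypothetical stand-in for the measured object width.

```python
def best_front_angle(width_at):
    """Step the turn table in small increments and pick the angle where
    the detected object width is maximal (used as the 'front' view)."""
    angles = range(0, 360, 5)  # 5-degree steps, per the description
    return max(angles, key=width_at)

# Hypothetical width profile peaking at 45 degrees on the turn table.
widths = {a: 100 - min(abs(45 - a), 360 - abs(45 - a))
          for a in range(0, 360, 5)}
print(best_front_angle(widths.get))  # 45
```

Once this angle is found, subsequent 90° rotations capture the left, back and right views.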
In the context of the apparatus and various methods for imaging, imaging or photography, and elimination of background, may take less than a minute.
In the context of the apparatus and various methods for imaging, as the light intensity may be controlled and/or different colour combinations may be produced, the camera white balance may be adjusted based on the RGB colour intensity, which may help in capturing colours close to the natural colours of the object(s) or product(s).
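One common way to adjust white balance from measured RGB intensities is the gray-world assumption, sketched below. This is an illustrative stand-in only; the source does not specify which balancing method the apparatus uses.

```python
def gray_world_balance(pixels):
    """Scale each channel so the average R, G and B become equal
    (gray-world assumption), approximating a white-balance correction
    derived from the measured RGB intensities."""
    n = len(pixels)
    avg = [sum(p[c] for p in pixels) / n for c in range(3)]
    gray = sum(avg) / 3.0
    return [tuple(min(255, round(p[c] * gray / avg[c])) for c in range(3))
            for p in pixels]

# A reddish cast: the R channel average is twice that of G and B.
pixels = [(200, 100, 100), (100, 50, 50)]
balanced = gray_world_balance(pixels)
print(balanced)  # [(133, 133, 133), (67, 67, 67)]
```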
In the context of the apparatus and various methods for imaging, a software application compatible with Windows® and Linux® operating systems may be provided. After the single button 344 is pressed by a user, the object on the turn table may be automatically determined, the camera may be adjusted to its best location, the lighting may be adjusted, object image(s) may be taken or generated, the background may be eliminated, the object may be centred in the middle of the frame, and both raw and compressed versions may be stored, for example, in the “My Pictures” folder in Windows®, on a flash drive connected to the processor, or in a network-addressed folder. Activating the button 344 triggers the automated scripts in the software application, which may include predefined template(s) to get most product photographs correct.
In the context of the apparatus and various methods for imaging, an application programming interface (API) platform may be provided, which may be used to write a plugin for a web application supporting single-button operation, uploading product images in single, quad or 360° form, compressed and optimised for web resolution, direct to e-commerce websites with a single press of the button 344.
As described above, the various methods may include one or more of capturing frames with and without the object, and employing a zoom effect. As described, background elimination may be carried out based on the zoom effect. Corresponding frames with and without the object may be captured at corresponding zoom values. As the zoom ratio of pixels corresponding to the objects is different from the zoom ratio of pixels corresponding to the background (since the objects are closer to the camera), the pixels corresponding to the objects may be determined and separated from those corresponding to the background.
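The zoom-ratio separation step can be sketched as a per-pixel classification. Estimating the per-pixel zoom ratio from the two frames is not detailed here, so the ratios are given as an input; the threshold of 1.5 and the background fill value are assumptions for the example.

```python
OBJECT_RATIO_THRESHOLD = 1.5  # assumed cut-off, not from the source

def separate_by_zoom_ratio(zoom_ratios, image, background_value=0):
    """Keep pixels whose measured zoom ratio exceeds the threshold
    (object pixels, closer to the camera, so they scale faster between
    the two zoom values) and discard the rest as background."""
    return [[p if r > OBJECT_RATIO_THRESHOLD else background_value
             for p, r in zip(prow, rrow)]
            for prow, rrow in zip(image, zoom_ratios)]

image  = [[90, 91], [92, 93]]
ratios = [[1.8, 1.2], [1.7, 1.1]]  # hypothetical per-pixel zoom ratios
print(separate_by_zoom_ratio(ratios, image))  # [[90, 0], [92, 0]]
```

Claims 4 to 6 generalise this to a comparison against a threshold value or a range of cut-off values.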
The various methods and techniques may be used in a studio with bigger objects to shoot, including fashion models, clothing and cars. Background lighting may be varied with known values, and pixels may be marked and determined as belonging to the product or the background so that there is no necessity for manual editing, e.g., to remove the background.
It should be appreciated that the various methods for imaging may be implemented, either individually or in combination, in the apparatus for imaging of various embodiments.
While the invention has been particularly shown and described with reference to specific embodiments, it should be understood by those skilled in the art that various changes in form and detail may be made therein without departing from the spirit and scope of the invention as defined by the appended claims. The scope of the invention is thus indicated by the appended claims and all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced.
Claims
1. An apparatus for imaging comprising:
a lighting arrangement configured to provide lighting; and
a processor,
wherein the processor is configured to control an imaging device to generate at least two images, each of the at least two images depicting an object of interest, and the at least two images depicting the object of interest and a background at different distances relative to each other, and
wherein the processor is further configured, for a respective pixel of pixels defining an image of the at least two images, to determine, based on the at least two images, a zoom ratio associated with the respective pixel, and to determine, using the zoom ratio determined, the respective pixel as a pixel belonging to the object of interest or the background.
2. The apparatus as claimed in claim 1,
wherein, for generating the at least two images, the processor is configured to control the imaging device to generate the at least two images at respective zoom values of the imaging device.
3. The apparatus as claimed in claim 2,
wherein the processor is further configured to control the imaging device to generate at least two additional images at the respective zoom values of the imaging device, each of the at least two additional images depicting the background without the object of interest,
wherein the processor is further configured, for an additional respective pixel of pixels defining an additional image of the at least two additional images, to determine, based on the at least two additional images, an additional zoom ratio associated with the additional respective pixel, and
wherein, for determining the respective pixel, the processor is configured to make a comparison between the zoom ratio and the additional zoom ratio, and to determine, based on the comparison, the respective pixel as a pixel belonging to the object of interest or the background.
4. The apparatus as claimed in claim 1 or 2,
wherein, for determining the respective pixel, the processor is configured to make a comparison between the zoom ratio and a threshold value, and to determine, based on the comparison, the respective pixel as a pixel belonging to the object of interest or the background.
5. The apparatus as claimed in claim 4, wherein the processor is configured:
to determine the respective pixel as a pixel belonging to the object of interest if the zoom ratio is determined to be higher than the threshold value, and to maintain the respective pixel; and
to determine the respective pixel as a pixel belonging to the background if the zoom ratio is determined to be lower than the threshold value, and to discard the respective pixel.
6. The apparatus as claimed in claim 1 or 2, wherein, for determining the respective pixel, the processor is configured:
to make a comparison between the zoom ratio and a range of cut-off values;
to determine the respective pixel as a pixel belonging to the object of interest if the zoom ratio is determined to be within the range of cut-off values, and to maintain the respective pixel; and
to determine the respective pixel as a pixel belonging to the background if the zoom ratio is determined to be outside of the range of cut-off values, and to discard the respective pixel.
7. An apparatus for imaging comprising:
a processor; and
a memory coupled to the processor, the memory having stored therein instructions, which when executed by the processor, cause the processor:
to control an imaging device to generate at least two images, each of the at least two images depicting an object of interest, and the at least two images depicting the object of interest and a background at different distances relative to each other;
to determine, based on the at least two images and for a respective pixel of pixels defining an image of the at least two images, a zoom ratio associated with the respective pixel; and
to determine, using the zoom ratio determined, the respective pixel as a pixel belonging to the object of interest or the background.
8. The apparatus as claimed in claim 7, wherein the instructions, which when executed by the processor, further cause the processor:
for generating the at least two images, to control the imaging device to generate the at least two images at respective zoom values of the imaging device.
9. The apparatus as claimed in claim 8, wherein the instructions, which when executed by the processor, further cause the processor:
to control the imaging device to generate at least two additional images at the respective zoom values of the imaging device, each of the at least two additional images depicting the background without the object of interest;
to determine, based on the at least two additional images and for an additional respective pixel of pixels defining an additional image of the at least two additional images, an additional zoom ratio associated with the additional respective pixel;
for determining the respective pixel, to make a comparison between the zoom ratio and the additional zoom ratio; and
to determine, based on the comparison, the respective pixel as a pixel belonging to the object of interest or the background.
10. An apparatus for imaging comprising:
a processor; and
a memory coupled to the processor, the memory having stored therein instructions, which when executed by the processor, cause the processor, for at least two images generated, each of the at least two images depicting an object of interest, and the at least two images depicting the object of interest and a background at different distances relative to each other:
to determine, based on the at least two images and for a respective pixel of pixels defining an image of the at least two images, a zoom ratio associated with the respective pixel; and
to determine, using the zoom ratio determined, the respective pixel as a pixel belonging to the object of interest or a background.
11. A method for imaging comprising:
generating, via an imaging device, at least two images, each of the at least two images depicting an object of interest, and the at least two images depicting the object of interest and a background at different distances relative to each other;
determining, based on the at least two images and for a respective pixel of pixels defining an image of the at least two images, a zoom ratio associated with the respective pixel; and
determining, using the zoom ratio determined, the respective pixel as a pixel belonging to the object of interest or a background.
12. The method as claimed in claim 11,
wherein generating at least two images comprises generating the at least two images at respective zoom values of the imaging device.
13. The method as claimed in claim 12, further comprising:
generating, via the imaging device, at least two additional images at the respective zoom values of the imaging device, each of the at least two additional images depicting the background without the object of interest;
determining, based on the at least two additional images and for an additional respective pixel of pixels defining an additional image of the at least two additional images, an additional zoom ratio associated with the additional respective pixel; and
wherein determining the respective pixel comprises making a comparison between the zoom ratio and the additional zoom ratio, and determining, based on the comparison, the respective pixel as a pixel belonging to the object of interest or the background.
14. The method as claimed in claim 11 or 12, wherein determining the respective pixel comprises making a comparison between the zoom ratio and a threshold value, and determining, based on the comparison, the respective pixel as a pixel belonging to the object of interest or the background.
15. The method as claimed in claim 14, comprising:
determining the respective pixel as a pixel belonging to the object of interest if the zoom ratio is determined to be higher than the threshold value, and maintaining the respective pixel; and
determining the respective pixel as a pixel belonging to the background if the zoom ratio is determined to be lower than the threshold value, and discarding the respective pixel.
16. The method as claimed in claim 11 or 12, wherein determining the respective pixel comprises:
making a comparison between the zoom ratio and a range of cut-off values;
determining the respective pixel as a pixel belonging to the object of interest if the zoom ratio is determined to be within the range of cut-off values, and maintaining the respective pixel; and
determining the respective pixel as a pixel belonging to the background if the zoom ratio is determined to be outside of the range of cut-off values, and discarding the respective pixel.
17. A method for imaging comprising:
for a respective pixel of pixels defining an image of at least two images, wherein each of the at least two images depicts an object of interest and wherein the at least two images depict the object of interest and a background at different distances relative to each other, determining, based on the at least two images, a zoom ratio associated with the respective pixel; and
determining, using the zoom ratio determined, the respective pixel as a pixel belonging to the object of interest or a background.
18. A computer program or a computer program product comprising instructions which, when executed by a computing device, cause the computing device to carry out a method as claimed in any one of claims 11 to 13.
19. A computer program or a computer program product comprising instructions which, when executed by a computing device, cause the computing device to carry out a method as claimed in claim 17.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
MYPI2018001380 | 2018-07-31 | ||
Publications (1)
Publication Number | Publication Date |
---|---|
WO2020027647A1 true WO2020027647A1 (en) | 2020-02-06 |
Family
ID=69232647
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/MY2019/000029 WO2020027647A1 (en) | 2018-07-31 | 2019-07-24 | Apparatus and method for imaging |
Country Status (1)
Country | Link |
---|---|
WO (1) | WO2020027647A1 (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113317659A (en) * | 2021-05-18 | 2021-08-31 | 深圳市行识未来科技有限公司 | A show shelf for planar design three-dimensional model |
EP4192000A1 (en) * | 2021-11-29 | 2023-06-07 | Tata Consultancy Services Limited | Method and system for zone-wise adaptive illumination of objects |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20040179134A1 (en) * | 2003-03-12 | 2004-09-16 | Nec Corporation | Portable terminal device and method and program for varying light illuminance used therein |
KR100676232B1 * | 2003-09-05 | 2007-01-30 | Hitachi Kokusai Electric Inc. | Computer-readable recording medium recording object tracking method, object tracking device and program of calculator for object tracking |
US20090262220A1 (en) * | 2008-04-21 | 2009-10-22 | Samsung Digital Imaging Co., Ltd. | Digital photographing apparatus, method of controlling the same, and recording medium storing computer program for executing the method |
US8159536B2 (en) * | 2004-06-14 | 2012-04-17 | Agency For Science, Technology And Research | Method for detecting desired objects in a highly dynamic environment by a monitoring system |
US9223404B1 (en) * | 2012-01-27 | 2015-12-29 | Amazon Technologies, Inc. | Separating foreground and background objects in captured images |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
Murmann et al. | A dataset of multi-illumination images in the wild | |
JP7203844B2 (en) | Training data generation method, generation device, and semantic segmentation method for the image | |
US7206449B2 (en) | Detecting silhouette edges in images | |
US7295720B2 (en) | Non-photorealistic camera | |
JP6286481B2 (en) | Writing system and method for object enhancement | |
US9648699B2 (en) | Automatic control of location-registered lighting according to a live reference lighting environment | |
CN114119779B (en) | Method and electronic device for generating material mapping by shooting with multi-angle lighting | |
CN107977694B (en) | Automatic analysis system for photographing, inputting and identifying samples | |
US12056812B2 (en) | Interactive image generation | |
US20220114734A1 (en) | System for background and floor replacement in full-length subject images | |
TWI568260B (en) | Image projection and capture with simultaneous display of led light | |
WO2020027647A1 (en) | Apparatus and method for imaging | |
JP2016181068A (en) | Learning sample imaging device | |
CN117545123A (en) | Brightness compensation method of LED light source | |
US20180167596A1 (en) | Image capture and display on a dome for chroma keying | |
WO2020027645A2 (en) | Apparatus and method for imaging | |
KR102564522B1 (en) | Multi-view shooting apparatus and method for creating 3D volume object | |
WO2020027646A2 (en) | Apparatus and method for imaging | |
WO2020027648A1 (en) | Apparatus and method for imaging | |
JP2013026656A (en) | Photographing device, photographing method, and photographing program | |
CN111034365A (en) | Apparatus and method for irradiating object | |
CN110874862A (en) | System and method for three-dimensional reconstruction | |
CN110874863A (en) | Three-dimensional reconstruction method and system for three-dimensional reconstruction | |
CN116342852A (en) | Sample image acquisition method, model training method and image acquisition system | |
WO2023232417A1 (en) | 3d reconstruction method and picture recording arrangement |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 19845326 Country of ref document: EP Kind code of ref document: A1 |
NENP | Non-entry into the national phase |
Ref country code: DE |
122 | Ep: pct application non-entry in european phase |
Ref document number: 19845326 Country of ref document: EP Kind code of ref document: A1 |