Article

Computational Integral Imaging Reconstruction via Elemental Image Blending without Normalization

1 Department of Computer Science, Sangmyung University, Seoul 110-743, Republic of Korea
2 Department of Intelligent IOT, Sangmyung University, Seoul 110-743, Republic of Korea
* Author to whom correspondence should be addressed.
Sensors 2023, 23(12), 5468; https://doi.org/10.3390/s23125468
Submission received: 7 May 2023 / Revised: 28 May 2023 / Accepted: 7 June 2023 / Published: 9 June 2023

Abstract

This paper presents a novel computational integral imaging reconstruction (CIIR) method using elemental image blending to eliminate the normalization process in CIIR. Normalization is commonly used in CIIR to address uneven overlapping artifacts. By incorporating elemental image blending, we remove the normalization step in CIIR, leading to decreased memory consumption and computational time compared to those of existing techniques. We conducted a theoretical analysis of the impact of elemental image blending on a CIIR method using windowing techniques, and the results showed that the proposed method is superior to the standard CIIR method in terms of image quality. We also performed computer simulations and optical experiments to evaluate the proposed method. The experimental results showed that the proposed method enhances the image quality over that of the standard CIIR method, while also reducing memory usage and processing time.

1. Introduction

Integral imaging is a well-known technique for visualizing and recognizing three-dimensional objects. Since Lippmann proposed the technology in 1908 [1], it has been actively studied in various fields, including 3D image recording, 3D visualization, and 3D object recognition [2,3,4,5,6,7,8,9,10,11,12,13,14,15]. In particular, computational integral imaging offers several advantages over traditional 3D imaging techniques: full parallax under white light and continuous viewpoints, without the need to wear eyeglasses. It thus provides a more immersive and realistic viewing experience in 3D visualization and virtual reality applications.
The computational integral imaging system is composed of a pickup process and a reconstruction process, as shown in Figure 1. In the pickup process, an image camera captures rays coming from a three-dimensional object and passing through a lenslet array. These recorded rays are known as an elemental image array (EIA). The reconstruction process generates 3D images from the EIA by employing a computational integral imaging reconstruction (CIIR) method. This CIIR method overcomes optical limitations such as lens aberrations and barrel distortion, producing 3D volume images for recognizing 3D objects and estimating their depths.
Generally, CIIR methods are based on back projection [16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40]. The principle of back projection involves projecting 2D elemental images onto a 3D space and overlapping the projected images at a reconstruction image plane [16,17,18,19]. Owing to their straightforward ray-optics model, back projection-based CIIR methods have been extensively studied for the improvement of 3D imagery [16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40]. These methods can be categorized into pixel-mapping-based, windowing-based, and convolution-based projection. Pixel-mapping methods project each pixel of an elemental image array into 3D space through a pinhole array [20,21,22,23,24,25], reducing computational costs and improving the visual quality of the reconstructed images. Windowing methods project weighted elemental images into 3D space, with the windowing functions defined from a signal model of CIIR. The signal model enhances the visual quality of the reconstructed images by eliminating blurring and lens array artifacts [26,27]. Recently, CIIR methods utilizing convolution and the delta function have been introduced to acquire depth information, offering improvements in reconstructed image quality and control over depth resolution [28,29,30,31,32,33,34,35]. In addition, CIIR methods using a tilted elemental image array have been proposed to enhance image quality [36,37]. Depth-controlled computational reconstruction using sub-images or continuously non-uniform shifting pixels has been proposed to achieve improved depth resolution and image quality [38,39], and a deep learning-based integral imaging system that uses a pre-trained Mask R-CNN was suggested to avoid blurry out-of-focus areas [40].
Existing CIIR methods typically involve magnification, overlapping, and normalization processes. To reduce computational costs, magnification can be replaced with a shifting process. The overlapping process is essential and cannot be eliminated. Normalization is likewise necessary to correct uneven overlapping artifacts, but it demands memory and computing time comparable to those of the overlapping process itself. As the size of each elemental image increases, memory usage and processing time grow accordingly, making such methods unsuitable for real-time applications.
In this paper, we present a novel CIIR method based on elemental image blending. Our technique eliminates the normalization process used to compensate for uneven overlapping artifacts, resulting in decreased memory usage and computational time compared to those of existing methods. We analyzed the impact of elemental image blending on a CIIR method using windowing techniques; our model and analysis show that the proposed method is theoretically superior to the standard CIIR method. Additionally, we performed computer simulations and optical experiments to evaluate the proposed method. The experimental results indicate that, compared to existing CIIR methods, our method uses approximately half the memory and offers improved processing speed. Moreover, the proposed method enhances the image quality over that of the standard CIIR method.

2. Conventional Computational Integral Imaging Reconstruction

The conventional CIIR method is depicted in Figure 2. Here, the method is described in terms of the window model so that existing techniques can be explained within a single framework. Each elemental image is multiplied by a window function, shifted according to the shift factor, and then overlapped. Because the number of overlaps varies across the reconstructed image plane, a normalization process is performed to remove the uneven overlapping artifact. By repeating these operations along the z-axis, we obtain a volumetric reconstruction of a 3D object.
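For concreteness, the following is a minimal 1D sketch of this shift-overlap-normalize pipeline in numpy (the paper's experiments use MATLAB; the function and variable names here are ours, not the authors'):

```python
import numpy as np

def standard_ciir_1d(eia, a):
    """Standard 1D CIIR: shift each elemental image by the shift factor a,
    accumulate the overlaps, then normalize by the per-pixel overlap count."""
    n, w = eia.shape                        # n elemental images, each w pixels wide
    plane = np.zeros(w + (n - 1) * a)       # reconstruction plane
    count = np.zeros(w + (n - 1) * a)       # overlap counter (the extra buffer)
    for i in range(n):
        plane[i * a : i * a + w] += eia[i]  # overlapping (back projection)
        count[i * a : i * a + w] += 1.0     # record the overlap number per pixel
    return plane / np.maximum(count, 1.0)   # normalization removes uneven overlap

# toy usage: 9 elemental images of 32 pixels each, shift factor 8
reconstruction = standard_ciir_1d(np.random.rand(9, 32), a=8)
```

Note that the counter buffer is as large as the reconstruction plane itself, which is exactly the memory overhead the proposed method removes.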
However, the standard CIIR method exhibits some artifacts that lead to a decline in visual quality. During the overlapping process, uneven overlapping artifacts—also known as granular noise—emerge in the reconstructed images [26]. Furthermore, blurring artifacts appear in the out-of-focus areas, and lenslet artifacts arise due to the lenslet’s shape. As the lenslet shape is rectangular, the reconstructed images appear blocky. The window-based CIIR method has been proposed as a means to enhance the quality of reconstructed images by eliminating these artifacts [26,27]. This CIIR approach incorporates a window function in the overlapping process, using a signal model.
The standard CIIR method can be viewed as a CIIR approach that employs a rectangular window. In this case, the rectangular window can be substituted with a different window, such as a triangular or cubic one. Switching the rectangular window to a triangular window reduces blurring artifacts and lenslet artifacts, enhancing the image quality of the reconstructed image [27]. The primary cause of these artifacts is the discontinuity in the windows. Thus, to minimize artifacts, a smooth and continuous window should be implemented to eliminate discontinuity and produce seamlessly reconstructed signals.
To explain the signal model of integral imaging, we introduce our previously proposed signal model [26]. Figure 3a shows a 1D pinhole-array optical model of an integral imaging system, and Figure 3b abstracts this system by introducing a rectangular window function. Here, fz(x) is the intensity signal of an object located at distance z from the lens array, and rz(x) is the reconstructed signal at distance z, obtained by back-projecting and overlapping the picked-up EIA through the virtual pinhole array. The EIA pickup process is described as a procedure that windows, inverts, and downscales the fz(x) signal. Conversely, the reconstruction process re-inverts, upscales, and overlaps these elemental images. The inversion and scaling effects thus cancel, leaving only the windowing and overlapping effects, which allows the reconstruction to be described by the simplified model in Figure 3b.
Based on the model in Figure 3b, the relationship between the original signal fz(x) and its reconstructed signal rz(x) is written as
$$ r_z(x) \;=\; f_z(x) \sum_{i=0}^{N-1} \pi_i\!\left(\frac{x}{w}\right) \;=\; f_z(x)\, S_\pi(x), \tag{1} $$
where πi(x/w) = π0((x − iα)/w), α represents the size of each elemental image, N represents the number of elemental images, and w represents the size of the window π0(x). Here, the shifted window function (SWF) πi(x) is a shifted version of the window function π0(x), where π0(x) = 1 for 0 ≤ x ≤ 1, and zero otherwise. In an SWF, the shifting factor s is a multiple of the elemental signal length α; that is, s = i·α. The sum of the shifted windows is Sπ(x), and from Equation (1), the original signal fz(x) is obtained as fz(x) = rz(x)/Sπ(x). Thus, in the window-based CIIR method, the normalization process corresponds to dividing the reconstructed image by the summation of the SWFs.
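As a concrete instance (our own worked example, not from the paper): for a rectangular window of width w = 2α and N = 3 elemental images, the sum of the SWFs is

$$ S_\pi(x) \;=\; \sum_{i=0}^{2} \pi_0\!\left(\frac{x - i\alpha}{2\alpha}\right) \;=\; \begin{cases} 1, & 0 \le x < \alpha, \\ 2, & \alpha \le x < 3\alpha, \\ 1, & 3\alpha \le x \le 4\alpha, \end{cases} $$

so the reconstructed signal is doubled in the central band, and dividing by Sπ(x) restores fz(x). This stairstep profile is precisely the uneven overlapping artifact that normalization removes.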
For example, Figure 4a shows nine shifted window functions (SWFs) using a rectangular window function, while Figure 5a displays nine SWFs derived from a triangular window function. The sum of these SWFs, represented as Sπ(x), is highlighted in red at the bottom of each figure. The function πNi(x) in Figure 4b represents the normalized window function of πi(x), achieved by dividing each rectangular window function πi(x) by Sπ(x); hence, πNi(x) = πi(x)/Sπ(x). As seen in the lower section of Figure 4b, the sum of these normalized SWFs equals one. Figure 5b echoes this concept, presenting normalized SWFs for the triangular window function. It can be observed that the window functions in Figure 5a are simply translated versions of πi(x); however, their normalized window functions can be different due to the shape of the function Sπ(x), as illustrated in Figure 5b. The differences are particularly noticeable in πN0(x) and πN8(x). In conclusion, applying normalized window functions to the EIA eliminates the need for the normalization process of the CIIR. Therefore, the development of a method using normalized window functions is important.
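The effect is easy to verify numerically. Below is a small numpy sketch (our own illustration, mirroring Figures 4 and 5) that builds nine SWFs for a rectangular and a triangular window, sums them, and checks that the normalized SWFs sum to one wherever the sum is nonzero:

```python
import numpy as np

x = np.linspace(0.0, 1.0, 2049)
N, alpha, w = 9, 0.1, 0.2                    # nine SWFs, shift alpha, window width w

rect = lambda u: ((u >= 0.0) & (u <= 1.0)).astype(float)
tri  = lambda u: np.maximum(0.0, 1.0 - np.abs(2.0 * u - 1.0))  # triangular window

for window in (rect, tri):
    swfs = np.stack([window((x - i * alpha) / w) for i in range(N)])
    S = swfs.sum(axis=0)                        # S_pi(x): the uneven overlap profile
    normalized = swfs / np.maximum(S, 1e-12)    # pi_i(x) / S_pi(x)
    assert np.allclose(normalized.sum(axis=0)[S > 1e-9], 1.0)  # sums to one
```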

3. Proposed CIIR Model via Elemental Image Blending

A computational reconstruction method for integral imaging systems is proposed by using elemental image blending. Conventional CIIR methods require a compensation process to normalize the reconstructed images due to uneven overlapping. The normalization process is performed by dividing the reconstructed images by the summation of the SWFs. This process requires additional memory because it needs to record the overlapping numbers for all pixels of the reconstructed images. To eliminate the normalization process, we introduce elemental image blending. Here, the overlapping process is modified with elemental image blending; thus, the normalization process can be canceled out. It turns out that our method saves approximately half of the memory, consequently improving the computation speed. In this section, we describe the proposed method and provide a mathematical analysis of the impact of the elemental image blending technique on CIIR using the window signal model.
Figure 6 depicts the proposed CIIR method based on elemental image blending. The flow of the proposed method is as follows: First, two elemental images, Ei−1 and Ei, are obtained, and overlapping is performed using elemental image blending. The resulting image is then overwritten onto the reconstruction buffer. This process is repeated for all the elemental images of the EIA. In the proposed method, the reconstruction buffer is a temporary memory that stores the overlapped image Ri of the two elemental images Ei−1 and Ei. The elemental image Ei−1 is extracted from the reconstruction buffer, while Ei is obtained from the i-th elemental image in the EIA. As depicted in Figure 6a, the proposed method consists of the overlapping process using image blending, the extraction process of the elemental image Ei−1, and the overwriting process of Ri, as explained in the following paragraphs.
Figure 6b describes the process of overlapping two images, Ei−1 and Ei, using image blending. In the overlapping process, there are areas where the two input images overlap and areas where they do not. The area where the two images overlap, with blending, is referred to as the blending area. Let the parameter w be the width of the input images and the parameter a be the shift factor. The overlapping range in Ei−1 is from a to w, while the overlapping range in Ei is from 0 to w − a. Thus, the blending area and the non-blending area are separable. The blending area is blended by the well-known alpha blending, which we choose as the elemental image blending due to its simplicity and effectiveness. The blended image is stored in Ri from a to w. The non-blending areas of Ei−1 and Ei are copied into Ri from 0 to a and from w to w + a, respectively. Thus, the images of each area are merged to output Ri. Figure 6c illustrates the process of extracting Ei−1 from the reconstruction buffer. The overlapping results are stored in the reconstruction buffer, so the image of size w × w extracted from this buffer differs from the original elemental image. In the initial reconstruction buffer, the elemental image located at the leftmost position in the EIA is written as E0. The initial queue pointer, q0, points to the 0th column of the reconstruction buffer, and the image of size w × w extracted from q0 is defined as E0. As the overlapping and overwriting process repeats, the queue pointer is updated, and Ei−1 can be obtained by extracting an image of size w × w starting from q2i−2.
Figure 6d illustrates the process of overwriting Ri in the reconstruction buffer. R1, the overlapping image of E0 and E1, is overwritten onto the buffer starting from q1, where q1 is the position obtained by moving q0 by a. As the overlapping and overwriting processes are repeated, the queue pointer is updated as
$$ q_i = q_{i-1} + a. \tag{2} $$
When Ri is input in the overwriting process, the queue pointer is updated, and Ri is overwritten onto the buffer starting from q2i−1.
We also provide the flowchart of the proposed method in Figure 7. The proposed method employs horizontal overlapping for each row of the EIA, as shown in Figure 7a. Vertical overlapping is then applied to the resulting images, which is implemented by transposing the intermediate results and applying the same horizontal overlapping. Figure 7b depicts the flowchart of the overlapping process: the first elemental image, E0, is written into the reconstruction buffer; subsequently, Ei−1 is extracted from the reconstruction buffer, and Ei is fetched from the EIA for image blending. This process is repeated until the final elemental image has been blended.
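Under these descriptions, the whole pipeline can be sketched compactly in numpy. This is our reading of Figures 6 and 7, with the reconstruction buffer and queue pointer folded into a single left-to-right pass per row; the function names and the linear alpha ramp are our own choices:

```python
import numpy as np

def blend_strip(images, a):
    """Alpha-blended, left-to-right overlap of n images of shape (h, w)
    along their width; returns an array of shape (h, w + (n - 1) * a)."""
    n, h, w = images.shape
    ramp = np.linspace(0.0, 1.0, w - a, endpoint=False)   # A2 weight over the overlap
    buf = np.zeros((h, w + (n - 1) * a))                  # reconstruction buffer
    buf[:, :w] = images[0]                                # E_0 is written first
    q = 0                                                 # queue pointer
    for i in range(1, n):
        q += a                                            # q_i = q_{i-1} + a
        old = buf[:, q : q + w - a]                       # blending part of E_{i-1}
        buf[:, q : q + w - a] = (1.0 - ramp) * old + ramp * images[i][:, : w - a]
        buf[:, q + w - a : q + w] = images[i][:, w - a :] # non-blending area of E_i
    return buf

def proposed_ciir(eia, a):
    """eia: (ny, nx, w, w) grid of elemental images. Horizontal pass on each
    row of the grid, then the same pass vertically via transposition."""
    strips = np.stack([blend_strip(row, a) for row in eia])   # (ny, w, W)
    return blend_strip(strips.transpose(0, 2, 1), a).T        # vertical pass

# toy usage: a 10 x 10 EIA of 64 x 64 elemental images, shift factor 16
plane = proposed_ciir(np.random.rand(10, 10, 64, 64), a=16)
```

Because each pairwise blend keeps the weights summing to one over the overlap, no overlap counter or division is needed, which is the source of the memory and time savings.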
The overlapping and overwriting processes described above eliminate the need for a normalization process. Here, we mathematically explain why normalization can be dropped and how elemental image blending affects the image quality of the reconstructed image.
To analyze the overall CIIR model using elemental image blending, we represent elemental image blending as windowing applied to an elemental image, taking alpha blending as the example. When two elemental image signals are alpha-blended, the two weights, A1(x) and A2(x), act as two windowing functions on the two elemental images and are written as
$$ A_1(x) = \begin{cases} 1, & 0 \le x < a, \\[4pt] \dfrac{w - x}{w - a}, & a \le x < w, \end{cases} \qquad A_2(x) = \begin{cases} \dfrac{x}{w - a}, & 0 \le x < w - a, \\[4pt] 1, & w - a \le x < w. \end{cases} \tag{3} $$
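One step is worth making explicit (our own derivation from Equation (3)): a point at coordinate x ∈ [a, w) of Ei−1 coincides with coordinate x − a of Ei, so the total blending weight at every point of the overlap is

$$ A_1(x) + A_2(x - a) \;=\; \frac{w - x}{w - a} + \frac{x - a}{w - a} \;=\; 1, \qquad a \le x < w, $$

i.e., each pairwise blend is a partition of unity over the blending area.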
As the elemental images overlap, the weights of the signals change, as shown in Figure 8. Accordingly, the weights after overlapping can be represented as different windowing functions. Let the weight of the first elemental image signal be π0(x). It is defined as the product of shifted versions of A1(x),
$$ \pi_0(x) = A_1(x)\,A_1(x-a)\,A_1(x-2a)\cdots A_1\big(x - a(M-1)\big). \tag{4} $$
Similarly, the weight of the i-th signal, πi(x), is defined as
$$ \pi_i(x) = A_2(x)\,A_1(x)\,A_1(x-a)\,A_1(x-2a)\cdots A_1\big(x - a(M-1)\big), \tag{5} $$
which is the product of A2(x) and shifted versions of A1(x), when i is between 2 and N − M − 1. As shown in Figure 8, the A1(x) factor is required for image blending with the next elemental image; thus, the number of overlaps decreases as i increases from N − M to the end.
According to these formulas, the alpha-blending weight for each elemental image can be represented as a shifted window function, as shown in Figure 9. Since the summation of these window functions is unity, the original signal can be reconstructed without compensation processes such as normalization. Note that each window in Figure 9 is continuous and smooth. According to the window theory discussed in Section 2, a continuous window is excellent for removing lenslet artifacts, and it reduces the blurring artifacts that occur in the standard CIIR method with its rectangular window. Therefore, the proposed method using elemental image blending provides better image quality than the standard CIIR while requiring less memory and computing time.
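The unity property can be checked numerically: feeding constant-one elemental signals through the blending pass must return a buffer of ones. The following is a quick self-contained test, with our own parameter choices:

```python
import numpy as np

n, w, a = 9, 32, 8                           # nine constant-one elemental signals
ones = np.ones((n, w))
buf = np.zeros(w + (n - 1) * a)
buf[:w] = ones[0]
ramp = np.linspace(0.0, 1.0, w - a, endpoint=False)
q = 0
for i in range(1, n):
    q += a
    buf[q : q + w - a] = (1.0 - ramp) * buf[q : q + w - a] + ramp * ones[i][: w - a]
    buf[q + w - a : q + w] = ones[i][w - a :]
assert np.allclose(buf, 1.0)                 # unity everywhere: no normalization needed
```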

4. Experiment Results and Discussion

To demonstrate the usefulness of our proposed method, we conducted optical and computational experiments. Figure 10 illustrates the optical experimental setup, consisting of a lenslet array and a camera, along with the EIAs acquired in that environment. The size and focal length of each lenslet are 1.08 mm and 5.2 mm, respectively. The camera is a Canon EOS 800D, equipped with an APS-C (1.6× crop) sensor with an effective resolution of 24.2 megapixels. Two 3D objects, a green car and a yellow car, were used in the experiment. The EIA for the green car comprises 32 × 32 elemental images, with each elemental image consisting of 32 × 32 pixels. The EIA for the yellow car consists of 45 × 34 elemental images, and each elemental image has a resolution of 58 × 58 pixels. The resolution of the elemental images can be controlled by adjusting the distance between the camera and the lens array. The location of the 3D objects, z0, is around 20 mm. We reconstructed the 3D objects using the existing CIIRs with rectangular and triangular windows and our proposed CIIR.
Figure 11 shows the reconstructed images of the green car with the output plane located at 20 mm and 30 mm. The images in Figure 11a are reconstructed by the standard CIIR with a rectangular window. In the reconstructed image at 20 mm, there is a blocky area due to lenslet artifacts. Similarly, the reconstructed image at 30 mm also shows blocky areas, and the object boundary is defocused and blurry, as highlighted by the dotted ellipses. The images in Figure 11c are reconstructed by the proposed CIIR. In the reconstructed image at 20 mm, there are no blocky areas, and the object boundary is clean compared with that of the standard method. The reconstructed image at 30 mm also shows no blocky areas, and the object boundary is relatively sharp. The images in Figure 11b are reconstructed by the window-based CIIR with a triangular window. They suffer from fewer lenslet and blurring artifacts than those obtained using the CIIR with a rectangular window, and they show quality similar to that of the images reconstructed using the proposed CIIR.
Figure 12 shows the reconstructed images of the yellow car with the same locations of the output plane as those in the previous experiment. Figure 12a shows the images reconstructed by the window-based CIIR with a rectangular window. As can be seen from the enlarged images, the star and text are blurry due to blurring artifacts, and there are blocky areas due to the lenslet artifacts. On the other hand, the image reconstructed by the proposed CIIR using alpha blending, indicated in Figure 12c, shows reduced blurring artifacts and significantly fewer lenslet artifacts. The image quality of the proposed method is similar to that of the image reconstructed by the window-based CIIR with a triangular window, as shown in Figure 12b.
We conducted another experiment using the public light field dataset provided by the Heidelberg Collaboratory for Image Processing (HCI) [41]. Figure 13a,b displays one of the HCI datasets used in our experiment, called ‘bicycle’. As the HCI data are provided as a set of 81 files, we concatenate them into a 2D array to form an EIA suitable for CIIR. Consequently, the newly prepared EIA comprises 9 × 9 elemental images, each with a size of 512 × 512 pixels. Figure 13b presents a magnified section of this EIA.
Note that the characteristics of the HCI data contrast, to some extent, with those of an optical EIA. In general, an EIA directly picked up through a planar lens array exhibits a positive disparity for near objects, and the disparity approaches zero as the distance to the objects increases. In contrast, the disparity between the two nearest elemental images from the HCI data is positive for near objects and negative for distant objects. Moreover, the disparity values in these HCI images do not exceed 1 pixel. These observations can be easily inferred from the elemental image difference illustrated in Figure 13c.
To apply the HCI images to CIIR, an EIA must first be prepared from them. Given that the disparity value is less than one, each elemental image is magnified by image interpolation. To address the negative disparity values, each elemental image is inverted by image flipping. Subsequently, these elemental images are concatenated into a 2D array to construct the EIA. Once prepared, this EIA can be applied to the CIIR methods, including our proposed method, allowing us to evaluate the experimental results.
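A sketch of this preparation step in numpy, under our assumptions about the data layout (the 81 files already loaded into a 9 × 9 grid of views; the paper magnifies by image interpolation, whereas we use nearest-neighbor repetition here to stay dependency-free):

```python
import numpy as np

def prepare_eia(views, scale=2):
    """Build an EIA from a (ny, nx, h, w) grid of sub-aperture views:
    flip each view to handle the negative disparity, magnify it, and
    tile the results into a single 2D array."""
    ny, nx, h, w = views.shape
    eia = np.zeros((ny * h * scale, nx * w * scale))
    for j in range(ny):
        for i in range(nx):
            v = views[j, i, ::-1, ::-1]                        # invert by flipping
            v = v.repeat(scale, axis=0).repeat(scale, axis=1)  # magnify (nearest)
            eia[j * h * scale : (j + 1) * h * scale,
                i * w * scale : (i + 1) * w * scale] = v
    return eia

# stand-in for the 81 HCI files loaded into a 9 x 9 grid of 512 x 512 views
eia = prepare_eia(np.random.rand(9, 9, 512, 512))
```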
Figure 14 illustrates the reconstructed images of the ‘bicycle’ dataset, focused on near and far objects. For the near focus, the shift factor for CIIR is set to 1 pixel. For the distant focus, the elemental images were flipped and magnified by a factor of two and then applied to the CIIRs, again with a shift factor of 1 pixel. Figure 14a,b depicts the images reconstructed by the CIIR methods based on rectangular and triangular window functions, respectively. Figure 14c shows the images from the proposed method. The areas marked ① and ③ in Figure 14 represent objects located at an in-focus depth; all three methods show clear objects. The areas marked ② and ④ represent objects positioned at an out-of-focus depth.
As depicted in Figure 14c, the images reconstructed in the experiment with the HCI data also show that our method yields relatively sharper images than the conventional methods. Notably, this result indicates that our approach significantly broadens the depth of focus relative to the existing methods. Furthermore, the blurring in the out-of-focus areas is reduced with the proposed method, yielding smoother image quality than that of the conventional methods.
We then performed a time–memory measurement experiment to demonstrate the usefulness of the proposed method in terms of memory and time efficiency. The experiment was performed by implementing CIIRs with elemental image blending, a rectangular window, a triangular window, and a cubic window in MATLAB R2022a. The experiment used an EIA of 10 × 10 elemental images, with each elemental image measuring 512 × 512 pixels. The computer used for time measurement was equipped with an Intel® Core™ i7-10700KF CPU @ 3.80 GHz and 64 GB of RAM.
In this experiment, we conducted CIIRs without magnification in order to minimize memory usage. The window-based CIIR methods require memory for normalization, in which the amount of required memory is the same size as the reconstructed image. In contrast, the proposed CIIR requires only the memory for a reconstructed image. The experiment employed a fixed size for both the elemental image and the number of elemental images. For each CIIR method, the shifting factors were varied with values of 8, 16, 32, and 64. Figure 15 shows the simulation results as a scatterplot. Typically, memory requirements increase as the shifting parameter increases in CIIR. Moreover, the computational time required follows the order of the rectangular, triangular, and cubic windows, in accordance with the complexity of CIIR. As shown in Figure 15, the proposed CIIR using alpha blending requires the least time and memory, when compared to the other methods.
A notable point is that the proposed method requires less time and memory than the CIIR employing a triangular window, which achieved image quality similar to that of the proposed method in the optical experiments above. The proposed method is also faster than the method using a rectangular window, owing to the reduced memory access time.
Therefore, the experimental results show that the proposed CIIR provides reconstructed images with better subjective image quality than those of the standard CIIR, and with image quality similar to that of the window-based CIIR using a triangular window. In addition, from an objective evaluation perspective, the proposed CIIR requires less processing time and less memory than the window-based CIIRs.

5. Conclusions

We have introduced a computational integral imaging reconstruction model via elemental image blending. Our signal model defines elemental image blending in detail from the perspective of the window. Unlike other methods, the proposed model does not require a normalization process. Moreover, our model is expected to provide enhanced image quality due to its continuous and smooth window-like behavior. Optical and computational experiments demonstrate that the proposed method improves image quality without normalization while requiring less memory and processing time.

Author Contributions

Conceptualization, H.Y.; methodology, E.L. and H.Y.; software, E.L. and H.Y.; validation, E.L., H.C. and H.Y.; formal analysis, H.Y.; investigation, E.L., H.C. and H.Y.; resources, H.Y.; data curation, E.L., H.C. and H.Y.; writing—original draft preparation, H.Y.; writing—review and editing, E.L., H.C. and H.Y.; visualization, E.L.; supervision, H.Y.; project administration, H.Y. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by a 2021 research grant from Sangmyung University.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data sharing is not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Lippmann, G. La photographie intégrale. CR Acad. Sci. 1908, 146, 446–451.
2. Huang, Y.; Krishnan, G.; O’Connor, T.; Joshi, R.; Javidi, B. End-to-end integrated pipeline for underwater optical signal detection using 1D integral imaging capture with a convolutional neural network. Opt. Express 2023, 31, 1367–1385.
3. Jang, J.S.; Javidi, B. Improved viewing resolution of three-dimensional integral imaging by use of nonstationary micro-optics. Opt. Lett. 2002, 27, 324–326.
4. Javidi, B.; Hua, H.; Stern, A.; Martinez, M.; Matobe, O.; Wetzstein, G. Focus issue introduction: 3D image acquisition and display: Technology, perception and applications. Opt. Express 2022, 30, 4655–4658.
5. Javidi, B.; Carnicer, A.; Arai, J.; Fujii, T.; Hua, H.; Liao, H.; Martínez-Corral, M.; Pla, F.; Stern, A.; Waller, L.; et al. Roadmap on 3D integral imaging: Sensing, processing, and display. Opt. Express 2020, 28, 32266–32293.
6. Li, X.; Zhao, M.; Xing, Y.; Zhang, H.L.; Li, L.; Kim, S.T.; Wang, Q.H. Designing optical 3D images encryption and reconstruction using monospectral synthetic aperture integral imaging. Opt. Express 2018, 26, 11084–11099.
7. Shen, X.; Kim, H.S.; Satoru, K.; Markman, A.; Javidi, B. Spatial-temporal human gesture recognition under degraded conditions using three-dimensional integral imaging. Opt. Express 2018, 26, 13938–13951.
8. Markman, A.; Shen, X.; Javidi, B. Three-dimensional object visualization and detection in low light illumination using integral imaging. Opt. Lett. 2017, 42, 3068–3071.
9. Llavador, A.; Sola-Pikabea, J.; Saavedra, G.; Javidi, B.; Martínez-Corral, M. Resolution improvements in integral microscopy with Fourier plane recording. Opt. Express 2016, 24, 20792–20798.
10. Lee, Y.K.; Yoo, H. Three-dimensional visualization of objects in scattering medium using integral imaging and spectral analysis. Opt. Lasers Eng. 2016, 77, 31–38.
11. Yoo, H.; Shin, D.H.; Cho, M. Improved depth extraction method of 3D objects using computational integral imaging reconstruction based on multiple windowing techniques. Opt. Lasers Eng. 2015, 66, 105–111.
12. Park, S.; Yeom, J.; Jeong, Y.; Chen, N.; Hong, J.Y.; Lee, B. Recent issues on integral imaging and its applications. J. Inf. Disp. 2014, 15, 37–46.
13. Xiao, X.; Javidi, B.; Martinez-Corral, M.; Stern, A. Advances in three-dimensional integral imaging: Sensing, display, and applications [Invited]. Appl. Opt. 2013, 52, 546–560.
14. Cho, M.; Daneshpanah, M.; Moon, I.; Javidi, B. Three-dimensional optical sensing and visualization using integral imaging. Proc. IEEE 2011, 99, 556–575.
15. Okano, F.; Arai, J.; Hoshino, H.; Yuyama, I. Three-dimensional video system based on integral photography. Opt. Eng. 1999, 38, 1072–1077.
16. Shin, D.H.; Yoo, H. Image quality enhancement in 3D computational integral imaging by use of interpolation methods. Opt. Express 2007, 15, 12039–12049.
17. Hong, S.H.; Jang, J.S.; Javidi, B. Three-dimensional volumetric object reconstruction using computational integral imaging. Opt. Express 2004, 12, 483–491.
18. Arimoto, H.; Javidi, B. Integral three-dimensional imaging with digital reconstruction. Opt. Lett. 2001, 26, 157–159.
19. Chen, N.; Ren, Z.; Li, D.; Lam, E.Y.; Situ, G. Analysis of the noise in backprojection light field acquisition and its optimization. Appl. Opt. 2017, 56, F20–F26.
20. Inoue, K.; Lee, M.C.; Javidi, B.; Cho, M. Improved 3D integral imaging reconstruction with elemental image pixel rearrangement. J. Opt. 2018, 20, 025703.
21. Inoue, K.; Cho, M. Visual quality enhancement of integral imaging by using pixel rearrangement technique with convolution operator (CPERTS). Opt. Lasers Eng. 2018, 111, 206–210.
22. Cho, M.; Javidi, B. Computational reconstruction of three-dimensional integral imaging by rearrangement of elemental image pixels. J. Disp. Technol. 2009, 5, 61–65.
23. Shin, D.H.; Yoo, H. Computational integral imaging reconstruction method of 3D images using pixel-to-pixel mapping and image interpolation. Opt. Commun. 2009, 282, 2760–2767.
24. Shin, D.H.; Yoo, H. Scale-variant magnification for computational integral imaging and its application to 3D object correlator. Opt. Express 2008, 16, 8855–8867.
25. Qin, Z.; Chou, P.Y.; Wu, J.Y.; Huang, C.T.; Huang, Y.P. Resolution-enhanced light field displays by recombining subpixels across elemental images. Opt. Lett. 2019, 44, 2438–2441.
26. Yoo, H.; Shin, D.H. Improved analysis on the signal property of computational integral imaging system. Opt. Express 2007, 15, 14107–14114.
27. Yoo, H. Artifact analysis and image enhancement in three-dimensional computational integral imaging using smooth windowing technique. Opt. Lett. 2011, 36, 2107–2109.
28. Ai, L.Y.; Kim, E.S. Refocusing-range and image-quality enhanced optical reconstruction of 3-D objects from integral images using a principal periodic δ-function array. Opt. Commun. 2018, 410, 871–883.
29. Yoo, H.; Jang, J.-Y. Intermediate elemental image reconstruction for refocused three-dimensional images in integral imaging by convolution with δ-function sequences. Opt. Lasers Eng. 2017, 97, 93–99.
30. Ai, L.Y.; Dong, X.B.; Jang, J.Y.; Kim, E.S. Optical full-depth refocusing of 3-D objects based on subdivided-elemental images and local periodic δ-functions in integral imaging. Opt. Express 2016, 24, 10359–10375.
31. Llavador, A.; Sánchez-Ortiga, E.; Saavedra, G.; Javidi, B.; Martínez-Corral, M. Free-depths reconstruction with synthetic impulse response in integral imaging. Opt. Express 2015, 23, 30127–30135.
32. Jang, J.Y.; Shin, D.H.; Kim, E.S. Improved 3-D image reconstruction using the convolution property of periodic functions in curved integral-imaging. Opt. Lasers Eng. 2014, 54, 14–20.
33. Jang, J.Y.; Shin, D.H.; Kim, E.S. Optical three-dimensional refocusing from elemental images based on a sifting property of the periodic δ-function array in integral imaging. Opt. Express 2014, 22, 1533–1550.
34. Jang, J.Y.; Ser, J.I.; Cha, S.; Shin, S.H. Depth extraction by using the correlation of the periodic function with an elemental image in integral imaging. Appl. Opt. 2012, 51, 3279–3286.
35. Xing, Y.; Wang, Q.H.; Ren, H.; Luo, L.; Deng, H.; Li, D.H. Optical arbitrary-depth refocusing for large-depth scene in integral imaging display based on reprojected parallax image. Opt. Commun. 2019, 433, 209–214.
36. Yan, Z.; Yan, X.; Jiang, X.; Ai, L. Computational integral imaging reconstruction of perspective and orthographic view images by common patches analysis. Opt. Express 2017, 25, 21887–21900.
37. Cho, M.; Javidi, B. Free view reconstruction of three-dimensional integral imaging using tilted reconstruction planes with locally nonuniform magnification. J. Disp. Technol. 2009, 5, 345–349.
38. Cho, B.; Kopycki, P.; Martinez-Corral, M.; Cho, M. Computational volumetric reconstruction of integral imaging with improved depth resolution considering continuously non-uniform shifting pixels. Opt. Lasers Eng. 2018, 111, 114–121.
39. Hwang, D.C.; Park, J.S.; Shin, D.H.; Kim, E.S. Depth-controlled reconstruction of 3D integral image using synthesized intermediate sub-images. Opt. Commun. 2008, 281, 5991–5997.
40. Yi, F.; Jeong, O.; Moon, I.; Javidi, B. Deep learning integral imaging for three-dimensional visualization, object detection, and segmentation. Opt. Lasers Eng. 2021, 146, 106695.
41. 4D Light Field Dataset. Available online: https://lightfield-analysis.uni-konstanz.de/ (accessed on 28 May 2023).
Figure 1. Computational integral imaging system: (a) pickup and (b) computational reconstruction.
Figure 2. Processes of reconstructing a volume using the standard CIIR method.
Figure 3. (a) 1D optical model of integral imaging and (b) its window signal model.
Figure 4. (a) Illustration of the sum of nine SWFs in standard CIIR; (b) the results of normalization.
Figure 5. (a) Illustration of the sum of nine SWFs in triangular CIIR; (b) the results of normalization.
Figure 6. Diagram of the proposed CIIR method based on image blending: (a) flowchart of the proposed method; (b) the overlapping process; (c,d) the Ei−1 extraction process and the Ri overwriting process, respectively.
Figure 7. (a) Flowchart of the proposed method. (b) Horizontal overlapping for each row of the EIA.
Figure 8. Weight of alpha blending by overlapping, using a window.
Figure 9. Alpha-blended signal in a shifted window function format.
Figure 10. Optical experimental setup and acquired EIAs: (a) optical setup; (b) EIA of the green car; (c) EIA of the yellow car.
Figure 11. Reconstructed images of the green car using (a) a rectangular window, (b) a triangular window, and (c) the proposed method at z = z0 (20 mm) and z = z0 + 10 mm.
Figure 12. Reconstructed images of the yellow car using (a) a rectangular window, (b) a triangular window, and (c) the proposed method at z = z0 (20 mm) and z = z0 + 10 mm.
Figure 13. Public light field data from the Heidelberg Collaboratory for Image Processing (HCI) used in the experiment: (a,b) EIA and its zoomed area of 9 × 9 elemental images from the 81 HCI image files; (c) a difference view of two neighboring elemental images.
Figure 14. Reconstructed images of the ‘bicycle’ data using (a) a rectangular window, (b) a triangular window, and (c) the proposed method.
Figure 15. Time–memory scatterplot.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
