
Object Detection by 2-D Continuous Wavelet Transform

Vijaya Kumar Reddy, Kiran Kumar Siramoju, Pradip Sircar*
Department of Electrical Engineering
Indian Institute of Technology Kanpur
Kanpur 208016, Uttar Pradesh, India

*Corresponding author; Email address: sircar@iitk.ac.in, Phone: +91-512-2597063, Fax: +91-512-2590063.

Abstract— Two-dimensional (2-D) continuous wavelet analysis has not been used extensively for image processing with wavelets. It has been overshadowed by the 2-D discrete dyadic wavelet transform (DWT) owing to the compactness of the DWT and its excellent performance in coding, data compression, image reconstruction, etc. However, the 2-D DWT imposes restrictions on the scale and position parameters, and it does not detect all the features of an image unless properly tuned. The 2-D continuous wavelet transform (CWT), on the other hand, is more flexible and provides complete control over the scale and position parameters; it is therefore capable of extracting various features of an image that cannot be extracted by the DWT. It is shown that sharp edges can be extracted at lower scales of the 2-D CWT. In this paper, an algorithm is developed to detect focused objects in an image/video using the 2-D CWT. The first step of the algorithm is to extract the edges of the focused objects using the 2-D CWT; the detected object is then converted to a binary image. Some applications of the object detection method in image and video processing are mentioned.

Keywords—2-D continuous wavelet transform; isotropic 2-D wavelets; directional 2-D wavelets; image analysis; object detection

I. INTRODUCTION

Wavelet analysis is a particular time-scale or space-scale representation of signals which has found a wide range of applications in physics, signal and image processing, and applied mathematics in the last two decades. Most real-life signals are non-stationary in nature. They often contain transient components, sometimes very significant physically, and mostly cover a wide range of frequencies. Clearly, the standard Fourier analysis is inadequate for treating such signals, since it loses all information about the time localization of a given frequency component.

In wavelet analysis, the fundamental concept is to choose a particular mother function such that, by scaling and shifting the mother function, one can obtain all the desired functions; that is, the mother function, shifted and scaled, provides a complete set of basis functions. The outcome is a time-scale or space-scale representation, based on the wavelet transform:

$$s(x) \mapsto W_\psi(b,a) = \int_{-\infty}^{\infty} \psi_{b,a}^{*}(x)\, s(x)\, dx \qquad (1)$$

$$\psi_{b,a}(x) = a^{-1}\, \psi\!\left(a^{-1}(x-b)\right), \quad x \in \mathbb{R} \qquad (2)$$

with $L^1$-normalization, where $s(x)$ is a finite-energy signal, the analyzing wavelet $\psi(x)$ is a well-localized function in both the time/space domain and the frequency domain, $a > 0$ is a scale or dilation parameter, and $b \in \mathbb{R}$ is a translation parameter [1, 2].

In addition, $\psi$ must satisfy an admissibility condition, which in most cases can be reduced to the requirement that $\psi$ has zero mean; hence, it is sufficiently oscillating. Combining this condition with the localization properties of $\psi(x)$ and its Fourier transform $\Psi(k)$, one sees that the wavelet transform provides a local filtering both in time/space ($b$) and scale ($a$), which works at constant relative bandwidth, $\Delta k / k = \text{constant}$. Thus it is more efficient at high frequencies (small scales), in particular for the detection of singularities in the signal. Moreover, the transformation (1) may be inverted exactly and yields a reconstruction formula, which amounts to decomposing the signal in terms of dilated, translated copies $\psi_{b,a}$ of the mother wavelet $\psi$.

We should distinguish between two radically different versions of the wavelet transform, the continuous wavelet transform (CWT) and the discrete wavelet transform (DWT), although both are computed from the same formula:

$$W_\psi(b,a) = a^{-1} \int_{-\infty}^{\infty} \psi^{*}\!\left(a^{-1}(x-b)\right) s(x)\, dx, \quad a > 0,\ b \in \mathbb{R} \qquad (3)$$
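As a concrete illustration of (1)–(3), the following minimal Python sketch (our own, assuming NumPy is available; the 1-D Mexican hat mother wavelet and the chirp test signal are illustrative choices, not taken from the paper) evaluates the $L^1$-normalized CWT of a sampled signal by direct correlation:

```python
import numpy as np

def mexican_hat(x):
    # 1-D Mexican hat (negative second derivative of a Gaussian); zero mean,
    # so it satisfies the admissibility requirement discussed above.
    return (1.0 - x**2) * np.exp(-0.5 * x**2)

def cwt_1d(s, scales, dx=1.0):
    """L1-normalized CWT of eq. (3): W(b, a) = a^-1 * int psi*((x-b)/a) s(x) dx."""
    x = np.arange(len(s)) * dx
    W = np.empty((len(scales), len(s)))
    for i, a in enumerate(scales):
        for j, b in enumerate(x):
            psi = mexican_hat((x - b) / a) / a   # dilated, translated copy, a^-1 norm
            W[i, j] = np.sum(psi * s) * dx       # real wavelet, so psi* = psi
    return W

# Example: a chirp; the small scales respond to the high-frequency end.
t = np.linspace(0, 1, 512)
signal = np.sin(2 * np.pi * (5 + 40 * t) * t)
coeffs = cwt_1d(signal, scales=[0.005, 0.01, 0.02, 0.05], dx=t[1] - t[0])
print(coeffs.shape)   # (4, 512): one row of coefficients per scale
```

Note that every value of $b$ and $a$ may be evaluated this way, which is exactly the freedom the next paragraph contrasts with the dyadic grid of the DWT.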
In the CWT, all values of $a$ and $b$ are considered. The DWT, on the other hand, is based on a preselected uniform grid for $b$ and a dyadic grid for $a$, and is explicitly designed for generating orthonormal bases starting from multi-resolution analysis. This results in the general structure of decomposition and reconstruction of signals with the analysis filter bank and the synthesis filter bank, respectively, which leads to the perfect reconstruction quadrature mirror filter (PRQMF) bank approach for implementing the DWT. The CWT, in contrast, does not have this convenient structure of implementation. As a result, the CWT provides a highly redundant representation, where the reconstruction of the signal or image is not straightforward.

Incidentally, the PRQMF structure of the 1-D DWT is not extendable to the 2-D DWT, which should be the natural choice for analyzing an image. We often implement a 2-D DWT as two 1-D DWTs along the x-direction and the y-direction in cascade. But this approach cannot extract all the features of an image, as explained in the sequel. The CWT has all the flexibility and efficiency in extracting specific information from a signal or an image. In the 1-D case, the CWT has been used for the analysis of multi-component non-stationary signals [3, 4].

The CWT may also be extended to two or more dimensions, with exactly the same properties as in the 1-D case. An image is a two-dimensional signal, and the 2-D CWT can be developed for image analysis [5−7]. Compared to the 1-D case, the new fact here is the presence of a rotation degree of freedom. This is crucial for detecting the oriented features of an image, that is, the regions where the amplitude is regular in one direction and has a sharp variation in the perpendicular direction, for example, edges or contours. The CWT is a very efficient tool in this respect, provided one uses a directional wavelet, that is, a wavelet which has an intrinsic orientation.

In this paper, we present an algorithm based on the 2-D CWT to detect a focused object in an image/video by first extracting its sharp edges, and then applying standard image processing techniques for object detection. The detected object is converted to a binary image. There are several applications of the object detection method in image and video processing.
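The separable implementation mentioned above, two 1-D DWTs in cascade, can be made concrete with a short sketch (ours, assuming a single-level Haar filter pair, even image dimensions, and NumPy; subband naming conventions vary):

```python
import numpy as np

def haar_pair(v):
    """Single-level 1-D Haar DWT along the last axis: (approximation, detail)."""
    a = (v[..., 0::2] + v[..., 1::2]) / np.sqrt(2.0)
    d = (v[..., 0::2] - v[..., 1::2]) / np.sqrt(2.0)
    return a, d

def haar_dwt2(img):
    """2-D DWT as two 1-D DWTs in cascade: along rows, then along columns."""
    lo, hi = haar_pair(img)                                   # filter x-direction
    LL, LH = (c.swapaxes(0, 1) for c in haar_pair(lo.swapaxes(0, 1)))
    HL, HH = (c.swapaxes(0, 1) for c in haar_pair(hi.swapaxes(0, 1)))
    return LL, LH, HL, HH

img = np.kron(np.eye(4), np.ones((16, 16)))   # blocky diagonal test image
LL, LH, HL, HH = haar_dwt2(img)
# The three detail subbands respond only to horizontal, vertical and diagonal
# structure; intermediate orientations are split among them, which is why the
# dyadic 2-D DWT misses arbitrarily oriented features.
print(LL.shape, LH.shape, HL.shape, HH.shape)  # each (32, 32)
```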
II. TWO-DIMENSIONAL CWT

We now consider the two-dimensional CWT, which has become a major tool in image processing [5−7]. In this context, an image is a two-dimensional signal of finite energy in the space domain, represented by a complex-valued function $s(\vec{x}) \in L^2(\mathbb{R}^2, d^2\vec{x})$ such that

$$\|s\|^2 = \int_{\mathbb{R}^2} |s(\vec{x})|^2\, d^2\vec{x} < \infty \qquad (4)$$

Given a two-dimensional analyzing wavelet $\psi(\vec{x}) \in L^2(\mathbb{R}^2, d^2\vec{x})$, all the geometric operations that we want to apply to it are obtained by combining three elementary transformations of the plane, namely, rigid translations in the plane of the image, dilations or scaling (global zooming in and out), and rotations. The combined action of these three types of transformations is realized by the following unitary map in the space $L^2(\mathbb{R}^2, d^2\vec{x})$ of finite-energy signals:

$$\left[U(\vec{b}, a, \theta)\psi\right](\vec{x}) \equiv \psi_{\vec{b},a,\theta}(\vec{x}) = a^{-1}\psi\!\left(a^{-1} r_{-\theta}(\vec{x}-\vec{b})\right) \qquad (5)$$

where $\vec{b} \in \mathbb{R}^2$ is the translation, $a > 0$ is the dilation, $0 \le \theta \le 2\pi$ is the rotation parameter, and

$$r_\theta \equiv \begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix}$$

In terms of these actions, the basic formulas for the 2-D CWT read

$$W(\vec{b}, a, \theta) = a^{-1} \int \psi^{*}\!\left(a^{-1} r_{-\theta}(\vec{x}-\vec{b})\right) s(\vec{x})\, d^2\vec{x} \qquad (6)$$

$$W(\vec{b}, a, \theta) = a \int \Psi^{*}\!\left(a\, r_{-\theta}\vec{k}\right) e^{i\vec{b}\cdot\vec{k}}\, S(\vec{k})\, d^2\vec{k} \qquad (7)$$

where $S(\vec{k})$ is the Fourier transform of $s(\vec{x})$, and $\Psi(\vec{k})$ is the Fourier transform of $\psi(\vec{x})$.

We have to impose an admissibility condition on the wavelet in the spatial frequency domain, namely,

$$c_\psi \equiv (2\pi)^2 \int |\vec{k}|^{-2}\, |\Psi(\vec{k})|^2\, d^2\vec{k} < \infty \qquad (8)$$

with the assumption that $\psi(\vec{x}) \in L^1(\mathbb{R}^2, d^2\vec{x}) \cap L^2(\mathbb{R}^2, d^2\vec{x})$; in practice, (8) may again be replaced by the following necessary condition:

$$\Psi(\vec{0}) = 0 \;\Leftrightarrow\; \int \psi(\vec{x})\, d^2\vec{x} = 0 \qquad (9)$$

The important observation to make here is that all the formulas are almost identical in 1-D and in 2-D. As a consequence, the interpretation of the CWT as a singularity (edges, contours, corners, etc.) analyzer [8] still holds, and the mathematical properties of the 2-D transform strictly parallel those of its 1-D counterpart.

III. ANALYZING WAVELETS FOR 2-D CWT

A. Isotropic Wavelets

If one wants to perform a pointwise analysis, that is, when no oriented features are present or relevant in the image, one may choose an analyzing wavelet which is invariant under rotation. Then the $\theta$ dependence drops out, for instance, in the reconstruction formula. The 2-D Mexican hat is one such isotropic wavelet; it is simply the Laplacian of a Gaussian, given by

$$\psi_H(\vec{x}) = \left(2 - |\vec{x}|^2\right) \exp\!\left(-\tfrac{1}{2}|\vec{x}|^2\right) = -\nabla^2 \exp\!\left(-\tfrac{1}{2}|\vec{x}|^2\right) \qquad (10)$$

The Mexican hat wavelet in the spatial domain is shown in Fig. 1. In the frequency domain, its equation is given by

$$\Psi_H(\vec{k}) = 2\pi\, |\vec{k}|^2 \exp\!\left(-\tfrac{1}{2}|\vec{k}|^2\right) \qquad (11)$$

The Mexican hat wavelet in the frequency domain is shown in Fig. 2.

Fig. 1. Mexican Hat Wavelet in Spatial Domain

Fig. 2. Mexican Hat Wavelet in Frequency Domain at different Scales: (a) a = 4; (b) a = 1; (c) a = 1/2; (d) a = 1/4

Thus, the Mexican hat is a real, rotation-invariant wavelet. It behaves as a second-order operator in all directions; hence, it detects the singularities in all directions. The frequency spectrum suggests that at higher scales $a$ it passes the lower frequencies, whereas at lower scales it detects the higher frequencies.
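As an illustration of how (7) is typically evaluated on a sampled image, the sketch below (our own minimal implementation, assuming NumPy; the grid size and scale are arbitrary choices) applies the Mexican hat of (11) as a multiplier in the Fourier domain; the rotation parameter drops out because the wavelet is isotropic:

```python
import numpy as np

def mexican_hat_ft(kx, ky):
    """2-D Mexican hat in the frequency domain, eq. (11): 2*pi*|k|^2*exp(-|k|^2/2)."""
    k2 = kx**2 + ky**2
    return 2.0 * np.pi * k2 * np.exp(-0.5 * k2)

def cwt2_mexican_hat(img, a):
    """2-D CWT of eq. (7) for an isotropic wavelet:
    W(b, a) = a * IFFT[ Psi*(a k) . FFT[s] ](b)."""
    ny, nx = img.shape
    kx = 2.0 * np.pi * np.fft.fftfreq(nx)        # spatial frequencies (rad/sample)
    ky = 2.0 * np.pi * np.fft.fftfreq(ny)
    KX, KY = np.meshgrid(kx, ky)
    S = np.fft.fft2(img)
    Psi = mexican_hat_ft(a * KX, a * KY)         # real wavelet: Psi* = Psi
    return a * np.fft.ifft2(Psi * S).real

# At a small scale the response concentrates on the sharp edges of the square.
img = np.zeros((128, 128)); img[40:90, 40:90] = 1.0
W = cwt2_mexican_hat(img, a=0.5)
print(np.unravel_index(np.abs(W).argmax(), W.shape))  # a pixel near the boundary
```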
B. Directional Wavelets

When the aim is to detect oriented features (segments, vector field, etc.) in an image, or to perform directional filtering, one has to use a wavelet which is sensitive to directions. The best angular selectivity is obtained if $\psi$ is directional, which means that the effective support of its Fourier transform $\Psi$ is contained in a convex cone in the spatial frequency space $\vec{k}$, with apex at the origin. The wavelet acts as a filter in the $\vec{k}$-space. Suppose the signal $s(\vec{x})$ is strongly oriented, for instance, along the $x$-axis. Then its Fourier transform is strongly peaked along the $k_y$-axis. In order to detect such a signal with good directional selectivity, one needs a wavelet $\psi$ supported in a narrow cone in the $\vec{k}$-space [6]. Note that directional selectivity demands restricting the support of $\Psi$, not of $\psi$.

The 2-D Morlet is one such wavelet with an intrinsic direction, given by the equations

$$\psi_M(\vec{x}) = \exp\!\left(i\vec{k}_0 \cdot \vec{x}\right) \exp\!\left(-\tfrac{1}{2}|A\vec{x}|^2\right) \qquad (12)$$

$$\Psi_M(\vec{k}) = \sqrt{\varepsilon}\, \exp\!\left(-\tfrac{1}{2}\left|A^{-1}(\vec{k}-\vec{k}_0)\right|^2\right) \qquad (13)$$

The vector parameter $\vec{k}_0$ is the wave vector, and $A = \mathrm{diag}\!\left[\varepsilon^{-1/2}, 1\right]$, $\varepsilon \ge 1$, is a $2 \times 2$ anisotropy matrix. We have dropped the correction term necessary to enforce the admissibility condition, $\Psi_M(\vec{0}) = 0$, because it is numerically negligible for $|\vec{k}_0| \ge 5.6$. By taking $\varepsilon \ge 1$ and $\vec{k}_0 = (0, k_0)$, that is, perpendicular to the large axis of the ellipse, we get the simplified Morlet wavelet equation given below:

$$\psi_M(x, y) = \exp\!\left(i k_0 y\right) \exp\!\left(-\tfrac{1}{2}\left(\tfrac{x^2}{\varepsilon} + y^2\right)\right) \qquad (14)$$

Figs. 3(a)-(d) show the effect of variation of $\varepsilon$ on the Morlet wavelet in the spatial domain. For the anisotropy $\varepsilon \ge 1$ in the matrix $A$, the modulus becomes a Gaussian elongated in the $x$-direction, that is, its "footprint" is an ellipse with large axis along the $x$-axis. Clearly this wavelet will preferentially detect singularities in the $x$-direction, and its efficiency increases with $\varepsilon$.

Fig. 3. Effect of variation of ε; Morlet Wavelet in Spatial Domain, a = 1/4, θ = 0, k₀ = 6: (a) Magnitude, ε = 1; (b) Magnitude, ε = 5; (c) Magnitude, ε = 10; (d) Phase, ∀ε

The simplified equation for the 2-D Morlet wavelet in the frequency domain is

$$\Psi_M(k_x, k_y) = \exp\!\left(-\tfrac{1}{2}\left(\varepsilon k_x^2 + (k_y - k_0)^2\right)\right) \qquad (15)$$

In the Fourier space, the effective support of the function is an ellipse centered at $\vec{k}_0$ and elongated in the $k_y$-direction; thus, it is contained in a convex cone that becomes narrower as $\varepsilon$ increases.

C. Edge Detection Using Wavelets

A remarkable property of the wavelet transform is its ability to characterize the local irregularity of functions [8]. For an image $s(x,y)$, its edges correspond to singularities of $s(x,y)$, and thus are related to the local maxima of the wavelet transform modulus. Therefore, the wavelet transform is an effective method for edge detection.

Data from digital images are discrete, so to deal with such image data we need discrete filters. A continuous wavelet function is converted into a discrete edge detector as follows: suppose the support of a wavelet $\psi(x,y)$ is within an $n \times n$ grid ($n = 2l+1$, $l \in \mathbb{Z}^+$); then the discrete edge detector defined by the wavelet is [9]

$$g[i,j] = \int_{j-\frac{1}{2}}^{j+\frac{1}{2}} \int_{i-\frac{1}{2}}^{i+\frac{1}{2}} \psi(x,y)\, dx\, dy\,; \quad i = -l, \ldots, l;\ j = -l, \ldots, l \qquad (16)$$
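A direct way to realize (16) numerically is to integrate the wavelet over each unit cell of the grid. The sketch below is our illustration (the Mexican hat of (10) and midpoint subsampling of each cell are arbitrary choices):

```python
import numpy as np

def mexican_hat_2d(x, y):
    """2-D Mexican hat of eq. (10): (2 - |x|^2) exp(-|x|^2 / 2)."""
    r2 = x**2 + y**2
    return (2.0 - r2) * np.exp(-0.5 * r2)

def discrete_edge_detector(l, samples=8):
    """Eq. (16): g[i, j] = integral of psi over the unit cell centred at (i, j),
    approximated by averaging a samples-by-samples midpoint subgrid per cell."""
    n = 2 * l + 1
    g = np.zeros((n, n))
    offs = (np.arange(samples) + 0.5) / samples - 0.5   # midpoints in [-1/2, 1/2)
    for ii, i in enumerate(range(-l, l + 1)):
        for jj, j in enumerate(range(-l, l + 1)):
            X, Y = np.meshgrid(i + offs, j + offs)
            g[jj, ii] = mexican_hat_2d(X, Y).mean()     # cell average ~ integral
    return g

g = discrete_edge_detector(l=3)   # 7x7 filter, n = 2l + 1
print(g.shape, g.sum())           # residual sum -> 0 as l grows (zero-mean wavelet)
```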
IV. DETECTION OF FOCUSED OBJECT IN AN IMAGE

Edges in an image can be detected by applying the 2-D CWT as explained in the previous section, and sharp edges can be detected by choosing an appropriate scale. At large scale, the 2-D CWT detects the global shape of the objects, while smaller values of the scale reveal finer and finer details, in particular edges and contours. Here, we use the 2-D Mexican hat wavelet, as it is quite efficient for a pointwise analysis (contour detection, visual contrast, fractal analysis) [7]. It can be observed that at small scales the 2-D CWT detects only the sharp edges in an image. This is demonstrated in Fig. 4 by taking one square image which has sharp edges and another which has blurred edges. The edges of the blurred image extracted by Canny's method, which is based on the gradient operator [10, 11], are shown in Fig. 5. The general rules of edge detection and Canny's method are described in Appendixes A and B.

Fig. 4. 2-D CWT of Sharp Square and Blurred Square: (a) Square; (b) CWT of Square; (c) Blurred Square; (d) CWT of Blurred Square

Fig. 5. Edges of Blurred Square by Canny Method: (a) Default Threshold; (b) Increased Threshold (0.9)

It is thus possible to detect the edges of a focused object in an image by applying the 2-D CWT. In this paper, a method is proposed to extract the edges of the focused object, and that object is converted to a binary image. It is observed that a scale chosen between 1/8 and 1/16 gives the edges of the focused object using the Mexican hat wavelet.

A. Proposed Algorithm

1. Apply the 2-D CWT to the image at a scale preferably between 1/8 and 1/16: $s(\vec{x}) \mapsto W(x,y)$.

2. Find the maximum magnitude $M$ of the CWT coefficients, $\max_{(x,y)\in\Omega} |W(x,y)| = M$, as explained in (A-1). Set $K$, $0 < K < M$, as the edge threshold. If $|W(x,y)| \ge K$, then $(x,y)$ is called an edge point of $s(x,y)$. Here $K$ is calculated by the global threshold method [12].

3. Carry out morphological operations for connecting the edges [13].

4. Finally, the contour is filled, that is, the detected object is converted to a binary image.

The threshold is selected based on Otsu's method [12], which is presented in Appendix C. Edge discontinuities remaining after the edge detection are connected by mathematical morphological operations [13].

1) Contour Filling: After finding the edge map, the focused object is ready to be detected. In the segmentation process, this is accomplished by finding the first and last edge points in each row; the pixels in between are assigned to the focused object, and the same process is repeated for each column. This process can be modified to define the object candidates: for horizontal candidates, when we meet an edge point, we begin to draw a line until we meet the last edge point in that row; the same scheme is utilized for vertical candidates. The resulting image contains small uncovered regions and isolated noisy pixels, which are eliminated by applying the morphological operations [13].

B. Results

We have taken a few images in which one or more objects are focused and the rest are blurred. Consider the Player image of Fig. 6. Fig. 7(a) shows the 2-D CWT of the image at scale 1/8, in which we can observe the elimination of the blurred object. The binary image of the extracted object is shown in Fig. 7(c).

Fig. 6. Player

Fig. 7. Edges of Player by 2-D CWT and Canny Method: (a) 2-D CWT of Player; (b) Edges of Player; (c) Binary Image; (d) Edges by Canny Method at threshold 0.2

The same concept as explained above can also be applied to video to detect the focused objects, with the detected object converted to a binary image. This is useful for processing video, for example, to locate an object in a particular frame. Extracting the focused objects eliminates the unimportant information.
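The following sketch strings the four steps together. It is our own minimal rendering, assuming NumPy/SciPy and scikit-image are available and reusing the cwt2_mexican_hat sketch given earlier; the structuring elements and the Otsu call are illustrative choices, not values from the paper:

```python
import numpy as np
from scipy import ndimage
from skimage.filters import threshold_otsu

def detect_focused_object(img, a=1/8):
    # Step 1: 2-D CWT at a small scale (cwt2_mexican_hat as defined earlier).
    W = np.abs(cwt2_mexican_hat(img, a))
    # Step 2: global edge threshold K, 0 < K < M, chosen by Otsu's method.
    K = threshold_otsu(W)
    edges = W >= K
    # Step 3: morphological closing to connect broken edge segments.
    closed = ndimage.binary_closing(edges, structure=np.ones((3, 3)))
    # Step 4: contour filling -> binary image of the detected object;
    # small leftover regions are removed as isolated noise.
    filled = ndimage.binary_fill_holes(closed)
    return ndimage.binary_opening(filled, structure=np.ones((3, 3)))

# Usage: mask = detect_focused_object(gray_image.astype(float))
```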
V. CONCLUSION AND FUTURE WORK

In this paper, a method is proposed to extract the focused object in an image or video using the 2-D CWT. The 2-D Mexican hat wavelet, which is the Laplacian of a Gaussian, is chosen because it is good for pointwise analysis. We can eliminate the unfocused objects by choosing a proper scale which extracts the high-frequency content of the image or video. Weak edges can be eliminated by choosing a threshold, and the broken edges of the focused object are connected by the morphological operations. Converting the moving objects to binary form will be useful in video processing. This work can be extended to background subtraction in video, content-dependent coding [14], and the interpolation of lost frames in video [15].

APPENDIX

A. General Rules of Edge Detection

Edge detectors are modified gradient operators. Since an edge is characterized by a gradient of large magnitude, edge detectors are approximations of gradient operators. Because noise influences the accuracy of the computation of gradients, an edge detector is usually a combination of a smoothing filter and a gradient operator [10]. In image processing, the following two definitions of edges are used:

• Local maxima definition: For $s(\vec{x}) \in L^2(\mathbb{R}^2, d^2\vec{x})$, a point $(x_0, y_0)$ is called an edge point of the image $s(x,y)$ if $|\nabla s|$ has a local maximum at $(x_0, y_0)$, that is, $|\nabla s(x_0, y_0)| \ge |\nabla s(x, y)|$ in a neighborhood of $(x_0, y_0)$. An edge curve in an image is a continuous curve on which all points are edge points. The set of all edge points of $s(x,y)$ is called an edge image of $s(x,y)$.

• Threshold definition: For $s(\vec{x}) \in L^2(\mathbb{R}^2, d^2\vec{x})$, assume that in the domain $\Omega \subseteq \mathbb{R}^2$,

$$\max_{(x,y)\in\Omega} |\nabla s(x,y)| = M \qquad \text{(A-1)}$$

Choose $K$, $0 < K < M$, as the edge threshold. If $|\nabla s(x_0, y_0)| \ge K$, then $(x_0, y_0)$ is called an edge point.

B. Edge Detection by Canny's Method

The Canny edge detection algorithm has the following steps [11]:

1) Smoothing: Let $s[i,j]$ denote the image, and $g[i,j;\sigma]$ a Gaussian smoothing filter, where $\sigma$ is the spread of the Gaussian, which controls the degree of smoothing. The convolution of $s[i,j]$ with $g[i,j;\sigma]$ gives an array of smoothed data,

$$u[i,j] = \sum_{l,m \in \mathbb{Z}} s[l,m]\, g[i-l, j-m; \sigma] \qquad \text{(A-2)}$$

2) Gradient Calculation: The gradient of the smoothed array $u[i,j]$ is used to produce the $x$- and $y$-partial derivatives $p[i,j]$ and $q[i,j]$, respectively, as

$$p[i,j] = \left(u[i,j+1] - u[i,j] + u[i+1,j+1] - u[i+1,j]\right)/2 \qquad \text{(A-3)}$$

$$q[i,j] = \left(u[i,j] - u[i+1,j] + u[i,j+1] - u[i+1,j+1]\right)/2 \qquad \text{(A-4)}$$

where the $x$- and $y$-partial derivatives are computed by averaging the finite differences over a $2\times 2$ square. From the standard formulas for rectangular-to-polar conversion, the magnitude and orientation of the gradient can be computed as

$$v[i,j] = \sqrt{p^2[i,j] + q^2[i,j]} \qquad \text{(A-5)}$$

$$\varphi[i,j] = \arctan\left(q[i,j],\, p[i,j]\right) \qquad \text{(A-6)}$$

where the arctan function takes two arguments and generates an angle.

3) Non-Maxima Suppression: Given the magnitude image array $v[i,j]$, one can apply a thresholding operation as in the gradient-based method and end up with ridges of edge pixels. However, Canny takes a more sophisticated approach: an edge point is defined to be a point whose strength is locally maximum in the direction of the gradient. This is a stronger constraint to satisfy and is used to thin the ridges found by thresholding. This process, which results in one-pixel-wide ridges, is called non-maxima suppression. After non-maxima suppression, one ends up with an image $z[i,j]$ which is zero everywhere except at the local maxima points, where the value of the magnitude is preserved.

4) Thresholding: In spite of the smoothing performed as the first step of edge detection, the non-maxima-suppressed magnitude image $z[i,j]$ will contain many false edge fragments caused by noise and fine texture. The contrast of the false edge fragments is small, and they should be reduced. One typical procedure is to apply a threshold to $z[i,j]$: all values below the threshold are set to zero.
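A compact rendering of steps (A-2)–(A-6) is given below (our sketch, assuming NumPy/SciPy; σ and the test image are arbitrary, and non-maxima suppression is reduced to a comparison along four quantized gradient directions for brevity):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def canny_gradient(s, sigma=1.4):
    """Steps (A-2)-(A-6): Gaussian smoothing, 2x2-averaged finite differences,
    then gradient magnitude and orientation."""
    u = gaussian_filter(s, sigma)                                    # (A-2)
    p = (u[:-1, 1:] - u[:-1, :-1] + u[1:, 1:] - u[1:, :-1]) / 2.0    # (A-3)
    q = (u[:-1, :-1] - u[1:, :-1] + u[:-1, 1:] - u[1:, 1:]) / 2.0    # (A-4)
    v = np.hypot(p, q)                                               # (A-5)
    phi = np.arctan2(q, p)                                           # (A-6)
    return v, phi

def nonmax_suppress(v, phi):
    """Keep a pixel only if it is a local maximum along the (quantized)
    gradient direction; the result is zero except at ridge points."""
    z = np.zeros_like(v)
    d = np.round(phi / (np.pi / 4)).astype(int) % 4   # 4 direction classes
    steps = {0: (0, 1), 1: (-1, 1), 2: (-1, 0), 3: (-1, -1)}
    for i in range(1, v.shape[0] - 1):
        for j in range(1, v.shape[1] - 1):
            di, dj = steps[d[i, j]]
            if v[i, j] >= v[i + di, j + dj] and v[i, j] >= v[i - di, j - dj]:
                z[i, j] = v[i, j]
    return z

img = np.zeros((64, 64)); img[16:48, 16:48] = 1.0
v, phi = canny_gradient(img)
z = nonmax_suppress(v, phi)   # one-pixel-wide ridges along the square's edges
```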
After the application of the threshold to the non-maxima-suppressed magnitude, an array $e[i,j]$ containing the edges detected in the image $s[i,j]$ is obtained. However, choosing the proper threshold value is difficult in this method, and the procedure involves trial and error. Owing to this difficulty, the array $e[i,j]$ may still contain some false edges if the threshold is too low, or some edges may be missing if the threshold is too high.

A more effective thresholding scheme uses two thresholds. In this scheme, two threshold values $t_1$ and $t_2$, typically with $t_2 = 2t_1$, are applied to $z[i,j]$, producing two edge images $e_1[i,j]$ and $e_2[i,j]$. The image $e_2[i,j]$ has gaps in the contours but contains fewer false edges. In the double thresholding algorithm, the edges in $e_2[i,j]$ are linked into contours. When the algorithm reaches the end of a contour, it looks in $e_1[i,j]$ at the locations of the 8-neighbours for edges that can be linked to the contour, and it continues until the gap has been bridged to an edge in $e_2[i,j]$. The algorithm performs edge linking as a by-product of thresholding and resolves some of the problems related to choosing a threshold.

C. Global Thresholding by Otsu's Method

An image can be represented by a 2-D gray-level intensity function $f(x,y)$. The value of $f(x,y)$ is the gray-level, ranging from 0 to $L-1$, where $L$ is the number of distinct gray-levels. Let the number of pixels with gray-level $i$ be $n_i$, and let $n$ be the total number of pixels in the image; then the probability of occurrence of gray-level $i$ is defined as

$$p_i = \frac{n_i}{n}, \quad p_i \ge 0, \quad \sum_{i=0}^{L-1} p_i = 1 \qquad \text{(A-7)}$$

The average gray-level of the entire image is computed as

$$\mu_T = \sum_{i=0}^{L-1} i\, p_i \qquad \text{(A-8)}$$

In the case of single thresholding, the pixels of an image are divided into two classes, $C_1 = \{0, 1, \ldots, t\}$ and $C_2 = \{t+1, t+2, \ldots, L-1\}$, where $t$ is the threshold value. $C_1$ and $C_2$ naturally correspond to the background and the foreground (objects of interest). The probabilities of the two classes are

$$\xi_1 = \sum_{i=0}^{t} p_i, \qquad \xi_2 = \sum_{i=t+1}^{L-1} p_i \qquad \text{(A-9, 10)}$$

The mean gray-level values of the two classes can be computed as

$$\mu_1 = \sum_{i=0}^{t} \frac{i\, p_i}{\xi_1}, \qquad \mu_2 = \sum_{i=t+1}^{L-1} \frac{i\, p_i}{\xi_2} \qquad \text{(A-11, 12)}$$

The class variances are given by

$$\sigma_1^2 = \sum_{i=0}^{t} (i-\mu_1)^2 \frac{p_i}{\xi_1}, \qquad \sigma_2^2 = \sum_{i=t+1}^{L-1} (i-\mu_2)^2 \frac{p_i}{\xi_2} \qquad \text{(A-13, 14)}$$

Using discriminant analysis, Otsu showed that the optimal threshold $t_{\mathrm{opt}}$ can be determined by maximizing the between-class variance $\sigma_B^2$,

$$t_{\mathrm{opt}} = \arg\max_{0 \le t \le L-1} \sigma_B^2(t) \qquad \text{(A-15)}$$

where $\sigma_B^2$ is defined as

$$\sigma_B^2(t) = \xi_1 (\mu_1 - \mu_T)^2 + \xi_2 (\mu_2 - \mu_T)^2 \qquad \text{(A-16)}$$
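A minimal sketch of (A-7)–(A-16), assuming NumPy and 8-bit gray-levels (L = 256); the cumulative-sum formulation is an implementation convenience, not part of the derivation above:

```python
import numpy as np

def otsu_threshold(img, L=256):
    """Return the t maximizing the between-class variance of eq. (A-16)."""
    hist, _ = np.histogram(img, bins=L, range=(0, L))
    p = hist / hist.sum()                      # (A-7)
    i = np.arange(L)
    mu_T = np.sum(i * p)                       # (A-8)
    xi1 = np.cumsum(p)                         # (A-9): class C1 probability
    xi2 = 1.0 - xi1                            # (A-10)
    m1 = np.cumsum(i * p)                      # running first moment of C1
    with np.errstate(divide='ignore', invalid='ignore'):
        mu1 = m1 / xi1                         # (A-11)
        mu2 = (mu_T - m1) / xi2                # (A-12)
        sigma_B2 = xi1 * (mu1 - mu_T)**2 + xi2 * (mu2 - mu_T)**2   # (A-16)
    sigma_B2 = np.nan_to_num(sigma_B2)         # empty classes contribute nothing
    return int(np.argmax(sigma_B2))            # (A-15)

# Bimodal test data: the threshold lands between the two modes, near 120.
img = np.concatenate([np.random.normal(60, 10, 5000),
                      np.random.normal(180, 10, 5000)]).clip(0, 255)
print(otsu_threshold(img))
```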
REFERENCES

[1] I. Daubechies, "The wavelet transform, time-frequency localization and signal analysis," IEEE Trans. Inform. Theory, vol. 36, pp. 961-1005, 1990.
[2] I. Daubechies, Ten Lectures on Wavelets, Philadelphia, PA: SIAM, 1992.
[3] K. Prasad and P. Sircar, "Analysis of multicomponent non-stationary signals by continuous wavelet transform method," in Proc. IEEE Intl. Workshop on Intelligent Signal Processing (WISP 2005), Faro, Portugal, pp. 223-228, Sep. 1-3, 2005.
[4] P. Sircar, K. Prasad, and B. Harshavardhan, "Analysis of multicomponent speech-like signals by continuous wavelet transform-based technique," in Proc. 14th European Signal Process. Conf. (EUSIPCO 2006), Florence, Italy, Sep. 4-8, 2006.
[5] J.-P. Antoine, P. Carrette, R. Murenzi, and B. Piette, "Image analysis with two-dimensional continuous wavelet transform," Signal Processing, vol. 31, pp. 241-272, 1993.
[6] J.-P. Antoine and R. Murenzi, "Two-dimensional directional wavelets and the scale-angle representation," Signal Processing, vol. 52, pp. 259-281, 1996.
[7] J.-P. Antoine, R. Murenzi, and P. Vandergheynst, "Two-dimensional directional wavelets in image processing," Int. J. of Imaging Systems and Technology, vol. 7, pp. 152-165, 1996.
[8] S. Mallat and W. L. Hwang, "Singularity detection and processing with wavelets," IEEE Trans. Inform. Theory, vol. 38, no. 2, pp. 617-643, March 1992.
[9] R. M. Rao and A. S. Bopardikar, Wavelet Transforms: Introduction to Theory and Applications, Reading, MA: Addison-Wesley, 1998.
[10] D. Marr and E. Hildreth, "Theory of edge detection," Proc. Roy. Soc. London B, vol. 207, pp. 187-216, 1980.
[11] J. Canny, "A computational approach to edge detection," IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 8, pp. 679-698, 1986.
[12] N. Otsu, "A threshold selection method from gray-level histograms," IEEE Trans. Sys., Man, Cyber., vol. 9, pp. 62-66, 1979.
[13] R. van den Boomgaard and R. van Balen, "Methods for fast morphological image transforms using bitmapped binary images," Computer Vision, Graphics, and Image Processing: Graphical Models and Image Processing, vol. 54, no. 3, pp. 254-258, May 1992.
[14] A. Cavallaro, From Visual Information to Knowledge: Semantic Video Object Segmentation, Tracking and Description, Ph.D. dissertation, EPFL, Lausanne, 2002.
[15] A. Kaur, P. Sircar, and A. Banerjee, "Interpolation of lost frames of a video stream using object based motion estimation and compensation," in Proc. Annual IEEE India Conf. (INDICON 2008), pp. 40-45, 2008.