Abstract
Successful identification of specularities in an image can be crucial for an artificial vision system when extracting the semantic content of an image or while interacting with the environment. We developed an algorithm that relies on scale- and rotation-invariant feature extraction techniques and uses motion cues to detect and localize specular surfaces. The change in feature vectors over time is used to quantify appearance distortion on specular surfaces, which has previously been shown to be a powerful indicator of specularity (Doerschner et al. in Curr Biol, 2011). The algorithm combines epipolar deviations (Swaminathan et al. in Lect Notes Comput Sci 2350:508–523, 2002) and appearance distortion, and it succeeds in localizing specular objects in computer-rendered and real scenes across a wide range of camera motions and speeds and a variety of object sizes and shapes; it also performs well under image noise and blur.
Notes
But see [19] who use only minimal assumptions about the scene, motion and 3D shape.
Also see [28] on specular shape perception in human observers.
When referring to matte or diffusely reflecting objects, we imply that these objects also have a 2D texture.
It may be that bi-modality does not work well as a global parameter; however, at a particular spatial scale it may continue to correctly predict surface reflectance.
At the fundamental matrix estimation stage, motion vectors from the entire image contribute.
Note that the aim in [1] was to predict human perception. Thus this measure predicts apparent or perceived shininess not physical reflectance.
SIFT features have also been used for sparse specular surface reconstruction [65].
This is crucial since appearance distortion critically depends on the change in feature vectors.
As discussed below: nonrigid and specular motion share similar features and may be confused by a classifier; see, for example [1].
Precision-recall curves are obtained by varying a specific threshold parameter, analogous to ROC curves.
Given the results in experiment 5.1, we did not expect differences in performance for rotation and zoom.
Interestingly, it has been shown that such objects tend to be perceived as less shiny by human observers [71].
We suggest below that by complementing our motion-based features with static cues to specularity, e.g., [19], simple 3D specular shapes may also be detected.
In [1] images to be classified as matte or shiny contained only a single object and a black background.
Compared to the video sequences in [1].
Specular highlights have been suggested as robust features for matching between 2D images and object’s 3D representation for pose estimation [73]. This suggests that highlights may also be useful for specular object detection.
References
Doerschner, K., Fleming, R., Yilmaz, O., Schrater, P., Hartung, B., Kersten, D.: Visual motion and the perception of surface material. Curr. Biol. 21(23), 2010–2016 (2011)
Swaminathan, R., Kang, S., Szeliski, R., Criminisi, A., Nayar, S.: On the motion and appearance of specularities in image sequences. Lect. Notes Comput. Sci. 2350, 508–523 (2002)
Horn, B.: Shape from shading: a method for obtaining the shape of a smooth opaque object from one view. Ph.D. thesis, Massachusetts Institute of Technology (1970)
Horn, B.: Obtaining shape from shading information. In: Winston, P.H. (ed.) The Psychology of Computer Vision. McGraw-Hill, New York (1975)
Koenderink, J., Van Doorn, A.: Photometric invariants related to solid shape. Optica Acta 27, 981–996 (1980)
Pentland, A.: Shape information from shading: a theory about human perception. Spatial Vis 4, 165–182 (1989)
Ihrke, I., Kutulakos, K., Magnor, M., Heidrich, W.: State of the art in transparent and specular object reconstruction. In: EUROGRAPHICS 2008—State of the Art Reports (STAR) (2008)
Wang, Z., Huang, X., Yang, R., Zhang, Y.: Measurement of mirror surfaces using specular reflection and analytical computation. Mach. Vis. Appl. 24, 289–304 (2013)
Saint-Pierre, C.-A., Boisvert, J., Grimard, G., Cheriet, F.: Detection and correction of specular reflections for automatic surgical tool segmentation in thoracoscopic images. Mach. Vis. Appl. 22, 171–180 (2011)
Izadi, S., Kim, D., Hilliges, O., Molyneaux, D., Newcombe, R., Kohli, P., Shotton, J., Hodges, S., Freeman, D., Davison, A., Fitzgibbon, A.: KinectFusion: real-time 3D reconstruction and interaction using a moving depth camera. In: Proceedings of the 24th Annual ACM Symposium on User Interface Software and Technology, UIST ’11, pp. 559–568. ACM, New York (2011)
Newcombe, R.A., Davison, A.J., Izadi, S., Kohli, P., Hilliges, O., Shotton, J., Molyneaux, D., Hodges, S., Kim, D., Fitzgibbon, A.: KinectFusion: real-time dense surface mapping and tracking. In: 2011 10th IEEE International Symposium on Mixed and Augmented Reality (ISMAR), pp. 127–136 (2011)
Dörschner, K.: Image Motion and the Appearance of Objects. McGraw-Hill, New York (1975)
Shafer, S.: Using color to separate reflection components. Color 10, 210–218 (1985)
Wolff, L., Boult, T.: Constraining object features using a polarization reflectance model. IEEE Trans. Pattern Anal. Mach. Intell. 13, 635–657 (1991)
Nayar, S., Fang, X., Boult, T.: Removal of specularities using color and polarization. In: 1993 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, Proceedings CVPR’93, pp. 583–590 (1993)
Oren, M., Nayar, S.: A theory of specular surface geometry. Int. J. Comput. Vis. 24, 105–124 (1997)
Roth, S., Black, M.: Specular flow and the recovery of surface structure. In: IEEE Computer Society Conference on Computer Vision and Pattern Recognition, Vol. 2, IEEE, pp. 1869–1876 (2006)
Nayar, S., Ikeuchi, K., Kanade, T.: Determining shape and reflectance of Lambertian, specular, and hybrid surfaces using extended sources. In: International Workshop on Industrial Applications of Machine Intelligence and Vision, IEEE, pp. 169–175
DelPozo, A., Savarese, S.: Detecting specular surfaces on natural images. In: Proceedings of IEEE Conference on Computer Vision and Pattern Recognition CVPR ’07, pp. 1–8
Doerschner, K., Kersten, D., Schrater, P.: Rapid classification of specular and diffuse reflection from image velocities. Pattern Recognit. 44, 1874–1884 (2011)
Ho, Y., Landy, M., Maloney, L.: How direction of illumination affects visually perceived surface roughness. J. Vis. 6, 8 (2006)
Doerschner, K., Boyaci, H., Maloney, L.: Estimating the glossiness transfer function induced by illumination change and testing its transitivity. J. Vis. 10(4), 1–9 (2010)
Doerschner, K., Maloney, L., Boyaci, H.: Perceived glossiness in high dynamic range scenes. J. Vis. 10 (2010)
te Pas, S., Pont, S.: A comparison of material and illumination discrimination performance for real rough, real smooth and computer generated smooth spheres. In: Proceedings of the 2nd Symposium on Applied Perception in Graphics and Visualization, pp. 75–81. ACM, New York (2005)
Nishida, S., Shinya, M.: Use of image-based information in judgments of surface-reflectance properties. J. Opt. Soc. Am. A Opt. Image Sci. Vis. 15, 2951–2965 (1998)
Dror, R., Adelson, E., Willsky, A.: Estimating surface reflectance properties from images under unknown illumination. In: Human Vision and Electronic Imaging VI, SPIE Photonics West, pp. 231–242 (2001)
Matusik, W., Pfister, H., Brand, M., McMillan, L.: A data-driven reflectance model. ACM Trans. Graph. 22, 759–769 (2003)
Fleming, R., Torralba, A., Adelson, E.: Specular reflections and the perception of shape. J. Vis. 4, 798–820 (2004)
Motoyoshi, I., Nishida, S., Sharan, L., Adelson, E.: Image statistics and the perception of surface qualities. Nature (London) 447, 206–209 (2007)
Vangorp, P., Laurijssen, J., Dutré, P.: The influence of shape on the perception of material reflectance. ACM Trans. Graph. (TOG) 26, 77-es (2007)
Olkkonen, M., Brainard, D.: Perceived glossiness and lightness under real-world illumination. J. Vis. 10, 5 (2010)
Kim, J., Anderson, B.: Image statistics and the perception of surface gloss and lightness. J. Vis. 10(9), 3 (2010)
Marlow, P., Kim, J., Anderson, B.: The role of brightness and orientation congruence in the perception of surface gloss. J. Vis. 11, 16 (2011)
Kim, J., Marlow, P., Anderson, B.: The perception of gloss depends on highlight congruence with surface shading. J. Vis. 11, 4 (2011)
Zaidi, Q.: Visual inferences of material changes: color as clue and distraction. Wiley Interdiscip. Rev. Cogn. Sci. 2(6), 686–700 (2011)
te Pas, S., Pont, S., van der Kooij, K.: Both the complexity of illumination and the presence of surrounding objects influence the perception of gloss. J. Vis. 10, 450–450 (2010)
Hartung, B., Kersten, D.: Distinguishing shiny from matte. J. Vis. 2, 551–551 (2002)
Sakano, Y., Ando, H.: Effects of self-motion on gloss perception. Perception 37, 77 (2008)
Wendt, G., Faul, F., Ekroll, V., Mausfeld, R.: Disparity, motion, and color information improve gloss constancy performance. J. Vis. 10, 7 (2010)
Blake, A.: Specular stereo. In: Proceedings of the International Joint Conference on Artificial Intelligence, pp. 973–976
Weyrich, T., Lawrence, J., Lensch, H., Rusinkiewicz, S., Zickler, T.: Principles of appearance acquisition and representation. Found. Trends Comput. Graph. Vis. 4, 75–191 (2009)
Klinker, G., Shafer, S., Kanade, T.: A physical approach to color image understanding. Int. J. Comput. Vis. 4, 7–38 (1990)
Bajcsy, R., Lee, S., Leonardis, A.: Detection of diffuse and specular interface reflections and inter-reflections by color image segmentation. Int. J. Comput. Vis. 17, 241–272 (1996)
Tan, R., Ikeuchi, K.: Separating reflection components of textured surfaces using a single image. IEEE Trans. Pattern Anal. Mach. Intell. 27(2), 178–193 (2005)
Mallick, S.P., Zickler, T., Belhumeur, P.N., Kriegman, D.J.: Specularity removal in images and videos: a pde approach. In: Computer Vision-ECCV 2006, pp. 550–563, Springer, Berlin (2006)
Angelopoulou, E.: Specular highlight detection based on the Fresnel reflection coefficient. In: IEEE 11th International Conference on Computer Vision, ICCV 2007, pp. 1–8 (2007)
Chung, Y., Chang, S., Cherng, S., Chen, S.: Dichromatic reflection separation from a single image. Lect. Notes Comput. Sci. 4679, 225 (2007)
Adato, Y., Ben-Shahar, O.: Specular flow and shape in one shot. In: BMVC, pp. 1–11
Szeliski, R.: Computer vision: algorithms and applications. Springer, New York (2010)
Adato, Y., Vasilyev, Y., Ben Shahar, O., Zickler, T.: Toward a theory of shape from specular flow. In: ICCV07, pp. 1–8
Adato, Y., Zickler, T., Ben-Shahar, O.: Toward robust estimation of specular flow. In: Proceedings of the British Machine Vision Conference, p. 1 (2010)
Vasilyev, Y., Adato, Y., Zickler, T., Ben-Shahar, O.: Dense specular shape from multiple specular flows. In: IEEE Conference on Computer Vision and Pattern Recognition, CVPR, pp. 1–8 (2008)
Oo, T., Kawasaki, H., Ohsawa, Y., Ikeuchi, K.: The separation of reflected and transparent layers from real-world image sequence. Mach. Vis. Appl. 18, 17–24 (2007)
Adato, Y., Vasilyev, Y., Zickler, T., Ben-Shahar, O.: Shape from specular flow. IEEE Trans. Pattern Anal. Mach. Intell. 32, 2054–2070 (2010)
Blake, A., Bulthoff, H.: Shape from specularities: computation and psychophysics. Philos. Trans. R. Soc. Lond. Ser. B Biol. Sci. 331, 237–252 (1991)
Lowe, D.: Distinctive image features from scale-invariant keypoints. Int. J. Comput. Vis. 60, 91–110 (2004)
Horn, B.K., Schunck, B.G.: Determining optical flow. Artif. Intell. 17, 185–203 (1981)
Adato, Y., Zickler, T., Ben-Shahar, O.: A polar representation of motion and implications for optical flow. In: IEEE Conference on IEEE Computer Vision and Pattern Recognition (CVPR), pp. 1145–1152 (2011)
Cordes, K., Muller, O., Rosenhahn, B., Ostermann, J.: Half-sift: high-accurate localized features for sift. In: IEEE Conference on Computer Vision and Pattern Recognition Workshops, 2009. CVPR Workshops. IEEE Computer Society, pp. 31–38 (2009)
Toews, M., Wells, W.: Sift-rank: ordinal description for invariant feature correspondence. In: IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2009, pp. 172–177 (2009)
Farid, H., Popescu, A.: Blind removal of lens distortions. J. Opt. Soc. Am. 18, 2072–2078 (2001)
Clark, A., Grant, R., Green, R.: Perspective Correction for improved visual registration using natural features. In: 23rd International Conference Image and Vision Computing New Zealand (IVCNZ 2008). IEEE Computer Press, Los Alamitos (2008)
Szeliski, R., Avidan, S., Anandan, P.: Layer extraction from multiple images containing reflections and transparency. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Vol. 1, pp. 246–253. IEEE, USA (2000)
Vedaldi, A., Fulkerson, B.: VLFeat: an open and portable library of computer vision algorithms. In: Proceedings of the International Conference on Multimedia, pp. 1469–1472. ACM (2010)
Sankaranarayanan, A.C., Veeraraghavan, A., Tuzel, O., Agrawal, A.: Specular surface reconstruction from sparse reflection correspondences. In: IEEE Conference on Computer Vision and Pattern Recognition (CVPR), IEEE, pp. 1245–1252 (2010)
Beis, J., Lowe, D.: Shape indexing using approximate nearest-neighbour search in high-dimensional spaces. In: Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, pp. 1000–1006 (1997)
Fischler, M., Bolles, R.: Random sample consensus: a paradigm for model fitting with applications to image analysis and automated cartography. Commun. ACM 24, 381–395 (1981)
Hartley, R., Gupta, R., Chang, T.: Stereo from uncalibrated cameras. In: Proceedings CVPR’92, IEEE Computer Society Conference on Computer Vision and Pattern Recognition, pp. 761–764 (1992)
Sampson, P.: Fitting conic sections to “very scattered” data: an iterative refinement of the Bookstein algorithm. Comput. Graph. Image Process. 18, 97–108 (1982)
Blinn, J.: Models of light reflection for computer synthesized pictures. In: ACM SIGGRAPH Computer Graphics, Vol. 11, ACM, pp. 192–198
Doerschner, K., Kersten, D., Schrater, P.: Analysis of shape-dependent specular motion—predicting shiny and matte appearance. J. Vis. 8, 594–594 (2008)
Gautama, T., Van Hulle, M.: A phase-based approach to the estimation of the optical flow field using spatial filtering. IEEE Trans. Neural Netw. 13, 1127–1136 (2002)
Netz, A., Osadchy, M.: Using specular highlights as pose invariant features for 2d–3d pose estimation. In: IEEE Conference on Computer Vision and Pattern Recognition (CVPR), IEEE, pp. 721–728 (2011)
Acknowledgments
This work was supported by a Marie Curie International Reintegration Grant (239494) within the Seventh European Community Framework Programme awarded to KD. KD has also been supported by a Turkish Academy of Sciences Young Scientist Award (TUBA GEBIP), a grant by the Scientific and Technological Research Council of Turkey (TUBITAK 1001, 112K069), and the EU Marie Curie Initial Training Network PRISM (FP7-PEOPLE-2012-ITN, Grant Agreement: 316746).
Appendix
1.1 Algorithm parameters
- SIFT peak threshold = 3
- SIFT edge threshold = 10
- SIFT feature elimination threshold = 5
- SIFT matching threshold = 2
- RANSAC iterations = 2,000
- Sampson error = 0.02
- Convolution kernel size = 60
- Convolution kernel, Gaussian standard deviation = 30
- Specular field threshold = \(1.5 \times 10^{-6}\)
- Connected component area threshold = 1,000
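As a toy illustration of how the Sampson error listed above is used as an inlier/outlier criterion, the following sketch computes the first-order (Sampson) epipolar error of point correspondences under a given fundamental matrix. The function name, the fundamental matrix (pure horizontal translation), and the example points are hypothetical, not taken from the paper:

```python
import numpy as np

def sampson_error(F, x1, x2):
    """First-order (Sampson) epipolar error of correspondences x1 <-> x2
    under fundamental matrix F. x1, x2 are (N, 2) pixel coordinates."""
    n = x1.shape[0]
    h1 = np.hstack([x1, np.ones((n, 1))])   # homogeneous coordinates
    h2 = np.hstack([x2, np.ones((n, 1))])
    Fx1 = h1 @ F.T                          # rows: (F @ x1_i)^T, epipolar lines in image 2
    Ftx2 = h2 @ F                           # rows: (F^T @ x2_i)^T, epipolar lines in image 1
    num = np.sum(h2 * Fx1, axis=1) ** 2     # (x2_i^T F x1_i)^2
    den = Fx1[:, 0]**2 + Fx1[:, 1]**2 + Ftx2[:, 0]**2 + Ftx2[:, 1]**2
    return num / den

# Pure horizontal camera translation: epipolar lines are horizontal scanlines.
F = np.array([[0., 0.,  0.],
              [0., 0., -1.],
              [0., 1.,  0.]])
x1 = np.array([[10., 20.], [30., 5.]])
rigid = sampson_error(F, x1, x1 + [5., 0.])     # matches obeying the epipolar geometry
deviant = sampson_error(F, x1, x1 + [5., 0.5])  # matches displaced off their epipolar lines
```

With the threshold above (0.02), the `rigid` matches would be accepted as consistent with the rigid-scene epipolar geometry, while the `deviant` matches would register as epipolar deviations of the kind the algorithm attributes to specular motion.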
1.2 Optic flow experiments
For the optical flow-based detection experiment, we kept the parameters the same as in [1], except that we used 5% of the optical flow vectors for the epipolar deviation computation. The Sampson error, kernel size, and standard deviation were identical to those used for the SIFT-based method.
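The smoothing-and-thresholding step that both pipelines share can be sketched as follows: a sparse per-pixel deviation map is blurred with a Gaussian kernel and thresholded into a binary "specular field" mask. This is a minimal numpy-only sketch; the function name is hypothetical, and the kernel size, sigma, and threshold are scaled down for a toy 40×40 map rather than the appendix values (60, 30, \(1.5 \times 10^{-6}\)), which apply to full-resolution images:

```python
import numpy as np

def specular_field(dev_map, ksize=15, sigma=5.0, thresh=1e-3):
    """Smooth a sparse per-pixel deviation map with a normalized Gaussian
    kernel (applied separably: rows, then columns) and threshold the
    result into a binary specular-field mask."""
    ax = np.arange(ksize) - (ksize - 1) / 2.0
    g = np.exp(-ax**2 / (2.0 * sigma**2))
    g /= g.sum()                            # normalize kernel weights
    smooth = np.apply_along_axis(lambda r: np.convolve(r, g, mode='same'), 1, dev_map)
    smooth = np.apply_along_axis(lambda c: np.convolve(c, g, mode='same'), 0, smooth)
    return smooth > thresh, smooth

# Toy deviation map: a small cluster of large epipolar deviations.
dev = np.zeros((40, 40))
dev[18:22, 18:22] = 1.0
mask, smooth = specular_field(dev)
```

In the full pipeline, connected components of the resulting mask smaller than the area threshold would additionally be discarded.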
About this article
Cite this article
Yilmaz, O., Doerschner, K. Detection and localization of specular surfaces using image motion cues. Machine Vision and Applications 25, 1333–1349 (2014). https://doi.org/10.1007/s00138-014-0610-9