Abstract
Methods for estimating worker states to support remote collaboration have been studied using wearable sensors, cameras, and microphones, but these approaches have drawbacks. This research presents an estimation method that combines a vibration sensor and a distance sensor. A prototype module equipped with the two sensors is tested to estimate four user states by building a self-organizing map (SOM) from the sensor data. The tests show that the prototype module estimates the user state by classifying it into one of three clusters, including “key typing” and “leaving a seat.”
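To illustrate the kind of processing the abstract describes, the following is a minimal sketch, not the authors' implementation: a small self-organizing map trained in plain NumPy on synthetic two-channel samples standing in for (vibration, distance) features, with a new sample mapped to its winning node. The feature values, grid size, and training schedule are illustrative assumptions, not taken from the paper.

# Minimal SOM sketch (assumption-laden, not the paper's method):
# cluster synthetic (vibration, distance) feature pairs on a small map.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for windowed sensor features:
# column 0 ~ vibration level, column 1 ~ distance to the user.
typing = rng.normal([0.8, 0.4], 0.05, size=(100, 2))  # high vibration, user near
absent = rng.normal([0.0, 1.0], 0.05, size=(100, 2))  # no vibration, user far
idle   = rng.normal([0.1, 0.4], 0.05, size=(100, 2))  # low vibration, user near
data = np.vstack([typing, absent, idle])

# SOM grid of 5x5 nodes, each holding a 2-D weight vector.
grid_w, grid_h, dim = 5, 5, 2
weights = rng.random((grid_w, grid_h, dim))
coords = np.stack(np.meshgrid(np.arange(grid_w), np.arange(grid_h), indexing="ij"), axis=-1)

def winner(x):
    """Grid index of the best-matching unit for sample x."""
    d = np.linalg.norm(weights - x, axis=-1)
    return np.unravel_index(np.argmin(d), d.shape)

def train(samples, epochs=20, lr0=0.5, sigma0=2.0):
    n_steps = epochs * len(samples)
    step = 0
    for _ in range(epochs):
        for x in rng.permutation(samples):
            lr = lr0 * (1 - step / n_steps)              # decaying learning rate
            sigma = sigma0 * (1 - step / n_steps) + 0.5  # decaying neighborhood radius
            bmu = np.array(winner(x))
            # Gaussian neighborhood around the best-matching unit.
            dist2 = np.sum((coords - bmu) ** 2, axis=-1)
            h = np.exp(-dist2 / (2 * sigma ** 2))[..., None]
            weights[:] = weights + lr * h * (x - weights)
            step += 1

train(data)
print("typing-like sample maps to node:", winner([0.8, 0.4]))
print("absent-like sample maps to node:", winner([0.0, 1.0]))

After training, samples from the different synthetic states land on separate regions of the map, which is the general mechanism by which an SOM can group sensor readings into clusters such as those reported in the paper.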
Acknowledgments
We would like to thank Editage (www.editage.jp) for English language editing.
Copyright information
© 2017 Springer International Publishing AG
About this paper
Cite this paper
Iso, K., Kobayashi, M., Yuizono, T. (2017). A Method for Estimating Worker States Using a Combination of Ambient Sensors for Remote Collaboration. In: Yoshino, T., Yuizono, T., Zurita, G., Vassileva, J. (eds) Collaboration Technologies and Social Computing. CollabTech 2017. Lecture Notes in Computer Science, vol. 10397. Springer, Cham. https://doi.org/10.1007/978-3-319-63088-5_3
DOI: https://doi.org/10.1007/978-3-319-63088-5_3
Publisher Name: Springer, Cham
Print ISBN: 978-3-319-63087-8
Online ISBN: 978-3-319-63088-5
eBook Packages: Computer Science, Computer Science (R0)