Abstract
Assistive technology for people who are deaf or unable to speak has been studied intensively in recent years, and a promising direction is recognizing sign language gestures and converting them into text in real time. Various approaches have combined sign language detection with text-to-speech, broadening its usefulness for people with hearing or speech disabilities. With such a system, a signer can convey a message to someone who does not know sign language. In the proposed work, we apply Convolutional Neural Networks to sign language detection and examine how recognition performance changes as the depth of the network increases. The results are strong and indicate that the approach is suitable for integration into real-time applications.
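The depth comparison described above can be illustrated with a small, depth-configurable classifier. The following is a minimal sketch in Keras, assuming fixed-size RGB hand-gesture images and a 26-class sign alphabet; the block count, image size, filter widths, and class count are illustrative assumptions, not the exact architecture reported in the paper.

```python
# Minimal sketch of a depth-configurable CNN classifier for sign-language
# gestures. Assumes 64x64 RGB hand images and 26 sign classes; these values
# and the layer counts are illustrative, not the paper's exact architecture.
import tensorflow as tf
from tensorflow.keras import layers, models

def build_sign_cnn(num_conv_blocks: int = 3, num_classes: int = 26,
                   input_shape=(64, 64, 3)) -> tf.keras.Model:
    """Stack `num_conv_blocks` conv/pool blocks so network depth can be varied."""
    model = models.Sequential([layers.Input(shape=input_shape)])
    filters = 32
    for _ in range(num_conv_blocks):
        model.add(layers.Conv2D(filters, 3, padding="same", activation="relu"))
        model.add(layers.MaxPooling2D(2))
        filters = min(filters * 2, 256)  # widen with depth, capped at 256
    model.add(layers.Flatten())
    model.add(layers.Dense(128, activation="relu"))
    model.add(layers.Dropout(0.5))
    model.add(layers.Dense(num_classes, activation="softmax"))
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

# Comparing depths, e.g. 2 vs 4 convolutional blocks:
shallow = build_sign_cnn(num_conv_blocks=2)
deeper = build_sign_cnn(num_conv_blocks=4)
```

Training the shallow and deeper variants on the same gesture dataset and comparing their validation accuracy mirrors the depth study discussed in the abstract.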
Copyright information
© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG
About this paper
Cite this paper
Challa, N., Baishya, K., Rohatgi, V., Gupta, K. (2022). Real-Time Sign Language Detection Leveraging Real-Time Translation. In: Sugumaran, V., Upadhyay, D., Sharma, S. (eds) Advancements in Interdisciplinary Research. AIR 2022. Communications in Computer and Information Science, vol 1738. Springer, Cham. https://doi.org/10.1007/978-3-031-23724-9_32
DOI: https://doi.org/10.1007/978-3-031-23724-9_32
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-23723-2
Online ISBN: 978-3-031-23724-9
eBook Packages: Computer Science, Computer Science (R0)