SIGN LANGUAGE
RECOGNITION
PRESENTED BY: ADNAN ANSARI
KOMAL PANDEY
SHOHEB MULLA
ADITYA SALVI
GUIDED BY: ASST. PROF. PRATIKSHA DESHMUKH
ABSTRACT
INTRODUCTION
LITERATURE REVIEW
SYSTEM ARCHITECTURE
HARDWARE / SOFTWARE
CONCLUSION
REFERENCES
INTRODUCTION

Sign language is a vital tool for communication among individuals with hearing and speech impairments. This project aims to build a system that uses computer vision and AI to recognize hand gestures in real time, translating them into text or speech. The goal is to promote inclusive, barrier-free communication.
ABSTRACT

This system uses computer vision and deep learning to recognize sign language gestures in real time. Hand gestures are captured via webcam, processed with a CNN, and translated into text or speech. It aims to improve communication accessibility for the hearing-impaired and promote inclusive technology.
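The capture-and-classify pipeline described above starts by preparing each webcam frame for the network. A minimal sketch of that preprocessing step is below; the 64×64 grayscale input size is an illustrative assumption, not something the slides specify, and in the real system OpenCV's `cv2.cvtColor` and `cv2.resize` would do this work.

```python
import numpy as np

def preprocess(frame, size=64):
    """Prepare one webcam frame for the CNN.

    `frame` is an H x W x 3 BGR image (as OpenCV delivers it);
    the size x size grayscale target is an assumed input shape.
    """
    # Grayscale via standard luminosity weights (BGR channel order).
    gray = frame @ np.array([0.114, 0.587, 0.299])
    # Nearest-neighbour resize to size x size.
    h, w = gray.shape
    rows = np.arange(size) * h // size
    cols = np.arange(size) * w // size
    small = gray[rows][:, cols]
    # Scale pixel values to [0, 1] before feeding the network.
    return (small / 255.0).astype(np.float32)
```

The pure-NumPy version just makes the steps explicit; swapping in the OpenCV calls changes nothing about the data the CNN sees.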
LITERATURE REVIEW

AUTHOR               TECHNIQUE USED                  SIGN LANGUAGE
STARNER & PENTLAND   WEBCAM-BASED GESTURE CAPTURE    ASL
REKHA ET AL.         DOUBLE-HANDED GESTURE DATASET   ISL
OYEDOTUN & KHASHMAN  CNN-BASED CLASSIFICATION        ASL
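The CNN-based classification approach credited to Oyedotun & Khashman can be illustrated with a minimal Keras model; the layer sizes and the 26-letter output here are illustrative assumptions, not details from their paper.

```python
import numpy as np
import tensorflow as tf

# Minimal CNN gesture classifier (layer sizes are illustrative).
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(64, 64, 1)),          # grayscale gesture image
    tf.keras.layers.Conv2D(16, 3, activation="relu"),  # learn local hand features
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(26, activation="softmax"),   # one class per ASL letter
])

# Forward pass on a dummy frame yields a probability over the 26 classes.
probs = model(np.zeros((1, 64, 64, 1), dtype="float32"))
```

A real model would add more convolutional blocks and be trained with `model.fit` on a labelled gesture dataset; this sketch only fixes the input/output contract.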
SYSTEM ARCHITECTURE
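The dataflow the abstract describes (webcam capture, preprocessing, CNN classification, then text or speech output) can be sketched as a simple loop. All four hooks below are hypothetical placeholders for the real components.

```python
def recognition_loop(frames, preprocess, classify, emit):
    """Run the recognition pipeline over a stream of frames.

    `frames` is any iterable of images (e.g. reads from
    cv2.VideoCapture), `preprocess` and `classify` wrap the CNN,
    and `emit` sends the predicted text to the screen or to TTS.
    All four are hypothetical hooks, not names from the slides.
    """
    last = None
    for frame in frames:
        label = classify(preprocess(frame))
        # Emit only when the predicted sign changes, so a held
        # gesture is not repeated on every frame.
        if label != last:
            emit(label)
            last = label
```

The change-detection guard is one simple way to turn per-frame predictions into a readable stream of signs; a production system might instead require a prediction to be stable for several frames.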
HARDWARE & SOFTWARE

Hardware Requirements:
- Webcam: for capturing hand gestures in real time
- Computer/Laptop: minimum 4 GB RAM, i3 processor or higher
- GPU (optional): for faster model training and real-time inference
- Microphone/Speaker: if integrating text-to-speech output
HARDWARE & SOFTWARE

Software Requirements:
- Operating System: Windows/Linux/macOS
- Programming Language: Python
- Libraries/Frameworks:
  - OpenCV (image processing)
  - TensorFlow/Keras (deep learning)
  - NumPy & Pandas (data handling)
  - pyttsx3 or gTTS (speech synthesis)
- IDE: Jupyter Notebook, VS Code, or Google Colab
- Dataset: ASL/ISL gesture dataset or custom hand gesture images
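The speech-synthesis step named above could be wired up as below. The class-index-to-letter mapping assumes 26 static letter classes (an assumption, since the dataset is unspecified); the `pyttsx3` calls shown are the library's standard offline usage.

```python
import string

def index_to_letter(idx):
    # Hypothetical mapping: CNN class index 0-25 -> ASL letter A-Z.
    return string.ascii_uppercase[idx]

def speak(text):
    """Voice the recognized text with pyttsx3 (offline engine).

    gTTS, also named on the slide, is the online alternative and
    would produce an audio file instead of speaking directly.
    """
    import pyttsx3
    engine = pyttsx3.init()
    engine.say(text)
    engine.runAndWait()
```

In the full system, `speak(index_to_letter(pred))` would run on each newly emitted prediction.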
FUTURE SCOPE

- Continuous Sign Recognition: interpret full sentences, not just isolated signs
- Multilingual Support: extend to ASL, ISL, BSL, and other sign languages
- Mobile & Wearable Integration: enable real-time recognition on smartphones and smart glasses
- Context-Aware Translation: use AI to understand gesture meaning based on conversation flow
- Smart Assistant Compatibility: integrate with devices for voice commands and accessibility features
CONCLUSION

- The system enables real-time sign language recognition using deep learning.
- It promotes accessibility and bridges communication gaps.
- A step toward inclusive technology that empowers the hearing-impaired community.
REFERENCES

1. M. Al-Qurishi, T. Khalid, and R. Souissi, “Deep Learning for Sign Language Recognition: Current Techniques, Benchmarks, and Open Issues,” IEEE Access, vol. 9, pp. 123456–123470, Sep. 2021.
2. S. Agre, S. Wasker, A. Vashishtha, H. Latkar, and A. Kanse, “Real-time Conversion of Sign Language to Text and Speech,” JETIR, vol. 10, no. 11, pp. 1–8, Nov. 2023.
3. M. M. Prasad, S. B, A. M, R. R, and S. Narayan, “Sign Language and Hand Gesture Recognition System,” IJCRT, vol. 11, no. 7, pp. 1–6, Jul. 2023.