
Minor Project Synopsis

on

Sign Language Detection using Action Recognition with Python (LSTM Deep Learning Model)

Submitted in Partial Fulfilment of the Requirements


For the Award of the Degree of

Bachelor of Technology

in

Electronics and Communication Engineering

Project Guide: Mr. Binay Kumar Singh

Submitted By:
Shreyank Pandey
Akshita Chawla
Dhwani Bali
Shivangi

Maharaja Agrasen Institute of Technology


Sector-22, Rohini, Delhi-110086.

Affiliated To

Guru Gobind Singh Indraprastha University, New Delhi-110078


(20XX-20XX)

Title
Sign Language Detection using Action Recognition with Python (LSTM Deep Learning Model)

Literature Review
Sign language recognition is an active research area in assistive technology, aiming to
bridge communication gaps for individuals with hearing impairments. Traditional computer
vision approaches relied on classifying static images; recent work instead applies temporal
modeling with deep learning, particularly LSTM networks, to capture the sequential patterns
of gestures. Pairing such models with MediaPipe keypoint extraction has been shown to
improve recognition accuracy. Open research gaps include limited datasets, real-time
multi-user recognition, and adapting models to different sign languages.
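
To make the temporal-modeling approach concrete, the following is a minimal sketch of the
kind of stacked-LSTM gesture classifier described above, written with TensorFlow/Keras. The
dimensions are illustrative assumptions rather than fixed choices: 30 frames per gesture
clip, 1662 flattened MediaPipe Holistic keypoint values per frame (pose 33 x 4, face
468 x 3, and two hands at 21 x 3 each), and a 10-sign vocabulary.

    from tensorflow.keras.models import Sequential
    from tensorflow.keras.layers import Input, LSTM, Dense

    SEQUENCE_LENGTH = 30   # assumed frames per gesture clip
    NUM_KEYPOINTS = 1662   # assumed flattened holistic keypoints per frame
    NUM_SIGNS = 10         # assumed size of the sign vocabulary

    # Two stacked LSTM layers model the frame-to-frame dynamics of a
    # gesture; the final softmax layer scores each sign in the vocabulary.
    model = Sequential([
        Input(shape=(SEQUENCE_LENGTH, NUM_KEYPOINTS)),
        LSTM(64, return_sequences=True),
        LSTM(128, return_sequences=False),
        Dense(64, activation="relu"),
        Dense(NUM_SIGNS, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="categorical_crossentropy",
                  metrics=["categorical_accuracy"])

    # Training on sequences X of shape (num_clips, 30, 1662) with
    # one-hot labels y of shape (num_clips, NUM_SIGNS):
    # model.fit(X, y, epochs=200)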
Objective
To develop a real-time sign language detection system that uses action recognition
techniques with Python and an LSTM deep learning model to improve gesture recognition
accuracy and speed.

Methodology
1. Data Acquisition: Capture sign language gesture clips with a webcam or use an existing
dataset.
2. Preprocessing: Extract frames and apply the MediaPipe Holistic model to obtain keypoints
(body, hands, face).
3. Feature Extraction: Store the sequential keypoint data for each gesture.
4. Model Training: Train an LSTM model with TensorFlow/Keras on the extracted sequences.
5. Real-Time Prediction: Run the trained LSTM model on a live video feed (a combined
sketch of steps 2, 3, and 5 follows this list).
6. Evaluation: Assess the model's performance in terms of accuracy and latency.
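
As referenced in step 5, the sketch below ties steps 2, 3, and 5 together using the
MediaPipe Holistic and OpenCV APIs named above. The label list SIGNS and the saved-model
filename sign_lstm.h5 are hypothetical placeholders; the network itself is the LSTM
sketched in the literature review.

    import cv2
    import numpy as np
    import mediapipe as mp
    from tensorflow.keras.models import load_model

    mp_holistic = mp.solutions.holistic

    def extract_keypoints(results):
        # Flatten holistic landmarks into one 1662-value vector per frame,
        # padding with zeros whenever a body part is not detected.
        pose = (np.array([[p.x, p.y, p.z, p.visibility]
                          for p in results.pose_landmarks.landmark]).flatten()
                if results.pose_landmarks else np.zeros(33 * 4))
        face = (np.array([[p.x, p.y, p.z]
                          for p in results.face_landmarks.landmark]).flatten()
                if results.face_landmarks else np.zeros(468 * 3))
        left = (np.array([[p.x, p.y, p.z]
                          for p in results.left_hand_landmarks.landmark]).flatten()
                if results.left_hand_landmarks else np.zeros(21 * 3))
        right = (np.array([[p.x, p.y, p.z]
                           for p in results.right_hand_landmarks.landmark]).flatten()
                 if results.right_hand_landmarks else np.zeros(21 * 3))
        return np.concatenate([pose, face, left, right])

    SIGNS = ["hello", "thanks", "iloveyou"]  # hypothetical labels; length must
                                             # match the model's output layer
    model = load_model("sign_lstm.h5")       # hypothetical trained model file
    sequence = []                            # sliding window of recent frames

    cap = cv2.VideoCapture(0)
    with mp_holistic.Holistic(min_detection_confidence=0.5,
                              min_tracking_confidence=0.5) as holistic:
        while cap.isOpened():
            ok, frame = cap.read()
            if not ok:
                break
            # MediaPipe expects RGB input; OpenCV captures BGR frames.
            results = holistic.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
            sequence.append(extract_keypoints(results))
            sequence = sequence[-30:]        # keep only the last 30 frames
            if len(sequence) == 30:
                probs = model.predict(np.expand_dims(sequence, axis=0),
                                      verbose=0)[0]
                cv2.putText(frame, SIGNS[int(np.argmax(probs))], (10, 30),
                            cv2.FONT_HERSHEY_SIMPLEX, 1, (0, 255, 0), 2)
            cv2.imshow("Sign Language Detection", frame)
            if cv2.waitKey(1) & 0xFF == ord("q"):
                break
    cap.release()
    cv2.destroyAllWindows()

For the evaluation in step 6, latency can be estimated by timing one pass of this loop,
while accuracy is measured offline on a held-out test split of the recorded sequences.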

Plan
Phase 1: Literature review and dataset preparation.
Phase 2: Preprocessing and feature extraction pipeline setup.
Phase 3: LSTM model training and optimization.
Phase 4: Integration with live video feed for real-time recognition.
Phase 5: Testing, validation, and final documentation.

References
1. MediaPipe Holistic Documentation: https://google.github.io/mediapipe/
2. TensorFlow LSTM Guide: https://www.tensorflow.org/guide/keras/rnn
3. Video Tutorial: https://www.youtube.com/watch?v=doDUihpj6ro
4. Research Article: Sign Language Detection Using Action Recognition with Python
(ResearchGate)
