Major Project 'Hand Gesture Recognition System' Synopsis

The document presents a project synopsis for a Hand Gesture Recognition System aimed at translating sign language gestures into text and speech in real time, enhancing communication for individuals with hearing and speech impairments. It utilizes MediaPipe for hand tracking and OpenCV for video processing, offering a user-friendly and cost-effective solution without the need for specialized hardware. Future improvements may include expanding gesture vocabulary and enhancing model accuracy through deep learning techniques.


VISVESVARAYA TECHNOLOGICAL UNIVERSITY

“Jnana Sangama”, Belagavi-590018

A Project Phase I (BCS685) Synopsis Report


on

“Hand Gesture Recognition System”


Submitted in partial fulfillment of the requirements for
the award of

BACHELOR OF ENGINEERING DEGREE


In
COMPUTER SCIENCE & ENGINEERING
Submitted by
Ankith K N 4AD22CS011
Ashwin V 4AD22CS014
Chetana Guddapa Vadnikopa 4AD22CS021
Kalpana Lahari 4AD22CS041

Under the guidance of


Mr. Raghuram A S
Assistant Professor
Department of CSE

DEPARTMENT OF COMPUTER SCIENCE & ENGINEERING

ATME College of Engineering,


13th Kilometer, Mysore-Kanakapura-Bangalore Road
Mysore-570028
2024-25

ABSTRACT:
Sign language serves as a vital mode of communication for individuals with hearing and speech
impairments. However, a communication gap often exists between sign language users and those
unfamiliar with it. This project aims to bridge that gap by developing a Hand Gesture
Recognition System that translates sign language gestures into both text and speech output in
real time. The system leverages MediaPipe for efficient hand tracking and gesture
recognition, integrated with OpenCV for real-time video processing. Machine learning models
are employed to classify hand gestures corresponding to predefined sign language symbols. The
recognized gesture is then converted into text, which is displayed on the screen, and further
transformed into speech output using a text-to-speech (TTS) engine. The proposed system has
the potential to enhance accessibility and inclusivity by enabling seamless communication
between sign language users and non-signers. It can be applied in various domains, including
education, customer service, and assistive technology for differently-abled individuals. Future
enhancements may include expanding the gesture vocabulary and improving model accuracy
through deep learning techniques.
INTRODUCTION:
Sign language is a crucial communication tool for individuals with hearing and speech
impairments. However, the lack of widespread understanding of sign language creates a
communication barrier between signers and non-signers. This project, Hand Gesture Recognition
System for Sign Language Detection, aims to bridge this gap by developing a system that
translates hand gestures into text and speech output in real time.
Using MediaPipe for hand tracking and OpenCV for video processing, the system recognizes
predefined sign language gestures and converts them into meaningful text, which is displayed
on-screen. Additionally, a text-to-speech (TTS) engine generates voice output, enabling seamless
communication.
This project enhances accessibility for differently-abled individuals and has applications in
education, healthcare, and assistive technology. Future improvements may include expanding the
gesture vocabulary and integrating deep learning for higher accuracy.
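The capture-and-track loop described above can be sketched as follows. This is a minimal illustration only: `landmarks_to_features` is a hypothetical helper, not part of the synopsis, showing one way MediaPipe's 21 hand landmarks could be turned into a feature vector for a classifier.

```python
import numpy as np

def landmarks_to_features(landmarks):
    """Flatten 21 MediaPipe hand landmarks into a wrist-relative,
    scale-normalized feature vector (42 values) for classification."""
    pts = np.array([(lm.x, lm.y) for lm in landmarks], dtype=np.float32)
    pts = pts - pts[0]                    # translate so the wrist is the origin
    scale = float(np.abs(pts).max()) or 1.0
    return (pts / scale).flatten()        # roughly scale-invariant features

def main():
    # cv2 and mediapipe are imported here so the feature helper above
    # stays usable even when no camera is attached.
    import cv2
    import mediapipe as mp

    mp_hands = mp.solutions.hands
    mp_draw = mp.solutions.drawing_utils
    cap = cv2.VideoCapture(0)
    with mp_hands.Hands(max_num_hands=1, min_detection_confidence=0.7) as hands:
        while cap.isOpened():
            ok, frame = cap.read()
            if not ok:
                break
            # MediaPipe expects RGB input; OpenCV captures BGR frames
            results = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
            if results.multi_hand_landmarks:
                hand = results.multi_hand_landmarks[0]
                mp_draw.draw_landmarks(frame, hand, mp_hands.HAND_CONNECTIONS)
                feats = landmarks_to_features(hand.landmark)  # feed to a classifier
            cv2.imshow("Hand Tracking", frame)
            if cv2.waitKey(1) & 0xFF == ord("q"):
                break
    cap.release()
    cv2.destroyAllWindows()

if __name__ == "__main__":
    main()
```

Wrist-relative normalization makes the features insensitive to where the hand sits in the frame, which matters for a webcam-only setup.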

OBJECTIVE:
The primary objectives of the Hand Gesture Recognition System are:
1. Real-Time Gesture Recognition: Accurately detect and classify hand gestures using
MediaPipe and OpenCV for seamless sign language recognition.
2. Text Conversion: Convert recognized gestures into text output, displaying the detected
sign on-screen for easy interpretation.
3. Speech Output: Integrate a Text-to-Speech (TTS) engine to generate audio output,
enabling communication between signers and non-signers.
4. Improved Accessibility: Assist individuals with hearing and speech impairments by
providing an efficient tool to communicate with those unfamiliar with sign language.
5. User-Friendly Interface: Develop a simple and intuitive interface that ensures ease of
use for both sign language users and non-signers.
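Objectives 2 and 3 imply turning noisy per-frame predictions into stable text before it is displayed or spoken. A small debouncing sketch illustrates one way to do this; the class name and frame threshold are our illustrative choices, not part of the synopsis.

```python
class GestureDebouncer:
    """Emit a gesture label only after it has been the top prediction
    for `hold` consecutive frames, suppressing per-frame flicker."""

    def __init__(self, hold=15):
        self.hold = hold        # frames a label must persist before emission
        self.last = None        # most recent per-frame prediction
        self.count = 0          # how many consecutive frames it has held
        self.emitted = None     # last label actually emitted

    def update(self, label):
        if label == self.last:
            self.count += 1
        else:
            self.last, self.count = label, 1
        if self.count >= self.hold and label != self.emitted:
            self.emitted = label
            return label        # stable: safe to display and speak
        return None             # still flickering or already emitted
```

A label returned by `update` is stable enough to append to the on-screen text and pass to the TTS engine (for example, pyttsx3's `engine.say(...)` followed by `engine.runAndWait()`).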
PROBLEM STATEMENT:
Communication barriers between individuals with hearing and speech impairments and those
unfamiliar with sign language create significant challenges in daily interactions. Traditional
methods of communication, such as interpreters or written text, may not always be readily
available or convenient. As a result, individuals with disabilities often face difficulties in social
interactions.

This project aims to develop a Hand Gesture Recognition System that can automatically
recognize sign language gestures and convert them into text and speech output in real time.
By leveraging MediaPipe for hand tracking and OpenCV for video processing, the system
ensures efficient and accurate recognition of predefined gestures. The integration of a Text-to-
Speech (TTS) engine further enhances accessibility by providing voice output for the detected
signs.

By addressing this issue, the project seeks to bridge the communication gap, promote
inclusivity, and provide an assistive tool that empowers individuals with hearing and speech
impairments to communicate more effectively with the broader society.

EXISTING SYSTEM
Several existing systems aim to bridge the communication gap for individuals with hearing and
speech impairments. These include:
1. Glove-Based Gesture Recognition Systems: Some solutions use sensor-equipped gloves
to track hand movements and recognize sign language. While accurate, these systems can
be expensive, require specialized hardware, and may not be user-friendly.
2. Camera-Based Sign Language Recognition: Systems leveraging computer vision
techniques recognize gestures using cameras and image processing. While effective,
many rely on deep learning models that require large datasets, extensive training, and
high computational power.
3. Mobile Applications for Sign Language Translation: Some apps attempt to translate
sign language using smartphone cameras and AI-based models. However, they often have
limitations in real-time performance, gesture vocabulary, and adaptability to different
sign languages.
Despite their advantages, most existing systems face challenges such as high cost, hardware
dependency, limited accuracy, and restricted vocabulary. This project aims to overcome
these limitations by developing a real-time, camera-based hand gesture recognition system
using MediaPipe and OpenCV, providing a cost-effective and accessible solution with text
and speech output.

PROPOSED SYSTEM
The Hand Gesture Recognition System for Sign Language Detection is designed to provide a
real-time, cost-effective, and accessible solution for sign language translation. Unlike existing
systems that rely on expensive hardware or require extensive training data, this system uses
MediaPipe for hand tracking and OpenCV for video processing to accurately recognize
gestures without the need for additional sensors or gloves.
The proposed system captures hand gestures through a webcam, processes them using computer
vision techniques, and translates the recognized gestures into text output, which is displayed on
the screen. Additionally, a Text-to-Speech (TTS) engine is integrated to generate voice output,
enabling seamless communication between signers and non-signers.
Key features of the proposed system include:
• Real-time sign language recognition with high accuracy.
• Conversion of recognized gestures into both text and speech output.
• User-friendly, hardware-independent approach using only a webcam.
By leveraging efficient algorithms and lightweight frameworks, this system aims to enhance
accessibility, promote inclusivity, and provide an assistive tool for individuals with hearing
and speech impairments.
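As a concrete illustration of the classification step in the proposed system, a tiny nearest-neighbour classifier over landmark feature vectors is sketched below. This is a placeholder for whichever model the project finally trains; the class and the two example gestures are hypothetical.

```python
import numpy as np

class NearestNeighbourClassifier:
    """Minimal 1-NN gesture classifier over landmark feature vectors.
    A stand-in for the trained model the synopsis describes."""

    def __init__(self):
        self.X = None   # training feature vectors
        self.y = []     # corresponding gesture labels

    def fit(self, X, y):
        self.X = np.asarray(X, dtype=np.float32)
        self.y = list(y)
        return self

    def predict(self, x):
        # Return the label of the closest stored feature vector
        d = np.linalg.norm(self.X - np.asarray(x, dtype=np.float32), axis=1)
        return self.y[int(np.argmin(d))]
```

Because MediaPipe already reduces each frame to a small landmark vector, even such a lightweight model can run in real time on an ordinary laptop, which is the cost argument the proposed system makes.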
LITERATURE SURVEY: (Sample survey template below. Minimum 4 entries expected)

[1] Routing Attacks and Solutions in Mobile Ad hoc Networks


• Authors: Geng Peng and Zou Chuanyun.
• Date of Conference: 27-30 Nov. 2006.
• Date Added to IEEE Xplore: 10 April 2007.
⮚ A security routing mechanism based on common-neighbour listening is proposed. In this mechanism, a trust value and a trust threshold are defined to evaluate a node's credit standing and judge whether it is malicious. The common neighbour holding the highest trust value is chosen to listen to the network.
⮚ The mechanism reacts quickly and effectively protects the network from various kinds of attacks when malicious nodes appear in the ad hoc network.
⮚ Once a route is destroyed by a malicious node, the common neighbour searches for another route to the destination during a route discovery phase.

[2] Detection of Routing Misbehaviour in MANETs with 2ACK scheme


• Authors: Chinmaya Kumar Nayak and Satyabrata Das.
• Date Added to IEEE Xplore: January 2011.
⮚ The 2ACK scheme serves as an add-on technique for routing protocols to detect routing misbehaviour and mitigate its effect.
⮚ To reduce extra routing overhead, only a fraction of the received data packets are acknowledged in the 2ACK scheme.

[3] MIKBIT-Modified DSR for MANET


• Authors: Abhilasha Gupta, Raksha Upadhyay, and Uma Rathore Bhatt.
• Date of Conference: 7-8 Feb. 2014.
• Date Added to IEEE Xplore: 03 April 2014.
⮚ Both reactive and proactive routing protocols are used in Mobile ad hoc networks (MANETs).
⮚ Dynamic Source Routing (DSR) is one of the reactive routing protocols, but the high overhead of flooding during route creation is a limiting factor of the DSR protocol.
⮚ The paper aims to minimize the number of route requests (RREQs), a significant source of overhead for DSR.

[4] Trust-Based Optimal Routing in MANETs

• Authors: S. Neelakandan and J. Gokul Anand.


• Date of Conference: 23-24 March 2011.
• Date Added to IEEE Xplore: 02 May 2011.
⮚ MANETs do not have a fixed infrastructure, which makes them easy to build over an area. Mobile ad hoc networks (MANETs) have several advantages compared to traditional wireless networks.
⮚ These include ease of deployment, speed of deployment, and decreased dependency on a fixed infrastructure.
⮚ However, unique characteristics of MANET topology such as the open peer-to-peer architecture, shared wireless medium, and limited resources pose a number of non-trivial challenges to security design.

[5] Reverse Tracing Scheme to Prevent the Cooperative Attacks in MANETs

• Authors: Syeda Arshiya Sultana and Samreen Banukazi.


• Date of Conference: 20-21 Feb 2015.
• Date Added to IEEE Xplore: 02 April 2015.
⮚ A wireless network architecture that has attracted much attention recently is the Mobile ad hoc network (MANET), which consists of mobile hosts only.
⮚ Since there is no base station, each mobile host must act as a router to forward packets.
⮚ A blackhole attack is one in which a malevolent node transmits a malicious Route Reply (RREP) broadcast and then suddenly drops packets without forwarding them to the destination.
⮚ In a cooperative attack, the attackers cooperate with each other and work as a group to disrupt the target network. The paper addresses cooperative attacks using a reverse tracing scheme based on the Dynamic Source Routing (DSR) mechanism.

Hardware and Software Requirements


Hardware Requirements:
1. Computer/Laptop: A system with a reasonably capable processor (Intel i5 or higher, or an AMD Ryzen equivalent) for smooth real-time processing.
2. Webcam: A built-in or external webcam for capturing hand gestures in real time.
3. Speakers/Headphones: Required for the text-to-speech (TTS) output.

Software Requirements:
1. Python: Primary programming language for implementing the system.
2. OpenCV: For real-time video processing and image recognition.
3. MediaPipe: For efficient hand tracking and gesture recognition.
4. Text-to-Speech (TTS) Engine: Such as pyttsx3 or gTTS for generating voice output.
5. NumPy & Pandas: For handling numerical operations and data processing.
6. Frontend (TBD): Flutter (mobile app), Tkinter (desktop GUI), or Web (HTML/CSS/JS).
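The TTS step can be wrapped so the rest of the pipeline does not break when audio is unavailable. A minimal sketch using pyttsx3, one of the engines listed above; the fall-back-to-print behaviour is our assumption, not a stated requirement.

```python
def speak(text):
    """Speak `text` aloud via pyttsx3; if the library or an audio
    backend is unavailable, fall back to printing the text instead."""
    try:
        import pyttsx3
        engine = pyttsx3.init()   # may raise if no TTS driver is present
        engine.say(text)
        engine.runAndWait()       # blocks until speech finishes
        return "spoken"
    except Exception:
        print(text)               # degraded but still usable output
        return "printed"
```

Keeping the fallback in one place means the recognition and display code never needs to know whether audio output actually succeeded.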
REFERENCES

1. P.-C. Tsou, J.-M. Chang, H.-C. Chao, and J.-L. Chen, “CBDS: A cooperative bait
detection scheme to prevent malicious nodes in MANET based on hybrid defense
architecture,” in Proc. 2nd Int. Conf. Wireless Commun., VITAE, Chennai, India, Feb.
28–Mar. 3, 2011, pp. 1–5.
2. A. Venkatesh and M. S. Nagaraj, “Cloud Storage and Retrieval - A User Perspective,”
R.V. College of Engineering, Bangalore, India. Available at IEEE Xplore.
3. M. Conti, R. Di Pietro, L. V. Mancini, and A. Mei, “Distributed Detection of Clone
Attacks in Wireless Sensor Networks,” IEEE Trans. Dependable Secure Comput., vol. 8,
no. 5, pp. 685–698, 2011.
4. A. Gupta, R. Upadhyay, and U. R. Bhatt, “MIKBIT-Modified DSR for MANET,” in
Proc. 2014 Int. Conf. on Ad Hoc Networks, pp. 15–23.

Signature of Guide Signature of Coordinator


Guide Name Name of Coordinator
Designation Designation
