The proposed sign language recognition system combines the strengths of Computer Vision and Convolutional Neural Networks (CNNs). CNNs excel at image processing and are used to analyze the visual features of each sign, while computer vision techniques capture the temporal dependencies needed to follow the sequential nature of signing gestures. The system also produces voice-based output, giving spoken descriptions alongside the textual output, which helps users who have difficulty reading text. A minimal model sketch is shown after the feature list below.
NOTE: A webcam is required.
- Efficiency through CNN and Computer Vision
- Improved Precision
- Sign-to-Voice Translation
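As a concrete illustration of the CNN component described above, the sketch below builds a small Keras classifier. The 64x64 grayscale input shape, layer sizes, and the `build_model` helper are illustrative assumptions, not the exact architecture used in this project.

```python
from tensorflow.keras import layers, models

def build_model(num_classes):
    """Minimal CNN sketch for gesture classification (assumed 64x64 grayscale input)."""
    model = models.Sequential([
        layers.Input(shape=(64, 64, 1)),
        layers.Conv2D(32, (3, 3), activation="relu"),
        layers.MaxPooling2D((2, 2)),
        layers.Conv2D(64, (3, 3), activation="relu"),
        layers.MaxPooling2D((2, 2)),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dense(num_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```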
![Interface](https://github.com/user-attachments/assets/5a6a3a58-9f31-4584-8267-8a61d20e4455)
2. Datasets Used
![Dataset image](https://github.com/user-attachments/assets/fab8731e-e94f-4d51-a512-f25a989f9406)
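One possible way to load such a dataset is sketched below, assuming a folder-per-class layout under a hypothetical `data/train/` directory and the Keras `ImageDataGenerator` API; the actual loading code in this project may differ.

```python
from tensorflow.keras.preprocessing.image import ImageDataGenerator

# Assumed layout: data/train/<class_name>/*.jpg -- one folder per gesture class.
datagen = ImageDataGenerator(rescale=1.0 / 255, validation_split=0.2)

train_gen = datagen.flow_from_directory(
    "data/train",
    target_size=(64, 64),
    color_mode="grayscale",
    class_mode="categorical",
    subset="training",
)
val_gen = datagen.flow_from_directory(
    "data/train",
    target_size=(64, 64),
    color_mode="grayscale",
    class_mode="categorical",
    subset="validation",
)
```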
3. Data Preprocessing
![Data Collection](https://github.com/user-attachments/assets/7b559f5b-64ce-42c6-b584-bda14e4d7391)
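The data-collection image above suggests frames are captured from the webcam and cropped to a hand region before being saved. The sketch below shows one possible collection script; the `data/train/<label>` path, the fixed region of interest, and the 200-sample count are assumptions made for illustration.

```python
import cv2
import os

# Hypothetical collection script: saves cropped, preprocessed frames for one gesture class.
label = "love_hope"
out_dir = os.path.join("data", "train", label)
os.makedirs(out_dir, exist_ok=True)

cap = cv2.VideoCapture(0)
count = 0
while count < 200:                        # number of samples per class (assumed)
    ret, frame = cap.read()
    if not ret:
        break
    roi = frame[100:300, 100:300]         # fixed region of interest for the hand (assumed)
    gray = cv2.cvtColor(roi, cv2.COLOR_BGR2GRAY)
    resized = cv2.resize(gray, (64, 64))
    cv2.imwrite(os.path.join(out_dir, f"{count}.jpg"), resized)
    count += 1
    cv2.imshow("Collecting", roi)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```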
4. Output: the model identified the gesture as "Love, Hope" and played the corresponding voice output
![love hope](https://github.com/user-attachments/assets/1792c659-17d7-485e-9ebc-a27c3db33227)
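For the recognition-and-voice step, a minimal sketch is shown below. It assumes a saved model file named `sign_model.h5`, an illustrative label list, and `pyttsx3` for text-to-speech; these are placeholders to be swapped for the project's actual assets.

```python
import cv2
import numpy as np
import pyttsx3
from tensorflow.keras.models import load_model

# Hypothetical asset names -- replace with the trained model and its label order.
model = load_model("sign_model.h5")
labels = ["Love, Hope", "Hello", "Thanks"]

engine = pyttsx3.init()                  # text-to-speech engine for the voice output

cap = cv2.VideoCapture(0)                # webcam is required (see NOTE above)
ret, frame = cap.read()
cap.release()

if ret:
    roi = frame[100:300, 100:300]        # same region of interest as during collection
    gray = cv2.cvtColor(roi, cv2.COLOR_BGR2GRAY)
    batch = cv2.resize(gray, (64, 64)).reshape(1, 64, 64, 1) / 255.0

    # Classify the gesture, print it, and speak it aloud.
    gesture = labels[int(np.argmax(model.predict(batch)))]
    print("Detected:", gesture)
    engine.say(gesture)
    engine.runAndWait()
```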