1st level project (MVP): A dictionary look-up app returning the word, its pronunciation (written form), its part of speech, and an audio file giving an auditory example of the pronunciation. Accessibility level considered adequate as determined by Axe accessibility testing.
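A minimal sketch of the MVP lookup flow, assuming Merriam-Webster's Collegiate v3 JSON endpoint (the hwi, fl, and prs fields follow the documented response format; the API key placeholder and the audio-subdirectory rule below are simplifications to treat as assumptions):

```typescript
// Sketch of the MVP lookup: word, written pronunciation, part of speech, audio URL.
// Assumes Merriam-Webster's Collegiate Dictionary v3 JSON endpoint; API_KEY is a placeholder.
const API_KEY = "YOUR_MW_API_KEY";

interface LookupResult {
  word: string;
  pronunciation: string;   // written form of the pronunciation
  partOfSpeech: string;
  audioUrl: string | null; // mp3 demonstrating the pronunciation
}

async function lookUpWord(word: string): Promise<LookupResult> {
  const res = await fetch(
    `https://www.dictionaryapi.com/api/v3/references/collegiate/json/${encodeURIComponent(word)}?key=${API_KEY}`
  );
  const entries = await res.json();
  const entry = entries[0]; // first sense; real code should handle "no match" suggestion lists
  const prs = entry.hwi?.prs?.[0];
  const audio = prs?.sound?.audio;
  // Audio files live under a subdirectory derived from the file name
  // (first letter in the common case; "bix", "gg", and digit-led names have special rules).
  const subdir = audio
    ? (/^bix/.test(audio) ? "bix" : /^gg/.test(audio) ? "gg" : /^[^a-z]/.test(audio) ? "number" : audio[0])
    : null;
  return {
    word: entry.hwi.hw.replace(/\*/g, ""), // headword; asterisks mark syllable breaks
    pronunciation: prs?.mw ?? "",
    partOfSpeech: entry.fl,
    audioUrl: audio
      ? `https://media.merriam-webster.com/audio/prons/en/us/mp3/${subdir}/${audio}.mp3`
      : null,
  };
}
```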
2nd level project (stretch goals): Detect hand position and show a constant live stream through the webcam, incorporating another technology, MediaPipe hand tracking, properly through React. Make that functionality capable of being activated or deactivated by the client. Show a mapping of the fingers so the user can easily relate hand positions to the sign language alphabet and the numbers 0-9. A sketch of the React integration follows below.
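The integration could look like the following React sketch, based on the JavaScript solution API in the MediaPipe documentation listed under Sources. The component name, CDN path, and option values are illustrative assumptions, not project decisions:

```typescript
// Sketch: MediaPipe Hands in React with a client-controlled on/off toggle.
// Uses the @mediapipe/hands and @mediapipe/camera_utils packages from the cited docs.
import React, { useEffect, useRef, useState } from "react";
import { Hands, Results } from "@mediapipe/hands";
import { Camera } from "@mediapipe/camera_utils";

export function HandTracker() {
  const videoRef = useRef<HTMLVideoElement>(null);
  const [tracking, setTracking] = useState(false);

  useEffect(() => {
    if (!tracking || !videoRef.current) return;
    const hands = new Hands({
      locateFile: (file) => `https://cdn.jsdelivr.net/npm/@mediapipe/hands/${file}`,
    });
    hands.setOptions({ maxNumHands: 1, minDetectionConfidence: 0.5, minTrackingConfidence: 0.5 });
    hands.onResults((results: Results) => {
      // results.multiHandLandmarks holds 21 normalized {x, y, z} points per hand;
      // a later step maps these to ASL letters/numbers (see the fingerpose sketch below).
      console.log(results.multiHandLandmarks);
    });
    const camera = new Camera(videoRef.current, {
      onFrame: async () => { await hands.send({ image: videoRef.current! }); },
      width: 640,
      height: 480,
    });
    camera.start();
    return () => { camera.stop(); hands.close(); }; // deactivate cleanly when toggled off
  }, [tracking]);

  return (
    <div>
      <button onClick={() => setTracking((t) => !t)}>
        {tracking ? "Stop hand tracking" : "Start hand tracking"}
      </button>
      <video ref={videoRef} autoPlay playsInline muted />
    </div>
  );
}
```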
3rd level project (serious stretch goals): Relate the live tracking to the American Sign Language alphabet, pull words from Merriam-Webster's Dictionary API, and provide functionality to sign out the words returned as audio and text (this can be scaled back to written input only and still provide a viable 1st level product (MVP) within the limited timeframe). Meant as the start of a much larger project and for submission to initiatives like the Google Developers Solution Challenge, which foster the growth of constructive projects with positive impacts.
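Relating landmarks to ASL letters could build on the fingerpose package listed under Sources. A minimal sketch, assuming fingerpose's GestureDescription/GestureEstimator API; the curl and direction weights here are illustrative placeholders, not a verified description of the ASL letter "A":

```typescript
// Sketch: classifying a hand pose as an ASL letter with the fingerpose package.
// The curl/direction weights are illustrative placeholders, not a tuned ASL "A".
import * as fp from "fingerpose";

const aSign = new fp.GestureDescription("A");
// Thumb roughly extended and pointing up alongside the fist (assumed shape).
aSign.addCurl(fp.Finger.Thumb, fp.FingerCurl.NoCurl, 1.0);
aSign.addDirection(fp.Finger.Thumb, fp.FingerDirection.VerticalUp, 0.8);
// All four fingers fully curled into the palm.
for (const finger of [fp.Finger.Index, fp.Finger.Middle, fp.Finger.Ring, fp.Finger.Pinky]) {
  aSign.addCurl(finger, fp.FingerCurl.FullCurl, 1.0);
}

const estimator = new fp.GestureEstimator([aSign]);

// landmarks: 21 [x, y, z] triples; MediaPipe's {x, y, z} objects convert with
// landmarks.map((l) => [l.x, l.y, l.z]). Note: older fingerpose versions name
// the score field "confidence" instead of "score".
export function classify(landmarks: number[][]): string | null {
  const { gestures } = estimator.estimate(landmarks, 8.5); // 8.5 = minimum match score
  if (gestures.length === 0) return null;
  gestures.sort((a, b) => b.score - a.score);
  return gestures[0].name; // best-matching letter
}
```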
- Create a database using PostgreSQL
- Create a model with a table for users using Java
- Create a model with a table for words using Java
- Create a model with a table for ASL letters A-Z and numbers 0-9 using Java, consuming the HandSpeak API
- Generate CRUD capability using React and Java (see the sketch after this list)
- Stretch goal: Implement a user-accessible word bank that updates based on database information
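For the React half of the CRUD work, a minimal sketch assuming the Java backend exposes a conventional REST resource at /api/words; the endpoint path and the Word shape are assumptions, not a fixed contract:

```typescript
// Sketch: React-side CRUD calls against an assumed Java REST resource at /api/words.
// The endpoint path and the Word shape are placeholders for whatever the backend defines.
export interface Word {
  id?: number;
  text: string;
  partOfSpeech: string;
  pronunciation: string;
}

const BASE = "/api/words";

export const createWord = (w: Word): Promise<Word> =>
  fetch(BASE, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(w),
  }).then((r) => r.json());

export const readWords = (): Promise<Word[]> =>
  fetch(BASE).then((r) => r.json());

export const updateWord = (w: Word): Promise<Word> =>
  fetch(`${BASE}/${w.id}`, {
    method: "PUT",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(w),
  }).then((r) => r.json());

export const deleteWord = (id: number): Promise<void> =>
  fetch(`${BASE}/${id}`, { method: "DELETE" }).then(() => undefined);
```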
Used primarily by:
A client who is hearing impaired: I want to type out words and relate them to signed words, and to convert words to sound and written form, so that I can communicate more efficiently with other people in a multitude of situations.
End user goal:
A dictionary app for word lookup, with results returned as written and vocalized characters/words; an app for tracking hand motion via webcam.
End business goal:
Provide camera access and integrate Merriam-Webster's Dictionary API.
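Camera access itself is the standard getUserMedia flow covered by the webcam tutorial listed under Sources; a minimal sketch:

```typescript
// Sketch: requesting webcam access and attaching the stream to a <video> element.
// Standard navigator.mediaDevices.getUserMedia; no MediaPipe involvement at this step.
async function startCamera(video: HTMLVideoElement): Promise<MediaStream> {
  const stream = await navigator.mediaDevices.getUserMedia({ video: true, audio: false });
  video.srcObject = stream;
  await video.play();
  return stream; // keep the stream so its tracks can be stopped later
}

function stopCamera(stream: MediaStream): void {
  // Stopping every track releases the camera and turns off the indicator light.
  stream.getTracks().forEach((track) => track.stop());
}
```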
Acceptance criteria:
Refer to the American Sign Language alphabet. Refer to Merriam-Webster's API. Return written characters. Return vocal versions of characters. Enable webcam hand tracking through the React app. Accessibility level considered adequate as determined by Axe accessibility testing.
*Level One project (MVP) and Level Two project (stretch goal) achieved. All acceptance criteria achieved.
Sources:
- MediaPipe documentation and package: https://google.github.io/mediapipe/getting_started/javascript.html
- Fingerpose documentation and package: https://openbase.com/js/fingerpose/documentation
- Connecting a webcam to the browser: https://www.kirupa.com/html5/accessing_your_webcam_in_html5.htm
- Creating files via JavaScript: https://code-boxx.com/create-save-files-javascript/
- Labelbox for image annotation, used in training and testing cases: https://app.labelbox.com/
- TensorFlow 2 Object Detection API tutorial: https://tensorflow-object-detection-api-tutorial.readthedocs.io/en/latest/index.html
- TensorFlow object detection overview: https://github.com/nicknochnack/TFODCourse