
Create a model to detect hand-drawn "SOS" #7

Open
krook opened this issue Oct 7, 2019 · 9 comments
Labels
enhancement New feature or request help wanted Extra attention is needed

Comments

@krook
Member

krook commented Oct 7, 2019

Is your feature request related to a problem? Please describe.
We assume that the person in need will have a kit with the printed symbols available. We should improve the system to demonstrate how a person could hand-recreate the symbols, and in turn make the recognition more sensitive to those hand-drawn symbols.

@krook
Member Author

krook commented Oct 18, 2019

@krook krook added enhancement New feature or request help wanted Extra attention is needed labels Mar 28, 2020
@anushkrishnav

I can work on this

@krook
Member Author

krook commented Apr 26, 2021

Thanks @anushkrishnav

@sarrah-basta

Is this issue still open? I can see it is unassigned. My proposal for resolving it: if we already have models ready for the symbols from the kit, and a hand-written SOS is not one of them, it could be a good idea to use a pretrained model for handwritten letter recognition and integrate it with our present models.
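The integration proposed above could be sketched roughly as follows. This is a hypothetical outline, not the project's actual code: the two detector callables are placeholders for the existing printed-symbol model and a pretrained handwritten-letter recognizer.

```python
# Hypothetical sketch: prefer the existing printed-symbol detector,
# and fall back to a handwritten-letter recognizer when no kit
# symbol is found. Both callables are assumptions (stubs for the
# real models), passed in so the pipeline stays model-agnostic.

from typing import Callable, Optional

def detect_distress_signal(
    image,
    symbol_detector: Callable[[object], Optional[str]],
    letter_recognizer: Callable[[object], str],
) -> Optional[str]:
    """Return a signal label, preferring the printed-symbol model."""
    symbol = symbol_detector(image)
    if symbol is not None:
        return symbol
    # Fall back: read hand-written letters and look for "SOS",
    # tolerating spacing introduced by the recognizer.
    text = letter_recognizer(image).upper().replace(" ", "")
    return "SOS" if "SOS" in text else None
```

With stub models, `detect_distress_signal(img, lambda i: None, lambda i: "s o s")` would return `"SOS"`, while a frame the letter recognizer reads as unrelated text would return `None`.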

@krook
Member Author

krook commented Dec 21, 2021

Hi @sarrah-basta. This hasn't been worked on yet. Please feel free to take a shot. Thank you! And I agree, if we can reuse a model that would be ideal. Maybe the Model Asset Exchange has something to build upon.

@bhavyagoel

Hi @krook, hope you are doing well!
Since the issue is open and unassigned, I can work on this. I would propose simply recognizing the letters written on the ground, since someone in need might write other information as well, for example "INJURED".
Based on the recognized text, we can classify it.
For this we can use a pre-trained model; as you mentioned, the Model Asset Exchange seems a good choice. We can also use MediaPipe to detect whether a human is present, and in what state.
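Model Asset Exchange models are typically deployed as Docker containers that expose a REST prediction endpoint. A rough sketch of calling one from our pipeline might look like the following; the URL, port, and response shape are assumptions and should be checked against the specific model's Swagger docs once chosen.

```python
# Rough sketch of querying a locally deployed Model Asset Exchange
# OCR model. The endpoint URL and the shape of the JSON response
# ("text" as a list of lists of strings) are assumptions based on
# the usual MAX deployment pattern -- verify against the model docs.

MAX_OCR_URL = "http://localhost:5000/model/predict"  # assumed local deployment

def flatten_text(payload: dict) -> str:
    """Join the OCR output (assumed: a list of lists of strings)."""
    groups = payload.get("text", [])
    return " ".join(word for group in groups for word in group)

def recognize_text(image_path: str, url: str = MAX_OCR_URL) -> str:
    # Deferred import so flatten_text is usable without the dependency.
    import requests  # third-party HTTP client
    with open(image_path, "rb") as f:
        resp = requests.post(url, files={"image": f})
    resp.raise_for_status()
    return flatten_text(resp.json())
```

The recognized string could then be handed to whatever classification step we settle on, and a MediaPipe detection pass could run on the same frame to confirm a person is present.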

@sarrah-basta

@krook @bhavyagoel yes, as both of you have mentioned, we can use the Optical Character Recognition model from the Model Asset Exchange to detect the text. It can then be classified either manually or with a classification model such as a Naive Bayes classifier. Adding the Face Detection model from MediaPipe, as mentioned, would also be a huge plus.
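Before training a Naive Bayes classifier, the classification step could even start as simple fuzzy matching against a known distress vocabulary, which also absorbs some OCR noise. A minimal stdlib sketch, assuming an illustrative vocabulary and cutoff (both placeholders, not project decisions):

```python
# Minimal sketch of the classification step: map noisy OCR tokens to
# a known distress vocabulary with fuzzy matching (difflib, stdlib).
# The vocabulary and the 0.6 similarity cutoff are illustrative
# assumptions; a trained classifier (e.g. Naive Bayes) could replace
# this once labelled data exists.

import difflib

DISTRESS_VOCAB = ["SOS", "HELP", "INJURED", "FOOD", "WATER"]

def classify_message(ocr_text: str) -> list:
    """Return the distress keywords each OCR token most closely matches."""
    labels = []
    for token in ocr_text.upper().split():
        match = difflib.get_close_matches(token, DISTRESS_VOCAB, n=1, cutoff=0.6)
        if match:
            labels.append(match[0])
    return labels
```

For example, an OCR misread like `"S0S"` (zero for the letter O) is still close enough to match `"SOS"`, while unrelated tokens fall below the cutoff and are dropped.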

@krook
Member Author

krook commented Jan 6, 2022

Thanks for your interest, everyone. Since @sarrah-basta replied first to the latest request, why don't you take the first pass at it? If you need any feedback or review of the proposed approach, you can tag @bhavyagoel and me. Sound like a plan?

@sarrah-basta

Yep, sure. I'll start by finalising the models among the ones we proposed and look at our codebase to understand how they can be integrated. Thanks!

