💡[Feature]: Trusted Execution and In-Memory ML Model Protection Framework for Face Authentication #1261
Comments
Thank you for creating this issue! 🎉 We'll look into it as soon as possible. In the meantime, please make sure to provide all the necessary details and context. If you have any questions, reach out on LinkedIn. Your contributions are highly appreciated! 😊 Note: I maintain the repo's issues twice a day, or at least once a day. If your issue goes stale for more than one day, you can tag and comment on this same issue. You can also check our CONTRIBUTING.md for guidelines on contributing to this project.
@sanjay-kv Please assign me this issue!
@sanjay-kv Hey bro! This would require a lot of work, like creating a TEE redirection in Python: encrypting the model before it's sent for authentication, redirecting it to our local TEE, doing the decryption there, running the face authentication, and sending back the results. This is a real-time issue faced by UIDAI. Please consider escalating it to level 3 if you feel it's worth it. This is roughly how the architecture would look (not exactly the same):
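For reference, a rough sketch of that round trip might look like the snippet below. The endpoint URL, the JSON response shape, and the use of `requests` plus Fernet (from the `cryptography` package) are assumptions standing in for real TEE primitives, not a definitive implementation:

```python
# Rough sketch of the "encrypt, redirect to local TEE, authenticate" round trip.
# Hypothetical pieces (not from the issue): the endpoint URL, the JSON response
# shape, and Fernet/requests as stand-ins for real TEE sealing and transport.
import requests
from cryptography.fernet import Fernet

def redirect_to_local_tee(model_bytes: bytes, key: bytes) -> bool:
    """Encrypt the model, hand it to the local TEE service, return its verdict."""
    sealed = Fernet(key).encrypt(model_bytes)      # encrypt before it leaves the app
    resp = requests.post(
        "http://127.0.0.1:8443/authenticate",      # hypothetical local TEE service
        data=sealed,
        timeout=5,
    )                                              # decryption + face matching happen inside the TEE
    resp.raise_for_status()
    return resp.json().get("authenticated", False)
```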
Hello @J-B-Mugundh! Your issue #1261 has been closed. Thank you for your contribution!
Projects like adding a folder structure have a 200-point limit; I will update it to level 3. nw
Oh okay, great! Thank you! Also, could you check whether #1260 is also considered like this? It's also a project added to the folder structure, for an end-to-end file-locking mechanism with facial recognition!
Is there an existing issue for this?
Feature Description
Our proposed approach follows these steps:
Use Case
Consider a scenario where a web application uses a machine learning (ML) model for face authentication or sensitive data analysis. Storing and running the model locally could expose it to tampering or reverse engineering. By using a Trusted Execution and In-Memory Model Protection framework, the model is securely stored and decrypted in a Trusted Execution Environment (TEE) and executed in memory without persisting sensitive information. This ensures the model is protected even in a vulnerable browser environment.
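A minimal sketch of this flow, under stated assumptions, is shown below. Fernet from the `cryptography` package stands in for the TEE's sealing key, and `load_model`/`run_inference` are hypothetical placeholders for the chosen face-authentication framework; the plaintext model only ever exists in an in-memory buffer:

```python
# Minimal sketch: decrypt the sealed model in memory and never persist the plaintext.
# Assumptions (placeholders, not from the issue): Fernet stands in for TEE sealing,
# and the commented-out load_model/run_inference calls are hypothetical.
import io
from cryptography.fernet import Fernet

def seal_model(model_bytes: bytes, key: bytes) -> bytes:
    """Encrypt the serialized model before it is stored or shipped."""
    return Fernet(key).encrypt(model_bytes)

def authenticate_in_tee(sealed_model: bytes, key: bytes, face_embedding) -> bool:
    """Decrypt and run the model entirely in memory inside the (simulated) TEE."""
    plaintext = Fernet(key).decrypt(sealed_model)
    model_buffer = io.BytesIO(plaintext)          # in-memory only, never written to disk
    # model = load_model(model_buffer)            # hypothetical loader for the chosen framework
    # return run_inference(model, face_embedding) # hypothetical inference call
    raise NotImplementedError("model loading/inference depends on the chosen framework")

if __name__ == "__main__":
    key = Fernet.generate_key()                   # in practice, sealed inside the TEE
    sealed = seal_model(b"<serialized model bytes>", key)
    # authenticate_in_tee(sealed, key, face_embedding=...)
```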
Benefits
Add ScreenShots
No response
Priority
High
Record