
💡[Feature]: Trusted Execution and In-Memory ML Model Protection Framework for Face Authentication #1261

Closed
4 tasks done
J-B-Mugundh opened this issue Oct 6, 2024 · 6 comments · Fixed by #1272

Comments

@J-B-Mugundh
Contributor

Is there an existing issue for this?

  • I have searched the existing issues

Feature Description

Our proposed approach follows the steps:

  • Model Size Optimization: Optimize the model by capturing and storing compressed snapshots, focusing on essential components to reduce size.
  • Storage and Decryption: Store the encrypted model in the browser's cache and perform decryption within a Trusted Execution Environment (TEE).
  • In-Memory Execution: Execute the decrypted model in-memory, avoiding persistent storage of sensitive data.
  • Obfuscation: Use advanced obfuscation techniques to secure the model and decryption process, making reverse engineering more difficult.
  • Runtime Integrity Checks: Implement runtime checks with digital signatures or hash functions to continuously verify model authenticity and detect tampering.
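The runtime integrity check in the last step could be sketched as follows. This is a minimal, dependency-free Python sketch: the pinned digest, the `verify_and_load` name, and the dummy model bytes are illustrative assumptions, and a production system would pin a digest signed at build time and verify it inside the TEE rather than in ordinary application code.

```python
import hashlib

# Digest of the trusted model, published out of band (a hypothetical
# value derived here from dummy bytes; in practice it would be pinned
# at build time or carried in a signed manifest).
EXPECTED_SHA256 = hashlib.sha256(b"fake-model-weights").hexdigest()

def verify_and_load(model_bytes: bytes) -> bytes:
    """Runtime integrity check: refuse to use the model if its hash
    does not match the pinned digest (step 5 above)."""
    digest = hashlib.sha256(model_bytes).hexdigest()
    if digest != EXPECTED_SHA256:
        raise ValueError("model integrity check failed: possible tampering")
    # Hand the verified bytes back for in-memory use only (step 3);
    # nothing is written to persistent storage.
    return model_bytes

weights = verify_and_load(b"fake-model-weights")
```

Re-running the check periodically at runtime (not just at load time) is what makes tampering detectable while the model is in memory.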

Use Case

Consider a scenario where a web application uses a machine learning (ML) model for face authentication or sensitive data analysis. Storing and running the model locally could expose it to tampering or reverse engineering. By using a Trusted Execution and In-Memory Model Protection framework, the model is securely stored and decrypted in a Trusted Execution Environment (TEE) and executed in memory without persisting sensitive information. This ensures the model is protected even in a vulnerable browser environment.

Benefits

  • Increased Security: Sensitive ML models are encrypted and decrypted only within a secure TEE, minimizing exposure to attacks.
  • Tamper-Resistance: In-memory execution prevents unauthorized access to the decrypted model, as it never gets stored in an unprotected state.
  • Reduced Risk of Reverse Engineering: Obfuscation techniques protect the model and decryption logic, making it more difficult for attackers to reverse-engineer or manipulate the ML model.
  • Efficient Model Execution: In-memory execution provides faster processing times while maintaining security, improving performance in real-time applications.

Add ScreenShots

No response

Priority

High

Record

  • I have read the Contributing Guidelines
  • I'm a GSSOC'24 contributor
  • I want to work on this issue
@J-B-Mugundh J-B-Mugundh added the enhancement New feature or request label Oct 6, 2024

github-actions bot commented Oct 6, 2024

Thank you for creating this issue! 🎉 We'll look into it as soon as possible. In the meantime, please make sure to provide all the necessary details and context. If you have any questions, reach out on LinkedIn. Your contributions are highly appreciated! 😊

Note: I maintain the repo issues twice a day (ideally daily). If your issue goes stale for more than one day, you can tag and comment on this same issue.

You can also check our CONTRIBUTING.md for guidelines on contributing to this project.
We are here to help you on this journey of open source; feel free to tag me or book an appointment if you need any help.

@J-B-Mugundh
Contributor Author

@sanjay-kv Please assign me this issue!

@J-B-Mugundh
Contributor Author

J-B-Mugundh commented Oct 6, 2024

@sanjay-kv Hey bro! This would require a fair amount of work: creating a TEE redirection in Python, encrypting the model before it is sent for authentication, redirecting it to our local TEE, performing the decryption there, running the face authentication, and sending the results back. This is a real-world problem faced by UIDAI. Please consider escalating it to level 3 if you feel it's worth it.

This is roughly how the architecture would look (not exactly the same):
[architecture diagram]


github-actions bot commented Oct 7, 2024

Hello @J-B-Mugundh! Your issue #1261 has been closed. Thank you for your contribution!

@sanjay-kv
Member

Projects like adding a folder structure have a 200-point limit; I will update it to level 3. No worries.

@sanjay-kv sanjay-kv added level3 and removed level2 labels Oct 7, 2024
@J-B-Mugundh
Contributor Author

> projects like adding folder structure has 200 points limit, I will update to level3. nw

Oh okay, great! Thank you! Also, could you check whether #1260 is also considered like this? It's also a project added in the folder structure: an end-to-end file-locking mechanism with facial recognition!
