diff --git a/presentation/src/index.md b/presentation/src/index.md
index 9e7a050..bebafa2 100644
--- a/presentation/src/index.md
+++ b/presentation/src/index.md
@@ -10,15 +10,37 @@ by [`Dboy Liao`](https://github.com/dboyliao)
## About Me
+- My real name is Yin-Chen Liao, but all my friends call me Dboy
+ - [GitHub](https://github.com/dboyliao)
+ - [LinkedIn](https://www.linkedin.com/in/yin-chen-liao-69967188/)
+- Core Developer of [`uTensor`](https://utensor.github.io/website/) project
+ - Author of [`utensor_cgen`](https://github.com/uTensor/utensor_cgen), code generator for `uTensor`
+ - Contributor to `uTensor` C++ runtime
+
+---
+
+## About Me
+
- Interested in
- Scientific Computing
- Optimization
- Machine Learning & Deep Learning
- Freelancer
- - get a bunch of time hacking
+ - Got a bunch of time for **hacking**
--
-- That's why I'm here!
+
+That's why I'm here!
+
+
+---
+
+## About The Talk
+
+- My one-month challenge
+ - Reproduce a deep learning paper with Julia
+ - Write my very first Julia package
+ - IN ONE MONTH
---
@@ -152,7 +174,9 @@ $\theta\_{y\_n} \cdot f\_{\phi}(x\_n) - \theta\_k\cdot f\_{\phi}(x\_n) \geq 1 -
--
-All we need is $\triangledown\_{\phi}\mathcal{L}^{meta}$!
+
+All we need is $\triangledown_{\phi}\mathcal{L}^{meta}$!
+
---
@@ -169,7 +193,7 @@ All we need is $\triangledown\_{\phi}\mathcal{L}^{meta}$!
- Short story: the solver is in fact differentiable
--
-- Long story: with **KKT conditions** and **implicit function theorem**, we can analytically compute the gradient as following
+- Long story: with **strong duality**, the **KKT conditions**, and the **implicit function theorem**, we can compute the gradient analytically, as follows
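+
+The mechanism in one line (my generic sketch of the implicit function theorem step, not the paper's exact expression): the KKT system implicitly defines the solution $\theta^\star$ as a function of $\phi$, so differentiating it gives
+
+$$
+g(\theta^\star, \phi) = 0
+\quad\Rightarrow\quad
+\frac{\partial\theta^\star}{\partial\phi} = -\Big(\frac{\partial g}{\partial\theta}\Big)^{-1}\frac{\partial g}{\partial\phi}
+$$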
@@ -187,17 +211,23 @@ name: impl
## Implementation
1. Data preprocessing, reformat normal dataset as $\mathcal{D}^{meta\-train} = \\{(D^{meta\-train}\_{train}, D^{meta\-train}\_{test})\\}$
-2. The $\text{QPSolver}$ and it's adjoint (with `Zygote.@adjoint`)
+2. Differentiable $\text{QPSolver}$ (with `Zygote.@adjoint`)
3. The embedding network, $f\_{\phi}$
4. The inner/outer training loop (sketched below)
+--
+
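+A sketch of the loop structure (everything here is a hypothetical toy: scalar embedding, closed-form "inner solver", plain gradient descent; not the package's code):
+
+```julia
+using Zygote
+
+# Toy stand-ins (hypothetical, not the actual package code):
+embed(ϕ, x) = tanh(ϕ * x)                                 # embedding f_ϕ
+fit_head(ϕ, D) = sum(x -> embed(ϕ, x), D) / length(D)     # inner "solver"
+meta_loss(ϕ, θ, D) = sum(x -> (embed(ϕ, x) - θ)^2, D)     # query-set loss
+
+ϕ, η = 0.1, 1e-2
+tasks = [([1.0, 2.0], [1.5]), ([0.5, 1.0], [0.8])]        # (D_train, D_test) pairs
+for (D_train, D_test) in tasks                            # outer loop over tasks
+    g, = gradient(ϕ) do p
+        θ = fit_head(p, D_train)                          # inner step: fit the head
+        meta_loss(p, θ, D_test)                           # evaluate on the query set
+    end
+    global ϕ -= η * g                                     # update the embedding
+end
+```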
---
## Data Preprocessing
- `data_loader.jl`
- Extending `sample` from `StatsBase`
-- multiple dispatching make it really easy to integrate with existing library
+- **Multiple dispatch** makes it really easy to extend an existing library (see the sketch below)
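+
+A minimal sketch of the idea (the `FewShotData` type and its fields are hypothetical stand-ins, not the actual `data_loader.jl`):
+
+```julia
+using StatsBase, Random
+
+# Hypothetical dataset type standing in for the real one
+struct FewShotData
+    classes::Vector{Int}                          # available class labels
+    by_class::Dict{Int,Vector{Vector{Float32}}}   # examples per class
+end
+
+# Add a method to `StatsBase.sample` for our own type: multiple
+# dispatch lets downstream code keep calling plain `sample`.
+function StatsBase.sample(rng::AbstractRNG, data::FewShotData, n_way::Int)
+    ways = sample(rng, data.classes, n_way; replace = false)
+    return Dict(c => data.by_class[c] for c in ways)
+end
+```
+
+`sample(MersenneTwister(42), data, 5)` then draws the classes for one task, right next to every existing `sample` method.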
@@ -211,7 +241,7 @@ name: impl
- `qp.jl`
- Solve the QP with `Convex.jl`, which has a nice API and complies with `MathOptInterface`
-- Define it's backward gradient with `Zygote.@adjoint` even the solver itself is not differentiable
+- Define the backward gradient with `Zygote.@adjoint` even though the solver itself is not differentiable (sketch below)
- Just like `torch.nn.Module.backward`
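+
+A minimal sketch of the trick on a toy QP with a closed-form solution (the `solve_qp` here is hypothetical, not the project's `qp.jl`); the forward pass could be any black-box solver, and the pullback is supplied by hand:
+
+```julia
+using Zygote, LinearAlgebra
+
+# Toy stand-in: minimize 0.5*z'Q*z - b'z, so the solution is Q \ b.
+# Pretend the body is an opaque Convex.jl call Zygote cannot trace.
+solve_qp(Q, b) = Q \ b
+
+# Hand-written adjoint: the pullback maps the upstream gradient Δ to
+# gradients w.r.t. Q and b (here derived from z = Q⁻¹b; in the paper,
+# from the KKT conditions via the implicit function theorem).
+Zygote.@adjoint function solve_qp(Q, b)
+    z = solve_qp(Q, b)
+    back(Δ) = (gb = Q' \ Δ; (-gb * z', gb))
+    return z, back
+end
+
+# Gradients now flow through the solver as if it were a layer:
+Q, b = [2.0 0.0; 0.0 2.0], [1.0, 1.0]
+gradient(b -> sum(solve_qp(Q, b)), b)    # ([0.5, 0.5],)
+```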
@@ -231,11 +261,15 @@ name: impl
---
+## Experiments
+
- Finally my training script is working
--
+
But the model is not working ...
+
---