Merge pull request #10 from EmbeddedLLM/szeyu-patch-1
Update README.md
tanpinsiang authored Jul 15, 2024
2 parents 43fb3ff + 9d7aff1 commit 7ed931f
Showing 1 changed file with 2 additions and 2 deletions.
4 changes: 2 additions & 2 deletions README.md

```diff
@@ -118,15 +118,15 @@ Run local LLMs on iGPU, APU and CPU (AMD , Intel, and Qualcomm (Coming Soon)). E
 1. `ellm_chatbot --port 7788 --host localhost --server_port <ellm_server_port> --server_host localhost`. **Note:** To find out more of the supported arguments. `ellm_chatbot --help`.
-![Chatbot Web UI](asset/ellm_chatbot_vid.webp)
+![asset/ellm_chatbot_vid.webp](asset/ellm_chatbot_vid.webp)
 ### Launch Model Management UI
 It is an interface that allows you to download and deploy OpenAI API compatible server. You can find out the disk space required to download the model in the UI.
 1. `ellm_modelui --port 6678`. **Note:** To find out more of the supported arguments. `ellm_modelui --help`.
-![Model Management UI](asset/ellm_modelui.png)
+![Model Management UI](asset/ellm_modelui.png)
 ## Compile OpenAI-API Compatible Server into Windows Executable
```
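For context on what the first hunk changes: Markdown image syntax is `![alt text](path)`, where only the path determines which file is embedded and the alt text is a fallback description. The edit above rewrites the alt text of the chatbot demo image while keeping the same `asset/ellm_chatbot_vid.webp` file. A minimal illustration (both lines render the same image, only the alt text differs):

```markdown
![Chatbot Web UI](asset/ellm_chatbot_vid.webp)
![asset/ellm_chatbot_vid.webp](asset/ellm_chatbot_vid.webp)
```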
