From 0db8391d2bf06a0c32f2cc7b412ac89da16e86c0 Mon Sep 17 00:00:00 2001
From: JB
Date: Sat, 20 Apr 2024 15:46:32 +0900
Subject: [PATCH] Update README.md

---
 README.md | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/README.md b/README.md
index 6431ef1..391ce56 100644
--- a/README.md
+++ b/README.md
@@ -9,6 +9,7 @@
 This repository connects the LLM API to Slack. It currently supports implementations using OpenAI's ChatGPT and Google's Gemini model.
 The basic structure is straightforward.
 When a message arrives through Slack, we generate a response using the LLM's API.
+It has multimodal capabilities, enabling us to process and analyze images.
 
 All settings are set via environment variables. See [here](./app/config/constants.py).
 
@@ -44,7 +45,7 @@ uvicorn app.main:app --reload
 
 This command will run the application based on the app object in the main module of the app package.
 You can use the `--reload` option to automatically reload the application when file changes are detected.
-![image](https://github.com/jybaek/llm-with-slack/assets/10207709/8b2b6740-b2e8-4648-86e2-e247a203c542)
+![image](https://github.com/jybaek/llm-with-slack/assets/10207709/fb235e7e-c99b-412d-8d54-765f74950794)
 
 ## Installation
 1. Clone the repository: