Welcome to the comprehensive guide for configuring your application's environment with the .env
file. This document is your one-stop resource for understanding and customizing the environment variables that will shape your application's behavior in different contexts.
While the default settings provide a solid foundation for a standard docker
installation, delving into this guide will unveil the full potential of LibreChat. This guide empowers you to tailor LibreChat to your precise needs. Discover how to adjust language model availability, integrate social logins, manage the automatic moderation system, and much more. It's all about giving you the control to fine-tune LibreChat for an optimal user experience.
- Here you can change the app title and footer
- Uncomment to add a custom footer.
- Uncomment and make empty "" to remove the footer.
APP_TITLE=LibreChat
CUSTOM_FOOTER="My custom footer"
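For example, to remove the footer entirely (as described above), you would set the value to an empty string:
CUSTOM_FOOTER=""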
LibreChat has built-in central logging.
- Debug logging is enabled by default and crucial for development.
- To report issues, reproduce the error and submit logs from ./api/logs/debug-%DATE%.log at LibreChat GitHub Issues.
- Error logs are stored in the same location.
- Debug logs are kept active by default; disable them by setting DEBUG_LOGGING=FALSE in your environment.
- For more information about this feature, read our docs: https://docs.librechat.ai/features/logging_system.html
DEBUG_LOGGING=TRUE
- Enable verbose server output in the console with DEBUG_CONSOLE=TRUE.
DEBUG_CONSOLE=TRUE
This is not recommended, however, as the output can be quite verbose, so it's disabled by default.
- The server will listen to localhost:3080 by default. You can change the target IP as you want. If you want to make this server available externally, for example to share the server with others or expose this from a Docker container, set host to 0.0.0.0 or your external IP interface.
Tip: Setting host to 0.0.0.0 means listening on all interfaces; it's not a real IP address.
- Use localhost:port rather than 0.0.0.0:port to access the server.
HOST=localhost
PORT=3080
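For example, to make the server reachable from outside the machine or from a Docker container (as described above), you might set:
HOST=0.0.0.0
PORT=3080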
- Change this to your MongoDB URI if different. You should also add LibreChat (or your own APP_TITLE) as the database name in the URI. For example:
  - If you are using docker, the URI format is mongodb://<ip>:<port>/<database>. Your MONGO_URI should look like this: mongodb://127.0.0.1:27018/LibreChat
  - If you are using an online db, the URI format is mongodb+srv://<username>:<password>@<host>/<database>?<options>. Your MONGO_URI should look like this: mongodb+srv://username:[email protected]/LibreChat?retryWrites=true (retryWrites=true is the only option you need when using the online db)
- Instructions on how to create an online MongoDB database (useful for use without docker):
- Securely access your docker MongoDB database:
MONGO_URI=mongodb://127.0.0.1:27018/LibreChat
- To use LibreChat locally, set DOMAIN_CLIENT and DOMAIN_SERVER to http://localhost:3080 (3080 being the port previously configured).
- When deploying LibreChat to a custom domain, set DOMAIN_CLIENT and DOMAIN_SERVER to your deployed URL, e.g. https://librechat.example.com
DOMAIN_CLIENT=http://localhost:3080
DOMAIN_SERVER=http://localhost:3080
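For a custom-domain deployment, both values would simply point at your deployed URL, for example:
DOMAIN_CLIENT=https://librechat.example.com
DOMAIN_SERVER=https://librechat.example.com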
UID and GID are numbers assigned by Linux to each user and group on the system. If you have permission problems, set the UID and GID here to those of the user running the docker compose command. The applications in the container will run with this UID/GID.
UID=1000
GID=1000
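To look up these values, you can run the standard id utility as the user that runs docker compose, for example:
id -u   # prints that user's UID, e.g. 1000
id -g   # prints that user's GID, e.g. 1000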
In this section you can configure the endpoints and models selection, their API keys, and the proxy and reverse proxy settings for the endpoints that support it.
- Uncomment ENDPOINTS to customize the available endpoints in LibreChat
- PROXY is to be used by all endpoints (leave blank by default)
ENDPOINTS=openAI,azureOpenAI,bingAI,chatGPTBrowser,google,gptPlugins,anthropic
PROXY=
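For example, to expose only the OpenAI and Anthropic endpoints, you could narrow the list like this:
ENDPOINTS=openAI,anthropic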
see: Anthropic Endpoint
- You can request an access key from https://console.anthropic.com/
- Leave ANTHROPIC_API_KEY= blank to disable this endpoint
- Set ANTHROPIC_API_KEY= to "user_provided" to allow users to provide their own API key from the WebUI
- If you have access to a reverse proxy for Anthropic, you can set it with ANTHROPIC_REVERSE_PROXY=; leave it blank or comment it out to use the default base URL
ANTHROPIC_API_KEY=user_provided
ANTHROPIC_MODELS=claude-1,claude-instant-1,claude-2
ANTHROPIC_REVERSE_PROXY=
see: Azure OpenAI
- To use Azure with this project, set the following variables. These will be used to build the API URL.
AZURE_API_KEY=
AZURE_OPENAI_API_INSTANCE_NAME=
AZURE_OPENAI_API_DEPLOYMENT_NAME=
AZURE_OPENAI_API_VERSION=
AZURE_OPENAI_API_COMPLETIONS_DEPLOYMENT_NAME=
AZURE_OPENAI_API_EMBEDDINGS_DEPLOYMENT_NAME=
Note: As of 2023-11-10, the Azure API only allows one model per deployment.
- Chat completion:
https://{AZURE_OPENAI_API_INSTANCE_NAME}.openai.azure.com/openai/deployments/{AZURE_OPENAI_API_DEPLOYMENT_NAME}/chat/completions?api-version={AZURE_OPENAI_API_VERSION}
- You should also consider changing the OPENAI_MODELS variable to the models available in your instance/deployment.
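For illustration only, assuming a hypothetical instance named my-instance, a deployment named gpt-35-turbo, and API version 2023-07-01-preview, the chat completion URL above would resolve to:
https://my-instance.openai.azure.com/openai/deployments/gpt-35-turbo/chat/completions?api-version=2023-07-01-preview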
Note: AZURE_OPENAI_API_COMPLETIONS_DEPLOYMENT_NAME and AZURE_OPENAI_API_EMBEDDINGS_DEPLOYMENT_NAME are optional but might be used in the future.
- It's recommended to name your deployments after the model name, e.g. gpt-35-turbo, which allows for fast deployment switching with AZURE_USE_MODEL_AS_DEPLOYMENT_NAME enabled. However, you can use non-model deployment names and set AZURE_OPENAI_DEFAULT_MODEL to ensure it works as expected.
- Identify the available models, separated by commas without spaces. The first will be the default. Leave it blank or as is to use internal settings.
Note: as deployment names can't have periods, they will be removed when the endpoint is generated.
AZURE_OPENAI_MODELS=gpt-3.5-turbo,gpt-4
- This enables the use of the model name as the deployment name, e.g. "gpt-3.5-turbo" (Advanced)
AZURE_USE_MODEL_AS_DEPLOYMENT_NAME=TRUE
- To use Azure with the Plugins endpoint, you need the variables above, and uncomment the following variable:
Note: This may not work as expected and Azure OpenAI may not support OpenAI Functions yet. Omit or leave it commented to use the default OpenAI API.
PLUGINS_USE_AZURE="true"
Bing, also used for Sydney, jailbreak, and Bing Image Creator, see: Bing Access token and Bing Jailbreak
- Follow these instructions to get your bing access token (it's best to use the full cookie string for that purpose): Bing Access Token
- Leave BINGAI_TOKEN= blank to disable this endpoint
- Set BINGAI_TOKEN= to "user_provided" to allow users to provide their own API key from the WebUI
Note: It is recommended to leave it as "user_provided" and provide the token from the WebUI.
BINGAI_HOST can be necessary for some people in different countries, e.g. China (https://cn.bing.com). Leave it blank or commented out to use the default server.
BINGAI_TOKEN=user_provided
BINGAI_HOST=
see: ChatGPT Free Access token
Warning: To use this endpoint you'll have to set up your own reverse proxy. Here is the installation guide to deploy your own (based on PandoraNext): PandoraNext Deployment Guide
CHATGPT_REVERSE_PROXY=<YOUR-REVERSE-PROXY>
Note: If you're a GPT plus user you can add gpt-4, gpt-4-plugins, gpt-4-code-interpreter, and gpt-4-browsing to the model list and use the models for these features; however, the view/display portion of these features is not supported, but you can use the underlying models, which have a higher token context.
Note: The current method only works with text-davinci-002-render-sha.
- Leave CHATGPT_TOKEN= blank to disable this endpoint
- Set CHATGPT_TOKEN= to "user_provided" to allow users to provide their own API key from the WebUI
- It is not recommended to provide your token in the .env file since it expires often and sharing it could get you banned.
CHATGPT_TOKEN=
CHATGPT_MODELS=text-davinci-002-render-sha
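For example, as noted above, a GPT plus user could extend the model list to include the GPT-4 variants:
CHATGPT_MODELS=text-davinci-002-render-sha,gpt-4,gpt-4-plugins,gpt-4-code-interpreter,gpt-4-browsing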
Follow these instructions to set up: Google LLMs
GOOGLE_KEY=user_provided
GOOGLE_REVERSE_PROXY=
- Customize the available models, separated by commas, without spaces.
- The first will be default.
- Leave it blank or commented out to use internal settings (default: all listed below).
# all available models as of 12/16/23
GOOGLE_MODELS=gemini-pro,gemini-pro-vision,chat-bison,chat-bison-32k,codechat-bison,codechat-bison-32k,text-bison,text-bison-32k,text-unicorn,code-gecko,code-bison,code-bison-32k
- To get your OpenAI API key, you need to:
  - Go to https://platform.openai.com/account/api-keys
  - Create an account or log in with your existing one
  - Add a payment method to your account (this is not free, sorry 😬)
  - Copy your secret key (sk-...) to OPENAI_API_KEY
- Leave OPENAI_API_KEY= blank to disable this endpoint
- Set OPENAI_API_KEY= to "user_provided" to allow users to provide their own API key from the WebUI
OPENAI_API_KEY=user_provided
- Set to true to enable debug mode for the OpenAI endpoint
DEBUG_OPENAI=false
- Customize the available models, separated by commas, without spaces.
- The first will be default.
- Leave it blank or commented out to use internal settings.
OPENAI_MODELS=gpt-3.5-turbo-1106,gpt-4-1106-preview,gpt-3.5-turbo,gpt-3.5-turbo-16k,gpt-3.5-turbo-0301,text-davinci-003,gpt-4,gpt-4-0314,gpt-4-0613
- Titling is enabled by default when initiating a conversation.
- Set to false to disable this feature.
TITLE_CONVO=true
- The default model used for titling is gpt-3.5-turbo. You can change it by uncommenting the following and setting the desired model. (Optional)
Note: Must be compatible with the OpenAI Endpoint.
OPENAI_TITLE_MODEL=gpt-3.5-turbo
- Enable message summarization by uncommenting the following (Optional/Experimental)
Note: this may affect response time when a summary is being generated.
OPENAI_SUMMARIZE=true
Not yet implemented: this will be a conversation option enabled by default to save users on tokens. We are using the ConversationSummaryBufferMemory method to summarize messages. To learn more about this, see this article: https://www.pinecone.io/learn/series/langchain/langchain-conversational-memory/
- Reverse proxy settings for OpenAI:
- see: LiteLLM
- see also: Free AI APIs
OPENAI_REVERSE_PROXY=
- Sometimes when using Local LLM APIs, you may need to force the API to be called with a prompt payload instead of a messages payload, to mimic the /v1/completions request instead of /v1/chat/completions. This may be the case for LocalAI with some models. To do so, uncomment the following (Advanced)
OPENAI_FORCE_PROMPT=true
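As a rough sketch of the difference (the URL and model name below are placeholders for your local API, not values from this guide): a chat completions request sends a messages array, while a completions request sends a plain prompt string, which is what OPENAI_FORCE_PROMPT=true mimics.
# messages payload (/v1/chat/completions):
curl http://localhost:8080/v1/chat/completions -H "Content-Type: application/json" \
  -d '{"model": "your-model", "messages": [{"role": "user", "content": "Hello"}]}'
# prompt payload (/v1/completions):
curl http://localhost:8080/v1/completions -H "Content-Type: application/json" \
  -d '{"model": "your-model", "prompt": "Hello"}'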
See OpenRouter for more info.
- OpenRouter is a legitimate proxy service to a multitude of LLMs, both closed and open source, including: OpenAI models, Anthropic models, Meta's Llama models, pygmalionai/mythalion-13b and many more open source models. Newer integrations are usually discounted, too!
Note: this overrides the OpenAI and Plugins Endpoints.
OPENROUTER_API_KEY=
Here is some useful documentation about plugins:
- Identify the available models, separated by commas without spaces. The first model in the list will be set as default. Leave it blank or commented out to use internal settings.
PLUGIN_MODELS=gpt-3.5-turbo,gpt-3.5-turbo-16k,gpt-3.5-turbo-0301,gpt-4,gpt-4-0314,gpt-4-0613
- Set to false or comment out to disable debug mode for plugins
DEBUG_PLUGINS=true
- For securely storing credentials, you need a fixed key and IV. You can set them here for prod and dev environments.
- You need a 32-byte key (64 characters in hex) and a 16-byte IV (32 characters in hex). You can use this replit to generate some quickly: Key Generator
Warning: If you don't set them, the app will crash on startup.
CREDS_KEY=f34be427ebb29de8d88c107a71546019685ed8b241d8f2ed00c3df97ad2566f0
CREDS_IV=e2341419ec3dd3d19b13a1a87fafcbfb
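If you prefer generating these locally rather than with the replit, a standard OpenSSL command produces values of the right length (64 hex characters for the key, 32 for the IV):
openssl rand -hex 32   # use the output as CREDS_KEY
openssl rand -hex 16   # use the output as CREDS_IV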
This plugin supports searching Azure AI Search for answers to your questions. See: Azure AI Search
AZURE_AI_SEARCH_SERVICE_ENDPOINT=
AZURE_AI_SEARCH_INDEX_NAME=
AZURE_AI_SEARCH_API_KEY=
AZURE_AI_SEARCH_API_VERSION=
AZURE_AI_SEARCH_SEARCH_OPTION_QUERY_TYPE=
AZURE_AI_SEARCH_SEARCH_OPTION_TOP=
AZURE_AI_SEARCH_SEARCH_OPTION_SELECT=
- OpenAI API key for DALL-E / DALL-E-3. Leave commented out to have the user provide their own key when installing the plugin. If you want to provide your own key for all users you can uncomment this line and add your OpenAI API key here.
# DALLE_API_KEY=
- For customization of the DALL-E-3 System prompt, uncomment the following, and provide your own prompt. (Advanced)
- See official prompt for reference: DALL-E System Prompt
DALLE3_SYSTEM_PROMPT="Your System Prompt here"
- DALL-E Proxy settings. This is separate from its OpenAI counterpart for customization purposes (Advanced)
Reverse proxy settings: this changes the baseURL for the DALL-E-3 API calls. The URL must match the "url/v1" pattern; the "openai" suffix is also allowed.
Examples:
- https://open.ai/v1
- https://open.ai/v1/ACCOUNT/GATEWAY/openai
- https://open.ai/v1/hi/openai
DALLE_REVERSE_PROXY=
Note: if you have PROXY set, it will be used for DALL-E calls also, which is universal for the app
See detailed instructions here: Google Search
GOOGLE_API_KEY=
GOOGLE_CSE_ID=
SerpApi is a real-time API to access Google search results (not as performant)
SERPAPI_API_KEY=
See detailed instructions here: Stable Diffusion
- Use "http://127.0.0.1:7860" with local install and "http://host.docker.internal:7860" for docker
SD_WEBUI_URL=http://host.docker.internal:7860
See detailed instructions here: Wolfram Alpha
WOLFRAM_APP_ID=
- You need a Zapier account. Get your API key from here: Zapier
- Create allowed actions
- Follow step 3 in this getting started guide from Zapier
Note: Zapier is known to be finicky with certain actions. Writing email drafts is probably the best use of it.
ZAPIER_NLA_API_KEY=
Enables search in messages and conversations:
SEARCH=true
Note: If you're not using docker, it requires the installation of the free self-hosted Meilisearch or a paid remote plan
To disable anonymized telemetry analytics for MeiliSearch for absolute privacy, set to true:
MEILI_NO_ANALYTICS=true
For the API server to connect to the search server. Replace '0.0.0.0' with 'meilisearch' if serving MeiliSearch with docker-compose.
MEILI_HOST=http://0.0.0.0:7700
MeiliSearch HTTP Address, mainly for docker-compose to expose the search server. Replace '0.0.0.0' with 'meilisearch' if serving MeiliSearch with docker-compose.
MEILI_HTTP_ADDR=0.0.0.0:7700
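For example, when serving MeiliSearch with docker-compose as described above, both values would typically reference the meilisearch service name:
MEILI_HOST=http://meilisearch:7700
MEILI_HTTP_ADDR=meilisearch:7700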
This master key must be at least 16 bytes, composed of valid UTF-8 characters. MeiliSearch will throw an error and refuse to launch if no master key is provided or if it is under 16 bytes. MeiliSearch will suggest a secure autogenerated master key. This is a ready-made secure key for docker-compose; you can replace it with your own.
MEILI_MASTER_KEY=DrhYf7zENyR6AlUCKmnz0eYASOQdl6zxH7s7MKFSfFCt
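To generate your own master key locally (at least 16 bytes), an OpenSSL one-liner is enough, for example:
openssl rand -hex 32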
This section contains the configuration for automated moderation and rate limiting.
The Automated Moderation System uses a scoring mechanism to track user violations. As users commit actions like excessive logins, registrations, or messaging, they accumulate violation scores. Upon reaching a set threshold, the user and their IP are temporarily banned. This system ensures platform security by monitoring and penalizing rapid or suspicious activities.
see: Automated Moderation
- BAN_VIOLATIONS: Whether or not to enable banning users for violations (they will still be logged)
- BAN_DURATION: How long the user and associated IP are banned for (in milliseconds)
- BAN_INTERVAL: The user will be banned every time their score reaches/crosses over the interval threshold
BAN_VIOLATIONS=true
BAN_DURATION=1000 * 60 * 60 * 2
BAN_INTERVAL=20
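For reference, the default above evaluates to 1000 * 60 * 60 * 2 = 7,200,000 ms, i.e. 2 hours. To ban for 1 hour instead, you would set:
BAN_DURATION=1000 * 60 * 60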
LOGIN_VIOLATION_SCORE=1
REGISTRATION_VIOLATION_SCORE=1
CONCURRENT_VIOLATION_SCORE=1
MESSAGE_VIOLATION_SCORE=1
NON_BROWSER_VIOLATION_SCORE=20
- LOGIN_MAX: The max amount of logins allowed per IP per LOGIN_WINDOW
- LOGIN_WINDOW: In minutes, determines the window of time for LOGIN_MAX logins
- REGISTER_MAX: The max amount of registrations allowed per IP per REGISTER_WINDOW
- REGISTER_WINDOW: In minutes, determines the window of time for REGISTER_MAX registrations
LOGIN_MAX=7
LOGIN_WINDOW=5
REGISTER_MAX=5
REGISTER_WINDOW=60
- LIMIT_CONCURRENT_MESSAGES: Whether to limit the amount of messages a user can send per request
- CONCURRENT_MESSAGE_MAX: The max amount of messages a user can send per request
LIMIT_CONCURRENT_MESSAGES=true
CONCURRENT_MESSAGE_MAX=2
Note: You can utilize both limiters, but default is to limit by IP only.
- IP Limiter:
  - LIMIT_MESSAGE_IP: Whether to limit the amount of messages an IP can send per MESSAGE_IP_WINDOW
  - MESSAGE_IP_MAX: The max amount of messages an IP can send per MESSAGE_IP_WINDOW
  - MESSAGE_IP_WINDOW: In minutes, determines the window of time for MESSAGE_IP_MAX messages
LIMIT_MESSAGE_IP=true
MESSAGE_IP_MAX=40
MESSAGE_IP_WINDOW=1
- User Limiter:
  - LIMIT_MESSAGE_USER: Whether to limit the amount of messages a user can send per MESSAGE_USER_WINDOW
  - MESSAGE_USER_MAX: The max amount of messages a user can send per MESSAGE_USER_WINDOW
  - MESSAGE_USER_WINDOW: In minutes, determines the window of time for MESSAGE_USER_MAX messages
LIMIT_MESSAGE_USER=false
MESSAGE_USER_MAX=40
MESSAGE_USER_WINDOW=1
The following enables user balances for the OpenAI/Plugins endpoints. You can add balances manually, or you will need to build out a balance-accruing system for users.
see: Token Usage
- To manually add balances, run the following command:
npm run add-balance
- You can also specify the email and token credit amount to add, e.g.:
npm run add-balance [email protected] 1000
Note: 1000 credits = $0.001 (1 mill USD)
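As a worked example at that rate, $1 corresponds to 1,000,000 credits, so granting roughly $10 of usage to the placeholder address from the example above would look like:
npm run add-balance [email protected] 10000000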
- Set to true to enable token credit balances for the OpenAI/Plugins endpoints
CHECK_BALANCE=false
see: User/Auth System
- General Settings:
  - ALLOW_EMAIL_LOGIN: Email login. Set to true or false to enable or disable ONLY email login.
  - ALLOW_REGISTRATION: Email registration of new users. Set to true or false to enable or disable email registration.
  - ALLOW_SOCIAL_LOGIN: Allow users to connect to LibreChat with various social networks, see below. Set to true or false to enable or disable.
  - ALLOW_SOCIAL_REGISTRATION: Enable or disable registration of new users via various social networks. Set to true or false to enable or disable.
Quick Tip: Even with registration disabled, add users directly to the database using npm run create-user.
ALLOW_REGISTRATION=true
ALLOW_SOCIAL_LOGIN=false
ALLOW_SOCIAL_REGISTRATION=false
- Default values: session expiry: 15 minutes, refresh token expiry: 7 days
- For more information: Refresh Token
SESSION_EXPIRY=1000 * 60 * 15
REFRESH_TOKEN_EXPIRY=(1000 * 60 * 60 * 24) * 7
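These values are in milliseconds: 1000 * 60 * 15 is 15 minutes and (1000 * 60 * 60 * 24) * 7 is 7 days. For example, to extend sessions to 30 minutes while keeping the 7-day refresh token, you could set:
SESSION_EXPIRY=1000 * 60 * 30
REFRESH_TOKEN_EXPIRY=(1000 * 60 * 60 * 24) * 7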
- You should use new secure values. The examples given are 32-byte keys (64 characters in hex).
- Use this replit to generate some quickly: JWT Keys
JWT_SECRET=16f8c0ef4a5d391b26034086c628469d3f9f497f08163ab9b40137092f2909ef
JWT_REFRESH_SECRET=eaa5191f2914e30b9387fd84e254e4ba6fc51b4654968a9b0803b456a54b8418
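If you'd rather generate these locally than with the replit, the same OpenSSL approach works; each command prints a fresh 32-byte hex value:
openssl rand -hex 32   # use as JWT_SECRET
openssl rand -hex 32   # use as JWT_REFRESH_SECRET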
for more information: Discord
# Discord
DISCORD_CLIENT_ID=your_client_id
DISCORD_CLIENT_SECRET=your_client_secret
DISCORD_CALLBACK_URL=/oauth/discord/callback
for more information: Facebook
# Facebook
FACEBOOK_CLIENT_ID=
FACEBOOK_CLIENT_SECRET=
FACEBOOK_CALLBACK_URL=/oauth/facebook/callback
for more information: GitHub
# GitHub
GITHUB_CLIENT_ID=your_client_id
GITHUB_CLIENT_SECRET=your_client_secret
GITHUB_CALLBACK_URL=/oauth/github/callback
for more information: Google
# Google
GOOGLE_CLIENT_ID=
GOOGLE_CLIENT_SECRET=
GOOGLE_CALLBACK_URL=/oauth/google/callback
for more information: Azure OpenID or AWS Cognito OpenID
# OpenID
OPENID_CLIENT_ID=
OPENID_CLIENT_SECRET=
OPENID_ISSUER=
OPENID_SESSION_SECRET=
OPENID_SCOPE="openid profile email"
OPENID_CALLBACK_URL=/oauth/openid/callback
OPENID_BUTTON_LABEL=
OPENID_IMAGE_URL=
Email is used for password reset. See: Email Password Reset
- Note that either the service or the host, plus the username, password, and the From address, must all be set for email to work.
If using EMAIL_SERVICE, do NOT set the extended connection parameters: HOST, PORT, ENCRYPTION, ENCRYPTION_HOSTNAME, ALLOW_SELFSIGNED
Failing to set valid values here will result in LibreChat using the unsecured password reset!
See: nodemailer well-known-services
EMAIL_SERVICE=
If EMAIL_SERVICE is not set, connect to this server:
EMAIL_HOST=
Mail server port to connect to with EMAIL_HOST (usually 25, 465, 587, 2525):
EMAIL_PORT=25
Encryption valid values: starttls (force STARTTLS), tls (obligatory TLS), anything else (use STARTTLS if available):
EMAIL_ENCRYPTION=
Check the name in the certificate against this instead of EMAIL_HOST:
EMAIL_ENCRYPTION_HOSTNAME=
Set to true to allow self-signed, anything else will disallow self-signed:
EMAIL_ALLOW_SELFSIGNED=
Username used for authentication. For consumer services, this MUST usually match EMAIL_FROM:
EMAIL_USERNAME=
Password used for authentication:
EMAIL_PASSWORD=
The human-readable address in the From is constructed as EMAIL_FROM_NAME <EMAIL_FROM>. Defaults to APP_TITLE:
EMAIL_FROM_NAME=
Mail address for the From field. It is REQUIRED to set a value here (even if it's not properly working):