GPT4All as a server #1649
Closed
mscottgithub started this conversation in General
-
GPT4All does not provide a web interface. Right now the only graphical client is the Qt-based desktop app, and until we get the docker-based API server working again (#1641), that server is the only way to connect to or serve an API service (unless the bindings can also connect to the API).
-
On 2023-11-16 11:23, Jared Van Bortel wrote:

If you are looking for a web interface to llama.cpp, there is a host of options out there: oobabooga's text-generation-webui (which I use frequently), koboldcpp, even llama.cpp's built-in server example, which has efficient parallel generation support for when multiple clients are connected.

Our docker-based API server can be used with any OpenAI-compatible client AFAIK, but it is currently not working, and even when it does work I don't have the impression that it is particularly robust at the moment. Our main focus is the desktop chat interface.
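To make "any OpenAI-compatible client" concrete: once the docker-based API server is working again, anything that speaks the OpenAI chat-completions wire format should be able to talk to it. Below is a minimal sketch of such a client; note that the base URL, port, endpoint path, and model name are assumptions for illustration, not the project's documented defaults.

```python
import json
import urllib.request

# ASSUMPTION: base URL and port are placeholders -- check the server's
# own documentation for its actual listen address.
BASE_URL = "http://localhost:4891/v1"

def build_chat_request(prompt, model="mistral-7b-instruct"):
    """Build an OpenAI-style chat-completions request payload."""
    return {
        "model": model,  # ASSUMPTION: model name is a placeholder
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": 128,
        "temperature": 0.7,
    }

def send_chat_request(payload, base_url=BASE_URL):
    """POST the payload to the (assumed) /chat/completions endpoint."""
    req = urllib.request.Request(
        f"{base_url}/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())
```

Because the payload shape is the standard OpenAI one, the same sketch would also work against llama.cpp's built-in server or text-generation-webui's API mode, with only the base URL changed.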
I am grateful for your feedback. What is the primary difference between GPT4All and some of these web interfaces such as oobabooga or koboldcpp? I would presume that oobabooga is just a web interface, whereas GPT4All provides more? I apologize for my lack of knowledge; I am just getting my feet wet in AI and trying to learn all the moving parts. Are GPT4All and oobabooga variations on a theme, one focused on a local chat-engine interface and one focused on a web interface? Or is GPT4All trying to accomplish more? I am still trying to wrap my mind around how these projects are all unfolding and how they relate to each other.
-
I may have misunderstood a basic intent or goal of the GPT4All project and am hoping the community can get my head on straight. I believed from all that I've read that I could install GPT4All on an Ubuntu server with an LLM of choice and have that server function as a text-based AI that remote clients could then connect to, via a chat client or web interface, for interaction. I was under the impression that a web interface is provided with the GPT4All installation.

I was also under the impression this could all be done on private hardware and hosted privately, even choosing not to connect it to the Internet or rely on any cloud services. I set out to do that. I have a Dell R720 host with 2 CPUs, 192GB of RAM, ample SSD storage, and a GeForce RTX 3080 Ti GPU. I installed Ubuntu on it and am trying to get GPT4All functional on the server.

I have so far not been able to get it installed, but that is not necessarily the point of this question. I want to know whether this goal is achievable and, hopefully, with little effort. I just want to get it installed and functional, and then be able to connect to the server from a desktop PC using a chat client or browser to interact with the LLM of choice. Have I misinterpreted this project? Is it intended to be a client-only system for local interaction only? If I am correct and this is easily doable, I would appreciate any help finding instructions on how to get it installed and working. So far I have not found any clear, step-by-step instructions for installing it on Ubuntu so that it will even launch and load a model for interaction, much less host that model for use by client PCs.
Respectfully and hopefully,
Mike