Releases: TabbyML/tabby

v0.17.0-rc.3 (Pre-release)

06 Sep 21:48

v0.17.0-rc.2 (Pre-release)

05 Sep 23:54

v0.17.0-rc.1 (Pre-release)

05 Sep 23:00

v0.17.0-rc.0 (Pre-release)

04 Sep 07:50

v0.16.1

28 Aug 04:32

⚠️ Notice

  • Starting from this version, Tabby uses WebSockets for features that require streaming (e.g., the Answer Engine and the Chat Side Panel). If you deploy Tabby behind a reverse proxy, you may need to configure the proxy to support WebSocket connections; see the sketch below.
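
For reference, a minimal nginx sketch that allows WebSocket upgrades on the proxied location is shown below. The upstream address and port are assumptions (Tabby serves on 8080 by default); adjust them to your deployment.

```nginx
# Minimal sketch: proxying Tabby through nginx with WebSocket support.
# The upstream address/port below is an assumption, not a prescribed value.
location / {
    proxy_pass http://localhost:8080;

    # Forward the WebSocket upgrade handshake, needed for streaming
    # features such as the Answer Engine and the Chat Side Panel.
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";

    # Preserve the original request host.
    proxy_set_header Host $host;
}
```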

πŸš€ Features

  • Discussion threads in the Answer Engine are now persisted, allowing users to share threads with others.

🧰 Fixes and Improvements

  • Fixed an issue where the llama-server subprocess was not reused when the same local model served both Chat and Completion (e.g., Codestral-22B); see the config sketch after this list.
  • Updated llama.cpp to version b3571 to support the Jina series of embedding models.
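
As a rough illustration of the scenario above, pointing both roles at the same local model in Tabby's config.toml looks like the sketch below. The file location and the model_id value are assumptions; use the model you actually serve.

```toml
# Sketch of ~/.tabby/config.toml reusing one local model for Chat and Completion.
# With this fix, Tabby spawns a single llama-server subprocess for both roles
# instead of one per role. The model id shown is the example from this release note.

[model.completion.local]
model_id = "Codestral-22B"

[model.chat.local]
model_id = "Codestral-22B"
```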

πŸ’« New Contributors

Full Changelog: v0.15.0...v0.16.1

v0.16.1-rc.0 (Pre-release)

27 Aug 05:04

v0.16.0

26 Aug 17:06

v0.16.0-rc.4 (Pre-release)

23 Aug 21:43

v0.16.0-rc.3 (Pre-release)

22 Aug 21:08

v0.16.0-rc.2 (Pre-release)

22 Aug 20:07