
Releases: explodinggradients/ragas

v0.0.12

06 Sep 17:39
717039d

What's Changed

Full Changelog: v0.0.11...v0.0.12

v0.0.11

24 Aug 02:18
5cf4975

What's Changed

New Contributors

Full Changelog: v0.0.10...v0.0.11

v0.0.10

02 Aug 19:19
154902d

Main

What's Changed

Full Changelog: v0.0.9...v0.0.10

v0.0.9

27 Jul 14:48
234354d

Patch release for v0.0.8

What's Changed

Full Changelog: v0.0.8...v0.0.9

v0.0.8

27 Jul 10:36
2b9734d

Main

What's Changed

Full Changelog: 0.0.7...v0.0.8

0.0.7

20 Jul 16:46
9444617

What's Changed

Full Changelog: 0.0.6...0.0.7

0.0.6

15 Jul 06:27
eefb0ca

Main

  • Context Relevancy v2 - measures how relevant the retrieved context is to the prompt. This is done using a combination of OpenAI models and cross-encoder models. To improve the score, try to optimize the amount of information present in the retrieved context. A usage sketch follows below.
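
A minimal sketch of how this metric can be run, following the quickstart pattern of this era. The dataset column names ("question", "contexts") and the `context_relevancy` import path are assumptions and may differ slightly between 0.0.x releases.

```python
# Sketch only: score context relevancy on a tiny hand-built dataset.
# Assumes OPENAI_API_KEY is set, since the metric calls OpenAI models.
from datasets import Dataset

from ragas import evaluate
from ragas.metrics import context_relevancy  # name as of the 0.0.x line

ds = Dataset.from_dict(
    {
        "question": ["When did the Pathfinder mission launch?"],
        "contexts": [[
            "The Pathfinder mission, carrying the Sojourner rover, launched in December 1996.",
            "Mars is the fourth planet from the Sun.",  # irrelevant context lowers the score
        ]],
    }
)

result = evaluate(ds, metrics=[context_relevancy])
print(result)  # e.g. {'context_relevancy': 0.7...}
```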

What's Changed

Full Changelog: 0.0.5...0.0.6

0.0.5

10 Jul 10:55
d4d0883

What's Changed

Full Changelog: 0.0.4...0.0.5

0.0.4

10 Jul 07:37
f111519

Important features

What's Changed

New Contributors

Full Changelog: v0.0.3...0.0.4

v0.0.3

09 Jun 14:25
6b76cd3

v0.0.3 is a major design change

We have added three new metrics that help you answer how factually correct your generated answers are, how relevant the answers are to the question, and how relevant the contexts returned from the retriever are to the question. This gives you a sense of the performance of both your generation and retrieval steps. There is also a "ragas_score", a unified score that gives a single metric for your pipeline.

Check out the quickstart to see how it works: https://github.com/explodinggradients/ragas/blob/main/examples/quickstart.ipynb
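
The rough shape of that quickstart is sketched below. The metric names (in particular `factuality`, later renamed) and the `evaluate()` signature reflect the 0.0.x era and are assumptions, not a verbatim copy of the notebook.

```python
# Sketch only: score a toy RAG sample with the three metrics plus the combined ragas_score.
# Requires OPENAI_API_KEY; column names follow the quickstart convention of this era.
from datasets import Dataset

from ragas import evaluate
from ragas.metrics import answer_relevancy, context_relevancy, factuality

ds = Dataset.from_dict(
    {
        "question": ["What does ragas evaluate?"],
        "contexts": [["ragas scores retrieval-augmented generation (RAG) pipelines."]],
        "answer": ["It scores both the retrieval and the generation steps of a RAG pipeline."],
    }
)

result = evaluate(ds, metrics=[factuality, answer_relevancy, context_relevancy])
print(result)  # per-metric scores plus the unified ragas_score
```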