From the benchmark results, I observed that this project's insertion and query quality are both better than LightRAG's and GraphRAG's. However, LightRAG and GraphRAG share a common issue: their query times are too long, which has been a major concern for me. I would like to see a comparison chart of query time costs for better clarity.
Hello, the query procedures are very similar, so the query times should be comparable. The context size can be reduced (for example, by providing only chunks) to speed up answer generation.
Latency can also be reduced by using local language models (in particular, since we "read" the query, we must wait for one LLM call before generation).
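To produce the requested query-time comparison, one approach is a small timing harness run against each system with the same questions. This is a minimal sketch, not part of the project: `query_fn` stands in for whatever query call each pipeline exposes (e.g. a `rag.query(question)` method, which is an assumption here).

```python
import time
from statistics import mean, median


def time_queries(query_fn, questions, warmup=1):
    """Return per-question latencies in seconds for a RAG query function.

    A few warmup calls are made first so cache/connection setup
    does not skew the measured times.
    """
    for q in questions[:warmup]:
        query_fn(q)
    latencies = []
    for q in questions:
        start = time.perf_counter()
        query_fn(q)  # the call being benchmarked
        latencies.append(time.perf_counter() - start)
    return latencies


if __name__ == "__main__":
    # Stub standing in for a real pipeline's query call (hypothetical).
    fake_query = lambda q: q.upper()
    lats = time_queries(fake_query, ["q1", "q2", "q3"])
    print(f"mean={mean(lats):.4f}s  median={median(lats):.4f}s")
```

Running the same question set through each system (this project, LightRAG, GraphRAG) and plotting the resulting mean/median latencies would give the comparison chart asked for above.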