> [!NOTE]
> dygest is a text analysis tool that extracts insights from documents, generating summaries, keywords, tables of contents (TOCs), and performing Named Entity Recognition (NER).

dygest was created to gain fast insights into longer transcripts of audio and video content by retrieving relevant topics and providing an easy-to-use HTML interface with shortcuts from summaries to the corresponding text chunks. NER processing further enhances those insights by identifying names of individuals, organisations, locations etc.
- **Text insights**: Generate concise insights for your text files using various LLM services by creating summaries, keywords, a table of contents (TOC) and named entities (NER).
- **Unified LLM Interface**: dygest uses litellm and provides integration for various LLM service providers: OpenAI, Anthropic, HuggingFace, Groq, Ollama etc. Check the complete provider list for all available services.
- **Token Friendly**: dygest performs token-heavy text analysis and summarization tasks. Therefore, the underlying LLM pipeline can be tailored to your needs and specific rate limits using a mixed experts approach.
- **Mixed Experts Approach**: dygest utilizes two fully customizable LLMs to handle different processing tasks. The first, referred to as the `light_model`, is designed for lighter tasks such as summarization and keyword extraction. The second, called the `expert_model`, is optimized for more complex tasks like constructing tables of contents (TOCs). This flexibility allows for various pipeline configurations: for example, the `light_model` can run locally using Ollama, while the `expert_model` can leverage an external API service like OpenAI or Groq. This approach ensures efficiency and adaptability based on specific requirements (see the configuration sketch after this list).
  > [!TIP]
  > As the `expert_model` deals with a lot of input content, it is recommended to use a larger LLM (>= 32B parameters) for this task. Smaller LLMs (3B to 7B) perform well as the `light_model`.
- **Named Entity Recognition (NER)**: Named Entity Recognition via the fast and reliable flair framework (identifies persons, organisations, locations etc.).
- **User-friendly HTML Editor**: By default, dygest will create a `.html` file that can be viewed in standard browsers and combines summaries, keywords, TOC and NER for your text. It features a text editor for you to make further changes.
- **Input Formats**: `.txt`, `.csv`, `.xlsx`, `.doc`, `.docx`, `.pdf`, `.html`, `.xml`
- **Export Formats**: `.json`, `.csv`, `.html`
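For example, a mixed setup could pair a small local Ollama model for the lighter tasks with a larger hosted model for TOC construction. The sketch below uses the documented `dygest config` flags; the model names are illustrative placeholders, not recommendations, and any litellm-compatible model identifier should work:

```shell
# Hypothetical mixed-experts setup (model names are examples only):
# a local Ollama model handles summaries and keywords,
# a hosted OpenAI model builds the TOCs.
dygest config \
  --light_model ollama/llama3.1:8b \
  --expert_model openai/gpt-4o \
  --embedding_model ollama/nomic-embed-text:latest
```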
To use dygest you need:

- Python `>=3.10`
- API keys for LLM services like OpenAI, Anthropic and Groq, and/or a running Ollama instance
> [!NOTE]
> API keys have to be stored in your environment (e.g. `export OPENAI_API_KEY=skj....`).
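On Linux or macOS the keys can be exported in your shell session or shell profile. litellm typically reads provider keys from environment variables like the ones below; the values are placeholders, and you only need the keys for the providers you actually configure:

```shell
# Placeholder values; export only the keys for the providers you use
export OPENAI_API_KEY="sk-..."
export ANTHROPIC_API_KEY="sk-ant-..."
export GROQ_API_KEY="gsk_..."
```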
Install dygest with pip:

```shell
python3 -m venv venv
source venv/bin/activate
pip install dygest
```
Or install from source:

```shell
git clone https://github.com/tsmdt/dygest.git
cd dygest
python3 -m venv venv
source venv/bin/activate
pip install .
```
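After installation, you can check that the command-line interface is available. The commands below assume the package installs a `dygest` entry point, as the configuration and run examples in this README do:

```shell
# List the available sub-commands and global options
dygest --help

# Show the currently loaded configuration parameters (if a config exists)
dygest config --view_config
```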
Customize the dygest LLM pipeline by running the `dygest config` command:
```
 Usage: dygest config [OPTIONS]

 Configure LLMs, Embeddings and Named Entity Recognition.

╭─ Options ─────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│ --light_model      -l        TEXT      LLM model name for lighter tasks (summarization, keywords) [default: None]      │
│ --expert_model     -x        TEXT      LLM model name for heavier tasks (TOCs). [default: None]                        │
│ --embedding_model  -e        TEXT      Embedding model name. [default: None]                                           │
│ --temperature      -t        FLOAT     Temperature of LLM. [default: None]                                             │
│ --sleep            -s        FLOAT     Pause LLM requests to prevent rate limit errors (in seconds). [default: None]   │
│ --chunk_size       -c        INTEGER   Maximum number of tokens per chunk. [default: None]                             │
│ --ner              --no-ner            Enable Named Entity Recognition (NER). Defaults to False. [default: no-ner]     │
│ --precise          --fast              Enable precise mode for NER. Defaults to fast mode. [default: fast]             │
│ --lang             -lang     TEXT      Language of file(s) for NER. Defaults to auto-detection. [default: None]        │
│ --api_base         -api      TEXT      Set custom API base URL for providers like Ollama and Huggingface.              │
│                                        [default: None]                                                                 │
│ --view_config      -v                  View loaded config parameters.                                                  │
│ --help                                 Show this message and exit.                                                     │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
```
The configuration is saved as `dygest_config.yaml` in the project directory. The `.yaml` config looks like this:
```yaml
light_model: ollama/mistral:latest
expert_model: groq/llama-3.3-70b-versatile
embedding_model: ollama/nomic-embed-text:latest
temperature: 0.4
chunk_size: 1000
ner: true
language: auto
precise: false
api_base: null
sleep: 0
```
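A `dygest config` call along the following lines produces a configuration like the one above; the flags correspond to the options documented in the help output, and the model names are simply the ones from the example:

```shell
# Writes dygest_config.yaml with the values shown above;
# omitted options keep their defaults (fast NER mode, no api_base, no sleep)
dygest config \
  --light_model ollama/mistral:latest \
  --expert_model groq/llama-3.3-70b-versatile \
  --embedding_model ollama/nomic-embed-text:latest \
  --temperature 0.4 \
  --chunk_size 1000 \
  --ner
```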
Run the dygest LLM pipeline with the `dygest run` command:
```
 Usage: dygest run [OPTIONS]

 Create insights for your documents (summaries, keywords, TOCs).

╭─ Options ────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│ --files            -f     TEXT                 Path to the input folder or .txt file. [default: None]                 │
│ --output_dir       -o     TEXT                 If not provided, outputs will be saved in the input folder.            │
│                                                [default: None]                                                        │
│ --export_format    -ex    [all|json|csv|html]  Set the data format for exporting. [default: html]                     │
│ --toc              -t                          Create a Table of Contents (TOC) for the text. Defaults to False.      │
│ --summarize        -s                          Include a short summary for the text. Defaults to False.               │
│ --keywords         -k                          Create descriptive keywords for the text. Defaults to False.           │
│ --sim_threshold    -sim   FLOAT                Similarity threshold for removing duplicate topics. [default: 0.85]    │
│ --verbose          -v                          Enable verbose output. Defaults to False.                              │
│ --export_metadata  -meta                       Enable exporting metadata to output file(s). Defaults to False.        │
│ --help                                         Show this message and exit.                                            │
╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
```
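For example, to create a TOC, summary and keywords for every file in a folder and export the results as HTML (the paths are placeholders; dygest presumably picks up the models from your `dygest_config.yaml`):

```shell
# Process every supported file in ./transcripts and write HTML output to ./insights
dygest run --files ./transcripts --output_dir ./insights --toc --summarize --keywords --export_format html
```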
Find an example `.json` output in the examples folder.
dygest uses great Python packages:

- `litellm`: https://github.com/BerriAI/litellm
- `flair`: https://github.com/flairNLP/flair
- `typer`: https://github.com/fastapi/typer
- `json_repair`: https://github.com/mangiucugna/json_repair
- `markitdown`: https://github.com/microsoft/markitdown