Currently experimenting with programmatic prompting and with steering language models via in-context learning using examples. My goal is to recreate DSPy's optimizer implementation (Bootstrap Few-Shot): bootstrap candidate examples, find which examples satisfy a metric, generate more examples from that chosen sample, and continue until the accuracy requirement on test data is met. I'm using this DSPy-style approach to train a language model to be sarcastic and snarky with only 5-6 examples.
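The loop described above can be sketched in plain Python. This is a minimal, hypothetical reconstruction under my own assumptions, not DSPy's actual API: `lm`, `metric`, the `Q:`/`A:` prompt format, and all parameter names are illustrative.

```python
def bootstrap_few_shot(lm, metric, seed_examples, dev_set,
                       target_accuracy=0.8, max_rounds=5):
    """Minimal sketch of a bootstrap few-shot loop (hypothetical, not DSPy's API).

    lm(prompt) -> str        : the language model being steered in-context
    metric(example, output)  : True if the output satisfies the metric
    """
    demos = list(seed_examples)  # few-shot pool, grows each round
    for _ in range(max_rounds):
        candidates = []
        for ex in dev_set:
            # 1. Build a prompt from the current demos and generate a candidate.
            prompt = "\n".join(f"Q: {d['q']}\nA: {d['a']}" for d in demos)
            prompt += f"\nQ: {ex['q']}\nA:"
            output = lm(prompt)
            # 2. Keep only generations that pass the metric ("bootstrapping").
            if metric(ex, output):
                candidates.append({"q": ex["q"], "a": output})
        demos.extend(candidates)
        # 3. Stop once held-out accuracy meets the requirement.
        if evaluate(lm, metric, demos, dev_set) >= target_accuracy:
            break
    return demos

def evaluate(lm, metric, demos, test_set):
    prefix = "\n".join(f"Q: {d['q']}\nA: {d['a']}" for d in demos)
    hits = sum(metric(ex, lm(prefix + f"\nQ: {ex['q']}\nA:"))
               for ex in test_set)
    return hits / max(len(test_set), 1)
```

In practice the `lm` callable would wrap an API call, and the metric for a sarcasm task might itself be an LM-judged score rather than an exact match.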
MaanasTaneja/LLMTrainer
About
Recreating DSPy's Bootstrap Few-Shot optimizer module to train language models using in-context learning.