finetuning
341 public repositories match this topic. A selection:
- Welcome to the Llama Cookbook! This is your go-to guide for building with Llama: getting started with inference, fine-tuning, and RAG. We also show you how to solve end-to-end problems using the Llama model family on various provider services. (updated Jan 17, 2025, Jupyter Notebook)
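Most of the fine-tuning toolkits in this list consume instruction data in a JSONL prompt/response format. As a minimal illustrative sketch (not taken from the Llama Cookbook; the field names and examples are assumptions), here is how such a dataset can be serialized and validated with the standard library:

```python
# Hypothetical sketch: build and validate a tiny instruction-tuning dataset
# in JSONL form (one JSON object per line). Field names are illustrative.
import json

examples = [
    {"prompt": "Translate to French: hello", "response": "bonjour"},
    {"prompt": "2 + 2 =", "response": "4"},
]

# Serialize: one JSON object per line.
jsonl = "\n".join(json.dumps(e, ensure_ascii=False) for e in examples)

# Validate: every line must parse and carry both expected fields.
records = [json.loads(line) for line in jsonl.splitlines()]
assert all({"prompt", "response"} <= set(r) for r in records)
print(len(records), "examples ready for fine-tuning")
```

The exact field names vary by toolkit, so check the target framework's data-format documentation before reusing this shape.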
- Efficient Triton kernels for LLM training. (updated Jan 18, 2025, Python)
- H2O LLM Studio: a framework and no-code GUI for fine-tuning LLMs. Documentation: https://docs.h2o.ai/h2o-llmstudio/ (updated Jan 18, 2025, Python)
- A fast library for AutoML and tuning. Join our Discord: https://discord.gg/Cppx2vSPVP (updated Dec 18, 2024, Jupyter Notebook)
- Interact with your SQL database: natural language to SQL using LLMs. (updated Jul 24, 2024, Python)
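Natural-language-to-SQL tools like the one above generally work by composing the database schema and the user's question into a prompt for an LLM. A minimal sketch of that composition step (the template, schema, and question are illustrative assumptions, not code from the repository):

```python
# Hypothetical sketch of the prompt-building step in a NL-to-SQL pipeline.
# The actual LLM call that would consume this prompt is omitted.

def build_sql_prompt(schema: str, question: str) -> str:
    """Compose a prompt asking an LLM to translate a question into SQL."""
    return (
        "You are a SQL assistant. Given the schema below, answer the\n"
        "question with a single SQL query and nothing else.\n\n"
        f"Schema:\n{schema}\n\n"
        f"Question: {question}\nSQL:"
    )

schema = "CREATE TABLE orders (id INTEGER, customer TEXT, total REAL);"
prompt = build_sql_prompt(schema, "What is the total revenue?")
print(prompt)
```

Production systems typically add few-shot examples and validate the generated SQL before executing it against the database.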
- A PyTorch library for meta-learning research. (updated Jun 7, 2024, Python)
- Build, customize, and control your own LLMs. From data pre-processing to fine-tuning, xTuring provides an easy way to personalize open-source LLMs. Join our Discord community: https://discord.gg/TgHXuSJEk6 (updated Sep 23, 2024, Python)
- Curated tutorials and resources for Large Language Models: Text2SQL, Text2DSL, Text2API, Text2Vis, and more. (updated Jan 16, 2025)
- 🎯 Task-oriented embedding tuning for BERT, CLIP, etc. (updated Mar 11, 2024, Python)
- The easiest and laziest way to build multi-agent LLM applications. (updated Jan 14, 2025, Python)
- Mastering Applied AI, one concept at a time. (updated Jan 14, 2025, Jupyter Notebook)
- Toolkit for fine-tuning, ablating, and unit-testing open-source LLMs. (updated Oct 25, 2024, Python)
- Web UI for using and fine-tuning XTTS. (updated Jan 17, 2025, Python)
- A stable, high-quality OpenAI API proxy for enterprises and developers. Supports ChatGPT API calls and the OpenAI API, including gpt-4 and gpt-3.5, with no OpenAI key, no OpenAI account, and no USD bank card required: just call it directly. From 智增增. (updated Nov 6, 2024, PHP)
- Fine-tuning large language models for GDScript generation. (updated May 26, 2023, Python)
- [IJCAI 2023 survey track] A curated list of resources for chemical pre-trained models. (updated Jun 17, 2023)
- Guide: fine-tune GPT-2 XL (1.5 billion parameters) and GPT-Neo (2.7B) on a single GPU with Hugging Face Transformers using DeepSpeed. (updated Jun 14, 2023, Python)
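The key to fitting billion-parameter models on a single GPU in setups like the guide above is DeepSpeed's ZeRO optimizer sharding with CPU offload. A minimal sketch of such a configuration, written as the JSON config DeepSpeed expects (the specific values are illustrative assumptions, not the guide's exact settings):

```python
# Illustrative DeepSpeed ZeRO config: stage 2 shards optimizer state and
# gradients, and offloading optimizer state to CPU RAM frees GPU memory
# for the model weights and activations. Values are assumptions.
import json

ds_config = {
    "train_micro_batch_size_per_gpu": 1,
    "gradient_accumulation_steps": 8,          # simulate a larger batch
    "fp16": {"enabled": True},                 # halve activation/weight memory
    "zero_optimization": {
        "stage": 2,                            # shard optimizer state + gradients
        "offload_optimizer": {"device": "cpu"} # keep optimizer state in CPU RAM
    },
}

with open("ds_config.json", "w") as f:
    json.dump(ds_config, f, indent=2)
print("wrote ds_config.json")
```

This file would then be passed to the training script (for example via the Hugging Face `TrainingArguments(deepspeed="ds_config.json")` integration).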
- Open-source project for data preparation for LLM application builders. (updated Jan 18, 2025, HTML)