The repo collects model and data projects for instruction following large language models.
Updated Apr 14, 2023
This repo contains a list of channels and sources for learning about LLMs
Code for the Paper "Grounding Hindsight Instructions in Multi-Goal Reinforcement Learning for Robotics"
Awesome Instruction Editing: instruction-guided image and media editing with human instructions.
[Arxiv 2024] Official Implementation of the paper: "InstrAug: Automatic Instruction Augmentation for Multimodal Instruction Fine-tuning"
This is the official repo for Contrastive Vision-Language Alignment Makes Efficient Instruction Learner.
A better Alpaca Model Trained with Less Data (only 9k instructions of the original set)
Collect and maintain high-quality instruction fine-tuning datasets across different domains and languages.
The home of Stambecco 🦌: Italian Instruction-following LLaMA Model
A benchmark for evaluating the capabilities of large vision-language models (LVLMs)
Code and models of MOCA (Modular Object-Centric Approach) proposed in "Factorizing Perception and Policy for Interactive Instruction Following" (ICCV 2021). We address the task of long horizon instruction following with a modular architecture that decouples a task into visual perception and action policy prediction.
Code for "FollowBench: A Multi-level Fine-grained Constraints Following Benchmark for Large Language Models (ACL 2024)"
Instruction Following Agents with Multimodal Transformers
🌱 DreamerGPT: instruction fine-tuning of Chinese large language models
Awesome Multimodal Assistant is a curated list of multimodal chatbots/conversational assistants that utilize various modes of interaction, such as text, speech, images, and videos, to provide a seamless and versatile user experience.
EditWorld: Simulating World Dynamics for Instruction-Following Image Editing
WangChanGLM 🐘 - The Multilingual Instruction-Following Model
Finetune LLaMA-7B with Chinese instruction datasets
Code for "Lion: Adversarial Distillation of Proprietary Large Language Models (EMNLP 2023)"
[ICLR 2024] Mol-Instructions: A Large-Scale Biomolecular Instruction Dataset for Large Language Models