[Arxiv 2024] Official Implementation of the paper: "InstrAug: Automatic Instruction Augmentation for Multimodal Instruction Fine-tuning"
Collect and maintain high-quality instruction fine-tuning datasets across different domains and languages.
The home of Stambecco 🦌: Italian Instruction-following LLaMA Model
Awesome Instruction Editing: a curated list on instruction-guided image and media editing with human instructions.
This is the official repo for the paper "Contrastive Vision-Language Alignment Makes Efficient Instruction Learner".
Code for the Paper "Grounding Hindsight Instructions in Multi-Goal Reinforcement Learning for Robotics"
The repo collects model and data projects for instruction-following large language models.
A better Alpaca Model Trained with Less Data (only 9k instructions of the original set)
This repo contains a list of channels and sources for learning about LLMs.
🌱 DreamerGPT (梦想家): instruction fine-tuning of Chinese large language models
A benchmark for evaluating the capabilities of large vision-language models (LVLMs)
Instruction-Following Agents with Multimodal Transformers
EditWorld: Simulating World Dynamics for Instruction-Following Image Editing
Awesome Multimodal Assistant is a curated list of multimodal chatbots/conversational assistants that utilize various modes of interaction, such as text, speech, images, and videos, to provide a seamless and versatile user experience.
WangChanGLM 🐘 - The Multilingual Instruction-Following Model
Code for "FollowBench: A Multi-level Fine-grained Constraints Following Benchmark for Large Language Models (ACL 2024)"
[NeurIPS'23] "MagicBrush: A Manually Annotated Dataset for Instruction-Guided Image Editing".
[ICLR 2024] Mol-Instructions: A Large-Scale Biomolecular Instruction Dataset for Large Language Models
Code and models of MOCA (Modular Object-Centric Approach) proposed in "Factorizing Perception and Policy for Interactive Instruction Following" (ICCV 2021). We address the task of long-horizon instruction following with a modular architecture that decouples a task into visual perception and action policy prediction.
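The perception/policy factorization described above can be sketched as two independently swappable modules composed by an agent. This is a minimal illustrative sketch with hypothetical class names and placeholder logic, not the repo's actual API or model:

```python
# Minimal sketch of a modular agent that decouples visual perception
# from action-policy prediction. All names and logic are illustrative
# placeholders, not MOCA's real implementation.

from dataclasses import dataclass
from typing import List


@dataclass
class Observation:
    """Raw visual input plus the natural-language instruction."""
    pixels: List[List[int]]
    instruction: str


class PerceptionModule:
    """Maps raw pixels to an object-centric scene description."""

    def perceive(self, obs: Observation) -> dict:
        # Placeholder: a real module would run an object detector /
        # segmenter here instead of returning fixed objects.
        return {"objects": ["mug", "table"], "instruction": obs.instruction}


class PolicyModule:
    """Maps the perceived scene (not raw pixels) to a discrete action."""

    def act(self, scene: dict) -> str:
        # Placeholder: a real policy would condition on the instruction
        # with a learned sequence model.
        if "pick" in scene["instruction"].lower():
            return f"PickUp({scene['objects'][0]})"
        return "MoveAhead"


class ModularAgent:
    """Composes the two factored modules into one agent loop step."""

    def __init__(self) -> None:
        self.perception = PerceptionModule()
        self.policy = PolicyModule()

    def step(self, obs: Observation) -> str:
        return self.policy.act(self.perception.perceive(obs))


agent = ModularAgent()
obs = Observation(pixels=[[0]], instruction="Pick up the mug")
print(agent.step(obs))  # PickUp(mug)
```

The design point is that either module can be retrained or replaced without touching the other, since the policy only ever sees the structured scene description.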
Finetune LLaMA-7B with Chinese instruction datasets