
Tiny LLM Finetuner for Intel dGPUs

Finetuning openLLaMA on Intel discrete GPUs

A finetuner [1][2] for LLMs on Intel XPU devices, with which you can finetune the openLLaMA-3b model to sound like your favorite book.


Set up and activate the conda environment

conda env create -f env.yml
conda activate pyt_llm_xpu

Warning: Once PyTorch and Intel Extension for PyTorch are set up, install peft without its dependencies (e.g. `pip install peft --no-deps`), because peft requires PyTorch 2.0, which is not yet supported on Intel XPU devices.
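Before finetuning, it can help to confirm that the XPU backend is actually visible to PyTorch. A minimal sketch, assuming the `intel_extension_for_pytorch` package (importing it registers the `torch.xpu` namespace); it returns False instead of raising when the stack is not installed:

```python
import importlib.util

def xpu_available() -> bool:
    # Check that Intel Extension for PyTorch (IPEX) is importable before
    # touching torch.xpu; return False on any missing dependency.
    if importlib.util.find_spec("intel_extension_for_pytorch") is None:
        return False
    import torch
    import intel_extension_for_pytorch  # noqa: F401  (registers torch.xpu)
    return hasattr(torch, "xpu") and torch.xpu.is_available()

print("XPU available:", xpu_available())
```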

Generate data

Fetch a book from Project Gutenberg (default: Pride and Prejudice) and generate the dataset.

python fetch_data.py
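Conceptually, the downloaded book is split into small passages that become training records. A sketch of that idea, assuming a fixed word-count chunking and a `"text"` field name (fetch_data.py's actual schema and chunking may differ):

```python
import json

def chunk_text(text: str, chunk_words: int = 200):
    # Split a book's text into fixed-size word chunks; each chunk
    # becomes one training record for the finetuner.
    words = text.split()
    for i in range(0, len(words), chunk_words):
        yield {"text": " ".join(words[i:i + chunk_words])}

sample = ("It is a truth universally acknowledged that a single man in "
          "possession of a good fortune must be in want of a wife")
records = list(chunk_text(sample, chunk_words=8))
payload = json.dumps(records)  # shape of what book_data.json could hold
```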

Finetune

python finetune.py --input_data ./book_data.json --batch_size=64 --micro_batch_size=16 --num_steps=300
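The separate `--batch_size` and `--micro_batch_size` flags suggest gradient accumulation: each optimizer step is built from several smaller forward/backward passes that fit in GPU memory. A sketch of the arithmetic (an assumption from the flag names; the script's internals may differ):

```python
def accumulation_steps(batch_size: int, micro_batch_size: int) -> int:
    # Effective batch = micro_batch_size * accumulation_steps; gradients
    # are summed over that many micro-batches before the optimizer steps.
    assert batch_size % micro_batch_size == 0, \
        "batch_size must be divisible by micro_batch_size"
    return batch_size // micro_batch_size

# With the defaults above: 64 // 16 = 4 micro-batches per optimizer step.
steps = accumulation_steps(64, 16)
```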

Inference

For inference, you can either provide an input prompt, or let the model fall back to a default prompt.

Without a user-provided prompt:
python inference.py --infer
Using your own prompt for inference:
python inference.py --infer --prompt "my prompt"
Benchmark inference:
python inference.py --bench
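The three invocations above can be captured with a small argparse front end. A sketch mirroring only the flags shown here (the real inference.py may define additional options):

```python
import argparse

def build_parser() -> argparse.ArgumentParser:
    # CLI surface matching the usage examples above.
    p = argparse.ArgumentParser(description="openLLaMA-3b LoRA inference")
    p.add_argument("--infer", action="store_true", help="run generation")
    p.add_argument("--bench", action="store_true",
                   help="benchmark inference latency")
    p.add_argument("--prompt", type=str, default=None,
                   help="optional prompt; a default is used if omitted")
    return p

args = build_parser().parse_args(["--infer", "--prompt", "my prompt"])
```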

1: adapted from: source
2: adapted from: source
