This repository has been archived by the owner on Nov 21, 2022. It is now read-only.

Can you demonstrate how to fine-tune a pretrained model on unlabeled data #287

Open
turian opened this issue Sep 11, 2022 · 6 comments
Labels
enhancement New feature or request help wanted Extra attention is needed

Comments

@turian

turian commented Sep 11, 2022

🚀 Feature

Documentation or example showing how to fine-tune a pretrained model on unlabeled data.

Motivation

It's great to fine-tune your pretrained model on unlabeled data, so that, if you have precious few labels in the target domain, you have still adapted to that domain using the unlabeled data.

Pitch

We have these super huge foundational models, but for niche domains without large labeled datasets it's great to fine-tune.
Examples:

  • Want to work on a particular style of text.
  • Want to fine-tune on a spoken language that it was not exposed to.
  • etc.

Alternatives

Hack around, maybe use Hugging Face directly. IDK?

@turian turian added enhancement New feature or request help wanted Extra attention is needed labels Sep 11, 2022
@Borda
Member

Borda commented Sep 14, 2022

@SeanNaren or @rohitgr7, mind having a look?

@uakarsh

uakarsh commented Oct 7, 2022

Hi @turian, do you mean "Can you demonstrate how to fine-tune a pre-trained model on new data?" Because I think unlabeled data means the data used for pre-training (i.e., without labels, performing a task such as MLM, or related).

Hi @Borda, I think I would like to do an example/documentation (if it doesn't exist) for fine-tuning the model using lightning-transformers.
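For context on the MLM point above: continuing pre-training on unlabeled target-domain text boils down to masked-language-model training on the raw corpus. Below is a minimal, self-contained sketch of BERT-style token masking (the 80/10/10 rule); the toy `VOCAB` and the token list are illustrative assumptions, not part of any library's API:

```python
import random

MASK = "[MASK]"
VOCAB = ["the", "cat", "sat", "on", "mat"]  # toy vocabulary for illustration

def mask_tokens(tokens, mask_prob=0.15, seed=0):
    """BERT-style MLM masking: ~15% of positions become prediction
    targets; of those, 80% are replaced with [MASK], 10% with a random
    token, and 10% are left unchanged (the label is still the original)."""
    rng = random.Random(seed)
    inputs, labels = list(tokens), [None] * len(tokens)
    for i, tok in enumerate(tokens):
        if rng.random() < mask_prob:
            labels[i] = tok  # the model must predict the original token here
            r = rng.random()
            if r < 0.8:
                inputs[i] = MASK
            elif r < 0.9:
                inputs[i] = rng.choice(VOCAB)
            # else: keep the original token unchanged
    return inputs, labels

corpus = ["the", "cat", "sat", "on", "the", "mat"] * 10
inputs, labels = mask_tokens(corpus)
```

A real example would tokenize the unlabeled domain corpus, feed the masked inputs and labels to the pretrained model's MLM head, and continue training; the masking step itself is what makes "unlabeled" data usable as supervision.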

@Borda
Member

Borda commented Oct 14, 2022

> Hi @Borda, I think I would like to do an example/documentation (if it doesn't exist) for fine-tuning the model using lightning-transformers.

That would be great!

@uakarsh

uakarsh commented Oct 15, 2022

Thanks for the heads up. Hi @turian, can you give me an instance of where you would like to see an example, or the domain you are talking about? Because the usage of the library (on different tasks) is given in the README.

@Borda
Member

Borda commented Nov 7, 2022

@turian @uakarsh how are you doing?

@uakarsh

uakarsh commented Nov 7, 2022

Hi @Borda, I am doing well. I have not been able to make much progress, since I am not sure what exactly needs to be solved. If fine-tuning is the concern, I guess there are examples in the README and docs of lightning-transformers.

Development

No branches or pull requests

3 participants