A Paper List for Recommend-system PreTrained Models


This is a paper list for pretrained recommender system (recommendation) models. It also covers related research areas such as large language models for recommendation.

Keywords: Recommender Systems, Pretrained Models, Large Language Models

Welcome to open an issue or make a pull request!

Recruitment advertisement (open indefinitely): welcome to join the Huawei Noah's Ark Recommendation & Search Lab! The Recommendation and Search Lab is a sub-lab of Huawei Noah's Ark Lab, mainly engaged in research and applications in recommendation and search, as well as the related machine learning and data mining technologies. We are currently hiring only new graduates and interns; resumes can be sent to me directly. Requirements: 1. graduated from a top school, OR 2. at least one paper published at a top computer science conference.

Paper List

Review

  • Knowledge Transfer via Pre-training for Recommendation: A Review and Prospect, arXiv 2020, [paper]
  • Self-Supervised Learning for Recommender Systems: A Survey, arXiv 2022, [paper]
  • Pre-train, Prompt and Recommendation: A Comprehensive Survey of Language Modelling Paradigm Adaptations in Recommender Systems, arXiv 2022, [paper]
  • How Can Recommender Systems Benefit from Large Language Models: A Survey, arXiv 2023, [paper] [code]

Dataset

  • Yelp [link]
  • Petdata [link]
  • M5Product: Self-harmonized Contrastive Learning for E-commercial Multi-modal Pretraining, CVPR 2022, [paper]
  • Tenrec: A Large-scale Multipurpose Benchmark Dataset for Recommender Systems, NeurIPS 2022, [paper]
  • PixelRec: An Image Dataset for Benchmarking Recommender Systems with Raw Pixels [link], arXiv 2023, [paper]
  • Netflix [link]
  • NineRec: A Benchmark Dataset Suite for Evaluating Transferable Recommendation [link], arXiv 2023, [paper]
  • A Content-Driven Micro-Video Recommendation Dataset at Scale [link], arXiv 2023, [paper]

Empirical Study

  • Where to Go Next for Recommender Systems? ID- vs. Modality-based Recommender Models Revisited, SIGIR 2023, [paper] [code]
  • Generative Recommendation: Towards Next-generation Recommender Paradigm, arXiv 2023, [paper]
  • Exploring Adapter-based Transfer Learning for Recommender Systems: Empirical Studies and Practical Insights, WSDM 2024, [paper] [code]

Sequential / Session-Based Recommendation

  • A Simple Convolutional Generative Network for Next Item Recommendation, WSDM 2019, [paper] [code]
  • BERT4Rec: Sequential Recommendation with Bidirectional Encoder Representations from Transformer, CIKM 2019, [paper] [code]
  • S3-Rec: Self-Supervised Learning for Sequential Recommendation with Mutual Information Maximization, CIKM 2020, [paper] [code]
  • Transformers4Rec: Bridging the Gap between NLP and Sequential / Session-Based Recommendation, RecSys 2021, [paper] [code]
  • Towards Universal Sequence Representation Learning for Recommender Systems, KDD 2022, [paper] [code]
  • Learning Vector-Quantized Item Representation for Transferable Sequential Recommenders, WWW 2023, [paper] [code]
  • MISSRec: Pre-training and Transferring Multi-modal Interest-aware Sequence Representation for Recommendation, ACM MM 2023, [paper] [code]
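Many of the sequential models above (BERT4Rec, S3-Rec, and their successors) build on a masked-item-prediction pre-training objective: randomly mask items in an interaction sequence and train a Transformer encoder to recover them. Below is a minimal, illustrative PyTorch sketch of that objective, assuming plain item-id embeddings; the layer sizes, mask ratio, and vocabulary handling are assumptions for demonstration, not the configuration of any specific paper.

# Illustrative BERT4Rec-style masked-item prediction (hypothetical sizes and mask ratio).
import torch
import torch.nn as nn

class MaskedItemModel(nn.Module):
    def __init__(self, num_items, dim=64, max_len=50, n_layers=2, n_heads=2):
        super().__init__()
        # item id 0 is padding; num_items + 1 is the [MASK] token
        self.mask_id = num_items + 1
        self.item_emb = nn.Embedding(num_items + 2, dim, padding_idx=0)
        self.pos_emb = nn.Embedding(max_len, dim)
        layer = nn.TransformerEncoderLayer(dim, n_heads, dim * 4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.out = nn.Linear(dim, num_items + 2)  # scores over the item vocabulary

    def forward(self, seq):                       # seq: (batch, seq_len) item ids
        pos = torch.arange(seq.size(1), device=seq.device).unsqueeze(0)
        h = self.item_emb(seq) + self.pos_emb(pos)
        h = self.encoder(h, src_key_padding_mask=(seq == 0))
        return self.out(h)                        # (batch, seq_len, vocab)

def masked_item_loss(model, seq, mask_ratio=0.2):
    # Replace a random subset of non-padding positions with [MASK]; predict the originals.
    mask = (torch.rand(seq.shape, device=seq.device) < mask_ratio) & (seq != 0)
    corrupted = seq.masked_fill(mask, model.mask_id)
    logits = model(corrupted)
    target = seq.masked_fill(~mask, -100)         # ignore unmasked positions
    return nn.functional.cross_entropy(
        logits.view(-1, logits.size(-1)), target.view(-1), ignore_index=-100)

# Toy usage: a batch of 4 sequences over a catalog of 1,000 items.
model = MaskedItemModel(num_items=1000)
seq = torch.randint(1, 1001, (4, 50))
masked_item_loss(model, seq).backward()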

User Representation Pretraining

  • Parameter-Efficient Transfer from Sequential Behaviors for User Modeling and Recommendation, SIGIR 2020, [paper] [code]
  • UPRec: User-Aware Pre-training for Recommender Systems, submitted to TKDE in 2021, [paper]
  • U-BERT: Pre-training User Representations for Improved Recommendation, AAAI 2021, [paper]
  • UserBERT: Self-supervised User Representation Learning, arXiv 2021, [paper]
  • One4all User Representation for Recommender Systems in E-commerce, arXiv 2021, [paper]
  • One Person, One Model, One World: Learning Continual User Representation without Forgetting, SIGIR 2021, [paper]
  • Scaling Law for Recommendation Models: Towards General-purpose User Representations, AAAI 2023, [paper]

Two Tower Pretraining

  • Self-supervised Learning for Large-scale Item Recommendations, CIKM 2021, [paper]
  • TransRec: Learning Transferable Recommendation from Mixture-of-Modality Feedback, arXiv 2022, [paper]
  • IntTower: The Next Generation of Two-Tower Model for Pre-Ranking System, CIKM 2022, [paper] [code]
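The shared backbone behind these works is a two-tower (dual-encoder) model: a user tower and an item tower produce embeddings whose dot product scores the pair, typically trained with in-batch negatives. The sketch below is a minimal illustrative PyTorch version; the id-embedding inputs, layer sizes, and temperature are assumptions for the example and do not reproduce any particular paper's design.

# Illustrative two-tower retrieval model with an in-batch softmax loss (hypothetical sizes).
import torch
import torch.nn as nn
import torch.nn.functional as F

class Tower(nn.Module):
    def __init__(self, vocab_size, dim=64):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, dim)
        self.mlp = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))

    def forward(self, ids):
        return F.normalize(self.mlp(self.emb(ids)), dim=-1)  # unit-length embeddings

class TwoTower(nn.Module):
    def __init__(self, num_users, num_items, dim=64, temperature=0.05):
        super().__init__()
        self.user_tower = Tower(num_users, dim)
        self.item_tower = Tower(num_items, dim)
        self.temperature = temperature

    def loss(self, user_ids, item_ids):
        u = self.user_tower(user_ids)             # (batch, dim)
        v = self.item_tower(item_ids)             # (batch, dim)
        logits = u @ v.t() / self.temperature     # (batch, batch): other in-batch items
        labels = torch.arange(len(u), device=u.device)  # serve as negatives
        return F.cross_entropy(logits, labels)

# Toy usage with random ids.
model = TwoTower(num_users=10_000, num_items=50_000)
users = torch.randint(0, 10_000, (32,))
items = torch.randint(0, 50_000, (32,))
model.loss(users, items).backward()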

Language Models as Recommendation Models & Prompt Learning

  • Language Models as Recommender Systems: Evaluations and Limitations, NeurIPS 2021 Workshop ICBINB, [paper]
  • CTR-BERT: Cost-effective Knowledge Distillation for Billion-parameter Teacher Models, [paper]
  • Recommendation as Language Processing (RLP): A Unified Pretrain, Personalized Prompt & Predict Paradigm (P5), RecSys 2022, [paper]
  • M6-Rec: Generative Pretrained Language Models are Open-Ended Recommender Systems, arXiv 2022, [paper]
  • PTab: Using the Pre-trained Language Model for Modeling Tabular Data, arXiv 2022, [paper]
  • Prompt Learning for News Recommendation, SIGIR 2023, [paper]
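Papers in this group recast recommendation as a language task: the user history and a candidate item are verbalized into a prompt, and a pretrained language model is fine-tuned or queried to produce the answer. The snippet below shows one hypothetical prompt template in Python purely to illustrate the idea; each paper (e.g. P5, M6-Rec) defines its own prompts, verbalizers, and training setup.

# Hypothetical prompt template for framing recommendation as a text task.
def build_prompt(user_history, candidate):
    history = ", ".join(user_history)
    return (f"A user recently interacted with: {history}. "
            f"Would the user enjoy '{candidate}'? Answer yes or no.")

prompt = build_prompt(
    user_history=["The Matrix", "Inception", "Blade Runner"],
    candidate="Interstellar",
)
print(prompt)
# The prompt (paired with a "yes"/"no" target) can then be fed to a pretrained
# language model for fine-tuning or zero-shot scoring.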

Large Language Models for Recommendation

  • LLMRec: Large Language Models with Graph Augmentation for Recommendation, WSDM 2024, [paper] [code] [blog in Chinese]
  • Is ChatGPT a Good Recommender? A Preliminary Study, arXiv 2023, [paper]
  • Is ChatGPT Good at Search? Investigating Large Language Models as Re-Ranking Agent, arXiv 2023, [paper]
  • Uncovering ChatGPT's Capabilities in Recommender Systems, arXiv 2023, [paper] [code]
  • Sparks of Artificial General Recommender (AGR): Early Experiments with ChatGPT, arXiv 2023, [paper]
  • Is ChatGPT Fair for Recommendation? Evaluating Fairness in Large Language Model Recommendation, arXiv 2023, [paper] [code]
  • TALLRec: An Effective and Efficient Tuning Framework to Align Large Language Model with Recommendation, arXiv 2023, [paper]
  • PALR: Personalization Aware LLMs for Recommendation, arXiv 2023, [paper]
  • Large Language Models are Zero-Shot Rankers for Recommender Systems, arXiv 2023, [paper]
  • Recommendation as Instruction Following: A Large Language Model Empowered Recommendation Approach, arXiv 2023, [paper]
  • Leveraging Large Language Models in Conversational Recommender Systems, arXiv 2023, [paper]
  • Privacy-Preserving Recommender Systems with Synthetic Query Generation using Differentially Private Large Language Models, arXiv 2023, [paper]
  • Exploring the Upper Limits of Text-Based Collaborative Filtering Using Large Language Models: Discoveries and Insights, arXiv 2023, [paper]
  • A Bi-Step Grounding Paradigm for Large Language Models in Recommendation Systems, arXiv 2023, [paper]
  • CTRL: Connect Tabular and Language Model for CTR Prediction, arXiv 2023, [paper]
  • LlamaRec: Two-Stage Recommendation using Large Language Models for Ranking, PGAI@CIKM 2023, [paper] [code]
  • LLM4Vis: Explainable Visualization Recommendation using ChatGPT, EMNLP Industry 2023, [paper] [code]

Graph Pretraining

  • Curriculum Pre-Training Heterogeneous Subgraph Transformer for Top-N Recommendation, arXiv 2021, [paper]
  • Contrastive Pre-Training of GNNs on Heterogeneous Graphs, CIKM 2021, [paper]
  • Self-supervised Graph Learning for Recommendation, SIGIR 2021, [paper]
  • Self-Supervised Hypergraph Convolutional Networks for Session-based Recommendation, AAAI 2021, [paper]

Workshop and Tutorial

Related hub

https://github.com/CHIANGEL/Awesome-LLM-for-RecSys

Copyright

By Xiangyang Li (xiangyangli@pku.edu.cn) from Peking University.

@misc{rs-pretrain-papers,
  author = {Xiangyang Li},
  title = {awesome-recommend-system-pretraining-papers},
  year = {2022},
  publisher = {GitHub},
  journal = {GitHub repository},
  howpublished = {\url{https://github.com/archersama/awesome-recommend-system-pretraining-papers/}}
}