A curated list of papers presenting interesting empirical studies and insights on deep learning. Continually updated...
Updated: May 17, 2024
MDL Complexity computations and experiments from the paper "Revisiting complexity and the bias-variance tradeoff".
Code for the arXiv paper "Double Descent Demystified: Identifying, Interpreting & Ablating the Sources of a Deep Learning Puzzle".
Explores the double-descent phenomenon in the context of system identification. Companion code to the paper at https://arxiv.org/abs/2012.06341.
This repository is the official implementation of "Optimization Variance: Delve into the Epoch-Wise Double Descent of DNNs".
Double-descent results for FCNNs on MNIST, extended with label noise ("Reconciling Modern Machine-Learning Practice and the Classical Bias–Variance Trade-Off") [Python/PyTorch].
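As a minimal sketch of the label-noise extension mentioned above (the function name `corrupt_labels` and the symmetric noise model are assumptions for illustration, not taken from the repo): a fixed fraction of labels is replaced with a uniformly random *different* class.

```python
import numpy as np

def corrupt_labels(y, noise_rate, num_classes=10, seed=0):
    """Replace a fraction `noise_rate` of labels with uniformly random other classes.

    Symmetric label noise: a corrupted label is never equal to the original one,
    because we add an offset drawn from 1..num_classes-1 modulo num_classes.
    """
    rng = np.random.default_rng(seed)
    y = np.asarray(y).copy()
    # Pick exactly int(noise_rate * len(y)) indices to corrupt, without replacement.
    idx = rng.choice(len(y), size=int(noise_rate * len(y)), replace=False)
    y[idx] = (y[idx] + rng.integers(1, num_classes, size=len(idx))) % num_classes
    return y
```

Applied to MNIST labels before training, this is the standard way such studies raise the interpolation peak: more label noise makes the double-descent bump more pronounced.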
This project outlines four experiments exploring the effects of several settings on the bias-variance tradeoff curve.
Assignments from my CST Part II Deep Neural Networks unit.
ICLR 2022: Phenomenology of Double Descent in Finite-width Neural Networks
Implementation of the double-descent deep learning phenomenon from the article "Grokking: Generalization Beyond Overfitting".
Interpolating Neural Networks in Asset Pricing Data. Supports Distributed Training in TensorFlow.
A Review of Preetum Nakkiran's "More Data Can Hurt for Linear Regression: Sample-wise Double Descent"
Double descent experiments/repros on classical ML models and deep neural nets
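A complementary model-wise sketch in the spirit of these classical-model repros (random ReLU features on synthetic data; all sizes and the feature map are illustrative assumptions): holding the training set fixed and growing the number of random features, test error typically peaks where the feature count equals the sample count.

```python
import numpy as np

rng = np.random.default_rng(1)
n_train, n_test, d = 40, 500, 10         # illustrative sizes

X = rng.normal(size=(n_train, d))
w = rng.normal(size=d) / np.sqrt(d)
y = X @ w + 0.3 * rng.normal(size=n_train)
Xt = rng.normal(size=(n_test, d))
yt = Xt @ w + 0.3 * rng.normal(size=n_test)

def rf_risk(p, seed):
    """Minimum-norm regression on p random ReLU features; return test MSE."""
    rng_f = np.random.default_rng(seed)
    W = rng_f.normal(size=(d, p)) / np.sqrt(d)   # random first-layer weights
    phi = np.maximum(X @ W, 0)                   # fixed random ReLU features
    phit = np.maximum(Xt @ W, 0)
    beta = np.linalg.pinv(phi) @ y               # minimum-norm fit
    return np.mean((phit @ beta - yt) ** 2)

# Average over feature draws; risk typically peaks at p == n_train.
risks = {p: np.mean([rf_risk(p, s) for s in range(20)]) for p in (5, 40, 400)}
```

This is the classical-model analogue of the deep-net curves: underparameterized (p = 5), at the interpolation threshold (p = 40 = n_train), and overparameterized (p = 400), where the minimum-norm solution generalizes again.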
Toy dataset to study double descent optimization patterns in machine learning.