Molecule Transformers
Popular repositories
- smiles-featurizers (Python, 14 stars)
  Extract molecular SMILES embeddings from language models pre-trained with various objectives and architectures.
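The embedding-extraction idea behind smiles-featurizers can be sketched as pooling a transformer's token-level hidden states into one fixed-size vector per SMILES string. This is a minimal illustration only: the `mean_pool` helper is hypothetical (not the repository's API), and the hidden states here are random stand-ins for a real model's output.

```python
import numpy as np

def mean_pool(hidden_states, attention_mask):
    """Average token embeddings across the sequence, ignoring padding positions."""
    mask = attention_mask[..., None].astype(hidden_states.dtype)  # (batch, seq, 1)
    summed = (hidden_states * mask).sum(axis=1)                   # sum of real tokens
    counts = mask.sum(axis=1).clip(min=1e-9)                      # number of real tokens
    return summed / counts                                        # (batch, hidden)

# Stand-in for a model's last hidden state: 2 SMILES, 5 tokens, 8 dims.
rng = np.random.default_rng(0)
states = rng.normal(size=(2, 5, 8))
mask = np.array([[1, 1, 1, 0, 0],   # first SMILES has 3 real tokens
                 [1, 1, 1, 1, 1]])  # second has 5
embeddings = mean_pool(states, mask)  # shape (2, 8)
```

In practice the random `states` would come from a pre-trained encoder's forward pass; the mask-aware average keeps padding tokens from diluting the embedding.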
- moleculenet-smiles-bert-mixup
  Train a pre-trained BERT language model on molecular SMILES from the MoleculeNet benchmark, leveraging mixup and enumeration augmentations.
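Mixup, as referenced above, augments training data by taking convex combinations of example pairs and their labels, with the mixing coefficient drawn from a Beta distribution. A minimal NumPy sketch of the idea follows; the `alpha` value, batch shapes, and `mixup` helper are illustrative assumptions, not the repository's actual settings or interface.

```python
import numpy as np

def mixup(x, y, alpha=0.2, rng=None):
    """Return convex combinations of shuffled example pairs and their labels."""
    rng = rng or np.random.default_rng()
    lam = rng.beta(alpha, alpha)       # mixing coefficient in (0, 1)
    perm = rng.permutation(len(x))     # random partner for each example
    x_mixed = lam * x + (1 - lam) * x[perm]
    y_mixed = lam * y + (1 - lam) * y[perm]
    return x_mixed, y_mixed

# Toy batch: 4 sentence-level embeddings with one-hot labels for 2 classes.
rng = np.random.default_rng(1)
x = rng.normal(size=(4, 8))
y = np.eye(2)[[0, 1, 0, 1]]
xm, ym = mixup(x, y, rng=rng)  # same shapes as x and y
```

For SMILES, mixup is typically applied to continuous representations (embeddings or logits) rather than the discrete strings themselves, which is why the sketch operates on vectors.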
- moleculetransformers.github.io
  Documentation for Molecule Transformers.
- moleculenet-bert-ssl (Python, 1 star)
  Semi-supervised learning techniques (pseudo-labeling, MixMatch, and co-training) for a pre-trained BERT language model in low-data regimes, based on molecular SMILES from the MoleculeNet benchmark.
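Of the semi-supervised techniques listed, pseudo-labeling is the simplest: a model trained on the small labeled set predicts on unlabeled SMILES, and only high-confidence predictions are adopted as training labels. The sketch below shows the selection step in plain NumPy; the `pseudo_label` helper and the 0.85 threshold are illustrative assumptions, not the repository's implementation.

```python
import numpy as np

def pseudo_label(probs, threshold=0.95):
    """Keep unlabeled examples whose top predicted class probability clears the threshold."""
    conf = probs.max(axis=1)        # model confidence per example
    keep = conf >= threshold        # mask of examples promoted to the training set
    labels = probs.argmax(axis=1)   # predicted class becomes the pseudo-label
    return keep, labels

# Predicted class probabilities for 3 unlabeled molecules (binary task).
probs = np.array([[0.98, 0.02],
                  [0.60, 0.40],
                  [0.10, 0.90]])
keep, labels = pseudo_label(probs, threshold=0.85)
# keep -> [True, False, True]; labels -> [0, 0, 1]
```

The middle example is dropped because neither class clears the confidence threshold, which is how pseudo-labeling limits the noise it feeds back into training.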
- smiles-augment
  Augment molecular SMILES with methods including enumeration and mixup, for low-data-regime settings in downstream supervised drug discovery tasks.
-
rdkit-benchmarking-platform-transformers
rdkit-benchmarking-platform-transformers PublicPort of RDKit Benchmarking platform for pre-trained transformers-based language models for virtual screening drug discovery task.
Python