Slides from my NLP course on recurrent neural networks and attention mechanisms
Deep learning project for the keyword spotting task. I designed two architectures and compared them against an existing approach from the literature on the Speech Commands dataset, obtaining competitive results.
This project is a schedule and attendance management system for students and lecturers. The system is built with Go and uses GORM as the ORM for database interaction.
Playground for testing and implementing various Vision Models
A simple but complete full-attention transformer with a set of promising experimental features from various papers
Orchestrate swarms of agents from any framework, such as OpenAI and LangChain, for business operation automation. Join our community: https://discord.gg/DbjBMJTSWD
LSTM-ARIMA with Attention and Multiplicative Decomposition for Sophisticated Stock Forecasting.
Text Summarization Modeling with three different Attention Types
QuillGPT is a PyTorch implementation of the GPT decoder block based on the architecture from the Attention Is All You Need paper by Vaswani et al. The repository also contains two pre-trained models (Shakespearean GPT and Harpoon GPT), a Streamlit playground, a containerized FastAPI microservice, and training and inference scripts and notebooks.
GPT-based protein language model for PTM site prediction
An implementation of the GPT (Generative Pretrained Transformer) model from scratch, along with the GPT encoder, that produces Shakespearean text by training on Shakespeare's dialogues.
SSM-DTA: Breaking the Barriers of Data Scarcity in Drug-Target Affinity Prediction (Briefings in Bioinformatics 2023)
[ICML 2024] Outlier-Efficient Hopfield Layers for Large Transformer-Based Models
Code for the paper: Mixed Models with Multiple Instance Learning
Simple character-level Transformer
Algorithm for stroke occlusion detection. Work in progress, developed in the context of the France 2030 stroke research project (BOOSTER).
Simple Llama-architecture LLM in PyTorch
Simplified Implementation of SOTA Deep Learning Papers in PyTorch
A novel implementation fusing ViT with Mamba into a fast, agile, and high-performance multi-modal model. Powered by Zeta, the simplest AI framework ever.