Soft attention mechanism for video caption generation
Updated Jul 17, 2017 - Python
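As a minimal sketch of what a soft attention step for video captioning might look like (all names and shapes here are illustrative assumptions, not taken from the repository), the decoder scores each frame against its current state and takes a weighted average of frame features:

```python
import torch
import torch.nn.functional as F

def soft_attention(frame_feats, decoder_state, W_f, W_h, w):
    """Compute a soft-attention context over per-frame video features.

    frame_feats:   (T, D) one feature vector per frame
    decoder_state: (H,)   current decoder LSTM hidden state
    W_f, W_h, w:   learned projections (passed in explicitly for clarity)
    """
    # Additive attention: score each frame against the decoder state
    scores = torch.tanh(frame_feats @ W_f + decoder_state @ W_h) @ w  # (T,)
    alphas = F.softmax(scores, dim=0)   # attention weights, sum to 1
    context = alphas @ frame_feats      # (D,) weighted average of frames
    return context, alphas

# Toy usage with random tensors
T, D, H, A = 8, 16, 32, 24
frame_feats = torch.randn(T, D)
decoder_state = torch.randn(H)
W_f, W_h, w = torch.randn(D, A), torch.randn(H, A), torch.randn(A)
context, alphas = soft_attention(frame_feats, decoder_state, W_f, W_h, w)
```

"Soft" here means the model attends to all frames with continuous weights rather than selecting one frame, so the whole step stays differentiable.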
The primary goal was to develop a deep learning model capable of generating descriptive captions for images, empowering visually impaired individuals to perceive visual content through auditory means.
This project uses an Encoder-Decoder architecture with attention to generate descriptive captions for images, with InceptionV3 for feature extraction and an LSTM for decoding. Part of the AI825 Visual Recognition course taught by Prof. Dinesh Babu Jayagopi and Prof. Vishwanath G, IIITB.
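A hedged sketch of how one decoding step in such an architecture might be wired (class name, dimensions, and layer choices are illustrative assumptions, not the repository's code): the decoder attends over the CNN's spatial feature map, concatenates the resulting context with the current word embedding, and feeds that into an LSTM cell.

```python
import torch
import torch.nn as nn

class AttnDecoderStep(nn.Module):
    """One decoding step: attend over CNN features, then update the LSTM."""

    def __init__(self, feat_dim=2048, embed_dim=256, hidden_dim=512, vocab=5000):
        super().__init__()
        self.embed = nn.Embedding(vocab, embed_dim)
        self.attn = nn.Linear(feat_dim + hidden_dim, 1)   # additive scorer
        self.lstm = nn.LSTMCell(embed_dim + feat_dim, hidden_dim)
        self.out = nn.Linear(hidden_dim, vocab)

    def forward(self, token, feats, h, c):
        # feats: (L, feat_dim) spatial features, e.g. from an InceptionV3 backbone
        scores = self.attn(torch.cat([feats, h.expand(feats.size(0), -1)], dim=1))
        alphas = torch.softmax(scores.squeeze(1), dim=0)  # (L,) weights over regions
        context = alphas @ feats                          # (feat_dim,) attended context
        x = torch.cat([self.embed(token).squeeze(0), context])
        h, c = self.lstm(x.unsqueeze(0), (h.unsqueeze(0), c.unsqueeze(0)))
        h, c = h.squeeze(0), c.squeeze(0)
        return self.out(h), h, c

# Toy usage: one greedy step with small dimensions
step = AttnDecoderStep(feat_dim=64, embed_dim=32, hidden_dim=48, vocab=100)
feats = torch.randn(49, 64)                # e.g. a 7x7 spatial grid, flattened
h, c = torch.zeros(48), torch.zeros(48)
logits, h, c = step(torch.tensor([1]), feats, h, c)
```

At inference time this step would be called in a loop, feeding each predicted token back in until an end-of-sequence token is produced.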
Product Title Generation From Image using Semantic Compositional Network and Top-Down Attention Model
Simulated version of the Head Turning Modulation model
This repository explains the attention mechanism for classification tasks, using Recognizing Textual Entailment, a natural language inference task, as the worked example.
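A minimal, hedged sketch of inter-sentence attention for entailment (illustrative only; the repository's actual model may differ): each hypothesis token is softly aligned to the premise tokens via dot-product attention, and the aligned contexts can then feed a classifier.

```python
import torch

def entailment_attention(premise, hypothesis):
    """Soft-align each hypothesis token to premise tokens via dot-product attention.

    premise:    (Lp, D) encoded premise token vectors
    hypothesis: (Lh, D) encoded hypothesis token vectors
    Returns per-hypothesis-token context vectors over the premise.
    """
    scores = hypothesis @ premise.T        # (Lh, Lp) similarity matrix
    alphas = torch.softmax(scores, dim=1)  # each row sums to 1
    contexts = alphas @ premise            # (Lh, D) aligned premise information
    return contexts, alphas

# Toy usage with random encodings
premise = torch.randn(6, 10)
hypothesis = torch.randn(4, 10)
contexts, alphas = entailment_attention(premise, hypothesis)
```

Inspecting the rows of `alphas` shows which premise tokens each hypothesis token attends to, which is what makes attention useful for explaining entailment decisions.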
Offered by deeplearning.ai via Coursera. The course is taught by Younes Bensouda Mourri, Łukasz Kaiser, and Eddy Shyu.
A PyTorch implementation of machine translation on the PHP corpus, based on an RNN model.
Official implementation of the paper: Disagreement attention: Let us agree to disagree on computed tomography segmentation
Contrastive-LSH Embedding and Tokenization Technique for Multivariate Time Series Classification
Solutions for Natural Language Processing Specialization offered by deeplearning.ai on Coursera
Uses several classical deep learning models to solve a multi-label NLP classification problem
This repository contains my coursework and projects completed during the Natural Language Processing Specialization offered by DeepLearning.AI.
All code and data used for an NLP project