Multimodal Deep Q-Network (MDQN) for modelling human-like social intelligence.
Visual Question Answering project as a part of 11-777 course requirements
Python script to automatically upload multimodal data to a repovizz repository, developed within the TELMI project at MTG, Universitat Pompeu Fabra.
Keras: multimodal deep learning for semantic segmentation (RGB and NIR streams), with multiple architectures.
Code for an ACL 2018 Multimodal Language Workshop paper.
Predicting adult-site user numbers from multimodal sources (image, text, and tags).
[IN PROGRESS] Multimodal feature extraction modules for ease of doing research and reproducibility.
TCyb 2018: Graph learning for multiview clustering
Converts speech into a haptic signal.
Mode normalization (ICLR 2019).
Repository for the conference article "Enhancing the AI2 Diagrams dataset using Rhetorical Structure Theory", published in the Proceedings of the 11th International Language Resources and Evaluation Conference.
Prognostically Relevant Subtypes and Survival Prediction for Breast Cancer Based on Multimodal Genomics Data
Integrating machine learning and multimodal neuroimaging to detect schizophrenia at the level of the individual.
Deception Detection project website
Analysing Adversarial Loss of Social GAN
This repository contains the source code for the paper "Improving the performance of unimodal dynamic hand gesture recognition with multimodal training"
A repository associated with the article "Statistics For Multimodality: why, when, how – an invitation".
Behavioral data analysis and plotting in Python.