Assembling Motion-Interaction Knowledge for Agents
Updated May 31, 2024
PixelEDL: Unsupervised Skill Discovery and Learning from Pixels
This is a curated list of awesome papers on Embodied AI.
The third-generation implementation of animals, where an aeon is hosted inside a bipedal robot
Repo for ICCV'23 Workshops "Cross-Dimensional Refined Learning for Real-Time 3D Visual Perception from Monocular Video"
[ICRA 2024] SG-Bot: Object Rearrangement via Coarse-to-Fine Robotic Imagination on Scene Graphs
Autoencoder-Enhanced Vision-Language Navigation in ViZDoom, KBS 2023
Implementation of Multiplicative Compositional Policies (MCP)
Discovery and Learning of Minecraft Navigation Goals from Pixels and Coordinates
[IROS22 Oral] Optimization of Forcemyography Sensor Placement for Arm Movement Recognition https://arxiv.org/abs/2207.10915
Code for paper "Modality Plug-and-Play: Elastic Modality Adaptation in Multimodal LLMs for Embodied AI"
Good Time to Ask: A Learning Framework for Asking for Help in Embodied Visual Navigation
Evaluation tasks for ObjectNav models
🌎 The Website for the Embodied AI Workshop at CVPR
A commonsense reasoning dataset on the physical affordances of objects.
Transformer + reinforcement learning for navigation in POMDPs
🎉🎨 This repository contains a reading list of papers on **Embodied AI**.
LACMA: Language-Aligning Contrastive Learning with Meta-Actions for Embodied Instruction Following
Papers on integrating large language models with embodied AI
Paper & Project lists of cutting-edge research on visual navigation and embodied AI.