Schedule: Language Grounding for Robotics Workshop at ACL 2017
Vancouver, Canada
Date: August 3, 2017

Schedule for the 1st workshop on Language Grounding for Robotics at ACL 2017:

09:00 – 09:15: Welcome and Opening Remarks
09:15 – 09:50: Joyce Chai, MSU: Teaching Robots New Tasks through Language Instructions
09:50 – 10:25: Ray Mooney, UT Austin: Robots that Learn Grounded Language Through Interactive Dialog
10:30 – 11:00: Coffee Break
11:00 – 11:35: Stefanie Tellex, Brown: Learning Models of Language, Action and Perception for Human-Robot Collaboration
11:35 – 12:10: Nicholas Roy, MIT: Representations vs Algorithms: Symbols and Geometry in Robotics
12:10 – 14:00: Poster Session: All Accepted Regular and Cross-Submission Papers Listed Below (Lunch from 12:30 – 14:00)
14:00 – 14:35: Percy Liang, Stanford: Bridging the Language-Action Gap
14:35 – 15:10: Jason Weston, Facebook AI Research: (End-to-End Methods for) Dialogue, Interaction and Learning
15:10 – 15:20: Contributed Talk: Yanchao Yu, Arash Eshghi and Oliver Lemon. Learning how to Learn: An Adaptive Dialogue Agent for Incrementally Learning Visually Grounded Word Meanings.
15:20 – 15:30: Contributed Talk: Muhannad Alomari, Paul Duckworth, Majd Hawasly, David C. Hogg and Anthony G. Cohn. Natural Language Grounding and Grammar Induction for Robotic Manipulation Commands.
15:30 – 16:00: Mid-afternoon Snacks
16:00 – 16:10: Contributed Talk: Siddharth Karamcheti, Edward Clem Williams, Dilip Arumugam, Mina Rhee, Nakul Gopalan, Lawson L.S. Wong and Stefanie Tellex. A Tale of Two DRAGGNs: A Hybrid Approach for Interpreting Action-Oriented and Goal-Oriented Instructions.
16:10 – 16:45: Karl Moritz Hermann, Google DeepMind: Grounded Language Learning in Simulated Worlds
16:45 – 17:45: Panel (Jason, Joyce, Karl, Nicholas, Percy, Ray, Stefanie, Terry)



Accepted Regular Workshop Papers

Siddharth Karamcheti, Edward Clem Williams, Dilip Arumugam, Mina Rhee, Nakul Gopalan, Lawson L.S. Wong and Stefanie Tellex
Interpreting Action-Based and Goal-Based Instructions in a Mobile-Manipulator Domain

Jesse Thomason, Jivko Sinapov and Raymond Mooney
Guiding Interaction Behaviors for Multi-modal Grounded Language Learning

Peter Lindes, Aaron Mininger, James R. Kirk and John E. Laird
Grounding Language for Interactive Task Learning

Anjali Narayan-Chen, Colin Graber, Mayukh Das, Md Rakibul Islam, Soham Dan, Sriraam Natarajan, Janardhan Rao Doppa, Julia Hockenmaier, Martha Palmer and Dan Roth
Towards Problem Solving Agents that Communicate and Learn

Yanchao Yu, Arash Eshghi and Oliver Lemon
Learning how to learn: an adaptive dialogue agent for incrementally learning visually grounded word meanings

Matthew Marge, Claire Bonial, Ashley Foots, Cory Hayes, Cassidy Henry, Kimberly Pollard, Ron Artstein, Clare Voss and David Traum
Exploring Variation of Natural Human Commands to a Robot in a Collaborative Navigation Task

Andrea Vanzo, Danilo Croce, Roberto Basili and Daniele Nardi
Structured Learning for Context-aware Spoken Language Understanding of Robotic Commands

Muhannad Alomari, Paul Duckworth, Majd Hawasly, David C. Hogg and Anthony G. Cohn
Natural Language Grounding and Grammar Induction for Robotic Manipulation Commands

Bedřich Pišl and David Mareček
Communication with Robots using Multilayer Recurrent Networks

Yordan Hristov, Svetlin Penkov, Alex Lascarides and Subramanian Ramamoorthy
Grounding Symbols in Multi-Modal Instructions

Li Lucy and Jon Gauthier
Are distributional representations ready for the real world? Evaluating word vectors for grounded perceptual meaning

Jekaterina Novikova, Christian Dondrup, Ioannis Papaioannou and Oliver Lemon
Sympathy Begins with a Smile, Intelligence Begins with a Word: Use of Multimodal Features in Spoken Human-Robot Interaction


Accepted Cross-Submission Papers

Devendra Singh Chaplot, Kanthashree Mysore Sathyendra, Rama Kumar Pasumarthi, Dheeraj Rajagopal and Ruslan Salakhutdinov
Gated-Attention Architectures for Task-Oriented Language Grounding

Michael Spranger
Procedural Semantics - A Case Study in Locative Spatial Language (IROS 2015)

Lanbo She and Joyce Chai
State Based Grounded Verb Semantic and Interactive Acquisition (ACL 2017)

Rohan Paul, Andrei Barbu, Sue Felshin, Boris Katz and Nicholas Roy
Temporal Grounding Graphs for Language Understanding with Accrued Visual-Linguistic Context (IJCAI 2017)

Muhannad Alomari, Paul Duckworth, Majd Hawasly, Nils Bore, David Hogg and Anthony Cohn
Grounding of Human Environments and Activities for Autonomous Robots (IJCAI 2017)

Alane Suhr, Mike Lewis, James Yeh and Yoav Artzi
A Corpus of Natural Language for Visual Reasoning (ACL 2017)

Mark Yatskar, Luke Zettlemoyer and Ali Farhadi
Situation Recognition: Visual Semantic Role Labeling for Image Understanding (CVPR 2016)