Welcome to the Heterogeneous Learning and Reasoning Lab

We conduct research on machine learning, natural language processing, and combining vision and language. We work on information and semantic extraction from language. We investigate methodologies for the interplay between learning and reasoning, and develop techniques to incorporate declarative and procedural knowledge into statistical/neural learning. We develop research software and build prototypes to facilitate designing AI systems. We also aim to apply our techniques and tools to real-world problems and conduct multi-disciplinary research that benefits society and makes the world a better place to live.

If you would like to learn more about our work, you can find it on GitHub: https://github.com/HLR/

News

June 2nd 2021

Our paper on "Zero-Shot Compositional Concept Learning" is accepted for ACL-2021 workshop on MetaNLP. Congratulations to Guangyue et al.

June 2nd 2021

Our paper on "Towards Navigation by Reasoning over Spatial Configurations" is accepted for ACL-2021 workshop on SpLU-RoboNLP. Congratulations to Yue et al.

May 4th 2021

Congratulations to Sushanta on defending his Master's thesis and graduating!

April 29th 2021

Our paper on "Relational Gating for ''What If'' Reasoning" is accepted for IJCAI-2021. Congratulations to Chen et al.!

March 10th 2021

Our paper on "Time-Stamped Language Model, Teaching Language Models to Understand The Flow of Events" is accepted for NAACL-2021. Congratulations to Hossein et al.

March 10th 2021

Our paper on "SPARTQA, A Textual Question Answering Benchmark for Spatial Reasoning" is accepted for NAACL-2021. Congratulations to Roshanak et al.

October 30th 2020

Our poster "SPARTQA, A Textual Question Answering Benchmark for Spatial Reasoning" won the "best poster award" in 3rd Michigan AI Symposium, University of Michigan, Ann Arbor. Congratulations to Roshanak et al.

September 15th 2020

Our paper on "SRLGRN Semantic Role Labeling Graph Reasoning Network" is accepted for EMNLP-2020. Congratulations to Chen et al.!

... see all News