Paul Liang
email:  ppliang (at) mit (dot) edu
office:  Wiesner Building E15-392

CV | Bio | Google Scholar
Github | Twitter

Research, teaching, diversity statements

I am an Assistant Professor at the MIT Media Lab and MIT EECS, where I direct the Multisensory Intelligence research group.

In summer 2024, I was a visiting researcher in the AI, psychology, and neuroscience program at UC Berkeley's Simons Institute for the Theory of Computing. Previously, I received my Ph.D. from the Machine Learning Department at Carnegie Mellon University, advised by Louis-Philippe Morency and Ruslan Salakhutdinov.

Prospective students: I am hiring at all levels (postdocs, PhD students, master's students, undergrads, and visitors). If you want to join MIT as a graduate student, please apply through the programs in Media Arts & Sciences, EECS, or IDSS, and mention my name in your application.
I'm also happy to collaborate and answer questions about my research and MIT programs. I especially encourage students from underrepresented groups to reach out.


Multisensory Intelligence Group

Our group studies the foundations of multisensory AI and its impact on the human experience, through three complementary thrusts:

(1) Foundations of multisensory AI: The science and engineering of AI systems that can learn and interact with the world through integrating diverse sensory channels.

(2) Enhancing human experiences: Designing interactive AI technologies to augment human capabilities and improve overall well-being.

(3) Real-world human-AI interaction: Quantifying and mitigating real-world societal concerns for responsible deployment.

Group
Chanakya Ekbote (MAS)
David Dai (MAS)
Megan Tjandrasuwita (EECS PhD, co-advised with Armando Solar-Lezama)
Ao Qu (IDSS PhD, co-advised with Jinhua Zhao)
Devin Murphy (EECS MEng, co-advised with Wojciech Matusik)
Lily Chen, Dewei Feng, Jimin Lee, Peilin Chen, Adithya Balachandran (EECS MEngs)
Minseok Jung (IDSS MS, co-advised with Lalana Kagal)
Steven Chen, Hengzhi Li (UROPs)
Ziyin Liu, Aaron Han (Visiting researchers)
Former students
Haofei Yu, now PhD student at UIUC
Rohan Pandey, now researcher at OpenAI (best CMU senior thesis award)
Yun Cheng, now PhD student at Princeton
Rulin Shao, now PhD student at the University of Washington
Xiang Fan, now PhD student at the University of Washington (CRA outstanding undergrad researcher honorable mention)
Jivat Neet, then research fellow at Microsoft Research, now PhD student at UC Berkeley
Yiwei Lyu, now PhD student at the University of Michigan (CRA outstanding undergrad researcher honorable mention)
Yuxin Xiao, now PhD student at MIT
Peter Wu, now PhD student at UC Berkeley
Dong Won Lee, now PhD student at MIT
Terrance Liu, now PhD student at CMU
Chengfeng Mao, now PhD student at MIT
Ziyin Liu, then PhD student at the University of Tokyo, now postdoc at MIT

Teaching
Spring 2025: MIT MAS.S60 How to AI (Almost) Anything
Spring 2025: MIT 6.390 Introduction to Machine Learning
Spring 2024: CMU 11-877 Advanced Topics in Multimodal Machine Learning, with Daniel Fried
Fall 2023: CMU 11-777 Multimodal Machine Learning, with Louis-Philippe Morency
Summer 2023: African Masters in Machine Intelligence course on Multimodal AI (day1, day2, day3, day4)
2022-2023: Tutorials on Multimodal ML at ICML, ICMI, CVPR, NAACL with Louis-Philippe Morency
Spring 2023: CMU 11-866 Artificial Social Intelligence, with Louis-Philippe Morency
Spring 2023: CMU 11-877 Advanced Topics in Multimodal Machine Learning, with Louis-Philippe Morency
Fall 2022: CMU 11-777 Multimodal Machine Learning, with Louis-Philippe Morency
Spring 2022: CMU 11-877 Advanced Topics in Multimodal Machine Learning, with Louis-Philippe Morency, Amir Zadeh

Selected publications (see full list on Google Scholar)


MimeQA: Towards Socially-Intelligent Nonverbal Foundation Models
Hengzhi Li, Megan Tjandrasuwita, Yi R. Fung, Armando Solar-Lezama, Paul Pu Liang
arXiv 2025

CLIMB: Data Foundations for Large-Scale Multimodal Clinical Foundation Models
David Dai, Peilin Chen, Malinda Lu, Daniel Li, Haowen Wei, Hejie Cui, Paul Pu Liang
ICML 2025

Understanding the Emergence of Multimodal Representation Alignment
Megan Tjandrasuwita, Chanakya Ekbote, Liu Ziyin, Paul Pu Liang
ICML 2025

Interactive Sketchpad: An Interactive Multimodal System for Collaborative, Visual Problem-Solving
Jimin Lee, Steven Chen, Paul Pu Liang
CHI 2025 Late-Breaking Work

Progressive Compositionality In Text-to-Image Generative Models
Evans Han, Linghao Jin, Xiaofeng Liu, Paul Pu Liang
ICLR 2025 (spotlight)

HEMM: Holistic Evaluation of Multimodal Foundation Models
Paul Pu Liang, Akshay Goindani, Talha Chafekar, Leena Mathur, Haofei Yu, Ruslan Salakhutdinov, Louis-Philippe Morency
NeurIPS 2024

Foundations and Trends in Multimodal Machine Learning: Principles, Challenges, and Open Questions
Paul Pu Liang, Amir Zadeh, Louis-Philippe Morency
ACM Computing Surveys 2024, Tutorials at ICML & ICMI 2023, CVPR & NAACL 2022

Quantifying & Modeling Multimodal Interactions: An Information Decomposition Framework
Paul Pu Liang, Yun Cheng, Xiang Fan, Chun Kai Ling, Suzanne Nie, Richard Chen, Zihao Deng, Nick Allen, Randy Auerbach, Faisal Mahmood, Russ Salakhutdinov, Louis-Philippe Morency
NeurIPS 2023

High-Modality Multimodal Transformer: Quantifying Modality & Interaction Heterogeneity for High-Modality Representation Learning
Paul Pu Liang, Yiwei Lyu, Xiang Fan, Jeffrey Tsaw, Yudong Liu, Shentong Mo, Dani Yogatama, Louis-Philippe Morency, Ruslan Salakhutdinov
TMLR 2022

MultiBench: Multiscale Benchmarks for Multimodal Representation Learning
Paul Pu Liang, Yiwei Lyu, Xiang Fan, Zetian Wu, Yun Cheng, Jason Wu, Leslie Chen, Peter Wu, Michelle Lee, Yuke Zhu, Ruslan Salakhutdinov, Louis-Philippe Morency
NeurIPS 2021, JMLR Open Source Software 2022

Towards Understanding and Mitigating Social Biases in Language Models
Paul Pu Liang, Chiyu Wu, Louis-Philippe Morency, Ruslan Salakhutdinov
ICML 2021

Think Locally, Act Globally: Federated Learning with Local and Global Representations
Paul Pu Liang*, Terrance Liu*, Liu Ziyin, Nicholas Allen, Randy Auerbach, David Brent, Ruslan Salakhutdinov, Louis-Philippe Morency
NeurIPS 2019 Workshop on Federated Learning (oral, distinguished student paper award)

Multimodal Transformer for Unaligned Multimodal Language Sequences
Yao-Hung Hubert Tsai, Shaojie Bai, Paul Pu Liang, Zico Kolter, Louis-Philippe Morency, Ruslan Salakhutdinov
ACL 2019

Multimodal Language Analysis in the Wild: CMU-MOSEI Dataset and Interpretable Dynamic Fusion Graph
Amir Zadeh, Paul Pu Liang, Soujanya Poria, Erik Cambria, Louis-Philippe Morency
ACL 2018 (oral)


Modified version of template from here