Paul Pu Liang
Email: pliang(at)cs.cmu.edu
Office: Gates and Hillman Center 8011
5000 Forbes Avenue, Pittsburgh, PA 15213
Machine Learning Department and Language Technologies Institute, School of Computer Science, Carnegie Mellon University
[CV]
@pliang279
@lpwinniethepu
Publications
(* denotes joint first authors)
2024
- Foundations of Multisensory Artificial Intelligence
Paul Pu Liang
PhD Thesis. Committee: Louis-Philippe Morency, Ruslan Salakhutdinov, Manuel Blum, Lenore Blum, Trevor Darrell
[arXiv]
- Foundations and Trends in Multimodal Machine Learning: Principles, Challenges, and Open Questions
Paul Pu Liang, Amir Zadeh, Louis-Philippe Morency
ACM Computing Surveys; tutorials at ICML 2023, ICMI 2023, CVPR 2022, and NAACL 2022
[arXiv] [tutorial website] [tutorial videos]
- Think Twice: Perspective-Taking Improves Large Language Models' Theory-of-Mind Capabilities
Alex Wilf, Sihyun Shawn Lee, Paul Pu Liang, Louis-Philippe Morency
ACL 2024
[arXiv] [code]
- Modeling Dense Multimodal Interactions Between Biological Pathways and Histology for Survival Prediction
Guillaume Jaume, Anurag Vaidya, Richard Chen, Drew Williamson, Paul Pu Liang, Faisal Mahmood
CVPR 2024
[arXiv] [code]
- FLHetBench: Benchmarking Device and State Heterogeneity in Federated Learning
Junyuan Zhang, Shuang Zeng, Miao Zhang, Runxi Wang, Feifei Wang, Yuyin Zhou, Paul Pu Liang, Liangqiong Qu
CVPR 2024
[arXiv]
- Multimodal Learning Without Labeled Multimodal Data: Guarantees and Applications
Paul Pu Liang, Chun Kai Ling, Yun Cheng, Alex Obolenskiy, Yudong Liu, Rohan Pandey, Alex Wilf, Louis-Philippe Morency, Ruslan Salakhutdinov
ICLR 2024
[arXiv] [code]
2023
- Quantifying & Modeling Multimodal Interactions: An Information Decomposition Framework
Paul Pu Liang, Yun Cheng, Xiang Fan, Chun Kai Ling, Suzanne Nie, Richard Chen, Zihao Deng, Nicholas Allen, Randy Auerbach, Faisal Mahmood, Ruslan Salakhutdinov, Louis-Philippe Morency
NeurIPS 2023
[arXiv] [code]
- Factorized Contrastive Learning: Going Beyond Multi-view Redundancy
Paul Pu Liang*, Zihao Deng*, Martin Ma*, James Zou, Louis-Philippe Morency, Ruslan Salakhutdinov
NeurIPS 2023
[arXiv] [code]
- Localized Symbolic Knowledge Distillation for Visual Commonsense Models
Jae Sung Park, Jack Hessel, Khyathi Chandu, Paul Pu Liang, Ximing Lu, Qiuyuan Huang, Peter West, Jianfeng Gao, Ali Farhadi, Yejin Choi
NeurIPS 2023
[arXiv] [code]
- Read and Reap the Rewards: Learning to Play Atari with the Help of Instruction Manuals
Yue Wu, Yewen Fan, Paul Pu Liang, Amos Azaria, Yuanzhi Li, Tom Mitchell
NeurIPS 2023, ICLR 2023 Workshop on Reincarnating RL (oral)
[arXiv] [code]
- Difference-Masking: Choosing What to Mask in Continued Pretraining
Alex Wilf, Syeda Akter, Leena Mathur, Paul Pu Liang, Sheryl Mathew, Mengrou Shou, Eric Nyberg, Louis-Philippe Morency
EMNLP 2023 Findings
[arXiv]
- Multimodal Fusion Interactions: A Study of Human and Automatic Quantification
Paul Pu Liang, Yun Cheng, Ruslan Salakhutdinov, Louis-Philippe Morency
ICMI 2023
[arXiv] [code]
- HIINT: Historical, Intra- and Inter-personal Dynamics Modeling with Cross-person Memory Transformer
Yubin Kim, Dong Won Lee, Paul Pu Liang, Sharifa Algohwinem, Cynthia Breazeal, Hae Won Park
ICMI 2023
[arXiv]
- Multimodal Lecture Presentations Dataset: Understanding Multimodality in Educational Slides
Dong Won Lee, Chaitanya Ahuja, Paul Pu Liang, Sanika Natu, Louis-Philippe Morency
ICCV 2023
[arXiv] [code]
- Cross-modal Attention Congruence Regularization for Vision-Language Relation Alignment
Rohan Pandey, Rulin Shao, Paul Pu Liang, Ruslan Salakhutdinov, Louis-Philippe Morency
ACL 2023
[arXiv] [code]
- Language Models Get a Gender Makeover: Mitigating Gender Bias with Few-Shot Data Interventions
Himanshu Thakur, Atishay Jain, Praneetha Vaddamanu, Paul Pu Liang, Louis-Philippe Morency
ACL 2023
[arXiv] [code]
- Nano: Nested Human-in-the-Loop Reward Learning for Few-shot Language Model Control
Xiang Fan, Yiwei Lyu, Paul Pu Liang, Ruslan Salakhutdinov, Louis-Philippe Morency
ACL 2023 Findings, NeurIPS 2022 Workshop on Human in the Loop Learning (oral, best paper nomination)
[arXiv] [code]
- MultiViz: Towards Visualizing and Understanding Multimodal Models
Paul Pu Liang, Yiwei Lyu, Gunjan Chhablani, Nihal Jain, Zihao Deng, Xingbo Wang, Louis-Philippe Morency, Ruslan Salakhutdinov
ICLR 2023, CHI 2023 Late Breaking Work
[arXiv] [code]
- FindAdaptNet: Find and Insert Adapters by Learned Layer Importance
Junwei Huang, Karthik Ganesan, Soumi Maiti, Young Min Kim, Xuankai Chang, Paul Pu Liang, Shinji Watanabe
ICASSP 2023
[paper]
- Face-to-Face Contrastive Learning for Social Intelligence Question-Answering
Alex Wilf, Martin Ma, Paul Pu Liang, Amir Zadeh, Louis-Philippe Morency
FG 2023
[arXiv] [code]
- Beyond the Imitation Game: Quantifying and Extrapolating the Capabilities of Language Models
442 authors including Paul Pu Liang
TMLR 2023
[arXiv] [code]
2022
- High-Modality Multimodal Transformer: Quantifying Modality & Interaction Heterogeneity for High-Modality Representation Learning
Paul Pu Liang, Yiwei Lyu, Xiang Fan, Jeffrey Tsaw, Yudong Liu, Dani Yogatama, Louis-Philippe Morency, Ruslan Salakhutdinov
TMLR 2022
[arXiv] [code]
- MultiZoo & MultiBench: A Standardized Toolkit for Multimodal Deep Learning
Paul Pu Liang, Yiwei Lyu, Xiang Fan, Arav Agarwal, Yun Cheng, Louis-Philippe Morency, Ruslan Salakhutdinov
JMLR Open Source Software 2022
[website] [code]
- Fundamentals of Multimodal Representation Learning: Towards Generalization and Quantification
Paul Pu Liang
PhD Thesis Proposal. Committee: Louis-Philippe Morency, Ruslan Salakhutdinov, Manuel Blum, Lenore Blum, Trevor Darrell
[document]
- Brainish: Formalizing A Multimodal Language for Intelligence and Consciousness
Paul Pu Liang
Association for the Scientific Study of Consciousness 2022, Models of Consciousness 2022 (oral)
[arXiv]
- Uncertainty Quantification with Pre-trained Language Models: A Large-scale Empirical Analysis
Yuxin Xiao, Paul Pu Liang, Umang Bhatt, Willie Neiswanger, Ruslan Salakhutdinov, Louis-Philippe Morency
EMNLP 2022 Findings
[arXiv] [code]
- GEMv2: Multilingual NLG Benchmarking in a Single Line of Code
77 authors including Paul Pu Liang
EMNLP Demo Track 2022
[arXiv] [code]
- PACS: A Dataset for Physical Audiovisual Commonsense Reasoning
Samuel Yu, Peter Wu, Paul Pu Liang, Ruslan Salakhutdinov, Louis-Philippe Morency
ECCV 2022
[arXiv] [code]
- DIME: Fine-grained Interpretations of Multimodal Models via Disentangled Local Explanations
Yiwei Lyu, Paul Pu Liang, Zihao Deng, Ruslan Salakhutdinov, Louis-Philippe Morency
AIES 2022
[arXiv] [code]
- Rethinking Architecture Design for Tackling Data Heterogeneity in Federated Learning
Liangqiong Qu*, Yuyin Zhou*, Paul Pu Liang*, Yingda Xia, Feifei Wang, Li Fei-Fei, Ehsan Adeli, Daniel Rubin
CVPR 2022
[arXiv] [code]
2021
- MultiBench: Multiscale Benchmarks for Multimodal Representation Learning
Paul Pu Liang, Yiwei Lyu, Xiang Fan, Zetian Wu, Yun Cheng, Jason Wu, Leslie Chen, Peter Wu, Michelle Lee, Yuke Zhu, Ruslan Salakhutdinov, Louis-Philippe Morency
NeurIPS 2021
[arXiv] [website] [code]
- Understanding the Tradeoffs in Client-side Privacy for Downstream Speech Tasks
Peter Wu, Paul Pu Liang, Jiatong Shi, Ruslan Salakhutdinov, Shinji Watanabe, Louis-Philippe Morency
APSIPA Annual Summit and Conference 2021
[arXiv] [code]
- Towards Understanding and Mitigating Social Biases in Language Models
Paul Pu Liang, Chiyu Wu, Louis-Philippe Morency, Ruslan Salakhutdinov
ICML 2021
[arXiv] [code]
- Learning Language and Multimodal Privacy-Preserving Markers of Mood from Mobile Data
Paul Pu Liang*, Terrance Liu*, Anna Cai, Michal Muszynski, Ryo Ishii, Nick Allen, Randy Auerbach, David Brent, Ruslan Salakhutdinov, Louis-Philippe Morency
ACL 2021 (oral)
[arXiv]
- Cross-Modal Generalization: Learning in Low Resource Modalities via Meta-Alignment
Paul Pu Liang*, Peter Wu*, Liu Ziyin, Louis-Philippe Morency, Ruslan Salakhutdinov
ACM Multimedia 2021 (oral)
[arXiv] [code]
- StylePTB: A Compositional Benchmark for Fine-grained Controllable Text Style Transfer
Yiwei Lyu*, Paul Pu Liang*, Hai Pham*, Eduard Hovy, Barnabás Póczos, Ruslan Salakhutdinov, Louis-Philippe Morency
NAACL 2021
[arXiv] [code]
- Proceedings of the Third Workshop on Multimodal Artificial Intelligence
Amir Zadeh, Louis-Philippe Morency, Paul Pu Liang, Candace Ross, Ruslan Salakhutdinov, Soujanya Poria, Erik Cambria, Kelly Shi
NAACL 2021 Workshop Proceedings
[proceedings] [website]
- Ask & Explore: Grounded Question Answering for Curiosity-Driven Exploration
Jivat Neet, Yiding Jiang, Paul Pu Liang
ICLR 2021 Workshop on Embodied Multimodal Learning
[arXiv]
- Anchor & Transform: Learning Sparse Embeddings for Large Vocabularies
Paul Pu Liang, Manzil Zaheer, Yuan Wang, Amr Ahmed
ICLR 2021
[arXiv] [code]
2020
- MOSEAS: A Multimodal Language Dataset for Spanish, Portuguese, German and French
Amir Zadeh, Yansheng Cao, Simon Hessner, Paul Pu Liang, Soujanya Poria, Louis-Philippe Morency
EMNLP 2020
[paper]
- Diverse and Admissible Trajectory Prediction through Multimodal Context Understanding
Seong Hyeon Park, Gyubok Lee, Manoj Bhat, Jimin Seo, Minseok Kang, Jonathan Francis, Ashwin R. Jadhav, Paul Pu Liang, Louis-Philippe Morency
ECCV 2020, CVPR 2020 Argoverse competition (honorable mention award)
[arXiv] [code]
- Towards Debiasing Sentence Representations
Paul Pu Liang, Irene Li, Emily Zheng, Yao Chong Lim, Ruslan Salakhutdinov, Louis-Philippe Morency
ACL 2020
[arXiv] [code]
- Proceedings of the Second Grand Challenge and Workshop on Human Multimodal Language (Challenge-HML)
Amir Zadeh, Louis-Philippe Morency, Paul Pu Liang, Soujanya Poria
ACL 2020 Workshop Proceedings
[proceedings] [website]
- On Emergent Communication in Competitive Multi-Agent Teams
Paul Pu Liang, Jeffrey Chen, Ruslan Salakhutdinov, Louis-Philippe Morency, Satwik Kottur
AAMAS 2020 (oral)
[arXiv] [code] [slides]
- Empirical and Theoretical Studies of Multimodal Co-learning
Amir Zadeh, Paul Pu Liang, Louis-Philippe Morency
Elsevier Information Fusion 2020
[arXiv]
2019
- Think Locally, Act Globally: Federated Learning with Local and Global Representations
Paul Pu Liang*, Terrance Liu*, Liu Ziyin, Nicholas Allen, Randy Auerbach, David Brent, Ruslan Salakhutdinov, Louis-Philippe Morency
NeurIPS 2019 Workshop on Federated Learning (oral, distinguished student paper award)
[arXiv] [code]
- Deep Gamblers: Learning to Abstain with Portfolio Theory
Liu Ziyin, Zhikang Wang, Paul Pu Liang, Ruslan Salakhutdinov, Louis-Philippe Morency, Masahito Ueda
NeurIPS 2019
[arXiv] [code] [poster]
- Learning Representations from Imperfect Time Series Data via Tensor Rank Regularization
Paul Pu Liang*, Zhun Liu*, Yao-Hung Hubert Tsai, Qibin Zhao, Ruslan Salakhutdinov, Louis-Philippe Morency
ACL 2019
[arXiv] [poster]
- Multimodal Transformer for Unaligned Multimodal Language Sequences
Yao-Hung Hubert Tsai, Shaojie Bai, Paul Pu Liang, Zico Kolter, Louis-Philippe Morency, Ruslan Salakhutdinov
ACL 2019
[arXiv] [code]
- Social-IQ: A Question Answering Benchmark for Artificial Social Intelligence
Amir Zadeh, Michael Chan, Paul Pu Liang, Edmund Tong, Louis-Philippe Morency
CVPR 2019 (oral)
[paper] [code] [poster]
- Strong and Simple Baselines for Multimodal Utterance Embeddings
Paul Pu Liang*, Yao Chong Lim*, Yao-Hung Hubert Tsai, Ruslan Salakhutdinov, Louis-Philippe Morency
NAACL 2019 (oral)
[arXiv] [code] [slides]
- Learning Factorized Multimodal Representations
Yao-Hung Hubert Tsai*, Paul Pu Liang*, Amir Zadeh, Louis-Philippe Morency, Ruslan Salakhutdinov
ICLR 2019
[arXiv] [code] [poster]
- Found in Translation: Learning Robust Joint Representations by Cyclic Translations Between Modalities
Hai Pham*, Paul Pu Liang*, Thomas Manzini, Louis-Philippe Morency, Barnabás Póczos
AAAI 2019
[arXiv] [code] [slides] [poster]
- Words can Shift: Dynamically Adjusting Word Representations Using Nonverbal Behaviors
Yansen Wang, Ying Shen, Zhun Liu, Paul Pu Liang, Amir Zadeh, Louis-Philippe Morency
AAAI 2019
[arXiv] [code] [slides] [poster]
2018
- Computational Modeling of Human Multimodal Language: The MOSEI Dataset and Interpretable Dynamic Fusion
Paul Pu Liang, Ruslan Salakhutdinov, Louis-Philippe Morency
Master's Thesis, CMU Machine Learning Data Analysis Project 2018 (first runner-up award)
[paper] [slides] [poster]
- Multimodal Language Analysis with Recurrent Multistage Fusion
Paul Pu Liang, Ziyin Liu, Amir Zadeh, Louis-Philippe Morency
EMNLP 2018 (oral)
[arXiv] [slides] [poster]
- Multimodal Local-Global Ranking Fusion for Emotion Recognition
Paul Pu Liang, Amir Zadeh, Louis-Philippe Morency
ICMI 2018
[arXiv] [poster]
- Multimodal Language Analysis in the Wild: CMU-MOSEI Dataset and Interpretable Dynamic Fusion Graph
Amir Zadeh, Paul Pu Liang, Jonathan Vanbriesen, Soujanya Poria, Edmund Tong, Erik Cambria, Minghai Chen, Louis-Philippe Morency
ACL 2018 (oral)
[arXiv] [code] [slides]
- Efficient Low-rank Multimodal Fusion with Modality-Specific Factors
Zhun Liu, Ying Shen, Varun Lakshminarasimhan, Paul Pu Liang, Amir Zadeh, Louis-Philippe Morency
ACL 2018 (oral)
[arXiv] [code] [slides]
- Proceedings of the First Grand Challenge and Workshop on Human Multimodal Language (Challenge-HML)
Amir Zadeh, Paul Pu Liang, Louis-Philippe Morency, Soujanya Poria, Erik Cambria, Stefan Scherer
ACL 2018 Workshop Proceedings
[proceedings] [website] [introduction] [datasets] [results]
- An Empirical Evaluation of Sketched SVD and its Application to Leverage Score Ordering
Hui Han Chin, Paul Pu Liang
ACML 2018
[arXiv] [slides] [poster]
- Multi-attention Recurrent Network for Human Communication Comprehension
Amir Zadeh, Paul Pu Liang, Soujanya Poria, Prateek Vij, Erik Cambria, Louis-Philippe Morency
AAAI 2018 (oral)
[arXiv] [code] [slides]
- Memory Fusion Network for Multi-view Sequential Learning
Amir Zadeh, Paul Pu Liang, Navonil Mazumder, Soujanya Poria, Erik Cambria, Louis-Philippe Morency
AAAI 2018 (oral)
[arXiv] [code] [slides]
2017
- Multimodal Sentiment Analysis with Word-level Fusion and Reinforcement Learning
Minghai Chen*, Sen Wang*, Paul Pu Liang*, Tadas Baltrušaitis, Amir Zadeh, Louis-Philippe Morency
ICMI 2017 (oral, honorable mention award)
[arXiv] [code] [slides]