
Research - Papers

Explore a selection of our published work on a variety of key research challenges in AI.

CLIN: A Continually Learning Language Agent for Rapid Task Adaptation and Generalization

Bodhisattwa Prasad Majumder, Bhavana Dalvi Mishra, Peter Jansen, Peter Clark
2024
COLM

Language agents have shown some ability to interact with an external environment, e.g., a virtual world such as ScienceWorld, to perform complex tasks, e.g., growing a plant, without the startup …

AppWorld: A Controllable World of Apps and People for Benchmarking Interactive Coding Agents

Harsh Trivedi, Tushar Khot, Mareike Hartmann, Niranjan Balasubramanian
2024
ACL

Autonomous agents that address day-to-day digital tasks (e.g., ordering groceries for a household) must not only operate multiple apps (e.g., notes, messaging, shopping apps) via APIs, but also …

Can Language Models Serve as Text-Based World Simulators?

Ruoyao Wang, Graham Todd, Ziang Xiao, P. Jansen
2024
ACL

Virtual environments play a key role in benchmarking advances in complex planning and decision-making tasks but are expensive and complicated to build by hand. Can current language models themselves …

Few-shot Dialogue Strategy Learning for Motivational Interviewing via Inductive Reasoning

Zhouhang Xie, Bodhisattwa Prasad Majumder, Mengjie Zhao, Julian McAuley
2024
ACL Findings

We consider the task of building a dialogue system that can motivate users to adopt positive lifestyle changes: Motivational Interviewing. Addressing such a task requires a system that can infer …

The Unreasonable Effectiveness of Easy Training Data for Hard Tasks

Peter Hase, Mohit Bansal, Peter Clark, Sarah Wiegreffe
2024
ACL

How can we train models to perform well on hard test data when hard training data is by definition difficult to label correctly? This question has been termed the scalable oversight problem and has …

Data Contamination Report from the 2024 CONDA Shared Task

Oscar Sainz, Iker García-Ferrero, Alon Jacovi, Jinglin Yang
2024
arXiv

The 1st Workshop on Data Contamination (CONDA 2024) focuses on all relevant aspects of data contamination in natural language processing, where data contamination is understood as situations where …

Answer, Assemble, Ace: Understanding How Transformers Answer Multiple Choice Questions

Sarah Wiegreffe, Oyvind Tafjord, Yonatan Belinkov, Ashish Sabharwal
2024
arXiv

Multiple-choice question answering (MCQA) is a key competence of performant transformer language models that is tested by mainstream benchmarks. However, recent evidence shows that models can have …

Data-driven Discovery with Large Generative Models

Bodhisattwa Prasad Majumder, Harshit Surana, Dhruv Agarwal, Peter Clark
2024
ICML

With the accumulation of data at an unprecedented rate, its potential to fuel scientific discovery is growing exponentially. This position paper urges the Machine Learning (ML) community to exploit …

Skill Set Optimization: Reinforcing Language Model Behavior via Transferable Skills

Kolby Nottingham, Bodhisattwa Prasad Majumder, Bhavana Dalvi, Roy Fox
2024
ICML

Large language models (LLMs) have recently been used for sequential decision making in interactive environments. However, leveraging environment reward signals for continual LLM actor improvement is …

Tell, Don't Show!: Language Guidance Eases Transfer Across Domains in Images and Videos

Tarun Kalluri, Bodhisattwa Prasad Majumder, Manmohan Chandraker
2024
ICML

We introduce LaGTran, a novel framework that utilizes text supervision to guide robust transfer of discriminative knowledge from labeled source to unlabeled target data with domain gaps. While …
