
I am a research assistant professor at the Toyota Technological Institute at Chicago. My research interests lie in statistical machine learning, particularly in applications to high-stakes decision-making and evaluation problems such as admissions, hiring, grading, and peer review. I draw inspiration from psychology to build human behavioral models, develop algorithms with theoretical guarantees, conduct crowdsourcing experiments, and implement policy changes that have real-world impact.

Previously, I was a President’s postdoctoral fellow in the School of Industrial and Systems Engineering (ISyE) and the Algorithms and Randomness Center (ARC) at Georgia Institute of Technology, working with Ashwin Pananjady and Juba Ziani. I received my Ph.D. in the School of Computer Science at Carnegie Mellon University, advised by Nihar Shah. I received my B.S. in Electrical Engineering and Computer Sciences from UC Berkeley.

Email: jingyanw [at] ttic.edu

Room: 427

For students: If you are interested in visiting TTIC and working with me during the summer, please apply to the visiting student program.


News
  • Feb 2024: Presenting at the ITA workshop in San Diego.
  • Feb 2024: Presenting my research at the CSIP Seminar in the ECE Department at Georgia Tech.
  • Jan 2024: Excited to participate in the Dagstuhl Seminar and give an overview of my work on various bias mitigation algorithms.
  • Dec 2023: Presenting at the Network Lunch at SLMath.
  • Oct 2023: Visiting SLMath as a research member for the Algorithms, Fairness, and Equity program in Berkeley during Oct - Dec 2023. Happy to chat more about research!
  • Oct 2023: Presenting at the INFORMS annual meeting on our paper "Debiasing Evaluations That Are Biased by Evaluations".
  • Sept 2023: Paper "Perceptual Adjustment Queries and an Inverted Measurement Paradigm for Low-Rank Metric Learning" accepted to NeurIPS 2023.
  • Aug 2023: Participating in the Introductory Workshop: Algorithms, Fairness, and Equity at SLMath in Berkeley.
  • July 2023: Presenting at EC 2023 on our paper "Modeling and Correcting Bias in Sequential Evaluation".

Research Overview
The goal of my research is to develop accurate, fair, reliable, and efficient evaluation systems. I draw inspiration from psychology to build human behavioral models, develop algorithms with theoretical guarantees, conduct crowdsourcing experiments, and implement policy changes that have real-world impact. Specifically, I identify, analyze, and improve three interrelated key components: algorithms, people, and design.
  • Algorithms: It is important to think holistically about the different types of objectives, which induce intricate connections and tradeoffs within these evaluation systems. I develop algorithms that effectively aggregate evaluation data, integrating accuracy guarantees with the following considerations:

    • Fairness: We consider a crowdsourcing model (the Bradley-Terry model) that infers item quality from pairwise comparisons. We propose a modification to the widely-used maximum likelihood estimator, which is simple to implement and enjoys a win-win in the bias-accuracy tradeoff – that is, it significantly improves fairness while achieving optimal accuracy (a generic sketch of the pairwise-comparison setup appears after this list). [paper | talk]

    • Reliability: In applications such as hiring and admissions, one important goal is to successfully recruit a pre-defined number of candidates. We formulate a Knapsack-style combinatorial optimization problem, and present theoretical guarantees for various algorithms that simultaneously consider quality and the uncertainty involved. [paper]

  • People: Algorithms do not operate in isolation; it is crucial to keep in mind that the input data come from human evaluators. People are not perfectly rational agents, and have biases even if we consciously try to avoid them. My work identifies different sources of human biases:

    • Miscalibration: People have different numerical scales when reporting ratings. We design algorithms that take into account the miscalibration of people – for example, some people tend to be lenient and give higher scores, and some others tend to be strict. Our algorithms are provably more robust to people’s miscalibration, and hence make the evaluation more fair. [paper | blog post]

    • Personal experience: Evaluators are influenced by their experiences. For example, in teaching evaluation, students often give higher ratings to an instructor if they receive higher grades in the course. We capture such tendencies with a natural monotonicity constraint, and propose an estimator that reduces the bias in a data-dependent fashion, without prior knowledge of the amount of bias (a rough illustration of the monotonicity idea appears after this list). [paper | talk]

    • Ordering effect: Evaluations are made sequentially in applications such as sports/arts competitions and hiring. We provide a model that captures the nature of such sequential evaluation, supported by theory and crowdsourcing experiments. We propose a ranking estimator that is both efficient and optimal (under certain notions of optimality), whereas the ranking naively induced by the scores is provably suboptimal. [paper]

  • Design: People engage with algorithms, and algorithms influence human behavior. I study design and policy decisions that underpin the interactions between people and algorithms:

    • Task allocation: In large-scale evaluation tasks, we examine allocation schemes that distribute tasks among reviewers, and provide a comprehensive comparison of these schemes using a mix of theoretical and experimental approaches. [paper]

    • Data elicitation: Cardinal and ordinal queries each have their pros and cons. We propose a new data elicitation scheme that achieves the best of both worlds. Taking preference learning as an example, it asks questions of the form "to reach the same level of satisfaction as a 2-dollar bus ride, how much would you spend on a taxi?", which combines ordinal reasoning with cardinal reporting. [paper]
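
As a concrete illustration of the pairwise-comparison setup mentioned under Fairness, here is a minimal sketch of the vanilla Bradley-Terry maximum likelihood estimator in Python. This is only the generic baseline estimator, not the fairness-aware modification from the paper; the function name and the toy data are made up for illustration.

    # Generic Bradley-Terry MLE from pairwise comparisons (baseline only, not the modified estimator).
    import numpy as np
    from scipy.optimize import minimize

    def fit_bradley_terry(comparisons, n_items):
        """comparisons: list of (winner, loser) index pairs."""
        winners = np.array([w for w, _ in comparisons])
        losers = np.array([l for _, l in comparisons])

        def neg_log_likelihood(theta):
            # P(winner beats loser) = sigmoid(theta[winner] - theta[loser]);
            # np.logaddexp(0, -d) computes -log(sigmoid(d)) in a numerically stable way.
            diff = theta[winners] - theta[losers]
            return np.sum(np.logaddexp(0.0, -diff))

        theta = minimize(neg_log_likelihood, x0=np.zeros(n_items), method="L-BFGS-B").x
        return theta - theta.mean()  # center the scores: BT is shift-invariant

    # Toy usage: item 0 mostly beats item 1, which mostly beats item 2.
    print(fit_bradley_terry([(0, 1), (0, 1), (1, 2), (0, 2), (1, 0)], n_items=3))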
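
And here is an equally rough illustration of the monotonicity idea behind the personal-experience item: fit a nondecreasing curve of rating versus grade with isotonic regression and subtract out the grade-dependent trend. This is a conceptual sketch on made-up toy data, not the estimator analyzed in the paper.

    # Conceptual sketch: fit a monotone (nondecreasing) grade-to-rating curve
    # with isotonic regression, then remove the grade-dependent trend.
    import numpy as np
    from sklearn.isotonic import IsotonicRegression

    rng = np.random.default_rng(0)
    grades = rng.uniform(0, 4, size=200)                 # toy grades on a 0-4 scale
    true_quality = 3.5                                   # toy "true" instructor quality
    ratings = true_quality + 0.3 * grades + rng.normal(0, 0.2, size=200)  # grade-driven bias

    # Nondecreasing fit of rating as a function of grade.
    bias_curve = IsotonicRegression(increasing=True).fit(grades, ratings)
    fitted = bias_curve.predict(grades)

    # Subtract the monotone trend (up to its mean) to debias the ratings.
    debiased = ratings - (fitted - fitted.mean())
    print(ratings.mean(), debiased.mean())
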
I extend insights from my research into policy improvements in practice. For example, we compiled data on the gender distribution of award-winning papers at 16 top computer science conferences over the past 10 years, which shows prominent differences across conferences. [blog post] We also examine the biases caused by the alphabetical author-ordering convention in scientific publishing; in light of this bias, the Machine Learning Department at CMU, along with many other institutions, has randomized the ordering of students and faculty on its webpages. [blog post] The debiasing algorithm that I developed is currently deployed with grant review agencies.

Preprints
Journal Publications and Under Review
Peer-Reviewed Conference Publications

Teaching
  • Co-instructor
    ISYE 8813 (Algorithmic Foundations of Ethical Machine Learning), Georgia Tech, Fall 2023

  • Guest lecturer
    IST402 (Crowdsourcing and Crowd-AI Systems), Penn State, Spring 2023
    PIC 16B (Python with Applications II), UCLA, Winter 2023
    ISYE 6740 (Computational Data Analysis), Georgia Tech, Fall 2022

  • TA
    16-720 (Computer Vision), CMU, Fall 2017

  • Lab Assistant
    EE 20N (Signals and Systems), UC Berkeley, Fall 2013


Misc
I play the violin with the Chicago Metropolitan Symphony Orchestra.

Next concert:
March 15, 2025 (Saturday at 7:30pm)

Gannon Concert Hall, Holtschneider Performance Center, DePaul University (2330 N Halsted St, Chicago, IL 60614)

During my time in Atlanta, I played with the Atlanta Community Symphony Orchestra. We performed free concerts in the Atlanta metro area.