In Conversation: Maria Kogelnik & Alejandro Martinez-Marquina on experimental economics
Ahead of a new workshop series, two researchers discuss how experiments can complement traditional tools in economics, deepen our understanding of behavior, and guide more informed policy debates.
While the bulk of economics research still relies on naturally occurring data, experimental economics has reshaped how economists study human behavior. By creating structured environments where incentives, information, and choices can be precisely controlled, experimental economists can test long-standing assumptions about decision-making and shed light on the mechanisms that drive real-world outcomes. As governments and societies confront increasingly complex policy challenges, a better understanding of human behavior in economics is more important than ever.
In a recent conversation, two experimental economists – Maria Kogelnik, an Associate Research Scientist in Economics at Yale and an EGC affiliate, and Alejandro Martinez-Marquina, Assistant Professor at the University of Southern California Marshall School of Business – discussed how the field has evolved and what their own research reveals about controversial policies like gender quotas. This conversation took place ahead of the new “Experimental Economics Workshop” on December 5th, which aims to highlight frontier experimental research on discrimination and prejudice and spark deeper engagement across economics.
How did experimental economics first emerge within the broader field, and how has it evolved over time?
Alejandro Martinez-Marquina: Economics started with the idea that people are incredibly rational. But when economists began running experiments, they found that people often behave very differently from what their models predicted. Since then, experimental economics hasn’t evolved dramatically – but what has changed is the kind of questions we’re asking. Before, economists had games they were interested in understanding: famous experiments like the prisoner’s dilemma and public goods games. Now, it’s more about running experiments to understand actual policies, which is exciting.
Maria Kogelnik: Experiments are just one of the many methods you can use to answer economic questions. But if you're interested in why people behave as they do and why they hold certain beliefs, it can be helpful to generate an environment and add layers of control, which lets you pin down channels and shed light on mechanisms of interest. Experimental economics originated in the lab, but field experiments have become more common, and online experiments became widely used during the pandemic.
Maria Kogelnik presents at the November 2024 EGC Postdoctoral Retreat. (Photo: Megan Wright)
How does experimental economics relate to psychology and behavioral economics?
Kogelnik: Economists like to draw insights from other disciplines, from mathematics and psychology to political science. Behavioral economics has done a really nice job incorporating major insights from psychology into economic models and using experiments to understand economic behavior. But you could also use experiments to answer questions in fields like macro or finance.
Martinez-Marquina: One of the most famous Nobel Prize winners in economics, Daniel Kahneman, was a psychologist – so there is a lot of overlap. But while psychology often focuses on the micro-foundations of what goes on in the brain, we’re more interested in how things like framing and biases affect outcomes: the choices people make that affect their financial well-being.
What drew each of you to experimental economics, and what distinguishes it from other subfields?
Martinez-Marquina: Before economics, I was studying to be an architect. Architecture is a blank canvas – you must be creative to make sure everything fits together. Experiments are similarly creative; it’s not just following the same five steps every time. You start with a big question with a lot of potential answers, then come up with a very elegant design to identify the one thing that affects behavior. In architecture, your customer might want A, B, C, D, E, F, and G – but adding one feature can kill another, so you need to get creative. A simple building that works in every aspect is beautiful; I think a well-designed experiment is beautiful, too.
Kogelnik: When I started my PhD, I wanted to become a health economist – but then I took a class in experimental economics and fell in love with all the cool and creative things you can do. Often in economics, you have a pretty good sense of what data and models you need to answer your research question. With experiments, that’s very often not the case. You have a question that excites you, but it is not straightforward how you’ll answer it. So you think about all the possible things that could be going on, then find a way to control and pin them down, shedding light on one versus the other.
What have your experiments on gender quotas – a widely used tool to address gender biases – revealed about why these policies are so controversial?
Kogelnik: In my project with Philipp Strack, our initial instinct was that part of that emotional debate stems from a fundamental misunderstanding about how quotas affect selection outcomes. One channel is psychological: people like to think highly of themselves, so if they don’t get hired or promoted, they can blame the outcome on quotas. Another is cognitive: understanding the effects of quotas is mentally taxing and involves contingent reasoning, which people struggle with. We designed an experiment that allows us to turn these channels on and off, to learn how and why quotas are misperceived and what the implications are. While quotas may help improve selection outcomes by overcoming evaluator biases in the short run, we show that an unintended consequence is that quotas can causally increase biases.
Martinez-Marquina: Juan Gonzalez Blanco and I started from a similar question but focused on a different aspect – that these policies can backfire, because many people dislike them. We ran an online experiment to simulate hiring decisions, with participants acting as recruiters. We found evidence of backlash against quotas, but it was performance-specific: when recruiters were required to hire an additional female candidate, their hiring rates declined and they offered reduced salaries to women – but only when the candidates underperformed relative to men.
What do these findings imply for policymakers?
Kogelnik: We do not have a policy recommendation per se, because a policy like gender quotas is quite complex. Quotas can improve outcomes on some dimensions, or for certain groups. But if you want to implement them, it is important to know about all the effects they can have – including unforeseen side effects.
Martinez-Marquina: Just because a quota policy worked in one context does not mean it will work everywhere. Where are they effective? Quotas can be useful in very competitive environments, particularly those where female participation is already very low – say, a STEM major with 5% women. Objective measures of performance matter, too. In academic environments, for example, where evaluation is subjective, quotas can have negative effects.
How have past experiments shaped policy, either positively or in ways that proved more complicated?
Martinez-Marquina: The most famous example is nudges, where we probably shouldn’t have had as much impact as we did. Policymakers got excited by some early papers, but the effects did not really materialize. This helped us realize that the bar for policy recommendations must be higher.
Other examples seem to be going better. Experiments on discrimination, for instance, show that women negotiate less for wage increases, so policymakers are pushing for wage-transparency laws. And experiments on child tax credits have found that people often just don’t pay attention to key information. Once you realize the mechanism is inattention, you can target attention – through open enrollment periods, for example.
Kogelnik: Cleverly designed experiments allow us to learn “why” – why do people do the things they do? This is useful for policymakers because it helps you know what to target. If you know a particular treatment has an effect, that’s insightful; but to design policies around it, it’s helpful to understand why the effect arises.
Most experiments have been conducted in rich countries. What scope is there for applying these approaches in lower-income countries?
Kogelnik: There seems to be a lot of appetite among development economists for incorporating experiments into their work. Development economists have largely relied on unincentivized, hypothetical survey questions, but these entail all kinds of biases. While every method has shortcomings, incentivized experiments that involve real stakes can offer rich insights into why people behave the way they do, and could be especially powerful when combined with other methods.
Martinez-Marquina: For things like social norms, altruism, and cultural factors, it’s problematic that most evidence comes from rich countries. But some evidence suggests the mechanisms we study do replicate in lower-income countries, and I would speculate that many – like impatience or risk aversion – are even more important for poorer populations. If you find detrimental behaviors in rich countries, those effects might be much worse for people in low-income countries.
What do you see as the most important emerging challenges for the future of experimental economics?
Martinez-Marquina: As AI gets better at imitating human behavior, one growing idea is that you can pilot experiments with “virtual subjects.” But if AI lowers research costs to basically zero, researchers could hypothetically run tons of experiments and just pick the designs that yield the biggest effects. I don’t know the solution – maybe robustness treatments – but it’s an emerging challenge.
Kogelnik: Experiments are already inexpensive, which creates a different issue: a lack of field-wide standards on design and data collection. For example, journals typically only require clean datasets, but it could be useful to see the pure, messy data – showing how often subjects failed attention checks or comprehension questions, and so on.
How do you hope the Experimental Economics Workshop on December 5th will shift how Yale economists engage with experimental methods?
Kogelnik: Not many Yale faculty are working experimentally. But there is a lot of interest, since experiments can tackle questions that are hard to study with naturally occurring data. By having a workshop – the first in a new series – we can bring together experimental expertise while exposing less-familiar researchers to the unique insights experiments bring.