I am a PhD Candidate in Civil & Environmental Engineering at the University of California, Berkeley. I am fortunate to be advised by Joan Walker.
I study how to make reliable predictions about the effects of interventions in complex social systems, where we often cannot run the experiments we wish we had. Decisions such as congestion pricing, transit expansion, or policy incentives require anticipating behavioral responses under conditions people have never experienced. Traditional predictive models struggle here because the rules of the system change.
My research aims to build a principled foundation for reasoning about interventions at scale. I work across three complementary lenses:
- Experiments, when causal variation can be generated directly.
- Observational data, when structure can be inferred from large-scale behavioral traces.
- Simulation, when counterfactuals must be explored computationally.
Together, these approaches support a broader epistemic project: establishing what kinds of evidence are sufficient for a given decision, and designing modeling infrastructure that makes those evidentiary standards transparent. My work seeks to formalize how different data sources and modeling approaches can be combined, validated, or challenged when the goal is to understand how behavior will change under new conditions.
I ground this agenda in human mobility and transportation systems: domains where interventions are high-stakes, data are heterogeneous, and distribution shift is the norm. But the questions apply much more broadly: How can we intervene responsibly in systems where behavioral responses determine societal outcomes?
During the earlier years of my PhD, I worked at Lawrence Berkeley National Lab in the BEAM group, building large-scale agent-based transportation models that simulate millions of decisions to evaluate policies before implementation.
Before my PhD, I spent five years as a consultant forecasting demand and designing pricing strategies for transportation infrastructure across more than twenty countries. That work showed me how heavily policy depends on behavioral predictions we can't directly test, and how urgently we need tools that reason honestly about interventions.