Why write (now)?

I have decided to keep a blog as I explore the intersection of machine learning, climate, and how people make decisions.

In my past research, I focused on a branch of machine learning built on Bayesian statistical principles. The idea is to quantify uncertainty. What are your prior expectations about the problem you are trying to solve? How much evidence does the data provide for those expectations? Are there patterns in the data that match them, or is there stronger evidence that other, hidden patterns exist? Bayesian machine learning computes answers to these questions and relays whether it is confident in what it finds or merely guessing.
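
To make that prior-plus-evidence recipe concrete, here is a minimal sketch in Python. The numbers (a toy success-rate problem with a Beta prior) are made up purely for illustration and are not tied to any particular research:

```python
# A toy illustration of the Bayesian recipe: prior expectations and
# observed data combine into a posterior that carries its own uncertainty.
from scipy.stats import beta

# Prior: we expect a success rate around 50%, but we are not very sure.
prior_a, prior_b = 2, 2          # Beta(2, 2) prior

# Data: 14 successes out of 20 trials (made-up numbers).
successes, failures = 14, 6

# Conjugate update: the posterior is again a Beta distribution.
posterior = beta(prior_a + successes, prior_b + failures)

# A point estimate plus a 95% credible interval -- the "dot and bar"
# you often see in figures.
print(f"posterior mean: {posterior.mean():.2f}")
print(f"95% credible interval: "
      f"({posterior.ppf(0.025):.2f}, {posterior.ppf(0.975):.2f})")
```

The interval is doing the important work here: it says how sure the model is, not just what it thinks.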

You are probably already familiar with these kinds of results: a dot surrounded by a bar or a line indicating a range of outcomes. The effect of a medical treatment, the impact of changing a setting, a prediction about the future. Quantifying uncertainty, it seems, is worth it, especially in high-stakes applications such as healthcare.

When we present results with uncertainty, we expect people to interpret them and reach some actionable conclusion. We hope these results will change how they think or act, in a way that incorporates the uncertainties. This is still a challenge today: a surprising number of readers, including professionals, can be misled simply by how such results are presented (Hofman et al., 2020), let alone by the uncertainty those results are meant to convey in the first place.

This brings me to climate change, where uncertainties abound. We are tasked with convincing large groups of people (individuals and those who lead institutions) to internalize results like these and take action. Accounting for how people interpret results has only recently gained traction in the ML community, largely in response to the proliferation of “black box” techniques (Rudin, 2019).

This is why I decided to develop a course on climate and why I decided to write. ML is complicated, yes. But people are arguably even more so. Their intersection is fascinating, and I hope this blog will help explore it, even if only in a small way.

  1. Hofman, J. M., Goldstein, D. G., & Hullman, J. (2020). How visualizing inferential uncertainty can mislead readers about treatment effects in scientific results. Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems, 1–12.
  2. Rudin, C. (2019). Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead. Nature Machine Intelligence, 1(5), 206–215.