The choice of the problem

I was re-reading the GPAI report on Climate Change and AI when I noticed the following paragraph (emphases mine):

There are a range of “silver bullet” climate solutions that have been proposed over the years, ranging from radical new energy sources to geo-engineering projects, which are unlikely to provide feasible solutions given economic, social or time constraints. There is a risk that AI could raise similar hopes. While there are also valid cases where moonshot challenges can focus attention on critical areas of the climate challenge and advances could help unlock more rapid climate action, there is a need to ensure balance between such high profile moonshot challenges and less “exciting” but critical innovation.

Is machine learning (ML) a “silver bullet” or a less “exciting” but critical innovation? It depends on how we choose what climate problem to work on.

ML is a general-purpose tool suited to a wide range of problems. As a result, ML’s effectiveness depends on what we apply it to (Davenport & Ronanki, 2018). A sensible approach is to start from our biggest issue at hand and work our way down from there.

I propose that a more inclusive approach, inspired by academic research, may yield better results.

Let’s start with the sensible approach.

Begin by asking which human activity emits the most greenhouse gases: manufacturing¹. Continue by ranking manufacturing sectors by emissions: steel and cement often compete for the top spot. Now, imagine a large steel company that wants to minimize its emissions, starting with a single mill.

Should the company pinpoint and rank emission sources in that mill? Say a specific furnace is revealed as the point of greatest emissions. Should they work directly on reducing emissions from this furnace?
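To make the drill-down concrete, here is a minimal Python sketch of the greedy ranking this sensible approach implies. All names and emission figures are hypothetical, chosen only to show the mechanics of following the largest emitter at every level.

```python
# A toy emissions inventory: activities -> sectors -> sites -> sources.
# All figures are hypothetical, in arbitrary units.
emissions = {
    "manufacturing": {
        "steel": {
            "mill_A": {"furnace_1": 0.9, "furnace_2": 0.4, "rolling": 0.2},
            "mill_B": {"furnace_1": 0.8, "casting": 0.3},
        },
        "cement": {"plant_A": {"kiln_1": 1.1}},
    },
    "energy": {"grid": {"coal_plant": 2.0}},
}

def total(node):
    """Total emissions of a subtree (a leaf is just a number)."""
    if isinstance(node, dict):
        return sum(total(child) for child in node.values())
    return node

def drill_down(node, path=()):
    """Greedy top-down search: follow the largest emitter at every level."""
    while isinstance(node, dict):
        name, node = max(node.items(), key=lambda kv: total(kv[1]))
        path += (name,)
    return path, node

print(drill_down(emissions))
# -> (('manufacturing', 'steel', 'mill_A', 'furnace_1'), 0.9)
```

The greedy path ends at a single furnace and, by construction, says nothing about anything off that path.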

Exclusively following this kind of top-down approach can be too reductionist—in fact, this is a common criticism of Six Sigma for industrial process optimization.

In the example with our mill, focusing solely on that one furnace may not have a scalable impact on the rest of the organization. If solving this particular problem does not directly scale to the dozens of other mills the steel company runs across the globe, then we may have overlooked a different problem that does—one possibly enabling a greater net reduction of emissions in a similar amount of time.

So scalability might be an issue. But it still feels like the sensible approach was getting us somewhere. Are we missing anything else?

This struggle reminded me of an excellent paper that my advisor pointed me to during my doctoral studies (Webb, 1961). Webb contemplates a question that torments all PhD students: “what research problem should I work on?”

I have rummaged around and turned up six widely used bases for doing an experiment: curiosity, confirmability, compassion, cost, cupidity, and conformability—or, more simply: “Am I interested,” “Can I get the answer,” “Will it help,” “How much will it cost,” “What’s the payola,” “Is everyone else doing it?”

After chuckling at the definition of payola (“the practice of bribing someone to use their influence to promote a particular interest”), I wondered whether this inclusive approach to research could be helpful in applying ML to solve climate problems.

Here is my take.

Curiosity: This is not so useful as a criterion. I think it is safe to assume equal curiosity about all aspects of climate problems. If not, we should try to keep a broad perspective.

Confirmability: This is straightforward. It is what the GPAI authors refer to as less “exciting” but critical innovation. We must balance solution feasibility (moonshots vs. expanding on the tools already available to us today) as we do not have infinite time to solve our climate problems.

Compassion: I really like the word compassion here. The idea is not only to scope out whether a solution will help, but to consider all stakeholders when deciding what to work on. The scale of the problem—say, measured as a percentage of the 51 billion tons of CO2 we emit annually—is only one aspect. Carefully defining and considering all stakeholders (including workers and surrounding communities) is critical.

Cost: This is obviously important. Cost considerations should also include system-wide incentives, which is what Bill Gates tries to get at when computing green premiums: the extra cost of choosing a clean technology over its conventional, emitting alternative.
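Here is a minimal sketch of that calculation; the function name and the per-gallon prices are hypothetical, chosen only to illustrate the arithmetic.

```python
def green_premium(clean_cost, conventional_cost):
    """Extra cost of the clean option, relative to the conventional one."""
    return (clean_cost - conventional_cost) / conventional_cost

# Hypothetical per-gallon prices: conventional jet fuel vs. a clean alternative.
conventional, clean = 2.20, 5.30
print(f"Green premium: {green_premium(clean, conventional):.0%}")
# -> Green premium: 141%
```

A premium this large suggests that, without system-wide incentives such as carbon pricing or subsidies, the clean option will struggle to scale.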

Cupidity: I would reframe this as “solution greed”. If I tackle one problem (I want to solve it), does it allow me to tackle others (can I solve more)? A less individualistic wording here might be “How scalable is my solution? Does it only solve this problem or can it also solve others?”

Conformability: I would reframe this as “Can I get buy-in?” What do the key stakeholders of a problem need in order to trust my solution? This has a compounding effect: solving the first problem helps break inertia within an organization, and the resulting willingness to try the next problem is what enables scaling and systemic change.

Returning to our steel example, I would argue that cost, cupidity, and—perhaps surprisingly—conformability are the key factors that would drive the successful application of ML to reducing emissions.

Take conformability as an example. Does the person responsible for the furnace want to reduce its emissions? In my experience, the answer is typically “no”—unless she is properly incentivized. Another, potentially smaller, problem where the stakeholders are properly incentivized may be a better first project. Success in the smaller project could then be leveraged to incentivize the furnace owner to fully buy in.

The sensible approach highlighted a problem worth solving. That’s a good thing. But actually solving the problem requires a broader, more inclusive approach. The criteria above can help in this effort. In the end, this directly affects whether ML becomes a “silver bullet” or a less “exciting” but critical solution — it depends on the choice of the problem.

  1. Sometimes energy can rank higher, depending on how you estimate emissions. Either way, manufacturing and energy each account for roughly a quarter to a third of anthropogenic emissions.

  1. Davenport, T. H., & Ronanki, R. (2018). Artificial intelligence for the real world. Harvard Business Review, 96(1), 108–116.
  2. Webb, W. B. (1961). The choice of the problem. American Psychologist, 16(5), 223.