GPAI releases its report on Climate Change and AI

The Global Partnership on Artificial Intelligence (GPAI) has released its 2021 report on Climate Change and AI, in collaboration with Climate Change AI and the Center for AI and Climate. It is an expansive, 91-page read with high-level recommendations and plenty of detail for those who want it.

I am proud to see our work at Fero highlighted in the report (see pages 85–86). For the last three years, we have enabled a steel manufacturer to prevent 450 thousand tons of CO2 emissions per year. I hope we can expand that, as soon as possible, to the tens of millions of tons we know we can prevent. But I am not writing to talk about our own use case; rather, I want to dive into other highlights from this report.

The report identifies eleven key application sectors: electricity systems, buildings and cities, transportation, heavy industry and manufacturing, agriculture, forestry, climate science, societal adaptation, ecosystems and biodiversity, markets, and policy. (All topics I hope to survey in my class in the spring.) Regarding manufacturing, the report touches on the important point that (all emphases mine)

AI can be used in adaptive control and process optimization to reduce the energy consumed by industrial processes, as well as in demand response to schedule such processes to reduce emissions intensity.

This refers to real-time process optimization, which is important to distinguish from static or one-off analyses. The reason static analysis is still the norm today is trust. Classical process optimization, based on frameworks such as Six Sigma or The Toyota Way, relies on people to act on insights from a static report. Real-time control and optimization requires a much more stringent level of acceptance testing: the professionals deploying such technologies must trust that the technology will work. Time and again I hear this from industrial professionals. Building that trust is key to any kind of real-time deployment, which is where ML really has the opportunity to make a scalable impact.
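To make the demand-response idea from the quote concrete, here is a minimal sketch of my own (not from the report, with made-up forecast numbers): given an hourly forecast of grid carbon intensity, a flexible industrial process is scheduled into the contiguous window that minimizes its total emissions intensity.

```python
# Hypothetical demand-response scheduling sketch. The forecast values
# (gCO2/kWh per hour) are invented for illustration.

def best_start_hour(intensity, duration):
    """Return the start hour whose `duration`-hour window has the
    lowest total carbon intensity in the forecast."""
    windows = [
        (sum(intensity[h:h + duration]), h)
        for h in range(len(intensity) - duration + 1)
    ]
    return min(windows)[1]

forecast = [520, 480, 450, 300, 210, 190, 230, 340, 460, 510]
start = best_start_hour(forecast, duration=3)
# The 3-hour window starting at hour 4 (210 + 190 + 230) is cleanest.
```

A static report would hand a plant manager this schedule once; a real-time system re-runs it as the forecast updates, which is exactly where the acceptance-testing bar rises.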

Switching over to deployment requirements, the report identifies that

Business models, playbooks, and value chains for AI are also needed in key climate-relevant sectors to foster deployment and integration. However, such business models are nascent, and there are often many barriers preventing companies that are developing AI solutions from sharing the value of their work with incumbent industries interested in adopting them. […] both sets of entities often have stringent requirements surrounding in-house ownership of intellectual property, leading to difficulties forming collaborations between these entities in practice.

This insight is worth expanding upon. A key point is that intellectual property can be maintained if the AI solution is a tool. A company might have legitimate worries about an AI vendor leveraging its data to benefit a competitor, which can indeed happen if data is pooled across customers. However, this particular obstacle can be overcome by empowering corporations to build and deploy their own ML solutions.

Which brings us to a bottleneck summarized as a

Lack of cross-disciplinary and cross-sectoral experts: In addition to people with expertise in either data science or a climate-relevant sector, interdisciplinary teams working at the nexus of AI and climate change also need experts who are trained in both and can translate between areas. There are currently few people who have both AI expertise and domain expertise in a climate-relevant sector, as well as relevant socio-technical expertise needed to foster responsible development, deployment, and governance.

I agree, although tooling technology plays a role as well. Twenty years ago, building a web application that connects to a database, runs dynamic computations, and returns results in an adaptive graphical user interface may have required a team of engineers, each with their own specialty. Now, a single engineer might accomplish the same in less than a week. This opens the door to more efficient communication between experts in software/ML and those in the domain area of interest.
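As a toy illustration of that tooling point (my own sketch, not from the report, with invented table and sensor names): a single file, using nothing but the Python standard library, can stand up a database, a dynamic computation, and a web response — the kind of stack that once took a team.

```python
# A toy single-file web app using only the Python standard library:
# an in-memory SQLite database, a dynamic aggregation, and an HTML
# response served through the standard WSGI interface. All names and
# numbers are invented for illustration.
import sqlite3
from wsgiref.simple_server import make_server  # e.g. make_server("", 8000, app)

def build_app():
    db = sqlite3.connect(":memory:", check_same_thread=False)
    db.execute("CREATE TABLE readings (sensor TEXT, value REAL)")
    db.executemany(
        "INSERT INTO readings VALUES (?, ?)",
        [("furnace", 1450.0), ("furnace", 1462.5), ("caster", 1120.0)],
    )

    def app(environ, start_response):
        # Dynamic computation: average reading per sensor.
        rows = db.execute(
            "SELECT sensor, AVG(value) FROM readings "
            "GROUP BY sensor ORDER BY sensor"
        ).fetchall()
        body = "<ul>" + "".join(
            f"<li>{sensor}: {avg:.1f}</li>" for sensor, avg in rows
        ) + "</ul>"
        start_response("200 OK", [("Content-Type", "text/html")])
        return [body.encode("utf-8")]

    return app
```

The specifics do not matter; the point is that the whole stack now fits in one file that a domain expert and an ML engineer can read together.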