Artificial Intelligence is clearly a powerful tool that could help advance a number of sustainability objectives, but are there risks attached to these potential benefits? Global Head of Sustainability Research Stephen Byrd and Global Sustainability Analyst Brenda Duverce discuss.
----- Transcript -----
Stephen Byrd: Welcome to Thoughts on the Market. I'm Stephen Byrd, Morgan Stanley's Global Head of Sustainability Research.
Brenda Duverce: And I'm Brenda Duverce from the Global Sustainability Team.
Stephen Byrd: On this special episode of the podcast, we'll discuss some key A.I.-related opportunities and risks through the lens of sustainability. It's Friday, April 14th at 10 a.m. in New York.
Stephen Byrd: Recent developments in A.I. make it clear it's a very powerful tool that can help achieve a great number of sustainability objectives. So, Brenda, can you maybe start by walking us through some of the potential benefits and opportunities from A.I. that can drive improved financial performance for companies?
Brenda Duverce: Sure. We think A.I. can bring tremendous benefits to our society, and we are excited about its potential to reduce harm to our environment and enhance people's lives. To share a couple of examples from our research, we are excited about what A.I. can do to improve biodiversity protection and conservation, specifically by improving the accuracy and efficiency of monitoring, helping us better understand biodiversity loss and supporting decision making and policy design. Overall, we think A.I. can help us more efficiently identify areas in urgent need of conservation and give us the tools to make more informed decisions. Another example is what A.I. can do to improve education outcomes, particularly in under-resourced areas. We think A.I. can enhance teaching and learning, improve assessment practices, increase accessibility and make institutions more operationally efficient. That feeds directly into the financial implications: A.I. can improve margins and reduce costs for organizations. Essentially, we view A.I. as a deflationary technology for many organizations. So Stephen, the Morgan Stanley Sustainability Team has also done some recent work around the future of food. What role will A.I. play in agriculture in particular?
Stephen Byrd: Yeah, we're especially excited about what A.I. could do in the agriculture sector. We're thinking about A.I.-enabled tools that will help farmers improve efficiency while also improving the quantity and quality of crop production. For example, there's technology that annotates camera images to differentiate between weeds and crops at the pixel level and then uses that information to administer pesticides only to weed-infested areas. The result is the farmer saves money on pesticides, while also improving agricultural production and enhancing biodiversity by reducing damage to the ecosystem.
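[To make the pixel-level idea concrete, here is a minimal sketch of how a per-pixel segmentation output can drive spot spraying. The class labels, the green-dominance stand-in for a trained model, the grid size and the 20% weed-coverage threshold are all illustrative assumptions for this sketch, not details of any specific product discussed in the episode.]

```python
# Minimal sketch: per-pixel weed/crop segmentation driving targeted spraying.
import numpy as np

WEED, CROP, SOIL = 0, 1, 2  # hypothetical class labels

def segment(image: np.ndarray) -> np.ndarray:
    """Stand-in for a trained per-pixel classifier (e.g., a CNN).
    Returns one class label per pixel. A crude green-dominance
    heuristic is used here purely so the sketch runs end to end."""
    red = image[..., 0].astype(float)
    green = image[..., 1].astype(float)
    labels = np.full(image.shape[:2], SOIL)
    labels[green > 100] = CROP
    labels[(green > 100) & (green < red * 1.1)] = WEED
    return labels

def spray_plan(labels: np.ndarray, cell: int = 32) -> np.ndarray:
    """Divide the field image into cells and flag for pesticide only
    those cells whose weed coverage crosses a threshold."""
    h, w = labels.shape
    plan = np.zeros((h // cell, w // cell), dtype=bool)
    for i in range(plan.shape[0]):
        for j in range(plan.shape[1]):
            patch = labels[i * cell:(i + 1) * cell, j * cell:(j + 1) * cell]
            plan[i, j] = (patch == WEED).mean() > 0.2  # assumed threshold
    return plan

# Synthetic stand-in image; a real system would use field camera frames.
image = np.random.randint(0, 255, (256, 256, 3), dtype=np.uint8)
plan = spray_plan(segment(image))
print(f"Spraying {plan.sum()} of {plan.size} cells")
```

[The cost saving in the example above comes from the last step: pesticide is applied per cell rather than across the whole field.]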
Brenda Duverce: But there are also risks and negative implications that ESG investors need to consider in exploring A.I. driven opportunities. How should investors think about these?
Stephen Byrd: You know, we've been getting a lot of questions from ESG investors around some of the risks related to A.I., and there certainly are quite a few to consider. One big category of risk would be bias, and in the note we lay out a series of bias risks that we see with A.I.: data selection bias, algorithmic bias and, lastly, human bias. As an example, human bias occurs when the people developing and training the algorithm introduce their own biases into the data or the algorithm itself. So this is a broad category that's gathered a lot of concern, and that's quite understandable. Another area would be data privacy and security. As an example in the utility sector, a research entity focused on the power sector highlights that data collected for A.I. technologies, while meant to train models for a good purpose, could be used in ways that violate the privacy of the data owners. For instance, energy usage data can be collected and used to help residential customers be more energy efficient and lower their bills, but the same data could also be used to derive personal information, such as the occupation and religion of the residents.
Stephen Byrd: So Brenda, keeping in mind the potential benefits and risks that we just touched on, where do you think A.I.'s impact is likely to be the greatest and the most immediate?
Brenda Duverce: Beyond the improvements A.I. can bring to our society, in the ESG space in particular we are excited to see how A.I. can improve the data landscape, specifically around corporate disclosures. We think A.I. can help companies better predict their Scope 3 emissions, which tend to be the largest component of a company's total greenhouse gas emissions but the most difficult to quantify. We think machine learning in particular can be useful in estimating these emissions, using statistical learning techniques to develop more accurate models.
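[As one illustration of the statistical-learning point, here is a minimal sketch that fits a regression model to estimate Scope 3 emissions from basic company features. The synthetic data, the feature choices (revenue, headcount, sector) and the gradient-boosting model are assumptions made for the sketch; this is not the estimation methodology discussed in the episode.]

```python
# Minimal sketch: estimating Scope 3 emissions with statistical learning.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 500
# Hypothetical company features: revenue ($bn), employees (k), sector code.
X = np.column_stack([
    rng.lognormal(2, 1, n),   # revenue
    rng.lognormal(1, 1, n),   # employees
    rng.integers(0, 10, n),   # sector
])
# Synthetic "true" Scope 3 emissions: revenue- and sector-driven plus noise.
y = 50 * X[:, 0] * (1 + 0.3 * X[:, 2]) + rng.normal(0, 20, n)

# Train on companies that disclose; predict for those that do not.
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = GradientBoostingRegressor().fit(X_train, y_train)
print(f"Held-out R^2: {model.score(X_test, y_test):.2f}")
```

[In practice the training set would be companies that already disclose emissions, with the fitted model filling the gaps for those that don't.]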
Stephen Byrd: It's somewhat ironic that when we talk about A.I. within the context of ESG, one of the drawbacks to consider is A.I.'s own potential carbon footprint and emissions. So is this a big concern?
Brenda Duverce: Yes, we do think this is a big concern, particularly as we think about our path towards net zero. Since 2010, emissions at the data centers and transmission networks that underpin our digital environment have grown only modestly, despite rapidly growing demand for digital services. This is largely thanks to energy efficiency improvements, renewable energy purchases and a broader decarbonization of our grids. However, we are concerned that these efficiencies won't be enough to keep pace with the high compute intensity required as more A.I. models come online. This is a risk we hope to continue to explore and monitor, especially as it relates to our climate goals.
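[To give a feel for why compute intensity matters, here is a rough back-of-envelope calculation. Every figure in it (cluster size, power draw, PUE, run length, grid carbon intensity) is an assumption chosen purely for illustration, not a number from the research discussed here.]

```python
# Rough, illustrative back-of-envelope on training-run emissions.
N_GPUS = 1_000          # accelerators in the training cluster (assumed)
GPU_KW = 0.7            # average draw per accelerator, kW (assumed)
PUE = 1.2               # data-center overhead: cooling, networking (assumed)
HOURS = 30 * 24         # a month-long training run (assumed)
GRID_KG_PER_KWH = 0.4   # grid carbon intensity, kg CO2/kWh (assumed)

energy_kwh = N_GPUS * GPU_KW * PUE * HOURS
emissions_t = energy_kwh * GRID_KG_PER_KWH / 1_000
print(f"{energy_kwh:,.0f} kWh, roughly {emissions_t:,.0f} t CO2")
# Efficiency gains lower GPU_KW, PUE or grid intensity; demand growth
# raises N_GPUS and HOURS -- the tension the discussion describes.
```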
Stephen Byrd: In terms of the latest developments around risks from A.I., there's been a call to pause giant A.I. experiments. Can you give us some context around this?
Brenda Duverce: Sure. In a recent open letter led by the Future of Life Institute, several A.I. researchers called for a pause of at least six months on the training of A.I. systems more powerful than GPT-4. The letter highlighted the risks these systems can pose to society and humanity. In our view, a pause is highly unlikely. However, we do think this continues to bring to light why it is important to also consider the risks of A.I., and why A.I. researchers must follow responsible, ethical principles.
Brenda Duverce: So, Stephen, in the United States there's currently no comprehensive federal regulation specifically dedicated to A.I. What is your outlook for legislative action and policies around A.I., both here in the U.S. and abroad?
Stephen Byrd: Yeah, Brenda, I'd say broadly it does look like the pace of A.I. development is more rapid than the pace of regulatory and legislative developments, and I'll walk through some developments around the world. There have been several calls across stakeholder groups for effective regulation, the U.S. Chamber of Commerce being one of them. And last year we did see some state-level regulation focused on A.I. use cases and the risks associated with A.I. and unequal practices. But broadly, we think the likelihood of legislation being enacted in the near term is low, and in the U.S. in particular we expect to see more involvement from regulatory bodies and other industry leaders advocating for a national standard. The European approach to A.I. is focused on trust and excellence, aiming to increase research and industrial capacity while ensuring safety and fundamental rights. The A.I. Act is a proposed European law that assigns A.I. applications to three risk categories: unacceptable risk, high risk, and applications that fall into neither category, which would be largely unregulated. This proposed law has faced significant delays and its future is still unclear. Proponents expect it to lead the way for other global governing bodies to follow, while others are disappointed by its vagueness, its potential to stifle innovation, and what they see as its failure to explicitly protect against A.I. systems used in weapons, finance and health care.
Stephen Byrd: Finally, Brenda, what are some A.I. related catalysts that investors should pay attention to?
Brenda Duverce: In terms of catalysts, we'll continue to see innovation updates from the core A.I. enablers, which shouldn't be a surprise to our listeners. But we plan to keep monitoring the ever-evolving regulatory landscape on this topic and the discourse from influential organizations pushing for A.I. safety around the world.
Stephen Byrd: Brenda, thanks for taking the time to talk.
Brenda Duverce: Great speaking with you, Stephen.
Stephen Byrd: And thanks for listening. If you enjoy Thoughts on the Market, please leave us a review on Apple Podcasts and share the podcast with a friend or colleague today.