The risks around artificial intelligence tend to dominate the headlines, but how can the investment community deploy the technology for sustainable ends?
As investment professionals incorporate environmental, social and governance (ESG) factors into their decision-making, they face a major challenge: the quality of ESG data.
This is a relatively new area of finance, and data may be patchy, out-of-date, unverified or even non-existent. It might be inconsistent and difficult to compare. Some ESG factors, such as those related to social impact, are hard to measure. Or investors might simply be overwhelmed by mountains of information and unsure what is material or how to apply it.
While the availability of data on various ESG issues has risen in recent years, not all of it is reliable. Using the wrong ESG data exposes investors to financial and reputational risks. And regulatory scrutiny of fund labelling – including Europe’s Sustainable Finance Disclosure Regulation, which is designed to prevent greenwashing – has raised the risks of relying on poor or insufficient data.
But advances in artificial intelligence (AI) have fueled hopes that this technology can help fill the gaps in ESG datasets, just as it is helping investors with analysis more broadly.
For example, AI played a key role in the development of the ESG taxonomy of the Sustainable Development Investment Asset Owner Platform, whose asset owner members have a combined USD1.5 trillion under management. The initiative is designed as a standard for investing in the UN’s Sustainable Development Goals and aims to measure the extent to which assets such as companies contribute to the SDGs (see Figure 1).
Processing power
There are many ways in which AI can help improve the availability and reliability of ESG data. Many asset managers are already using AI – directly or through data providers – to source and process data they can use to identify sustainable investment opportunities, emerging trends or imminent risks to their portfolios. Some investors use data-screening services to exclude investments that don’t meet their ESG criteria.
The ESG data that investors need is likely scattered across numerous sources in various formats. AI can be used to trawl through hundreds of thousands of publicly available sources in multiple languages. These include corporate websites, sustainability reports and filings, news stories, press releases, independent research and conference call transcripts – and, increasingly, social media.
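To illustrate, here is a minimal sketch of this kind of automated sourcing in Python. The URLs and keyword list are hypothetical, and it assumes the widely used requests and BeautifulSoup packages; a production system would use multilingual NLP models rather than a simple keyword screen.

```python
# Minimal sketch of automated ESG data sourcing (hypothetical sources and keywords).
# Assumes the third-party packages `requests` and `beautifulsoup4` are installed.
import requests
from bs4 import BeautifulSoup

# Hypothetical list of publicly available sources to scan
SOURCES = [
    "https://example.com/acme-corp/sustainability-report-2023",
    "https://example.com/news/acme-corp-press-release",
]

# Simple keyword screen; real systems would use multilingual language models instead
ESG_KEYWORDS = {"emissions", "deforestation", "diversity", "human rights", "governance"}

def extract_esg_mentions(url: str) -> dict:
    """Download a page, strip the HTML and count ESG keyword mentions."""
    response = requests.get(url, timeout=10)
    response.raise_for_status()
    text = BeautifulSoup(response.text, "html.parser").get_text(separator=" ").lower()
    return {kw: text.count(kw) for kw in ESG_KEYWORDS if kw in text}

if __name__ == "__main__":
    for source in SOURCES:
        print(source, extract_esg_mentions(source))
```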
In short, the technology can help investors access more information than would be possible through human analysis, and far faster.
One example is natural language processing (NLP) – a branch of AI whereby computers analyze language in a similar way to humans – which can detect the sentiment of a text.
If negative customer reviews about a company are mounting or controversy starts brewing on social media, the technology might be able to identify an imminent ESG threat before a human analyst catches on, or before the company’s share price takes a hit. NLP can flag news about businesses polluting or treating employees badly before that information gains wider traction. It might also be able to pick up information on factors that are hard to measure. For example, by analyzing data from job review websites, AI can quantify employee satisfaction, which could be a useful measure of a company’s social performance.
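As a simplified illustration, the sketch below scores a handful of hypothetical employee reviews with an off-the-shelf sentiment model (here assumed to come from the open-source transformers library) and rolls them up into a crude satisfaction proxy:

```python
# Minimal sketch of sentiment scoring over employee reviews (hypothetical data).
# Assumes the third-party `transformers` package and its default pretrained sentiment model.
from transformers import pipeline

# Hypothetical reviews scraped from a job review website
reviews = [
    "Management genuinely cares about work-life balance.",
    "Long hours, poor safety culture and no room to raise concerns.",
    "Great colleagues, but pay has not kept up with inflation.",
]

# Off-the-shelf sentiment pipeline; a production system would fine-tune on ESG text
sentiment = pipeline("sentiment-analysis")
results = sentiment(reviews)

# Aggregate into a crude employee-satisfaction score between -1 and 1
score = sum(
    (1 if r["label"] == "POSITIVE" else -1) * r["score"] for r in results
) / len(results)
print(f"Employee satisfaction proxy: {score:+.2f}")
```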
NLP is being used to address many of the most common challenges for institutional ESG investors, including a lack of standardized third-party data, limited company disclosures, and subjective metrics.
Risks and reporting
AI can also monitor how companies’ activities are affecting biodiversity and ecosystems, such as whether or to what extent a company is contributing to deforestation or producing waste or air pollution. It can also be used to sift through satellite images for evidence of methane emissions or environmental pollution. This could help identify risks along a company’s value chain, outside of its direct operations. On the flip side, AI might be able to discern the impact of natural disasters or extreme weather on corporate assets and activities.
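A stylized sketch of the hotspot-detection step is shown below, assuming methane concentrations have already been retrieved from satellite imagery into a grid; the values and threshold are hypothetical and purely illustrative.

```python
# Stylized sketch: flag potential methane hotspots in a satellite-derived raster.
# Assumes concentrations (parts per billion) have already been retrieved as a NumPy grid.
import numpy as np

# Hypothetical 5x5 methane concentration grid around a company's facility
methane_ppb = np.array([
    [1850, 1860, 1855, 1870, 1865],
    [1858, 1990, 2050, 1995, 1872],
    [1861, 2040, 2110, 2005, 1868],
    [1857, 1985, 2020, 1990, 1866],
    [1852, 1863, 1859, 1871, 1864],
])

# Flag cells that exceed the local background by a chosen margin (illustrative threshold)
background = np.median(methane_ppb)
hotspots = np.argwhere(methane_ppb > background + 100)

print(f"Background estimate: {background:.0f} ppb")
print(f"Potential hotspot cells (row, col): {hotspots.tolist()}")
```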
AI can even help review companies’ compliance with expanding reporting requirements, such as those under the Corporate Sustainability Reporting Directive or the Task Force on Climate-related Financial Disclosures.
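One rough sketch of such a check, using a hypothetical list of disclosure topics rather than either framework’s actual requirements, might screen a report’s text for the topics it fails to mention:

```python
# Rough sketch: screen a report for required disclosure topics (hypothetical topic list).
REQUIRED_TOPICS = {
    "scope 1 emissions": ["scope 1"],
    "scope 2 emissions": ["scope 2"],
    "climate risk governance": ["board oversight", "climate governance"],
    "transition plan": ["transition plan", "net zero pathway"],
}

def missing_disclosures(report_text: str) -> list[str]:
    """Return topics for which none of the expected phrases appear in the report."""
    text = report_text.lower()
    return [
        topic for topic, phrases in REQUIRED_TOPICS.items()
        if not any(phrase in text for phrase in phrases)
    ]

sample_report = "Our board oversight of climate risk covers scope 1 and scope 2 emissions."
print(missing_disclosures(sample_report))  # ['transition plan']
```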
There are clear benefits from using AI to harness data, most notably its potential to outperform human capabilities. Human analysis is subjective and may contain errors. AI, by contrast, can evaluate vastly more data, far more quickly and accurately, than humans can. That could allow asset managers to integrate more ESG factors into their investment decisions.
Safety concerns
Still, there may be limitations. Analysis based on data released by a company is only as good as the data itself. Although reporting requirements are increasing, ESG disclosures are not yet standardized and cover only larger businesses and certain markets. Executives can avoid words with negative connotations to outmaneuver AI-based sentiment analysis of corporate reports. And the proliferation of data means analysts must still reconcile and make sense of competing data sets.
Moreover, while there are benefits to using AI to harness ESG data, AI poses its own environmental and social risks.
AI requires serious computing power. Researchers estimate that by 2027, powering the world’s AI could require more electricity than many small countries use in a year, raising carbon emissions substantially.
Experts are also warning of the harm the technology could do to society.
Perhaps the most tangible and immediate fear is that AI could cause job losses – or worse. By some estimates, advances in automation resulting from generative AI could affect as many as 300 million jobs. Some researchers and industry leaders even warn that AI could pose an existential risk to humanity if it one day begins to do things humans don’t want it to do.
There are also concerns that AI perpetuates biases because it is trained on data that can reflect human biases. British intelligence agencies, meanwhile, say generative AI systems will pose a stark threat to democracy in the next two years, given their potential to manipulate and deceive populations. On a similar note, AI that leverages people’s personal data or tracks their online activity raises privacy concerns. In a 2022 survey, CFA charterholders flagged transparency and the protection of intellectual property as the top two risks around the adoption of AI and big data.
Lawmakers and regulators have expressed concern about the use of AI in financial services, particularly over information privacy and cybersecurity risks. Asset managers will need to consider the potential for new AI regulation to be introduced in the coming years before they invest in developing the technology.
But AI can also be a powerful aid to productivity, and a force for good. Asset management is one area where this technology is already having a positive impact, as an effective tool for harnessing ESG data.