
AI ESG Risks: What Investors Need to Know

Co-authored by Daram Pandian, Associate, Private Equity & Venture Capital, PRI

The artificial intelligence (AI) industry is booming. A 2021 PwC survey of 1,000 companies found that only 5% were not using AI, down from 47% a year earlier.

This trend has also manifested itself in venture capital, with investors directing some US$75 billion into AI in 2020, according to the OECD. Eight years earlier, that figure was US$3 billion.

Responsible venture capital investing is growing, and when it comes to AI investments, the case for considering ESG factors and potential real-world outcomes is particularly strong.

Like many emerging and rapidly evolving technologies, AI systems present significant ESG risks and opportunities, not only for the companies that develop, sell or use them, but also for the people, society and environment with which they interact.

Venture capital GPs can help establish lasting structures, processes and values in portfolio companies before their practices become entrenched and difficult to change as these companies evolve.

We recently facilitated a workshop on this topic for members of the PRI venture capital network; we discuss the key themes explored below.

What are the main risks associated with AI systems?

Material ESG issues associated with AI systems have surfaced in several recent, high-profile examples.

Failure to consider these and other AI ethics issues can create significant risks for GPs related to reputation, compliance and value creation, although these may vary depending on whether an investee company creates the AI system or simply integrates it into its operations, for example.

According to the workshop discussion, determining the materiality of AI ethics issues is something that venture capital GPs grapple with.

For example, the risks associated with a company using an AI system to optimize a production process would be different from those that could arise if personal or consumer data were collected.

Problems can also arise if an AI system is used for unintended purposes, such as facial recognition technology misused by government authorities (Medium).

Assessing the ethical risks of AI

GPs can take several approaches to assess the AI ethics of a potential venture capital investment:

  • By type of application: assign general risk levels based on legislation such as the EU Artificial Intelligence Act (see also Existing and Proposed AI Laws, below), which divides AI applications into risk categories, from unacceptable to low risk (a minimal sketch of this approach follows this list).
  • Third-party assessment: use a third-party service with technical, ethical and legal expertise (e.g. AI ethics auditors, ESG service providers specializing in AI) to assess the risks of an AI system in detail, particularly for start-ups with mature products.
  • Evaluate the start-up's AI responsibility: assess how a start-up applies AI ethics in its own workflows and product development – start-ups that develop and deploy AI responsibly are more likely to detect AI ethics issues and resolve them.
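
To illustrate the first of these approaches, here is a minimal sketch of what an application-type risk screen might look like, assuming the four risk tiers used in the EU Artificial Intelligence Act (unacceptable, high, limited and minimal). The application types, tier assignments and function names below are illustrative assumptions, not legal classifications.

```python
# Illustrative only: a simplified screen that assigns hypothetical AI
# application types to the four risk tiers defined in the EU Artificial
# Intelligence Act. Tier assignments here are assumptions, not legal advice.

from enum import Enum


class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"  # prohibited practices, e.g. social scoring
    HIGH = "high"                  # tightly regulated, e.g. recruitment tools
    LIMITED = "limited"            # transparency obligations, e.g. chatbots
    MINIMAL = "minimal"            # largely unregulated, e.g. spam filters


# Hypothetical mapping from application type to risk tier (assumed labels).
APPLICATION_RISK = {
    "social_scoring": RiskTier.UNACCEPTABLE,
    "recruitment_screening": RiskTier.HIGH,
    "facial_recognition": RiskTier.HIGH,
    "customer_chatbot": RiskTier.LIMITED,
    "spam_filtering": RiskTier.MINIMAL,
}


def screen_application(application: str) -> RiskTier:
    """Return the risk tier for an application type; unknown types default
    to HIGH so they are flagged for further due diligence."""
    return APPLICATION_RISK.get(application, RiskTier.HIGH)


if __name__ == "__main__":
    for app in ["recruitment_screening", "spam_filtering", "gait_analysis"]:
        print(f"{app}: {screen_application(app).value} risk")
```

Defaulting unknown application types to the high-risk tier reflects a deliberately conservative stance: anything a GP cannot classify gets flagged for further due diligence rather than waved through.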

This can be done during screening and due diligence – for example, through conversations with start-ups about AI ethics or by using third parties to assess the technology in question.

GPs can include AI ethics in their investment memos and dashboards, or include a side letter or agreement alongside a term sheet to ensure expectations are clearly defined. Depending on the scope of their influence, GPs can also push for AI ethics metrics and reporting to be on the investee company's board agenda.

In our workshop, participants highlighted the importance of providing education and training on AI ethics to portfolio company founders and GP deal teams, through seminars or other resources.

Existing and Proposed AI Laws

AI-specific regulation has already been adopted in various jurisdictions, for example:

  • China recently passed a law regulating algorithmic recommendation services.
  • The U.S. state of Maine restricted the use of facial recognition by government authorities.
  • New York City prohibited employers from using AI for recruiting, hiring and promotion unless the technology has been audited for bias.

Other AI-specific laws are on the horizon. For instance:

  • The EU Artificial Intelligence Act is the most significant effort to regulate AI to date and is expected to become law. It divides AI applications into risk categories and defines rules for each.
  • The most significant federal regulatory effort in the United States is the Algorithmic Accountability Act, which would require companies to carry out impact assessments of the automated systems they sell and use.

Readers wanting more information on AI regulatory efforts can refer to this resource list.

A budding topic with growing relevance

Anecdotal conversations with venture capital GPs indicate that their approaches vary – some have developed structured processes with specific questions and risk areas to assess, while others are aware of AI ethics as a topic but may not apply such considerations consistently.

The focus on AI ethics is most prevalent among VC GPs targeting the tech sector or, within it, those focusing solely on AI. But that is likely to change, given the growth of AI systems across sectors and industries beyond technology, and the fact that several jurisdictions have passed or are developing laws to regulate the development and deployment of AI systems.

Our workshop highlighted another area of potential tension: where a potential investment presents AI-related ethical risks that are not financially significant but could lead to negative outcomes – for example, a social media company whose product is driven by algorithms that could foster user addiction and harm mental health. Some GPs may feel they cannot consider AI ethics in these circumstances due to their perception of fiduciary duty.[1]

One way GPs could address this would be to clarify in conversations with potential and existing LPs, especially when fundraising, the extent to which they will consider AI ethics when deciding whether to make an investment.

Having such conversations with LPs would not be out of place. Asset owners increasingly expect their investment managers to consider ESG factors and want to understand the positive and negative real-world outcomes to which their capital contributes.

Indeed, client demand is one of the main drivers of responsible investing in the venture capital industry.

This, alongside the clear rationale for assessing ESG risks and opportunities that many companies present, particularly in emerging sectors such as AI, will continue to shape the development of more formal and standardized practices.

The PRI supports this development in a number of ways, including by bringing signatories together to discuss relevant due diligence topics. We have also produced case studies highlighting emerging best practices among investment managers and asset owners, and published an article, Start up: Responsible investment in venture capital, assessing the landscape to date.

Explore these resources

This blog is written by PRI staff members and guest contributors. Our aim is to contribute to the wider debate around topical issues and help showcase some of the research and other work we undertake to support our signatories. While you may find posts here that broadly accord with the PRI's official views, blog authors write in an individual capacity and there is no "house view". The views and opinions expressed on this blog also do not constitute financial or other professional advice. If you have any questions, please contact us at [email protected].