How investors can act to make sure AI is a force for good

Published on: 10 January 2024

Artificial intelligence (AI) is infiltrating more and more areas of our lives, and that poses a challenge from a responsible investment perspective. In this article, Simone Andrews, Senior Responsible Investment Manager on the Fixed Income Team at APG, explains how investors can engage with companies on the potential human rights risks these emerging technologies could give rise to. As she says, “the key is to strike the right balance: mitigating the risks that AI involves without stifling the innovation that could help improve people’s lives”.

In summary
• AI is growing rapidly, but its use is not without risk.
• While AI can be a force for good, there are serious concerns about its potential impacts on human rights.
• In such a fast-growing, technical field, it’s difficult for investors to keep on top of the latest developments.
• APG engages with tech firms to urge them to harness the power of AI responsibly.

Generative AI – AI that’s able to create new content – is tipped to grow by 42% per year over the next decade to become a USD 1.3 trillion market by 2032. But its use involves risks – some of which we are only just beginning to understand.
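
To put that growth rate in perspective: compounding at 42% a year multiplies a market roughly 33-fold over a decade (1.42^10 ≈ 33), so a USD 1.3 trillion market in 2032 implies a base of around USD 40 billion at the start of the period.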

Policymakers are responding. The EU, for example, has drawn up the Artificial Intelligence Act, which covers areas including facial recognition technology, emotion recognition, surveillance, credit scoring and unfair hiring practices. The US has also set out standards for AI safety, along with requirements for developers and users of AI systems.

Do the rules go far enough? “Generally, we believe these regulations are a good start and much needed,” says Simone Andrews, who is a member of APG’s responsible investment team and leads engagements in the technology and health care sectors. “It’s clear we need a baseline for how algorithmic systems are developed and deployed that respects users’ rights. The question is whether the regulation will be effective over the long term, given the speed at which generative AI is developing.” She points out that the EU’s AI Act relies in part on companies self-regulating, which may not be enough.

Potential for undesirable social outcomes

AI could impact people’s fundamental human rights in various ways. For example, its use could infringe upon people’s freedom of expression, privacy or right to non-discrimination. Andrews notes that 2024 is an election year in the US, the EU and Taiwan, and social media will be awash with news, views and propaganda in the run-up to polling days. “We’re concerned about the spread of AI-generated misinformation and hate speech on social media platforms, as this could impact people’s right to free and fair elections,” she says.

Another concern is that bias becomes embedded in large language models. Bias in these models tends to disproportionately impact society’s most marginalized communities, and could further limit, for example, their employment opportunities or access to housing. There is also concern about the potential for AI to replace workers in several industries, including tech and auto. IBM’s chief executive recently announced plans to replace nearly 8,000 jobs in the company with AI in the coming years.

Hard for investors to keep up

All of this should concern investors in the companies currently leading the way in developing AI technologies, as well as in the firms that use them. Engaging with large tech companies on data privacy, cybersecurity and now AI is vital, but with technology changing so fast, it’s a challenge for investors to keep up.

“We’ve been focusing on trust, safety, privacy, data collection and discrimination, but generative AI has the ability to actually disrupt the fabric of society,” Andrews explains. AI has potential applications across health care, finance and education – fields that can really change people’s lives. “Our job is to start these conversations and advocate for our clients. In terms of engagement, we’re still at the very beginning.”

Investors need to agree on AI best practices
When engaging with companies about AI-enabled technologies and services, Andrews stresses the importance of transparency in understanding a product’s impact on the end user. Alongside ongoing human rights assessments, she asks what responsible AI policies a company has in place, and about the level of accountability and board oversight.

Her aim is to push digital companies to prioritize human rights when they are building out AI, and to ensure there are safeguards. Companies also need to provide evidence of what they’re actually doing to protect end users. “We need to see how these policies are implemented and measured, and what grievance mechanisms are in place. For example, it’s not enough for a company to say it doesn’t condone bias in AI systems. As investors, we need real disclosure, such as machine learning model cards – how are companies actually making sure there’s no bias embedded in large language models?” Independent verification is ideal, but Andrews says the broad investor community also needs to come together to agree on what best-practice disclosure on AI should look like. 
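
For readers unfamiliar with model cards, the sketch below illustrates the kind of structured disclosure Andrews is referring to. It is a minimal, hypothetical example, loosely modeled on the “Model Cards for Model Reporting” format proposed by Google researchers in 2019; every field, name and number is an illustrative assumption, not any company’s actual reporting.

```python
# A minimal, hypothetical model card, represented as plain Python data.
# The structure loosely follows the "Model Cards for Model Reporting"
# format; every field and value here is illustrative, not a real disclosure.

model_card = {
    "model_details": {
        "name": "example-sentiment-classifier",  # hypothetical model
        "version": "1.2.0",
        "owner": "Example Corp Trust & Safety",  # hypothetical owner
    },
    "intended_use": "Ranking user reviews by sentiment; not for use in "
                    "credit, hiring or other high-stakes decisions.",
    "training_data": "Public product reviews collected 2020-2023.",
    "evaluation": {
        # Disaggregated metrics are what let outsiders check for bias:
        # one headline accuracy number can hide large gaps between groups.
        "accuracy_overall": 0.91,
        "accuracy_by_group": {"group_a": 0.93, "group_b": 0.84},
    },
    "ethical_considerations": "Known performance gap for group_b; "
                              "mitigation work tracked internally.",
    "grievance_mechanism": "Users can appeal automated decisions at "
                           "example.com/appeals",  # hypothetical URL
}


def flag_disparities(card: dict, max_gap: float = 0.05) -> list[str]:
    """Return the groups whose accuracy trails the overall figure by
    more than max_gap: the kind of check an investor or auditor could
    run if disclosures like this were published."""
    overall = card["evaluation"]["accuracy_overall"]
    by_group = card["evaluation"]["accuracy_by_group"]
    return [g for g, acc in by_group.items() if overall - acc > max_gap]


if __name__ == "__main__":
    print(flag_disparities(model_card))  # ['group_b']
```

The exact schema matters less than the disaggregated, machine-readable detail: disclosure at this level would let investors verify claims about bias rather than take them on trust.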

Challenges of engagement
“Big tech companies Amazon, Meta, Microsoft, Apple and Alphabet are household names with global influence”, says Andrews. “They are at the forefront of the digital revolution, and they all have huge, diverse and loyal customer bases and play major roles in many aspects of people’s lives.”

Engagement with such influential companies is not without challenges. For example, governance issues such as concentrated ownership and dual share classes may make gaining access to the board difficult. “Some tech companies only allow limited dialogue with shareholders on these issues,” Andrews explains. “That’s when we use our voting power. We work alongside other investors with aligned values on AI ethics to put more pressure on the companies. And we engage with privately owned tech-enabled companies in our portfolio, too.”  

Striking the right balance
As a responsible investor, APG aims to create long-term value for its pension fund clients and their participants, and it believes trustworthy, safe AI systems can help do this. “But we advocate for tech and other companies to adopt stricter policies on the use of AI-enabled services and platforms”, Andrews continues. “As we’ve seen at firms like Meta, while the deployment of AI has contributed to its success, it can also create new risks if trust is eroded. If customer loyalty fades and brand reputation takes a hit, this can potentially impact both the business and its share price.” According to Andrews, the key is to strike the right balance: mitigating the risks that AI involves without stifling the innovation that could help improve people’s lives.

“Ethical standards for the development and deployment of AI are still evolving, but APG encourages companies to adopt a clear approach, with policies on AI that put human rights first. In particular, we believe tech companies need to carefully consider both the positive and negative potential impacts of their AI-related products and services, develop AI responsibly, and make their privacy policies completely transparent.

“We will continue engaging to make sure companies move in the right direction,” Andrews concludes. “In cases where our efforts are unsuccessful, we can use other tools, like voting to influence behavior, and if all else fails we may divest. However, if companies at the forefront of AI put safeguarding human rights at the heart of their approach, we believe the benefits of this new technology could outweigh the risks it involves.”