Philanthropy Leaders Recommend Investments in Responsible A.I.


Artificial intelligence (A.I.) is having a very real impact on our politics, our workforce and our world. Chatbots and other large language models, text-to-image programs and video generators are changing how we learn, challenging who we trust and intensifying debates over intellectual property and content ownership. Generative A.I. has the potential to supercharge solutions to some of society’s most pressing problems, from previously incurable diseases to our global climate crisis and more. But without clear intent and proper guardrails, A.I. has the capacity to do great harm. Rampant bias and disinformation threaten democracy; Big Tech’s dominance, if further consolidated, has the potential to crush innovation. Workers are rapidly displaced when they don’t have a voice in how technology is used on the job.  

As philanthropic leaders who manage both our grants and our capital for social good, we invest in generative A.I. that protects, promotes and prioritizes the public interest and the long-term benefit of humanity. With partners at the Nathan Cummings Foundation, we recently acquired shares in Anthropic, a leading generative A.I. company founded by two former OpenAI executives. Other investors in the company, which is recognized for its commitment to transparency, accountability and safety, include Amazon (AMZN), which has invested $4 billion, and Google (GOOGL), which has invested $2 billion.

We understand both the promise and the peril of A.I. The funds we steward are themselves the product of profound technological transformation: the revolutionary horseless carriage at the beginning of the last century and an e-commerce platform made possible by the fledgling internet at the end. Innovation is coded in our DNA, and we feel a profound responsibility to do all we can to steer the next paradigm-shifting technology toward its highest ideals and away from its worst impulses. 

Every harbinger of progress carries with it new risks—a Pandora’s box of intended and unintended consequences. Indeed, as French philosopher Paul Virilio famously observed, “The invention of the ship was also the invention of the shipwreck.” Today’s leaders would do well to heed Tim Cook’s charge to graduates in his 2019 Stanford commencement speech: “If you want credit for the good, take responsibility for the bad.”

We are doing exactly this. At the Ford Foundation, we invest in organizations that help companies scale responsibly by developing frameworks for ethical technology innovation. We’re backing public-interest venture capital that funds companies like Reality Defender, which works to detect deepfakes before they become a larger problem. And we’re betting big on the emerging field of public interest technology. From organizations like the Algorithmic Justice League, which pressed the IRS to stop requiring taxpayers to use facial recognition software to log into their accounts, ultimately ending that practice, to initiatives like the Disability and Tech Fund, which advances the leadership of people with disabilities in tech development, civil society is walking in lockstep with tech leaders to ensure that the public interest remains front and center.

Similarly, Omidyar Network aims to build a more inclusive infrastructure that explicitly addresses the social impact of generative A.I., elevating diversity in A.I. development and governance and promoting the innovation and competition needed to democratize and maximize its promise. It’s why, for example, Omidyar Network funds Humane Intelligence, an organization that works with companies to ensure their products are developed and deployed safely and ethically.

And now, Ford Foundation and Omidyar Network recognize Anthropic’s groundbreaking generative language A.I., which incorporates and prioritizes humanity, as aligned with our own missions to make investments that generate positive financial returns while benefiting society at large. Anthropic is a Public Benefit Corporation with a charter and governance structure that mandates balancing social and financial interests, underscoring a responsibility to develop and maintain A.I. for human benefit. Founders Dario and Daniela Amodei started the company with trust and safety at its core, pioneering technology that guards against implicit bias.

Their chatbot, Claude, distinguishes itself from competitors through “Constitutional A.I.,” Anthropic’s method of training a language model not only on human feedback but also on an explicit set of ethical rules and normative principles. For instance, Claude’s training incorporates the UN’s Universal Declaration of Human Rights, as well as a democratically designed set of rules based on public input.

Today, we see a unique opportunity for our colleagues in business and philanthropy to stake an early claim in a rapidly evolving field while putting the public interest front and center. According to Bloomberg, the generative A.I. market is poised to become a $1.3 trillion industry over the next decade. Investors who recognize this growing field as an opportunity to do well must also prioritize the public good and consider the full range of stakeholders affected by the advent of this technology.

Ultimately, everyone with an interest in preserving democracy, strengthening the economy, and securing a more just and equal future for all has a responsibility to ensure that this emerging technology helps, rather than harms, people, communities and society in the years and generations to come.
