AI for Good? Exploring the Shift Toward Public Benefit Corporations

Key Takeaways:

  • PBCs Aim to Balance Profit and Purpose: Public Benefit Corporations (PBCs) are designed to align financial success with social and environmental goals. They offer a governance structure that prioritizes accountability to all stakeholders.

  • OpenAI’s Transition Sparks Debate: OpenAI’s planned shift to a PBC raises concerns about whether the model will genuinely uphold the company’s original mission of benefiting humanity or prioritize profit maximization under looser accountability frameworks.

  • Climate and Privacy are AI’s Hidden Costs: AI development poses significant challenges, including high energy and water demands impacting climate and privacy risks from inadequate data regulation. Addressing these issues is critical to ensuring responsible AI growth.

AI Companies Declare They’re Public Benefit Corporations – What Happens Next?

Will AI companies use their Public Benefit Corporation (PBC) status for good?

The Artificial Intelligence (AI) industry’s massive growth, fueled by investments from Silicon Valley and tech giants like Microsoft, is attracting much attention. According to reports, generative AI might hit $1 trillion in market valuation by 2032. Several AI companies seek to combine altruistic endeavors and profit by filing as a public benefit corporation (PBC), a for-profit corporation with the added layer of public benefit.

Given AI's promises, from possibly mitigating natural disasters to expedited and more accurate cancer screening, it seems natural that the AI industry operates, at least in part, with the public good in mind.

However, underneath the growth and innovation, several ethical concerns put the industry at odds with the intentions of a public benefit corporation, namely:

  • Energy consumption and potential climate impact

  • Data privacy and security

  • Biases and discrimination in AI tools 

As the industry grows, will AI companies be able to meet the requirements of a PBC?

What is a Public Benefit Corporation?

A Public Benefit Corporation (PBC) is a for-profit corporate structure designed to balance profit generation with the pursuit of social, environmental, or other public benefits. Unlike traditional corporations, which are legally required to prioritize shareholder profits, PBCs must consider the impact of their decisions on all stakeholders, including employees, customers, communities, and the environment. 

By embedding public benefit commitments into their corporate charters, PBCs promote long-term value creation and accountability, appealing to investors and consumers who prioritize sustainability and social responsibility.

Key Requirements to Form a PBC:

  • Public Benefit Purpose: Clearly define one or more public benefits in the company’s Certificate of Incorporation.

  • Stakeholder Consideration: Legally obligated to consider the impact of decisions on all stakeholders, not just shareholders.

  • Annual Reporting: Publish regular reports detailing progress toward achieving the stated public benefit goals.

  • State Compliance: Follow state-specific regulations; PBCs are authorized in 35 states and the District of Columbia.

  • Director Responsibilities: Directors must balance public benefit commitments with fiduciary duties to shareholders.

This dual focus on profit and purpose defines the PBC model as a mission-driven approach to modern business.

OpenAI May Join xAI and Anthropic as a PBC

Founded in 2015 by Sam Altman and others, including Elon Musk, the founder of xAI and co-founder of Tesla who recently brought legal action against OpenAI, OpenAI started as a nonprofit organization with the lofty mission of developing artificial general intelligence (AGI) that “benefits all of humanity.” In less than ten years, the company has grown to a $157B valuation with one of the fastest-growing products of the last decade.

The monumental growth has not been without controversy. In November 2023, the nonprofit board that governs OpenAI ousted Altman, only to reinstate him less than a week later. The company also came under fire for utilizing actress Scarlett Johansson’s voice for an AI assistant, reportedly against her wishes. Additionally, several former employees have raised concerns about safety and accountability.

Similarly, Anthropic, founded by ex-OpenAI executives, was hit with a copyright suit from authors alleging that the company’s generative AI models use illegally obtained copies of their work. 

Intellectual property, transparency, and data safety are just a few of the concerns AI critics raise. The broader implications of these risks can materially undermine the goals of a PBC.

The Implications of AI and Climate Change

Big Tech companies are investing in nuclear energy to meet the demands of generative AI.

Nearly every major technology company, including Apple, Google, and Microsoft, is rolling out some form of generative AI despite critics and consumers raising concerns about the fast-growing technology. One major concern is AI's energy demands. 

AI’s computing power demands are significant, and a growing body of research points to an enormous need for water and electricity. According to one estimate, AI-related infrastructure may soon consume six times more water than the entire country of Denmark. As AI becomes embedded in the daily lives of consumers and enterprises, concerns about energy use are mounting, with some models consuming significant amounts of electricity. To keep up with demand, companies are rapidly building data centers, the physical facilities that house AI applications and data.

Data centers currently account for nearly 2% of global electricity use, and some estimates suggest that the use could rise to 9% in less than a decade.

Tech Companies Miss the Mark on Sustainability Goals

Tech companies like Microsoft, which previously pledged to be “carbon negative” by 2030, are feeling the strain. Microsoft’s most recent sustainability report shows rising emissions, attributed in part to infrastructure expansion such as new data centers. While the company offsets its electricity demand with renewable-energy credits, the materials used to build data centers still contribute to carbon emissions. Climate activists are sounding the alarm on the potential negative impacts of artificial intelligence on society and the environment.

From Privacy to the Climate: The Most Pressing AI Challenges

Climate change is not the only risk. Citing AI’s “trust problem,” Harvard Business Review identified 12 “AI risks”:

  • Disinformation

  • Safety and security

  • The black box problem

  • Ethical concerns

  • Bias

  • Instability

  • Hallucinations in LLMs

  • Unknown unknowns

  • Job loss and social inequalities

  • Environmental impact

  • Industry concentration

  • State overreach

Ethics and Social Issues

In recent years, AI-based products and services have yielded mixed results in testing:

  • Amazon piloted AI-powered recruitment software that graded candidates on a scale of 1 to 5. The media later reported that the tool systematically favored male candidates, even downgrading résumés from graduates of all-women’s colleges. 

  • A 2019 study found that a commercial risk-prediction tool was less likely to refer African-American patients for care and resources.

  • One tutoring company’s AI-powered recruiting software automatically rejected candidates over a certain age. The company agreed to pay a $365,000 settlement after the US Equal Employment Opportunity Commission (EEOC) said the software had rejected more than 200 otherwise qualified candidates.

Ensuring AI respects human rights and values beyond mathematical data is critical, but creating universally accepted ethical principles faces significant challenges.

Safety and Security

AI models require vast amounts of data, and with fast-evolving technologies, governing bodies are struggling to regulate how data is sourced, stored, and used. Recently introduced or amended regulations aim to address these risks, requiring generative AI models above a certain risk threshold to undergo “red-teaming” (simulated attacks to identify vulnerabilities). However, these measures often lack standardization and may amount to “security theater,” particularly for resource-limited startups unable to conduct exhaustive tests.

Additionally, experts warn about serious risks such as malicious AI use in cyberattacks or AI systems being “jailbroken” to follow illegal commands.

Anthropic AI’s Benefit Trust

Despite these concerns, proponents still see the possibility of AI doing real good for society. If companies like OpenAI can figure it out, becoming a PBC would benefit the company, investors, and consumers alike. One competitor, Anthropic, has already done so.

Launched as a PBC by two former OpenAI employees and known for its signature generative AI product, Claude, Anthropic instituted the “Anthropic Long-Term Benefit Trust.”

As a Delaware PBC, Anthropic is legally committed to advancing its public mission, to “responsibly develop and maintain advanced AI for the long-term benefit of humanity,” while pursuing financial success. This innovative trust, which holds a special class of stock, gradually gains the power to elect a majority of Anthropic’s Board of Directors.

The trust provides a flexible yet accountable governance framework that promotes both public benefit and profit. Investors and customers seem to see the benefit: the company is on track to reach $1 billion in revenue this year.

Will PBCs Usher in an Era of Ethical AI?

Experts debate the motivations of leaders like Altman and Musk in establishing benefit corporations. OpenAI’s registration in Delaware, a state known for its business-friendly laws, is central to these concerns. Amendments to Delaware’s PBC laws in 2020 could allow OpenAI to appear committed to balancing profit with purpose while limiting accountability. For example, Section 367 of Delaware’s corporate code provides that only shareholders with at least 2% ownership or $2 million in equity can sue to enforce the company’s social mission, potentially insulating leadership from broader scrutiny.

On the other hand, some experts argue that fears about changes in Delaware’s laws are unfounded and that the additional rigor of PBCs could provide better oversight for AI companies.

The Bottom Line

As AI continues to shape our world, companies like Anthropic adopting Public Benefit Corporation (PBC) structures offer a potential framework for balancing profit with social responsibility. However, it’s crucial to fully understand how PBCs operate and the implications of their governance. While PBCs could provide much-needed oversight and align AI development with societal benefits, concerns about their potential misuse for “impact washing” persist. As the AI industry evolves, a deeper understanding of PBCs is essential to ensure they genuinely foster ethical and sustainable innovation. 

As a Public Benefit Corporation, Keene Advisors works with PBCs and other socially responsible organizations to balance profitability with public benefit goals. 

For a complimentary consultation on how Benefit Corporations can integrate ethical practices into their business operations and gain access to impact investors, contact us today.
