The Berkshire Dilemma: Are Boards Making the Same Mistakes with AI Governance as They Did with Cybersecurity?


The Berkshire Hathaway board of directors and its shareholders must decide whether or not to lead in the boardroom on AI.

PwC estimates that global GDP could be almost 15% higher by 2030 as a result of artificial intelligence (AI), adding the equivalent of $15.7 trillion in economic growth and output to current GDP levels. PwC describes this influence as “the biggest business opportunity in today’s fast-changing economy.”

The advancements being made around non-biological intelligence, i.e., AI, and the integration of this technology into business processes to substitute for or augment human intervention, decision-making, and action are a great thing for business value propositions, which makes them a great thing for investor interests.

But is it big enough to compel corporate boards to change their traditional, clichéd leadership mindsets and outdated monitoring practices? Should AI be driving a fresh wave of board change, one where corporate boards update their directors’ knowledge and alter how they organize themselves and their activities to enhance their ability to oversee the AI systems their companies deploy, and to understand how AI is being used throughout the business and what its implications are?

Berkshire investor Tulipshare Capital LLC thinks so and has proposed that Berkshire Hathaway transform how it governs AI through the addition of an AI committee to the Berkshire board. The proposal will be up for a shareholder vote at the annual BRK general meeting in Omaha on May 3.

In describing the reasoning behind their shareholder proposal for a BRK AI committee, Tulipshare says:

Berkshire’s shareholders request that a new committee of independent directors on Artificial Intelligence (AI) be established to address the risks posed by the creation and use of AI in its own operations, portfolio companies, and new investments. The committee charter shall authorize the committee to meet with employees, customers, suppliers, and other relevant stakeholders at the discretion of the committee, and to retain independent consultants and experts as needed.


Shareholders are in favor of the responsible use of AI to promote growth, increase efficiency, and uphold Berkshire and its portfolio’s competitiveness. However, AI technologies also pose regulatory, societal, and human rights risks that require proactive management. The National Institute of Standards and Technology established a “Risk Management Framework” outlining a proper approach to AI risk that evaluates harm to people, organizations, and ecosystems, all of which are becoming more and more important as AI usage spreads across industries. The White House Office of Science and Technology Policy’s ethical guidelines for AI emphasize the importance of safety, transparency, algorithmic fairness, and human oversight.

AI systems, if not responsibly governed, can cause significant harm, as seen when Amazon, Berkshire’s portfolio company, scrapped a biased hiring tool and Alexa spread false claims about the 2020 US election, highlighting risks to fairness, public trust, and democracy. In 2024, Glass Lewis and ISS supported an AI-related shareholder proposal at Apple, another Berkshire portfolio company, arguing that greater transparency would enable shareholders to assess the risks posed by the use of AI without placing an undue burden on the business.

Berkshire’s substantial investments in AI-driven companies amplify the need for strong governance. Warren Buffett has warned that the irreversible nature of AI development calls for thorough oversight in order to reduce the significant risks posed by its misuse and by a lack of understanding. Without it, Berkshire risks falling behind in a rapidly evolving market, especially as institutional investors like Norges Bank publicly set expectations regarding governance of AI by their portfolio companies, and Legal & General Investment Management has also promulgated its expectations for AI adoption and publicly supported the AI proposal at Apple alongside Norges.

As AI systems become more complex, fail to function as intended, or result in unfavorable outcomes, Berkshire and its portfolio are increasingly exposed to financial, legal, and reputational risks. Companies failing to implement ethical AI governance face growing legal challenges, including lawsuits for discrimination and violations of privacy laws. By establishing clear ethical AI standards, Berkshire could anticipate risks, ensure regulatory compliance, avoid legal battles, and defend its reputation and consumer trust.

We urge shareholders to support the creation of this independent AI committee to better manage the risks and opportunities of AI, ensuring the long-term value and reputation of our Company so that Berkshire remains at the forefront of responsible corporate governance in an increasingly AI-driven world.

AI presents boardrooms around the world with new and unique challenges, but it is not the only information technology challenge boards have recently faced. Boardrooms are also struggling to govern cybersecurity, and in the not-too-distant past they were confronted with other information technology developments such as social media, cloud computing, IoT, and even the advancement of the internet itself.

Given the pace, scope, and level of digital disruption that has occurred over the past few decades, it is not surprising that the boardroom must adapt and evolve as an oversight control within the digital business systems that power the companies it serves. But to be effective, boardrooms need to be as adaptive as these issues are disruptive, and most of them are showing that they are not up to the challenge. This imposes more risk on investors than it should, and investors foot the bill when those risks become reality.

Boards have a responsibility to govern both the positive, value-creating opportunities of these technologies and their negative consequences. However, the majority of U.S. public company boards have done little to improve their ability to govern these issues, aside from actively opposing common-sense board reform that would benefit investors and other stakeholders. While some boards are leading by adding directors with the relevant expertise to understand these technologies and by changing how they organize themselves to bring more focus to these issues, they are the exceptions, not the rule.

Boards have made a variety of mistakes with regard to cybersecurity oversight, from viewing it as just another general risk to believing that effective oversight can be achieved within the framework of an outdated legacy governance model. This belief has prevented the addition of director cyber experts to the board and relegated cybersecurity, and even AI oversight, to an audit committee afterthought. Because neither approach strengthens the boardroom’s role as a control over the cybersecurity system, this inaction can actually harm the company’s overall cyber risk profile.

This raises the question: is the corporate boardroom the primary source of America’s chronic private-sector cybersecurity weaknesses? And will it also contribute to American underperformance regarding the use and risks of AI?

Independent Board Director and Former Fortune 50 CISO Joanna Burkey, DDN.QTE, goes over this in detail:

With the rapidly evolving complexity of digital systems, and every company’s increasing reliance on them, it is not sustainable for the audit committee to continue to be the sole repository for technology risk conversations. A dedicated governance structure is required to effectively manage the extensive impacts of these issues on an enterprise as technology, especially AI and cybersecurity, introduces more and different types of risks and opportunities. It’s not an either/or situation either — a technology committee can still liaise with an audit committee to enhance information sharing as appropriate, but the audit committee is not the appropriate place for in-depth discussions pertaining to technology, digital transformation and their related risks.

From this perspective, boardrooms across America appear to have failed at both cybersecurity and AI. One only has to look at the current prevalence of audit committee responsibility for these issues to see how slow boards are to evolve to meet the challenge of the moment with AI and cybersecurity. According to Deloitte’s Audit Committee 2025 report, AI governance also ranks among the top 10 audit committee priorities for 2024 and 2025.

This is a common weakness with cybersecurity governance as well: the report indicates that 62% of non-financial services respondents assign primary responsibility for cybersecurity oversight to their audit committee, with cybersecurity reflected as the top audit committee priority for both 2024 and 2025. Audit committee responsibility frequently misaligns director skills with these issues and marginalizes these complex topics within the audit committee’s primary financial reporting responsibility and busy agenda, relegating AI and cybersecurity to an afterthought.

The failure of boardroom leadership to evolve beyond this status quo, an antiquated mindset imposed in 2002 after the financial reporting crisis that spawned the Sarbanes-Oxley Act, fails investors on these issues and handicaps America’s path to the digital future. A governance model established in 2002 is insufficient to address the AI-driven realities of 2025 and the opportunities that the digital future holds. With AI disruption upon them, boards should be looking in the mirror; maybe the problem is coming from within the boardroom.

The Berkshire dilemma reflects this lack of leadership on a transformative issue and brings it into the boardroom of one of America’s most well-known businesses, led by Warren Buffett, 94. The Berkshire board, which Mr. Buffett chairs, made this recommendation to shareholders regarding the Tulipshare Capital LLC AI committee proposal:

THE BOARD UNANIMOUSLY FAVORS A VOTE AGAINST THE PROPOSAL FOR THE FOLLOWING SUMMARY REASONS:

Berkshire’s Board recommends a “no” vote on this proposal. The Board does not believe that creating a new committee of independent directors on artificial intelligence is necessary or in the best interests of shareholders.

The Board periodically receives updates on the major risks and opportunities of Berkshire’s operating businesses. Berkshire manages its operating companies on an unusually decentralized basis and has little involvement in the day-to-day operations of these companies. The creation of a new, independent Board committee focused on Artificial Intelligence would be inconsistent with Berkshire’s culture and is unnecessary.

Consistent with Berkshire’s decentralized management model, the subsidiaries are required to regularly evaluate and review their individual operations and compliance risks, as required by Berkshire’s Prohibited Business Practices Policy (publicly accessible at https://berkshirehathaway.com/govern/pbpp-2024dec.pdf). This risk assessment is required to take into consideration the management of emerging risks to ensure compliance with applicable laws. Risks associated with the use of new technologies, such as artificial intelligence, are specifically covered by the subsidiaries’ evaluation of external risks.

Berkshire’s Governance, Compensation and Nominating Committee develops and recommends corporate governance guidelines applicable to the Company, and its Audit Committee reviews how the Company assesses and manages its exposure to risk. The Board believes that this governance structure, in addition to the risk assessment obligations imposed on its subsidiaries in relation to the use of artificial intelligence, provides an appropriate level of oversight at this time, and that an independent Artificial Intelligence committee is not necessary. Accordingly, the Board recommends that our shareholders vote against this proposal. (BRK DEF 14A 2025)

In short, “shareholders would not be served by changing how we’ve done things in the past.”

The Berkshire audit committee also carries responsibility for cybersecurity oversight, the same flawed governance practice described above, and lists it as a top priority. The Berkshire Prohibited Business Practices Policy cited in this statement reflects a compliance-focused risk assessment model. A compliance-driven model will not go far enough to properly understand and assess AI risk because, with few to no existing U.S. regulatory requirements governing the use of AI, there are no processes to comply with.

Mr. Buffett is quoted as saying that “risk comes from not knowing what you are doing.” The Tulipshare proposal attempts to prevent that from occurring in the Berkshire boardroom. Tulipshare is also encouraging Berkshire to set the AI tone at the top for its portfolio companies and to lead in the face of a significant unknown, which is when leadership is needed most.

I would like to see Berkshire lead in the boardroom on AI and set the tone at the top for American business and boardrooms. But that requires greater boardroom effectiveness than the current BRK governance status quo will deliver, and it requires a vote FOR the Tulipshare shareholder proposal. Fortunately, investors have the power to protect their interests and cast their ballots accordingly.

I chose AI boardroom leadership and voted my shares today FOR the proposed BRK AI committee. I ask other BRK investors to cast their votes for the proposed AI committee and to VOTE FOR BRK boardroom leadership on AI.

There is far more to gain than to lose with a FOR vote. Join me on the journey of AI boardroom leadership.
