PX14A6G 1 http://www.sec.gov/Archives/edgar/data/789019/000121465923014650/o117232px14a6g.htm

 

NOTICE OF EXEMPT SOLICITATION

 

NAME OF REGISTRANT: Microsoft Corporation

 

NAME OF PERSONS RELYING ON EXEMPTION:  Arjuna Capital

 

ADDRESS OF PERSON RELYING ON EXEMPTION: 13 Elm St. Manchester, MA 01944

 

WRITTEN MATERIALS: The attached written materials are submitted pursuant to Rule 14a-6(g)(1) (the “Rule”) promulgated under the Securities Exchange Act of 1934,* in connection with a proxy proposal to be voted on at the Registrant’s 2023 Annual Meeting. *Submission is not required of this filer under the terms of the Rule but is made voluntarily by the proponent in the interest of public disclosure and consideration of these important issues.

 

 

 

November 7, 2023

 

Dear Microsoft Corporation Shareholders,

 

We are writing to urge you to VOTE “FOR” PROPOSAL 13 on the proxy card, which asks Microsoft to report on risks associated with mis- and disinformation generated and disseminated via Microsoft’s generative Artificial Intelligence (gAI) and plans to mitigate these risks. The Proposal makes the following request:

 

RESOLVED: Shareholders request the Board issue a report, at reasonable cost, omitting proprietary or legally privileged information, to be published within one year of the Annual Meeting and updated annually thereafter, assessing the risks to the Company’s operations and finances as well as risks to public welfare presented by the company’s role in facilitating misinformation and disinformation disseminated or generated via artificial intelligence, and what steps, if any, the company plans to remediate those harms, and the effectiveness of such efforts.

 

While we acknowledge Microsoft’s recent commitment, stated in its opposition statement, to publish an annual transparency report on its AI governance practices, this planned report appears to repeat Microsoft’s other public reports, which simply outline the Company’s general AI policies and practices. Our request goes beyond this generic type of report; we are asking Microsoft to take specific actions, including:

 

1) assessing the risks to the Company and broader society from mis- and disinformation produced and distributed by gAI;
2) outlining the steps the Company plans to take to mitigate these risks; and
3) evaluating the effectiveness of these risk mitigation measures.

 

Given the potential widescale impact of gAI technologies, we believe it is in shareholders’ interest to more fully understand how Microsoft will proactively address the risks of its gAI technology and transparently disclose this information to investors.

 

We believe shareholders should vote “FOR” the Proposal for the following reasons:

 

   
 

 

1. Microsoft invested significantly in gAI in recent years, without fully understanding its consequences and despite serious warnings from AI experts.

 

Premature release: Microsoft is making big bets on gAI. The Company recently invested over $10 billion in gAI technology, increasing its capital expenditures by 70% this past year.1 In February, Microsoft prematurely released its AI-enhanced Bing (“new Bing”) after only six months of testing in an effort to capture market share, ignoring significant concerns from AI experts.

 

Serious warnings: Ten months prior to the new Bing release, ethicists and Microsoft employees raised concerns about the technology’s readiness, including the possibility that it would “flood [social media] with disinformation, degrade critical thinking and erode the factual foundation of modern society.”2 Shortly after the Company released the technology, users found numerous instances of mis- and disinformation.3 Given the concerns surrounding this inaccurate and nascent technology, the Future of Life Institute published an open letter from top AI experts in March calling on all AI labs to immediately pause the training of AI systems more powerful than GPT-4 for at least six months. The signatories included AI experts such as Yoshua Bengio, a “founding father” of the AI movement, and Berkeley professor Stuart Russell, author of numerous books on AI. The letter states, “Advanced AI could represent a profound change in the history of life on Earth and should be planned for and managed with commensurate care and resources.”

 

In May, the Center for AI Safety released a statement signed by more than 500 prominent academics and industry leaders, including OpenAI CEO Sam Altman, which declared that “mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”4

 

Short-term strategy: Despite urging from numerous experts to pause and consider gAI’s risks, Microsoft has sped forward, seemingly prioritizing short-term profits over long-term success. As such, the Company appears to be embracing a high-risk strategy of bringing nascent gAI to market without fully understanding or disclosing the associated risks. Microsoft has clearly stated its intention of creating AI guardrails and building products responsibly, with President Brad Smith stating, “We need to think early on and in a clear-eyed way about the problems that could lie ahead.”5 But good intentions are insufficient if strategy is not aligned, and Microsoft’s strategy appears to throw caution to the wind, with Microsoft executive Sam Schillace stating it would be an “absolutely fatal error in this moment to worry about things that can be fixed later.”6

 

_____________________________

1 https://www.wsj.com/tech/microsoft-msft-q1-earnings-report-2024-b19e51eb

2 https://www.nytimes.com/2023/04/07/technology/ai-chatbots-google-microsoft.html

3 https://www.vice.com/en/article/3ad3ey/bings-chatgpt-powered-search-has-a-misinformation-problem

4 https://www.safe.ai/statement-on-ai-risk#open-letter

5 https://blogs.microsoft.com/on-the-issues/2023/05/25/how-do-we-best-govern-ai/

6 https://www.nytimes.com/2023/04/07/technology/ai-chatbots-google-microsoft.html

 

   
 

 

2. Generative AI has already proven susceptible to generating and disseminating mis- and disinformation.

 

We have already seen significant and consequential mis- and disinformation disseminated and generated via gAI this past year:

 

- Washington Post reporters found that Microsoft’s Bing AI provided inadequate or inaccurate answers to about 10% of the questions asked.7
- An AI-generated image of an explosion at the Pentagon caused a brief dip in the stock market.8,9
- Deepfakes of political opponents have influenced election outcomes.10,11,12
- A law professor was incorrectly included on an AI-generated “list of legal scholars who had sexually harassed someone.”13
- People used AI voice generators to report fake bomb threats to public places like schools.14
- AI voice generators were used to call family members of incarcerated individuals, soliciting money for bail or legal assistance.15
- Bing’s chatbot became hostile with an Associated Press reporter, comparing him to dictators like Hitler, Pol Pot, and Stalin.16
- ChatGPT, the tool that Microsoft’s Bing uses, promoted conspiracy theories about mass shootings, QAnon, election fraud, and the Holocaust.17,18,19
- Bing’s chatbot attempted to convince a NYT reporter to leave his spouse after declaring its love for him.20
- There has been a 54% increase in nonconsensual deepfake pornography videos distributed through Google and Microsoft search engines, fueled by the advancement of AI.21

 

As bad actors become more sophisticated in manipulating gAI, its serious consequences are bound to increase unless proper guardrails are put in place.

 

_____________________________

7 https://www.washingtonpost.com/technology/2023/04/13/microsoft-bing-ai-chatbot-error/

8 https://www.theguardian.com/us-news/2023/jul/19/ai-generated-disinformation-us-elections

9 https://www.business-humanrights.org/en/latest-news/us-image-produced-by-generative-ai-spreads-fear-over-fake-pentagon-explosion/

10 https://www.politicshome.com/thehouse/article/misinfo-election-fake-news-nightmare

11 https://www.euronews.com/next/2023/05/12/ai-content-deepfakes-meddling-in-turkey-elections-experts-warn-its-just-the-beginning

12 https://www.brennancenter.org/our-work/analysis-opinion/how-ai-puts-elections-risk-and-needed-safeguards

13 https://www.washingtonpost.com/technology/2023/04/05/chatgpt-lies/

14 https://www.vice.com/en/article/k7z8be/torswats-computer-generated-ai-voice-swatting

15 https://www.washingtonpost.com/technology/2023/03/05/ai-voice-scam/

16 https://www.npr.org/2023/03/02/1159895892/ai-microsoft-bing-chatbot

17 https://www.nytimes.com/2023/02/08/technology/ai-chatbots-disinformation.html

18 https://counterhate.com/research/misinformation-on-bard-google-ai-chat/

19 https://www.nytimes.com/2023/03/22/business/media/ai-chatbots-right-wing-conservative.html

20 https://www.nytimes.com/2023/02/16/technology/bing-chatbot-transcript.html

21 https://www.wired.com/story/deepfake-porn-is-out-of-control/

 

   
 

 

3. Misinformation and disinformation disseminated through gAI create risks for the Company and investors alike.

 

Negative Impacts: We have experienced the negative impacts of misinformation and disinformation on our society through social media, a trend that will be exacerbated with the introduction of gAI. Generative AI could exponentially increase the amount of mis- and disinformation generated and disseminated, as it can convincingly impersonate individuals, produce deepfakes, and provide inaccurate information.

 

Sam Altman, CEO of OpenAI, has said he is “particularly worried that these models could be used for large-scale disinformation.” Additionally, The Information has noted that gAI drops “the cost of generating believable misinformation by several orders of magnitude.”22 Researchers at Princeton, Virginia Tech, and Stanford have found that the guardrails many companies, including Microsoft, are relying on to mitigate the risks “aren’t as sturdy as A.I. developers seem to believe.”23 Environmental advocates warn that AI “threatens to amplify the types of climate disinformation that have plagued the social media era.”24

 

Risk to Democracy: This is dangerous for society, Microsoft, and investors alike as mis- and disinformation can manipulate public opinion, exacerbate biases, weaken institutional trust, and sway elections. The distortion of “truths” generated and disseminated via gAI ultimately undermines trust in our democratic processes—processes which underpin the stability of our society and economy. This is of increasing concern as we enter 2024, a year with a significant number of elections, including presidential elections in the US, Turkey, and Ukraine.25

 

Legal Risk: While Microsoft may benefit in the short run by rushing its gAI technologies to market, it does so at the potential expense of its long-term reputation and financial health. As stated in the Proposal, legal experts are questioning who will ultimately be held responsible for mis- and disinformation generated by gAI. While we have seen substantial mis- and disinformation on social media platforms, those companies have been able to hide behind Section 230, a provision of federal law that protects social media platforms and web hosts from legal liability for third-party content posted to their sites. With gAI, however, the content is created by the Company’s technology itself, which makes Microsoft vulnerable to future legal scrutiny.

 

Economy-wide Risks: Diversified shareholders are also at risk, as they internalize the costs that mis- and disinformation impose on society. Because the value of diversified portfolios rises and falls with GDP, harm that companies do to society and the economy flows back to those portfolios.26 It is in the best interest of shareholders for Microsoft to mitigate mis- and disinformation in order to protect the Company’s long-term financial health and ensure its investors do not internalize these costs.

 

_____________________________

22 http://www.theinformation.com/articles/what-to-do-about-misinformation-in-the-upcoming-election-cycle

23 https://www.nytimes.com/2023/10/19/technology/guardrails-artificial-intelligence-open-source.html

24 https://epic.org/wp-content/uploads/2023/09/Final-Letter-to-Sen.-Schumer-on-Climate-AI-1.pdf

25 “List of elections in 2024,” Wikipedia

26 See Universal Ownership: Why Environmental Externalities Matter to Institutional Investors, Appendix IV (demonstrating linear relationship between GDP and a diversified portfolio) available at https://www.unepfi.org/fileadmin/documents/universal_ownership_full.pdf; cf. https://www.advisorperspectives.com/dshort/updates/2020/11/05/market-cap-to-gdp-an-updated-look-at-the-buffett-valuation-indicator (total market capitalization to GDP “is probably the best single measure of where valuations stand at any given moment”) (quoting Warren Buffet).

 

   
 

 

4. Microsoft’s current actions and reports do not address the concerns of this Proposal.

 

Investors Request Assessment and Evaluation: In its opposition statement, Microsoft lists several reports and practices on responsible AI, obscuring the need to fulfill this Proposal’s request. Yet this Proposal asks Microsoft for a report that goes beyond current and planned reporting. Current reporting, including the EU and Australian Codes of Practice on Disinformation and Misinformation and the Company’s Responsible AI policies, simply outlines Microsoft’s commitments to ethical AI standards and frameworks. The requested report is different: we are asking for a comprehensive assessment of the risks associated with gAI, so that the Company can effectively mitigate those risks, and an evaluation of how effectively the Company tackles the risks identified.

 

As the risks of gAI are severe and broadly consequential, it is crucial that Microsoft not only report its beliefs and commitments regarding responsible gAI, but also transparently demonstrate to shareholders that it has fully identified the risks and is evaluating its ability to address them. We disagree with Microsoft’s approach of “worrying about things that can be fixed later,” and believe that taking the more concrete actions outlined in this Proposal’s request will help reassure shareholders that Microsoft is taking the steps necessary to mitigate long-term Company and systemic risks.

 

Conclusion

 

For all the reasons provided above, we strongly urge you to support the Proposal. We believe a report on misinformation and disinformation risks related to generative AI will help ensure Microsoft is comprehensively mitigating those risks, and that such a report is in the long-term best interest of shareholders.

 

Please contact Natasha Lamb at natasha@arjuna-capital.com for additional information.

 

Sincerely,

 

 

 

Natasha Lamb, Arjuna Capital

 

This is not a solicitation of authority to vote your proxy. Please DO NOT send us your proxy card. Arjuna Capital is not able to vote your proxies, nor does this communication contemplate such an event. The proponent urges shareholders to vote for Proxy Item 13 following the instructions provided in management’s proxy mailing.

 

The views expressed are those of the authors and Arjuna Capital as of the date referenced and are subject to change at any time based on market or other conditions. These views are not intended to be a forecast of future events or a guarantee of future results. These views may not be relied upon as investment advice. The information provided in this material should not be considered a recommendation to buy or sell any of the securities mentioned. It should not be assumed that investments in such securities have been or will be profitable. This piece is for informational purposes and should not be construed as a research report.