
AI Policy

Updated: May 31

By Tim Rosado


Alongside some of the most pressing issues of today, such as security and other threats deriving from Russia and China, global warming, and violent crime, are other significant emerging concerns challenging policymakers across the globe. One of the most significant is planning for, and managing, the advancement and deployment of artificial intelligence (AI).

What follows is the beginning of a running list of the most notable recent viewpoints, analyses, and developments concerning AI policy in the United States and around the world, in particular how governments should respond.

_______________


One-Sentence Statement on AI


A group of AI experts, scientists, and public figures signed (May 30) a one-sentence statement on AI risk, released on the website of the Center for AI Safety. The statement: "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war."

_______________

Brad Smith, Microsoft's President


Microsoft's President, Brad Smith, publicly endorsed (May 25) governmental regulation of AI, including requirements that systems connected to critical infrastructure be able to be fully turned off or slowed down, much like the emergency braking systems used in transportation. He also called for laws clarifying when additional legal obligations apply to AI, requirements that AI-generated images and video be clearly identified as such, and licensing requirements for companies deploying highly capable AI models.

_______________


OpenAI CEO Testimony


Sam Altman, the CEO of OpenAI, maker of the popular ChatGPT AI chatbot, testified before the Senate Judiciary Committee (May 16) on AI regulation. Mr. Altman and his company have been advocates for Federal regulation of AI as the technology advances and its use expands.


The key recommendations made by Mr. Altman with respect to government oversight, regulation, and support:

  • The US government should consider a combination of licensing or registration requirements for development and release of AI models above a crucial threshold of capabilities, alongside incentives for full compliance with these requirements. Mr. Altman said he supported the creation of a new Federal agency to undertake this effort.

  • The US government should consider facilitating multi-stakeholder processes, incorporating input from a broad range of experts and organizations, that can develop and regularly update the appropriate safety standards, evaluation requirements, disclosure practices, and external validation mechanisms for AI systems subject to license or registration.

  • Policymakers should consider how to implement licensing regulations on a global scale and ensure international cooperation on AI safety, including examining potential intergovernmental oversight mechanisms and standard-setting.

_______________


Mustafa Suleyman, DeepMind Co-Founder


Mustafa Suleyman, co-founder of the AI lab DeepMind, now a subsidiary of Google, said at a San Francisco forum (May 9) that there will be a "serious number" of unhappy and "very agitated" losers in an AI-driven economy 5 to 10 years from now, and that governments need to consider how to support people whose jobs are destroyed by AI, potentially including through a basic income approach.

_______________


Apple Co-Founder Steve Wozniak


Apple co-founder Steve Wozniak told the BBC (May 9) that humans have to take responsibility for what is generated by AI, that AI content should be clearly labeled, and that regulation should hold big tech firms accountable for the information their products produce.

_______________


Google CEO Views


Google's Chief Executive Officer (CEO) Sundar Pichai stated in an interview on 60 Minutes (April 16) that AI regulation is necessary to ensure the technology is aligned with human values, such as morality. Without any guidelines, Mr. Pichai believes, AI would be abused by bad actors, such as for spreading disinformation, which can cause significant societal harm.

Mr. Pichai said that the process of developing regulations must include engineers, social scientists, ethicists, and philosophers, among others, and that the process must be "very thoughtful."

_______________


Schumer Framework


Senate Majority Leader Chuck Schumer announced (April 13) that he is working to develop a framework for the regulation and oversight of artificial intelligence applications.


While currently short on details, with the proposal reportedly still being refined, the Senator outlined that it "will advance four guardrails to deliver transparent, responsible AI while not stifling critical and cutting-edge innovation." The four guardrails are "Who, Where, How, and Protect." The first three are intended to "inform users, give the government the data needed to properly regulate AI technology, and reduce potential harm." The "Protect" guardrail focuses on aligning systems "with American values" and ensuring technologies deliver on promises "to create a better world."

_______________


Biden Statement

President Biden discussed AI and related policy in comments before a meeting of his Council of Advisors on Science and Technology (April 4). Among other things, the President said that:

  • Tech companies "have a responsibility, in my view, to make sure their products are safe before making them public."

  • "Social media has already shown us the harm that powerful technologies can do without the right safeguards in place."

  • Congress needs to pass bipartisan privacy legislation that requires companies "to put health and safety first in the products that they build."

  • In response to a question concerning whether AI is dangerous, the President replied that "it remains to be seen. It could be."

_______________


Washington Post Story

The Washington Post published (April 5) a story by Will Oremus on the current AI backlash. Mr. Oremus argues that current concerns about AI may be generally overblown because the technology is a long way from seriously impacting our society, economy, and jobs. He argues that our policy focus should instead be on the risks posed by the companies managing current capability, i.e., those companies that "cut corners" on the technology, leading to problems such as race-based bias, AI-assisted crime, and the spread of misinformation and disinformation.

_______________


Time/McNamee Editorial


Time published (April 5) an editorial from Roger McNamee of the private equity firm Elevation Partners, who questions whether private companies should be able to conduct what he characterizes as "uncontrolled experiments on the entire population" (much like current social media companies) without "guardrails or safety nets," including demonstrating that their AI products are safe. Mr. McNamee argues that much of current AI-produced information is drawn from junk information around the web (because of its low cost), not from the experts who can and do produce better information. He suggests that laissez-faire technology policy has enabled this kind of problem, putting our society and democracy at risk.


Mr. McNamee argues that what is required now is a different approach to the development and deployment of new technologies, with a prioritization of consumer safety, democracy, and other values over returns to shareholders (presumably through more oversight and regulation).

_______________


Dimon Comments


JPMorgan Chase CEO Jamie Dimon's annual public letter to shareholders highlighted AI and its current and future use at the company. In the letter, Mr. Dimon characterized AI as "an extraordinary and groundbreaking technology" critical to the company's future success. He claims that the company already "has more than 300 AI use cases in production today for risk, prospecting, marketing, customer experience and fraud prevention," and that "AI runs throughout our payments processing and money movement systems across the globe."


Mr. Dimon also claims in the letter that the company has an interdisciplinary team of ethicists helping to prevent unintended misuse, anticipate regulation, and promote trust with clients, customers, and communities, and that it is an "absolute necessity" that the use of AI-generated data follow the laws of the land, for the benefit of both the company and the larger financial system.

_______________


The Open Letter


A large group of AI leaders, thinkers, users, and strategists published an open letter (March 29) calling for an immediate pause of at least six months in the training of AI systems more powerful than GPT-4, the latest model behind the popular ChatGPT. Notable signatories included Elon Musk and co-founders of Apple, Pinterest, Ripple, and Skype.


During this proposed pause, the letter asks for actions on the part of AI labs and experts, as well as policymakers. AI labs and experts are asked to jointly develop and implement a set of "shared safety protocols" audited and overseen by "independent outside experts." The envisioned protocols should "ensure that systems adhering to them are safe beyond a reasonable doubt." The pause would represent a "stepping back from the dangerous race to ever-larger unpredictable black-box models with emergent capabilities," not a stopping of AI development, the letter claims.


The letter asks AI developers to work with policymakers to "dramatically accelerate development of robust AI governance systems," to include at least: "new and capable regulatory AI authorities"; "oversight and tracking" of the most capable AI systems and large pools of computational capability; systems to help distinguish real from synthetic content and to track model leaks; an auditing and certification ecosystem; liability for AI-caused harm; "robust public funding" for AI safety research; and "well-resourced institutions for coping with the dramatic economic and political disruptions (especially to democracy) that AI will cause."

_______________


And from the Maker of ChatGPT


The letter follows the February release of an AI statement by the maker of ChatGPT, OpenAI, regarding the company's plans and what it believes should be the way forward on AI. The statement, while consistent with the group's letter, is somewhat less alarmist and does not include the kind of specific policy measures sought by the open-letter group.


In its statement, OpenAI acknowledges that while artificial general intelligence (AGI) has the potential to "give everyone incredible new capabilities," it would also come with "serious risk of misuse, drastic accidents, and societal disruption." While the company does "not believe it is possible or desirable for society to stop its development forever," AI developers "have to figure out how to get it right."


OpenAI says the best way to do this is:

  • Ensure a gradual transition to a world with AGI, rather than a sudden one. Over time, the balance between the upsides and downsides of AI deployment "could shift" (i.e., upsides overtaking downsides), allowing more significant deployment of AI capability.

  • Have "society" agree on the bounds of how AI is used but permit individual users to have "a lot of discretion." World institutions need to be strengthened with "additional capabilities and experience to be prepared for complex decisions about AGI" and company products will "likely be quite constrained" during that time. At the same time, individuals should be empowered to make their own decisions about AI use.

  • Have a "global conversation" about three key questions: how to govern AI systems, how to fairly distribute AI-generated benefits, and how to fairly share access.

For the long term, OpenAI says that "there should be great scrutiny of all efforts attempting to build AGI and public consultation for major decisions."


_______________




