Developments
___________________________
Bias-related AI Oversight by Federal Agencies
Officials from four Federal agencies pledged (April 25) in a joint statement to use their authorities to protect the public against unlawful bias, discrimination, and other harmful outcomes.
Specifically, the statement provides that the agencies “monitor the development and use of automated systems and promote responsible innovation [and] pledge to vigorously use our collective authorities to protect individuals’ rights regardless of whether legal violations occur through traditional means or advanced technologies.”
Signatory agencies were the Federal Trade Commission, the Consumer Financial Protection Bureau, the Department of Justice, and the Equal Employment Opportunity Commission.
(posted: 4-28-23)
_________________________
AI Complaint Filed with the FTC
The Center for AI and Digital Policy filed an AI-related complaint against OpenAI (April 4) with the Federal Trade Commission, saying the company's GPT-4 product is "biased, deceptive, and a risk to privacy and public safety."
The Center argues that the company released the product despite its own acknowledgement of the dangers of AI technology and without assessments demonstrating GPT-4's safety, and also that the company may violate FTC guidelines on product soundness, explainability, transparency, accountability, and fairness, as well as the agency's stated AI-specific policy views. It is not known how long it might take the FTC to consider, and possibly issue a ruling on, the complaint.
(posted: 4-4-23)
_________________________
Influential Letter on AI Way Forward
A large group of Artificial Intelligence (AI) leaders, thinkers, users, and strategists published an open letter at the end of March calling for an immediate pause of at least 6 months in the training of AI systems more powerful than the latest version of ChatGPT (i.e., GPT-4). Notable signatories included Elon Musk and co-founders of Apple, Pinterest, Ripple, and Skype.
During this pause, the letter asks for actions on the part of AI labs and experts, as well as policymakers. AI labs and experts are asked to jointly develop and implement a set of "shared safety protocols" audited and overseen by "independent outside experts." Envisioned protocols should "ensure that systems adhering to them are safe beyond a reasonable doubt." The pause represents a "stepping back from the dangerous race to ever-larger unpredictable black-box models with emergent capabilities," not the stopping of all AI development.
The letter asks AI developers to work with policymakers to "dramatically accelerate development of robust AI governance systems," to include at least: "new and capable regulatory AI authorities"; "oversight and tracking" of the most capable AI systems and large pools of computational capability; systems to help distinguish real from synthetic content and to track model leaks; an auditing and certification ecosystem; liability for AI-caused harm; "robust public funding" for AI safety research; and "well-resourced institutions for coping with the dramatic economic and political disruptions (especially to democracy) that AI will cause."
In terms of AI research and development, the letter proclaims that such R&D "should be refocused on making today's powerful, state-of-the-art systems more accurate, safe, interpretable, transparent, robust, aligned, trustworthy, and loyal."
(posted: 4-5-23)
_________________________
Bans on ChatGPT
Italy, at least temporarily, banned ChatGPT (March 31) after the country's independent Data Protection Authority cited both data privacy concerns and the lack of an age-verification process for users. It is not clear how long the ban may remain in place; the country's Deputy Prime Minister labeled the action "excessive" and said he hoped for subsequent action that would allow the capability to be restored. Other western nations, such as Germany, Ireland, and France, are reportedly looking at the Italian ban in considering whether to take similar action.
(posted: 4-2-23)
_________________________
U.K. Regulatory Framework on AI
The United Kingdom published a white paper (March 29) that claims support for a "pro-innovation approach" to AI regulation, which appears to mean allowing technologies to be developed and deployed, and stepping in with regulation only when issues arise.
In that respect, future regulations will focus not on specific technologies but on the context of use. Regulatory approaches will be iterative and agile, with any future regulatory action guided by principles of safety, security, and robustness; appropriate transparency and explainability; fairness; accountability and governance; and contestability and redress.
(posted: 3-29-23)
_________________________
AI-Generated Works are Not Copyright Protected
The US Copyright Office issued a decision (February 21) that works developed using artificial intelligence (AI) are not protected by current US copyright law.
In the decision, the Copyright Office said that it “will not knowingly register works produced by a machine or mere mechanical process that operates randomly or automatically without sufficient creative input or intervention from a human author.” This leaves open, however, situations where a determination cannot be made as to whether or not works were generated using AI.
The case at hand, Zarya of the Dawn, concerned a comic book whose artwork was developed using Midjourney while the author wrote her own text. The Copyright Office concluded that the text could be protected but not the individual artwork, despite the author's claims that she modified the AI-generated art.
The Copyright Office says that in cases where non-human authorship is claimed, appellate courts have found that copyright does not protect such creations, and that courts interpreting the phrase "works of authorship" have uniformly limited copyright to the creations of human authors.
(posted: 2-27-23)
_________________________
AI in the Military Domain Agreement
Sixty countries participating in the Responsible AI in the Military Domain (REAIM) Summit (February 2023) endorsed a call to action concerning military uses of AI, specifically supporting "the need to put the responsible use of AI higher on the political agenda and to further promote initiatives that make a contribution in this respect."
There was also agreement on establishing a "Global Commission on AI" in order "to raise all-round awareness, clarify how to define AI in the military domain and determine how this technology can be developed, manufactured and deployed responsibly," and that "the Commission will also set out the conditions for the effective governance of AI."
Separately, in October 2022, six robotics companies pledged in an open letter not to weaponize their products.
Specifically, the companies pledged not to “weaponize our advanced-mobility general-purpose robots or the software we develop that enables advanced robotics and we will not support others to do so.” They also pledged “to explore the development of technological features that could mitigate or reduce these risks.”
The companies say that adding weapons to robots that are remotely or autonomously operated raises "new risks of harm and serious ethical issues" and would "harm public trust in the technology in ways that damage the tremendous benefits they will bring to society."
Finally, the companies “call on every organization, developer, researcher, and user in the robotics community to make similar pledges not to build, authorize, support, or enable the attachment of weaponry to such robots.”
(posted: 2-21-23)
_________________________
US-EU Artificial Intelligence Cooperation
The US and the European Union agreed (January 27) to step up collaboration on beneficial uses of artificial intelligence (AI).
The agreement formalizes collaborations seeking to bring together US and EU experts to conduct research on AI, computing, and related privacy-protecting technologies.
In its announcement, the White House claims that through the agreement, the US and EU are seeking “responsible advancements” in AI to address major global challenges in five key areas of focus: Extreme Weather and Climate Forecasting, Emergency Response Management, Health and Medicine Improvements, Electric Grid Optimization, and Agriculture Optimization.
(posted: 1-30-23)
_________________________
Artificial Intelligence (AI) System “Bill of Rights”
The Biden Administration released (October 4) what it is calling a "Blueprint" for an artificial intelligence (AI) "Bill of Rights." The Blueprint lays out five principles intended to serve as a guidebook for the development of policies, regulations, and laws governing the use of AI at the Federal, state, and local levels. It is not a formal legislative proposal.
The five principles are:
Safe and Effective Systems. Systems should operate based on “their intended use, mitigation of unsafe outcomes including those beyond the intended use, and adherence to domain-specific standards” and they should be designed to proactively protect against harms “stemming from unintended, yet foreseeable, uses or impacts of automated systems.”
Algorithmic Discrimination Protections. Organizations should take proactive and continuous measures to protect against discrimination including “proactive equity assessments as part of the system design, use of representative data and protection against proxies for demographic features, ensuring accessibility for people with disabilities in design and development, pre-deployment and ongoing disparity testing and mitigation, and clear organizational oversight.” (A minimal disparity-testing sketch follows this list.)
Data Privacy. Seek permission for, and respect users' decisions regarding, the collection and use of data in “appropriate ways and to the greatest extent possible,” and where that is not possible use alternative safeguards. Do not “obfuscate user choice or burden users with defaults that are privacy invasive,” and use consent to justify data collection only in cases where it can be appropriately and meaningfully given.
Notice and Explanation. Provide “generally accessible plain language documentation,” with clear descriptions of the overall system functioning, “the role automation plays, notice that such systems are in use, the individual or organization responsible for the system, and explanations of outcomes that are clear, timely, and accessible.”
Human Alternatives, Consideration, and Fallback. Provide the ability to opt out from automated systems in favor of a human alternative, where appropriate. Human consideration and fallback “should be accessible, equitable, effective, maintained, accompanied by appropriate operator training, and should not impose an unreasonable burden on the public.”
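The "pre-deployment and ongoing disparity testing" named in the Algorithmic Discrimination Protections principle can be made concrete with a simple selection-rate comparison. The sketch below is a minimal illustration, not anything specified by the Blueprint: the disparate impact ratio metric, the 0.8 review threshold, and the audit data are all assumptions drawn from common fairness-auditing practice.

# Minimal disparity-testing sketch: compare automated approval rates across
# demographic groups. Metric, threshold, and data are illustrative
# assumptions, not requirements from the Blueprint.
from collections import defaultdict

def selection_rates(decisions):
    """decisions: iterable of (group_label, approved: bool) pairs."""
    totals, approved = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        approved[group] += int(ok)
    return {g: approved[g] / totals[g] for g in totals}

def disparate_impact_ratio(rates):
    """Ratio of the lowest group selection rate to the highest."""
    return min(rates.values()) / max(rates.values())

# Hypothetical audit sample: (demographic group, automated decision).
audit = [("A", True), ("A", True), ("A", False),
         ("B", True), ("B", False), ("B", False)]

rates = selection_rates(audit)
ratio = disparate_impact_ratio(rates)
print(rates, round(ratio, 2))
if ratio < 0.8:  # a commonly assumed rule-of-thumb threshold
    print("Potential disparity: review before deployment.")

Per the principle, a check like this would run both before deployment and continuously afterward, paired with mitigation and clear organizational oversight when disparities appear.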
The Blueprint is part of a more comprehensive National AI Initiative that has also resulted in a risk management concept paper, a National Artificial Intelligence Research and Development Strategic Plan, and a paper (March 2022) on the development of standards to prevent AI bias.
Internationally, the State Department expressed U.S. support of AI principles established within the OECD, as well as the work of the Global Partnership for Artificial Intelligence (GPAI).
(updated: 10-5-22)
_________________________
European Union AI Framework
The European Union issued both a strategy and a “framework” to address EU citizen rights and safety risks tied to artificial intelligence (AI). The framework will lead to requirements for market entry and certification of high-risk AI systems through a mandatory CE-marking procedure (i.e., the European Union's product-marking system).
A new EU enforcement body will be put into place: the European Artificial Intelligence Board (EAIB). The EAIB will manage a layered enforcement system, where low risk products receive lighter regulation and oversight, with higher risk products subject to stringent standards and enforcement. Some applications could potentially be banned. The spectrum of enhanced oversight and regulation would range from mere non-binding self-regulatory impact assessments to heavy, externally-audited compliance requirements throughout the life cycle of an application.
The EU is in the process of developing rules around the framework, though the timeline is not publicly known.
(updated: 3-21-22)
_________________________
Quantum-Resistant Algorithms
The National Institute of Standards and Technology announced (July 5) four winners of a six-year effort to develop encryption algorithms designed to withstand an assault from any future quantum computer, “which could potentially crack the security used to protect privacy in the digital systems we rely on every day — such as online banking and email software.”
The four selected encryption algorithms will become part of NIST’s post-quantum cryptographic standard. NIST is apparently evaluating other algorithms in addition to these first four, and says it intends to finalize the standard within the next couple of years.
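One of the four selections, the CRYSTALS-Kyber key-encapsulation mechanism (KEM), already ships in open-source form, so the general shape of a quantum-resistant key exchange can be sketched today. The example below is a minimal sketch assuming the Open Quantum Safe project's liboqs-python bindings (import name oqs) and the liboqs algorithm name "Kyber512"; it illustrates the KEM pattern, not NIST's final standardized parameters.

# Minimal post-quantum key-encapsulation sketch, assuming the Open Quantum
# Safe "liboqs-python" bindings (import name: oqs).
import oqs

kem_alg = "Kyber512"  # CRYSTALS-Kyber, one of the four NIST selections

with oqs.KeyEncapsulation(kem_alg) as client:
    # The client generates a keypair and shares the public key.
    public_key = client.generate_keypair()

    with oqs.KeyEncapsulation(kem_alg) as server:
        # The server encapsulates a fresh shared secret for that public key.
        ciphertext, secret_server = server.encap_secret(public_key)

    # The client recovers the same secret from the ciphertext.
    secret_client = client.decap_secret(ciphertext)
    assert secret_client == secret_server  # both sides now share a key

Both parties end up with the same secret, which can then key a conventional symmetric cipher; the quantum resistance comes from the underlying lattice problem rather than the integer factoring and discrete logarithms that quantum computers threaten.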
(updated: 7-7-22)
_________________________
Quantum Directives
The Biden Administration issued two directives (May 4, 2022) in support of advancing quantum technologies: an Executive Order (EO) and a National Security Memorandum.
The EO merely establishes a National Quantum Initiative (NQI) Advisory Committee to ensure that the Federal NQI program is informed “by evidence, data, and perspectives from a diverse group of experts and stakeholders.”
The Memorandum requires a few key actions:
It directs the Federal Government to pursue a coordinated and integrated approach to Quantum Information Science (QIS), including foundational scientific research on quantum-resilient cryptographic standards and technologies.
It requires agencies to inventory systems that are vulnerable to exploitation by quantum technologies (a sketch of such an inventory check follows this list) and requires specific milestones for quantum-resistant cryptographic migration, including the development of migration plans no later than one year after cryptographic standards are established.
It directs Federal agencies to develop comprehensive plans to safeguard intellectual property, R&D, and other sensitive technology from acquisition by adversaries, as well as educate industry and academia on adversarial threats.
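As a small illustration of the inventory directed in the second item above, the sketch below flags X.509 certificates whose public keys rely on RSA or elliptic-curve math, the classes of cryptography a large-scale quantum computer running Shor's algorithm could break. It assumes the widely used Python cryptography package, and the certificate paths are hypothetical.

# Minimal quantum-vulnerability inventory sketch, assuming the Python
# "cryptography" package; certificate paths are hypothetical.
from cryptography import x509
from cryptography.hazmat.primitives.asymmetric import rsa, ec

def quantum_vulnerable(cert_path: str) -> bool:
    """True if the certificate's public key is RSA or elliptic-curve,
    both of which Shor's algorithm would break."""
    with open(cert_path, "rb") as f:
        cert = x509.load_pem_x509_certificate(f.read())
    key = cert.public_key()
    return isinstance(key, (rsa.RSAPublicKey, ec.EllipticCurvePublicKey))

for path in ["server.pem", "vpn-gateway.pem"]:  # hypothetical inventory
    status = "quantum-vulnerable" if quantum_vulnerable(path) else "review manually"
    print(f"{path}: {status}")

A real inventory would sweep far more than certificates (TLS configurations, code signing, stored-data encryption), but the idea is the same: enumerate where quantum-vulnerable algorithms are in use so migration plans can be drawn up once the NIST standards above are final.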
(updated: 5-4-22)
_________________________
FTC Algorithmic Decision-Making Rule
The Federal Trade Commission (FTC) is considering initiating a rulemaking "to curb lax security practices, limit privacy abuses, and ensure that algorithmic decision-making does not result in unlawful discrimination," i.e., so-called algorithmic discrimination or bias. A specific rule has not yet been proposed.
(updated: 3-21-22)
_________________________
Connected Policies
___________________________

Block Nuclear Launch by Autonomous AI Act
This legislation proposes to codify in law existing Department of Defense policy by prohibiting the use of federal funds for any launch of a nuclear weapon by an automated system without meaningful human control.
Status: Identical proposals were introduced in both the House and Senate on April 26, 2023.

Laying Down Harmonized Rules on Artificial Intelligence
This is a proposed legal framework of the European Union that would put in place standards and rules on the use and management of artificial intelligence. Under the framework, a limited number of unacceptable AI use cases, such as social scoring by governments, would be banned outright, and high-risk use cases would be subject to prior conformity assessment and wide-ranging new compliance obligations.
Status: The framework is still going through EU legislative processes and therefore is not final. The current framework was proposed in April 2021.