Federal legislation aimed at regulating the artificial intelligence industry in the United States moved a step closer to today's technological reality on Wednesday, May 15, when Senate Majority Leader Chuck Schumer, along with a bipartisan trio of senators, announced an extensive action plan intended to guide congressional committees' approaches to AI in forthcoming bills.
The 31-page roadmap calls for the US government to allocate billions of dollars to accelerate research and development in artificial intelligence. The strategy reflects the approach of Schumer, a Democrat from New York, and his colleagues, whose core premise is that American innovation must take priority in a highly competitive global environment.
The roadmap also instructs several Senate committees to develop guardrails for artificial intelligence, focusing on the highest-risk uses of the technology. It specifically mentions potential problems such as AI-driven discrimination, the use of AI to interfere in elections, and job displacement.
Schumer said on Wednesday that harnessing the potential of artificial intelligence requires a comprehensive approach. He described this thesis as the guiding principle for the bipartisan working group dedicated to AI issues.
Some provisions of the roadmap align with long-standing congressional goals. One example is an initiative to adopt a national data privacy law that would give consumers greater control over their personal information. Such a law would also provide a legal framework governing how companies in the artificial intelligence industry, or firms applying the technology, use that data.
Some of the roadmap's provisions also echo the European Union's AI legislation. One example of this conceptual overlap is the proposed ban on using artificial intelligence for social scoring. A similar rule appears in China's approach to regulating machine intelligence, suggesting a kind of undeclared international consensus on AI oversight. At the same time, the strategy reflects a broadly accepted, historically grounded approach to national security, an issue that has become urgent in the digital environment.
The roadmap also calls on congressional committees to develop a coherent policy on when and how to impose export controls on powerful artificial intelligence systems, and on how AI models should be classified for national security purposes.
The senators behind the initiative also urge Washington to allocate at least $32 billion annually, or at least 1% of US gross domestic product, for research and development in AI. The same proposal appeared in the National Security Commission on Artificial Intelligence's 2021 report.
The roadmap was developed over several months, during which the senators held meetings and listening sessions with leading technology companies, civil rights leaders, labor unions, and intellectual property holders.
The roadmap is expected to catalyze legislative work that began last year, when Schumer personally led the effort together with New Mexico Democratic Sen. Martin Heinrich and Republican Sens. Mike Rounds of South Dakota and Todd Young of Indiana.
Young said Wednesday that the roadmap is the most comprehensive and impactful bipartisan recommendation on artificial intelligence ever issued by the legislative branch.
On AI regulation, leaders of the upper house of Congress are now trying to move from the learning stage to the action stage. As part of that effort, Senate committees have been tasked with crafting legislation that can be passed piecemeal. With the US presidential election scheduled for November 5 of this year, Schumer has said he may prioritize a law protecting the electoral process from AI-enabled interference. Regulating AI, he said, is a difficult task for Congress, unlike any other.
He also promised that the time frame for results would be short, measured in months, not years. Some policy analysts and congressional aides, however, consider it unlikely that a law defining a framework for regulating artificial intelligence can pass in a presidential election year.
The European Union has made significant progress in building a legislative framework for monitoring and controlling artificial intelligence. In March of this year, the EU adopted the AI Act, which bans certain AI applications outright and imposes significant restrictions on other systems in the same category that carry significant risks.
Several technology sector representatives welcomed the Senate's action plan on AI regulation. Dana Rao, general counsel and chief trust officer at Adobe, called the strategy an encouraging start toward protecting the screen and recording industry from unauthorized replicas, adding that it will be important for governments to safeguard the entire creative ecosystem.
Rao urged lawmakers to pass legislation establishing a national right to protection from impersonation. Such a law would address the problem of artists encountering their own AI-generated clones.
Gary Shapiro, CEO of the Consumer Technology Association, said that technology is borderless, and that the United States, as a global leader in innovation, needs a clear national AI policy with guardrails so that US companies can develop safely.
Among consumer advocates, however, the Senate initiative received a more mixed and generally critical reception. The consensus criticism is that the roadmap offers only vague recommendations for mitigating the risks associated with the use of artificial intelligence.
Evan Greer, director of the advocacy group Fight For the Future, said the framework proposes directing US taxpayer funds toward AI research and development for military, defense, and private-sector profiteering. Greer also noted that the roadmap offers almost no meaningful solutions on potentially problematic consequences of AI such as its impact on policing, immigration, and workers' rights, and said the document reads as if OpenAI CEO Sam Altman and Big Tech lobbyists had helped write it.
Beyond national security and the broader public protection from harmful AI scenarios, the large-scale adoption of artificial intelligence also heightens the personal vulnerability of users in cyberspace. Fraudsters have access to AI too, which makes their schemes more sophisticated and harder to detect. Countering this threat makes users' digital literacy especially important. For example, an Internet search query such as "How do I know if my phone camera is hacked" will help anyone learn the signs of unauthorized access to a device.
Evidence that artificial intelligence is actively used as a cybercrime tool comes from the Federal Trade Commission (FTC), which over the past year recorded a significant increase in user complaints about AI-generated advertising materials used for fraud. Notably, most such complaints are likely sent directly to the platforms where the malicious content appeared rather than to the regulator, so FTC data probably only partially reflects the scale of AI-generated criminal content in the digital environment.
At the same time, tools to counter malicious AI use cases are already being developed. In October, the startup Reality Defender raised $15 million in investments to build tools for detecting deepfakes, which have become one of the most common forms of manipulative AI use. Cybercriminals who use AI to simulate images and voices can convince victims to take actions that later cause significant damage. Reality Defender co-founder and CEO Ben Colman says new methods of generating information emerge continuously, warning that the next type of deepfake may catch users by surprise.
Serhii Mikhailov
Serhii's track record of study and work spans six years at the Faculty of Philology and eight years in the media, during which he has developed a deep understanding of various aspects of the industry and honed his writing skills. His areas of expertise include fintech, payments, cryptocurrency, and financial services, and he keeps a close eye on the latest developments and innovations in these fields, believing they will have a significant impact on the future direction of the economy as a whole.