
“Oversight of AI: Rules for Artificial Intelligence” and “Artificial Intelligence in Government” – Gibson Dunn


June 6, 2023


Gibson Dunn’s Public Policy Practice Group is closely monitoring the debate in Congress over potential oversight of artificial intelligence (AI). We offer this alert summarizing and analyzing the U.S. Senate hearings on May 16, 2023, to help our clients prepare for potential legislation regulating the use of AI. For further discussion of the major federal legislative efforts and White House initiatives regarding AI, see our May 19, 2023 alert Federal Policymakers’ Recent Actions Seek to Regulate AI.

* * *

On May 16, 2023, both the Senate Judiciary Committee’s Subcommittee on Privacy, Technology, and the Law and the Senate Homeland Security and Governmental Affairs Committee held hearings to discuss issues involving AI. The hearings highlighted the potential benefits of AI while acknowledging the need for transparency and accountability to address ethical concerns, protect constitutional rights, and prevent the spread of disinformation. Senators and witnesses acknowledged that AI presents a profound opportunity for American innovation, but warned that it must be adopted with caution and regulated by the federal government given the potential risks. There was general consensus among the senators and witnesses that AI should be regulated, but the approaches to, and extent of, that regulation varied.

Senate Judiciary Committee Subcommittee on Privacy, Technology, and the Law Hearing: “Oversight of AI: Rules for Artificial Intelligence”

On May 16, 2023, the U.S. Senate Judiciary Committee’s Subcommittee on Privacy, Technology, and the Law held a hearing titled “Oversight of AI: Rules for Artificial Intelligence.”[1] Chair Richard Blumenthal (D-CT) emphasized that his subcommittee was holding the first in a series of hearings aimed at considering whether, and to what extent, Congress should regulate rapidly advancing AI technology, including generative algorithms and large language models (LLMs).

The hearing focused on potential new regulations, such as creating a dedicated agency or commission and a licensing scheme; the extent to which existing legal frameworks apply to AI; and the alleged harms prompting regulation, such as intellectual property and privacy rights infringements, job displacement, bias, and election interference.

Witnesses included:

  • Sam Altman, Chief Executive Officer, OpenAI;
  • Gary Marcus, Professor Emeritus, New York University; and
  • Christina Montgomery, Vice President and Chief Privacy and Trust Officer, IBM.

I. AI Oversight Hearing Points of Particular Interest

We provide a full hearing summary and analysis below. Of particular note, however:

  • Chair Blumenthal opened the hearing by playing a statement written and voiced by AI that mimicked the senator’s own writing and voice. The Chair used this demonstration of the technology’s current capabilities to emphasize its alleged existing harms and risks. The potential harms, the Chair said, include “weaponized disinformation, housing discrimination, harassment of women and impersonation fraud, voice cloning,” as well as potential workforce displacement. Chair Blumenthal took the position that existing law suggests AI companies should (1) be transparent by disclosing known risks and allowing independent researcher access and (2) compete on the basis of safety and trustworthiness. Moreover, Chair Blumenthal suggested use limitations “where the risk of AI is so extreme that we ought to impose restrictions, or even ban their use especially when it comes to commercial invasions of privacy for profit and decisions that affect people’s livelihoods.” Finally, the Chair raised the issue of accountability or liability for harm.
  • Ranking Member Senator Josh Hawley (R-MO) emphasized AI’s potential impact, questioning whether it will be an innovation more like the Internet or the atom bomb. Senator Hawley thought the question facing society, and Congress specifically, is whether Congress will “strike that balance between technological innovation and our ethical and moral responsibility to humanity, to liberty, to the freedom of this country.”
  • Witnesses and several senators suggested creating a dedicated federal agency or commission and a licensing scheme. A “scorecard” or “nutrition label” discussed by Chair Blumenthal and Professor Marcus could indicate to consumers the particular AI system’s safety and “ingredients” (i.e., data, algorithms). Ms. Montgomery advocated for “precision regulation” that would focus on particularly risky uses of AI, rather than AI generally. Mr. Altman advocated for pre-public deployment testing and threshold requirements.
  • Also relevant to the development of regulations and standards was the role of the international community, from the draft E.U. AI Act to the involvement of international bodies in developing ethical norms. Senator Dick Durbin (D-IL), for example, asked how an international authority could fairly regulate all entities involved.
  • Senators questioned the witnesses on the risks and benefits associated with specific AI applications, including risks to intellectual property and privacy rights. Senators were also greatly concerned about election interference, bias, competition and market dynamics, and job displacement.

II. Key Substantive Issues

Key substantive issues raised in the hearing included: (a) a potential AI federal agency and licensing scheme, (b) the applicability of existing frameworks for responsibility and liability, and (c) alleged harms and rights infringements.

a. AI Federal Agency and Licensing Scheme

The hearing focused on whether and to what extent the U.S. should regulate AI. As emphasized throughout the hearing, the impetus for regulation is the speed with which the technology is developing and dispersing into society, coupled with senatorial regret over past failures to regulate emerging technology. Chair Blumenthal explained that “Congress has a choice now. We had the same choice when we faced social media. We failed to seize that moment. The result is predators on the Internet, toxic content, exploiting children, creating dangers for them.”

Senators discussed a potential dedicated federal agency or commission for regulating AI technology. Senator Peter Welch (D-VT) said he has “come to the conclusion that we absolutely have to have an agency.” Senator Lindsey Graham (R-SC) stated that Congress “need[s] to empower an agency that issues a license and can take it away.” Senator Cory Booker (D-NJ) likened the need for an AI-centered agency to the need for an automobile-centered agency that resulted in the creation of the National Highway Traffic Safety Administration and the Federal Motor Carrier Safety Administration. Mr. Altman similarly “would form a new agency that licenses any effort above a certain scale of capabilities, and can take that license away and ensure compliance with safety standards.” Senator Chris Coons (D-DE) was concerned with how to decide whether a particular AI model was safe enough to deploy to the public. Mr. Altman suggested “iterative deployment” to find the limitations and benefits of the technology, including giving the public time to “come to grips with this technology to understand it . . . .”

In Ms. Montgomery’s view, a precision approach to regulating AI strikes the right balance between encouraging and permitting innovation while addressing the potential risks of the technology. Mr. Altman “would create a set of safety standards focused on . . . the dangerous capability evaluations” such as “if a model can self-replicate and . . . self-exfiltrate into the wild.” Potential challenges facing a new federal agency include funding and regulatory capture on the government side, and regulatory burden on the industry side.

Senator John Kennedy (R-LA) asked the witnesses what “two or three reforms, regulations, if any” they would implement.

  • Ms. Montgomery would implement transparency and accountability reforms. She also advocated for a use-based approach to regulation, including defining the highest-risk use cases and requiring impact assessments. Ms. Montgomery would also require disclosures in connection with training data and models.
  • Professor Marcus would require safety testing akin to what exists for the Food and Drug Administration (FDA). Moreover, Professor Marcus advocated for a “nimble monitoring agency” to address not just the pre-deployment review of AI, but also post-deployment monitoring with the ability to call products back. Finally, Professor Marcus stated he would fund AI research and an AI constitution, geared toward ethical and honest development of the field.
  • Mr. Altman “would form a new agency that licenses any effort above a certain scale of capabilities, and can take that license away and ensure compliance with safety standards.” Moreover, he “would create a set of safety standards focused on . . . the dangerous capability evaluations.” Finally, Mr. Altman would require independent audits.

Transparency was a recurring theme that will likely play a role in any future oversight efforts. In his prepared testimony, Professor Marcus noted that “[c]urrent systems are not transparent. They do not adequately protect our privacy, and they continue to perpetuate bias.” He also explained that governmental oversight must actively include independent scientists to assess AI through access to the methods and data used.

b. Applicability of Existing Frameworks for Responsibility and Liability

Senators wanted to learn who is responsible or liable for the alleged harms of AI under existing laws and regulations. For example, Senators Durbin and Graham both raised questions about the application of 47 U.S.C. § 230, originally part of the Communications Decency Act, which creates a liability safe harbor for companies hosting user-created content under certain circumstances. Section 230 was at issue in two United States Supreme Court cases this term, Twitter v. Taamneh and Gonzalez v. Google, both of which were decided two days after the hearing.[2] The Supreme Court declined to hold either Twitter or Google liable for the effects of violent content posted on their platforms. However, Justice Ketanji Brown Jackson filed a concurring opinion in Taamneh that left open the possibility of holding tech companies liable in the future.[3] The Subcommittee on Privacy, Technology, and the Law held a hearing in March, following oral arguments in Taamneh and Gonzalez, suggesting the committee’s interest in regulating technology companies could go beyond existing frameworks.[4] Mr. Altman noted he believes that Section 230 is the wrong structure for AI, but Senator Graham wanted to “find out how [AI] is different than social media . . . .” Given Mr. Altman’s position that Section 230 did not apply to the tool OpenAI has created, Senator Graham wanted to know whether he could sue OpenAI if harmed by it. Mr. Altman said that question was beyond his area of expertise.

c. Alleged Harms and Rights Infringement

The hearing emphasized the potential risks and alleged harms of AI. During the hearing, Senator Welch stated that AI poses risks “that relate to fundamental privacy rights, bias rights, intellectual property, dissent, [and] the spread of disinformation.” For Senator Welch, disinformation is “in many ways . . . the biggest threat because that goes to the core of our capacity for self-governing.” Senator Mazie Hirono (D-HI) noted that measures can be built into the technology to minimize harmful results. Specifically, Senator Hirono asked about the ability to refuse harmful requests and how to define harmful requests, representing potential issues that legislators will have to grapple with while trying to regulate AI.

Senators focused on five key areas during the hearing: (i) elections, (ii) intellectual property, (iii) privacy, (iv) job markets, and (v) competition.

i. Elections

A number of senators shared the concern that AI can potentially be used to influence or impact elections. The alleged influence and impact, they noted, can be explicit or unseen. For explicit or direct election influence, Senator Amy Klobuchar (D-MN) asked what should be done about the possibility of AI tools directing voters to incorrect polling locations. Mr. Altman suggested that voters would understand that AI is just a tool that requires external verification.

During the hearing, Professor Marcus noted that AI can exert unseen influence over individual behavior based on data choices and algorithmic methods, but that these data choices and algorithmic methods are neither transparent to the public nor accessible to independent researchers under current systems. Senator Hawley questioned Mr. Altman about AI’s ability to accurately predict public opinion surveys. Specifically, Senator Hawley suggested that companies may be able to “fine tune strategies to elicit certain responses, certain behavioral responses” and that there could be an effort to influence undecided voters.

Ms. Montgomery stated that elections are an area that requires transparent AI. Specifically, she advocated for “[a]ny algorithm used in [the election] context” to be “required to have disclosure around the data being used, the performance of the model, anything along those lines is really important.” This will likely be a key area of oversight moving into the 2024 elections.

ii. Intellectual Property

Several senators voiced concerns that training AI systems could infringe intellectual property rights. Senator Marsha Blackburn (R-TN), for example, asked whether artists whose creations are used to train algorithms are or will be compensated for the use of their work. Mr. Altman stated that OpenAI is “working with artists now, visual artists, musicians, to figure out what people want,” but that “[t]here’s a lot of different opinions, unfortunately,” suggesting some cooperative industry efforts have been met with difficulty. Senator Klobuchar asked about the impact AI could have on local news organizations, raising concerns that certain AI tools use local news content without compensation, which could exacerbate existing challenges local news organizations face. Chair Blumenthal noted that one of the hearings in this AI series will focus on intellectual property.

iii. Privacy

Several senators raised the potential privacy risks that could result from the deployment of AI. Senator Blackburn asked what Mr. Altman’s policy is for ensuring OpenAI is “protecting that individual’s right to privacy and their right to secure that data . . . .” Chair Blumenthal also asked what specific steps OpenAI is taking to protect privacy. Mr. Altman explained that users can opt out of OpenAI using their data for training purposes and can delete their conversation histories. At IBM, Ms. Montgomery explained, the company “even filter[s] [its] large language models for content that includes personal information that may have been pulled from public datasets as well.” Senator Jon Ossoff (D-GA) addressed child privacy, advising Mr. Altman to “get way ahead of this issue, the safety for children of your product, or I think you’re going to find that Senator Blumenthal, Senator Hawley, others on the Subcommittee and I will look very harshly on the deployment of technology that harms children.”

iv. Job Market

Chair Blumenthal raised AI’s potential impact on the job market and economy. Mr. Altman admitted that “like with all technological revolutions, I expect there to be significant impact on jobs.” Ms. Montgomery noted the potential for new job opportunities and the importance of training the workforce for the technological jobs of the future.

v. Competition

Senator Booker expressed concern over “how few companies now control and affect the lives of so many of us. And these companies are getting bigger and more powerful.” Mr. Altman added that an effort is needed to align AI systems with societal values. Chair Blumenthal noted that the hearing had barely touched on the competition concerns related to AI, specifically the “monopolization danger, the dominance of markets that excludes new competition, and thereby inhibits or prevents innovation and invention.” The Chair suggested that a further discussion on antitrust issues might be needed.


Senate Homeland Security and Governmental Affairs Committee Hearing: “Artificial Intelligence in Government”

On the same day, the U.S. Senate Homeland Security and Governmental Affairs Committee (HSGAC) held a hearing to explore the opportunities and challenges associated with the federal government’s use of AI.[5] The hearing was the second in a series of hearings that committee Chair Gary Peters (D-MI) plans to convene to address how lawmakers can support the development of AI. The first hearing, held on March 8, 2023, focused on the transformative potential of AI, as well as the potential risks.[6]

Witnesses included:

  • Richard A. Eppink, Of Counsel, American Civil Liberties Union of Idaho Foundation;
  • Taka Ariga, Chief Data Scientist, U.S. Government Accountability Office;
  • Lynne E. Parker, Ph.D., Associate Vice Chancellor and Director, AI Tennessee Initiative, University of Tennessee (and former deputy U.S. chief technology officer and director of the White House’s National AI Initiative Office);
  • Daniel Ho, Professor, Stanford Law School; and
  • Jacob Siegel, Writer.

We provide a full hearing summary and analysis below. Of particular note, however:

  • Chair Peters expressed his commitment to bipartisan legislation to regulate the federal government’s use of AI. He highlighted the pending legislation authored by him and Senator Mike Braun (R-IN), S. 1564, the AI Leadership Training Act, which would create an AI training program for federal supervisors and management officials.[7] After the hearing, the AI Leadership Training Act was successfully reported out of HSGAC. The bill is now awaiting consideration by the full Senate.
  • Ranking Member Rand Paul (R-KY) focused on the potential for the federal government to use AI to police information, raising particular concerns about the potential use of AI by federal agencies for surveillance and censorship. He stated that it is “a mistake to concentrate on the technology and not the concentration of power,” explaining that he supports the use of AI, so long as fundamental rights are protected.
  • Committee members and witnesses expressed concerns about the potential risks associated with the government’s use of AI, such as algorithmic bias, privacy infringement, suppression of speech, and the impact on jobs. Many of these concerns were echoed by senators in the Senate Judiciary Committee Subcommittee on Privacy, Technology, and the Law hearing discussed above.
  • There was a general consensus that Congress needs to develop guidelines, standards, and regulatory frameworks to govern AI adoption across federal agencies. Speaking to reporters after the hearing, Chair Peters acknowledged the growing calls for Congress to create “clear lines of accountability and oversight,” but stated that they must “be thoughtful, deliberative and take [their] time.”

I. Potential Harms

Several senators and witnesses expressed concerns about the potential harms posed by government use of AI, including suppression of speech, bias and discrimination, data privacy and security breaches, and job displacement.

a. Suppression of Speech

In his opening statement and throughout the hearing, Ranking Member Paul expressed concern about the federal government using AI to monitor, surveil, and censor speech under the guise of combating misinformation. He warned that AI will make it easier for the government to invisibly “control the narrative, eliminate dissent, and retain power.” Senator Rick Scott (R-FL) echoed those concerns, and Mr. Siegel stated that the risk of the government using AI to suppress speech cannot be overstated. He cautioned against emulating “the Chinese model of top-down, party-driven social control” when regulating AI, which would “mean the end of our tradition of self-government and the American way of life.”

b. Bias and Discrimination

Senators and witnesses also expressed concerns about the potential for biases in AI applications to cause violations of due process and equal protection rights. For example, there was a discussion of apparent flaws identified in an AI algorithm used by the IRS, which resulted in Black taxpayers being audited at five times the rate of taxpayers of other races, and of the use of AI-driven systems at the state level to determine eligibility for disability benefits, which resulted in thousands of recipients being wrongfully denied critical assistance. Richard Eppink testified about his involvement in a class action lawsuit brought by the ACLU representing individuals with developmental and intellectual disabilities who were denied funds by Idaho’s Medicaid program because of a flaw in the state’s AI-based system. Mr. Eppink explained that the people who were denied disability benefits were unable to challenge the decisions because they did not have access to the proprietary system used to determine their eligibility. He advocated for increased transparency into any AI systems used by the government, but cautioned that even if an AI-based system functions properly, the underlying data may be corrupted “by years and years of discrimination and other effects that have bias[ed] the data in the first place.” Senators expressed particular concerns about law enforcement’s use of predictive modeling to justify forms of surveillance.

c. Data Privacy and Cybersecurity

Hearing testimony highlighted concerns about the collection, use, and protection of data by AI applications, and the gaps in existing privacy laws. Senator Ossoff stated that AI tools themselves are vulnerable to data breaches and could be used to penetrate government systems. Daniel Ho highlighted the scale of the problem, noting that by one estimate the federal government needs to hire about 40,000 IT workers to address cybersecurity issues posed by AI. Given the enormous amounts of data that can be collected using AI and the “patchwork” system of privacy legislation currently in place, Mr. Ho said a data strategy like the National Secure Data Service Act is needed. Senators signaled bipartisan support for national privacy legislation.

d. Job Displacement

Senators in the HSGAC hearing echoed the concerns expressed in the Senate Judiciary Committee Subcommittee hearing regarding the potential for AI-driven automation to cause job displacement. Senator Maggie Hassan (D-NH) asked Daniel Ho about the potential for AI to be used to automate government jobs. Mr. Ho responded that “augmenting the existing federal workforce [with AI] rather than displacing them” is the right approach, because ultimately there needs to be a human in charge of these systems. Senator Alex Padilla (D-CA) agreed and provided anecdotal evidence from his experience as California Secretary of State, during which his office introduced the first chatbot in California state government. He opined that rather than leading to layoffs and staff reductions, the chatbot freed up government resources to focus on more important issues.

II. Recommendations

The witnesses offered a number of recommended measures to mitigate the risks posed by the federal government’s use of AI and ensure that it is used in a responsible and ethical manner.

Those recommendations are discussed below.

a. Developing Policies and Guidelines

As directed by the AI in Government Act of 2020 and Executive Order 13960, the Office of Management and Budget (“OMB”) plans to draft policy guidance on the use of AI systems by the U.S. government.[8] Multiple senators and witnesses noted the importance of this guidance and called on OMB to ensure that it appropriately addresses the wide diversity of AI use cases across the federal government. Lynne Parker proposed requiring all federal agencies to use the National Institute of Standards and Technology (NIST) AI Risk Management Framework (RMF) during the design, development, procurement, use, and management of their AI systems. Witnesses also suggested looking to the White House Office of Science and Technology Policy’s Blueprint for an AI Bill of Rights for guiding principles.


b. Creating Oversight

Senators and witnesses proposed several measures to establish oversight of the federal government’s use of AI. Multiple witnesses advocated for AI use case inventories to increase transparency and for the elimination of the government’s use of “black box systems.” Richard Eppink argued that if a government agency or state-funded agency uses AI technology, there must be transparency about the proprietary system so Americans can evaluate whether they need to challenge the government decisions generated by the system. Lynne Parker stated that the U.S. is “suffering right now from a lack of leadership and prioritization on these AI topics” and proposed that one immediate solution would be to appoint chief AI officers at each federal agency to oversee use and implementation. She also recommended establishing an interagency Chief AI Officers Council that would be responsible for coordinating AI adoption across the federal government.

c. Investing in Training, Research, and Development

Speakers at the hearing highlighted the need to invest in training federal employees and in researching and developing AI systems. As noted above, the AI Leadership Training Act, which would create an AI training program for federal supervisors and management officials, was favorably reported out of committee after the hearing. Multiple witnesses stated that Congress must act immediately to help agencies hire and retain technical talent to address the current gap in leadership and expertise within the federal government. Ms. Parker testified that the government must invest in digital infrastructure, including the National AI Research Resource (NAIRR), to ensure secure access to administrative data. The NAIRR is envisioned as a shared computing and data infrastructure that would provide AI researchers and students across scientific fields and disciplines with access to computing resources and high-quality data, along with appropriate educational tools and user support. While there was some support for public-private partnerships to develop and deploy AI, Senator Padilla and Mr. Eppink advocated for agencies building AI tools in-house to prevent proprietary interests from influencing government systems. Chair Peters stated that a future HSGAC hearing will focus on how the government can work with the private sector and academia to harness various ideas and approaches.

d. Fostering International Cooperation and Innovation

Lastly, Senators Hassan and Jacky Rosen (D-NV) both emphasized the need to foster international cooperation in developing AI standards. Senator Rosen proposed a multilateral AI research institute to enable like-minded countries to collaborate on standard setting. She stated, “China has an explicit plan to become a standards issuing country, and as part of its push to increase global influence it coordinates national standards work across government and industry. So in order for the U.S. to remain a leader in AI and maintain a national security edge, our response must be one of leadership, coordination, and, above all, cooperation.” Despite expressing grave concerns about the danger to democracy posed by AI, Mr. Siegel noted that the U.S. cannot abandon AI innovation and risk ceding the space to competitors like China.

III. How Gibson Dunn Can Assist

Gibson Dunn’s Public Policy, Artificial Intelligence, and Privacy, Cybersecurity and Data Innovation Practice Groups are closely monitoring legislative and regulatory actions in this space and are available to assist clients through strategic counseling; real-time intelligence gathering; developing and advancing policy positions; drafting legislative text; shaping messaging; and lobbying Congress.

_________________________

[1] Oversight of A.I.: Rules for Artificial Intelligence: Hearing Before the Subcomm. on Privacy, Tech., and the Law of the S. Comm. on the Judiciary, 118th Cong. (2023), https://www.judiciary.senate.gov/committee-activity/hearings/oversight-of-ai-rules-for-artificial-intelligence.

[2] Twitter, Inc. v. Taamneh, 143 S. Ct. 1206 (2023); Gonzalez v. Google LLC, 143 S. Ct. 1191 (2023).

[3] See Twitter, Inc. v. Taamneh, 143 S. Ct. 1206, 1231 (2023) (Jackson, J., concurring) (noting that “[o]ther cases presenting different allegations and different records may lead to different conclusions.”).

[4] Press Release, Senator Richard Blumenthal, Blumenthal & Hawley to Hold Hearing on the Future of Tech’s Legal Immunities Following Argument in Gonzalez v. Google (Mar. 1, 2023).

[5] Artificial Intelligence in Government: Hearing Before the Senate Committee on Homeland Security and Governmental Affairs, 118th Cong. (2023), https://www.hsgac.senate.gov/hearings/artificial-intelligence-in-government/.

[6] Artificial Intelligence: Risks and Opportunities: Hearing Before the Homeland Security and Governmental Affairs Committee, 118th Cong. (2023), https://www.hsgac.senate.gov/hearings/artificial-intelligence-risks-and-opportunities/.

[7] S. 1564 – the AI Leadership Training Act, https://www.congress.gov/bill/118th-congress/senate-bill/1564.

[8] See AI in Government Act of 2020, H.R. 2575, 116th Cong. (Sept. 15, 2020); Exec. Order No. 13,960, 85 Fed. Reg. 78939 (Dec. 3, 2020).


The following Gibson Dunn lawyers prepared this client alert: Michael Bopp, Roscoe Jones Jr., Alexander Southwell, Amanda Neely, Daniel Smith, Frances Waldmann, Kirsten Bleiweiss*, and Madelyn Mae La France.

Gibson, Dunn & Crutcher’s lawyers are available to assist in addressing any questions you may have regarding these issues. Please contact the Gibson Dunn lawyer with whom you usually work, the authors, or any of the following in the firm’s Public Policy, Artificial Intelligence, or Privacy, Cybersecurity & Data Innovation practice groups:

Public Policy Group:
Michael D. Bopp – Co-Chair, Washington, D.C. (+1 202-955-8256, mbopp@gibsondunn.com)
Roscoe Jones, Jr. – Co-Chair, Washington, D.C. (+1 202-887-3530, rjones@gibsondunn.com)
Amanda H. Neely – Washington, D.C. (+1 202-777-9566, aneely@gibsondunn.com)
Daniel P. Smith – Washington, D.C. (+1 202-777-9549, dpsmith@gibsondunn.com)

Artificial Intelligence Group:
Cassandra L. Gaedt-Sheckter – Co-Chair, Palo Alto (+1 650-849-5203, cgaedt-sheckter@gibsondunn.com)
Vivek Mohan – Co-Chair, Palo Alto (+1 650-849-5345, vmohan@gibsondunn.com)
Eric D. Vandevelde – Co-Chair, Los Angeles (+1 213-229-7186, evandevelde@gibsondunn.com)
Frances A. Waldmann – Los Angeles (+1 213-229-7914, fwaldmann@gibsondunn.com)

Privacy, Cybersecurity and Data Innovation Group:
S. Ashlie Beringer – Co-Chair, Palo Alto (+1 650-849-5327, aberinger@gibsondunn.com)
Jane C. Horvath – Co-Chair, Washington, D.C. (+1 202-955-8505, jhorvath@gibsondunn.com)
Alexander H. Southwell – Co-Chair, New York (+1 212-351-3981, asouthwell@gibsondunn.com)

*Kirsten Bleiweiss is an associate working in the firm’s Washington, D.C. office who currently is admitted to practice only in Maryland.

© 2023 Gibson, Dunn & Crutcher LLP

Attorney Advertising:  The enclosed materials have been prepared for general informational purposes only and are not intended as legal advice. Please note, prior results do not guarantee a similar outcome.



