Pallone Opening Remarks at Hearing with Biden Administration Officials on Artificial Intelligence Leadership and Innovation
"As I have repeatedly stated in our previous AI hearings in our subcommittees, I strongly believe that the bedrock of any AI regulation must be privacy legislation that includes data minimization and algorithmic accountability principles."
Energy and Commerce Committee Ranking Member Frank Pallone, Jr. (D-NJ) delivered the following opening remarks today at a Full Committee hearing on “Leveraging Agency Expertise to Foster American AI Leadership and Innovation”:
Today’s hearing is an important opportunity to hear what steps the executive branch is taking to harness, advance, and ensure the safe use of artificial intelligence, or “AI.”
While AI is not new, the speed at which we are witnessing the deployment of generative AI is staggering. The effects it will have on our everyday lives are tremendous. Indeed, this technology has led to an explosion of AI systems and tools that answer consumers’ questions, draft documents, influence the way patients are diagnosed and what health insurance will cover, and make employment and housing decisions. Many of these systems are trained on massive amounts of data that Big Tech has collected on all of us. And that’s why the lack of nationwide protections around what data companies can collect, use, and sell to train these AI systems should concern every American.
Given the opportunities and challenges AI offers, I am pleased that President Biden issued an Executive Order on the safe, secure, and trustworthy development and use of AI. The Executive Order recognizes both the promise and peril of AI and adopts a coordinated, federal-government-wide approach for the development and use of AI in a responsible manner.
Specifically, the order requires the Secretary of Health and Human Services to establish a safety program that receives reports and acts to resolve harm from AI’s use in health care practices. It tasks the Secretary of Energy with addressing the threats that AI systems pose to our nation’s critical infrastructure, as well as any chemical, biological, radiological, nuclear, and cybersecurity risks. The Secretary of Commerce must develop guidance for content authentication and watermarking so that AI-generated content is easily identified. Commerce will also lead an effort to establish international frameworks for harnessing benefits and managing risks. The Assistant Secretary of Commerce for Communications and Information will also assess the benefits, risks, and accountability frameworks for open-source foundation models.
These are all important actions that the Biden Administration is taking, but we cannot lose sight of the fact that sufficient guardrails do not currently exist for Americans’ data and AI systems. As a result, we are unfortunately seeing a growing number of reports of harmful impacts from the use of AI systems. There have been instances where AI has been used to mimic a friend or relative to scam consumers out of their hard-earned money. Deepfakes have been used to further misinformation or disinformation campaigns. There are reports of chatbots leaking medical records and personal information. AI systems have discriminated against female candidates for jobs and people of color in the housing market. And there is an acknowledged concern that increased adoption of AI technologies in our critical infrastructure, like the electric grid, can add new vulnerabilities and cyber risks. This is all extremely concerning.
We cannot continue to allow companies to develop and deploy systems that misuse and leak personal data and exacerbate discrimination. That is why we must make sure developers are running every test they can to mitigate risks before their AI models are deployed.
Last year, Republicans and Democrats were able to work across the aisle and pass the American Data Privacy and Protection Act out of Committee by a vote of 53 to 2. That legislation included provisions focused on data minimization and algorithmic accountability, with heightened privacy protections for children.
Clearly defined privacy and data security rules are critical to protect consumers from existing harmful data collection practices, and to safeguard them from the growing privacy and cyber threats that AI models pose. As I have repeatedly stated in our previous AI hearings in our subcommittees, I strongly believe that the bedrock of any AI regulation must be privacy legislation that includes data minimization and algorithmic accountability principles. Simply continuing to provide consumers with only “notice and consent” rights is wholly insufficient in today’s modern digital age.
I will continue to push for a comprehensive, national federal privacy standard. It is the only way we can limit the aggressive and abusive data collection practices of Big Tech and data brokers, ensure our kids' sensitive information is protected online, protect against algorithmic bias, and put consumers back in control of their data.
I look forward to hearing from our witnesses and working with our partners in the federal government to collaborate, innovate, and lead in developing policies that harness the transformative power of AI while safeguarding the rights and well-being of Americans.
Thank you, and I yield back.
###