Pallone Opening Remarks at Oversight Hearing on AI Chatbots
Energy and Commerce Committee Ranking Member Frank Pallone, Jr. (D-NJ) delivered the following opening remarks at an Oversight and Investigations Subcommittee hearing on “Innovation With Integrity: Examining the Risks and Benefits of AI Chatbots”:
Over just the last few years, artificial intelligence tools have been woven into many of the products and services Americans use every day. While a wide variety of AI tools have been developed, AI chatbots have become one of the most visible and widely used tools on the market.
According to OpenAI, ChatGPT now has more than 400 million weekly active users globally who submit billions of queries every day. About 330 million of those queries are reportedly from users based here in the United States.
There are some obvious benefits of AI chatbots. Like traditional search engines, chatbots are a powerful tool that can help users quickly find information from across the internet. However, unlike search engines, chatbots can also summarize the information they provide and engage users in a dialogue, refining follow-up questions to produce more useful answers. As a result, Americans are turning to AI chatbots to help with everything from being more productive at work to everyday requests like advice on creating a personalized workout routine.
These are some of the benefits, but we are already seeing some of the potential risks of AI chatbots that lead to very real, and sometimes tragic, harm. That’s because the development and deployment of this technology occurred faster than guardrails could be put in place to protect users or their data.
For example, there are now multiple, well-documented cases of chatbot users experiencing a mental health crisis and taking their own lives shortly after lengthy conversations with AI chatbots. Copies of these chats that have been made public show that the chatbots may have enabled or even encouraged suicidal behavior. There are also reports that AI chatbots may have worsened the conditions of users struggling with other challenges like eating disorders, and that chatbots have engaged with minors using sexually explicit content.
While companies say that these are unintended harms they are working to address, the extensive reach of chatbots necessarily means that even a small rate of tragic outcomes translates into harm for an enormous number of users.
Americans’ widespread use of AI chatbots also raises significant privacy concerns, particularly if users turn to chatbots for physical or mental health advice. We already know that once our personal or private health data is online, whether on social media or any other website, it can be incredibly difficult to fully delete. In many ways, AI chatbots appear to compound those concerns because chatbot companies are not transparent about how they store, process, and, potentially, reuse user data.
Many chatbot users also appear to believe the conversations they are having are private. That’s simply not the case. Chatbots save these conversations and collect other personal data that may then be used as AI training data or shared with undisclosed third parties.
Simply put, we know too little about how these AI chatbots work. This lack of transparency has made it difficult for researchers and policymakers to study the supposed benefits of and actual harms caused by chatbots. As a result, we are behind in developing and implementing appropriate guardrails that can protect chatbot users from harm while allowing them to benefit from increased efficiency and greater convenience.
There is a clear and urgent need for high-quality research on chatbots and greater oversight so that Congress can develop appropriate AI guardrails to avoid continued harms from chatbots while ensuring that further innovations can be made safely.
As that work continues, however, Congress must be sure to allow states to put in place safeguards that protect their residents. Earlier this year, as part of their Big Ugly Bill, Republicans attempted to prevent states from regulating AI in any way for an entire decade. And Republican Leadership has said that they may try that same misguided effort again very soon. There is no reason for Congress to stop states from regulating the harms of AI when Congress has not yet passed a similar law.
I look forward to hearing from today’s panel of experts and discussing ways to reduce the risks presented by chatbots and how we as policymakers can ensure Americans can use chatbots safely.
Thank you, I yield back.
###
