Congress Restricts Staff Access to ChatGPT to Protect Privacy

The U.S. Congress has placed new limits on staff use of ChatGPT, the popular generative AI chatbot. In a memo to staff, the House of Representatives stated that only the paid version, ChatGPT Plus, is authorized for use in the chamber, and only for research and evaluation purposes. The memo also warns staff not to paste any confidential or non-public text into the chatbot and to enable its privacy settings to prevent data leakage.

ChatGPT is a powerful AI tool that can generate realistic text based on a given prompt or context. It was created by OpenAI, a research organization backed by Microsoft and other tech giants. ChatGPT has been widely used for various applications, such as writing stories, creating content, and generating code. However, it also poses potential risks to privacy and security, as it may inadvertently reveal sensitive information or produce harmful or misleading content.
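For context on how such text generation is typically driven, programmatic access to OpenAI's models goes through an API that is separate from the ChatGPT web app discussed in the memo. Below is a minimal sketch, assuming the official openai Python SDK (v1.x), an API key exported as OPENAI_API_KEY, and an illustrative model name and prompt; it simply sends a prompt and prints the generated reply.

```python
# Minimal sketch: generate text from a prompt via the OpenAI API.
# Assumes the official "openai" Python SDK (v1.x) is installed and an
# API key is available in the OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # illustrative model name, not prescribed by the memo
    messages=[
        {"role": "user", "content": "Summarize the main idea of this paragraph in one sentence: ..."},
    ],
)

# The generated text is returned in the first choice of the response.
print(response.choices[0].message.content)
```

This API-based path is distinct from the ChatGPT Plus web product named in the House memo, which applies to staff use of the chatbot itself.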

According to the memo, obtained by Axios, House Chief Administrative Officer Catherine L. Szpindor said that ChatGPT Plus incorporates "important privacy features that are necessary to protect House data." These features include the ability to delete chat history and to prevent user interactions from being incorporated back into the AI model. ChatGPT Plus is offered by OpenAI as a subscription costing $20 per month per user.

The memo also states that "no other versions of ChatGPT or other large language models AI software are authorized for use in the House currently." In other words, staff may not use the free version of ChatGPT or any similar AI tools available online.

The new rules come amid growing concern over regulation of the AI industry and its impact on society. Senate Majority Leader Chuck Schumer recently called on Congress to pass legislation addressing the challenges and opportunities posed by AI, such as national security, job loss, and innovation, and outlined a framework for the areas Congress should focus on in AI policy.

Several tech companies have also restricted or banned the use of ChatGPT and other generative AI tools for their employees, citing fears of data breaches or ethical issues. For example, Apple and Samsung have reportedly prohibited their workers from using ChatGPT-like tools for work-related tasks.

ChatGPT's creator, OpenAI CEO Sam Altman, has defended the tool as a positive force for creativity and innovation. He has also testified before the Senate on the potential benefits and risks of AI and urged lawmakers to support research and development in the field.
