US Senate members can now use ChatGPT and other AI tools for official work
It is not just big companies making use of AI these days; the Senate is doing the same. A recent report indicates that the Senate has issued a memo allowing members to use Google’s Gemini, OpenAI’s ChatGPT, or Microsoft Copilot for their daily work. Confidential matters, of course, will not be entrusted to these tools, nor will they assist in key decision-making. Microsoft Copilot is already integrated into Senate platforms.
The New York Times recently reported that on Monday, a top Senate administrator authorized the use of the three chatbots for official work. In the memo sent to members, the chief information officer for the Senate sergeant-at-arms, the office that oversees the chamber’s computers and security, said that Copilot could “help with routine Senate work, including drafting and editing documents, summarizing information, preparing talking points and briefing material, and conducting research and analysis.”
On the matter of security and privacy, the memo adds that “data shared with Copilot Chat stays within the secure Microsoft 365 Government environment and is protected by the same controls that safeguard other Senate data.” How widely AI is already used in official Senate work is unclear, but the new guidelines clearly encourage it. Still, while the technology may help with day-to-day activities, questions remain about how classified information will be handled.
The report also notes that the Senate adopted a policy on AI use in 2024. According to the nonpartisan nonprofit POPVOX Foundation, AI may be used for work that does not involve sensitive information. It may not, however, be used in decision-making, and certain tasks the technology is permitted to perform, such as drafting talking points for a member of Congress, must also be approved.
While the Senate’s relationship with certain AI companies may seem amicable, the same cannot be said for the Pentagon. On Monday, Anthropic sued the Department of Defense after being labeled a “supply chain risk.” A separate New York Times report states that the company filed two lawsuits: one in the U.S. District Court for the Northern District of California and one in the U.S. Court of Appeals for the District of Columbia Circuit.
“This is a necessary step to protect our business, our customers, and our partners,” Anthropic said in a statement. “We will continue to pursue every path toward resolution, including dialogue with the government.” The company reacted this way because the Pentagon’s designation applies to firms deemed a major national security risk, such as those with ties to non-allied nations.