US Senate members can now use ChatGPT and other AI tools for official work

There are safeguards to the use of the technology but day-to-day activities are allowed.


March 11, 2026, Published 8:24 a.m. ET


It is not just big companies that are making use of AI these days; the Senate is doing the same. A recent report indicates that the Senate has issued a memo allowing members to use Google’s Gemini, OpenAI’s ChatGPT, or Microsoft Copilot for their daily work. Confidential matters, of course, will not be entrusted to these tools, nor will they be involved in key decision-making. Microsoft Copilot is already integrated into Senate platforms.


The New York Times recently reported that a top Senate administrator approved the use of the three aforementioned chatbots for official work on Monday. In the memo sent to members, the chief information officer for the Senate sergeant-at-arms, which oversees the chamber’s computers and security, said that Copilot could “help with routine Senate work, including drafting and editing documents, summarizing information, preparing talking points and briefing material, and conducting research and analysis.”


On the matter of security and privacy, the memo adds that “data shared with Copilot Chat stays within the secure Microsoft 365 Government environment and is protected by the same controls that safeguard other Senate data.” The extent of the use of AI in official Senate work is unclear, but these new guidelines clearly encourage it. Still, while the technology might be helpful in day-to-day activities, there may be concerns about how classified information is handled.


The report also notes that the Senate adopted a policy on AI use in 2024. According to the nonpartisan nonprofit POPVOX Foundation, AI may be used for work that does not involve sensitive information. It may not, however, be used for decision-making, and certain AI-assisted tasks, such as drafting talking points for a member of Congress, require approval.


While the relationship between the Senate and certain AI companies may seem amicable, the same cannot be said for the Pentagon. On Monday, Anthropic sued the Department of Defense after being designated a “supply chain risk.” A separate New York Times report states that the company has filed two lawsuits: one in the U.S. District Court for the Northern District of California and one in the U.S. Court of Appeals for the District of Columbia Circuit.

“This is a necessary step to protect our business, our customers, and our partners,” Anthropic said in a statement. “We will continue to pursue every path toward resolution, including dialogue with the government.” The company reacted so strongly because the Pentagon’s designation applies to firms deemed a major national security risk, such as those with ties to non-allied countries.


    © Copyright 2026 Market Realist. Market Realist is a registered trademark. All Rights Reserved. People may receive compensation for some links to products and services on this website. Offers may be subject to change without notice.