Last Friday’s post got a lot of very insightful comments. In response to one comment about the potential of using AI to break through the wall of complex law and policy, @Darulharb writes:

That is, I suspect, exactly what they're doing right now [meaning DOGE]: spinning up the A.I. systems that will be tasked with taking a comprehensive and detailed look at both the legal and regulatory structure, and the expenditures. This is something that previous reform commissions never had the technical capability to attempt before, because the technology didn't exist. The most shocking thing I believe we'll see greater public awareness of because of DOGE is the degree to which even Congress doesn't know what is going on.
This is likely spot on. The reality is that in many domains, the regulatory and spending complexity is such that it’s very hard for anyone to know what’s going on. You might think it’s Congress’s job to understand how the laws they’ve written have been operationalized, but that’s one of their chief complaints — that they don’t really understand what happens within the agency and they don’t always think it's consistent with their intentions. And the agencies themselves are dealing with the accretive nature of what comes down from Congress — new laws naturally reference and amend old laws, creating one confusing web of language. Then there’s the web of the regulations previous staff have written, not to mention the policies, forms, and processes that have been born from those regulations that seem to carry the weight of the law but are really somewhat arbitrary expressions of one way they could be operationalized. It becomes hard to sort out what cannot be changed without an act of Congress from what would need a new rule (and therefore a rulemaking process) from what could entirely legally be modified if only Bob over in compliance would stop threatening to call the Inspector General. (And actually can be changed over Bob’s objections, of course, but the team is then saddled with an IG inquiry, which will detract from other priorities.)
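To make that triage concrete: if you were pointing an LLM at this problem, the first job would be sorting each requirement into one of those three buckets. Here’s a minimal sketch of what that might look like; it’s purely hypothetical on my part, and the prompt, labels, and model name are all assumptions, not anything DOGE is known to be building:

```python
# A purely hypothetical sketch of statute/regulation/policy triage.
# The prompt, labels, and model name are assumptions, not anything
# DOGE (or any agency) is known to be running.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

TRIAGE_PROMPT = """You are a legal analyst. Given an agency requirement and
the authorities cited for it, classify the requirement as one of:
  STATUTE     - changing it requires an act of Congress
  REGULATION  - changing it requires notice-and-comment rulemaking
  POLICY      - an internal choice the agency could legally revise itself
Quote the specific language that supports your classification.

Requirement: {requirement}
Cited authorities: {authorities}
"""

def classify_requirement(requirement: str, authorities: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[{"role": "user", "content": TRIAGE_PROMPT.format(
            requirement=requirement, authorities=authorities)}],
    )
    return resp.choices[0].message.content
```

Run at scale over an agency’s policy manuals, even a rough classifier like this would at least separate the “act of Congress” pile from the “Bob in compliance” pile.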
Francis Maude was the Minister for the Cabinet Office when the UK stood up the Government Digital Service. He was its chief champion and protector. Being Minister for the Cabinet Office is like being White House Chief of Staff but with cross-government efficiency and reform, cybersecurity and digital transformation, and coordination between different government departments thrown in. A very big job and a very important position. Not only did he back GDS despite fierce overt and covert opposition, he also led an efficiency program that saved $75 billion and a superb open data program. After all that, he told me:
I and my immediate team were absolutely exhausted. It was the emotional wear and tear of having to check every single thing you’re told, unless you’re absolutely confident of its provenance. Of having to say four times a day: “Show me the chapter and verse. Show me where it says we can’t do this.” It just grinds you down.
Imagine DOGE walking into agencies on January 21st and not having to say that four times a day. If they’re building good AI models (and, let’s hope, testing them), they won’t need to ask that at all. They’re going to know what’s legal and what’s not, or at least think they know. (All of it is always open to interpretation.) Right there, the wall I talked about on Friday is immediately pierced. It’s not so much an information asymmetry we’ll be looking at as an asymmetry of understanding, and of confidence (merited or not) in their ability to act, and act fast.
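If I had to guess at the shape of those models, it would be retrieval-augmented generation over the statutes and regulations: fetch the provisions relevant to a question, then force the model to answer only by citing them. A minimal sketch, with a toy two-section corpus and placeholder model names standing in for whatever they may actually be using:

```python
# A hedged sketch of "show me the chapter and verse" as retrieval-augmented
# generation: embed provisions, retrieve the closest ones to a question,
# and require the model to cite the retrieved text in its answer.
import numpy as np
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set

sections = {  # toy corpus: citation -> provision text (abridged placeholders)
    "5 U.S.C. § 552a(b)": "No agency shall disclose any record ...",
    "31 U.S.C. § 1301(a)": "Appropriations shall be applied only to the "
                           "objects for which the appropriations were made ...",
}

def embed(texts):
    resp = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return np.array([d.embedding for d in resp.data])

citations = list(sections)
corpus_vecs = embed([sections[c] for c in citations])

def chapter_and_verse(question: str, k: int = 2) -> str:
    qv = embed([question])[0]
    # cosine similarity between the question and every provision
    sims = corpus_vecs @ qv / (np.linalg.norm(corpus_vecs, axis=1) * np.linalg.norm(qv))
    top = [citations[i] for i in np.argsort(sims)[::-1][:k]]
    context = "\n\n".join(f"{c}: {sections[c]}" for c in top)
    resp = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[{"role": "user", "content":
            "Answer using ONLY the provisions below, citing each one you rely on.\n\n"
            f"{context}\n\nQuestion: {question}"}],
    )
    return resp.choices[0].message.content
```

The plumbing is trivial; the point is that every answer arrives with chapter and verse attached, which is exactly what Maude’s team ground themselves down extracting by hand.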
It should also be a huge wake-up call for Congress, as @Darulharb points out. Congress seems to have talked endlessly about AI over the past two years, but in practice is taking a quite conservative approach to its own use of it. The team at Popvox has been following this closely as they attempt to help the first branch address what founder Marci Harris calls the pacing problem: societal needs and technology change rapidly, but government does not. In September, the House released a policy that “establishes a new restrictive framework with Committee on House Administration and Chief Administrative Officer as gatekeeper through which Members and staff must proactively seek pre-approval for new uses or tools.”1 That means that ChatGPT Plus is still the only commercial AI tool approved for use in the House. Popvox wishes for better:
In our view, the new policy — if followed by lawmakers and policy staff — would significantly curtail their ability to fully understand the tools and technologies that they are endeavoring to regulate in their policy roles.
It may also significantly curtail their ability to understand their own work, both legislative and oversight, and to act quickly, right as a potentially adversarial actor is emerging. Most commentary on DOGE has pitted it against the agencies it has vowed to drastically cut, but Congress is going to want to have a say in what they propose to do. And of course, the executive branch is legally bound to spend what Congress has appropriated, though the Trump administration has signaled an interest in challenging (or ignoring) the Impoundment Control Act of 1974, which, according to a statement from the Democrats on the House Budget Committee, “established procedures to prevent the President and other government officials from unilaterally substituting their own funding decisions for those of the Congress.” In other words, if the budget passed by Congress says we’re spending this, we’re spending this, DOGE or Trump be damned. I have no idea how that is going to play out legally, but I do suspect that if DOGE has the tools my commenter thinks they probably already have at their disposal, one party in this brewing fight is going to have some significant advantages. Marci’s pacing problem frames the fast pace of change in society at large against the slow pace of government, but we may be about to see a massive pacing problem — a dramatic speed asymmetry — within government itself.
Marci reminds me that the Impoundment Control Act that reinforced the executive branch’s obligation to spend according to the budget also set up the Congressional Budget Office. Congress wanted this because it didn’t trust Nixon’s Office of Management and Budget to be the single source of truth on cost-benefit analysis. It makes sense; you never want to rely on analysis done by an adversary when you’re negotiating. The instincts that led them to beef up their own muscle then should be kicking in now, and it might start with giving themselves access to the full range of tools available.
Though even that full range of tools may be child’s play compared with DOGE’s arsenal. I hear rumors that Elon has about 40 or so engineers already squirreled away in a building near Lafayette Square. I have no special knowledge, but what I imagine them doing is exactly what @Darulharb suggested: spinning up AI systems that are going to change the playing field in ways few in Washington understand. I’m curious to see what happens when January 21st comes around and Elon lets the DOGE out. Transition might be a little different this time.

1. House Information Technology Policy 08.0 (HITPOL 08.0)
I think this treats the capabilities of LLMs far too credulously. In reality they are not nearly that good; one cannot simply dump the annotated U.S. Code and CFR into them and hope to get anything useful out.
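For what it’s worth, rough arithmetic supports the “can’t just dump it in” objection. With assumed, order-of-magnitude figures for the size of the CFR:

```python
# Back-of-envelope estimate of why the full CFR cannot fit in a context
# window. All figures below are assumptions, not measured values.
cfr_pages = 185_000          # assumption: approximate CFR page count
words_per_page = 500         # assumption: dense regulatory text
tokens_per_word = 1.3        # common rule-of-thumb for English tokenization

total_tokens = int(cfr_pages * words_per_page * tokens_per_word)
context_window = 200_000     # assumption: a large frontier-model window

print(f"Estimated CFR tokens: {total_tokens:,}")                      # ~120 million
print(f"Context windows needed: {total_tokens // context_window:,}")  # ~600
```

So any serious attempt has to layer retrieval, chunking, and evaluation on top of the model rather than feeding it the corpus whole.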
Wow, this is quite the double-edged sword! We have a real (or maybe not real) regulatory issue barring a significant service upgrade here: departments can’t share information like address changes with other departments, leading to endless confusion, lost benefits, etc. Using AI to untangle that web of bureaucratic barriers could give that process a real boost. Thanks for posting this.