AGI: From Ballots to Bots
In his forum lead, Charles T. Rubin worries that turning human governance over to artificial general intelligence (AGI) will cause immense problems. The world, and the United States in particular, has many incompetent government officials. We don't need to worry, though, that some of them will soon be robot overlords; that technology is nowhere in sight. It's understandable that the concern is in the public consciousness, because the new generative AI models make robots with human-level (or better) cognitive capabilities feel proximal. Thus, it is easy to imagine that we will soon be living in a post-scarcity world.
In reality, AI progress is more mundane, simply automating tasks that are part of many jobs. These productivity gains are substantial but less sensational, and thus don't get the headlines. Rather, the media and entertainment industry depict utopias (supported by mechanical, intelligent slaves) and dystopias (ruled by mechanical, intelligent overlords). Rubin imagines both at the same time. He believes AGI is inevitable and perfect enough to lead to a post-scarcity world (although the mechanism is unclear), yet too flawed to govern. How could we have perfect production, and thus no scarcity, without significantly improved government? The skills and technology are not sufficiently orthogonal to have one without the other.
Of course, reasonable people (and a fair number of unreasonable people) disagree on whether AGI is close, or even what would signal "human level." Intelligence differs widely among humans, particularly by task. Alan Turing, the creator of the "Turing Test," likely believed conversational ability was sufficient because it can be quite difficult (particularly for computer scientists). Other AGI tests require an understanding of physical space as well, something current generative AI models particularly struggle with. For example, one proposed test would have a robot assemble flat-packed furniture from the directions (a hurdle few humans can clear). Even if technology gets close to AGI, there will continue to be debate on whether it has been achieved.
Confusion is understandable, as the timeline for the advent of AGI has been notoriously hard to predict. In 1950, Turing predicted that it would happen when computer storage reached about 125 MB, which he estimated would take about 50 years. In 1970, Marvin Minsky, another early AI pioneer, predicted AGI was "three to eight years" away. Ray Kurzweil has consistently predicted AGI in 2029. Many current AI leaders are also predicting AGI is right around the corner. Perhaps they will be right. Or perhaps AGI is not an extension of our current technologies, and they are like those fooled by the ELIZA chatbot in 1966. No matter how much skyscraper-building technology progresses, a skyscraper will never reach the moon. Our current generative AI technology is not a rocket ship.
Even if AGI is developed, it will not be perfect. Rubin concludes by asking, “if utopia is not in the cards, why would we want a world where human work and effort are subordinated to or made redundant by AI?” Utopia is not in the cards. It never is. AI is flawed like every other tool that humanity has developed. However, this is human progress—tools subordinating work to allow people to focus on better pursuits. Every invention has some downsides, and AI is no different. It may not be necessary in the strictest sense, but it is yet another marginal step in improving the human condition.
Therefore, generative AI will not eliminate scarcity. It may markedly improve productivity, and it will certainly displace jobs in many sectors. For example, there is already some decline in freelance writing, transcription, stock photography, and customer service jobs. Many are understandably anxious about where new jobs will be created to replace those lost, particularly beyond those adjacent to generative AI. New jobs will come, but they will take time. The first mobile phones immediately created high-paying jobs for engineers, but only many years later did the industry indirectly spawn jobs for Uber drivers and TikTok content creators. At least in the near term, scarcity is more likely to be reduced by things like fertilizer and shipping containers than by groundbreaking technology.
Joseph Schumpeter labeled the process by which some jobs are destroyed and new (presumably better) ones are created "creative destruction." The new jobs are not filled by the same displaced people, however, particularly at the modern pace of change. That being said, it is likely AI will transform parts of jobs, e.g., writing copy, rather than replacing them completely. New jobs will pop up, for example, creating generative AI, using it, and building new tools around it. Mobile phone creators made plenty of wealth, but so did everyone in the ecosystem, from those making phone cases to those making apps, and even those who indirectly profit simply by using these new platforms. It's quite hard to see the new jobs now, when the technology is so nascent, but the new jobs will come.
Of course, generative AI is more creative (according to some definitions of the word) than previous technologies, and it will not just replace manual labor but also intellectual work. Just as the calculator radically transformed mathematical work, generative AI will transform writing. Now, it is an excellent search engine that returns its results in natural language rather than as a list of links. It will definitely remove some of the drudgery involved in researching and writing, but it won't eliminate the need for human involvement. There are more accountants than ever before, even though the bookkeeping clerk has disappeared. Likewise, generative AI will replace some use cases for human-created art. However, new art forms have always co-existed alongside the old ones: paintings and photographs, concerts and Spotify, Broadway and Hollywood. Already, there are art forms based on generative AI.
Even if generative AI were to become massively more capable, humans would never "turn our affairs over to superintelligent AI," as Rubin says may happen. Superintelligent AI will have better spreadsheets than the central planners of the past, but it will still run into the same problems technocracy and central planning always have. Additionally, people are notoriously reluctant to turn their affairs over, although maybe frustration with our horrible politicians and automation bias will lead them there.
Inevitably, AI will continue to take over more tasks from human beings, but it is inaccurate to imagine it will run out of control. As Asimov realized with his Three Laws of Robotics, people will not let AI loose without strong built-in control mechanisms. His intelligent robots had control mechanisms that essentially made them slaves. Model implementation processes have humans "in the loop," creating, validating, and testing the system before deployment, as well as monitoring and correcting the system as it runs. In more critical situations, humans can be "on the loop," supervising with the authority to stop or alter the system's actions. Both public and private organizations have been building frameworks around this, for example, the NIST AI Risk Management Framework. Rubin asks what happens if the AI "hoards all the energy resources for itself." Well, then the developers either fix it or, in the worst case, unplug it. We should be far more scared of humans hoarding resources, as they are far more difficult to control.
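To make that kind of oversight concrete, here is a minimal, hypothetical sketch in Python (the names, threshold, and logging are invented for illustration and are not drawn from the NIST framework): routine actions proceed automatically but are logged for later audit, while high-risk actions stop and wait for an explicit human decision.

from dataclasses import dataclass

@dataclass
class ProposedAction:
    """An action an AI system wants to take, e.g., flagging a claim for review."""
    description: str
    risk_score: float  # the system's own risk estimate, from 0.0 (routine) to 1.0 (critical)

def run_with_oversight(action: ProposedAction, risk_threshold: float = 0.7) -> bool:
    """Return True if the action may proceed.

    Routine actions run automatically but are logged so a human supervisor can
    review them later; high-risk actions stop and wait for explicit human
    approval before anything happens.
    """
    if action.risk_score < risk_threshold:
        print(f"[log] auto-approved: {action.description}")
        return True
    answer = input(f"Approve high-risk action '{action.description}'? [y/N] ")
    approved = answer.strip().lower() == "y"
    print(f"[log] human decision: {'approved' if approved else 'blocked'}")
    return approved

if __name__ == "__main__":
    run_with_oversight(ProposedAction("summarize a routine legislative bill", 0.1))
    run_with_oversight(ProposedAction("deny a Medicare claim as fraudulent", 0.9))

In a real deployment the gate would be a review queue or a kill switch rather than a terminal prompt, but the shape is the same: the system proposes, and a human retains the authority to approve, correct, or unplug.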
Additionally, while it is possible that just one company will achieve AGI, there is currently healthy competition. There is a proliferation of models from both large and small companies. Free, open-source models, like DeepSeek, also provide lower-cost alternatives at very close to the same performance. As James Madison said in Federalist #51, "Ambition must be made to counteract ambition." Technology companies with powerful, evil models will be fought both by other companies and by people who don't want to cede power to AI or to the companies that control it.
Regardless, it’s very premature to debate what a super-intelligence will be capable of doing. Rubin is almost certainly correct that interpreting human motivations will be difficult, along with many other challenges. While we don’t need to worry about humans allowing AI to rule us, we should worry about how it is currently being used in government. It doesn’t need to be super-intelligent to be used for evil. Like all tools, it will be aligned with those who use it. For example, in the private sector, banks use AI for loan decisions that align with their own interests, not those of the borrowers. Likewise, government AI serves the bureaucrats, not the citizens. Therefore, it is very reasonable to ask how to ensure AI use is in the public interest. It is also reasonable to ask how to keep people from blindly trusting its decisions, since it can be just as deceptive and stupid as human politicians.
In the short term, generative AI in governance is probably going to look a lot like private sector use. It will likely replace and assist with many of the tasks currently done by armies of interns and aides, like writing reports and summarizing regulations and legislative bills. It will help with some of the drudgery, like processing passport renewals and grants. Chatbots will help people learn about and use government services. Applications like fraud detection for Medicare claims will expand. Of course, it is also employed more nefariously, for things like surveillance and predictive policing. Ensuring these uses are transparent and effective is effort well spent today, rather than worrying about potential AGI.
AI doing a lot of the boring business of government is not terribly exciting. If AI could ever replace the sorry excuses for human beings that are most of our legislators with some perfect system that weighs inputs from all stakeholders and creates a balanced, benevolent dictatorship, that's a problem we'd have to face when it comes. Our primary worry should instead be that AI will act like a biased, unhinged, idiotic jerk. But then again, we have to worry about the same thing with our politicians.
Rachel Lomasky is Chief Data Scientist at Flux, a company that helps organizations implement responsible AI. Throughout her career, she has helped scientists train and productionize their machine learning algorithms.
Source: https://lawliberty.org/forum/from-ballots-to-bots/