Efficiency: AI Implementation in Theory and in Practice
Kamden Baer
July 12, 2024
One of the open-ended questions in my last post was highlighted by Oscar, who, in response to “what happens when the goal is not to be more efficient?”, noted that the goal of AI generally is greater efficiency. I see no reason to disagree, and on my own I cannot come up with a counterexample or exception. When I asked ChatGPT-4o for examples of goals of artificial intelligence other than efficiency, it gave and explained the following list: enhancing human creativity, improving decision-making, personalization, accessibility, advancing scientific research, sustainability, healthcare improvements, safety and security, education advancement, and cultural preservation. In my broad understanding of the concept, I would consider many of these goals to still fall under the umbrella of efficiency, or at least to be contributors to it. That said, I may have been giving AI too much credit: I overlooked the fact that ultimately (at least for the moment…) we as people decide how AI is created and implemented. My qualm with AI might have less to do with its aim of efficiency, and more to do with how it is implemented as a tool to produce efficiency.

I remember learning about the balance of efficiency versus equity in high school, most likely in my AP U.S. History course. I believe I learned about it in the context of checks and balances between the federal government’s three branches, i.e. the goal of a balanced dynamic of power among them. I recognize that this dichotomy might be too simple for philosophies of both government and technology, but it still provides a useful framework for this question of efficiency as it relates to AI. Increased efficiency brought to a world that is otherwise equitable in who creates AI and who uses it would be the perfect scenario: everything gets better across the board.
Unfortunately, that is not the world in which AI currently finds us. Many people have jobs that don’t need a computer, let alone AI: I think of the retail and food service employees, the cleaners, the farmers and farm hands, the construction workers, and the mail carriers. None of these careers benefit as greatly from AI-driven gains in efficiency as more traditional office jobs do, yet they still provide valuable and irreplaceable services and a living for the people who work them. AI might bring more efficiency and leisure time to the people who benefit most from it, but I don’t foresee those beneficiaries taking up the labor of those less advantaged by AI. Where there is efficiency, there is not necessarily equity, and this unevenness of implementation could detract from AI’s power.

Likewise, few companies have the resources and knowledge to create these models. In theory, the broad range and scope of AI lends itself well to a diversification and personalization of models as they respond to the needs of people and businesses. In practice, however, I’ve noticed the Microsofts, Googles, and IBMs of the world swoop in, with the aforementioned resources and knowledge, and pull many of the more popular models into their respective spheres of influence. The efficiency of these models might benefit greatly from the expertise of these companies, but many doors to more tailored products also close as a result. I don’t think I would be as concerned on this point if not for the fact that current and future AI is being touted as on par with, or far surpassing, human intelligence in many respects. Returning to my AP History lesson: the founding fathers presumably did not anticipate the role of the modern technology industry when they constructed the checks and balances of the federal government. Twitter and Meta already hold considerable sway in election cycles, and yet regulation in this realm is lackluster.
My fear is the consolidation of this unchecked technological governance with superhuman artificial intelligence, and its effects on public governance. And yet, taking a step back, I might be leaning a little extreme in my disfavor of AI (I feel a bit like a conspiracy theorist), so please let me know if you vehemently disagree, or agree, or land somewhere in between, and why!