An aerial view of the xAI data center in Memphis, Tennessee. (Steve Jones/Flight by Southwings for SELC)
Artificial intelligence (AI) has become integrated into everyday life, shaping the algorithms that recommend what we watch and read and powering the tools that manage finances and schedules. AI supports many of our personal and professional routines and has become essential to global business and communication.
However, when AI blurs the line between fact and fabrication, it raises questions about whether democratic institutions can function effectively in an age of machine-generated content. At the core of this challenge is epistemic trust: confidence in factual knowledge. Democratic societies do not depend on citizens agreeing on values or policy preferences, but many countries, including the member nations of the Organisation for Economic Co-operation and Development, argue that a common understanding of reality is needed to deliberate, vote, and hold institutions accountable.
AI is also a growing economic force. Analysts project AI could contribute more than $15 trillion to the global economy by 2030, though they also note this growth will not be evenly distributed and may exacerbate inequalities between nations and communities.
Though often unnoticed, the pervasiveness of artificial intelligence poses environmental, ethical, and political dilemmas that are becoming increasingly prominent in political discourse.
Navigating the Energy Demands of AI
Training large AI models requires tremendous computing power, and greater computing capacity means higher electricity consumption. Data centers, the facilities that house the computing infrastructure running AI workloads, are projected by some analysts to account for nearly half of the growth in global electricity demand by 2030.
The increase has raised concerns among researchers at the Massachusetts Institute of Technology about the viability of AI growth and its compatibility with climate goals. They argue that rising electricity demand may slow nations’ transitions to renewable energy and complicate emissions targets.
The United States is facing new challenges from data centers’ energy consumption and their reliance on the electric grid. Spikes in electricity demand have raised costs and strained grid infrastructure. In an analysis of the nation’s largest electric grid, which covers the Mid-Atlantic and parts of the Midwest, the global consulting firm ICF estimates residential rates may rise 30% to 60% by 2030, with data centers accounting for over 90% of the projected new electricity demand in the region.
Some proponents of AI growth contend the technology’s environmental implications are not uniformly negative. Technology companies such as Amazon, Meta, and Alphabet are investing in renewable energy to power data centers, developing chips that consume less energy, and designing algorithms that manage energy grids intelligently and reduce waste. Other companies are leveraging AI to enhance environmental monitoring, optimize energy consumption, and mitigate the impacts of climate change. While AI drives up energy and resource demands, it may simultaneously emerge as a major avenue for addressing the climate problems it has helped create.
The Challenge of Deepfakes
Deepfakes, AI-generated or AI-altered videos, audio, or images, have become another prominent topic of political discourse. Highly realistic fakes can lend credibility to false claims and spread rapidly, contributing to the erosion of public trust in leaders and political institutions. A 2023 report by DeepMedia identified more than half a million deepfake videos online, along with an alarming increase in scams using deepfake technology.
The rise of deepfakes may complicate efforts to uphold integrity in political discourse, creating a climate in which voters struggle to discern reality from misinformation. Compared with earlier forms of misinformation, deepfakes pose a distinct risk: they can target and manipulate individuals more effectively. Research shows people often cannot distinguish AI-generated content from human content, and AI has been shown to manipulate audiences psychologically.
A real-world example of AI manipulation in elections came in January 2024, when several New Hampshire voters received a call from someone claiming to be President Joe Biden, discouraging them from participating in the state’s primary. The voice was not Biden’s; it was a deepfake. The consultant responsible said it was intended to start a conversation about the manipulation possible in elections. He was nevertheless fined $6 million by the Federal Communications Commission and criminally charged.
The normalization of deepfakes contributes to what scholars call the “liar’s dividend”: as people come to believe anything could be fake, it becomes easier for an individual or politician to dismiss unwelcome truths as “fake.” That erosion of trust in legitimate news sources could further degrade political discourse. If voters cannot know what is real, some, like Mark Riedl, a professor in the Georgia Tech School of Interactive Computing and associate director of the Georgia Tech Machine Learning Center, argue that elections and democratic institutions themselves are in jeopardy.
Democracy and Distrust
The expansion of generative AI (GenAI) content has also diminished public trust in media. In a study conducted by Deloitte, almost three-quarters (74%) of respondents who were familiar with or had experimented with GenAI, and 62% of habitual users, said its growing popularity makes it harder for them to trust what they see online.
Distrust of AI-generated content extends beyond the media. AI tools can, for example, produce political campaign material almost instantaneously. While these tools create efficiencies, they raise broader concerns about whether political content is authentic and who, if anyone, is accountable for political messaging.
The increasing distrust of media and information systems complicates democratic engagement. It is difficult for individuals to make rational decisions when they do not trust the content they receive, and that distrust undermines the foundation of a viable democracy.
As AI continues to grow, some are urging policymakers to implement clear, transparent guidelines. Academics are pressing tech companies to take responsibility and accountability seriously, and the federal government is advising citizens to remain aware and skeptical of the information they consume.
