By Eli Fathi and Peter K. MacKinnon

Eli Fathi (L photo at top), C.M., is chair of the board at Ottawa-based MindBridge Analytics. Peter MacKinnon (R photo at top) is a senior research associate in the Faculty of Engineering at the University of Ottawa and a member of the IEEE-USA Artificial Intelligence Policy Committee, Washington, D.C. AI-generated image by MacKinnon.
Artificial Intelligence is transforming how democratic societies think, govern and make decisions. Democracy will be greatly shaped by whether AI becomes a tool for empowerment or a mechanism of control.
AI is here to stay, and its capabilities and its adoption by governments, businesses and the public will continue to accelerate.
To manage the impact of AI's evolution on open societies, we must build strong safeguards. These guardrails and regulations must include appropriate certifications, enforcement rules for infractions and civic literacy about AI. Without them, AI could undermine the very institutions democracy depends on.
AI is an emerging, disruptive technology with a wide range of dual uses, for both good and nefarious purposes. It promises massive economic growth and social benefit, but also risks significant job loss and deepened inequality. AI will permeate almost all blue-collar jobs and white-collar professions.
Goldman Sachs, an investment bank, predicts a 25-percent reduction in the current global workforce due to AI by 2030, affecting some 300 million people. That is only five years away, and the trend points to growing job loss and displacement, alongside the emergence of agentic AI tools that assist workers in their jobs.
Financial uncertainty for displaced workers can directly affect their behaviour and how they interact in an open society while seeking job security and financial stability. Addressing this will require shared responsibility between enterprises and governments.
Meanwhile, companies face the dilemma of how to navigate the changing technology curve and its impact on their future while remaining competitive and supporting their workforce. To survive, they must embrace and ride the technology curve and carry out scenario planning against a range of likely futures.
Governments need to deal with the impact on society of rapidly evolving fields such as AI and quantum technologies. These new technologies invariably enter markets with limited or no regulation in place beyond existing standards and guidelines.
Moreover, the legislative processes to develop appropriate regulations both nationally and globally consistently lag the introduction and use of these new technologies.
Importantly, rather than drafting laws in rigid terms, legislators should keep them flexible and nimble in the face of the rapid pace of advancement in these technologies.
It is not possible to stop progress, yet there is a serious policy need to plan now for job losses and transitions, before the wave of AI hits the national and global workforce in uneven ways. The impact on the workforce can be cushioned by continuously retraining and upskilling individuals with new skills and know-how.
Governments, the private sector and other institutions must collaborate in developing, funding and executing such programs over a long period of adjustment, including by learning best practices from other jurisdictions.
How AI could threaten democracy
The threat AI poses to democracy could take many forms, including election interference, increased surveillance and the manipulation of results, leading to an erosion of human agency.
AI can create fake news, deepfakes and synthetic media at scale. It is already eroding trust in journalism, elections and the idea of a shared reality of governance. Fake political ads, AI-generated candidate videos and deepfakes, for example, are spreading faster than they can be debunked.
AI can be used to influence and mislead through bots that create and post content online that appears genuine when it is fake. This practice, known as “astroturfing,” is a worrisome tool in the hands of nefarious players, as it creates the impression of grassroots support for a specific idea and can sway and distort public opinion.
Autocracies already use these tools to suppress dissent; democracies risk sliding toward “soft surveillance,” a form of persuasion over coercion. Soft surveillance could enable institutions to use AI to monitor, nudge and predict behaviour. Microtargeted political ads and recommender systems can subtly manipulate public opinion.
Democracy disperses power; it depends on openness, accountability and debate. As AI continues to develop, the question is whether it will be used to centralize control, tilting toward autocracy, or to empower citizens and institutions in maintaining and revitalizing democratic processes.
AI, on the other hand, amplifies power: whoever controls data, algorithms and compute resources can gain enormous influence over the ways and means of governing in a digital world. Today, this applies equally to the large corporations leading this transformation.
The private sector will continue to develop new applications along the evolving AI technology curve that help society, yet those applications can be repurposed and push the boundaries of the current legislative envelope.
For example, Flock Safety, an Atlanta-based company, oversees 80,000 cameras, mostly in public places, across 49 states, which help solve an estimated one million crimes annually with the assistance of AI. Such initiatives can have strong public support as well as detractors concerned about privacy.
To accrue the benefits of AI without undermining the way open democratic societies are run and function, societies must agree to adhere to a framework accepted and supported by the majority.
A recommended policy framework for moving forward on these broad issues is the OECD AI Principles, adopted in 2019 to “promote innovative and trustworthy AI that respects human rights and democratic values,” built on five core principles: inclusive growth, human rights, transparency, robustness and accountability.
Society must embrace new technologies to reap their benefits, and AI is no exception. At the same time, it is essential to minimize the undesirable consequences of bringing AI into the daily lives of individuals.
Over the years, democracies have withstood many challenges and will likely overcome the potential unintended consequences of AI by using AI to strengthen the democratic values of society.
As society moves rapidly toward online platforms, AI will be able to assist individuals in better understanding and navigating the complex world of today. To achieve this outcome, governments and institutions must establish appropriate regulations and guardrails.
In addition, society-wide digital literacy campaigns are necessary to foster a well-informed public and societal buy-in.