"Is Artificial Intelligence a Game Changer in Political Campaigns?"
- Mahnoor Khakwani
- Apr 2, 2024
- 6 min read

The recent accord, in which tech companies pledged to combat deep fakes depicting political candidates, represents a recognition of the potential threats posed by AI manipulation in the political sphere. Deep fakes, which are realistic but fabricated audio, video, or image content created using AI, can be used to spread misinformation and manipulate public opinion, posing serious risks to democratic processes.
The involvement of major tech companies like Google, Meta (formerly Facebook), TikTok, and others in this accord indicates a growing awareness within the tech industry of its responsibility to address the societal implications of its technologies. By committing to detect and mitigate the spread of deep fakes, these companies are taking a proactive stance in safeguarding the integrity of political discourse.
However, the effectiveness of such initiatives remains to be seen. Detecting and combating deep fakes is a complex technical challenge, and the efficacy of current detection methods may be limited, especially as AI-generated content becomes increasingly sophisticated. Additionally, the decision not to outright ban deep fakes raises questions about the extent to which these companies are willing or able to regulate their platforms.
Meanwhile, regulatory efforts in countries like Canada are still evolving. Governments are grappling with how to effectively regulate AI technologies without stifling innovation or infringing on free speech. In Canada, as in many other countries, there is ongoing debate about the appropriate legal and regulatory frameworks to address the challenges posed by deep fakes and other AI-driven phenomena.
Overall, while initiatives like the one mentioned represent a positive step towards addressing the risks of AI manipulation in politics, they are unlikely to provide a comprehensive solution on their own. Effective regulation, technological innovation, and broader societal awareness and engagement will all be necessary to mitigate the threats posed by deep fakes and other forms of AI-driven misinformation in the political sphere.

The contrasting approaches between American and Canadian regulators and campaigners regarding the use of artificial intelligence tools in elections highlight differences in regulatory frameworks, technological adoption, and political culture between the two countries.
In the United States, the rapid implementation of new rules governing AI tools in elections reflects a sense of urgency to address the potential risks and opportunities associated with these technologies, particularly in the context of a presidential election year. Campaign operatives are increasingly exploring the use of AI for voter targeting, mobilization, and fundraising, driven by the desire to gain a competitive edge and adapt to evolving electoral dynamics.
On the other hand, Canadian legislators and election authorities are taking a more cautious approach, particularly in a year when several provinces are holding general elections. The slower pace of regulatory action may stem from a desire to carefully evaluate the implications of AI in the electoral process and ensure that any regulatory measures strike the right balance between innovation and safeguarding democratic integrity.
Steve Outhouse's observation that AI is still an emerging tool in the world of Canadian elections underscores the relatively nascent stage of AI adoption among Canadian campaigners compared to their American counterparts. While digital tools are increasingly integral to modern political campaigns in Canada, the use of AI-specific applications for voter targeting and mobilization may be less widespread or sophisticated.
Overall, the differing approaches to AI in elections between the United States and Canada reflect broader variations in regulatory environments, technological readiness, and political strategies. As AI continues to evolve and play a greater role in electoral campaigns worldwide, both countries will likely grapple with how to effectively harness its potential while mitigating its risks.
The difference in the use of AI tools between New Brunswick and the United States reflects varying levels of adoption and comfort with these technologies within the political campaign landscape.
In New Brunswick, while campaigns may consider AI tools as a way to augment their existing digital toolkits, there is a clear preference for maintaining a human touch in voter outreach. Campaigns are opting for live callers or messages recorded by actual individuals rather than fully automated, AI-generated interactions. This approach underscores a commitment to personal communication and traditional voter contact methods.
Similarly, the New Brunswick Green Party spokesperson also emphasized the reliance on human beings and traditional methods for voter outreach, indicating a common sentiment among Canadian political parties to prioritize human interaction in their campaigns.
In contrast, campaign operatives in the United States are already leveraging AI tools for various aspects of campaign management, including script development for phone calls, generating images and video content, and analyzing voter data. Companies like Votivate LLC are offering AI-driven solutions to provide real-time voter data, advanced analytics, and assistance with various campaign activities.
Despite the absence of such sophisticated AI tools in the Canadian context, some practitioners of digital campaigning may be starting to integrate AI into their workflows. However, the overall use of AI in Canadian political campaigns appears to be relatively limited compared to the United States, reflecting differences in technological adoption and political culture between the two countries.

The statements from Dean Tester and other professional campaigners illustrate how AI is being utilized as a tool to enhance efficiency and productivity in political campaigns, primarily by expediting research, writing, and idea generation processes. However, these practitioners emphasize the importance of human oversight to ensure the quality and integrity of the final products. While AI can accelerate certain aspects of campaign operations, the output still requires human judgment and refinement before being disseminated to the public.
The recent incident in the New Hampshire primaries, where a fake robocall purportedly from U.S. President Joe Biden was distributed using AI-generated voice technology, highlights the potential risks associated with AI misuse in political campaigns. Legislatures in numerous U.S. states have responded by passing or considering laws to regulate the use of AI in campaigns, with some states requiring disclosure to voters when AI tools are employed for persuasion.
In the wake of the New Hampshire incident, the U.S. Federal Communications Commission (FCC) took decisive action by ruling that robocalls using AI-generated voices, political or otherwise, are illegal under existing telemarketing law. The Canadian Radio-television and Telecommunications Commission (CRTC), however, takes a different stance, suggesting that existing regulations may be sufficient to address AI-generated robocalls. While there is no specific policy on the use of AI in robocalls, the CRTC indicates that such use could violate existing legislation or regulatory rules, depending on the circumstances.
The differing regulatory approaches between the United States and Canada reflect variations in legal frameworks and regulatory priorities. While the U.S. has taken proactive measures to address the specific threat posed by AI-generated robocalls, Canadian authorities rely on broader regulations to deter such misuse. As AI continues to play a growing role in political campaigns, ongoing vigilance and adaptation of regulatory frameworks will be essential to ensure the integrity and fairness of electoral processes.

The statements from Patrick Bundrock of the Saskatchewan Party and other Canadian political figures highlight a growing awareness of the potential risks associated with AI in electoral contexts and a willingness to take proactive measures to address them. While there have been relatively few reported incidents of AI misuse in Canadian elections thus far, political parties and election authorities are actively engaging in discussions and considering regulatory changes to safeguard the integrity of the electoral process.
Elections Saskatchewan and Elections Canada are both examining the implications of AI for elections and considering updates to relevant legislation and regulations to address emerging threats. The Communications Security Establishment's warning about the potential use of AI-generated fake images and videos to undermine democracy underscores the need for vigilance and preparedness among political actors and voters alike.
The incident in the 2023 Toronto mayoral race involving a deep fake video demonstrates the evolving capabilities of AI technology and the potential for increasingly convincing manipulation of digital content. As such technologies become more accessible and sophisticated, there is a growing recognition of the importance of transparency and accountability in political communication, including commitments by parties in British Columbia to refrain from misrepresenting AI systems as human beings.
Federal Minister for Democratic Institutions Dominic LeBlanc acknowledges the government's concern about the inappropriate use of AI in elections and emphasizes the need for measures to prevent such misuse and maintain public confidence in the electoral process. As Canada considers changes to its electoral legislation, the impact of AI on elections is likely to be a key consideration, reflecting a broader global effort to address the challenges posed by technological advancements in the political sphere.