Wonks and War Rooms

The Uses of AI in Canadian Politics

Elizabeth Dubois Season 5 Episode 5

In this special episode Elizabeth is joined by our panel of experts — Samantha Bradshaw, Wendy Chun, Suzie Dunn, Fenwick McKelvey and Wendy H. Wong — for a roundtable discussion on how artificial intelligence is being deployed in Canadian political contexts. The topics range from mis- and disinformation, facial recognition, synthetic media, deepfakes and voice cloning to technical terms like GANs and large language models. We discuss the ways identities can be manipulated through AI, how generative AI creates content that dilutes our trust in images and media, and how AI relies on past data to make decisions about our future. We also look at potential solutions to all these challenges, including how to develop tools and techniques to detect disinformation, and questions around regulating AI while also enabling its use in creative expression.

This episode is packed with more resources than we can list below, so take a look through the annotated transcript for more links!

Additional resources: 

Check out www.polcommtech.ca for annotated transcripts of this episode in English and French.

Elizabeth: [00:00:04] Welcome to Wonks and War Rooms, where political communication theory meets on-the-ground strategy. I'm your host Elizabeth Dubois. I'm an associate professor at the University of Ottawa and fellow at the Berkman Klein Center. My pronouns are she/her. Today, we've got a special episode for you. This is the recording from a live episode of Wonks and War Rooms that we did, looking at the uses of AI in Canadian politics. I hope you enjoy. Samantha, can you introduce yourself, please?


Samantha: [00:00:31] Yeah, definitely. Thanks so much, Professor Dubois, for hosting this. My name is Samantha Bradshaw and I'm an assistant professor at American University. I study new technology and security issues. A lot of my work has focused on online mis- and disinformation campaigns. Really excited to be here.


Elizabeth: [00:00:49] Thank you. Suzie.


Suzie: [00:00:51] Hi everyone. Thank you for the invitation to come and speak with all of you. Really looking forward to the conversation. My name is Suzie Dunn. I'm an Assistant Professor at Dalhousie University's Schulich School of Law. Most of my research looks at a variety of what I call identity manipulation. So I'm really interested in deepfakes and generated images. And I also do research on technology-facilitated violence and online harms and looking at the ways that AI can cause harms in digital spaces.


Elizabeth: [00:01:21] Awesome. Thank you. And Wendy W.


Wendy W: [00:01:24] Hi, and thanks for having me. My name is Wendy Wong. I'm a professor of political science and Principal's Research Chair at the University of British Columbia. I'm interested in global governance and international relations, and I've been doing research on human rights for about 20 years now. I currently spend a lot of time thinking about the governance of data-intensive technologies like AI, so how we should govern them, how they govern us, and what are the political and social implications we should be thinking about. And I have a book coming out this October from MIT Press called "We, the Data: Human Rights in the Digital Age". And I'm looking forward to our chat today.


Elizabeth: [00:02:00] Thank you. Fenwick.


Fenwick: [00:02:03] Hi, Elizabeth. Great to be here. So my name is Fenwick McKelvey. I'm an associate professor in information and communication technology policy at Concordia University. I also am co-director of Concordia University's new Applied AI Institute, where we're looking at many of the issues around AI governance and ways to respond to AI's social impacts. I also run Machine Agencies at the Applied AI Institute, which is a research cluster dedicated to the intersection of human and machine agencies. This is a topic that's really near and dear to my heart. For a long time I've been very focused on the long, long history of what we might call artificial intelligence in politics, and I'm working on a book-length project about the story of the use of computers in American politics to solve its international and domestic political issues. So I'm looking forward to offering some historical context, as well as maybe plugging some of our past work around political bots and some of the other long history of big data in politics in Canada. Thank you.


Elizabeth: [00:03:01] Thank you. And Wendy C.


Wendy C: [00:03:04] Hi, everyone. I'm Wendy Chun. I'm the Canada 150 Research Chair in New Media at Simon Fraser University, also professor of communication and director of the Digital Democracies Institute, where we take on some of the questions around polarization, mis- and disinformation, but also increasingly thinking about what we're calling data fluency. So ways to reinvent, intervene and speak in different ways, and to resist this data-filled world we find ourselves part of.


Elizabeth: [00:03:37] Thank you. And I'm Elizabeth Dubois. As Florian mentioned, I'm an associate professor at the University of Ottawa in the Department of Communication. I'm also a university research chair in politics, communication and technology. And I run the Pol Comm Tech lab. And this year I am a fellow at the Berkman Klein Center at Harvard. So for today, instead of starting with the typical me offering a definition of the topic for you to tell me whether or not it makes sense, I figured since we have such an expert lineup, I'd hand it over to you each to tell me how you think about what AI is and maybe offer some examples of AI in politics in Canada, the ones that you're most interested in, the things that you think are most important that we talk about, however you want to define that. I'll give each person about a minute to just talk about some of those uses of AI thought of quite broadly, and then we'll get into some more details. So for this one, can I start with Suzie?


Suzie: [00:04:37] Sure. Yeah. So I look at a really specific area of artificial intelligence. I'm really interested in the ways that identities can be manipulated and co-opted through AI. And sometimes this comes in the form of misinformation or disinformation. Other times it's in [the] form of satire. So I look a lot at what we call synthetic media. Often people talk about this in the form of deepfakes. Deepfakes, the way that I think about them, are manipulated videos: fake videos that are created to have people say and do things they haven't said or done. And more recently, we're seeing a real pickup in AI-generated images. So Midjourney, Dall-E, Stable Diffusion are now being used to create fake images of politicians, fake historical events, fake political events. And they're getting a lot of media attention right now. One area that I don't think we talk enough about, but I think is really interesting, is voice cloning. So mimicking people's voices, and I'll provide some examples later on in the conversation. But there are companies like Descript that allow people to put their voices into this tech, and then you can type whatever you'd like and get a replication that sounds exactly like a person's voice saying things they've never said. And there have been conversations around ChatGPT recently as well, that if you ask ChatGPT or any of these large language models about a particular person, so ask them questions about a politician.


Suzie: [00:06:02] You can do it about yourself, if there's stuff about you on the internet, and it creates text about a person that's not always accurate, and there's some confusion around the accuracy of the content that ChatGPT is creating. And we were asked to think about some of the challenges and opportunities that come from this technology. And so, on the challenges, there are two sides to the challenges with this issue. The first is that people believe fake content, so some content will be realistic enough that when it's shared on the internet, people will think that it's actually a politician doing something. It'll change how someone votes. It'll change how someone thinks about a particular politician. And so there's actually the risk of this misinformation, disinformation being believed. But really, one of the larger issues that I think is coming up, and is going to be one of the trickier ones to deal with, is how it dilutes our trust in images and videos, where we used to be able to have a stronger sense of trust in images and videos and what we were reading. And in the same way that we've always had to be critical of written media, I think we're going to have to be hyper vigilant about looking at videos and images now and not necessarily trusting that they're real.


Suzie: [00:07:08] And so that creates a challenge. And a lot of the conversation on AI-generated content about politicians has been quite negative, about how it can cause a lot of harms, how it can disrupt politics, impact elections, et cetera, et cetera. But there are also quite a lot of opportunities in using this type of content. There's a great report out by Witness called Just Joking, and it talks about these two sides: where it's clear that content is harmful, where it's clear that content is satirical or a parody or political commentary, and then this kind of murky grey area in between. But a lot of artists and activists are using AI-generated content in a way to bring attention to issues. Even when they're mimicking our political leaders, they're doing it in a way that might be more obvious; it's clear that it's parody. And they might be using it to bring attention to environmental issues, to issues around sexual violence. And so there are really creative and positive uses of this type of AI.


Elizabeth: [00:08:03] Thank you so much. Let's go to Wendy W now.


Wendy W: [00:08:07] Okay. So I think, you know, we have a really good opportunity to talk right now, at this point in history, about what technologies are falling under this umbrella we call AI. I think right now there's a lot of—a swell of public interest/fascination/horror. But, right, I don't think it's clear that we actually know what we mean when we say AI, especially in public discourse. And I think we're at the point where we actually need to make better sense of what it is we're talking about. So some of the things that I think we can label as AI are things like the outputs we're seeing from the large language models that Suzie just referred to, for example. These outputs should give us pause in terms of thinking about not just the text and the errors, but also how the AI analyzes, creates and pushes that content based on data that has been created by us. Or, to think about it differently, what about facial recognition technologies? So these are technologies that take data from our faces to match us up with other faces, to find out whether someone is who we think they are or not. And that type of technology also uses AI. Or we can think about AI chatbots, not just ChatGPT, but these chatbots out there that are used to mimic people. So companies like Replika, for example, which creates synthetic people to converse with, or bots that use data from people to, quote, "recreate them" as forms of interaction for others in the world.


Wendy W: [00:09:42] And so these are all examples of where we could say AI is being used. And I think that it's really important to know the differences and the sort of concerns at play when it comes to all these different types of AI, because they all touch on various ways of how we exercise our fundamental rights as human beings, our human rights. And I hope to talk a little bit more about that today. A lot of times you hear about these challenges to our rights as, quote, "harms", and that these harms need to be curtailed. But I think we actually need to think about how AI fundamentally changes our human experiences. And so it's not just about harm, but it's about our lives more generally. And so that's why I think at this very moment, actually, one of the things that we need to think about with regard to the politics around AI is to encourage and create frameworks for thinking about the importance of digital literacy for democratic participation. And I wrote about this a few weeks ago in The Globe and Mail with my colleague Valérie Kindarji. And one of the things that we raised is the question of, what do we need to know, as the public, in order to decide what kinds of automation or machine learning or AI we're comfortable with as a society?


Elizabeth: [00:11:00] Thank you. Let's go to Fenwick next.


Fenwick: [00:11:03] Yeah, a lot of my work is interested in this kind of intersection between AI and governance, and I think we can look at it on two sides. One is looking at all the ways that artificial intelligence is now being proposed as a solution to many different facets of our lives. And here I'd widen it out from political campaigns to thinking about this as being a solution to many of our policy problems. So some of my work with my colleague, Dr. Sophie Toupin, has kind of emphasized how AI is being proposed as a solution to Canada's broken immigration system. And the way that that system, and the complexity of that system, is really a site of automation points to how AI is with us presently as a tool to fix things. This is also something that political campaigns may or may not have to be dealing with, as with past iterations of Canada's online harms legislation. Again, a shout-out to Wendy for emphasizing the significance of harms as a particularly important framing device. Online harms is another place where there's been a push to have AI content moderation tools, ways of artificial intelligence being able to detect and stop problematic speech, as well as some of my past work, which has been at the CRTC, looking at AI being used to block fraudulent calls, which is a mandate we're living through.


Fenwick: [00:12:22] And I say that because it's important to pay attention to the ways that artificial intelligence is changing how governments and politics work, as a really active tool being used now to kind of automate many of our regulatory decisions. What I wanted to emphasize, and what's really sticking with me right now, is some of the other work that I'm doing with Dr. Toupin and Maurice Jones around consultations in Canada. A lot of the consultations we've had around artificial intelligence are all being done through text and really very cursory consultation engagement tools, and I think missing that point of literacy that's just been mentioned. And that, I think, is really one part that's staying with me for this panel: just the fact that so much of how we think about the business of government is really text-based and limited in scope. That really makes it a prime site for automation and for manipulated submissions. And so to me, the part of this I want to stress at the end is just how little capacity our democratic institutions have and how little they've invested in public consultation, making it, I think, a ripe target for potential misuse of the synthetic content that my co-panelists have discussed.


Elizabeth: [00:13:31] Thank you. Samantha.


Samantha: [00:13:32] Awesome. Yeah. So a lot of my work has looked at influence operations and foreign influence operations and the way that bad actors use bots or computational propaganda to influence elections and citizens around the world. If you drill down to the nitty gritty of these campaigns, most of them are not very sophisticated. They make use of what a lot of information operation researchers call things like "copypasta", where it's just text that's kind of copied and pasted over and over to sort of amplify a certain kind of message or narrative and try to reach as many eyeballs as possible. And this can be done manually or through the use of what we sometimes call bots, which are automated pieces of code that are designed to mimic human behavior. So they'll often share or retweet this message over and over. And it's really easy to detect this kind of behavior, and platforms have a lot of tools at their disposal to be able to identify bad actors who are using these kinds of less sophisticated techniques. But when we think about this now in the context of AI, there's a lot of questions around how AI can make these kinds of faking behaviors and narratives a lot better. So taking GPT-3, for example, and the way that large language models are changing text-based propaganda: it's not so much about just copying and pasting text anymore if you can have a language model produce very different iterations of a message with the same core meaning. This can help reduce the cost of generating and disseminating propaganda and mis- and disinformation, and it allows more entrants into the space.
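As a rough illustration of the contrast described above (a hypothetical sketch, not any platform's actual detection system), the snippet below flags exact-copy "copypasta" simply by hashing normalized text across accounts; an LLM-paraphrased variant with the same core meaning slips past this kind of exact-match check. The account names, messages and threshold are invented.

```python
# Hypothetical sketch: flagging exact-copy "copypasta" versus a paraphrased variant.
from collections import defaultdict
import hashlib

posts = [
    ("acct_1", "Candidate X will CANCEL your pension. Share this now!"),
    ("acct_2", "Candidate X will CANCEL your pension. Share this now!"),
    ("acct_3", "Candidate X will  cancel your pension. Share this now!"),
    # Same core meaning, different wording: evades the exact-match check below.
    ("acct_4", "Heads up: pensions are on the chopping block if Candidate X wins."),
]

def normalize(text: str) -> str:
    """Lowercase and collapse whitespace so trivial edits don't hide a copy."""
    return " ".join(text.lower().split())

copies = defaultdict(list)
for account, text in posts:
    digest = hashlib.sha256(normalize(text).encode()).hexdigest()
    copies[digest].append(account)

for accounts in copies.values():
    if len(accounts) >= 3:  # crude coordination threshold, purely illustrative
        print("Possible copypasta amplification by:", accounts)
```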


Samantha: [00:15:23] We can also think about this in the context of some of what Suzie was talking about around image generation. A lot of information operations in the past, you know, they'd have to steal photos, but now we've seen a lot of fake accounts use generative adversarial networks, or GANs, to create a fake profile picture of a person that is not real but might look like a member of a community that it's trying to influence. And this can be used to help reach the right people with the right message and make detection by everyday people a lot harder. So when I'm thinking about the challenges and opportunities here, you know, I think the issues of trust that Suzie mentioned are really, really important, but also issues around detection and how we're going to build out tools and techniques to identify these campaigns. You know, questions around whose responsibility it is, when do we intervene, at what point do we intervene? What are even appropriate intervention strategies? Do we label this kind of content, these kinds of accounts? I think there's a lot of unanswered questions here. So these are the issues that I'm thinking a lot about.


Elizabeth: [00:16:43] Amazing. Thank you. And Wendy Chun.


Wendy C: [00:16:45] So what we've been most concerned about is the ways in which technical defaults actually carry with them social and political valences and assumptions. So how all these systems, regardless of whether they're proprietary or not, perpetuate certain notions about the relationship between the past and the future, as well as segregation. So if you think of how these programs are trained, they're trained on past data, right? And so many studies have shown that if the past is biased, what it will produce is biased, because that's the ground truth. But what's also key is that they're validated as correct only if they predict the past, right? They're actually not tested on their ability to predict the future, but rather a past data set that's been put aside. So what this means is that the future must repeat the past within certain standard deviations. So it's a closure of the future that's, in essence, part of all these systems. And this shouldn't surprise us, because a lot of the techniques, you know, linear and logistic regression, correlation, et cetera, which are so key, actually emerged from 20th century eugenics, where the idea was to be able to figure out what didn't change so you could create a certain form of future. Another thing that's key is that a lot of these systems, if you think about how unsupervised systems cluster items, or you think about recommendation systems, et cetera, are built on the notion of homophily, which is the idea that similarity breeds connection, right? So all these systems presume that you should be grouped with people who are, quote unquote, "like you". So what this means, in essence, is that polarization isn't an error, but rather a goal.
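A minimal sketch of the validation loop described above, using synthetic data and assuming numpy and scikit-learn are available: the model is trained on one slice of past data and judged "correct" by how well it predicts a held-out slice of that same past, so a high score only certifies that it reproduces past patterns, including any bias encoded in them.

```python
# Minimal sketch of train-on-the-past, validate-on-the-past (synthetic data;
# assumes numpy and scikit-learn are installed).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))                 # stand-in for historical records
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # a pattern baked into that past

# Both the training set and the "test" set are slices of the same past data.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

model = LogisticRegression().fit(X_train, y_train)
print("Accuracy on held-out past data:", accuracy_score(y_test, model.predict(X_test)))
# The model is declared good because it predicts the held-out past,
# not because anyone has checked it against a future that might differ.
```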


Wendy C: [00:18:34] And what's key in choosing these likes that define these clusters is that they go for the most divisive. So it's not the most popular, but the thing that divides populations most clearly. And the very notion of homophily actually emerges from mid 20th century studies of US residential segregation. And so there's a way in which the logic of segregation is baked into these systems. Now, having said this, what I find productive or interesting about these systems is if we use them against their grain. So think through some of these programs which have been shown to be discriminatory, such as COMPAS, which is the system used to predict recidivism risk in some courts in the US, and which has been shown to be biased against certain racial minorities. There's a way to take their predictions and to say, ah-ha! What it's revealing is what will happen in the future if we don't change our current practices. And the example I always think of is global climate change models. When a global climate change model tells us the most likely future based on our past actions, that's a moment for intervention. We don't just accept it and say, okay, this is the way it is; we say, let's change it. And we try to change the world, not the model. We don't think it's a mistake with the model. And so I think if we think through the limitations but also the possibilities of these together, then we can come up with different ways to understand politics and interventions and possibilities.
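And a minimal sketch of the homophily assumption itself, with invented interest vectors: each user is grouped with whoever looks most similar in the data ("similarity breeds connection"), and nothing in the logic asks whether that grouping is desirable.

```python
# Minimal sketch of homophily-based grouping (invented interest vectors; assumes numpy).
import numpy as np

users = {
    "a": np.array([1.0, 0.9, 0.0, 0.1]),  # columns = engagement with four topics
    "b": np.array([0.9, 1.0, 0.1, 0.0]),
    "c": np.array([0.0, 0.1, 1.0, 0.9]),
    "d": np.array([0.1, 0.0, 0.9, 1.0]),
}

def cosine(u, v):
    """Cosine similarity: 1.0 means identical direction, 0.0 means unrelated."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# Pair each user with their most similar other user: "similarity breeds connection."
for name, vec in users.items():
    match = max((other for other in users if other != name),
                key=lambda other: cosine(vec, users[other]))
    print(name, "is grouped with", match)
```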


Elizabeth: [00:20:10] Thank you so much. There is so much here to get through. There are so many intersecting points across all of your ideas. We aren't going to have enough time, but let's do our best. I think now would be a good point to spend a little time thinking about election campaigns because they're this moment when political uses come to light. There's a lot more attention on potential political uses from journalistic coverage to everyday citizens' experiences. So, Fen, maybe you can kick us off here with just a super, super brief history of how kind of AI has been used, kind of going back to some of the data uses that you were talking about in your intro. And then perhaps Wendy Wong you can hop in on that data front also, but we'll start with Fen.


Fenwick: [00:21:02] Yeah.


Fenwick: [00:21:03] I'll begin by saying that the book project and I've been looking at it, um, the use of computers in American politics since at least the 1960s. I think it's really important to be mindful about how much our visions of how politics have worked is so now totally intertwined with computers. And so if there's one part of this, it helps to kind of unpack, is that we are so convinced that politics is computational. When new computer technology comes about, we think of it as a very radical disruption of how politics works. And if there's one insight, then it's how we can use or try to think about what are the ways that we're kind of bounded in by thinking that politics does or doesn't work with computers. And I think to, you know, to Dr. Chen's argument, I think part of this is really about that kind of politics of the future and how do we think about the consequences of computers in thinking about our political future. And I think that it's not that I have an answer, that's part of the project, but it's being intentional that there has been, you know, a promise that computers are going to disrupt politics since the 1960s. So one of the tensions we live through is that for 70 years there's been a pitch of selling technologies as a disruptive change in computing and—that—or in politics. And that's something I just want to stress. And I think part of what to be mindful of this is that that becomes part of a skill because if that's been the case, then partly what we're doing here is being able to call out things that are just sales pitches, malarkey, you know, the polite way of putting it, they are trying to sell people because it's so easily—being—be able to be conned into thinking that this new app or this new technology is going to radically disrupt politics.


Fenwick: [00:22:46] And that, I think, is one part of why I'm saying it's important to pay attention to how long this has been going on, because for so many of these applications it's really hard to tell whether this is actually genuinely new or just the latest iteration of innovations that have happened, you know, if you're talking about targeting or micro-targeting, since at least the 1970s. I will say that with campaigns, we can think about how they work in two ways, and I'll be brief here: there's stuff that campaigns are affected by, and there's stuff that campaigns are trying to do themselves. And so when we're talking about political campaigning, artificial intelligence and generative AI have real implications for online advertising. And campaigns are going to be affected by shifts in how they can use generative AI in their targeting, as well as who they're trying to connect with, as well as just how that industry is going to shift. And so in one sense, I think the changing nature of online advertising, because that's really the value proposition for so much of ChatGPT, is just going to generate more low-quality content for advertising, which is, I think, an important part of saying what's disruptive about it. But then that also says that campaigns are going to have to navigate that, and that's going to have impacts where we see, you know, areas that Elizabeth you've identified, grey areas of advertising, influencers, bots. Those types of advertising are also going to be trends that campaigns are going to have to navigate.


Fenwick: [00:24:05] On the inverse side of this, campaigns do things. And one of the things they use data for is to make better decisions. What they're trying to do is maximize their spend to make sure they get more voters and more money. And so one of the ways that AI is already potentially going to have an impact, although it's hard to know for sure, is through the data analysis that all campaigns are doing, about weighing their ridings, making decisions about who's a strategic voter or not, or what message is going to be tested. Artificial intelligence is going to be an important decision-making tool. And firms like Blue State Digital have already promised using artificial intelligence personas to test messaging. And certainly that internal change really puts attention on the ways that campaigns are using data, have used data, to make decisions about voters, and how that might be impacted by artificial intelligence.
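As a purely hypothetical sketch of the kind of data-driven triage described above, the snippet below ranks ridings by how tight the predicted margin is and pushes a limited ad budget into the closest races. The riding names, margins and decision rule are invented; real campaign models are far more involved.

```python
# Hypothetical sketch of riding-level triage: spend where the race is tightest.
ridings = [
    {"name": "Riding A", "predicted_margin": 0.02},   # projected lead as a vote share
    {"name": "Riding B", "predicted_margin": 0.18},
    {"name": "Riding C", "predicted_margin": -0.01},  # negative = currently trailing
    {"name": "Riding D", "predicted_margin": 0.35},
]

budget = 100_000  # total ad spend available
# Rank by closeness of the race and keep the two tightest as "battlegrounds".
battlegrounds = sorted(ridings, key=lambda r: abs(r["predicted_margin"]))[:2]

for riding in battlegrounds:
    allocation = budget / len(battlegrounds)  # split evenly; safe seats get nothing
    print(riding["name"], "gets", allocation)
```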


Elizabeth: [00:24:54] That was fantastic, Fenwick. Thank you so much. And yeah, I mean, so Fenwick and I, back in the 2015 federal election in Canada, looked at political bots in campaigns. Sam has already defined bot for us, which is really helpful, but it's this sort of continual practice and experimentation that I think is really interesting about campaigns. And then when you pair it with this very, very long history of relying on data about people to make choices, it really paints an important, interesting picture, because it's not fully disruptive, all of a sudden we have a new campaign approach. It's that AI is being used to change the existing habits, and there's this history we need to rely on. So, yeah, thank you so much, Fenwick, for that. That's really helpful. One of the things you were saying made me think about the idea of, like, the promise of AI versus what AI can actually do. And it makes me think about the Cambridge Analytica case and all of this, like, the fear of what the company said they could do versus what they actually did. And it took a long time to tease out what the actual implications were. And sometimes we aren't even going to know the actual implications. But kind of the—the potential and the myth of it—has a political impact, too, just the idea that it could be happening. And so I wanted to actually hand this over to Samantha for a minute, because I know you've thought a lot about the idea of, like, foreign interference and the potential for different kinds of actors to use computational approaches. And sometimes we see them play out and sometimes they don't play out, but they still have had a political impact just by virtue of potentially being there. Can you speak to that a little more?


Samantha: [00:26:39] Yeah. Um, you know, I think what's really interesting about this potential versus real impact, when you introduce the idea of AI to this question, is that we're already starting to see research showing that a lot of these generative language models can produce propaganda messages that are just as believable or persuasive as the campaigns and messages that we've seen in the past. So there's some great research being done. One of my former colleagues, Josh Goldstein, ran a bunch of experiments where he showed participants messages from real propaganda campaigns and then generated similar messages using these large language models. And they were, most of the time, just as persuasive, or sometimes even more persuasive, than the propaganda that was actually being produced. And I think this has really important implications when we're thinking about AI in the context of information operations and politics and the future of our democracy. Because, you know, when messages can become more credible, it can have a much greater impact than what we're seeing right now with just kind of bots pushing messages out at scale. And, you know, coming back to the idea that it's also very much less discoverable than in the past: if messages are highly tailored, highly personalized and being generated by these large language models, how do we go about detecting that? Because it's not like an image or like a GAN, where you can look for technical indicators that something has been artificially generated or artificially created. It's a little bit harder with text. And, you know, I know there are methods and people are developing new technologies to go about this, but there are just different challenges, I think, that we need to consider when we're thinking about the future of AI and foreign influence operations.


Elizabeth: [00:28:44] Thank you. I wonder, Suzie, do you want to hop in on this? I know you've been doing a lot that starts to connect here.


Suzie: [00:28:51] Yeah, well, as the last two speakers were speaking I was also thinking about, like, when do we regulate this, right? Like, when are the moments of regulation, and looking at some of the trends that are popping up around the world. There have been some laws introduced in places like California where, within 60 days of an election, you're not allowed to use deceptive deepfakes, but outside of that period it might be appropriate for deepfakes to be used in a political context. And here in Canada, we have Section 91 of the Canada Elections Act that says during elections you're not allowed to have false content stated about someone, which has been challenged in some lower courts in Canada. But it's interesting to see when we are regulating this type of content, both within the legal framework and also within the social media context. Because when I think about voice cloning, a lot of the voice cloning companies, the legitimate ones, generally say, like, only "you" can recreate your voice. You know, you're not allowed to do anyone else's voice. And so there are some barriers that come to how they're actually able to create AI. But you don't see the same trends in image creation. You don't see the same trends in video creation, saying the only person that you're allowed to make fake content about is yourself. So I think there are ways as well where companies can create regulation on this. And the rules that we see in places like Twitter and TikTok and Facebook generally say, you know, you shouldn't use synthetic media that can cause extreme harm, right? But what's that definition of what harm can be? Because there's such a wide variety of it.


Suzie: [00:30:22] When you think about some of the fake images that have popped up recently, like Trump being arrested, right: there was that British journalist Eliot Higgins, and, you know, Trump was stating that he was going to be arrested, there was a lot of hype around this, and then these fake images were released, which can really easily kind of get picked up by conspiracy theorists or other people who are kind of hoping for this type of action. You know, and there's this boundary of when should we allow these types of images to be created, because they can cause some chaos. You know, there are less harmful images, like the Pope in the Balenciaga jacket, right? Which also has, you know, kind of unusual political impacts, but is generally, generally quite harmless. So there's this question as well, too, of what actually crosses over into what we think should be limited. And I think, you know, to date, when we think about what some of the boundaries are, political figures are often excluded from certain protections in law when it comes to manipulation of their images and manipulation of things that are said about them. But as we're heading more into this generative AI and these manipulated videos and contexts, like, we need to think about what are the boundaries of that? And then the challenges around detection. You know, the examples that we've seen of deepfakes so far haven't actually been that effective. When we think about the Ukrainian president's, um, deepfake that came out that said …


Suzie: [00:31:38] … Ukrainian troops should surrender. That was debunked fairly quickly. So we haven't seen many examples yet that have had true, true impact. But I think we're going to be struggling for the next few years on how to define what we think is harmful, how we regulate it, how social media companies regulate it, and what safeguards we need to put in place in order to allow creative expression through these various forms of AI. There's been a lot of talk lately about banning certain forms of AI, banning facial recognition, banning development in AI. There was that letter that came out a week or two ago about, you know, should we actually cease development on this type of AI. And I think that there's a real question on what the actual harm is that we need to think deeply about. And there are some examples that are clearly harmful. Like, a lot of my research is around sexual deepfakes, and I think those ones are pretty easy to make rules around, you know, like we shouldn't allow this type of technology to be used against people like Rana Ayyub, who is an Indian journalist who was critiquing the Indian government and then had sexual deepfakes used to discredit her. But I'm really keeping an eye on this conversation of what is harm, what crosses the line, and when do we regulate and when do we create safeguards that allow for some protections around some of these potential harms that we don't actually know what they are just yet. A lot of it is speculative.


Elizabeth: [00:33:02] Thank you. I really appreciate that. And, you know, from the election perspective, too, there's things like you can't tell somebody to go to the wrong polling station or that the Election Day has changed or that you can text in your vote. Those kinds of things. There are some things where it's like, yes, obviously this is something that shouldn't be allowed. But you're right, there's this murky area. Harm is really difficult to define. I thought maybe Wendy Wong, you might want to hop in on this one because you had been talking about this idea of harm and its connection to human rights.


Wendy W: [00:33:38] Yeah. So this has been such a fascinating conversation, because I think that the other speakers are really addressing this idea of the choice to automate, right? The choice to use AI. And what this says, I think, about the way we think about, you know, our social and political interactions, and the need for automation and to use AI technologies to either win, you know, get benefits in an electoral process, or to process information that we're getting, or to create some of these, you know, deepfakes and other automated outputs that we're seeing out there that are really kind of raising questions about what we're seeing and how real it is. And so, you know, I want to sort of think about how, you know, the stuff that Suzie and Samantha are talking about in particular is really shaping how we learn and what we know, right? That's really the problem. And I think the harms are often talked about very vaguely, as others have referred to. And I think that's actually problematic, because there are fundamental rights that are being curtailed, that are being, you know, handicapped and hurt because we don't have access to trusted or credible information. Right. So not only just freedom of expression or the idea that we should all have freedom of conscience, like we can think about how that might be curtailed if what we're exposed to is simply not authentic or made up or fabricated. But then we can think about other ways that our rights can be affected.


Wendy W: [00:35:10] So, like, the freedom of assembly: if you don't know what's going on, it's very hard to resist, or to know what you believe in, or to find others who think like you, or to discuss things in a way that's so fundamental to our democracy. I also think, as you alluded to, Elizabeth, you know, the right to vote, right? I mean, it's not just being told the wrong polling place. It's actually being an informed citizen and the ability to know what is the right choice for you as an individual. You know, I think we can go on; we can think about how this affects our rights to education, again, what kinds of information we're exposed to, what kind of literacy we actually have. And I think this is the kind of harm we should be thinking about. We should be thinking about grounding this idea not just in short-term harms that a lot of people have already pointed out, such as racial bias and discrimination. Those are for sure harms, and we can see those are human rights violations. But thinking about, more long term, how information quality actually affects the way we see ourselves as individuals, our societies, and therefore how we engage each other, how we exercise our rights. You know, the things we read and see and hear all affect our perceptions of our world. So how can we, especially in a democracy like Canada, be effective and engaged citizens if we cannot really be sure of what we're looking at and seeing?


Elizabeth: [00:36:34] Thank you. And I was seeing lots of nods from the other panelists, too, on this front. It's super tricky, because the appeal of these more specific, kind of smaller potential harms that are being addressed is that they can be well-bounded, or well-bounded-ish. Even the well-bounded ones get a bit unruly pretty quickly. But then when we go to this higher level and think about it in terms of how we see ourselves and how that plays into democracy, it gets really difficult from a regulatory perspective and a legal perspective. I wonder if anybody kind of wants to hop in on that idea. Where do we go, recognizing that there are these really wide-reaching things we need to think through, and potential for harm, but also potential for opportunity and good? Right to the, like, okay, but what do we do in the nitty gritty of making sure that the public good is preserved?


Suzie: [00:37:40] Well, I think one thing, to build a bit on what Wendy was saying as well: I think there's a lot of hype in this whole, you know, AI is going to kill the world, and I think that there's, like, this real fear around AI. And sometimes the harms that we're thinking about are really exaggerated. And when we're thinking really far down the road, we're thinking, oh, is AI going to learn how to reprogram itself and shut down all of our electrical grid systems? And, you know, we won't even have elections. And I think that there's some of that narrative that's going on. But really, I think what Wendy is talking about, like, we already know a lot of the harms of AI. Like, especially what Wendy Chun was saying around this idea of, by relying on AI to create the reality of our world, we're just replicating all of these problems and we're trapping ourselves in the past, and we're trapping ourselves in racism, we're trapping ourselves in sexism, we're trapping ourselves in these same kind of human rights violations that we've seen over and over and over. And so I think when we're thinking about the focus of what we need to be really drilling down on, I think that there are these really clear examples that we've seen about the way that AI is really causing discriminatory bias against equality-seeking groups, and, like, maybe focusing in on some of these issues that are really important.


Suzie: [00:38:53] And I think it's trickier around the things like when we think about the way that AI is used to create memes and jokes and things, especially with a lot of these alt-right groups, right, that are used to kind of dehumanize racialized groups, that are used to dehumanize women, that, you know, then legitimize discrimination against them. Like, those are more difficult in terms of figuring out how to regulate. But when we're thinking about AI regulation and some of the focus of the conversations that we need to have, I think that there are some really concrete ideas that we can focus on now, rather than thinking of the Terminator example, but, like, looking at the ways that these biases are replicated and what we can actually do. Because there's this mythology around AI that there's no way that we can fix any of these problems, there's no way that we can, you know, have requirements for people who are creating AI to at the very least have, you know, things like equality and safety baked into the processes that they're going through as they're creating these technologies. Like, there might be some space for regulating the profession of people who are creating this type of technology as a way to at least have them prove that they're taking these things into consideration in a way that clearly, right now, they're not. And it's leading to these really discriminatory harms.


Samantha: [00:40:05] And I can jump in here as well, kind of just building off of what Suzie said too, thinking about the profession. If you look at the people who are designing a lot of these systems, they tend to be white male engineers based in Silicon Valley. And so there isn't a lot of representation for the rest of the world, even though a lot of these technologies are impacting all kinds of different people in very diverse ways. So how do we meaningfully bring different and alternative voices into the design process? Because it's not just a matter of hiring more female engineers or hiring more Black engineers; it's also giving them meaningful positions within companies to actually make a difference and bring their lived experience to the table, so that we can start to eliminate some of the bias and just have other voices and other ideas going into the design processes of these systems that are increasingly having huge impacts on our lives and on politics.


Wendy C: [00:41:08] And as well, I think what we also need is a fundamental rethinking of the methodology. So if we think through all the systems that we're talking about, what they rely on is wide-based surveillance. So the tracking of users' clicks, mouse clicks, et cetera. There's a logic of surveillance that's built into this, and also into the whole notion, which comes too late, of the right to take stuff down, right? So the problem with the right to be deleted is that we need to talk about the right not to be tracked in the first place. And what that means is rethinking fundamentally not only privacy laws, but laws about us being in public. Because the problem with the way, a lot of the ways, people are talking about privacy is that "let's just—if social media companies just didn't share their data with anyone else, it would be okay"—which is completely wrong. And it's giving this notion of privacy to an enclosure that we need to question. And a lot of the things that we're talking about in terms of synthetic images, et cetera, those come from public databases, or the notion that as soon as you post something, you're in public, and once you're in public, you give up your rights. And so I think, especially given the ways in which our images and things are being treated as public, so we become, in essence, celebrities, et cetera, and are treated as such, we need to think through more rigorous ways of thinking about being in public. Because what's so important to think about in terms of social media, or the social, is that it deliberately mixes the public and the private. And we need to be able to deal, in terms of rights and our thinking, with these systems which deliberately mix these, and not think only in terms of older versions of privacy that don't work, because there's all sorts of inferred data, et cetera. But also that other part, which is: there's public engagement that's part of democracy, and how can we protect this public engagement?


Wendy W: [00:43:05] I think what you just said was so interesting, Wendy, because, you know, I think we actually are thinking about the same problem, but I think about it not in terms of changing our ideas of the public, but in changing the norms around collecting data around our activities, around our behaviours. And this is, you know, I study a lot, or I think a lot, about data and the datafication of human life. And the way it relates to AI, of course, is because AI only works if there are a lot of data lying around to be sorted and pooled, right? AI does not work in the absence of this enormous quantity of data about human behaviour. So I love what you just said, Wendy, about the changing ideas of the public. I also think we need to change our ideas around whether it's okay to collect data around every single activity and thought that we have now. And a lot of times people talk about the use of data, right? Can data be shared? In my mind, it has fundamentally changed the human experience if you've collected the data, and whether it can be shared or not is almost irrelevant, because once data are collected, they're as good as forever. I mean, there's really no way we can verify where the data go, whether they've been deleted, or who's using the data being collected.


Wendy W: [00:44:17] So in that sense, you know, thinking about datafication and its link to AI, the datafication of our society on the one hand has enabled fantastic technologies we carry around in our pockets all the time. They facilitate democracy, right? They facilitate some of these things we're talking about. But the negative side is that there's so much information about our daily minutiae out there, and this is a fundamental shift in human existence. We just have not had this before. And I think right now regulation needs to grapple with this, not just through consent. There's no way all of us can consent to the collection of data. I mean, we just want to get it out of the way so we can do our transaction and move on. What we need to do is rethink this level of individual consent. And also, I think governments really need to be aware of the collective effects of data. You know, I think, Wendy, you started talking about inferred data. I mean, this is all about the grouping, as you referred to before, right? How are we getting grouped together? These groupings matter because they're not conscious groupings that we made. They're algorithmically made. That is a very important political and social difference, one that checking "I agree" does not address in the least. And still, I think, all the regulatory mechanisms out there fundamentally believe in the individual level of consent.


Fenwick: [00:45:33] And I think this is also a good point to just interject on why we're at a really important crossroads. Before the House [of Commons] right now is—was it C-27? I always forget the numbers of bills here. It's a privacy reform bill, which also includes an Artificial Intelligence and Data Act (AIDA). And really there are certain structural facets of that bill which are out of step with what's being discussed here. There's a real desire to bake in anonymized and de-identified data as something that doesn't have the same consent obligations, so really raising this concern that it's going to incentivize mass data collection so long as it doesn't involve individual personal information, which really is not getting at, I think, the theme that's coming up here about how to be responsible for that. As well, I want to say that part of that bill, just to get back to AI politics, doesn't actually bring about what's been talked about for years, which is privacy rights or privacy expectations for political parties. Which is something that's been advocated for years, seems like an easy fix, but it's not there. So one of the parts I just want to flag for listeners is that this is a really important moment to pay attention, because these exact debates are ones that are going to be coming up as C-27 and AIDA pass through the House. And there's an opportunity to get engaged, to push for more attention to collective rights, more attention to, I think, risks and consequences of inference, as well as obligations for parties about how they collect data, which I've now mentioned two times.


Elizabeth: [00:46:59] Thank you, Fen. That's super helpful. And just last week in the budget there was an announcement that, okay, we've got to deal with the fact that political parties don't really have to adhere to existing privacy legislation. And so, yeah, you're right. The time is now. The time is also running out on this call. We have had such a good conversation. There's so much more we could go through, but we've only got a few minutes left. So what I'm going to do is ask everybody to wrap up with 30 seconds to a minute each, kind of looking forward to what you expect in the next election in Canada. What do you think is next, when we are thinking about the kinds of ways that AI might be used in election times? And let's start with Wendy Chun.


Wendy C: [00:47:49] Oh, just quickly, in terms of inferred data, Teresa Scassa has done amazing work around that, as have many others. In terms of what I expect for the next elections: increased divisive issues, a sort of scurrying through to figure out those little niche things that create angry clusters and putting them all together. So a proliferation of micro divisions and a linking of them all together in order to form majorities of anger.


Elizabeth: [00:48:20] Thank you. Let's go to Suzie next.


Suzie: [00:48:23] I think in upcoming elections, as part of election campaigns and staffing, they're probably going to end up having, like, debunkers who are just going to be scouring the Internet for fake content, scouring the Internet for fake videos and fake voices. And so I think that there's going to be a lot of interest in the type of tech that's available to detect the difference between real and fake videos, and that there are going to have to be clear strategies on how political parties are going to be able to prove what content is real and what content is fake.


Elizabeth: [00:48:53] Thank you, Samantha.


Samantha: [00:48:56] Yeah, I think for me my answer is kind of a combination of the two responses we've just heard. I think a lot of the disinformation or misinformation that we're going to see might be generated through AI-based applications. And I think we're also going to see platforms updating their policies around synthetic media, and maybe other kinds of policies around AI generation, in the lead-up to elections.


Elizabeth: [00:49:26] Thank you. Fenwick.


Fenwick: [00:49:27] I'll offer a test and a prediction. I think an important test will be whether parties advertise that they're using artificial intelligence as part of their war room showcasing. So whether we've entered a moment where AI really has swung from something that's cool to something that we're worried about, we'll see that in how parties connect their campaigning efforts with artificial intelligence. The prediction picks up one thread from when the open letter was mentioned: that's a particular type of politics in AI, where many researchers really believe in long-term views about artificial intelligence as an existential, disruptive threat, prioritized over other issues like the climate crisis and climate emergency. And I think it'll be very interesting to see how AI is framed as a policy issue: whether we'll see these kinds of very tangible, clear, accepted issues of what we know the problems with AI are now, and whether that will get uptake, or whether we're going to be left constantly debating whether we live in the next version of Terminator.


Elizabeth: [00:50:31] Thank you. Wendy.


Wendy W: [00:50:32] Yeah, just continuing along with what Fenwick just said. You know what I hope they talk about? What I hope is that we talk more about how AI fits into the national strategy. There's a pan-Canadian strategy. We have some of the most prominent researchers in the world working in Canada. So AI is clearly important. I think it's time that we as data subjects become data stakeholders. And one of the things that I'm hoping the government brings into play is thinking about digital literacy in a very serious way, which is helping all of us decipher what the machine is doing and how we can change the terms of that coexistence. That is not inevitable, as a lot of us have already said today.


Elizabeth: [00:51:17] Thank you so much. Thank you to the entire panel. This has been a really fascinating discussion and there's so much more we could have covered. We didn't talk, for example, about the use of machine learning approaches to predict elections and replace traditional polling. We didn't talk about how there have been machine learning-based bots created to assess when harassment and hate speech are happening and try to counteract them. We didn't talk about a lot of the ways that these tools are being used to try to pull more people into democratic systems. You know, we spent a lot of time focusing on challenges and concerns and harms, but there's so much more. So in light of that, one of the things that we're planning on doing is building from this conversation today to create a report looking at uses of AI in Canadian politics, which will be coming out in the next few months.


Elizabeth: [00:52:11] Finally, I wanted to thank you all for attending today and listening and contributing your questions. And I want to thank the AI and Society initiative at the University of Ottawa for their support making this event and the subsequent report possible.


Elizabeth: [00:52:30] All right. That was our episode. Thank you so much for listening to this very special live recording episode. We talked about so many different things related to the uses of AI in politics and we're really excited about the report to come. If you'd like more information about any of our panelists or any of the resources and concepts we talked about today, as always, you can find links in our show notes or head over to polcommtech.ca for annotated transcripts, which are available in English and French. Thanks so much and have a great day.



