Wonks and War Rooms

Counter-speech as Content Moderation with Kesa White

April 03, 2024 Elizabeth Dubois Season 6 Episode 9

In this episode Elizabeth discusses the idea of counter-speech as content moderation with far-right extremism researcher Kesa White. Kesa describes her work on "dog-whistling," talks about how counter-speech can be helpful but doesn't solve the problem of hate speech online, and explains some of the challenges tech companies face with content moderation. Drawing on her own experience with hate speech, she emphasizes how important it is for us to keep being "in the know" about social media and what is being said.

We are doing a call-out for people who have had some kind of impact or have been helped by this podcast - we'd love to hear from you! Here is a Google form to fill out to help us track the impact of our podcast!

Additional Resources


Check out www.polcommtech.ca for annotated transcripts of this episode in English and French.

Elizabeth Dubois: [00:00:04] Welcome to Wonks and War Rooms, where political communication theory meets on-the-ground strategy. I'm your host, Elizabeth Dubois. I'm an Associate Professor and University Research Chair in Politics, Communication and Technology at the University of Ottawa. My pronouns are she/her. Today we're talking about counter-speech and content moderation with Kesa White. But before we get into the episode, I just wanted to do one more call-out. If you have been helped by this podcast in some way, had some sort of professional development, use it in your classroom or at work, or share episodes with others, we'd love to know. We're trying to track the impact and outcomes of this podcast in order to find additional funding and to be able to continue to support the pod, so you can check the show notes for a link to a very short Google form that asks a few questions, or feel free to email us at polcommtech@gmail.com. That's polcommtech@gmail.com, and just let us know how the pod has helped. All right, into the episode. Kesa, can you introduce yourself please?


Kesa White: [00:01:10] Yeah. Hi, my name is Kesa White. I work full time at a tech company, and on the side I'm also a far-right extremism researcher. I do a lot of work related to hate speech, content moderation, a lot of different groups, and how they use social media to kind of manipulate the system or even just spread their messages. And my pronouns are she/her/hers.


Elizabeth Dubois: [00:01:35] Awesome. Thank you so much for being here. Kesa, we have a lot to dive into today. We're talking about counter-speech as our kind of anchor for the conversation. We previously did an episode on content moderation, which will be linked in the show notes. But to start off, one definition of counter-speech from the academic literature is a "direct response or comment that counters the hateful or harmful speech that's present" [Consult: Thou Shalt Not Hate: Countering Online Hate Speech.] And in this case, we often talk about [it] on social media. And there are a bunch of different categories that some researchers have come up with, so I won't list them all. But they range from things like denouncing the hateful or dangerous speech, to responding with humour and sarcasm, to using really hostile language in response. And those different types of counter-speech then elicit different kinds of responses from the people posting the really negative stuff in the first place. So before we dive in any more, does that make sense? Is this definition one that you work with? Are there additions that you would add?


Kesa White: [00:02:40] Those definitions definitely do make sense, especially because counter-speech has become such a large term, depending on how you're talking about it, whether it be in tech companies, academia, government. So I think that really pretty much covers all the ground.


Elizabeth Dubois: [00:02:55] Awesome. So next up, let's talk about your experience with this idea of counter-speech. You mentioned being at a tech company. How do you think about counter-speech from that tech company perspective? What does that look like? How does that fit in? Obviously, a tech company can't control whether or not counter-speech is there, but they could incentivize it or disincentivize it.


Kesa White: [00:03:18] Yeah, because I wear multiple hats, I'll say, between the tech company and academia, I see both sides of it. And a lot of the time you'll see humor being used as one of those ways to mask that counter-speech a little bit. So it's one of those plausible deniability type of concepts, which makes it very hard to moderate, but then also just to understand what's exactly being said, because if you're not within that in-group, you're not necessarily understanding what they're talking about. Humor is really one of those big things that is used in a lot of these communities to kind of toe the line between being flagged for moderation and still being able to convey their message to their followers or their echo chambers, whoever it may be.


Elizabeth Dubois: [00:04:08] Yeah, that's super interesting. And we know that humor and sarcasm are particularly difficult to deal with when we're talking about any of these automated systems for content moderation, too. And so, for those who haven't listened to the content moderation episode yet, one of the things that happens when big platforms are trying to maintain a certain kind of information environment is they will have algorithmic approaches, or sometimes AI-driven approaches, to saying, "Okay, any content that looks like this is not allowed". So content that looks like hate speech is not allowed, content that looks like far-right extremist content is not allowed, and they have their own rules. And then an automated system does it. But counter-speech can often look very similar to the actual extremist or hateful speech. And then when you add in the layer of humor and sarcasm, how do you ever even figure out who is honestly and intentionally trying to make the space better and more welcoming versus who are the people who are there to hate?


Kesa White: [00:05:12] Yeah, it's a really difficult concept to pick apart, because you have those individuals that are using that humor and sarcasm (and memes are a really big part of this culture) versus the individuals that are making a joke of whatever has been said about them as kind of a coping mechanism. So it's really hard to pull those apart. And it's something that I think we, just as a whole, between academia and tech companies, need to do better at, because it's really hard to be plugged into these communities to begin with, to understand what's happening. But also you have to be able to sympathize with the individuals who are using humor as a way to cope with whatever's being said about them. So it really is hard to moderate that type of content, especially with automated content moderation versus human content moderation, and it makes it even more difficult when humor is able to get through whatever that AI detection model may be. If it's human-moderated, it's a little bit easier, but if you're someone who doesn't study these different types of topics, it might just be able to bypass you as well. So you really do need these individuals that are highly specialized in these areas, more than ever, especially with the elections coming up and changes in politics and things that are going on pretty much all around the world. So you do need these specialized individuals that study these topics on a daily basis, at a very toxic level, spending time online.


Elizabeth Dubois: [00:06:51] Right, which then leads into a whole other conversation about the mental health impacts and the impacts on the people doing all that moderating, which we'll leave to the side for the moment. But I want to dig into what you were saying there about knowing the community that you are trying to moderate really, really well. So I did a project, I called it my Mean Tweets project, looking at online harassment of political journalists during the 2019 federal election in Canada. And we had students working with our team, going through tweets and analyzing them to identify what was harassment and negativity and toxicity and hate and what wasn't. And even within that context, which is quite broad (all of Canadian politics, it's not a super niche community), we were trying to understand. Even there, there were particular acronyms being used, there were particular hashtags being used. There was this dog whistling, where people were saying something with a very specific vernacular that some people would just ignore, as, "That doesn't matter at all", and other people [will say], "Oh no, that is very clearly racist" or "That is very clearly sexist". And we figured that out because we had a bunch of different people all trying to do the annotation, all trying to identify whether it was negative or not. But, [as] you know, most tech companies don't have hundreds of people in all the different kinds of communities that could possibly exist on their platform doing this on a daily basis.


Kesa White: [00:08:22] I would say coming from academia to the tech world definitely worked to my advantage, because I was studying the topics that I'm working on now at the tech company before even coming here. So when you talked about dog whistles just a second ago, my project with BKC and the RSM fellowship specifically looked at dog whistles and different terms across social media platforms to see how they're evading content moderation [Consult: "Not All Superheros Wear Capes: Identity Triggers the Trolls"]. In the span of the fellowship, I collected about 700 to 800 different terms, how they're used on different social media platforms, and how they're still able to be used on these platforms despite the fact that they should be banned, or at least on some list of "You shouldn't even be able to use this". I looked at both mainstream and fringe platforms, and I found all of these different terms across all of them, and I'm thinking to myself, how are these able to be used?


Kesa White: [00:09:23] But then, going back to the conversation we're having now, if you're not a part of these communities, you don't necessarily know what you're looking at. Coming from that academic world, I'm able to see how these conversations are going, because I'm spending a lot of time on Telegram, the chans, and all these different dark places that nobody else wants to go, so I can see what these conversations look like and the different terms they're using. And then I in turn saw how those conversations were going, and used that information to go on to, say, Facebook, Instagram, TikTok, X, to see if any of those terms were being used on those social media platforms. And to my surprise, a lot of them were, which is really sad. But to the naked eye, you wouldn't necessarily know what's happening. You can't just ban a hot dog emoji, because to you it just looks like a hot dog, but to them it means something racist. It's really hard to moderate these different terms because a lot of the time they are just emojis or just little words that have those double meanings. But to them it's a racist, sexist term.


Elizabeth Dubois: [00:10:32] Yeah, that project, [we will] link to it in the show notes also, anything that you've got publicly available [about it]. I think it's fascinating and just so important to have that work done. And it's also important to recognize that that list changes over time. The emojis that are the current popular ones for doing that dog whistling are going to change as the current topics of the day change, or as content moderation approaches change. They start figuring out a way to deal with the hot dog emoji, then they're going to replace it with something else, which really makes it a tough cat-and-mouse game. What I find so interesting about the dog whistling work that you've done is it really highlights how important it is to know who is creating the content that is going out, because that hot dog example is really good. If it's coming from somebody who is known to be part of a particular extremist group, then there's a much higher chance that they are not using it to say, "I went to a baseball game and had a hot dog while I cheered on my favorite team". They're probably using it in the worst way, and that could help us understand better how to do this content moderation. But it also leads to a whole bunch of new problems, right? How much of this do you think needs to be dealt with by understanding more about the person making the posts, versus just the content itself?


Kesa White: [00:11:56] I think it's a mixture of both. Because you do have those, I guess I'll call them, extremist influencers who do have that wide reach and are able to reach a large mass of people, compared to just some low-level person that might just be hanging out on the "chan" boards. But a lot of the time these "chan" boards, for example, do get that large traction because of all these mass attacks and mass shootings that have stemmed from 4chan, for example. So we do know that people are looking at these different boards and are able to then amplify whatever term is being used. But then you have these extremist influencers, who have thousands or millions of followers, whatever it may be, and they have that opportunity to make those terms go viral in their world and for them to then be used across different places. So it's really important to know who is using these terms, versus what the term means. But a lot of the time you don't necessarily know who the person behind the screen is, because they are using VPNs, they are using code names, whatever it is. So it's really hard to track who that individual might be. You could, say, track their username across different social media platforms, but extremists are smart, unfortunately, which is what we probably don't want to hear, so they might change their username in some variation. It's just really important to take the terms with a grain of salt, whatever the terms may be: who's using it, and looking at the context behind [the term], whatever might be said, who the person is. Or it might just be someone who's joking about it because it's some kid who heard somebody else use it, and they won't necessarily be as much [of a] threat compared to the person [that] has millions of followers. But you still have to worry about that kid who thought it was niche and fun at the end of the day, because then, well, "Where did they see this term in the first place?"


Elizabeth Dubois: [00:13:57] Yeah. And what you're saying to me just really highlights how important the social relationships are in all of this. There's the level of, "This person has a large reach because they're an influencer with a ton of followers, so the chance for virality is higher". And sure, yes, that's true. But also, it sounds like part of the story you're telling is one of community, and the way that community can build around these terms and around being 'in the know'. So using these kinds of terms makes you feel part of this group, and the more you feel part of the group, now you're on the ladder in terms of the kinds of actions and steps you might be willing to take. And this idea of being on the ladder and taking more and more intense steps in a community is not strictly related to extremism or the far right. We see that in all kinds of social and political movements.


Kesa White: [00:14:52] Yeah, what you're saying is completely valid and definitely does make sense. And it's one of those ways that they do get individuals to kind of join in on the cause, to recruit them and things like that. It's almost similar to just using these terms because it's 'what the cool kids might be doing', and everyone wants to fit in in some way. So "I'm going to do it because I saw this person with a million followers doing it", or "I saw one of my friends doing it, so I think it's the cool thing to do". And then you see a lot of these people that have been in the movement for a long time will then find these kids, or whoever it may be, that are those low-level people on the bottom of the ladder who haven't made their way to the top. They'll go after them to feed them more information, to bring them into the fold, where now they've gone from down here on the ladder to getting closer and closer to the top just because they used this one term, this one dog-whistle. And now they're more in the fold, being invited to different encrypted chats, being invited to different events, whatever it may be, just because they used the hot dog emoji. And now they're kind of in the know.


Elizabeth Dubois: [00:16:03] Yeah, yeah. Mapping that all out and understanding that is really fascinating. And it connects with one of the findings from some of this counter-speech research, which shows that counter-speech, when it comes from somebody who's consistently been in that community or has recently left that community, is much more likely to change the community's reaction than when it's coming from an outsider [for more information on the findings, consult the chapter Counter-speech is everyone's responsibility in Regulating Free Speech in a Digital Age]. Right? So if somebody uses that emoji and then somebody from within the community [says], "Hey, that's racist. Maybe we shouldn't do that, the real problem is this thing over here", that's way more likely to work than if somebody who is perceived to be totally an outsider were to say the exact same thing. And that seems to me like it builds off of a lot of those community dynamics you were just talking about.


Kesa White: [00:16:52] Yeah, definitely. Because what I might see today might be different from tomorrow, because it is constantly changing, as you mentioned previously. That's why it's even more important to be in tune with these communities and [be] an expert in the context and what's being said across different social media platforms, even just reading the books and everything, because things are constantly changing, which makes it really hard for us to do our jobs. But it's really important that we have this understanding of what's happening on the ground, because it could be the difference between "this is going too extreme" and, at the end of the day, life and death.


Elizabeth Dubois: [00:17:33] Yeah. And because some of the content that gets shared really does have these immediate, impactful, real-world consequences. And we've seen that in places all over the world.


Kesa White: [00:17:45] Yeah, it doesn't take long for things to go viral at all. Like the Pepe meme: it meant one thing, and then it was co-opted into another thing, and it just evolves and evolves and evolves. And the Chad memes and the beta memes and the alpha memes, all of that different type of stuff originally meant one thing, but then just evolved and changed into something else. And now you're seeing it being used as profile pictures, which can signal something to someone. You're seeing it as memes with different texts. It just shows how things are able to evolve over time, how things can go from being something that's funny to something that can be considered really harmful, depending on what the text might say or even just what that meme signifies in itself.


Elizabeth Dubois: [00:18:32] Yeah, because so much of what is signified depends on the context of its use and how people are interpreting it at that time.


Elizabeth Dubois: [00:18:41] I want to change gears slightly for a minute. One of the things that we observed in our Mean Tweets research, and that other people have seen, is that sometimes an individual is getting a whole lot of harassment and hate, and then there's a group of people who come to their defense and start sharing all of these really positive comments: "Ignore the people who are hating", "They're not worth your time", "You're amazing", "I love your work for these reasons and those reasons". What do you think the effect is of that kind of counter-speech?


Kesa White: [00:19:16] So I'll say that, going back a long time, when I was in undergrad, I was the victim of a troll storm from a neo-Nazi. My roommate and I had never met this man a day in our lives. I think we were 17 or 18, [and we] don't really know why this man was coming after us. It was just an entire troll storm of hate speech coming after us. All of our personal information was put online, and it was just crazy. But at the time, because we were just in undergrad at our little liberal arts college, American University, nobody really knew about us [or] what was happening at the time. So we didn't have those supporters at that time and people saying, "Oh, all the great work that you've done" and all that stuff. So it really was, at the time, a traumatic experience. But then fast forward all these years later: when things like that happen, especially on social media, I'll see a lot of it, like on Telegram and X, and you see a lot more people coming out to support you, saying, "Hey, [here's some] positive messages", "Here's all the great work that you've done", "Keep your head up", "Love the work that you've done", "You must be pissing them off, [and] that's why they feel like they need to attack you, because you're right", whatever it may be. And I think it definitely does build morale when that is done, but it can also shine a bigger spotlight on you at the same time. So it kind of goes hand in hand, because it does mess with your mental health in some capacity, even though at the time you may not want to admit it or it may not seem like it. I can honestly say that throughout all of this I developed anxiety and depression. But it honestly helps doing this work, because I would hate for someone to have to go through what I went through in undergrad. So just knowing that I'm able to do this work to make (sounds so cliche) the world a better place. But then even when I see my colleagues being harassed online, sending them those positive message[s] really does help in the short term. But we also need those long-term solutions for "How do we protect ourselves from this happening later on", from them finding out more information about you, because now your colleagues are providing more links to your work, [and then] individuals who might have been harassing you have even more things to nitpick (if your work is now being highlighted). We do need long-term solutions to these different types of attacks and harassment, because unfortunately it's going to keep happening.


Elizabeth Dubois: [00:21:50] Yeah. And I don't know what it was like to experience that, especially, you know, at age 17. But it is truly incredible how you've been able to turn that really horrible experience into such an important research trajectory. And I think you're right that we need people who have experienced these horrible things to be able to help us define and determine what we can do better and how we can fix it. Do you think that counter-speech, whether it's the people offering positive things, or, as we also see with counter-speech to mis- and disinformation, people pointing out hypocrisy and contradictions, pointing out incorrect information, do you think that kind of counter-speech is part of how we solve some of these problems? You know, you mentioned the longer term solutions. I don't know if this is part of the longer term solution or not. What do you think?


Kesa White: [00:22:45] I wish it would be part of the longer term solution, but it's always: what might be fact to me might be fiction to you. So it's this kind of never-ending game of whack-a-mole. There's always some source that anyone can cite that could back up whatever you're trying to say, even if it is false. So it might be considered a contradiction, but "Oh, I found a source from wherever, so I'm backing up what I'm saying". So it really is hard to counter hate speech, unfortunately. I know that's not the answer you were looking for, but we definitely do need more solutions, because this is a problem that we need to discuss and find actual, tangible solutions for, to keep it from happening again and again. Because of course you can have those little messages at the bottom of a post that say "This might not be factual information" or "Check your sources", whatever. But not everyone looks further for sources; they just see it as, "Oh, it's just a mark on the post. Okay, keep moving", without digging into the source, whatever it may be. So I'm not sure what else we could do, because people are doing as much as they pretty much can. It's just up to individuals in those communities that they are a part of to change the trajectory and also the narrative, whatever they're talking about.


Elizabeth Dubois: [00:24:10] Yeah, unfortunately, that all makes a lot of sense [laughs]. One of the categories of counter-speech that we haven't talked about yet: there's some evidence to suggest that when you respond to hateful and extremist content with a warning about offline consequences, or sometimes online consequences, [such as] "You're going to get banned" or "The police are going to be called", that can sometimes dissuade people. What are your thoughts on that? It seems like, especially if you don't know exactly who's behind the account, those offline consequences might not actually be all that compelling.


Kesa White: [00:24:51] Oh, yeah. Definitely not compelling at all. It's like how cigarettes, for example, have the warning label on them, but people still [smoke] cigarettes just because they can, even though they do have that warning. And even though we don't know the actual identity of who these people are, there are ways to find out who they are, for whoever might have those tools and everything like that. But it doesn't dissuade them from participating in these actions or doxxing people or swatting people. It's going to keep happening because it's 'edgy'. It's, unfortunately, cool. And just to say that they were able to participate in something like that, like collective action, it doesn't make you want to stop. It makes you want to keep going, to see how far you can pretty much go before it's too late, because I'm sure the insurrectionists didn't think that they were going to end up behind bars after planning something on social media. Of course, if they had known that, maybe some of them would have still participated, but not all of them, because is it really worth going to jail for some of these people? But, unfortunately, it's just going to keep going. I know these aren't the answers that we necessarily want, and it does make our jobs so much harder, because we do have to come up with solutions while moderating content, while keeping up to date, while things are changing every day, every hour, every minute. So it's just so much that we have to digest. But there are some really good researchers out there, and I'm sure if we all put our heads together and maybe just cut ourselves off from the world for like a year or two, we'd be able to come up with some solution for how we can battle all of these different problems. Because groups are just going to keep emerging, new terms are going to keep emerging, and there's just something new; they might be talking about something new or forming a new group as we speak. But unless we're in these communities and know what's being talked about, we wouldn't know.


Elizabeth Dubois: [00:26:43] Yeah. And what you're saying there also reminds me of a point that I think sometimes gets forgotten: this is, unfortunately, seemingly quite a part of human existence, certainly North American human existence. And which technology we're talking about doesn't really matter, whether it's social media or search or instant messaging or whatever. These kinds of groups keep popping up. And so it's not just getting the right tech solution, but also understanding the social context and setting, and having those different kinds of potential responses work together, both technical and social and political.


Kesa White: [00:27:23] Yeah, because even things like the political climate also matter, because not everyone is going to be tech savvy and going to put everything on the internet. We had the Ku Klux Klan before the internet, and we saw how much of a hold they had on society. So you don't just need the internet to form a group. It does make connecting people across the world easier, but it just shows that you don't necessarily need technology to bring people together. Because now that we see that content moderation is picking up on a lot of these tricks and terms, we might see an increase in offline action, even just participation and planning and everything like that. And just because we don't see a group on social media doesn't necessarily mean that they are dormant or they're not planning anything. They could be planning something in the basement in the house next door. So it's really hard to know what's going to happen next, unfortunately. Technology does provide that insight and does give us a window into these communities. But like I said, you might have someone in their basement planning, or an entire group that might be planning something, or even just forming, coming together, whatever it may be, which makes it even more difficult for us.


Elizabeth Dubois: [00:28:39] Man, that's a lot. There's definitely a lot for us to work on, and I'm really glad that you're doing some of that work. We've talked a lot about counter-speech now, and we've sort of touched on other little pieces of the moderation puzzle, but I'm wondering if you could maybe pull some of those pieces together for us. What do you see as the major components here for what we can or should be doing about the horribleness that shows up on the internet?


Kesa White: [00:29:11] I think definitely calling on researchers that do have that expertise to be able to help with the things that are happening on the internet, and being able to embrace change. For example, my parents don't necessarily know the ins and outs of the internet, even though they have been around longer than I have. Because technology has taken over, we do need those younger people and individuals that do know how the internet works to be able to make some kind of change in these communities, and to be able to come together and develop solutions, because we do all have our own area of expertise. I'm not an expert on Christian nationalism, but I know someone who is, who might be able to bring in their expertise to put that piece of the puzzle together. And I know someone who studies ISIS; I don't study ISIS. So it's about bringing everyone together, because the disciplines are similar in some ways, even though they are focused on different regions, whatever it may be. But they are taking tactics from one another, and they know what works, what doesn't work, and everything like that. Hopefully in the future it is something that we are able to develop, or even just have the conversations on, because these are really important conversations, because [there's] only so much we can do with our research and just writing op-eds. So it's really important to have a seat at the table. And I saw a post the other day and it said, "If there's no seats available at the table, bring your own folding chair". So there's always opportunity, and people that are willing to listen, if you are willing to pretty much do the work.


Elizabeth Dubois: [00:30:58] Yeah, I think that makes a lot of sense, and the call for interdisciplinarity and collaboration really resonates with me. And the call for "if there isn't a chair available, bring the folding chair", I think that's really important, because we also know that a lot of the conversations, especially in tech companies but also in academia over the years, end up being the fairly well-off white guys who had a lot of opportunities to get to these really high-up roles in these organizations. And so making sure that we're building new chairs, bringing them, and adding them to the table seems pretty essential to dealing with these kinds of problems.


Kesa White: [00:31:41] Yeah, it's really important (especially with these topics) to embrace diversity, because everyone has different experiences, comes from different areas of the world, and has maybe at some point been a victim of the harassment and hate that we are discussing. I'm only 27, I'm a Black girl from a little town, and I never thought that I was going to be doxxed. I don't think anybody ever thinks that they would be doxxed in their lifetime. It wasn't a good experience, but I would say that that is the drive that keeps me going every day, and it also just makes me want to work harder, because I see how much change I can provide, and just my expertise and just knowing how these people think: "What made them hate me? I'm a 17-year-old at a little school in Washington, D.C. Like, why me?". So just knowing those things is what keeps me going, and embracing interdisciplinary research and people that come from diverse backgrounds is really important too, because everyone does have those different perspectives and different ways of looking at the world. And we can't just put all of our trust in one person for them to be the know-all, say-all.


Elizabeth Dubois: [00:32:54] Yeah, I absolutely agree. Thank you so much. We are running out of time, so we'll go to the final question, a little pop quiz. Can you provide a one-to-two-sentence answer to the quick question: what is counter-speech?


Kesa White: [00:33:11] Speech that is able to counteract or disprove content or speech or rhetoric that is already being said, whether it be to further a message or to depress a message.


Elizabeth Dubois: [00:33:28] Yeah. And I mean, in this case, we're talking in the context of hate and harassment.


Kesa White: [00:33:33] Yeah.


Elizabeth Dubois: [00:33:34] Awesome. Thank you so much. This has been a really enlightening conversation. It's been super helpful. All right, that was our episode on counter-speech within content moderation. We talked about counter-speech to extremism and hate speech and all kinds of horrible things on the internet. I hope you learned a lot from the episode and enjoyed the conversation. As always, you can find more links in our additional resources section in the show notes, and over in the annotated transcripts that we make available in English and French at the Polcommtech.ca lab, that's Polcommtech.ca. I also want to acknowledge that I am recording from the traditional and unceded territory of the Algonquin people, and I want to pay respect to the Algonquin people, acknowledging their long-standing relationship with this unceded territory. Thanks for listening.