Wonks and War Rooms

Image Manipulation with Juliana Castro-Varón

Elizabeth Dubois Season 5 Episode 2

Juliana Castro-Varón is the founder of the digital open access publisher Cita Press and a fellow at the Berkman Klein Center at Harvard University. In this episode she and Elizabeth discuss historical examples of image manipulation, how photographic manipulation can mislead the public, and the impact of images on our memories. They also talk about ways to spot fakes.

Additional resources

Check out www.polcommtech.ca for annotated transcripts of this episode in English and French.

Elizabeth: [00:00:04] Welcome to Wonks and War Rooms, where political communication theory meets on-the-ground strategy. I'm your host, Elizabeth Dubois. I'm an associate professor at the University of Ottawa and a fellow at the Berkman Klein Center at Harvard University. My pronouns are she/her. Today we're talking about image manipulation with Juli. Juli, can you introduce yourself, please?


Juli: [00:00:22] So my name is Juliana Castro. I'm the founder of Cita Press, which is a digital open access publisher. And I'm also a fellow at the Berkman Klein Center at Harvard University, where I'm focusing on the history of image manipulation, specifically photographic manipulation used to mislead the public.


Elizabeth: [00:00:42] Amazing. Thank you. I am so excited to talk to you today. Obviously, the idea of deepfakes and the role of A.I. in developing fake news through images has been pretty prominent in public discourse, people are pretty worried about it, and so I'm really excited to talk about the history of that, where it comes from and what the implications might be. Before we get into our conversation, I want to lay out a little bit of how we understand image manipulation in communication theory. What I want to start off with is that image manipulation is typically used as an umbrella term in communication theories, one that includes editing, painting operations and filters. It could include intentional forgery and tampering, or it could be a general technique for making a photo easier to see and understand.


Elizabeth: [00:01:42] So there's a whole wide array here of what constitutes image manipulation. And one of the things that I think is important is that, recently, in the conversations about image manipulation, we jump directly to the potentially harmful uses. So things that could confuse people, could obfuscate things, could intentionally leave out information in order to make people believe something happened that didn't happen. But we also need to think about things like all of the fantasy movies ever, which required a whole lot of manipulation to be created, and the simpler kinds of manipulation, like a filter on Instagram. So these are the kind of broad examples that I've got here, and I'm wondering if there are things that you would add to that, given your background. Is there a different way that you understand image manipulation, or things that I've missed here?


Juli: [00:02:39] No, I think I understand it the same way. But for the purposes of what I'm working on, which we'll talk about more in a second: because I'm looking at the technologies all the way back to the 1500s and the 1800s, we don't have Instagram filters, we don't have everybody making images. So it's a lot easier to pinpoint, especially before computers and digital cameras. It's a lot easier to be like, "Oh, this guy in 1838 did this thing and that thing," and whether or not it was purposely misleading was kind of a background thing for other people later on [editing images]. One thing that I have found useful to divide the way I study these images is general intention. And of course we can't know exactly how [or] why somebody in the 1800s would edit an image, but from the uses and how that image gets distributed later on, you can assume that it's either to, first, make themselves look better. And we will talk about some examples of this. Second, make others look bad. Third, this is my favourite, but this is not very political—just [for] artistic purposes. In this one, I am not making somebody look slimmer, but rather creating images that nobody has created before, or doing it for visual purposes that have nothing to do with our contemporary understanding of beauty. And the fourth one is the commercial one, right? Selling magazines, essentially. You would make this person look slimmer and taller and whiter, and that would help the magazine sell more copies. And so many of these are kind of entangled [in] capitalism, of course—


Elizabeth: [00:04:44] Mm hmm.


Juli: [00:04:46] But it's been useful for me to divide those very, extremely general ways of understanding intention in why people did these things. Because in the 80s people went crazy with these tools, right? Photoshop became a very big thing (there's a [very] good article in The Verge about it), and what that means is that I cannot study every single image that was edited in the 80s. I have to study the general landscape. But I can study pretty much every single [manipulated] image that was made before 1865, right? Because there were not that many.


Elizabeth: [00:05:26] Because there were so few, it was like, "Well, we've got all seven of them".


Juli: [00:05:31] Exactly.


Elizabeth: [00:05:31] Cool. And, so, I really love this kind of categorization of intentions that you've outlined here. Would you say one type is more common than the others, and has the most common type changed over time?


Juli: [00:05:46] I can't tell right now which one is more common. Probably the first type, because we're so self-obsessed. People want to make themselves look better, and because now it's easy. But in the history of political images, making others look bad would happen very often. One example of making yourself look better that I love is the fact that Abraham Lincoln's most famous picture, in which he is by a globe and a very illustrated background, is a composed image using the body of John Calhoun. And this happened because at the time—this is 1860, right before he's elected—cameras were common and people were accustomed to cameras, but the distribution of images in the media was not yet that common. So many people had never seen the face of the president. Many people had never seen the face of their politicians. But they knew pictures. They knew that people could be photographed. They had seen pictures of themselves and the people they knew. And so there were some rumors that Lincoln was ugly and uneducated and vulgar. And so his campaign went and created this composed image from another image in which John Calhoun looks very non-vulgar and educated—


Elizabeth: [00:07:32] Right.


Juli: [00:07:32] And they put Lincoln's face on it, and then he was elected. [Factcheck: Abraham Lincoln was assassinated in April 1865. Thomas Hicks's composite portrait of Lincoln is generally agreed to have been made after his death, rather than as part of an election campaign.]



Elizabeth: [00:07:38] That's incredible, the idea of, like—okay, well, we've got this image problem—as in, the image people imagine of this candidate—how do we fix it? Let's give them a physical image that counters that story that's been told: "Oh, we don't actually have one that works? Take someone else's body and match it up with the head of our candidate!" And that sort of political tactic is one that we could totally imagine happening today, right? I mean, not quite as blatantly as "here's somebody else's body with our candidate's [head]," but we do know that the idea of adjusting the audience's perception by creating particular images happens a lot. So in Canada, for example, the now Prime Minister, Justin Trudeau, had a campaign where he was in front of all of these vast, beautiful landscapes, and he was talking a lot about environmentalism in that campaign. And the idea was: picture him in these gorgeous places. Now, that wasn't image manipulation, but I'm just working to connect it here to some of our other theories of image in politics.


Juli: [00:08:51] This is interesting and important, because very often these images are about what could have happened, not what definitely didn't happen, could never have happened, is fully fake. Now, talking about right now, that has changed a little bit, and very often we do have images of things that are completely impossible. But the idea that you would edit an image so that it could look like something, it could look—


Elizabeth: [00:09:19] Yeah.


Juli: [00:09:20] Right, like when you Photoshop yourself, you're like, "Oh, that's generally my face".


Elizabeth: [00:09:24] Yeah.


Juli: [00:09:25] And so it feels less evil than completely making up a lie, right? And that's very interesting to me, because this image of Lincoln in particular, for example, it could have happened! That's how men looked in the 1800s. Of course, it's probably a common pose on a common background for pictures at the time. However, because we know that it was composed—


Elizabeth: [00:09:58] Mm hmm.


Juli: [00:09:58] We judge it a different way. Right now I just feel like it's silly for a campaign to edit an image, because it's so easy to determine that it was edited. Then it becomes about the fact that it was edited, and about what they were trying to make it look like happened—whether it really happened or it didn't—more than about the image itself.


Elizabeth: [00:10:23] Totally.


Juli: [00:10:23] But I think that has changed: right now, it's not such a good idea to do that, because you can be caught easily. But in the past there are so many examples of people just really not understanding how the technology worked. And I think that's something that may happen with A.I. People don't know exactly how the technology works. We now know that Photoshop is a thing. We know that, right? So if we see something either badly Photoshopped—


Elizabeth: [00:10:55] Mm hmm.


Juli: [00:10:55] Or suspicious, something that looks too much like [it] probably didn't happen—we know that the tools to make that look real exist. We as a society, people who are [on] the Internet and [are] contemporary people who have access to certain tools, know that.


Elizabeth: [00:11:17] Mm hmm.


Juli: [00:11:17] And it took a little bit, in the past, for people to understand what machines could do. And I like the comparison because I think, with AI, we're just so enamoured by the idea of composing images and making images out of nothing—all these kind of magician-like metaphors that we entangle AI in have a lot in common with how people saw photography in the mid-1800s, in my opinion. One example—always Lincoln, I don't know why; in the US in particular, people loved his image. After he died in 1871 [Editor’s correction: Lincoln died in 1865], there was one photographer that created a technique of overexposure to create the look of ghosts. [Factcheck: the photographer was William Mumler, whose "spirit photographs" are generally attributed to double exposure.] And so he promised people that he would take a picture of their loved one's spirit. And he did—one of his most famous pictures is this image of Lincoln's widow with Lincoln's ghost.


Juli: [00:12:32] And we know that it is not Lincoln's ghost. But at the time, this kind of spiritualism and trust in the paranormal was very common. It was right after the Civil War, and so people wanted to believe in ghosts. And people didn't understand that you could overexpose the image. Some photographers used it to create entertaining, magician-like, circus-like images. But others used it to purposely promise people that they were taking pictures of their [loved ones'] ghosts.


Elizabeth: [00:13:16] Hmm.


Juli: [00:13:16] He was taken to court for this, and later acquitted, and continued to make images. And he was a very talented photographer. Really.


Elizabeth: [00:13:25] Wait a second here. He was acquitted because everyone believed that he was actually, in fact, taking pictures of the ghost? Or because—


Juli: [00:13:32] I don't know the details of why he was acquitted, but I understand that it was like—whether or not people believed was not as important as whether or not he was promising and delivering—


Elizabeth: [00:13:46] Hmm.


Juli: [00:13:46] He was delivering a picture of the spirit. Right. Like it looked like [it]. But I don't know the details. Like maybe kind—


Elizabeth: [00:13:53] We'll do—


Juli: [00:13:53] —Of


Elizabeth: [00:13:53] We'll do some background research and add into the show notes the rest of the history.


Juli: [00:13:58] Great, great, great, great. I know he was acquitted, which is crazy. But also, like, why would you take him to [court]—Yeah, like, for misleading the public? But what if he believes in ghosts himself? Can you just be like, "Oh, I mean, he thinks he …"? I don't know. Like, that's the thing they—


Elizabeth: [00:14:17] Yeah.


Juli: [00:14:18] Know to—


Elizabeth: [00:14:18] And I mean, that's the problem with so much—I mean, fake news, disinformation, misinformation. Sometimes people are sharing things, genuinely believing it to be true. Right. Like some people, when they, you know, find a picture of, you know, "God in a pancake they made" are genuinely like, "This is a sign from heaven." Right? They're sending pictures [of it] around online. And then other people are using Photoshop to make their pancake look like there's a picture of God burnt into it. And they're like, "Hey, look! You know, I got one too!". And that's a fairly basic idea—you see that kind of meme going around on the Internet all the time. But I think it gets to that root of we don't always know what we're sharing, and then holding people responsible for those kinds of things gets tricky.


Juli: [00:15:14] Absolutely. And that's the thing that is changing and will continue to change as the technology becomes easier to use. It's like anybody can do anything and that's to some degree great, but to another degree, dangerous. Because then we not only have images, but then we need tools to tell us whether or not the images are real.


Elizabeth: [00:15:37] Totally.


Juli: [00:15:38] And like, yeah, I mean, photography was like the reality machine. And right now we need other machines to tell us whether or not these images are even real.


Elizabeth: [00:15:49] Yeah. So let's kind of walk through that. We talked about, you know, this overexposure technique. Eventually people learned that that was a thing, [and] no longer believed that their spirits were being captured in these images. Are there other examples sort of along the history of image manipulation where there was a technique or a tool that was used that deceived or confused people, and then eventually we learned about it?


Juli: [00:16:16] I mean, yes. Multiple times. [Most] famous one being Photoshop. Photoshop is from the 80s, but it didn't become popular until a little bit later. And when it became popular, something similar to what had happened before with digital cameras happened: everybody was like, "We can't trust images anymore. Truth has ended, the end of truth, everything is fake now." And that didn't quite happen, right? And I think we are going through a similar paranoia right now. [To] just be like—


Elizabeth: [00:16:58] Mm—


Juli: [00:16:58] Oh—


Elizabeth: [00:16:58] Hmm—


Juli: [00:16:59] "Deepfakes! Everything is fake! We can't trust anything"—and so on and so forth. I think that, of course, now it's easier to make images, but it's also easier to know whether or not the images were faked. Now, the problem is not that the images are made, but that the images travel very fast, and that's its own problem, right? The images arrive everywhere in a matter of seconds and minutes. Maybe the problem is whether or not people have time to fact-check it, not the technology of making the image.


Elizabeth: [00:17:37] Mm hmm.


Juli: [00:17:38] Because you could tell whether or not an image has been manipulated, and images have been manipulated for 200 years. So with Photoshop the same thing happened: right when it became very popular and people understood how it worked, they were like, "It's the end of truth. We can't trust the media anymore." And the truth is, the media could not be trusted. They would edit, put people together that weren't together, and then put a little tiny note that said "this is a composed image" that nobody read. So there is some not entirely transparent intention in it very often. And I guess the problem becomes judging or determining whether or not that is bad enough to socially punish it. I mean, models continue to be made even thinner in photographs and in magazines. This has happened for multiple decades now, right? Since we've been alive, and it still happens, and we have feminism and all the things. No, nothing. It continues to happen. So the problem, maybe, is people's ideas about beauty, and not the technology. The technology has existed for a very long time.


Juli: [00:19:11] Another example that I think is useful, in kind of this editing of the past, is one that I find very silly but very interesting: sometime in the 2000s, a bunch of people decided that cigarettes were bad enough that we couldn't show children that people smoked in the past, and cigarettes were erased from history. Whether or not that is positive for children is one thing, but whether or not that is positive for history is another one.


Juli: [00:19:50] And so they erased the cigarette that Paul McCartney is holding in the Abbey Road poster. They erased cigarettes in movies—they banned cigarettes in movies and erased cigarettes in Winston Churchill's very famous image, which is now just so ridiculous. The machine allows people to do things, but it is not the machine being like, "Oh, you know what, let's erase a cigarette." It comes from this very human desire to control the narrative. And it has an enormous impact on how people remember.


Juli: [00:20:40] There's one study from, I think, the early 2000s, maybe 2007, I believe, that claims that people remember how things are shown in pictures, even if they were present and something else happened. And they use the example of a doctored image of Tiananmen Square, a protest that didn't have that many people, but the composed image put a bunch of people there. So even if they had been to the protest, they remembered that there were many people, because that was the image that they had seen and seen and seen. And this is important because, of course, we are to some degree editing how things happened in the long term, and editing how people remember the things that happened, regardless of whether or not they were present and [whether what happened] was slightly different.


Elizabeth: [00:21:41] Yeah, I think that example is so intriguing and so essential for this conversation about why we should care about image manipulation at all. Because there have been a few studies that do show that we develop memories based in part on manipulated images, even when we know that they've been manipulated, even when we're told later that they've been manipulated, it still has this impact on how we remember events and how we imagine what history was. So I think it's really essential, and we'll add some links into the show notes to read more if you're interested, because there's a bunch there.


Elizabeth: [00:22:23] It then makes me think about, okay, well, what do we do about it? If we know that there's this power that images have, what are the solutions, or what skills do people need to have, or what technologies should or shouldn't be used? And I mean, this is a giant question to ask, but you've already hinted at a few of these things. I think you've talked a little bit about how the more experience you have with it, the more you understand that the technology exists, the better folks are at identifying when it's happening. And sometimes they choose to continue using it and sometimes they choose not to. Are there other things that you can think of that, as a society, we need to have in order to deal with the reality that image manipulation has happened for a long time and probably is going to continue?


Juli: [00:23:11] I think the main one is to understand the technology to some degree. I like the idea of magic, but in this particular thing, which is so important, I don't love the metaphors of magic, because they have an aura of beauty and surprise. And A.I. is that, sure. It's fascinating to see what it can do, but it's also very dangerous, because these are human-built things, very much and very often with the purpose of making money, not always, but very often. And so I think it helps to understand, for example, that A.I. images are created by algorithms and that algorithms process data. That means everything that is created is created based on what the algorithm has already "seen," right? So the image is not created out of nowhere; it mirrors, kind of, the statistics of the images that the algorithm processes. So if the algorithm creates images of people that are overwhelmingly white, that means it has more white people in the database. It gets more complicated than that, of course. But I think a general understanding of how technologies work is never bad.
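
[Editor's illustration for the show notes: a toy sketch of the point Juli makes here, that a generative model's outputs mirror the statistics of its training data. This is not how a real image generator works internally; it simply draws "generated" faces in proportion to how often each group appears in a made-up training set, so an imbalanced dataset produces imbalanced outputs. The group labels and counts below are entirely hypothetical.]

    import random
    from collections import Counter

    # Hypothetical training-set composition: counts of face images per group.
    training_set = {"group_a": 8000, "group_b": 900, "group_c": 700, "group_d": 400}

    def sample_generated_faces(n):
        """Draw n "generated" faces with probability proportional to how often
        each group appears in the training data, mimicking how a model's outputs
        reflect its training distribution."""
        groups = list(training_set)
        weights = [training_set[g] for g in groups]
        return Counter(random.choices(groups, weights=weights, k=n))

    print(sample_generated_faces(1000))
    # Typical result: roughly 800 / 90 / 70 / 40, i.e. the same skew as the data.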


Elizabeth: [00:24:47] Mm hmm.


Juli: [00:24:48] I think we need some legislation eventually, especially to control the most problematic aspects of this technology—with deepfakes and child pornography and all the just horrific things that the technology allows horrific people to do.


Elizabeth: [00:25:13] Mm hmm.


Juli: [00:25:15] But I think, in general, I am advocating for kind of a more cautious approach to looking at images: one that doesn't assume that everything that could have happened did happen, but also one that doesn't assume that because it is an image, it was probably faked.


Elizabeth: [00:25:40] Yeah, that makes a lot of sense. The idea of: we need to be aware, we need to be cautious, we need to be critical, but we also don't need to, you know, stick our heads in the sand and say, "We don't want images at all anymore." That doesn't really make sense. We want to have a situation where, like with most of the media that we consume, the more literate you are about how it's made and why it's made and why it showed up on your screen, the better you are at understanding the intent behind it and the purposes of it and the potential ramifications of it.


Juli: [00:26:20] Yeah. I also believe that we, especially those of us who work on the Internet or are extremely online, have been trained to know now what a fake image looks like. It's like when you get a spam email that says, "Oh, you won 100,000 million!" and you know that it's fake—


Elizabeth: [00:26:46] Yeah.


Juli: [00:26:47] Right? Like you just—or just be like, "Buy this thing!" And insane promises and many emojis and weird type: you know that it's a spam, fake email. How do you know that? Well, we've been—


Elizabeth: [00:27:02] Mm


Juli: [00:27:02] Trained—


Elizabeth: [00:27:02] Hmm.


Juli: [00:27:02] Because we've seen them. We've seen them so many times. We can catch them now. Not everybody has that. I—


Elizabeth: [00:27:09] Mm hmm.


Juli: [00:27:09] Think people would see an image and not zoom in on it a lot to determine whether or not it was faked.


Elizabeth: [00:27:20] And one of the things is the tools that we use to deal with the images where you can't just zoom in and know, or you don't know immediately. Like using reverse image search on Google. You know, there are deep learning models that have been developed that are really good at detecting fakes but are not necessarily accessible to the average person. There's an extra step and sometimes an access issue, whereas with those spam things, you know, it's just like the more you've experienced it, the better you are at just identifying it.
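
[Editor's illustration for the show notes: one simple forensic heuristic that is a step up from zooming in, but still accessible without specialized deep learning models, is error level analysis (ELA). You re-save a JPEG at a known quality and look at how strongly each region differs from the original; regions pasted in or edited after the photo's last save often compress differently and show up as brighter patches. This is a minimal sketch assuming Python with the Pillow library installed; the file names are hypothetical, and ELA is only a rough signal, not proof of manipulation.]

    from PIL import Image, ImageChops, ImageEnhance

    def error_level_analysis(path, quality=90, scale=15):
        """Re-save a JPEG at a known quality and amplify the per-pixel difference
        from the original. Edited regions often compress differently and appear
        as brighter patches in the result."""
        original = Image.open(path).convert("RGB")
        resaved_path = path + ".ela_tmp.jpg"
        original.save(resaved_path, "JPEG", quality=quality)
        resaved = Image.open(resaved_path)
        diff = ImageChops.difference(original, resaved)
        # The raw difference is usually faint, so brighten it to make it visible.
        return ImageEnhance.Brightness(diff).enhance(scale)

    # Hypothetical usage:
    # error_level_analysis("suspect_photo.jpg").save("suspect_photo_ela.png")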


Juli: [00:27:56] Yeah, but with images, also—if you've seen Photoshopped images enough, you know, kind of, that the borders look a little weird. You may not be able to fully name that the exposure on one of the people doesn't match the exposure on the other one, but you can tell that something is off. It's a good example of the things you can look for if you're trying to catch whether or not an image is fake. And similar tells exist for A.I. images, for example. Very often they won't—like, if it is a person with earrings, very often they—


Elizabeth: [00:28:38] Mm


Juli: [00:28:38] Won't—


Elizabeth: [00:28:38] Hmm.


Juli: [00:28:38] Have—they won't have the same earrings. They will have one earring, and then the other one would be [different] because it doesn't know that earrings necessarily have to match—because sometimes you have a picture in which one of the earrings is covered. And so if you have two ears—we can put an example in your show notes—


Elizabeth: [00:28:55] Yeah.


Juli: [00:28:55] So that people can see. But essentially there are ways of catching an A.I. image. You'll see in the example that it just looks a little bit off, and then there are some weird hairs, or the hands are weird. I believe, however, again, that may change in six months. Like that—


Elizabeth: [00:29:16] Yeah.


Juli: [00:29:17] May change—


Elizabeth: [00:29:17] And


Juli: [00:29:17] With—


Elizabeth: [00:29:18] That's the nature of so much disinformation that we see online. It's also—I did a bunch of work looking at the use of political bots and automation on social media accounts. And what we saw over and over again is each time we get better at detecting it, whether it's humans detecting what shows up on their screen or tech companies detecting, kind of, behind the scenes what's happening, the people creating the disinfo [disinformation] and doing the manipulative information sharing. Get better at making technologies that do it more discretely or at going around kind of the walls that we put up, whether that's us knowing like, "Oh, I should check the earrings," or "I should use reverse image search" or whatnot, or a tech company saying, "Well, you can't automatically post to your account. Anybody that's posting consistently every 30 seconds—we're going to shut down". So they do it like, 30 seconds, one time; 45 seconds, another time; 2 minutes, another time; 10 seconds, another time. Right? Like these disinformation campaigns—when they're campaigns to share that manipulated information—they're constantly working with the technology to get a step ahead of our ability to understand that it's happening.


Juli: [00:30:36] Yes. I mean, it's a lot harder to spot large campaigns purposely created to mislead. But it's never bad to understand a little bit better how, and why, people would do it, and to suspect even our own kind of doing it.


Elizabeth: [00:30:59] Yeah. Yeah. I think it's really important to remember that image manipulation isn't always part of some big nefarious disinformation campaign. Sometimes, as you said, it can be simply to make yourself look better because you prefer when your nose doesn't look as crooked, and so you make an adjustment with Photoshop, or you like how the fuzzy filter makes your skin look nice and clear, right? It could be that, "Oh, I want my competition to look bad." It could be the artistic purposes that you talked about, or it could be the idea of beauty and selling stuff, which we didn't even really get into. But that's a giant conversation for another day. So I would like to end off the episode the way I end off all episodes, which is with a little pop quiz. So what I'd like you to do—short answer question—how would you define image manipulation?


Juli: [00:32:00] [Laughs] As the ability to edit an image that a machine took.


Elizabeth: [00:32:04] Quick follow-up: what's the difference between just editing a photo and manipulation, or that leading-to-disinfo [disinformation], or the various nefarious words people use when we're afraid of it all?


Juli: [00:32:20] From the technical point of view, there isn't one. I am focusing on the more misleading examples, but I have some other examples in the timeline in which people just erase a little post, like a little tiny pole in the image. And it's an image of a war zone, and they're like, "You know what? It will look more beautiful." But it's an image of a war zone, and still they're just editing a little bit, and everything else is untouched. So there isn't really a difference. But I'm obviously not looking at every single image that has ever, ever been edited, but rather key examples.


Elizabeth: [00:33:04] Yeah, that's really helpful. Thank you. And I think something that our conversation has brought up is, you know, how those images are then used in stories and narratives about what's happening, and how that impacts our memory and what we understand history to be; that's the thing that can sometimes change. Thank you so much. This was a wonderful conversation. I really appreciate you taking the time.


Juli: [00:33:28] Of course. Very nice to see you again.


Elizabeth: [00:33:34] All right. That was our episode on image manipulation. I hope you enjoyed it. Today we talked about a lot of different examples and offered a lot of different things in our show notes, so be sure to check them out. You can also find our fully annotated transcripts in French and English on the website at polcommtech.ca.



