How to make AI work for peace

U.S. Institute of Peace disruptive technologies expert Heather Ashby discusses the urgent need to orient AI toward supporting peacebuilding and nonviolence.
An AI-generated image of a robot endorsing peace. (Dall-e 2)

Subscribe to “Nonviolence Radio” on Apple Podcasts, Android, Spotify or via RSS.

This episode of NV Radio offers insight into the ways artificial intelligence, or AI, might be used to support peace and nonviolence. Stephanie and Michael welcome Dr. Heather Ashby of the U.S. Institute of Peace, an expert on technology and its intersection with government and politics. Their discussion explores the ways AI might be used for both ill and for good in the public sphere. This dual possibility gives rise to the urgent need to understand how to orient it towards peace. Though aware of the dangers inherent in AI, Dr. Ashby reminds listeners that:

The original idea when social media started was to increase the commons so that you’re meeting people in different parts of the world, or even in your country, your state, who you normally wouldn’t have encountered. [At this point, the aim of AI must be] to hold on to that and to try to leverage these tools to be able to do that. And to make connections and to build grassroots support.

We need not fear the potential damage AI could cause so long as we work deliberately to build its capacity to bring people together, to gather and spread reliable information as a way to promote peace, increase understanding and sustain communities throughout the world.

Stephanie: Today’s show is about artificial intelligence and its potentialities and implications for peacebuilding and nonviolence.

AI technology is advancing faster than ever before, and at this point it's not enough to keep up with what it is or the vocabulary used around it. We have to strategize actively and implement mechanisms to use it for higher purposes than violence and destruction. The co-opting of this technology for violence is very likely and something we have to find a way to shift. So, learning the conversation is only part of the responsibility we have in the world of peace and nonviolence; we can also find ways for AI to help do the work of peace. That's a really exciting conversation to have – even the Pope focused his World Peace Day remarks on this topic.

I turned to the United States Institute of Peace for a discussion with Dr. Heather Ashby, the associate director of USIP's program on disruptive technologies and artificial intelligence.

Dr. Ashby joined USIP after seven years with the Department of Homeland Security, where she worked at the intersection of homeland security and international affairs. She also focused on U.S.-Russia relations.

Her research interests in the field of international affairs include misinformation, disinformation, and malinformation, as well as hate speech, propaganda, artificial intelligence, and digital security and safety. Dr. Ashby also researches and publishes on Russian activities in the Global South, though this show's focus is AI.

Let’s turn to Dr. Ashby.

Stephanie: So, Dr. Heather Ashby, you're with the US Institute of Peace. Can you tell us a little bit about this institution and how you came to be involved in it?

Heather: Yeah, no problem. So, the US Institute of Peace has been around, as of this year, for 40 years. It was created during the Cold War, when we were in a different conflict environment globally.

And since that time, the institute has evolved to address conflicts and the way they have changed over time, as the US position in the world has changed over time. And so, the institute is focused on preventing, mitigating, and resolving violent conflict where it occurs in the world. And so, we have offices all throughout the world, particularly in Africa, Asia, and Latin America, with people in communities, working with grassroots civil society organizations, as well as local peace builders to help address the conflicts that they are dealing with.

Stephanie: Was this founded around the same time as the Peace Corps? Are they related in any way?

Heather: These were separate efforts. The US Institute of Peace came out of a grassroots movement of religious organizations and war veterans who came together and said that, based on their experiences and what they observed through the 1960s and 70s in particular, the US needed an Institute of Peace to help educate the public about peace.

Michael: Yeah, I was part of that movement, actually. We actually had as our goal, a department of peace that would be parallel to the Defense Department, which is actually a department of war. And so we regarded USIP as a kind of compromise, which we accepted.

Heather: Yeah, well, it’s great to hear that – that you have that experience participating in the effort to create the Institute of Peace. And I would say that we’re trying to live up to those original ideals by making ourselves more available to the public. So, people who come to DC to look at memorials on the National Mall – our headquarters is at the end of the National Mall, before you head into Virginia – can just come in, look at the building, and speak to our experts.

And we have exhibits on peace and conflict, like images from conflicts and stories about the conflict to go along with it and interactive aspects of those exhibits available online. So, we’re really ramping up educating the public about peace.

Michael: Really glad to hear that.

Stephanie: So, I wanted to talk to you today about your research on AI. You wrote an article for the US Institute of Peace, “A Role for AI in Peacebuilding.” I hope that we can jump in and discuss that. But first, my question is: what got you into that assignment, that interest in the role of AI?

Heather: For me, I’ve always been interested in technology. When I worked for the Department of Homeland Security, that was an aspect of my work – working internationally and domestically. But it was looking more at the state perspective of cybercrime, cybersecurity, and the way cyber and technology issues feed into national security.

And so, working at the US Institute of Peace means taking the opposite view: how local partners and other organizations on the ground can fight for peace and use technology – because technology is now a part of war, or a means of more active involvement in war.

And so, I started with that – technology, as well as mis- and disinformation. And with the rise of generative AI, as well as the role AI plays in content moderation on social media platforms, I was thinking more about AI and peacebuilding: how peacebuilders could use the technology, but also prepare themselves for how it’s currently being used and what it could be used for in the future.

Stephanie: Well, let’s jump into AI. I want to start with a perspective of maybe we just don’t know. To bring our listeners along the journey with us so that we can all get on the same page. So, let’s start off with the discussion of generative AI. What does that mean exactly?

Heather: Generative AI is what we’re hearing a lot about, and it’s based on large language models, which collect a lot of data, mainly from the internet – though we’re not clear on the datasets that a lot of these companies are using. And it’s built in a way that it interacts with people.

So, you can ask it questions and ask it to do things. You can get it to examine data for you, it can write a paper, and so forth. That’s the nature of generative AI. It helps you. It interacts with humans to do certain tasks.
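
To make that concrete, here is a minimal sketch of how a program can ask a chat-style generative AI model a question. It assumes an OpenAI-style chat completions endpoint; the endpoint URL, model name, and API key are placeholders, not a recommendation:

```python
# Minimal sketch: asking a chat-style generative AI model a question over HTTP.
# Assumes an OpenAI-compatible /v1/chat/completions endpoint; the model name
# and the OPENAI_API_KEY environment variable are placeholders.
import os
import requests

API_URL = "https://api.openai.com/v1/chat/completions"  # assumed endpoint
API_KEY = os.environ["OPENAI_API_KEY"]  # set by the user beforehand

payload = {
    "model": "gpt-4o-mini",  # placeholder model name
    "messages": [
        {"role": "user",
         "content": "Summarize the key points of this ceasefire report: ..."}
    ],
}

resp = requests.post(
    API_URL,
    headers={"Authorization": f"Bearer {API_KEY}"},
    json=payload,
    timeout=30,
)
resp.raise_for_status()
# The model's reply comes back as structured JSON.
print(resp.json()["choices"][0]["message"]["content"])
```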

Stephanie: So, regarding generative AI, you discuss some of the challenges it’s posing – say, for example, in elections coming up in the next years, and the way they can be manipulated. How does this affect elections, as well as disinformation, malinformation, and misinformation?

Heather: Yeah. And I think this is the first wave of elections in which we’re really going to see the extent of how generative AI may play a role, because these tools are so available. ChatGPT receives a lot of attention, but we also have open-source models, and there are different generative AI products, built around certain research topics, that are available out there.

And so, anyone could really access them because some of them are available for free. If you want more advanced capabilities, you could pay for it. And so, you could just put in information and what comes back to you could be inaccurate information about elections.

There have been some studies over the past couple of months showing that when an organization or an individual put in information about an election in a particular country, the tools produced inaccurate information about the positions of candidates and where to go to the polls.

In an ideal situation, it’s supposed to help people approach online search in a different way. With Google, Bing, or other search engines, you’re just going to receive a bunch of websites.

With generative AI, it’s going to respond back to you and the hope is that it will provide you with accurate information and links to where you could double-check it. But that’s not always the case with different software tools. And so, that’s one area that it could have an impact on elections if people use it to try to find out information about the candidates, where to go to the polls, as well as when their primary is. At least in the case of the United States.

There are also challenges depending on the language. Because so much of the internet is in English, not only generative AI companies but other technology companies face challenges when they’re trying to produce products in other languages – especially social media platforms in terms of content moderation – because they can’t pick up on the nuances of a language and the way it shifts and evolves within different communities.

So, the Arabic spoken in Egypt is not going to be the same type of Arabic spoken in Libya. Having the technology be able to pick that up is still a ways off. But it’s out there. And so, if you’re a non-English speaker putting material into these systems, a whole host of challenges could come out. And if you’re doing that for elections, as the technology continues to spread and develop in various countries, that could also cause additional confusion.

And it lowers the cost for anyone who wants to spread disinformation by making it easier to replicate that information. With generative AI, the average user can just go in there and type a question, but you can also access it programmatically, tie it onto another software product, and have it generate more information – so it works behind the scenes, helping whatever the product is trying to do.

And if someone wants a bunch of inaccurate information, they could query that up. The hope is that the terms of service of those products will prevent that, but you can always get around them, as we’ve seen with social media over the years – and still see today.

Stephanie: What do you mean by that? On social media?

Heather: In terms of – it’s not there to say, “Hey, this is factual information” or “this is inaccurate information.” And those are the same challenges happening with generative AI. Certain software will say, “Oh, please double-check your information.” But where are you going to double-check it? It’s pulling from the internet already. So, you’re just going to go to another search engine that may provide you with inaccurate information.

And so, it’s up to the user to determine what’s factual and what’s inaccurate, and you have to do the research. And if you don’t have the background in doing that, then that’s going to prove challenging.

And from there, it just spreads. Someone could post it on social media, and it multiplies. Or it could spread through in-person interaction, text messages, or messaging apps such as WhatsApp and Telegram.

Stephanie: And it’s interesting because as we’re going to go through these challenges, it seems like you’ve also thought about space for opportunities for peacemakers and peacebuilders to be able to step in and say, “Okay, so this is a problem. This is a potential risk that you’ve been able to point to. So, what can we do to help change that or transform that spread of dis- or misinformation?” Right?

Heather: Yeah. And I think you have to be building partnerships with AI companies. In certain sectors, AI is farther along. In finance, if you’re applying for a credit card online, there’s no individual monitoring your application. They have set up a whole software system to do that automatically and get back to you in a minute or less about whether you’re approved or not, and to examine risk factors based on their algorithm. So, that’s much farther along.

But looking at generative AI and the expansion of AI into other sectors, there could be an opportunity here in a way there wasn’t with social media – peacebuilders and others didn’t know what social media would become; now we know, so we’re playing catch-up. Now is the opportunity to interact with those AI companies, as well as with nonprofit organizations, research organizations, and academic institutions that have funds and are invested in this, to start looking at the conflict potential of these tools and to come to a common set of terms.

Because when tech companies talk about accountability, they’re not using the term the way peacebuilders do, for whom it can mean accountability for mass atrocities or war crimes. So, having a way to speak the same language, so each side understands the other, and coming together to have these discussions is, I think, so important.

And if we start that now, as peacebuilders, we could start to impact and make those changes before we’re caught on the other end of it trying to go back.

Michael: I know one of the concepts that has been important to us in peacekeeping, particularly, is rumor abatement. And I can imagine that this is many orders of magnitude more complicated with AI because rumors can spread so much more quickly and so much more pervasively. Do you have something you could say to that?

Heather: Yeah, I’ve heard of that being an issue, and we’ve been thinking about it at the US Institute of Peace. One way I’ve been considering approaching it is training peacekeepers on these types of issues, and messaging ahead of time, before the peacekeepers are deployed, and continuing that messaging while they’re present. And doing the messaging in the way people actually receive their information, not how you think they should receive their information.

So, if they’re communicating over radio, then convey it over radio. If it’s television, it’s television. If it’s via text message, do it that way. And AI could be useful for mass-producing texts to reach a large segment of the population whose cell phones are primarily text-based and don’t have downloadable apps like Facebook and so forth.

Another way is understanding, when you’re in a society, how communication takes place and how people determine which sources are truthful, so you can leverage that within a community – whether that’s religious actors, other nonprofit organizations, or certain government institutions – and work with them. So, it would be a more comprehensive approach than just deploying peacekeepers and saying, go out there and execute your mission.

It’ll take more legwork – understanding the society and how people operate and communicate with each other – to make sure the messages coming across reach the people who may be susceptible to those rumors.

Stephanie: Talk about deepfakes. What are those, and why are they a threat, and how do they relate to AI?

Heather: Yeah. Deepfakes have become more of an issue over the years. This is the capacity to use software to manipulate images, voices, and text to misrepresent a person, what they say, and so forth. With deepfakes, it could be someone using an AI tool to splice images together and have a person on a video say words that never came out of their mouth. Or it could be putting people into an image together who were never together, to misrepresent what may have taken place.

So, if you put Zelensky and Putin together and say that they met and had a discussion on how to resolve the war in Ukraine, that’s a deepfake, and it could circulate. I imagine a Russian audience would believe it more than a Ukrainian audience. But that’s a situation in which deepfakes could be used. They could be active in the elections coming up this year.

In the past, there was that famous one with Nancy Pelosi, in which it seemed like she was slurring her words. And that was before the more advanced AI systems we’re talking about now. That was 2019, I believe, or 2018 – involving former Speaker Pelosi.

And so, as technology evolves, we could see deepfakes becoming more sophisticated. And there have been discussions about whether AI could serve as a tool to identify deepfakes, but the irony is as AI becomes more evolved, it may have trouble identifying its own products to spot what is fake and what is real.

Stephanie: Why should people interested in women, peace, and security be interested in this topic of deepfakes?

Heather: Yes. The greatest victims of deepfakes are women, through nonconsensual pornographic images – taking the head of a woman you’re targeting and putting it on another woman’s body, and sharing those images without consent. That is incredibly troublesome, as is the general online targeting of women. And the aspect of mis- and disinformation that doesn’t receive enough attention is malinformation: taking factual details about a person and putting them out there to do harm to them.

So, this is doxing. This is swatting – calling up a police station and saying there’s a hostage situation at a person’s house, so they send the SWAT team and heavily armed officers there. When they get there, they realize that’s not the situation at all, but it causes a lot of danger. Or putting addresses and emails online, which can then lead to more people attacking the individual.

This is prominent among women who are active in gaming communities and women in the public sphere. Because of these attacks, it has the potential to reduce women’s presence in the public sphere. We think about it a lot in Europe and the United States, because that’s where most of the research takes place, so additional research needs to be done on what happens to women in the Global South who are targeted. Or women who simply want to engage in that type of online community, the online commons, but are being targeted by a stalker, or someone in their family, or someone who wanted a relationship with them and was rejected. This makes it easier to target them.

I think it’s one of the most critical aspects of women, peace, and security: if we want to advance democracy, you need women participating in society, and this is going to seriously undermine – and is seriously undermining – their participation in the public sphere. There are a lot of issues I’m passionate about, but especially this one, and making sure we’re combating it. This particular type of targeting hasn’t received enough attention from social media companies. And in addition to women, people from marginalized groups and communities are targeted as well.

Stephanie: Is this being addressed at the UN in any significant way, do you know?

Heather: Yes. It’s picking up steam. It’s not only the UN; the OSCE, as well as governments across the world, are looking into this more deeply. Nina Jankowicz is one of the leaders in pushing this forward and bringing international attention to it. So, if you and your listeners are interested in digging more deeply, I would say look into Nina Jankowicz, who is herself a victim of malinformation and disinformation from when she was seeking to play an active role in the US government on how to combat this and build a more holistic strategy for it.

I would say follow her. There are also other organizations, like She Persisted, that look at these topics. You could follow their research and how they are very active globally with civil society organizations as well as international organizations. And in the discussions about AI, you have various people raising the issue that deepfakes are becoming more widespread and more targeted toward women.

Stephanie: This is really fascinating, and there’s such a broad discussion. I want to pass on to the next topic, which is about surveillance and the role of AI – what peacebuilders need to know about what could go wrong and what they need to be doing.

Heather: Yeah – with surveillance, it’s already happening now. One of the challenges in the discussions about AI is that there are groups of people and organizations that want to focus on existential risk – how you could build a biotech weapon, or AI being used with nuclear weapons – while focusing less on the everyday harms. And that’s where surveillance technology comes in.

It’s facial recognition. It’s the ability to comb through mounds of data collected through camera systems and other means. Previously, you had individuals going through it with some technical help from a software system, but now it’s much easier. One of the top uses of AI is its capacity to go through mounds and mounds of data to look for patterns and trends, and to spot things that would be more difficult for a person to find.

And that is one of the big dangers: AI enhances surveillance technology, because you can gather more data on individuals and communities and alert whoever is using, or designed, the system to various issues.

It could be running facial recognition on nonviolent activists – you have an image taken of them at a protest, or some other way they appear in public, and you’re trying to identify them, so you comb through the whole internet looking for similar images. That’s what can take place with surveillance technology powered by AI. And there are a number of companies doing this.
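
As an illustration of the pattern being described – matching one photo of a face against a collection of other images – here is a minimal sketch using the open-source face_recognition library (assumed installed via pip). The file names are placeholders, and a real surveillance pipeline would be far larger:

```python
# Minimal sketch: matching a face from one photo against other photos.
# Assumes: pip install face_recognition. File names are placeholders.
import face_recognition

# Encode the face in a query image (e.g. a photo taken at a protest).
query_image = face_recognition.load_image_file("query_photo.jpg")
query_encoding = face_recognition.face_encodings(query_image)[0]

# Compare against a collection of candidate photos gathered elsewhere.
candidate_paths = ["photo_a.jpg", "photo_b.jpg", "photo_c.jpg"]
for path in candidate_paths:
    encodings = face_recognition.face_encodings(
        face_recognition.load_image_file(path)
    )
    if not encodings:
        continue  # no face detected in this photo
    # compare_faces returns [True] when the two faces likely match.
    is_match = face_recognition.compare_faces([query_encoding], encodings[0])[0]
    print(path, "match" if is_match else "no match")
```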

On the flip side, there are some organizations looking at how these tools could be used to help the peacebuilding field. There’s one effort within Ukraine to help identify perpetrators of war crimes through facial recognition – identifying the Russians who committed war crimes in Ukraine.

And so, within everything that could be completely horrible and terrible, there’s an aspect in which it could be useful. We just have to make sure it’s done in a way that’s responsible.

Same thing with drone technology. You could use it in a non-weaponized way to monitor ceasefires and ceasefire lines, to avoid putting too many monitors on the ground where they may be impacted by the violence. So, that’s one aspect.

And with drone images, you could have AI examine them and spot any discrepancies, rather than having someone sit at a computer and monitor all the images coming in from the drones.
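
Here is a minimal sketch of that idea – automatically flagging a discrepancy between two drone images of the same stretch of a ceasefire line – using OpenCV. It assumes the images are already aligned; the file names and thresholds are illustrative only:

```python
# Minimal sketch: flag a drone-image pair for human review if too much changed.
# Assumes: pip install opencv-python, and that both images cover the same
# area at the same alignment. File names and thresholds are placeholders.
import cv2

before = cv2.imread("ceasefire_line_day1.png", cv2.IMREAD_GRAYSCALE)
after = cv2.imread("ceasefire_line_day2.png", cv2.IMREAD_GRAYSCALE)

# Pixel-wise difference, then threshold away minor noise.
diff = cv2.absdiff(before, after)
_, changed = cv2.threshold(diff, 40, 255, cv2.THRESH_BINARY)

# If enough pixels changed, flag the pair instead of relying on a human
# to watch every incoming frame.
changed_fraction = cv2.countNonZero(changed) / changed.size
if changed_fraction > 0.01:  # illustrative sensitivity
    print(f"Possible discrepancy: {changed_fraction:.1%} of pixels changed")
```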

Stephanie: Yeah, I imagine, too, that you have some perspective, coming again from the Department of Homeland Security where this could be like a wonder, you know, to identify “terrorists.”

Heather: Yeah, and who is a threat. That’s the main thing the government looks into – how to define a threat. You could put factors into an algorithm, and AI could go out there and gather data for you to look at the behavior patterns of people online.

With weapon systems, I think of drones – we’re going to have targeted assassinations happening through facial recognition via drone technology. Monitoring of satellite imagery is less of a weapon system and more about providing data to help militaries in whatever actions they’re engaged in.

There are always discussions about biotech weapons – AI being used to give people the materials to build that capability – as well as what you do when weapon systems are autonomous, and who’s held responsible. So, there are going to be a lot of questions in the accountability field: how do you hold people accountable in this space?

Stephanie: Yeah. And you point out that we can’t just sit around and wish this away – AI and its relationship to weapons is going to keep advancing. So, it’s of utmost importance to start finding ways to put accountability in place, understanding how these tools could be used, what the scenarios are, and how to prevent them from being used for wrong purposes.

I love the way that – yeah, they exist, so why can’t we use them for peace, then? Why can’t we use them for advancing disarmament?

Heather: Yeah. Exactly. And the challenge is that we put so much money, as you know, into war capability and so much less into peace. The budget for the US Institute of Peace is $55 million. And, of course, we know the budget for DoD is in the hundreds of billions.

Stephanie: Well, there’ve been some summits. Also, there was one in the EU as well?

Heather: Yeah. The EU is working on its legislation regulating AI, and the last time I checked, it was coming close to finalization – similar to what they did on data privacy and e-commerce. I think the EU is leading the way in terms of a multinational approach to regulation and to thinking about these tools.

And while it’s not perfect, they’re still putting it out there. And it’s useful that it’s coming from the Europeans, because of the way technology companies operate: they focus most intensely – at least the social media companies do – on the markets where they are most active and under the most scrutiny, which means the United States and European countries, and they care less about the harms taking place in Myanmar or Ethiopia or Sri Lanka.

The hope is that with the EU putting guardrails around this – looking, for example, at facial recognition software that could be used by security institutions or law enforcement – the approach could pick up elsewhere in the world while the US is still having those debates. But at least there are members of Congress bringing attention to some of these challenges.

In terms of the UK summit, what was hopeful there was a greater understanding of a way forward, though that didn’t necessarily appear in the conclusion – it was more that conversation should take place. It was limited in scope, focusing just on what they call frontier models.

Those big large language models that only a few companies and institutions can produce, because of how much money goes into building the computing power for them and designing the algorithms to scrape the internet or create their datasets. So it was much narrower in scope.

What I’m looking forward to is the UN advisory body and what conclusions it reaches ahead of the Summit of the Future taking place in September of this year. They assembled a very international group of people for that body, from different organizations, countries, and institutions, to have conversations on different aspects of AI, linking up to the Sustainable Development Goals developed years ago, and to see what international governance structures there could be around AI.

That’s something to definitely monitor this year, along with the work that the Vatican and the Pope seem to be trying to lead. The Pope had a message last month around the World Day of Peace about the role of AI, and how in 2024 they are going to make more of an effort to come up with ways of thinking about AI in the peace space.

I would recommend people keep tracking those – the UN, and what the Vatican and the Pope are discussing around AI and how they are convening people together.

Stephanie: Well, it seems to me that underneath all of this is a sense that we have to have trust in someone or some ideal. And without that trust, it’s really hard to decide, you know, who’s telling the truth and whose truth we want to follow. So, how is trust built into these discussions about using AI for peacebuilding and, essentially, maintaining democracy? Because if it gets out of control, it seems like we’re headed toward a world of greater harm, violence, and authoritarian use of mis- and disinformation, right? So, who do we trust in this process?

Heather: I would say the peacebuilders, because at the end of the day, with all this technology, it’s hard to replace person-to-person interaction, and the nature of peacebuilding is to interact with people, person to person. I think that should be a key aspect. And what peacebuilders can do is monitor the online space to understand the narratives taking place and who is bullying and targeting individuals online – and maybe get in touch with them.

Some of them aren’t bots or a foreign government. They’re individuals working collaboratively with each other. And so, could we stage interventions with those people to try to reduce their malicious activity online? That’s another aspect.

Combining on-the-ground work with monitoring of the technology space, I think, will be so key. And understanding how societies distinguish and identify what’s true and what’s misleading, because it’s not going to be the same everywhere. What takes place in the Central African Republic, and who’s trusted as a source there, is not the same as in the United States. We need to approach it in a way that’s attuned to the nuances of each society.

Stephanie: You said that if you had more space in your article, you would have said a lot more. So, I’d like to give you some space here to say what didn’t make it into your article or what else you think is important.

Heather: I think about how to combine technology with that on-the-ground activity of people. I have this idea – though we don’t have funds to implement it – of mapping out the information ecosystem within the countries where we’re active as peacebuilders. To build on what I said earlier: understanding where people receive their information, where they go for news, who the online influencers are who shape people’s opinions, and whether we can interact with those individuals.

What types of government institutions do people trust, or need to trust, in terms of governance structures? The other aspect is whether we can shore up judicial systems, because they’re going to play such a key role in litigating what takes place online. One of the things these technology companies do not want is all these countries coming up with different rules and structures about what should be online and how to moderate content, because it will be more difficult for them to operate.

And so, if we could consistently push a lot of these issues through the court systems and shore up the courts in various countries, we could help relieve the tension around it. There are two cases taking place in Kenya. One was brought by activists regarding the online targeting of an individual in Ethiopia, who ended up dying after they had contacted Facebook and said, “This person is being targeted. Can you do something?” They didn’t do anything. The suit was brought in Kenya because Kenya is a hub for Facebook’s content moderation.

The other case was brought by the content moderators themselves, who are Kenyans, over the trauma they endure. Content moderation is done with AI in addition to people reviewing the posts and teaching the AI what it should flag.

How those cases play out is important, because they’re not receiving as much attention as, say, cases before the US Supreme Court or in the European court system. The ability to shore up and support more cases in various countries throughout the world, I think, will be key as an effort to implement regulation.

And then that will push up to a more international level, instead of just having informal agreements – hey, I trust you to do this in good faith. You could have people finally lock in stronger rules about what it means to have freedom of speech within societies and what hate speech is, so it doesn’t vary so unevenly across countries and continents.

Stephanie: As I’m thinking about the recommendations you’re making, they seem to be about research, policy, and big thinking. What about activists on the ground who are still going to go out there, protest, and do disruptive actions? What do they need to know about AI and its potential harms in what they’re about to do, or its potential advantages?

Heather: Yeah. You could do some of this yourself – you don’t need a lot of technical knowledge to go online. If you’re on Twitter or Facebook, you can just do keyword searches about what’s taking place. If it’s a topic you’re very interested in and it’s the source of your activism, you can see what people are saying about it to get a sense of the narratives being produced, so you can better target messaging and counter-messaging – without needing to scrape the whole internet. It could just be searches on their own.
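
A minimal sketch of that kind of low-tech narrative monitoring – tallying keyword hits across a handful of collected posts to see which narratives dominate – might look like this; the posts and keywords below are placeholder data:

```python
# Minimal sketch: tally which narrative keywords appear most often in a set
# of collected posts. The posts and keywords are placeholder data; in
# practice you would paste in results from platform keyword searches.
from collections import Counter

posts = [
    "The ceasefire talks are a trick, don't trust them",
    "Community kitchen reopened after the ceasefire held",
    "Heard the monitors are foreign spies spreading lies",
]
keywords = ["ceasefire", "trick", "spies"]

counts = Counter()
for post in posts:
    text = post.lower()
    for keyword in keywords:
        if keyword in text:
            counts[keyword] += 1

# Most common narratives first, to help target counter-messaging.
for keyword, n in counts.most_common():
    print(f"{keyword}: mentioned in {n} post(s)")
```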

I think the other aspect is collective action – working with others and planning a strategy. I think the judicial system will be so key. Look for ways to participate in lawsuits being brought by nonprofit organizations that are being targeted by certain companies, such as X. The Center for Countering Digital Hate is being targeted because they point out hate speech online.

So, how can you support those organizations – by bringing more attention, maybe signing a petition, writing messages to members of Congress, or going out to a protest? There are everyday actions people can take in support of these nonprofits that are doing so much work to point out what’s going on, hold technology companies accountable, and help advance new ideas in this space.

I think that’s an area where nonviolent activists can participate – joining up with others in different parts of the world to hold conversations, or even just writing articles on Medium or elsewhere. Just have a website to put them on, to share the knowledge.

Stephanie: Thank you so much.

Michael: I was on a panel at the UN in 2014, and I made a couple of suggestions at the end of my presentation. One of them was that the UN should have a look – this was a panel on violence in the media. So, my suggestion was that the UN create a kind of index that would allow the general public to see which sources of media were propagating violence.

They thought it was a very good idea. And they told me right away that nobody would do it. So, it’s just, you know, a bit frustrating. And I wonder if you’re aware of efforts to educate people to make them AI literate, the way we make people media literate. Are any governments or organizations involved in that?

Heather: Yeah. I think the Center for Humane Technology, and All Tech Is Human – a nonprofit based out of New York – make efforts to engage with individuals. You can sign up for their news alerts, their mentor/mentee programs, and their open trainings to learn more about technology. So, that’s one source.

A lot of what I know of is mainly media literacy, not necessarily AI literacy. But I think there may be more of an effort in 2024 to think this through and apply it in different communities. Even in communities where AI may not be prominent, AI is being used on them, and it’s important to have that knowledge.

Because if you’re online using social media, you’re part of the AI – AI is doing content moderation for you in your local language. So, have that awareness, and know that you don’t need the latest iPhone or ChatGPT to be plugged into the world of AI, because it’s already there and you’re already a part of it. And if it’s scraping the internet for data, you could be caught up in that as well.

Stephanie: Well, thank you so much for your time. What brings you hope in this conversation?

Heather: Despite all the harms that could take place, what gives me hope is that people have greater awareness, and that the tools people are worried about are the same tools you can use to share information and awareness in order to mobilize people. So, think of it as the counter-aspect of everything that’s taken place.

Yes, this could be a harm, but how can we actually use it to help reach more people who we couldn’t before? The original idea when social media started was to increase the commons so that you’re meeting people in different parts of the world, or even in your country, your state, who you normally wouldn’t have encountered. It’s to hold on to that and to try to leverage these tools to be able to do that. And to make connections and to build grassroots support.

And to spread messages and counter-messages. When there’s so much noise about adding billions of dollars to build one weapon system, you’re now able to go online, with greater access, to counter that and present an alternative of what could be – how to resolve a conflict or prevent one.

Stephanie: Beautiful. Thank you.

Michael: Amen.

Stephanie: You’re at Nonviolence Radio, and we’ve been speaking with Dr. Heather Ashby, associate director of the US Institute of Peace’s program on disruptive technologies and artificial intelligence.

Let’s turn now to the Nonviolence Report with Michael Nagler.

Nonviolence Report

Michael: I’m Michael Nagler. This is the Nonviolence Report for the very beginning of 2024.

And in this month, January, on the 15th, we will be celebrating the birthday of Martin Luther King Jr. I just want to read a couple of quotes from him. He said, “We need a radical reordering of our national priorities. Ultimately, a great nation is a compassionate nation.” Oh boy, if we could only take that to heart.

And in 1964, in his famous Nobel Prize speech, he said, “I refuse to accept the cynical notion that nation after nation must spiral down a militaristic stairway into the hell of thermonuclear destruction.” That is Martin Luther King and his control of imagery at his very best.

Before I get into the news, I would like to mention that we recently had a very good interview with our friend Georgia Kelly, who’s an expert on the Mondragon Cooperatives in the Basque Region of Spain. And we’re always very interested in economic and social experiments that can lead us up the stairway, away from the hell of thermonuclear destruction, toward a compassionate nation and a compassionate world.

Well, the event that’s on everybody’s mind and on everyone’s radar right now is the horror that is unfolding in Gaza. It’s interesting to note that South Africa, formerly an apartheid nation, has invoked the Genocide Convention against Israel. And the Wall Street Journal has reported that this is the most devastating warfare in the modern record.

I don’t want to give you all the details now – how many people killed, how many children without water, etc. But I will add a quote from the World Food Program’s chief economist. He said, “I’ve been to pretty much any conflict and I have never seen anything like this, both in terms of its scale, its magnitude, but also at the pace that this has unfolded.” And I want to add another observation from an interview conducted by Owen Jones in the UK with a couple of experts on Israel.

This is Paul Rogers who said that, “Not everyone is behind Netanyahu. Look at Women Wage Peace,” which is, again, one of the joint ventures between Jewish and Palestinian women.

There’s also – and I didn’t know this – in addition to Neve Shalom and Hand in Hand, those two Israeli-Palestinian schools, there’s actually a town – I haven’t found out which one – that is deliberately a joint Jewish and Palestinian town.

Now, as we know from past experience, if we are willing to learn from it, the action Israel is carrying out now will only redound to its own bitterness and sorrow, as we have seen in other parts of that general region, namely Afghanistan, but also in Africa, where ISIS and the Taliban are resurgent after the military efforts taken against them.

So, to continue with the good news, though: there is an organization called The Green Olive Collective, which holds highly informative webinars with Palestinians and others, including the very important group Combatants for Peace – former IDF soldiers who are now for peace.

And recently, Common Dreams posted interviews with three Israeli kids who really captured my heart. Their last names are Mitnick, Davidov, and Keidal. Even before Gaza happened, they had decided to be conscientious objectors. In fact, they’re part of a group of more than 200 high school students, what they call […], in Israel, who announced back in August that they would refuse to enlist because of Israel’s occupation of Palestine, including the West Bank, East Jerusalem, and Gaza.

And here’s one quote from [Tal Mitnick and Ariel Davidov] which appeared in Waging Nonviolence today. “For me, one of the most fascinating things about activism as a Jew in Israel is that it can encompass several spheres. We can engage in politics, pressure the Knesset, and try to create a democracy in Israel, while also working in fellowship with Palestinian activists and people in the occupied territories.”

I’m thinking of a word here that was used by Daniel Levy in one of those Owen Jones interviews. He talked about dehumanization and rehumanization, the need for Israelis and Palestinians to look into one another’s eyes. If you don’t do that, you continue on what Daniel Levy has called a death spiral for Israel.

Well, to get back to our country for a minute. There’s a special report for the end of 2023 that Rivera Sun has put on Nonviolence News.

And what she did to wrap up the year was list 66 success stories for safer communities, teens, women, LGBTQ people, and beyond – heartening successes like 96 wins for labor strikes; 99 gains for the earth, climate, and environment; and 54 advances in racial justice. This is all to the good.

Campaign Nonviolence Action Days: we’ve just been through the 10th annual iteration of this campaign, and it was very successful – there were no fewer than 5,057 actions. Campaign Nonviolence also gave out a number of changemaker youth grants and worked on the Nonviolent Cities project, among other things.

The Peace Alliance brings this to our attention: there is currently a bill in Congress, H.R. 1111, to create a Department of Peace. I might just add that this idea – that in addition to a Defense Department, which is a euphemism for a war department, the United States should have a Department of Peace – was advanced by a Philadelphia physician; I think the exact date is 1793.

So, it took a while to gather momentum. It did lead to the creation of the United States Institute of Peace, which is not quite the same thing yet, because it’s not a department of the government. The Department of Peace bill has 40 cosponsors, and the Peace Alliance is going to be setting up Zoom meetings for you with the offices of your members of Congress. They’re providing a lot of material. You can join over 400 groups and individuals in endorsing the legislation, and they have other actions as well.

Moving abroad for just a second: in Italy, there is a Peace & Justice Pilgrimage scheduled, as there is most years – this year from June 23 to June 30. A pilgrimage, of course, to the home city of Francis of Assisi, which I’m happy to say has not changed very much since his day. Quite apart from the magnetic influence of the great saint who lived there, just the feeling of walking around in a 12th-century town and experiencing human scale, before buildings and transportation sort of took over, was, I found, a very, very invigorating experience.

I’d like to mention a couple of books in closing. One of them is by Jerry Elmer, and it comes up because I mentioned the refuseniks a while ago. It’s a comprehensive book called “Conscription, Conscientious Objection and Draft Resistance in American History.” It’s a follow-up to books by Peter Brock, which told a very thorough history up until about the 1960s, I think.

Elmer points out, and this is a direct quote, “During WWI there were more than 1900 criminal prosecutions involving antiwar or antidraft speeches, newspaper or magazine articles, pamphlets, and books.” These were not even refuseniks – just people calling for nonparticipation in that war, which we really did not have much reason to participate in.

And in the two-year period between June of 1917 and June of 1919, 877 people were convicted. I imagine one of them was probably Emma Goldman, who was not only convicted but deported.

Conscientious Objectors were persecuted rather violently in that period. Among others, there were four Hutterites, who would be automatically excused today because they belong to a nonviolent religion. They were sentenced to 20 years, which they spent in solitary at Alcatraz. And I won’t even give you the details here of how they were treated.

If you want to read a really interesting book about this whole period of history and this response to the war, there’s a wonderful one by Archibald Baxter called “We Will Not Cease.” He was a farm laborer in New Zealand, one of only 14 conscientious objectors who were forcibly transported to the Western Front in 1918. It is a very, very moving book – a testimony to human courage on the one hand and human backwardness and ignorance on the other, against which courage, of course, prevails.

There’s also poetry of the period that I happen to be thinking of. E.E. Cummings has a poem called “I Sing of Olaf.” And Edna St. Vincent Millay has a wonderful poem that goes, “I shall die, but that is all that I shall do for death.”

That was one of the two books I wanted to mention. The other is by Deepak Bhargava and Stephanie Luce, called “Practical Radicals.” They argue, on historical evidence, that a creative mix of multiple aligned strategies is necessary to achieve transformational change – with which I completely agree.

“Their book is an ode to strategy,” says our friend Maria Stephan, “bridging from the world as it is to the world as it should be,” and highlights seven key strategies: base-building, disruptive movements, narrative shifting – that’s number one in our Roadmap – electoral change, inside-outside campaigns, the momentum model, and collective care.

Finally, I want to mention a resource called Restorative Media. It is a global source of inspiration and information on this very important institution and its media – restorative media. They now need some support from us to stay alive. They’re the first restorative justice podcast, they’ve been doing it for over 12 years, and they have had an impressive list of guests. Therefore I call your attention to it: Restorative Media.

And while we’re at it, to Restorative Justice on the Rise, a project that started in 2011. And their first podcast was with Arun Gandhi. These podcasts are created by Molly Rowan Leach, who also interviewed me.

Unarmed Civilian Peacekeeping is something we always like to keep our finger on the pulse of at Metta. And in October, 84 people from 26 countries gathered in Geneva, Switzerland, to share with United Nations representatives the success of unarmed civilian protection and accompaniment – UCP/A, as it’s now called. They brought in stories of its effectiveness from all corners of the world.

And here’s really what’s most important: the capacity for UCP to be a true game-changer in decreasing global violence was simply undeniable from the evidence they presented. They also helped develop a code of ethics, and they outlined ways to protect the communities they serve from undue outside influence. Attendees from six different continents were there problem-solving, developing ideas for best practices and strategies, and brainstorming next steps for promulgating – and here I’m quoting – “these incredibly powerful and empowering methods for nonviolent conflict transformation.”

There will be a Zoom on January 17 at 8 PM Eastern Time. You can find the link from Meta Peace Teams in this country, or by looking up UCP.

And that is the roundup, or part of it, of the nonviolent events that have taken place in our world at the very beginning of 2024. I sincerely hope it’s going to be all upward from here. And I look forward to sharing that with you.

Stephanie: You’ve been listening to Nonviolence Radio. We want to thank our guest today, Dr. Heather Ashby from the US Institute of Peace; KWMR, our mother station; our entire Nonviolence Radio team; and Waging Nonviolence and the Pacifica Network for their support in syndicating the show. And to all of our listeners: if you want to learn more about nonviolence or hear archives, find us at www.mettacenter.org. Until next time, please take care of one another.