Public Sensemaking of Risk and Threat

Text Transcript with Description of Visuals

Alex Kirkpatrick: Hello. Alex Kirkpatrick here from the Center for Sustaining Agriculture and Natural Resources at Washington State University. In this video, we're going to look at how we might best go about communicating the risks of AI to agricultural audiences, based on empirical research. We'll look at some ways in which perceived risk affects adoption of science and technology, like AI, and we'll explore some best practices that you might utilize when engaging audiences with risk. Let's explore.
Text appears, "Alex Kirkpatrick, PhD." Alex stands in front of a multicolored background.
[Music]
An autonomous farming machine drives across a field as a woman uses a tablet device. Water sprinklers move through a greenhouse. A woman holds a tablet up to a crop.
Social science can enhance your strategic communication efforts by showing you the psychological variables that inform important behavioral outcomes you might want to influence, like adoption decisions or risk mitigation behaviors. You may or may not wish to persuade thoughts, attitudes, and behaviors, but, in any case, social science can at least tell you what factors are most important to your audience's decision-making, ensuring that you, as a communicator, are equipping them with the most useful information to help them decide and act. For example, the extended parallel process model explains, to some extent, how people respond to risk and threat messaging. It emphasizes the importance of arming audiences with information on the potential problem, like job displacement or loss of digital privacy, and efficacy information at the same time. In other words, appropriate risk communication focuses on both the problem and the solution in tandem, never just one independent of the other.
The logos for Western SARE, "Sustainable agriculture, research and education;" Washington State University; "Ag AID Institute, national agricultural AI institute for transforming workforce and decision support;" and "USDA, national institute of food and agriculture, US department of agriculture." Text reads, "Public sensemaking of risk and threat."
[Music] Let's take the threat of job loss through automation as a working example. Think of your risk message, whether it be a social media post or a face-to-face dialogue with your audience, as containing twin components: threat and efficacy. Part of the threat component might be whatever statistics you have on hand regarding the objective risk of unemployment. Efficacy might include how well or poorly social safety nets perform in mitigating the threat of joblessness, or upskilling as a solution to avoiding unemployment. According to this well-tested model, a person might reflexively first consider the threat. Is it severe enough to worry about? Am I susceptible? If the severity of the threat is perceived as low, i.e., unemployment isn't such a big deal to me for whatever reason, or if they think they aren't susceptible to the threat, then they've kind of already made up their mind. The threat is low, so I don't need to really do anything or talk about it anymore. The message has to communicate sufficient severity and susceptibility if an audience is to take the threat seriously and be prompted to action, like adopting a mitigation strategy. And that goes for AI, wildfire, flood, any threat. But let's assume that the threat is seen as realistically severe and the person sees themselves as sufficiently vulnerable. Then the process can continue. An AI and Ag audience perhaps needs information on the efficacy of things like upskilling and AI literacy in countering job loss. Perhaps you're offering a course on AI literacy, in which case, how does attending your course help avoid the threat? Relatedly, the audience considers whether they themselves can actually achieve new gains in employability or literacy. Perhaps they question whether they have the ability to learn from your course. Can I do it?
If the recommended response is not seen as effective, or people don't feel they have the ability to carry it out, or both, then the result is usually fear of the acknowledged threat. And fear without efficacy is pretty useless. It potentially freezes the person. No more useful conversation: ignore or deny the threat, reject your message, inaction. But you are an effective communicator and have successfully kept the conversation focused on a threat that both exists and can be navigated through an appropriate response that is achievable to your audience; in which case, the message is received, and a space has been created for behavior change, whether that be mitigation of the threat of unemployment, agreement to attend your course on AI literacy, or adoption of an AI application that might pose some threats, but ones that can be navigated successfully.
Alex talks in front of a background. Text reads, "processing risk messages." More text reads, "the extended parallel process model." "Message components. Threat. Efficacy." An arrow points to a box that reads, "threat appraisal. Perceived severity. Perceived susceptibility." An arrow labeled "low" leads to a box that reads, "message rejected. Inaction." An arrow labeled "high" leads to a box that reads, "efficacy appraisal. Response efficacy. Self efficacy." An arrow labeled "low" points to a box that reads, "fear." An arrow from "fear" points to a box that reads, "message rejected. Inaction." An arrow from "efficacy appraisal" labeled "high" leads to a box that reads, "message acceptance. Action change."
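The two-stage appraisal flow described above can be sketched as a small decision function. This is purely an illustration of the model's logic; the class name, field names, and the 0-to-1 scales with a single threshold are hypothetical simplifications, not part of the extended parallel process model itself.

```python
# Illustrative sketch of the extended parallel process model's appraisal flow.
# Names and the numeric threshold are hypothetical, for illustration only.
from dataclasses import dataclass


@dataclass
class RiskMessage:
    severity: float           # perceived severity of the threat (0-1)
    susceptibility: float     # perceived personal susceptibility (0-1)
    response_efficacy: float  # perceived effectiveness of the recommended response (0-1)
    self_efficacy: float      # perceived ability to perform the response (0-1)


def appraise(msg: RiskMessage, threshold: float = 0.5) -> str:
    # Stage 1: threat appraisal. A low-perceived threat ends the process.
    if msg.severity < threshold or msg.susceptibility < threshold:
        return "message rejected: inaction"
    # Stage 2: efficacy appraisal. High threat without a workable,
    # achievable response skews the outcome toward fear.
    if msg.response_efficacy < threshold or msg.self_efficacy < threshold:
        return "fear: message rejected, inaction"
    # High threat plus high efficacy opens space for behavior change.
    return "message accepted: action/change"


# A message that conveys a real threat and an achievable response lands.
assert appraise(RiskMessage(0.9, 0.8, 0.7, 0.6)) == "message accepted: action/change"
```

Note how efficacy is only ever evaluated after the threat clears the first gate, which is why the video stresses pairing threat and efficacy in every message.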
Let's work through another example. Take the threat of AI superintelligence: the realistic threat to health, safety, and human control; the symbolic threat to human uniqueness and values. A valid threat. But AI superintelligence doesn't exist. So your aim as a communicator might be to have the threat rejected so you can talk about more pressing threats, perhaps the risk to privacy if your client is thinking of adopting new machine learning software. First off, always acknowledge the perceived threat as being valid. You may lose trust if you don't, and trust is essential for effective risk communication to take place. But in essence, you might emphasize what exists and what does not. You don't worry about efficacy, because you don't want the conversation to go all the way to efficacy appraisal. You don't want to prompt behavior change at all, presumably. You can help your audience persuade themselves that the severity of superintelligence, though hypothetically high, is not something that affects them at all. Extremely low probability, zero susceptibility, thus zero threat. Let's move the conversation on.
Text reads, "the improbable threat of AI superintelligence." Text reads, "message components. Threat. All that exists right now is weak AI with narrow functions. Reactive machines with limited memory. Superintelligence is purely hypothetical at the moment and not something nearby that you are likely to experience in your lifetime. Efficacy. This isn't something that you need to make any plans for in any immediate sense." The "threat appraisal" box appears with the bullet points "perceived severity" and "perceived susceptibility." An arrow labeled "low" points to "message rejected, inaction."
Often, you might not want to persuade anyone to take a specific action, but rather simply inform whatever decision they come to. Imagine, for example, a local farmer who wants to hear what you think about an autonomous tractor because they might get one. They see the upsides to productivity, but they're worried about the risks. Whatever your ultimate aim, you're serving your audience best by arming them with information on both threat and efficacy simultaneously. Be transparent about what you do and don't know. If you can, arm them with evidence of the likelihood of certain risks, even though we must acknowledge people's tendency to ignore probability. If they're talking about the threat specifically, they probably already perceive the threat as high and themselves as susceptible enough for it to be something worth worrying about; in which case, your role is to enable an informed decision by equipping them with enough efficacy information to make a rational, informed choice.
Text reads, "the realistic threat of autonomous vehicles." Text reads, "message components. Threats. Technical malfunctions. Health and safety. Privacy. Efficacy. Infrastructure and safety systems. Learning resources. Software protection." An arrow points from the message components box to a new box that reads, "threat appraisal. Perceived severity. Perceived susceptibility." An arrow labeled "high" points to a box that reads, "efficacy appraisal. Response efficacy. Self efficacy." An arrow labeled "high" points to a box that reads, "informed adoption. Decision slash action."
On the other hand, maybe you're talking to someone who's really enthusiastic about adopting AI but perhaps hasn't really thought about the risks involved; in which case, you'll want to convey both the severity of the threat and your audience's susceptibility alongside mitigation strategies. Whatever the scenario, the extended parallel process model tells us that an effective risk communicator is one who can convey both risk and efficacy in tandem to better persuade, or simply inform, audience decision-making about risk. But what might inform a person's decision to contact you about the risks of AI in the first place?
Alex speaks in front of a background.
[Music] The Risk Information Seeking and Processing model tells us that, once again, it all starts with an initial threat perception. An individual will come to perceive a threat based on the characteristics of the risk, or based on their conversation with you, or their media diet, or a host of other influences. A sufficiently high perceived threat will evoke an affective, or emotional, response. High perceived threat with low perceived efficacy, and the response may skew towards fear, as the previous model suggests. But with sufficient efficacy, the response may be anger, confusion, anxiety, stress, perhaps even hope and optimism. Either way, where more emotion is stirred, more information seeking is likely to occur. Heightened emotion heightens communication behavior generally. I'm sure you don't need me to tell you that emotive information carries farther, and here, emotion stokes information seeking. Your emotions will also influence how sufficiently informed you feel. If you're anxious about automation, you might feel insufficiently informed. You might manage that anxiety by seeking out more information on AI's effects on the labor market. If you feel sanguine because you perceive yourself as invulnerable to automation, you might feel that you know enough; in which case, you're less likely to want any more information. But how informed you feel also, perhaps rather obviously, influences your information seeking. This felt sufficiency is strongly informed by your own subjective norms, of course. And what I mean by that is your habitual behaviors and attitudes regarding information. For example, a lot of people feel they know a lot about something even when they don't. High subjective knowledge, we call that. And, of course, your individual norms, when it comes to information, have a direct influence on how you seek or ignore new information.
If you're a heavy lifelong CNN and mass media consumer, then this will influence both how informed you feel about a topic, depending on coverage, and how you seek more information. In turn, whatever information you're seeking will influence any outcome behaviors in some way. Seeking information on breast screening may help you convince yourself to go and get checked, or an advertisement from Meta might support your decision to download an app with their latest chat bot integrated. But if you want to learn more, whatever you learn will inform associated outcome behaviors. You may well be the information source that people seek out with regard to AI. You may in turn stoke more information seeking elsewhere, especially if you stoke threat without efficacy and leave them in a fearful, disempowered state. Or maybe you balance threat and efficacy and stir hope and curiosity in your audiences, leaving them eager to find out more. In that instance, it's useful that you pre-arm yourself with credible resources to point them to. For best results, make your messages emotive. Remember that people are active, emotional beings, not merely passive receivers of dry, factual information.
Text reads, "risk information seeking." Two circles under the label "threat" are named "severity" and "susceptibility." Arrows appear from both circles to point to a new circle labeled "affective response." An arrow points to a new circle that reads, "risk information seeking." A large arrow points down from "affective response." Inside the arrow, the top is labeled "high" and the bottom, "low." Text reads, "information sufficiency." An arrow points from "information sufficiency" to "risk information seeking." A circle beneath the large arrow reads, "informational subjective norms." An arrow points from "informational subjective norms" to "risk information seeking." An arrow points from "risk information seeking" to text that reads, "behavior."
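The directional claims of the seeking-and-processing chain above, that stronger emotion, a larger felt information gap, and stronger informational norms all push seeking upward, can be sketched as a toy scoring function. The weights and scales here are entirely hypothetical; they only mirror the directions of influence described in the video, not any fitted model.

```python
# Illustrative sketch of the risk information seeking and processing (RISP) chain.
# The 0-1 scales and weights are hypothetical, chosen only to reflect the
# directional relationships described: more affect, a bigger sufficiency gap,
# and stronger informational norms each increase information seeking.
def information_seeking(affect: float, felt_sufficiency: float,
                        subjective_norms: float) -> float:
    """Return a 0-1 propensity to seek risk information.

    affect: strength of the emotional response to the perceived threat (0-1)
    felt_sufficiency: how well informed the person already feels (0-1)
    subjective_norms: habitual expectations about staying informed (0-1)
    """
    gap = max(0.0, 1.0 - felt_sufficiency)  # felt information deficit
    return min(1.0, 0.5 * affect + 0.3 * gap + 0.2 * subjective_norms)


# An anxious person who feels under-informed seeks more information
# than a sanguine person who feels they already know enough.
anxious = information_seeking(affect=0.9, felt_sufficiency=0.2, subjective_norms=0.6)
sanguine = information_seeking(affect=0.2, felt_sufficiency=0.9, subjective_norms=0.6)
assert anxious > sanguine
```

The point of the sketch is only the ordering: stir emotion and expose an information gap, and your audience is more likely to go looking for more, ideally toward the credible resources you have pre-armed yourself with.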
Perhaps it's more ethical to aim to communicate other things alongside credible raw information. How about transparency as an outcome variable? Perceived transparency is central to building trust and facilitating the open flow of perspective and information. We can communicate transparency through our openness to share what we do and don't know, what we do and don't feel, and what personal values and attitudes we have on a subject. We can communicate transparency by avoiding jargon and statistics in isolation, and by clarifying in simple terms for our audiences. We can communicate transparency through a proactive willingness to share resources, coming equipped with additional resources, whether they be contacts, links, or suggested readings. We can communicate transparency through active listening and displaying empathy and emotion, rather than dry, hierarchical, formal communication alone. While doing so, demystify the mysterious by bringing AI into the here and now. Show examples of AI all around us. Having an audience think about spam filters, social media, or opening their phones with their face or thumbprint can prime them to perceive the reality of AI all around them, in proximity to themselves. Talk about narrow AI, machine learning, and deep learning. This can help avoid getting derailed with abstract, faraway risks, like superintelligence. Active listening is key to all science and risk communication. This means making eye contact, providing non-verbal responses, repeating back, clarifying, and mirroring emotion and sentiment. It's beneficial to communicate through body language that you care, you're present, you're listening. This not only aids the flow of information by making you trustworthy, but it actually keeps you mindfully in the moment, absorbing new information and your audience's perspectives. You could also actively listen to your audience without even talking to them.
Social media analytics can tell you a lot about who your audiences are and what factors they might find important in terms of AI risks and rewards. The effectiveness of any of these strategies is dependent on how well you know your audience. Actively analyzing your audience, their preferences, their communication styles, their ideologies, their media diets, these can all tell you a lot about them and enhance your ability to communicate with them. You can better translate complex information into their lived experiences the more you know them. And remember to deal with threat and efficacy in tandem, never just one independent of the other. And yes, I will bang that drum again.
Text reads, "AI risk communication tips." Text appears, "dare to be transparent and trustworthy." More text appears, "demystify the arcane. Emphasize AI's presence. Weak AI with narrow function. Focus on existing risks, not hypothetical threats." More text appears, "practice active listening." More text appears, "know your audience and translate into their language." Text reads, "communicate threat and efficacy as one."
The first resource listed on screen explores extended parallel processing of AI risks on YouTube. This serves as a nice exploration of the model, as well as how risk perception is heavily informed by media choice, particularly where a technology is relatively novel. The second is a study of the risk model in the context of organic farmers, which you might find informative on multiple dimensions. Last is a book by Paul Slovic, probably the preeminent name in risk communication. You can't go wrong reading his work, as most of the concepts presented here are at least in part developed from his seminal works on public understanding of risk. Highly recommended.
Text reads, "use social science as a communication tool." "Schwarz (2024). The mediated amplification of societal risk and risk governance of artificial intelligence: Technological risk frames on YouTube and their impact before and after ChatGPT. Journal of Risk Research, online." Another bullet point reads, "Bessette et al. (2019). In the weeds: Distinguishing organic farmers who want information about ecological weed management from those who need it. Renewable Agriculture and Food Systems, 34(5), pp. 460-471." Another bullet point reads, "Slovic (2010). The Feeling of Risk: New Perspectives on Risk Perception. Earthscan."
[Music] In this video, we've thought about risk communication in general and in relation to AI in agriculture specifically. I've emphasized how social science and communication literature can enhance your strategic communication efforts, whether in relation to artificial intelligence or any other topic, really. There's a world of research to explore and use to guide your own approach to communication. We explored just a couple of frameworks. One of them, the extended parallel process model, informs us that when discussing the threats of AI, we should emphasize both threat and efficacy in tandem, so as to empower our audience to make informed decisions. Alternatively, if you're trying to encourage adoption, you might emphasize AI as an efficacious solution to wider threats facing agriculture, like workforce variability. But whatever your aims, be transparent about the threats and risks presented by AI adoption, or else lose trustworthiness, the key to good communication. Active listening is also key, as is a deep analysis of your audience, so you can best translate complex threat information into a language they can readily understand and use to inform their decisions. But remember not to be too dry, as science and risk communication has a tendency to be. We are emotional beings, and risk decisions are inherently emotional, so construct emotive messaging.
Text reads, "the takeaways." A background with a lightbulb surrounded by growing leaves. Text reads, "social science and communication research can inform strategic AI communication." Another bullet point reads, "communicate AI threats and efficacy in tandem." Another point reads, "risk communication best practices include: transparency and openness; active listening; audience appreciation and translation; emotiveness."
[Music] Engage in a personable manner, and remember our shared humanity. There are many other communication models to explore and potentially utilize; however, they're beyond the scope of this toolbox. But hopefully, this video has been a useful introduction to risk communication, and to communicating the risks of AI in agriculture more specifically. Thank you for your energy and engagement.
Text reads, "the closer." Alex speaks in front of a background.
[Music]
Text reads, "bye for now." The logos for "USDA, national institute of food and agriculture. US department of agriculture." "Western SARE Sustainable agriculture, research and education." "Ag AID institute, national agricultural AI institute for transforming workforce and decision support." Text reads, "This material is based upon work that is supported by the national institute of food and agriculture, US department of agriculture, under award number 2023-38640-39571 through the Western Sustainable Agriculture Research and Education program under project number WPDP24-013. USDA is an equal opportunity employer and service provider. Any opinions, findings, conclusions, or recommendations expressed in this publication are those of the authors and do not necessarily reflect the view of the US department of agriculture. This material is based upon work supported by the AI research institutes program supported by NSF and USDA-NIFA under the AI institute. Agriculture AI for Transforming Workforce and Decision Support. AgAID. Award number 2021-67021-35344."