Text Transcript with Description of Visuals
| Audio | Video |
| Alex Kirkpatrick: Hello, I’m Alex Kirkpatrick from the Center for Sustaining Agriculture and Natural Resources at Washington State University. Spreading scientific and technological literacy and knowledge is important, but it’s also important to appreciate the limitations of setting literacy as an overarching goal of science and environmental communication. Aiming to transfer knowledge alone may in fact distract from setting more profitable, perhaps even more achievable communication goals. This video explores the concepts of trust and trustworthiness, both in relation to artificial intelligence and you as a communicator of science and technology. Let’s explore. | Text on screen. “Alex Kirkpatrick, PhD.” Alex appears, speaking in front of a multicolored screen. |
| [ Music ] | A circular combine harvester cuts through a field of golden wheat. As it does, a woman controls it via a touch pad. Then, in a large greenhouse, liquid sprays from a tube that stretches across the vegetables growing below it. In a field, a woman uses a tablet to take a photo of a soybean plant. Then, a human-controlled harvester drives over and cuts wheat. A screen appears with the logos for “Western SARE,” “Washington State University,” “AgAID Institute” and “USDA NIFA.” Text on screen, “Trustworthy AI and trustworthy communication.” |
| Let’s start by differentiating between trust and trustworthiness. Trust is action. Trust is to make oneself vulnerable to another, particularly where the trustor might lack the information or resources to feel they fully comprehend every element of a decision. For example, you might trust a doctor to vaccinate you against a virus like COVID-19. The choice to take that action might depend on the extent to which you trust science, scientists, authorities, medicine, and your doctor. In relation to artificial intelligence, a farmer might choose to trust a technology provider, an engineer, or even the information you provide them when deciding whether or not to adopt an innovation like an autonomous drone system. Sharing knowledge is important, but trust is the outcome variable that you often ultimately intend to affect. Trust me, trust my information. Allow the information to shape your worldview and behavior. Knowledge is knowledge, but trust is action. The route to earning trust is to signal trustworthiness. Your own trustworthiness is something that you have a significant amount of control over. We can define it as a trustor’s perception of a trustee who is creating a potential vulnerability. For example, you may hold the belief that your doctor is qualified enough and has the integrity to recommend appropriate treatments for you. They’re trustworthy, so they can be relied on for complex information, because you yourself lack an understanding of your own physical and mental health that’s as advanced as theirs. | Alex reappears on the screen, speaking. A heading reads, “There is a difference between trust and trustworthiness.” Bullet points. “Trust equals the action of making oneself vulnerable to a trustee.” “Example, trusting a doctor to vaccinate you and protect you from a virus.” “Example, trusting a new technology to work effectively.” A new bullet point reads, “Trustworthiness equals a trustor’s perceptions or beliefs about a potential trustee who is creating a potential vulnerability.” |
| A scientist, a science communicator, a representative of a scientific institution, likely conveys trustworthiness via multiple subjectively received cues. These cues signal to a trustor the level to which they should take the action of trusting, making themselves vulnerable. According to some science communication scholars, trust in scientists is formed of three dimensions. The first is benevolence, which refers to the perception that a person has genuine care and concern for the well-being of others, and is motivated by a desire to act in the best interests of those they serve. The second is perceived openness, which translates to the feeling that a person displays a willingness to share truthful, transparent, and complete information, even when it includes uncertainties, risks, and potential drawbacks. But it doesn’t stop there. Studies also find that demonstrating a willingness to be open about yourself, your life, your personal background, your feelings, and personal thoughts also helps to communicate openness. The third dimension is integrity. This refers to the perception that a person consistently adheres to strong moral and ethical principles and demonstrates a commitment to doing what is right, even when it’s difficult or inconvenient. | A photograph shows a man and woman in a field talking with each other with baskets full of green vegetables around them. A heading reads, “Trustworthy communicators signal 3 key attributes.” Bullet points. “Benevolence. Genuine care or concern for the wellbeing of others.” The photograph changes to two men talking in a field of corn. A new bullet point. “Openness. Willingness to share honest, transparent, and complete information.” This is followed by two other men standing in a wheat field. Bullet point. “Integrity. Consistently adheres to strong moral and ethical principles.” |
| Of course, this definition of trustworthiness doesn’t really translate to machines and algorithms. But providing a clear and shared definition of trustworthy AI is a matter of hot debate. In fact, it’s one of the most salient debates in relation to AI in society. It’s debatable whether trust can actually be calibrated to any external standard that defines AI trustworthiness. But according to the trust in explainable AI model, the perceived trustworthiness of AI is dependent on six considerations. Can I trust it to work well most of the time? Is its behavior predictable? A user is going to find it difficult to trust an AI if it behaves in unpredictable ways. Is it mechanically reliable? Am I going to get the right answers? It’s a big risk to trust AI to assist your decision-making if it hallucinates or otherwise gives wrong answers to important questions. Data security and privacy are among the primary concerns of members of the public. So, will it keep my data safe? Is it compliant and safe for users to use? As is the case with driverless cars, people are often hesitant to trust an autonomous machine with their physical health and safety. But now let’s look at why trustworthiness and fostering trust might be the most profitable goal to guide your strategic science communication output. | As Alex speaks in a studio, a heading reads, “Trustworthy AI signals 6 key attributes.” Bullet points. “Consistently works well.” “Predictable outputs.” “Reliable.” “Gives the correct answers.” “Protects data and privacy.” “Meets health and safety standards.” |
| Sometimes as scientists and professional communicators, we get a little bogged down in transmitting rather than listening. We can become fixated on the concept of knowledge sharing and literacy. Social science shows that scientists have a tendency to reflexively assume that any public skepticism about STEM topics is due to a lack of STEM knowledge. But knowledge isn’t necessarily correlated with emotion and attitude on any subject. An attitude is formed of cognition, affect and behavior. Raw knowledge or literacy might help to shape all of these aspects in some way, but it isn’t as powerful as you might think. And the reason we often overestimate the power of literacy, facts, logic, and scientific knowledge is perhaps because it was so powerful for us. But most people don’t engage in any technical STEM learning after high school. So a scientist’s information preferences are not common on a societal scale. All these assumptions back the hypothesis that still haunts most STEM communication efforts: the belief that positively addressing the perceived deficit in attitudes is as simple as transmitting knowledge downstream from expert to non-expert. | Text on screen “limitations of and solutions to ‘the deficit model.’” The next slide shows a photo of a man in a white coat leaning on a podium as a student looks on. Behind him on a screen are images of molecular biology, specifically “RNA and DNA structures.” The heading reads, “The knowledge deficit hypothesis is flawed.” Bullet points. “Assumption 1. Public skepticism about STEM is due to a lack of STEM knowledge.” “Assumption 2. Knowledge is necessarily correlated with emotion and attitude.” “Assumption 3. Scientific and technical understanding is the epistemological gold standard of knowledge.” “Hypothesis, Addressing the deficit in public understanding of STEM will improve deficient attitudes toward science, technology and scientists.” |
| As important as knowledge and knowledge sharing are, need they be the focus of how you construct, frame, and disseminate information? One major downfall of the deficit hypothesis is that hypotheses are testable, and social science finds little support for this one. We instead find that socioeconomic status, political ideology, selective exposure to media, etcetera, are all far more powerful drivers of STEM attitudes than knowledge and literacy. Furthermore, scientific literacy among the public is actually fairly good according to the NSF Science and Technology Indicators. Perhaps more importantly, these indicators have remained fairly stable over the years and generations, while trust in science and scientists has faced stark declines and polarization in recent years. Trust, we find, is one of the main drivers of scientific attitudes and of decisions to adopt or reject scientific innovations and ideas, like AI. Ultimately, the issue from a STEM communication vantage is that focusing on literacy as the primary goal of STEM communication potentially distracts from aiming to develop mutual trust and collaboration or engaging with feelings of risk and uncertainty. | As Alex speaks, the heading reads, “Research challenges the knowledge deficit hypothesis.” Bullet points. “Empirical data does not support these assumptions or the hypothesis.” “Trust in science and scientists is the main driver of STEM behavior.” “Overly focusing on literacy and transmission of raw fact distracts from setting more profitable goals.” |
| So how might we go about creating spaces for building trust and mutual listening surrounding potentially contentious topics like artificial intelligence and agricultural sustainability? Let’s look at perhaps the most common mode of communication professionals engage in. There are ways of avoiding your presentations and workshops defaulting to downhill, one-way communication events that treat audiences as passive receivers of information, which they are not. All good communication is built from audience analysis. So gather what information you can on your attendees, their group and individual differences, their priorities, and let your observations guide your communication plan. There are a million and one ways to get an audience involved, from quick surveys to back-and-forth chat to games. Ultimately, your communication should be authentic to you. Authenticity is a trustworthy attribute. Open with participation after you introduce yourself. Think of it as a hook to snare attention and involvement. Plan for some involvement every eight minutes or so. Going beyond 10 minutes of unidirectional information spillage risks losing attention and establishes a boundary between you, the speaker, and the audience, the receiver. Organize presentations and workshops into 30- to 45-minute blocks at most, less if you can. Information overload can lose you trust, it loses you credibility, and it can make you look like a poor communicator. And don’t overly focus on yourself and your oratory. Don’t neglect the importance of visual communication, audio and music, and your non-verbal communication too. Invite deliberation. Portray the resilience, the openness, the credibility, and benevolence to actively engage with all people and all sorts of questions. Have the confidence and transparency to say when you don’t know enough about something. Perhaps invite others in the audience to answer questions on your behalf if they know.
Create a forum, facilitate a space for dialogue to show people they are valued and heard. | As Alex speaks, the heading reads, “Strategize presentations and workshops.” Bullet points. “Do your audience analysis homework.” “Focus on audience participation and involvement.” “Ask for participation at opening and every 8 to 10 minutes thereafter.” “Keep it to 30 to 45 minutes with breaks.” “Multimodal communication.” “Invite questions and show resilience to controversy.” |
| Cafe Scientifique, or Science Cafe, is a concept that originated in Britain and France and spread fast throughout the world. Public houses, bars, coffee houses, restaurants, tea shops: any culturally relevant meeting spot provides a socially levelling venue to host conversations and deliberations about science and technology. To quote Anne Grand, a proponent of cafe science, “If a person walks into a lecture hall, they expect to be lectured at. If a person walks into a pub, a bar, a cafe, they expect to have a conversation on their terms.” A typical format might invite people in at their leisure, knowing that at, say, 7 pm, an AI engineer from a research group is going to stand up and give a short, informal talk about what they’re working on or the issue at hand. Then there might be a short break before an open forum. Afterwards, free-form discussions about what was presented can take place and anyone can approach the experts to talk more. There’s no need for tools and shields like PowerPoint because this isn’t formal and it isn’t a one-way presentation. It’s relaxed dialogue and an opportunity to model openness, humanness, and social proximity with audiences. Cafe Science is a great way to engage anyone with any complex issue or science, but it might be particularly effective for engaging publics surrounding potentially problem-laden topics like the influence of AI on sustainable agriculture. It’s a great space for publics to hear, perhaps even contest, the scientific perspective and feel heard themselves. | A poster reads “biodiversity science cafe October 28th 7:00 PM.” Below this are three individual photographs of a man and two women. To the left the heading reads, “Café Scientifique is a space for informal engagement.” Bullet point. “Conversations about science on a level playing field.” A new poster appears of multicolored hands reaching up. 
Text reads, “Café Scientifique, please join us for an interactive evening with a diverse panel of patients, family members and caregivers to help us answer the question, Together, how can we improve the care of critically ill patients?” The logos for “Alberta Health Services, CIHR, IRSC Canadian Institutes of Health Research,” and one that is not readable. To the left, a new bullet point. “Promoted informal engagement.” A new poster reads, “Sacramento science distilled. House plant botany the inside scoop about your photosynthetic housemates. A conversation with Alison Greenlon, public program coordinator.” A photo of a woman along with images of plants. In collaboration with “science really says, capital science communications,” and the “Powerhouse Science Center.” A new bullet point. “Short talks, breaks, open forum discussions.” A new poster reads “Athens Science Cafe presents Wayne Parrot, PhD. the 411 on GMO’s.” A new bullet point. “PowerPoint is discouraged.” A new poster has a logo “ProVeg International.” The heading reads, “the future of food alternative proteins as a multiple problem solution and a unique opportunity.” The time and place, along with host and guest, also appear along with several bowls of seeds. A bullet point. “Particularly effective for engaging publics surrounding problems in science and society.” |
| A slightly more formal way of upstreaming engagement, or affording the public a say in your research or plan before it’s fully formed, is the Citizen Jury. For example, if you’re creating a program surrounding AI preparedness in ag, it might benefit you to consult your audience before you even start designing the intervention. A Citizen Jury is a deliberative, democratic process in which a group of stakeholders is brought together to discuss, evaluate, and make recommendations on a specific issue. You select representative stakeholders—perhaps farmers, ranchers, laborers, local community members—to be jurors for the afternoon, the day or the week, or whatever your timeframe is. You transparently present your plan to them for consideration. The jury evaluates, asks questions, deliberates with each other, and makes recommendations. In this way, you’re incorporating public values and perspectives into your approach from the outset and not trying to retrofit and tailor after the fact. You’re affording publics an enhanced stake. You are showing you care, benevolence. By giving citizens more say over the science or related interventions that directly affect them, you’re modeling transparency and trustworthiness. It’s also an engaging way of sharing information and knowledge. Of course, any recommendations are non-binding and advisory only, but they do have moral, democratic, and social clout. | As Alex speaks, the heading reads, “Citizen juries can upstream engagement.” Bullet points. “Affected citizens discuss, evaluate and make recommendations.” “Incorporates public values and perspectives in the design process.” “Particularly effective surrounding contentious science and tech.” “Non-binding recommendations.” |
| Nowadays a good deal of STEM communication and engagement happens online, and how trustworthiness is communicated through media can be significantly different from communicating trustworthiness in person. Peripheral cues are more superficial cues to trustworthiness, such as seeing how many likes a post has or how attractive the media or speaker is. These things are more important to people at a distance, lacking the immediacy of live dialogue and all the cues that come with face-to-face communication. For example, richer media can be associated with trustworthiness and credibility. Poor quality media output can diminish trustworthiness. Superficial things like visual clarity, audio quality, good writing and editing are cues as to your credibility, therefore trustworthiness, like it or not. Social presence is the extent to which the receiver perceives the immediate presence of the communicator, perceives their personality and humanness. It is associated with greater perceived trustworthiness. Practically, you can enhance your social presence through richer media—video conferences, YouTube videos, audio, podcasting, etcetera. Consistent asynchronous communication through social media, podcasts and the like, can result in a parasocial relationship with your audience. You might not know them, but they might come to feel that they know you. When people feel they have a positive parasocial relationship with someone or an institution, they tend to trust them. I’ve been trusted by Washington State University and Western SARE to communicate asynchronously with you about AI and its impacts on agriculture. I display their symbols at the beginning and end of each video. Many of you might trust those symbols. Therefore, vicariously, perhaps you trust me by association. Persuasion is a symbolic process. | A photo of hand holding a cell phone while the other hand touches the screen. Icons float away from the phone, thumbs up and Love Heart icons. 
To the left the heading reads, “Trustworthiness can be communicated asynchronously and online.” Bullet point. “Peripheral cues are more influential.” A new photo of a person holding a cell phone up to a laptop screen. Over this, a screen with a play icon, and the text, “video tutorials.” New bullet point. “Richer media can be a peripheral cue to trustworthiness.” A new photo shows a man in an online meeting with four other people. A new bullet point. “Social presence is vital.” A new photo shows a cell phone lying flat in the hand of someone, and from it, male and female icons float into the air. A new bullet point. “Allow for parasocial relationships.” The “Western Sustainable Agriculture Research and Education” logo. The “Washington State University” logo. The “National Institute of Food and Agriculture US DEPARTMENT OF AGRICULTURE” logo. A new bullet point. “Promote symbolic associations.” |
| So here are the important things to take away. Trustworthiness is one of the hot topics in AI design. It’s likely that an adopter is concerned with the reliability of a system, its predictability, who’s accountable for its actions and decisions, and issues related to safety and data security. This might be a useful shorthand for knowing what to focus on when researching or communicating about ag AI applications. In terms of the scientist, the communicator, or the AI developer, trustworthiness is signaled through perceived benevolence, openness, and integrity. Mutual trust is the superconductor of information, particularly complex techno-scientific information of the kind you might have to communicate. But more than that, trust in science and scientists is a key influencer of science attitudes and behavior, often more so than literacy. This is why many science and technology communicators nowadays focus on communicating trustworthiness and fostering trust in science through outreach. | Text on screen. “The takeaways.” An electric light bulb burns bright. Inside it stems grow up and out of the glass to become green leaves. Bullet points. “Trustworthiness is a key topic in AI. Reliability. Predictability. Accountability. Safety. Data security.” “Trustworthiness is a key topic in STEM communication. Benevolence. Openness. Integrity.” “Dialogue approaches to STEM communication can enhance trust and be tailored to different publics.” |
| How you best go about engaging your audiences with ag AI or any other topic comes down ultimately to your individual communication style, the topics you cover, and the audiences you serve. But if you aim to signal trustworthiness or achieve trust as an outcome, knowledge will flow back and forth less inhibited. Trust is to communication what a superconductor is to electrons: it enhances the flow of information. Thanks for engaging. | Text on screen. “The closer.” In the studio, Alex speaks. |
| [ Music ] | Text on screen. “Cheers.” The logos for “National Institute of Food and Agriculture US DEPARTMENT OF AGRICULTURE,” “Western SARE,” and “Ag Aid Institute.” Text on screen reads, “This material is based upon work that is supported by the National Institute of Food and Agriculture, U.S. Department of Agriculture, under award number 2023-38640-39571 through the Western Sustainable Agriculture Research and Education program under project number WPDP 24-013. U.S.D.A. is an equal opportunity employer and service provider. Any opinions, findings, conclusions, or recommendations expressed in this publication are those of the author(s) and do not necessarily reflect the view of the U.S. Department of Agriculture.” More text reads, “This material is based upon work supported by the AI Research Institutes program supported by NSF and USDA-NIFA under the AI Institute, Agricultural AI for Transforming Workforce and Decision Support, (Ag AID). Award Number, 2021-67021-35344.” |