Public Understanding of AI

Text Transcript with Description of Visuals

Alex Kirkpatrick: Hello. I’m Alex Kirkpatrick from the Center for Sustaining Agriculture and Natural Resources at Washington State University. Good strategic communication evolves from first understanding who it is you’re communicating with: their demographics, their values, their lived experiences, their existing understanding, and where they got it from. The more you know or can infer about your message recipient, the better the chances that you can construct a message that is meaningful to them, useful to them. To that end, this video explores what social science and public opinion research can tell us so far about the public understanding of artificial intelligence. In relation to this, we’ll explore the influence of media on public attitudes. Let’s explore. Text on screen, “Alex Kirkpatrick, PhD.” Alex speaks in front of a colored background.
[ Music ] A circular combine harvester cuts through a field of golden wheat. As it does, a woman controls it via a touch pad. Then, in a large greenhouse, liquid sprays from the tube that stretches across the vegetables growing below it. [ Music ] In a field, a woman uses a tablet to take a photo of a soybean plant. Then, a human-controlled harvester drives over and cuts wheat. The logos for “Western SARE,” “Washington State University,” “AgAID Institute” and “National Institute of Food and Agriculture, US Department of Agriculture.” Text on screen, “Public understanding of AI & media influence.”
Alex Kirkpatrick: As yet, there haven’t been many studies into what folks in agriculture think about AI or its impact on agricultural practice. So perhaps it’s good to start with who’s actually using AI in general. One way or another, you’re probably using AI quite a lot during the day, as is everybody else. For example, spam filters, social media, chatbots, facial recognition, et cetera. But only about 27% of Americans think they use AI on a daily basis or almost constantly throughout the week. Twenty-eight percent perceive that they use it only several times a week. Meanwhile, 45% of Americans think they don’t really use it much at all during the week. That may be unlikely considering its ubiquity and our tech dependency. Under a third of Americans can recognize AI in all six of the common technologies listed to the right. Thirty-nine percent can see AI in three to five of these technologies, while around a third, again, recognize AI’s presence in zero to two of them. As Alex speaks, text on screen appears. “People may use AI more than they think they do. Kennedy et al. 2023. Public awareness of artificial intelligence in everyday activities. Pew Research Center.” A pie chart appears with the heading “Perceived usage.” In blue, “Almost constant” covers “27 percent” of the pie chart. In yellow, “several times a week,” “28 percent.” In red, “less than several times per week,” “45 percent.” A second pie chart appears with the heading, “Awareness of AI in common tech.” In blue, “high, 30 percent.” In red, “low, 31 percent.” In yellow, “medium, 39 percent.” A list to the right reads, “1, Wearable fitness trackers.” “2, Chatbots.” “3, Product recommendations.” “4, Facial recognition.” “5, Playlist recommendations.” “6, Spam filters.”
So, the story seems to be one of high dependence on a technology that isn’t widely recognized or understood. And this can make the informed conversations about AI and its risks and benefits that we need to have, as a society and in agriculture, quite tricky. And that’s why I always recommend, throughout this toolbox, grounding your conversations by first exploring everyday examples of AI, like spam filters, et cetera. So how are people feeling about AI? Well, in general, people are more concerned than excited about the prospect of AI in society. Only about 10% of Americans are more excited than concerned about AI. Over half of all Americans are more concerned than excited. The top worry folks seem to have is about privacy. Fifty-three percent of Americans appear to feel that AI will hurt more than it helps when it comes to keeping their personal information safe and private. Thirty-seven percent are unsure. Only about 10% of us feel that AI is likely to help when it comes to data security. This, and other surveys around the world, tend to suggest significant levels of ambivalence and pessimism. Another theme that this study revealed, and one that crops up in other studies, is that lower education levels, which often go hand in hand with lower incomes, are associated with more negative views of AI. In reality, AI might be just as likely, if not more likely, to impact higher-income folks. But that reality may not be as salient to higher-income workers as it is to lower-income folks, for a range of reasons, including relative economic security and the fact that automation has, historically, tended to disproportionately impact manual tasks. On the more positive side, the same survey found that people are most optimistic about AI helping them find the products they want. Perhaps everyday AI in the form of recommendation algorithms feels safer, more familiar to many. You can see by the weight of all the gray bars that people are still very unsure either way. It’s also interesting to me that, other than privacy concerns, people are pretty pessimistic about AI’s impact on customer service and finding accurate information online. Misinformation has been a problem for society in the digital age, with real-world impacts in relation to science, health, the environment, and politics. I see in other studies, too, that people fear the presence of AI and social media bots could make that situation worse. Alex speaks in a studio. A new heading reads, “People feel concerned about AI.” “Tyson and Kikuchi, 2023. Growing public concern about the role of artificial intelligence in daily life. Pew Research Center.” A pie chart has the heading “AI’s presence in everyday life.” A second chart appears, with the heading, “AI’s impacts on privacy.” A new heading reads, “People are optimistic about some AI innovations more than others.” “Tyson and Kikuchi, 2023. Growing public concern about the role of artificial intelligence in daily life. Pew Research Center.” Another heading reads, “Feelings on the impacts of AI use cases.” A bar chart rises from zero to “100 percent.” Seven bars are colored gray, orange, and blue. Blue is “helps more than hurts,” orange is “hurts more than helps,” and gray is “not sure.” Bar 1 heading reads, “Finding products and services online.” Blue is “49,” orange, “15,” and gray, “35.”
Bar 2 heading reads, “Making safer cars and trucks.” Blue is “37,” orange, “19,” and gray, “44.” Bar 3 heading reads, “Doctors providing quality health care.” Blue is “37,” orange, “20,” and gray, “42.” Bar 4 reads, “People taking care of their own health.” Blue is “33,” orange, “19,” and gray, “47.” Bar 5 reads, “Finding accurate information online.” Blue is “33,” orange, “27,” and gray, “40.” Bar 6 reads, “Companies providing quality customer service.” Blue is “28,” orange, “34,” and gray, “37.” Bar 7 reads, “Policing and public safety.” Blue is “24,” orange, “26,” and gray, “49.”
Peer-reviewed academic research also points to an unsure, divided America. According to this article from Eom et al., published last year, many, if not most, Americans wanted more government oversight and guardrails for AI. However, the current US administration is lessening regulation and oversight and leaving industry to formulate its own ethical and safety guardrails. In particular, people wanted social concerns, such as racial bias and other socioeconomic inequities, addressed first, before AI is integrated into society on a mass scale, although it seems like this integration is happening faster than most perceive. It’s still a big question as to how much AI will help or hinder social harmony and equity. Medical applications, like machine learning for disease detection, are particularly welcomed, though. To me, this says that other pro-social uses of AI, perhaps to enhance sustainability or detect agricultural diseases, might also be welcomed if framed for the wider good. But the strong association people perceive between AI and commercial tech companies might be a powerful barrier to acceptance of AI among some. People in general don’t place a lot of trust in Silicon Valley. When communicating about AI with the public, or grounding the conversation in everyday AI, it might be useful to know which technologies people feel most familiar with. According to this recent study, people’s comfort level on a scale of zero to four was highest for GPS and mapping applications. This technology is, of course, widely used in agriculture and land management. People were least comfortable with the prospect of self-driving cars, which are dependent on GPS and mapping. Presumably, this discomfort might extend to any self-driving technologies, like autonomous farm vehicles. People are also relatively uncomfortable with facial recognition and ChatGPT, two everyday AI technologies. As Alex speaks, a heading reads, “Americans divided over AI.” “Eom et al., 2024. Societal guardrails for AI? Perspectives on what we know about public opinion on artificial intelligence. Science and Public Policy.” Bullet points read, “Publics call for more regulation and government oversight before fully embracing AI.” “Emphasize addressing concerns such as racial bias and social inequity before mass adoption.” “Some applications, for example, cancer screenings, welcomed.” “Majority are pessimistic that tech companies, for example, Facebook, X, Google, et cetera, will prevent misuse of their platforms.” A new page heading reads, “Platt et al., 2024. Public comfort with the use of ChatGPT and expectations for healthcare. Journal of the American Medical Informatics Association, 31(9), 1976-1982.” A bar graph measures comfort from zero to “3.5.” The bars are, “GPS and mapping, 3.16.” “Banking and fraud alerts, 2.98.” “Smart watches, 2.8.” “Lane change alerts, 2.72.” “Content recommendations, 2.63.” “Online shopping recommendations, 2.6.” “Alexa home assistant, 2.56.” “ChatGPT, 2.23.” “Facial recognition, 2.19.” “Targeted ads, 1.97.” “Self-driving cars, 1.66.”
What about AI in the workplace specifically? Well, according to the Census Bureau, official integration in US firms is still relatively low, at between 6 and 7%, but that figure doubled between 2023 and 2024, hinting at rapid integration across businesses. Of course, actual usage in the workplace is likely a great deal higher. What this figure says is that only about 6 to 7% of US firms have officially integrated AI into their business practices and workflows. But the other 93% or so are likely using AI-backed technologies, like social media, Zoom, or generative AI like ChatGPT, one would imagine. Mainly, businesses are using AI to automate marketing and online communication. They’re integrating virtual customer service agents, and they’re using machine learning for analytics. So here we see, again, that AI isn’t necessarily threatening manual labor, as people often assume. Firms say they aren’t really laying off staff as a result; rather, they replace some worker tasks with AI, freeing those workers up to do other things. So AI might not be replacing existing workers, but it’s unclear whether it’s becoming a barrier to creating and filling new roles. A new page heading reads, “Businesses report little replacement. Bonney et al., 2024. Tracking firm use of AI in real time: A snapshot from the Business Trends and Outlook Survey (Discussion Paper, US Census Bureau).” Bullet points are, “Integration still low but doubled over the year 2023 to 2024 (6 to 7 percent of US firms).” “Marketing automation.” “Virtual agents.” “Data slash text analytics.” “Replacing worker tasks (not workers).”
Meanwhile, a survey of a thousand employed Americans found the following: About 12.6% of them said they’d lost a job to AI in the past. It’s unclear if that’s perception or reality, but it’s certainly a different story from the one that business firms are telling. Among creatives, artists, and knowledge workers, one in five said they’d lost a job to artificial intelligence. Half of the workers were at least somewhat worried about losing their current job to AI. Among them, the most concerned were people of color and younger people. This hints again at the fact that people who sit lower on the socioeconomic ladder tend to feel more insecure about the effects of AI. So overall, the public is somewhat ambivalent and unsure about AI, leaning towards the pessimistic. They’d like to see more regulation and guardrails, perhaps in an effort to manage this uncertainty. But not all AI technologies are seen the same, and not all publics feel the same about AI. The important thing, I suppose, is to not only monitor research for insights, but to ask. And this is especially true of agricultural audiences, who are very diverse in terms of roles and socioeconomic backgrounds, and because there simply isn’t enough data on what people in agriculture think and feel about AI. But what’s likely true of agricultural audiences, like the general public, is that their views of AI, and of science and technology in general, are heavily influenced by media. A new heading reads, “Workers report being replaced. Dahlin, E., 2024. Who Says Artificial Intelligence Is Stealing Our Jobs? Socius, 10.” Bullet points read, “12.6 percent said that they’d lost a job to AI in the past.” “20 percent of creatives slash knowledge workers said they’d lost a job to AI in the past.” “50 percent of respondents were at least somewhat concerned about losing their current job to AI.” “BIPOC, under 25 years old, or lower income predicts concern.”
[ Music ] After high school education ends, the vast majority of people are dependent on media to stay informed about science and technology. Science fiction does impact audience perceptions of AI. Some studies suggest it can heighten the salience of far-away risks, like superintelligence. Meanwhile, other studies suggest that people who consume more science fiction are more pro-AI. Science fiction consumption can also predict support for generative AI and AI artwork. But there’s a difference between correlation and causation, right? It makes sense that people who are more enthusiastic about new technology also consume more science fiction. Media never influence audiences in a straightforward manner. Studies suggest that consuming more factual AI-related media, like news, tech blogs, et cetera, is related to more active engagement and information sharing. On the one hand, some studies find that more awareness through media relates to higher perceived risks, even fear. At the same time, more media exposure seems also to relate to increased expectations that AI will help to enhance personal performance and efficiency. Again, we see an ambivalence, an uncertain balance between perceived risks and perceived benefits. Importantly, people who engage with AI content, like news and discussions on social media, feel they know more about AI. That doesn’t necessarily mean they do know more objectively than those who engage less with AI-related media, but they subjectively feel more informed. This implies that engaging your audiences in online discussions about AI may help them feel more empowered and efficacious. Agenda-setting theory is one of the most well-supported and researched areas of media effects and communication. We find that the more the media train attention on an issue, the more people start to perceive that issue as important to society. Another way of thinking about that is that increased media coverage might tell people what to think about, but not necessarily what to think. On the other hand, if media aren’t really covering a topic, such as agricultural artificial intelligence, then people aren’t likely to think that that topic is of much importance to society, even if it is. This graph shows the total number of articles about artificial intelligence published in the top four national newspapers in the US, including The New York Times and the Washington Post, and the top four in the UK, including The Guardian and The Times of London, between 2008 and the end of 2017. Trust me when I say that the upward trend that begins in 2014 only grows steeper and higher as new commercial technologies, like ChatGPT, and driverless technologies hit the market in later years. From data like this, we can infer that the issue of AI was increasingly important to society year on year from 2014 onwards, stoked by technological news events, like advanced game-playing computers beating human champions at chess and Go, and by thought leaders, like Stephen Hawking, warning of the dangers of increasingly intelligent and autonomous machines. While that data might show you that AI was increasingly attracting media attention over time, it doesn’t tell you in what context. Media’s ability to actually shape what audiences think is associated with the second level of agenda setting, more commonly known as framing. The media package AI within frames of reference, which influence audience perspectives. Science fiction frames are commonly employed and often appear alongside frames of existential threat.
This can create an almost reflexive association between AI and societal collapse in the minds of audiences who are used to learning about AI in proximity to speculative themes like killer robots. Increasingly, though, AI is framed as positive, playful, enjoyable. It depends heavily on the media outlet in question and who its audience is, but studies across the world note rather positive framing of AI in media over recent years. Some research has found that positive framing, in particular, can shape positive public perceptions, while negative framing can stoke anxiety, even fear, depending on the context. In Europe, in particular, you might increasingly see AI framed in relation to threat and the efficacy of social solutions, like universal basic income. Even in the US, some politicians have campaigned for universal basic income solutions to the threat of mass automation, and they’ve attracted a lot of media attention along the way. If replacement starts to happen on a mass scale, you’re likely to hear more talk of social safety nets, like basic income policies, as people attempt to mitigate the threat. Commonly, you might also see AI framed in relation to technological and social progress. The Pandora’s box frame refers to themes of unchecked power, the unknown, and releasing forces beyond our control. Meanwhile, AI’s implications for defense, policy, privacy, and industrial dominance mean it’s often framed in relation to national security. Often, frames depend, at least in part, on the ideology of the media doing the framing. Right-wing media tend to focus on the positive industrial gains of AI or the positive impacts on the economy, national defense, and law and order. Meanwhile, left-wing media around the globe have framed AI more cautiously, even negatively, with respect to employment, bias, and potential injustices. Again, know your audience, know their media diet, and see how AI is framed in that media. And don’t forget: whether you do so intentionally or not, you will also frame AI in a certain way when you communicate about it. Why not make it a conscious decision and plan strategically, in advance, how and in what context you’re going to frame AI when you discuss its impacts on agriculture and sustainability with your audiences? Text on screen, “Media and public understanding of AI.” A heading reads, “Media affects public attitudes and behavior related to AI.” A digital art rendition of automated robots marching through a futuristic street with ominous drones following. A bullet point reads, “Science fiction influences public perception and support.” [ Music ] The image changes to a cell phone screen with the text “Breaking news” across it. More bullet points: “AI media consumption predicts participation in AI discussions and information sharing.” “News exposure relates to perceived risk and fear.” “News exposure also relates to higher performance expectancy.” “Social media exposure is related to subjective AI knowledge, utility in bridging the AI knowledge gap.” A new photo shows a pair of hands and, floating above the palms, the letters “A” and “I.” This is surrounded by icons linked by neon thread. The icons are a tree-like structure, a bar chart, a gear symbol, a document or screen, and others. As Alex speaks, more text appears beside him. The heading reads, “Media helps to shape the AI agenda.” Text.
“Agenda Setting Theory.” “The more the media focus attention on any issue, the more publics and policymakers deem that issue as important to society, and worthy of more focus.” A line graph heading reads, “Total number of AI articles published per year.” The graph rises from zero to “800” from the year “2008” to “2017.” There are two lines, with blue representing the UK and red representing the US. They rise at a similar rate, from less than “100” in “2008” to almost “800” for the UK and “600” for the US by “2017.” As Alex continues, more text joins the previous. “Second-level agenda-setting.” “How the topic is framed influences public perception.” An artist’s impression of a robot hand holding up a human skull. A news heading reads, “AI could pose extinction level threat to humans and the US must intervene, State Dept.-commissioned report warns.” To the left of this, a heading reads, “The media frames AI for audience consumption.” A bullet point: “Existential threat is prevalent in US news.” The picture is replaced by one of AI-animated cows smiling in a field, with the news heading, “Google Labs just launched a fast and fun AI image generator. Meet Whisk.” A bullet point reads, “So are discourses around benefits, enjoyment.” A photo of a woman looking at her phone while holding a takeaway coffee cup. There are also three emojis, each with a check box; the smiley emoji has a check, but the speechless and sad emojis don’t. A bullet point: “Framing can shape positive slash negative public opinions.” An infographic illustration with the heading, “Universal Basic Income.” Four boxes, “unconditional,” “automatic,” “individual” and “as a right,” are checked. Arrows stem from a taxes graphic, circle around, and point to three people. Three arrows then stem from the people and point back to the taxes icon. A bullet point reads, “Social security, economic policy and the ‘threat’ of mass automation are covered.” A computer animation of a wooden box, its lid open, with a swirling mass of stars flooding out, mixed with pink and blue mist. Bullet points: “Other common themes include.” “Social progress.” “Pandora’s box.” “National security.” In the studio, Alex.
[ Music ] Here are the key insights to take from this video. Public awareness of AI is not commensurate with usage. Some scholars call this the “AI effect,” whereby audiences don’t perceive AI’s presence in technologies as those technologies become more familiar. Across the globe, studies find that people hold more negative views of AI than positive. I’ve seen no evidence to suggest that this phenomenon doesn’t also translate to agriculture, but AI in ag, and the opinions of agricultural audiences, are arguably understudied. People are most concerned with the impact AI might have on their data privacy, their ability to remain employed, and its potential to heighten digital misinformation. These are issues you might want to explicitly research and address when talking about AI with audiences. Importantly, it’s lower-income folks, or those with less formal education, who express the greatest fears. In reality, AI is technologically exceptional in that it might well disproportionately affect higher-income and knowledge workers rather than manual workers. But perception and reality often differ. People in the US seem to want the government to take a bigger role in regulating AI; however, that doesn’t seem to be the aim of the current US administration, which is loosening government oversight. As with just about any science and technology or political issue, media play a central role in constructing audiences’ thoughts, attitudes, and behaviors towards AI, particularly in these early stages of diffusion, when AI literacy and awareness might be low and people rely on media for information.

[ Music ] In other modules, I introduce some ways that you can better listen to your audience and enhance your effectiveness as a strategic communicator. But I hope this overview of what we know so far about the public understanding of artificial intelligence has inspired some thinking about AI in society, and perhaps about what ag audiences potentially think and feel about this fourth industrial revolution.
Text on screen. “The takeaways.” An electric light bulb burns bright. Inside it, stems grow up and out of the glass to become green leaves. Bullet points: “Public’s awareness of AI less than usage (the AI effect).” “People are more pessimistic than optimistic about AI.” “Privacy, misinformation and employment are primary concerns.” “People with lower income and educational attainment often feel most impacted.” “People want more government oversight.” “Media shapes public attitudes.” In white over black, text reads, “The closer.” In the studio, Alex. More text: “Until next time.”
Thank you for your energy and your attention. The logos for “USDA, National Institute of Food and Agriculture, US Department of Agriculture,” “Western SARE, Sustainable Agriculture Research and Education,” and “AgAID Institute, National Agricultural AI Institute for Transforming Workforce and Decision Support.” Text on screen reads, “This material is based upon work that is supported by the National Institute of Food and Agriculture, U.S. Department of Agriculture, under award number 2023-38640-39571 through the Western Sustainable Agriculture Research and Education program under project number WPDP 24-013. U.S.D.A. is an equal opportunity employer and service provider. Any opinions, findings, conclusions, or recommendations expressed in this publication are those of the author(s) and do not necessarily reflect the view of the U.S. Department of Agriculture.” More text reads, “This material is based upon work supported by the AI Research Institutes program supported by NSF and USDA-NIFA under the AI Institute, Agricultural AI for Transforming Workforce and Decision Support (AgAID). Award Number 2021-67021-35344.”