Text Transcript with Description of Visuals
| Audio | Video |
| Dr. Alex Kirkpatrick: Hello, I’m Dr. Alex Kirkpatrick from the Center for Sustaining Agriculture and Natural Resources at Washington State University. This video explores the threats, possible risks, and perceived risks associated with artificial intelligence. Many of the possible threats posed by increased AI use in agriculture haven’t yet been realized in 2025, when I’m recording this lecture. Agricultural AI adoption is, as of 2025, still in its relative infancy, but there’s no better time to think about, communicate, and manage risk than before risk events occur. Many of the threats I’ll discuss are the general threats associated with new technologies and automation. There may be some threats particular to agriculture that aren’t covered in this video, some perhaps yet to emerge, but I hope this video helps to prime your thinking about the potentially adverse consequences of AI in society that are so often the focus of the media’s and the public’s attention alike. Let’s explore. | Text on screen, “Alex Kirkpatrick, PhD.” Dr. Alex Kirkpatrick addresses the camera. |
| [Music] | A combine harvester harvests a crop as a woman stands in a field typing on a tablet. Machinery sprays crops in a greenhouse. In a field, a woman uses a computer tablet to look at a plant. Title slide, “The possible risks and threats of artificial intelligence.” The logos for Western SARE, Washington State University, Ag AID Institute and USDA National Institute for Food and Agriculture display above. |
| Dr. Alex Kirkpatrick: First, let’s clarify three interrelated concepts referred to throughout this video, each important to risk communication: threat, risk, and perceived risk. | Title slide, “The possible risks and threats of artificial intelligence.” The logos for Western SARE, Washington State University, Ag AID Institute and USDA National Institute for Food and Agriculture display above. |
| A threat is a potential danger or event that could cause harm. For example, there exists a threat that generative AI, such as large language models that can chat with users about a broad range of topics, could replace some agricultural extension work by answering stakeholder questions and queries accurately and in a timely manner. Lots of environmental and agricultural organizations are likely already relying on generative AI to write blog posts, social media content, presentations, and so on. The concept of risk, in comparison, is more quantitative: a product of the likelihood of the threat actually materializing and the potential magnitude of the negative outcomes should it occur. High risks are generally both relatively likely to occur and relatively costly when they do, like developing cancer or getting injured racing motorcycles. Meanwhile, low risks are either relatively unlikely to occur, have little impact if they do, or both. On one hand, it might be highly likely that generative AI can and could replace some tasks that extension agents or crop consultants currently perform, but the risk remains low if the impact is simply to free those workers up to do more work rather than to cost them their jobs entirely. Alternatively, the risk could be very big if the magnitude of the threat, say mass unemployment across industries, is supported by even a slight probability. When it comes to AI, we don’t yet have enough data to offer accurate risk assessments. This is often the case with new technologies, so many AI risks are currently perceived rather than directly observed. To most people, risk is more of a feeling. Perceived risk is often irrational, illogical, and heavily influenced by cognitive, emotional, and social factors. 
Probability neglect describes the observation that non-experts routinely discount the likelihood of an event when assessing a risk, focusing instead on the potential magnitude of the consequences if things go wrong. If the magnitude of the threat is high, like mass unemployment due to AI automation, the risk is likely to be perceived as high despite the relatively low probability of it occurring in our immediate future. Importantly, perceived risk is subjective and can’t be assumed, but you can, and perhaps should, ask how your audience perceives a risk and to what extent. As with all effective communication, first, know your audience. A threat is real when it is felt. It’s critical that these feelings are acknowledged, respected, and never diminished if effective dialogue is to take place, regardless of how likely or unlikely, big or small, you might personally perceive that same risk to be. | Three columns display with the headings “threat,” “risk,” and “perceived risk.” Under the heading “threat,” text reads, “a source of potential danger or an event that could cause harm. 
e.g., Generative AI poses a threat to people employed in agricultural communication and extension roles.” Text added under the heading “risk” reads, “Threat likelihood X threat magnitude.” Underneath, the following text is added, “E.g., The risk of unemployment depends on multiple factors such as the feasibility of an organization’s AI integration and the degree to which generative applications outperform human extension and communication agents.” Text added under the final heading, “Perceived risk,” reads, “Individuals’ or groups’ feeling of risk, uncertainty, and vulnerability, often neglecting probability.” Underneath, the following text is added, “E.g., Individual extension agents and those working in agricultural communication might feel differently vulnerable to the impacts of AI on their work depending on issue awareness, self-efficacy, or the perceived severity of the threat.” |
| So, let’s talk about the salient threats of AI. | Back to Doctor Alex Kirkpatrick, who speaks to the camera. |
| There exists the threat of individual job loss through automation and, on the societal scale, mass unemployment. In such a scenario, the wealth gap between those who automate and those whose roles are automated stretches even wider. Hence, the magnitude of the threat is great. It’s important to always acknowledge, though, that the risk is highly uncertain. We simply don’t know the extent to which AI will actually replace people like farm laborers, drivers, and decision makers. This uncertainty can actually heighten worry and perceived risk among those who feel most impacted. Some economists project that developments in AI and advanced robotics will reshape economies globally, rapidly changing the types of tasks that humans are required to perform in the workplace and replacing many workers. Meanwhile, others caution that such technologies will have significantly smaller economic impacts than earlier innovations like the personal computer, and point to a lack of progress in automating sectors such as retail and construction as evidence of AI technology’s limited impacts on the human workforce. But a 2017 report from the McKinsey Global Institute, although admittedly now a little dated, suggested that the jobs most susceptible to automation are the 51% of activities in the global economy that involve either physical activities or the collection and processing of data. The same report suggests that around half of the activities that humans are currently paid $16 trillion to perform in the global economy could be automated simply by adapting existing technologies. | Text on screen, “Economic threat.” A dollar sign sits at the end of the text. A slide with the title, “Automation is an economic threat.” Beside it, a dollar sign sits on a stock market arrow with points at both ends. 
Bullet point text reads, “Job replacement, mass unemployment, economic inequity.” A new bullet point added underneath reads, “High magnitude loss, uncertain probability.” The image changes to a man in a suit crouched on the floor with his head in his hand. Stock market chart lines display behind him. A new bullet point added underneath reads, “‘Fourth Industrial Revolution’ hype or realism?” The image changes to two people in protective helmets looking down an aisle in a data center. Another bullet point added underneath reads, “51% of paid tasks in the global economy are ‘highly susceptible to automation.’” The image changes to a dozen farm workers bent over picking in a field. |
| Here and now, in 2025, we aren’t really observing much in the way of job replacement as a result of AI in any industry. But what we know from prior revolutions is that even when tasks become automated, people are generally moved to other tasks, upskilled, or in a position to exploit automation to perform more of the same tasks more efficiently. On the more hopeful side of things, there are many in ag who see AI as a potential emancipator, performing all the less desirable, even dangerous manual tasks, freeing workers up to perform other necessary functions that they might not otherwise have time or energy to perform. But none of this necessarily counters the threat of AI replacement felt by many workers. | Back to Alex Kirkpatrick, who speaks to the camera. |
| There are numerous technical threats posed by AI, as with any emerging technology. For example, AIs can hallucinate, get things wrong, and produce erroneous information. Imagine an AI model trained improperly or insufficiently to classify disease, pests, or healthy crops. This could lead to poor decision-making, like deploying unnecessary pesticides. Malware, hackers, ransomware: hypothetically, all software and computer systems are subject to attack. Over-reliance on any technology can lead to skill fade. If an AI system goes down for any reason, that might not be such an issue for the farmer with decades of lived experience. But outages could be a more significant problem if skill fade occurs, or for newer, tech-reliant generations of agricultural workers who never learned those skills in the first place. Model drift is when an AI model no longer reflects reality and becomes less reliable because its historical training data no longer applies to the contemporary scenario. For example, a model trained on historical weather patterns might be of limited use under more uncertain or extreme weather conditions. Any issues with data quality, such as inaccuracies or incompleteness, may also be reflected in the AI’s outputs, affecting any predictions, recommendations, or decisions a system makes. Any new machinery, of course, particularly autonomous machinery such as driverless tractors or autonomous fruit-picking robots, poses a physical health and safety risk that needs to be managed. | Text on screen, “Technological threats.” Alex Kirkpatrick presents alongside the text, “AI systems pose technical threats.” A bullet point added underneath reads, “Model errors and misclassification.” A new bullet point reads, “Adversarial attacks.” A further bullet point reads, “System failures.” A new bullet point reads, “Model drift and degradation.” Next bullet point, “Data quality.” Next bullet point, “Mechanical health and safety threats.” |
| There are also many more abstract social threats beyond those implied by unemployment alone. Algorithmic bias occurs when automated systems produce systematically skewed results due to flawed data, assumptions, or design. This can lead to discrimination, where certain groups, often along the lines of race, gender, or socioeconomic status, are unfairly treated or disadvantaged by algorithmic decisions. For example, AI is already being used to make insurance and lending decisions, leading some to worry that AI might be unfairly preferencing some or discriminating against others due to biases in historical training data and factors beyond their control. Precision agriculture relies on data, which might include yield, soil health, even farmer behavior. Such data could potentially be shared or sold without consent. This has been an issue with other technologies associated with AI, like social media. Farmers might also be required to consent to sharing important data as a prerequisite for using a tool. The AI divide refers to unequal access to and benefits from artificial intelligence technologies, often reinforcing or amplifying the existing digital divide. It encompasses disparities in AI literacy, infrastructure, data access, and decision-making power, favoring wealthy, technologically advanced groups with better access to resources and education while marginalizing others. The black box problem in AI refers to the lack of transparency in how complex models, especially deep learning systems, make decisions. Because their internal processes are not easily interpretable, it’s difficult for users, developers, or regulators to understand, audit, or explain their outputs, raising concerns about accountability, fairness, and trust. Technology moves quickly. Keeping pace with technology, or managing new complex systems, especially when your livelihood depends on it, can create additional cognitive load and stress for users. 
| Text on screen, “Social threats.” A slide with the title text, “AI poses social threats.” Beside it, an image displays of balanced gold weighing scales with computer images of charts, graphs, and cogs floating around them. A bullet point added underneath reads, “Algorithmic bias and discrimination.” New bullet point, “Surveillance and loss of privacy.” The image changes to a surveillance camera. New bullet point, “The AI divide.” A new image displays a man standing next to a small pile of money, beside a man standing on a huge pile of money reaching into the clouds. Next bullet point, “Accountability (the ‘black box’ problem).” An image displays of a cube emitting red light at its base onto a tiled surface. New bullet point, “Stress and the rate of technological change.” A new photograph shows a farmer in a straw hat and boots sitting in a crop field with his hand to his head. |
| All these threats to health, safety, productivity, and economic well-being might have you questioning, “Is AI really worth it?” Well, if so, you’re not alone. Trustworthiness and explainability have emerged as central concepts in the diffusion of AI innovations. How much trust should we give to AI systems? How much trust should we give to the people who code those systems? How much trust should we give to policymakers who are supposed to have our safety and interests at heart? These are the questions that agriculture, and society more broadly, must grapple with as AI integrates more and more into our everyday lives. | Back to Alex Kirkpatrick, who addresses the camera. |
| [Music] In this video, we’ve thought about risk in relation to AI and agriculture. We’ve learned that to many, if not most, risk is a feeling based on perceived susceptibility and magnitude. While quantitative risk is the likelihood times the costliness of a threat occurring, most humans neglect probability and simply see the potential magnitude of their losses. This can make it incumbent upon you to acknowledge and discuss threats that you might know are relatively unlikely to occur but are still very important to your audience. We also explored some of the salient threats that AI poses to agriculture and agricultural audiences. These include technical threats such as over-reliance, model failures, and adversarial attacks, as well as the social threats of mass unemployment, the black box problem, and privacy concerns. And we must acknowledge that the true risks of increasing AI use in agriculture are as yet uncertain, here and now in the early days of diffusion. In addition, there are almost certainly threats that aren’t covered here, or are still materializing or going unseen. | Text on screen, “The takeaways.” A slide shows an illustration of a light bulb with plants climbing up it from the base. A bullet point is added beside the image with the text, “Risk is a feeling to many.” New bullet point, “People tend to neglect probability.” Bullet point, “AI threatens sustainable agriculture in numerous ways:” Sub bullet point, “Technical threats.” Sub bullet point, “Economic and social threats.” Bullet point, “The objective risks of AI are uncertain.” New bullet point, “There are likely unseen threats.” |
| [Music] | Text on screen, “The closer.” |
| Dr. Alex Kirkpatrick: Another video in this toolbox goes into more detail and depth as to how you might go about communicating these threats with your unique audiences. But I hope this video has been a useful primer on the most salient threats that society perceives in relation to AI in general and how those might translate to threats facing agriculture. Thank you very much for your energy and your attention. | Dr. Alex Kirkpatrick addresses the camera. |
| [Music] | Text on screen, “Adiós.” |
| [Music] | Text on screen reads, “USDA National Institute of Food and Agriculture. U.S. Department of Agriculture. This material is based upon work that is supported by the National Institute of Food and Agriculture, U.S. Department of Agriculture, under award number 2023-38640-39571 through the Western Sustainable Agriculture Research and Education program under project number WPDP 24-013. U.S.D.A. is an equal opportunity employer and service provider. Any opinions, findings, conclusions, or recommendations expressed in this publication are those of the author(s) and do not necessarily reflect the view of the U.S. Department of Agriculture. This material is based upon work supported by the AI Research Institutes program supported by NSF and USDA-NIFA under the AI Institute, Agricultural AI for Transforming Workforce and Decision Support, (Ag AID). Award Number, 2021-67021-35344.” The logos for Western SARE Sustainable Agriculture Research and Education, and Ag AID Institute display to the left. |