AI Ethics

Text Transcript with Description of Visuals

Dr. Alex Kirkpatrick: Hello. I’m Dr. Alex Kirkpatrick from the Center for Sustaining Agriculture and Natural Resources at Washington State University. If AI is given the agency and autonomy to do what humans used to do in agriculture, how do we then attribute moral responsibility? Can a machine be held responsible for its actions? If a machine learns from data provided by a society with a long history of bias and discrimination, does that machine give biased and discriminatory outputs? This video explores the nascent and evolving field of AI ethics and the philosophical questions that we, as a culture, may have to answer. Let’s explore.

A man speaks on screen. Text in a ribbon along the bottom of the screen reads, “Alex Kirkpatrick, PhD.”
[ Music ]

A combine harvester moves, while a woman at the edge of the field controls it by tapping buttons on a tablet. In an industrial greenhouse, crops are watered by an automatic arm with sprinklers. In another field, a different woman holds up a sprig of a crop, viewing it through the lens of a tablet in her other hand.

A slide on screen. Text. “AI ethics.” The logos for “Western SARE, sustainable agriculture research and education,” “Washington State University,” “USDA National Institute of Food and Agriculture US Department of Agriculture,” and “Ag AID Institute, National Agricultural AI Institute for Transforming Workforce and Decision Support.”
Alex Kirkpatrick: Ethics matter because they protect against exploitation, support fairness, and build long-term trust, all crucial if AI is going to be accepted and effective in agriculture. As society grapples with the ethics of AI and affording power to autonomous machines, so must agriculture contend with large philosophical and moral quandaries. People are often concerned about their data above all else when it comes to AI. Who owns the data collected from farms and agricultural operations? This is a big question. AI often exists in a so-called black box, meaning it’s not entirely clear how it comes to decisions. Even if we could always trace input to output, does the average AI user in agriculture have the know-how to interrogate the algorithm? Should the user have the ability to override an algorithm? Who reaps the rewards of AI in agriculture, and who experiences the brunt of the risks? How are these risks and rewards distributed across agricultural operations and surrounding communities? Relatedly, there exists a lot of socioeconomic inequity already in society broadly, which extends to agriculture. Does AI widen or shrink existing gaps between the diverse groups of people that make up agricultural communities? Training and maintaining AI is very resource-intensive, demanding a lot of energy to power data centers, supercomputers, cooling systems, hardware, manufacturing, and more. How do these impacts weigh against the potentially positive impacts on precision agriculture and resource management? Furthermore, AI might automate tasks that humans are, at the moment, paid to perform. What impact does this have on labor and the lives of the individuals affected?

A new slide. A picture on the left side of the screen of a marble statue in a Greco-Roman style of a man holding a scroll. A heading on the right of the slide reads, “AI in agriculture raises significant ethical questions.” Text appears below the heading. Bullet point, “Privacy. Who owns the data?” A new picture on the left side of the slide of a hand holding up an ear of corn in a corn field. Over the image are scientific drawings. New text is added. Bullet point, “Control. Who gets to challenge or override algorithmic decisions?” A new picture on the left of a hand on a laptop keyboard. New text is added. Bullet point, “Fairness. How are AI’s risks and benefits distributed?” A new image on the left of two men harvesting apples. New text is added. Bullet point, “Bias. How does AI reinforce inequity?” A new image on the left. People bend over picking tea wearing hats that also cover their necks. New text is added. Bullet point, “Sustainability. How is AI impacting labor and the environment?”
There are doubtless many more questions like these that are as yet unresolved and demand attention and deliberation. Given the pace of technological development, the urgency of answering such questions has led to many AI ethics initiatives and policies.

Alex appears on the screen, speaking in front of a multicolored background.
The Organization for Economic Cooperation and Development is an intergovernmental organization with 38 member countries, including the United States and most of Europe. Its aim is to stimulate economic progress and world trade. And they were the first intergovernmental organization to adopt standards in AI ethics back in 2019. Their first principle is that ethical AI should promote prosperity for both people and the planet, enhance inclusion and environmental sustainability, and reduce inequalities. In agriculture, this could translate to ensuring small-scale and indigenous farms benefit as much from AI diffusion as larger operations, or that the positive environmental impacts from using AI outweigh the negatives in terms of training and maintaining systems. The next principle is that respect for human dignity and privacy must guide AI throughout its lifecycle, including safeguards against discrimination and ensuring human oversight. Beyond designing algorithmic safeguards against cyber-attacks and bias, and enhancing user control, this means establishing clear data ownership agreements. Stakeholders should have meaningful access to how AI decisions are made, from inputs to outputs. They should be able to access information on the algorithm’s design and challenge outputs. A farmer should have the opportunity to know why a deep-learning model makes the recommendations it does or contest legal or economic decisions that affect them that were made using AI. AI should perform reliably in any scenario, even misuse, with clear mechanisms for overriding or decommissioning, if necessary. For example, an autonomous harvester, like the one in the image, should be designed to withstand any field conditions. It should behave safely, even if someone’s misusing it or using it without permission. The user themselves should be able to safely stop or shut down the machine, whether the system is contained within a physical robot or exists only in the virtual realm.
The last principle involves ensuring traceability and systematic risk management, allowing for responsibility to be clearly assigned along the AI’s lifecycle. That means clearly assigning liability, among other things, and making it clear how remediation is afforded, if at all, if a machine causes any form of harm.

A new slide with the heading “Proposed value-based ethical AI principles.” Beside it is the OECD logo of the globe and two green arrows. On the right side of the slide is a picture of a green computer motherboard with the triangular recycling symbol at its center. New text appears below the heading. Bullet point, “Inclusive growth and sustainable development.” A new image on the right side of the slide of a composite face made up of vertical slices of six different faces. New text appears below the heading. Bullet point, “Human rights and privacy.” A new image on the right side of the slide of green letters against a black background. A magnifying glass hovers over the text and highlights the word “error” written in red. New text appears below the heading. Bullet point, “Transparency and explainability.” A new image on the right side of the slide of a combine harvester in a field with a golden yellow crop. New text appears below the heading. Bullet point, “Robustness and safety.” A new image on the right side of the screen. A gavel rests on an open book. New text appears below the heading. Bullet point, “Accountability.”
The Organization for Economic Cooperation and Development makes several specific recommendations for AI policymakers and developers based on these ethical principles. They recommend public-private partnerships to drive innovation in trustworthy AI, inclusive of technical, social, and ethical dimensions. They further recommend using open and representative datasets to reduce bias, ensuring greater sharing of data, and enhancing international cooperation and exchange. To share open and representative datasets, organizations and governments need to build sharing infrastructures, like online platforms, to exchange or host data. It’s recommended that such networks promote an inclusive digital environment, creating pathways for smaller actors and developing economies to access and use open data. Given the rate of technological change, AI governance should be flexible and harmonized across national borders for global consistency. It’s recommended that policymakers and governments, in particular, invest in AI literacy, training, and reskilling. Furthermore, they recommend social protection structures to guard against the economic consequences of automation for displaced or replaced workers.

The speaker appears on the left side of the screen within a new slide. On the right is text. The heading reads, “Ethical principles yield practical recommendations,” and the logo of the OECD. New text appears below the heading. Bullet point, “Invest in ethical AI R and D.” New text appears below the heading. Bullet point, “Foster an inclusive AI ecosystem.” New text appears below the heading. Bullet point, “Shape agile, co-operative governance.” New text appears below the heading. Bullet point, “Develop human capacity and workforces.”
Of course, ethical guidelines are just that, guidelines. They aren’t legally binding, even when dozens of nations, including the USA, have approved them. There are no criminal consequences for failing to adhere to such guidelines, nor mechanisms to enforce them. Yet despite the many separate governmental and corporate attempts to articulate ethical standards for AI across many nations, they’re all remarkably similar. To me, this implies that the potential risks of AI are commonly perceived globally, and there is a good level of agreement on what good AI should be, at least in theory.

Alex appears in front of a multicolored background, speaking.
Agriculture professionals perhaps have a unique role to play as ethical stewards, translating technical standards into on-the-ground action. As collaborators linking farmers with tech developers and policymakers, ag professionals can advocate for ethical AI technologies and deployment within agriculture. But what might this mean in practice? Well, look for systems to be grounded in fairness, justice, and respect for human dignity, safety, and sustainability. Create inclusive spaces for dialogues about AI with stakeholders of all backgrounds. In doing so, you can foster trustworthy communication and promote AI literacy and engagement, affording people the resources to explore the ethical questions of AI deployment in ag. By knowing what constitutes ethical AI, ag professionals can review systems on behalf of their audiences to help guide adoption decision-making. The technical and risk aspects are important dimensions. But is the AI developed and deployed in an ethical manner? You can help answer that question. Keep in mind that ethical AI, according to most, is environmentally, socially, and economically sustainable; inclusive in its design and deployable by a wide range of people and operations; protective of user rights and privacy; safe to use and reliable; and explainable, with transparent decision-making and design processes. And clearly, someone or some entity is explicitly responsible for all of those aspects.

The speaker appears onscreen within a new slide. He is on the left. There is text on the right side of the slide. A new heading reads, “Ag professionals can be stewards for ethical AI.” New text appears below the heading. Bullet point, “Lead with values.” New text appears below the heading. Bullet point, “Engage in dialogue.” New text appears below the heading. Bullet point, “Promote AI literacy and engagement.” New text appears below the heading. Bullet point, “Audit AI systems.” New text appears below the last bullet point.
Bullet point, “Is it sustainable?” Bullet point, “Is it inclusive?” Bullet point, “Does it protect privacy and dignity?” Bullet point, “Is it safe and robust?” Bullet point, “Is it transparent?” Bullet point, “Who is responsible?”
[ Music ]

Humans have been debating ethics for millennia, and there’s doubtless a lot more to talk about in relation to AI and ethics that is far beyond the scope of this video. But I hope this video has sparked some thinking about what constitutes ethical deployment of intelligent, autonomous systems in agriculture and how you might go about addressing these issues with your audiences. Ultimately, AI affects us all, and we all have a stake in this ethical debate. Thank you very much for your attention and your energy.

A new slide. Text on screen. “The closer.” The speaker appears on screen.
[ Music ]

A new slide. Text on screen. “That’s all folks!”

[ Music ]

A new slide. The logo for the “USDA, National Institute of Food and Agriculture US Department of Agriculture.” The logo for “Western SARE, sustainable agriculture research and education.” Text. “This material is based upon work that is supported by the National Institute of Food and Agriculture, U.S. Department of Agriculture, under award number 2023-38640-39571 through the Western Sustainable Agriculture Research and Education program under project number WPDP 24-013. U.S.D.A. is an equal opportunity employer and service provider. Any opinions, findings, conclusions, or recommendations expressed in this publication are those of the author or authors, and do not necessarily reflect the view of the U.S. Department of Agriculture.” The logo for the “Ag AID Institute.” Text. “This material is based upon work supported by the AI Research Institutes program supported by NSF and USDA-NIFA under the AI Institute, Agricultural AI for Transforming Workforce and Decision Support, in brackets, Ag AID. Award number 2021-67021-35344.”

[ End music ]

A new slide. Text on screen reads, “Stay in touch! Alex.kirkpatrick@cornell.edu”