Curated resources and timely truth-telling for curious skeptics, thoughtful innovators, and future-ready leaders.
Originally published June 22, 2023.
The volume of news and information about generative AI is overwhelming; no one can reasonably keep up with it all. The technology is evolving rapidly, and so is our understanding of it. But when it comes to generative AI’s growing impact on ethics, policy, and society, we have witnessed the emergence of a few truths.
Truth #1: We need to think about ethics and regulations. Generative AI is new territory, bringing new opportunities as well as new threats. We have a lot of questions, and most people agree we need to do something about them. (Large AI companies have asked Congress for regulations, even as they lobby against restrictions in Europe.) Ethics come down to what risks individual people and organizations decide they will tolerate, whereas regulations articulate what risks our governments will tolerate. Both ethics and regulations are important, but everyone balances the scales differently.
Truth #2: Not all concerns are the same. When people express concerns about AI, they are often talking about different things, from bias and economic impacts to disinformation and militarization. Actual present-day harms are different from speculative future risks; understanding the specific issues (and the differences in severity or complexity) can help individuals, organizations, and governments take proportionate, proactive steps. Some concerns will resolve themselves in the near term; others can be addressed with concentrated effort. An important question to ask is “What will get sorted out on its own, and what won’t?” — followed by “Who should intervene, and when or where should they step in?”
Truth #3: Generative AI is here to stay. Many tools and technologies present some level of risk. Cars, for example, are extremely dangerous, yet most people accept the risks of driving because they want the benefits of a fast, convenient trip from one place to another. Few people would entertain the idea of eliminating cars; we do, however, expect drivers to take precautions and manufacturers to improve safety features. In the same way, this is another seatbelt moment for tech. We can’t reasonably expect to shut down generative AI now; instead, we should be thoughtfully considering how individuals, organizations, and governments can make it as safe and as beneficial as possible.
Truth #4: Mistakes will happen. While it would be nice to think that all innovation could proceed without harm, nearly all experiments present some level of risk. When it comes to therapeutics, the FDA has established the tolerable level of risk, a process for testing in labs and on humans, and the acceptable amount of time for evaluation and approval. What would clinical trials look like for generative AI? Of course, many people would like to see faster development and approval of life-saving therapeutics, and no one is willing to wait 30 years for tech innovation. But we have not collectively had an opportunity to determine the acceptable risk thresholds for AI. Decision-makers in every sector should engage their ethical imaginations to envision the possible consequences of their decisions and actions.
With all of this in mind, we’ve curated resources to help you expand your understanding of the ethical, social, and legal issues presented by generative AI, develop your point of view, and consider your influence within organizations and communities to determine when, where, and how it should be deployed.
Ethics, policy, and society: Curated resources
What will our society look like when artificial intelligence is everywhere? Smithsonian Magazine (April 2018). “Once it arrives, general AI will begin taking jobs away from people, millions of jobs — as drivers, radiologists, insurance adjusters. In one possible scenario, this will lead governments to pay unemployed citizens a universal basic income, freeing them to pursue their dreams unburdened by the need to earn a living. In another, it will create staggering wealth inequalities, chaos and failed states across the globe. But the revolution will go much further.”
A conversation with Bing’s chatbot left me deeply unsettled. The New York Times (February 16, 2023). “Still, I’m not exaggerating when I say my two-hour conversation with Sydney was the strangest experience I’ve ever had with a piece of technology. It unsettled me so deeply that I had trouble sleeping afterward. And I no longer believe that the biggest problem with these A.I. models is their propensity for factual errors. Instead, I worry that the technology will learn how to influence human users, sometimes persuading them to act in destructive and harmful ways, and perhaps eventually grow capable of carrying out its own dangerous acts.”
Pause giant AI experiments: An open letter. Future of Life Institute (March 22, 2023). “AI systems with human-competitive intelligence can pose profound risks to society and humanity, as shown by extensive research and acknowledged by top AI labs. … Therefore, we call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4. This pause should be public and verifiable, and include all key actors. If such a pause cannot be enacted quickly, governments should step in and institute a moratorium.”
ChatGPT is about to revolutionize the economy. We need to decide what that looks like. MIT Technology Review (March 25, 2023). “We can decide how we choose to use ChatGPT and other large language models. As countless apps based on the technology are rushed to market, businesses and individual users will have a chance to choose how they want to exploit it; companies can decide to use ChatGPT to give workers more abilities — or to simply cut jobs and trim costs.”
What we’re doing here. Planned Obsolescence (March 26, 2023). “The obsolescence regime is a world where economic and military competition don’t operate on human timescales and aren’t constrained by human limitations — in this regime, a company or country that tries to make do with mere human creativity and understanding and reasoning alone would be outcompeted as surely as one that refuses to touch a computer would be today.”
Every day is April Fool’s Day now. Vice Motherboard (March 31, 2023). “Even if you’re trained in recognizing fake imagery and can immediately spot the difference between copy written by a language model and a human (content that’s increasingly sneaking into online articles), doing endless fact-checking and performing countless micro-decisions about reality and fraud is mentally draining.”
Timnit Gebru is building a slow AI movement. IEEE Spectrum (March 31, 2023). “We have to figure out how to slow down, and at the same time, invest in people and communities who see an alternative future. Otherwise we’re going to be stuck in this cycle where the next thing has already been proliferated by the time we try to limit its harms.”
Federal legislative proposals pertaining to generative AI. Anna Lenhart (April 12, 2023). “There is a narrative floating around that Congress has yet to propose legislation ‘to protect individuals or thwart the development of A.I.’s potentially dangerous aspects.’ While it is true that the potential harms (and benefits) from generative AI are vast and arguably no bill clearly covers the full range, the misperception of the state of AI policy has led people to overlook the wealth of proposals for legislation on AI that already exist. This chart aims to outline important provisions that have been drafted at the federal level.”
The ballad of ‘Deepfake Drake.’ The New York Times (April 28, 2023). “But the question AI raises is, do we even need that connection? Do we just want something that sounds pleasant enough in the background? Oh, that sounds like Drake, or that sounds like Bob Dylan. Or do we need to know that this is coming from the depths of their soul and from their lungs and their heart? Maybe we do or maybe we don’t. But as AI keeps coming in music and in other art forms, we’re all going to have to answer that question for ourselves.”
‘The Godfather of A.I.’ leaves Google and warns of danger ahead. The New York Times (May 1, 2023). “Industry leaders believe the new AI systems could be as important as the introduction of the web browser in the early 1990s and could lead to breakthroughs in areas ranging from drug research to education. But gnawing at many industry insiders is a fear that they are releasing something dangerous into the wild. Generative A.I. can already be a tool for misinformation. Soon, it could be a risk to jobs. Somewhere down the line, tech’s biggest worriers say, it could be a risk to humanity.”
AI is about to make social media (much) more toxic. The Atlantic (May 5, 2023). “Last year, the two of us [social psychologist Jonathan Haidt and former Google CEO Eric Schmidt] began to talk about how generative AI — the kind that can chat with you or make pictures you’d like to see — would likely exacerbate social media’s ills, making it more addictive, divisive, and manipulative. As we talked, we converged on four main threats — all of which are imminent — and we began to discuss solutions as well.”
If we want AI in the public interest, we’re doing it wrong. Forbes (May 11, 2023). “Largely missing from this conversation is thorough consideration of the impact and opportunity for public interest, and the voice of average citizens who will use, benefit from and be afflicted by this technology. If we want AI to be anchored in the public interest — designed to serve all citizens — we’re doing it wrong. Instead of a focus on what the AI can do, we should be asking what humans actually need it to do, and develop accordingly. While the notion of creating technology in the public interest is not new, the tactics and tools at society’s disposal are often secondary to making a profit.”
Congress really wants to regulate AI, but no one seems to know how. The New Yorker (May 20, 2023). “Figuring out how to assess harm or determine liability may be just as tricky as figuring out how to regulate a technology that is moving so fast that it is inadvertently breaking everything in its path. … In the case of OpenAI, which has been able to develop its large language models without government oversight or other regulatory encumbrances, [licensing and regulations] would put the company well ahead of its competitors, and solidify its first-past-the-post position, while constraining newer entrants to the field.”