by Bob Cole, Exploratory Initiatives & Partnerships, DLINQ
It doesn’t take long to search and pull up a few recent news stories that reflect the mix of feelings I’m bringing to this year’s digital detox on alternate futures:
- Exclusive: Surveillance Footage of Tesla Crash on SF’s Bay Bridge Hours After Elon Musk Announces “Self-Driving” Feature. Ken Klippenstein, The Intercept, Jan 10, 2023
- Iran to use facial recognition to identify women without hijabs – Iranian official says algorithms can identify anyone flouting dress codes. Khari Johnson, Ars Technica, Jan 11, 2023
- A mental health tech company ran an AI experiment on real users. Nothing’s stopping apps from conducting more. David Ingram, NBC News, Jan 10, 2023
- How AI is impacting local artists. This is Nashville, WPLN News, Jan 5, 2023
- Timnit Gebru Is Building a Slow AI Movement – Her new organization, DAIR, aims to show a more thoughtful mode of AI research. Eliza Strickland, IEEE Spectrum, March 31, 2022
How do these headlines make you feel? For me, they are reminders that we live in a world shaped by black box algorithms that influence our decision making, enable mass surveillance, and extract our data for profit. They also stir an unease about the direction that multinational corporations and governments are taking “artificial intelligence” systems, along with questions about oversight and accountability for the harms these systems can perpetuate. There’s reason for concern, but could there also be hope for more collaborative and ethical pathways into a future with AI?
It’s essential to remember that AI systems are built by people. It’s also helpful to make a distinction between General and Narrow AI. General AI is the imaginary stuff of superintelligent robots in movies like Terminator. Narrow AI, on the other hand, is what we actually have today. In simplest terms, it refers to the types of task-based systems powering the predictive algorithms that decide which content we are served while we surf the web, scroll through social media platforms, or play with new generative text and image tools like ChatGPT, Lensa, Stable Diffusion, or DALL-E-2. Sentient General AI doesn’t exist, yet the marketing and hype surrounding AI have been very successful in shaping the public’s imagination and fueling a multi-billion dollar race for funding and development. But to what end?
In a December 2022 newsletter article, “AI’s Jurassic Park Moment,” scientist and author Gary Marcus warns, “something incredible is happening in AI right now, and it’s not entirely to the good.” Marcus is referring to the experimental release of generative AI tools like OpenAI’s ChatGPT or DALL-E-2. You may have heard of the controversy over the AI-generated artwork that won first prize at the Colorado State Fair, or how these tools are stirring up debates in education. He reminds us that these systems are inherently biased, unreliable, and unable to verify the credibility of their output. His greatest worry is that the experimental release of these systems opens up future scenarios where bad actors have a new arsenal for low-cost, automated disinformation that could overwhelm the Internet with misleading news stories, social media posts, and deepfake videos. As more resources are invested into these centralized data systems, Marcus suggests new data and identity verification tools will be needed if we are going to be able to counter this threat.
In a Noema article about the hidden labor behind AI, ethical AI advocates Timnit Gebru, Milagros Miceli, and Adrienne Williams warn that, “…the idea of superintelligent machines with their own agency and decision-making power is not only far from reality — it distracts us from the real risks to human lives surrounding the development and deployment of AI systems.” They explain that the data feeding these systems, run by big tech companies like Amazon, Google, Microsoft, and OpenAI, require grueling labeling and content moderation processes. Anthropologist Mary L. Gray and computer scientist Siddharth Suri refer to this class of contingent labor that lacks basic worker protections as “ghost work.” These essential workers help to train AI systems by flagging hate speech, violent content, and fake news. Gebru, Miceli, and Williams call for a more collaborative AI ethics research agenda to highlight unjust labor in AI systems, raise awareness, and place pressure on corporations engaged in these practices. They argue that “co-creating research agendas with workers based on their needs, supporting cross-geographical labor organizing efforts and ensuring that research findings are easily accessed by workers…” could help to shift power to those most impacted by these systems and slow down their proliferation so that safeguards can be put in place.
Author and media director Morten Rand-Hendriksen offers a helpful reframing of AI systems in a post, Tools and Materials: A Mental Model for AI. He suggests that, “...thinking of AI as tools and materials rather than intelligent things with magical human-like powers is an essential mental shift as we figure out how to fit these things into our lives and our world…,” then goes on to ask,
“How do we use AI and all technology to create human flourishing and build futures in which we all have the capabilities to be and do what we have reason to value?”
Human flourishing feels like the right north star for this time, and his question led me to wonder about a few applications of AI tools that sparked my imagination from a place of co-creation, collaboration, and solidarity.
As an avid cyclist and bike commuter, I spend a lot of time experiencing and thinking about public bike infrastructure and safety. While I’m fortunate in the Monterey Bay area to have access to a fairly extensive protected bike path, many pedestrians and cyclists in American urban areas do not.
The image of the cyclist pictured here is a fictional re-rendering of the four-lane Cabrillo Freeway in San Diego, California, produced by design and advocacy startup transformyour.city, whose mission is to help communities envision alternatives to car-centric urban spaces and get involved with campaigns for more walkable and bike-friendly cities. Images are generated through an over-painting process with OpenAI’s DALL-E-2 text-to-image generator. AI renderings like these cannot replace the work of urban planning, community engagement, or direct action. Nevertheless, I wonder how they might be used to inspire the public imagination, perhaps in collaboration with efforts like the Slow Streets program in San Francisco, the War on Cars podcast, or the Crosswalk Collective in Los Angeles.
In recent years California has experienced historic climate whiplash: extreme weather events like atmospheric rivers, flooding, drought, and wildfires. As climate precarity increases, turning predictive AI systems on our environment may help communities adapt. One example is the ALERTWildfire system, which uses an array of tower-mounted cameras to collect images every ten seconds and applies AI techniques to compare them with historical photographs. Initial studies suggest that AI alerts can reach emergency responders up to ten minutes before an emergency call comes in, and there is potential for AI-networked stations to augment wildfire forecasting. Here again, these systems do not replace the need for citizens to report emergencies, but they offer another tool on the path to climate resiliency.
As a former English as a second language teacher, I find one of the most exciting examples of communities working with AI systems to be the work being done in New Zealand to revitalize Māori language and culture. The Māori nonprofit Te Hiku Media, in partnership with data scientists and engineers, is developing Papa Reo, a multilingual learning platform they hope will be a model for minority and indigenous language revitalization. They collaborate directly with communities to design mechanisms to collect, protect, and manage their linguistic data. Grounded in the traditional Māori cultural value of kaitiakitanga, or guardianship, they have also created a data license to ensure that they hold the power of consent and that benefits from the project return to the community. Such community-based innovations are inspiring a movement for data sovereignty and ethical applications of artificial intelligence tools that treat historically marginalized people as co-creators of a shared possible future.
In addition to local grassroots movements, there is growing international momentum to codify protections against AI technologies. For example, in late 2022 here in the United States, the White House Office of Science and Technology Policy released a Blueprint for an AI Bill of Rights: A Vision for Protecting Our Civil Rights in the Algorithmic Age. The framework includes five areas for protection worth listing here:
- Safe and Effective Systems: You should be protected from unsafe or ineffective systems.
- Algorithmic Discrimination Protections: You should not face discrimination by algorithms and systems should be used and designed in an equitable way.
- Data Privacy: You should be protected from abusive data practices via built-in protections and you should have agency over how data about you is used.
- Notice and Explanation: You should know when an automated system is being used and understand how and why it contributes to outcomes that impact you.
- Human Alternatives, Consideration, and Fallback: You should be able to opt out, where appropriate, and have access to a person who can quickly consider and remedy problems you encounter.
While it’s unclear how the blueprint may translate into legislation, it’s a practical guide we can build upon and question as we dream up alternate digital futures.
- Seek out ethical guidelines and legislative efforts in development to protect citizens against the most harmful forms of AI, such as those in Brazil or the European Union’s AI Act being debated this year.
- Explore the Have I Been Trained? image database developed by Spawning to see if works by an artist or photographer you know have been scraped without consent as training data for popular AI text-to-image models.
- Contribute to the Real or Fake Text research experiment to inform discernment between AI and human generated text.
- Consider a technology you encounter on a regular basis. Reflect on some of the “Questions Concerning Technology” posed by technology and ethics writer L.M. Sacasas.
- Dig deeper with Dr. Timnit Gebru, founder of the Distributed AI Research Institute, via her lecture at the Harvard Radcliffe Institute, “The Quest for Ethical Artificial Intelligence,” or The Gray Area podcast interview with Sean Illing, “Is Ethical AI Possible?”
- Listen to or read the transcript of Ezra Klein’s January 2023 interview with Gary Marcus, “A Skeptical Take on the A.I. Revolution.”
- Read “Don’t ask if AI is good or fair, ask how it shifts power” by Pratyusha Kalluri, Nature, Vol. 583, July 2020, p. 16.