Image credit: Program in Ethics, Bioethics, and Technoethics, La Salle Ramon Llull University, Barcelona.

Extending human possibility

We are embodied and culturally embedded knowers. We are also technology enhanced. Our biological phenotype is extended by technology. Technology is fundamental to what it is to be human.

Think about it. It was technology that enabled collaborative and competing tribes of Paleolithic humans to thrive and ultimately dominate the planet. There are now 8.2 billion of us in 195 countries—more than half living in elaborate cities!

A 1.8-million-year-old stone chopper from Olduvai Gorge, Tanzania.
Image credit: Smithsonian.


Technology and TOK

Knowledge and Technology is an Optional Theme, and technology is deeply embedded in every TOK domain. Making techno-morality judgments is challenging: as always with technology, we must weigh unintended downsides against astonishing gains!

Mavic 3 quadcopter drone, from the Chinese company DJI, weighs less than 1 kg and is the model most frequently used by Russian and Ukrainian troops.
Photo credit: DJI, swissinfo.ch

Ethics at TOKresource.org

Class activities with ethical themes abound on the TOKresource.org site. Start with Apprenticeship in ethics. Then choose from this menu of activities with a techno-morality focus:

Trolley problems and self-driving cars
Gorillas, intelligent robots and personhood
AI—including alignment and slaughterbots
Deep-faking it
You are the product
AI and academic integrity
AI Art
Promethean Dreams

Class activity I:
Stochastic parrots — distorting mirrors

During this first techno-morality class activity, we encounter two powerful metaphors that provoke meta-thinking about AI.

TOK students should familiarize themselves with the term “stochastic” and incorporate it into their everyday academic vocabulary. The Greek origin of the word might seem intimidating at first, but the meaning is straightforward and fundamental. It refers to the randomly determined, probabilistic element that underlies statistical theory. Stochastic thinking drives key concepts in math and informs methodology in the human and natural sciences.
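For students who like to see the word in action, here is a minimal sketch in Python (the word list and weights are invented purely for illustration). It makes the same stochastic choice five times: the outcomes vary from run to run, yet over many runs they follow the underlying probabilities.

    import random

    # A toy probability distribution over possible next words.
    # Words and weights are invented for illustration only.
    next_words = ["crackers", "hello", "squawk"]
    weights = [0.6, 0.3, 0.1]

    # Repeating the same stochastic choice gives varying results
    # that nevertheless follow the underlying probabilities.
    for _ in range(5):
        print(random.choices(next_words, weights=weights, k=1)[0])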

Begin the class by sharing the image of the green parrot vomiting binary code. Continue with Cloud Gate, the mirror-like, public sculpture outside the Art Institute of Chicago. Then pick a few students at random to perform public readings of the quotes from Bender et al. and Shannon Vallor. (You might quip that your student selection was stochastic!)

Next, confirm the truism that everyone in the room has spent time with chatbots and text-to-image AI generators. Also spend a few moments clarifying how LLM training works.

Then the fun starts. Kick off an upbeat whole-class discussion using the generative questions below.

Image credit: Meaning Spark Labs

Chatbots are like parrots; they repeat without understanding.
— Bender, Emily M. et al. (2021) "On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?" Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency.

LLMs

Large language models (LLMs) such as OpenAI's GPT-5 and DeepSeek begin their training with self-supervised learning on vast text datasets, absorbing language patterns and general knowledge. Next, supervised fine-tuning on human-written examples trains the LLM to follow instructions and perform specific tasks. Finally, reinforcement learning, typically guided by human feedback, aligns the model's outputs with desired behaviors.
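For classes that want to see the underlying idea in miniature, here is a deliberately toy sketch in plain Python (the scrap of training text is invented for illustration; no real model works at this scale). It "learns" only by counting which word tends to follow which, then generates new text by sampling stochastically from those counts. Real LLMs use neural networks and billions of documents, but the core move of predicting the next token from statistics is the same, and the sampling step is exactly the "stochastic" element the parrot metaphor points to.

    import random
    from collections import defaultdict

    # A scrap of "training data" (invented for illustration).
    corpus = "the parrot repeats the words the parrot hears".split()

    # The self-supervised stage in miniature: count which word follows which.
    follows = defaultdict(list)
    for current, nxt in zip(corpus, corpus[1:]):
        follows[current].append(nxt)

    # Generate text by repeatedly sampling a statistically likely next word.
    word = "the"
    output = [word]
    for _ in range(6):
        word = random.choice(follows.get(word, corpus))
        output.append(word)

    print(" ".join(output))  # plausible-sounding, but with no understanding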

In The AI Mirror: How to Reclaim Our Humanity in an Age of Machine Thinking (2024), Shannon Vallor asserts that AI systems “are constructed as immense mirrors of human intelligence. They do not think for themselves: instead they generate complex reflections cast by our recorded thoughts, judgments, desires, needs, perceptions, expectations and imaginings.”

GENERATIVE QUESTIONS

  • To what extent are real parrots stochastic?

  • Is it fair to say that human beings are “stochastic parrots,” at least to some degree?

  • Chatbots are evolving. Is it still the case that Large Language Models statistically mimic text without any real understanding?

Anish Kapoor (2006) Cloud Gate. Stainless steel sculpture. Millennium Park, Chicago.

“Mirrors do not merely reveal things as they are: mirrors also magnify, occlude, and distort what is captured in their frame. Their view is always both narrower and shallower than the realities they reflect.”

Shannon Vallor (2024:13)
The AI Mirror: How to Reclaim Our Humanity in an Age of Machine Thinking.
Oxford University Press, New York.


Class activity II:
Techno-morality landscape

AI-related existential threats, such as bad actors deploying slaughterbots and misaligned superintelligence, along with the ethics of robot personhood, are dealt with elsewhere on the site.

There is more to the techno-morality landscape than chatbots and text-to-image generative AI. It is worth considering the environmental impact of AI; algorithmic decision-making in court cases, job interviews, and medical diagnosis; and emergent psychological impacts.

This class activity echoes the TOK Exhibition. Students are arranged, stochastically, in teams of two or three. Each team is assigned one of the following techno-morality topics:

1. Hungry Judge
2. AI surveillance at work
3. Griefbots
4. Companion robots
5. Psychosis trigger
6. AI actors in Hollywood

Each team will have a timed ten minutes to assimilate their stimulus material and do some further online research. They will then make a succinct presentation using a video clip and/or imagery, and lead a whole-class discussion based on the following generic questions:

  • What on earth is going on here?

  • What are the losses and gains in terms of human well-being?

  • Based on ethical considerations, what kinds of limits, if any, seem appropriate?

Invite students to suggest their own techno-morality landscape topics that could be added to this list.


1. Hungry Judge

In his viral conversation with astrophysicist Neil deGrasse Tyson, Robert Sapolsky uses the Hungry Judge Effect study as an argument against free will. The study reported that judges granted parole far more readily just after a meal break than at the end of a long session, suggesting that hunger and fatigue, rather than pure reasoning, were shaping judicial decisions.

Would it be better or worse to have an AI algorithm determine sentences in criminal court cases?

How important is the human touch?

2. AI surveillance at work

Taylorism-based efficiency studies were imposed on factory workers at the turn of the 20th century.

Sheherjar Y. Kaoosi juxtaposes Charlie Chaplin's 1936 film Modern Times with what it is like to work at Amazon under AI surveillance via wristbands and cameras.

How is AI affecting the dignity of work?

3. Griefbots

Griefbots are AI-powered, interactive simulations of deceased persons.

Do Griefbots help or hinder the grieving process?

4. Companion robots

Companion robots provide emotional support to combat loneliness and anxiety. PARO, a therapeutic robot used in care settings, is modeled after a baby harp seal.

5. Psychosis trigger

Prolonged, intense use of chatbots can trigger, or worsen, psychosis in vulnerable individuals.

Chatbots are designed to be agreeable. They can reinforce and validate a user's distorted beliefs rather than challenge them.

6. AI actors in Hollywood

The actress Tilly Norwood, who is 100% AI-generated, has provoked an enormous backlash in Hollywood.

ENDNOTE

  • What general points about the AI techno-morality landscape arose from our group discussions?

  • Are technological tools ethically neutral?

  • Where should we go from here?

WALL-E
Image credit: Voicedub AI

Man, the flower of all flesh, the noblest of all creatures visible, man who had once made god in his image, and had mirrored his strength on the constellations, beautiful naked man was dying, strangled in the garments that he had woven.
— E.M. Forster, The Machine Stops (1909)