As an emerging technology, Generative AI (GenAI) is negotiable and unfinished, requiring further work. Adjusting it to everyday uses calls for new ways of working and ongoing experimentation. GenAI will do well because we humans make it do well. Not only do we accept AI as a teammate, but we are willing to work to make it successful in this role.
In our
Strengthening co-evolution
While AI employs machinic logic, human cognition is flexible and adaptive, effortlessly adjusting to subtle changes in the broader environment. Consider, for instance, how easily humans switch from focusing on a complex numerical task to engaging in a casual conversation, or how we integrate embodied knowledge and visual information to navigate a crowded room. With the latest GenAI capabilities, however, we see AI becoming enmeshed in complex everyday tasks. GenAI helps with idea generation and with learning new skills. The technology does not think, but it puts sentences together in a way that resembles thinking. In doing so, GenAI mimics thinking that humans have done over centuries, so it is no wonder that it often appears human-like, or to be reasoning. GenAI connects us to a very human body of knowledge, while deepening the collaboration between humans and machines.
The coming together of humans and machines takes place in many ways, and not without tensions and value struggles. Algorithms shape our behavior as we shape them, in a process of mutual co-evolution (
A social fabric that supports AI
“Tell me what you’ve learned about me, ChatGPT” stories circulate on social media, detailing the personality assessments that ChatGPT has produced based on what people have revealed about themselves through their questions and responses. Journalists tell us that if addressed politely, ChatGPT delivers better answers. Tech experts talk about AI as a teammate and give it visualization tasks that they could not master themselves. Humans are embedding GenAI in social and cultural contexts. As they interact with new services, people feed them information about sensory perceptions and share cultural cues and experiences. AI benefits from the cultural attention it is given. This new social fabric enriches both AI and human culture, enabling AI to offer responses that adhere to cultural norms, values, and emotional subtleties.
AI collaborations are never neutral. There is always the possibility that the co-evolving process is undermining our autonomy. Talk about the addictive qualities of algorithmically boosted services is a way of saying that these services distract us from what we think is important in life. While a felt loss of autonomy creates distance, anticipatory and pleasurable engagements with AI continue to strengthen the co-evolution of humans and their technological companions. The more excited and engaged we are, and the more we share our lives with algorithmic systems, the more the algorithms come to resemble us, as they generate and package the information that is fed to them. Laura Savolainen asked in her thesis (
Invisible work for AI
In our
While many everyday tasks are geared toward supporting machines, it is no coincidence that a deep sense of loss accompanies these developments. The ‘feel’ of technology relations offers an opportunity to think about the politics and practices involved (
Humans must keep reminding themselves that these developments are built on their backs. The future of AI is not just about the technology but about the relationship we cultivate with our new AI companions. We need to question these companions when they promise too much or threaten what we hold valuable in life, whether that is our time, our work, our professional beliefs, or our desired societal trajectories.
Minna Ruckenstein