Cory Doctorow calls tools like ChatGPT “plausible sentence generators.”
I am very grateful to David Walbert @dwalbert for two posts—here and here—in response to my posts on Ivan Illich’s theory of tools. He got me to think about the ideas a little more carefully than I did in my first enthusiasm. And in doing so, I’m attempting to think with Illich’s ideas rather than simply repeating them.
Let’s see if we can read Ivan Illich animistically, by reframing his ideas against the background of a living cosmos. For those who may be unfamiliar, animism is—to use Graham Harvey’s famous definition—the recognition that
the world is full of persons, only some of whom are human, and that life is always lived in relationship to others.
It would seem that Illich sees tools as objects, created and used by human subjects, but not as subjects in themselves. That is, his concern is that tools be designed in such a way that they give maximum creative freedom to their users as they exist in relationship with other humans, without ever addressing the relationship with the tool itself. So let’s see if we can push his ideas in an animist direction.
In a convivial society, the human tool user can enter into a partnership with a tool such that the human can exercise their creativity freely while respecting the nature of the tool and the tool can fulfill its own purpose freely and peacefully, without dominating its human partner.
The question then becomes: can I enter into partnership with this tool? Will the partnership be one where each respects the nature and role of the other?
I need to make a fine measurement, so I partner with a caliper, and it gives me the measurement. We each accomplish our purpose. But if I then use the caliper as a hammer, I am not respecting the nature of the tool and it will not cooperate with me in driving a nail.
If I have a small business that I need to market to others, I could partner with the Big Tech social networks to get the word out. I am willing, for example, to learn how best to use the tool to advertise my service. But (and this is based on the experience of people I know in this situation) is the social network an equal partner? Certainly not. In order to get the word out, I must continually seek the approval of the algorithm. It’s a never-ending series of tricks I have to pull—and we’re all familiar with what that looks like. In this case, I and the tool are not in an equal partnership. No matter how much I try to adapt myself to the demands of the tool, it will never adapt itself to my needs because it is designed according to machine logic, not human nature.
So to get back to one of David’s important questions: are tools inherently convivial, or is the conviviality in the use? He makes a useful distinction between tool and technology and use. Seek (to use his example, which I like because I also use it!) is an app that helps identify plants and animals. It is a tool based on the technology of artificial intelligence. The technology could be harmful while a tool based on it could be convivial—or even just my use of it. The distinction, I say, is useful because it allows for more nuance than a simple yes/no vote on any given tool.
It’s also useful—back to the animist framework—because relationships are similarly complex and require wisdom and judgment. I can partner with Seek in order to better name the beings around me, despite the fact that Seek is part of a technology that is much more complex and fraught with potential for abuse. I can use my own judgment to limit the partnership in such a way that neither the tool nor the technology it embodies exercises control over my creative activity. Some tools (hammers and calipers) are simple and require less judgment; some are more complex and require more.
Ivan Illich makes an excellent observation on the ways in which science as a tool (remember he defines tools as “rationally designed devices”) has passed through the two watersheds. As a reminder, Illich says that tools can pass through two stages of growth. Tools which remain in the first stage are those that extend human capabilities without constraining human autonomy. Tools that pass into the second stage take on a life of their own and enslave their users:
There are two ranges in the growth of tools: the range within which machines are used to extend human capability and the range in which they are used to contract, eliminate, or replace human functions. In the first, man as an individual can exercise authority on his own behalf and therefore assume responsibility. In the second, the machine takes over—first reducing the range of choice and motivation in both the operator and the client, and second imposing its own logic and demand on both. Survival depends on establishing procedures which permit ordinary people to recognize these ranges and to opt for survival in freedom, to evaluate the structure built into tools and institutions so they can exclude those which by their structure are destructive, and control those which are useful.
Science, Illich says,
has come to mean an institutional enterprise rather than a personal activity, the solving of puzzles rather than the unpredictably creative activity of individual people. Science is now used to label a spectral production agency which turns out better knowledge … The institutionalization of knowledge leads to a more general and degrading delusion. It makes people dependent on having their knowledge produced for them. It leads to a paralysis of the moral and political imagination.
This is related to what I’ve said before about the problem with the “trust/believe the science” catchphrase: science is a method, not an authority. The scientific method is an amazing tool that can be used by anyone to discover knowledge. It extends humanity’s capabilities.
But eventually people want science to think on their behalf and science becomes an authority figure—this is the point at which science passes into the second, dangerous stage of growth. It now becomes the property of the scientific priesthood, who dictate to the rest of us what “science says” and we’re meant to “believe the science” and thus abandon our own autonomy.
I hear someone asking: does this mean we’re supposed to “do our own research” and start believing internet anti-vaxxers and conspiracy theorists? Well, that’s a loaded way of asking the question, isn’t it? Here we see the bind the second-stage growth of science has put us in. Because the scientific method (stage one) has transmogrified into the scientific authority (stage two), we are faced with the false dichotomy of 1. believe the authorities or 2. give yourself over to hucksters and fanatics.
This is a genuine conundrum. We must simultaneously respect the findings of genuine scientific inquiry while also maintaining our own personal autonomy, which often requires questioning authority. I don’t know how to solve this problem. All I can do is ask questions, always being wary of self-deception and dogmatic thinking.
One of the foundational ideas in Ivan Illich’s Tools for Conviviality (see this post from yesterday for a more general introduction) is that the failure of the industrial model of tools is rooted in a key error: namely, that we could make tools that work on behalf of humanity. That, in fact, we could replace human slaves with tool slaves. But we have found that when we replace human slaves with tool slaves, we become enslaved to the tools. Once tools grow beyond their natural scale, they begin shaping their users. The bounds of the possible become defined by the capabilities of the tools.
This leads inevitably to technocrats—the minders of the machines, the managers, the experts learned in the ways of the tools. The technocrats become the new priesthood, interpreting the tools for the masses and instructing them in tool values. Does a tool fail? Never. It is we who have failed the tool. We need to be better engineers.
In this way our desire to create tools to work on our behalf results in our enslavement to the tools. The crucial component of autonomous, human creativity is missing.
This lies at the root of our fears of AI, even if it isn’t said in so many words. AI seems to me to be the ultimate (to this point) expression of the tool slave model. We have created a tool that actually thinks on behalf of humans (or at least is aimed in that direction, even if it isn’t quite there yet). We are farming out to a tool what we have traditionally considered the quintessentially human activity: rational thought.
I’ve had a little experience with ChatGPT recently. I’ve been helping my daughter with Algebra 2. Although I took the class many years ago, today I have zero working knowledge of it. And we’re working through Algebra 2 in an abysmally bad online learning system. (It’s the same one we had to use during the COVID lockdown and it nearly broke us all.) So, yeah, we’re asking ChatGPT a lot of math questions—and it turns out the AI is really good at it.
So I am not blind to the potentially great uses of this kind of technology. (Illich, by the way, also says that convivial tools do not have to be low tech.) I think everyone would agree that old-fashioned encyclopedias are convivial tools, i.e., they facilitate autonomous human creativity; they can be picked up and put down at will; they make very few demands upon humans, etc. Search engines, as such, can also be convivial tools in that they are faster, digitized versions of encyclopedias. AI-assisted search might also be convivial in some ways. I could find the same information I’ve been using to help my daughter with math in a math textbook or an internet search unassisted by AI, but it would take considerably longer.
The danger comes when we allow AI to think for us. We can, of course, say we won’t do that, pinky swear and all. However, once tools get beyond their natural scale, they start forming/de-forming our values. To take an example that has been discussed for years, there used to be certain norms about face-to-face communication among humans. Along came smartphones. We’ve been saying for years that we shouldn’t allow the tools to shape the way we interact (or rather, don’t interact) in face-to-face situations. Nevertheless, we all have a great deal of experience with the way the tool does, in fact, dictate our behavior. And our values! Grandparents are upset when their grandchildren are looking at their phones during a visit. But those same kids are not upset when their peers do the same thing.
So how sure are we that we will, by and large, resist the temptation to allow AI to think and create on our behalf?
There is also the more practical danger of the technocratic bounding of reality. What will be the impact if we allow AI to think on our behalf and the minders of the AI have throttled what the AI is allowed to tell us? I can even imagine that the technocrats (having an infinite confidence in their own expertise) might have very good intentions when they make such decisions. Nevertheless, are we content to let these decisions be made on our behalf?
One of the unique features of AI is that the technocrats don’t even fully understand what is happening within the tool. They are priests of an unknowable god: AI works in mysterious ways, its wonders to perform. There is a certain amount of this kind of uncertainty that we have learned to live with; for example, we do not always understand why a given pharmaceutical drug works. But we’re also familiar with the elderly who are on a raft of medications, many of which were prescribed to deal with the side effects of the others. The opacity of the tool creates an increasing level of dependence on the tool to fix the problems created by the tool.
In Tools for Conviviality, Illich develops a theory of tools. Illich defines “tools” as “rationally designed devices,” which therefore range from hammers to health care systems. Or, as in the case above, social networks.
A convivial society, says Illich, is one in which there is
autonomous and creative intercourse among persons, and intercourse of persons with their environment. … [Conviviality is] individual freedom realized in personal interdependence.
Convivial tools, therefore, give people
the freedom to make things among which they can live, to give shape to them according to their own tastes, and to put them to use in caring for and about others.
The opposite of convivial tools are industrial tools, which end up exploiting their users. An industrial tool passes through two watersheds: first, it solves a defined problem. Second, it grows beyond its natural scale, alters values, and becomes an end in itself. For example, cars initially solve a transportation problem. Then cities and roadways and employment models are built around them. We move from using cars as tools to solve a limited problem to serving the tool itself—which is, in fact, not a tool anymore but an organizing principle of our lives.
Convivial tools allow maximum freedom for their user’s creativity and independence, without infringing on the same freedom for others.
Tools foster conviviality to the extent to which they can be easily used, by anybody, as often or as seldom as desired, for the accomplishment of a purpose chosen by the user.
Of course there are several other issues that arise from this—who defines the limits of the tools, what does this mean for present industrial society—and Illich does discuss these issues. But for my present purposes, this is sufficient.
If you’re still holding out hope that renewable energy is the future, you might want to read this.
Gioia says, “I’d pay more for trust.” What about those without discretionary income? Also, the trust crisis with regard to the tech companies means they have too much power. Nobody should have so much power that their ability to distort reality represents a crisis.
The cultivation of taste, in morals as well as in art, is neither snobbish nor elitist; it is, rather, the key means by which we emancipate ourselves from the tyranny of passions that the people who make our smartphone apps would like to see dominate us.
I have a longstanding interest in what could be called alternative modes of living: hermits, tiny houses, permaculture food forests. In fact, I participated in an alternative mode of living by growing up in a radically fundamentalist Christian church that practiced separation from the world through strict rules for living. (When Rachel and I married we had neither wedding rings nor a television!) Having lived through experiences of what can only be called religious abuse, I believe I possess some clarity about the dangers of these exercises.
Dave Danielson @ddanielson has a good post on the choices presented by a lot of writing about smartphone use: The choice of device is not an all or nothing proposition, but is often presented that way. We can choose our own level of engagement with a device, and govern our behavior to use a device as we choose. This is also useful to think about in the context of the NYT article on Luddite teens shared by Patrick Rhone.