Posts in: Longer writing

One of the foundational ideas in Ivan Illich’s Tools for Conviviality (see this post from yesterday for a more general introduction) is that the failure of the industrial model of tools is rooted in a key error: namely, that we could make tools that work on behalf of humanity. That, in fact, we could replace human slaves with tool slaves. But we have found that when we replace human slaves with tool slaves, we become enslaved to the tools. Once tools grow beyond their natural scale, they begin shaping their users. The bounds of the possible become defined by the capabilities of the tools.

This leads inevitably to technocrats—the minders of the machines, the managers, the experts learned in the ways of the tools. The technocrats become the new priesthood, interpreting the tools for the masses and instructing them in tool values. Does a tool fail? Never. It is we who have failed the tool. We need to be better engineers.

In this way our desire to create tools to work on our behalf results in our enslavement to the tools. The crucial component of autonomous, human creativity is missing.

This lies at the root of our fears of AI, even if it isn’t said in so many words. AI seems to me to be the ultimate (to this point) expression of the tool slave model. We have created a tool that actually thinks on behalf of humans (or at least is aimed in that direction, even if it isn’t quite there yet). We are farming out to a tool what we have traditionally considered the quintessentially human activity: rational thought.

I’ve had a little experience with ChatGPT recently. I’ve been helping my daughter with Algebra 2. Despite having taken the class many years ago, today I have zero working knowledge of Algebra 2. And we’re working through Algebra 2 in an abysmally bad online learning system. (It’s the same one we had to use during the COVID lockdown and it nearly broke us all.) So, yeah, we’re asking ChatGPT a lot of math questions—and it turns out the AI is really good at it.

So I am not blind to the potentially great uses of this kind of technology. (Illich, by the way, also says that convivial tools do not have to be low tech.) I think everyone would agree that old-fashioned encyclopedias are convivial tools, i.e., they facilitate autonomous human creativity; they can be picked up and put down at will; they make very few demands upon humans, etc. Search engines, as such, can also be convivial tools in that they are faster, digitized versions of encyclopedias. AI-assisted search might also be convivial in some ways. I could find the same information I’ve been using to help my daughter with math in a math textbook or an internet search unassisted by AI, but it would take considerably longer.

The danger comes when we allow AI to think for us. We can, of course, say we won’t do that, pinky swear and all. However, once tools get beyond their natural scale, they start forming/de-forming our values. To take an example that has been discussed for years, there used to be certain norms about face-to-face communication among humans. Along came smartphones. We’ve been saying for years that we shouldn’t allow the tools to shape the way we interact (or rather, don’t interact) in face-to-face situations. Nevertheless, we all have a great deal of experience with the way the tool does, in fact, dictate our behavior. And our values! Grandparents are upset when their grandchildren are looking at their phones during a visit. But those same kids are not upset when their peers do the same thing.

So how sure are we that we will, by and large, resist the temptation to allow AI to think and create on our behalf?

There is also the more practical danger of the technocratic bounding of reality. What will be the impact if we allow AI to think on our behalf and the minders of the AI have throttled what the AI is allowed to tell us? I can even imagine that the technocrats (having an infinite confidence in their own expertise) might have very good intentions when they make such decisions. Nevertheless, are we content to let these decisions be made on our behalf?

One of the unique features of AI is that the technocrats don’t even fully understand what is happening within the tool. They are priests of an unknowable god: AI works in mysterious ways, its wonders to perform. There is a certain amount of this kind of uncertainty that we have learned to live with; for example, we do not always understand why a given pharmaceutical drug works. But we’re also familiar with the elderly who are on a raft of medications, many of which were prescribed to deal with the side effects of the others. The opacity of the tool creates an increasing level of dependence on the tool to fix the problems created by the tool.


In Tools for Conviviality, Illich develops a theory of tools. Illich defines “tools” as “rationally designed devices,” which therefore range from hammers to health care systems. Or, as in the case above, social networks.

A convivial society, says Illich, is one in which there is

autonomous and creative intercourse among persons, and intercourse of persons with their environment. … [Conviviality is] individual freedom realized in personal interdependence.

Convivial tools, therefore, give people

the freedom to make things among which they can live, to give shape to them according to their own tastes, and to put them to use in caring for and about others.

The opposite of convivial tools are industrial tools, which end up exploiting their users. An industrial tool passes through two watersheds: first, it solves a defined problem. Second, it grows beyond its natural scale, alters values, and becomes an end in itself. For example, cars initially solve a transportation problem. Then cities and roadways and employment models are built around them. We move from using cars as tools to solve a limited problem to serving the tool itself—which is, in fact, not a tool anymore but an organizing principle of our lives.

Convivial tools allow maximum freedom for their users’ creativity and independence, without infringing on the same freedom for others.

Tools foster conviviality to the extent to which they can be easily used, by anybody, as often or as seldom as desired, for the accomplishment of a purpose chosen by the user.

Of course there are several other issues that arise from this—who defines the limits of the tools, what does this mean for present industrial society—and Illich does discuss these issues. But for my present purposes, this is sufficient.


Ivan Illich, Tools for Conviviality, p.29 (pdf):

A convivial society should be designed to allow all its members the most autonomous action by means of tools least controlled by others. People feel joy, as opposed to mere pleasure, to the extent that their activities are creative; while the growth of tools beyond a certain point increases regimentation, dependence, exploitation, and impotence.

Illich uses the word “tools” very broadly here: “rationally designed devices.” This includes everything from hammers to machines to health care systems. He defines conviviality as “individual freedom realized in personal interdependence.” A convivial tool, therefore, is a tool (broadly defined) that gives a person creative autonomy.

He contrasts this with industrial tools, which begin in service to a particular need but eventually capture the user and society itself. Think of cars. At first they vastly improved transportation. A hundred years later, we have traffic jams and car payments and car insurance and registration fees and BMV paperwork and the costs of maintenance and fuel. What began as a tool to serve humans has transformed into a tool served by humans.

Think now of computing devices and the internet. For those of us who remember life before them, their appearance was a revelation. Yet now we all have the experience of becoming servants to the tools. Modern technology is, in short, a monumental hassle. A hassle, furthermore, that we must endure if we are to participate in a tech-driven society. It is becoming increasingly difficult, for example, to live without a smartphone.

What if some part or another of our technology fails on a large scale, even for a brief time? How incapacitated would we be in such a situation? That would be a good measure of the degree to which our tools have become our masters.


I’ve changed my ideas and practices a lot over the twenty-five years or so of my adult life. But one thing has remained constant since I was a kid devouring content at the Lew Rockwell website: I am a libertarian on social issues. In fact, my commitment to anti-authoritarian principles has only deepened. (To clarify, I am libertarian in this way only. I have long since abandoned libertarianism as a political philosophy.)

This seems to be an unpopular position across the spectrum these days. Large chunks of the right seem single-mindedly focused on imposing their religious views on everyone. Large chunks of the left seem single-mindedly focused on enforcing their own orthodoxy through cultural power.

How about letting people do what they want, so long as their actions do not block others from their own liberty? I’m quite aware that this is not a simple matter, that there are important discussions to be had about where my freedom and your freedom impinge on each other. It’s a conversation worth having but no one seems interested in that now. Today it’s all about the exercise of power to force others into submission.

Maybe it’s my background in an authoritarian, fundamentalist quasi-cult. I am viscerally repelled by people seeking to impose their beliefs on others. Hell, I don’t even like it when people try to loan me books because it feels like I’ve been handed an obligation. Why—why—do so many people seem utterly unable to tolerate the existence of people who do not believe or behave as they do? If I had to guess: since we all live in such uncertain times, maybe some people are desperate for conformity and certainty?

The thing that strikes me about the authoritarian tendency is its arrogance. I am baffled by people who stride about the world, certain that they know how others should be living and thinking. Are there no clouds of doubt in their mental atmosphere? Or are there nothing but clouds and they are seeking to banish them?

A libertarian stance on social and cultural issues—for me—acknowledges the fragmentary nature of our understanding. A truly humble attitude would see the life-altering nature of the decisions we are forced to make in our lives with something like a reverential awe. It would see the complexity of the forces that converge on a single being and shape their trajectory. It would hold those who must make those choices in care and compassion. Even when you would have chosen otherwise! Even when you believe they made a grave error!

For now, it appears that the short-term belongs to the power-hungry zealots. But zealots tend to burn themselves out or kill themselves off. Here’s hoping for a more humble future.


Clive Thompson says there is a biophilia paradox—and I could not disagree more.

The problem is that while we moderns desperately need exposure to nature, it sure doesn’t need exposure to us. … We humans should be living a little more densely, to give nature more space away from us.

It goes without saying that humanity is the single most destructive force on earth. Nevertheless, ideas like this only serve to reify the human-nature divide—the very divide that led us onto the path of destruction. Our current way of relating to the world is not the only way.

Our problem is that we are out of relationship with the world. This problem will only be exacerbated by further separating us from it. Thompson’s vision is a carceral environmentalism. We are not dangerous felons who must be isolated from the natural world. We are children of the same mother.


Finished reading At Work in the Ruins by Dougald Hine. This book is worth your attention. Dougald is best known—to me anyway—as the co-founder with Paul Kingsnorth of the Dark Mountain Project.

This book originated with Dougald’s realization that he needed to stop talking about climate change. Not that he came to believe any less strongly in the reality and serious threat of climate change—rather, the problem with talking about climate change is the framing. Climate change is a finding of data-driven science, but it points us to larger issues that science cannot answer. Are our current troubles merely the result of unfortunate effects of atmospheric chemistry, or are they the result of a disastrous way of living on the Earth?

Most people who talk about climate change, especially the philanthropists and technocrats who steer the course of governments, see climate change as a problem to be solved by STEM. These are the people on the “big path” that

sets out to limit the damage of climate change through large-scale efforts of management, control, surveillance and innovation, oriented to sustaining a version of existing trajectories of technological progress, economic growth and development.

Yet this is more of the same thinking that brought us to this point of converging crises. It is the program of human control over nature.

Dougald writes in favor of the “small path”, which is

made by those who seek to build resilience closer to the ground, nurturing capacities and relationships, oriented to a future in which existing trajectories of technological progress, economic growth and development will not be sustained, but where the possibility of a world worth living for nonetheless remains.

The dream of modernity, the technocratic future, may well lie in ruins. But as the title of Dougald’s book suggests, there is work to be done in these ruins. As he and Kingsnorth wrote in the Dark Mountain Manifesto:

The end of the world as we know it is not the end of the world, full stop. Together, we will find the hope beyond hope, the paths that lead into the unknown world that lies ahead.


A word in defense of solitude

Freddie deBoer’s recent essay about the escapism built into much of online life is well worth your attention. When I was first drafting this post yesterday, I wrote that Freddie missed some important points. I’ve now re-read it a few times to ensure I wasn’t misreading him and I suspect he wouldn’t necessarily disagree with what I say below. So just take this post as a “yes, and…” to Freddie’s.

Continue reading →



Letters with @jsonbecker, week three

This is week three of a continuing series of letters with Jason Becker. Week one is here and week two is here. Dear Jason, Your description of Tulum was very interesting. It’s the first I’ve heard of it. And, yes, I can see what you mean by it being a contradiction. I like the idea of lifting people out of poverty; at the same time, it sounds like the usual corporate greenwashing.

Continue reading →


Letters with @jsonbecker, week two

This is week two of a continuing series of letters with Jason Becker. More information and the letter from week one can be found here. Dear Jason, It was interesting to read about your history online and a little more about the motivations for this project. I sincerely hope this project leads you to the interactions you are looking for. With that, let’s move in the direction you’re wanting to go.

Continue reading →