
R.G. Miga, who (judging by context clues) lives near Cayuga Lake, asks what the lake wants:

It’s looking like the lake wants its swamp back. The lake has gotten tired of these impetuous people and their silly little projects. It’s been talking with the waterfalls in the cliffs above, who are also tired of being dammed up and denied their full power; the waterfalls remember how things used to be, too, back before these fragile creatures started bustling around with their schemes. They want it all back. They want what belonged to them for thousands of years before.

This is an animist way of speaking about the land, but one that attempts to be realistic about the situation:

There’s plenty of vague gesturing in this direction in progressive circles, toward making decisions based on the imagined personhood of the land. But this often fails, because people want to imagine the land as a kindly old grandparent—the nurturing sort who wishes you would make better choices, would visit more often, but will resign themselves to quiet, long-suffering disappointment if you keep screwing up.

In our case, it makes more sense to imagine the lake as an angry demigod that has the power to comprehensively fuck up our lives if we keep trifling with it.

This, I believe, is the way any animism that is facing the reality of the world today must speak about the gods and spirits. We are no longer living in the world of the bucolic poets. We are living in the world of Robinson Jeffers, with the violence of the ocean and the indifference of granite in his poems. We are living in a world where Pan—exiled by the Christian church—has returned and brings anxiety with him.


Ivan Illich makes an excellent observation on the ways in which science as a tool (remember he defines tools as “rationally designed devices”) has passed through the two watersheds. As a reminder, Illich says that tools can pass through two stages of growth. Tools which remain in the first stage are those that extend human capabilities without constraining human autonomy. Tools that pass into the second stage take on a life of their own and enslave their users:

There are two ranges in the growth of tools: the range within which machines are used to extend human capability and the range in which they are used to contract, eliminate, or replace human functions. In the first, man as an individual can exercise authority on his own behalf and therefore assume responsibility. In the second, the machine takes over—first reducing the range of choice and motivation in both the operator and the client, and second imposing its own logic and demand on both. Survival depends on establishing procedures which permit ordinary people to recognize these ranges and to opt for survival in freedom, to evaluate the structure built into tools and institutions so they can exclude those which by their structure are destructive, and control those which are useful.

Science, Illich says,

has come to mean an institutional enterprise rather than a personal activity, the solving of puzzles rather than the unpredictably creative activity of individual people. Science is now used to label a spectral production agency which turns out better knowledge … The institutionalization of knowledge leads to a more general and degrading delusion. It makes people dependent on having their knowledge produced for them. It leads to a paralysis of the moral and political imagination.

This is related to what I’ve said before about the problem with the “trust/believe the science” catchphrase: science is a method, not an authority. The scientific method is an amazing tool that can be used by anyone to discover knowledge. It extends humanity’s capabilities.

But eventually people want science to think on their behalf and science becomes an authority figure—this is the point at which science passes into the second, dangerous stage of growth. It now becomes the property of the scientific priesthood, who dictate to the rest of us what “science says” and we’re meant to “believe the science” and thus abandon our own autonomy.

I hear someone asking: does this mean we’re supposed to “do our own research” and start believing internet anti-vaxxers and conspiracy theorists? Well, that’s a loaded way of asking the question, isn’t it? Here we see the bind the second-stage growth of science has put us in. Because the scientific method (stage one) has transmogrified into the scientific authority (stage two), we are faced with the false dichotomy of 1. believe the authorities or 2. give yourself over to hucksters and fanatics.

This is a genuine conundrum. We must simultaneously respect the findings of genuine scientific inquiry while also maintaining our own personal autonomy, which often requires questioning authority. I don’t know how to solve this problem. All I can do is ask questions, always being wary of self-deception and dogmatic thinking.


One of the foundational ideas in Ivan Illich’s Tools for Conviviality (see this post from yesterday for a more general introduction) is that the failure of the industrial model of tools is rooted in a key error: namely, that we could make tools that work on behalf of humanity. That, in fact, we could replace human slaves with tool slaves. But we have found that when we replace human slaves with tool slaves, we become enslaved to the tools. Once tools grow beyond their natural scale, they begin shaping their users. The bounds of the possible become defined by the capabilities of the tools.

This leads inevitably to technocrats—the minders of the machines, the managers, the experts learned in the ways of the tools. The technocrats become the new priesthood, interpreting the tools for the masses and instructing them in tool values. Does a tool fail? Never. It is we who have failed the tool. We need to be better engineers.

In this way our desire to create tools to work on our behalf results in our enslavement to the tools. The crucial component of autonomous, human creativity is missing.

This lies at the root of our fears of AI, even if it isn’t said in so many words. AI seems to me to be the ultimate (to this point) expression of the tool slave model. We have created a tool that actually thinks on behalf of humans (or at least is aimed in that direction, even if it isn’t quite there yet). We are farming out to a tool what we have traditionally considered the quintessentially human activity: rational thought.

I’ve had a little experience with ChatGPT recently. I’ve been helping my daughter with Algebra 2. Despite having taken the class many years ago, today I have zero working knowledge of Algebra 2. And we’re working through Algebra 2 in an abysmally bad online learning system. (It’s the same one we had to use during the COVID lockdown and it nearly broke us all.) So, yeah, we’re asking ChatGPT a lot of math questions—and it turns out the AI is really good at it.

So I am not blind to the potentially great uses of this kind of technology. (Illich, by the way, also says that convivial tools do not have to be low tech.) I think everyone would agree that old-fashioned encyclopedias are convivial tools, i.e., they facilitate autonomous human creativity; they can be picked up and put down at will; they make very few demands upon humans, etc. Search engines, as such, can also be convivial tools in that they are faster, digitized versions of encyclopedias. AI-assisted search might also be convivial in some ways. I could find the same information I’ve been using to help my daughter with math in a math textbook or an internet search unassisted by AI, but it would take considerably longer.

The danger comes when we allow AI to think for us. We can, of course, say we won’t do that, pinky swear and all. However, once tools get beyond their natural scale, they start forming/de-forming our values. To take an example that has been discussed for years, there used to be certain norms about face-to-face communication among humans. Along came smartphones. We’ve been saying for years that we shouldn’t allow the tools to shape the way we interact (or rather, don’t interact) in face-to-face situations. Nevertheless, we all have a great deal of experience with the way the tool does, in fact, dictate our behavior. And our values! Grandparents are upset when their grandchildren are looking at their phones during a visit. But those same kids are not upset when their peers do the same thing.

So how sure are we that we will, by and large, resist the temptation to allow AI to think and create on our behalf?

There is also the more practical danger of the technocratic bounding of reality. What will be the impact if we allow AI to think on our behalf and the minders of the AI have throttled what the AI is allowed to tell us? I can even imagine that the technocrats (having an infinite confidence in their own expertise) might have very good intentions when they make such decisions. Nevertheless, are we content to let these decisions be made on our behalf?

One of the unique features of AI is that the technocrats don’t even fully understand what is happening within the tool. They are priests of an unknowable god: AI works in mysterious ways, its wonders to perform. There is a certain amount of this kind of uncertainty that we have learned to live with; for example, we do not always understand why a given pharmaceutical drug works. But we’re also familiar with the elderly who are on a raft of medications, many of which were prescribed to deal with the side effects of the others. The opacity of the tool creates an increasing level of dependence on the tool to fix the problems created by the tool.


In Tools for Conviviality, Illich develops a theory of tools. Illich defines “tools” as “rationally designed devices,” which therefore range from hammers to health care systems. Or, as in the case above, social networks.

A convivial society, says Illich, is one in which there is

autonomous and creative intercourse among persons, and intercourse of persons with their environment. … [Conviviality is] individual freedom realized in personal interdependence.

Convivial tools, therefore, give people

the freedom to make things among which they can live, to give shape to them according to their own tastes, and to put them to use in caring for and about others.

The opposite of convivial tools are industrial tools, which end up exploiting their users. An industrial tool passes through two watersheds: first, it solves a defined problem. Second, it grows beyond its natural scale, alters values, and becomes an end in itself. For example, cars initially solve a transportation problem. Then cities and roadways and employment models are built around them. We move from using cars as tools to solve a limited problem to serving the tool itself—which is, in fact, not a tool anymore but an organizing principle of our lives.

Convivial tools allow maximum freedom for their user’s creativity and independence, without infringing on the same freedom for others.

Tools foster conviviality to the extent to which they can be easily used, by anybody, as often or as seldom as desired, for the accomplishment of a purpose chosen by the user.

Of course there are several other issues that arise from this—who defines the limits of the tools, what does this mean for present industrial society—and Illich does discuss these issues. But for my present purposes, this is sufficient.


Ivan Illich, Tools for Conviviality, p.29 (pdf):

A convivial society should be designed to allow all its members the most autonomous action by means of tools least controlled by others. People feel joy, as opposed to mere pleasure, to the extent that their activities are creative; while the growth of tools beyond a certain point increases regimentation, dependence, exploitation, and impotence.

Illich uses the word “tools” very broadly here: “rationally designed devices.” This includes everything from hammers to machines to health care systems. He defines conviviality as “individual freedom realized in personal interdependence.” A convivial tool, therefore, is a tool (broadly defined) that gives a person creative autonomy.

He contrasts this with industrial tools, which begin in service to a particular need but eventually capture the user and society itself. Think of cars. At first they vastly improved transportation. A hundred years later, we have traffic jams and car payments and car insurance and registration fees and BMV paperwork and the costs of maintenance and fuel. What began as a tool to serve humans has transformed into a tool served by humans.

Think now of computing devices and the internet. For those of us who remember life before them, their appearance was a revelation. Yet now we all have the experience of becoming servants to the tools. Modern technology is, in short, a monumental hassle. A hassle, furthermore, that we must endure if we are to participate in a tech-driven society. It is becoming increasingly difficult, for example, to live without a smartphone.

What if some part or another of our technology fails on a large scale, even for a brief time? How incapacitated would we be in such a situation? That would be a good measure of the degree to which our tools have become our masters.


I’ve changed my ideas and practices a lot over the twenty-five years or so of my adult life. But one thing has remained constant since I was a kid devouring content at the Lew Rockwell website: I am a libertarian on social issues. In fact, my commitment to anti-authoritarian principles has only deepened. (To clarify, I am libertarian in this way only. I have long since abandoned libertarianism as a political philosophy.)

This seems to be an unpopular position across the spectrum these days. Large chunks of the right seem single-mindedly focused on imposing their religious views on everyone. Large chunks of the left seem single-mindedly focused on enforcing their own orthodoxy through cultural power.

How about letting people do what they want, so long as their actions do not block others from their own liberty? I’m quite aware that this is not a simple matter, that there are important discussions to be had about where my freedom and your freedom impinge on each other. It’s a conversation worth having, but no one seems interested in it now. Today it’s all about the exercise of power to force others into submission.

Maybe it’s my background in an authoritarian, fundamentalist quasi-cult. I am viscerally repelled by people seeking to impose their beliefs on others. Hell, I don’t even like it when people try to loan me books because it feels like I’ve been handed an obligation. Why—why—do so many people seem utterly unable to tolerate the existence of people who do not believe or behave as they do? If I had to guess: since we all live in such uncertain times, maybe some people are desperate for conformity and certainty?

The thing that strikes me about the authoritarian tendency is its arrogance. I am baffled by people who stride about the world, certain that they know how others should be living and thinking. Are there no clouds of doubt in their mental atmosphere? Or are there nothing but clouds and they are seeking to banish them?

A libertarian stance on social and cultural issues—for me—acknowledges the fragmentary nature of our understanding. A truly humble attitude would see the life-altering nature of the decisions we are forced to make in our lives with something like a reverential awe. It would see the complexity of the forces that converge on a single being and shape their trajectory. It would hold those who must make those choices in care and compassion. Even when you would have chosen otherwise! Even when you believe they made a grave error!

For now, it appears that the short-term belongs to the power-hungry zealots. But zealots tend to burn themselves out or kill themselves off. Here’s hoping for a more humble future.


Clive Thompson says there is a biophilia paradox—and I could not disagree more.

The problem is that while we moderns desperately need exposure to nature, it sure doesn’t need exposure to us. … We humans should be living a little more densely, to give nature more space away from us.

It goes without saying that humanity is the single most destructive force on earth. Nevertheless, ideas like this only serve to reify the human-nature divide—the very divide that led us onto the path of destruction. Our current way of relating to the world is not the only way.

Our problem is that we are out of relationship with the world. This problem will only be exacerbated by further separating us from it. Thompson’s vision is a carceral environmentalism. We are not dangerous felons who must be isolated from the natural world. We are children of the same mother.


Finished reading At Work in the Ruins by Dougald Hine. This book is worth your attention. Dougald is best known—to me anyway—as the co-founder with Paul Kingsnorth of the Dark Mountain Project.

This book originated with Dougald’s realization that he needed to stop talking about climate change. Not that he came to believe any less strongly in the reality and serious threat of climate change—rather, the problem is the framing. Climate change is a finding of data-driven science, but it points us to larger questions that science cannot answer. Are our current troubles merely the result of unfortunate effects of atmospheric chemistry, or are they the result of a disastrous way of living on the Earth?

Most people who talk about climate change, especially the philanthropists and technocrats who steer the course of governments, see climate change as a problem to be solved by STEM. These are the people on the “big path” that

sets out to limit the damage of climate change through large-scale efforts of management, control, surveillance and innovation, oriented to sustaining a version of existing trajectories of technological progress, economic growth and development.

Yet this is more of the same thinking that brought us to this point of converging crises. It is the program of human control over nature.

Dougald writes in favor of the “small path”, which is

made by those who seek to build resilience closer to the ground, nurturing capacities and relationships, oriented to a future in which existing trajectories of technological progress, economic growth and development will not be sustained, but where the possibility of a world worth living for nonetheless remains.

The dream of modernity, the technocratic future, may well lie in ruins. But as the title of Dougald’s book suggests, there is work to be done in these ruins. As he and Kingsnorth wrote in the Dark Mountain Manifesto:

The end of the world as we know it is not the end of the world, full stop. Together, we will find the hope beyond hope, the paths that lead into the unknown world that lies ahead.


A word in defense of solitude

Freddie deBoer’s recent essay about the escapism built into much of online life is well worth your attention. When I was first drafting this post yesterday, I wrote that Freddie missed some important points. I’ve now re-read it a few times to ensure I wasn’t misreading him and I suspect he wouldn’t necessarily disagree with what I say below. So just take this post as a “yes, and…” to Freddie’s.
