Examining Human Existence and Human Action
Artificial Intelligence and the Discourse Around It
SOCIETY AND CIVILIZATION
Humanity has reached a point in its technological development where it has created a tool capable of processing information at a scale no human mind can match — a system that tracks structure across vast domains and produces language that reads as understanding. It has been called artificial intelligence, and around its creation a discourse has assembled that is now inescapable. Professors speak about it. Scientists speak about it. Governments debate it. Corporations invest in it. Philosophers write about it. Ordinary people use it, fear it, project onto it, imagine futures through it. The urgency in the discourse is real. Something has arrived that feels significant, and the human mind is responding to it the way it responds to everything that feels significant.
But before asking anything about the technology itself, look at the response. Some say AI will solve humanity's problems. Some say it will end humanity. Some say it will liberate creativity. Some say it will make human beings obsolete. Some say it must be controlled before it is too late. Some say control is itself dangerous. The positions multiply and harden, and through all of it, something is being revealed that is more important than anything being said about the technology. What is being revealed is not the nature of the tool. What is being revealed is the nature of the minds speaking about it.
Watch what the fear is protecting. Watch what the excitement is reaching toward. Watch what the projections are assuming. Every position is arising from a mind operating in a specific structure — the structure already examined in Humanity Is the Problem, The Center of All Problems, and The Structure of Human Confusion. If that structure has not been seen directly in oneself, nothing in this essay will be received as anything more than another position in the existing discourse.
So before going further: what is happening in you as you read this? Are you already positioning yourself? Already deciding what kind of argument this is? Already preparing to agree or disagree? Notice that. Because what is being pointed to here cannot be received from inside a position. The position itself is what is being examined.
The technology exists. It performs operations that previously required human minds, and performs some of them faster and at greater scale than any human mind can. This is observable. Whether it understands, whether it is conscious, whether it is intelligent in any sense beyond processing and output — these questions are not settled, despite the confidence with which both sides speak. Those who insist it is approaching consciousness are asserting a conclusion they cannot demonstrate. Those who insist it is certainly not conscious are asserting a conclusion they equally cannot demonstrate. Both positions arise from minds that have already decided and are now finding evidence.
Leave that question aside, because it is not the central question. The central question is what is being done with the creation, and what the doing reveals about those doing it.
The technology is being created by human beings, deployed by human beings, used by human beings. Whatever it becomes will reflect what those human beings are. This is not speculation. It is the observable pattern of every significant technology that human civilization has ever produced. The printing press was a tool. What was printed through it included both the expansion of knowledge and centuries of propaganda and manufactured consent. The internet was built with the intention of connecting humanity and distributing information freely. Look at what it has actually become — the systematic fragmentation of shared reality, the concentration of attention in service of profit, the unprecedented loneliness of the most connected civilization ever built. These were not failures of the technology. They were faithful expressions of the minds that built and used it. The dynamic by which a tool of mass attention becomes an instrument of psychological manipulation has been examined in detail in Algorithmic Life.
The pattern is consistent enough that it requires attention. A tool is created, usually with intentions described in elevated language — liberation, connection, progress, the expansion of human capacity. The tool is released into a world made of minds operating in a specific structure, and those minds use the tool according to what they are. The tool does not remain what it was intended to be, because intentions do not determine use. Minds determine use. And the minds are what they are.
The tool multiplies what is already present in the mind that wields it. If the mind is generative, the tool generates. If the mind is destructive, the tool destroys. If the mind is in distortion, the tool extends the distortion. The tool is not the variable. The mind is the variable, and the mind has not changed.
Why does this pattern not produce alarm? Why does humanity, having watched this happen across centuries, still approach each new technology with the assumption that this one will be different? Because the assumption is structural. The mind that is always becoming must always believe that arrival is possible. It cannot accept that its own structure is what prevents arrival, because accepting that would require it to see itself, and seeing itself is precisely what its structure is organized to avoid. So each new tool arrives into the same expectation, is captured by the same mechanism, produces the same outcome, and is followed by the next tool that will finally, this time, deliver what none of the previous ones did. This is not technological history. This is the history of a mind that cannot remain with what it is.
Consider an image. A tree. It has roots that draw from the soil. From those roots grows the trunk. From the trunk, the branches. From the branches, the leaves. The leaves are what is visible — they catch the light, they move in the wind, they are what is seen from a distance. Now consider humanity's problems. Climate destruction. Hunger in a world that produces more than enough food. Wars that never cease. Inequality that widens with every generation's effort to close it. Loneliness inside the most connected civilization ever built. These are the leaves. They are visible, they produce real suffering, and they are the focus of nearly all human effort to improve the situation. And now artificial intelligence is to be addressed the same way — with alignment research, safety protocols, regulatory frameworks, ethics committees.
Every one of these responses is leaf-work. Every one treats the visible symptom as if the symptom were the problem. Every one assumes that if the observable difficulty can be managed, the underlying situation will improve. Cut a leaf, and another grows. Trim a branch, and two grow in its place. The tree changes only when something changes at the roots.
The full structural reason that humanity cannot solve its problems by working at the level of symptoms is examined in A Civilization Built on Problems and Humanity Is the Problem. What matters here is the application: artificial intelligence is the newest leaf. It is being debated as if the leaf were the tree. It is being managed as if managing the leaf would change the root. The discourse is structured, as every previous discourse has been structured, to avoid the one question that would actually matter: what is at the root, and why does no one go there?
The root is the mind itself — not as an abstraction, but as the actual operation of human psychological life, visible in daily living. The movement of becoming, of seeking, of defining and fixing and projecting, of constructing a self that must be continuously confirmed and defended, of identifying with nation, belief, ideology, group, and organizing all perception around the protection of those identifications. Every leaf on this tree comes from this root. AI will be no exception.
There are two directions in which the problem of artificial intelligence presents itself, and both point to the same place.
The first direction: the creators. What is being built, and why? AI is being built to select and shape what information reaches people, in service of ideological and commercial ends. It is being built to capture and sustain human attention in order to generate profit. It is being built to model human behavior with sufficient precision to predict and influence it. It is being built to produce content indistinguishable from human expression — not to liberate human creativity, but to replace human labor at lower cost. It is being built to concentrate the capacities of intelligence in the hands of those who already hold concentrated power.
The technology is being built in distortion because the minds building it are operating in distortion. The executive who approves the development of a manipulation tool is not a villain who has been corrupted by the system. The executive is the system, made visible in a specific person. The researcher who builds the attention-capture algorithm is not betraying their values. They are expressing their values, which are the values of the institution they are embedded in, which are the values of the mind that has been trained to succeed within that institution. The tool that emerges from this configuration cannot be otherwise. The distortion is not introduced at some late stage of development. The distortion is present at the origin, because the origin is a mind already in distortion.
And the discourse around what these tools are doing is itself produced by minds embedded in the same structures. The critic of Big Tech is often funded by a competing firm. The academic who warns about AI is often seeking grant funding, institutional position, media visibility. None of this is said cynically. The point is that even the response to the distortion is produced by minds operating in the same distortion. There is no outside position from which the technology is being examined. Everyone examining it is inside the same structure that produced it.
The second direction: the users. What is being done with the tool in daily life? It is being used to avoid the difficulty of independent thinking. It is being used to generate the appearance of work without doing the work. It is being used to produce simulated connection that requires none of the vulnerability that actual connection demands. It is being used to confirm existing beliefs more efficiently. It is being used to externalize decisions, judgments, and reflections that belong to the human being making them.
Look at the specific behavior. A person sits at a device and uses AI to produce an essay they will submit as their own work. What is happening in that moment? The person has been asked to think, and rather than think, they are delegating the thinking to a system that simulates thought. The decision is usually not conscious. The person did not deliberate about whether to become someone who simulates rather than thinks. They reached for the tool because it was there, because it was easier, because the deadline made the actual work feel too costly. But what has happened in that reaching is that the mind has confirmed, one more time, that it would rather not do the difficult thing. It has reinforced its own avoidance. The next time, the avoidance will be easier. Over time, the capacity for the actual work atrophies. This is not a hypothesis. It is what is happening, at scale, across millions of people, every day.
A person uses AI to generate a message to someone they care about, because they do not know what to say, or do not want to spend the time finding what to say. The message is sent. The recipient reads it and responds to it. A conversation is happening, but the conversation is not between two people. One of the people is not actually present. The words that bear their name did not arise from their thinking, their feeling, their relationship to the other person. They arose from a system that produced what statistically resembled what such a person might say. The recipient, receiving this, believes they are in contact with the sender. They are not. They are in contact with a simulation that has been attributed to the sender. The relationship, from this moment forward, is sustained by a hollow at its center. The structural inability of human beings to actually meet one another, of which this is one expression, is examined in Relationship – What Is Actually Happening?
This is not a problem of technology abuse. This is the structure of how the tool is being used in ordinary daily life, by ordinary people, with their consent. The tool is making possible a degree of self-absence, a depth of not-being-present, that previous technologies did not make available. And the mind that reaches for this self-absence is reaching because it is already structured to avoid being present. The tool is not creating the avoidance. The tool is fulfilling a demand the mind was already making.
Two directions. Two sets of actors. Two sets of problems. But the origin is the same. In both cases, the distorted use arises from a mind that is seeking, becoming, protecting, confirming. The mind that builds distorted tools and the mind that uses tools distortedly are the same mind operating in the same structure. The root is the same.
If the person building this technology had actually seen the structure of their own mind — had watched, in their own daily life, the movement of becoming, the operation of seeking, the way the mind continuously constructs a self that must be confirmed at any cost — would they build what is being built? If such a person used this technology, would they use it as it is being used? The answer is not difficult. A mind that has seen its own distortion cannot continue to operate in that distortion in the same way. The obstacle is not the technology. The obstacle is that the minds building and using it have not seen themselves.
This is where the essay must turn directly toward the one reading it. Because everything preceding can be absorbed as idea. What follows cannot be absorbed as idea. It must be seen as fact in oneself, in this moment, or it has not been received.
Look at what you believe about artificial intelligence. Whatever your position — that it is dangerous, that it is promising, that it must be regulated, that regulation is futile — look at where that position comes from. Is it your position, arrived at through direct observation, or is it a position you have assembled from what you have read, from the people you align with, from the camp in which you feel at home? When you look honestly, do you see that the position identifies you with certain people and distinguishes you from others? Do you see that if it were challenged, something in you would defend it — not because you have verified it beyond doubt, but because losing it would feel like losing something of yourself?
The deeper movement of inheriting positions and calling them one's own is examined in A Challenge for You. What matters here is what is happening right now, as you read. If you agreed with what was said about identification, watch how quickly the agreement became comfortable — yes, I see this, what is the next point. That movement is the identification continuing. It has simply added one more position to the list — "the one who understands about identification" — and continued operating from exactly where it was. If you disagreed, watch how quickly the disagreement formed — this is too absolute, surely there is a middle position. That is also identification continuing, defending against being exposed.
Both responses are the same. Both arise from the same structure. And this is the difficulty that every serious inquiry encounters. The mind being examined is the mind doing the examining. The structure being questioned is the structure producing the questions. There is no clean position from which this can be observed, because every position that presents itself is already the structure operating.
This is why the discourse cannot progress. The people in it are not stupid or dishonest. Many are intelligent, sincere, and working hard. But they are working from inside the structure they are trying to address, and no amount of work from inside the structure can address what the structure itself is doing. The scientists cannot solve this through more science. The philosophers cannot solve this through more philosophy. The technologists cannot solve this through more technology. All of these arise from the same mind that produced the problem, and all of them, however refined, reproduce the structure they claim to address.
Is this happening in you right now, as you read this sentence? Do you see the mind moving to resist this, to find a loophole, to identify the author as pessimistic or absolutist, so that the point does not have to be received? Can you watch this movement without giving in to it or fighting it — simply observing, in real time, how the mind operates when confronted with something it cannot easily absorb?
We have what is needed. Not as an aspiration — as a fact. The knowledge exists, the resources exist, the technical capacity exists, to use artificial intelligence in service of what actually helps rather than what confirms the structures of profit and power and identification currently directing it. The obstacle is not knowledge, resources, or technical capacity. The obstacle is that we cannot cooperate, because each person and each nation and each institution is operating from inside its identification, its becoming, its need to be confirmed. The leaves will not change until the roots change. And the roots will not change through policy, technology, therapy, or religion — all of which arise from the same roots they claim to address.
So come back to the question that circulates through every article, every debate, every conference about what is being created. Is artificial intelligence dangerous?
Listen to the question. Notice what it assumes. It assumes that danger is a property of the technology. It assumes the answer could be yes or no, depending on what the technology is. It assumes that if the answer is yes, there is something to be done about the technology — regulate it, restrict it, slow it down, align it. The entire shape of the question locates the danger in the tool.
But the technology is not dangerous. No technology is dangerous. The printing press is not dangerous. Nuclear physics is not dangerous. The internet is not dangerous. Tools are tools. A piece of glass sharpened to an edge is a knife, and a knife can cut vegetables or cut a throat, and what determines which is not the knife. The danger is not in the blade. The danger is in the hand that holds the blade, and more precisely, in the mind that directs the hand. A mind operating in fear, in becoming, in identification, in the need to dominate or protect or extend itself — that mind with a knife produces one kind of outcome. A mind that has seen itself, that has stopped the movement of seeking and fixing, that is not operating from distortion — that mind with a knife produces something else. The knife does not determine the outcome. The mind does.
This is so simple it feels almost insulting to state. And yet the entire global discourse about artificial intelligence is organized around the opposite assumption — that the danger is in the technology, and therefore that the solution is to manage the technology. Trillions of dollars will be spent in the coming decades on alignment research, safety protocols, regulatory frameworks, international agreements. All of it will fail to address what it claims to address. Not because the people doing the work are incompetent, but because the work is being done in the wrong location. The location of the problem is not in the technology. The location is in the minds building, deploying, using, and benefiting from the technology. And those minds are not the object of the safety research. Those minds are the ones doing the safety research.
Is artificial intelligence dangerous? The question cannot be answered, because the question is incorrectly formed. The correct question is: are the minds that build and use artificial intelligence operating in a state from which they can build and use it without distortion? And to that question, the answer is available to anyone willing to look honestly. They are not. They are operating in exactly the state that has produced every previous distortion of every previous technology. Nothing has changed at the root. Everything continues to grow from that root.
Unless something else happens. Unless, in some small number of people, the movement is actually seen. Not as another position. Not as a more sophisticated critique. But seen, as a fact, in the actual operation of one's own mind, in the middle of ordinary daily life, in the way one thinks about technology, about one's work, about other people, about oneself. Seen to such a degree that the seeing changes what happens next — not because a new decision has been made, but because the mind that was generating the distortion has encountered itself and cannot continue in the same way.
That is the only place where any of this can change. Not in the technology. Not in the discourse. Not in the institutions. In the mind, in individuals, in real time, in ordinary life. And that is where the question returns to you, the one reading this, right now.
Is AI dangerous? The question belongs to you. The answer depends on the mind that will build it, deploy it, use it, engage with it. That mind is yours. It is mine. It is every other mind that is part of this civilization. What is that mind doing, in this moment? Is it seeing itself? Or is it doing what it has always done — receiving this essay as one more input, forming a position, filing it away, and continuing as it was?
The tree stands. The leaves grow. The question is yours.
The Inquiry continues.
Part of an ongoing examination into human existence and human action.