Computers (and the programs run on them) don’t have agency

The Wikipedia article on Agentic AI says an agentic AI system is an autonomous system that makes decisions and acts without human intervention. Robots usually follow fixed rules, but AI agents analyze data and learn.

The Wikipedia article on intelligent agent emphasizes goal orientation as a defining characteristic: a thermostat, a human being, an organization, or even a geographical region is an agent if it has goals, makes decisions, and acts on those decisions.

But is it useful to think of a thermostat or other simple control mechanism as an agent with goals? The goal of a thermostat is to keep a temperature constant. But is that the thermostat’s goal or the goal of its users? Isn’t saying the thermostat has goals, rather than uses or a purpose, ascribing to it capabilities that only living things have?
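To make this concrete, here is a minimal sketch in Python (my illustration, not any real device’s firmware) of what a thermostat does. The ‘goal’ appears only as a setpoint parameter supplied from outside; the mechanism itself executes a fixed comparison rule.

```python
def thermostat_step(current_temp: float, setpoint: float) -> bool:
    """Fixed rule: report whether the heater should be on."""
    return current_temp < setpoint

# The goal lives with the user, not the mechanism: the occupant
# chooses the setpoint; the thermostat only compares two numbers.
user_setpoint = 20.0                              # chosen by the occupant
heater_on = thermostat_step(18.5, user_setpoint)  # True: too cold, heat
```

Nothing in the code pursues anything; remove the user and the setpoint is just an arbitrary constant.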

As a general principle, it is not necessary to propose a purpose or reason to explain something.

A: Why is he running up the stairs?
B: He always runs up stairs (on other occasions and other stairs).

B’s explanation is a good one. Many scientific explanations are like this: they show how something is consistent with, or the result of, a more general principle. Avoiding teleological arguments about goals is one aim of explaining things this way.

And this agency: is it in the first or second sense of DuplicitousAgent? I think those proposing intelligent agents mean the first sense, doers rather than helpers.

The Wikipedia article says that defining AI in terms of agents has several advantages:

I don’t think treating AI as agents does any of these things, and giving people (real agents) and their AI tools the same status is bizarre, more appropriate for science fiction than scientific fact.

I agree purpose and goals are important. They’re fundamental to what computer processes achieve. But ascribing agency to mechanical processes to account for their purpose and goals is unnecessary and a mistake.

People have tools. They have goals they pursue using these tools. Tools are created with a purpose in mind; people use them to some end.

The tools have a use, meaning they are useful. But they don’t have goals. ‘The goal of a tool’ means its use has a purpose, but the purpose is the user’s, not the tool’s. Actions with the tool have a purpose; there is a reason for the action, and it is the reason the tool is used.

The purpose of a hammer is to hammer nails into wood, but

When all you have is a hammer, everything looks like a nail.

I think that’s relevant to purpose and agency. When you use the hammer for another purpose, your own purpose, without realizing it’s not the purpose its makers intended, you’re liable to make mistakes.

Rather than using the hammer this way without thought about purpose, be very clear about your purpose, and about whether the way you want to use the tool is compatible with the way it was designed to be used.

Think of using a hammer to put caps on beer bottles when you don’t have a bottle cap applicator. Without thinking about purpose, you bring the hammer down with force on the cap and break the bottle, instead of just giving it a light tap.

Our tools can end up doing our thinking for us, but we shouldn’t let them.

Is the AI community falling into the same trap, thinking that apps with purposes have agency?

It is a truth not universally acknowledged, but as the gun lobby says,

Guns don’t kill people, people kill people.

That doesn’t mean everyone should have access to guns. It means it’s important to distinguish who/what has the purpose, the tool or you.

Don’t let your tools do your thinking for you. A tool is a means to an end (the ‘goal’ or ‘purpose’ of the tool), not an end in itself.

I don’t understand how the AI community can believe agency for their apps is the right approach.

Whose interests are served by apps having agency? Don’t obey or welcome our new overlords, Big Tech and AI.

In a curious reversal of the agency given to AI apps, Frankenstein’s monster, as distinct from an AI app, had agency as Mary Wollstonecraft Shelley portrayed him in the story, but identifying him with his creator by calling him ‘Frankenstein’ denies him that agency in the present day.

AI community, be warned.

Compare and contrast with the problem of agency and EvolutionaryGoals.

Agency as a framework for understanding AI is unhelpful. Rather than attacking agency head on, I’m going to attack the problem of goals. The argument will be one from analogy.

It’s like explaining the world as the result of God’s handiwork: that explanation shuts down understanding rather than advancing it. Goals are important, and means-end analysis then becomes important in explaining how to reach them, but agency doesn’t help us devise ways to get an AI app to reach the goals we set for it.

Consider the accepted idea that evolutionary change is not goal-oriented. Individuals are goal-oriented. Preservation of a species through the procreation of its individuals appears goal-oriented, but the proliferation of some individuals and the disappearance of others by selection is not an end, a success or a failure, toward which evolutionary change is working.

Nevertheless, there persists the mistaken idea that evolution is a process of improvement guided by trial and error. It is actually the temporal interaction of two independent processes: random genetic change and the chopping block of natural selection (i.e., the fate of species).
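To see how proliferation can look goal-directed without any goal being encoded, here is a toy simulation in Python (my own sketch; the setup and parameters are illustrative, not a model from the literature). Variation is random and selection is a filter applied by the environment; nowhere does the code give the population an aim, yet some variants proliferate and others disappear.

```python
import random

# Toy model: random variation plus environmental selection, with no
# goal encoded anywhere. Each individual is just a number (a trait).

def generation(population, environment=0.0, capacity=100):
    # Variation: each individual leaves two offspring with slightly
    # mutated traits.
    offspring = [t + random.gauss(0, 0.1) for t in population for _ in range(2)]
    # Selection: the environment culls down to capacity. Surviving is
    # a filter applied from outside, not an aim the population holds.
    offspring.sort(key=lambda t: abs(t - environment))
    return offspring[:capacity]

pop = [random.gauss(0, 1.0) for _ in range(100)]
for _ in range(50):
    pop = generation(pop)

# Traits end up clustered near the environment's value, yet no line
# above gives the population a goal of self-preservation.
print(sum(pop) / len(pop))
```

The apparent goal of tracking the environment is just the two independent processes interacting over time.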

I think the problem biology has in defining ‘species’ is connected to this confusion. Are species real, or do they exist in name only?

Deciding AI apps have agency is the same kind of mistake.

Incidentally, I thought the Shakespeare family line had disappeared, but the famous Shakespeares on Wikipedia show it alive and well.

The disappearance of a family line does not indicate the line was a failure. It only indicates that there was a last surviving male member. By definition, he had no male children, but a female member might have continued the line by matrilineal descent.

I want to argue that the attention a proliferating species gets at the expense of one that disappears, together with the mistaken idea that evolutionary change has goals, leads to the idea that the proliferating species has the goal of self-preservation, that it has it to a greater degree than the disappearing species, and that it is more successful at achieving that goal.

I think that’s a mistake. Self-replication by instances is just something the species does. Making lots of copies of itself leads to the self-preservation of the species or population, but self-preservation is not a goal of the species; it is just a consequence of the replication and the large population. Inferring goals requires more than a cause-and-effect relationship, as the theory of evolution, which is not goal-oriented, shows. There needs to be a context of interaction over time.

Leaving aside the question whether species have the goal of self-replication through the (a)sexual reproduction of their instances, what about individuals? People have goals. They have ideas about how to reach them and engage in trial and error to reach them. But what about the sexual activity responsible for the preservation of the species? People have a drive to engage in sex, but, for men specifically, do they have goals they seek to attain through sexual intercourse? Does a means-end analysis help us understand their sexual activity?

Listing possible goals:

I think these are better considered parts of, or stages in, sex, rather than sex being a means by which these goals are achieved.

For men, orgasms aren’t considered the ends of sex, and sex isn’t considered the means by which orgasms are achieved. Men are not considering the purpose of sex when they engage in it. There is no means-end distinction.

The same applies to eating. For eaters it’s not really a means-end activity, though if you’re hungry it may be: you know that eating something is a way to stop feeling hungry.

Unless you’re Socrates, who said he ate to live but that others lived to eat.

Many men father children: 60% of men in the US are fathers, and 75% of men aged 40-49 are fathers. But I don’t think most men engage in sex as a means to an end, for the purpose of having children.

They recognize it as a consequence, foreseen or unforeseen, of sex, but not the reason they engage in it. I think they are surprised rather than relieved when their partners tell them they are pregnant. They may want children, but it is not the reason they are sexually active.

Some men, though, apparently do have sex because they want children. Ferdinand_de_Lesseps, for example, had 12 children in the 15 years between the ages of 65 and 80, before dying 9 years later at the age of 89.

This ‘blind spot’ men have about becoming fathers is no problem for Homo sapiens, because their biology does the required thinking for them.

The category mistake of ascribing purpose to evolution and perhaps goals to species is mirrored in the mistake of ascribing goals and agency to AI apps.


Back to [AI](AI.html)