November 23, 2023

More of a rant than analysis this morning.

I have an academic background in advanced statistics, philosophy of science, evolutionary theory, and ecology. Recently, I have focused on political economy, economics, software, and climate change. I do not pretend to be an expert in these things, but I do think a lot about how to integrate them into a framework that allows analysis. When it comes to whether this analysis is helpful, your mileage may vary, of course.

However, I am increasingly confused by what I see going on. Not confused in the sense that I do not understand the mechanics of why it is happening, but very confused by the response to it.

AI discussions are a space that I find particularly fascinating and confusing right now. AI seems to have captured people's imaginations (which is not bad in and of itself), but the people driving the analysis of AI and its applications tend to know nothing at all about what "AI" really is, how it works, what consciousness or intelligence is, or the difference between inductive and deductive reasoning.

Without getting into the details of how I think about AI and applications (I also do not pretend to be an expert, but I do have some understanding of it), I find the conversation about science and consciousness in the context of AI completely ridiculous.

The term "Artificial Intelligence" (AI) is a triumph of propaganda over fact. There is nothing artificial or intelligent about "AI" systems.

AI models are constructions. Models built to perform specific tasks. The people who build them build them to do those specific things. They are "artificial" only in the sense that we have offloaded the thinking to a machine. This is as "artificial" as doing math with an abacus.

AI models are statistical inference programs. Programs built to run a type of logic system over the nearness of information stored in a type of database. They are "intelligent" the way a long grocery list for making dinner is intelligent.
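To make "nearness of information" concrete, here is a toy sketch in Python. Everything in it is invented for illustration (the tiny "memory" and its hand-written vectors stand in for learned embeddings), but the core move, which is picking the stored item whose vector is closest to the query, is the statistical heart of how these systems retrieve an answer.

```python
import math

# A toy "model": a handful of stored texts, each with a hand-made
# feature vector. A real system learns these embeddings from data;
# these numbers are made up for illustration.
memory = {
    "the cat sat on the mat": [0.9, 0.1, 0.0],
    "stock prices fell today": [0.0, 0.2, 0.9],
    "the dog slept on the rug": [0.8, 0.2, 0.1],
}

def cosine(a, b):
    """Nearness as the cosine of the angle between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def respond(query_vector):
    """Return the stored text whose vector is nearest the query.

    No understanding happens here: it is a similarity lookup
    over numbers in a database.
    """
    return max(memory, key=lambda text: cosine(memory[text], query_vector))

# A query that is "about" animals-on-furnishings more than finance.
print(respond([0.85, 0.15, 0.05]))  # -> "the cat sat on the mat"
```

Nothing in that lookup understands anything. It is geometry over stored numbers, done very quickly and at very large scale.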

This is not to be glib about the sophistication of AI systems, but it is a bit confusing that anyone thinks human consciousness is so unsophisticated that a longer list of items pushed through a constructed statistical model will result in an emergent property (or, as we used to say, a dialectic) that will be like human "intelligence".

It seems to me that this is a misunderstanding, at the most fundamental level, of what human consciousness is. Or of what intelligence is.

Just take one part of "consciousness" that is missing from this conversation about "Artificial General Intelligence" (another term that makes no sense).

Consciousness does not live inside one person; it is the collective movement of all human activity over time. You cannot become conscious by yourself.

A computer doing statistics is impressive. It can perform statistical inference faster than any person, on a dataset that no person could hold in their brain.
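For a sense of scale, a minimal sketch using only the standard library (the dataset is synthetic, invented for illustration): summary statistics over a million values, computed in seconds, which is something no human could do and yet nothing anyone would mistake for consciousness.

```python
import random
import statistics
import time

# A dataset far larger than anyone could hold in their head:
# a million random measurements (synthetic, for illustration).
data = [random.gauss(100, 15) for _ in range(1_000_000)]

start = time.perf_counter()
mean = statistics.fmean(data)    # average of a million values
spread = statistics.stdev(data)  # and their standard deviation
elapsed = time.perf_counter() - start

print(f"mean={mean:.2f}, stdev={spread:.2f}, computed in {elapsed:.2f}s")
```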

But why is this at all interesting, or why is it suspected to be dangerous to humanity?

Think of the size of the system that exists to power this single "intelligence", equal to one very educated person with a regular computer in front of them. Is one of these that scary? To some people, perhaps, but hardly something we would worry about.

The concern is not that a superintelligence whose "brain" is spread across computer chips is going to annihilate humanity.

The real concern is that a group of humans will try to use this machine to do bad things.

AI doomsday scenarios are built from an absurd misunderstanding of economics, energy, and how current AI systems work.

Since there is no actual intelligence going on, there is no evil intelligence possible.

Our concern should be about unintended consequences of people doing stupid things. Not AI doing it on its own.

I think the doomsday stuff undermines the clarity needed to see the real threats.

And it is used by the Silicon Valley dudes to delay regulatory intervention.

It will not be surprising to anyone, but I see the problem as a very specific one: AI is commodified and controlled by a bunch of people with a very bad aim, which is to make profit from AI. They will rent the large machine to whomever will pay. They will control how it does the math and how the model is built, without oversight and without the collective understanding and decisions necessary to shape that oversight. They will do this with only their shareholders, those money-gobbling machines, in mind.

The danger is not AI, it is AI power in the hands of capital.

The call should be for regulation and open research. Regulation should evolve in step with the research itself. Just like nuclear research. Energy research. Psychology research. Just like research into biology and pharmaceuticals.

Just like high-energy and advanced physics at CERN.

All these are tools that, in the wrong hands and with no oversight, could cause great damage to society.

We need to have collective intelligence to control Artificial Intelligence. More importantly, we need collective decision making to control capital's application of this tool.

Fantasists (called "rationalists" today because that's the kind of propagandist world we live in) love to think and talk about doomsday scenarios. These are fun to think about and watch movies about (if you are into that sort of thing), but these fantasies of fools are not where the real dangers to humanity are.

You do not need to dream up end-of-world scenarios; they already exist. And we should be using this new tool to help us find solutions to our mess.

What we cannot do is hand it to the suspected sociopaths who run anti-social capitalist profit machines and expect a good outcome.