How to think about AI

We talk a lot about Artificial Intelligence these days (for good reason).
What exactly is it?
I know I'm not the only one who thinks there's a bit of confusion in at least some of this talk.
In this post I'm going to try to sort through it. My goal here is not to answer the question, but to untangle the relevant concepts a bit and to simplify how we should approach it.
Let's start with two baseline assumptions. With each, there's a risk we may part ways - I'm happy to defend them another day, if so.
ASSUMPTIONS
  1. First, our brains are physical objects. All of the deliberate movements and sounds we make are caused by the physical goings-on inside our craniums, and none of that is anything magical. Science works. This is non-negotiable, for me. If you don't think this, then we should be having a different discussion.
  2. Secondly, slightly more controversially, the question as to how we come to utter the words we do, etc., is not waiting on as yet undiscovered principles of basic physics. It's not the case that there's something about the way the molecules and other structures in our brains are arrayed which somehow unleashes esoteric phenomena not so-far understood by physical science, maybe some hitherto unknown quantum specialness. At the level of physics, it's all just normal, familiar stuff going on in the grey matter. How those myriad molecular-level physical processes combine to cause us to speak is a complex and fascinating problem, but our solving it won't require the physics, chemistry or biology books to be re-written.
So, again, point 1 is that it's all scientifically understandable, and point 2 that it's understandable with today's physics, chemistry and biology.
THE JARGON: "COGITATION"
So, let's build on these assumptions. A couple of scenarios:
* You're in the bedroom, your spouse is in the hall. S/he yells to you - "Hey, where's the broom?" It's normally in the hall closet, but your spouse knows that. Then you remember you left it in the corner of the bathroom. You yell back, "It's beside the shower!"
* You're walking through town on your usual route, when you find the road you would take is closed. You are frustrated, but briefly consider your options, come up with a best route, then set out on your way.
In each of these cases, there's a small challenge, you think briefly, then you take action (yelling an answer being a kind of action).
When we think like this, something happens in our brains. Here's my basic claim (compare: the theory of Anne Elk (Miss)): what we care about concerning A.I., or what many of us care about or should care about, is the scientific question, What exactly is that thing which happens in our brains, on occasions like this? What is the nature of the processing? There are of course other questions, but this, I think lots of us will agree, is the biggie.
With this in mind, a relevant current question is: Is what's going on inside ChatGPT relevantly the same as this? Or is there something substantial we do which is different?
For reasons I will get to, my opinion is that we should not simply call this something - this phenomenon, if you like - "thinking". I propose instead to borrow a word from our normal vocabulary and to stipulate that, in discussions like this, it refers to this naturally occurring phenomenon. For no particular reason I have landed on the words 'to cogitate', 'cogitation', and so on. So: henceforth, here, 'cogitation' is being used as a name for this kind of processing, whatever it is, which happens in our brains on these occasions - maybe happens in our brains pretty much all the time. It may well be that cognitive science practitioners already have a word for exactly this - if so, I'd be happy to adopt it. I haven't come across it.
Now, you might think, the word 'intelligence' has all along been doing the work I'm now assigning to 'cogitation', so why bother? But first, 'intelligence' isn't a name for a kind of processing, and that really is what we need. And second, 'intelligence' and related words have everyday meanings, and it's not clear, to me at least, that we normally properly differentiate our everyday uses from the specialized need we have here. To make this vivid: a machine can be quite stupid, and yet be doing the very kind of processing we do - just not doing it particularly well. Such a machine could be of considerable interest. Alternatively, certain quite conventional computer programs (e.g., old-school chess programs) can be in an important sense 'intelligent', even though they pretty clearly don't do what we do when they process information. So it seems that artificial stupidity can in some cases be a lot more interesting than artificial intelligence. Without some new jargon, we'd need to be comfortable saying that in a sense there can be stupid intelligence. (Intuitively: intelligence is a measure of something; what we care about is the thing being measured. Compare temperature and heat.)
RESTATING THE PROBLEM
With this bit of jargon, our problem now is this:
What are the processes <xyz>, such that a thing is a cogitator just in case it does <xyz>?
Or, more simply, What is cogitation?
A first question to notice is whether there really is in the world some one, well-enough delineated phenomenon to be looking for, for this new word to refer to. In the 1980s the view was quite popular that there is not - that what happens in our heads, in scenarios like the ones above, is some moderately complex, orchestrated interoperation of many different, independent, specialized processes. This view was popularized by Marvin Minsky in his book, The Society of Mind. Whether this view is correct is an empirical matter. My personal, operating assumption, following many others and buttressed by the large language models, is that it is not. But: if we many are wrong, it will just mean there is no one, clearly defined such thing as cogitation - which would be a useful thing to know and to be able to state succinctly.
A second, slightly complicated question is about the kind of 'phenomenon' cogitation is. What I have in mind is the matter of whether cogitation is a physical or a computational process. Its being a physical process would mean it can take place only in a physical substrate of the right kind. It might mean, for example, that silicon hardware can never truly cogitate. But I think there's a convincing case to be made that it is properly a computational process. What matters, in scenarios like our earlier and relevantly similar cases, is that incoming information, however encoded, be processed so as to be met with suitable outgoing behaviours, where in many cases this outgoing behaviour is the production of more information. This is really what we care about. Information processing is computational - whether two instances of information processing are of the same type is independent of the physical media in which they occur. All of this, if right, means that cogitation is properly taken to be a 'computational phenomenon' (not a physical one), which possibly isn't really a thing. This is a bit awkward, I admit, but I don't think it's ultimately a big deal. Our question in this case is about the algorithm computed - it's clear enough for practical purposes what we're after.
SEPARATE CONCEPTS, SEPARATE PROBLEMS
Having set all this up, we can finally get to why we're really here.
First, consciousness. We live in an age when papers about consciousness get published in prestigious scientific journals. There's a lot of confusion about the topic (more than about AI) - a lot of the time it's not clear what would count as experimental confirmation of a claim, and sometimes it's not even clear what we're talking about. One important value of our reformulation of the question here in terms of 'cogitation' is that it should just factor out all this confusion. If you think consciousness is relevant to cogitation, this may be because you understand 'consciousness' to refer to some clearly, unproblematically computational or physical process, as the case may be, completely unmysterious in itself. If so, then you can just say what you think it is and how it fits in. All the questionably intelligible talk about "what it's like" and about so-called 'qualia' can be completely side-stepped. On the other hand, if you think that consciousness really is special and mysterious and essential to what I'm calling 'cogitation', meaning you think that one or other of my guiding assumptions earlier is false, then we at least have the benefit of being able to articulate the difference between us. If this is the case, it serves as an invitation to you to step up and say, in clear terms, what consciousness is, such that it is relevant to cogitation.
Second is the question about meaning and belief. A key characteristic of our beliefs and our hopes - the so-called 'intentional states' of our minds - is that they are about the world. When you think that your dog is asleep, your thought is about something external to and independent of you, to wit, your dog. Similarly, when you say, e.g., 'Fido is asleep', the word (name) 'Fido' means, or refers to - is related to - that independent being. It is an abiding problem in philosophy to explain these relations. Generally, it is considered not to be trivial - not to be a relation, for example, which familiar computers can obviously stand in to the thing in question, simply by generating some text with the word 'Fido' in it. One might have thought that for something to be an AI (so, lapsing back to the terms I am rejecting), its words have to have meaning and its internal states have to be about the world - that understanding AI requires solving the problem of intentionality. The jargon I'm recommending completely sunders these problems. Investigators into cogitation do not have to, and should not have to, solve the philosophical problem of intentionality in order to do their work. These are separate problems.
My last point is the one I passed over earlier, concerning thinking. In the picture I am recommending, we have two quite different, incompatible ways of understanding ourselves. One is as complex physical objects bound by the laws of nature, tracing sometimes elaborate but in a sense completely predictable paths through the world. The problem of cogitation belongs to this understanding. This is the world of science and the understanding it affords. The second understanding is as rational, willful agents, bound in some sense by the principles of reason, possessed of intentionality, and capable of meaningful speech. In this view, we talk meaningfully and reason and matter to one another. The activity of thinking belongs to this second picture - it is something done by intentional agents, not by merely complex physical systems. This position is a kind of dualism, I acknowledge. It is not a dualism of substance - of mind and matter - though, but rather a dualism in our understanding of ourselves.