This document details the model which motivates the thinking here. It comes in two versions, bottom-up and top-down, which differ only in the order in which they present the model's definitions. If you want to cut to the chase and then work back to the specifics, the top-down version is the one to consult:  bottom-up  top-down

Effectively everything said here about truth, meaning, consciousness and other matters is a consequence of what I am inventively calling 'the model'. The model imagines a minimal world of agents engaged in a simple activity superficially like speaking a language.

It's augmented incrementally until familiar properties of language emerge. Other people have of course done this - what I think distinguishes the approach here is the elements it does and does not include, and its scaled-back explanatory ambitions.

As the model is developed, its elements acquire new properties. To keep things clear, these properties are named following a convention - their names all have a leading '_' (in some places I have used a leading 'p'). The names make plain the real-world properties they're supposed to model. The idea behind a thought experiment such as this is to separate, in principle, the activities

  1. of imagining a world of people engaged in plausible activities and stipulating definitions to describe them (this is hopefully uncontroversial in itself), and
  2. of mapping those definitions onto familiar concepts (this is where the controversy comes in, potentially).
Ultimately the goal is to get to a point where we are inclined to say, e.g., 'truth is _truth', i.e., that a particular thing in the real world is true just in case the relations in which it stands to other parts of the world mirror the relations in which its corresponding model element stands to corresponding other parts of the model, when the model element is _true.

The model in the first instance imagines a large set of people, which it terms '_agents', milling about in reasonable proximity. They are prone to uttering strings of sounds, '_sentences', which, as it happens, resemble sentences of (say) English - but we are to imagine these strings having no meaning (initially). The model stipulates that _agents reflexively assign each of them -each utterance of a _sentence- a '_value' between 0 and 1. This is a sort of gut-reaction, yes-no evaluation, the valuation being intrinsic to the 'token' _sentences. It is formalized in the model by allowing that every _agent has a kind of table or function which associates a number with every possible token _sentence. Keeping to our clever naming practice, this is called the '_value function'.
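
To fix ideas, here is a rough sketch of these stipulations in Python. It is purely illustrative - the names and types are mine, not the model's - but it shows the shape of the thing: a token _sentence is a sound-string tied to an occasion, and each _agent carries a _value function from token _sentences to the interval [0, 1].

    from dataclasses import dataclass, field
    from typing import Callable, Set

    @dataclass(frozen=True)
    class TokenSentence:
        sounds: str     # the bare sound-sequence, e.g. "Bob's dog is a mutt"
        time: float     # when it was uttered
        place: str      # where it was uttered

    @dataclass
    class Agent:
        # The _value function: every possible token _sentence gets a number in [0, 1].
        value: Callable[[TokenSentence], float]
        heard: Set[TokenSentence] = field(default_factory=set)  # tokens encountered so far

        def values(self, s: TokenSentence, threshold: float = 0.5) -> bool:
            """The gut-reaction, yes/no reading of the graded _value."""
            return self.value(s) >= threshold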

The crux of the project is here. Whereas in most conventional accounts of language -I think it's fair to say- the goal is to explain sentence evaluations in terms of other things (meanings-grasped or what have you), in the model these are fundamental. They are the given.

The reason things mostly are the usual way and not as in the model is hardly surprising. There are effectively infinitely many token sentences, yet somehow our valuations are (mostly) predictable. The merest sense of intellectual responsibility would seem to compel us to try to produce a theory which generates predictions about what we understand and agree to, in terms of what is said - to be able to explain the reliability of our expectations about sentence valuations based on their words' meanings. But this, I think (as others have thought), actually misapprehends what needs explaining, and the terms of any possible explanation. Considered as a natural phenomenon -as measurable dispositions towards sentences taken as sound-sequences- sentence valuation presents an important and viable, finitary, scientific project. Neural net modelling, particularly in the last ten years, has made enormous strides in this respect. What is key for present purposes is that such scientific theorizing has no need of meanings or truth, or of any semantic or intentional concepts. It is a project of natural science proper. Theorizing of this type carries the whole explanatory burden. We are absolved of the obligation to formulate a theory in semantic terms. Which is a good thing, because any such theory is bound to fail.

Anyway. Grasping that the semanticist's assumed explanatory burden is specious is a hurdle which must be leapt. Once it is, a much clearer view becomes available.

The way it's imagined, the model also needs to distinguish between _sentences which an _agent has heard or uttered and those s/he has not (an _agent's _value function maps all _sentences, heard or not). With just these elements in place, interesting questions arise as to how best to model real language. One consequential realization is that the _value function should be designed so that whether an _agent _values some given _sentence depends on what other _sentences s/he has already heard and _values (intuitively, what you're inclined to believe depends in part on what you already believe). This makes things tricky, but realistic.
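
In the terms of the earlier sketch (still only illustrative, and writing token _sentences as plain strings for brevity), the refinement is that the _value function takes the _agent's current stock of heard-and-_valued _sentences as a second argument:

    from typing import Callable, FrozenSet

    # Refined _value function: the _value of a new token _sentence depends in part
    # on the _sentences the _agent has already heard and _values.
    ValueFn = Callable[[str, FrozenSet[str]], float]

    def currently_valued(value: ValueFn, heard: FrozenSet[str],
                         threshold: float = 0.5) -> FrozenSet[str]:
        """The _sentences an _agent _values at the moment, given what she has heard."""
        return frozenset(s for s in heard if value(s, heard) >= threshold)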

_Sentence _valuations are meant to be mostly obvious to _agents, but not of any great intrinsic interest - like, say, basic colour recognition. To give _agents a motive to speak, a further property, _pleasure, has to be added to the model. _Pleasure is meant to be some nice feeling, distinct from any familiar feeling, which is experienced when certain _valuable _sentences are encountered. It's intended to correspond to the real-world benefits which truth facilitates getting, including the avoidance of familiar pains. It serves in the model as the only motive for exchanging _sentences - for '_conversing'. It is a contrivance of the model which ultimately has to be cashed out when mapping the model onto reality.

With a bit of finessing, it emerges in the model that _agents would naturally want to cooperate to amass as large a collection of _valuable _sentences as possible (a bit like how we naturally cooperate collectively to improve our individual financial circumstances, even as we compete). This gives rise to a property which on inspection looks a whole lot like truth in normal talk - the property of belonging in that collection.

With _truth in place it becomes possible to introduce _words, and to define a concept solely in the model's terms which arguably does all the work which we should expect of a concept of word-meaning. And with _word-_meaning in place, we can get straightforwardly to a concept of _sentence-_meaning or proposition.

(Note: this explanation follows the leading-'_' convention for the model's terms, explained in 'The Model' -> 'The Idea'.)

Possibly what distinguishes the model from other treatments is that it represents a speaker's goal in talking to be, not the acquisition of information, but rather simply the maximization for herself or himself of some simple good (_pleasure, in the model). The details of the model which make this interesting (and, I think, realistic) are that

  1. It's somewhat hard to come up with novel _valuable _propositions. They're valuable in part because they're scarce.
  2. It costs effectively nothing to repeat a _sentence to someone else -to share its _pleasure, if you like.
  3. The _values _agents assign to new token _sentences depend in part on tokens they've already heard and _value (what you're inclined to _believe depends on what you already _believe).
  4. Where _agents have mostly the same prior _beliefs, their _valuations of new _sentences mostly agree. Where their prior _beliefs differ, so may their _valuations.
  5. Lastly, the _values _agents assign to _sentences are in part - but only in small part - up to them. They have a limited ability to choose whether to _value new token _sentences. It costs effort, however, to change one's _beliefs. (People have to be at least a bit responsible for what they believe, right?)

These factors would conspire to create an interesting dynamic, in some ways like what governs an economy of goods. On the one hand, _agents would have a strong incentive to share their _sentences (to talk to each other). That is, the inhabitants of the model would naturally evolve a sort of contract whereby they each voluntarily speak their _sentences, provided others do likewise, since their doing so would be to everyone's benefit. They would be motivated to cooperate. Acquiring the _pleasures of _language would not be a zero-sum game (far from it).

But they would also have an incentive to compete. The idea is that

  1. _agents individually have large stocks of _valued _sentences (their _beliefs), which would significantly affect the _values of newly encountered _sentences.
  2. To the extent _agent B's _beliefs overlap with _agent A's and they share their _sentences, B effectively works for A in getting new _pleasurable _sentences. A will likely _value what B does and so likely get pleasure from B's new discoveries. And the same point goes for the _speech community in general.
  3. When A's _valuation of a _sentence differs from B's, it is thus in her interest to win the community, and likewise B, over to her _valuation. The same would be the case for B. They would, in effect, each be striving to get the community -and one another- to work for themselves individually, rather than having to forfeit that _sentence's _value as a cost of staying in general harmony with the community.

What would competition look like? Suppose we inhabit the model, and you utter a _sentence in my presence. Other things being equal, the model implies you _value this _sentence. Suppose I do not _value it -it is not added to my set of _beliefs. For the reasons just given, I now have an incentive to change your _valuation. How do I do this? Well, if I _disvalue it, this may well be in large part in virtue of other specific '_disconfirming' _sentences in my _belief set. If any of these is not in your _belief set, then maybe by uttering it I will get you to _value it, and so tip your _valuation of the first mooted _sentence to negative. It could indeed be that it is already in your _belief set, but that your _valuation heuristic failed to take account of it - in which case I would merely be '_reminding' you. However, if, as it happens, you _disvalue my suggested _disconfirming _sentence, then clearly the process can recur.

It's not wildly fanciful to suppose that a community of people motivated and constrained as in the model would come to attach some importance to the type of exchange just described. When all factors are taken into consideration, including the bias toward _valuing newly encountered _sentences which are otherwise agnostic, and the effect of pressure to conform, one can see that standards in the conduct of such exchanges would naturally emerge. Being a competitive activity ultimately aimed at a shared good, the manner of its conduct, and of its participants' willingness to revise (or not) their _beliefs, would be something worth noticing. The obvious name for the set of norms governing such exchanges, and for _agents' practices in revising their _belief sets, would be '_rationality'.

It is a counterpart of the norm ascribed to the _sentences themselves, '_truth'.

(Note: this explanation follows the leading-'_' convention for the model's terms, explained in 'The Model' -> 'The Idea'.)
The model imagines that _sentences are of interest to the people in it, not for any information they may convey, but rather for a certain contrived good ('_pleasure') associated with some of them (_sentences which express novel '_propositions'). _Agents' sole goal in the model is to maximize their personal quotient of this _pleasure.

Because the good can be shared effectively without cost (just by speaking a _sentence to someone), and because the model's _agents are mostly in agreement, it's easy to see their best strategy individually would be to agree mutually to assist one another. They would naturally form an implicit contract whereby each speaks her valuable _sentences to others on the understanding that they will do likewise.

Crucially, the model stipulates further that _agents' _valuations of newly encountered _sentences depend in part on what they have encountered and _valued in the past, and also that _agents have some latitude to re-_value _sentences - just a bit. The point of this wiggle room is that if _sentence _valuations are interdependent as stipulated, the possibility will arise of a newly encountered _sentence being in conflict with a large prior collection. The project of _value maximization is best facilitated by allowing a bit of tolerance with respect to individual _sentences, so as not to oblige abandonment of what may after all prove to be a sound collection. _Value is in a way cumulative for _agents, and they do best in the project of getting it if they're allowed a modicum of discretion. If _valued _sentences are understood as beliefs, these stipulations hopefully won't feel too exotic.

_Valuations being thus interdependent leads to a first interesting property of _sentences. We can suppose there to be some one set of them which would maximize overall _value for any given _agent, were she to _value all and only its elements. If, as we have said, her goal in trading _sentences is to get as much _value as she can, then she is ultimately looking to _value all and only the elements of this set. At any given time, however, since the aforementioned wiggle room opens the door to error, she may _value _sentences which in fact are not in it. For any given _sentence, _valued or not, then, there will be the question as to whether it is in the set. Clearly the property of being in that set is of interest, and deserves a name. We will say a _sentence is _true for _agent a just in case it's in _agent a's set, and _false for a if not.
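
For a toy universe of _sentences, the definition can be spelled out by brute force (an exhaustive search no actual _agent could perform; 'agent_value' is the belief-relative _value function sketched earlier, and the names are again mine, not the model's):

    from itertools import chain, combinations

    def total_value(agent_value, candidate_set):
        """The total _value the _agent would get were she to _value exactly this set."""
        candidate_set = frozenset(candidate_set)
        return sum(agent_value(s, candidate_set) for s in candidate_set)

    def maximizing_set(agent_value, universe):
        """The one set of _sentences which would maximize this _agent's total _value."""
        subsets = chain.from_iterable(combinations(universe, n)
                                      for n in range(len(universe) + 1))
        return max((frozenset(sub) for sub in subsets),
                   key=lambda sub: total_value(agent_value, sub))

    def true_for(sentence, agent_value, universe):
        """_true-for-a: membership in a's maximizing set."""
        return sentence in maximizing_set(agent_value, universe)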

'_truth-for-a' is of limited interest. For one thing, an _agent has no way of knowing what is in that set, apart from what she _values at any given moment. So the distinction between what she actually _values and what she ought to, to maximize her total _value, would not hold any interest for her. For another, two additional stipulations of the model, that novel _valuable _sentences are hard to come up with but very easy to share, and that people mostly agree in their _valuations, imply that an _agent will be getting most of her _valuable _sentences from exchanges with others. This suggests that the parallel question about collective _value maximization may hold more interest.

Suppose _agent a _values _sentence s and utters it in the presence of b who, as it happens, _disvalues it. As things stand, we have four possibilities.

First, it may be that s is genuinely in a's maximizing set but not in b's - their maximizing sets simply disagree, and s is _true for a and _false for b.

More interestingly, it may be that b is mistaken about s - that in fact his maximal set agrees with a's, but he doesn't realize it. So s is _true for both a and b - a is right and b is mistaken.

Or, vice versa - it may be that s is _false for both - a is mistaken and b is right.

(The last, not so interesting case of course is that a and b's sets diverge on s, but they're both mistaken (s is _false for a and _true for b)).

Given these possibilities, what ought _agents to do when confronted with such differences? In particular, should they treat what is _true for one _agent as _true for all? If _truth is just _truth-for-a -if it is merely '_idiolectic'- then for every disagreement between two _agents about a token _sentence, either there is a genuine divergence between them as to what is _true in their respective _idiolects, or their _idiolects properly agree and one or other of them is simply mistaken about what is _true for him. It becomes apparent that the best strategy for _agents individually to maximize _value is to treat all differences as signalling a mistake for one or other _agent. That is, it emerges that it is quantifiably advantageous to treat _truth as objective - provided enough _agents opt in to this strategy and assuming, as we have, both that _agents mostly coincide in their appraisals of _value and that appraisals of _value are at least in part malleable.

To see this, suppose that an _agent disagrees with the majority about some widely _valued _sentence. She then potentially forfeits the _value (and attendant possible _pleasure) of all _sentences whose _support depends on it. These may be considerable in number. If she disagrees with the majority about even a small fraction of all _sentences, she loses not just the _pleasure of the _sentences hierarchically supported by them but also the confidence that _theory _sentences offered by others will rest on foundations which she would _value, and hence, in many cases, the possibility of _valuing _sentences on the strength of others' ‘_testimony’. Admittedly, this _agent will get some compensation in the form of hierarchically dependent _sentences she contrives herself on the basis of her _idiolectic _sentences. As even the most prolific of solitary _sentence producers will fall far short of what her community collectively can contrive, however, and since she will be deprived of the initial error-check afforded by communal acceptance, she will on balance be considerably less well-off. The extra _value to be gained by an individual by participating in a community of _agents in which effectively everyone commits to and contributes to the construction of a single shared edifice of _truths will easily offset the cost of sacrificing the _pleasure of any genuinely _idiolectic _sentential affinities.

And so we have in the model a concept of _truth: A _sentence is _true, roughly, just in case it is an element of the set of _sentences which would maximize the combined _value of all _speakers, were they all to _evaluate all of its elements. This being impossible, it stands merely as an ideal. I will note for now just two points.
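
Before those points, here is the definition spelled out in the brute-force terms used for '_truth-for-a' - the only change is that the quantity maximized is the combined _value across all _agents ('agents' below is just a list of their _value functions; illustrative only):

    from itertools import chain, combinations

    def combined_value(agents, candidate_set):
        """Summed _value across all _agents, were each to _value exactly this set."""
        candidate_set = frozenset(candidate_set)
        return sum(value(s, candidate_set) for value in agents for s in candidate_set)

    def is_true(sentence, agents, universe):
        """_true (for all): membership in the set maximizing combined _value."""
        subsets = chain.from_iterable(combinations(universe, n)
                                      for n in range(len(universe) + 1))
        best = max((frozenset(sub) for sub in subsets),
                   key=lambda sub: combined_value(agents, sub))
        return sentence in best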

The first point is just a reminder that the definitions so far are mere stipulations about hypothetical people in hypothetical circumstances. It is fair to ask whether such people are possible, and whether, if they existed, they would behave as described. These are not philosophically loaded questions -should not be, anyway. What is of course loaded is the fitness of the stipulated concepts to model their real counterparts. My claim, of course, is not just that such people are possible but that we in fact are them, and our usual concepts, the model's.

The model recognizes a nominally truth-like idiolectic property, '_truth-for-an-_agent'. A committed defender of individualistic rationality might plausibly cleave to this in the hope of resisting the depredations of the current approach. This is a mistake on two fronts, I think. First, it makes what is true completely unknowable, and hence apparently without practical significance.

Second, though, I think it fails to see -what the model conveys- that the concept of _truth (for all) has a character substantially different from that of _truth-for-a, that it would play an importantly different role in the lives of _agents, and that this character correctly maps onto our familiar concept of negotiated, public truth.

(Note: this explanation follows the leading-'_' convention for the model's terms, explained in 'The Model' -> 'The Idea'.)
So far we have created a model of a much pared-down correlate of a natural language like English, and seen how its minimal elements would lead to both cooperative and competitive interactions among its speakers as they pursue their individual ends. We have seen, too, that this dynamic would naturally be described in terms of an ideal which on consideration looks a lot like truth. We are now in a position to appreciate how concepts emerge in the model which correlate to our familiar concepts of meaning and proposition.
The concept of _truth in the model was arrived at, notably, without assuming _sentences to be composed of _words. _Sentences to date have been just sound patterns of a kind. We can now add _words to the model, understanding them to be just smaller, inert sound sequences. We'll take there to be a large, finite set of them, and stipulate that negatively or positively valuable _sentences are always decomposable into some sequence of them. The path to their _meanings is a bit tortuous, but I think, at each point, locally straightforward. I will outline the main ideas here, but acknowledge there is more to say.
Let's suppose there to have been uttered some token _sentence s on an occasion, and investigate the matter of the _meanings of its constituent parts.
The first point is that we will actually be concerned with _phrases, not specifically _words, a _phrase being understood to be any word-sequence which is not itself a _sentence.
The second point is that our primary concept is not of _meaning as such, but rather of sameness of _meaning - specifically, sameness of _meaning of two token _phrases. The _meaning of a token _phrase p will then be identified with the set of all token _phrases having the same _meaning as p (allowing ourselves some latitude, as sets aren't strictly the right sorts of things to be meanings).
The core idea is that token _phrases q and r have the same _meaning just in case, were you to substitute r for q in all the (atomic) _sentences containing q which are _true at the moment of its utterance, you would get all and only the _true _sentences containing r at that moment.
So: suppose our token _sentence s is
  • "Bob's dog is a mutt".
By the terms of our definition, what would it take to settle that "Fido" means (refers to) the same thing (animal) as "Bob's dog"?
Well, at the moment, there will be a bunch of other _true token "Bob's dog" _sentences:
  • "Bob's dog is on the couch"
  • "Bob's dog is sleeping"
  • "Bob's dog suffers from arthritis"
  • etc.
Our requirement is that the token _sentences,
  • "Fido is a mutt"
  • "Fido is on the couch"
  • "Fido is sleeping"
  • "Fido suffers from arthritis"
  • etc.
all be _true as well, and that there be no additional _true "Fido" _sentences.
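The test just walked through can be put as a short sketch (treating token _sentences as plain strings, ignoring the restriction to atomic _sentences, and checking only the direction stated in the definition; the function name is mine):
    def same_meaning(q, r, true_now):
        """Token _phrases q and r have the same _meaning at this moment just in case
        substituting r for q in every currently _true _sentence containing q yields
        all and only the currently _true _sentences containing r."""
        q_sentences = {s for s in true_now if q in s}
        r_sentences = {s for s in true_now if r in s}
        return {s.replace(q, r) for s in q_sentences} == r_sentences

    # e.g. same_meaning("Bob's dog", "Fido",
    #                   {"Bob's dog is a mutt", "Fido is a mutt",
    #                    "Bob's dog is sleeping", "Fido is sleeping"})  -> True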
So far, so good. But a number of questions immediately surface. I will consider here just two.
What about Ann's dog, also named "Fido", who is a pure-bred Dachshund? Do we not now also have a token _sentence,
  • "Fido is a Dachshund",
which is _true? Agreeing that
  • * "Bob's dog is a Dachshund"
is not _true, does this not preclude identifying the _meanings of "Bob's dog" and "Fido"? The question relates to ambiguity, and raises an obvious problem. In the familiar way of thinking about language, something -typically connected to speakers' intentions- serves to disambiguate word meanings, and hence to permit evaluating "Fido is a mutt", in the moment. Simply put, in the usual way of thinking, word-meaning determines sentence meaning, and for apparently good reason - not the other way around, as here.
Our thought is that, at the very moment of speech, for phrase ambiguity not to be an obstacle to talk, it must, barring special circumstances, be the case that prior speech would condition us to disvalue "Fido is a Dachshund". The key, for the model, is that there be an intuitive, pre-theoretical notion of sentence valuing by which we would (mostly) disvalue "Fido is a Dachshund" in our example moment. Differences in token _sentence valuations must underwrite token _word disambiguation, in the model, not vice versa (as in conventional approaches).
What about synonymy of tokens in different moments? Allowing the foregoing, one might notice that what has been provided for so far is only synonymy-in-a-moment - resolution of the question as to whether two token _phrases uttered in some one moment of speech have the same _meaning. However: any half-way adequate account of meaning has to have something to say about synonymy across different moments (to use my jargon). So, yesterday, I said "Fido is on the rug" (and maybe, "Fido is not on the couch"), today I say, "Fido is on the couch" (and "Fido is not on the rug"). Clearly it has to be possible that my uses of "Fido" in the two moments have the same _meaning - that I was talking about one and the same dog at the two times. So far, nothing has been said about how in the model this can be made sense of.
This challenge I think is not too difficult, but it does require introducing some technical apparatus. We start by noticing that the presently-_true "Fido" _sentences will include past and future-tensed _sentences, as will the sets of _sentences _true at other times. For example,
today:
  • "Fido is on the couch"
  • "Yesterday, Fido was on the rug."
yesterday:
  • "Tomorrow, Fido will be on the couch"
  • "Fido is on the rug."
Taking account of these, our sets of "Fido"-_sentences from yesterday and today will align (will map 1-1), albeit in some cases with verb-tense differences between corresponding elements. The required apparatus I mentioned is just a way of translating all _sentences into terms which are present-tensed but explicitly indexed to a particular time and place, removing the verb-tense differences. And this all has to be done without presupposing any intuitive concept of meaning, of course. If it can be done, then a moment of speech can pick out a set of 'Fido'-_sentences in a manner which does not tie it '_grammatically' to the time of utterance. This set may or may not then exactly coincide with the 'Fido'-_sentences at some other time or place.
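Very roughly, and only for the 'yesterday'/'tomorrow' cases in the example (real tense is of course far messier, and nothing in the model hangs on these particular string manipulations), the translation might look like this:
    def index_sentence(sentence, utterance_day):
        """Rewrite a tensed token _sentence as a present-tensed one explicitly
        indexed to a day, so tokens from different moments can be compared."""
        s, day = sentence, utterance_day
        if s.startswith("Yesterday, "):
            s, day = s[len("Yesterday, "):], day - 1
            s = s.replace(" was ", " is ").replace(" were ", " are ")
        elif s.startswith("Tomorrow, "):
            s, day = s[len("Tomorrow, "):], day + 1
            s = s.replace(" will be ", " is ")
        return s.rstrip(".") + " [day " + str(day) + "]"

    # index_sentence("Yesterday, Fido was on the rug.", 12) -> "Fido is on the rug [day 11]"
    # index_sentence("Fido is on the rug.", 11)             -> "Fido is on the rug [day 11]"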
What now of the meanings of whole sentences, so-called "propositions"? The approach here mirrors the approach taken for meaning. We start by taking the primary concept to be of sameness of _proposition, and allowing the _proposition expressed by a token _sentence s to be (roughly speaking) the set of token _sentences which express the same _proposition as s. So what is it for two token _sentences to express the same _proposition?
The guiding thought here is that two token _sentences express the same _proposition just in case their respective _phrases can be mapped one-to-one in order. For example, consider
  • s1: Bob’s dog is a mutt (spoken yesterday across town)
  • s2: Fido is a Heinz 57 (spoken today here)
These two token _sentences express the same _proposition just in case the contained tokens of
  • a) ‘Fido’ and ‘Bob’s dog’ have the same _meaning, and
  • b) ‘is a mutt’ and ‘is a Heinz 57’ have the same _meaning
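As a sketch, assuming each token _sentence arrives already segmented into its _phrases in order, and reusing the hypothetical same_meaning test from the _meaning discussion:
    def same_proposition(phrases1, phrases2, same_meaning):
        """Two token _sentences express the same _proposition just in case their
        _phrases can be paired off one-to-one, in order, with the same _meanings."""
        return len(phrases1) == len(phrases2) and all(
            same_meaning(p, q) for p, q in zip(phrases1, phrases2))

    # e.g. same_proposition(["Bob's dog", "is a mutt"],
    #                       ["Fido", "is a Heinz 57"],
    #                       same_meaning)   # the substitution test sketched earlier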
This definition differs from familiar definitions in more than one way. First, it does not provide that, for example, "Sue is the sibling of Jasmeen" expresses the same proposition as "Jasmeen is the sibling of Sue", as perhaps intuitively is the case. The concept of _proposition can possibly be augmented to provide for this, but details remain to be worked out.
A second possible divergence is that the token _sentences which constitute a _proposition may differ in truth value. Intuitively, my utterance today of "Fido is on the rug" may express the same proposition as my utterance yesterday, though yesterday's was true and today's, false. The primary objects of _belief remain token _sentences, though there is clearly scope to expand our repertoire of concepts here, to accommodate cases ("token _proposition", etc).
One issue which looms large in philosophical discussions of propositions is the matter of propositional attitudes - things like beliefs and desires - which relate people to propositions, and the semantics of the sentences which express them, such as "Bob believes Fido is asleep". One virtue of the present approach is that it greatly simplifies this problem. By inverting the priority of truth and meaning, the whole need to invent different semantics for words in and out of so-called opaque contexts mostly falls by the wayside.
There is evidently more to be said, but I leave the concept here for now.
(Note: for reasons of simplicity, this explanation drops the leading-'_' convention for the model's terms, used elsewhere).
One particularly interesting consequence of the model is that it represents, without any substantial auxiliary assumptions, sentences we would use to express conscious experience or to report dreams, and permits us to characterise their semantics. Doing this, then descending semantically, helps us to get a grip on the subject matter in question.
The elements of the model were nothing more than a set of speakers or '_agents', a set of '_sentences', a set of functions mapping _sentence utterances to '_values' (one function for each _agent), and a '_pleasure' property associated with some sentences' being _valuable. The model's contribution is to make plain the minimal parameters needed to give rise to convincing concepts of truth and meaning in language. These are,
  1. sentence uttered
  2. time of utterance
  3. place of utterance
  4. place of hearer's focus of attention
  5. context of utterance, being the set of token sentences recently heard by the hearer
  6. sentence utterer
  7. full set of encountered token sentences valued by the hearer ('beliefs')
Philosophers traditionally have distinguished beliefs based on sensory experience (the candle in front of me is lit) and those arrived at through rational reflection (17 + 8 = 25). Acknowledging the vagueness of this distinction but otherwise ignoring, for now, the voluminous discussion of the subject in philosophy, the two classes of token sentences corresponding to these two types of belief can be specified in the model, like so:

A token sentence s is an observation sentence for a just in case her valuing of s is (relatively) independent of her belief set B. That is, the value function returns the same value for s regardless (almost) of the B parameter.

A token sentence s is a theory sentence for a just in case a's valuing of s is dependent on B.
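
Gathering the parameters listed above into one signature, the two classes can be sketched roughly as follows (illustrative names only; dropping single beliefs from B is one crude way of cashing out '(relatively) independent'):

    # value(sentence, time, place, focus, context, utterer, beliefs) -> [0, 1]
    # i.e. the seven parameters listed above, with 'beliefs' playing the role of B.

    def is_observation_sentence(value, s, time, place, focus, context, utterer,
                                beliefs, tolerance=0.05):
        """s is an observation sentence for this agent if its value barely moves
        when the belief set B is perturbed (here: single beliefs dropped)."""
        base = value(s, time, place, focus, context, utterer, frozenset(beliefs))
        for b in beliefs:
            perturbed = frozenset(beliefs) - {b}
            if abs(value(s, time, place, focus, context, utterer, perturbed) - base) > tolerance:
                return False
        return True

    def is_theory_sentence(value, s, time, place, focus, context, utterer, beliefs):
        return not is_observation_sentence(value, s, time, place, focus, context,
                                           utterer, beliefs)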

In articulating these definitions, what becomes apparent is that there are more distinctions to be made.
The point I want to get to is that the model permits us to isolate a class of sentences relevant to the present subject. These are token sentences which
  1. are positively valued only when utterer = agent valuing
  2. are independent of context
  3. are independent of beliefs
We can stipulate in the model that this class is non-empty - that in fact all agents value some such sentences. Let's label sentences in this class, 'Q-sentences'.
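In the same illustrative terms, a Q-sentence might be picked out like this (again a crude operationalization of 'independent': the value must not vary across contexts or belief sets):
    def is_q_sentence(value, s, time, place, focus, contexts, belief_sets,
                      valuer, utterer, threshold=0.5, tolerance=0.05):
        """A Q-sentence for this agent: positively valued only when she is the
        utterer, and its value is insensitive to context and to beliefs."""
        if utterer != valuer:
            return False                      # condition 1: utterer = agent valuing
        samples = [value(s, time, place, focus, c, utterer, frozenset(b))
                   for c in contexts for b in belief_sets]
        return (all(v >= threshold for v in samples)           # positively valued
                and max(samples) - min(samples) <= tolerance)  # conditions 2 and 3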
Our thought is to get clear on the semantics of such sentences, and in so doing to illuminate their subject-matter.
So, first, are such sentences candidates for being true? The model's initially frustrating but ultimately illuminating answer is, in a way, yes, and in a way, no. Recall that the model tells us,
Truth: A token sentence is true just in case it's an element of a set which would maximize the combined aggregate value of all speakers, and which is such that the removal of any element would result in a set of lower combined aggregate value.
By this definition, Q-sentences are indeed true or false as the case may be. Adding such valued sentences to the set does increase total value, even if they are valued by only one person.
However: they are in a sense degenerate, and so not fully-fledged peers of true sentences of other classes. As the elaboration of the concepts relevant to truth in the model made plain, its interest lies in its role as a norm of conversation. The purpose of conversation is, for each of us, to maximize our individual quotient of value, and our effectiveness in achieving this increases vastly with the number of conversational participants. Conversation is regulated by a contract which binds participants to strive to maximize collective sentential value, other things being equal; truth is what does this. Adding a new token sentence to your belief set not only increases your net sentential value, but is expected also to expand your facility in accruing others.
Q-sentences, being valued solely by their utterer and independent of the belief parameter to the value function, 'B', are disengaged from this activity. My knowing you dreamt last night that there was a blue armadillo on the dresser increases my net sentential value only by the default amount I accord your sentences on account of trust - the sentence does precisely nothing else for me. Were you to value the denial of this sentence -were the denial true in the degenerate sense we are allowing such sentences to be- it would make no difference at all to anyone. Indeed, we could posit a more restrictive definition of truth which excluded such sentences altogether, one which would do all the work expected of a concept of truth, though at a cost of simplicity and generality.
Q-sentences are true, then, but only in this qualified, degenerate sense. What now about the meanings of their relevant words and phrases? Recall that the model manages solely with a concept of same meaning, and that this is effectively a matter of truth-preservation in context of utterance. Two token words have the same meaning just in case one can be swapped for the other in all (atomic) sentences true at the moment without changing the sentences' truth values.
With this in mind, consider two relevantly true token sentences,

s1, uttered at 1:00PM: The pain sensation in my toe is now throbbing.

s2, uttered at 2:00PM: The pain sensation in my toe is now dull and constant.

Is it possible that the two tokens of 'the pain sensation in my toe' mean -refer to- the same thing? I think this is an interesting question for any theory of sensation talk, but we can set aside any general concerns here about object identity as they apply to sensations. Let us allow that by the terms of the model, they do. The interesting point is that insofar as sameness of meaning is wholly a matter of preservation of truth, and the truth being preserved is degenerate in the way just discussed, the sense in which these tokens have the same meaning is similarly degenerate. The meanings of expressions used in Q-sentences inherit the degeneracy of their containing sentences' truth.
The significance of all this lies in the contrast with the sentences of science. The hallmark of science is objective verifiability. A scientific sentence is true or false, as the case may be, in precisely the way Q-sentences are not: for any agent, believing a scientific sentence, and so having it included in the B parameter to her value function, implies there will be other token sentences whose value will be different than had the sentence not been included. The propositions of natural science are characterised by the reliability of their connections to theory and experience and so, in the terms of the model, by their support connections to other sentences. All of this being the case, we should expect Q-sentences to be inescapably opaque to science.
Q-sentences, I submit, correctly model qualia reports such as interest theoreticians seeking an understanding of consciousness. Q-sentences have enough to them for us to take interest in them - to treat them as more than mere jabberwocky-nonsense - but not enough to make them worthy of public discussion; not enough for science to be able to get a grip on them. The problem of consciousness arises because we fail to grasp the differences between the two classes of sentence. We treat Q-sentences as though they are full-fledged peers of the sentences of science, and so expect them to be worthy participants in scientific theory. This is a mistake.
It may be objected that this discussion completely omits consideration of what may be thought to be the crucial matter, which is why the inhabitants of the model would value what I've labelled 'Q-sentences', and why we are apt to 'hold true' our real counterpart sentences. I hold true "I have a pain sensation in my toe" precisely because I feel it, because of the qualia! What about the model's agents? It's all very well to posit these Q-sentence place-holders in the model, what matters is what would explain their valuations. That is the subject matter, properly-speaking, of investigations into consciousness, and it's where this treatment completely whiffs. So it may be objected.
I hope it will be clear on consideration that the conception of the reference of 'a pain sensation in my toe' presupposed by this objection simply begs the question against what I am proposing. To try to buttress this, I will say here three things.
A first response, which requires no commitment to the model, is simply to shift the burden regarding the mooted explanatory lacuna. What mode of explanation, exactly, is in question? If it's meant to be a causal 'because', then the objector has the very hard work ahead of her of explaining (A) how purely subjective (objectively undetectable) qualia can be shoe-horned into a scientifically respectable framework, and -what is maybe even more challenging- (B) how the subjective 'I' which experiences them is meant to be made scientifically intelligible; to make good the claim that it (the 'I') is physically changed in being causally impacted by the qualia. What are the physical boundaries and constitution of the self? One allure of the approach recommended here is that it completely dismantles these apparently intractable questions. Alternatively, if it's meant to be a justifying 'because', the objector seems to be equally at sea. How exactly does the justification work? Wittgenstein's injunctions immediately bubble up here. And if the 'because' is neither causal nor justificatory, then what sort of 'because' is it, exactly?
The second point concerns the role of explanation here in general terms. It's all very well for me to throw the explanatory challenge back in the objector's face, as I just have, but it's not much progress if there's reasonable agreement that an explanation of some sort is called for. A main component of the model is the case that the whole explanatory idiom to which the objector's mooted explanation would belong is misbegotten. It emerges from a conception of rationality as individualistic, according to which truth and meaning can be made sense of only if language can be, in the relevant way, theoretically tethered to non-linguistic reality. The model's point of departure is that sentence valuations - valuations such as the objector insists need explanation - are what's given in experience, and that this experience is fundamentally social (it depends on experience of other people uttering the sentences in question). The thought that intellectual responsibility compels us to try to account in terms of some finite, scientific theory for this admittedly infinite domain of facts originates in a failure to grasp the nature of rationality and our relation to the world. The case that rationality and our relation to the world are as the model maintains is, in the main, simply that the picture of these things it recommends works and is complete. The model does allow that the relevant physical facts can be explained by a finite theory, just not a theory formulated in the terms of semantics (viz., the kind of scientific, non-semantic theory neural net researchers are providing). The aspirations of theoretical semantics are overweening.
The third, related point concerns the putative objects of experience, including the qualia which so vex us. The objector's protest is premised on a conception of the objects of experience as things which can be got hold of independently of language, and which can serve somehow in substantive explanations of our judgements. Some people will not be budged from this position - they will always prefer dismissing philosophical positions such as I am defending to what they will insist are the undeniable facts of experience. To those who would dig their heels in here, I would urge consideration of the profusion of books and papers under whose weight university library shelves groan, on the subject of how perceptual experience figures in judgement. What sort of a thing is a perceptual experience, even of something as pedestrian as a dog on a rug nearby in good lighting, that it can be a basis for judging, e.g., "The dog is on the rug." - how exactly does that work? I should repeat here my warning that this is quite a different problem than explaining how it is that light reflecting off the dog causes retinal and many other neural actuations, ultimately eventuating in the production of vocal cord movements. It is (many think) about how a judgement like this can be justified. It is only the beginning of your problems if you are going to insist that your privately experienced feeling is the basis for your claim 'I have a tickle in my left big toe'.
(Note: this explanation follows the leading-'_' convention for the model's terms, explained in 'The Model' -> 'The Idea'.)
The starting point for the model, and by extension this whole project, is the recognition that our having a feeling of assent or dissent on hearing a token sentence is properly understood as a primitive fact, not amenable to systematic explanation, and the attendant realization that if we allow that speakers are looking to maximize this feeling of assent, then -adding in a few plausible assumptions- sentences acquire a property with exactly the contours of truth.
What was initially missing from the model was a motivation for speakers to maximize the feeling of assent or agreement, as we require them to. This feeling of assent we are now leaning on is comparable to a sense of harmony or dissonance, but only in a weak sense. I did not intend that this whole account of language should rest on the idea that it's sufficiently intrinsically appealing that people would seek it out (through conversation) for its own sake. To fill the gap, the model introduced the (admittedly contrived) concept of '_pleasure'. _Pleasure was stipulated to be something we get from certain _valuable token _sentences only - not all - and in different degrees (it was later amended to be associated with _valuable _propositions not already _believed). It was meant to be a model-specific feeling, distinct from any we actually have, and sufficiently intrinsically worthy to motivate maximizing _sentential _value (the thought, again, being that an _agent's only known way to maximize _pleasure would be simply to maximize the number of _valuable _propositions encountered). One benefit of basing the model on _pleasure, rather than on some range of real pleasures, as one might seek to do, is that it thwarts any temptation to try to ground the diversity of word-meanings in the diversity of our real pleasures. It's a key point that only a single pleasure is needed to get a full treatment of truth and meaning.
Let us now remove _pleasure altogether from the model. What real thing should be modelled in its place to provide _agents' motivation to converse? More simply, bracketing for a moment our theoretical ambitions, what is the practical value of truth? This latter question is not difficult. If I learn that "There is tea in the cupboard." is true and I have the ability to procure the pleasure of tea if there is tea in the cupboard, then I can have the pleasure of tea. Similarly, if I learn that "Rain is predicted this afternoon." is true and I have the ability to avoid the discomfort of being rained on if I know rain is predicted this afternoon, then I can avoid this discomfort. I submit that the full litany of conditional opportunities like this, generalized as I will discuss, wholly exhausts the benefit of truth.
The answer just given is formulated in terms of the truth of sentences in order to reflect the primacy of (token) _sentential _truth in the model. Since the model has a fully adequate concept of _proposition, however, the representation of these conditions and their coordination with actions can be simplified a bit and the wholes represented as '_abilities':
  • _Ability1: If it is _true that there is tea in the cupboard, then one can have the pleasure of tea.
  • _Ability2: If it is _true that one's car will be towed if not moved, then one can avoid the displeasure of having one's car towed.
  • _Ability3: If it is _true that swimming conditions at the lake are perfect, then one can have the pleasure of a swim.
  • _Ability4: If it is _true that uranium-235 is fissile, then one can enjoy the pleasures of having a nuclear power plant.
  • etc.
The suggested answer to our question is that the model be augmented with an effectively infinite set of _abilities like this, and that every _agent be associated with a more or less large subset of them (so, substituting 'ai' for 'one' as applicable).
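A minimal sketch of how an _ability might be represented, and of what an _agent can get from the _abilities she has (the names are mine, not the model's):
    from dataclasses import dataclass
    from typing import FrozenSet

    @dataclass(frozen=True)
    class Ability:
        condition: str   # a _proposition, e.g. "there is tea in the cupboard"
        benefit: str     # the real good made available, e.g. "the pleasure of tea"

    def available_benefits(beliefs: FrozenSet[str], abilities: FrozenSet[Ability]):
        """The benefits within an _agent's reach: those _abilities whose condition
        she _believes (and which, if the _belief is _true, she can actually collect)."""
        return {a.benefit for a in abilities if a.condition in beliefs}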
My proposal is that this, effectively without further elaboration, is a fully adequate response to the current requirement. The model, it was emphasized, provides for support relations between _sentences, insofar as the _value of _sentence2 for ai may be conditional on _sentence1's being in ai's _belief set. This means that coming to appreciate the _value of (coming to _believe) some new _proposition quite distinct from any directly implicated in an _ability may nevertheless facilitate the getting of a practical benefit. For example, suppose that if ai _believes the _propositions
  • pi: When Pat has returned from the supermarket, there is tea in the cupboard.
    and
  • pii: Pat has returned from the supermarket.
then ai reliably also comes to _believe
  • piii: There is tea in the cupboard.
And now suppose that on an occasion ai antecedently _believes pi and comes to _believe pii, perhaps on account of someone saying it (stating a _sentence which expresses it). Provided ai has _ability1, the pleasure of a cup of tea becomes available. All of this is just to illustrate that acquisition of any of some very large subset of not currently _believed _valuable _sentences may ultimately prove to afford practical benefits, and that _agents in the model -as changed- would thus have a general motivation to converse. Acquisition of _value is in a way like acquisition of money, in being at one remove from the goods it facilitates getting.
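The tea example, spelled out in the same illustrative terms (the crude one-pass closure below stands in for the _support relations among _sentences, and _ability1 for the tea-making _ability):
    def close_beliefs(beliefs, support_rules):
        """One crude pass of _belief revision: if an _agent _believes all the premises
        of a support rule, she reliably comes to _believe its conclusion too."""
        beliefs, changed = set(beliefs), True
        while changed:
            changed = False
            for premises, conclusion in support_rules:
                if premises <= beliefs and conclusion not in beliefs:
                    beliefs.add(conclusion)
                    changed = True
        return frozenset(beliefs)

    pi   = "when Pat has returned from the supermarket, there is tea in the cupboard"
    pii  = "Pat has returned from the supermarket"
    piii = "there is tea in the cupboard"
    rules = [({pi, pii}, piii)]

    # ai antecedently _believes pi; someone states pii; the closure yields piii,
    # and _ability1 (tea in cupboard -> pleasure of tea) can now be exercised.
    assert piii in close_beliefs({pi, pii}, rules)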
It's worth re-emphasizing, too, that this proposal keeps _semantics autonomous from the specifics of _agents' _abilities. Although getting _pleasure is tied to the having of _abilities conditional on _propositions being _true, the details of the _semantics of _language remain divorced from the specifics of the _abilities, as the model details.
A lot of the burden of the model has now been loaded onto _abilities. Is there not now a task no less big than our original problem of explaining semantics, to explain these? Don't we need to unpack a bit what's involved in being able to build and bring online a nuclear fission reactor, on learning that uranium-235 is fissile (like, being able to get some in high enough concentration, for starters)? Isn't this obviously just more philosophical whack-a-mole?
My short answer is -for our purposes- 'No'.
There is in philosophy a wealth of discussion on what exactly it is to know how to do this or that, and how this relates to such things as knowing that such-and-such is the case. Much ink is spilled on the subject of the nature of the knowledge needed to be able, say, to catch a baseball (land a triple salchow / sing "Queen of the Night" from The Magic Flute).
There is, on the face of things, no deep mystery about such abilities. My position all along has been that none arises if we look more closely, provided we've understood things as the model proposes. Considering a human being as an extremely complex machine, there is, to be sure, a very complicated scientific problem to understanding how it is that our brains and bodies execute these things. Researchers at universities and modern robotics companies have of course made astounding progress toward solving this problem in the last decade. This problem, though, is clearly not what exercises philosophers.
Quite distinct from science is what I am calling, for want of a better term, common sense. Coaches of children's baseball watching tryouts have no problem differentiating which kids know how to catch a ball and which do not. There is in daily life rarely any problem about what it is to have or lack some skill, though we often make such judgments (there are of course edge-cases). Where problems do arise, it may be, say, due to unclarity about the standards to which the skill should be executed. Do I know how to make an omelette? For myself on a Sunday morning, sure. But ask me (improbably) to help out in the kitchen of a high-end restaurant, and I'd be wise to say No. Here again, I think this has nothing to do with philosophers' concerns.
The philosopher's problem arises due to a felt need either
  • somehow to reconcile or bridge common sense and science, or
  • for a third explanatory idiom.
Let's take these in turn.
Common sense explanations work. When we attribute an ability to someone, expectations arise, and these expectations are typically satisfied. The concepts of having an ability or skill are as pervasive and enduring as they are because our understandings of one-another in terms of them are fruitful.
This calls for higher-order explanation. Why does our thinking in terms of abilities work? One motivation for supposing that common-sense and science must be bridged is the combination of the beliefs,
  1. that the success of common-sense must be due to the existence of a set of intra-cranial physical states or properties whose interrelations mirror the interrelations between what common sense attributes to us, and
  2. that this isomorphism would imply that the terms of common-sense in fact refer to the mooted physical states or properties - that common-sense is in fact a kind of proto-science.
I take both of these beliefs to be mistaken. Modern large-language model AI has, I think, effectively laid to rest the first thought, though I realize this remains a contentious issue in some quarters. I won't take up the fight here. But even were it true, I share the view of many that the latter belief would still be mistaken. The question comes down to the relation between our conceptions of our selves on the one hand as intentional agents and on the other as physical systems. It's to some extent understandable that traditional philosophy is exercised by such problems, as they're made pressing by the individualistic conception of rationality and our selves which it recommends. The social conception recommended by the model has no such requirement. It takes as independent and conceptually incommensurable the terms of our pre-scientific interpersonal engagement as agents, and the investigations science makes of our insides.
I have admitted that the effectiveness of common sense needs explaining. If not as proposed, then how? It's not unreasonable to think that better understanding of the details of the workings of neural nets will reveal the existence of patterns of patterns which permit deriving, in some approximation, the properties and inferences of common sense. My conviction, though, is that, as in other disciplines like genetics, the direction of insight will be almost entirely from the level of novel science to the level of familiar experience, rather than vice-versa. Trying to get to a scientific understanding by way of trying to sharpen up the explanations of common sense is a mistake.
The other suggestion is that there exists a wholly independent explanatory idiom, separate from science and common-sense. This level of explanations posits the existence of faculties and perceptions and conscious awareness. It is distinct from natural science in embracing a subjective self which is the seat of these things. Natural science has no use for such properties (how do you measure a self?). But it is distinct also from common sense, in that it imagines there to be facts hidden from view (and as yet not grasped or understood) which substantively explain our common sense judgments.
There is a lot to say about this explanatory idiom. Prominent among its posits is a certain conception of dream experiences which I think a lot of people find very intuitive, and would be loath to give up (as my view requires). The rejection of the idiom is what unites Austin, Ryle and Wittgenstein, as I understand them. I take it to be made imperative (failing a reductionist view) by the individualistic conception of rationality forced on us by the understanding of language and our selves which the model looks to unseat. It's intuitively appealing, but also deeply problematic and wholly dispensable. The model recommends doubting our intuitions.