This document details the model which motivates the thinking here. It comes in two versions, bottom-up and top-down, which differ only in the order in which they present the model's definitions. If you want to cut to the chase and then work back to the specifics, the top-down version is the one to consult:  bottom-up  top-down

Effectively everything said here about truth, meaning, consciousness and other matters is a consequence of what I am inventively calling 'the model'. The model imagines a minimal world of agents engaged in a simple activity superficially like speaking a language.

It's augmented incrementally until familiar properties of language emerge. Other people have of course done this - what I think distinguishes the approach here is the elements it does and does not include, and its scaled-back explanatory ambitions.

As the model is developed, its elements acquire new properties. To keep things clear, these properties are named following a convention - their names all have a leading '_' (in some places I have used a leading 'p'). The names make plain the real-world properties they're supposed to model. The idea behind a thought experiment such as this is to separate, in principle, the activities

  1. of imagining a world of people engaged in plausible activities and stipulating definitions to describe them (this is hopefully uncontroversial in itself), and
  2. of mapping those definitions onto familiar concepts (this is where the controversy comes in, potentially).
Ultimately the goal is to get to a point where we are inclined to say, e.g., 'truth is _truth', i.e., that a particular thing in the real world is true just in case the relations in which it stands to other parts of the world mirror the relations in which its corresponding model element stands to corresponding other parts of the model, when the model element is _true.

The model in the first instance imagines a large set of people, which it terms '_agents', milling about in reasonable proximity. They are prone to uttering strings of sounds, '_sentences', which, as it happens, resemble sentences of (say) English - but we are to imagine these strings having no meaning (initially). The model stipulates that _agents reflexively assign them -utterances of _sentences- each a '_value' between 0 and 1. This is sort-of a gut-reaction, yes-no evaluation, with the valuation being intrinsic to the 'token' _sentences. It is formalized in the model by allowing that every _agent has a kind-of table or function which associates a number with every possible token _sentence. Keeping to our clever naming practice, this is called the '_value function'.

The crux of the project is here. Whereas in most conventional accounts of language -I think it's fair to say- the goal is to explain sentence evaluations in terms of other things (meanings-grasped or what have you), in the model these are fundamental. They are the given.

The reason things mostly are the usual way and not as in the model is hardly surprising. There are effectively infinitely many token sentences, yet somehow our valuations are (mostly) predictable. The merest sense of intellectual responsibility would seem to compel us to try to produce a theory which generates predictions about what we understand and agree to, in terms of what is said - to be able to explain the reliability of our expectations about sentence valuations based on their words' meanings. But this, I think (as others have thought), actually misapprehends what needs explaining, and the terms of any possible explanation. Considered as a natural phenomenon -as measurable dispositions towards sentences considered as sound-sequences- there is an important and viable, finitary, scientific project here. Neural net modelling, particularly in the last ten years, has made enormous strides in this respect. What is key for present purposes is that such scientific theorizing has no need of meanings or truth - of any semantic or intentional concepts. It is a project of natural science proper. Theorizing of this type carries the whole explanatory burden. We are absolved of the obligation to formulate a theory in semantic terms. Which is a good thing, because any such theory is bound to fail.

Anyway. Grasping that the semanticist's assumed explanatory burden is specious, is a hurdle which must be leapt. Once it is, a much clearer view becomes available.

The way it's imagined, the model also needs to distinguish between _sentences which an _agent has heard or uttered and those s/he has not (an _agent's _value function maps all _sentences, heard or not). With just these elements in place, interesting questions arise as to how best to model real language. One consequential realization is that the _value function should be designed so that whether an _agent _values some given _sentence depends on what other _sentences s/he has already heard and _values (intuitively, what you're inclined to believe depends in part on what you already believe). This makes things tricky, but realistic.
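By way of illustration only (the names, the 0.5 threshold, and the word-overlap heuristic below are my own inventions, not stipulations of the model), an _agent can be sketched as a store of heard and _valued token _sentences together with a _value function whose output depends on both the presented _sentence and the _agent's existing _beliefs:

    from dataclasses import dataclass, field

    # A token _sentence is modelled here simply as a string (a sound-sequence).
    Sentence = str

    def toy_value(s: Sentence, beliefs: frozenset) -> float:
        """Stand-in for a '_value function'. The model only stipulates that each
        _agent has some such function; this word-overlap heuristic is invented
        purely to exhibit the dependence on prior _beliefs described above."""
        words = set(s.lower().split())
        overlap = sum(len(words & set(b.lower().split())) for b in beliefs)
        return min(1.0, 0.2 + 0.1 * overlap)   # a gut-reaction number in [0, 1]

    @dataclass
    class Agent:
        heard: set = field(default_factory=set)    # token _sentences heard or uttered
        beliefs: set = field(default_factory=set)  # heard _sentences the _agent _values

        def value(self, s: Sentence) -> float:
            """Defined for every token _sentence, heard or not."""
            return toy_value(s, frozenset(self.beliefs))

        def hear(self, s: Sentence) -> None:
            self.heard.add(s)
            if self.value(s) >= 0.5:               # an arbitrary '_valuing' threshold
                self.beliefs.add(s)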

_Sentence _valuations are meant to be mostly obvious to _agents, but not of any great intrinsic interest - like, say, basic colour recognition. To give _agents a motive to speak, a further property, _pleasure, has to be added to the model. _Pleasure is meant to be some nice feeling distinct from any familiar feeling, which is experienced when certain _valuable _sentences are encountered. It's intended to correlate with the real-world benefits which truth helps us to get, including the avoidance of familiar pains. It serves in the model as the only motive for exchanging _sentences – for ‘_conversing’. It is a contrivance of the model which ultimately has to be cashed out when mapping the model onto reality.

With a bit of finessing, it emerges in the model that _agents would naturally want to cooperate to amass as large a collection of _valuable _sentences as possible (a bit like how we naturally cooperate collectively to improve our individual financial circumstances, even as we compete). This gives rise to a property which on inspection looks a whole lot like truth in normal talk - the property of belonging in that collection.

With _truth in place it becomes possible to introduce _words, and to define a concept solely in the model's terms which arguably does all the work which we should expect of a concept of word-meaning. And with _word-_meaning in place, we can get straightforwardly to a concept of _sentence-_meaning or proposition.

(Note: this explanation follows the leading-'_' convention for the model's terms, explained in 'The Model' -> 'The Idea'.)

Possibly what distinguishes the model from other treatments is that it represents a speaker's goal in talking to be, not the acquisition of information, but rather simply the maximization for her or himself of some simple good (_pleasure, in the model). The details of the model which make this interesting (and, I think, realistic) are that

  1. It's somewhat hard to come up with novel _valuable _propositions. They're valuable in part because they're scarce.
  2. It costs effectively nothing to repeat a _sentence to someone else -to share its _pleasure, if you like.
  3. The _values _agents assign to new token _sentences depend in part on tokens they've already heard and _value (what you're inclined to _believe depends on what you already _believe).
  4. Where _agents have mostly the same prior _beliefs, their _valuations of new _sentences mostly agree. Where their prior beliefs differ, so may valuations.
  5. Lastly, the _values _agents assign to _sentences are in part - but only in small part - up to them. They have a limited ability to choose whether to _value new token _sentences. It costs effort, however, to change one's _beliefs. (People have to be at least a bit responsible for what they believe, right?)

These factors would conspire to create an interesting dynamic, in some ways like what governs an economy of goods. On the one hand, _agents would have a strong incentive to share their _sentences (to talk to each other). That is, the inhabitants of the model would naturally evolve a sort of contract whereby they each voluntarily speak their _sentences, provided others do likewise, since their doing so would be to everyone's benefit. They would be motivated to cooperate. Acquiring the _pleasures of _language would not be a zero-sum game (far from it).

But they would also have an incentive to compete. The idea is that

  1. _agents individually have large stocks of _valued _sentences (their _beliefs), which would significantly affect the _values of newly encountered _sentences.
  2. To the extent _agent B's _beliefs overlap with _agent A's and they share their _sentences, B effectively works for A in getting new _pleasurable _sentences. A will likely _value what B does and so likely get pleasure from B's new discoveries. And the same point goes for the _speech community in general.
  3. When A's _valuation of a _sentence differs from B's, it is thus in her interest to win the community, and likewise B, over to her _valuation. The same would be the case for B. They would, in effect, each be striving to get the community -and one another- to work for themselves individually, rather than having to forfeit that _sentence's _value as a cost of staying in general harmony with the community.

What would competition look like? Suppose we inhabit the model, and you utter a _sentence in my presence. Other things being equal, the model implies you _value this _sentence. Suppose I do not _value it -it is not added to my set of _beliefs. For the reasons just given, I now have an incentive to change your _valuation. How do I do this? Well, if I _disvalue it, this may well be in large part in virtue of other specific '_disconfirming' _sentences in my _belief set. If any of these is not in your _belief set, then maybe by uttering it I will get you to _value it, and so tip your _valuation of the first mooted _sentence to negative. It could indeed be that it is already in your _belief set, but that your _valuation heuristic failed to take account of it - in which case I would merely be '_reminding' you. However, if, as it happens, you _disvalue my suggested _disconfirming _sentence, then clearly the process can recur.
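The recursive shape of this exchange can be sketched very loosely, building on the Agent sketch from earlier (the helper disconfirmers_of and the 0.5 threshold are hypothetical placeholders of mine; the model says nothing about how '_disconfirmation' is computed):

    def disconfirmers_of(s, beliefs):
        """Hypothetical helper: which of my _believed _sentences count against s.
        Returning an empty collection keeps the sketch runnable."""
        return []

    def persuade(me, you, s, depth=0, max_depth=5):
        """You have uttered, and _value, the token _sentence s; I _disvalue it.
        I utter '_disconfirming' _sentences from my _beliefs, hoping to tip your
        _valuation of s to negative; where you _disvalue one of my disconfirmers,
        the same process recurs on it."""
        if depth > max_depth:
            return
        for d in disconfirmers_of(s, me.beliefs):
            if d not in you.heard:
                you.hear(d)                      # a new consideration for you
            # (if d was already among your _beliefs, uttering it is mere '_reminding')
            if you.value(d) < 0.5:               # you _disvalue my disconfirmer...
                persuade(me, you, d, depth + 1)  # ...so the exchange recurs on d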

It's not wildly fanciful to suppose that a community of people motivated and constrained as in the model would come to attach some importance to the type of exchange just described. When all factors are taken into consideration, including the bias toward _valuing newly encountered _sentences which are otherwise agnostic, and the effect of pressure to conform, one can see that standards in the conduct of such exchanges would naturally emerge. Being a competitive activity ultimately aimed at a shared good, the manner of its conduct, and of its participants' willingness to revise (or not) their _beliefs, would be something worth noticing. The obvious name for the set of norms governing such exchanges, and for _agents' practices in revising their _belief sets, would be '_rationality'.

It is a counterpart of the norm ascribed to the _sentences themselves, '_truth'.

(Note: this explanation follows the leading-'_' convention for the model's terms, explained in 'The Model' -> 'The Idea'.)
The model imagines that _sentences are of interest to the people in it, not for any information they may convey, but rather for a certain contrived good ('_pleasure') associated with some of them (_sentences which express novel '_propositions'). _Agents' sole goal in the model is to maximize their personal quotient of this _pleasure.

Because the good can be shared effectively without cost (just by speaking a _sentence to someone), and because the model's _agents are mostly in agreement, it's easy to see their best strategy individually would be to agree mutually to assist one another. They would naturally form an implicit contract whereby each speaks her valuable _sentences to others on the understanding that they will do likewise.

Crucially, the model stipulates further that _agents' _valuations of newly encountered _sentences depend in part on what they have encountered and _valued in the past, and also that _agents have some latitude to re-_value _sentences - just a bit. The point of this wiggle room is that if _sentence _valuations are interdependent as stipulated, the possibility will arise of a newly encountered _sentence being in conflict with a big prior collection. The project of _value maximization is best facilitated by allowing a bit of tolerance with respect to individual _sentences, so as not to oblige abandonment of what may after all prove to be a sound collection. _Value is in a way cumulative for _agents, and they do best in the project of getting it if they're allowed a modicum of discretion. If _valued _sentences are understood as beliefs, these stipulations hopefully won't feel too exotic.

_Valuations being thus interdependent leads to a first interesting property of _sentences. We can suppose there to be some one set of them which would maximize overall _value for any given _agent, were she to _value all and only its elements. If, as we have said, her goal in trading _sentences is to get as much _value as she can, then she is ultimately looking to _value all and only the elements of this set. At any given time, however, the aforementioned wiggle room opens the door to her _valuing _sentences which in fact are not in it. For any given _sentence, _valued or not, then, there will be the question as to whether it is in the set. Clearly the property of being in that set is of interest, and deserves a name. We will say a _sentence is _true for _agent a just in case it's in _agent a's set, and _false for a if not.

'_truth for a' is of limited interest. For one thing, an _agent has no way of knowing what is in that set, apart from what she _values at any given moment. So the distinction between what she actually _values and what she ought to, to maximize her total _value, would not hold any interest for her. For another, two additional stipulations of the model, that novel _valuable _sentences are hard to come up with but very easy to share, and that people mostly agree in their _valuations, imply that an _agent will be getting most of her _valuable _sentences from exchanges with others. This suggests that the parallel question about collective _value maximization may hold more interest.

Suppose _agent a _values _sentence s and utters it in the presence of b who, as it happens, _disvalues it. As things stand, we have four possibilities.

First, it may be that s is genuinely in a's maximizing set but not in b's - their maximizing sets simply disagree, and s is _true for a and _false for b.

More interestingly, it may be that b is mistaken about s - that in fact his maximal set agrees with a's, but he doesn't realize it. So s is _true for both a and b - a is right and b is mistaken.

Or, vice versa - it may be that s is _false for both - a is mistaken and b is right.

(The last, not so interesting case of course is that a and b's sets diverge on s, but they're both mistaken (s is _false for a and _true for b)).

Given these possibilities, what ought _agents to do when confronted with such differences? In particular, should they treat what is _true for one _agent to be _true for all? If _truth is just _truth-for-a - if it is merely '_idiolectic' - then for every disagreement between two _agents about a token _sentence either there is a genuine divergence between them as to what is _true in their respective _idiolects, or their _idiolects properly agree and one or other of them is simply mistaken about what is _true for him. It becomes apparent that the best strategy for _agents individually to maximize _value is to treat all differences as signalling a mistake for one or other _agent. That is, it emerges that it is quantifiably advantageous to treat _truth as objective - provided enough _agents opt in to this strategy and assuming, as we have, both that _agents mostly coincide in their appraisals of _value and that appraisals of _value are at least in part malleable.

To see this, suppose that an _agent disagrees with the majority about some widely _valued _sentence. She then potentially forfeits the _value (and attendant possible _pleasure) of all _sentences whose _support depends on it. These may be considerable in number. If she disagrees with the majority about even a small fraction of all _sentences, she loses not just the _pleasure of the _sentences hierarchically supported by them but also the confidence that _theory _sentences offered by others will rest on foundations which she would _value, and hence, in many cases, the possibility of _valuing _sentences on the strength of others' ‘_testimony’. Admittedly, this _agent will get some compensation in the form of hierarchically dependent _sentences she contrives herself on the basis of her _idiolectic _sentences. As even the most prolific of solitary _sentence producers will fall far short of what her community collectively can contrive, however, and since she will be deprived of the initial error-check afforded by communal acceptance, she will on balance be considerably less well-off. The extra _value to be gained by an individual by participating in a community of _agents in which effectively everyone commits to and contributes to the construction of a single shared edifice of _truths will easily offset the cost of sacrificing the _pleasure of any genuinely _idiolectic _sentential affinities.

And so we have in the model a concept of _truth: A _sentence is _true, roughly, just in case it is an element of the set of _sentences which would maximize the combined _value of all _speakers, were they all to _evaluate all of its elements. This being impossible, it stands merely as an ideal. I will note for now just two points.
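Stated in the toy terms used earlier (treating each _agent's _value function as a function value_fn(s, beliefs), and simply summing token _values - a simplification of my own), the two notions of _truth introduced above come to this:

    from itertools import combinations

    def total_value(value_fn, sentences):
        """An _agent's combined _value, were she to _value all and only the given
        _sentences (each token judged against the rest, values simply summed)."""
        ss = set(sentences)
        return sum(value_fn(s, frozenset(ss - {s})) for s in ss)

    def maximizing_set(value_fns, universe):
        """Brute-force search over a (small!) finite universe of candidate
        _sentences for the subset maximizing combined _value. With one _agent's
        _value function this yields '_truth for a'; with everyone's, the shared
        ideal, '_truth'."""
        best, best_score = frozenset(), float("-inf")
        for r in range(len(universe) + 1):
            for subset in combinations(sorted(universe), r):
                score = sum(total_value(f, subset) for f in value_fns)
                if score > best_score:
                    best, best_score = frozenset(subset), score
        return best

    def is_true_for_a(s, value_fn, universe):
        return s in maximizing_set([value_fn], universe)    # '_true for a'

    def is_true(s, value_fns, universe):
        return s in maximizing_set(value_fns, universe)     # '_true' (the communal ideal)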

The first point is just a reminder that the definitions so far are mere stipulations about hypothetical people in hypothetical circumstances. It is fair to ask whether such people are possible, and whether, if they existed, they would behave as described. These are not philosophically loaded questions -should not be, anyway. What is of course loaded is the fitness of the stipulated concepts to model their real counterparts. My claim, of course, is not just that such people are possible but that we in fact are they, and our usual concepts, the model's.

The model recognizes a nominally truth-like idiolectic property, '_truth-for-an-_agent'. A committed defender of individualistic rationality might plausibly cleave to this in the hope of resisting the depredations of the current approach. This is a mistake on two fronts, I think. First, it makes what is true completely unknowable, and hence apparently without practical significance.

Second, though, I think it fails to see what the model conveys: that the concept of _truth (for all) has a character substantially different from that of _truth-for-a -that it would play an importantly different role in the lives of _agents- and that this character correctly maps onto our familiar concept of negotiated, public truth.

(Note: this explanation follows the leading-'_' convention for the model's terms, explained in 'The Model' -> 'The Idea'.)
So far we have created a model of a much pared-down correlate of a natural language like English, and seen how its minimal elements would lead to both cooperative and competitive interactions among its speakers as they pursue their individual ends. We have seen, too, that this dynamic would naturally be described in terms of an ideal which on consideration looks a lot like truth. We are now in a position to appreciate how concepts emerge in the model which correlate to our familiar concepts of meaning and proposition.
The concept of _truth in the model was arrived at, notably, without assuming _sentences to be composed of _words. _Sentences to date have been just sound patterns of a kind. We can now add _words to the model, understanding them to be just smaller, inert sound sequences. We'll take there to be a large, finite set of them, and stipulate that negatively or positively _valuable _sentences are always decomposable into some sequence of them. The path to their _meanings is a bit tortuous, but I think, at each point, locally straightforward. I will outline the main ideas here, but acknowledge there is more to say.
Let's suppose there to have been uttered some token _sentence s on an occasion, and investigate the matter of the _meanings of its constituent parts.
The first point is that we will actually be concerned with _phrases, not specifically _words, a _phrase being understood to be any word-sequence which is not itself a _sentence.
The second point is that our primary concept is not of _meaning as such, but rather of sameness of _meaning - specifically, sameness of _meaning of two token _phrases. The _meaning of a token _phrase p will then be identified with the set of all token _phrases having the same _meaning as p (allowing ourselves some latitude, as sets aren't strictly the right sorts of things to be meanings).
The core idea is that token _phrases q and r have the same _meaning just in case, were you to substitute r for q in all the (atomic) _sentences containing q which are _true at the moment of its utterance, you would get all and only the _true _sentences containing r at that moment.
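A minimal sketch of this substitution test, under the simplifying assumptions (mine, not the model's) that _sentences are plain strings and that some set true_now of atomic _sentences _true at the moment is simply given:

    def same_meaning(q: str, r: str, true_now: set) -> bool:
        """Token _phrases q and r have the same _meaning at this moment just in
        case substituting r for q in every _true atomic _sentence containing q
        yields all and only the _true _sentences containing r."""
        q_sents = {s for s in true_now if q in s}
        r_sents = {s for s in true_now if r in s}
        return {s.replace(q, r) for s in q_sents} == r_sents

    # In the toy moment sketched below:
    true_now = {"Bob's dog is a mutt", "Bob's dog is on the couch",
                "Fido is a mutt", "Fido is on the couch"}
    assert same_meaning("Bob's dog", "Fido", true_now)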
So: suppose our token _sentence s is
  • "Bob's dog is a mutt".
By the terms of our definition, what would it take to settle that "Fido" means (refers to) the same thing (animal) as "Bob's dog"?
Well, at the moment, there will be a bunch of other _true token "Bob's dog" _sentences:
  • "Bob's dog is on the couch"
  • "Bob's dog is sleeping"
  • "Bob's dog suffers from arthritis"
  • etc..
Our requirement is that the token _sentences,
  • "Fido is a mutt"
  • "Fido is on the couch"
  • "Fido is sleeping"
  • "Fido suffers from arthritis"
  • etc..
all be _true as well, and that there be no additional _true "Fido" _sentences.
So far, so good. But a number of questions immediately surface. I will consider here just two.
What about Ann's dog, also named "Fido", who is a pure-bred Dachshund? Do we not now also have a token _sentence,
  • "Fido is a Dachshund",
which is _true? Agreeing that
  • * "Bob's dog is a Dachshund"
is not _true, does this not preclude identifying the _meanings of "Bob's dog" and "Fido"? The question relates to ambiguity, and raises an obvious problem. In the familiar way of thinking about language, something -typically connected to speakers' intentions- serves to disambiguate word meanings, and hence to permit evaluating "Fido is a mutt", in the moment. Simply put, in the usual way of thinking, word-meaning determines sentence meaning, and for apparently good reason - not the other way around, as here.
Our thought is that, at the very moment of speech, for phrase ambiguity not to be an obstacle to talk, it must, barring special circumstances, be the case that prior speech would condition us to disvalue "Fido is a Dachshund". The key, for the model, is that there be an intuitive, pre-theoretical notion of sentence valuing by which we would (mostly) disvalue "Fido is a Dachshund" in our example moment. Differences in token _sentence valuations must underwrite token _word disambiguation, in the model, not vice versa (as in conventional approaches).
What about synonymy of tokens in different moments? Allowing the foregoing, one might notice that what has been provided for so far is only synonymy-in-a-moment - resolution of the question as to whether two token _phrases uttered in some one moment of speech have the same _meaning. However: any half-way adequate account of meaning has to have something to say about synonymy across different moments (to use my jargon). So, yesterday, I said "Fido is on the rug" (and maybe, "Fido is not on the couch"); today I say, "Fido is on the couch" (and "Fido is not on the rug"). Clearly it has to be possible that my uses of "Fido" in the two moments have the same _meaning - that I was talking about one and the same dog at the two times. So far, nothing has been said about how in the model this can be made sense of.
This challenge I think is not too difficult, but it does require introducing some technical apparatus. We start by noticing that the presently-_true "Fido" _sentences will include past and future-tensed _sentences, as will the sets of _sentences _true at other times. For example,
today:
  • "Fido is on the couch"
  • "Yesterday, Fido was on the rug."
yesterday:
  • "Tomorrow, Fido will be on the couch"
  • "Fido is on the rug."
Taking account of these, our sets of "Fido"-_sentences from yesterday and today will align (will map 1-1), albeit in some cases with verb-tense differences between corresponding elements. The required apparatus I mentioned is just a way of translating all _sentences into terms which are present-tensed but explicitly indexed to a particular time and place, removing the verb-tense differences. And this all has to be done without presupposing any intuitive concept of meaning, of course. If it can be done, then a moment of speech can pick out a set of 'Fido'-_sentences in a manner which does not tie it '_grammatically' to the time of utterance. This set may or may not then exactly coincide with the 'Fido'-_sentences at some other time or place.
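A toy rendering of this translation, handling only the tense markers in the example above (my own sketch; real tense and aspect are of course far messier):

    import datetime

    def index_sentence(sentence: str, utterance_date: datetime.date):
        """Return a present-tensed version of the _sentence, explicitly indexed
        to a date, stripping the simple tense markers used in the example."""
        if sentence.startswith("Yesterday, "):
            core = sentence[len("Yesterday, "):].replace(" was ", " is ")
            return core, utterance_date - datetime.timedelta(days=1)
        if sentence.startswith("Tomorrow, "):
            core = sentence[len("Tomorrow, "):].replace(" will be ", " is ")
            return core, utterance_date + datetime.timedelta(days=1)
        return sentence, utterance_date

    # index_sentence("Yesterday, Fido was on the rug.", datetime.date(2024, 6, 2))
    #   -> ("Fido is on the rug.", datetime.date(2024, 6, 1))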
What now of the meanings of whole sentences, so-called "propositions"? The approach here mirrors the approach taken for meaning. We start by taking the primary concept to be of sameness of _proposition, and allowing the _proposition expressed by a token _sentence s to be (roughly speaking) the set of token _sentences which express the same _proposition as s. So what is it for two token _sentences to express the same _proposition?
The guiding thought here is that two token _sentences express the same _proposition just in case their respective _phrases can be mapped one-to-one in order. For example, consider
  • s1: Bob’s dog is a mutt (spoken yesterday across town)
  • s2: Fido is a Heinz 57 (spoken today here)
These two token _sentences express the same _proposition just in case the contained tokens of
  • a) ‘Fido’ and ‘Bob’s dog’ have the same _meaning, and
  • b) ‘is a mutt’ and ‘is a Heinz 57’ have the same _meaning
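Under the same toy assumptions (and supposing, additionally and hypothetically, that each token _sentence comes already segmented into its ordered _phrases), the check is phrase-by-phrase:

    def same_proposition(phrases1, phrases2, true_sentences) -> bool:
        """Two token _sentences, given as their ordered _phrase sequences, express
        the same _proposition just in case their _phrases pair off one-to-one, in
        order, with the same _meaning. `true_sentences` stands in for the
        time-and-place-indexed _sentences _true in the relevant moments."""
        return (len(phrases1) == len(phrases2) and
                all(same_meaning(p, q, true_sentences)
                    for p, q in zip(phrases1, phrases2)))

    # e.g. same_proposition(["Bob's dog", "is a mutt"],
    #                       ["Fido", "is a Heinz 57"], true_sentences)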
This definition differs from familiar definitions in more than one way. First, it does not provide that, for example, "Sue is the sibling of Jasmeen" expresses the same proposition as "Jasmeen is the sibling of Sue", as perhaps intuitively is the case. The concept of _proposition can possibly be augmented to provide for this, but details remain to be worked out.
A second possible divergence is that the token _sentences which constitute a _proposition may differ in truth value. Intuitively, my utterance today of "Fido is on the rug" may express the same proposition as my utterance yesterday, though yesterday's was true and today's, false. The primary objects of _belief remain token _sentences, though there is clearly scope to expand our repertoire of concepts here, to accommodate cases ("token _proposition", etc).
One issue which looms large in philosophical discussions of propositions is the matter of propositional attitudes - things like beliefs and desires - which relate people to propositions, and the semantics of the sentences which express them, such as "Bob believes Fido is asleep". One virtue of the present approach is that it greatly simplifies this problem. By inverting the priority of truth and meaning, the whole need to invent different semantics for words in and out of so-called opaque contexts mostly falls by the wayside.
There is evidently more to be said, but I leave the concept here for now.
(Note: for reasons of simplicity, this explanation drops the leading-'_' convention for the model's terms, used elsewhere).
One particularly interesting consequence of the model is that it represents, without any substantial auxiliary assumptions, sentences we would use to express conscious experience or to report dreams, and permits us to characterise their semantics. Doing this, then descending semantically, helps us to get a grip on the subject matter in question.
The elements of the model were nothing more than a set of speakers or '_agents', a set of '_sentences', a set of functions mapping _sentence utterances to '_values' (one function for each _agent), and a '_pleasure' property associated with some sentences' being _valuable. The model's contribution is to make plain the minimal parameters needed to give rise to convincing concepts of truth and meaning in language. These are,
  1. sentence uttered
  2. time of utterance
  3. place of utterance
  4. place of hearer's focus of attention
  5. context of utterance, being the set of token sentences recently heard by the hearer
  6. sentence utterer
  7. full set of encountered token sentences valued by the hearer ('beliefs')
Philosophers traditionally have distinguished beliefs based on sensory experience (the candle in front of me is lit) and those arrived at through rational reflection (17 + 8 = 25). Acknowledging the vagueness of this distinction but otherwise ignoring, for now, the voluminous discussion of the subject in philosophy, the two classes of token sentences corresponding to these two types of belief can be specified in the model, like so:

A token sentence s is an observation sentence for a just in case her valuing of s is (relatively) independent of her belief set B. That is, the value function returns the same value for s regardless (almost) of the B parameter.

A token sentence s is a theory sentence for a just in case a's valuing of s is dependent on B.
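Schematically, and putting the seven parameters listed above into one signature (the body of value below is a toy of my own, present only so the two definitions can be run; the 0.05 tolerance is an arbitrary rendering of '(relatively) independent'):

    def value(sentence, time, place, focus, context, utterer, beliefs) -> float:
        """Schematic full signature of an agent's value function, taking the seven
        parameters listed above. The toy body rewards attention-relevance and
        overlap with believed sentences; the model does not specify any body."""
        base = 0.5 if str(focus) in sentence else 0.3
        support = 0.1 * sum(1 for b in beliefs
                            if set(b.split()) & set(sentence.split()))
        return min(1.0, base + support)

    def is_observation_sentence(s, situation, belief_samples, tol=0.05) -> bool:
        """s is an observation sentence just in case its value is (nearly) the
        same whatever belief set B is plugged in; `situation` bundles the other
        parameters (time, place, focus, context, utterer)."""
        vals = [value(s, *situation, beliefs=B) for B in belief_samples]
        return max(vals) - min(vals) <= tol

    def is_theory_sentence(s, situation, belief_samples, tol=0.05) -> bool:
        """s is a theory sentence just in case its value genuinely depends on B."""
        return not is_observation_sentence(s, situation, belief_samples, tol)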

In articulating these definitions, what becomes apparent is that there are more distinctions to be made.
The point I want to get to is that the model permits us to isolate a class of sentences relevant to the present subject. These are token sentences which
  1. are positively valued only when utterer = agent valuing
  2. are independent of context
  3. are independent of beliefs
We can stipulate in the model that this class is non-empty - that in fact all agents value some such sentences. Let's label sentences in this class, 'Q-sentences'.
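In the same schematic terms (sampling over contexts and belief sets is my own way of rendering 'independent of'; the agent, situations, and thresholds are illustrative):

    def is_q_sentence(s, agent, time, place, focus, contexts, belief_samples,
                      tol=0.05) -> bool:
        """Sketch of the three conditions above: s is a Q-sentence for `agent`
        just in case (1) it is positively valued only when utterer == agent,
        (2) its value is independent of the context of utterance, and (3) its
        value is independent of the belief set B."""
        # (1): when uttered by someone else, s is not positively valued
        someone_else = object()
        if value(s, time, place, focus, contexts[0], someone_else,
                 belief_samples[0]) >= 0.5:
            return False
        # (2) and (3): with agent as utterer, the value barely varies as the
        # context and the belief set are varied
        vals = [value(s, time, place, focus, c, agent, B)
                for c in contexts for B in belief_samples]
        return max(vals) - min(vals) <= tol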
Our thought is to get clear on the semantics of such sentences, and in so doing to illuminate their subject-matter.
So, first, are such sentences candidates for being true? The model's initially frustrating but ultimately illuminating answer is, in a way, yes, and in a way, no. Recall that the model tells us,
Truth: A token sentence is true just in case it's an element of a set which would maximize the combined aggregate value of all speakers, and which is such that the removal of any element would result in a set of lower combined aggregate value.
By this definition, Q-sentences are indeed true or false as the case may be. Adding valued such sentences to the set does increase total value, even if they are valued only by one person.
However: they are in a sense degenerate, and so not fully-fledged peers of true sentences of other classes. As the elaboration of the concepts relevant to truth in the model made plain, truth's interest lies in its role as a norm of conversation. The purpose of conversation is, for each of us, to maximize our individual quotient of value, and our effectiveness in achieving this increases vastly with the number of conversational participants. Conversation is regulated by a contract which binds participants to strive to maximize collective sentential value, other things being equal; truth is what does this. Adding a new token sentence to your belief set not only increases your net sentential value, but is expected also to expand your facility in accruing others.
Q-sentences, being valued solely by their utterer and independent of the belief parameter to the value function, 'B', are disengaged from this activity. My knowing you dreamt last night that there was a blue armadillo on the dresser increases my net sentential value only by the default amount I accord your sentences on account of trust - the sentence does precisely nothing else for me. Were you to value the denial of this sentence - were the denial true in the degenerate sense we are allowing such sentences to be - it would make no difference at all to anyone. Indeed, we could posit a more restrictive definition of truth which excluded such sentences altogether, one which would do all the work expected of a concept of truth, though at a cost of simplicity and generality.
Q-sentences are true, then, but only in this qualified, degenerate sense. What now about the meanings of their relevant words and phrases? Recall that the model manages solely with a concept of same meaning, and that this is effectively a matter of truth-preservation in context of utterance. Two token words have the same meaning just in case one can be swapped for the other in all (atomic) sentences true at the moment without changing the sentences' truth values.
With this in mind, consider two relevantly true token sentences,

s1, uttered at 1:00PM: The pain sensation in my toe is now throbbing.

s2, uttered at 2:00PM: The pain sensation in my toe is now dull and constant.

Is it possible that the two tokens of 'the pain sensation in my toe' mean -refer to- the same thing? I think this is an interesting question for any theory of sensation talk, but we can set aside any general concerns here about object identity as they apply to sensations. Let us allow that by the terms of the model, they do. The interesting point is that insofar as sameness of meaning is wholly a matter of preservation of truth, and the truth being preserved is degenerate in the way just discussed, the sense in which these tokens have the same meaning is similarly degenerate. The meanings of expressions used in Q-sentences inherit the degeneracy of their containing sentences' truth.
The significance of all this lies in the contrast with the sentences of science. The hallmark of science is objective verifiability. A scientific sentence is true or false, as the case may be, in precisely the way Q-sentences are not: for any agent, believing a scientific sentence, and so having it included in the B parameter to her value function, implies there will be other token sentences whose value will be different than had the sentence not been included. The propositions of natural science are characterised by the reliability of their connections to theory and experience and so, in the terms of the model, by their support connections to other sentences. All of this being the case, we should expect Q-sentences to be inescapably opaque to science.
Q-sentences, I submit, correctly model qualia reports such as interest theoreticians seeking an understanding of consciousness. Q-sentences have enough to them for us to take interest in them - to treat them as more than mere jabberwocky-nonsense - but not enough to make them worthy of public discussion; not enough for science to be able to get a grip on them. The problem of consciousness arises because we fail to grasp the differences between the two classes of sentence. We treat Q-sentences as though they are full-fledged peers of the sentences of science, and so expect them to be worthy participants in scientific theory. This is a mistake.
It may be objected that this discussion completely omits consideration of what may be thought to be the crucial matter, which is why the inhabitants of the model would value what I've labelled 'Q-sentences', and why we are apt to 'hold true' our real counterpart sentences. I hold true "I have a pain sensation in my toe" precisely because I feel it, because of the qualia! What about the model's agents? It's all very well to posit these Q-sentence place-holders in the model, what matters is what would explain their valuations. That is the subject matter, properly-speaking, of investigations into consciousness, and it's where this treatment completely whiffs. So it may be objected.
I hope it will be clear on consideration that the conception of the reference of 'a pain sensation in my toe' presupposed by this objection simply begs the question against what I am proposing. To try to buttress this, I will say here three things.
A first response, which requires no commitment to the model, is simply to shift the burden regarding the mooted explanatory lacuna. What mode of explanation, exactly, is in question? If it's meant to be a causal 'because', then the objector has the very hard work ahead of her of explaining (A) how purely subjective (objectively undetectable) qualia can be shoe-horned into a scientifically respectable framework, and -what is maybe even more challenging- (B) how the subjective 'I' which experiences them is meant to be made scientifically intelligible; to make good the claim that it (the 'I') is physically changed in being causally impacted by the qualia. What are the physical boundaries and constitution of the self? One allure of the approach recommended here is that it completely dismantles these apparently intractable questions. Alternatively, if it's meant to be a justifying 'because', the objector seems to be equally at sea. How exactly does the justification work? Wittgenstein's injunctions immediately bubble up here. And if the 'because' is neither causal nor justificatory, then what sort of 'because' is it, exactly?
The second point concerns the role of explanation here in general terms. It's all very well for me to throw the explanatory challenge back in the objector's face, as I just have, but it's not much progress if there's reasonable agreement that an explanation of some sort is called for. A main component of the model is the case that the whole explanatory idiom to which the objector's mooted explanation would belong is misbegotten. It emerges from a conception of rationality as individualistic, according to which truth and meaning can be made sense of only if language can be, in the relevant way, theoretically tethered to non-linguistic reality. The model's point of departure is that sentence valuations - valuations such as the objector insists need explanation - are what's given in experience, and that this experience is fundamentally social (it depends on experience of other people uttering the sentences in question). The thought that intellectual responsibility compels us to try to account in terms of some finite, scientific theory for this admittedly infinite domain of facts originates in a failure to grasp the nature of rationality and our relation to the world. The case that rationality and our relation to the world are as the model maintains is, in the main, simply that the picture of these things it recommends works and is complete. The model does allow that the relevant physical facts can be explained by a finite theory, just not a theory formulated in the terms of semantics (viz., the kind of scientific, non-semantic theory neural net researchers are providing). The aspirations of theoretical semantics are overweening.
The third, related point concerns the putative objects of experience, including the qualia which so vex us. The objector's protest is premised on a conception of the objects of experience as things which can be got hold of independently of language, and which can serve somehow in substantive explanations of our judgements. Some people will not be budged from this position - they will always prefer dismissing philosophical positions such as I am defending to what they will insist are the undeniable facts of experience. To those who would dig their heels in here, I would urge consideration of the profusion of books and papers under whose weight university library shelves groan, on the subject of how perceptual experience figures in judgement. What sort of a thing is a perceptual experience, even of something as pedestrian as a dog on a rug nearby in good lighting, that it can be a basis for judging, e.g., "The dog is on the rug." - how exactly does that work? I should repeat here my warning that this is quite a different problem than explaining how it is that light reflecting off the dog causes retinal and many other neural actuations, ultimately eventuating in the production of vocal cord movements. It is (many think) about how a judgement like this can be justified. It is only the beginning of your problems, if you are going to insist that your privately experienced feeling is the basis for your claim 'I have a tickle in my left big toe'.

(Note: this explanation follows the leading-'_' convention for the model's terms, explained in 'The Model' -> 'The Idea'.)

The starting point for the model, and by extension this whole project, is the recognition that our having a feeling of assent or dissent on hearing a token sentence is properly understood as a primitive fact, not amenable to systematic explanation, and the attendant realization that if we allow that speakers are looking to maximize this feeling of assent, then -adding in a few plausible assumptions- sentences acquire a property with exactly the contours of truth.

What was initially missing from the model was a motivation for speakers to maximize the feeling of assent or agreement, as the model requires them to do. This feeling of assent we are now leaning on is comparable to a sense of harmony or dissonance, but only in a weak sense. I did not intend that this whole account of language should rest on the idea that it's sufficiently intrinsically appealing that people would seek it out (through conversation) for its own sake. To fill the gap, the model introduced the (admittedly contrived) concept of '_pleasure'. _Pleasure was stipulated to be something we get from certain _valuable token _sentences only - not all - and in different degrees (it was later amended to be associated with first encounters with certain _valuable _propositions). It was meant to be a model-specific feeling, distinct from any we actually have, and sufficiently intrinsically worthy to motivate maximizing _sentential _value (the thought, again, being that a speaker's only known way to maximize _pleasure would be simply to maximize the number of _valuable _propositions encountered). One benefit of basing the model on _pleasure, rather than on some range of real pleasures, as one might seek to do, is that it thwarts any temptation to try to ground the diversity of word-meanings in the diversity of our real pleasures. It's a key point that only a single pleasure is needed to get a full treatment of truth and meaning.

_Pleasure was only ever meant to be a place-holder. For the model to provide a satisfactory representation of the relations between the concepts of truth, meaning and so forth, it's necessary to cash it out - to replace it with reality-based concepts. The obviously intended idea is to swap in for it the many real mundane pleasures we seek and pains we seek to avoid. How can this be made to work?

The evident first step is to remove _pleasure from the model altogether. The second step is to add into the model, counterparts of all the many mundane pleasures we experience and pains we seek to avoid in real life. For example, there would now be in the model a counterpart of the pleasure of eating a nice crisp apple, or of sitting in a comfortable chair after a long day, or alternately the pain of being burned by a hot pan. We can suppose these counterparts all to be gathered into a tidy set within the model.

The third step is where things start to get a bit complicated. How exactly should _valuable _propositions be related to these newly added mundane pleasures and pains, such that the latter motivate the maximization of the accumulation of the former? An instructive non-starter would be to posit a direct correlation between the two and add nothing more: whenever an element of some given _proposition (a _proposition being a set of token _sentences, remember) is _valuable, I experience a token of some given one of the mundane pleasures or pains in the set. For example, there would be a particular pleasure directly associated with "I am seated in a comfortable chair after a long day".

This is how things stand with _pleasure in the model. However, since real pleasures are typically associated with familiar (that is, not new) token _propositions, what we really need is some way for _valuable _sentences to facilitate an _agent's bringing about the getting of the pleasure or the avoiding of the pain. What we need is that the _agent should associate with the candidate _valuable _sentences knowledge of how to act to bring about the pleasure.

I see two ways to accommodate this requirement in the model. The first is to associate a select set of _propositions directly with pleasures/pains, as discussed just above, and then to introduce into the model, for each _agent, a kind-of knows-how-to-act-to-make-_valuable mapping or function, between certain other _propositions on the one hand, and members of the select set, on the other hand.

This approach would keep the 'knows how to act' relation within language, and may look potentially useful in explaining actions whose outcome is indifferent from a pleasure perspective. _Agents would be motivated to exchange _sentences with the thought that encountering a _valuable one in one of those certain mapped sets of _propositions would be a guide to getting some mundane pleasure.

The second way to accommodate the requirement is to add to the model, for each _agent and those certain other _propositions just mentioned, a kind-of irreducible 'knows how to act to get pleasure-n when s is _valuable' property. The model, then, would have to be augmented with infinitely many of these inscrutable properties. A _valuable _proposition's having one would allow an _agent to increase her or his mundane pleasure; this would provide an incentive to maximize _valuable _sentences. This proposal is simpler, but may look more to be labelling the problem than solving it, and so less satisfactory for this reason.

My thought is to accommodate the requirement with the second proposal just mentioned. I will close out this discussion with an assertion about this, and an explanation.

The second proposal is the right one precisely because there is no real problem to solve. There is no need to add the slight extra complexity of the first solution, no need to try to make the problem more theoretically tractable. If the thought were to use something like such a relation as part of an explanation of action, then we would be embarking down a long, dead-end road, avoidance of which made possible the insights of the model itself.

(That was the assertion I just said I would make).

"Wait, what? How can this be right? The principle, here, guiding what does and does not need explanation seems to be, 'Reject the need to explain something if it looks hard.' And that's not very admirable."

The explanation I just promised I would give is of the principle which differentiates what does and does not need explanation. My position (hardly original in itself) is that there are plenty of things which we wonder about which are well-worth wondering about, but there are some which we wonder about which as it happens are not worth wondering about.

To begin with, effectively anything amenable to a properly scientific explanation -roughly, an explanation which generates predictions which can be tested- is worth wondering about. This includes the obvious candidates like the properties of matter and the bio-chemical interoperations of the parts of organisms, but it also includes how our brains respond to patterns of input -including others' speech- ultimately to produce patterns of output behaviour -including talk.

Secondly and a bit less obviously, there are questions about what the many particular things which we lump together under certain given concepts, have in common. We say some things are just, others unjust; some musical, others not; some grammatical, others ungrammatical; there are endless examples. In each case there is wide agreement on many cases but not on all. Although the investigations succumb to the law of diminishing returns, there is often value in thinking a bit carefully about why we say what we do - what it is about the cases we are capturing; what exactly it is which underpins our agreement, if anything, where there is agreement. This sort of investigation, which may include a normative aspect, is sometimes the province of philosophy, and it's how I conceive the effort to characterize truth and meaning, here.

This sort of explanation, to be clear, has its limits. Philosophers, infamously, routinely butt-up against the difficulty of specifying necessary and sufficient conditions for a particular thing to count as an exemplar of some type of thing. We can say loosely what it is for something to be a chair, but it just doesn't pay dividends to try to find non-trivial terms which decide the question once and for all.

Thirdly, though, (and no doubt not finally), there are questions about the underlying details of explanations we give in terms of our mentalistic vocabulary: He did X because he believed that P and desired that Q. She said Y because she understood M to mean N. Our concern in such cases is not to understand how it is that we all can agree that he believed P or she understood M to mean N: it's clear enough what counts as a case of one or the other when you encounter it in life. There's no mystery about this, any more than there's a mystery about how we agree one thing to be a chair and another thing not on a particular occasion, most of the time. The question, rather, is about what has to be the case with her or him, in her mind, so to speak, for her to count as believing or understanding these things -

What is it to understand M as meaning N?

What is it to believe that P?
In each of these cases, I am intending the questions to be asking something different than what the scientist might ask in inquiring about brain function. They're asking something we would naturally ask in the first person case - What is up with me, when I mean something by a word, or believe something? - and then extend to others.

It's the asking of these questions - questions which characteristically involve how things are with a person (as opposed to a mere biological specimen of Homo sapiens) - which I am inveighing against. There simply is nothing to learn. We already know the answers, by virtue of having common sense. There just is no gamut of intelligible sub-personal elements or modules or faculties which in any systematic or useful way explain why we say and do what we do. Everything in this domain is already open to view. I am, of course, far from the first person to make this point. What I consider I'm offering which is new is a world-view which naturally quashes these explanatory ambitions from the get-go.