This document details the model which motivates the thinking here. It comes in two versions, bottom-up and top-down, which differ only in the order in which they present the model's definitions. If you want to cut to the chase and then work back to the specifics, the top-down version is the one to consult: bottom-up | top-down
Effectively everything said here about truth, meaning, consciousness and other matters is a consequence of what I am inventively calling 'the model'. The model imagines a minimal world of agents engaged in a simple activity superficially like speaking a language.
As the model is developed, its elements acquire new properties. To keep things clear, these properties are named following a convention - their names all have a leading '_' (in some places I have used a leading 'p'). The names make plain the real-world properties they're supposed to model. The idea behind a thought experiment such as this is to separate, in principle, the activities
- of imagining a world of people engaged in plausible activities and stipulating definitions to describe them (this is hopefully uncontroversial in itself), and
- of mapping those definitions onto familiar concepts (this is where the controversy comes in, potentially).
The model in the first instance imagines a large set of people, which it terms '_agents', milling about in reasonable proximity. They are prone to uttering strings of sounds, '_sentences', which, as it happens, resemble sentences of (say) English - but we are to imagine these strings having no meaning (initially). The model stipulates that _agents reflexively assign each of them - each utterance of a _sentence - a '_value' between 0 and 1. This is a sort of gut-reaction, yes-no evaluation, the _valuation attaching to the 'token' _sentences themselves. It is formalized in the model by allowing that every _agent has a kind of table or function which associates a number with every possible token _sentence. Keeping to our clever naming practice, this is called the '_value function'.
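To fix ideas, here is the barest sketch of that stipulation in code of my own devising - nothing in the model depends on these particulars, and the hashing trick is merely a stand-in for the unspecified gut reaction:

```python
class Agent:
    """A toy _agent: assigns every token _sentence it is asked about a _value in [0, 1]."""

    def __init__(self):
        self._table = {}  # token _sentence -> _value

    def value(self, sentence: str) -> float:
        # The model says only that such a number exists for every possible _sentence;
        # how it is arrived at is left open, so a hash serves as a placeholder here.
        if sentence not in self._table:
            self._table[sentence] = (hash(sentence) % 101) / 100
        return self._table[sentence]
```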
The crux of the project is here. Whereas in most conventional accounts of language -I think it's fair to say- the goal is to explain sentence evaluations in terms of other things (meanings-grasped or what have you), in the model these are fundamental. They are the given.
Anyway. Grasping that the semanticist's assumed explanatory burden is specious is a hurdle which must be leapt. Once it is, a much clearer view becomes available.
The way it's imagined, the model also needs to distinguish between _sentences which an _agent has heard or uttered and those s/he has not (an _agent's _value function maps all _sentences, heard or not). With just these elements in place, interesting questions arise as to how best to model real language. One consequential realization is that the _value function should be designed so that whether an _agent _values some given _sentence depends on what other _sentences s/he has already heard and _values (intuitively, what you're inclined to believe depends in part on what you already believe). This makes things tricky, but realistic.
_Sentence _valuations are meant to be mostly obvious to _agents, but not of any great intrinsic interest - like, say, basic colour recognition. To give _agents a motive to speak, a further property, _pleasure, has to be added to the model. _Pleasure is meant to be some nice feeling distinct from any familiar feeling, which is experienced when certain _valuable _sentences are encountered. It's intended to correspond to the real-world benefits which truth helps us get, including the avoidance of familiar pains. It serves in the model as the only motive for exchanging _sentences - for '_conversing'. It is a contrivance of the model which ultimately has to be cashed out when mapping the model onto reality.
With a bit of finessing, it emerges in the model that _agents would naturally want to cooperate to amass as large a collection of _valuable _sentences as possible (a bit like how we naturally cooperate collectively to improve our individual financial circumstances, even as we compete). This gives rise to a property which on inspection looks a whole lot like truth in normal talk - the property of belonging in that collection.
With _truth in place it becomes possible to introduce _words, and to define a concept solely in the model's terms which arguably does all the work which we should expect of a concept of word-meaning. And with _word-_meaning in place, we can get straightforwardly to a concept of _sentence-_meaning or proposition.
Possibly what distinguishes the model from other treatments is that it represents a speaker's goal in talking to be, not the acquisition of information, but rather simply the maximization for herself or himself of some simple good (_pleasure, in the model). The details of the model which make this interesting (and, I think, realistic) are that
- It's somewhat hard to come up with novel _valuable _propositions. They're valuable in part because they're scarce.
- It costs effectively nothing to repeat a _sentence to someone else -to share its _pleasure, if you like.
- The _values _agents assign to new token _sentences depend in part on tokens they've already heard and _value (what you're inclined to _believe depends on what you already _believe).
- Where _agents have mostly the same prior _beliefs, their _valuations of new _sentences mostly agree. Where their prior _beliefs differ, so may their _valuations.
- Lastly, the _values _agents assign to _sentences are in part - but only in small part - up to them. They have a limited ability to choose whether to _value new token _sentences. It costs effort, however, to change one's _beliefs. (People have to be at least a bit responsible for what they believe, right?)
These factors would conspire to create an interesting dynamic, in some ways like what governs an economy of goods. On the one hand, _agents would have a strong incentive to share their _sentences (to talk to each other). That is, the inhabitants of the model would naturally evolve a sort of contract whereby they each voluntarily speak their _sentences, provided others do likewise, since their doing so would be to everyone's benefit. They would be motivated to cooperate. Acquiring the _pleasures of _language would not be a zero-sum game (far from it).
But they would also have an incentive to compete. The idea is that
- _agents individually have large stocks of _valued _sentences (their _beliefs), which would significantly affect the _values of newly encountered _sentences.
- To the extent _agent B's _beliefs overlap with _agent A's and they share their _sentences, B effectively works for A in getting new _pleasurable _sentences. A will likely _value what B does and so likely get pleasure from B's new discoveries. And the same point goes for the _speech community in general.
- When A's _valuation of a _sentence differs from B's, it is thus in her interest to win the community, and likewise B, over to her _valuation. The same would be the case for B. They would, in effect, each be striving to get the community - and one another - to work for themselves individually, rather than having to forfeit that _sentence's _value as a cost of staying in general harmony with the community.
What would competition look like? Suppose we inhabit the model, and you utter a _sentence in my presence. Other things being equal, the model implies you _value this _sentence. Suppose I do not _value it - it is not added to my set of _beliefs. For the reasons just given, I now have an incentive to change your _valuation. How do I do this? Well, if I _disvalue it, this may well be in large part in virtue of other specific '_disconfirming' _sentences in my _belief set. If any of these is not in your _belief set, then maybe by uttering it I will get you to _value it, and so tip your _valuation of the first mooted _sentence to negative. It could indeed be that it is already in your _belief set, but that your _valuation heuristic failed to take account of it - in which case I would merely be '_reminding' you. However, if, as it happens, you _disvalue my suggested _disconfirming _sentence, then clearly the process can recur.
It is a counterpart of the norm ascribed to the _sentences themselves, '_truth'.
Because the good can be shared effectively without cost (just by speaking a _sentence to someone), and because the model's _agents are mostly in agreement, it's easy to see that their best strategy individually would be to agree mutually to assist one another. They would naturally form an implicit contract whereby each speaks her _valuable _sentences to others on the understanding that they will do likewise.
Crucially, the model stipulates further that _agents' _valuations of newly encountered _sentences depend in part on what they have encountered and _valued in the past, and also that _agents have some latitude to re-_value _sentences - just a bit. The point of this wiggle room is that if _sentence _valuations are interdependent as stipulated, the possibility will arise of a newly encountered _sentence being in conflict with a big prior collection. The project of _value maximization is best facilitated by allowing a bit of tolerance with respect to individual _sentences, so as not to oblige abandonment of what may after all prove to be a sound collection. _Value is in a way cumulative for _agents, and they do best in the project of getting it if they're allowed a modicum of discretion. If _valued _sentences are understood as beliefs, these stipulations hopefully won't feel too exotic.
'_Truth for a' - _truth relative to a single _agent a - is of limited interest. For one thing, an _agent has no way of knowing what is in her individual maximizing set, apart from what she _values at any given moment. So the distinction between what she actually _values and what she ought to, to maximize her total _value, would not hold any interest for her. For another, two additional stipulations of the model - that novel _valuable _sentences are hard to come up with but very easy to share, and that people mostly agree in their _valuations - imply that an _agent will be getting most of her _valuable _sentences from exchanges with others. This suggests that the parallel question about collective _value maximization may hold more interest.
Suppose _agent a _values _sentence s and utters it in the presence of b who, as it happens, _disvalues it. As things stand, we have four possibilities.
First, it may be that s is genuinely in a's maximizing set but not in b's - their maximizing sets simply disagree, and s is _true for a and _false for b.
More interestingly, it may be that b is mistaken about s - that in fact his maximal set agrees with a's, but he doesn't realize it. So s is _true for both a and b - a is right and b is mistaken.
Or, vice versa - it may be that s is _false for both - a is mistaken and b is right.
(The last, not so interesting, case of course is that a's and b's sets diverge on s, but they're both mistaken: s is _false for a and _true for b.)
Given these possibilities, what ought _agents to do when confronted with such differences? In particular, should they treat what is _true for one _agent as _true for all? If _truth is just _truth-for-a - if it is merely '_idiolectic' - then for every disagreement between two _agents about a token _sentence, either there is a genuine divergence between them as to what is _true in their respective _idiolects, or their _idiolects properly agree and one or other of them is simply mistaken about what is _true for him. It becomes apparent that the best strategy for _agents individually to maximize _value is to treat all differences as signalling a mistake for one or other _agent. That is, it emerges that it is quantifiably advantageous to treat _truth as objective - provided enough _agents opt in to this strategy and assuming, as we have, both that _agents mostly coincide in their appraisals of _value and that appraisals of _value are at least in part malleable.
To see this, suppose that an _agent disagrees with the majority about some widely _valued _sentence. She then potentially forfeits the _value (and attendant possible _pleasure) of all _sentences whose _support depends on it. These may be considerable in number. If she disagrees with the majority about even a small fraction of all _sentences, she loses not just the _pleasure of the _sentences hierarchically supported by them but also the confidence that _theory _sentences offered by others will rest on foundations which she would _value, and hence, in many cases, the possibility of _valuing _sentences on the strength of others' '_testimony'. Admittedly, this _agent will get some compensation in the form of hierarchically dependent _sentences she contrives herself on the basis of her _idiolectic _sentences. As even the most prolific of solitary _sentence producers will fall far short of what her community collectively can contrive, however, and since she will be deprived of the initial error-check afforded by communal acceptance, she will on balance be considerably less well-off. The extra _value to be gained by an individual by participating in a community of _agents in which effectively everyone commits to and contributes to the construction of a single shared edifice of _truths will easily offset the cost of sacrificing the _pleasure of any genuinely _idiolectic _sentential affinities.
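A back-of-envelope way to see the arithmetic (the symbols are my own, not part of the model's stipulations): suppose the community collectively produces $N$ novel _valuable _sentences per period while a solitary _agent can contrive only $n \ll N$ herself, and suppose that diverging from the majority on a fraction $f$ of foundational _sentences costs her the fraction $d(f)$ of communal output which rests, directly or hierarchically, on those foundations. Then, roughly,

$$V_{\text{dissenter}} \approx \bigl(1 - d(f)\bigr)\,N + n, \qquad V_{\text{conformer}} \approx N,$$

so dissent pays only when $n > d(f)\,N$ - only when her private productivity outweighs the share of the communal edifice she forfeits.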
And so we have in the model a concept of _truth: A _sentence is _true, roughly, just in case it is an element of the set of _sentences which would maximize the combined _value of all _speakers, were they all to _evaluate all of its elements. This being impossible, it stands merely as an ideal. I will note for now just two points.
The first point is just a reminder that the definitions so far are mere stipulations about hypothetical people in hypothetical circumstances. It is fair to ask whether such people are possible, and whether, if they existed, they would behave as described. These are not philosophically loaded questions -should not be, anyway. What is of course loaded is the fitness of the stipulated concepts to model their real counterparts. My claim, of course, is not just that such people are possible but that we in fact are they, and our usual concepts, the model's.
The model recognizes a nominally truth-like idiolectic property, '_truth-for-an-_agent'. A committed defender of individualistic rationality might plausibly cleave to this in the hope of resisting the depredations of the current approach. This is a mistake on two fronts, I think. First, it makes what is true completely unknowable, and hence apparently without practical significance.
- "Bob's dog is a mutt".
- "Bob's dog is on the couch"
- "Bob's dog is sleeping"
- "Bob's dog suffers from arthritis"
- etc.
- "Fido is a mutt"
- "Fido is on the couch"
- "Fido is sleeping"
- "Fido suffers from arthritis"
- etc.
- "Fido is a Dachshund",
- * "Bob's dog is a Dachshund"
- "Fido is on the couch"
- "Yesterday, Fido was on the rug."
- "Tomorrow, Fido will be on the couch"
- "Fido is on the rug."
- s1: Bob’s dog is a mutt (spoken yesterday across town)
- s2: Fido is a Heinz 57 (spoken today here)
- a) ‘Fido’ and ‘Bob’s dog’ have the same _meaning, and
- b) ‘is a mutt’ and ‘is a Heinz 57’ have the same _meaning
- sentence uttered
- time of utterance
- place of utterance
- place of hearer's focus of attention
- context of utterance, being the set of token sentences recently heard by the hearer
- sentence utterer
- full set of encountered token sentences valued by the hearer ('beliefs')
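Assuming the items just listed are meant as the arguments of the _value function, here is one way its shape might be sketched - purely illustrative, with names of my own choosing and a trivial placeholder body:

```python
from dataclasses import dataclass
from typing import FrozenSet

@dataclass(frozen=True)
class TokenSentence:
    text: str      # sentence uttered
    time: float    # time of utterance
    place: str     # place of utterance
    utterer: str   # sentence utterer

def value(
    hearer: str,
    s: TokenSentence,
    focus: str,                          # place of hearer's focus of attention
    context: FrozenSet[TokenSentence],   # token sentences recently heard by the hearer
    beliefs: FrozenSet[TokenSentence],   # encountered token sentences valued by the hearer
) -> float:
    """Return the hearer's _value for the token sentence s, a number between 0 and 1."""
    # Placeholder: already-believed sentences get 1.0, everything else a noncommittal 0.5.
    # The model stipulates only that some such function exists, not how it works.
    return 1.0 if s in beliefs else 0.5
```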
A token sentence s is an observation sentence for a just in case her valuing of s is (relatively) independent of her belief set B. That is, the value function returns the same value for s regardless (almost) of the B parameter.
A token sentence s is a theory sentence for a just in case a's valuing of s is dependent on B.
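In symbols of my own (writing $v_a(s \mid B)$ for the value a's value function returns for token sentence $s$ given belief set $B$), the two definitions come to roughly:

$$\mathrm{obs}_a(s) \;\iff\; v_a(s \mid B) \approx v_a(s \mid B') \ \text{for (almost) all belief sets } B, B'$$

$$\mathrm{theory}_a(s) \;\iff\; \neg\,\mathrm{obs}_a(s)$$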
- are positively valued only when utterer = agent valuing
- are independent of context
- are independent of beliefs
Truth: A token sentence is true just in case it's an element of a set which would maximize the combined aggregate value of all speakers, and which is such that the removal of any element would result in a set of lower combined aggregate value.
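Put slightly more formally, as a gloss rather than as the model's official statement: writing $V(S)$ for the combined aggregate value all speakers would assign to a set $S$ of token sentences were they to evaluate all of its elements,

$$\mathrm{true}(s) \;\iff\; s \in S^{*}, \ \text{where } S^{*} \in \arg\max_{S} V(S) \ \text{and} \ \forall s' \in S^{*}:\; V\bigl(S^{*} \setminus \{s'\}\bigr) < V(S^{*}).$$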
s1, uttered at 1:00PM: The pain sensation in my toe is now throbbing.
s2, uttered at 2:00PM: The pain sensation in my toe is now dull and constant.
What was initially missing from the model was a motivation for speakers to maximize the feeling of assent or agreement which we require them to have. This feeling of assent we are now leaning on is comparable to a sense of harmony or dissonance, but only in a weak sense. I did not intend that this whole account of language should rest on the idea that it's sufficiently intrinsically appealing that people would seek it out (through conversation) for its own sake. To fill the gap, the model introduced the (admittedly contrived) concept of '_pleasure'. _Pleasure was stipulated to be something we get from certain _valuable token _sentences only - not all - and in different degrees (it was later amended to be associated with first encounters with certain _valuable _propositions). It was meant to be a model-specific feeling, distinct from any we actually have, and sufficiently intrinsically worthy to motivate maximizing _sentential _value (the thought, again, being that a speaker's only known way to maximize _pleasure would be simply to maximize the number of _valuable _propositions encountered). One benefit of basing the model on _pleasure, rather than on some range of real pleasures, as one might seek to do, is that it thwarts any temptation to try to ground the diversity of word-meanings in the diversity of our real pleasures. It's a key point that only a single pleasure is needed to get a full treatment of truth and meaning.
_Pleasure was only ever meant to be a place-holder. For the model to provide a satisfactory representation of the relations between the concepts of truth, meaning and so forth, it's necessary to cash it out - to replace it with reality-based concepts. The obviously intended idea is to swap in for it the many real mundane pleasures we seek and pains we seek to avoid. How can this be made to work?
The evident first step is to remove _pleasure from the model altogether. The second step is to add into the model counterparts of all the many mundane pleasures we experience and pains we seek to avoid in real life. For example, there would now be in the model a counterpart of the pleasure of eating a nice crisp apple, or of sitting in a comfortable chair after a long day, or alternatively the pain of being burned by a hot pan. We can suppose these counterparts all to be gathered into a tidy set within the model.
The third step is where things start to get a bit complicated. How exactly should _valuable _propositions be related to these newly added mundane pleasures and pains, such that the latter should motivate the maximization of the accumulation of the former? An instructive non-starter would be to posit a direct correlation between the two and add nothing more: whenever an element of some given _proposition (a _proposition being a set of token _sentences, remember) is _valuable, I experience a token of some given one of the mundane pleasures or pains in the set. For example, there would be a particular pleasure directly associated with "I am seated in a comfortable chair after a long day".
This is how things stand with _pleasure in the model. However, since real pleasures are typically associated with familiar (that is, not new) token propositions, what we really need is some way for _valuable _sentences to facilitate an _agent's getting the pleasure or avoiding the pain. What we need is for the _agent to associate with the candidate _valuable _sentences knowledge of how to act to bring about the pleasure.
I see two ways to accommodate this requirement in the model. The first is to associate a select set of _propositions directly with pleasures/pains, as discussed just above, and then to introduce into the model, for each _agent, a kind of knows-how-to-act-to-make-_valuable mapping or function between certain other _propositions, on the one hand, and members of the select set, on the other.
This approach would keep the 'knows how to act' relation within language, and may look potentially useful in explaining actions whose outcome is indifferent from a pleasure perspective. _Agents would be motivated to exchange _sentences with the thought that encountering a _valuable one in one of those mapped sets of _propositions would be a guide to getting some mundane pleasure.
The second way to accommodate the requirement is to add to the model, for each _agent and each of those certain other _propositions just mentioned, a kind of irreducible 'knows how to act to get pleasure-n when s is _valuable' property. The model, then, would have to be augmented with infinitely many of these inscrutable properties. A _valuable _proposition's having one would allow an _agent to increase her or his mundane pleasure; this would provide an incentive to maximize _valuable _sentences. This proposal is simpler, but may look more to be labelling the problem than solving it, and so less satisfactory for this reason.
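For what it's worth, a toy rendering of this second proposal - everything here, names included, is my own contrivance, and it makes the 'labelling' worry vivid, since the know-how shows up as nothing more than an opaque attached procedure:

```python
from typing import Callable, Dict, FrozenSet, Set

Proposition = FrozenSet[str]  # a _proposition, here, is just a set of token sentences

# The second proposal rendered as data: for certain _propositions an _agent has an
# irreducible 'knows how to act to get pleasure-n when s is _valuable' property,
# modelled as an opaque procedure attached to the _proposition - which is exactly
# the sense in which this labels the problem rather than solving it.
know_how: Dict[Proposition, Callable[[], float]] = {}

def act_on(beliefs: Set[str], p: Proposition) -> float:
    """If some element of p is _valued (i.e. in beliefs) and know-how is attached,
    acting yields a quantity of mundane pleasure; otherwise nothing happens."""
    if p in know_how and any(s in beliefs for s in p):
        return know_how[p]()
    return 0.0
```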
My thought is to accommodate the requirement with the second proposal just mentioned. I will close out this discussion with an assertion about this, and an explanation.
The second proposal is the right one precisely because there is no real problem to solve. There is no need to add the slight extra complexity of the first solution, no need to try to make the problem more theoretically tractable. If the thought were to use something like such a relation as part of an explanation of action, then we would be embarking down a long, dead-end road, avoidance of which made possible the insights of the model itself.
(That was the assertion I just said I would make).
"Wait, what? How can this be right? The principle, here, guiding what does and does not need explanation seems to be, 'Reject the need to explain something if it looks hard.' And that's not very admirable."
The explanation I just promised is of the principle which differentiates what does and does not need explanation. My position (hardly original in itself) is that there are plenty of things we wonder about which are well worth wondering about, but there are some which, as it happens, are not.
To begin with, effectively anything amenable to a properly scientific explanation -roughly, an explanation which generates predictions which can be tested- is worth wondering about. This includes the obvious candidates like the properties of matter and the bio-chemical interoperations of the parts of organisms, but it also includes how our brains respond to patterns of input -including others' speech- ultimately to produce patterns of output behaviour -including talk.
Secondly and a bit less obviously, there are questions about what the many particular things which we lump together under certain given concepts, have in common. We say some things are just, others unjust; some musical, others not; some grammatical, others ungrammatical; there are endless examples. In each case there is wide agreement on many cases but not on all. Although the investigations succumb to the law of diminishing returns, there is often value in thinking a bit carefully about why we say what we do - what it is about the cases we are capturing; what exactly it is which underpins our agreement, if anything, where there is agreement. This sort of investigation, which may include a normative aspect, is sometimes the province of philosophy, and it's how I conceive the effort to characterize truth and meaning, here.
This sort of explanation, to be clear, has its limits. Philosophers, infamously, routinely butt up against the difficulty of specifying necessary and sufficient conditions for a particular thing to count as an exemplar of some type of thing. We can say loosely what it is for something to be a chair, but it just doesn't pay dividends to try to find non-trivial terms which decide the question once and for all.
Thirdly, though, (and no doubt not finally), there are questions about the underlying details of explanations we give in terms of our mentalistic vocabulary: He did X because he believed that P and desired that Q. She said Y because she understood M to mean N. Our concern in such cases is not to understand how it is that we all can agree that he believed P or she understood M to mean N: it's clear enough what counts as a case of one or the other when you encounter it in life. There's no mystery about this, any more than there's a mystery about how we agree one thing to be a chair and another thing not on a particular occasion, most of the time. The question, rather, is about what has to be the case with her or him, in her mind, so to speak, for her to count as believing or understanding these things -
- What is it to believe that P?
- What is it to understand M as meaning N?
In each of these cases, I am intending the questions to be asking something different than what the scientist might ask in inquiring about brain function. They're asking something we would naturally ask in the first person case - What is up with me, when I mean something by a word, or believe something? - and then extend to others.