This document details the model which motivates the thinking here. It comes in two versions, bottom-up and top-down, which differ only in the order in which they present the model's definitions. If you want to cut to the chase and then work back to the specifics, the top-down version is the one to consult: bottom-up | top-down
Effectively everything said here about truth, meaning, consciousness and other matters is a consequence of what I am inventively calling 'the model'. The model imagines a minimal world of agents engaged in a simple activity superficially like speaking a language.
As the model is developed, its elements acquire new properties. To keep things clear, these properties are named following a convention - their names all have a leading '_' (in some places I have used a leading 'p'). The names make plain the real-world properties they're supposed to model. The idea behind a thought experiment such as this is to separate, in principle, the activities
- of imagining a world of people engaged in plausible activities and stipulating definitions to describe them (this is hopefully uncontroversial in itself), and
- of mapping those definitions onto familiar concepts (this is where the controversy comes in, potentially).
The model in the first instance imagines a large set of people, which it terms '_agents', milling about in reasonable proximity. They are prone to uttering strings of sounds, '_sentences', which, as it happens, resemble sentences of (say) English - but we are to imagine these strings having no meaning (initially). The model stipulates that _agents reflexively assign each of them -each utterance of a _sentence- a '_value' between 0 and 1. This is a sort of gut-reaction, yes-no evaluation, with the valuation attaching to the 'token' _sentences. It is formalized in the model by allowing that every _agent has a kind of table or function which associates a number with every possible token _sentence. Keeping to our clever naming practice, this is called the '_value function'.
The crux of the project is here. Whereas in most conventional accounts of language -I think it's fair to say- the goal is to explain sentence evaluations in terms of other things (meanings-grasped or what have you), in the model these are fundamental. They are the given.
Anyway. Grasping that the semanticist's assumed explanatory burden is specious is a hurdle which must be leapt. Once it is, a much clearer view becomes available.
The way it's imagined, the model also needs to distinguish between _sentences which an _agent has heard or uttered and those s/he has not (an _agent's _value function maps all _sentences, heard or not). With just these elements in place, interesting questions arise as to how best to model real language. One consequential realization is that the _value function should be designed so that whether an _agent _values some given _sentence depends on what other _sentences s/he has already heard and _values (intuitively, what you're inclined to believe depends in part on what you already believe). This makes things tricky, but realistic.
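To make the stipulations so far concrete, here is a minimal sketch in Python of an _agent and her _value function: a mapping from any token _sentence to a number between 0 and 1, whose output depends in part on what has already been heard and _valued. The class, the threshold and the particular way prior _beliefs nudge the _value are illustrative inventions, not part of the model itself.

```python
THRESHOLD = 0.5  # a _sentence is _valued when its _value crosses this line

class Agent:
    def __init__(self):
        self.heard = set()    # token _sentences encountered so far
        self.beliefs = set()  # the subset of heard _sentences the agent _values

    def value(self, sentence: str) -> float:
        """Gut-reaction _value of a token _sentence, a number in [0, 1].

        A fixed base reaction (hashed here, purely as a stand-in) is nudged
        upward by overlap with prior _beliefs: what you're inclined to
        _believe depends in part on what you already _believe.
        """
        base = (hash(sentence) % 100) / 100
        overlap = sum(1 for b in self.beliefs
                      if set(b.split()) & set(sentence.split()))
        return min(1.0, base + min(0.3, 0.05 * overlap))

    def hear(self, sentence: str) -> None:
        """Encounter a token _sentence and reflexively _evaluate it."""
        self.heard.add(sentence)
        if self.value(sentence) >= THRESHOLD:
            self.beliefs.add(sentence)

a = Agent()
a.hear("there is tea in the cupboard")
a.hear("the tea in the cupboard is green tea")
print(a.beliefs)
```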
_Sentence _valuations are meant to be mostly obvious to _agents, but not of any great intrinsic interest - like, say, basic colour recognition. To give _agents a motive to speak, a further property, _pleasure, has to be added to the model. _Pleasure is meant to be some nice feeling distinct from any familiar feeling, which is experienced when certain _valuable _sentences are encountered. It is intended to correspond to the real-world benefits which truth helps us obtain, including the avoidance of familiar pains. It serves in the model as the only motive for exchanging _sentences – for ‘_conversing’. It is a contrivance of the model which ultimately has to be cashed out when mapping the model onto reality.
With a bit of finessing, it emerges in the model that _agents would naturally want to cooperate to amass as large a collection of _valuable _sentences as possible (a bit like how we naturally cooperate collectively to improve our individual financial circumstances, even as we compete). This gives rise to a property which on inspection looks a whole lot like truth in normal talk - the property of belonging in that collection.
With _truth in place it becomes possible to introduce _words, and to define a concept solely in the model's terms which arguably does all the work which we should expect of a concept of word-meaning. And with _word-_meaning in place, we can get straightforwardly to a concept of _sentence-_meaning, or proposition.
Possibly what distinguishes the model from other treatments is that it represents a speaker's goal in talking to be, not the acquisition of information, but rather simply the maximization for her or himself of some simple good (_pleasure, in the model). The details of the model which make this interesting (and, I think, realistic) - sketched in a toy example after this list - are that
- It's somewhat hard to come up with novel _valuable _propositions. They're _valuable in part because they're scarce.
- It costs effectively nothing to repeat a _sentence to someone else -to share its _pleasure, if you like.
- The _values _agents assign to new token _sentences depend in part on tokens they've already heard and _value (what you're inclined to _believe depends on what you already _believe).
- Where _agents have mostly the same prior _beliefs, their _valuations of new _sentences mostly agree. Where their prior _beliefs differ, so may _valuations.
- Lastly, the _values _agents assign to _sentences are in part - but only small part - up to them. They have a limited ability to choose whether to _value new token _sentences. It costs effort, however, to change one's _beliefs. (people have to be at least a bit responsible for what they believe, right?)
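Here is a toy simulation, under the assumptions just listed, of why sharing pays: contriving a novel _valuable _sentence is costly, repeating one to someone else costs nothing, and _agents with similar prior _beliefs mostly agree in their _valuations. The particular costs, probabilities and class names are invented for illustration only.

```python
import random

DISCOVERY_COST = 0.5   # effort to contrive a novel _valuable _sentence (they're scarce)
PLEASURE = 1.0         # _pleasure from each newly _valued _sentence
AGREEMENT = 0.9        # with shared priors, _valuations mostly coincide

class Agent:
    def __init__(self, name):
        self.name = name
        self.beliefs = set()
        self.pleasure = 0.0

    def discover(self, sentence):
        """Contrive a novel _valuable _sentence - the only costly step."""
        self.pleasure -= DISCOVERY_COST
        self.beliefs.add(sentence)
        self.pleasure += PLEASURE

    def hear(self, sentence):
        """Hearing a repeated _sentence costs nothing; agreement is likely."""
        if sentence not in self.beliefs and random.random() < AGREEMENT:
            self.beliefs.add(sentence)
            self.pleasure += PLEASURE

random.seed(0)
alice, bob = Agent("alice"), Agent("bob")

for i in range(10):
    speaker, hearer = (alice, bob) if i % 2 == 0 else (bob, alice)
    s = f"novel sentence {i}"
    speaker.discover(s)   # costly for the discoverer...
    hearer.hear(s)        # ...but free _pleasure for everyone she tells

print(round(alice.pleasure, 1), round(bob.pleasure, 1))  # both do better than going it alone
```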
These factors would conspire to create an interesting dynamic, in some ways like what governs an economy of goods. On the one hand, _agents would have a strong incentive to share their _sentences (to talk to each other). That is, the inhabitants of the model would naturally evolve a sort of contract whereby they each voluntarily speak their _sentences, provided others do likewise, since their doing so would be to everyone's benefit. They would be motivated to cooperate. Acquiring the _pleasures of _language would not be a zero-sum game (far from it).
But they would also have an incentive to compete. The idea is that
- _agents individually have large stocks of _valued _sentences (their _beliefs), which would significantly affect the _values of newly encountered _sentences.
- To the extent _agent B's _beliefs overlap with _agent A's and they share their _sentences, B effectively works for A in getting new _pleasurable _sentences. A will likely _value what B does and so is likely to get _pleasure from B's new discoveries. And the same point goes for the _speech community in general.
- When A's _valuation of a _sentence differs from B's, it is thus in her interest to win the community, and likewise B, over to her _valuation. The same would be the case for B. They would, in effect, each be striving to get the community -and one another- to work for themselves individually, rather than having to forfeit that _sentence's _value as a cost of staying in general harmony with the community.
What would competition look like? Suppose we inhabit the model, and you utter a _sentence in my presence. Other things being equal, the model implies you _value this _sentence. Suppose I do not _value it - it is not added to my set of _beliefs. For the reasons just given, I now have an incentive to change your _valuation. How do I do this? Well, if I _disvalue it, this may well be in large part in virtue of other specific '_disconfirming' _sentences in my _belief set. If any of these is not in your _belief set, then maybe by uttering it I will get you to _value it, and so tip your _valuation of the first mooted _sentence to negative. It could indeed be that it is already in your _belief set, but that your _valuation heuristic failed to take account of it - in which case I would merely be '_reminding' you. However, if, as it happens, you _disvalue my suggested _disconfirming _sentence, then clearly the process can recur.
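The recursive shape of that exchange can be sketched as follows. The mapping from a _sentence to the _sentences that bear on its _valuation (here called grounds) is a stand-in for whatever in the _value function makes _valuations interdependent; it, and the little scenario at the end, are illustrative inventions rather than part of the model as stated.

```python
def bring_to_value(my_beliefs, your_beliefs, grounds, sentence, depth=0):
    """Try to get you to _value `sentence`, which I _value and you currently
    do not, by uttering _sentences from my _belief set that bear on it.
    If you _disvalue one of those in turn, the same process recurs on it."""
    if sentence in your_beliefs:
        return True
    if depth > 10:                      # conversations do not go on forever
        return False
    for g in grounds.get(sentence, []):
        if g not in my_beliefs:
            continue                    # I can only offer what I _believe
        if g in your_beliefs or bring_to_value(my_beliefs, your_beliefs,
                                               grounds, g, depth + 1):
            # either I am merely '_reminding' you of g, or I have just won
            # you over to it; with g in view, your _valuation of `sentence`
            # tips over and you come to _value it
            your_beliefs.add(sentence)
            return True
    return False

# The scenario above: you utter u, I _disvalue it, and d is one of my
# '_disconfirming' _sentences for u. If I can bring you to _value d,
# your _valuation of u tips to negative.
my_beliefs = {"Pat has not been shopping", "the cupboard was emptied this morning"}
your_beliefs = {"there is tea in the cupboard", "the cupboard was emptied this morning"}
grounds = {"Pat has not been shopping": ["the cupboard was emptied this morning"]}

u = "there is tea in the cupboard"
d = "Pat has not been shopping"
if bring_to_value(my_beliefs, your_beliefs, grounds, d):
    your_beliefs.discard(u)             # your _valuation of u tips to negative
print(your_beliefs)
```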
It is a counterpart of the norm ascribed to the _sentences themselves, '_truth'.
Because the good can be shared effectively without cost (just by speaking a _sentence to someone), and because the model's _agents are mostly in agreement, it's easy to see their best strategy individually would be to agree mutually to assist one another. They would naturally form an implicit contract whereby each speaks her _valuable _sentences to others on the understanding that they will do likewise.
Crucially, the model stipulates further that _agents' _valuations of newly encountered _sentences depend in part on what they have encountered and _valued in the past, and also that _agents have some latitude to re-_value _sentences - just a bit. The point of this wiggle room is that if _sentence _valuations are interdependent as stipulated, the possibility will arise of a newly encountered _sentence being in conflict with a big prior collection. The project of _value maximization is best facilitated by allowing a bit of tolerance with respect to individual _sentences, so as not to oblige abandonment of what may after all prove to be a sound collection. _Value is in a way cumulative for _agents, and they do best in the project of getting it if they're allowed a modicum of discretion. If _valued _sentences are understood as beliefs, these stipulations hopefully won't feel too exotic.
'_Truth for a' - membership in the set of _sentences which would maximize _agent a's total _value - is of limited interest. For one thing, an _agent has no way of knowing what is in that set, apart from what she _values at any given moment. So the distinction between what she actually _values and what she ought to, to maximize her total _value, would not hold any interest for her. For another, two additional stipulations of the model - that novel _valuable _sentences are hard to come up with but very easy to share, and that people mostly agree in their _valuations - imply that an _agent will be getting most of her _valuable _sentences from exchanges with others. This suggests that the parallel question about collective _value maximization may hold more interest.
Suppose _agent a _values _sentence s and utters it in the presence of b who, as it happens, _disvalues it. As things stand, we have four possibilities.
First, it may be that s is genuinely in a's maximizing set but not in b's - their maximizing sets simply disagree, and s is _true for a and _false for b.
More interestingly, it may be that b is mistaken about s - that in fact his maximal set agrees with a's, but he doesn't realize it. So s is _true for both a and b - a is right and b is mistaken.
Or, vice versa - it may be that s is _false for both - a is mistaken and b is right.
(The last, not so interesting case, of course, is that a's and b's sets diverge on s but both are mistaken: s is _false for a and _true for b.)
Given these possibilities, what ought _agents to do when confronted with such differences? In particular, should they treat what is _true for one _agent as _true for all? If _truth is just _truth-for-a –if it is merely '_idiolectic'– then for every disagreement between two _agents about a token _sentence either there is a genuine divergence between them as to what is _true in their respective _idiolects, or their _idiolects properly agree and one or other of them is simply mistaken about what is _true for him. It becomes apparent that the best strategy for _agents individually to maximize _value is to treat all differences as signalling a mistake on the part of one or other _agent. That is, it emerges that it is quantifiably advantageous to treat _truth as objective – provided enough _agents opt in to this strategy and assuming, as we have, both that _agents mostly coincide in their appraisals of _value and that appraisals of _value are at least in part malleable.
To see this, suppose that an _agent disagrees with the majority about some widely _valued _sentence. She then potentially forfeits the _value (and attendant possible _pleasure) of all _sentences whose _support depends on it. These may be considerable in number. If she disagrees with the majority about even a small fraction of all _sentences, she loses not just the _pleasure of the _sentences hierarchically supported by them but also the confidence that _theory _sentences offered by others will rest on foundations which she would _value, and hence, in many cases, the possibility of _valuing _sentences on the strength of others' ‘_testimony’. Admittedly, this _agent will get some compensation in the form of hierarchically dependent _sentences she contrives herself on the basis of her _idiolectic _sentences. Even the most prolific of solitary _sentence producers, however, will fall far short of what her community collectively can contrive, and she will also be deprived of the initial error-check afforded by communal acceptance; on balance she will be considerably less well-off. The extra _value to be gained by an individual by participating in a community of _agents in which effectively everyone commits to and contributes to the construction of a single shared edifice of _truths will easily offset the cost of sacrificing the _pleasure of any genuinely _idiolectic _sentential affinities.
And so we have in the model a concept of _truth: A _sentence is _true, roughly, just in case it is an element of the set of _sentences which would maximize the combined _value of all _speakers, were they all to _evaluate all of its elements. This being impossible, it stands merely as an ideal. I will note for now just two points.
The first point is just a reminder that the definitions so far are mere stipulations about hypothetical people in hypothetical circumstances. It is fair to ask whether such people are possible, and whether, if they existed, they would behave as described. These are not philosophically loaded questions -should not be, anyway. What is of course loaded is the fitness of the stipulated concepts to model their real counterparts. My claim, of course, is not just that such people are possible but that we in fact are them, and our usual concepts, the model's.
The model recognizes a nominally truth-like idiolectic property, '_truth-for-an-_agent'. A committed defender of individualistic rationality might plausibly cleave to this in the hope of resisting the depredations of the current approach. This is a mistake on two fronts, I think. First, it makes what is _true completely unknowable, and hence apparently without practical significance.
- "Bob's dog is a mutt".
- "Bob's dog is on the couch"
- "Bob's dog is sleeping"
- "Bob's dog suffers from arthritis"
- etc.
- "Fido is a mutt"
- "Fido is on the couch"
- "Fido is sleeping"
- "Fido suffers from arthritis"
- etc.
- "Fido is a Dachshund",
- * "Bob's dog is a Dachshund"
- "Fido is on the couch"
- "Yesterday, Fido was on the rug."
- "Tomorrow, Fido will be on the couch"
- "Fido is on the rug."
- s1: Bob’s dog is a mutt (spoken yesterday across town)
- s2: Fido is a Heinz 57 (spoken today here)
- a) ‘Fido’ and ‘Bob’s dog’ have the same _meaning, and
- b) ‘is a mutt’ and ‘is a Heinz 57’ have the same _meaning
- sentence uttered
- time of utterance
- place of utterance
- place of hearer's focus of attention
- context of utterance, being the set of token sentences recently heard by the hearer
- sentence utterer
- full set of encountered token sentences valued by the hearer ('beliefs')
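Read as a signature, the list above suggests something like the following sketch of the value function. The dataclass and parameter names are invented for illustration, and the body is deliberately left unwritten: the model only requires that the returned value depend on these factors, not that it be computed in any particular way.

```python
from dataclasses import dataclass
from typing import FrozenSet, Tuple

@dataclass(frozen=True)
class Utterance:
    sentence: str        # sentence uttered
    time: float          # time of utterance
    place: str           # place of utterance
    utterer: str         # sentence utterer

def value(hearer: str,
          utterance: Utterance,
          focus: str,                     # place of hearer's focus of attention
          context: Tuple[str, ...],       # token sentences recently heard by the hearer
          beliefs: FrozenSet[str]         # valued token sentences ('beliefs')
          ) -> float:
    """Return the hearer's gut-reaction value for the utterance, in [0, 1]."""
    ...  # signature only: no particular computation is stipulated
```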
A token sentence s is an observation sentence for a just in case her valuing of s is (relatively) independent of her belief set B. That is, the value function returns (almost) the same value for s regardless of the B parameter.
A token sentence s is a theory sentence for a just in case a's valuing of s is dependent on B.
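One way to make the distinction operational, assuming a value function of roughly the shape sketched above: probe how much the returned value moves when the belief-set parameter is perturbed. The tolerance and the particular perturbations are illustrative choices, not part of the definitions.

```python
def is_observation_sentence(value, sentence, beliefs, tolerance=0.05):
    """True when the value of `sentence` is (almost) independent of `beliefs`.

    `value(sentence, beliefs)` is the agent's value function restricted to the
    two parameters that matter here; the others are held fixed. We probe a few
    nearby belief sets - dropping one belief at a time - and ask whether the
    returned value barely moves.
    """
    baseline = value(sentence, frozenset(beliefs))
    for b in beliefs:
        if abs(value(sentence, frozenset(beliefs) - {b}) - baseline) > tolerance:
            return False        # the value shifts with beliefs: a theory sentence
    return True

def is_theory_sentence(value, sentence, beliefs, tolerance=0.05):
    return not is_observation_sentence(value, sentence, beliefs, tolerance)
```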
- are positively valued only when utterer = agent valuing
- are independent of context
- are independent of beliefs
Truth: A token sentence is true just in case it's an element of a set which would maximize the combined aggregate value of all speakers, and which is such that the removal of any element would result in a set of lower combined aggregate value.
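Treated purely as an ideal, the definition above can be rendered as a brute-force search in a tiny toy world: a token sentence is true just in case it belongs to the subset of all sentences whose joint evaluation would maximize combined aggregate value across speakers. The value callable below stands in for the speakers' value functions, recentred so that disvalued sentences lower the total; all names here are illustrative, none the model's own.

```python
from itertools import combinations

def combined_value(sentence_set, agents, value):
    """Total net value across all agents, were each to evaluate every element."""
    return sum(value(agent, s) for agent in agents for s in sentence_set)

def maximizing_set(all_sentences, agents, value):
    """Exhaustive search for the set whose joint evaluation maximizes combined
    value. Because the total is additive and improvement must be strict, every
    element of the winning set contributes positively, so removing any element
    would lower the total (the second clause of the definition). In the model
    this set is an unreachable ideal, not something anyone computes."""
    best, best_score = frozenset(), 0.0
    for r in range(1, len(all_sentences) + 1):
        for subset in combinations(all_sentences, r):
            score = combined_value(subset, agents, value)
            if score > best_score:
                best, best_score = frozenset(subset), score
    return best

def is_true(sentence, all_sentences, agents, value):
    return sentence in maximizing_set(all_sentences, agents, value)

# Toy usage: two agents, three sentences, hand-made values in [0, 1],
# recentred at 0.5 so that disvalued sentences count against the total.
raw = {("a", "s1"): 0.9, ("a", "s2"): 0.2, ("a", "s3"): 0.8,
       ("b", "s1"): 0.8, ("b", "s2"): 0.3, ("b", "s3"): 0.7}
value = lambda agent, s: raw[(agent, s)] - 0.5

print(is_true("s1", ["s1", "s2", "s3"], ["a", "b"], value))  # True
print(is_true("s2", ["s1", "s2", "s3"], ["a", "b"], value))  # False
```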
s1, uttered at 1:00PM: The pain sensation in my toe is now throbbing.
s2, uttered at 2:00PM: The pain sensation in my toe is now dull and constant.
- _Ability1: If it is _true that there is tea in the cupboard, then one can have the pleasure of tea.
- _Ability2: If it is _true that one's car will be towed if not moved, then one can avoid the displeasure of having one's car towed.
- _Ability3: If it is _true that swimming conditions at the lake are perfect, then one can have the pleasure of a swim.
- _Ability4: If it is _true that uranium-235 is fissile, then one can enjoy the pleasures of having a nuclear power plant.
- etc.
- pi: When Pat has returned from the supermarket, there is tea in the cupboard.
and
- pii: Pat has returned from the supermarket.
- piii: There is tea in the cupboard.
- somehow to reconcile or bridge common sense and science, or
- for a third explanatory idiom.
- that the success of common-sense must be due to the existence of a set of intra-cranial physical states or properties whose interrelations mirror the interrelations between what common sense attributes to us, and
- that this isomorphism would imply that the terms of common-sense in fact refer to the mooted physical states or properties - that common-sense is in fact a kind of proto-science.