Propositional Attitudes and the Model
An enduring problem for people interested in language is to suss out the details of the relation between the two basic elements of language --word- or phrase-meanings, on the one hand, and sentence-truth or falsity, on the other. The usual, intuitive approach is to take word-meanings to be basic and to suppose we somehow compose them to get sentence-meanings, and then get from these to truth or falsity. This is the so-called 'compositional' approach to semantics. So, for example, if someone says the sentence "The dog is white.", you would naturally take "The dog" to mean a certain dog and likewise "is white" to mean the property of being white. You would then take the whole sentence somehow to represent any state of affairs in which the dog has the property of being white, and to be true just in case the dog does in fact have that property. So far so good (according to the usual view). Consider now, though, a sentence like,
s0: "Fred believes unicorns can't swim.".
We may want to say --we may need our theory of meaning to allow-- that this sentence about Fred is true. What sort of thing do we want the word "unicorn" to mean, if we are to get the right outcome? It turns out this is not an easy problem.
Words in the class which gives rise to this problem typically relate people (here, Fred) to things they might want, think, etc. (here, that unicorns can't swim) --so-called 'propositions'. The words in question are called 'propositional attitude' verbs, prototypical examples of which are 'believes' and 'desires'. The model I develop follows traditional philosophy in calling the semantic contexts created by phrases like "believes that" 'opaque', and calling other contexts 'transparent'.
Now, the model tells us what meanings and propositions are by reference to transparent contexts. How do its claims cope with propositional attitudes?
The first thing to re-emphasize is that, unlike the approach we just opened with, the model is unapologetically non-compositional. Its leading feature is that it tells us which sentences are true without adverting to their contained words' meanings. This goes for propositional-attitude sentences as much as for any others, so there is no initial problem deciding the truth of such sentences. We don't need to know any quasi-scientific fact about Fred's conception of mythical beasts' aquatic proclivities in order properly to glean the truth of s0 --our valuing of the sentence is no different from our valuing of any other, and its truth, likewise.
It's still fair to ask, though, exactly what the meanings of words in opaque contexts are, according to the model. The model posited that two words in transparent contexts have the same meaning just in case their tense-adjusted moment sets are the same (dropping the leading '_' convention used in the exposition of the model). Setting aside the model's jargon, the point, to a first approximation, is really just that at a moment of speech,
- you can isolate the set of all true token atomic sentences containing some given word (the word's moment set),
- you can rewrite these sentences in a way that removes tense and indexes them to time and place of utterance (tense-adjustment), and
- if you compare any two token words (which may or may not be of the same word-type) and their respective such sets are the same, then (and only then)
- the token words have the same meaning (a schematic sketch of this comparison follows).
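To fix ideas, here is a minimal computational sketch of that comparison. It rests on simplifying assumptions of my own, not anything in the model's text: token sentences are treated as plain strings, tense-adjustment is modelled by tagging each sentence-frame with the time and place of utterance, and a word's occurrences are blanked to '__' so that frames rather than raw strings are compared. The function names are illustrative only.

```python
def moment_set(word: str, true_atomic_tokens: set[str]) -> set[str]:
    """All true token atomic sentences containing the word, with the word blanked to '__'."""
    return {s.replace(word, "__") for s in true_atomic_tokens if word in s}


def tense_adjust(frames: set[str], time: str, place: str) -> set[str]:
    """Schematically 'remove tense' by indexing each frame to the time and place of utterance."""
    return {f"{frame} [{time}, {place}]" for frame in frames}


def same_meaning(w1: str, w2: str, true_atomic_tokens: set[str],
                 time: str, place: str) -> bool:
    """Two token words have the same meaning iff their tense-adjusted moment sets coincide."""
    return (tense_adjust(moment_set(w1, true_atomic_tokens), time, place)
            == tense_adjust(moment_set(w2, true_atomic_tokens), time, place))
```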
So now, consider the following two token sentences uttered on an occasion:
s1: Fred believes George Orwell wrote Nineteen Eighty-Four. (true)
s2: It's not the case that Fred believes Eric Arthur Blair wrote Nineteen Eighty-Four. (also true)
Our problem, in short, is that whereas we need the model to be able to say that 'George Orwell' and 'Eric Arthur Blair' in some sense have different meanings in s1 and s2, as it stands it says their meanings are the same (we currently figure out meanings solely from true atomic sentences, and since Orwell simply was Blair, the two names occur in exactly corresponding true atomic sentences).
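Continuing the earlier sketch (and again only for illustration), feeding the transparent-context test a pair of true atomic tokens shows the unwanted verdict: both names get blanked to the same frame, so the test counts them synonymous.

```python
true_atomic_tokens = {
    "George Orwell wrote Nineteen Eighty-Four.",
    "Eric Arthur Blair wrote Nineteen Eighty-Four.",
}

# Both names yield the frame '__ wrote Nineteen Eighty-Four.', so the
# transparent-context test declares them to have the same meaning.
print(same_meaning("George Orwell", "Eric Arthur Blair",
                   true_atomic_tokens, "t0", "here"))  # True
```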
To get to the meanings of words in opaque contexts, we evidently need a definition similar to what we have, but with the relevant comparison set of token sentences restricted to the agent's propositional attitudes (that is, to the sentences expressing propositions to which the agent stands in some propositional attitude). The suggestion is that we add new definitions to the model for a distinct type of meaning relativised to agents, to cover this case:
Meaning-for-agent-ai: Token phrases p1 and p2 have the same meaning-for-agent ai
iff the opaque tense-adjusted moment set of p1 for ai is the same as the opaque tense-adjusted moment set of p2 for ai.
Here, the definition of 'tense-adjusted moment set of a phrase' is as we already have in the model, with the qualification that 'opaque' restricts the sentences being compared to all and only those to which ai stands in some propositional attitude.
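On the same illustrative assumptions as before, the relativised definition amounts to swapping the comparison set: instead of all true atomic tokens, we compare frames drawn from the sentences to which the agent stands in some propositional attitude. The names below are again mine, not the model's.

```python
def opaque_moment_set(word: str, attitude_tokens: set[str]) -> set[str]:
    """The agent's attitude sentences containing the word, with the word blanked to '__'."""
    return {s.replace(word, "__") for s in attitude_tokens if word in s}


def same_meaning_for_agent(w1: str, w2: str, attitude_tokens: set[str],
                           time: str, place: str) -> bool:
    """Same meaning-for-the-agent iff the opaque tense-adjusted moment sets coincide."""
    return (tense_adjust(opaque_moment_set(w1, attitude_tokens), time, place)
            == tense_adjust(opaque_moment_set(w2, attitude_tokens), time, place))
```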
For example, since, at the moment under consideration, the opaque, tense-adjusted moment set for Fred of 'George Orwell' includes
'__ wrote Nineteen Eighty-Four.'
but the opaque, tense-adjusted moment set for Fred of 'Eric Arthur Blair' does not, it follows that these two names have different meanings-for-Fred. In contrast, if Fred knows that George Sand was Amantine Lucile Aurore Dupin de Francueil, then for every token (atomic) sentence containing 'George Sand' which he believes or hopes to be true, there will be a corresponding sentence containing 'Amantine Lucile Aurore Dupin de Francueil' which he likewise believes or hopes is true. If Fred believes what would be said in a moment with "George Sand and Chopin had a dysfunctional relationship.", he would also believe what would be said in that moment with "Amantine Lucile Aurore Dupin de Francueil and Chopin had a dysfunctional relationship." The two names will thus have the same meaning-for-Fred.
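Running the earlier sketch on a toy attitude set for Fred (hypothetical sentences, chosen only to mirror the example) gives the verdicts just described.

```python
freds_attitudes = {
    "George Orwell wrote Nineteen Eighty-Four.",
    "George Sand and Chopin had a dysfunctional relationship.",
    "Amantine Lucile Aurore Dupin de Francueil and Chopin had a dysfunctional relationship.",
}

# 'Eric Arthur Blair' occurs in none of Fred's attitude sentences, so its
# opaque moment set is empty and the two names differ in meaning-for-Fred.
print(same_meaning_for_agent("George Orwell", "Eric Arthur Blair",
                             freds_attitudes, "t0", "here"))  # False

# Both George Sand names yield the frame
# '__ and Chopin had a dysfunctional relationship.', so they coincide.
print(same_meaning_for_agent("George Sand",
                             "Amantine Lucile Aurore Dupin de Francueil",
                             freds_attitudes, "t0", "here"))  # True
```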