Propositional Attitudes and the Model

An enduring problem for people interested in language is to suss out the details of the relation between word- or phrase-meanings, on the one hand, and sentence-truth or falsity, on the other. So, for example, if someone says the sentence "The dog is white.", you would naturally take "The dog" to mean a certain dog, "is white" to mean the property of being white, and the whole sentence to be true just in case the dog has the property of being white. This is the 'compositional' approach to semantics familiar to philosophers. So far so good (well, not really, but that's another discussion). But, now, what about a sentence like
s0: Fred believes Beelzebub has wings.
- (stipulating for the sake of argument that Beelzebub has never existed)? We may want to say --we may need our theory of meaning to allow-- that this sentence about Fred is true. What sort of thing do we want the word "Beelzebub" to mean, to get the right outcome? It turns out this is not an easy problem.
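To fix ideas, here is a minimal sketch of the compositional picture in code. The names and data are mine and purely illustrative, not part of any theory defended here: phrase meanings are assigned first, sentence truth is computed from them, and an empty name like 'Beelzebub' then leaves the computation with nothing to work on.

```python
# A minimal sketch of the compositional picture, with made-up names and data.
meanings = {
    "The dog": "the_dog",          # a referring phrase denotes an individual
    "is white": {"the_dog"},       # a predicate denotes a set of individuals
    "has wings": {"the_sparrow"},
}

def sentence_true(subject_phrase, predicate_phrase):
    """Compositional truth: the sentence is true iff the subject's referent
    falls in the predicate's extension."""
    referent = meanings.get(subject_phrase)
    if referent is None:
        # 'Beelzebub' never gets a referent, so composition has nothing
        # to work with -- the point where the trouble begins.
        raise ValueError(f"no meaning assigned to {subject_phrase!r}")
    return referent in meanings[predicate_phrase]

print(sentence_true("The dog", "is white"))   # True: the dog has the property
# sentence_true("Beelzebub", "has wings")     # fails: no referent to compose with
```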
Expressions in the class which gives rise to this problem typically relate people (here, Fred) to things they might hope, believe, etc. (here, that Beelzebub has wings) --so-called 'propositions'. The class of expressions is called "propositional attitudes". The model developed here follows traditional philosophy in calling the semantic contexts created by phrases like "believes that" _"opaque"_, and other contexts _"transparent"_.
Now, the model as it stands tells us what meanings and propositions are on the basis of transparent contexts. How do its claims fare with propositional attitudes?
The first thing to re-emphasize is that the model is unapologetically non-compositional. Its leading feature is that it tells us which sentences are true without adverting to the meanings of their contained words. This goes for propositional-attitude sentences as much as for any others, so there is no initial problem about how we judge the truth of such sentences. We don't need to know any quasi-scientific fact about Fred's conception of mythical denizens of the underworld in order properly to assess the truth of s0. Our valuing of the sentence is no different than our valuing of any other, and likewise its truth.
Now, philosophers recognize an ambiguity in propositional attitudes, owing to the two possibilities that the words in the opaque contexts they create simply mean what they commonly do, or, alternatively, have their meanings determined by how things stand with the person who has the attitude - the ambiguity between de re and de dicto understandings. If we allow ourselves to be a bit pedantic about s0, the difference is between:
  • de re: There is a certain individual commonly named 'Beelzebub', who is a demon etc., and Fred believes of this individual that he has wings. (False, because there is no such individual (glossing over complications))
  • de dicto: Fred believes there is a certain individual commonly named 'Beelzebub', who is a demon etc., who has wings. (True, we are stipulating)
The model is not taxed by this phenomenon. Whether _agents _value or ought to _value a token _sentence like s0 is a matter no different from whether they _value or ought to _value any other _sentence. As with words in transparent contexts, the moment of speech determines the _truth of the token _sentence, and this in turn determines the _meaning (de re or de dicto) of 'Beelzebub'. The question as to the _truth of what is said is not waiting on an answer to the _word-_meaning question in a way that makes resolving the latter imperative, as it is on the dominant compositional view. (I will come back to this: the scientific project is renounced, but what about a normative one, of making explicit and regimenting our norms of speech, so that we can nail down which sense is in play when there is uncertainty? [comparison to linguistics])
It's still fair to ask, though, exactly what the meanings of words in opaque contexts are, according to the model. The model posited that two token words in transparent contexts have the same meaning just in case their tense-adjusted moment sets are the same. Setting aside the model's jargon, the point, to a first approximation, is really just that at a time of speech:
  • you can isolate the set of all _true token 'atomic' _sentences containing some given _word (the _word's _moment set),
  • you can rewrite these _sentences in a way that removes tense and indexes them to time and place of utterance (tense-adjustment), and
  • if you compare any two token _words (they may or may not be tokens of the same written/spoken _word) and their respective sets are the same, then
  • the _words have the same _meaning.
Our problem, in short, is that we need the model to be able to say that 'George Orwell' and 'Eric Arthur Blair' in some sense have different meanings in the following s1 and s2:
s1: Fred believes George Orwell wrote Nineteen Eighty-Four. [true]
and
s2: It's not the case that Fred believes Eric Arthur Blair wrote Nineteen Eighty-Four. [true]
As things stand, the model says they have the same meaning, since at present we read their meanings off the true atomic sentences containing them.
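To make the difficulty concrete, here is a toy rendering of the transparent test -- my own encoding with made-up data, not the model's formal apparatus. A _word's tense-adjusted _moment set is represented as a set of sentence frames indexed to a time and place of utterance, and two _words count as having the same _meaning just when those sets coincide; on data in which every true atomic sentence about Orwell has a Blair counterpart, the two names come out alike.

```python
# A toy rendering of the transparent test, with made-up data.  A word's
# tense-adjusted moment set is modelled as the set of frames: the true
# atomic token sentences containing the word, with the word blanked out
# and indexed to a time and place of utterance.

def tense_adjusted_moment_set(word, true_atomic_tokens):
    """Frames of every true atomic token sentence containing the word."""
    return {
        (sentence.replace(word, "__"), time, place)
        for (sentence, time, place) in true_atomic_tokens
        if word in sentence
    }

def same_meaning(word1, word2, true_atomic_tokens):
    """The transparent criterion: same meaning iff the frame sets coincide."""
    return (tense_adjusted_moment_set(word1, true_atomic_tokens)
            == tense_adjusted_moment_set(word2, true_atomic_tokens))

# Hypothetical true atomic tokens at some moment of speech:
true_tokens = {
    ("George Orwell wrote Nineteen Eighty-Four", "t1", "here"),
    ("Eric Arthur Blair wrote Nineteen Eighty-Four", "t1", "here"),
    ("George Orwell was born in Motihari", "t1", "here"),
    ("Eric Arthur Blair was born in Motihari", "t1", "here"),
}

# The two names come out with the same meaning -- which is exactly the
# problem posed by s1 and s2.
print(same_meaning("George Orwell", "Eric Arthur Blair", true_tokens))  # True
```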
To get to the _meanings of _words in opaque contexts, we evidently need a definition similar to what we have, but with the relevant comparison set of token _sentences restricted to the _propositional attitudes of the subject [the _propositions to which the subject stands in some _propositional attitude]. The suggestion is that we add new definitions to the model for a distinct type of meaning relativised to people, to cover this case:
_Meaning for a person oi: Token _phrases p1 and p2 have the same _meaning for a person oi iff the opaque tense-adjusted _moment set of p1 for oi is the same as the opaque tense-adjusted _moment set of p2 for oi.
Here, the definition of the 'tense-adjusted _moment set of a _phrase' is as we already have it in the model, with the qualification that 'opaque' restricts the _sentences being compared to all and only the _sentences to which the person oi stands in some _propositional attitude.
For example, since the opaque, tense-adjusted moment set for Fred of 'George Orwell' includes
  • '__ wrote Nineteen Eighty-Four.'
but the opaque, tense-adjusted moment set for Fred of 'Eric Arthur Blair' does not, it follows that these two names have different meanings-for-Fred. In contrast, if Fred knows that George Sand was Amantine Lucile Aurore Dupin de Francueil, then for every token (atomic) sentence containing 'George Sand' which he believes or hopes to be true, there will be a corresponding sentence containing 'Amantine Lucile Aurore Dupin de Francueil' (assuming neither name is ambiguous in his lexicon). If Fred believes what would be said at some moment with "George Sand and Chopin had a dysfunctional relationship.", he would also believe what would be said at that moment with "Amantine Lucile Aurore Dupin de Francueil and Chopin had a dysfunctional relationship.", and so on. The two names will have the same meaning for Fred.
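Keeping the same toy encoding, the only thing that changes for the person-relativised notion is the comparison set: instead of all true atomic token _sentences, we take just those _sentences toward which the person holds some propositional attitude. The data about Fred below is, again, made up purely for illustration.

```python
# The same encoding, with the comparison set restricted to the sentences
# toward which Fred (hypothetically) holds some propositional attitude.

def opaque_moment_set(phrase, attitude_tokens):
    """Frames of the person's attitude sentences containing the phrase."""
    return {
        (sentence.replace(phrase, "__"), time, place)
        for (sentence, time, place) in attitude_tokens
        if phrase in sentence
    }

def same_meaning_for_person(phrase1, phrase2, attitude_tokens):
    """Same meaning for the person iff the opaque frame sets coincide."""
    return (opaque_moment_set(phrase1, attitude_tokens)
            == opaque_moment_set(phrase2, attitude_tokens))

# Made-up sentences Fred believes (or otherwise has an attitude toward):
freds_attitude_tokens = {
    ("George Orwell wrote Nineteen Eighty-Four", "t1", "here"),
    ("George Sand and Chopin had a dysfunctional relationship", "t1", "here"),
    ("Amantine Lucile Aurore Dupin de Francueil and Chopin had a dysfunctional relationship",
     "t1", "here"),
}

# 'George Orwell' and 'Eric Arthur Blair' come apart for Fred ...
print(same_meaning_for_person("George Orwell", "Eric Arthur Blair",
                              freds_attitude_tokens))                  # False
# ... while 'George Sand' and her legal name do not.
print(same_meaning_for_person("George Sand",
                              "Amantine Lucile Aurore Dupin de Francueil",
                              freds_attitude_tokens))                  # True
```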
All of this being said, a clarifying comment is in order. It may have been noticed that the model's new, qualified concept of _meaning is formulated in terms of a '_person' instead of an _agent. I have not as yet added to the model a definition of '_person', and the required concept is that of an individual related to a set of token _sentences by an attitude, so evidently there remains some legwork to be done. The underlying message is that this is not really an important concept - it does only quite limited work. But the main thing to emphasize is the distinction between _agents and the _people named in the model. The two are completely independent. Whereas _agents are elements of the model's definition, _people are entities which emerge within the model. To make this vivid: the model's basic elements include no dogs or chairs. It does, though, contain the words 'dog' and 'chair', from which dogs and chairs can be reached by _semantic descent. The _people just mentioned are like dogs and chairs in this respect. To take there to be a semantic relation between them and _agents is clearly antithetical to the model.