A Mostly Concise Introduction to the Project
This is another attempt to introduce the basic idea which drives the project here. I believe that well-written philosophy should be intelligible to both beginners and experts. I have tried to write this well.
I start with two thoughts about language which I take to be familiar. My hope is that appreciating that they both have to be up-ended will make reviewing them more interesting. I then sketch out the approach to modelling conversation which drives this project, and try to motivate how a concept fit to model truth naturally arises in it. I conclude with some big-picture thoughts about thinking and communicating.
To begin, then, here are two basic dualities relating to language and a familiar bit of common sense associated with each.
Words and Sentences - Priority of Words
The first duality is between the two basic units of language: words and sentences. There are finitely many words in any given language but effectively infinitely many sentences we can make with them. Words have meanings; sentences may be true or false. Words' meanings may be known; the thoughts expressed by sentences, understood.
Our familiar thought about these is that word-meaning is in a sense prior to or more basic than sentence-truth. Intuitively, whereas a person can understand and assess the truth of a sentence only if she first knows the meanings of the words constituting it, it's not generally the case that knowing the meaning of a word is attendant on understanding all sentences, or even any given sentence, containing it. Similarly, whereas it's possible to specify word-meanings without mentioning truth, it's generally thought not to be possible to detail how sentences come to be true without mentioning their constituent words' meanings. Our effectively infinite ability to grasp sentences is explained in terms of our finite reservoir of word-meanings-known, so we usually think.
Thinking and Communicating - Priority of Thinking
The second duality is between the two main language-involving activities, thinking and communicating. Intuitively, language is essential to both. As I write this I am sitting by myself, contemplating how best to persuade others of an unobvious conclusion I hold true, typing out my thoughts as I do so. The activity is conducted by composing thoughts, formulated as English sentences, evaluating them, and keeping, refining or discarding them as the case may be. No communication has happened at this point; the activity is solely one of thinking. Hopefully, one day, you will read what I've written, at which point I will be communicating these thoughts to you as I might in talking. Each of these activities, thinking and communicating, requires language, and the two are distinct.
A qualification is in order, here. There are people who say that they do not think in (say) English -that they have no internal monologue. This suggests that language is not needed for thought. For my purposes I require only that if we can think without knowing some language like English, then it must be that we are equipped with some purely private, language-like medium in which our thoughts dwell. Critically, this medium must be like familiar languages in having both word and sentence analogues, which must respectively have meanings and be truth-evaluable.
The upshot is that, for the purposes of the current point, 'language' should be understood to encompass both private frameworks for mental representation like this -'languages of thought', if you like- and familiar spoken languages like Urdu and Ukrainian. If you allow both kinds, then communication requires only an extra step of translation from the language of thought into English (say).
The familiar idea we have about this duality is that thought is prior to, or more basic than, communication. It is possible to have a thought without communicating it, but to communicate anything with language it is necessary to have had a thought -so we usually think. It's true that the thinking can be simultaneous with the communicating, but this is not compulsory. Thinking requires fewer ingredients: you don't need an interlocutor to think; you do to communicate.
What's at stake with this latter thought is how the words of a natural language -which, after all, are purely arbitrary sounds- come to be imbued with their semantic properties. Does solitary thinking do the job, or is it in fact through interpersonal communication that semantic properties emerge?
Our dualities, then, are between words and sentences and between thought and communication. And our familiar thoughts are that word-meaning is prior to sentence-truth and that thought is prior to communication.
The Model
Let's focus on the first thought. The project here is to invert it. The proposal is to investigate language taking sentence-truth to be more basic than word-meaning. A lot needs to be said about how this is possible - about how it can be reconciled with the mentioned intuitions. The main thought is to show that if the substantive explanatory project about meanings which those intuitions may seem to imply can be seen to be misconceived, then what remains of them is anodyne. I won't pursue this here, but I hope the thought will start to take shape as the project is developed.
The idea is to develop a model of speakers and language, but not with the usual assumptions. The base model contains some arbitrarily large number of speakers or _agents. To these are added effectively infinitely many arbitrary, sentence-like strings of sounds, _sentences. _Sentences initially are understood to have no semantic value, and we don't initially care about any detail of their internal composition. The third element of the model is a set of _value mappings or functions, one for each _agent, which assign to each _sentence a number between 0 and 1 for their respective _agents. This _value is meant to correspond to the subjective measure of credibility or confidence we have in sentences, but in the model it is just the measure of a simple gut feeling, with 1 corresponding to maximum confidence, 0 minimum confidence, and 0.5 indifference. The mapping assigns a _value to all _sentences regardless of whether they've been encountered.
The fourth element, the model's main innovation, is a simple, model-specific _pleasure property associated with some _valuable _sentences when first heard by an _agent.
The idea is that this _pleasure property corresponds to the practical value got from hearing true sentences. Each _agent has a single, solitary goal: to maximize her individual _pleasure. The model shows that the semantic properties of words are independent of the specifics of the world we use them to describe and which they help us to navigate.
So our basic ingredients, again, are
- _agents (speakers)
- _sentences (inert strings of sounds)
- _value mappings or functions relating _sentences to numbers which measure '_value'
- _pleasure, associated with a subset of _valuable _sentences.
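The base model is simple enough to sketch in code. The following Python is an illustrative toy, not part of the model's specification: the random 'gut feeling' _value function and the 0.5 threshold for _pleasure are my own assumptions, chosen only to make the four ingredients concrete.

```python
import random

class Agent:
    """A speaker in the base model: a _value function plus accumulated _pleasure."""

    def __init__(self, seed):
        self.rng = random.Random(seed)  # each _agent has her own gut feelings
        self.values = {}                # memoized _value per _sentence
        self.heard = set()              # _sentences already encountered
        self.pleasure = 0.0

    def value(self, sentence):
        # The _value mapping assigns a number in [0, 1] to every _sentence,
        # whether or not it has been encountered; here it is just a fixed
        # arbitrary number per (agent, sentence) pair.
        if sentence not in self.values:
            self.values[sentence] = self.rng.random()
        return self.values[sentence]

    def hear(self, sentence):
        # _pleasure attaches to some _valuable _sentences on FIRST hearing only.
        first_time = sentence not in self.heard
        self.heard.add(sentence)
        if first_time and self.value(sentence) > 0.5:
            self.pleasure += self.value(sentence)
```

A second hearing of the same _sentence adds no further _pleasure, which is what makes fresh _sentences from others worth having.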
Constraints
What needs to be added to this model to imbue _sentences with a property plausibly called '_truth'? The first step is to give _agents a motivation to exchange _sentences -to '_converse'. This is fulfilled by three stipulations. The first is that uttering _sentences in the company of another comes at effectively no cost to the speaking _agent. The second is that it is difficult to contrive or discover _pleasurable _sentences by oneself. The third is that _agents mostly agree as to which _sentences are _pleasurable. With these stipulations in place, _agents would find _conversation mutually beneficial, and would naturally evolve an implicit contract whereby each shares her _pleasurable _sentences on condition that others do likewise.
To repeat, our basic constraints are,
- talk in general costs effectively nothing
- _pleasurable _sentences are hard to come by on one's own
- people are mostly like-minded (have similar _value functions).
Refinements: _Beliefs
The second step is to refine the _value function to make '_language' more like language. The first refinement is to allow that a single _sentence (sound sequence) may be _valued on some hearings, but not on others. A _sentence like 'It is daytime.' may suggest the goal here, though in the model this is still a mere sound-sequence. This is accomplished by adding to the _value function parameters for place and time of utterance. Doing this, in turn, gives rise to a concept of token _sentence, this being a tuple consisting of time, place and _sentence type.
The second, consequential refinement is to add to the _value function a parameter consisting of the set of _sentences hitherto heard or otherwise entertained and _valued by the _agent -her '_beliefs', if you like.
This parameter makes the _value function definition self-referencing in a way, which I consider a feature, not a bug. It makes what an _agent _values now potentially dependent on what she has _valued in the past. It introduces relations between _sentences, in that the addition to, or removal from, an _agent's _belief set of a single _sentence can now make the difference between her _valuing or _dis-valuing some newly encountered _sentence.
The _value function, then, looks like this,
V_i : (s, t, x, B) → v
where s is a token _sentence, t is a time, x is a position in 3-dimensional space, B is the set of _sentences hitherto encountered and _valued by _agent i, and v is a number between 0 and 1.
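To make the refinements concrete, here is a toy instance of the refined _value function in Python. The particular sentences, thresholds, and the decision to ignore the place parameter are illustrative assumptions of mine, not part of the model's specification.

```python
def value_i(s, t, x, beliefs):
    """A toy instance of V_i : (s, t, x, B) -> v for one _agent."""
    if s == "It is daytime.":
        # Time-indexed: the same sound sequence is _valued at some
        # utterance times and not at others (x is unused in this toy case).
        return 0.9 if 6 <= (t % 24) <= 18 else 0.1
    if s == "The lake is frozen.":
        # _Belief-relative: the _value hinges on another _sentence being in B.
        return 0.8 if "It is winter." in beliefs else 0.2
    return 0.5  # indifference toward _sentences this toy function knows nothing about
```

The second clause previews the _beliefs parameter's effect: adding or removing 'It is winter.' from B flips the _value of 'The lake is frozen.'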
The _beliefs parameter has two crucial implications.
First Implication of _Beliefs: Intrapersonal Conflict and Resolution
Suppose _agent a encounters a token _sentence s1 which she _dis-values, but which she would _value were she not already to have in her _beliefs set some other token _sentence, s0. The model can reasonably be refined to allow that a can choose to _dis-value s0 and so to add s1 to her _belief set -to reject her prior _belief in s0 in favour of s1. _Agents, we may suppose, can revise their _beliefs. Since a's end goal in trafficking in _sentences is ultimately to get as much _pleasure -and so, initially, _value- as she can, and since this choice may foreseeably lead to greater aggregate _value, it may be in her interest to do this. A constraint needs to be added along with this refinement, that making such revisions is not arbitrarily easy to do - that it costs some effort and takes some time.
With these changes in place, there now come to be multiple potential _belief sets open to any given _agent, given any total set of experienced _sentences. Among these, we may suppose there to be some one which maximizes _value for her. And with this change, token _sentences acquire a new property of interest -that of being in this _value-maximizing set. Alongside the question as to whether a _sentence is _valued by an _agent is now the question as to whether -so to speak- the _agent ought to _value the _sentence. The natural name of this property is '_truth-for-the-_agent' (or 'subjective _truth').
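The _value-maximizing _belief set can be sketched by brute force: enumerate every subset of the _sentences an _agent has experienced and _value each member relative to the rest of that candidate set. A minimal Python sketch, assuming (my assumption, for simplicity) a value function taking a _sentence and the remainder of the candidate set:

```python
from itertools import combinations

def best_belief_set(experienced, value_fn):
    """Return the subset of experienced _sentences with the greatest total
    _value, each member _valued relative to the rest of the subset.
    Brute force: fine for the handful of _sentences in a toy model."""
    best, best_total = frozenset(), 0.0
    items = list(experienced)
    for r in range(len(items) + 1):
        for combo in combinations(items, r):
            b = frozenset(combo)
            total = sum(value_fn(s, b - {s}) for s in b)
            if total > best_total:
                best, best_total = b, total
    return best
```

With conflicting _sentences s0 and s1, the search rightly drops the weaker one: keeping both depresses each one's _value, so the maximizing set contains only the _sentence worth more on its own.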
Second Implication of _Beliefs: Interpersonal Conflict and Resolution
The model's constraints imply that _agents get most of their _valuable _sentences from others, and that this is facilitated by broad harmony between individuals' _belief sets. It becomes apparent that, just as intrapersonal conflict is possible, so too is interpersonal conflict.
Suppose _agent a0 _believes some token _sentence s and a1 _disbelieves it. It may be that one or other of them -a1, let's say- is 'mistaken': s is in fact _true-for-a1, and a1 ought to change his _belief set to accommodate it, to fulfill his goal of maximizing _value and hence _pleasure. Alternatively, though, it may simply be that s is _false-for-a1 and _true-for-a0. What should happen in this case? a1 has two options. His first option is just to live with the divergence from a0, accepting the cost of forfeiting the _pleasure of any future _sentences whose _value hinges on s, as well as any which depend in turn on them, and so on, recursively. The second option is to adjust his _belief set to accommodate s, accepting the cost of the effort required to make the adjustment. My proposal is that the model should be so specified that the latter cost is lower than the former some considerable majority of the time. In other words, the model should be such that _agents maximize their _pleasure by mostly keeping their _belief sets in harmony with those of the large majority of _agents, accepting the cost of the effort of sometimes adjusting their _belief sets. It being the case that _agents get the large majority of their _pleasurable _sentences from others, this is a reasonable refinement of the model. _Agents' best strategy for individually maximizing their _pleasure would be to work to fall into line with the community, most of the time.
It should be emphasized that the definition of '_truth' is not meant to make a virtue of unquestioning conformity. The model is meant to have a bias in favour of agreeing with communal consensus, but not so strong a bias that individuals' _pleasure is maximized by systematic acceptance of it. Indeed, it's easy to represent in the model both the possibility and propagation of congenial _falsehoods, and the benefit to all of dissent from truculent individuals in providing a corrective to mistaken hegemonic _beliefs.
_Truth
The foregoing being allowed, _agents in the model would in effect collaborate to erect a shared edifice of _sentences thought to maximize collective _pleasure. Where there is disagreement, individual _agents would have an incentive to try to win others round to their _valuation. The property of a _sentence, of being an element of the set of _sentences which maximizes the collective _pleasure of _agents, would of course be '_truth' (unqualified). _Truth, it will be noticed, is a property in the first instance of token _sentences.
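Unqualified '_truth' can be sketched the same way as subjective _truth, with the total now summed across every _agent's _value function. This is again an illustrative toy of my own (with _value standing in for the _pleasure it is supposed to track), not the model's official definition:

```python
from itertools import combinations

def true_sentences(experienced, value_fns):
    """The '_true' _sentences: the subset of experienced _sentences that
    maximizes summed _value across all _agents' _value functions.
    value_fns: one function per _agent, each mapping (s, rest_of_set) -> v."""
    best, best_total = frozenset(), 0.0
    items = list(experienced)
    for r in range(len(items) + 1):
        for combo in combinations(items, r):
            b = frozenset(combo)
            total = sum(vf(s, b - {s}) for vf in value_fns for s in b)
            if total > best_total:
                best, best_total = b, total
    return best
```

Where two _sentences conflict for every _agent, the collectively maximizing set keeps at most one of them, mirroring the intrapersonal case at the communal level.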
It's a separate topic, but it can possibly be guessed how '_words' are introduced into the model, and a concept of sameness of '_meaning' defined in terms of their substitutability 'salva _veritate'. With _words and _meaning in place it becomes possible to group together token _sentences whose constituent _words suitably align in '_meaning', into sets. This grouping gives us a concept which does all of the work we should plausibly expect of a concept of '_proposition'.
Thinking and Communicating
I will close by returning to the second of the two language-related dualities with which I began, between thinking and communicating. The familiar thought there was that thinking is prior to communicating. That is, the semantic properties of the elements of language inhere in them concomitantly with the activity of isolated cogitation, prior to engaging in talk with another. It will be understood that the model flips this on its head (or rather, on its feet). According to the model, language gets its semantic properties through dialogue. This implication of the model was quite obviously built into it from the outset. The only justification for this design decision is that the theory it implies is more convincing than the alternative. How exactly the moral should be understood requires more discussion than I should try to undertake here.
But it is a consequential point: it requires, I believe, a fundamental rethink about the nature of rationality. It has implications, if correct, for a wide range of philosophical questions, including questions about consciousness, the role of perception in judgment, and the nature of justification and of knowledge.
This second duality in language was included up-front in this summary as a first-class concern, despite its not playing a role in the main discussion here, because the model's implications for it are fundamental. The model gets traction only if the points about both dualities are assimilated, and that, admittedly, is a lot to swallow in one go. The failure, as I understand it, of philosophy's practitioners to take note of this position in conceptual space is due to the difficulty of inverting both of these foundational premises of conventional thinking at once.