
Contents
1.1 Presupposition
1.1.1 Local Satisfaction
1.1.2 Dynamic Semantics
1.2 Modals
1.3 Conditionals
1.4 Anaphora
1.5 In Sum
Abstract
This chapter surveys the plan for the book, and introduces the framework of dynamic semantics with a brief discussion of presupposition projection. Presupposition projection is the phenomenon wherein contents “escape” operators which usually cancel entailments; hence, for instance, both ‘Susie stopped smoking’ and ‘If Susie stopped smoking, she’ll win the marathon’ naturally lead to the inference that Susie once smoked, whereas only the first leads to the inference that Susie doesn’t now smoke. The chapter explains the phenomenon and then sketches dynamic semantics’ elegant theory of presupposition projection, which provides a simple and compelling way to introduce and motivate the idea that interpretation can be dynamic, and the general dynamic approach to meaning.
In this introductory chapter, I will explain the plan for the book by very briefly surveying the target phenomena; the general contours of dynamic semantics; what I see as its central problems; and the bounded alternative I will advocate.
I will spend a fair amount of time in this chapter discussing semantic presupposition, for two reasons. First, it provides a simple way to explain the basic motivations for dynamic semantics, and to lay out a simple dynamic fragment. Second, this is the phenomenon that I will have the least to say about in what follows, so a somewhat detailed discussion here will not be redundant.
First, a terminological note. ‘Dynamic semantics’ means different things to different authors. A variety of criteria have been offered for what makes a system dynamic (for instance: order-sensitivity (Chierchia 1995); failures of eliminativity and/or distributivity (van Benthem 1996, Rothschild & Yalcin 2015, 2016); or, more loosely, reference to local contexts). My foil throughout this book is dynamic in all these ways: it is, broadly speaking, the dynamic systems growing out of Heim (1982, 1983), further developed in many ways since (e.g., in Groenendijk & Stokhof 1991, Heim 1992, Dekker 1994, Groenendijk et al. 1996, Muskens 1996, Beaver 2001, and many others). I will not discuss the class of systems which developed out of Kamp (1981) called discourse representation theory (DRT). Kamp’s work was of fundamental importance in the development of dynamic semantics, but I think it is better to reserve the label ‘DRT’ for the representational approach distinctive of Kamp’s work. DRT is sometimes grouped with Heimian semantic approaches under the heading ‘dynamic semantics,’ but this is a confusing practice (even more confusingly, ‘DRT’ is sometimes used simply as a label for what I am calling ‘dynamic semantics’). What is most distinctive about DRT is its theory of discourse representation; those representations are compatible with various semantics, including dynamic semantics. But I will not discuss them further, simply because this is a book about semantics (and a little pragmatics), not about representation, and I don’t see a need to make any commitments about the latter.
My own system crosscuts the standard criteria of dynamicness, and I will not address the question of whether my system should properly be viewed as dynamic or not. The question is at least partly terminological; given the breadth of usage of ‘dynamic semantics,’ it is too vague to answer full stop. Having said that, I will throughout use ‘dynamic semantics’ to mean the particular class of systems growing out of Heim’s work and the many developments since. My system is dynamic in some ways, not in others, and I am perfectly happy if some want to describe it as a species of dynamic semantics; in any case, it owes a deep debt to the dynamic systems I will discuss. But, again, in this book I will use the label ‘dynamic semantics’ more narrowly.
1.1 Presupposition
Semantic presuppositions are contents that can systematically survive in environments that usually cancel inferences (for instance, negation). This pattern is called presupposition projection. Standard examples include:
change-of-state verbs: ‘Susie stopped smoking’ presupposes that Susie used to smoke; ‘Susie started practicing yoga’ presupposes that Susie didn’t formerly practice yoga;
definite descriptions: ‘The linguist arrived’ presupposes, on standard views, that there is a unique (salient) linguist (a claim we’ll explore in more detail in Chapter 9);
clefts: constructions which move a constituent to sentence-initial position for emphasis, as in ‘It was Liam who lost a tooth,’ which presupposes that someone lost a tooth;
factives: attitude predicates which presuppose the truth of their complement, like ‘knows’ or ‘realizes’: ‘Mark realizes that Liam isn’t coming’ presupposes that Liam isn’t coming.
Obviously, these are all inferences licensed by an assertion of any of these sentences. What makes them presuppositions is their projection behavior. For illustration, consider clefts. Sentences with the form ‘It was A who φ-ed’ entail that someone φ-ed. Two key observations lead us to identify this as a presupposition. The first is that this existential inference survives even when the cleft sentence is embedded under negation or disjunction, in the antecedents of conditionals, in attitude contexts, or other environments that generally cancel entailments:
(1-a) It was not Liam who lost a tooth.
(1-b) Either Liam got a bloody nose, or else it was Liam who lost a tooth.
(1-c) If it was Liam who lost a tooth, we need to have a talk with him.
(1-d) Lily thinks that it was Liam who lost a tooth.
All these constructions lead (defeasibly, but naturally) to the conclusion that someone (in the relevant group and at the relevant time) lost a tooth. And this is surprising. To bring this out, compare what happens in parallel constructions with ‘Liam lost a tooth’—that is, the minimal variant on ‘It was Liam who lost a tooth’ which lacks the cleft construction:
(2-a) Liam didn’t lose a tooth.
(2-b) Either Liam got a bloody nose, or else Liam lost a tooth.
(2-c) If Liam lost a tooth, we need to have a talk with him.
(2-d) Lily thinks that Liam lost a tooth.
These sentences, unlike their cleft counterparts in (1), do not lead to the conclusion that someone lost a tooth.
This is surprising. ‘It was Liam who lost a tooth’ plausibly has the very same truth-conditions as ‘Liam lost a tooth,’ with a change only in the focus of the sentence. It seems impossible to imagine a situation where one of these is true and the other false. So it seems like these should pattern together. And, coming from a static worldview, you might think that they should pattern in the second way, where the inference does not survive. To see why, recall that in the static, classical worldview, a sentence like ‘Liam lost a tooth’ denotes a proposition (relative to a context, which determines the value of indexicals, pronouns, tense, and so on; I’ll often leave the relativization to context implicit). There’s debate about what exactly propositions are, but that debate won’t matter for our purposes, so I will identify propositions with sets of possible worlds.1 So, for instance, the proposition expressed by ‘Liam lost a tooth’ is just the set of possible worlds where Liam lost a tooth. Disjunction in the static picture corresponds to union,2 so the proposition expressed by p ∨ q is the union of the propositions expressed by p and q, respectively. The weirdness of presupposition projection can now be seen easily. ‘It was Liam who lost a tooth,’ in the static worldview, denotes some proposition or other; let’s remain neutral about which one it is. ‘Liam got a bloody nose’ denotes a different proposition: on the face of it, the set of worlds where Liam got a bloody nose. The disjunction in (1-b) thus denotes the union of these propositions. Some worlds where Liam got a bloody nose are ones where no one (in the relevant group, at the relevant time) lost a tooth. So, no matter what proposition is expressed by ‘It was Liam who lost a tooth,’ the proposition expressed by the disjunction in (1-b) will include worlds where no one lost a tooth. 
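The static computation just sketched can be made concrete. The following toy Python model is my own illustration (the world and atom names are invented for the example): worlds are frozensets of the atomic facts true at them, propositions are sets of worlds, and disjunction is union.

```python
# Toy static model (illustrative names): a world is a frozenset of the
# atomic facts true at it.
W = [frozenset(s) for s in (
    {"liam_tooth"},                # Liam lost a tooth
    {"liam_nose"},                 # Liam got a bloody nose; no one lost a tooth
    {"liam_nose", "other_tooth"},  # bloody nose, and someone else lost a tooth
    set(),                         # nothing happened
)]

def prop(fact):
    """The proposition (set of worlds) where `fact` holds."""
    return {w for w in W if fact in w}

bloody_nose = prop("liam_nose")
# Whatever proposition the cleft expresses, take the strongest candidate
# (the worlds where Liam lost a tooth) for illustration.
cleft = prop("liam_tooth")

# Static disjunction is union:
disjunction = bloody_nose | cleft

someone_tooth = prop("liam_tooth") | prop("other_tooth")
# The disjunction includes worlds where no one lost a tooth:
print(any(w not in someone_tooth for w in disjunction))  # True
```

Swapping in any weaker candidate proposition for the cleft would only add worlds to the union, so the point does not depend on that choice.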
In the standard static picture of pragmatics, asserting p is a way of communicating that p is true: that is, that the actual world is included in the proposition expressed by p. Hence given all these assumptions, it is hard to see why asserting the disjunction in (1-b) would communicate—as it seems to—that someone lost a tooth, since, given classical assumptions, it is true at worlds where no one did.
When we compare (1-b) with (2-b), we see that the predictions of the static account seem exactly right in the latter case: an assertion of (2-b) does not at all lead us to conclude that someone lost a tooth. So, in a sentence like (2-b), things seem to work just as the classical static picture would have it. So what is happening differently in (1-b)? The reasoning above was non-committal about the meaning of the cleft construction itself, which suggests that there is something about this phenomenon that makes trouble, not just for the detailed semantics of some particular constructions, but rather for the foundational assumptions of classical static approaches to meaning and communication.
Similar points can be made with the other minimal pairs in (1) and (2); as well as corresponding minimal pairs involving different presuppositional and corresponding non-presuppositional constructions, like ‘Susie stopped smoking’ versus ‘Susie used to smoke and doesn’t now,’ or ‘Mark realizes that Liam isn’t coming’ versus ‘Mark truly believes that Liam isn’t coming.’3
Our first observation, then, is that presuppositions project out of environments that usually cancel entailments. But they don’t always do so: projection can be blocked, or filtered, in characteristic ways, as in (3):4
(3-a) Either no one lost a tooth, or else it was Liam who lost a tooth.
(3-b) If someone lost a tooth and it was Liam who lost a tooth, then we should talk with him.
(3-c) If someone lost a tooth, it was Liam who lost a tooth.
(3-d) Lily thinks that someone lost a tooth, and she thinks that it was Liam who lost a tooth.
Here the inclination to conclude that someone lost a tooth disappears.
The challenge raised by presuppositions is explaining these two sides of projection: why presuppositions project when they do, and why they are filtered when they are.
1.1.1 Local Satisfaction
To account for presupposition, the static system sketched just now needs some kind of elaboration. Two prominent approaches were pursued in the early literature. Peters (1979) and Karttunen & Peters (1979) proposed influential trivalent approaches, in which the failure of a presupposition is modeled as a gap in truth-values. In classical semantics, a sentence p neatly partitions the space of possible worlds into those where p is true and those where ¬p is true instead. On the trivalent approach, a presuppositional sentence p instead partitions the space of possible worlds into three sets: the worlds where p is true, those where p is false and hence ¬p is true, and those where p is neither true nor false: namely, the worlds where its presuppositions are false. So, for instance, ‘It was Liam who lost a tooth’ would be true wherever Liam lost a tooth, false wherever Liam didn’t lose a tooth but someone else did, and undefined wherever no one lost a tooth. In this system, a sentence’s (strongest) presupposition can be identified with the set of worlds where it is either true or false.5 Sufficiently clever stipulations about the connectives and the pragmatics of trivalence can, broadly speaking, capture the projection patterns just reviewed (modulo the proviso problem which I explain below). Although this approach is very interesting, and has recently been the focus of renewed interest (thanks especially to the work of George 2008 and Fox 2008), I mention it largely to set it aside; it will come up again in passing in Chapter 9.
Instead, I will focus on the other thread of responses to the problem of presupposition projection, which is sometimes known as satisfaction theory. Karttunen (1973, 1974) and Stalnaker (1974b) proposed broadly similar responses to the projection data above, centered on the idea that a presupposition should be thought of as a content which must be antecedently accepted. On this approach, you can’t assert a sentence like ‘It was Liam who lost a tooth’ out of the blue: you can only use it in a context where it is already accepted that someone lost a tooth. Descriptively, this seems untrue—you can say, out of the blue, that it was Liam who lost a tooth, and we can certainly understand you. To account for this, satisfaction theorists propose a process of accommodation through which presuppositions can be quietly added to the common ground (the set of worlds consistent with everything which is commonly accepted in the conversation). This may look like it deprives this account of predictive value, but most of what’s interesting about the account is not here—in its account of what it takes for a presuppositional sentence to be asserted—but rather in the next step. The key insight of these approaches was that the requirement that presuppositions be antecedently accepted should be localized: the presupposition of a clause of a sentence must be entailed, not by the conversation’s context (the “global” context), but rather by that clause’s locally available information (the “local” context).
To get a feel for the idea, consider a conjunction p ∧ q. Once you arrive at q, there is some natural sense in which p is already available. If q presupposes r, then, these theories say, the presupposition must be entailed, not by the common ground itself, but rather by the common ground updated with p (we can think of this update simply as set intersection, so this is the set of worlds in the common ground where p is true). We call that updated common ground the right conjunct’s local context: the information that is available, in a way to be further precisified, to that part of the sentence.
For another example, consider disjunction. The local context for a right disjunct, according to these theories, is the common ground updated with the negation of the information in the left disjunct. So, if q presupposes r, the disjunction p ∨ q can only be used in a context whose common ground, when updated with ¬p, entails r. This predicts a key contrast between our two disjunctions, ‘Either Liam got a bloody nose, or else it was Liam who lost a tooth’ versus ‘Either no one lost a tooth, or else it was Liam who lost a tooth.’ As we have seen, the former, but not the latter, naturally leads to the inference that someone lost a tooth. Satisfaction theory has an elegant story of why the inference disappears in the latter case: the negation of the left disjunct entails the presupposition of the right disjunct (that someone lost a tooth). That means that the presupposition will be entailed by its local context—it will be “locally satisfied”—no matter what the global context is. That is, start with any common ground and update with the negation of the left disjunct and you’ll end up with a local context that entails that someone lost a tooth. That means, in turn, that the disjunction as a whole can be used in any context whatsoever: it puts no requirements whatsoever on the global context. Satisfaction theory thus has a neat story about presupposition filtering: presuppositions are rendered inert whenever their triggers appear in parts of sentences that guarantee that they will be entailed by their local contexts.
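The satisfaction-theoretic rules just glossed can be sketched in the same spirit. In this toy Python illustration (the names and the encoding of worlds are mine, not the theory’s official formulation), a right disjunct’s local context is the common ground minus the left-disjunct worlds, and filtering amounts to entailment by that local context:

```python
# Toy model of Karttunen-style local contexts (illustrative names).
# Worlds are frozensets of the facts true at them.
W = [frozenset(s) for s in (
    {"liam_tooth", "someone_tooth"},
    {"other_tooth", "someone_tooth"},
    {"liam_nose"},
    set(),
)]

def prop(fact):
    return {w for w in W if fact in w}

def entails(context, proposition):
    return context <= proposition

common_ground = set(W)  # a maximally ignorant global context

# 'Either no one lost a tooth, or else it was Liam who lost a tooth':
# the right disjunct's local context is the common ground updated with
# the negation of the left disjunct.
no_tooth = common_ground - prop("someone_tooth")
local_1 = common_ground - no_tooth
print(entails(local_1, prop("someone_tooth")))  # True: filtered, in any context

# 'Either Liam got a bloody nose, or else it was Liam who lost a tooth':
local_2 = common_ground - prop("liam_nose")
print(entails(local_2, prop("someone_tooth")))  # False: the presupposition projects
```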
Things are slightly more complicated when it comes to satisfaction theory’s explanation of why presuppositions project in cases like (1-b). What this theory predicts is that a disjunction with the form ‘Either Liam got a bloody nose, or else it was Liam who lost a tooth’ can only be used in a context where the common ground entails that someone lost a tooth after it is updated with ‘Liam didn’t get a bloody nose.’ One kind of context that has this feature is one whose common ground entails that someone lost a tooth. But this is not the weakest kind of context that has this feature: the disjunction is also predicted to be acceptable in a context whose common ground entails the disjunction ‘Either Liam got a bloody nose, or else someone lost a tooth.’ That means it’s not actually obvious why, on these theories, the disjunction leads most naturally to the stronger inference that someone lost a tooth. This issue is known as the proviso problem (see Karttunen & Peters 1979, Geurts 1996). For my part, I think it’s a serious open problem for satisfaction theory (see Mandelkern 2016a,b, Mandelkern & Rothschild 2018). Having said that, I’m not going to focus on this problem here: semantic presupposition is not my focus in this book, and its role in this chapter is expository. The key idea that emerges from satisfaction theory’s treatment of presupposition, which I want to emphasize here, is that an adequate treatment of presupposition depends on some notion of a local context. This holds even in variants of satisfaction theory which are designed to get around the proviso problem, like the system I advocate in Mandelkern (2016a).
Things work similarly in the case of the other constructions in (3), mutatis mutandis. So, for instance, the conjunction ‘Someone lost a tooth, and it was Liam who lost a tooth’ is predicted to be assertible in any context, and thus in this sense to have no non-trivial presuppositions, because the left conjunct entails the presupposition of the right conjunct. Of course, the conjunction in question still entails that someone lost their tooth, but it doesn’t presuppose this. To see this, we can embed the conjunction in entailment-canceling environments where presuppositions project, like the antecedents of conditionals, as in (3-b), ‘If someone lost a tooth and it was Liam who lost a tooth, then we should talk with him.’ This does not license the inference that someone lost a tooth, and since conditionals generally presuppose whatever their antecedents presuppose, this shows that the conjunctive antecedent, likewise, does not presuppose that someone lost a tooth. This illustrates, again, satisfaction theory’s successful explanation of presupposition filtering: when a sentence’s local context entails its presuppositions, the presupposition fails to project. By contrast, in a conjunction like ‘Liam got a bloody nose, and it was Liam who lost a tooth,’ the left conjunct does not entail the right conjunct’s presupposition, which means that the sentence puts non-trivial constraints on input contexts. Embedding this conjunction in the antecedent of a conditional suggests that the conjunction presupposes that someone lost a tooth:
(4) If Liam got a bloody nose and it was Liam who lost a tooth, then we will have to talk to him.
(4) leads to the inference that someone lost a tooth. Again, satisfaction theory predicts something slightly weaker, namely that (4) can only be used in a context that entails that either Liam didn’t get a bloody nose or someone lost a tooth. This is another instance of the proviso problem. But, again, I’ll set it aside, and emphasize the success here: namely, that satisfaction theory predicts a contrast between a conjunction like ‘Liam got a bloody nose and it was Liam who lost a tooth,’ which has non-trivial presuppositions, versus ‘Someone lost a tooth and it was Liam who lost a tooth,’ which does not.
1.1.2 Dynamic Semantics
All this reasoning is left relatively informal in Stalnaker’s work. Karttunen, by contrast, recursively characterizes presupposition satisfaction, along the lines that I’ve laid out informally here. There is, however, something prima facie unsatisfying about both these approaches. On the one hand, it’s hard to see how to generalize Stalnaker’s broadly pragmatic approach—which he spells out most explicitly for the case of conjunction—to environments like disjunction and quantification. On the other hand, Karttunen’s two-dimensional approach comprises independent specifications of truth-conditions and presupposition projection rules, which makes it superficially cumbersome and stipulative. Heim’s development of dynamic semantics grew, in part, out of dissatisfaction with these two approaches.
Heim developed dynamic semantics in Heim (1982) centrally in order to account for facts about anaphora (as I explain briefly in §1.4 and, in more detail, in Chapter 8). But her treatment of presupposition in a propositional fragment in Heim (1983) is much simpler, and brings out the basic ideas clearly, so I will (anachronistically) start my presentation there.
Heim’s idea was, in essence, to roll together the two dimensions of Karttunen’s system—truth-conditions and projection conditions—into a one-dimensional representation of meaning. To do this, she treated sentence meanings as functions that update contexts. Contexts, for now, can be treated just as sets of possible worlds.6 Sentences with presuppositions denote partial functions, defined on only those contexts which entail their presuppositions (a set of worlds entails p just in case p is true at every world in that set). Hence, this approach builds both on satisfaction theory and on trivalent theories, but locates trivalence, not in truth-conditions, but rather in definedness conditions on updates.
In more detail, consider a propositional language L comprising a set of atomic sentences closed under negation (¬), conjunction (∧), and disjunction (∨). To represent presuppositions, we extend L to the language L⁺, which is the smallest superset of L such that L⁺ contains the sentence p_q whenever L⁺ contains p and q, and which is closed under ¬, ∧, and ∨. p_q stands for a sentence p which presupposes q. Write [p] for p’s semantic value, which is, again, a context change potential, or CCP: a (possibly partial) function which takes a context to a context.7 Following convention, we use the post-fix notation c[p] to denote the result of applying [p] to a context c. Given an atomic valuation V which takes any atom and world to a truth-value, we define our interpretation function [⋅] on atoms as follows:
Atoms: When A is atomic, c[A] = {w ∈ c : V(A, w) = 1}
That is, we update a context with atoms simply by eliminating any worlds from the context where the atom is false.8 So, for instance, we update a context with [Liamlostatooth] by keeping exactly the worlds from that context where Liam lost a tooth (treating this as an atomic sentence for simplicity).
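As a minimal sketch (the encoding of worlds and of the valuation V as fact-sets is an illustrative assumption), atomic update is just filtering a set of worlds:

```python
# A context is a set of worlds; a world is a frozenset of the atoms true
# at it (an illustrative stand-in for the valuation V).
def update_atom(c, atom):
    """c[A]: keep exactly the worlds in c where the atom is true."""
    return {w for w in c if atom in w}

c = {frozenset({"liam_tooth"}), frozenset({"liam_nose"}), frozenset()}
updated = update_atom(c, "liam_tooth")
print(updated == {frozenset({"liam_tooth"})})  # True
```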
Next, consider presuppositional sentences:
Presuppositions:
c[p_q] = c[p]   if c[q] = c
       = #      otherwise
Three notes on notation. First, for any (partial) function f, ‘f(x) = #’ is simply shorthand for ‘f(x) is undefined.’ Second, when I write c[q] = c, I mean that c[q] is defined and equal to c. Finally, the case notation is meant to be read exhaustively from the top down. So this says that c[p_q] is equal to c[p] iff c[q] is defined and equal to c; otherwise, c[p_q] is undefined. In other words, an update with a presuppositional sentence is just the same as the update with the corresponding presuppositionless sentence, provided updating with the presupposition does not change the input context, and undefined otherwise. (When c[q] = c, we say that c supports or accepts or entails q. That is, c supports q just in case updating with [q] would not change c, so that c already incorporates any information in q.) This rule thus incorporates the main idea of satisfaction theory: presuppositions are constraints on input contexts.
So consider the sentence ‘It was Liam who lost a tooth.’ Let Liam stand for ‘Liam lost a tooth’ and Someone for ‘Someone lost a tooth,’ treating both as atomic sentences for simplicity. Then ‘It was Liam who lost a tooth’ is parsed in our toy language as Liam_Someone. c[Liam_Someone] will be undefined if c includes any worlds where no one lost a tooth. Provided c only includes worlds where someone lost a tooth, the update is the same as the update with Liam: we keep all and only worlds from c where Liam lost a tooth.
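This clause can be rendered as a partial function, with None playing the role of # (again a toy sketch; the helper names and world encoding are mine):

```python
# A world is a frozenset of the atoms true at it; a context is a set of worlds.
def update_atom(c, atom):
    return None if c is None else {w for w in c if atom in w}

def update_presup(c, update_p, update_q):
    """c[p_q]: defined, and equal to c[p], only if c[q] = c."""
    if c is None or update_q(c) != c:
        return None  # presupposition failure: undefined (#)
    return update_p(c)

tooth_liam  = frozenset({"liam_tooth", "someone_tooth"})
tooth_other = frozenset({"someone_tooth"})
nothing     = frozenset()

liam    = lambda c: update_atom(c, "liam_tooth")
someone = lambda c: update_atom(c, "someone_tooth")

# A context with a world where no one lost a tooth: the update is undefined.
print(update_presup({tooth_liam, tooth_other, nothing}, liam, someone))  # None

# A context entailing that someone lost a tooth: the update proceeds as with Liam.
print(update_presup({tooth_liam, tooth_other}, liam, someone) == {tooth_liam})  # True
```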
Things get more interesting once we incorporate connectives. Negation is simple: we can (in this propositional fragment) treat it simply as the functional correlate of complementation, which updates the context with the negatum and then subtracts the result from the starting context:9
Negation:
c[¬p] = c − c[p]   if c[p] ≠ #
      = #          otherwise
In more detail, we first check to make sure the update with the negatum is defined; if not, update with the negation is also undefined.10 If update with the negatum is defined, then updating with the negation yields the set of worlds in the context which don’t survive update with the negatum.
So consider a context c and suppose we update it with ‘It was not Liam who lost a tooth.’ We first check whether we can update c with ‘It was Liam who lost a tooth.’ That, in turn, will be possible iff c entails that someone lost a tooth, given the clause for presuppositions above. Provided c does entail that someone lost a tooth, we keep all and only the worlds from c which are not in c[Liam_Someone]—which is to say, all and only the worlds from c where Liam didn’t lose a tooth. So this approach accounts naturally for the fact that presuppositions project out of negation: it predicts that ‘It wasn’t Liam who lost a tooth,’ just like ‘It was Liam who lost a tooth,’ can only be used in a context which entails that someone lost a tooth.
Conjunction is treated as successive update, first with the left conjunct and then the right conjunct, provided those updates are defined:11
Conjunction:
c[p ∧ q] = c[p][q]   if c[p] ≠ # and c[p][q] ≠ #
         = #         otherwise
This builds on the Stalnakerian idea that, when we consider an assertion of p ∧ q, the common ground, in some rough sense, is already updated with p by the time we get to q. The dynamic treatment of conjunction takes this literally: it says that, provided the updates are both defined, a conjunction first updates a context with the left conjunct and then applies the right conjunct to the result.
When the conjuncts don’t contain any presuppositions, this kind of order-sensitive updating is not very interesting. Updating c with ‘Liam got a bloody nose and Liam lost a tooth’ goes as follows: we first update with the meaning of the left conjunct by keeping exactly those worlds where Liam got a bloody nose. Then we take the result and update with the meaning of the right conjunct by keeping exactly those worlds where Liam lost a tooth. The overall result is to keep exactly those worlds from c where Liam both got a bloody nose and lost a tooth.
Things are more interesting when presuppositions enter the picture. Suppose we update a context c with ‘Someone lost a tooth, and it was Liam who lost a tooth.’ Since we first update c with ‘Someone lost a tooth,’ all of the worlds in the updated context will be ones where someone lost a tooth. That means that the presuppositional requirement of the right conjunct is guaranteed to be satisfied: whatever context we start with, updating with ‘Someone lost a tooth, and it was Liam who lost a tooth’ will be defined. By contrast, a conjunction like ‘Liam got a bloody nose, and it was Liam who lost a tooth’ won’t have the same property: this denotes a partial, not total, CCP. In particular, if a context c has a world w where Liam got a bloody nose but no one lost a tooth, then the update will crash (that is, it will be undefined). We’ll start by updating c with the left conjunct; w will survive this update, but then the resulting context will not entail that someone lost a tooth, and so the update with the right conjunct will be undefined. So, like the earlier forms of satisfaction theory, Heim’s dynamic approach predicts a contrast between cases of filtering and cases of projection (although it is still subject to the proviso problem, which, again, I’ll set aside).
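A sketch of the conjunction clause, continuing the toy encoding (with None for #; all names are illustrative), shows the contrast between filtering and crashing directly:

```python
def update_atom(c, atom):
    return None if c is None else {w for w in c if atom in w}

def presup(update_p, update_q):
    """The CCP of p_q: defined only on contexts unchanged by the presupposition."""
    def u(c):
        if c is None or update_q(c) != c:
            return None
        return update_p(c)
    return u

def conj(update_p, update_q):
    """c[p ∧ q] = c[p][q], undefined if either step is."""
    def u(c):
        mid = update_p(c)
        return None if mid is None else update_q(mid)
    return u

liam    = lambda c: update_atom(c, "liam_tooth")
someone = lambda c: update_atom(c, "someone_tooth")
nose    = lambda c: update_atom(c, "liam_nose")
cleft   = presup(liam, someone)  # 'It was Liam who lost a tooth'

w_liam = frozenset({"liam_tooth", "someone_tooth"})
w_nose = frozenset({"liam_nose"})
c = {w_liam, w_nose}

# Filtering: 'Someone lost a tooth, and it was Liam who lost a tooth'
print(conj(someone, cleft)(c) == {w_liam})  # True: defined here (and in fact on any context)

# Projection: 'Liam got a bloody nose, and it was Liam who lost a tooth'
print(conj(nose, cleft)(c))                 # None: crashes on this context
```

The crash arises exactly as described in the text: w_nose survives the first update, so the intermediate context fails to entail that someone lost a tooth.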
This discussion brings out the role of local contexts in this theory. Roughly, we can identify the local context(s) for a clause in a complex sentence, given a global context, as the argument(s) of that clause’s meaning in computing the update of the complex sentence.12 So, in context c, the local context for p in p ∧ q is c, corresponding to the observation that presuppositions always project out of left conjuncts. But the local context for q is, crucially, not c but rather c[p], since that is the argument of [p] in the course of computing c[p ∧ q].
Coming finally to disjunction, we can capture the generalization that the local context for the right disjunct entails the negation of the left disjunct with the following clause:
Disjunction:
c[p ∨ q] = c[p] ∪ c[¬p][q]   if c[p] ≠ # and c[¬p][q] ≠ #
         = #                 otherwise
That is, disjunction updates c with the left disjunct, updates c with the negation of the left disjunct, and then the right disjunct, and then takes the union of these two updated sets (provided all these are defined). So the local context for the left disjunct, p, is just the input context, which means that presuppositions are predicted to project out of left disjuncts. By contrast, the local context for the right disjunct is the input context updated with the negation of the left disjunct. So a presupposition of a right disjunct will be filtered whenever the negation of the left disjunct entails the presupposition of the right disjunct. Hence a disjunction like ‘Either no one lost a tooth, or else it was Liam who lost a tooth’ is predicted to be presuppositionless (that is, to be defined on any context), while a disjunction like ‘Either Liam got a bloody nose, or else it was Liam who lost a tooth’ is predicted to have presuppositions (that is, to be defined only on some contexts). So (again, modulo the proviso problem) we account for the contrasts we saw above.
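The fragment’s behavior on the two disjunctions can likewise be sketched (a self-contained toy implementation with None for #; the names are illustrative):

```python
def update_atom(c, atom):
    return None if c is None else {w for w in c if atom in w}

def presup(up, uq):
    def u(c):
        if c is None or uq(c) != c:
            return None
        return up(c)
    return u

def neg(up):
    """c[¬p] = c − c[p], undefined if c[p] is."""
    def u(c):
        res = up(c)
        return None if res is None else c - res
    return u

def disj(up, uq):
    """c[p ∨ q] = c[p] ∪ c[¬p][q], undefined if any step is."""
    def u(c):
        left = up(c)
        if left is None:
            return None
        rest = uq(c - left)  # the right disjunct's local context: c[¬p]
        return None if rest is None else left | rest
    return u

liam    = lambda c: update_atom(c, "liam_tooth")
someone = lambda c: update_atom(c, "someone_tooth")
nose    = lambda c: update_atom(c, "liam_nose")
cleft   = presup(liam, someone)

w_liam  = frozenset({"liam_tooth", "someone_tooth"})
w_other = frozenset({"someone_tooth"})
w_nose  = frozenset({"liam_nose"})
w_none  = frozenset()
c = {w_liam, w_other, w_nose, w_none}

# 'Either no one lost a tooth, or else it was Liam who lost a tooth': filtered.
print(disj(neg(someone), cleft)(c) == {w_liam, w_nose, w_none})  # True

# 'Either Liam got a bloody nose, or else it was Liam who lost a tooth': projects.
print(disj(nose, cleft)(c))                                      # None
```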
Conditionals, quantifiers, and attitude verbs can be integrated into the framework to capture parallel patterns there (see again Heim 1983, 1992 for many of the key developments).
Again, presupposition is not my main focus in this book, and I will have little more to say about it. But it provides a clear motivation for a role for local contexts in interpretation. And the dynamic architecture I have just reviewed provides a compact way to incorporate local contexts, neatly folding projection rules and truth-conditions into the single semantic level of CCPs.
1.2 Modals
On the face of it, presupposition, modality, and anaphora don’t have much in common. But work in dynamic semantics has brought out striking commonalities across these domains, which suggest that all depend in some fundamental way on their local contexts. In the rest of this chapter, I will lay out the three areas that I will explore in the rest of the book—modals, conditionals, and anaphora—briefly drawing out their commonalities and the ideas behind their dynamic treatment, and foreshadowing my key arguments.
Start with epistemic modals: words like ‘might,’ ‘must,’ and ‘may’ which express something broadly epistemic about their prejacents (the sentence they take as an argument). ‘The cat might be hiding in the closet,’ intuitively, expresses something much like ‘For all we know, the cat is hiding in the closet,’ and ‘The cat must be hiding in the closet’ intuitively expresses something much like ‘We know that the cat is hiding in the closet.’ And these are, roughly speaking, the meanings ascribed to these constructions by the standard static account of their meaning, from Kratzer (1977, 1981).
But these glosses break down in surprising ways. Consider the pair in (5), based on Groenendijk et al. (1996) and Yalcin (2015):
(5-a) # Everyone who is hiding in the closet might not be hiding in the closet.
(5-b) Everyone who is hiding in the closet is, for all we know, not hiding in the closet.
There’s something distinctly weird about (5-a) (hence the # mark, which indicates some pre-theoretic sense of defectiveness). It is just hard to understand what (5-a) is saying. By contrast, while (5-b) feels slightly roundabout, it is perfectly interpretable: it simply says that, of anyone hiding in the closet, we leave it open that they aren’t hiding in the closet. For a similar contrast, consider the pair in (6), based on Yalcin (2007):
(6-a) # Suppose that Benjy is hiding in the closet, but he might not be.
(6-b) Suppose that Benjy is hiding in the closet, but, for all we know, he isn’t.
Again, (6-a) feels incoherent, while (6-b) is perfectly interpretable.
So what’s dynamic about this? Plausibly, the difference between the (a)- and (b)-variants above is that the ‘might’ somehow takes into account the information that precedes it, while ‘for all we know’ does not. ‘Might’ performs some kind of search over a space of possible worlds for one where the prejacent is true; the ‘might’-claim is true just in case this search is successful. Pairs like this suggest that the search is limited to worlds in the local context.
So, for instance, in a conjunction with the form ⌜p ∧ ◇¬p⌝ (with ‘◇’ standing for ‘might’), we will interpret the ‘might’ in such a way that the search always takes place over a space of worlds where the left conjunct is true. That means the search is guaranteed to fail—since ¬p is inconsistent with p—leading to the felt incoherence of the (a)-variants.
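This test behavior can be made concrete with a small toy model in the style of update semantics (my own illustrative sketch, not the book’s official fragment): contexts are sets of worlds, conjunction is sequential update, and ‘might’ is a test. The helper names are invented.

```python
# Contexts are sets of worlds; worlds are frozensets of the atoms true at them.

def update_atom(c, p):
    """c[p]: keep the worlds where atom p is true."""
    return {w for w in c if p in w}

def update_neg(c, upd):
    """c[not-p] = c minus c[p]."""
    return c - upd(c)

def update_might(c, upd):
    """c[might p] as a test: return c if c[p] is nonempty, else the empty set."""
    return c if upd(c) else set()

w1, w2 = frozenset({"hiding"}), frozenset()
c = {w1, w2}

# 'hiding, and might-not-hiding': updating left to right, the 'might'
# searches only the local context left behind by the first conjunct.
c1 = update_atom(c, "hiding")   # local context for 'might': {w1}
c2 = update_might(c1, lambda k: update_neg(k, lambda j: update_atom(j, "hiding")))
print(c2)   # set(): the search is guaranteed to fail
```

Updating with the left conjunct first leaves only hiding-worlds, so the ‘might not’ test inevitably crashes the context to the empty set, mirroring the felt incoherence of the (a)-variants.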
The idea that the domain of epistemic modals is limited to their local context can be naturally implemented in the dynamic framework spelled out above, so that epistemic modals check their local contexts for compatibility with, or entailment of, their prejacent, as in the system of Groenendijk et al. (1996).
Thesis: the dynamic treatment of epistemic modals captures important patterns. This will be the argument of Chapter 2.
Antithesis: the dynamic treatment of epistemic modals is implausible. Once you extend our language with the standard dynamic treatment of ‘might,’ classical laws like Non-Contradiction and Excluded Middle fail. Moreover, the dynamic approach can’t capture order variants on the basic data—sentences which embed ⌜◇¬p ∧ p⌝, rather than ⌜p ∧ ◇¬p⌝—which are equally infelicitous. This is the argument of Chapter 4.
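The order-variant worry can be seen concretely in the same toy model (again my own sketch, with invented helper names): the ‘might’-as-test semantics crashes the context for one conjunction order but sails through the reverse order.

```python
def upd_atom(c, p):
    return {w for w in c if p in w}

def upd_neg(c, upd):
    return c - upd(c)

def upd_might(c, upd):
    # Test: pass the context through iff the prejacent is still live in it.
    return c if upd(c) else set()

w1, w2 = frozenset({"p"}), frozenset()
c = {w1, w2}
might_not_p = lambda k: upd_might(k, lambda j: upd_neg(j, lambda i: upd_atom(i, "p")))

# 'p, and might-not-p': crashes to the absurd (empty) context, as desired.
order1 = might_not_p(upd_atom(c, "p"))
print(order1)    # set()

# 'might-not-p, and p': the test passes against the global context, then 'p'
# filters; no incoherence is predicted, though the sentence sounds just as bad.
order2 = upd_atom(might_not_p(c), "p")
print(order2)    # {frozenset({'p'})}
```

Since the left-to-right architecture only feeds earlier material into later local contexts, the reversed order comes out consistent, leaving the equal infelicity of both orders unexplained.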
Synthesis: dynamic semantics is right that the interpretation of epistemic modals depends on their local contexts. But the source of this dependence is more roundabout than in dynamic architectures like the one above. My theory starts with a standard static treatment of epistemic modals as quantifiers over contextually accessible possible worlds. Then I aim to capture the dynamics of interpretation with two additions. First, I build on the architecture of Karttunen’s and Stalnaker’s treatments of presuppositions, and, more recently, a series of influential papers by Schlenker (2008, 2009), and incorporate local contexts into a broadly static semantic framework. Local contexts, again, are quantities of information which are available to semantic processing and which are determined by the globally available information together with the rest of the sentence. The key insight of this tradition is that the idea of a local context need not be tied to the rest of the dynamic architecture. The second innovation in my theory is that local contexts never enter into the calculation of truth-conditions. Instead, they affect a second dimension of meaning, which I call bounds. Bounds constrain the interpretation of epistemic modals, limiting admissible resolutions of context-sensitivity to those on which the accessibility relation for epistemic modals is bounded by its local context. By locating this constraint in a separate dimension from truth-conditions—a dimension that never influences truth-conditions directly—and by disaggregating local contexts from the standard dynamic architecture, we avoid the problems that face dynamic semantics. That is the argument of Chapter 5.
1.3 Conditionals
Conditionals, like epistemic modals, appear to be sensitive to their local contexts. For instance, indicative conditionals seem to have a local compatibility requirement: an indicative conditional ⌜If p, then q⌝ requires that p be compatible, not with the context set, but with its local information. Thus, for instance, consider the contrast in (7):
(7-a) # Suppose that the die didn’t land even, but that if it landed even, we won the bet.
(7-b) Suppose that the die didn’t land even, but that if it had landed even, we would have won the bet.
The variant with the indicative in (7-a) seems somehow incoherent, by contrast with the subjunctive in (7-b). This looks a lot like the corresponding observation in (6) about epistemic modals. In Chapter 3, we will see a wide range of evidence that there is something dynamic about conditionals: their interpretation is closely connected to their local contexts. A natural extension of the dynamic system with a conditional operator, from Dekker (1993) and Gillies (2004), accounts neatly for some of these patterns.
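Here is a toy rendering of a test-style dynamic conditional broadly in the spirit of the Dekker/Gillies treatment just mentioned (my own sketch, not their exact systems): the conditional requires its antecedent to be compatible with the local context, and then tests whether all antecedent-worlds verify the consequent. `None` plays the role of ‘#’.

```python
def upd_atom(c, p):
    return {w for w in c if p in w}

def upd_neg(c, upd):
    return c - upd(c)

def upd_if(c, upd_ant, upd_cons):
    """Test conditional: hypothetically update with the antecedent, then
    check that every surviving world verifies the consequent."""
    hyp = upd_ant(c)
    if not hyp:
        return None   # '#': the indicative's compatibility requirement fails
    return c if upd_cons(hyp) == hyp else set()

worlds = {frozenset({"even", "won"}), frozenset({"even"}), frozenset()}

# 'Suppose the die didn't land even, but if it landed even, we won the bet':
# the first conjunct removes every 'even' world from the local context, so
# the indicative's compatibility requirement fails.
local = upd_neg(worlds, lambda k: upd_atom(k, "even"))
res = upd_if(local, lambda k: upd_atom(k, "even"), lambda k: upd_atom(k, "won"))
print(res)   # None
```

The crash arises because the local context for the conditional, not the global one, must leave the antecedent open: exactly the pattern (7-a) illustrates.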
But, once again, the result has serious problems (Chapter 4). Again, there are logical problems: for instance, this theory invalidates the Identity principle, which says that conditionals with the form ⌜If p, then p⌝ are logical truths. And it again misses order variants which are equally incoherent.
In Chapter 5, I argue for a solution that parallels my treatment of modals. I adopt a static treatment of conditionals’ truth-conditions, and account for their sensitivity to local contexts via a non-truth-conditional dimension of bounds, which ties the interpretation of conditionals to their local context. I argue that the result helps make sense not only of the core patterns that I discuss in Chapter 3, but also subtle facts about the logic of conditionals (Chapter 6) and their probabilities (Chapter 7).
1.4 Anaphora
My final topic is anaphora: the interaction of indefinite noun phrases like ‘a cat’ or ‘some dog’ with definite noun phrases like ‘the cat’ or ‘she.’ The simplest way to see what is interesting here is by considering pairs like (8) (based on pairs from Partee discussed in Heim 1982):
(8-a) Susie has a child. She lives at boarding school.
(8-b) Susie is a parent. She lives at boarding school.
The puzzle is that ‘she’ in (8-a) is naturally understood to refer to Susie’s daughter, while this interpretation is not readily available for (8-b), where ‘she’ intuitively refers to Susie. But ‘Susie has a child’ and ‘Susie is a parent’ plausibly have just the same truth-conditions. What, then, accounts for the difference in interpretations of the clauses that follow?
A natural first thought is that the difference is pragmatic, but, as we will see in Chapter 8, more complicated constructions, which embed sequences like this in quantifiers or conditionals, show that that cannot be the correct story.
This is a striking problem. It bears on fundamental questions about meaning and reference, since the contrast in (8) shows that truth-conditions do not exhaust sentence meanings. The problem has been relatively neglected by contemporary philosophy, perhaps because of a widespread, but mistaken, belief that it can be dealt with in a broadly classical framework, with the assistance of situations and some pragmatic reasoning (see e.g., Geach 1962, Evans 1977, Parsons 1978, Cooper 1979, Neale 1990, Ludlow 1994, Büring 2004, Elbourne 2005). As Heim (1982) showed, the framework of CCPs provides the foundation for a beautiful account of the divergence in (8). And it is an account where local contexts once more play a central role: whether you can use ‘she’ to refer to a child depends on whether the local context for ‘she’ has been updated with a corresponding indefinite.
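The role of local contexts here can be sketched with a toy, DPL-flavored model (my own illustration, not Heim’s exact system): contexts are sets of assignments mapping discourse referents to individuals; an indefinite extends each assignment with a new referent, and a pronoun is interpretable only if its referent is already present. All names and the tiny domain are invented.

```python
PEOPLE = ["susie", "child1"]
CHILD_OF = {("susie", "child1")}   # child1 is Susie's child

def upd_indef(c, x, restrictor):
    """An indefinite extends each assignment with every verifying value of x."""
    return [dict(g, **{x: d}) for g in c for d in PEOPLE if restrictor(d)]

def upd_pron(c, x, predicate):
    """A pronoun indexed x is interpretable only if x is already a referent."""
    if any(x not in g for g in c):
        return None   # '#': no antecedent available in the local context
    return [g for g in c if predicate(g[x])]

c0 = [{}]   # initial context: a single empty assignment

# (8-a) 'Susie has a child. She(x) ...': the indefinite introduces x.
c1 = upd_indef(c0, "x", lambda d: ("susie", d) in CHILD_OF)
print(upd_pron(c1, "x", lambda d: True))   # [{'x': 'child1'}]

# (8-b) 'Susie is a parent. She(x) ...': same truth-conditions, but no
# indefinite has introduced x, so the pronoun crashes.
print(upd_pron(c0, "x", lambda d: True))   # None
```

The two openers are truth-conditionally alike, but only (8-a) updates the local context with a discourse referent that the pronoun can pick up.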
But, once more, logical trouble arises. The dynamic treatment of indefinites requires a more complicated treatment of negation than the one given above. The resulting entry invalidates classical laws like Double Negation Elimination. But, as Karttunen (1976) noted, Double Negation Elimination is intuitively valid even for sentences involving anaphora. Moreover, problems with double negation infect the dynamic treatment of disjunction.
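The failure of Double Negation Elimination can also be seen in a toy computation (again my own sketch): if negation is a test over assignments, any referent introduced inside its scope is discarded, so doubly negating an indefinite preserves truth-conditions while destroying anaphoric potential.

```python
PEOPLE = ["susie", "child1"]
CHILD_OF = {("susie", "child1")}

def upd_indef(c, x, restrictor):
    return [dict(g, **{x: d}) for g in c for d in PEOPLE if restrictor(d)]

def upd_neg(c, upd):
    """c[not-p]: keep an assignment only if it has NO successful extension
    via p. Referents introduced inside the negation are thrown away."""
    return [g for g in c if not upd([g])]

c0 = [{}]
a_child = lambda c: upd_indef(c, "x", lambda d: ("susie", d) in CHILD_OF)

print(a_child(c0))   # [{'x': 'child1'}]: x is available for a later pronoun

# Double negation: truth-conditionally equivalent, but the referent is gone.
dnn = upd_neg(c0, lambda k: upd_neg(k, a_child))
print(dnn)           # [{}]: the context survives, yet 'she(x)' would crash
```

So ⌜¬¬p⌝ and p come apart dynamically even when they filter the very same worlds, which is just where Karttunen’s observation bites.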
I will argue that we can do better by accounting for the dynamics of anaphora via bounds. The bounds of (in)definites help us coordinate their interpretation, while coexisting with broadly classical truth-conditions which interact with negation in a well-behaved way (Chapters 9–10).
1.5 In Sum
The problems that I discuss for dynamic semantics can be addressed in isolation. But by bringing these problems together, and proposing structurally parallel solutions, I hope to convince you that the problems arise from a mistake at the heart of dynamic semantics. Instead of rolling together truth-conditional content and dynamic effects into a single dimension of meaning, we should separate out the contributions of each. Doing so gives us the resources for a successful theory of modality, conditionals, and anaphora, and a new perspective on how interpretation is—and isn’t—dynamic.
Everyone should agree that a proposition determines a set of possible worlds; some think that they cannot be identified with sets of possible worlds because they are more fine-grained, but the fineness of grain won’t matter for the questions we are focusing on.
Or, more generally, a join operation in the algebra of propositions.
Of course, a factive like ‘realize’ plausibly means something more than ‘truly believe,’ even setting aside issues about presupposition. But that doesn’t matter for the present point.
This is what distinguishes presuppositions from other kinds of projective content, like conventional implicatures.
I’ll move freely between talk of presuppositions as sentences or propositions.
Some confusing terminology: a (local) context in dynamic semantics is always a set of worlds, or a set of world-variable assignment pairs, or something in that neighborhood. This conflicts with other usages: for Stalnaker, a context set is a set of worlds, while a context is a concrete speech situation; in other parts of the literature, for instance, in Lewis’s usage, a context is an n-tuple of parameters. I will make clear, throughout, which sense I have in mind whenever it is not apparent and it matters.
Throughout the book, I will use italic letters as variables over sentences of the formal languages which are our targets of interpretation (sentences which I will use autonymously), and Roman letters as variables over sentences of natural language.
The semantic clauses I give here and throughout will be syncategorematic in format. That is, for instance, rather than giving a semantics directly for ¬, I give a schematic semantics for sentences with the form ¬p. This is entirely a presentational choice, since clauses with this form are somewhat more familiar to philosophers. It is an easy exercise to convert the clauses throughout into a compositional format; for instance, ¬ would have the meaning λp. λc: p(c) ≠ #. c ∖ p(c), in the partial λ-notation of Heim & Kratzer (1998), where p is a variable with the type of a CCP and c a variable with the type of a context.
So what we have, throughout, is essentially weak Kleene projection behavior for partial functions. This is often left implicit, and I sometimes leave it implicit in later discussion.
Parentheses are omitted, since there is no chance of ambiguity (c[p][q] can only be parsed (c[p])[q], since c([p][q]) doesn’t make sense). I use logical symbols like ∧, ¬, and ∨ throughout the book both in the target language and in the meta-language (where they are interpreted classically).
Argument(s) because a clause can have more than one local context, for instance in the standard dynamic treatment of belief as c[Bap] = {w ∈ c : Ba,w[p] = Ba,w}. Talk of computation here is very rough, and should not be taken to imply anything cognitive. The semantic entries we have given do not say anything about how the meaning of a complex sentence is computed by humans; they just specify what the meaning is.
I am assuming here, and generally will assume, that ‘but’ and ‘and’ are semantically interchangeable. This is obviously a simplification, but all these points can be made directly with ‘and’ just as well, so it is harmless.