Stanford Industrial Park, adjoining Stanford University, was booming in the early 1960s. Tenants included the Syntex Corporation, which relocated there, and the Syntex Institute for Molecular Biology, where Carl Djerassi in the Chemistry Department became friends with geneticist and early computer enthusiast Joshua Lederberg. In 1963, the Center for Advanced Study in the Behavioral Sciences at Stanford organized a meeting on computer models of thought. Among the participants was the AI pioneer Ed Feigenbaum, who had studied with Herbert Simon at the Carnegie Institute of Technology and who had just co-edited with Julian Feldman an influential book entitled Computers and Thought.

This marked the start of a fruitful collaboration between Feigenbaum, Djerassi, and Lederberg that led to the so-called DENDRAL project (Lederberg 1964, 1965). DENDRAL was one of the first applications of computer science to organic chemistry. It became a model for thinking about how heuristic programming can be applied to an empirical science. DENDRAL opened the door to automated discovery, raising the question: can computers help us discover new natural kinds?

DENDRAL (see Lindsay et al. 1980) was an algorithm designed to exhaustively enumerate topologically possible arrangements of atoms in line with general rules of chemical valence. Mass spectroscopists seek to infer molecular structure from molecular mass. Physical and chemical properties of any compound are not just determined by which atoms it contains. Structure is equally important. The water molecule, for example, consists of one atom of oxygen and two atoms of hydrogen. The six valence electrons of the oxygen atom combine with the single electrons of the two hydrogen atoms not in some random way, but in two covalent H–O bonds, with the four remaining electrons forming two ‘lone pairs’, and with a specific H–O–H angle. Bonds and lone pairs form a tetrahedron around the nucleus of the oxygen, with bond lengths and angles constraining the atomic arrangements. This molecular arrangement explains some key macroscopic properties of water.

In more complex compounds, many isomers1 may be consistent with a given mass number. The problem is particularly acute in organic chemistry, where quadrivalent carbon can bind covalently with several other atoms at once. Mass spectroscopy provides an indirect source of information about molecular structure, and in so doing it facilitates the identification of unknown compounds in environmental pollutants or forensic samples, for example. High-speed electrons bombard the target molecules, breaking them up into ionized fragments, which are then passed through an electrostatic or magnetic field (at least in the generation of machines in use at the time of DENDRAL). The fragments can then be detected via their mass-to-charge ratios, which appear as distinctive peaks in a mass spectrum.2 Any molecular sample thus yields ionized fragments with a characteristic pattern of mass-to-charge ratios, and these can be separated by passing them through electrostatic and magnetic fields.
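To see how mass-to-charge ratios do the sorting, here is a minimal sketch of the physics of a magnetic-sector instrument (a simplified textbook picture, not a description of any particular machine): ions accelerated through a potential difference and deflected by a magnetic field follow circular paths whose radius depends only on their mass-to-charge ratio, so fragments of different m/z strike the detector at different positions. The voltage, field strength, and fragment masses below are illustrative assumptions.

```python
import math

def deflection_radius(mass_kg, charge_C, accel_voltage_V, b_field_T):
    """Radius of the circular path of an ion in a magnetic sector.

    From the kinetic energy gained during acceleration, q*V = (1/2)*m*v**2,
    and the magnetic force balance, q*v*B = m*v**2 / r, it follows that
    r = sqrt(2*m*V/q) / B: ions with a larger m/z are deflected less.
    """
    velocity = math.sqrt(2 * charge_C * accel_voltage_V / mass_kg)
    return mass_kg * velocity / (charge_C * b_field_T)

AMU = 1.66054e-27       # kg per atomic mass unit
E_CHARGE = 1.60218e-19  # elementary charge in coulombs

# Singly charged fragments of different mass land at different radii, hence at
# different positions on the detector (fragment masses chosen for illustration).
for mz in (15, 29, 31, 45, 46):
    r = deflection_radius(mz * AMU, E_CHARGE, accel_voltage_V=3000, b_field_T=0.5)
    print(f"m/z = {mz:>2}:  r = {r * 100:.2f} cm")
```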

This is an exceedingly subtle, bottom-up inferential exercise that goes from data (i.e. peaks in a mass spectrum) to phenomena, and from phenomena to natural kinds. The data are those provided by mass spectroscopy. The phenomena are the ionized fragments. The kinds are the different chemical compounds (sometimes isomers) that need to be inferred from the fragments.

Nothing about this is straightforward. Not all chemical bonds cleave in the same way (single bonds are easier to break than double bonds, for example). Several molecules get fragmented at the same time. The ionized molecular fragments have to be accelerated and deflected by either an electrostatic or a strong magnetic field (in deflection-type mass spectrometers) so that they can be sorted by their mass-to-charge ratios. A detector then registers the abundance of ions at different mass-to-charge ratios and plots the mass spectrum of the compound accordingly.

Complex molecules produce mass spectra with many peaks. But which particular combination of atoms, in what arrangement, is responsible for the observed peaks? For example, a certain mass-to-charge ratio can be used to infer that the compound must have two atoms of carbon, six of hydrogen, and one of oxygen. But the chemical compound in question can be either dimethyl ether (CH3-O-CH3) or ethanol (CH3-CH2-OH), two compounds with very different physical and chemical properties.
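As a toy illustration of the first step of such an inference, here is a sketch (not DENDRAL's own code, which was written in Lisp) that enumerates the carbon–hydrogen–oxygen compositions compatible with a given nominal molecular mass, using the degree of unsaturation as a crude stand-in for valence constraints; the mass values and cut-offs are illustrative assumptions. Even once the composition C2H6O is fixed, the structural ambiguity between dimethyl ether and ethanol remains.

```python
from itertools import product

MASS = {"C": 12, "H": 1, "O": 16}   # nominal (integer) atomic masses

def candidate_formulas(target_mass, max_atoms=12):
    """Enumerate C/H/O compositions with the given nominal mass.

    A composition is kept only if its degree of unsaturation
    (rings + double bonds), DBE = C - H/2 + 1, is a non-negative
    integer -- a crude proxy for DENDRAL's valence rules.
    """
    hits = []
    for c, h, o in product(range(max_atoms), repeat=3):
        if c * MASS["C"] + h * MASS["H"] + o * MASS["O"] != target_mass:
            continue
        dbe = c - h / 2 + 1
        if dbe >= 0 and dbe == int(dbe):
            hits.append((c, h, o))
    return hits

# Nominal mass 46 admits C2H6O (ethanol or dimethyl ether) as well as CH2O2
# (formic acid): structure elucidation still has to choose between them.
print(candidate_formulas(46))
```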

DENDRAL was designed to facilitate such inferences in more complex cases, using graphs to represent what in a ball-and-stick model would be the atoms and bonds of the chemical compounds.3 It was developed to explain molecular structure by packing in expert knowledge about organic chemistry and automating the process of making inferences from the available empirical data about mass spectra to the phenomena (ionized fragments), and from there to the relevant chemical compound (e.g. an isomer).

This heuristic programming made it possible to explore the space of possible molecular arrangements within well-defined chemical rules. The programming was interactive, allowing scientists to revise the rules at any stage or add further constraints at any round of hypothesis generation. In the words of Lederberg:

DENDRAL-64 is a set of reports to NASA . . . that outlines an approach to formal representation of chemical graph structures, and a generator of all possible ones. . . . The DENDRAL generator was then designed so that only one canonical form of a possible automorphic proliferation is issued, greatly pruning the space of candidate graphs. . . . DENDRAL is remarkably neatly structured (as implied by its name) as a generator of trees of candidate structures . . . . These can easily number in the billions or more, in practical cases: the efficiency of the program depends on the pruning of impossible or implausible cases, as early as possible. . . . To give a . . . example, if N (nitrogen) is absent, we don’t generate molecules that may contain N, then retrospectively eliminate each of those twigs. (Lederberg 1987, pp. 9 and 12)
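Lederberg's point about pruning 'impossible or implausible cases, as early as possible' can be illustrated in miniature. The sketch below is only a toy analogue of that strategy, not a reconstruction of the original Lisp program: a depth-first generator of atom compositions abandons an entire branch as soon as a constraint, such as 'nitrogen is absent', fails, instead of generating the offending candidates and discarding them afterwards. The atoms, masses, and constraint are illustrative assumptions.

```python
def generate(partial, remaining_mass, atoms, constraints):
    """Depth-first generator of atom multisets, pruning early.

    `partial` is the candidate built so far, `remaining_mass` the nominal
    mass still to be accounted for, `atoms` a dict of atom -> nominal mass,
    and `constraints` a list of predicates over partial candidates: if any
    predicate fails, the whole subtree below `partial` is skipped.
    """
    if any(not ok(partial) for ok in constraints):
        return                      # prune this twig and everything below it
    if remaining_mass == 0:
        yield tuple(sorted(partial))
        return
    for atom, mass in atoms.items():
        if mass <= remaining_mass:
            yield from generate(partial + [atom], remaining_mass - mass,
                                atoms, constraints)

ATOMS = {"C": 12, "H": 1, "N": 14, "O": 16}
NO_NITROGEN = lambda cand: "N" not in cand      # Lederberg's own example of a constraint

candidates = set(generate([], 30, ATOMS, [NO_NITROGEN]))
print(candidates)   # atom compositions of nominal mass 30 containing no nitrogen
```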

This is another illustration of a perspectival model, in this case an algorithm-aided representation of the chemical space of possible compounds. DENDRAL took on the task of physically conceiving molecular structures compatible with empirically given mass spectra within broad lawlike constraints. The task of exploring the space of possible molecular structures for the same group of atoms, and of delivering modal knowledge about which chemical isomer might be at stake, was delegated to DENDRAL as a way of facilitating inferences from data to phenomena and from phenomena to kinds.

But there is another reason why the story of DENDRAL matters for the inferentialist ontology that I see as central to perspectival realism: it was an interdisciplinary research programme. As Lederberg recalls:

I had no idea how one would go about translating these structural concepts into a computer program. . . . It was fortunate indeed that Ed Feigenbaum came to Stanford just at this time. . . . Stanford University, in the 1960s, was a fortunate place to be for the pursuit of scientific innovation, and equally for a highly interdisciplinary program. Computer science, medical science, chemistry were all in a surge of rapid expansion and new opportunity. . . . Lindley Darden’s discussion of the ‘history of science as compiled hindsight’ [Darden 1987] eloquently captures my own perspectives. My interest in AI has little to do with my background as a biologist, a great deal with curiosity about complex systems that follow rules of their own, and which have great potentialities in preserving the fruits of human labor, of sharing hardwon tradition with the entire community. In that sense, the knowledge-based-system on the computer is above all a remarkable social device, the ultimate form of publication. (Lederberg 1987, pp. 9, 13, and 15)

The interdisciplinarity behind DENDRAL reinforces a key aspect of my discussion. Modelling what is possible is indeed a social and cooperative inferential exercise. DENDRAL offered an inferential blueprint for chemists, mass spectroscopists, and computer scientists to engage with one another and bring their respective expertise to bear on the task of facilitating phenomena-to-kinds inferences. The inferential ability to go from mass spectra to the chemical compounds that might be at play is not the prerogative of the chemist, or the mass spectroscopist, or anyone else. It is a collective endeavour. DENDRAL enabled different epistemic communities to work together, with their respective perspectival practices.

The chemical kinds that we know are not the output of some primordial Putnamian baptism of archetype samples, whose microstructural essential properties were discovered later on. Rather, they are the long-term open-ended inferential outcomes of intersecting scientific perspectives.

How, then, did perspectives intersect here? DENDRAL opened the path to chemoinformatics. Like any interdisciplinary programme, chemoinformatics is not the mereological sum of chemistry plus informatics. It has a disciplinary outlook of its own, with distinctive methodologies, epistemic approaches, and remit that partially overlap with those of both chemistry and informatics. Projects such as Dial-a-Molecule4 and the AI3SD Network (Artificial Intelligence, Augmented Intelligence for Automated Investigation for Scientific Discovery)5 in the UK are testament to the medical and industrial interests associated with such exploratory searches. Chemoinformatics—with its wider experimental practice and methodology—is another example of perspectival modelling. Let us see why by going back to one of my examples in Chapter 6: phosphorylation as an example of a modally robust phenomenon.

Phosphorylation is a common modification of proteins, which is often behind a variety of carcinogenic mechanisms. Enzymes called ‘protein kinases’ transfer phosphate groups to a number of target proteins. One of the challenges in developing anti-cancer drugs is that there are hundreds of known human kinases, and tens of thousands of possible target proteins. Years of biochemical experiments are required to fully elucidate the mechanisms in each case. Thus, in current cancer research, AI-led efforts help identify the greatest possible number of kinase–protein relations so that new potential pharmaceuticals can be produced to target a variety of carcinogenic mechanisms. This is a subtle exercise that requires assessing the toxicity of possible new drugs, and cost-effective methods for synthesizing them. Chemoinformatics comes into these assessments, as Figure 7.1 illustrates.

Figure 7.1

Four different computational methods for clustering molecules in chemoinformatics: (a) dissimilarity-based compound selection (DBCS); (b) sphere exclusion; (c) clustering; and (d) cell-based selection. N. Brown (2016), In Silico Medicinal Chemistry: Computational Methods to Support Drug Design, RSC Theoretical and Computational Chemistry Series No. 8, p. 125. Reproduced by permission of The Royal Society of Chemistry (https://pubs.rsc.org/en/content/ebook/978-1-78262-163-8).

For example, the dissimilarity-based compound selection (DBCS) method in Figure 7.1.a selects compounds that satisfy the criterion of minimum average distance from every other point in the dataset at stake to identify eligible subsets of molecules over the whole space of possible combinations. By contrast, the sphere exclusion method in Figure 7.1.b treats the diverse points (molecules that are sufficiently dissimilar in structure) as centroids of the compounds. This method scales very well and is very cost-effective, but tends to penalize diversity in the compound synthesis.

The cell-based selection method in Figure 7.1.d in turn tends to sample the whole space of molecular structures rather than selecting specific portions on the basis of diversity considerations or scaling factors. It selects synthesizable compounds whose molecules are more evenly distributed at the cost of losing computational efficiency when the number of molecules goes up. Finally, the clustering method in Figure 7.1.c partitions molecules into groups on the basis of similarity considerations.
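Of these, the sphere-exclusion idea is perhaps the easiest to sketch. The following is a minimal illustration under simplifying assumptions (molecules reduced to points in a two-dimensional descriptor space, Euclidean distance as the dissimilarity measure, an arbitrary exclusion radius); real chemoinformatics pipelines would typically work with molecular fingerprints and similarity coefficients instead.

```python
import math
import random

def sphere_exclusion(points, radius, seed=0):
    """Greedy sphere-exclusion selection.

    Repeatedly pick a not-yet-excluded point as a new centroid and exclude
    every remaining point within `radius` of it; the selected centroids are
    therefore mutually dissimilar (no two closer than `radius` in the
    descriptor space).
    """
    rng = random.Random(seed)
    remaining = list(points)
    rng.shuffle(remaining)
    centroids = []
    while remaining:
        centre = remaining.pop()
        centroids.append(centre)
        remaining = [p for p in remaining if math.dist(p, centre) > radius]
    return centroids

# Toy 'chemical space': 200 molecules reduced to points in a 2D descriptor space.
rng = random.Random(42)
space = [(rng.random(), rng.random()) for _ in range(200)]
picked = sphere_exclusion(space, radius=0.15)
print(f"{len(picked)} mutually dissimilar representative compounds selected")
```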

These different methods (and associated algorithms) furnish another example of perspectival pluralism. The methods are part of the experimental, technological (AI-led), and theoretical resources available within chemoinformatics as a scientific perspective in its own right for making reliable knowledge claims. Methodological and epistemic principles are in place to justify the reliability of the knowledge claims, including considerations about the toxicity of the compound and cost-effective production. Few possible chemical combinations delivered by any of these methods translate into new drugs. Scientists use different algorithms to physically conceive several scenarios about how new molecules can be arranged, so as to deliver knowledge about which new synthetic compound is possible. The general lesson is summarized by Nathan Brown (2016, pp. 130–131):

As with many challenges in the field, there is no single, right answer for every occasion. . . . Diverse sets could be the ones that cover the extremities of the space or that distribute evenly over the entirety of the space. As with all modelling methods, it is important to understand the application prior to selecting the algorithms and other methods since the potential application will affect these decisions. If possible, multiple methods should be used, and importantly, the results visualised to identify whether ‘natural’ clusters are being identified. While it would be nice to have a generally applicable clustering or diversity selection for all applications, this is wishful thinking and it is still necessary . . . to fully consider the range of approaches and desired outputs.

All this has far-reaching implications for how to think about natural kinds. My discussion so far has concentrated on phenomena. But natural kinds have been the traditional battleground of debates on realism in science. Are kinds natural so long as they are found in nature? Do kinds harvested through algorithms such as DENDRAL count as ‘natural kinds’? What makes a kind natural anyway? Is it some microstructural essential property? Or is it because it belongs to some useful taxonomy endorsed by some scientific community for some specific purpose?

The story of DENDRAL, and its legacy for chemoinformatics, challenges some deep-seated philosophical intuitions about natural kinds that cut across the realism/anti-realism divide. Traditionally, there is a division between taking natural kinds as natural divisions carved in nature (realism about kinds), or as conventional labels attached to a bunch of things that someone somewhere has deemed as sufficiently similar (anti-realism/conventionalism about kinds). There are of course many more nuanced views. Some realists endorse natural divisions in nature but refuse to associate them with essential properties. Some anti-realists are realists about individual things and entities but conventionalists about taxonomic classifications, for example.

One trend in philosophy of biology has challenged realist orthodoxy about natural kinds. Probing taxonomic classifications has made clear the inadequacy of thinking about natural kinds as defined by a set of essential microstructural properties.6 This has brought a revival of nominalist approaches to natural kinds following a tradition going back to John Locke and re-energized by Hacking (1991, 1999, 2007a). And it has also ushered in different varieties of realism about kinds—see Dupré’s promiscuous realism (1981, 1993) and Boyd’s homeostatic property cluster kinds (1991, 1999a, 1999b) where realism about individuals or about properties is combined with a good dose of pragmatism about classifications.

Where does perspectival realism sit in this vast and nuanced landscape? What is the relation between the modally robust phenomena introduced in Chapter 6 and the more familiar notion of natural kinds? How is the phenomenon of the decay of the Higgs boson related to the natural kind the Higgs boson? How is the phenomenon of nuclear stability related to kinds of nuclei, from iron to lead? Or that of pollination to kinds of flowering plants? Or of phosphorylation to kinds of proteins?

Let us take microstructural essentialism and conventionalism as two extremes of a nuanced continuum of philosophical views. In what follows, it is not my intention to review this vast literature or chart this whole territory, but only to mark the salient differences between the view I defend and these two classical, long-standing, opposite views. If the perspectival realist were to concede that there are natural kinds carving nature at its joints in virtue of some kind-defining microstructural essential properties, the phenomena-first approach would prove ultimately parasitic upon more traditional realist views.

Yet siding with the conventionalist in denying kind-defining properties and in thinking of natural kinds as convenient labels attached to a set of phenomena would open a wedge between perspectival pluralism and the promise of realism.

I am going to deal with this conundrum by introducing a different way of thinking about natural kinds, which is novel although it draws on the insights of other philosophical views. For example, I share with Hacking the sentiment that this is the twilight of the debate on kinds, and that the complexity of the challenges posed by human and social kinds might well be insurmountable. I also share the common wisdom that there cannot be a single metaphysical account for the bewildering variety of kinds to be found in the natural world. I aim for a sophisticated type of realism that I see available to the perspectivalist, who is not fazed by the prospect of taking phenomena as an ontological starting point. It will not be a one-size-fits-all approach to what is a natural kind. It is not a universal metaphysical account offering necessary and sufficient conditions for kind membership. The local moves I have made in Part I will find their counterparts in local moves in this second part of the book. I draw attention to a range of epistemological practices that can help us re-jig the way we think and talk about natural kinds so that realism about kinds is downstream from these epistemological considerations, rather than from a metaphysics-first approach.

In my discussion so far I have eschewed any talk of properties. I have resisted the temptation to think that perspectivism is just the claim that different properties are ascribed to the same target system when seen from different points of view, or from different models. My discussion on perspectival models as inferential blueprints and on perspectival data-to-phenomena inferences has taken us far away from the traditional metaphysical starting point of these discussions: that there is a world of properties (be they dispositional or categorical) as a given and that kinds can be seen either as carving them at their joints or as clustering them in some convenient way. The ontology I have defended is inferentialist all the way up, and places centre-stage epistemic communities with their situated scientific perspectives. Where to go from here?

I take my phenomena-first approach as the springboard for a thoroughgoingly inferentialist and perspectival view of natural kinds. Let us return one more time to chemoinformatics. The success of practices that involve dialling molecules and designing new synthetic compounds for pharmaceutical purposes is not decided by whether or not they unveil some hidden chemical substances in nature. It is measured instead by how these practices allow scientists to make inferences from the available data to phenomena and finally to kinds.

There is no single correct way of making such inferences. Some computational methods are more efficient and cost-effective than others. Some are more representative. Others strive for diversity (looking for molecules that are at the extremities of the chemical space). Purpose ultimately guides the choice of the algorithm and method amid the combinatorial explosion of possible drug-like molecular objects: which synthesizable compound can have particular pharmaceutical applications? Which is not toxic? Which one can be produced in large quantities and in a cost-effective way?

The view of natural kinds that I am about to lay out places centre-stage intersecting scientific perspectives in opening for us a ‘window on reality’. In this respect, I join a recent trend that has emphasized the role of epistemological rather than metaphysical considerations when thinking about natural kinds. I see Kendig’s discussion on ‘kinding’, Bursten’s methodological role for kinds, Chang’s epistemic iteration in chemical natural kinds, and Knuuttila’s approach to synthetic kinds as kindred approaches to mine. What is novel here is that kinds are the outcome of ever-expanding collections of modally robust phenomena that epistemic communities encounter over time via perspectival data-to-phenomena inferences. In other words, the view I will articulate from here to Chapter 10 stresses the key role that intersecting scientific perspectives play behind our talk and thought about natural kinds.

My goal in this chapter is to offer examples from scientific practices (past and present) as a prelude to my inferentialist view of natural kinds, which, echoing Putnam (1990), I am going to label ‘Natural Kinds with a Human Face’ (NKHF). It goes roughly as follows:

(NKHF)

Natural kinds are (i) historically identified and open-ended groupings of modally robust phenomena, (ii) each displaying lawlike dependencies among relevant features, (iii) that enable truth-conducive conditionals-supporting inferences over time.

In the next section, I review a set of functions typically associated with natural kinds in the philosophical literature. I will call them (A) naturalism, (B) unanimity, (C) projectibility, and (D) nomological resilience. Depending on which function takes precedence, different philosophical views of natural kinds emerge. I then consider four counterexamples, one for each of these views. The counterexamples concern what I am going to call engineered kinds, evolving kinds, empty kinds, and in-the-making kinds. The view of NKHF is designed to shed light on them all.

Scientific realism traditionally begins with homely metaphysical considerations. There is an external world, independent of us human beings, and this world comes pre-packaged with natural kinds: water, gold, hydrogen, but also lemons, zebras, hellebores, and snowdrops, among a myriad of other examples.

For scientific realists, natural kinds mirror divisions in the natural world that do not depend on our language, the evolution of our conceptual resources, or which taxonomic classification happens to be in place. Natural kinds are what there is. A realist about science seeks strategies that guarantee that—to the best of our knowledge—we accurately describe these natural kinds. Thus, to be a scientific realist about the electron theory is to believe that what the theory says about the electron accurately describes a group of entities that we have reasons for thinking form a natural kind.

But what is a natural kind? Defining kind membership by a list of necessary and sufficient properties has proved fraught with difficulties. Granted a rough-and-ready definition of ‘mammal’ as a ‘lactating animal, with fur, and not laying eggs’, the discovery in New South Wales of the platypus at the end of the eighteenth century troubled zoologists. The platypus has fur and mammary glands but lays eggs.7 Does it still count as a mammal? Or is it closer to a duck?

How about isotopic varieties of water? Do they still count as water despite very different chemical properties? Deuterium oxide, for example, also known as ‘heavy water’, is toxic and is used in the production of the hydrogen bomb (see LaPorte 2004, Ch. 4). And what to make of the decision of the International Astronomical Union in 2006 to downgrade Pluto to the rank of a ‘dwarf planet’8 when the definition of what counts as a ‘planet’ changed? Note also the wildly contingent nature of higher taxa, where gulls and terns form sub-families, kingbirds and cuckoos correspond to genera, while owls and pigeons make up whole orders, as Dupré (1981) pointed out.

In spite of these difficulties, natural kinds have traditionally been regarded by philosophers as delivering on four main functions in science.

A.

 Naturalism. Beyond the Platonic metaphor of ‘carving nature’s joints’, natural kinds identify ‘functionally relevant groupings in nature’ (Quine 1969). Quine believed that searching for logical principles of similarity for kinds was a doomed enterprise. Kinds nonetheless are ‘part of our animal birthright’ (p. 123), of our subjective and survival-adaptive spacing of qualities into classes or groupings. From sorting wild berries and mushrooms into edible/non-edible to sorting elementary particles into hadrons and leptons, the story of natural kinds is the story of how ‘our innate subjective spacing of qualities accords so well with the functionally relevant groupings in nature’ (p. 126).9

According to Quine, we learned to hone functional groupings on the basis of their ongoing inductive success or failure in serving practical and epistemic needs—from the classification of chemical elements in terms of atomic number to the classification of animals in clades. Quine presented natural kinds as the survival-adaptive outcome of how human beings have successfully learned to navigate the world around them.

B.

 Unanimity. Zoologists might disagree about specific morphological features of a given platypus specimen, and oceanographers about the percentage of deuterium oxide present in the Pacific Ocean or Atlantic Ocean. Astronomers might debate whether Pluto indeed counts as a planet. But as long as mammal, water, and planet form natural kinds, there is something everyone can agree on. Natural kinds are designed to identify features common to a class of entities.

Unanimity figures in traditional realist accounts such as the Kripke–Putnam account of natural kinds (see Kripke 1980, p. 124). Kripke argued that if we were to discover tomorrow that the mineral found in the mountains of America, South Africa, and Russia does not in fact have atomic number 79 but is instead fool’s gold (iron pyrite), it would be wrong to insist that it would still be gold. He argued that properties essential to kind membership are not observable (e.g. the yellow colour of gold) but microstructural properties featuring in theoretical identity statements such as ‘water is H2O’ or ‘gold is the element with atomic number 79’.

In Putnam’s (1975) Twin Earth scenario, the stuff that fills oceans and lakes shares the superficial properties of water on Planet Earth but its chemical composition is XYZ rather than H2O. Putnam argued that we would not count it as water because having the microstructural property of being H2O is necessary for something to be water. Kripke and Putnam oppose naïve descriptivism, the view that the reference of natural kind terms is fixed by a description of the set of properties representative of the meaning of the natural kind term in question.10

For we do not gain a priori knowledge of natural kinds by grasping the meanings of natural kind terms. If anything, we have a posteriori knowledge of natural kinds. We empirically discovered that water is H2O, even if we now take it to be necessary for something to be water that it stand in the same microstructural kind relation to a sample from a presumed original causal baptism, according to Putnam’s causal theory of reference.

Natural kinds in this Kripke–Putnam realist tradition are defined by microstructural essential properties, which presumably offer a common platform for unanimous judgements, shorn of all the historical accidents and contingencies of how any particular epistemic community might (or might not) have come to know that water is H2O, or gold is the element with atomic number 79.

C.

 Projectibility. This has traditionally been a defining feature of the realism debate about natural kinds. The idea originates from Nelson Goodman’s new riddle of induction (Goodman 1947) and the risk it poses to the success of our inductive inferences. Any number n of positive instances observed up to a specified time t1 for a generalization such as ‘All emeralds are green’ inductively supports a pair of conclusions: that ‘All emeralds are green’ and that ‘All emeralds are grue’, where something is ‘grue’ if it is either examined before time t1 and found to be green or examined after time t1 and found to be blue.
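Stated schematically (a formalization of the definition just given, with E<t1(x) short for 'x is examined before t1' and E>t1(x) for 'x is examined after t1'):

```latex
\mathrm{Grue}(x) \;\equiv\;
  \bigl(E_{<t_1}(x) \wedge \mathrm{Green}(x)\bigr)
  \;\vee\;
  \bigl(E_{>t_1}(x) \wedge \mathrm{Blue}(x)\bigr)
```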

‘Gruified’ inferences were at the heart of a problem Goodman saw for any theory of natural kinds: how to demarcate between projectible predicates like ‘green’ and non-projectible ones like ‘grue’? Goodman himself did not go much further than explaining the difference in terms of what he called ‘entrenchment’. Some predicates (like green) are more entrenched in our languages than others (like grue). But for others, projectibility became an ongoing concern.

A theory of natural kinds has to make sense of the success of our inductive inferences. Projectible natural kinds can avert the risk of Goodman’s grue scenarios. As Richard Boyd (1991, p. 131) stressed, ‘Even for the purposes of guessing we need categories of substance whose boundaries are not (or not just) “the workmanship of the understanding” ’. To deliver on projectibility, Boyd (1990, 1992, 1999a, 1999b) proposed the homeostatic property cluster kinds (HPCK) account. Natural kinds, on this view, are imperfect and fuzzy clusters of co-occurrent properties supported by a homeostatic mechanism rather than a clear-cut set of properties acting as necessary and sufficient conditions for kind membership.

Some philosophers of biology (Brigandt and Griffiths 2007; Currie 2014; Ereshefsky 2012) have explained projectibility by appealing to the notion of homologues: traits or features observed across species and traceable to a common ancestor (e.g. ‘human arms, bat wings, and whale fins are homologues, they are the same character—the mammalian forelimb’; Ereshefsky 2012, p. 383). In the philosophy of the social sciences too, the notion of ‘cultural homologue’ has proved a helpful tool to explain the projectibility of social kinds.11

D.

 Nomological resilience. Another traditional function of natural kinds is strictly related to projectibility: natural kinds are taken as supporting laws of nature, a feature I call nomological resilience. After all, how can natural kinds license successful inductive inferences if not through their ability to support laws of nature? Knowing that water, for example, is a natural kind opens up the possibility of inferring that the next sample will boil at 100° and freeze at 0° Celsius. Knowing that the electron is a natural kind makes it possible to explain phenomena such as electrostatic repulsion (given Coulomb’s law), electronic configurations for chemical elements (given the periodic table), and the stability of matter more generally (given Pauli’s principle).

That kinds go hand-in-hand with laws of nature was famously pointed out by Ian Hacking in his influential discussion of how what he calls Mill-kinds become Peirce-kinds. Hacking (1991, p. 112) starts with Russell’s view on natural kinds as the ‘class of objects all of which possess a number of properties that are not known to be logically interconnected’, and notes how this definition goes back to John Stuart Mill. In A System of Logic (1843), Mill argued that natural kinds exist in nature and what he called ‘real Kinds’ were characterized by an inexhaustible number of properties that can be ascertained by observation and experiment. Charles S. Peirce later supplemented Mill’s real Kinds. ‘Peirce-kinds’—as Hacking calls them—refer to a class such that there is a systematized body of laws about things belonging to this class and ‘providing explanation sketches of why things of a given kind have many of their properties’ (Hacking 1991, p. 120). Natural sciences often develop Peirce-kinds from Mill-kinds, according to Hacking.

I would add that natural kinds are typically regarded as nomologically resilient. No matter how our images of some of them have changed over time (say, the electron from J. J. Thomson to Dirac to QED), the nomological resilience of the class of things we call ‘electrons’ is important. As I have argued in Chapter 5, lawlike dependencies are key to the exercise of physical conceivability and perspectival modelling. They play an additional important role in helping epistemic communities identify relevant groupings of phenomena as belonging to a natural kind.

To recap, the four main functions of natural kinds are associated with different philosophical views about kinds. Naturalism is congenial to metaphysically deflationary accounts (à la Quine) whereby natural kinds reduce to functionally relevant groupings that latch onto natural divisions. Unanimity invites metaphysically more substantive accounts such as those that identify natural kinds with a set of microstructural essential properties. Projectibility underpins a number of realist accounts—from Boyd’s HPCK to Ereshefsky’s homologues in biology—with a less pronounced metaphysical slant and more attention to historical lineages. Nomological resilience, finally, can be seen to be at work across a range of views—from Putnam’s metaphysically robust account12 to Hacking’s nominalism where the emphasis is more on the epistemic agent as ‘homo faber’, in Hacking’s terminology: what artisans and craftspeople do, as when one makes rings with gold, necklace pendants with jade, and hydrogen bombs with deuterium oxide.

Without any presumption of offering a philosophical account of natural kinds that fits all, I do nonetheless present a philosophical stance that can explain and justify why philosophers care so deeply about naturalism, unanimity, projectibility, and nomological resilience, and why natural kinds are typically meant to deliver on these four main functions. The view I propose over the next three chapters is metaphysically deflationary like Quine’s: it does not subscribe to Kripke–Putnam essentialism, nor does it fall back into Boyd’s property-realist story behind the HPCK view. It cuts across traditional philosophical dichotomies in this debate (essentialism/conventionalism; realism/nominalism). Most importantly for my story, it sheds light on how a phenomena-first ontology sits with the debate on natural kinds. In what follows, I present some examples from both scientific practice and the history of science that force us to rethink traditional philosophical stances associated with these four functions of natural kinds.

Few things speak of ‘natural divisions in nature’ more than DNA sequencing. Since the work of Franklin, Watson, and Crick on the structure of DNA and the ensuing development of molecular genetics, the nucleotides adenine A, thymine T, guanine G, and cytosine C have been regarded as DNA’s natural letters in the alphabet of life. The double-helix structure of A–T and C–G pairs encodes necessary information for various forms of life on planet Earth. From the taenia worm to daffodils, from kingfishers and pandas to humans, natural kinds in the life sciences have long seemed to be written out of these four simple DNA building blocks.

Putnam (1975, p. 240) claimed that for something to be a lemon (or belong to the natural kind lemon) it has to have the genetic code of a lemon, much as having chemical composition H2O is necessary for something to be water. The problem, though, is that biological kinds, species, or higher taxa cannot realistically be identified just by invoking the genetic code (see Dupré 1981; Ghiselin 1974; LaPorte 2004). Higher taxa reflect contingent human decisions about classificatory boundaries—as Dupré (1981) pointed out.13 More generally, biological species (be it Citrus limon or Equus quagga or something else) are the product of evolutionary survival-adaptive mechanisms—as evolutionary taxonomy has long studied them—that cannot be entirely reduced to a genetic code.14

But leaving aside the complex issue of how to think about biological kinds in the light of cladistics, evolutionary taxonomy, and so on, there is something problematic from a metaphysical point of view about the Putnamian microstructural essentialist story. That is, first, the presumption that natural kinds are metaphysically identifiable with some essential building blocks (be they chemical building blocks like atoms for chemical compounds or genetic building blocks like nucleotide pairs A–T and C–G for living organisms). And, second, the further presumption that such building blocks are somehow letters in a natural alphabet in which the book of nature is written (be it atomic numbers for chemical elements or DNA code). The reality goes beyond these narrowly conceived metaphysical boundaries.15

Recall here DENDRAL and how current chemoinformatics goes about exploring different methods for grouping molecules into new chemical compounds. New drugs are engineered all the time for industrial and pharmaceutical purposes. Among all the conceivable molecular combinations, only those that have passed specific tests to check for toxicity, stability, and cost-effectiveness get selected for patents and production. New chemical kinds are engineered by AI-led processes. And engineering does not just apply to chemical kinds. It increasingly applies also to kinds in the life sciences.

Consider the announcement in Science on 22 February 2019 of the eight-letter (hachimoji, in Japanese) nucleotide language for DNA and RNA (Hoshika et al. 2019). A team led by Steven Benner at the Foundation for Applied Molecular Evolution in Florida announced they had succeeded in producing synthetic DNA. In addition to the four nucleotides A–T, C–G, synthetic DNA has four additional ‘letters’—purine analogues P and B and pyrimidine analogues Z and S—that would form new pairs P–Z and B–S. Previous attempts at synthesizing DNA with additional ‘letters’ relied on water-repelling molecules (hydrophobic nucleotide analogues). But these attempts failed because in the absence of hydrogen bonds, hydrophobic nucleotides tend to slip and distort the double-helix structure. Hydrogen bonds are required to secure the stability of additional pairs of synthetic nucleotides that can be transcribed into RNA.
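In terms of pairing rules alone, the hachimoji alphabet simply adds two complementary pairs to the canonical two. The sketch below is only a schematic illustration of that pairing logic (the letters are those reported in Hoshika et al. 2019, but the code says nothing about the underlying chemistry of the analogues):

```python
# Canonical Watson-Crick pairs plus the two hachimoji pairs (P-Z and B-S).
COMPLEMENT = {
    "A": "T", "T": "A", "C": "G", "G": "C",   # natural DNA letters
    "P": "Z", "Z": "P", "B": "S", "S": "B",   # synthetic hachimoji letters
}

def complement_strand(strand):
    """Return the antiparallel complementary strand under the eight-letter alphabet."""
    return "".join(COMPLEMENT[base] for base in reversed(strand))

print(complement_strand("ATCGPZBS"))   # reverse complement under the extended pairing rules
```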

Benner and his group found a synthetic DNA that not only forms stable pairs and reliably translates into RNA for protein formation, but can also potentially support molecular evolution and find applications in a variety of medical diagnostics. The discovery sends out a strong message about naturalism for natural kinds: even our most credible candidates for underpinning natural divisions in nature can be synthesized and engineered. Further examples in this direction come from synthetic biology (see Kendig and Bartley 2019; Knuuttila and Loettgers 2017; O’Malley 2014), techno-science (see Russo forthcoming), and nanotechnology (see Bursten 2016, 2018).

The boundaries between the natural and the engineered are fuzzier than one might suppose. And any philosophical account of natural kinds that insists on naturalism (even a thin rather than thick notion of naturalism as in Quine) has to make room for engineered kinds. Thus, this is the first lesson that I’d like to draw out of these examples:

Lesson no. 1: The naturalness of natural kinds is not just the product of our ‘subjective spacing of qualities’ (to echo Quine). It is also the result of our perspectival scientific history.

Chemoinformatics, nanotechnology, and synthetic biology are scientific perspectives on a journey to extend and redefine the boundaries of what we count as a natural kind. When a scientific perspective advances new knowledge claims about new kinds, whose reliability can be assessed and evaluated over time with cross-perspectival assessments, I see no metaphysical reason for excluding them from naturalism about kinds. Our presumed access to natural divisions in nature is no more privileged, direct, unfiltered, or unmediated than our access to kinds delivered through perspectival modelling. Which kinds count as ‘natural’ is ultimately a case-by-case judgement. It rests on the reliability of the historically and culturally situated practice delivering knowledge of the kinds and the methodological-epistemic principles that can justify their reliability within the perspective, with truth-conditions remaining a cross-perspectival affair (as per Chapter 5, Section 5.7).

The history of science challenges essentialist accounts of natural kinds. Kuhn (1990) presented what is in my view one of the most convincing historical arguments against the Kripke–Putnam semantic view. He argued that in Putnam’s (1975) Twin Earth thought-experiment only a differently structured scientific lexicon could describe the behaviour of the hypothetical XYZ at all. And in that new lexicon, H2O might no longer refer to what we now call ‘water’.

Kuhn also rebutted Putnam’s claim that a hypothetical Doppelgänger in 1750 (before Lavoisier’s discovery of oxygen) would have been referring to water all along even without yet knowing that water was H2O. In 1750, states of aggregation (solid, liquid, and gaseous) were regarded as demarcating chemical species. Thus, water for an eighteenth-century natural philosopher was an elementary liquid substance, with liquidity being an essential property. Only after Lavoisier’s Chemical Revolution did the distinction among solids, liquids, and gases become physical rather than chemical. Kuhn argued that what Kripke called ‘rigid designators’ (namely, names that rigidly designate the same objects) did not apply to natural kind terms such as ‘water’, which have in fact gone through a major conceptual and meaning change over centuries. Epistemic unanimity about natural kinds should not be expected on Kuhn’s view of scientific revolutions.

I am going to use the expression evolving kinds to refer to natural kinds that have evolved across scientific perspectives and adapted to new scientific practices over centuries. The kinds we know and love—I contend—are all evolving kinds: they have survived endless conceptual change. This semi-Kuhnian feature is central to the view of Natural Kinds with a Human Face (NKHF). Naturalism invites us to make a presumption about functionally relevant groupings in nature. But the identification of these groupings has a history of its own, rooted in a variety of scientific perspectives and associated practices that have evolved over time and across cultures, as new technology and experiments became available, or new methodological and epistemic principles were introduced.

A perspectival realist account, then, does not delegate unanimity to rigid designators. Nor does it forsake it in the name of scientific revolutions and conceptual change. But it takes unanimity as some sort of equilibrium point in the survival and adaptation of our ever-evolving kinds across a plurality of scientific perspectives.16 I shall discuss one of my favourite examples, the electron, in Chapter 10. J. J. Thomson, who discovered the electron, did not in fact refer to his particles as ‘electrons’ but as ‘corpuscles’. Thomson believed that there were positive and negative electric charges whose field-theoretic behaviour was described by what he called a ‘Faraday tube’, working within the electromagnetic tradition.17 Faraday tubes allowed Thomson to reconcile the discrete nature of electricity with the continuous nature of the electromagnetic field. But is our electron, which is now part of the current Standard Model of particle physics, the same as his? How can natural kinds realistically offer a common platform for unanimous judgements over time, if our talk and thought of them inevitably evolve over time and across scientific perspectives?

An account of natural kinds that delivers on the promise of epistemic unanimity has to explain their historical evolution across a plurality of scientific perspectives. Why is it that we tend to agree in our judgements about, say, electrons (water, gold, jade, etc.), despite a variety of perspectival₁ representations across the history of science?

Recall the two notions of perspectival representation introduced in Chapter 2: i.e. a representation drawn from a particular point of view vs. a representation directed towards one or more vanishing points. I’d like to think of the unanimity in talk and thought about natural kinds as the vanishing point (if any such exists) towards which our historically and culturally situated perspectival representations are drawn.

Lesson no. 2: The epistemic unanimity granted by natural kinds is not a by-product of microstructural essential properties ‘from nowhere’. It is a product of perspectival scientific history, of how historically and culturally situated epistemic communities learn to engage with one another, to perspectivally model the relevant phenomena, and navigate the inferential space surrounding them.

Hence, an explanation for the unexpected unanimity of evolving kinds has to be sought in the epistemic grounds upon which epistemic communities come to engage with one another and agree across time on what natural kinds there are, in spite of potential disagreement in their perspectival₁ representations.

Projectibility is a cornerstone of Boyd’s HPCK account. On this view, natural kinds secure inductive inferences and explanations in science because natural kind terms refer to fairly stable clusters of co-occurrent properties in nature (which are not entirely ‘the workmanship of women and men’, as stressed by Boyd 2010, p. 219). Boyd’s property realism about kinds is designed to offer the best explanation for their projectibility, and a safe antidote to Goodman’s new riddle of induction. Yet it rides roughshod over a curious, often neglected, fact: namely, that empty kinds have often proved no less projectible than bona fide natural kinds.

I call empty kinds putative kinds whose membership eventually turns out to be an empty set. Theories of ether, caloric, or phlogiston enjoyed scientific success for relatively long periods of time (see Chang 2012a, 2012b; Ladyman 2011; Laudan 1981; Lyons 2002, 2006; Vickers 2013, 2017). Explaining the success of false theories has long been the aim of realists responding to the ‘pessimistic meta-induction’ from the history of science (see Kitcher 1992, 1993; McLeish 2005; Psillos 1996, 1999; Stanford 2003a, 2003b). And various philosophical approaches have been developed over time to tackle this kind of prima facie counterexample to realism about kinds. For example, Kyle Stanford and Philip Kitcher (2000) famously put forward a refined causal theory of reference to handle these counterexamples.18 Hasok Chang (2012a, p. 247) has pointed out that there is never any stability to be expected in the act of fixing reference, and that ‘the correspondence theory of reference is futile, because reference to bits of unobservable reality is just as inoperable as “Truth with a capital T” ’. I shall follow Chang’s advice here in not getting ‘fixated’ about reference-fixing for natural kind terms. The burden of perspectival realism does not rest on semantic arguments for natural kind terms. Nor does it rest on epistemic arguments for the approximate truth of best theories in mature science.

I will not speak of scientific theories, because my realism is bottom-up: from data to phenomena and from phenomena to natural kinds. Thus, instead of asking why a false theory could prove successful for a period, I am going to ask: how could empty kinds prove projectible? How could hypothetical kinds whose membership is in fact an empty set nonetheless support inductive inferences and explanations in relevant areas of inquiry?

One might bypass this question by denying that things such as caloric, ether, phlogiston, and so on, are ‘kinds’. They do not exist. A fortiori they cannot be kinds. Yet these things were imagined, conceived of, supposed to exist, to have properties, and to behave in specific ways. Different models of them were built and used to gain information about different phenomena in nature. Modal knowledge was sometimes obtained using models that conceived of such things. Ditching caloric, ether, phlogiston, and so on, as falsehoods does not begin to capture the crucial role that imagined entities and putative kinds played for centuries in advancing scientific knowledge.

Should we not, then, refer to them as ‘extinct’ kinds?19 Should we not be more liberal in the usage of the ‘natural kind’ label and accept that some of them (e.g. caloric, phlogiston, ether) did live at some point but became extinct later? Much as my take here is very much indebted to the history of science and to powerful historicist criticisms of scientific realism (to use Stanford’s 2015 terminology), ‘extinct’ would err on the side of historical generosity, in my opinion. For it would bestow the label of ‘natural kind’ on what is effectively an empty set. An empty set is always empty—yesterday as it is today. Therefore, my expression empty kinds comes closer to capturing the good I see in some of these historicist arguments without the risk of reifying as a kind something that never was.

Let us, then, allow ‘kinds’ to include not just kinds known to exist, but also conceivable kinds, or hypothetical kinds, some of which will survive and become evolving kinds and some of which will eventually turn out to be empty and get discarded. How to explain the unreasonable projectibility of empty kinds for a period of time? Consider as a few examples the following indicative conditionals:

(E.1) If caloric is a physically conceivable ‘matter of fire’20 that binds to bodies, specific heat increases with temperature.

(E.2) If phlogiston is a physically conceivable ‘combustible principle’, metals turn into calxes by removing phlogiston.

(E.3) If ether is a physically conceivable ‘elastic medium’ for the transmission of light, light propagates in transverse waves through it.

Empty kind terms feature in the antecedents of these conditionals. Inductive inferences and explanations could nonetheless still be given for phenomena ranging from specific heat to calcination and optical diffraction. This should not be surprising. In Chapter 5, I made the point that suppositional antecedents in such conditionals can deliver true consequents via enthymematic arguments even when additional hidden premises rely on theories that later turn out to be false (as is the case with these examples).

So the task ahead for perspectival realism is to show the inferential patterns that explain how and why different epistemic communities came to agree that a certain historically identified grouping of phenomena is (or is not) a natural kind in spite of disagreements about how to think of some of these phenomena.

The epistemic agents uttering the indicative conditionals (E.1)–(E.3) were, say, Lavoisier, Priestley, and Fresnel, respectively. Each of them was working within a well-defined scientific perspective that included experimental and technological resources to advance claims of knowledge about a number of phenomena. Those resources included Lavoisier’s ice calorimeter, Priestley’s nitrous air test,21 and Fresnel’s optical diffraction experiments. These instrumental practices underwrote a number of inductive inferences seemingly associated with the putative kinds caloric, phlogiston, and ether, respectively. Let us zoom in on one of them: Lavoisier’s ice calorimeter.

In 1783, Lavoisier and Laplace built an instrument that was designed to measure the amount of caloric bound to a body by measuring how much ice melted into water. Caloric was supposed to be fixed in the external layer of ice, and in any of the subsequent more internal layers. Thus, it seemed natural to suppose that if twice the quantity of ice were melted, double the quantity of caloric would be released. This was measured through a multilayer structure that could insulate the ice in the central cavity as much as possible from the heat of the surrounding air. The stopcock that controlled the water leaking from the central cavity was separated from the run-off of the external and middle cavity and controlled by a spigot (see Heilbron 1993, pp. 101–105).

The ice calorimeter had flaws: not all of the melted water drained out of the instrument, as some was retained in the porosity of the ice, and accordingly the measurements were off and difficult to reproduce. But the principle behind it was ingenious, and one that could be extended to measuring the specific heat of gases and of fluid substances such as sulphuric and nitric acids (Lavoisier 1799, p. 433).22
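A back-of-the-envelope reconstruction in modern units (illustrative numbers, not Lavoisier's own data) shows how the instrument's working principle turns a mass of melted ice into a specific-heat value:

```python
LATENT_HEAT_FUSION_ICE = 334.0   # J per gram of ice melted (modern value)

def specific_heat(mass_ice_melted_g, mass_sample_g, t_initial_C, t_final_C=0.0):
    """Specific heat of a sample cooled inside an ice calorimeter.

    The heat released by the sample as it cools from t_initial to the
    temperature of melting ice (0 degrees C) is assumed to go entirely into
    melting ice -- the calorimeter's working principle -- so
    Q = m_ice * L_fusion and c = Q / (m_sample * delta_T).
    """
    heat_released = mass_ice_melted_g * LATENT_HEAT_FUSION_ICE
    return heat_released / (mass_sample_g * (t_initial_C - t_final_C))

# A 100 g sample cooled from 80 C that melts 3.8 g of ice comes out at roughly
# 0.16 J/(g*K) -- illustrative figures only.
print(round(specific_heat(3.8, 100, 80), 3))
```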

The false assumption was of course that heat was a conserved quantity in these transitions of states and hence that the ‘quantity of ice melted is a very exact measure of the proportional quantity of caloric employed to produce that effect, and consequently of the quantity lost by the only substance that could possibly have supplied it’ (p. 423). The suspicion that caloric was in fact an empty kind was in the air already in 1798, four years after Lavoisier’s death. And it was brought up in the most unexpected way in the most scientifically unassuming practice.

A former lieutenant-colonel in the American War of Independence, Benjamin Thompson was knighted and moved to Bavaria, where in 1791 he became Count von Rumford, working as grand chamberlain to the elector of Bavaria and superintendent of the military arsenal in Munich. Among his duties, he supervised the boring of cannons. In this role, he had the chance to observe ‘the very considerable degree of heat which a brass gun acquires, in a short time, in being bored; . . . The more I meditated on these phenomena, the more they appear to me to be curious and interesting . . . and to enable us to form some reasonable conjecture respecting the existence, or non-existence, of an igneous fluid’. His answer was that ‘the heat produced could not possibly have been furnished at the expense of the latent heat of metallic chips’ and was generated instead by friction (Rumford 1798, pp. 81–83). He then proceeded to ask a series of questions:

What is heat?—Is there any such a thing as an igneous fluid?—Is there anything that can with propriety be called caloric? . . . It is hardly necessary to add, that any thing which any insulated body, or system of bodies, can continue to furnish without limitation, cannot possibly be a material substance: and it appears to me to be extremely difficult, if not quite impossible, to form any distinct idea of any thing, capable of being excited, and communicated, in the manner the heat was excited and communicated in these experiments, except it be MOTION. (Rumford 1798, pp. 98–99, emphases and capital letters in original)

There were other doubts. In England, Humphry Davy (1812), experimenting with ice cubes that melted by friction despite the temperature being kept at freezing point, reached a similar conclusion: the phenomenon of heat (or ‘calorific repulsion’, as it was still called at the time) was caused by motion. It was a good half-century before Rumford’s and Davy’s observations were developed in yet another scientifically unassuming practice. James Prescott Joule came from a wealthy family of brewers in Lancashire, who could afford to employ John Dalton, one of the leading chemists of the time and a defender of the hypothesis of chemical atoms, among the tutors for their children. Joule became interested in how to improve the efficiency of the brewery, and ran a series of experiments with a paddle-wheel machine to measure the interconvertibility of heat and work. He used a system of strings and pulleys connected to a paddle-wheel inside an insulated copper container filled with different liquid substances (water, oil, and mercury). Joule studied how much mechanical work was needed to activate the paddle-wheel and eventually raise the temperature of the liquid in the container.

These experiments confirmed Count Rumford’s insight that heat was not a material substance being released but rather a kind of motion: heat was indeed produced in proportion to the amount of mechanical work expended. From these experiments, using thermometers, Joule (1850) was able to establish the mechanical equivalent of heat, which became known as Joule’s equivalent. Rumford’s, Davy’s, and Joule’s observations and inferences mark the end of caloric as a putative kind. On these experimental foundations, thermodynamics and the kinetic theory of gases were developed in the second half of the nineteenth century.
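In modern notation (a schematic restatement of the result, not Joule's own formulation), the equivalence can be written as

\[
W \;=\; J\,\Delta Q, \qquad J \approx 4.2\ \mathrm{J\,cal^{-1}},
\]

where W is the mechanical work expended, ΔQ the heat produced, and J the conversion factor. Joule's own figure, roughly 772 foot-pounds of work to raise the temperature of one pound of water by one degree Fahrenheit, is close to the modern value.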

Yet the example indicates how empty kinds are more than mere idle posits in a theory or mistaken assumptions to be eventually overthrown. They played an important role for what Hacking called homo faber. Empty kinds inspired the work of artisans, craftspeople, apothecaries, engineers, military officers, and brewers alike. They informed the invention of machines and instruments like the ice calorimeter and the paddle-wheel apparatus. In spite of possibly false assumptions and inaccurate measurements, these instruments advanced scientific knowledge by making possible conditionals-supporting inferences about a range of phenomena. These inferences revolved around suppositional questions such as the following (one can think of them as a Ramsey test for the indicative conditional E.1):23

(a) If caloric is a physically conceivable ‘matter of fire’ that binds to bodies, will specific heat increase with temperature?

(b) If caloric is a physically conceivable ‘matter of fire’ that binds to bodies, how much ice will melt in the ice calorimeter for a substance x at temperature y?

(c) If caloric is a physically conceivable ‘matter of fire’ that binds to bodies, will heat increase indefinitely by boring a cannon?

It was through questions like these that it became possible to start measuring the specific heat of metals and gases. And it is through them (among other things) that transitions of state (from solid ice to liquid water) began to be regarded as physical rather than chemical in nature: the outcome of friction and motion, rather than the release of some hidden igneous fluid. As Kuhn remarked, it was indeed only after Lavoisier that the term ‘water’ came to encompass not just liquid water but also ice and water vapour. Philosophical accounts of natural kinds that care about the seemingly unreasonable projectibility of empty kinds across the history of science should place less emphasis on theories and more on the experimental practices and technological tools that were built around them.

These experimental practices are part and parcel of scientific perspectives. Lavoisier’s ice calorimeter, together with the gravimetric methods of the apothecaries and assayers familiar to him from his life as a tax collector (see Bensaude-Vincent 1992), was inherent in the scientific perspective he operated with. Similarly, Joule’s paddle-wheel experiment was part of a scientific perspective that started with Dalton’s atomism, Rumford’s work on cannons, and Davy’s on ice cubes, and later intersected with Faraday’s studies on the interconvertibility of electricity and chemistry. These intersecting scientific perspectives offered a more general standpoint for the interconversion between mechanical work and heat as different forms of energy. In each case, the data-to-phenomena inferences were perspectival. Caloric proved to be an empty kind, by contrast with kinetic energy, because it did not track groupings of modally robust phenomena over time.

Yet the ice calorimeter was the remote ancestor of modern-day electromagnetic and hadronic ‘calorimeters’ used in experiments such as ATLAS at the Large Hadron Collider (LHC), which measure the energies of the particles (electrons, photons, and hadrons) produced in proton–proton collisions. The persistence of the name ‘calorimeter’ (despite the non-existence of caloric) is testament to the seemingly unreasonable projectibility of empty kinds.

Lesson no. 3: The projectibility of natural kinds does not have to do with the natural as opposed to concocted character of the predicates/properties (e.g. ‘green’ vs ‘grue’) associated with natural kind terms (e.g. emeralds) in scientific statements. It has to do instead with the machines, instruments, and experiments (e.g. the boring of cannons, the melting of ice cubes, paddle wheels in water), and with the conditionals-supporting inferences about a range of phenomena that these experiments licensed (within and across scientific perspectives).

Of course, this is only a starting point. A lot more needs to be said about how these identified groupings of phenomena constitute what I call evolving kinds, and I attend to this task in Chapters 8 and 9.

Finally, let us turn to nomological resilience. Traditionally, the ability of natural kinds to license successful inductive inferences has been linked to their ability to support laws of nature. Hacking pointed out the role of laws of nature in defining natural kinds through his distinction between Mill-kinds and Peirce-kinds, whereby Peirce-kinds sometimes develop from Mill-kinds. He identified Peirce-kinds with the Putnamian view that there are objective laws that gold, electricity, and so on, satisfy. Accordingly, the ability to infer from one property of the kind (say, atomic number 79 for gold) to other properties for the same kind (malleability, melting point, etc.) is grounded on a systematized body of laws of nature.

In this final section, I defend a role for laws of nature in natural kinds in two ways. In the absence of laws of nature, I contend that Mill-kinds turn out to be empty kinds. And with laws of nature, even in-the-making kinds enjoy the status of Peirce-kinds.

Recall that a Mill-kind (following Hacking 1991, p. 118) is a real Kind (with capital K) ‘if it has a large and plausibly inexhaustible set of properties not possessed by members of K that lack [property] P’. Mill-kinds allow speakers to make inductive inferences based on the knowledge of one distinctive property P, which is the gatekeeper for innumerable others. But inductive inferences in Mill-kinds are not supported by a system of laws of nature.

Imagine you get your inferences via a sort of lottery system that randomly associates property P with a large set of other properties S (but the system might as well have associated P with a different large set of alternative properties T). Mill-kinds are genuine empiricist kinds with nominalist roots. There are no laws buttressing the connection between property P and the large (inexhaustible) set of other properties S over and above constant conjunction and co-occurrence. A problem then arises: how can we tell whether the connection is purely accidental or indicative of genuine kindhood?

Caloric is a case in point. This empty kind was once a Mill-kind. The putative connection between the (alleged) repulsive property P of caloric (or ‘calorific repulsion’, as it was known at the time) and a number of other properties S (e.g. being released in transitions of state from liquid to gas, being squeezed out when ice melts) proved a versatile one. From the ice calorimeter to Carnot’s steam engine, this former Mill-kind made possible far-reaching inductive inferences—until they came to a halt, and the success of the inferences turned out to be parasitic upon the motions of molecules and kinetic energy, underpinnings entirely different from caloric.

This former Mill-kind did not have laws of nature backing up its inferences and it eventually faded away. ‘Conservation of caloric’ is not a law of nature, despite Sadi Carnot assuming it in the Carnot cycle: there is no such lawlike dependency in nature.

In such cases, Mill-kinds risk hiding empty kinds which will be revealed as time goes by. In my lingo, they do not track modally robust phenomena. And the phenomena they do seem to track do not enjoy lawlike dependencies (or the right type of lawlike dependencies). Empiricist kinds with nominalist roots can secure successful inductive inferences and explanations only to the extent that a Mill-kind is an eligible candidate for becoming a Peirce-kind. And candidates for Peirce-kinds are—in my terminology—those tracking groupings of modally robust phenomena within and across several scientific perspectives, each displaying lawlike dependencies.

But in the presence of such lawlike dependencies, even in-the-making kinds enjoy the status of Peirce-kinds, as I contend next. In-the-making kinds are, by definition, hypothesized viable candidates for natural kinds. Take dark matter as an example. The 2019 Nobel Prize in Physics was awarded to James Peebles (alongside two other physicists, Michel Mayor and Didier Queloz, who were honoured for their research on exoplanets) for

insights into physical cosmology [that] have enriched the entire field of research and laid a foundation for the transformation of cosmology over the last fifty years, from speculation to science. His theoretical framework, developed since the mid-1960s, is the basis of our contemporary ideas about the universe . . . . The results showed us a universe in which just five per cent of its content is known, the matter which constitutes stars, planets, trees—and us. The rest, 95 per cent, is unknown dark matter and dark energy. This is a mystery and a challenge to modern physics. (See https://www.nobelprize.org/prizes/physics/2019/press-release/)

It is clearly the job of physicists to find the evidence and settle the question of the nature of dark matter. For the purpose of my philosophical discussion here, dark matter is an instructive example of what I call in-the-making kinds. For despite all the good theoretical and experimental reasons for introducing it into the standard cosmological model, as I finish editing this volume (December 2021) scientists are still waiting to find dark matter particles through a variety of direct and indirect searches,24 including work at the Large Hadron Collider.

What in my philosophical account makes dark matter an example of in-the-making kinds are the laws of nature that enter into a number of phenomena for which dark matter is required, and the perspectival data-to-phenomena inferences at play across a range of intersecting scientific perspectives. Very briefly, here are some of the cosmological details relevant to my philosophical discussion here (drawing on Massimi 2018d; for a historical retrospective, see de Swart et al. 2017; Peebles 2017).

The term ‘dark matter’ was originally introduced in the 1930s by the Swiss cosmologist Fritz Zwicky (1933) to account for why galaxies in large clusters remain bound together much more tightly than one would expect from the gravity of the visible matter alone (cf. Bradley et al. 2008 for a recent study). But the notion of dark matter did not take off until the 1970s, when the idea resurfaced to explain another puzzling phenomenon: namely, how spiral galaxies retain their distinctive shape over time (see Ostriker and Peebles 1973).

The hypothesis of a dark matter halo surrounding galaxies was introduced to explain the phenomenon, and the later measurements of spiral galaxies’ rotational velocities by Vera Rubin and collaborators (Rubin et al. 1980) corroborated Zwicky’s original idea. The rotational velocity of spiral galaxies, instead of decreasing with distance from the centre of the galaxy, remains fairly constant. This is taken as evidence for the existence of dark matter halos surrounding galaxies, inside which the galaxies would have formed (the same massive halos are needed to guarantee the dynamical stability of galactic discs). The current standard cosmological model, the so-called ΛCDM model, postulates dark energy in the form of Λ to explain the accelerated expansion of the universe,25 in addition to Cold Dark Matter.
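To see why flat rotation curves point beyond the luminous mass, consider the simplified Newtonian relation for a star orbiting at speed v at radius r from the galactic centre (a schematic gloss, not a reconstruction of Rubin et al.'s own analysis):

\[
\frac{v^{2}(r)}{r} \;=\; \frac{G\,M({<}r)}{r^{2}} \qquad\Longrightarrow\qquad v(r) \;=\; \sqrt{\frac{G\,M({<}r)}{r}},
\]

where M(<r) is the mass enclosed within radius r. If the luminous disc contained all the mass, v(r) should fall off roughly as $r^{-1/2}$ beyond the visible edge; an observed flat v(r) instead requires M(<r) to keep growing in proportion to r, which is precisely what a massive dark halo provides.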

Some of the best evidence for dark matter comes from phenomena at a scale much larger than that of individual galaxies or even clusters of galaxies. Since the 1990s, scientists have known that out of the total gravitating mass density of the universe as a whole (Ωm), only a small fraction is made up of baryons, that is, ordinary matter composed of protons and neutrons (see White et al. 1993). The baryon density is measured from the baryon-to-photon ratio. Data from the WMAP and Planck Collaborations about the cosmic microwave background (CMB) provide an accurate indication of the photon energy density at the time of last scattering after the Big Bang, while Big Bang Nucleosynthesis (BBN) provides constraints on the abundance ratios of the primordial elements (hydrogen, helium, etc.; see Steigman 2007 for a review). From data such as these, cosmologists infer that Ωm far exceeds Ωb. This modally robust phenomenon in turn provides strong evidence for an additional kind of non-baryonic (perhaps weakly interacting) matter yet to be experimentally detected: dark matter.
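For orientation (rounded, indicative values from recent CMB-based analyses, not figures cited in this chapter), the density parameters come out at roughly

\[
\Omega_{b} \approx 0.05, \qquad \Omega_{m} \approx 0.3,
\]

so that baryons account for only about one-sixth of the total gravitating matter; the remainder is attributed to dark matter.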

Other phenomena evinced from different kinds of cosmological data support dark matter as a kind-in-the-making. The angular power spectrum of the CMB shows initial density fluctuations in the hot plasma at the time of last scattering. Over-dense regions in these maps show the seeds that led to the growth of structure, and the gradual formation of galaxies and rich galaxy clusters, under the action of gravity over time. Cosmologists infer the existence of a non-baryonic (weakly interacting) dark matter that must have been responsible for the early structure formation.

The matter power spectrum inferred from data about baryon acoustic oscillations (BAO) is yet another piece of evidence. BAO are the remnants of sound waves that travelled at a substantial fraction of the speed of light through the primordial plasma shortly after the Big Bang, before the universe cooled down enough for atoms to form. BAO measurements are used to probe the rate at which the universe has been expanding at different epochs (and hence serve as a probe for dark energy). But BAO are also important for dark matter, because they are related to the shape of the matter power spectrum, which differs markedly between a dark matter model and a no-dark-matter model of the universe (see Dodelson 2011).

Thus, evidence for dark matter as an in-the-making kind accrues through a number of data-to-phenomena inferences:

(1) from Zwicky’s data about the radial velocities of eight galaxies in the Coma cluster to the more general phenomenon of galaxy clusters;

(2) from Ostriker and Peebles’s data about the N-body computer model simulation to the phenomenon of the stability of the galactic discs;

(3) from Rubin et al.’s spectrographic data used to measure rotation velocities to the phenomenon of galaxies’ flat rotation curves;

(4) from BAO data to the phenomenon of the shape of the matter power spectrum;

(5) from CMB data to the phenomenon of large-scale structure formation of the universe via computer simulations;

(6) from data about Big Bang Nucleosynthesis to the phenomenon of Ωm >> Ωb.

Each of these inferences is perspectival in a distinct way. The data are gained from experimental and technological resources that are an integral part of the ΛCDM cosmological model. The inferences from data to phenomena tend to be very much model-dependent (with the caveat presented in footnote 25 for the measurement of the Hubble constant). The methodological and epistemic principles that guide and justify the reliability of the knowledge claims so advanced (e.g. Bayesian statistics with a well-motivated choice of priors for theoretical parameters and nuisance parameters—see Massimi 2021 for a review of the dark energy case) are also part of a distinctive scientific perspective.
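Schematically (a generic illustration of the sort of Bayesian reasoning at issue, not a reconstruction of any particular collaboration's pipeline), posterior constraints on the theoretical parameters θ are obtained by marginalizing over the nuisance parameters ν:

\[
p(\theta \mid d) \;\propto\; \int p(d \mid \theta, \nu)\, p(\theta)\, p(\nu)\, \mathrm{d}\nu,
\]

where d are the data and the choice of the priors p(θ) and p(ν) is precisely the ‘well-motivated choice’ just mentioned.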

Laws of nature enter into every one of the data-to-phenomena inferences above.26 Despite the growing range of phenomena at different scales that point to dark matter, dark matter particles have not yet been detected as I finish editing this book (December 2021). The current status of dark matter as a powerful kind in-the-making can be explained in terms of its nomological resilience across a number of perspectival data-to-phenomena inferences at different scales.

We are looking for a kind that allows us to make successful inferences about an identified and open-ended group of phenomena at different scales. Laws of nature play a key role in turning this hypothetical Mill-kind into a fully fledged Peirce-kind, to use Hacking’s terminology. The final chapter of this story still needs to be written. New physics beyond the Standard Model might hold the key to this puzzle. Whether and when this in-the-making kind will become an evolving kind depends on how the scientific perspectives of contemporary cosmology and astrophysics may intersect with the current and future perspectives of particle physics.

That concludes my two main points in this section. First, in the absence of laws of nature, a once successful Mill-kind turns out to be an empty kind. And, second, laws of nature underpin the nomological resilience of in-the-making kinds and allow them to enjoy the status of Peirce-kinds.

Lesson no. 4: The lawlikeness of natural kinds is not downstream from some prior holding of microstructural essential properties and relations. But it is not a disposable add-on to Mill-kinds either. For without laws, Mill-kinds turn out to be empty kinds. And with laws, even in-the-making kinds enjoy the status of Peirce-kinds.

In summary, in this chapter I have made the point that a perspectival realist view on natural kinds should be able to accommodate the aforementioned four functions of natural kinds, suitably revised in light of the examples discussed. In particular, a perspectival realist view on natural kinds should be able to accommodate:

* how engineered kinds count as just as natural as more familiar kinds;

* how evolving kinds offer an unexpected platform for unanimous judgements;

* how the unreasonable (temporary) projectibility of empty kinds has less to do with theories and predicates and more to do with machines and instruments designed with them in mind;

* how kinds in-the-making are eligible candidates for evolving kinds as long as they remain nomologically resilient.

In the next three chapters, I articulate the details of this perspectival realist view of Natural Kinds with a Human Face.

Notes
1

Two isomers consist of the same number and type of atoms but have different physical and chemical properties because of their molecular structure.

2

Many thanks to Julia Bursten, Marcel Jaspars, and Jon Turney for helpful comments. See Bursten (2020b) for a helpful discussion on the topic.

3

 Lederberg (1987, p. 7) recalled how ‘various arithmetic tricks were devised that took account of valence rules, plausibility of composition, the negative and positive packing fractions of O and N, and the abnormal proportional discrepancy of H, to keep the search down to a manageable scope. For paper and pencil work (in 1964) this was embodied in a handbook of some 50 pages. . . . Even that small book was later . . . obsoleted by an algorithm that depended on a one-page table with just 72 non-zero entries, and a few arithmetic steps easily done on a 4-function hand calculator’.

6

It goes beyond my scope and goal to discuss this voluminous literature here. The interested reader is referred to Beebee and Sabbarton-Leary (2010), Bursten (2014, 2018), Dupré (1981), Ereshefsky (2004, 2018), Ereshefsky and Reydon (2015), Kendig (2016a), LaPorte (2004, Ch. 3), MacLeod and Reydon (2013), O’Connor (2019), and Slater (2015); Khalidi (2013) and Magnus (2012) both offer overviews of the debate.

7

I discuss this example in Massimi (2012c). See also Moyal (2004).

8

In 2006, the International Astronomical Union re-classified Pluto as a ‘dwarf planet’ because its mass is not large enough for it to have cleared the debris in the neighbourhood of its orbit.

9

I have addressed Quine’s naturalism as part of my earlier defence of a type of naturalized Kantianism about kinds (Massimi 2014). In that article, I was interested in responding to a series of classical objections against Kantianism about kinds and widespread conflation of Kantianism with constructivism. In this chapter, I do not engage with this topic as such, but I build and expand on some of these original ideas.

10

For a recent defence of a cluster version of descriptivism, see Häggqvist and Wikforss (2018); and for a defence of Putnam’s semantic argument, see Hoefer and Martí (2019). See Beebee and Sabbarton-Leary (2010) for a collection of essays on the topic.

11

Marion Godman, for example, has defended the notion of ‘cultural homologue’ to describe the social kinds of anthropologists and social scientists (e.g. social democracy, among others). She defines a cultural homologue as one that ‘contains systematically arranged information or content . . . [which] is typically a combination of factual knowledge about the world and prescriptive or practical knowledge’ (Godman 2015, pp. 500–501, emphasis in original). Godman et al. (2020) have further explored the extent to which the notion of historical lineage from Ruth Millikan’s (1998, 1999) influential work can be reconciled with a suitable version of essentialism about natural kinds.

12

Hacking, for example, comments on how ‘the Peirceian conception seems to rule, at present’, and cites Putnam and Kripke to illustrate how Mill-kinds have been transformed into Peirce-kinds: ‘There are objective laws obeyed by multiple sclerosis, by gold, by horses, by electricity; and what is rational to include in these classes will depend on what those laws turn out to be’ (Putnam 1983, p. 71, emphasis in original; quote from Hacking 1991, p. 121).

13

‘[B]iological theory offers no reason to expect that any such privileged relations exist, since higher taxa are assumed to be arbitrarily distinguished and do not reflect the existence of real kinds’ (Dupré 1981, p. 78).

14

It goes beyond the scope and aim of the present discussion to enter into the so-called species problem in philosophy of biology, on which a variety of philosophical views have been advanced and defended over the past two decades (see, e.g., Ereshefsky 2010; Ghiselin 1974; Griffith 1999; Hull 1978; Kitcher 1984; Millikan 1999; Okasha 2002).

15

I shall return to this topic in more detail in Chapter 9, where I flesh out why I do not endorse essentialism about natural kinds.

16

I share Ruth Millikan’s (1999) argument for replacing what she called ‘eternal natural kinds’ with ‘historical natural kinds’. Millikan sees the continuity and uniformity of kinds as rooted in their historical lineage. This is particularly evident in the case of biological kinds: ‘Cats must, first of all, be born of cats, mammals must have descended from a common ancestor, and so forth. Biological kinds are defined by reference to historical relations among the members, not, in the first instance, by reference to properties. Biological kinds are, as such, historical kinds. . . . [M]embers of these kinds are like one another because of certain historical relations they bear to one another (that is the essence) rather than by having an eternal essence in common’ (p. 54). I agree and take on board Millikan’s insight here. I would add to her observation that these historical relations are not just (or only) phylogenetic relations (cats from cats, etc.). They are not just the outcome of breeding and cladistics considerations. They are also the product of perspectival and multicultural scientific history. As I am going to argue in detail in Chapters 8 through 10, it is not just biological kinds that are historical kinds, but physical and chemical kinds too. They are all Natural Kinds with a Human Face.

17

See Arabatzis (2006) and Falconer (1987, 2001) for excellent historical accounts of this episode. I recount it in some detail in Chapter 10.

18

To understand earlier uses of natural kind terms such as Priestley’s ‘dephlogisticated air’ to refer to what we now call ‘oxygen’, Stanford and Kitcher argue that ‘some kind of description must play the role of samples and foils in the act of grounding reference, but whether this is a description of internal structure, causal role, causal mechanism or something else altogether will vary with the term-type and even with the term-token under consideration’ (2000, p. 125, emphasis in original).

19

I thank Julia Bursten for raising this question to me in a reading group.

20

A variety of terms were used at the time, ranging from ‘igneous fluid’ to ‘fire matter’, ‘heat matter’, the ‘principle of heat’, the ‘matter of fire’, although in Lavoisier’s Traité élémentaire de chimie the new official nomenclature of ‘caloric’ (calorique) was introduced. The term ‘matter of fire’ was a term of art in a well-defined Newtonian tradition going back to Boerhaave (1732/1735) and even Kant (1755/1986). On Lavoisier’s caloric theory, see Morris (1972).

21

For an analysis of the role of experimentation in the Chemical Revolution and how it bears on debates about realism and perspectivism, see Jacoby (2021).

22

‘[T]he water produced by melting the ice during its cooling is collected, and carefully weighed; and this weight, divided by the volume of the body submitted to experiment, and multiplied into the degrees of temperature which it had above 32° at the commencement of the experiment, gives the proportion of what the English philosophers call specific heat’ (Lavoisier 1799, p. 429). As Heilbron (1993, p. 104) notes: ‘From measurements of the temperature of the gas on entry and exit, the rate of flow, and the quantity of melted ice, they could compute a value for the heat capacity of the specimen under study. In experiments performed during the winter 1783/4 but not published until 1805, they made the specific heat of oxygen to be 0.65, and the specific heat of air 0.33031(!), that of an equal weight of water’.

23

See Chapter 5, footnote 24. As clarified already there, my goal is not to contribute to any formal framework for the semantics or probabilistic logic of indicative conditionals, but simply to pay attention to model-based inferential reasoning that scientists make with perspectival modelling. This is another case in point.

24

And when it comes to its nature, a variety of hypotheses are available. Among the current candidates are hypothetical WIMPs (weakly interacting massive particles), whose weak interaction with ordinary matter could lead to recoils of atomic nuclei detectable using large liquid xenon chambers located underground. One possible WIMP candidate is the so-called neutralino, the ‘lightest supersymmetric particle’ (LSP), searches for which at the Large Hadron Collider (CERN), among other experiments, have given null results to date. Similarly, direct detection searches for dark matter candidates at two of the largest experiments, LUX in South Dakota and PandaX-II in China’s JinPing underground laboratory, have produced null results so far (see Akerib et al., LUX Collaboration 2017; Tan et al., PandaX-II Collaboration 2016). Alternative possible candidates for dark matter are hypothetical particles called axions (see Di Vecchia et al. 2017), gravitinos (see Dudas et al. 2017), self-interacting dark matter (SIDM), and hypothetical superheavy and super-weakly interacting particles called WIMPzillas (see Kolb and Long 2017).

25

The universe has long been known to be expanding, with the Hubble constant H0 measuring the current rate of expansion. Ade et al., Planck Collaboration (2015) performed an indirect and model-dependent measurement of the Hubble constant based on ΛCDM and the cosmic microwave background (CMB). More recent measurements using Type Ia supernovae calibrated with Cepheids (see Riess et al. 2016) have led to an estimated value for H0 of 73.24 ± 1.74 km s−1 Mpc−1. This value is in 3.4σ tension with Aghanim et al., Planck Collaboration (2016). Recent research has further increased the ‘tension’ between the value of the Hubble constant obtained from Planck’s model-dependent early-universe measurements and the values obtained from more model-independent late-universe probes. In particular, members of the H0LiCOW (H0 Lenses in COSMOGRAIL’s Wellspring) collaboration, using a further set of model-independent measurements based on the time delays of quasar light gravitationally lensed by foreground galaxies, have recently (2019) measured the Hubble constant at 73.3 ± 1.7 km s−1 Mpc−1. Wendy Freedman et al. (2019) from the University of Chicago used measurements of luminous red giant stars to give another new value of the Hubble constant, 69.8 ± 1.9 km s−1 Mpc−1, which is roughly halfway between the Planck and H0LiCOW values. More data on these stars from the James Webb Space Telescope, which launched in December 2021, will shed light on this controversy over the Hubble constant, as will additional gravitational lensing data. See Verde et al. (2019) for a comprehensive overview of the state of the art in this debate.

26

Just to mention a few (non-exhaustive) examples here: the virial theorem enters into the calculation of the dynamical mass of the Coma cluster and the related inference to the possible presence of dark matter in (1). Force laws enter into Ostriker and Peebles’s (1973) N-body model and their estimate for the dark matter halo in the data-to-phenomena inference (2). The relativistic Doppler effect for light enters into Rubin and collaborators’ use of data about optical emission lines to establish the discrepancy between the surface brightness of the luminous mass of galaxies vis-à-vis their mass density in (3). The ΛCDM model, and hence the Friedmann equations, are assumed in the inference from the Sloan Digital Sky Survey data for BAO to the phenomenon of the matter power spectrum in (4); the relativistic Doppler effect enters again in (5), in going from the CMB data to the phenomenon of large-scale structure. Measurements of the light elements’ abundances (especially deuterium measurements) underpinning the phenomenon of the baryon density in (6) are typically compared with CMB-inferred constraints (see Burles et al. 2001; Steigman 2007). I thank Alex Murphy for helpful discussions on this point.
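As a rough illustration of how the virial theorem underwrites inference (1) (a schematic gloss, not Zwicky's actual numbers), for a cluster in virial equilibrium the dynamical mass scales as

\[
M_{\mathrm{dyn}} \;\sim\; \frac{\sigma^{2} R}{G},
\]

where σ is the velocity dispersion of the cluster galaxies and R the cluster radius. When the dynamical mass so estimated far exceeds the mass inferred from the cluster's total luminosity, the discrepancy points to unseen, that is, dark, matter.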
