
Contents
10.1. Die kleine h
10.2. Hebridean kelp, glass-blowing, and electrical research: J.J. Thomson’s perspective around 1897–1906
10.3. Grotthuss’s and Helmholtz’s electrochemical perspective ca. 1805–1881
10.4. Max Planck’s quantum perspective: the electric charge as a ‘natural unit’
10.5. Walking in the garden of inferential forking paths
10.6. Inferential blueprints encore and modally robust phenomena
10.7. Chains of conditionals-supporting inferences
10.8. Coda: what remains of truth?
Abstract
This chapter elucidates the nature and role of ‘truth-conducive conditionals-supporting inferences’ over time, seeing these inferences as joining the dots among modally robust phenomena and their lawlike dependencies. It is this inferential game that ultimately underpins the Neurathian strategy of NKHF. To illustrate this point, the chapter delves into an example taken from the history of the electron. It reconstructs how J.J. Thomson arrived at the identification of the charge-to-mass ratio for what he called a ‘corpuscle’ while working on cathode rays around 1897, and the role that situated epistemic communities, including glass-blowers and kelp-makers, played behind this discovery (Section 10.2). It turns then to a different perspective within which Grotthuss and later Helmholtz and others were studying the phenomenon of electrolysis (Section 10.3). And it briefly examines the influential treatment of the electric charge as a natural unit by Planck and the emerging quantum perspective (Section 10.4). These historical details feed into the philosophical analysis in the rest of the chapter, which returns to the notion of perspectival models as inferential blueprints and modally robust phenomena (Section 10.5). It illustrates how Grotthuss’s chain model and Thomson’s model of the Faraday tubes acted as inferential blueprints to support truth-conducive conditionals-supporting inferences. Section 10.7 zooms into the details of one such chain of conditionals-supporting inferences. The division of modal labour between indicative and subjunctive conditionals described in Chapter 5 is here applied to tease out and exemplify how our knowledge that there is an electric charge is the outcome of epistemic communities across scientific perspectives engaging in an inferential game of asking for reasons as to why any particular grouping of phenomena hangs together.
Arriving at each new city, the traveler finds again a past of his that he did not know he had: the foreignness of what you no longer are or no longer possess lies in wait for you in foreign, unpossessed places. . . .
Futures not achieved are only branches of the past: dead branches.
‘Journeys to relive your past?’ was the Khan’s question at this point, a question which could also have been formulated: ‘Journeys to recover your future?’
And Marco’s answer was: ‘Elsewhere is a negative mirror. The traveler recognizes the little that is his, discovering the much he has not had and will never have’.
Italo Calvino (1972/1997) Invisible Cities, pp. 24–251
10.1. Die kleine h
20 May 2019 marked World Metrology Day, celebrated by news headlines around the world: ‘The International System of Units—Fundamentally better’.2 It was the day that saw die kleine h replace le grand Kilo. For the first time in 130 years, the definition of one of the seven base units in the International System of Units (SI)—the kilogram—was forced into retirement and replaced with one based on what is regarded as a much better and more fundamental quantity: Planck’s constant h.
It was a unanimous decision of representatives of more than sixty nations gathered in Versailles at the General Conference on Weights and Measures. The French grand Kilo—or, better, its prototype built in 1889 out of platinum and iridium and kept in a vault at the Bureau of Weights and Measures in Paris—had to give way to a more reliable unit introduced by Max Planck back in 1900.
For a long time, metrologists had worried that the prototype in Paris was dissimilar to copies distributed around the world, with imperceptible and yet increasingly important changes occurring to its mass over time. Fundamental constants for their part do not need prototypes: they are unchanging over time, and, most importantly, they are accessible always and everywhere. What better way to secure the reliability of metrological practices than to have a unit of measure for mass based on one such fundamental constant, namely Planck’s h?
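In outline (and simplifying the metrological details), the 2019 redefinition works by fixing the numerical value of h exactly; since the second and the metre are already fixed via the caesium hyperfine frequency and the speed of light, the kilogram becomes whatever mass makes that fixed value of h come out right, realized in practice with instruments such as the Kibble balance:

\[
h = 6.626\,070\,15 \times 10^{-34}\ \mathrm{J\,s} = 6.626\,070\,15 \times 10^{-34}\ \mathrm{kg\,m^{2}\,s^{-1}}
\quad\Longrightarrow\quad
1\ \mathrm{kg} = \frac{h}{6.626\,070\,15 \times 10^{-34}\ \mathrm{m^{2}\,s^{-1}}}.
\]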
Metrology is not the only domain where the quest for stable, unchanging, and universally accessible fundamental units matters. Such a quest permeates science more broadly. I argue in this chapter that this very same quest is often at play in how we talk and think of natural kinds in relation to historically identified and open-ended groupings of phenomena. It indeed explains the perceived unanimity and projectibility surrounding the natural kinds we know and love.
Consider the natural kind electron. Physics textbooks tell us that the electron is defined by a number of relevant features: its negative electric charge is the physical constant e; it has a spin of ½ in units of h/2π, and a rest mass of 0.511 MeV. Moreover, the dependencies among these relevant features—between charge and mass, or between half-integral spin and Fermi–Dirac statistics, among others—are lawlike. Some of these lawlike dependencies are causal in nature, as when one observes cathode rays bending by increasing the strength of the electric or magnetic field. Others are non-causal in nature, as in the relation between half-integral spin and Fermi–Dirac statistics, as already discussed in Chapter 5.
These lawlike dependencies are at work in a number of modally robust phenomena that over time epistemic communities have learned to identify through reliable data-to-phenomena inferences: from the phenomenon of the bending of cathode rays to spectroscopic phenomena about alkali,3 from electrolysis to black-body radiation, just to give a few examples.
My realism about modally robust phenomena goes hand-in-hand with the Neurathian strategy on ‘Natural Kinds with a Human Face’ (NKHF). It does not eliminate natural kinds. But it does not take them either as the metaphysical seat of essences or as conventional labels. They are instead Spinozian sortal concepts that stand for open-ended groupings of phenomena. And Spinozian sortals are nothing but proxies for the ‘exchange rate’ among phenomena.
In this chapter, I complete my account of NKHF. That calls for a return to the inferences upon which kinds-in-the-making eventually become evolving kinds. Appealing to a sort-relative sameness relation sheds light on the mechanism underneath NKHF. But something is still missing: how is it that one can evaluate as veridical claims of knowledge concerning kinds-in-the-making that evolve through historical journeys across scientific perspectives? Recall my definition:
(NKHF)
Natural kinds are (i) historically identified and open-ended groupings of modally robust phenomena, (ii) each displaying lawlike dependencies among relevant features, (iii) that enable truth-conducive conditionals-supporting inferences over time.
This chapter clarifies the last condition here. How can we veridically maintain that there are indeed electrons if our perspectival representations of them have changed radically? How can science ever be expected to offer a ‘window on reality’ if, at best, scientific representations reflect the agent’s situated point of view?
I complete my answer here by placing centre-stage the perspectival2 nature of our scientific representations and by showing how we do get a window on natural kinds, not in spite of but by virtue of our perspectival1 data-to-phenomena inferences. After all, recall from Chapter 2 how perspectival representations in science, despite being always from a specific vantage point (perspectival1), can nonetheless give us a ‘window on reality’ (perspectival2).
I illustrate this point by recounting one particular episode out of the history of the electron. The electron is probably the best understood particle in contemporary physics. Much as we are all realists about the electron today, the story of the electron is ongoing. The electron is a paradigmatic example of what I’d like to call evolving kinds. And so much could be written about its puzzling quantum mechanical aspects that they deserve a book of their own.4
But I have a more modest philosophical goal. If you think that quantum mechanics is baffling, the earlier history of the electron is even more so. Physics textbooks teach that the electron is an elementary particle defined by a series of kind-constitutive properties: charge, mass, and spin. But a quick glance at the history of our coming to know about the negative electric charge e reveals the deeply perspectival nature of our scientific representations. Our veridically maintaining that there is an electron with charge e is—as my definition (iii) has it—the outcome of truth-conducive conditionals-supporting inferences enabled by a historically identified open-ended grouping of modally robust phenomena.
In the next sections, I recount how Planck’s constant h played a role in our coming to know that there is an electric charge and about what it is.5 I illustrate how the realist commitment to the electric charge crystallized around a number of perspectival data-to-phenomena inferences between 1897 and 1906. These inferences involved three main scientific perspectives broadly construed: the Faraday–Maxwell field-theoretical perspective, in which J.J. Thomson was working (Section 10.2); the electrochemical perspective, to which Grotthuss and Helmholtz contributed (Section 10.3); and the emerging quantum perspective championed by Max Planck (Section 10.4).
Evidence for the electric charge appeared independently in each of these perspectives, no matter how diverse the data and data-to-phenomena inferences were in each case. The unexpected unanimity of natural kinds is not our convergence on a metaphysics of essential properties. It is a long and painstaking process of negotiation. Natural kinds as evolving kinds are the products of our perspectival scientific history and our collective willingness to engage in ‘giving and asking for reasons’ (to echo Brandom 1998) in a conditionals-supporting space of inferences, to which I return in Section 10.7.
10.2. Hebridean kelp, glass-blowing, and electrical research: J.J. Thomson’s perspective around 1897–1906
In 1906, J.J. Thomson was awarded the Nobel Prize for his ‘theoretical and experimental investigations on the conduction of electricity by gases’.6 The award did not mention the electron as such because Thomson’s experiments with cathode rays in 1897–1898 did not lead him to the conclusion that ‘the electron exists’. The Presentation Speech by J.P. Klason, President of the Royal Swedish Academy of Sciences, is telling when read in conjunction with Thomson’s own acceptance speech. Klason mentioned Thomson’s work with H.A. Wilson (building on C.T.R. Wilson’s method) on the discharge of electricity through gases, and presented Thomson as following in the footsteps of Maxwell and Faraday, especially Faraday’s 1834 discovery of the law of electrolysis, which had shown
that every atom carries an electric charge as large as that of the atom of hydrogen gas, or else a simple multiple of it corresponding to the chemical valency of the atom. It was, then, natural to speak, with the immortal Helmholtz, of an elementary charge or, as it is also called, an atom of electricity, as the quantity of electricity inherent in an atom of hydrogen gas in its chemical combinations. Faraday’s law may be expressed thus, that a gram of hydrogen, or a quantity equivalent thereto of some other chemical element, carries an electric charge of 28,950 × 10¹⁰ electrostatic units. Now if we only knew how many hydrogen atoms there are in a gram, we could calculate how large a charge there is in every hydrogen atom.7
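Klason’s inference can be made explicit with the one number that was precisely the unknown at the time, namely how many hydrogen atoms there are in a gram (in effect, Avogadro’s number). Using the modern value purely for illustration:

\[
e \;\approx\; \frac{28{,}950 \times 10^{10}\ \mathrm{esu\ per\ gram\ of\ hydrogen}}{6.02 \times 10^{23}\ \mathrm{atoms\ per\ gram}} \;\approx\; 4.8 \times 10^{-10}\ \mathrm{esu}.
\]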
Having presented Thomson as the scientist who ‘by devious methods’ was able to answer this puzzle, Klason added (almost as a caveat) that ‘even if Thomson has not actually beheld the atoms, he has nevertheless achieved work commensurable therewith, by having directly observed the quantity of electricity carried by each atom. . . . These small particles are called electrons and have been made the object of very thorough-going researches on the part of a large number of investigators, foremost of whom are Lenard, last year’s Nobel Prize winner in Physics, and J.J. Thomson’.8 The qualification ‘even if Thomson has not actually beheld the atoms’ is important. For the fact that Thomson did not refer to his particles as ‘electrons’ was not just a terminological matter: he did not quite see them as genuine particles having inertial mass,9 and believed that there were positive and negative electric charges whose field-theoretical behaviour was captured by what elsewhere he had modelled as a ‘Faraday tube’.
But today Thomson has gone down in history as the discoverer of the electron. And for good reasons too, thanks to his precise experiments on cathode rays. Exhausted glass tubes had been a tool for electrical research since the time of Faraday in the 1830s.10 Later on, William Crookes developed an active interest in producing high-quality exhausted tubes that were pivotal for his research on cathode rays, radiometry (the latter was deeply entangled with Crookes’s spiritualistic beliefs), and, last but not least, the commercial manufacture of light bulbs.11 Crookes went as far as training his research assistant Charles Gimingham in glass-blowing in the 1870s (see Gay 1996, p. 329), in addition to relying on the expertise of two professional women glass-blowers in his lab.
And Crookes was not the only one to have in-house assistants trained in glass-blowing. J.J. Thomson himself at the Cavendish Lab had an assistant, Ebenezer Everett, who specialized in producing bespoke glass tubes for Thomson’s experiments (see Crowther 1974). As Jaume Navarro (2012, p. 51) reports: ‘Ebenezer Everett . . . became the Cavendish glass blower in 1887, after training in the Chemistry Dept. . . . The task of blowing glass was crucial for the kind of experiments that Thomson was performing on the discharge of gases in tubes, and Everett proved to be very successful at this job, as Thomson always acknowledged’.
An industry of glass-blowing developed in the nineteenth century around optical and electrical research. British glass manufacture at the time resorted to lead oxide, which hampered electrical conductivity. Hence British scientists preferred the use of what were known at the time as German or French glasses, which instead of lead used soda (see Gray and Dobbie 1898, p. 42). Increasingly, glass manufacturers such as Powell and Sons of Whitefriars in London were asked by scientists like Crookes to produce lead-free glass for their experiments (see Powell 1919). Such requests intensified during World War I when the lines of supply of German-made glass were cut off. To reduce the melting temperature of the glass, the British glassware industry increasingly relied on alkali flux obtained from the ashes of seaweeds (kelp).12
The practice of using kelp for glass-making had been part of the economy of local communities on the West Coast of Scotland and also in Ireland since the eighteenth century (see McEarlean 2007). Samuel Johnson reported the practice as early as 1775 in ‘A Journey to the Western Islands of Scotland’, where the kelp trade is said to have sparked litigation on the Isle of Skye between the Macdonald and Macleod clans for a ledge of rocks rich in seaweed.13 While the kelp trade proved lucrative for local clans (see Gray 1951), the local population of the Western Isles did not enjoy similar fortunes.14 It is against this socioeconomic backdrop of kelping that glass manufacture for scientific research took place.
The manufacture of kelp-fluxed glass continued throughout the eighteenth and into the nineteenth century and played an important role in the development of chemical research by the Scottish-based Joseph Black and Lyon Playfair, with the glassware laboratory in Leith (Edinburgh) producing the glass used in the University of Edinburgh laboratories (see Kennedy et al. 2018). Some of the glass tubes in the electrostatic induction section of the Playfair Collection at the University of Edinburgh, for example, reveal a high calcium percentage with a ‘presence of strontium indicating that kelp was used as the alkali flux’ (p. 260).
And it was not just chemistry that benefited from lead-free glassware but also and especially the blossoming field of electrical research. J.J. Thomson’s research on electrical conductivity in gases is deeply rooted in this long-standing scientific practice of producing high-quality (ideally lead-free) exhausted glass tubes, following in the footsteps of Faraday and Crookes but also Black and Playfair. While glassware was a key component of the experimental and technological resources available to Thomson’s scientific perspective, its modelling assumptions too deserve a closer look. In brief, using cathode ray glass tubes and relying on classical laws of electrostatics and magnetism, Thomson could measure the displacement of the cathode rays in the presence of an electric or magnetic field. From these experiments, he was able to establish the charge-to-mass ratio (e/m) at work in the modally robust phenomenon of cathode rays bending (or m/e, as Thomson still referred to it in 1897).
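How the deflection measurements yield the charge-to-mass ratio can be sketched in the standard textbook reconstruction of Thomson’s crossed-field method (the notation below is modern, not Thomson’s own; here E and B denote the applied field strengths, not the electrolytic charge E used later in the chapter). The electric and magnetic forces are first balanced so that the beam passes undeflected, which fixes the velocity; the electric deflection alone, over plates of length ℓ, then gives the ratio:

\[
eE = evB \;\Rightarrow\; v = \frac{E}{B},
\qquad
\theta \approx \frac{eE\ell}{m v^{2}} \;\Rightarrow\; \frac{e}{m} = \frac{\theta\,v^{2}}{E\,\ell} = \frac{\theta\,E}{B^{2}\ell}.
\]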
The charge-to-mass value was found to be stable under a range of background conditions: it was independent of the velocity, the kind of metal used for the cathode and anode, and the gas used in the tube.15 Most interestingly, under an additional range of interventions, the same lawlike dependency between charge and mass was observed to hold stably across a number of other phenomena in different domains, including electrolysis in chemistry and X-ray ionization in nuclear physics.16
Since the beginning of his career at the Cavendish Laboratory in Cambridge in the 1880s (and still visibly in the 1893 book Notes on Recent Researches in Electricity and Magnetism), Thomson’s research on electric discharge in gases took place within a well-defined scientific perspective, still popular in Cambridge at the turn of the twentieth century, that I am going to call—for lack of a better term—the Faraday–Maxwell perspective, with the caveat that such a perspective is not of course confined to or centred on the works of Faraday and Maxwell, and stretches well beyond those.17 This perspective was primarily concerned with electromagnetism: the interconversion of electricity and magnetism observed by Ørsted in Denmark and Faraday in England in the 1820s, which Maxwell sought to explain in the early 1860s with mechanical models of the ether. It centred on the field-theoretical analysis of the electromagnetic field (what Faraday had originally called ‘magnetic lines of force’).
But—as always with scientific perspectives—the perspective was not limited to (or exclusively centred on) a particular theoretical body of knowledge claims about electromagnetic phenomena. It equally involved the experimental and technological resources to advance them, including the aforementioned Hebridean kelping industry and glassware manufacture behind cathode ray tubes. But it also involved what I called second-order epistemic-methodological principles that justify the reliability of the knowledge claims so advanced. In this example, these included specific modelling assumptions concerning the so-called Faraday tubes, physically conceived as ‘tubes of electric force, or rather of electrostatic induction, . . . stretching from positive to negative electricity’ (Thomson 1893, p. 2).18
Faraday tubes were a way of modelling what we would now call ‘electric flux’ as a measure of the electric field strength, with the two charges (positive and negative) at the two ends of the tube. In the nineteenth century, this was a semi-classical way of conceiving the electric field as a collection of ethereal vortex tubes, carrying electrostatic induction. Thomson toyed with the model of Faraday tubes in 1891 as they allowed him to reconcile claims about the discrete nature of electricity emerging from electrochemical experiments with Maxwell’s electromagnetic field (whereby electricity was analysed primarily as electric displacement in a continuous field). Atoms of opposite electric charge connected by a Faraday tube could serve to represent molecules of electrolytes—polarized with the passage of electric current as in Grotthuss’s chain model of electrolysis.
Yet the Faraday–Maxwell perspective gave a perspectival representation of the electric charge in stark contrast with the one emerging from the electrochemical perspective (as we shall see in the next section). Thomson (1891) had made it clear from the outset that Faraday tubes were not just an expedient to visualize mathematical equations, but they had ‘real physical existence’ and that the contraction and elongation of such tubes could explain the passage of electricity through metals, liquids, and gases.19
Twelve years later, just three years before the Nobel Prize, Thomson returned to the topic in the Silliman Lectures in May 1903 at Yale University (Thomson 1904). These lectures provide an instructive example of his long-standing ontological commitment to electric charge at the dawn of the new century (just when Planck was ushering in quantum physics). Four aspects of Thomson’s methodological commitment to Faraday tubes deserve comment:
(1) Thomson’s treatment of the electric charge is still deeply rooted in the nineteenth-century Faraday–Maxwell tradition of lines of force and mechanical ether models for electromagnetic induction in analogy with hydrodynamics. Thomson refers to the Faraday tube as a ‘tube of force’ or a tubular surface marking the boundaries of lines of force so that ‘if we follow the lines back to the positively electrified surface from which they start and forward on to the negatively electrified surface on which they end, we can prove that the positive charge enclosed by the tube at its origin is equal to the negative charge enclosed by it at its end’ (p. 14). In this way, he explained the old ideas of positive and negative electricity with ‘each unit of positive electricity in the field . . . as the origin and each unit of negative electricity as the termination of a Faraday tube’ (p. 15).
(2) The boundary between Thomson’s corpuscles and the Faraday tube is a lot more subtle than it might seem. The mass of the Faraday tube is nothing but the mass of the bound ether, or, as Thomson puts it, ‘the mass of ether imprisoned by a Faraday tube’ (p. 39). The term ‘corpuscle’ is introduced to refer to ‘those small negatively electrified particles whose properties we have been discussing. On this view of the constitution of matter, part of the mass of any body would be the mass of the ether dragged along by the Faraday tubes stretching across the atom between the positively and negatively electrified constituents’ (p. 50). Thus, Thomson’s corpuscle is effectively nothing but a ‘concentration of the lines of force on the small negative bodies’ so that ‘practically the whole of the bound ether is localised around these bodies, the amount depending only on their size and charge’ (p. 52).
(3) The electric charge is presented as a natural unit and its atomicity is explained in terms of Faraday tubes.20 However, by contrast with the electrochemical perspective, the reasoning leading to Thomson’s conclusion that the electric charge is somehow atomistic does not rely exclusively on electrolysis but also on Wilson’s experiments on the conductivity of the vapour obtained from metallic salts (the so-called electron vapour theory).21
(4) An explanation of Röntgen rays is given in classical terms of corpuscles and Faraday tubes, with no reference to the quantum hypothesis or to electrons losing part of their quantized energy, which feature in Planck’s contemporary treatment of the topic (Section 10.4).
Thomson started with data from cathode ray tubes, which were made possible by a century-long history of kelping behind the glassware industry for scientific instruments. He built on Faraday’s lines of force and Maxwell’s honeycomb model of the ether, and resorted to a field-theoretical model that made it possible to infer what might happen, under the supposition that the Faraday tubes were stretched and elongated. For this conditionals-supporting inference to be truth-conducive, other perspectival data-to-phenomena inferences had to be brought to bear on it across a network of inferences that eventually guided epistemic communities to the correct identification of the electric charge. This was indeed what happened.
10.3. Grotthuss’s and Helmholtz’s electrochemical perspective ca. 1805–1881
The Faraday–Maxwell field-theoretical perspective on electromagnetic phenomena such as cathode rays bending was at some distance from what I call the electrochemical perspective. In 1874, G. Johnstone Stoney used Faraday’s law of electrolysis to conclude that in the phenomenon of electrolysis, ‘For each chemical bond which is ruptured within an electrolyte a certain quantity of electricity traverses the electrolyte which is the same in all cases’ (Stoney 1874/1894, p. 419). Stoney introduced the term ‘electron’ to describe this minimal quantity of electricity.22 In 1881, Hermann von Helmholtz in Germany championed the hypothesis that elementary substances were composed of what he called ‘atoms of electricity’23 (or ‘ions’, as Lorentz later called them). He motivated and justified this view in light of chemical studies of electrolysis going back to the German chemist Theodor von Grotthuss, who in 1805–1806 had published his influential chain model for water electrolysis.
The atoms of electricity were regarded here as the minimum quantity carried by electrolytes (or by the hydrogen atoms) when molecules decomposed with the passage of electricity. Helmholtz’s argument originated from Faraday’s first and second laws of electrolysis, which had established that the electric charge of hydrogen atoms (or what we now know to be their valence electrons) was a fundamental unit not further divisible. Helmholtz’s reasoning for taking the electric charge as a physical unit (and in Britain, Stoney’s analogous reasoning) was entirely chemical, rooted in the well-known tradition of eighteenth- and nineteenth-century electrolytical experiments and a long-standing debate on the animal vs metallic nature of electricity going back to Galvani’s frogs and Volta’s electric pile (see Pauliukaite et al. 2017). What made e a minimal unit under this perspective was the fact that it was the charge corresponding to chemical valence 1. Thus, a different data-to-phenomena inference was at play in this scientific perspective, one that fed into the indicative conditional
(E.1) If hydrogen and oxygen form a Grotthuss’s chain, hydrogen is released at the negative electrode.
By physically conceiving of a minimal (positive and negative) electrical unit for the ions of electrolytes standing in a chain, Grotthuss’s model could be used to explore what might happen in the well-observed phenomenon whereby water molecules decompose with the passage of electricity with oxygen at one end and hydrogen at the opposite one.
Bringing this kind of information to bear on J.J. Thomson’s perspective proved key in this story. As Thomson himself recounted in his Nobel Prize speech, it became apparent that there was a disparity between the ratio E/M of the hydrogen atom (known from the phenomenon of water electrolysis) and the ratio e/m emerging from the phenomenon of cathode rays bending within the Faraday–Maxwell perspective. A numerical discrepancy appeared, of the order of e/m = 1,700 E/M. This led Thomson to the following reasoning, which was pivotal for the identification of the ‘corpuscle’ as the first sub-atomic particle:
We have already stated that the value of e found by the preceding method [i.e. Wilson’s]24 agrees well with the value E which has long been approximately known. Townsend has used a method in which the value e/E is directly measured, and has shown in this way also that e is equal to E. Hence since e/m = 1,700E/M, we have M = 1,700 m, i.e. the mass of a corpuscle is only about 1/1,700 part of the mass of the hydrogen atom. (Thomson 1906, p. 153)
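The arithmetic in this passage is worth spelling out, with modern values added only for comparison (they were of course not available to Thomson in this form):

\[
\frac{e}{m} \approx 1{,}700\,\frac{E}{M}
\quad\text{and}\quad
e = E
\;\;\Longrightarrow\;\;
M \approx 1{,}700\,m .
\]

Taking the modern mass of the hydrogen atom, roughly 1.67 × 10⁻²⁴ g, the corpuscle’s mass comes out at about 10⁻²⁷ g, close to the modern electron mass of 9.1 × 10⁻²⁸ g (the modern value of the ratio is about 1,836 rather than 1,700).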
But the inferences that led to the electric charge were not confined to phenomena about water electrolysis and the bending of cathode rays in an external field (in addition to other phenomena that I do not have the space to cover here). On the other side of the Channel, German physicists were laying the foundations of a new scientific perspective, which was bound to have a lasting impact on the story so far.
10.4. Max Planck’s quantum perspective: the electric charge as a ‘natural unit’
In the Preface to the Second Edition of The Theory of Heat Radiation, Planck announced that the value for e he had deduced from radiation theory lay in between the values of Perrin and Millikan. More importantly, he presented the idea of ‘elementary quanta of electricity’ as the most important new evidence in support of his hypothesis of the quantum of action:
Recent advances in physical research have, on the whole, been favorable to the special theory outlined in this book, in particular to the hypothesis of an elementary quantity of action. . . . Probably the most direct support for the fundamental idea of the hypothesis of quanta is supplied by the values of the elementary quanta of matter and electricity derived from it. When, twelve years ago, I made my first calculation of the value of the elementary electric charge and found it to be 4.69 × 10⁻¹⁰ electrostatic units, the value of this quantity deduced by J.J. Thomson from his ingenious experiments on the condensation of water vapour on gas ions, namely 6.5 × 10⁻¹⁰, was quite generally regarded as the most reliable value. This value exceeds the one given by me by 38 per cent. Meanwhile the experimental methods, improved in an admirable way by the labors of E. Rutherford, E. Regener, J. Perrin, E.A. Millikan, The Svedberg and others, have without exception decided in favor of the value deduced from the theory of radiation which lies between the values of Perrin and Millikan.
To the two mutually independent confirmations mentioned, there has been added, as a further strong support of the hypothesis of quanta, the heat theorem which has been in the meantime announced by W. Nernst, and which seems to point unmistakably to the fact that, not only the processes of radiation, but also the molecular processes take place in accordance with certain elementary quanta of a definite magnitude. (Planck 1906/1913, p. vii)
With these words, Planck established a tradition with far-reaching philosophical consequences. The idea of an elementary electric charge corroborated his quantum hypothesis and showed how it could be extended beyond the radiation of the black-body, into the nature of matter and electricity. And there was no better evidence for this than to identify e as a physical constant (along the lines of Planck’s own constant h) and present the experiments of Thomson, Rutherford, Perrin, and Millikan as all dealing with the same task: to measure the value for the elementary charge. Planck’s desire to find a connection between h and other physical constants was revealed by Max Klein in a letter to Ehrenfest of 6 July 1905, at a time when the existence of an elementary charge quantum e was only a conjecture. As reported by Klein, Planck was keen to find a ‘bridge’ between his quantum hypothesis h and the experimentally found values for e (see Holton 1973, p. 176 fn. 19).
In Chapter 4 of The Theory of Heat Radiation, Planck returned to the hypothesis of quanta and the temperature of black-body radiation from a system of stationary oscillators and embarked on what in my view is an illuminating journey into the nature of physical constants. After introducing Planck’s constant, he went on to a discussion of the kinetic theory of gases and how to estimate the number of hydrogen molecules contained in 1 cm³ of an ideal gas at 0° Celsius and 1 atmosphere. He concluded that the ‘elementary quantity of electricity or the free charge of a monovalent ion or electron’ e in electrostatic units is 4.67 × 10⁻¹⁰, adding that ‘the degree of approximation to which these numbers represent the corresponding physical constants depends only on the accuracy of the measurements of the two radiation constants’ (Planck 1906/1913, p. 173).25 In a single stroke, Planck effectively established:
(i) the theoretical equivalence of the ‘free charge of a monovalent ion or electron’ with the ‘elementary quantity of electricity’ (it is worth stressing Planck’s ambiguous use of the double terminology of Lorentz’s ‘ions’ as interchangeable with Stoney’s ‘electron’);26
(ii) the identification of the ‘elementary quantity of electricity’ e with a ‘physical constant’ among others in the context of black-body radiation;
(iii) the accuracy in the values of the physical constant e as depending on the refined measurements of the radiation constants (a schematic reconstruction of this derivation is sketched below).
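A schematic reconstruction of that derivation, in modern notation rather than Planck’s own, runs as follows: fitting the black-body law to the measured radiation constants yields h and Boltzmann’s constant k; k together with the gas constant R gives the number of molecules per mole; and that number, together with the charge transported per gram-equivalent in electrolysis (the Faraday constant F), gives the elementary quantity of electricity. With Planck’s rounded value for k and the period’s value of F:

\[
N = \frac{R}{k} \approx \frac{8.31 \times 10^{7}\ \mathrm{erg\,mol^{-1}\,K^{-1}}}{1.35 \times 10^{-16}\ \mathrm{erg\,K^{-1}}} \approx 6.2 \times 10^{23}\ \mathrm{mol^{-1}},
\qquad
e = \frac{F}{N} \approx \frac{2.9 \times 10^{14}\ \mathrm{esu\,mol^{-1}}}{6.2 \times 10^{23}\ \mathrm{mol^{-1}}} \approx 4.7 \times 10^{-10}\ \mathrm{esu},
\]

which reproduces, to rounding, the values of 4.67–4.69 × 10⁻¹⁰ esu cited above.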
The ambiguity in the terminology ion/electrons is, in my view, symptomatic of Planck’s disengagement from the ontological debate about the nature of the minimal unit of electric charge (and of atoms more generally).27 For Planck, electric charge helped establish the validity and universal applicability of the quantum hypothesis. And what up to that point had been just a hypothesis—the ‘ion hypothesis’, as the German physicist Paul Drude still called it—had become in Planck’s hands a ‘natural unit’.
Drude’s electron gas theory was an important influence on Planck (see Kaiser 2001). Drude himself was working on metal optics, and on how to explain phenomena such as the dispersion of light and optical reflection from metal surfaces within Maxwell’s electromagnetic theory. Building on van’t Hoff’s kinetic theory of osmotic pressure, Drude patterned electrical conductivity in metals on the model of the kinetic theory of gases, and used Boltzmann’s equipartition theorem with the universal constant α to establish that ‘If a metal is now immersed in an electrolyte in the case of ‘temperature-equilibrium’ [that is, thermodynamic equilibrium] the free electrons [‘kernels’] in the metal would have the same kinetic energy as the ions in the electrolyte’ (Drude 1900, quote from Kaiser 2001, p. 258).
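In the modern textbook rendering of the Drude model (not Drude’s own 1900 notation, in which the mean kinetic energy appears as αT), the two assumptions just described amount to equipartition for the free carriers plus a simple collision expression for the electrical conductivity:

\[
\tfrac{1}{2}\,m\,\langle v^{2}\rangle = \tfrac{3}{2}\,kT,
\qquad
\sigma = \frac{n\,e^{2}\,\tau}{m},
\]

where n is the density of free electrons in the metal and τ the mean time between collisions; the same mean kinetic energy per carrier at a given temperature is what links the electrons in the metal to the ions in the electrolyte in the passage quoted above.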
Planck did not speculate on the nature of the minimal unit of electricity. He was more interested in identifying e as a physical constant and in establishing accurate values of various inter-related physical constants. What makes some units natural, according to Planck, are two features that we still identify with physical constants.
Physical constants are objective. Planck maintained that their holding does not depend on us qua epistemic agents: it is not meant to cater to our epistemic needs, or to our research interests. Physical constants are thus set aside from metrological considerations that typically apply to other units, for there is no conventional element presumably affecting their validity.
Physical constants are necessary. They are part of the fabric of nature: they exist and would have existed even if humankind had not existed (or had not developed our particular scientific history). The naturalness of these constants is tied to laws of nature, according to Planck. Their ‘natural significance’ is retained as long as the relevant laws ‘remain valid; they therefore must be found always the same, when measured by the most widely differing intelligences according to the most widely differing methods’ (Planck 1906/1913, p. 175).28
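The passage just quoted comes from Planck’s discussion of natural units of measurement, where combinations of h, the speed of light c, the gravitational constant G, and k define units of length, mass, time, and temperature without reference to any artefact or to any particular body (the formulas below use Planck’s h rather than the later ħ):

\[
\ell_{P} = \sqrt{\frac{hG}{c^{3}}}, \qquad
m_{P} = \sqrt{\frac{hc}{G}}, \qquad
t_{P} = \sqrt{\frac{hG}{c^{5}}}, \qquad
T_{P} = \frac{1}{k}\sqrt{\frac{hc^{5}}{G}}.
\]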
The introduction of the elementary quantity of electricity e in this context, then, marks an important shift in the debate about the nature of electric charge. It signals that ontological discussions about what the electron really is do not matter, because the fundamental unit is not the electron (or hydrogen atom or gas ion or corpuscle), but the electric charge. And electric charge is a physical constant. It is entrenched in laws of nature, whose validity—Planck insisted—holds always and everywhere.
10.5. Walking in the garden of inferential forking paths
How is it possible for different epistemic communities to reach the same conclusion (e.g. that something is and what it is), perspectival representations notwithstanding? Natural kind thought and talk depends on explaining why a bunch of phenomena are of the same sort while also making room for the possibility that things could have gone otherwise. While the sort-relative sameness relation sheds light on the mechanism, if you like, behind NKHF, there are still gaps to fill in. I anticipated that truth-conducive conditionals-supporting inferences ultimately explain how and why epistemic communities come to historically agree that a certain open-ended grouping of phenomena is a natural kind.
In this historical episode, the modally robust phenomena, each in its own domain, were beyond anyone’s doubt: the bending of cathode rays, the electrolysis of water, Röntgen rays, optical reflection in metals, and others. How did this coming to historically agree happen?
Here is a classical realist way of thinking about this. There is a world out there packed with natural kinds (e.g. the electron) having some distinctive properties (e.g. negative electric charge). Over time and with great experimental and theoretical efforts, scientists come to know the kinds and their properties. They might have some approximately true beliefs and other false beliefs about them. Thomson might be said to have had approximately true beliefs about the charge-to-mass ratio of his object of study but false beliefs when it came to Faraday tubes and all that. Over time, these false beliefs are rectified and eliminated as we get more true beliefs.
But here is another realist way of thinking about our coming to agree, which I am now putting to test with this episode. Let our fiat be not some granted metaphysical picture of the world out there but our ways of knowing the natural world. As long as we are ready to make this switch from a metaphysical fiat to an epistemological one, our starting point becomes the plurality of historically situated scientific practices—scientific perspectives—through which humankind has encountered the natural world as teeming with modally robust phenomena.
Different perspectives produce different perspectival representations. Helmholtz, Thomson, and Planck each operated with different perspectival1 representations of what we call the electric charge e in that they availed themselves of a variety of situated scientific practices pertinent to their respective scientific perspectives. Helmholtz identified the electric charge as a fundamental unit corresponding to chemical valence 1 via the Grotthuss chain model and decades of experiments on electrolytes. Thomson resorted to the Faraday tube to model electric flux, and to a century-long tradition of kelp-making and glass-blowing to run his experiments. And Planck deployed h as a way of programmatically rethinking units of measure and identified the electric charge with a fundamental constant, ushering in a tradition that continues to this day with die kleine h entering the SI.
What makes these representations perspectival1 is not therefore that they each represent a given property—electric charge—from different points of view. They are not representing a given content as portrayed from vantage point a rather than b or c. For establishing the existence of such a property and its nature was precisely what was at stake in all these investigations—the existence of the electric charge was not epistemically given. It was not the starting point but the end point of these scientific endeavours. If one knew in advance that there is indeed such a constant in nature, it would not be necessary to go through such a century-long painstaking experimental effort to measure it, to model it, to theorize about it.
It is in this specific sense that these representations can therefore also be said to be perspectival2 in the language of Chapter 2, where the two notions of perspectival1 and perspectival2 were presented as Janus-faced and complementing each other. They are perspectival2 in being directed towards establishing that there is indeed an electric charge and finding out its nature. In so doing they open for us a ‘window on reality’ thanks to methods, experimental tools, and modelling practices that were the expression of genuinely different scientific perspectives at the time—the electrochemical, the electromagnetic, and the quantum one—through which a plurality of data-to-phenomena inferences were reliably delivered.
In Philosophy and the Mirror of Nature (1979, pp. 330–331), Richard Rorty famously concluded about the controversy surrounding Galileo’s new discoveries that ‘Galileo won the argument, and we all stand on the common ground of the “grid” of relevance and irrelevance which “modern philosophy” developed as a consequence of that victory’. One could similarly be tempted to claim that Planck won the argument in 1906 and we all stand on the common ground of the ‘grid’ that quantum physics has developed as a consequence of that victory.
But the story I have told differs from the classic realist one as much as from its Rortian counterpart. Our unanimous agreement is neither the result of uncovering ‘hidden goings on’, nor the outcome of converging towards some final reality. Equally, pace Rorty, our unanimously coming to agree about the electric charge is not a matter of winners or losers.
It is instead the unpredictable, unforeseeable, and extraordinary epistemic feat of a plurality of epistemic communities in their historically and culturally situated scientific perspectives, and their sophisticated inferential game between 1897 and 1906 of ‘giving and asking for reasons’ (to echo once again Brandom 1998, p. 389) as to why a particular grouping of phenomena belong together. Progress takes place not in spite of but thanks to a plurality of scientific perspectives. I will return to the importance of a plurality of perspectives in my final chapter. But for now, let me clarify how I see the inferential reasoning at play here through:
(i) the use of perspectival models as inferential blueprints to identify modally robust phenomena;
(ii) the conditionals-supporting nature of the inferences linking various phenomena;
(iii) their being truth-conducive.
Let me unpack each in turn.
10.6. Inferential blueprints encore and modally robust phenomena
Willingness to engage with other epistemic agents occupying different scientific perspectives (synchronically and diachronically) is key to perspectivism as a pluralist view. I see our coming to unanimously agree that something is and about what it is as the outcome of conditionals-supporting inferences linking phenomena across different domains so that they come to be historically identified as being of the same sort. In the historical example I have briefly examined, the inferential game becomes the game of considering a number of phenomena (let us call them P1, P2, P3) in their respective domains that at the time had been historically identified via a plurality of perspectival data-to-phenomena inferences.
Recall how in Chapter 5 I defined perspectival models as inferential blueprints. The key idea was that
Perspectival models model possibilities by acting as inferential blueprints to support a particular kind of conditionals, namely indicative conditionals with suppositional antecedents.
The representational value of a blueprint consists in its ability to enable the relevant users to make relevant and appropriate inferences over time. The perspectival models offer instructions to an often diverse range of epistemic communities for making relevant and appropriate inferences about the phenomena of interest within broad constraints. Just as architectural blueprints offer a sketch of a building’s shape, proportions, and relations among the relevant parts, perspectival models sketch the lawlike dependencies among relevant features of the phenomena at stake.
Faraday tubes and Grotthuss’s chain model are examples of perspectival models qua inferential blueprints. Consider again Grotthuss’s 1805–6 model—still conceived within the electrochemical perspective which at the time featured the Galvani–Volta controversy and a plurality of models about the nature of animal vs metallic electricity. The model supposed that water formed a chain of positive and negative charges that would be released at the two ends of the electrodes as a way of exploring how electricity might affect water and other fluids.
The model acted as an inferential blueprint for a series of experimental observations run by Michael Faraday in London in the 1830s, observations that eventually revealed stable events in the form of lawlike dependencies across a wide array of electrolytic substances. Grotthuss conceived his model using Volta’s pile, which he took to have poles with opposite attractive and repelling forces. Thirty years later, Faraday did not believe that electrochemical decomposition was the effect of the powers between opposite poles.
Yet just as architectural blueprints give teams of different craft workers instructions about how to build a house, Grotthuss’s model gave Faraday and other scientists of the time helpful instructions for experiments. In Faraday’s case, the experiments were designed to show that ‘for a constant quantity of electricity, whatever the decomposing conductor may be, whether water, saline solutions, acids, fused bodies . . . the amount of electrochemical action is also a constant quantity’ (Faraday 1833/2012, vol. I, p. 145, emphasis in original). Grotthuss’s model equipped Faraday with an inferential blueprint for thinking about the outcome of his experiments with a variety of oxides, chlorides, and salts. Faraday concluded that ‘many bodies are decomposed directly by the electric current, their elements being set free; these I propose to call electrolytes’ (vol. I, p. 197, emphasis in original). The stability of the events observed by Faraday concerning the decomposition under the action of electricity was due to their inherent lawlikeness.29
Faraday went on to call it ‘the general law of constant electro-chemical action’ (Faraday 1833/2012, vol. I, p. 225, emphasis in original). The lawlike dependency that he saw as inherent in ‘the chemical decomposing action’ made the events stable under a number of changes in background conditions: in the intensity of the electricity, the location of the electrodes, or the conductivity or non-conductivity of the medium.
The associated phenomenon—the electrolysis of water, of saline solutions, acids, and so forth—was modally robust in the sense I explained in Chapter 6, Section 6.7.3. A triadic relation linked the data observed, the stability of the event (qua lawlike chemical decomposing that is constant for a constant quantity of electricity), and the perspectival inferences from the data to the stable event. The phenomenon of electrolysis was modally robust in that it could happen in more than one possible way and be identified and re-identified by different epistemic agents over time.
For example, independently of Grotthuss, Humphry Davy arrived at the same conclusion about water electrolysis, from a series of observations concerning electrified water in gold cones, agate cups, tubes of wax, tubes of resin, and so forth.30 Like any good architectural blueprint, Grotthuss’s model with its chain of positive and negative charges was amenable to being scaled up or down. In this case, the ‘scaling-up’ metaphor translates into how the charged ‘electrolytes’ (as Faraday called them) became the charged ‘idle wheels’ in Maxwell’s honeycomb model of the ether (1861–2/1890), which in turn served as an inferential blueprint for a different phenomenon: that of electromagnetic induction. In Maxwell’s model (see Bokulich 2015 and Massimi 2019c), ethereal vortices represented the magnetic field and its strength while idle wheels among vortices represented the electric displacement associated with the magnetic field. Such ethereal vortices accompanied by charged particles resurfaced with J.J. Thomson and his ‘Faraday tubes’, still described in the Silliman Lectures of 1903 (Thomson 1904) as a model for electrostatic induction.
Grotthuss’s 1805–6 chain model for water electrolysis and J.J. Thomson’s Faraday tubes for electrostatic induction are perspectival models. They were representing electricity from two different vantage points: the electrochemical and the electromagnetic perspectives. After all, the two phenomena—electrolysis (P1) and cathode rays bending (P2)—are different in nature. The former takes place at the scale of the molecules of chemical electrolytes. The latter occurs in the interaction between magnetism and electricity.
The relevant data-to-phenomena inferences in each case were also perspectival. Consider, for example, the wildly diverging views that existed throughout the late eighteenth and early nineteenth century about the nature of electricity at work in electrochemistry: from Galvani’s animal electricity to Volta’s metallic electricity, which still informed Grotthuss’s model, or the Victorian context of ether theory in late nineteenth-century Cambridge where Maxwell and Thomson developed their models (see, e.g., Siegel 1981), not to mention the craftsmanship of glass-blowing and of producing kelp-fluxed glass tubes (from Crookes to Thomson).
And yet, in spite of the perspectival inferences from the data, these two phenomena—electrolysis (P1) and cathode rays bending (P2)—proved to be modally robust in that each could happen in more than one possible way, and be re-identified over time. Moreover, as a distinctive type of model pluralism, perspectival modelling has a history of its own. The relevant models—from Grotthuss’s to Thomson’s—lie on a continuum, almost a genealogy. Modelling electricity required Grotthuss, no less than Davy, Faraday, Maxwell, and Thomson after him, to work multi-handed on a number of perspectival models qua inferential blueprints.
One of the distinctive challenges in this exploratory modelling exercise was to reconcile the continuous field-theoretic nature of phenomenon (P2) with the discrete corpuscular nature of phenomenon (P1). Thomson’s Faraday tubes were meant to offer a solution. They were in their own way a remnant of Maxwell’s ethereal vortices combined with discrete corpuscular opposite electric charges at each end—a distant cousin of Grotthuss’s chain model,31 as if Volta’s pile with its opposite electric charges had been coupled with mechanical models of the ether for electromagnetism. Faraday tubes in turn enabled scientists at the turn of the twentieth century to make novel inferences about the relevant phenomena (P1) and (P2) and use the observed lawlike dependencies to ultimately infer what Thomson called the ‘corpuscle’. But what should one say about the nature of the inferences here at play?
10.7. Chains of conditionals-supporting inferences
In Chapter 5, Section 5.7, I contended that the inferences supported by perspectival models can be expressed in terms of chains of indicative conditionals with suppositional antecedents. I also stressed that there is a clear division of modal labour between indicative and subjunctive conditionals in these inferences. Consider, for example, the difference between the following indicative conditional at play in this historical episode:
(E.1) If hydrogen and oxygen form a Grotthuss’s chain, hydrogen is released at the negative electrode.
and the subjunctive conditional (denoting the subjunctive with A rather than E)
(A.1) Were electrodes to be immersed in water, hydrogen would be released at the negative electrode.
The two conditionals conceal a crucial difference behind the syntactical difference between the present tense ‘is released’ and the subjunctive ‘would be released’. Although the consequent is the same in both cases, the subjunctive mode in (A.1) conveys the objective possibility of hydrogen being released, were the antecedent condition to hold. But the indicative conditional (E.1) conveys instead an implicit (unpronounced) epistemic possibility concerning hydrogen being released, under the supposition of the antecedent.
In my philosophical lingo, the subjunctive mode (A.1) speaks to the stability of the event under the antecedent’s holding—hydrogen’s being released at the negative electrode whenever the electrodes are immersed in water—its objective (non-epistemic) possibility being grounded in the lawlike causal dependency between quantity of electricity and electrochemical decomposition observed by Faraday. Hydrogen would still be released if electrodes were immersed in water, regardless of the nature of the metal used for the electrodes, for example.
By contrast, the indicative mode speaks to our epistemic attitudes when we judge whether the phenomenon P1 (water electrolysis) is likely to occur in the physically conceivable scenario described by Grotthuss’s model in the antecedent. This is the realm of perspectival models and of how epistemic agents use these models to physically conceive the scenario captured by the antecedent. As per Chapter 5, Section 5.7, indicative conditionals such as (E.1) are epistemic conditionals with an implicit (unpronounced) modal. Along the lines of Angelika Kratzer (2012), I maintain that (E.1) can be regarded as a bare conditional which is implicitly modalized as follows:
(E.1*) If hydrogen and oxygen form a Grotthuss’s chain, hydrogen may be released at the negative electrode.
The modal verb ‘may’ is again epistemic not in the sense of expressing the sheer belief of a particular epistemic agent or community, but in capturing instead possibilities concerning specific relations within the limits afforded by perspectivism. In this example, the implicit modal verb reflects the particular state of knowledge and perspectival model available to the epistemic community at the time to think and talk about what was objectively possible concerning the hydrogen, under the supposition that the water molecules formed an ionised chain as per Grotthuss’s model. As explained in Chapter 5, I see indicative conditionals as key to the inferential reasoning supported by perspectival models. They tell us that ‘Given the antecedent supposition, plus a number of auxiliary assumptions R, S, T, U, the consequent follows’, where ‘follows’ can be understood in a variety of ways (inductively, deductively, abductively) on a case-by-case basis. For example, (E.1*) can be unpacked as follows:
Let us physically conceive of hydrogen and oxygen as forming a Grotthuss’s chain; then—given auxiliary assumptions R, S, T, U—hydrogen may be released at the negative electrode.
Auxiliary assumptions R, S, T, U include water being a chemical compound rather than an element, and electricity being able to decompose it; but also other claims that have now long been forgotten and abandoned, including the idea of an ‘electropolar’ system in nature (see Pauliukaite et al. 2017).
Indicative conditionals often enter into long chains of inferential reasoning spanning several phenomena indexed to different domains and evinced through perspectival data, methods, models, and techniques. Consider, for example, the following chain of indicative conditionals-supporting inferences:
(E.1) If hydrogen and oxygen form a Grotthuss’s chain, hydrogen is released at the negative electrode.
(E.2) If ether vortices move as in Maxwell’s honeycomb model, electric current is displaced.
(E.3) If a Faraday tube of electrostatic induction is stretched and broken, free atoms of electricity are produced (be it in metals or liquid electrolytes).
(E.4) If free atoms of electricity in metals are conceived along van’t Hoff’s kinetic theory of osmotic pressure (as Paul Drude did), dispersion of light and reflection from metal surfaces ensue.
(E.5) If carriers of metallic conductivity are conceived along the model of Drude’s electron gases, the phenomenon of black-body radiation can be calculated.
(E.6) If the monovalent hydrogen ion is conceived along the lines of Planck’s quantum hypothesis, the quantum of electricity (measured from the radiation constants) is equal to 4.67 × 10⁻¹⁰ in electrostatic units.
The inferential chain (E.1)–(E.6) allowed physicists around 1897–1906 to conclude that something was (electric charge) and what it was (a quantum of electricity with a well-defined measurable value). Electric charge as a fundamental unit of nature is not a Lockean nominal essence with necessary and sufficient conditions for membership. For it is not just an itemized list of phenomena P1, P2, P3, . . . from electrolysis to electromagnetic induction, from metal conductivity to radiant heat. What is needed in addition is a set of instructions for epistemic communities—working across different scientific perspectives and willing to engage with one another—to reliably make informed decisions about how to proceed, what conclusions to draw, what tentative conclusions to discard, what further novel inferences to explore and probe about these phenomena and new ones too.
These instructions take the form of conditionals-supporting inferences like (E.1)–(E.6). Models are involved at different points in these inferences. Some of them are perspectival. But these are only a subset of a much larger family of scientific models routinely used to make these inferences, including phenomenological models such as Drude’s electron gas, and theoretical models such as Planck’s theory of black-body radiation.
How can a chain of conditionals-supporting inferences ever successfully deliver instructions as to how to proceed in the garden of forking paths? If these indicative conditionals (and their covert epistemic modals) are advanced by epistemic communities working within situated scientific perspectives at a particular time on the basis of limited evidence, how can they ever deliver any realist commitment on what there is?
One is reminded here of Marco Polo’s answer to Kublai Khan in Italo Calvino’s quote at the opening of this chapter. Situated communities recognize ‘the little that is theirs, discovering the much they have not had and will never have’. Futures not achieved are indeed only branches of the past: but dead branches. Our walk in the inferential garden of forking paths is not some arbitrary meandering. At each step, and each branch point, the scientific paths taken must be explained and justified with our fellow travellers. The inferential game of giving and asking for reasons includes reasons for the futures achieved by our evolving kinds, and those for the dead branches we left behind as empty kinds.
In the example at stake, the instructions encoded by these conditionals-supporting inferences required comparing, on the one hand, the phenomenon P1 and its ratio E/M for the hydrogen emerging from Grotthuss’s and Helmholtz’s work on electrolysis (E.1) with, on the other hand, the phenomenon P2 and its ratio e/m measured by Thomson’s experiments on cathode rays, underpinned by models such as the Faraday tubes (E.3) building on Maxwell’s ether model (E.2), and noticing a numerical discrepancy between the two.
To resolve this, a new round of data-to-phenomena inferences was required that this time involved forking subjunctive conditionals at a key juncture (E.3) of the chain of indicative conditionals (recall I use ‘A’ for subjunctive conditionals):
(A.3.a) Were e bigger than E, e/m would be much bigger than E/M;
Or
(A.3.b) Were m much smaller than M, e/m would be much bigger than E/M.
As new data became available (using Wilson’s technique of weighing water droplets that condense around negative charges, and Townsend’s experiments on the charge of ions produced by X-rays), and as more refined measurements were made possible by Thomson’s cathode-ray experiments and improved glass tubes, the choice could reliably be made in favour of (A.3.b). This led Thomson to conclude that his corpuscle had a mass much smaller than that of the hydrogen atom.
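To make the quantitative content of this choice explicit, the figures below are only the order-of-magnitude values that Thomson himself reported (quoted in the notes to this chapter); the exact numbers matter less than their ratio:

m/e for the cathode-ray corpuscle ≈ 10⁻⁷, whereas M/E for the hydrogen ion in electrolysis ≈ 10⁻⁴

so that e/m came out roughly a thousand times larger than E/M. Fork (A.3.a) would attribute this discrepancy to e being about a thousand times larger than E; fork (A.3.b) would attribute it to m being about a thousand times smaller than M. Once Wilson’s and Townsend’s measurements indicated that e and E were of the same order of magnitude, only (A.3.b) remained viable, and with it the conclusion that the corpuscle’s mass was roughly a thousandth of the hydrogen atom’s.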
From there, the step to the next inference, that there is a quantum of electricity, was a short one. A further round of data-to-phenomena inferences was required, this time involving the comparison of Thomson’s value for e (as per A.3.b) with Planck’s value derived from his theory of black-body radiation (via E.4–E.6). And again, the discrepancy between the two opened up yet another inferential forking path at a key juncture (E.6) with the following subjunctive conditionals:
(A.6.a) Were e a semi-classical quantity, its value would be derived from the laws of classical electrodynamics;
Or
(A.6.b) Were e a quantum of electricity, its value would be derived from the laws of black-body radiation.
Further measurements obtained by Rutherford, Regener, Perrin, and Millikan, among others, eventually settled the choice in favour of Planck’s (A.6.b). In Rorty’s language, we all stand on Planck’s ‘grid’ today in taking the quantum of electricity e as a minimal natural unit. Electricity got quantized alongside black-body radiation. Fast forward a century, and die kleine h has established itself as one of the defining constants of the International System of Units (SI), replacing le grand Kilo as the basis for the kilogram.
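For readers curious about how a value for e could be extracted from black-body radiation at all, here is a minimal reconstruction of the inference compressed into (E.6); the notation is mine and the chain of steps is the standard textbook route rather than a quotation from Planck. Fitting Planck’s radiation law to the measured radiation constants yields numerical values for the two constants h and k; the gas constant R, known independently from chemistry, then gives the number of molecules per mole as N = R/k; and since electrolysis fixes the total charge F (the Faraday constant) carried by one mole of monovalent hydrogen ions, the charge per ion follows as

e = F/N = Fk/R

so that quantities measured in radiation experiments end up fixing the quantum of electricity quoted in (E.6).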
This is no argument against fundamental physical constants, of course. If anything, this is an argument to the effect that the physical constant e and, more broadly, the natural kind electron are the outcome of conditionals-supporting inferences. These inferences were enabled by lawlike dependencies among relevant features at work in each and every one of the different phenomena that were historically identified and eventually grouped under the sortal concept electron. Ultimately, it is the lawlike dependencies in phenomena, the way they enter into forking subjunctive conditionals at key junctures, and the way they thereby inform communities’ choices about which path to take, that underpin the truth-conducive nature of conditionals-supporting inferences.
Historically, the identification of the relevant lawlike dependencies in the phenomenon of cathode-ray bending constituted the main hurdle. Thomson’s experiments in 1897 and his ability to reliably settle for (A.3.b) gained him the Nobel Prize, however mistaken his beliefs about corpuscles in Faraday tubes turned out to be. Truth-conducive conditionals-supporting inferences are reiterated and enabled by an ever-growing number of phenomena in the open-ended grouping. Over time, they reliably lead epistemic communities to agree that something is and what it is.
This is how real historical communities over time learn how to navigate the space of what is possible: that is, by comparing a plurality of modally robust phenomena so as to make more and more refined inferences on what might be the case at every twist and turn.
This procedure is entirely fallibilist, anti-foundationalist, and revisable. It does not start from metaphysically given building blocks. It takes seriously the situated nature of our scientific knowledge, our starting always from somewhere, in the form of model-based inferential reasoning with epistemic indicative conditionals. It is truth-conducive in giving and asking for reasons as to why some paths are taken and others are not along the way.
10.8. Coda: what remains of truth?
This brief foray into the history of the electric charge around 1897–1906 shows how a historically identified grouping of phenomena became over time the natural kind electron. To complete the picture of NKHF, this chapter has focused on the last condition (iii) in my definition. I have made three main points:
• Modally robust phenomena P1, P2, . . . display lawlike dependencies among features that are captured by subjunctive conditionals;
• These subjunctive conditionals enter at key forking junctures in long inferential chains of indicative conditionals;
• Indicative conditionals are epistemic conditionals about the phenomena at stake, under scenarios that the models invite us to physically conceive in the antecedents.
In other words, the antecedents of these indicative conditionals invite us to physically conceive certain scenarios under particular models. The consequents express claims of knowledge under the supposition of the scenarios. Following Kratzer (2012), I suggested that the consequent of an indicative conditional hides a modal verb, as when the bare conditional (E.1) is rewritten as (E.1*). The fully fledged epistemic conditionals express modal knowledge claims that agents entertain when using a variety of scientific models to make inferences about phenomena. Conclusions about what natural kinds exist are reached by epistemic communities willing to engage with one another across a plurality of scientific perspectives. But—one might insist at this point—what makes their lengthy sequences of conditionals-supporting inferences truth-conducive? If scientific knowledge is genuinely perspectival in the way described, why even bother with ‘truth’? If anything, is there not a lingering danger of scepticism about knowledge at play in perspectivism? The problem is well expressed by Barry Stroud (2020, pp. 147–148):
Could it be that perspectivism perhaps expresses a certain sympathy with this tradition of doubt or suspicion about knowledge? I speculate here, but the idea of knowledge is so directly connected with the idea of truth, which is independent of human beings’ holding the attitudes they do towards it, that perhaps perspectivism sees more promise in shifting the focus away from the idea of knowledge as such, and looking instead to other human attitudes or responses involved in explaining what we want to understand about the whole enterprise of what we call human knowledge. . . . And whatever the goal, can we really understand what we most want to understand about the enterprise of human knowledge by thinking of those who investigate the world as exercising only the concepts needed for the less-committal epistemic attitudes and responses that perspectivism concentrates on, not a concept of knowledge that implies truth and so apparently resists perspectival treatment?
The direction Stroud ultimately recommends to perspectivists shares features of a variety of no-knowledge-centred accounts of science: from Elgin’s (2017) non-factive scientific understanding to Potochnik’s (2017). However, the view I have developed in this book is indeed centred on knowledge, claims of knowledge, and claims that are modal in flavour too. So I ought to say something to justify my use of ‘truth-conducive’.
Behind Stroud’s remark lies a long-standing and deeply entrenched view of knowledge (and knowledge as implying truth) that sits uncomfortably with the perspectival realist narrative I have endeavoured to offer. According to this entrenched view, truth is the aim of science. It is what scientific inquiry should be about in the sense of converging towards some final true story about the way the natural world is. Those who share stronger metaphysical intuitions about the way the world is and how science tracks this (metaphysics-first) ontology—be it an ontology of properties or kinds or something else—will remain unmoved by my account. And this is of course as is to be expected. For my goal in this book has not been to offer winning arguments against a metaphysics-first approach to science. I do not have such arguments—nor can I see any against epistemology-first realist accounts either. The whole point of this book has been to show that if one accepts an epistemology-first stance on the realism debate, then there is a bottom-up story to be told (from data to phenomena to kinds) that can open up a different flavour of realism about science. But it should also be clear by now that perspectival realism—as I have presented it—is far from traditional convergent realist accounts.
There is no truth with a capital T at the end of the inquiry because there is nothing to converge to. No ‘hidden goings on’ of any kind, and no Humean mosaics. There is nonetheless an external world teeming with modally robust phenomena which scientists engage with by picking a way through the garden of inferential forking paths as it cuts across different scientific perspectives. The ‘windows on reality’ that perspectival₂ representations afford open up in this process.
As discussed in Chapter 5 (Section 5.7), following Kratzer (2012), truth conditions and assertability conditions easily come apart as one walks along the inferential garden of forking paths. For assertability conditions, speakers’ evidence at the time and in their situated perspective is all that counts, but not so for truth conditions. For example, Thomson was justified in entertaining the indicative conditional (E.3) on the basis of the evidence he had back in the 1890s despite the fact that the same evidence did not constitute a truth condition for it. Scientific perspectives do not ratify their own claims of knowledge.
Claims of knowledge must instead be assessable from the point of view of other scientific perspectives, as discussed in Chapter 5, Section 5.7. In this historical episode, for example, Planck’s quantum perspective offered a standpoint from which the indicative conditionals-supporting inferences and associated claims by Helmholtz, Drude, Thomson, et al. could all be evaluated.
This cross-perspectival assessment is key to the notion of perspectival truth that I see at work in perspectival realism. It combines perspectival pluralism about models with a non-convergentist yet still realist account of truth across perspectives. Day to day, whenever truth conditions are vague, scientists typically rely on assertability conditions and specific pieces of contextually available evidence to advance knowledge claims.
But ultimately, the evolution of our NKHF and their projectibility and unanimity do not depend on the assertability conditions but on the cross-perspectival truth conditions for our knowledge claims. And this presupposes the willingness of epistemic agents occupying different scientific perspectives to engage in the inferential game of giving and asking for reasons as to why some knowledge claims are retained and others are withdrawn; why some paths continue and others (futures not achieved) become abandoned branches.
The wider implications of this view for how to think about the multicultural situatedness of scientific knowledge—and the epistemic injustices that arise when engagement with other epistemic communities goes badly wrong—are the topic of my final Chapter 11.
Copyright © Giulio Einaudi editore, s.p.a. 1972. English translation copyright © Harcourt Brace Jovanovich, Inc. 1974. Reprinted by permission of The Random House Group Limited. For the US and Canada territories, Invisible Cities by Italo Calvino, translated by William Weaver. Copyright © 1972 by Giulio Einaudi editore, s.p.a. Torino, English translation © 1983, 1984 by HarperCollins Publishers LLC. Reprinted by permission of Mariner Books, an imprint of HarperCollins Publishers LLC. All rights reserved.
For the role of alkali doublets in the discovery of the electron spin, see Massimi (2005, Ch. 2).
Luckily, such books already exist: see Baggott (2000) and Ball (2018), among others.
The material presented in this chapter is reproduced in expanded and adapted form from Massimi (2019c) with permission from Springer.
The Nobel Prize in Physics 1906. https://www.nobelprize.org/prizes/physics/1906/summary/.
Presentation Speech by Professor J.P. Klason, President of the Royal Swedish Academy of Sciences, on 10 December 1906. https://www.nobelprize.org/prizes/physics/1906/ceremony-speech/.
Ibid.
In the rest of the Presentation Speech, Klason remarks that ‘From experiments carried out by Kaufmann regarding the velocity of β-rays from radium, Thomson concluded that the negative electrons do not possess any real, but only an apparent, mass due to their electric charge’ (ibid.).
See Faraday’s Bakerian Lecture (1830).
I thank Craig Kennedy for helpful comments on this.
‘Their rocks abound with kelp, a sea-plant, of which the ashes are melted into glass. They burn kelp in great quantities, and then send it away in ships, which come regularly to purchase them. This new source of riches has raised the rent of many maritime farms; but the tenants pay, like all other tenants, the additional rent with great unwillingness; because they consider the profits of the kelp as the mere product of personal labour, to which the landlord contributes nothing. However, as any man may be said to give, what he gives the power of gaining, he has certainly as much right to profit from the price of kelp as of any thing else found or raised upon his ground’ (Johnson and Boswell 1775/2020, p. 66).
Thomson ran a series of experiments using air, hydrogen, and carbonic acid as different gases, and as cathode he used different materials from aluminium to platinum, from which he concluded that ‘the value of m/e is independent of the nature of the gas, and that its value 10⁻⁷ is very small compared with the value 10⁻⁴, which is the smallest value of this quantity previously known, and which is the value for the hydrogen ion in electrolysis’ (Thomson 1897, p. 310).
The discovery of X-rays (or Röntgen rays) revealed interesting new phenomena about gas conductivity: gases exposed to X-rays conduct electricity at low potential. The phenomenon, as I mention below, could be modelled using Grotthuss’s chain model from electrolysis with so-called Faraday tubes connecting positive and negative charges in gas molecules.
See Smith (2001) for an excellent historical account of Thomson’s experiments and intellectual background in 1897–1898.
‘If we regard these tubes as having a real physical existence, we may . . . explain the various electrical process . . . as arising from the contraction or elongation of such tubes and their motion through the electric field. . . . As the principal reason for expressing the effects in terms of the tubes of electrostatic induction is the close connexion between electrical and chemical properties. . . . We assume, then, that the electric field is full of tubes of electrostatic induction, that these are all of the same strength, and that this strength is such that when a tube falls on a conductor it corresponds to a negative charge on the conductor equal in amount to the charge which in electrolysis we find associated with an atom of a univalent element . . . the tubes resemble lines of vorticity in hydrodynamics’ Thomson (1891, pp. 149–150).
‘Hitherto we have been dealing chiefly with the properties of the lines of force, with their tension, the mass of the ether they carry along with them, and with the propagation of the electric disturbances along them; in this chapter we shall discuss the nature of the charges of electricity which forms the beginning and ends of these lines. We shall show that there are strong reasons for supposing that these charges have what may be called an atomic structure; each charge being built up of a number of finite individual charges, all equal to each other. . . . [I]f this view of the structure of electricity is correct, each extremity of the Faraday tube will be the place from which a constant fixed number of tubes start or at which they arrive’ (p. 71).
‘Wilson found that the saturation current through the salt vapour was just equal to the current which if it passed through an aqueous solution of the salt would electrolyse in one second the same amount of salt as was fed per second in the hot air. . . . Thus whether we study the conduction of electricity through liquids or through gases, we are led to the conception of a natural unit or atom of electricity’ (p. 83).
In an 1874 talk presented at the British Association meeting in Belfast and entitled ‘On the Physical Units of Nature’, Stoney presented this minimal quantity of electricity as ‘one of the three physical units, the absolute amounts of which are furnished to us by Nature, and which may be the basis of a complete body of systematic units in which there shall be nothing arbitrary’ (Stoney 1874/1894, p. 418). But Stoney believed that these electrons within each molecule or chemical atom were ‘waved about in a luminiferous ether’ and that in this motion through the ether the spectrum of each gas originated.
‘The most startling result of Faraday’s law is perhaps this. If we accept the hypothesis that the elementary substances are composed of atoms, we cannot avoid concluding that electricity also, positive as well as negative, is divided into definite elementary portions which behave like atoms of electricity. As long as it moves about on the electrolytic liquid each ion remains united with its electric equivalent or equivalents. At the surface of the electrodes decomposition can take place if there is sufficient electromotive force, and then the ions give off their electric charges and become electrically neutral’ (Helmholtz quoted in Stoney 1874/1894, p. 419).
The equivalence between e and E was established thanks both to the work of C.T.R. Wilson, which in turn made possible H.A. Wilson’s measurement of charged droplets, and to John S. Townsend’s measurement of the charges of gas ions. As the historian of science George E. Smith points out, Townsend’s experiment was ‘predicated on Maxwell’s diffusion theory. . . . Townsend inferred a magnitude for Ne, where N is the number of molecules per cubic centimetre under standard conditions. The uniformity of this magnitude for ions of different gases and its close correspondence to the value NE from electrolysis (where E is the charge per hydrogen atom), then allowed Townsend to conclude, independently of any specific value of e or N, that the charge per ion, when generated by X-rays, is the same as the charge on the hydrogen atom in electrolysis’ (Smith 2001, pp. 74–75, emphasis in original).
Among them, the constant that features in the Stefan–Boltzmann law for the black-body, which takes black-body radiation as proportional to the fourth power of the absolute temperature. Planck took its numerical value from Kurlbaum’s original measurements, although Kurlbaum’s results were soon rectified and improved by a series of measurements performed by others.
See Arabatzis (2006, p. 79) for a historical reconstruction of Lorentz’s ‘ions’ vs ‘electrons’ as they were called by Stoney, Larmor, and Zeeman.
I refer the reader to the excellent historical reconstruction of this episode by Arabatzis (2006, Ch. 4).
‘All the systems of units which have hitherto been employed . . . owe their origin to the coincidence of accidental circumstances, inasmuch as the choice of the units lying at the base of every system has been made, not according to general points of view which would necessarily retain their importance for all places and all times, but essentially with reference to the special needs of our terrestrial civilization. . . . In contrast with this it might be of interest to note that, with the aid of the two constants h and k which appear in the universal law of radiation, we have the means of establishing units of length, mass, time, and temperature, which are independent of special bodies or substances, which necessarily retain their significance for all times and for all environments, terrestrial and human or otherwise, and which may therefore be described as “natural units” ’ (Planck 1906/1913, pp. 173–174).
In Faraday’s own words, ‘the chemical decomposing action of a current is constant for a constant quantity of electricity, notwithstanding the greatest variation in its sources, in its intensity, in the size of electrodes used, in the nature of conductors (or non-conductors . . . ) through which it is passed, or in other circumstances’ (Faraday 1833/2012, vol. I, p. 207, emphasis in original).
‘Water slowly distilled, being electrified either in gold cones or agate cups, did not evolve any fixed alkaline matter, though it exhibited signs of ammonia; but in tubes of wax, both soda and potash were evolved. . . . When water was electrified in vacuo scarcely any nitrous acid, and no volatile alkali, was formed. . . . Mr Davy . . . thinks these electric energies are communicated from one particle to another of the same kind, so as to establish a conducting chain in the fluid, as acid matter is always found in the alkaline solutions through which it is transferred’ (Davy 1807, pp. 247–250).
‘We might, as we shall see, have taken the tubes of magnetic force as the quantity by which to express all the changes in the electric field; the reason I have chosen the tubes of electrostatic induction is that the intimate relation between electrical charges and atomic structure seems to point to the conclusion that it is the tubes of electrostatic induction which are most directly involved in the many cases in which electrical charges are accompanied by chemical ones’ (Thomson 1891, p. 150).