Tuesday, June 9, 2015

The Demarcation Problem

An article in the NYT by two physicists, Adam Frank and Marcelo Gleiser, suggests that science is experiencing an identity crisis. It used to be assumed that what distinguished science from other disciplines was that science tests predictions entailed by its theories, a procedure called the hypothetico-deductive method.

Unfortunately, it seems that some theories in physics and biology have reached the limits of testability. In particle physics, for example, probing more deeply into the structure of matter would require particle accelerators that circle the earth. Since this is economically and, presumably, technically impractical, particle physics may have reached a dead end. It's not that we know everything there is to know; it's that we may have reached a point where we know everything that can be known, at least about particle physics.

Rather than submit to this glum state of affairs, however, some scientists want to expand the definition of legitimate science to include metaphysical speculation. The problem of discerning what counts as science is what philosophers call the Demarcation Problem, and the tendency to blur the lines between science and philosophy (metaphysics) is especially prominent among string and multiverse theorists. Here's part of what Frank and Gleiser have to say about this:

A few months ago in the journal Nature, two leading researchers, George Ellis and Joseph Silk, published a controversial piece called “Scientific Method: Defend the Integrity of Physics.” They criticized a newfound willingness among some scientists to explicitly set aside the need for experimental confirmation of today’s most ambitious cosmic theories — so long as those theories are “sufficiently elegant and explanatory.” Despite working at the cutting edge of knowledge, such scientists are, for Professors Ellis and Silk, “breaking with centuries of philosophical tradition of defining scientific knowledge as empirical.”

Whether or not you agree with them, the professors have identified a mounting concern in fundamental physics: Today, our most ambitious science can seem at odds with the empirical methodology that has historically given the field its credibility.

How did we get to this impasse? In a way, the landmark detection three years ago of the elusive Higgs boson particle by researchers at the Large Hadron Collider marked the end of an era. Predicted about 50 years ago, the Higgs particle is the linchpin of what physicists call the “standard model” of particle physics, a powerful mathematical theory that accounts for all the fundamental entities in the quantum world (quarks and leptons) and three of the four known forces acting between them (electromagnetism and the strong and weak nuclear forces).

But the standard model, despite the glory of its vindication, is also a dead end. It offers no path forward to unite its vision of nature’s tiny building blocks with the other great edifice of 20th-century physics: Einstein’s cosmic-scale description of gravity. Without a unification of these two theories — a so-called theory of quantum gravity — we have no idea why our universe is made up of just these particles, forces and properties. (We also can’t know how to truly understand the Big Bang, the cosmic event that marked the beginning of time.)

This is where the specter of an evidence-independent science arises. For most of the last half-century, physicists have struggled to move beyond the standard model to reach the ultimate goal of uniting gravity and the quantum world. Many tantalizing possibilities (like the often-discussed string theory) have been explored, but so far with no concrete success in terms of experimental validation.

Today, the favored theory for the next step beyond the standard model is called supersymmetry (which is also the basis for string theory). Supersymmetry predicts the existence of a “partner” particle for every particle that we currently know. It doubles the number of elementary particles of matter in nature. The theory is elegant mathematically, and the particles whose existence it predicts might also explain the universe’s unaccounted-for “dark matter.” As a result, many researchers were confident that supersymmetry would be experimentally validated soon after the Large Hadron Collider became operational.

That’s not how things worked out, however. To date, no supersymmetric particles have been found. If the Large Hadron Collider cannot detect these particles, many physicists will declare supersymmetry — and, by extension, string theory — just another beautiful idea in physics that didn’t pan out.

But many won’t. Some may choose instead to simply retune their models to predict supersymmetric particles at masses beyond the reach of the Large Hadron Collider’s power of detection — and that of any foreseeable substitute.

Implicit in such a maneuver is a philosophical question: How are we to determine whether a theory is true if it cannot be validated experimentally? Should we abandon it just because, at a given level of technological capacity, empirical support might be impossible? If not, how long should we wait for such experimental machinery before moving on: Ten years? Fifty years? Centuries?

Consider, likewise, the cutting-edge theory in physics that suggests that our universe is just one universe in a profusion of separate universes that make up the so-called multiverse. This theory could help solve some deep scientific conundrums about our own universe (such as the so-called fine-tuning problem), but at considerable cost: Namely, the additional universes of the multiverse would lie beyond our powers of observation and could never be directly investigated. Multiverse advocates argue nonetheless that we should keep exploring the idea — and search for indirect evidence of other universes.

Similar dead ends seem to be looming in cosmogony (the study of the origin of the universe), in origin-of-life research, and in origin-of-consciousness studies, all of which raises a question: If scientists yield to the desire to include in the discipline of science explanatory theories that are inherently untestable and essentially metaphysical, on what grounds can anyone argue against allowing the teaching of Intelligent Design in public school science classes?