Tuesday, April 2, 2019

Moral Skyhooks (Pt. I)

Jennifer Graham at Deseret News uses the college admissions scandal to highlight moral impoverishment among our cultural elites.

Her article is fine as far as it goes, but it doesn't go far enough.

For those unfamiliar with the scandal, wealthy parents paid up to half a million dollars to get their children admitted into prestigious universities. The strategies employed by William “Rick” Singer, the California consultant who ran the college admissions scheme through a fraudulent charity, included doctoring photos, cheating on admissions tests, and paying athletic coaches to add students to their teams even though the students weren't athletes.

Some 50 people, including 33 parents and 13 coaches, are involved in the scheme, and Graham notes that at least some of the parents were indifferent to the moral issues involved. “To be honest, I’m not worried about the moral issue here,” said one Connecticut parent, co-chairman of a global law firm, according to court documents.

He had no "moral issue" with having his daughter fraudulently diagnosed as learning disabled so she could get extra time taking a college placement test, or with having another person take online courses on her behalf.

Graham's article contains a lot of moral hand-wringing by various ethicists, but no one in the article seems to recognize the fundamental reason for the ethical indifference of parents like those quoted above.

For instance, Graham asks, "If moral standards are encoded in our DNA, as many theologians and philosophers have taught, how can people go so wildly off course? And when our moral compass malfunctions, how can we recalibrate?"

Her first question answers itself. If morality is simply a feeling imposed on us by our genes then why should anyone think themselves obligated to heed it? After all, feelings of selfishness, lust, greed, ethnocentrism, etc. are also generated by our genes and most people think these inclinations should be repressed or ignored. How do we differentiate between genetically-derived impulses which should be followed and those which should not unless we're tacitly adverting to a higher standard which transcends DNA?

Moreover, if genes are the arbiters of moral right and wrong, and some people's genes make them psychopaths, why is psychopathic behavior morally wrong?

If moral standards are encoded in our genes then what does it mean to say that it's wrong to commit fraud? At most it can only mean that we've acted in defiance of our genetic programming, but why is that wrong? What law says that we must always act in accord with what our genes dictate?

In other words, it's precisely because society has bought into the notion that right and wrong are simply epiphenomenal expressions of the chemicals in our DNA that people have concluded there's nothing wrong with cheating to get one's child into a prestigious university.

Morality has to be hung from a transcendent support; otherwise it's a skyhook, hanging on nothing at all.

I'll have more to say about this tomorrow; meanwhile, you might check out Mike Mitchell's fine piece on this scandal at his blog Thought Sifter.

Monday, April 1, 2019

The Most Fortunate Generation in History

David Harsanyi at The Federalist declares that, notwithstanding the asseverations of millennials like Alexandria Ocasio-Cortez to the contrary, today's millennials are the most well-off generation in human history.

Specifically, he claims, "...if the world continues on its present trajectory, American millennials will have collectively lived the most peaceful, wealthiest, safest, most educated, and most globally connected lives in the history of the world."

The facts he amasses to support this affirmation are impressive:
For starters, millennials had the world opened to them like no one else, benefiting from perhaps the greatest technological revolution in information and commerce–even more consequential than the printing revolution.

For most millennials... any book, any piece of music, any great work of art—virtually any nugget of human knowledge—has been at their fingertips throughout their entire adult lives. And today, more Americans have access to high-speed internet than ever. This is a manifestation of prosperity.

During the adulthood of most of today's millennials there's been only a single quarter of negative GDP growth, and 13 quarters of more than 3 percent growth.
Harsanyi quotes from National Review’s Jim Geraghty:
But the U.S. economy has added jobs for 100 consecutive months, and there are seven million unfilled jobs in the country. The housing market either quickly or gradually recovered depending upon your region, and auto production recovered, both at companies that received government bailouts and those that did not.

There’s not too much inflation or deflation. Energy prices declined as U.S. domestic production boomed. Wage growth has been slow, but some research indicates this reflects companies hiring more young workers, who generally earn less than older, more experienced workers.

Scott Lincicome lays out how more American households can afford more products. The stock market hit new highs last year. In late 2018, the World Economic Forum ranked the United States the world’s most competitive economy for the first time in a decade. Even the more pessimistic economists concede that the U.S. economy’s problems are smaller and less severe than anyone else’s.
There's more:
More broadly, no group of Americans has ever had more of an opportunity to achieve personal prosperity. In 1965, there were only 5.9 million Americans enrolled in college—mostly rich kids. By 2012 there were 21 million Americans enrolled in college. According to a Federal Reserve study, millennials are the most educated generation, with 65 percent of them possessing at least an associate’s degree.

It’s often claimed that millennials have “lower earnings, fewer assets, and less wealth.” This is because many of them are still young. Getting a college education defers financial success to later years, and the average earning potential of a college graduate is higher than that of workers in trades that don’t require a degree.

Moreover, more millennials choose to live in expensive urban areas—where they pay high rent and rarely own houses—and get married later than previous generations. The longer you defer getting married, the longer it takes to realize your potential earnings, generally speaking.

Because of...technological advances, almost everything is cheaper today. In the last 50 years, spending on food and clothing as a share of family income has fallen from 42 to 17 percent. At the same time we have countless choices—many of which would seem exotic to an average person in 1989.
Millennials have been economically fortunate, but they've been even more fortunate, perhaps, in terms of their health, never having had to worry "about measles, rubella, mumps, diphtheria, or polio. The cancer death rate has fallen over 27 percent [over the last decade] — which equals more than two million deaths averted during that time period."

Harsanyi goes on to cite statistics which show that violent crime, deaths from vehicular accidents and deaths from war are all down significantly over the last couple of decades, and adds that the looming disaster of climate change over which millennials harbor anxieties has been blown out of proportion by the scare-mongers:
“Climate change” has affected millennials in about the same way nuclear winter, global cooling, overpopulation, and other Malthusian scares have affected previous generations—which is to say, not at all. Every generation has its End of Days myth. In the real world, the imperceptible change in climate during [our millennials'] lifetime has done nothing to diminish prosperity here or abroad.
All of which makes it very difficult to understand why millennials seem eager, according to some polls, to jettison capitalism in favor of socialism. Perhaps they should reread Aesop's story about killing the goose that laid the golden eggs.

Saturday, March 30, 2019

Head Transplant Update

A few years ago we commented here at Viewpoint on news from Italy wherein a surgeon by the name of Sergio Canavero was striving to develop a technique that would allow him to transplant a human head onto another human being's body.

Part of the process would involve severing the spinal cord and then restoring its function. If this could be done, it would offer hope to accident victims whose spinal cords had been severed.

An article in USA Today suggests that progress is being made. Here's the lede:
Surgeons from China and Italy claimed that two studies published Wednesday add evidence to their ability to treat "irreversible" spinal-cord injuries and a related controversial aspiration to perform the world’s first human head transplant.

Xiaoping Ren and Sergio Canavero said the new work they published in a scientific journal showed that monkeys and dogs were able to walk again after their spinal cords were "fully transected" during surgery and then put back together again. The neurosurgeons described the results as medically "unprecedented."
The article goes on to say that,
While the researchers have tested head transplants, with some success, on small animals including mice and dogs, it's a concept that raises profound ethical, psychological and surgical questions.
It's interesting to speculate as to how a brain that has developed for decades in tandem with a particular body would accommodate itself to another body that has entirely different capacities and abilities. How much would have to be relearned? How psychologically jarring would the change be to the patient?

Would the resulting individual be a new person or would he/she be the same person that provided the head?
Canavero intends to eventually perform the extraordinarily expensive operation in China, since America and Europe won't permit it:
"The Americans did not understand," Canavero told USA TODAY two years ago as he announced that he would soon perform the world’s first human head transplant in China because medical communities in the United States and Europe would not permit him to do it there. From space exploration to climate-change science, China has indicated it intends to lead, not follow, the U.S. in all the major scientific and technological frontiers over the coming decades.

Canavero estimated the procedure would cost up to $100 million and involve several dozen surgeons and specialists. He said the donor would be the healthy body of a brain-dead patient matched for build with a recipient's disease-free head.
The procedure itself is described in the USA Today piece:
The researcher said he would simultaneously sever the spinal cords of the donor and recipient with a diamond blade. To protect the recipient's brain from immediate death before it is attached to the body, it would be cooled to a state of deep hypothermia.

Michael Sarr, a former surgeon at the Mayo Clinic in Minnesota and editor of the journal Surgery, explained the problem to USA TODAY in 2017. Doctors "have always been taught that when you cut a nerve, the 'downstream side,' the part that takes a signal and conducts it to somewhere else, dies," he said.

"The 'upstream side,' the part that generates the signal, dies back a little – a millimeter or two – and eventually regrows. As long as that 'downstream' channel is still there, it can regrow through that channel, but only for a length of about a foot."

This is why, he said, if your hand is amputated at the wrist and then re-implanted with the nerves lined up well, you can recover function in your hand. But if your arm gets amputated at the shoulder, it won't be re-implanted because it will never lead to a functional hand.

"What Canavero (would) do differently is bathe the ends of the nerves in a solution that stabilizes the membranes and put them back together," Sarr said. "The nerves will be fused, but won't regrow. And he will do this not in the peripheral nerves such as you find in the arm, but in the spinal cord, where there's multiple types of nerve channels."
Are Canavero and Xiaoping Ren quacks or are they medical pioneers? A hundred years ago no one thought that hearts, livers, kidneys, hands or faces could ever be transplanted, but today they are being transplanted frequently. Perhaps it's just a matter of time before someone whose body is dying but whose brain is healthy can be given a new body.

If they have $100 million to spend on it.

Friday, March 29, 2019

Materialism and Panpsychism

Materialism is the view that matter (and energy) are the fundamental realities in the universe. Everything that exists is reducible to, and explicable in terms of, matter. Materialism has always been a popular metaphysical assumption among those holding a naturalistic worldview, but in the 20th century it found itself challenged by two developments: one was the discoveries being made in quantum physics, the other was its inability to account for human consciousness.

If we think of consciousness, at least in part, as possessing awareness, it appears that subatomic particles like electrons exhibit a very rudimentary consciousness. Thus, some thinkers have revived a theory called panpsychism to account for this.

If we can't conceive of how material objects can generate consciousness, perhaps it helps to assume that all material objects, down to the tiniest subatomic pieces, are to some degree conscious, and that in aggregate they're able to compose conscious beings like ourselves.

I posted several pieces on the topic of panpsychism in the past (2/15/18, 2/16/18, 2/19/18) and invite the interested reader to check them out.

Neurosurgeon Michael Egnor, who has written a lot about mind and consciousness, has a column at Mind Matters in which he argues that consciousness requires senses and that inanimate objects, lacking senses, must a fortiori lack consciousness.

Perhaps, but as much as I'd like to agree with Egnor I think there's a problem with his argument. Egnor is a theist and if theism is correct then there are pure minds - e.g. God, angels and perhaps the souls of deceased human beings - that are surely conscious but which possess no physical senses. Thus, consciousness would not seem to require, necessarily, a physical sensory apparatus.

This is not to say that I accept the panpsychist's argument, as a perusal of the posts from February of last year will make plain, but I do think there's a problem with our understanding of material substance. The problem lurks in the materialist's assumption that matter is fundamental and that consciousness is the product of material brains.

Maybe things are really the other way around. Perhaps mind is the fundamental substance and matter is somehow a product of minds. Perhaps matter is to mind as wetness is to water. It's not that every bit of matter possesses mind, as the panpsychist would have it, but rather that matter is an expression of mind.

Imagine, for instance, that, like the images on a computer screen, the fine structure of the physical world is composed of pixels of exceedingly high resolution. These are not pixels made of chemicals like those on your monitor; rather, they're pixels made of information or mind. What appears to us to be material stuff could in fact be a three-dimensional manifestation of information flowing from a universal mind, somewhat like the pixels on a screen are a two-dimensional manifestation of the information flowing from the programmer's mind.

Whatever the case, the days when it seemed obvious that matter is the fundamental reality appear to be waning. As the physicist Sir James Jeans presciently noted back in the first half of the last century, "The world is beginning to look more like a grand idea than a grand machine."

Thursday, March 28, 2019

Not Enough Evidence

The famous atheist philosopher and mathematician Bertrand Russell was once asked to suppose that he'd died and found himself face to face with God who asked him to account for his lack of belief. What, Russell was asked, would he say? Russell's reply was a curt, "Not enough evidence."

This has been a common response to similar questions for centuries. The unbeliever argues that the burden of proof is on the believer to demonstrate that God does exist. Failing that, the rational course is to suspend belief.

In the lapidary words of the 19th century writer William Clifford, "It is always wrong, everywhere and for anyone, to believe anything on insufficient evidence." Of course, Clifford would presumably plead a special exemption for this statement of his own, for which there's no evidence whatsoever.

In any case, a claim for which there was no conceivable empirical test was considered meaningless by many philosophers since there was no way to ascertain its truth or falsity.

This evidentialism, or verificationism as it came to be called, enjoyed considerable popularity from the 19th century into the 20th among those who wanted to make the deliverances of science the touchstone of meaningfulness. It eventually fell into disfavor among both philosophers and scientists, however, because, rigorously applied, it excluded a lot of what scientists wanted to believe were meaningful claims (for example, the claim that life originated through purely physical processes with no intelligent input from a Divine mind).

But set the verificationist view aside. Is there, in fact, a paucity of evidence for the existence of God or at least a being very much like God? It hardly seems so. Philosopher William Lane Craig has debated atheists all around the globe using four or five arguments that have proven to be exceedingly difficult for his opponents to refute. Philosopher Alvin Plantinga expands the menu to a couple dozen good arguments for theism.

So how is this plenitude of evidence greeted by non-believers? Some take refuge in the claim that none of it amounts to proof that God exists, and that until there's proof the atheist is within his epistemic rights to withhold belief, but this response is so much octopus ink.

The demand for proof is misplaced. Our beliefs are not based on proof in the sense of apodictic certainty. If they were, there'd be precious little we'd believe about anything. They're based rather on an intuition of probability: the more intuitively probable it is that an assertion is true, the more firmly we tend to believe it.

Indeed, it's rational to believe what is more likely to be true than what is less likely.

Could it be more likely, though, that God doesn't exist? There really is only one argument that can be adduced in support of this anti-theistic position, and though it's psychologically strong, it's philosophically inconclusive: the argument based on the amount of suffering in the world.

When one is in the throes of grief one is often vulnerable to skepticism about the existence of a good God, but when emotions are set aside and the logic of the argument is analyzed objectively, the argument falters (see here and here for a discussion).

This is not to say that the argument is without merit, only that it doesn't have as much power to compel assent as it may appear prima facie to possess. Moreover, the argument from suffering (or evil) can only justify an atheistic conclusion if, on balance, it outweighs in probability all the other arguments that support theism, but this is a pretty difficult, if not impossible, standard for an inconclusive argument to live up to.

Actually, it seems likely that at least some who reject the theistic arguments do so because they simply don't want to believe that God exists, and nothing, no matter how dispositive, will persuade them otherwise.

Even if God were to appear to them, a phenomenon some skeptics say they'd accept as proof, they could, and probably would, still write the prodigy off as an hallucination, a conjuring trick, or the consequence of a bad digestion. In other words, it's hard to imagine what evidence would convince someone who simply doesn't want to believe.

I'm reminded of something the mathematician and physicist Blaise Pascal said some three hundred and fifty years ago. He was talking about religion, but what he said is probably just as germane to the existence of God. In what was later collated into his Pensées, he wrote, "Men despise religion; they hate it and fear it is true."

The "not enough evidence" demurral is in some instances, perhaps, a polite way of manifesting the sentiment Pascal identified.

Wednesday, March 27, 2019

Abstract Thought and Materialism

Neurosurgeon Michael Egnor points out that among the things a material brain cannot accomplish just by itself is abstract thought. Since human beings certainly are capable of abstract thinking, Egnor concludes that this is evidence for mind/brain dualism.

Why does he say that the material brain is incapable of generating abstract thoughts? He makes his case in a short essay at Evolution News, excerpts from which follow:
Wilder Penfield was a pivotal figure in modern neurosurgery. He was an American-born neurosurgeon at the Montreal Neurological Institute who pioneered surgery for epilepsy.

He was an accomplished scientist as well as a clinical surgeon, and made seminal contributions to our knowledge of cortical physiology, brain mapping, and intra-operative study of seizures and brain function under local anesthesia with patients awake who could report experiences during brain stimulation.

His surgical specialty was the mapping of seizure foci in the brain of awake (locally anesthetized) patients, using the patient's experience and response to precise brain stimulation to locate and safely excise discrete regions of the cortex that were causing seizures. Penfield revolutionized neurosurgery (every day in the operating room I use instruments he designed) and he revolutionized our understanding of brain function and its relation to the mind.

Penfield began his career as a materialist, convinced that the mind was wholly a product of the brain. He finished his career as an emphatic dualist.

During surgery, Penfield observed that patients had a variable but limited response to brain stimulation. Sometimes the stimulation would cause a seizure or evoke a sensation, a perception, movement of muscles, a memory, or even a vivid emotion. Yet Penfield noticed that brain stimulation never evoked abstract thought. He wrote:
There is no area of gray matter, as far as my experience goes, in which local epileptic discharge brings to pass what could be called "mind-action"... there is no valid evidence that either epileptic discharge or electrical stimulation can activate the mind.... If one stops to consider it, this is an arresting fact.

The record of consciousness can be set in motion, complicated though it is, by the electrode or by epileptic discharge. An illusion of interpretation can be produced in the same way.

But none of the actions we attribute to the mind has been initiated by electrode stimulation or epileptic discharge. If there were a mechanism in the brain that could do what the mind does, one might expect that the mechanism would betray its presence in a convincing manner by some better evidence of epileptic or electrode activations.[emphasis mine]
Why don't epilepsy patients have "calculus seizures" or "moral ethics" seizures, in which they involuntarily take second derivatives or contemplate mercy? The answer, apparently, is that the brain does not generate abstract thought. The brain is normally necessary for abstract thought, but not sufficient for it.

Thus, the mind, as Penfield understood, can be influenced by matter, but is, in its abstract functions, not generated by matter.
There's more at the link. Egnor's argument boils down to this: if the material brain were sufficient to account for all of our cognitive experience, then stimulation that triggers all sorts of "mental" activity should at least occasionally trigger abstract thinking as well. Since it never does, abstract thinking must arise from something other than the material brain.

This is not proof that there's a mind, of course, but it is certainly consistent with the dualist hypothesis that we are a composite of mind and brain and certainly puzzling on the materialist hypothesis that the material brain is solely responsible for all of our mental experience.

Tuesday, March 26, 2019

Unhealthy Obsession

President Donald Trump has been cleared of the charge of colluding with Russia to fix the 2016 election. Special Counsel Robert Mueller and his team, after two years of searching and spending $25 million of taxpayers' money, found no evidence to support the charge whose truth many in the media had treated as a matter of course: that Mr. Trump was guilty.

So certain were they of the president's culpability that they were willing to forfeit their professional credibility by uttering numerous very intemperate asseverations, as a video at Grabien.com documents.

One wonders what evidence these people had, which the special counsel did not, that convinced them Mr. Trump would surely be found guilty of crimes, some of which (treason) carry the death penalty.

And if they had no evidence but were simply engaging in wishful thinking, their irresponsibility in perpetuating what amounts to a slander on the president has further divided an already divided country.

Not only have they been complicit in setting Americans more sharply against each other, but they've made it very difficult for Mr. Trump to succeed on the foreign policy stage since most of our adversaries, like China and North Korea, have probably assumed that the president would soon be politically crippled and that there was no point in truckling to him in whatever negotiations were taking place.

So far from rejoicing that the president is not a traitor, many of his domestic enemies are, like those ISIS holdouts in Syria, refusing to give up and admit their error. They're instead pinning their desperate hopes on Mueller's statement that although there was insufficient evidence to support the allegation that the president also obstructed justice, there was likewise insufficient reason to conclude that he did not.

This rather ambiguous loose end has been seized upon by the Democrats in Congress and the media as justification for pressing on in their pursuit of Mr. Trump. Like Captain Ahab obsessed with wreaking vengeance on Moby Dick they're determined to politically and legally harpoon the president, even if their monomania destroys their party's electoral chances in 2020.

Their hatred for Donald Trump is beginning to appear, even to some of their allies, to be all-consuming, and their failure to defeat him in 2016 and their subsequent failures to rid the White House of him seem, like Chief Inspector Dreyfus' failure to rid himself of the inept Inspector Clouseau in the old Pink Panther movies, to be driving them toward madness.

The progressive media have destroyed whatever credibility they may have had with their reporting and commentary on the "Russian Collusion" story, as Rolling Stone's Matt Taibbi observes, so maybe it's time for all these folks to take a deep breath or two and just let it go before they wind up like Chief Inspector Dreyfus.

Monday, March 25, 2019

Thoughts on Friendship

Last month I posted some of C.S. Lewis' thoughts on the topic of friendship. Lewis spoke of how friendship is rooted in shared loves and interests. He writes, for instance, that,
Friendship arises out of mere Companionship when two or more of the companions discover that they have in common some insight or even taste which the others do not share and which, till that moment, each believed to be his own unique treasure (or burden).
He also says this:
The companionship on which Friendship supervenes will not often be a bodily one like hunting or fighting. It may be a common religion, common studies, a common profession, even a common recreation. All who share it will be our companions; but one or two or three who share something more will be our Friends.

In this kind of love, as Emerson said, Do you love me? means Do you see the same truth? - Or at least, 'Do you care about the same truth?' The man who agrees with us that some question, little regarded by others, is of great importance can be our Friend. He need not agree with us about the answer.
St. Augustine also wrote on the same subject, reflecting on the pull that the desire to share a common love, particularly a love for the life of the mind (although that's not what he calls it), has on him. He writes wistfully about it:
...I do love wisdom alone and for its own sake, and it is on account of wisdom that I want to have, or fear to be without, other things such as life, tranquility and my friends. What limit can there be to my love of that Beauty, in which I do not only not begrudge it to others, but I even look for many who will long for it with me, sigh for it with me, possess it with me, enjoy it with me. They will be all the dearer to me the more we share that love in common.
Lewis and Augustine have something important to teach us about friendship. Two people can be companions for a while even if they don't share much in common, but they'll only develop a true friendship if they both love some of the same things. For Augustine the chief of these loves is the love of wisdom, and surely the love of wisdom encompasses the love of truth.

That love has been largely lost in our post-modern age, in which a lot of people seem to believe whatever suits their political or religious preferences. So far from loving truth (and wisdom), many seem almost to despise it as irrelevant if it gets in the way of their appetites and prejudices.

I wonder how many modern friendships are grounded in the same love that Augustine muses upon, or even could be.

Saturday, March 23, 2019

The Case for Dualism

This week my classes will be discussing the philosophical debate between dualists - those who believe that human beings are composed of both a material body and an immaterial mind or soul - and materialists, who maintain that we are purely material beings. I thought it'd be helpful to rerun this post, which I first put up a couple of months ago, to help clarify some of the issues.

The debate is especially acute with regard to our cognitive activity, with dualists arguing that thinking involves the integration of our material brains with an immaterial mind and materialists maintaining that the brain is all that's involved in our cognitive experience.

The materialist insists that the brain can account for all of our mental phenomena and that there's no need to posit the existence of an immaterial mind or soul. Moreover, given that brain function is the product of the laws of physics and chemistry, materialists argue that there's no reason to believe that we have free will.

For materialists mind is simply a word we use to describe the function of the brain, much like we use the word digestion to refer to the function of the stomach. Just as digestion is a function and not an organ or distinct entity in itself, likewise the mind is an activity of the brain and not a separate entity in itself.

As neurosurgeon Michael Egnor discusses in this fifteen minute video, however, the materialist view is not shared by all neuroscientists and some of the foremost practitioners in the field have profound difficulties with it.

Egnor explains how the findings of three prominent twentieth century brain scientists point to the existence of something beyond the material brain that's involved in human thought and which also point to the reality of free will.

His lecture is an excellent summary of the case for philosophical dualism and is well worth the fifteen minutes it takes to watch it:
There's a lot at stake in this debate. If materialism is true, it not only becomes harder to believe in free will, it's also harder to believe that human beings have dignity, that objective moral obligations exist, that we have a self or identity which perdures through time, and that there's a meaningful individual existence beyond the death of the body.

Most materialists accept that none of these beliefs is true. Most dualists believe, or at least hope, that they are. Whether you agree with the materialist or hope the materialist is wrong, you'll want to watch Egnor's video.

Friday, March 22, 2019

A Genuine Miracle?

The last few posts have touched on the inevitability of genuine miracles occurring if there truly exists a multiverse, and on the difficulty of ruling them out if our world is just one of a vast ensemble of worlds.

I thought it'd be fitting to add an actual contemporary example of what certainly seems to be a miraculous event that's so amazing a major motion picture has been made about it.

The account of the event appeared in a piece by Josh Shepherd at The Federalist, part of which reads as follows:
On January 19, 2015, 14-year-old John Smith was trapped underwater for 15 minutes. First responders pulled him from the icy waters of Lake Sainte Louise in St. Charles, a northwestern suburb of St. Louis, Missouri.

The St. Louis Post-Dispatch reported a month later: “He wasn’t breathing, and paramedics and doctors performed CPR on him for 43 minutes without regaining a pulse.” Yet, after his mother prayed for him, doctors at St. Joseph Hospital West say Smith inexplicably regained consciousness.

“They never expected the heart monitor to respond,” says Joyce Smith [John's mother] in an interview. “The first doctor who treated John wrote in his medical records: Patient dead. Mother prayed. Patient came back to life.”
Even when John regained a pulse, doctors anticipated that, since his brain had been deprived of oxygen for so long, he'd remain in a vegetative state, but the boy has fully recovered and returned to his basketball team. It's truly a remarkable story, and Shepherd provides much more detail in his article than I've given here.

The movie is due to be released on April 17th, and the team behind the film maintains that all the facts have been medically verified and that the story on-screen reflects accounts from multiple sources.

People today are often skeptical of reports of miracles, as they should be, given the number of fraudulent stories that have circulated over the years. But no one should be so skeptical as to rule out the possibility that something for which there's no room in a naturalistic worldview has in fact happened in this instance. And if one believes there's a multiverse, it would seem one simply can't rule out that such events can and do occur.

This is the multiverse conundrum. If there is no multiverse, then the fine-tuning of our universe for life points inexorably to the existence of a supernatural mind, and if one reverts to the multiverse to explain away cosmic fine-tuning, one abandons any grounds for skepticism that miracles happen. And if miracles happen, the case for that supernatural mind gets much stronger.

Whether you're skeptical or not read the article at the link and see what you think.

Thursday, March 21, 2019

Miracles and the Multiverse (Pt. III)

As a followup to our previous two posts here's another example of how embracing the multiverse leads to several unintended and uncomfortable consequences for the naturalist.

Cosmologist Sean Carroll, an atheist, has been quoted as arguing that the multiverse hypothesis, though it does not meet the standard criteria of a good scientific theory (i.e. it's not falsifiable or testable), nevertheless should be accepted as legitimate science.

He writes:
Modern physics stretches into realms far removed from everyday experience, and sometimes the connection to experiment becomes tenuous at best. String theory and other approaches to quantum gravity involve phenomena that are likely to manifest themselves only at energies enormously higher than anything we have access to here on Earth.

The cosmological multiverse and the many-worlds interpretation of quantum mechanics posit other realms impossible for us to access directly. Some scientists, leaning on Popper, have suggested that these theories are non-scientific because they’re not falsifiable.

The truth is the opposite. Whether or not we can observe them directly, the entities involved in these theories are either real or they are not. Refusing to contemplate their possible existence on the grounds of some a priori principle, even though they might play a crucial role in how the world works, is as non-scientific as it gets.
This reminds me of a passage from William James, who asserted that "any rule of thought which would prevent me from discovering a truth, were that truth really there, is an irrational rule."

Carroll wants to apply James' maxim to science in the belief that it's not reasonable to restrict science only to conjectures about entities whose existence can be tested.

Thus, the multiverse hypothesis should be considered legitimate science even if it's not testable, because it's an entity that's either real or it's not, and "refusing to contemplate [its] possible existence on the grounds of some a priori principle, even though [it] might play a crucial role in how the world works, is as non-scientific as it gets."

Very well, but then why wouldn't this same standard also apply to the hypothesis that the world is the creation of God? Wouldn't the same standard also apply to Intelligent Design which is banned from public school science classrooms because it allegedly can't be tested and is therefore not regarded as a genuine scientific theory?

Carroll wants to make the multiverse a viable scientific option because it gives him a means to evade the compelling theistic implications of cosmic fine-tuning, but in order to include the multiverse hypothesis in the field of legitimate scientific inquiry he has to open up the domain of science to include conjectures about the existence and activity of a God, which is the very thing he's eager to avoid.

Tomorrow I'll return to the topic of miracles with a description of a contemporary episode that, assuming it's not a hoax, surely counts as a genuine miracle. Indeed, it would require an even greater miracle for it to be a hoax.

Wednesday, March 20, 2019

Miracles and the Multiverse (Pt. II)

Yesterday we took a look at Vincent Torley's analysis of the multiverse in which Torley argued that if there is a multiverse then miracles must not only be possible but certain to occur in some world in the infinite ensemble of worlds.

The naturalist who embraces the multiverse has another problem in addition to the problem with miracles. Darwinian evolution is predicated on uniformitarianism, the belief that the laws of physics never change, but if there's a multiverse of which we are a part, then uniformitarianism becomes highly improbable.

Torley writes:
[S]ince the argument for Darwinian evolution is based on the assumption that the laws and parameters of Nature do not vary, it follows that if we live in a multiverse, then our own universe is infinitely more likely to be one in which the miracles of the Bible occurred than a uniformitarian one in which life evolved in a Darwinian fashion.

... there will still be a number of possible universes in the multiverse, in which life pops into existence in the manner described in Genesis 1, and where living things just happen to exhibit the striking traits predicted by Darwinism, whereas there is (by definition) only ONE way for a given set of laws and parameters NOT to vary: namely, by remaining the same at every point in space and time.

The problem [for the naturalist] is that the uniformitarian requirement that the laws and parameters of Nature are the same at every point in space and time – which is rather like hitting bull’s eyes again and again and again, for billions of years – is inherently so very unlikely, when compared to “singularism” (the hypothesis that the laws of Nature undergo slight, short-lived or local fluctuations)...

Thus in a multiverse scenario, uniformitarianism becomes the albatross around the neck of Darwinism: no matter how many of Darwin’s predictions scientists manage to confirm, the sheer unlikelihood of the hypothesis that we live in a universe whose laws never vary renders Darwinism too unlikely a theory to warrant scientific consideration.
What a pickle. The naturalist rejects miracles and accepts Darwinian evolution (i.e. that evolution is a completely natural process with no intelligent input from a non-natural mind) largely because he rejects the existence of God.

He buttresses that rejection by also accepting the idea of the multiverse as an answer to the argument for God's existence based on cosmic fine-tuning, but by accepting the multiverse he pretty much has to give up the underlying assumption of Darwinism (uniformitarianism) and also his opposition to miracles.

He seems to be mired in an intellectual quagmire, and it's not at all clear how he can extricate himself from it.

More on the naturalist's difficulties tomorrow.

Tuesday, March 19, 2019

Miracles and the Multiverse (Pt. I)

Naturalism is the view that physical nature is all there is. It holds that there's no non-physical reality, no supernatural entities. Naturalists usually embrace the idea of a multiverse in which an infinite number of universes, all with different laws and parameters, exist something like bubbles in a bubble bath. Our universe is just one such bubble.

There's scarcely any empirical evidence for the multiverse, however, and its popularity seems to stem largely from its utility as a response to the powerful argument for a cosmic Designer, an argument based on the incredible improbability that a universe like ours, with astonishingly precise values of the parameters that form the fabric of the universe and make life possible, would exist at all.

If, however, there's an infinite array of different universes with different laws and parameters, then even astronomically improbable universes are certain to be among that infinite manifold. Thus, as amazingly improbable as a life-sustaining world is, one pretty much had to exist, given the existence of the multiverse, and we just happen to be in it.
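
The logic here is just the arithmetic of repeated trials. As a minimal sketch, assuming universes are independent draws and that a life-permitting universe arises with some fixed probability p > 0 per draw (simplifications of my own, not part of the multiverse hypothesis itself):

$$P(\text{at least one life-permitting universe among } n) = 1 - (1-p)^{n}$$

$$\lim_{n \to \infty}\left[\,1 - (1-p)^{n}\,\right] = 1 \quad \text{for any fixed } p > 0$$

However tiny p is, the probability that a life-permitting universe exists somewhere in the ensemble approaches certainty as the number of universes grows without bound.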

Nevertheless, as philosopher Vincent Torley points out in a lengthy treatment of the multiverse at Uncommon Descent, there's a perplexing difficulty for the naturalist who clings to the multiverse in order to avoid falling into theism. If the multiverse exists, then not only does the improbable become certain, but so, too, does anything that can possibly occur under some set of physical laws. This would, of course, include miracles.

Miracles, after all, are exceedingly improbable events given the laws which appear to govern our world, but they're not logically impossible. The laws of our universe could be structured in a way that allows for miracles on rare occasions. Such a world must exist somewhere in the multiverse, and perhaps we just happen to be in it.

The irony is that the naturalist rejects the miraculous because he rejects belief in the existence of God, but in order to sustain his non-belief in God he relies on a hypothesis that makes miracles virtually certain to occur somewhere in the vast ensemble of worlds that comprises the multiverse.

Naturalism sees the universe as invariant. That is, the laws of physics hold everywhere and always; they're inviolable. Thus, miracles, for the naturalist, are physically impossible. But as Torley points out, in a multiverse there should be universes in which the laws of physics fluctuate episodically, thereby permitting anomalous events like miracles, and these universes should be far more common than uniformitarian worlds in which the laws are invariant.

Here's Torley:
[B]ecause multiverses allow laws to vary bizarrely on rare and singular occasions, and because not all such variations are fatal to life, we can conclude that a life-permitting universe is far more likely than not to experience anomalous events (which some might call miracles), and that a life-permitting universe in which Biblical miracles occur is still more likely than one in which the laws and physical parameters of Nature are always uniform.

Thus [the] belief that we live in a universe where Biblical miracles occurred will still be more rational than the modern scientific belief that we live in a universe whose laws are space and time-invariant, because [these] universes are more common in the multiverse than law-invariant universes.
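To see the force of Torley's "only ONE way not to vary" point, it may help to make the counting explicit. What follows is a toy formalization of my own, assuming a uniform measure over possible law-histories in which the laws at each of N epochs either hold exactly or fluctuate; Torley himself specifies none of these details:

$$P(\text{perfectly uniform history}) = \frac{1}{2^{N}} \longrightarrow 0 \quad \text{as } N \to \infty$$

If each of N epochs offers even two possibilities, hold or fluctuate, then exactly one of the 2^N possible histories is uniform throughout, so unless the measure heavily favors uniformity, a never-varying universe becomes vanishingly rare as N grows. That, in miniature, is Torley's "hitting bull's eyes again and again" worry.
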
We'll have more to say about the difficulty embracing the multiverse hypothesis poses for the naturalist tomorrow.

Monday, March 18, 2019

Lunar Origin

Astronomer Hugh Ross, in a 2014 article at Salvo, discussed some of the current theories on the formation of our moon. Those theories posit a collision between an object about the size of Mars and the early earth, and they require such astonishing precision in the masses, momentum, and timing of the colliding objects that it's almost literally incredible that the collision happened at all.

Our astonishment is magnified by the fact that our moon, which is virtually unique in our solar system in terms of the ratio of its size to that of the earth and its proximity to the earth, has to have almost exactly the properties it has in order for life to be sustained on earth.

Robin Canup, the author of one of the more popular theories on the moon's origin, wrote that, "Current theories on the formation of the Moon owe too much to cosmic coincidences."

And earth scientist Tim Elliott observed that the degree and kinds of complexity and fine-tuning required by lunar origin models appear to be increasing at an exponential rate. Among those who study lunar origin, he notes, "the sequence of conditions that currently seems necessary in these...versions of lunar formation have led to philosophical disquiet."

Ross adds that,
Thanks to the exquisitely fine-tuned nature of this impact event, the collision:
  1. Replaced the earth's thick, suffocating atmosphere with one containing the perfect air pressure for efficient lung performance, the ideal heat-trapping capability, and the just-right transparency for efficient photosynthesis.
  2. Gave the new atmosphere the optimal chemical composition to foster advanced life.
  3. Augmented the earth's mass and density enough to allow it to gravitationally retain a large, but not too large, quantity of water vapor for billions of years.
  4. Raised the amount of iron in the earth's core close to the level needed to provide the earth with a strong, enduring magnetic field (the remainder came from a later collision event). This magnetic field shields life from deadly cosmic rays and solar x-rays.
  5. Delivered to the earth's core and mantle quantities of iron and other critical elements in just-right amounts to produce sufficiently long-lasting, continent-building plate tectonics at just-right levels. Fine-tuned plate tectonics also performs a crucial role in compensating for the sun's increasing brightness.
  6. Increased the iron content of the earth's crust, permitting a huge abundance of ocean life that, in turn, can support advanced life.
  7. Salted the earth's interior with an abundance of long-lasting radioisotopes, the heat from which drives most of the earth's tectonic activity and volcanism.
  8. Produced the moon, which gradually slowed the earth's rotation rate so that eventually advanced life could thrive on earth.
  9. Left the moon with a just-right mass and distance relative to the earth to stabilize the tilt of the earth's rotation axis, protecting the planet from rapid and extreme climatic variations.
  10. Created the moon with the just-right diameter and the just-right distance relative to the earth so that, at the narrow epoch in solar-system history when human life would be possible, humans on earth would witness perfect solar eclipses, which would help them make important discoveries about the solar system and universe.
If we didn't have a moon like the one we have, we wouldn't be here, and yet the existence of our moon is such a highly improbable occurrence that anyone who studies it is almost overwhelmed by how fortuitous it is.

No wonder, then, that so many of the people who study it, astronomers like Ross, believe that the earth/moon system, just like virtually every other aspect of cosmic architecture, is not an accident, but is rather the intentional product of an unimaginably intelligent and powerful engineer.

There's much more in Ross' article. It was written five years ago, which leads one to wonder how much more we've learned about the moon since then to add to the breathtaking scope of coincidences already known in 2014.

Saturday, March 16, 2019

Why St. Patrick Is Celebrated

Millions of Americans, many of them descendants of Irish immigrants, will celebrate their Irish heritage by observing St. Patrick's Day tomorrow. We're indebted to Thomas Cahill and his best-selling book How The Irish Saved Civilization for explaining why Patrick's is a life worth commemorating.

As improbable as his title may sound, Cahill weaves a fascinating and compelling tale of how the Irish in general, and Patrick and his spiritual heirs in particular, served as a tenuous but crucial cultural bridge from the classical world to the medieval age and, by so doing, made Western civilization possible.

Born a Roman citizen in 390 A.D., Patrick had been kidnapped as a boy of sixteen from his home on the coast of Britain and taken by Irish barbarians to Ireland. There he languished in slavery until he was able to escape six years later. Upon his homecoming he became a Christian, studied for the priesthood, and eventually returned to Ireland where he would spend the rest of his life laboring to persuade the Irish to accept the Gospel and to abolish slavery.

Patrick was the first person in history, in fact, to speak out unequivocally against slavery and, according to Cahill, the last person to do so until the 17th century.

Meanwhile, Roman control of Europe had begun to collapse. Rome was sacked by Alaric in 410 A.D. and barbarians were sweeping across the continent, forcing the Romans back to Italy, and plunging Europe into the Dark Ages.

Throughout the continent unwashed, illiterate hordes descended on the once grand Roman cities, looting artifacts and burning books. Learning ground to a halt and the literary heritage of the classical world was burned or moldered into dust. Almost all of it, Cahill claims, would surely have been lost if not for the Irish.

Having been converted to Christianity through the labors of Patrick, the Irish took with gusto to reading, writing and learning. They delighted in letters and bookmaking and painstakingly created indescribably beautiful Biblical manuscripts such as the Book of Kells, which is on display today in the library of Trinity College in Dublin. Aware that the great works of the past were disappearing, they applied themselves assiduously to the daunting task of copying all surviving Western literature - everything they could lay their hands on.

For a century after the fall of Rome, Irish monks sequestered themselves in cold, damp, cramped mud or stone huts called scriptoria, so remote and isolated from the world that they were seldom threatened by the marauding pagans. Here these men spent their entire adult lives reproducing the old manuscripts and preserving literacy and learning for the time when people would be once again ready to receive them.

These scribes and their successors served as the conduits through which the Graeco-Roman and Judeo-Christian cultures were transmitted to the benighted tribes of Europe, newly settled amid the rubble and ruin of the civilization they had recently overwhelmed.

Around the late 6th century, three generations after Patrick, Irish missionaries with names like Columcille, Aidan, and Columbanus began to venture out from their monasteries and refuges, clutching their precious books to their hearts, sailing to England and the continent, founding their own monasteries and schools among the barbarians and teaching them how to read, write and make books of their own.

Absent the willingness of these courageous men to endure deprivations and hardships of every kind for the sake of the Gospel and learning, Cahill argues, the world that came after them would have been completely different. It would likely have been a world without books. Europe almost certainly would have been illiterate, and it would probably have been unable to resist the Muslim incursions that beset them a few centuries later.

The Europeans, starved for knowledge, soaked up everything the Irish missionaries could give them. From such seeds as these modern Western civilization germinated. From the Greeks the descendants of the Goths and Vandals learned philosophy, from the Romans they learned about law, and from the Bible they learned of the worth of the individual who, created and loved by God, is therefore significant and not merely a brutish aggregation of matter.

From the Bible, too, they learned that the universe was created by a rational Mind and was thus not capricious, random, or chaotic. It would yield its secrets to rational investigation. Out of these assumptions, once their implications were finally and fully developed, grew historically unprecedented views of the value of the individual and the flowering of modern science.

Our cultural heritage is thus, in a very important sense, a legacy from the Irish - a legacy from Patrick. It's worth pondering on this St. Patrick's Day what the world would be like today had it not been for those early Irish scribes and missionaries some fourteen centuries ago.

Buiochas le Dia ar son na nGael (Thank God for the Irish), and I hope you have a great St. Patrick's Day.

Friday, March 15, 2019

No, the Economy Is NOT Terrible

Despite what some presidential candidates are saying, the American economy is in the best shape it's been in for many Americans' lifetimes. Andrew Kugle at The Washington Free Beacon cites a number of statements made recently by Democrats vying for their party's nomination which suggest that they believe, or want voters to believe, that the economy is in tatters.

For example,
  • "The economy in America today is not working for working people," Sen. Kamala Harris (D., Calif.).
  • "When they declare victory at 4 percent unemployment, it is not good enough," Sen. Kirsten Gillibrand (D., N.Y.).
  • "We have enormous crises in this country … in terms of millions of people living in poverty, in terms of a shrinking middle class," Sen. Bernie Sanders (I., Vt.).
  • "The middle-class squeeze is real and millions of families can barely breathe." Sen. Elizabeth Warren (D., Mass.).
  • "Clearly something is broken. Something is broken in our economy," former South Bend, Indiana mayor Pete Buttigieg (D.).
Statistics, however, don't seem to bear out these melancholy claims. According to Kugle:
The unemployment rate is at 3.8 percent, a 50-year low. Unemployment rates for African Americans and Hispanics have reached record lows in the last two years. The labor force participation rate is at 63.2 percent. Consumer confidence reached an 18-year high in September and rebounded in February after a three-month decline.
A 3.8% unemployment rate is considered at, or close to, full employment, so it's not clear what Senator Gillibrand means when she says that 4% is not good enough, nor what the others mean when they assert that the economy isn't working. Here are a couple more stats from Kugle's piece:
In January, manufacturing jobs were growing at a rate 714 percent faster under Trump than under his predecessor President Barack Obama.

The latest job numbers show wages growing at a rate not seen in a decade, with most of the wage growth occurring in the lower half of wage earners.
Kugle might also have pointed out that the number of food stamp recipients has declined by almost four million in the last two years, and that compared to the economy of every other first world nation, ours is very healthy.

Of course not everyone in the country is a millionaire, but of all the problems with which we are faced today, the economy seems to be among the least urgent. If politicians are going to tell us that that's not true, they should explain why it's not.

Otherwise, they're being less than honest with the American people, and they're certainly forfeiting their credibility.

Thursday, March 14, 2019

What Does Life Mean?

Holocaust survivor and psychiatrist Viktor Frankl once wrote a book titled Man's Search for Meaning in which he asserted that man can't live without believing that there is some purpose or meaning to his life. To waken in the morning and realize that there's no real point to anything one does in the hours that lie ahead, beyond just keeping oneself alive, is psychologically deadening.

It can easily lead to existential despair.

Each of us, of course, has projects which inject temporary meaning into our lives and help us to avoid a numbed listlessness, but when we ask what, in the overall scheme of things, those projects amount to, the answer seems to depend on how enduring they are.

Long term projects like raising a family or building a business seem more meaningful than short term projects like mowing the grass or watching a television program. Yet the problem is that if death ends our existence it also erases the meaning or significance of what we do, no matter how important it may seem to us while we're engaged in it.

For some, a relative few, their projects live on after them for a time, but even of many of these it might be asked: what's the point? Napoleon conquered much of Europe and was responsible for the slaughter of hundreds of thousands of men, but he was overthrown, died in exile, and the monarchy of France was restored.

His deeds and fame live on after his death, but what was the sense of all that death and carnage?

Meaning is a slippery notion; it's hard to define precisely what it is. But if our lives, like the light of a firefly, are here one instant and gone the next, if the earth is doomed to die, a casualty of our sun's eventual death, then nothing lasts and nothing really means anything. Unless what we do matters forever, it doesn't really matter at all.

These gloomy thoughts occurred to me as I read about a lecture given by biologist Jerry Coyne. Coyne told his audience that:
The universe and life are pointless....Pointless in the sense that there is no externally imposed purpose or point in the universe. As atheists this is something that is manifestly true to us. We make our own meaning and purpose.
This is perhaps the consensus view among those holding to a naturalistic worldview. It was eloquently articulated by philosopher Bertrand Russell in his essay A Free Man's Worship, in which he wrote the following words:
Such, in outline, but even more purposeless, more void of meaning, is the world which Science presents for our belief.

Amid such a world, if anywhere, our ideals henceforward must find a home. That Man is the product of causes which had no prevision of the end they were achieving; that his origin, his growth, his hopes and fears, his loves and his beliefs, are but the outcome of accidental collocations of atoms; that no fire, no heroism, no intensity of thought and feeling, can preserve an individual life beyond the grave; that all the labours of the ages, all the devotion, all the inspiration, all the noonday brightness of human genius, are destined to extinction in the vast death of the solar system, and that the whole temple of Man's achievement must inevitably be buried beneath the debris of a universe in ruins - all these things, if not quite beyond dispute, are yet so nearly certain, that no philosophy which rejects them can hope to stand.

Only within the scaffolding of these truths, only on the firm foundation of unyielding despair, can the soul's habitation henceforth be safely built.
It's a bleak view of life, to be sure, but if we're convinced that extinction awaits us, both individually and corporately, it's hard to dispute it. As the writer Somerset Maugham put it:
If death ends all, if I have neither to hope for good nor to fear evil, I must ask myself what am I here for….Now the answer is plain, but so unpalatable that most will not face it. There is no meaning for life, and [thus] life has no meaning.
The Russian writer Leo Tolstoy said essentially the same thing though with a bit more angst at the prospect of the emptiness and futility of existence:
What will come from what I am doing now, and may do tomorrow? What will come from my whole life? Otherwise expressed — Why should I live? Why should I wish for anything? Why should I do anything? Again, in other words, is there any meaning in my life which will not be destroyed by the inevitable death awaiting me?
But if death is not the end of our existence as persons, then there's a chance that there's meaning in the chaotic horror that is human history. If death is simply the transition between two stages of life, like the metamorphosis of a caterpillar into a butterfly, then maybe there's meaning, not only to history, but to each and every individual life.

If death is not the end of our existence then what we do in this life really may matter and may matter forever.

On the other hand, if death is the end, if we are annihilated when our body dies, then all we are is dust in the wind, and philosophers and writers from Schopenhauer to Shakespeare are correct: Life is just a tale told by an idiot, full of sound and fury and signifying nothing.

We're born, we suffer, we die, and that's all there is to it.

Pass the Prozac.

Wednesday, March 13, 2019

Are We Real? (Pt. II)

Yesterday we looked at an article by computer expert Peter Kassan in which he critiques the notion that we're actually virtual beings in a grand cosmic simulation. Kassan is very skeptical that such a simulation could ever be accomplished, and I'd like to finish the discussion of his argument today.

One of his objections is that a material device like a computer cannot produce immaterial effects like consciousness. He has more to say on this in the part of his article we'll look at here. How, for instance, can a computer generate what philosophers call intentionality? He writes:
The argument that a sufficiently complex computer program would be conscious in the same way you and I are goes something like this:
  • The brain is an information processor.
  • A computer is an information processor.
  • A computer can be programmed to process the same sort of information the brain processes in the same way that the brain processes information.
  • The conscious mind arises from information processing in the brain.
  • Therefore, a conscious mind will arise from equivalent information processing on a computer.
The argument depends crucially on the concept of information. A computer contains, processes, and displays data like a highway road sign consisting of a rectangular array of light bulbs. As we drive by, we can interpret the pattern of light as letters and words, but the message we read is actually nowhere contained in the display.

Imagine a space alien interpreting the display as a binary code, with each column of eight light bulbs conveying one byte. How would they interpret a sign that to us read DANGER—CONSTRUCTION AHEAD? A computer is processing data (information) only because we interpret it as doing so; a brain behaves as it does without interpretation.
In other words, the arrangement of the bulbs in the sign has a meaning to us; it is about something. But how do the reactions in our brains when we see the sign generate that meaning? The brain is just an enormously complex system of neurons. Where does the meaning come from? There's no meaning in the chemical reactions that occur in the brain when we observe the sign. Nor does a computer generate meaning. It simply produces data. Meaning is the product of conscious observers.
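To see why the "message" isn't in the bulbs themselves, consider a minimal sketch (in Python, with a made-up bulb pattern; the sign and both decoding conventions are purely illustrative). The same physical column of lights yields entirely different "information" depending on the convention the observer applies:

```python
# One column of eight light bulbs, top to bottom: 1 = lit, 0 = dark.
# The pattern is just a physical state; any "message" depends on the
# decoding convention an observer chooses to bring to it.
column = [0, 1, 0, 0, 0, 1, 0, 0]

# Pack the bulbs into a single byte (top bulb = most significant bit).
byte = 0
for bulb in column:
    byte = (byte << 1) | bulb

print(byte)                 # read as an unsigned integer: 68
print(chr(byte))            # read as ASCII: 'D' (as in "DANGER")
print(format(byte, '08b'))  # read as a raw bit string: 01000100
```

Nothing in the eight on/off states selects among these readings; the integer, the letter, and the bit string are all supplied by the interpreter, which is just Kassan's point.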

Kassan finishes with a couple more thoughts about all this:
There’s another irony concerning the notion that we’re all just computer simulations. If you believe you’re living in a computer simulation, then everything you think you know about the world — including its vastness, the probability of intelligent life elsewhere in the universe, and even the very existence of computers — is part of that simulation, and so is completely worthless. The evidence on which the entire chain of reasoning depends, in short, is illusory — and so nothing at all can be argued from it.
This is an interesting point. On the simulation hypothesis, none of our beliefs are reliable, since they're all just beliefs we hold because we've been programmed to do so. Among those simulated beliefs are our moral beliefs:
If we believe we’re just simulations, how should we behave? Should we treat everyone around us as if they’re just a figment of someone else’s imagination, shamelessly manipulating them for our own pleasure or gain? Should we take careless risks, knowing we’ll live again in another simulation or after a reboot? Should we even bother to get out of bed, knowing that it is all unreal? I think not.
If the universe is a simulation then we're all programmed to live the way we do. No behavior is wrong in any meaningful sense. There's no free will, no morality, no meaning to our existence, no justice or injustice.

We're all just actors on a stage, manipulated by an intelligent programmer for his own purposes. Thus there is not, nor can there be, any value to our lives.

This, by the way, would be true as well if the programmer were a God who preordains every aspect of our lives.

Belief in a real world and other minds besides our own is properly basic. We are within our epistemic rights to believe that the world exists objectively unless and until we are confronted with a compelling defeater for that belief, but the simulation hypothesis falls short of being a compelling defeater.

Tuesday, March 12, 2019

Are We Real? (Pt. I)

I have occasionally written (see most recently here and here) on the fascinating notion that the universe we live in is actually not "real" but is rather a computer simulation designed by some intelligent creatures living in a different world altogether. This theory has been popularized, perhaps most notably, by philosopher Nick Bostrom.

I find the theory fascinating not because I think it's plausible but because those who do are actually trying to account for the enormous amount of apparently intelligent engineering and design manifested by the fine-tuning of our universe without having to concede that theism is true. They are right, I think, to see an intelligence behind the universe, but wrong if they conclude that the intelligence is anything less than the Maximally Great Being posited by theism.

There's a good article by computer expert Peter Kassan at Skeptic.com in which he explains the simulation hypothesis and offers several criticisms of it.

Here's his summary of the argument for thinking we actually live in a computer simulation:
  • The universe contains a vast number of stars.
  • Some of these stars have planets.
  • Some of these planets must be like Earth.
  • Since intelligent life arose and eventually invented computers on Earth, intelligent life must have arisen and invented computers on some of these planets.
  • It is (or inevitably will be) possible to simulate intelligent life inhabiting a simulated reality on a computer.
  • Since it’s possible, it must have been done.
  • There must be a vast number of such simulations on a vast number of computers on a vast number of planets.
  • Since there’s only one real universe but there’s a vast number of simulations, the probability that you’re living in a simulation approaches one, while the probability that you’re living in the real universe approaches zero.
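The arithmetic behind that last step is straightforward. If there were N simulated worlds and only one real one, and no way of telling from the inside which kind you inhabit, the chance that yours is the real one would be 1/(N + 1). With, say, a million simulations, that's about one in a million; as N grows without bound, the probability of being in the real universe approaches zero. Everything, of course, hangs on the earlier premises that generate that vast N in the first place.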
As Kassan observes, there is no empirical evidence for, or testable implications of, this argument. It's therefore not a scientific hypothesis; it's more akin to science fiction or theology. Kassan calls it "cybernetic solipsism": there's little reason, he says, to argue that anyone else in your simulated universe is conscious — to achieve verisimilitude, there'd be no need to actually program anyone else's consciousness but yours.

More than that, though, even an immensely powerful computer would not be able to program human consciousness:
But even a superdupercomputer wouldn’t produce even a single conscious being. The crucial move in the argument is that the simulation of a human mind would actually be conscious in the same sense that you and I are.

Your computer simulation wouldn’t simply behave exactly like a real person, it would actually feel pain, pleasure, lust, fear, anger, love, nausea, angst, ennui, and everything else you can feel.

It would actually experience the same optical (and other sensory) illusions you do. It would feel what you feel when you get sick, or when you drink or take drugs. It would fall asleep and dream, and then wake up to realize that it was only dreaming. Presumably, it would even die.
In other words, the qualia or phenomena of sensory experience would have to somehow emerge from the whirrings of the computer's hard drive, but a physical computer can only produce physical outputs, and our sensations - pain, color, sound, etc. - are not physical or material.

They're produced by physical stimuli and generated by electrochemical reactions in our nervous system, but the sensation of blueness when we look at the sky, to take one example, is not itself physical.

In other words, our conscious experience is not simulatable and therefore we cannot be a simulation.

I'll have more on Kassan's argument tomorrow.

Monday, March 11, 2019

Human Language and Abstract Thinking

An interesting short article by Michael Egnor at Mind Matters explores the difference between human language and the vocalizations of other animals. Despite having much of the hearing and vocal apparatus necessary for speech, animals are not capable of language.

He begins by quoting science writer Tom Siegfried, who states that:
It's true that humans, and humans alone, evolved the complex set of voice, hearing and brain-processing skills enabling full-scale sophisticated vocal communication. Yet animals can make complicated sounds; parrots can mimic human speech and cats can clearly convey that it's time for a treat.

Many animals possess an acute sense of hearing and are able to distinguish random noises from intentional communication.

Much of the physiological apparatus for hearing and speaking is found in all land-dwelling vertebrates — the tetrapods — including mammals, birds, amphibians and reptiles. "Humans share a significant proportion of our basic machinery of hearing and vocal production with other tetrapods," cognitive biologist W. Tecumseh Fitch writes in the Annual Review of Linguistics. Even so, only humans have language, Egnor argues. Here's part of his reasoning:
[Animals] can make and respond to signs—gestures, grunts and the like. A dog, for example, can respond appropriately to simple words directed at him (“Sit!” “Fetch!”). But all animal communication is symbols, that is, signals that point directly to an object. In this case, the object is a simple expected action the animal is to perform immediately.

What animals cannot do is communicate using abstractions. They cannot use designators — words employed abstractly as language. For example, a dog can be trained, by reward and punishment, to stay when told, "Stay!"

He associates the sound “s-t-a-y” with a behavior and performs the behavior. But he doesn’t know what you mean when you say “Let’s stay a bit longer on the beach,” “He extended his stay in Peru,” or “The judge issued a stay of the eviction order.”

Animals can only think concretely. Their thought is of particulars—the particular bowl of food, thrown stick, or warm bed. They don’t contemplate nutrition, exercise, or rest. Humans can think abstractly, without any particular physical object in mind. For example, a vet might tell her client during an office visit, “Tuffy here needs to lose about 1.5 kg. I suggest a lower calorie kibble and more exercise—if possible, before bedtime.”

She can explain it to her client but not to the dog because it’s all abstractions about times, places, things, and concepts. Of course, he might recognize his name, “Tuffy,” and raise his ears slightly to see if he is being told to do something concrete.

Concrete thought needs no language because the concrete thinker focuses on a perceived object. Tuffy thinks of his bowl of food. If he were to think of nutrition, an abstract concept, he would need abstract designators as objects, not only to express his thought but even to think it.

In short, animals don’t have language because they don’t have abstract thought and thus have neither the capacity nor the need for abstract designators—words as language.
Language is a tool for abstract thinking, and only human beings can think abstractly. The gap between animals and human beings is not just a narrow evolutionary jump. It's a chasm.