Tuesday, March 31, 2015


There's been a lot of hand-wringing this week over the Indiana Religious Freedom Restoration Act (RFRA), which protects individuals from being forced to compromise their religious beliefs in their place of business. This has been interpreted by many, particularly on the left, as opening the door to discrimination against gays and minorities, and thus the expressions of outrage directed at the Indiana legislature. This interpretation is, however, wrong. The law only allows a defendant to use religious liberty as a defense against lawsuits; it doesn't guarantee that the defense will succeed. Even so, what's been missed in all the commotion and teeth gnashing is that some twenty states have such laws, that state senator Barack Obama voted for an RFRA in Illinois when he served in that state's legislature, and that Bill Clinton signed a similar law at the federal level in 1993. No matter. It's okay when Democrats pass laws like this, but when Republicans do it you just know there's treachery afoot.

Gabriel Malor at The Federalist has a good piece explaining the Indiana law and illustrating how shallow is so much of the protest against it, but perhaps the blue ribbon for nincompoopery among the protests goes to Connecticut governor Dan Malloy who has proclaimed that he will ban all state travel to Indiana to punish that state for its retrograde belief that religious freedom deserves to be protected. What Gov. Malloy evidently didn't realize as he basked in the glow of his righteous indignation was that Connecticut has had an even more restrictive law than Indiana's on the books for twenty years. Perhaps the rest of the country should ban travel to Connecticut.

These RFRAs stem from the fact that business people have been sued, and some have been forced out of business, because they felt that it violated their religious convictions to, for example, bake a wedding cake for a gay couple or photograph their wedding. Whether their religious convictions are right or wrong is not the issue. The issue is whether anyone who has strong religious or moral objections to a particular practice must nevertheless set those convictions aside as they conduct their business. To say they must disregard their beliefs, as some insist, is to say that one person's freedom of religion must be subordinated to another person's freedom of sexual expression, or whatever. The purpose of the RFRA, in all the states that have them and in the law that Bill Clinton signed, is to protect those who have religious/moral objections to what their customers are asking them to do.

Of course, one baker's refusal to countenance a gay union may be an inconvenience to a gay couple, but there are plenty of other bakers and photographers who would be happy to bake their cakes and film their weddings. The couple's inconvenience hardly seems worth forcing a family-run bakery to choose between its conscience, a lawsuit, and going out of business.

Monday, March 30, 2015

A Problem With Teaching Ethics

Ray Penning at Cardus Blog asks the question, "Can ethics be taught?" The answer, of course, is yes and no. Ethics, as the study of the rules that philosophers have prescribed to govern our moral behavior, can certainly be taught, but, although thousands of books have been written about this, I doubt that any of them have changed anyone's actual behavior. Part of the reason is that, as Penning observes:
Ethics courses that leave students with a bunch of “you shoulds” or “you should nots” are not effective. There are deeper questions that proceed from our understanding of what human nature is about and what we see as the purpose of our life together.
This is true as far as it goes, but the reason teaching such rules is not effective is that focusing on the rules fails to address the metaethical question of why we should follow any of those rules in the first place. What answer can be given to the question of why one should not just be selfish, or adopt a might-makes-right ethic? At bottom, secular philosophy has no convincing answer. Philosophers simply utter platitudes like "we wouldn't want others to treat us selfishly, so we shouldn't treat them selfishly," which, of course, is completely unhelpful unless one is talking to children.

The reply is unhelpful when aimed at adult students because they will discern that it simply asserts that we shouldn't be selfish because it's selfish to be selfish. The question, though, is why, exactly, is it wrong to do to others something we wouldn't want done to us? What is it about selfishness that makes selfishness wrong?

Moreover, this sort of answer simply glosses over the problem of what it means to say that something is in fact "wrong" in the first place. Does "wrong" merely mean something one shouldn't do? If so, we might ask why one shouldn't do it, which likely elicits the reply that one shouldn't do it because it's wrong. The circularity of this is obvious.

The only way to break out of the circle, the only way we can make sense of propositions like "X is wrong," is to posit the existence of a transcendent moral authority, a personal being, who serves as the objective foundation for all our moral judgments. If there is no such being then neither are there any objective moral values or duties to which we must, or even should, adhere. This lack of any real meaning to the word "wrong" is a major consequence of the secularization of our culture, and it's one of the major themes of my novel In the Absence of God (see link at the top of this page) which I heartily recommend to readers of Viewpoint.

Saturday, March 28, 2015

The Interaction Problem

I've run a few posts on the topic of mind and matter this past week, largely because we've been discussing it in my classes, and because the topic is, I think, fascinating.

There's one more thing I'd like to say about it, particularly with regard to one of the common objections materialists make to the belief that we are at least partly comprised of immaterial mental substance. This is the objection based on what philosophers call the interaction problem. The problem is that it's inconceivable or literally unthinkable that two completely different substances, mind and matter (brain), could in any way interact with each other. Given that we can't describe how brains interact with immaterial minds and vice versa, belief that they somehow do is unwarranted, or so it is claimed.

The problem with the interaction objection is that it seems to be based on the assumption that something can only be affected by other things which are like it. That is, matter, like brains or bodies, can only be affected by other things which are material, but this principle - that like can only affect like - is surely not true. We see counterexamples all around us:
  • An immaterial phenomenon like the idea of food causes the physical reaction of salivary glands secreting saliva.
  • The excitation of cone cells in the retina, a physical reaction, produces the sensation of red which is non-physical.
  • Swirling fluid in your inner ear, a physical condition, causes the sensation of dizziness which is non-physical.
  • Getting your fingers caught in a closing car door, a material situation, causes pain which is an immaterial phenomenon.
And so on. The only way that the principle that "like causes like" can be known to be true is if we know to start with that materialism is true, but the truth of materialism is the very point that's in question in this discussion.

The materialist can attempt to evade the examples given above by insisting that ideas, color, pain and other sensations are not really immaterial phenomena but are merely names we give to certain types of neurochemical events in the brain, but this is unconvincing. It may be that when you experience pain there are certain chemical reactions occurring in your brain and certain nerve fibers being stimulated, but those physical phenomena are not what pain is any more than thunder is the same thing as lightning. In fact, thunder can at least be explained in terms of lightning, but there's no way to explain how nerve fibers firing produces pain.

Friday, March 27, 2015

The Evolution of Consciousness

Sal Cordova at Uncommon Descent talks about how reflecting on the phenomenon of human consciousness as a high school student led him to doubt the Darwinian story:
I remember sitting in class and the biology teacher gave the standard talking points. But for some reason, the fact I was conscious did not seem reducible to evolutionary explanations. Strange that I would even be perplexed about it as a high school student, but I was. That was the beginning of my doubts about Darwin…

Years later, when I related the story to Walter ReMine, he explained to me that consciousness poses a serious problem for evolution.

He said something to the effect, “Say an animal has to flee a predator — all it has to do is run away. Why does it have to evolve consciousness in order to flee predators?” Mechanically speaking the animal can be programmed to flee, or even hunt, without having to be self-aware. Why does it have to evolve consciousness to do anything for survival?

Why would selection favor the evolution of consciousness? How does natural selection select for the pre-cursors of consciousness? I don’t think it can. Ergo, consciousness didn’t evolve, or it’s just a maladaptation, or an illusion — or maybe it is created by God. Materialists can say consciousness is an illusion all they want, but once upon a time, when my arm was broken in a hang gliding crash, I felt real pain. It would have been nice if consciousness were an illusion back then, but it wasn’t.
Somehow, at some point in our embryonic development consciousness arises, but how does a particular configuration of material stuff generate it? Dead people have the same configuration of matter in their brains (unless they suffered a head injury) that they had before dying and yet before death they were conscious and after death they are not. Why? What's missing after death?

How do physical processes like electrochemical reactions in the brain produce a belief, or a doubt, or understanding? How do atoms whirling about in our neuronal matrix give rise to our sense that the distant past is different from the recent past? How do chemical reactions translate a pattern of ink on paper into a meaning, and how do firing synapses translate electrical pulses into the sensation of red? Not only does no one know the answers to these questions, it's very hard to see how they even could have an answer if our material brain is the only entity responsible for them.

Consciousness is an incredibly intriguing phenomenon and not only is there no explanation of it in a materialist ontology, there's also no explanation for how it could ever have evolved through purely random physical, material processes.

Cordova has more at the link.

Thursday, March 26, 2015

Mind and Materialism

Raymond Tallis at The New Atlantis discusses the devastating assault on philosophical materialism that began in the 1970s when American philosopher Thomas Nagel explored the question, "What is it like to be a bat?"

Nagel argued that there is something it is like to be a bat whereas it does not make sense to say that it is like something to be a stone. Bats, and people, have conscious experience that purely material objects do not have, and it is this conscious experience that is the defining feature of minds. Moreover, this experience is not a fact about the physical realm. Tallis writes:
This difference between a person’s experience and a pebble’s non-experience cannot be captured by the sum total of the objective knowledge we can have about the physical makeup of human beings and pebbles. Conscious experience, subjective as it is to the individual organism, lies beyond the reach of such knowledge. I could know everything there is to know about a bat and still not know what it is like to be a bat — to have a bat’s experiences and live a bat’s life in a bat’s world.

This claim has been argued over at great length by myriad philosophers, who have mobilized a series of thought experiments to investigate Nagel’s claim. Among the most famous involves a fictional super-scientist named Mary, who studies the world from a room containing only the colors black and white, but has complete knowledge of the mechanics of optics, electromagnetic radiation, and the functioning of the human visual system.

When Mary is finally released from the room she begins to see colors for the first time. She now knows not only how different wavelengths of light affect the visual system, but also the direct experience of what it is like to see colors. Therefore, felt experiences and sensations are more than the physical processes that underlie them.
Nagel goes on to make the claim, a claim that has put him in the bad graces of his fellow naturalists, that naturalism simply lacks the resources to account for conscious experience. Tallis goes on to say that,
But none of the main features of minds — which Nagel identifies as consciousness, cognition, and [moral] value — can be accommodated by this worldview’s [naturalism's] identification of the mind with physical events in the brain, or by its assumption that human beings are no more than animal organisms whose behavior is fully explicable by evolutionary processes.
One might wonder why naturalistic materialists are so reluctant to acknowledge that there's more to us than just physical matter. What difference does it make if an essential aspect of our being is mental? What does it matter if we're not just matter but also a mind? Indeed, what does it matter if we are fundamentally mind?

Perhaps the answer is that given by philosopher J.P. Moreland. Moreland argues in his book Consciousness and the Existence of God that naturalism entails the view that everything that exists is reducible to matter and energy, that is, there are no immaterial substances. Thus, the existence of human consciousness must be explicable in terms of material substance or naturalism is likely to be false. Moreland also argues that there is no good naturalistic explanation for consciousness and that, indeed, the existence of consciousness is strong evidence for the existence of God.

Nagel, an atheist, doesn't go as far as Moreland in believing that the phenomena of conscious experience point to the existence of God, but he comes close, arguing that there must be some mental, telic principle in the universe that somehow imbues the world with consciousness. There is nothing about matter, even the matter which constitutes the brain, that can account for conscious experiences like the sensations of color or a toothache. There's nothing about a chemical reaction or the firing of nerve fibers that can conceivably account for what we experience when we see red, hear middle C, taste sweetness, or feel pain. Nor is there anything about matter that can account for the existence of moral value.

If it turns out that naturalism remains unable to rise to the challenge presented by consciousness then naturalism, and materialism, will forfeit their hegemony among philosophers, a hegemony that has already been seriously eroded.

You can read the rest of Tallis' article at the link. It's very good.

Wednesday, March 25, 2015

Beetle Body

Are we identical with our bodies or are we beings which have bodies? Philosopher Alvin Plantinga makes an argument, borrowing from Franz Kafka's novella The Metamorphosis, for the latter view:
If we just are our bodies then it would be incoherent to think that we could exist apart from our bodies, that is, there would be a contradiction involved in trying to imagine such a state of affairs. It would be like trying to imagine a triangle that doesn't have three sides.

Here's a thought experiment: Imagine yourself having an out-of-the-body experience where you're looking down from above on an operating table upon which your body is lying. Whether or not you believe OBEs happen, such a scenario doesn't seem to be incoherent, like trying to imagine a table with no surface, but if it is coherent to imagine our selves being separated from our bodies then our selves can't be identical to our bodies.

Tuesday, March 24, 2015

Collapsing Talks?

President Obama is exceedingly wroth with Israeli Prime Minister Benjamin Netanyahu for his public criticism of Mr. Obama's plans for a nuclear deal with Iran, but it may be that Mr. Netanyahu isn't the only concerned party who's balking at the deal. According to debkafile (a source of questionable reliability, it must be said) both Germany and France, the latter of which is acting under pressure from Saudi Arabia and the U.A.E., are also getting cold feet:
President Barack Obama failed to shift French President François Hollande from his objections to the nuclear accord taking shape between the US and Iran in the call he put through to the Elysée Friday night, March 20. US Secretary of State John Kerry fared no better Saturday, when he met British, French and German Foreign ministers in London for a briefing on the talks’ progress intended to line the Europeans up with the American position. He then found, according to debkafile’s sources, that France was not alone; Germany, too, balked at parts of the deal in the making.

The French are demanding changes in five main points agreed between Kerry and Iranian Foreign Minister Javad Zarif before the Iranians quit the talks Friday:

They insist that -
  • Iran can’t be allowed to retain all the 6,500 centrifuges (for enriching uranium) conceded by the Americans. This figure must be reduced.
  • Similarly, the stocks of enriched uranium accepted by the US to remain in Iranian hands are too large.
  • France insists on a longer period of restrictions on Iran's nuclear work before sanctions are eased.
  • It's pushing for a longer moratorium – 25 years rather than the 15 years offered by the Obama administration – and guarantees at every stage.
  • The main sticking point however is France’s insistence that UN sanctions stay in place until Iran fully explains the evidence that has raised suspicions of past work on developing a nuclear warhead design.
The Iranians counter that they could never satisfy the French condition, because they would never be able to prove a negative and disprove evidence of a weapons program that they claim is forged. There is no chance of Tehran ever admitting to working on a nuclear warhead – or allowing US inspectors access to suspected testing sites – because that would belie supreme leader Ayatollah Ali Khamenei’s solemn contention that Iran’s nuclear program is solely for peaceful purposes and always has been.

debkafile’s Gulf sources disclose that the tough French bargaining position in the nuclear talks stems partly from its intense ties with Saudi Arabia and other Gulf nations, including the United Arab Emirates.
If debkafile's report is true (a big "if" given past experience with them) all this insolence is doubtless not going down well in the White House, where failure to truckle to Mr. Obama's wishes is taken to be an act of lèse-majesté. He has publicly let it be known that Israel will be punished in the U.N. for its temerity in re-electing Mr. Netanyahu against our president's clear desire that they repudiate him at the polls. We might well wonder what he has in store for France, Germany, Saudi Arabia, the U.A.E. and any other nation which presumes to know more about what's in its own best interests than he does.

Monday, March 23, 2015

Bad Science Guy

Bill Nye won the hearts of a lot of kids who watched his videos in their science classes. He was goofy yet funny, corny but personable, and very likeable. In the last few years, however, a different side of "The Science Guy" has revealed itself. In 2010 Nye gave a speech to the American Humanist Association in which he declared that,
I'm insignificant. ... I am just another speck of sand. And the earth really in the cosmic scheme of things is another speck. And the sun an unremarkable star. ... And the galaxy is a speck. I'm a speck on a speck orbiting a speck among other specks among still other specks in the middle of specklessness. I suck.
It's ironic that he received a hearty ovation for this speech given that not only does it reveal a bleak, even nihilistic view of himself in particular and mankind in general, but one of the criticisms that humanists make of Christians is that their belief in their inherent sinfulness ("I suck") is dehumanizing and depressing. Maybe it's okay to be dehumanizing and depressing as long as one agrees with the humanist world picture.

At any rate, Casey Luskin at Evolution News and Views talks about some of the scientific infelicities in Nye's new book Undeniable: Evolution and the Science of Creation. Luskin writes:
Undeniable promotes the standard dumbed-down atheistic narrative about science, society, and evolution -- except now his book is influencing younger thinkers who mistakenly think Nye is an objective source of information for everything about science...

Later, Nye reveals that his view that humans "suck" comes directly from his study of evolution: "As I learned more about evolution, I realized that from nature's point of view, you and I ain't such a big deal." According to evolution, Nye says, "humankind may not be that special."

And why aren't we special? Under Nye's nihilistic thinking, "evolution is not guided by a mind or a plan," and nature even shows "lack of evidence of a plan." For Nye, "Every other aspect of life that was once attributed to divine intent is now elegantly and completely explained in the context of evolutionary science."

Under Nye's outlook, even humanity's advanced abilities, like our moral codes and selfless altruism, are not special gifts that show we were made for a higher purpose. Rather, "Altruism is not a moral or religious ideal, no matter what some people might tell you," for human morality is merely a "biological part of who or what we are as a species."
If that's true, of course, then there's no reason why we should think that we have any objective moral duty to do anything. Nothing is really right or wrong if human morality is simply the product of blind, impersonal processes which cannot know what they were creating and cannot hold anyone accountable.

Luskin moves from Nye's metaphysics to reviewing some of his scientific claims and finds that Nye's science is still stuck in the 1970s:
On the natural chemical origins of life, Nye maintains that the famous Miller-Urey experiments "simulate[d] the conditions on earth in primordial times," and "produced the natural amino acids." Yet it's been known for decades that the Miller-Urey experiments did not correctly simulate the earth's early atmosphere. An article in Science explains why the experiments are now considered largely irrelevant: "the early atmosphere looked nothing like the Miller-Urey situation."

Nye also promotes the unsophisticated argument that humans and apes must share a common ancestor because our gene-coding DNA is only about 1 percent different. "This is striking evidence for chimps and chumps to have a common ancestor," he writes.

This argument is not just simplistic, it's also false.

Another article in Science challenged "the myth of 1%," suggesting the statistic is a "truism [that] should be retired," and noting, "studies are showing that [humans and chimps] are not as similar as many tend to believe." Geneticist Richard Buggs argues more accurate genetic comparisons show "the total similarity of the genomes could be below 70 percent."

But if we do share DNA with chimps, why should that demonstrate our common ancestry? Intelligent agents regularly re-use parts that work in different systems (e.g., wheels for cars and wheels for airplanes). Genetic similarities between humans and chimps could easily be seen as the result of common design rather than common descent. Nye's crude argument ignores this possibility.
Nye fares no better in his discussion of fossil transitional forms:
Nye cites Tiktaalik as a "'fishapod' (transition between fish and tetrapod, or land animal with four legs)" that is a fulfilled "prediction" of evolution because of when and where it was found in the fossil record.... Nye is apparently unaware that this so-called evolutionary "prediction" went belly-up after scientists found tracks of true tetrapods with digits some 18 million years before Tiktaalik in the fossil record. As the world's top scientific journal Nature put it, this means Tiktaalik is not a "direct transitional form."

In another instance, Nye claims we've "found a whole range of human ancestors, including Sahelanthropus tchadensis," apparently not realizing that an article in Nature reported there are "many... features that link the specimen with chimpanzees, gorillas or both," since "Sahelanthropus was an ape."

Nye calls the fossil mammal Ambulocetus a "walking whale" with "whalelike flippers, and feet with toes." Nye apparently missed a paper in Annual Review of Ecology and Systematics which found that Ambulocetus had "large feet" and called its mode of swimming "inefficient" -- very different from whales. Another paper found that unlike whales, Ambulocetus was tied to freshwater environments and lived near "the mouths of rivers, lunging out at terrestrial prey -- analogous to the hunting process of crocodilians." This mammal had nothing like "whalelike flippers."
Luskin mercifully concludes his recitation of Nye's embarrassing unfamiliarity with current discoveries in biology with one more illustration. Word has apparently yet to reach the "science guy" that one of the anti-designers' favorite examples of poor design, the human eye, is actually an example of excellent design:
Nye also promotes an old canard that the human eye is wired backwards. According to Nye, "the human eye's light-sensing cells are tucked behind other layers of tissue" which is "not an optimal optical arrangement." He apparently never saw a 2010 paper in Physical Review Letters which found that our eyes have special glial cells which sit over the retina, acting like fiber-optic cables to channel light through the tissue directly onto our photoreceptor cells. According to the paper, the human retina is "an optimal structure designed for improving the sharpness of images." Indeed, just this month a headline at Scientific American reports: "The Purpose of Our Eyes' Strange Wiring Is Unveiled." That article confirms that the purpose lies in "increasing and sharpening our color vision."

Nye tells his readers that the eyes of cephalopods like the octopus have "a better design than yours." But an article at Phys.org called our retinal glial cells a "design feature," and concluded: "The idea that the vertebrate eye, like a traditional front-illuminated camera, might have been improved somehow if it had only been able to orient its wiring behind the photoreceptor layer, like a cephalopod, is folly."
There are more examples of Nye's faulty science in Luskin's article which also contains links to his sources.

Evolution, at least in its Darwinian form (i.e. a process that admits no "non-natural" influences), is a theory in crisis, as geneticist Michael Denton has described it, but if one is a naturalistic materialist it's really the only game in town, which is the main reason many scientists cling to it. Scientists like to say that they follow the evidence wherever it leads, but what counts as evidence is, as the philosophers of science like to say, theory-laden. That is, only evidence that fits the scientist's worldview is allowed to count as data. Everything else is ignored. It is an interesting fact that science is often driven by the scientist's metaphysical commitments, and only secondarily by empirical evidence. That's certainly not the way it's supposed to be or the way it's portrayed in the popular culture, but it is the way it too often is.

Saturday, March 21, 2015

Ten Ways

A writer for Salon.com named Kali Holloway lists "Ten Ways White People Are More Racist Than They Realize." It's an interesting article, but it may be a case of there being less here than meets the eye. Holloway's first misstep is to fail to define what she means by the term "racism." The word means different things to different people and her failure to tell us what she means by it confuses her whole column.

For example, she shows that there are racial disparities in our society, but it hardly follows that disparities are due to racism. She attributes some data that she adduces to white racism, but the data show no such thing. She says in her intro that whites "believe racism is over," but again this is a meaningless claim in the absence of a plausible definition of racism. A lot of whites think, rightly in my opinion, that because blacks are no longer the target of discriminatory laws and are in fact protected by the law from overt discrimination, and because they benefit from affirmative action in all its various guises, and because educators bend over backwards to help black kids succeed in school, genuine racism, while still holding out in some backwaters, is pretty much in retreat. But if by racism being over Holloway has in mind a state in which all whites love and admire all blacks and think the black sub-culture is a wonderful thing, well, then, racism isn't over, nor, on that definition of racism, is its persistence necessarily a bad thing.

At the risk of causing readers' eyes to glaze over here's why I think her ten reasons don't show what she seems to think they show. One caveat: I haven't followed up on the links she offers so I reserve the right to change my mind if the links were to prove any more convincing than are Holloway's arguments. Here are her reasons for thinking whites are racist to a greater extent than even they realize with a brief response from me:

1. College professors, across race/ethnicity and gender, are more likely to respond to queries from students they believe are white males. Okay, but whatever the reason for this, how can it be an exemplification of white racism if the more positive response to white males is, as Holloway acknowledges, exhibited by black professors as well as white?

2. White people, including white children, are less moved by the pain of people of color, including children of color, than by the pain of fellow whites. Probably true, but why is that? Is it because white children instinctively disdain black children, or is it because, as a matter of human nature, people universally have more difficulty relating to those who are unlike themselves? To show that this is a feature of white racism in particular rather than a general feature of human nature the studies would have to show that children of other races in the U.S. feel as much empathy for children of all other races as they do for other children of their own race. I bet a study wouldn't show that.

3. White people are more likely to have done illegal drugs than blacks or Latinos, but are far less likely to go to jail for it. Maybe so, but what are the unstated relevant factors? Were there prior arrests, was there resistance to arrest, were the whites able to secure better legal representation? Unless these questions are answered Holloway's citation of the disparity in sentencing tells us little about why the disparity exists.

4. Black men are sentenced to far lengthier prison sentences than white men for the same crimes. See #3

5. White people, including police, see black children as older and less innocent than white children. It's hard to see why this is even on the list. Whites tend to see Asians as younger than they really are. So what? Moreover, blacks often recount how they had to grow up fast on the streets and that they experienced more of life by the time they were thirteen than do most middle class people. If so, why is it racist on the part of whites to assume that blacks are telling them the truth about that?

6. Black children are more likely to be tried as adults and are given harsher sentences than white children. There's some interesting stuff in the details of Holloway's elaboration on this one. She writes: "That might explain why, of the roughly 2,500 juveniles in the U.S. who have been sentenced to life without parole, nearly all (97 percent) were male and (60 percent) black." Once again, no discussion of the relevant incidentals accompanies this stat, an omission which should at least give us pause. In order to tell whether there's an injustice lurking in this statistic we'd have to know what percentage of serious violent crimes were committed by blacks. If blacks commit 60% or more of such crimes then a 60% rate of life sentences seems completely unremarkable.

Holloway goes on to say that, "for black kids, killing a white person was a good way to end up behind bars for their entire adult life. For white kids, killing a black person actually helped their chances of ensuring their prison stay would be temporary. From the report: “[T]he proportion of African American [juveniles sentenced to life without parole] for the killing of a white person (43.4 percent) is nearly twice the rate at which African American juveniles overall have taken a white person’s life (23.2 percent). What’s more, we find that the odds of a [juvenile life without probation] sentence for a white offender who killed a black victim are only about half as likely (3.6 percent) as the proportion of white juveniles arrested for killing blacks (6.4 percent).”

If Holloway is looking for evidence of racism in these statistics then one of them is pretty damning, but not in the way she thinks. According to what she writes black juveniles kill whites at a rate almost four times greater than white juveniles kill blacks. I should think that that stat would count for far more in the "who's a racist sweepstakes" than, say, #2 above.
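For readers who want to check the arithmetic, the ratios implied by the report's figures can be computed directly. A quick sketch (the percentages come from the excerpt above; the variable names are mine):

```python
# Figures quoted from the report excerpt above.
black_jlwop_white_victim = 43.4  # % of black juvenile lifers whose victim was white
black_kill_white_rate    = 23.2  # % of black juvenile homicides with a white victim
white_jlwop_black_victim = 3.6   # % of white juvenile lifers whose victim was black
white_kill_black_rate    = 6.4   # % of white juvenile homicides with a black victim

# The sentencing disparities the report emphasizes:
print(round(black_jlwop_white_victim / black_kill_white_rate, 2))  # 1.87
print(round(white_jlwop_black_victim / white_kill_black_rate, 2))  # 0.56

# The cross-racial killing-rate comparison made in the paragraph above:
print(round(black_kill_white_rate / white_kill_black_rate, 2))     # 3.62
```

The last line is the "almost four times" comparison: 23.2 percent versus 6.4 percent is a ratio of about 3.6.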

7. White people are more likely to support the criminal justice system, including the death penalty, when they think it’s disproportionately punitive toward black people. I'm going to go out on a limb here. Without having read the reports to which she links I'm going to predict that they are nonsense. I know, you're going to ask me how I can say that. The reason is that all of us have a built-in Baloney Detector and mine red-lined when I read #7.

8. The more “stereotypically black” a defendant looks in a murder case, the higher the likelihood he will be sentenced to death. Ms. Holloway must be black herself because if a white person wrote about stereotypically black criminals they'd be pilloried for it at Salon. But let's grant the concept of the "stereotypically black" criminal since she introduced it. Is the determinative factor, as she puts it, broad nose, thick lips, and dark skin, or is it the fact that the convicted killer comes across as inarticulate, stupid, brutal and thuggish that sways the jurors against him? If it's the latter then why is this racist?

9. Conversely, white people falsely recall black men they perceive as being “smart” as being lighter-skinned. See #10

10. A number of studies find white people view lighter-skinned African Americans (and Latinos) as more intelligent, competent, trustworthy and reliable than their darker-skinned peers. Aside from the redundancy in what she says about "trustworthy and reliable" I have no doubt what she claims is true if the lighter-skinned African Americans are also more articulate, better educated, and don't wear their hats sideways and their pants around the middle of their butts. More significantly, I suspect that light skin is valued in the black community as well since that's who's buying all those skin lighteners she talks about in her explanation. Does that mean blacks are racists? If blacks wanting lighter skin is in some convoluted way indicative of white racism, is the desire for darker skin among whites who visit tanning salons indicative of black racism?

Leftist/Progressives think that the locus of racism in this country is on the right in groups like the Tea Party, but using them as a test case, how does someone like Holloway explain the popularity among Tea Partiers of people like Herman Cain in 2012, Ben Carson in 2016, Thomas Sowell, Clarence Thomas, and Walter Williams? All of these men have dark skin. So does Gov. Bobby Jindal of Louisiana. So does Mia Love (elected to Congress in Utah, of all places), Condoleezza Rice, Star Parker, Jason Riley and many, many others. Maybe it's not the color of one's skin that matters to people but rather the content of their ideas.

Some time ago I wrote on VP about a study that showed that blacks themselves think racism afflicts a higher percentage of blacks than it does whites. If that's so it's another reason Ms. Holloway's case is weak. It sounds too much like special pleading.

Friday, March 20, 2015

Compulsory Voting

The staunchly pro-choice Mr. Obama created a bit of a stir the other day when he seemed to suggest that people should have no choice at all when it comes to deciding whether they should vote. The man who believes it should be legal to choose to terminate a pregnancy seemed to say that it would be a good thing if people could not choose to abstain from voting. Presumably the requirement would apply only to citizens, but who knows? This is a terrible idea even if he wishes to compel only citizens to vote - perhaps leaving it optional for non-citizens - and is completely at odds with the concept of a free people in a free society.

Here's what the president said during a Q&A session at a Cleveland town hall event last Wednesday:
I don’t think I’ve ever said this publicly, but I’m going to go ahead and say it now. In Australia and some other countries, there’s mandatory voting. It would be transformative if everybody [in the U.S.] voted — that would counteract money more than anything. If everybody voted then it would completely change the political map in this country.
I guess technically he doesn't actually endorse mandatory voting here although it's hard to understand why he mentioned it right before saying how it would be a wonderful thing if everybody voted. In any case, the dreadfulness of endorsing compulsory voting, complete with fines and prison sentences for those who fail to show up at the polls, is exceeded by the reasons why he wants everyone to vote:
The people who tend not to vote are young, they’re lower income, they’re skewed more heavily towards immigrant groups and minority groups. They are often the folks who are scratching and climbing to get into the middle-class. They’re working hard.
One can understand why Mr. Obama wants these particular people to vote. When they do, they tend to vote overwhelmingly for Democrats, but I could never understand why anyone who cares about the well-being of our republic would want them to vote. The demographic the president identifies is usually not only the most politically disengaged but also the most apathetic, least informed, and least invested in their communities. If they do vote, their ballot is often based not on an informed consideration of the issues and candidates, and what's best for the country, but on emotional appeals or self-interest, the candidate's looks, charisma, sex, race, or age. None of these are good reasons for voting for someone - voting for president is not like voting for homecoming queen - but they're often the only reasons the uninformed can think of.

Here's a simple test: If someone can't name their state's U.S. senators and their district's congressional representative that should tell them that they really don't know enough yet to vote responsibly. If someone doesn't care sufficiently about voting to register on time, to secure an ID, to study the issues, to read a newspaper or its electronic equivalent, why should we make it easier for them to vote? The only answer is that if the uninformed can be compelled to vote it would guarantee political hegemony for whichever party promised them the most free stuff, and we know which party that would be.

We should not be encouraging people to vote who have no idea what they're voting for. What we should be encouraging people to do is to educate themselves on what's going on in their country and the world. If they do that then they'll be self-motivated to vote and the government won't have to make it easy or force them to do it. They'll care enough to do what's necessary to cast their ballot. Every adult citizen should be eligible to vote, but only those who are informed and engaged should be encouraged to vote, and those who are informed and engaged won't need much encouragement.

Thursday, March 19, 2015


I'll be attending a conference this weekend at which geneticist Michael Denton will be speaking so I thought I'd rerun this post from last year in which Denton discusses one of the strangest phenomena in cell biology and a huge problem for Darwinian explanations of the evolution of the cell:

Geneticist Michael Denton is the author of two outstanding books, one (Evolution: A Theory in Crisis) on why Darwinism simply can't explain life and one (Nature's Destiny) on how the laws of physics and chemistry and the properties of water and carbon dioxide make the world an extraordinarily fit place for the emergence of higher forms of life.

He's interviewed at a site called The Successful Student and the interview is a must read for anyone interested in how discoveries in biology consistently refute the Darwinian paradigm.

Here's just one of the problems he discusses, a problem I confess I had never heard of before reading the interview:
At King’s [College in London] the subject of my PhD thesis was the development of the red [blood] cell and it seemed to me there were aspects of red cell development which posed a severe challenge to the Darwinian framework. The red cell performs one of the most important physiological functions on earth: the carriage of oxygen to the tissues. And in mammals the nucleus is lost in the final stages of red cell development, which is a unique phenomenon.

The problem that the process of enucleation poses for Darwinism is twofold: first of all, the final exclusion of the nucleus is a dramatically saltational event and quite enigmatic in terms of any sort of gradualistic explanation in terms of a succession of little adaptive Darwinian steps. Stated bluntly; how does the cell test the adaptive state of ‘not having a nucleus’ gradually? I mean there is no intermediate stable state between having a nucleus and not having a nucleus.

This is perhaps an even greater challenge to Darwinian gradualism than the evolution of the bacterial flagellum because no cell has ever been known to have a nucleus sitting stably on the fence half way in/half way out! So how did this come about by natural selection, which is a gradual process involving the accumulation of small adaptive steps?

The complexity of the process — which is probably a type of asymmetric cell division — whereby the cell extrudes the nucleus is quite staggering, involving a whole lot of complex mechanisms inside of the cell. These force the nucleus, first to the periphery of the cell and then eventually force it out of the cell altogether. It struck me as a process which was completely inexplicable in terms of Darwinian evolution — a slam-dunk if you want.

And there’s another catch: the ultimate catch perhaps? is an enucleate red cell adaptive? Because birds, which have a higher metabolic rate than mammals, keep their nucleus. So how come that organisms, which have a bigger demand for oxygen than mammals, they get to keep their nucleus while we get rid of ours?

And this raises of course an absolutely horrendous problem that in the case of one of the most crucial physiological processes on earth there are critical features that we can’t say definitively are adaptive.... Every single day I was in the lab at King’s I was thinking about this, and had to face the obvious conclusion that the extrusion of the red cell nucleus could not be explained in terms of the Darwinian framework.

And if there was a problem in giving an account of the shape of a red cell, in terms of adaptation, you might as well give up the Darwinian paradigm; you might as well "go home." .... It’s performing the most critical physiological function on the planet, and you’re grappling around trying to give an adaptive explanation for its enucleate state. And the fact that birds get by very, very well (you can certainly argue that birds are every bit as successful as mammals). So, what’s going on? What gives? And it was contemplating this very curious ‘adaptation’ which was one factor that led me to see that many Darwinian explanations were “just-so" stories.
Denton also talks about another fascinating development in biology - the growing realization that everything in the cell affects everything else, that even the shape, or topology, of the cell determines which genes will be expressed, and that the regulation of all of the cellular activities is far more complex than any device human beings have ever been able to devise.

It's all very fascinating stuff.

Wednesday, March 18, 2015

Netanyahu's Win

So Israeli Prime Minister Bibi Netanyahu won big in yesterday's election, defying White House hopes that a more tractable man would be elected to lead Israel, a man who would bend more easily to Mr. Obama's will. Netanyahu's victory not only disappointed the Obama administration, it also confuted many confident prognostications of the media punditry who predicted that his party, the Likud, would fall. The results of the Israeli election are a major setback to those who want Israel to simply capitulate to its enemies, give them what they want, and commit national suicide. Netanyahu, much to the chagrin of the American left, is no Neville Chamberlain.

He was strongly criticized for coming to the U.S. a week or so ago to give a speech before Congress, a perfectly reasonable thing for him to do given that he sees the fate of his country about to be sacrificed to political expediency in the Iranian nuclear negotiations. The Democrats thought Netanyahu's visit was a provocation and insult to the president so they boycotted his speech, and Mr. Obama completely shunned him. Indeed, Mr. Obama has been displaying his contempt for the Israeli Prime Minister almost from his first day in office. Even now his petulance seems to prevent him from offering Mr. Netanyahu his congratulations, something he did even after the Iranian election.

It's rather strange that Mr. Obama never says anything derogatory about the enemies of the United States in Cuba, North Korea, and Venezuela and treats them with more deference and diplomatic courtesy than he treats Mr. Netanyahu. One has to wonder why.

Mr. Obama visited many of the Arab nations in the Middle East on his world tour, bowing to sheiks and petty tyrants, but snubbed Israel. He later complained about having to deal with Mr. Netanyahu on an inadvertently open mic to French President Sarkozy, and subsequently left Netanyahu, the Prime Minister of a major ally, mind you, to sit for an hour in the White House while he went upstairs to eat dinner. Add to this a number of other undiplomatic insults heaped upon Netanyahu by other members of the administration and it's little wonder that Mr. Netanyahu waved off the advice of those who suggested he not displease the president by giving that speech to the U.S. Congress.

In any case, Mr. Netanyahu has been reelected, even though Democrat electoral operatives went to Israel to advise his opponent and even though a number of American liberal groups spent lots of money to unseat him. The American media are blaming racism for the result, apparently because Mr. Netanyahu correctly pointed out, at the last minute when it's doubtful that it changed any votes, that supporters of Mr. Herzog, his opponent, were busing Arab citizens to the polls to vote for Herzog.

The media are upset by the Prime Minister's decisive victory because it's a defeat for President Obama, but Mr. Obama should soon realize that his support in an election often seems to be the kiss of death for whomever he supports. Only when he himself has been on the ticket has he prevailed in any election he has tried to influence, and his personal success has had less to do with his views than with his race and public charisma. In every other election in which he took an interest - the off-year U.S. election of 2010, the Canadian election in 2011, the recent U.S. election in 2014, and now this one in Israel - his support has been either ignored or repudiated by the voters.

Given this unenviable track record maybe Hillary is hoping that the rumors about Obama trying to sabotage her candidacy for 2016 are true.

Tuesday, March 17, 2015


Shelby Steele of the Hoover Institution has a fine piece at National Review on the different ways conservatives and liberals see their country and how liberals see conservatism.

He begins by describing the reaction of a few members of an audience at a charity banquet where he was giving a speech when he made mention, in the context of work being done in Africa by American charitable organizations, of "American exceptionalism." The comment was met by polite boos from one segment of the audience, and Steele uses this incident as a springboard for explaining the left's attitude toward America in general and conservatives in particular. I'd like to be able to excerpt the entire essay, but I'll focus on just a few of Steele's points and urge you to read the whole thing.

Referring to his experience with the boo-birds at the banquet, he writes:
It was as if they were saying, “Don’t you understand that even the phrase ‘American exceptionalism’ is a hubris that evokes the evils of white supremacy? It is an indecency that we won’t be associated with.” In booing, these audience members were acting out an irony: They were good Americans precisely because they were skeptical of American greatness. Their skepticism was a badge of innocence because it dissociated them from America’s history of evil. To unreservedly buy into American exceptionalism was, for them, to turn a blind eye on this evil, and they wanted to make the point that they were far too evolved for that. They would never be like those head-in-the-sand Americans who didn’t understand that American greatness was tainted by evil.
Steele believes that this desire to dissociate themselves from the evils of the past drives much of liberal thinking today.
In its hunger for innocence, post-1960s liberalism fell into a pattern in which anti-Americanism — the impulse, as the cliché puts it, to “blame America first” — guaranteed one’s innocence of the American past. Here in anti-Americanism was the Left’s all-defining formula: relativism-dissociation-legitimacy-power. Anti-Americanism is essentially a relativism — a false equivalency — that says America, despite her greatness, is no better an example to the world than many other countries. And in this self-effacement there is a perfect dissociation from the American past, and thus a new moral legitimacy — and so, finally, an entitlement to power .... American exceptionalism was a scandal that one booed in the name of humility and decency. Dissociation from it was the road to the Good. And this was so sealed a matter that booing me was only an expression of one’s moral self-esteem — the goodness in oneself bursting forth to censure a heretic.
Steele's essay is a reminder that different people can look at the same set of facts and interpret them completely differently. Just as some looking at the picture below see an old woman and some see a young girl, liberals looking at America see its historical flaws and conservatives see its historical uniqueness. Its flaws are troublesome, but every nation has them. Its uniqueness, however, has made America great not only in terms of military and economic power, but also in terms of the freedoms, opportunities, and assistance it offers not only to its own citizens but also to those of much of the rest of the world. The fact that so many people wish to emigrate to the U.S. or at least live under its protective umbrella is an implicit confirmation that they think so, too.

Steele's portrait of the racism he experienced growing up (Steele is African American) will perhaps come as a surprise to younger readers conditioned to think of the brutality on the Edmund Pettus Bridge in Selma or the unleashing of dogs and firehoses on peaceful protestors in Alabama as typical of how whites looked at and treated blacks in the 20th century. It wasn't:
When I was a boy growing up under segregation, racism was not seen as evil by most whites. It was simply recognition of a natural law: that some races were inferior to others and that people needed and wanted to be with “their own kind.” Most whites were quite polite about this — blacks were in their place and it was not proper to humiliate them for their lowly position. Racism was not meant to be menacing; it was only a kind of fatalism, an acceptance of God’s will. And so most whites could claim they held no animus toward blacks. Their prejudice, if it was prejudice at all, was perfectly impersonal. It left them free to feel compassion and sometimes even deep affection for those inferiors who cleaned their houses, or served them at table, or suckled their babies. And this was the meaning of things.
Of course, the fact that so many people had benign, even compassionate feelings toward those they deemed inferior doesn't justify the attitude that it was in the nature of things that blacks should always be second-class citizens. The attitude was obviously wrong, but as it manifested itself in many people of the time it was not evil. Unfortunately, it requires a mind able to handle nuance to understand the way things were back then, and nuance is one of the first casualties when the epithet of "racism" takes wing. To illustrate the attitude Steele is describing, think of the movie Driving Miss Daisy.

The last part of his essay is given to a description of the liberal perception of conservatives and the irony of that perception given the repeated, and sometimes disastrous, failures of liberal social policy:
Conservatism — liberals believed — facilitated America’s moral hypocrisy. Its high-flown constitutional principles only covered up the low motivations that actually drove the country: the self-absorbed pursuit of wealth, the insatiable quest for hegemony in the world, the unacknowledged longing for hierarchy, the repression of women, the exploitation of minorities, and so on. Conservatism took the hit for all the hypocrisies that came to light in the 1960s. And it remains today an ideology branded with America’s shames. Liberalism, on the other hand, won for its followers a veil of innocence. And this is the gift that recommends it despite its legacy of failed, even destructive, public policies. We can chalk up the black underclass, the near disintegration of the black family, and the general decline of public education — among many other things — to liberal social policies.

Welfare policies beginning in the 1970s incentivized black women not to marry when they became pregnant, thereby undermining the black family and generating a black underclass. The public schools in many inner cities became more and more dysfunctional as various laws and court cases hampered the ability of school officials and classroom teachers to enforce discipline. Meanwhile, the schools fell under the sway of multiculturalism as well as powerful teachers’ unions that often oppose reforms that would make their members more accountable. Students in these schools, after the welfare-inspired breakdown of the black family, were less and less prepared to learn.

Affirmative action presumed black inferiority to be a given, so that racial preferences locked blacks into low self-esteem and hence low standards of academic achievement. “Yes, we are weak and non-competitive and look to be preferred for this; our weakness is our talent.”

School busing to achieve integration led only to a more extensive tracking system (classes that are assigned by academic performance) within the integrated schools, so that blacks were effectively segregated all over again in the lower academic tracks. And so on. Post-1960s liberalism — on the hunt for white American innocence — has done little more than toy with blacks. Yet it is conservatives who now feel evicted from their culture, who are made to feel like outsiders even as they are accused of being traditionalists. And contemporary conservatism is now animated by a sense of grievance, by the feeling that the great principles it celebrates are now dismissed as mere hypocrisies.
There's much more wisdom and insight at the link. If you want to understand the culture conflicts of our day, particularly as they relate to race, Steele's work, including his several books (The Content of Our Character: A New Vision of Race in America, A Dream Deferred: The Second Betrayal of Black Freedom in America, and White Guilt: How Blacks and Whites Together Destroyed the Promise of the Civil Rights Era), is a good place to start.

Monday, March 16, 2015

Raising Wages, Lowering Jobs

One of the arguments against raising the minimum wage is that it will do more harm to poor people than it will do good. By raising the minimum wage many employers who work on tight profit margins will be forced either to lay off employees or go out of business. It doesn't help a single mom trying to make a few extra bucks if her wage is raised to $15 an hour but her hours are reduced to zero. Nor does it help lower income people when the businesses at which they shop have to charge their customers more in order to pay the higher labor cost, or when a business that might have added an extra worker or two decides to forego taking on the additional cost.

This article reports that that's the very thing that's beginning to happen in Seattle as a result of the town fathers' decision to raise the city's minimum wage:
Seattle’s $15 minimum wage law goes into effect on April 1, 2015. As that date approaches, restaurants across the city are making the financial decision to close shop. The Washington Policy Center writes that “closings have occurred across the city, from Grub in the upscale Queen Anne Hill neighborhood, to Little Uncle in gritty Pioneer Square, to the Boat Street Cafe on Western Avenue near the waterfront.”

Of course, restaurants close for a variety of reasons. But, according to Seattle Magazine, the “impending minimum wage hike to $15 per hour” is playing a “major factor.” That’s not surprising, considering “about 36% of restaurant earnings go to paying labor costs."

Washington Restaurant Association’s Anthony Anton puts it this way: “It’s not a political problem; it’s a math problem.” He estimates that a common budget breakdown among sustaining Seattle restaurants so far has been the following: 36 percent of funds are devoted to labor, 30 percent to food costs and 30 percent go to everything else (all other operational costs). The remaining 4 percent has been the profit margin, and as a result, in a $700,000 restaurant, he estimates that the average restaurateur in Seattle has been making $28,000 a year.

With the minimum wage spike, however, he says that if restaurant owners made no changes, the labor cost in quick service restaurants would rise to 42 percent and in full service restaurants to 47 percent.

Restaurant owners, expecting to operate on thinner margins, have tried to adapt in several ways including “higher menu prices, cheaper, lower-quality ingredients, reduced opening times, and cutting work hours and firing workers,” according to The Seattle Times and Seattle Eater magazine. As the Washington Policy Center points out, when these strategies are not enough, businesses close, “workers lose their jobs and the neighborhood loses a prized amenity."

A spokesman for the Washington Restaurant Association told the Washington Policy Center, “Every [restaurant] operator I’m talking to is in panic mode, trying to figure out what the new world will look like… Seattle is the first city in this thing and everyone’s watching, asking how is this going to change? Seattle is rightly famous for great neighborhood restaurants. That won’t change. What will change is that fewer people will be able to afford to dine out, and as a result there will be fewer great restaurants to enjoy. People probably won’t notice when some restaurant workers lose their jobs, but as prices rise and some neighborhood businesses close, the quality of life in urban Seattle will become a little bit poorer.”
If raising the minimum wage is a good idea why not eliminate poverty altogether and raise it to $50 an hour? The answer is obvious, of course. Businesses couldn't handle it, and would have to close down, but many of them can't handle an increase to $15 an hour either. So why is the city requiring them to do it?
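Anton's budget breakdown reduces to simple arithmetic. Here is a minimal sketch (the $700,000 revenue figure and the percentage splits are his estimates from the excerpt above; the post-hike profit figures are implied by those percentages rather than stated in the article):

```python
# Anton's estimated budget for a sustaining Seattle restaurant.
revenue = 700_000
food, other = 0.30, 0.30  # food costs and all other operating costs

def profit(labor_share):
    """Annual profit left over after labor, food, and other costs."""
    return revenue * (1 - labor_share - food - other)

print(round(profit(0.36)))  # current 36% labor share: a 4% margin, $28,000
print(round(profit(0.42)))  # quick service after the hike: -$14,000
print(round(profit(0.47)))  # full service after the hike: -$49,000
```

At a 42 or 47 percent labor share the margin goes negative, which is the "math problem" Anton describes: with no other changes, the owner loses money on every year of operation.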

Maybe these businesses could apply to Washington for subsidies to pay their workers, sort of like how people can apply for subsidies to pay their increased health care premiums under Obamacare.

Saturday, March 14, 2015

St. Patrick

The following is a post I've run on previous St. Patrick's Days and thought I'd run again this year because, I say in all modesty, it's pretty interesting:

Millions of Americans, many of them descendants of Irish immigrants, celebrate their Irish heritage by observing St. Patrick's Day today. We are indebted to Thomas Cahill and his best-selling book How The Irish Saved Civilization for explaining to us why Patrick's is a life worth commemorating. As improbable as his title may sound, Cahill weaves a fascinating and compelling tale of how the Irish in general, and Patrick and his spiritual heirs in particular, served as a tenuous but crucial cultural bridge from the classical world to the medieval age and, by so doing, made Western civilization possible.

Born a Roman citizen in 390 A.D., Patrick had been kidnapped as a boy of sixteen from his home on the coast of Britain and taken by Irish barbarians to Ireland. There he languished in slavery until he was able to escape six years later. Upon his homecoming he became a Christian, studied for the priesthood, and eventually returned to Ireland where he would spend the rest of his life laboring to persuade the Irish to accept the Gospel and to abolish slavery. Patrick was the first person in history, in fact, to speak out unequivocally against slavery and, according to Cahill, the last person to do so until the 17th century.

Meanwhile, Roman control of Europe had begun to collapse. Rome was sacked by Alaric in 410 A.D. and barbarians were sweeping across the continent, forcing the Romans back to Italy, and plunging Europe into the Dark Ages. Throughout the continent unwashed, illiterate hordes descended on the once grand Roman cities, looting artifacts and burning books. Learning ground to a halt and the literary heritage of the classical world was burned or moldered into dust. Almost all of it, Cahill claims, would surely have been lost if not for the Irish.

Having been converted to Christianity through the labors of Patrick, the Irish took with gusto to reading, writing and learning. They delighted in letters and bookmaking and painstakingly created indescribably beautiful Biblical manuscripts such as the Book of Kells which is on display today in the library of Trinity College in Dublin. Aware that the great works of the past were disappearing, they applied themselves assiduously to the daunting task of copying all surviving Western literature - everything they could lay their hands on.

For a century after the fall of Rome, Irish monks sequestered themselves in cold, damp, cramped mud huts called scriptoria, so remote and isolated from the world that they were seldom threatened by the marauding pagans. Here these men spent their entire adult lives reproducing the old manuscripts and preserving literacy and learning for the time when people would be once again ready to receive them.

These scribes and their successors served as the conduits through which the Graeco-Roman and Judeo-Christian cultures were transmitted to the benighted tribes of Europe, newly settled amid the rubble and ruin of the civilization they had recently overwhelmed. Around the late 6th century, three generations after Patrick, Irish missionaries with names like Columcille, Aidan, and Columbanus began to venture out from their monasteries and refuges, clutching their precious books to their hearts, sailing to England and the continent, founding their own monasteries and schools among the barbarians and teaching them how to read, write and make books of their own.

Absent the willingness of these courageous men to endure deprivations and hardships of every kind for the sake of the Gospel and learning, Cahill argues, the world that came after them would have been completely different. It would likely have been a world without books. Europe almost certainly would have been illiterate, and it would probably have been unable to resist the Muslim incursions that arrived a few centuries later.

The Europeans, starved for knowledge, soaked up everything the Irish missionaries could give them. From such seeds as these modern Western civilization germinated. From the Greeks the descendants of the Goths and Vandals learned philosophy, from the Romans they learned about law, from the Bible they learned of the worth of the individual who, created and loved by God, is therefore significant and not merely a brutish aggregation of matter.

From the Bible, too, they learned that the universe was created by a rational Mind and was thus not capricious, random, or chaotic. It would yield its secrets to rational investigation. Out of these assumptions, once their implications were finally and fully developed, grew historically unprecedented views of the value of the individual and the flowering of modern science.

Our cultural heritage is thus, in a very important sense, a legacy from the Irish. A legacy from Patrick. It is worth pondering on this St. Patrick's Day what the world would be like today had it not been for those early Irish scribes and missionaries some fifteen centuries ago.

Buiochas le Dia ar son na nGaeil (Thank God for the Irish), and I hope you have a great St. Patrick's Day.

Friday, March 13, 2015

Why Naturalism Is Self-Refuting

Among the live options for worldviews on offer in modern times certainly one of the most popular among scientists, philosophers, and other academics is the view called naturalism. This view states that all that exists is ultimately explicable in terms of the sorts of explanations employed by scientists. In other words, nature and nature's laws are all there is, there's nothing else.

Most worldviews offer a Grand Story for how we got here. In naturalism the Story is some iteration of Darwinian evolution. Blind, natural processes generated life which evolved through genetic mutation, natural selection, and genetic drift to produce the grand diversity of life, including man and his marvelous powers of reason. The problem with this Story, though, is that, according to philosopher John Gray (who is himself a naturalist), it's self-refuting.

In a piece in The New Republic critical of his fellow atheist Richard Dawkins, Gray writes:
[T]he ideas and the arguments that [Dawkins] presents are in no sense novel or original, and he seems unaware of the critiques of positivism that appeared in its Victorian heyday.

Some of them bear re-reading today. One of the subtlest and most penetrating came from the pen of Arthur Balfour, the Conservative statesman, British foreign secretary, and sometime prime minister. Well over a century ago, Balfour identified a problem with the evolutionary thinking that was gaining ascendancy at the time. If the human mind has evolved in obedience to the imperatives of survival, what reason is there for thinking that it can acquire knowledge of reality, when all that is required in order to reproduce the species is that its errors and illusions are not fatal?

A purely naturalistic philosophy cannot account for the knowledge that we believe we possess. As he framed the problem in The Foundations of Belief in 1895, “We have not merely stumbled on truth in spite of error and illusion, which is odd, but because of error and illusion, which is even odder.” Balfour’s solution was that naturalism is self-defeating: humans can gain access to the truth only because the human mind has been shaped by a divine mind. Similar arguments can be found in a number of contemporary philosophers, most notably Alvin Plantinga. Again, one does not need to accept Balfour’s theistic solution to see the force of his argument. A rigorously naturalistic account of the human mind entails a much more skeptical view of human knowledge than is commonly acknowledged.
Balfour's argument is sometimes a bit difficult to understand when encountered for the first time, but it says essentially that any trait selected for survival by natural selection can only coincidentally be a trait that is useful in discovering truth. If human reason is the product of naturalistic evolution it would have been selected for because it somehow conferred survival value, not because it was reliable in finding truth. There's no necessary connection between knowing truth and survival of a species.

In her recent book titled Finding Truth: 5 Principles for Unmasking Atheism, Secularism, and Other God Substitutes, Nancy Pearcey quotes a number of naturalist thinkers who make this point but who don't seem to realize that it undercuts their own naturalism. (The following draws upon an excerpt of Pearcey's book at Evolution News and Views.) Pearcey writes:
Of course, the sheer pressure to survive is likely to produce some correct ideas. A zebra that thinks lions are friendly will not live long. But false ideas may be useful for survival. Evolutionists admit as much: Eric Baum says, "Sometimes you are more likely to survive and propagate if you believe a falsehood than if you believe the truth." Steven Pinker writes, "Our brains were shaped for fitness, not for truth. Sometimes the truth is adaptive, but sometimes it is not." The upshot is that survival is no guarantee of truth. If survival is the only standard, we can never know which ideas are true and which are adaptive but false.

An example comes from Francis Crick. In The Astonishing Hypothesis, he writes, "Our highly developed brains, after all, were not evolved under the pressure of discovering scientific truths but only to enable us to be clever enough to survive."
But, Pearcey tells us, that means Crick's own theory cannot be relied upon to be true.
To make the dilemma even more puzzling, evolutionists tell us that natural selection has produced all sorts of false concepts in the human mind. Many evolutionary materialists maintain that free will is an illusion, consciousness is an illusion, even our sense of self is an illusion -- and that all these false ideas were selected for their survival value.
The same thing is often said about morality. "It's an illusion," philosopher Michael Ruse wrote, "fobbed off on us by our genes to get us to cooperate." But if all of these things are the illusory products of evolution, how do we know that the theory of evolution and the naturalistic worldview it supports are not also illusions? Why should we think these things true if our thinking is as likely to lead us to falsehood as it is to lead us to truth? Pearcey continues:
A few thinkers, to their credit, recognize the problem. Literary critic Leon Wieseltier writes, "If reason is a product of natural selection, then how much confidence can we have in a rational argument for natural selection? ... Evolutionary biology cannot invoke the power of reason even as it destroys it."

On a similar note, philosopher Thomas Nagel asks, "Is the [evolutionary] hypothesis really compatible with the continued confidence in reason as a source of knowledge?" His answer is no: "I have to be able to believe ... that I follow the rules of logic because they are correct -- not merely because I am biologically programmed to do so." Hence, "insofar as the evolutionary hypothesis itself depends on reason, it would be self-undermining."
Pearcey goes on to show that Darwin himself, and many of his followers, argued that man's mind leads him to belief in God but that our minds, being the product of blind chance and selection, are too untrustworthy to credit that conclusion. Yet they never applied that same skepticism to the theory of evolution itself.
People are sometimes under the impression that Darwin himself recognized the problem. They typically cite Darwin's famous "horrid doubt" passage where he questions whether the human mind can be trustworthy if it is a product of evolution: "With me, the horrid doubt always arises whether the convictions of man's mind, which has been developed from the mind of the lower animals, are of any value or at all trustworthy."

But, of course, Darwin's theory itself was a "conviction of man's mind." So why should it be "at all trustworthy"?

Surprisingly, however, Darwin never confronted this internal contradiction in his theory. Why not? Because he expressed his "horrid doubt" selectively -- only when considering the case for a Creator.

In another passage Darwin admitted, "I feel compelled to look to a First Cause having an intelligent mind in some degree analogous to that of man." Again, however, he immediately veered off into skepticism: "But then arises the doubt -- can the mind of man, which has, as I fully believe, been developed from a mind as low as that possessed by the lowest animal, be trusted when it draws such grand conclusions?" That is, can it be trusted when it draws "grand conclusions" about a First Cause? Perhaps the concept of God is merely an instinct programmed into us by natural selection, Darwin added, like a monkey's "instinctive fear and hatred of a snake."

In short, it was on occasions when Darwin's mind led him to a theistic conclusion that he dismissed the mind as untrustworthy. He failed to recognize, though, that to be logically consistent he needed to apply the same skepticism to his own theory.
Pearcey concludes the excerpt with a quote from Oxford mathematician John Lennox, who wrote that according to atheism "the mind that does science ... is the end product of a mindless unguided process. Now, if you knew your computer was the product of a mindless unguided process, you wouldn't trust it. So, to me atheism undermines the rationality I need to do science."

One way to summarize all this is to say that you can believe your reason is trustworthy or you can believe in naturalistic evolution, but you can't believe in both.

Thursday, March 12, 2015

The Gang of 47

Last week 47 Republican senators signed a brief letter to Iran's regime informing them that any deal they sign with the administration regulating their nuclear weapons program has to be approved by the U.S. Senate. I don't know what effect the letter has had on the Iranians, but it has sent Democrats in congress and the media into hysterics. They're demanding that the 47 be tried for treason and insisting that this is the most seditious behavior ever in the history of the universe. What they're not doing much of, though, is telling us exactly how the senators violated the law or how the letter undermines the administration's negotiations.

Nor have I heard much from the left on why what these senators did is anywhere near as bad as what, say, Ted Kennedy did back in the 80s when he secretly sent emissaries to Moscow to stab Ronald Reagan in the back, or what any number of Democrats, including current Secretary of State John Kerry, did in the 80s when they went to Nicaragua to give aid and comfort to the Sandinistas, or what Representative Jim McDermott and other Democrats did in 2002 when they went to Baghdad to undermine George W. Bush, or Nancy Pelosi's visit to Syria in 2007 for similar reasons.

When Democrats deliberately try to undermine the foreign policy of a Republican administration they're celebrated as heroic patriots, but when Republicans do something far less toxic the liberals in the media and congress want them all arrested. Geez, you'd think the GOP had used the IRS to punish their political enemies or something.

Anyway, Breitbart has a very good piece on all this plus a video clip of the author of the letter, Senator Tom Cotton, a veteran of Afghanistan and Iraq, being grilled by the folks on MSNBC's Morning Joe program. Here's the heart of the Breitbart article wherein the writer, Ben Shapiro, names names:
Senators John Sparkman (D-AL) and George McGovern (D-SD). The two Senators visited Cuba and met with government actors there in 1975. They said that they did not act on behalf of the United States, so the State Department ignored their activity.

Senator Teddy Kennedy (D-MA). In 1983, Teddy Kennedy sent emissaries to the Soviets to undermine Ronald Reagan’s foreign policy. According to a memo finally released in 1991 from KGB head Victor Chebrikov to then-Soviet leader Yuri Andropov: “On 9-10 May of this year, Sen. Edward Kennedy’s close friend and trusted confidant [John] Tunney was in Moscow. The senator charged Tunney to convey the following message, through confidential contacts, to the General Secretary of the Central Committee of the Communist Party of the Soviet Union, Y. Andropov.”
What was the message? That Teddy would help stifle Reagan’s anti-Soviet foreign policy if the Soviets would help Teddy run against Reagan in 1984. Kennedy offered to visit Moscow to “arm Soviet officials with explanations regarding problems of nuclear disarmament so they may be better prepared and more convincing during appearances in the USA.” Then he said that he would set up interviews with Andropov in the United States. “Kennedy and his friends will bring about suitable steps to have representatives of the largest television companies in the USA contact Y.V. Andropov for an invitation to Moscow for the interviews…Like other rational people, [Kennedy] is very troubled by the current state of Soviet-American relations,” the letter explained.
The memo concluded:
Tunney remarked that the senator wants to run for president in 1988. Kennedy does not discount that during the 1984 campaign, the Democratic Party may officially turn to him to lead the fight against the Republicans and elect their candidate president.
House Speaker Jim Wright (D-TX). In 1984, 10 Democrats sent a letter to Daniel Ortega, the head of the military dictatorship in Nicaragua, praising Ortega for “taking steps to open up the political process in your country.” House Speaker Jim Wright signed the letter.

In 1987, Wright worked out a deal to bring Ortega to the United States to visit with lawmakers. As The New York Times reported:
There were times when the White House seemed left out of the peace process, uninformed, irritated. ”We don’t have any idea what’s going on,” an Administration official said Thursday. And there was a bizarre atmosphere to the motion and commotion: the leftist Mr. Ortega, one of President Reagan’s arch enemies, heads a Government that the Administration has been trying to overthrow by helping to finance a war that has killed thousands of Nicaraguans on both sides. Yet he was freely moving around Washington, visiting Mr. Wright in his Capitol Hill office, arguing his case in Congress and at heavily covered televised news conferences. He criticized President Reagan; he recalled that the United States, whose troops intervened in Nicaragua several times between 1909 and 1933, had supported the Somoza family dictatorship which lasted for 43 years until the Sandinistas overthrew it in 1979.
Ortega then sat next to Wright as he presented a “detailed cease-fire proposal.” The New York Times said, “Mr. Ortega seemed delighted to turn to Mr. Wright.”

Senator John Kerry (D-MA). Kerry jumped into the pro-Sandinista pool himself in 1985, when he traveled to Nicaragua to negotiate with the regime. He wasn’t alone; Senator Tom Harkin (D-IA) joined him. The Christian Science Monitor reported that the two senators “brought back word that Mr. Ortega would be willing to accept a cease-fire if Congress rejected aid to the rebels…That week the House initially voted down aid to the contras, and Mr. Ortega made an immediate trip to Moscow.” Kerry then shilled on behalf of the Ortega government:
We are still trying to overthrow the politics of another country in contravention of international law, against the Organization of American States charter. We negotiated with North Vietnam. Why can we not negotiate with a country smaller than North Carolina and with half the population of Massachusetts? It’s beyond me. And the reason is that they just want to get rid of them [the Sandinistas], they want to throw them out, they don’t want to talk to them.
Representatives Jim McDermott (D-WA), David Bonior (D-MI), and Mike Thompson (D-CA). In 2002, the three Congressmen visited Baghdad to play defense for Saddam Hussein’s regime. There, McDermott laid the groundwork for the Democratic Party’s later rip on President George W. Bush, stating, “the president of the United States will lie to the American people in order to get us into this war.” McDermott, along with his colleagues, suggested that the American administration give the Iraqi regime “due process” and “take the Iraqis on their face value.” Bonior said openly he was acting on behalf of the government:
The purpose of our trip was to make it very clear, as I said in my opening statement, to the officials in Iraq how serious we–the United States is about going to war and that they will have war unless these inspections are allowed to go unconditionally and unfettered and open. And that was our point. And that was in the best interest of not only Iraq, but the American citizens and our troops. And that’s what we were emphasizing. That was our primary concern–that and looking at the humanitarian situation.
Senator Jay Rockefeller (D-WV). In 2002, Rockefeller told Fox News’ Chris Wallace, “I took a trip by myself in January of 2002 to Saudi Arabia, Jordan and Syria, and I told each of the heads of state that it was my view that George Bush had already made up his mind to go to war against Iraq, that that was a predetermined set course which had taken shape shortly after 9/11.” That would have given Saddam Hussein fourteen months in which to prepare for war.

House Speaker Nancy Pelosi (D-CA). In April 2007, as the Bush administration pursued pressure against Syrian dictator Bashar Assad, House Speaker Nancy Pelosi went to visit him. There, according to The New York Times, the two “discussed a variety of Middle Eastern issues, including the situations in Iraq and Lebanon and the prospect of peace talks between Syria and Israel.” Pelosi was accompanied by Reps. Henry Waxman (D-CA), Tom Lantos (D-CA), Louise M. Slaughter (D-NY), Nick J. Rahall II (D-WV), and Keith Ellison (D-MN). Zaid Haider, Damascus bureau chief for Al Safir, reportedly said, “There is a feeling now that change is going on in American policy – even if it’s being led by the opposition.”
Don't look for any of the above on MSNBC, however, or in the New York Times or Washington Post. It doesn't fit the narrative they want the American people to accept about how uniquely evil Republicans are.

Wednesday, March 11, 2015


The nature of time has been described as perhaps the universe's most baffling mystery. Immanuel Kant thought that time was a part of our mental structure that enabled us to experience an external world that would be incomprehensibly incoherent without that structure.

Many modern thinkers, on the other hand, think of time as an objective reality that exists statically and through which we somehow pass. In this view there is no past or future. Every moment exists simultaneously but we experience them sequentially.

Somewhat between these two views is the theory that what we call time is a collection of states of affairs that we experience, but that time itself is an illusion.

There's an interesting article on time in Popular Science by astrophysicist Adam Frank, who seems to adopt this latter view. Frank presents us with an excerpt from his book About Time in which he advances the theory that time has no real existence, and he describes the thinking of physicist Julian Barbour in support of this view:
Julian Barbour’s solution to the problem of time in physics and cosmology is as simply stated as it is radical: there is no such thing as time.

“If you try to get your hands on time, it’s always slipping through your fingers,” says Barbour. “People are sure time is there, but they can’t get hold of it. My feeling is that they can’t get hold of it because it isn’t there at all.”

Isaac Newton thought of time as a river flowing at the same rate everywhere. Einstein changed this picture by unifying space and time into a single 4-Dimensional entity. But even Einstein failed to challenge the concept of time as a measure of change. In Barbour’s view, the question must be turned on its head. It is change that provides the illusion of time. Channeling the ghost of Parmenides, Barbour sees each individual moment as a whole, complete and existing in its own right. He calls these moments “Nows.”

“As we live, we seem to move through a succession of Nows,” says Barbour, “and the question is, what are they?” For Barbour each Now is an arrangement of everything in the universe. “We have the strong impression that things have definite positions relative to each other. I aim to abstract away everything we cannot see (directly or indirectly) and simply keep this idea of many different things coexisting at once. There are simply the Nows, nothing more, nothing less.”

Barbour’s Nows can be imagined as pages of a novel ripped from the book’s spine and tossed randomly onto the floor. Each page is a separate entity existing without time, existing outside of time. Arranging the pages in some special order and moving through them in a step-by-step fashion makes a story unfold. Still, no matter how we arrange the sheets, each page is complete and independent.

As Barbour says, “The cat that jumps is not the same cat that lands.” The physics of reality for Barbour is the physics of these Nows taken together as a whole. There is no past moment that flows into a future moment. Instead all the different possible configurations of the universe, every possible location of every atom throughout all of creation, exist simultaneously. Barbour’s Nows all exist at once in a vast Platonic realm that stands completely and absolutely without time.

“What really intrigues me,” says Barbour, “is that the totality of all possible Nows has a very special structure. You can think of it as a landscape or country. Each point in this country is a Now and I call the country Platonia, because it is timeless and created by perfect mathematical rules.” The question of “before” the Big Bang never arises for Barbour because his cosmology has no time. All that exists is a landscape of configurations, the landscape of Nows.

“Platonia is the true arena of the universe,” he says, “and its structure has a deep influence on whatever physics, classical or quantum, is played out in it.” For Barbour, the Big Bang is not an explosion in the distant past. It’s just a special place in Platonia, his terrain of independent Nows.
There's more on this at the link. Take the time to read it, or perhaps I should say, enter the Now of reading the article.

Tuesday, March 10, 2015

Some Hatred Is Okay, Some Isn't

The news out of Oklahoma last weekend is pretty ugly. A bunch of frat boys at the University of Oklahoma filmed themselves engaging in dehumanizing, racist chants. The video went public and the president of the university acted with dispatch and shut the fraternity down. There are a couple of things about this sordid affair upon which I'd like to comment. One thought I had is that it was a little surprising to see the number of people in the media who expressed surprise that young people (!) would exhibit racist sentiments. Young people, many observers seem to assume, are far more tolerant and open-minded and not nearly so bigoted as older generations. In fact, in my book In the Absence of God (see link at top right of this page) certain racial attitudes are presented that some readers thought were anachronistic. I was told by one young reader that race just isn't an issue for young people nowadays. I wonder what that reader is thinking today.

Anyway, anyone surprised that college students would flout the pieties, racial or otherwise, of their elders and others in authority just doesn't know young people. That's what young people, especially college students, have always done, and the more strenuously university authorities and the modern thought police insist that students toe the line of political correctness, the more likely young people are to deliberately cross the line of propriety to stick their thumb in the over-forty crowd's eye and mock their orthodoxies.

Their dismayed elders did the same thing when they were students, of course, and now they're surprised that some of their kids are doing it today. That they are is wrong, to be sure, but it'd be silly to draw grand sociological conclusions from what these guys did. They behaved like jerks, so why is that noteworthy?

Another thought that I heard expressed over the weekend is that this episode shows that racism is still alive and well among white people. But if that were so, these young men would be lauded by their peers and on social media. Instead they've been humiliated by the national media, the student body has banded together to censure their actions, and the university president was uncharacteristically forceful (for a university president) in condemning their conduct, even to the point of expelling two of them.

Which raises another thought. As bad as their behavior was, it was nothing compared to the virulent hate and bigotry spewed by people like Louis Farrakhan (not to mention numerous black murderers who claimed that they wanted to kill white people, and did). Yet even though he speaks for tens of thousands of African-Americans who sympathize with, or, bizarre as it may sound, even share, his views, the media and many in the black community completely ignore him. There are no editorials or talking heads on the cable networks deploring his vile speeches. There are no vigils in the black community with people carrying signs declaring that Farrakhan doesn't represent them.

Yet when a bunch of immature white goofballs indulge in relatively mild (compared to Farrakhan, at least) racial insults the media and the university (rightly, in my view) go to code red. The student body massively repudiates it, the campus media strongly condemns it, and the administration quickly punishes it. What these guys did was detestable. It was hurtful to people, and the university was right to send the message that this sort of thing will not be tolerated. That they did so shows that racism, at least in the white community, may be alive, but it's on life support and it's not doing at all well.

Now, if only college administrators elsewhere would follow President Boren's lead and direct the same degree of outrage at anti-Semites on their campuses who evidently feel free to intimidate, threaten, and express their hatred of Jewish students. Or is hatred on American campuses okay as long as it's only directed at Jews?

Monday, March 9, 2015

Benjamin Libet on Free Will and Determinism

This post is from the archive but is relevant to a topic we're discussing, or soon will be discussing, in class:

Students of psychology, philosophy and other disciplines which touch upon the operations of the mind and the question of free will may have heard mention of the experiments of Benjamin Libet, a University of California at San Francisco neurobiologist who conducted some remarkable research into the brain and human consciousness in the last decades of the 20th century.

One of Libet's most famous discoveries was that the brain "decides" on a particular choice milliseconds before we ourselves are conscious of deciding. The brain creates an electrochemical "Readiness Potential" (RP) that precedes by milliseconds the conscious decision to do something. This has been seized upon by materialists who use it as proof that our decisions are not really chosen by us but are rather the unconscious product of our brain's neurochemistry. The decision is made before we're even aware of what's going on, they claim, and this fact undermines the notion that we have free will as this video explains:
Michael Egnor, at ENV, points out, however, that so far from supporting determinism, Libet himself believed in free will, his research supported that belief, and, what's more, his research also reinforced, in Libet's own words, classical religious views of sin.

Libet discovered that the decision to do X is indeed pre-conscious, but he also found that the decision to do X can be consciously vetoed by us and that no RP precedes that veto. In other words, the decision of the brain to act in a particular way is determined by unconscious factors, but we retain the ability to consciously (freely) choose not to follow through with that decision. Our freedom lies in our ability to refuse any or all of the choices our brain presents to us. Or, we might say, free will is really "free won't."

Egnor's article is a fascinating piece if you're interested in the question of free will and Libet's contribution to our understanding of it.

Saturday, March 7, 2015

The Crusades

In the wake of President Obama's infelicitous comments during the National Prayer Breakfast tacitly implicating Christianity in atrocities during the Crusades I decided to undertake a re-reading of Rodney Stark's excellent work on the Crusades titled God's Battalions.

Stark points out, and documents, that contrary to popular mythology and perhaps the president's understanding of them, the Crusades were not motivated by greed and a desire to pillage. Nor were they wars of imperialism waged by brutal Europeans against peaceful and gentle Muslims. Nor did the crusaders seek to convert the population or oppress them. Nor were the crusaders colonists who turned the recaptured cities in what is today Syria and Israel into cash cows for Europe. Indeed, there was very little wealth to be had in Arab lands, and the financial and other personal costs incurred by the crusaders in their mission to free the Holy Land were immense. As Stark observes, the flow of wealth during the two centuries of crusader rule went very much from Europe to the Middle East, not the other way around. The Crusades were, in fact, wars of self-defense undertaken at enormous risk, cost, and suffering to the crusaders, after centuries of Muslim torture, slaughter, and conquest of Christian cities and pilgrims. Their entire purpose seems to have been to liberate the Holy Land from the terrorism of the Muslims and make it safe for Christian pilgrims to journey there.

The expeditions stretched over a span of two centuries, roughly from 1097 to 1289, and were amazingly successful in military terms. Much larger Muslim forces were no match for the technological superiority and disciplined forces of the outnumbered crusaders, who had traveled some 2500 miles on foot to reach the Holy Land. Indeed, the crusaders were much more likely to fall victim to disease, deprivation, and betrayal by the Byzantine emperors in Constantinople than to Muslim armies. They ultimately failed to hold on to their gains in the Middle East because their European supporters grew weary of the investment it took to supply them thousands of miles away.

Perhaps more to the point of this post, there were three events that occurred during the Crusades that some historians point to as proof of crusader savagery and barbarism, and of the Church's complicity in their crimes, but Stark explains that those events, while tragic, are more complex than some historians admit.

The first was the slaughter of the Rhineland Jews by German crusaders en route to the Middle East. This was a horrific crime, but it's simplistic to think that it was an instance of Christian persecution of Jews. As a relatively small force of German crusaders began their trek through the Rhine valley on their way to the Holy Land, they massacred Jews in a number of cities along the way. The knights who perpetrated these atrocities were confronted in every town by bishops of the Church who, at much personal risk, hid Jews and sought to protect them from the ghastly violence. Moreover, the slaughters were strongly condemned by the pope, though he was unfortunately powerless to prevent them. The Germans who murdered the Jews were themselves almost completely wiped out by Hungarian knights when they reached Hungary, a denouement which most European Christians believed to be condign punishment delivered from heaven for the crimes these men had committed in the Rhineland.

The second event was the massacre of Muslims that followed the taking of Jerusalem during the First Crusade in the summer of 1099. This massacre, Stark writes, "has been used again and again to vilify the crusaders," but "it is not only absurd, but quite disingenuous to use this event to 'prove' that the crusaders were bloodthirsty barbarians in contrast to the more civilized and tolerant Muslims." Dozens of cities in the region had been completely destroyed and their inhabitants beheaded, tortured, or sold into slavery by Muslims in the years leading up to the Crusade. Moreover, the prevailing rule of warfare at the time was that if a besieged city surrendered, its inhabitants would be spared, but if they refused to surrender and instead forced the attackers to take the city through combat, and thus sustain high casualties, the inhabitants could expect to be slain. Stark suggests that had the defenders surrendered before the scaling towers were rolled against the walls, the resulting bloodletting would not have happened, but they did not. This rule of siege warfare may seem barbaric to us, but it was the accepted convention of eleventh-century combat (one that Muslim forces themselves regularly violated, several times massacring all the inhabitants of cities that had surrendered to them).

The third event was the sack of Constantinople in 1204 during the Fourth Crusade. Constantinople was an ostensibly Christian city (Greek Orthodox), which makes its destruction by European Christian crusaders difficult to understand, until one reads the history. By 1204 the Byzantine emperors at Constantinople had repeatedly betrayed crusader armies traversing their territory en route to the Holy Land, with the result that tens of thousands of crusaders and their families perished. Moreover, the Byzantines themselves, in the course of internecine warfare, had plundered their own capital for three days in 1081. In 1182 the emperor had all Latin Christians in the city put to the sword, resulting in the deaths of thousands of women, children, and elderly, many more dead than were believed to have been killed when the crusaders, betrayed once more by the Byzantine emperor and under attack by his navy, finally had had enough and attacked the city. The city fell, about two thousand of its 150,000 residents were killed, and many of the city's art treasures were plundered. It is this last that has outraged some historians more than the deaths of the inhabitants of the city.

In any case, while preparing this post I came across an article in The Federalist which also discusses Stark's book. The article is pretty good and you might wish to check it out.