Skeptophilia (skep-to-fil-i-a) (n.) - the love of logical thought, skepticism, and thinking critically. Being an exploration of the applications of skeptical thinking to the world at large, with periodic excursions into linguistics, music, politics, cryptozoology, and why people keep seeing the face of Jesus on grilled cheese sandwiches.

Saturday, September 24, 2016

Willful ignorance and Irish slavery

Prompted by yesterday's post regarding the tendency of some people to amplify their feelings into facts (and in the process, ignore the actual facts), a loyal reader of Skeptophilia put me on the trail of a fine, if disturbing, example of this phenomenon: the claim that there were Irish slaves, and they had it worse than the African ones did.

I had seen a version of the claim before, posted on Facebook.  This is the one I ran into:


My impression was that it was just one more in the long line of claims intended to make white people feel like they have no reason to address the sordid history of North America with respect to their treatment of minorities and indigenous peoples.  "Hey, y'all," it seems to say, "we had it bad too, you know."

What I didn't realize until today was that there's a far uglier implication here, made plain on some of the websites where you see the above posted: that not only were the Irish oppressed (a point no one with any knowledge of history would argue), but that Irish immigrants to North America were oppressed by African Americans.  If you look at those websites -- which I would not recommend to anyone with a weak stomach or a slim tolerance for racist garbage -- you find claims that Africans and mulattos enslaved, raped, tortured, and killed Irish slaves, especially Irish women, all through the 18th century and the first half of the 19th.

The claim is thoroughly debunked by history scholar Liam Hogan, who addresses each piece of it, exposing the bogus nature of the supporting evidence.  Some of the "evidence" is outright falsification; for example, one website uses gruesome photos from Andersonville Prison and the Holocaust and claims that they are pictures of Irish slaves; another shows a drawing of the 18th-century psychopathic murderer Elizabeth Brownrigg flogging a servant, and claims instead that it depicts a poor Irish slave in the early United States being whipped.  In fact, the claim that the Irish were enslaved at all conflates indentured servitude with chattel slavery, a distinction that the slave owners of the time were not confused about in the least.

All of this would be just another exercise in believe-what-you-want-to-believe if the whole idea hadn't been taken up by white supremacists and neo-Nazis.  The "Irish slave" trope figures into the mythology you see on websites like Stormfront, which revolves around the idea that whites are in constant danger of being attacked and destroyed by people of color.  And as strategies for convincing followers go, it's pretty powerful.  If you can persuade yourself that white privilege is nonexistent -- that whites have had it as bad as minorities all along -- it is only a short step to the attitude that any demands by minorities that whites address institutional racism are ill-founded and unfair.

Frighteningly, that's exactly what's happening.  Donald Trump's running mate, Indiana Governor Mike Pence, has gone on record that institutional racism only exists if we talk about it:
Donald Trump and I both believe that there’s been far too much of this talk of institutional bias or racism in law enforcement. We ought to set aside this talk, this talk about institutional racism and institutional bias, the rhetoric of division.
The Trump campaign chair in Ohio, Kathy Miller (who has since resigned), went even further, blaming President Obama for racism, and claiming that it didn't exist before he became president:
If you’re black and you haven’t been successful in the last fifty years, it’s your own fault. You’ve had every opportunity, it was given to you. You’ve had the same schools everybody else went to. You had benefits to go to college that white kids didn’t have. You had all the advantages and didn’t take advantage of it.  It’s not our fault, certainly... Growing up as a kid, there was no racism, believe me.  We were just all kids going to school. 
I don’t think there was any racism until Obama got elected.  We never had problems like this...  Now, with the people with the guns, and shooting up neighborhoods, and not being responsible citizens, that’s a big change, and I think that’s the philosophy that Obama has perpetuated on America.
Well, of course you didn't experience racism, you nitwit.  You're not a minority.  As for the rest of it, this surpasses willful ignorance.  I'm not even sure what you'd call it.  Especially since the interviewer said to Miller that some people would take exception to what she'd said, and she responded, "I don't care.  It's the truth."

So here's a particularly awful example of what I was talking about yesterday: people elevating their own feelings, biases, and prejudices to the level of facts.  Taking the fact that, for a white person, talking about racism can be uncomfortable, and using that discomfort as an excuse for believing that racism itself doesn't exist.

Well, I'm sorry, but the world doesn't work that way.  The truth doesn't change because thinking about it makes you feel wonky.  And neither can you substitute your mythology for actual history as a way of whitewashing the role your ancestors (and mine) had in oppressing other cultures.  All that does is perpetuate the very attitudes that created the problem in the first place -- and makes it less likely that our children and our children's children will live in a world where everyone is treated fairly and equitably.

Friday, September 23, 2016

Turning feelings into facts

A couple of days ago, I saw the following screed posted:
Do you think that Obama is intentionally trying to destroy America?  Anyone who doesn't see it or believe it is either blind, or prejudiced because of a like nationality...it's such a shame that our first African American president has done so much destruction to our nation!...  Pray very hard that Trump wins because for all his faults he truly loves his country and we WILL NOT survive Hillary Clinton.
I try like hell to avoid politics here on Skeptophilia, partly because I'm not knowledgeable enough to comment on most political topics, partly because I see most issues of governance as so hopelessly complicated that it's unclear that there even is a solution, and partly because most folks enter any political discussion so completely opinionated that it's hard to see how anything I could say would change anyone's mind on anything.

[image courtesy of the Wikimedia Commons]

But this statement was so extreme that it was tempting to post a response, a temptation I successfully resisted.  The comment rankled, though, and ultimately I felt like I had to respond in some way, so here we are with today's topic.

What I find most bizarre about the statement itself is that if you look around you, America is pretty much loping along as it always has, miraculously undestroyed after eight years of Obama's leadership.  And if you dig a little deeper -- by which I mean not simply shrieking an opinion but examining the facts -- you find something even odder.

The Balance, a non-partisan economic and financial media source, posted an article just yesterday reporting that the U.S. economy is pretty healthy -- in fact, the article's author, Kimberly Amadeo, calls it "very nearly a Goldilocks economy."  In the past few years the GDP has grown at a near-ideal annual rate of between 1.8 and 2.5 percent.  U.S. manufacturing has grown even faster -- up 2.6% this year, and forecast to remain around that rate for the next four years.

What about the deficit?  Since President Obama took office, the deficit has dropped by two-thirds, from $1.4 trillion to $489 billion.  (Now, I agree that $489 billion is still a pretty huge number, but at least it's moving in the right direction.)
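If you want to double-check that "two-thirds," here's a quick back-of-the-envelope calculation, using only the two figures cited above:

```python
# Sanity check on the "deficit dropped by two-thirds" claim,
# using the figures cited in the paragraph above.
start = 1.4e12   # deficit when Obama took office, in dollars
end = 489e9      # deficit cited in the article, in dollars

drop = (start - end) / start
print(f"The deficit fell by {drop:.1%}")  # ~65.1% -- close enough to two-thirds
```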

Likewise, the unemployment rate has shown a steady drop, from a high of 10% in October 2009 to 4.8% today.  Even the crime rate -- one of Trump's major issues -- has dropped steadily, and in fact has been on the decline since a peak way back in 1994.  (The same holds true even if you just look at the rate of violent crimes involving guns; so despite the hype in the media, you're actually less likely to be killed by a gun now than you were twenty years ago.)

What about those illegal immigrants "pouring across our borders"?  According to a study by the non-partisan Pew Research Center, the rate of illegal immigration has been stable for years, and in fact was considerably higher in 2007 than it is now.  (You might argue that it's still too high -- but the fact is, it's actually lower today than it was during George W. Bush's presidency.)

Even the common claim that "Obama is comin' for your guns" has turned out to be horseshit.  Look around you.  We're still as heavily armed as ever.

About the only statistics I could track down where Obama's track record kind of sucks are the male/female wage gap (which has barely moved in the past twenty years), the racial wage gap (just a couple of days ago, a study by the Economic Policy Institute announced that it's the widest it's been in forty years), and the wealth gap between the richest and poorest (which is going the wrong way -- up -- and has been for thirty years).

So okay, you think that Obama is destroying the nation.  Maybe even deliberately.  Can you show me one metric -- just one -- that shows that that's true?

I mean, I get it if you don't like his policies on pro-choice/pro-life, LGBT issues, and so on.  Those tend to be divisive and engender high emotion.  But if you're trying to tell me that the United States has gone to wrack and ruin in the past eight years, can you show me why?

The whole thing is reminiscent of the interview with Newt Gingrich in which he said that people feel increasingly unsafe from violent crime.  The interviewer said, "Violent crime across the country is down."  Gingrich responded, "The average American... does not think crime is down, does not think they are safer."  The interviewer -- who at this point seemed to be trying to stop herself from laughing in his face -- said, "But we are safer, and it is down."  Gingrich said, "That's your view."

The interviewer said, "No, it's not my view, it's a fact..."

Gingrich interrupted with a patronizing smile and said, "What I said is also a fact."

And this seems to me to be the heart of the problem.  We are at the point that your "feeling" that we're spiraling into chaos trumps my facts that we're not.  Or -- scarily -- that if you're feeling something strongly enough, it becomes a fact.  The world, then, is constrained to fitting into whatever your particular narrative says it is.

Which is all very well until people start voting on the basis of feelings rather than facts -- because that is a strategy that can lead to disaster.

Thursday, September 22, 2016

Chemical round-up

Yesterday's post about people who are fact-resistant is an easy segue into today's topic: a viral post I've now seen at least a half-dozen times on social media, claiming that there's RoundUp in vaccines.

The article, written by one Catherine J. Frompovich, starts with the following:
An absolute BOMBSHELL has just hit Big Pharma's vaccine industry!
Which, in my opinion, is a phrase that means, "Nothing important has happened."  Every time we hear that there's an ABSOLUTE BOMBSHELL that's going to (1) destroy Hillary Clinton, (2) destroy Donald Trump, (3) expose the lies of Big Pharma, or (4) cause a devastating scandal in Congress, we wait breathlessly...

... and nothing happens.

Of course, the people making the claim have an explanation for that: the "MSM" (Mainstream Media), which in this worldview is second only to "Big Pharma" as a stand-in for Satan himself, has covered the whole thing up.

In this case, we find out that a research scientist named Anthony Samsel has discovered traces of glyphosate (better known under its trade name as the herbicide RoundUp) in vaccines.  Then we're given the following alarming information:
In high school chemistry aren’t students taught the importance of chemical interactions, especially when mixing several chemicals in a laboratory beaker?  What can happen?  An explosion!  A similar chemical reaction occurs within the human body — the largest living, working test tube on earth, however it causes adverse health effects, not an explosion.
So, what you're saying is: if you put "chemicals" together, they explode, except that we're talking about putting chemicals together here, and they don't explode?

But even so they're really really bad.  Because they're chemicals.  So q.e.d., apparently.

[image courtesy of the Wikimedia Commons]

Then, of course (since it's RoundUp), we immediately launch into the argumentum ad Monsantum fallacy, which is to claim that anything even tangentially connected to Monsanto must be evil.   The implication is that Monsanto is deliberately tainting vaccines with their nasty chemicals for some diabolical reason, most likely to get rid of anyone who is stupid enough to fall for their cunning plans.

The whole argument falls apart, however, when you start looking at the details.  Going to the blog that brought Samsel's research to the public eye, we find that traces of RoundUp have indeed been detected in vaccines, most likely due to the inclusion of animal-derived products such as glycerine -- but the amounts are almost all less than one part per billion.  Still, that alone doesn't tell us much about toxicity; Frompovich is correct that some substances are toxic in vanishingly small quantities.  But then you look at the end of Samsel's data table, and you find that "gummi bears" contain quantities of RoundUp on the order of eighty times higher than any of the vaccines studied.

Interesting that there's all of this hoopla about Big Pharma and toxins in vaccines, but there's no mention of the role of Big Gummi in poisoning our children's candy.
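For a sense of scale, here's what "less than one part per billion" works out to in an actual dose.  This is a minimal sketch; the 0.5 mL dose volume and the ~1 g/mL density are my assumptions for illustration, not figures from Samsel's data:

```python
# How much glyphosate is 1 ppb in a typical injected dose?
# Assumed (not from the source): dose volume 0.5 mL, density ~1 g/mL.
ppb = 1.0           # 1 part per billion = 1 nanogram per gram
dose_mass_g = 0.5   # 0.5 mL at ~1 g/mL

ng_in_dose = ppb * dose_mass_g
print(f"{ng_in_dose} ng per dose")  # 0.5 ng: half a billionth of a gram

# The gummi-bear concentration cited above (~80x higher) is still
# only on the order of 80 ng per gram of candy.
```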

A further, and more serious, problem comes to light when you start digging into the background of Anthony Samsel himself, and his alleged studies linking glyphosate to every human malady except the common cold via scary-sounding biochemical pathways.  An exposé by Tamar Haspel three years ago found that the peer-reviewed research Samsel and Stephanie Seneff supposedly conducted on glyphosate's presence in, and effects on, human tissue almost certainly never took place.  Haspel writes:
Samsel and Seneff didn’t conduct any studies.  They don’t seem interested in the levels at which humans are actually exposed to glyphosate.  They simply speculated that, if anyone, anywhere, found that glyphosate could do anything in any organism, that thing must also be happening in humans everywhere.  I’d like to meet the “peers” who “reviewed” this.
Worse still, neither Samsel nor Seneff is a biochemist, or even a cellular biologist. Seneff is a computer scientist at MIT; Samsel is a "consultant" who does "charitable community investigations of industrial polluters."  As Haspel put it, "I think it's fair to say that they probably went into this with a point of view."

And if you needed one further death-blow to the whole argument, the woman who wrote the ABSOLUTE BOMBSHELL article, Catherine J. Frompovich, is a staff writer for...

... The Daily Sheeple.

So to those folks who keep circulating this article and ones like it, I'm respectfully asking you to stop.  There's enough misinformation out there on health in general and vaccines in particular.  To say it for probably the 13,537th time: vaccines are safe, effective, protect you and your children from diseases that can kill you, and have a very very low likelihood of side effects.  Myself, I'll take the chance of the health effects of minuscule amounts of glyphosate rather than those from getting the measles, hepatitis A, or even the flu.

On the other hand, I am having second thoughts about gummi bears.

Wednesday, September 21, 2016

The index case for fact-resistance

I think a standard question for anyone who holds an anti-science stance -- so climate change deniers, antivaxxers, people who are pro-homeopathy -- should be: "What would it take to convince you that you are wrong?"

I'll be up front that this idea is not original to me.  It was the single question that still stands out in my mind as the most important in the infamous Bill Nye/Ken Ham debate.  Nye responded, in essence, that one piece of information that could not be explained except by the young-Earth model is all it would take.  Ham, on the other hand, said that nothing could convince him.  No evidence, no logical argument, nada.

And therein, folks, lies the difference between the scientific and anti-scientific view of the world.

It is a question I wish had come up during a hearing this week in the House Committee on Science (controlled, as I have mentioned before, almost entirely by anti-science types).  The topic was the subpoenas being sent out to climate scientists in an attempt to intimidate them into backing down on their (at this point incontrovertible) claim that the world is warming up.  One of the people who spoke in favor of the subpoenas was Ronald Rotunda, professor of law at Chapman University.

This in itself is an odd choice.  Rotunda is a lawyer, not a scientist.  Wouldn't you want the scientists -- i.e., the people who know what the hell they're talking about -- to weigh in?  Of course, it doesn't take a genius to see that wasn't the point here.  The point was getting some talking heads to reinforce the view of the committee that climate change is a hoax.  But what happened afterwards is pretty interesting -- and heartening.

Rotunda was trying to make the case that scientists disagree about climate change and (specifically) sea level rise, and cited research by Harvard geoscientist Jerry Mitrovica, claiming that it showed that the melting of the Greenland ice cap would actually cause the sea level to fall.  Of course, Rotunda was completely misrepresenting Mitrovica's work; Mitrovica had shown that due to a combination of gravitational effects and isostatic rebound (the lifting of a land mass when a weight such as an ice cap is removed from it), the sea level near Greenland, as measured from Greenland's own coast, might fall.  What Rotunda conveniently forgot to mention was that the meltwater, combined with the aforementioned factors, would cause the sea level to rise more everywhere else.
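If you want to see how a fall here can coexist with a bigger rise there, here's a toy numerical sketch.  To be clear, this is my own illustration of the qualitative idea, not Mitrovica's actual model; the numbers are arbitrary:

```python
import numpy as np

# Toy sketch (my illustration, NOT Mitrovica's model) of how a melting
# ice sheet can lower sea level nearby while raising it far away.
# Two effects on the change at angular distance theta from the ice sheet:
#   1. the meltwater spreads over the ocean: a uniform rise everywhere;
#   2. the ice sheet's gravity had been pulling ocean water toward itself;
#      when the ice is gone, that "pile" relaxes away -- a fall nearby and
#      a compensating rise far away (zero change on average).

uniform_rise = 1.0   # global mean rise from the meltwater (arbitrary units)
pile_strength = 3.0  # strength of the lost gravitational pull (arbitrary,
                     # chosen > uniform_rise so the local fall dominates)

def sea_level_change(theta):
    """Net sea level change at angular distance theta (radians) from the ice."""
    # cos(theta) averages to zero over a sphere, so this term only moves
    # water around; it neither adds nor removes any.
    return uniform_rise - pile_strength * np.cos(theta)

for deg in (0, 45, 90, 135, 180):
    print(f"{deg:3d} degrees away: {sea_level_change(np.radians(deg)):+.2f}")
# Negative (a fall) near the ice sheet, and a rise larger than the
# global mean on the far side -- exactly the pattern Rotunda omitted.
```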

That's not what the representatives on the committee wanted to hear, of course, so it never came up.

Coastal Greenland [image courtesy of the Wikimedia Commons]

What's encouraging in all of this depressing business is the response of one person on the committee -- Bill Foster of Illinois, the committee's only trained scientist (he started his career as a physicist).  Foster listened politely to what Rotunda was saying.

But he wasn't buying it.

What Foster did was brilliant -- he merely asked Rotunda to explain how his claim worked.  "I was fascinated by what seemed to be apparent support of an argument that the Greenland ice sheet would melt, and thereby lower the sea level," Foster said, "and I was wondering if you can expound on how exactly the physics of this works."

Rotunda, who apparently has less understanding of physics than your typical 12th grade physics student, immediately began to babble.  "When the ice sheet melts, all the gravity that was then part of the island of New Greenland [sic] disappears into the ocean, it just goes away.  And that ice has been pushing Greenland down, and now Greenland will be moving up, because the water is all over the place."

All I can say is that if I gave explanations like that in my high school classes, I would quite rightly be tarred and feathered.

So that's the next best thing to "What would it take to change your mind?" -- "Can you explain to me how that would work?"  Both of these, in my opinion, should be the immediate go-to questions in any debate on climate change -- or any other discussion that has become contaminated with anti-science.

Of course, the downside of all of this is that the climate change deniers on the Science Committee, with the exception of Bill Foster, all just nodded sagely while Rotunda spewed his bullshit.  If you have already assumed your conclusion, no amount of logic or evidence will ever sway you.

It reminds me of a brilliant satirical piece written by Andy Borowitz for The New Yorker earlier this year entitled "Scientists: Earth Endangered by New Strain of Fact-Resistant Humans."  A quote from Borowitz seems an appropriate way to end this post, especially given that the House Committee on Science -- of all groups -- seems to be the index case for fact-resistance:
The research, conducted by the University of Minnesota, identifies a virulent strain of humans who are virtually immune to any form of verifiable knowledge, leaving scientists at a loss as to how to combat them. 
“These humans appear to have all the faculties necessary to receive and process information,” Davis Logsdon, one of the scientists who contributed to the study, said.  “And yet, somehow, they have developed defenses that, for all intents and purposes, have rendered those faculties totally inactive.” 
More worryingly, Logsdon said, “As facts have multiplied, their defenses against those facts have only grown more powerful.” 
While scientists have no clear understanding of the mechanisms that prevent the fact-resistant humans from absorbing data, they theorize that the strain may have developed the ability to intercept and discard information en route from the auditory nerve to the brain.  “The normal functions of human consciousness have been completely nullified,” Logsdon said.

Tuesday, September 20, 2016

There goes the Sun

Yesterday I received a friendly email from a loyal reader of Skeptophilia of the "You think that is stupid, wait till you see this" variety.  As well-intentioned as these generally are, I always hesitate to read further, because my general impression of human foolishness and gullibility really doesn't need any further reinforcement.

This one was in response to last week's post about the Flat Earthers, so already we've set the bar for comparative idiocy pretty high.  But as I continued to read the email (yes, I succumbed to my 'satiable curiosity), I found that said bar was cleared in a single leap by this particular claim.

So without further ado: the idea that makes the Flat Earthers look sane and sensible.  Ready?

The Sun doesn't exist.

According to a group of loons calling themselves "asunists," what we're calling the Sun is just an illusion generated by light collected and beamed at the Earth by an array of curved mirrors.  You might be asking, "Light coming from where, exactly?", but that is only the first of the many problems we encounter upon delving into the situation.  Apparently the idea came about when someone googled "solar simulator" and found that there is a device that approximates the radiation spectrum and illuminance of the Sun, and is used for testing solar cells, sunscreen, plastics, and so forth.  So in a classic case of adding two and two and getting 147, they then interpreted this to mean that the Sun itself was a simulation.

[image courtesy of NASA]

Who is responsible for this?  Well, nasty old NASA, of course.  Same ones who keep the Moon hologram going and are suppressing information about the Earth being flat and/or hollow, not to mention the impending catastrophic visit by the fabled planet Nibiru.

What evidence do we have?  The producer of the above-linked YouTube video explains how he knows that the Sun isn't real, and a lot of it seems to be the fact that in some photographs, the outline of the Sun is "fuzzy."  It used to be clear and sharp, but now because of "chemicals in the air" the Sun has gotten all blurred.  So apparently we used to have a real Sun, but now it's been replaced by a simulator which just isn't as good as the real thing.

My question is -- well, among my many questions is -- don't you think someone would have noticed when the real Sun was taken down, and the simulator put in place?  Oh, and what did they do with the old Sun?  Was it sent to the stellar retirement home?  Was it just turned out into the cold vacuum of space, to wander, lost and forlorn forever?

Of course, the question that applies to all of these wacko conspiracy theories is why anyone would bother to do all of this.  Don't you think that if the Sun really was a big bunch of mirrors, the Earth was flat, or whatnot, the scientists at NASA would tell us?  What could they possibly gain by pretending that the Sun exists and the Earth is an oblate spheroid?

The oddly hilarious postscript to all of this is that the whole the-Sun-doesn't-exist conspiracy theory received a boost from none other than Ray "Mr. Banana" Comfort, the outspoken young-earth creationist who a couple of years ago got his ass handed to him when he showed up to distribute creationist literature at a talk by Richard Dawkins hosted by the Skeptic Society.  Well, Comfort has picked up on the "asunist" thing and used it as an argument against atheism (in Comfort's mind, everything is an argument against atheism).  He tells us about his perception of the "asunists" -- mischaracterizing their claim as stating that they believe we're actually in the dark -- and compares that to atheists' conclusion that god doesn't exist.

Which just shows you that there is no idea so completely stupid that you can't alter it so as to make it way stupider.

So to the loyal reader who sent me the email, all I can say is "thanks."  I now am even more convinced that Idiocracy was a non-fiction documentary.  It's time to get myself a cup of coffee and try to reboot my brain so that I make some degree of sense in class today.  Also time to start watching for the sunrise.

Or the solarsimulatorrise.  Or whatever.

Monday, September 19, 2016

Slowing down the copy-and-paste

I'm really interested in research on aging, and I'd like to think that it's not solely because I'm Of A Certain Age myself.  The whole fact of our undergoing age-related system degradation is fascinating -- more so when you realize that other vertebrates age at dramatically different rates.  Mice and rats age out after about a year and a half to two years; dogs (sadly) rarely make it past fifteen (much less in some breeds); and the Galapagos tortoise can still be hale and hearty at two hundred years of age.

A lot of research has gone into why different organisms age at such different speeds, and (more importantly) how to control it.  The ultimate goal, selfish though it may sound, is extending the healthy human life span.  Imagine if we reached our healthy adult physiology at (say) age 25 or so, and then went into stasis with respect to aging for two hundred or three hundred years -- or more?

Heady stuff.  For me, the attraction is not so much avoiding death (although that's nice, too).  I was just chatting with a friend yesterday about the fact that one of my biggest fears is being dependent on others for my care.  The idea of my body and/or mind degrading to the point that I can no longer care for my own needs is profoundly terrifying to me.  And when you add to the normal age-related degradation the specter of diseases such as Alzheimer's and ALS -- well, all I can say is that I agree with my dad, who said that compared with that fate, "I'd rather get run over by a truck."

A particularly interesting piece of research in this field that was published last week in the Proceedings of the National Academy of Sciences gives us one more piece of the puzzle.  But to understand it, you have to know a little bit about a peculiarity of genetics first.

Several decades ago, a geneticist named Barbara McClintock was working with patterns of seed color inheritance in "Indian corn."  In this variety, one cob can bear seeds with dozens of different colors and patterns.  After much study, she concluded that her data could only be explained by there being "transposable elements" -- genetic sequences that were either clipped out and moved, or else copied and moved -- functions similar to the "cut-and-paste" and "copy-and-paste" commands on your computer.  McClintock wrote a paper about it...

... and was immediately ignored.  For one thing, she was a woman in science, and back when she was doing her research -- in the 1940s and 1950s -- that was sufficient reason to discount it.  Her colleagues derisively nicknamed her theory "jumping genes" and laughed it into oblivion.

Except that McClintock wouldn't let it go.  She was convinced she was right, and kept doggedly pursuing more data, data that would render her conclusion incontrovertible.  She found it -- and won the Nobel Prize in Physiology or Medicine in 1983, at the age of 81.

Barbara McClintock in her laboratory at Cold Spring Harbor [image courtesy of the Wikimedia Commons]

McClintock's "transposable elements" (now called "transposons") have been found in every vertebrate studied.  They are used to provide additional copies of essential genes, so that if one copy succumbs to a mutation, there's an additional working copy that can take over.  They are also used in gene switching.  Move a gene near an on-switch called a promoter, and it turns on; move it away, and it turns off.

The problem is that, like any natural process, transposition can go awry.  The copy-and-paste function especially seems to have that tendency.  When it malfunctions, the effect is like a runaway copy-and-paste in your word processing software.  Imagine the havoc that would ensue if you had an important document, and the computer kept inserting one phrase over and over at random points in the text.

This should give you an idea of why it's so important to keep this process under control.
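Here's the analogy made literal -- a toy Python sketch (mine, not anything from the research) that "copy-and-pastes" a fixed element into random spots in a text and lets you watch the message degrade:

```python
import random

# Toy model of runaway copy-and-paste transposition: repeatedly insert
# a fixed element ("[TE]") at random positions in a "genome" string.
random.seed(42)

genome = "the quick brown fox jumps over the lazy dog"
transposon = "[TE]"

for generation in range(5):
    spot = random.randrange(len(genome))
    genome = genome[:spot] + transposon + genome[spot:]
    print(f"gen {generation + 1}: {genome}")

# After a handful of insertions the original "message" is still mostly
# legible; after hundreds, most of it would be interrupted -- which is
# roughly what happens to gene function when transposition runs unchecked.
```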

Your cells have ways of taking care of these "rogue transposons" (as they're called).  One such mechanism is methylation, a chemical means of tangling up and permanently shutting down genes.  But the research just released suggests that aging is (at least in part) due to rogue transposition getting ahead of methylation -- leaving random copied chunks of DNA scattered across the genome.

A study by Jason Wood et al. of Brown University has found that fruit flies near the end of their lives have far more active transposons than young flies do.  In fact, as the flies age, the number increases exponentially, the result being interference with gene function and system-wide degradation.  Most interesting, the team found two genes -- Su(var)3-9 and Dicer-2 -- whose enhanced activity substantially increases longevity in fruit flies.  Su(var)3-9 seems to be involved in increasing the methylation rate of rogue transposons, and Dicer-2 in suppressing the transposition process itself.  Boosting the activity of these genes raised the average longevity of fruit flies from sixty to eighty days -- an increase of 33%.

Of course, there's no guarantee, even if these genes turn out to have similar effects in humans, that the longevity increase will scale up by the same amount (if it did, it would raise the average human age at death to around 100 years).  But the whole thing is tremendously interesting anyhow.  On the other hand, I have to say that the idea that we are getting to the point where we can tinker with fundamental processes like aging is a little frightening.  It opens up practical and ethical issues we've never had to consider before: how this would affect human population growth, who would have access to such genetic modifications if they proved effective and safe, even how we approach the idea of careers and retirement.
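The scaling arithmetic in that parenthetical is easy to check.  The fly numbers come from the study as described above; the 75-year human baseline is my own rough assumption:

```python
# Check the scaling arithmetic. Fly numbers are from the study as
# described above; the ~75-year human baseline is an assumption.
fly_baseline, fly_enhanced = 60, 80    # days
boost = fly_enhanced / fly_baseline    # 1.33..., i.e. a 33% increase

human_baseline = 75                    # years (rough average, assumed)
print(f"{boost - 1:.0%} boost -> {human_baseline * boost:.0f} years")
# 33% boost -> 100 years, matching the figure in the parenthetical above.
```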

Imagine if you reached the age of sixty and could expect another thirty or more years of active health.  Imagine if the effect on humans was greater -- and the upper bound of human life span was increased to two hundred or three hundred years.  It seems like science fiction, but with the research that is currently happening, it's not outside of the realm of possibility.

If you had the physiology and mental acuity of a twenty-five-year-old, who would want to retire at sixty?  At the same time, who would want to stay in the same job for another hundred years?  I love my students, but that definitely falls into the "shoot me now" category.

The whole thing would require a drastic reorganization of our society, a far more pervasive set of changes than any scientific discovery has yet caused.  And lest you think that I'm exaggerating the likelihood of such an eventuality, remember how much progress has happened in biological science in the last century.  Only a hundred years ago, children in industrialized countries were still dying by the thousands of diphtheria and measles.  There were dozens of structures in cells, and a good many organs in humans, about whose function we knew essentially nothing.  We knew that DNA existed, but had no idea that it was the genetic material, much less how it worked.

Makes you wonder what our understanding will be in another hundred years, doesn't it?

And maybe some of the people reading this right now will be around to see it.

Saturday, September 17, 2016

The language of morality

If we needed any more indication that our moral judgments aren't as solid as we'd like to think, take a look at some research by Janet Geipel and Constantinos Hadjichristidis of the University of Trento (Italy), working with Luca Surian of Leeds University (UK).

The study, entitled "How Foreign Language Shapes Moral Judgment," appeared in the Journal of Experimental Social Psychology.  What Geipel et al. did was present multilingual individuals with situations that most people consider morally reprehensible, but in which no one (not even an animal) was deliberately hurt -- such as two siblings engaging in consensual and safe sex, and a man cooking and eating his dog after it was struck by a car and killed.  These types of situations make the vast majority of us go "Ewwwww" -- but it's often hard to pinpoint exactly why.

"It's just horrible," is the usual fallback answer.

So did the test subjects in the study find such behavior immoral or unethical?  The unsettling answer is: it depends on what language the situation was presented in.

Across the board, if the situation was presented in the subject's first language, the judgments were uniformly harsher and more negative.  When it was presented in a language learned later in life, the subjects were much more forgiving.

The researchers controlled for which languages were being spoken; they tested (for example) native speakers of Italian who had learned English, and native speakers of English who had learned Italian.  It didn't matter what the language was; what mattered was when you learned it.

[image courtesy of the Wikimedia Commons]

The explanation they offer is that the effort of speaking a non-native language "ties up" the cognitive centers, making us focus more on the acts of speaking and understanding and less on the act of passing moral judgment.  I wonder, however, if it's more that we expect more in the way of obeying social mores from our own tribe -- we subconsciously expect people speaking other languages to act differently than we do, and therefore are more likely to give a pass to them if they break the rules that we consider proper behavior.

A related study by Catherine L. Harris, Ayşe Ayçiçeği, and Jean Berko Gleason appeared in Applied Psycholinguistics.  Entitled "Taboo Words and Reprimands Elicit Greater Autonomic Reactivity in a First Language Than in a Second Language," the study showed that our emotional reaction (as measured by skin conductivity) to swear words and harsh judgments (such as "Shame on you!") is much stronger if we hear them in our native tongue.  Even if we're fluent in the second language, we just don't take its taboo expressions and reprimands as seriously.  (Which explains why my mother, whose first language was French, smacked me in the head when I was five years old and asked her -- at my uncle's prompting -- what "va t'faire foutre" meant.)

All of which, as both a linguistics geek and someone who is interested in ethics and morality, I find fascinating.  Our moral judgments aren't as rock-solid as we think they are, and how we communicate alters our brain, sometimes in completely subconscious ways.  Once again, the neurological underpinnings of our morality turn out to be strongly dependent on context -- which is simultaneously cool and a little disturbing.